It's that Xerox bug on steroids, where scanned pages would get their digits swapped for other digits...
I'd want to see some proper hallucination analysis.
https://arxiv.org/pdf/2405.15306
Most OCR pipelines like this, along with excellent commercial ones like doctly.ai, are focused on OCR for LLM consumption. I'd like to recreate, in modern typesetting, the original scientific work that predates digital typesetting: yes, for LLMs, but also to preserve and promote the science of yore, much of which includes discoveries that are forgotten yet still relevant to problems we face today.
This project was just a hobby and my first time posting something. I didn't imagine people would care this much… Next time I will prepare better before sharing.
- I could change the meaning of the output, or replace the output entirely.
- If I can control one part of a larger set of data that is analyzed, I could influence the whole output.
- I could try to make the process take forever in order to waste resources.
I'd say the first scenario is most interesting, especially if I could then potentially also influence how an LLM trained on the output behaves and do even more damage using this down the line.
Let's say I'm a disgruntled website author. I want my users to see correct information on my website but don't want any LLM to be trained on it. In this case I could probably successfully use prompt injection to "poison" the model.
I love the double prompting to keep GPT from translating the text. I've definitely had this problem before, and spent ages trying to prompt it into not randomly translating the text.
If it still misbehaves in any edge cases, feel free to open an issue on GitHub — happy to patch it up.
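For what it's worth, one cheap guard against silent translation (purely my own sketch, not how this project actually does it) is a same-script check between the source text and the model's output, retrying with a stricter "transcribe verbatim, do not translate" prompt when it trips:

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Crude heuristic: the most common Unicode script among the letters.

    Unicode character names begin with the script, e.g. "LATIN SMALL
    LETTER A" or "HANGUL SYLLABLE AN", so the first word works as a tag.
    """
    counts: dict[str, int] = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch, "UNKNOWN").split()[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "UNKNOWN"

def looks_translated(source: str, ocr_output: str) -> bool:
    """Flag outputs whose dominant script differs from the source."""
    return dominant_script(source) != dominant_script(ocr_output)
```

If the check fires, you re-issue the request with the stricter instruction instead of trusting the first answer. It's a heuristic and will miss same-script translations (say, German to English), but it catches the common case cheaply.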
In addition, for figures and diagrams, I use Gemini Pro Vision not just to extract the content, but to generate context-aware, structured descriptions that are better suited as ML training input — rather than just dumping raw image text.
So in short, generative AI is used here more as a smart post-processing layer to enhance the usability and semantic clarity of the OCR outputs.
> the whole pipeline is not open source
The local pipeline would include:
• Tesseract or TrOCR for general OCR
• Pix2Struct, Donut, or DocTR for document structure understanding
• OpenAI CLIP for image-text semantic alignment
• Gemma / Phi / LLaMA / Mistral for downstream reasoning tasks
Goal is to make the system fully self-hostable for offline and private use.
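As a sketch of how such a self-hosted stack might be wired together (the region types and routing rules below are my own assumptions, not the project's actual code), each detected region could be dispatched to the matching local model:

```python
from dataclasses import dataclass

@dataclass
class Region:
    kind: str           # "text", "table", "figure", or "formula"
    image_bytes: bytes  # cropped region from the page

# Placeholder routing table; the names refer to the tools listed above.
ROUTES = {
    "text": "trocr",         # general OCR
    "table": "donut",        # document structure understanding
    "figure": "clip",        # image-text semantic alignment
    "formula": "pix2struct",
}

def route(region: Region) -> str:
    """Pick which local model should process a detected region."""
    return ROUTES.get(region.kind, "tesseract")  # fall back to plain OCR
```

Keeping the routing in one table like this also makes it easy to swap any component for another local model without touching the rest of the pipeline.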
In contrast, this project focuses less on preserving the visual layout for human readers, and more on extracting structured semantic data for machine learning training.
So instead of optimizing for clean Markdown or HTML, it extracts context-aware elements like:
• table data as JSON,
• math expressions in LaTeX,
• diagrams with image descriptions,
• multilingual text segments,
• and semantic roles (e.g. “question”, “explanation”, etc.)
In short: Marker is great for reading; this is built for feeding into ML pipelines, especially for tasks like question answering, diagram reasoning, or multimodal pretraining.
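To make that concrete, a single extracted element might look like the record below. The field names and schema are illustrative guesses on my part, not the project's actual output format:

```python
import json

# Hypothetical per-element record; the keys are my own invention.
element = {
    "page": 3,
    "role": "explanation",        # semantic role of the segment
    "type": "table",
    "content": {                  # table data as JSON, not Markdown
        "headers": ["metric", "value"],
        "rows": [["accuracy", 0.94], ["f1", 0.91]],
    },
    "latex": None,                # would hold LaTeX for math expressions
    "description": "Model scores on the validation split.",
    "lang": "en",
}

serialized = json.dumps(element, ensure_ascii=False)
```

One JSON object per element keeps the output streamable and easy to filter by role or type when assembling a training set.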
This initial release is mostly a working prototype to demonstrate the full pipeline logic, and I’ll continue improving stability, modularity, and usability. A lot more updates are in the pipeline, so stay tuned! Feel free to open issues or suggestions anytime — feedback is always welcome!
A key challenge after OCR is organizing the extracted data into a coherent knowledge structure. We've seen significant improvements in downstream ML tasks when the extracted data is organized using a hierarchical, MECE (Mutually Exclusive, Collectively Exhaustive) framework. This ensures that relationships between entities (tables, diagrams, text) are explicitly captured.
Does your pipeline include capabilities for semantic structuring of the extracted content beyond basic layout analysis? That seems like the next frontier for maximizing the value of OCR data in ML training.
Right now, the pipeline focuses on generating OCR outputs optimized for ML models by cleaning, deduplicating, and segmenting content across modalities (text, tables, figures, formulas). For diagrams and tables, we add semantic tags and preserve layout relationships to aid downstream modeling.
I’m planning to add a semantic structuring module that goes beyond basic layout analysis — something that builds hierarchical, MECE-style representations and identifies entity relationships across sections. That’s absolutely the next frontier, and I really appreciate you pointing it out.
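A minimal version of such a hierarchical representation could be an explicit tree in which every element has exactly one parent (keeping siblings mutually exclusive) and the root spans the whole document (collectively exhaustive). This is just an illustration of the idea, not anyone's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                          # e.g. "section", "table", "figure"
    children: list["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        """Attach a child and return it, so trees can be built fluently."""
        self.children.append(child)
        return child

    def count(self) -> int:
        """Total elements in the subtree, root included."""
        return 1 + sum(c.count() for c in self.children)

# Build a tiny document tree: two sections, each holding one element.
doc = Node("document")
intro = doc.add(Node("section:introduction"))
intro.add(Node("figure"))
results = doc.add(Node("section:results"))
results.add(Node("table"))
```

Entity relationships across sections (a table cited from another section, say) would then become edges layered on top of this containment tree rather than duplicated nodes.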
Thanks again for the thoughtful feedback!
One possibility is to write the answer in Korean and use autotranslation. (And post only the autotranslation.) Double-check the technical terms, because autotranslation sometimes chooses the wrong synonym.
Another possibility is to write the answer in English inside Gmail, and Gmail will highlight spelling and grammar errors so you can fix them.
Most people here will tolerate a few mistakes if the answer has your own personal style.
(Nice project, by the way.)
:( My phone does not have spelling correction, and I didn't have my notebook.
Edit: fixed typo: gave -> have
For that very reason, an LLM would have worked perfectly for you: laying out your thoughts just as you intended, but without the distractions caused by poor spelling or grammatical mistakes. LLMs are tools—as you well know—that are already essential and will become even more so over time. The fact that some people on this platform get irritated by their use just means they’ll eventually become the dinosaurs of the future.
The problem is that I read the emails from my friends using their voice and speaking style.
I'd do the same with HN comments, but I've never heard most (any?) of them. Anyway, each commenter has a personal style, or at least I have an informal list in my head of a few hundred commenters. I remember when someone made a few good comments about some topic, so in my mind it adds weight to their opinion. I remember some details of their lives, like where they live, family, work, unusual past events, which topics they're interested in... they are people!
With too much AI, comments get bland. They all read like the same corporate speak. AI would not add pasta recipes to antirez comments, or yadayada to patio11 comments. Also, the topics on which I'd trust their opinions are very different.
I don't mind using AI to fix the text. Moreover, in one of my previous comments I recommended writing it in Gmail. I guess Gmail uses a mix of an expert system and modern AI. I hope Google someday adds that feature to the textbox in Chrome.
The problem is that some people are using AI to write short "somewhat related" comments that are not wrong but not very relevant, and also giant "walls of text" that discuss the topic and its 5 most important ramifications. So there is an overreaction to correct orthography, grammar, and "AI style".
> The fact that some people on this platform get irritated by their use just means they’ll eventually become the dinosaurs of the future.
Remember that birds are dinosaurs. And if you think that nobody is scared of birds, you should visit a pen full of rheas (ostriches are a fine substitute). If you have any shiny ornament on your clothes, they will try to eat it and you will get hit by the beak. They will also steal food from your hands, and it hurts. We visited an open zoo with my older daughter when she was a kid. The rheas were locked inside a pen for safety reasons; there were a lot of ducks and cute baby ducks, and the geese were scary because they are evil and come in organized groups to "ask" for food.
Personally, I've always held myself to a high standard in how I write, even in text messages. Some might see that as bordering on perfectionism, but for me, it's about respecting the principle behind communication: to be as clear and correct as possible.
Now that we have tools that help ensure that clarity, or at the very least, reduce distractions caused by grammar or spelling mistakes, of course I'm going to use them. I used to agonize over my comments on Twitter because you couldn't edit them after posting. I would first write them elsewhere and review them several times for any errors before finally posting. For context: I'm a retired 69-year-old physician, and even after witnessing decades of technological advancement, I'm still in awe of what this new technology can do.
Yes, I love beautiful, natural writing. I'm a voracious reader of the great classics. I regularly immerse myself in Shakespeare, Hardy, Eliot, Dickens, Dostoyevsky, Austen, Tolstoy, and many other literary masters. But I also fully embrace this tool that can elevate even the clumsiest writer's work to a clarity we've never had access to before. If that comes at the cost of a bit of stylistic uniformity, that's a reasonable trade-off. It's up to the user to shape the output, review it, and make sure their own voice and ideas shine through.
Back to your original point, I truly wasn't offended on his behalf. I was just curious. As it turns out, he was using an LLM, because his native language is Korean. Good for him. And just to be clear, I didn't intend to make your question seem inappropriate or to embarrass him in any way. If it came across that way, I apologize.