> Unlike other solutions in the space, we're specifically focused on three core areas: (1) the computer vision layer, (2) LLM context engineering, and (3) the surrounding product tooling.
I assume the goal is to continue to serve this via an API? That would be immensely helpful to teams building other products around these capabilities.
We've seen customers integrate these in a few interesting ways so far:
1. Agents (exposing these APIs as tools in certain cases, or feeding the output into a vector DB for RAG; see the sketch after this list)
2. Real-time experiences in their product (e.g. we power all of Brex's user-facing document upload flows)
3. Embedded in internal tooling for back-office automation
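To make the agent-tool pattern in (1) concrete, here's a rough sketch of wrapping an extraction endpoint as a tool. The endpoint URL, payload fields, and tool schema below are hypothetical placeholders, not our actual API:

    import requests

    API_KEY = "sk-..."  # placeholder
    EXTRACT_URL = "https://api.example.com/v1/extract"  # hypothetical endpoint

    def extract_document(file_url: str, fields: list[str]) -> dict:
        """Call a document-extraction endpoint and return structured fields."""
        resp = requests.post(
            EXTRACT_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"file_url": file_url, "fields": fields},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()

    # The same capability described as a tool an agent framework can call
    # (OpenAI-style function schema, shown purely as an example).
    extract_tool = {
        "type": "function",
        "function": {
            "name": "extract_document",
            "description": "Extract structured fields from a document at a URL.",
            "parameters": {
                "type": "object",
                "properties": {
                    "file_url": {"type": "string"},
                    "fields": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["file_url", "fields"],
            },
        },
    }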
Our customers are already requesting new APIs and capabilities for all the other problems they run into with documents (e.g. fintech customers want fraud detection, healthcare users need form filling). Some of these we'll be rolling out soon!
1. Trellis (YC W24)
2. Roe AI (YC W24)
3. Omni AI (YC W24)
4. Reducto (YC W24)
Other players (extended):
1. Unstract: Open-source ETL for documents (https://github.com/Zipstack/unstract)
2. Datalab: Makers of Surya/Marker
3. Unstructured.io
And a hosted model: https://docstrange.nanonets.com/
One persistent challenge was generalizing across “wild” PDFs, especially multi-page tables.
Your mention of agentic OCR correction and semantic chunking really caught my attention. I’m curious — how did you architect those to stay consistent across diverse layouts without relying on massive rule sets?
A lot of customers choose us for our handwriting, checkbox, and table performance. To handle complex handwriting, we've built an agentic OCR correction layer that uses a VLM to review and correct low-confidence OCR errors.
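At a high level, that correction pass looks something like the sketch below. The threshold, prompt, and helper functions are illustrative stand-ins rather than our actual implementation:

    # Sketch of a VLM review pass over low-confidence OCR spans.
    # `run_ocr` and `call_vlm` are stand-ins for whatever OCR engine / VLM you use.
    CONF_THRESHOLD = 0.85  # illustrative cutoff

    def correct_low_confidence_spans(page_image, run_ocr, call_vlm) -> str:
        spans = run_ocr(page_image)  # -> [{"text", "bbox", "confidence"}, ...]
        out = []
        for span in spans:
            if span["confidence"] >= CONF_THRESHOLD:
                out.append(span["text"])
                continue
            # Crop just the uncertain region and ask the VLM to re-read it,
            # giving it the original OCR guess as context.
            crop = page_image.crop(tuple(span["bbox"]))
            fixed = call_vlm(
                image=crop,
                prompt=(
                    f"An OCR engine read this region as '{span['text']}' with low "
                    "confidence. Transcribe the text in the image exactly."
                ),
            )
            out.append(fixed.strip())
        return " ".join(out)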
Tables are a tricky beast, and the long tail of edge cases here is immense. A few things we've found to be really impactful are (1) semantic chunking that detects table boundaries (so a table that spans multiple pages doesn't get chopped in half) and (2) table-to-HTML conversion (in addition to markdown). Markdown is great at representing most simple tables, but can't represent cases where you have e.g. nested cells.
You can see examples of both in our demo! https://dashboard.extend.ai/demo
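Here's a rough sketch of what the boundary-aware part of (1) can look like; the block schema and continuation flag are made up for illustration, not our internal representation:

    # Sketch of semantic chunking that respects table boundaries, so a table
    # spanning a page break stays in one chunk. Block fields are illustrative.
    def chunk_blocks(blocks: list[dict], max_chars: int = 4000) -> list[list[dict]]:
        chunks, current, size = [], [], 0
        for block in blocks:
            mid_table = block["type"] == "table" and block.get("continues_previous_table")
            if current and size + len(block["text"]) > max_chars and not mid_table:
                # Safe to break here: we're not in the middle of a table.
                chunks.append(current)
                current, size = [], 0
            current.append(block)
            size += len(block["text"])
        if current:
            chunks.append(current)
        return chunks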
Accuracy and data verification are challenging. We have a set of internal benchmarks we use, which gets us pretty far, but that's not always representative of specific customer situations. That's why one of the earliest things we built was an evaluation product, so that customers can easily measure performance on their exact docs and use cases. We recently added support for LLM-as-a-judge and semantic similarity checks, which have been really impactful for measuring accuracy before going live.
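To give a feel for what those checks do, here's a simplified version of that kind of scoring logic. The thresholds and the `embed` / `ask_judge_llm` helpers are placeholders for whatever models you plug in, not our production code:

    # Simplified field-level check: exact match first, then embedding similarity,
    # then an LLM judge for the ambiguous middle ground.
    def score_field(expected: str, predicted: str, embed, ask_judge_llm) -> bool:
        if expected.strip().lower() == predicted.strip().lower():
            return True
        e1, e2 = embed(expected), embed(predicted)
        dot = sum(a * b for a, b in zip(e1, e2))
        norm = (sum(a * a for a in e1) ** 0.5) * (sum(b * b for b in e2) ** 0.5)
        cosine = dot / norm
        if cosine > 0.95:  # clearly the same value (threshold is illustrative)
            return True
        if cosine < 0.70:  # clearly different
            return False
        # Ambiguous: ask an LLM judge whether the two values mean the same thing.
        verdict = ask_judge_llm(
            f"Ground truth: {expected}\nPrediction: {predicted}\n"
            "Do these refer to the same value? Answer YES or NO."
        )
        return verdict.strip().upper().startswith("YES")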
https://docs.extend.ai/2025-04-21/product/general/how-credit...
Are those just different SLAs or different APIs or what?
Our goal is to provide customers with as much transparency & flexibility as possible. Our pricing has 2 axes:
- the complexity of the task
- performance processing vs cost-optimized processing
Complexity matters because e.g. classification is much easier than extraction, and as such it should be cheaper. That unlocks a wide range of use cases, such as tagging and filtering pipelines.
Toggles for performance are also important because not all use cases are created equal. Just as it's valuable to have options ranging from cheaper models to the best foundation models, the same applies to document tasks.
For certain use cases, you might be willing to take a slight hit to accuracy in exchange for better costs and latency. To support this, we offer a "light" processing mode (with significantly lower prices) that uses smaller models, fewer VLMs, and more heuristics under the hood.
For other use cases, you simply want the highest accuracy possible. Our "performance" processing mode is a great fit there: it enables layout models, signature detection, handwriting VLMs, and the most performant foundation models.
In fact, most pipelines we see in production end up combining the two (cheap classification and splitting, paired with performance extraction).
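To sketch what that looks like in code (the client object and mode names below are illustrative, not our actual SDK):

    # Hypothetical mixed pipeline: cheap mode for classify/split, performance
    # mode only where accuracy matters most (extraction).
    def process_document(client, file_url: str) -> list[dict]:
        doc_type = client.classify(file_url, mode="light")
        sections = client.split(file_url, mode="light")
        return [
            client.extract(section, doc_type=doc_type, mode="performance")
            for section in sections
        ]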
Without this level of granularity, we'd either be overcharging certain customers or undercharging others. I definitely understand how this can be confusing, though, and we'll work on making our docs better!
The amount that your users care about.
At a large enough scale, users will care about the cost differences between extraction and classification (very different!) and finding the right spot on the accuracy-latency curve for their use case.
One interesting thing we've learned is that most production pipelines end up using a combination of the two (e.g. cheap classification and splitting, paired with performance extraction).
Our goal is to provide customers with as much flexibility as possible. For certain use cases, you might be willing to take a slight hit to accuracy in exchange for better costs and latency. To support this, we offer a "light" processing mode (with significantly lower prices) that uses smaller models, fewer VLMs, and more heuristics under the hood.
For other use cases, you simply want the highest accuracy possible. Our "performance" processing mode is a great fit there: it enables layout models, signature detection, handwriting VLMs, and the most performant foundation models.
We back this up with a native evals experience in the product, so you can directly measure the % accuracy difference between the two modes for your exact use case.
As a rule of thumb, light processing mode is great for (1) most classification tasks, (2) splitting on smaller docs, (3) extraction on simpler documents, or (4) latency sensitive use cases.
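If it's useful, that rule of thumb maps to a decision function roughly like the one below; the task names and the page-count threshold for "smaller docs" are illustrative:

    # Rule-of-thumb mode picker. "Smaller docs" is approximated with a made-up
    # page threshold; tune it against your own eval set.
    def pick_mode(task: str, page_count: int, latency_sensitive: bool, simple_layout: bool) -> str:
        if latency_sensitive or task == "classify":
            return "light"
        if task == "split" and page_count <= 20:
            return "light"
        if task == "extract" and simple_layout:
            return "light"
        return "performance"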
ng3n is more of a grid-like workflow solution on top of documents. It's a user-facing application geared towards non-technical users who have processing needs.
If there are all these new problems that became solvable, what exactly are they?
I'd be interested in replacing Datalab with Extend, but I'm not sure what avenues that opens for ng3n. Would be very curious to learn!
The world today is quite different though. In the last 24 months, the "TAM" for document processing has expanded by multiple orders of magnitude. In the next 10 years, trillions of pages of documents will be ingested across all verticals.
Previous generations of tools were always limited to the same set of structured/semi-structured documents (e.g. tax forms). Today, engineering teams are ingesting the true wild west of documents, from 500-page mortgage packages to extremely messy healthcare forms. All of those legacy providers fall apart when tackling these genuinely unstructured docs.
We work with hundreds of customers now, and I'd estimate 90% of the use cases we tackle weren't technically solvable until ~12 months ago. So it's nearly all greenfield work, and very rarely replacing an existing vendor or solution already in place.
All that to say, the market is absolutely huge. I do suspect we'll see a plateau in new entrants though (and probably some consolidation of current ones). With how fast the AI space moves, it's nearly impossible to compete if you enter a market just a few months too late.
For example, we expose options for AI teams to control how chunking works, whether to enable a bounding box citation model, and whether a VLM should correct handwriting errors.
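Concretely, that kind of configuration might look like the payload below; the field names are illustrative placeholders, not our exact API parameters:

    # Illustrative processing config; field names are placeholders.
    parse_config = {
        "chunking": {"strategy": "semantic", "max_chunk_chars": 4000},
        "citations": {"bounding_boxes": True},     # bounding-box citation model on/off
        "handwriting": {"vlm_correction": True},   # VLM pass over handwriting
    }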
For most customers we speak with, the evaluation is actually between Extend and building it in-house (and we have a pretty good win rate here).