I've been evaluating schema libraries for a better-than-Zod source of truth, and ArkType is where I've been focused. Zod v4 just entered beta[1], and it addresses many of my problems with it. For such a mature library to improve like this, v4 is a treat and speaks volumes about the quality of engineering. But ArkType has a much larger scope, and feels to me more like a data modeling language than a library. Something I definitely want as a dev!
The main downside I see is that its runtime code size footprint is much larger than Zod's. For some frontends this may be acceptable, but it's a real cost that isn't wise to pay in many cases. The good news is that with precompilation[2] I think ArkType will come into its own, look more like a language with a compiler, and be suitable for lightweight frontends too.
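To make the precompilation idea concrete, here's a sketch of the kind of output such a build step could emit (this shape is purely hypothetical, not ArkType's actual codegen): a plain validator function per type, so the runtime type engine never ships to the client at all.

```typescript
// Hypothetical precompiled output for a schema like
// type({ id: "number", username: "string" }) -- assumed shape, not real codegen.
interface User {
  id: number;
  username: string;
}

function isUser(data: unknown): data is User {
  if (typeof data !== "object" || data === null) return false;
  const d = data as Record<string, unknown>;
  return typeof d.id === "number" && typeof d.username === "string";
}

console.log(isUser({ id: 1, username: "ada" })); // true
console.log(isUser({ id: "1", username: "ada" })); // false
```

The bundle then pays only for the checks it actually uses, which is how a "language with a compiler" could stay lightweight on the frontend.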
I was so shocked by how good this is that I ended up writing up a small deck (haven't had time to write this into a doc yet): https://docs.google.com/presentation/d/1fToIKvR7dyvQS1AAtp4Y...
Shockingly good (for backend)
[0] Typia: https://typia.io/
[1] Nestia: https://nestia.io/
This is because it relies on patching the TypeScript implementation. I'm curious if its approach is even feasible with Go?
Author + contributors and ts-patch team[0] seem up for a rewrite in Go based on that thread! Might be bumpy, but a pure TS approach is really appealing. I'm rooting for them :)
[0] https://github.com/nonara/ts-patch/issues/181#issuecomment-2...
I'm not a Typia user myself, but my RPC framework has the same feature, and the MinLength issue you mentioned doesn't crop up if you only use the type tags at the client-server boundary, which is enough in my experience.
Once everything clicked (quite shortly in), I was a bit blown away by everything "just working" as pure TypeScript; I can only describe the DX as "smooth" compared to Zod because now it's TypeScript.
So...it's a parser. Like Zod or effect schema.
This becomes very nice because ArkType's data model is close to an enriched version of TypeScript's own data model. So it's like having your TypeScript types introspectable and transformable at runtime.
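A toy version of that idea (hand-rolled for illustration, not ArkType's API): a runtime descriptor you can walk and transform, from which the static type is derived, so the compile-time and runtime views of a type stay in sync.

```typescript
// Toy runtime type descriptors: introspectable and transformable at runtime.
type Descriptor =
  | { kind: "string" }
  | { kind: "number" }
  | { kind: "object"; props: Record<string, Descriptor> };

// Derive the static type from the runtime descriptor.
type Infer<D extends Descriptor> =
  D extends { kind: "string" } ? string :
  D extends { kind: "number" } ? number :
  D extends { kind: "object"; props: infer P extends Record<string, Descriptor> }
    ? { [K in keyof P]: Infer<P[K]> }
    : never;

const userDescriptor = {
  kind: "object",
  props: { id: { kind: "number" }, name: { kind: "string" } },
} satisfies Descriptor;

type User = Infer<typeof userDescriptor>; // { id: number; name: string }

// The same value is available at runtime for introspection:
console.log(Object.keys(userDescriptor.props)); // logs the field names: id, name
```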
Yes, it unfortunately really does bloat your bundle a lot, which is a big reason I personally chose to go with Valibot instead (it also helps that it's a lot closer to Zod's API, so it's easier to pick up).
Thanks for linking that issue, I'll definitely revisit it if they can get the size down.
export reflect type User = {
  id: number;
  username: string;
  // ...
};
Edit: just remembered about this one: https://github.com/GoogleFeud/ts-runtime-checks
It's a miracle it can be 100x faster than Zod, but speed was never my issue with Zod to begin with.
The thing is Zod seems fairly standard in the ecosystem, and I value that more than novelty.
Heads up: it seems overall more scannable than an equivalent Zod schema though, given the similarity to 'raw' TS.
Also it seems like a fairly short hop to this engine being used with actual raw TS types in a compilation step or prisma-style codegen?
If you mess that up (either by being too flat, too customizable, or too limited), library users will start coming up with their own wrappers around it, which will make your stuff slower and your role as a maintainer hell.
(source: 15 years intermittently maintaining a similar project).
There is an obvious need for a validation library nowadays that bridges oop, functional and natural languages. Its value, if adopted as a standard, would be immense. The floor is still lava though, can't make it work in today's software culture.
The need is obvious. As natural language becomes more prominent in programming, there will be a need for a bridge to the older traditional paradigms. I can't give more details; it's the kind of thing you can't put in prose yet.
That doesn't work for JavaScript; you need full backwards compatibility at all times. Porting runtime type information to JS would change the language innards so much, it'd just be a different language in the end. At that point we could equally argue whether browser vendors should ship a Python runtime in browsers and add support for <script type="application/python">.
I mean it.
I've been parsing (not just validating) runtime values for a decade (io-ts, Zod, effect/schema, tcomb, etc.) and I find the performance penalty irrelevant in virtually any project, either FE or BE.
Seriously, people will fill their website with Google tracking crap, 20000 libraries, react crap for a simple crud, and then complain about ms differences in parsing?
Are you using a lot of deeply nested objects + unions/intersections?
I agree though, that filling your website with tracking crap is a stupid idea as well.
Zod alone accounts for a significant portion of the CPU time.
> Just to give an idea, in our sample of data, it takes sub 0.1ms to validate 1 row, ~3ms to validate 1,000 and ~25ms to validate 100,000 rows.
Still far behind if the 100x is to be believed. v4 isn't even a 10x improvement. Nice changes though.
There are a few tools out there that generate code that TypeScript can prove validates your schema. That, I think, is the path forward.
Using a library like Zod requires you to trust that Zod will correctly validate the type. Instead, I much prefer to have schema validation code that TypeScript proves will work correctly. I want build-time checks that my runtime validation is correct.
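You can get a small taste of that guarantee even without codegen (a hand-rolled sketch, not Typia's output): if a parse function builds its result only from values the compiler has narrowed, TypeScript itself checks that the validation implies the type; delete any check below and the final `return` stops compiling.

```typescript
type Account = { id: number; email: string };

function parseAccount(data: unknown): Account {
  if (typeof data !== "object" || data === null) throw new Error("not an object");
  if (!("id" in data) || typeof data.id !== "number") throw new Error("bad id");
  if (!("email" in data) || typeof data.email !== "string") throw new Error("bad email");
  // TS accepts this return only because the checks above narrowed both fields.
  return { id: data.id, email: data.email };
}

console.log(parseAccount({ id: 1, email: "a@b.c" }));
```

Codegen tools scale this pattern up; the point is that the proof lives in the compiler, not in trust of a library's inference.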
Typia generates runtime code that TypeScript can check correctly validates a given schema: https://typia.io/docs/validators/assert/. I've never actually used it, but this is closer to the realm I prefer.
Not sure I understand this -- are you assuming there’s an existing schema, either a TS type or maybe something else like JSON Schema, and you’re trying to ensure a separate Zod schema parses it correctly?
The usual way to use Zod (or Valibot, etc) is to have the Zod schema be the single source of truth; your TS types are derived from it via z.infer<typeof schema>. That way, there’s no need to take anything on trust, it just works.
- parse as their validation mechanism
- compose small units (close to the type system’s own semantics for their runtime equivalents, trivially verifiable), with general composition semantics (also close to the type system), into larger structures which are (almost) tautologically correct.
Obviously there’s always room for some error, but the approach is about as close to “safe” as you can get without a prover. The most common source of error isn’t mistakes in the underlying implementation but mistakes in communicating/comprehending nuances of the semantics (especially where concepts like optionality and strictness meet concepts like unions and intersections, which are also footguns in the type system).
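A toy illustration of that composition style (hand-rolled, not any particular library's API): each small parser is trivially verifiable on its own, and the object combinator derives its result type from the parts, mirroring the type system's own semantics.

```typescript
// A parser returns the typed value or throws on bad input.
type Parser<T> = (x: unknown) => T;

const str: Parser<string> = (x) => {
  if (typeof x !== "string") throw new Error("expected string");
  return x;
};

const num: Parser<number> = (x) => {
  if (typeof x !== "number") throw new Error("expected number");
  return x;
};

// Compose field parsers into an object parser; the result type is
// computed from the shape, so composition is (almost) tautologically correct.
function obj<S extends Record<string, Parser<unknown>>>(
  shape: S,
): Parser<{ [K in keyof S]: ReturnType<S[K]> }> {
  return (x) => {
    if (typeof x !== "object" || x === null) throw new Error("expected object");
    const out: Record<string, unknown> = {};
    for (const key of Object.keys(shape)) {
      out[key] = shape[key]((x as Record<string, unknown>)[key]);
    }
    return out as { [K in keyof S]: ReturnType<S[K]> };
  };
}

const parseRgb = obj({ red: num, green: num, blue: num });
console.log(parseRgb({ red: 0, green: 128, blue: 255 }));
```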
// zod 3 syntax
import { z } from 'zod'
const RGB = z.object({
  red: z.number(),
  green: z.number(),
  blue: z.number(),
})
type RGB = z.infer<typeof RGB>
// same thing as:
// type RGB = { red: number; green: number; blue: number };
For the initial migration, there are tools that can automatically convert types into the equivalent schema. A quick search turned up https://transform.tools/typescript-to-zod, but I've seen others too.
For what it's worth, I have come to prefer this deriving-types-from-parsers approach to the other way around.
import { z } from 'zod'
type Message = { body: string }
const messageSchema: z.ZodType<Message> = z.object({ body: z.string() })
The first one is removing a TS feature that failed to make its way into JS, and the second is about explicitly carving out a space in the syntax so that you can use TS (or Flow!) in JS codebases without being locked into any particular tooling.
https://developer.huawei.com/consumer/en/doc/harmonyos-guide...
Me: "Awesome, so I get an object from an API, it will be trivial to check at runtime if it's of a given type. Or to have a debug mode that checks each function's inputs to match the declared types. Otherwise the types would be just an empty charade. Right?"
TS: "What?"
Me: "What?"
Realizing this was a true facepalm moment for me. No one ever thought of adding a debug TS mode where it would turn
function myFunc(a: string, b: number) {}
into
function myFunc(a: string, b: number) {
  assert(typeof a === "string")
  assert(typeof b === "number")
}
to catch all the "somebody fetched a wrong type from the backend" or "someone did some stupid ((a as any) as B) once to silence the compiler and now you can't trust any type assertion about your codebase" problems. Nada.
The debug mode sounds interesting at first thought, but quickly explodes in complexity when you deal with more complex object types and signatures. To enable automatic runtime validation for all cases, you would need to rewrite programs so thoroughly that you’re pretty much guaranteed to introduce bugs and behaviour changes that were not present in the source code.
In my opinion it’s great that TS draws a strict boundary to avoid runtime impact at all cost, and leave that to libraries like Zod, which handle dealing with external data.
> to catch all the "somebody fetched a wrong type from the backend" or "someone did some stupid ((a as any) as B) once to silence the compiler and now you can't trust any type assertion about your codebase" problems. Nada.
Those type casts are sure annoying, but what's the alternative? Even in your hypothetical debug mode, you would not be safe here, since you're effectively telling the compiler you know better and it's supposed to transform that type, not assert it. Or do you want to remove the escape hatch that "as" is? Because that would be a major pain in the ass in situations where you just do know better, or don't want to ensure perfect type safety for something you know will work.
You can’t make things idiot proof, no matter how hard you try. That doesn’t make preprocessing type hints useless.
I don't think so. I just added typia's is* checks to places where I am digesting a json input, it was rather trivial, and now I can actually trust for the first time that the object I am holding actually matches the declared type.
> "You can’t make things idiot proof"
I just did. You can't have a non-idiot-proof program running in the wild and blindly trusting the outputs of whatever API it uses.
However, if we're talking about the TypeScript compiler, the complexity required to ensure end-to-end runtime type soundness is orders of magnitude greater than sprinkling a bunch of isString checks here and there.
The closest thing to JavaScript would probably be Dart, now that it has sound types [1].
You’re right that it’s a pain in some situations.