- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
Like Agile, CI, or DevOps, you can insist on the original definition or submit to semantic diffusion and use the terms as they are commonly understood.
RPC systems were notoriously unergonomic and at best marginally successful. See Sun RPC, RMI, DCOM, CORBA, XML-RPC, SOAP, Protocol Buffers, etc.
People say it is not RPC, but all the time we write some function in JavaScript like
const getItem = async (itemId) => { ... }
which does a GET /item/{item_id}
and on the backend we have a function that looks like Item getItem(String itemId) { ... }
with some annotation that explains how to map the URL to an item call. So it is RPC, but instead of a highly complex system that is intellectually coherent but awkward and makes developers puke, we have a system that's more manual than it could be but has a lot of slack and leaves developers feeling like they're in control. 80% of what's wrong with it is that people won't just use ISO 8601 dates.

I'd like to ask seasoned devs and engineers here: is it the normal industry-wide blind spot where people still crave, and are happy creating, 12 different descriptions of the same thing across remote, client, unit tests, e2e tests, ORM, and API schemas, all the while feeling much more productive than <insert monolith here>?
It depends on the system in question, sometimes it's really worth it. Such setups are brittle by design, otherwise you get teams that ship fast but produce bugs that surface randomly in the runtime.
I'm not sure what would lead to this setup. For years there have been frameworks that generate their own OpenAPI spec, and even API gateways that not only take that OpenAPI spec as input for their routing configuration but also support exporting their own.
But I don't like the micromanagement of field encoding formats, and I don't like the HTTP/2 streaming stuff that makes it impossible to directly consume gRPC APIs from JavaScript running in the browser, and I don't like the code generators that produce unidiomatic client libraries that follow Google's awkward and idiosyncratic coding standards. It's not that I don't see their value, per se. It's more that these kinds of features create major barriers to entry for both users and implementers. And they are there to solve problems that, as the continuing predominance of ad-hoc JSON slinging demonstrates, the vast majority of people just don't have.
A lot of people just do whatever comes to mind first and don't think about it so they don't get stuck with analysis paralysis.
curl -fail
Handling failure might be the real hardest programming problem, ahead of naming and caches and such. It boggles my mind the hate people have for exceptions, which at least make you "try" quite literally if you don't want the system to barrel past failures; some seem nostalgic for errno, and others will fight mightily with Either<A,B> or Optional<X> or other monads and wind up just barreling past failures in the end anyway. A 500 is a 500.

1. Catch exceptions from third-party code and from talking to the outside world right away.
2. Never catch exceptions that we throw ourselves.
3. Only (and always) throw exceptions when you're in a state where you can't guarantee graceful recovery. Exceptions are for those exceptional circumstances where the best thing to do is fail fast and fail hard.
I'm joking, but I did actually implement essentially that internally. We start with TypeScript files as its type system is good at describing JSON. We go from there to JSON Schema for validation, and from there to the other languages we need.
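For a rough sense of that first hop, here is a minimal sketch (the type is invented, and I'm assuming a generator along the lines of ts-json-schema-generator; the real pipeline may differ):

// item.ts - the TypeScript type is the source of truth
export interface Item {
  id: string;
  name: string;
  createdAt: string; // ISO 8601, please
  tags?: string[];
}

// roughly what the generated JSON Schema ends up as
{
  "$id": "Item",
  "type": "object",
  "required": ["id", "name", "createdAt"],
  "properties": {
    "id": { "type": "string" },
    "name": { "type": "string" },
    "createdAt": { "type": "string" },
    "tags": { "type": "array", "items": { "type": "string" } }
  }
}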
Watch out, OpenAPI is now 3 versions deep and supports both JSON and YAML.
YAML made me miss JSON. JSON made me miss XML.
same goes for java vs php/python
But boy, what messy spaghetti we sometimes get for it.
(Invent their own, badly, at first. Sigh.)
https://www.codeproject.com/Articles/1186940/Lisps-Mysteriou...
Schema validation and type generation vary by language. When we need to validate schemas in JS/TS land, we're using `ajv`. Our generation step exports the JSON Schema to a valid JS file, and we load that up with AJV and grab schemas for specific types using `getSchema`.
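Roughly what that looks like in practice (a sketch; the generated module and the "Item" key are hypothetical):

import Ajv from "ajv";
// hypothetical output of the generation step: a module exporting the JSON Schemas
import { ItemSchema } from "./generated/schemas";

const ajv = new Ajv();
ajv.addSchema(ItemSchema, "Item"); // register under a key so getSchema can find it

export function assertItem(payload: unknown): void {
  const validate = ajv.getSchema("Item");
  if (validate && !validate(payload)) {
    // AJV attaches the failures to the compiled validator function
    throw new Error("invalid Item: " + JSON.stringify(validate.errors));
  }
}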
I evaluated (shallowly) for our use case (TS/JS services, PHP monolith, several deployment platforms):
- typespec.io (didn't like having a new IDL, mixes transport concerns with service definition)
- trpc (focused on TS-only codebases, not multi language)
- OpenAPI (too verbose to write by hand, too focused on HTTP)
- protobuf/thrift/etc (too heavy, we just want JSON)
I feel like I came across some others, but I didn't see anyone just using TypeScript as the IDL. I think it's quite good for that purpose, but of course it is a bit too powerful. I have yet to put in guardrails that will error out when you get a bit too type happy, or use generics, etc.
I was however impressed with FastAPI, a Python framework which brought together API implementation, data types and Swagger spec generation in a very nice package. I still had to take care of integration tests by myself, but with pytest that's easy.
So there are some solutions that help avoid schema duplication.
But then, as the project evolves, you actually discover that these models have specific differences in different layers, even though they are mostly the same, and it becomes much harder to maintain them as {common model} + {differences} than it is to just admit that they are different but related models.
For some examples of very common differences:
- different base types required for different languages (particularly SQL vs MDW vs JavaScript)
- different framework or language-specific annotations needed at different layers (public/UNIQUE/needs to start with a capital letter/@Property)
- extra attached data required at various layers (computed properties, display styles)
- object-relational mismatches
The reality is that your MDW data model is different from your database schema and different from your UI data model (and there may be multiple layers in any of these). Any attempt to force them to conform, or to keep them automatically in sync, will fail unless you add to it all of the logic of those differences.
And then you don't really need most of it, and the one thing you do need is so utterly complicated that it is stupid (no ROI) to even bother being compliant.
And truly, less is more.
CORBA is less "incoherent", but I'm not sure that's actually helpful, since it's still a huge mess. You can most likely become a lot more proficient with RESTful APIs and be more productive with them, much faster than you could with CORBA. Even if CORBA is extremely well specified, and "RESTful" is based more on vibes than anything specific.
Though to be clear I'm talking about the current definition of REST APIs, not the original, which I think wasn't super useful.
Circa 2006 I was working on a site that needed to calculate sales tax and we were looking for an API that could help with that. One vendor used SOAP, which would have worked if we were running ASP.NET, but we were running PHP. In two days I figured out enough to reverse engineer the authentication system (the docs weren't quite enough to make something that worked), but then I had more problems to debug. A competing vendor used a much simpler system and we had it working in 45 min -- auth is always a chokepoint because if you can't get it working 100% you get 0% of the functionality.
HTTP never had an official authentication story that made sense. According to the docs there are Basic, Digest, etc. Have you ever seen a site that uses them? The world quietly adopted cookie-based auth that was an ad-hoc version of JSON Web Tokens; once we got an intellectually coherent spec, snake-oil vendors could spam HN with posts about how bad JWT is because... it had a name and numerous specifics to complain about.
Look at various modern HTTP APIs and you see auth is all across the board. There was the time I did a "shootout" of roughly 10 visual recognition APIs: I got all of them working in 20-30 mins, except for Google, where I had to install a lot of software on my machine, trashed my Python setup, and struggled mightily because... they had a complex theory of authentication which was a barrier to doing anything at all.
Worse is better.
I have no idea where you got that idea from. I have yet to work on a project where any service doesn't employ a mix of bearer-token authentication schemes and API keys.
I lost the thread...are we talking websites or APIs?
Both use HTTP, but those are pretty different interfaces.
(I have been offering REST’ish and gRPC in software I write for many years now. With the REST’ish api generated from the gRPC APIs. I’m leaning towards dropping REST and only offering gRPC. Mostly because the generated clients are so ugly)
REST is just too "floppy", there are too many ways to do things. You can transfer data as a part of the path, as query parameters, as POST fields (in multiple encodings!), as multipart forms, as streaming data, etc.
But if you have multiple targets, or unusual compilers, or don't enjoy working with build systems, stay away from complex stuff. Sure, REST may need some manual scaffolding, but no matter what your target is, there is a very good chance it has JSON and HTTP libs.
There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards that sloppyREST has casually dispatched (pun fully intended) in the real world. After some 30+ years of highly prescriptive RPC mechanisms, at some point it becomes time to stop waiting for those things to unseat "sloppy" mechanisms and it's time to simply take it as a brute fact and start examining why that's the case.
Fortunately, in 2025, if you have a use case for such a system, and there are many many such valid use cases, you have a number of solid options to choose from. Fortunately sloppyREST hasn't truly killed them. But the fact that it empirically dominates it in the wild even so is now a fact older than many people reading this, and bears examination in that light rather than casual dismissals. It's easy to list the negatives, but there must be some positives that make it so popular with so many.
Care to list them? REST mania started around the early 2000s, and at that time there was only CORBA available as a cross-language portable RPC. Microsoft had DCOM.
And that was it. There was almost nothing else.
It was so bad that ZeroC priced their ICE suite based on a PERCENTAGE OF GROSS SALES: https://web.archive.org/web/20040603094344/http://www.zeroc.... Their ICE suite was basically an RPC with a human-designed IDL and non-crazy bindings for C/C++/Java.
Then the situation got WORSE when SOAP came.
At this point, anything, literally anything, that didn't involve XML was greeted with enthusiasm.
Arguably the closest thing to a prescriptive winner is laying OpenAPI on top of REST APIs.
Also, REST defined as "A vaguely HTTP-ish API that carries JSON" would have to be put later than that. Bear in mind that even after JSON was officially "defined" it's not like it instantly spread everywhere. I am among the many people who reconstructed something like it because we didn't know about it yet, even though it was nominally years old by that point. It took years to propagate out. I'd put "REST as we are talking about it" in the late 2000s at the earliest for when it was really popular, and only into the 2010s for when you started expecting people to mean that when they said "Web API".
They won inside large companies: Coral in Amazon, Protobufs/gRPC in Google, Thrift in Facebook, etc. And they are slowly spreading outside of them.
OpenAPI is indeed an attempt to bring some order into the HTTP RPC world, and it's pretty successful. I'm pretty sure all the APIs that I used lately were based on OpenAPI descriptions.
So the trend is clear: move away from loosely-defined HTTP APIs into strict RPC frameworks with code generation because this is a superior approach. But once you start doing it, HTTP becomes a hindrance, so alternatives like gRPC are gaining popularity.
> Also, REST defined as "A vaguely HTTP-ish API that carries JSON" would have to be put later than that.
Ruby on Rails came out in 2005, and Apple shipped it in 2006. RESTful APIs were one of its major selling points ( https://web.archive.org/web/20061020014807/http://manuals.ru... ).
AWS S3 API, designed around the same time, also was fully REST-ful. This was explicitly one of its design goals, and it was not really appreciated by most people.
My meta point is that it is easy for programmers to come to the conclusion that all that should exist is the stuff that large companies use, as I see so many people believe, but if you try to model the world on that assumption you end up very frustrated and confused by how the real world actually works. You can't push a highly prescriptive, very detailed, high-up-front-work methodology out on everyone. Not because it's a bad idea per se, or because it "wouldn't work", but because you literally can't. You can't force people to be "better" programmers than they are by pushing standards on them.
My gut leans in the direction of better specifications and more formal technologies, but I can't push that on my users. It really needs to be a pull operation.
Oh, for sure. A company can just mandate something internally, whether it's a good idea or not. But superior approaches tend to slowly win out on merit even in the wider world. Often by standardizing existing practices, like OpenAPI did.
And I believe that strict prescriptive APIs with code generation are superior. This is also mirrored by dynamic vs static typing in languages. I remember how dynamic languages were advertised in the early 2000s as more "productive" than highly prescriptive C++/Java.
But it turned out to be a mistake, so now even dynamic languages are gaining types.
Off the top of my head, OData.
Of course, REST won handily. We're not in this environment anymore, thankfully, and REST now is getting some well-deserved scrutiny.
OData officially started out in 2007. Roy Fielding's thesis was published in 2000.
They appeared almost simultaneously, for the very same reason: REST by itself is too vague and unreliable.
It's the question of long-term consequences for supportability and product evolution. Will the next person supporting the API know all the hidden gotchas?
Which are...terrible.
Example: structured schema, but no way to require fields.
Make 5 decisions like that and you lost 31/32 of the market.
A vague architecture style is not competition to a concrete framework. At best, you're claiming that the competition to gRPC is rolling your own ad-hoc RPC implementation.
What I expect to happen now is an epiphany. Why do most developers look at tools like gRPC and still decide it's a far better option to roll their own HTTP-based RPC interface? I mean, it's a rational choice for most. Think about that for a moment.
You have all the permutations that sail under the name "REST" to some degree, where there seem to be no rules and everyone does something different. And then you have an RPC mechanism that is about two orders of magnitude tighter, and people complain about not having required fields? How? Why? What are they on about?
I mean, if you write validation code for every type, by hand, you will probably still have to do less overall work than for REST'ish monstrosities. But since you have a lot more regularity, you can actually generate this code. Or use reflection.
How much time do people really spend on their interface types? After the initial design I rarely touch them. They're like less than a percent of the overall work.
I think there is some degree of confusion in your reply. You're trying to compare a framework with an architecture style. It's like comparing, say, OData with rpc-over-HTTP.
I'm aware this is an unappealingly rustic reality, but it is nonetheless the reality experienced by most.
Besides in the practical world we are able to observe, REST isn't even an architectural style: it is several architectural styles multiplied by every possible permutation of how you address a dozen or more different concerns. Necessitating disambiguation whenever you talk about it. First to state the obvious, that it isn't really what Fielding described, then on to communicating what vector describes your particular permutation of choices.
It's okay. We don't need to pretend any of us care about REST beyond as an interesting academic exercise.
So you can't rely on having structured errors for common codes such as 401/403/404, it's very typical to get unstructured text in payloads for such errors. Not a few REST bindings just fail with unhelpful serialization exceptions in such cases.
At some point you need proper time-aware libraries, and whatever language you use, it has been through several iterations of them (JavaScript Date, moment, dayjs, ...) because they got it wrong the first time and probably the second time too.
With ISO 8601 it is easy to get the yyyy, yyyy-mm, hh and other things you might want with primitive tools (awk). Getting the day of the week or the time involved is not hard, which gets you to the chronological Rosetta stone
https://en.wikipedia.org/wiki/Julian_day
which is a multiplier and an offset away from Unix time, except for all those leap seconds and whatnot. With Unix timestamps, comparisons are easy and differences are easy, and even knowing it is Thorsday is easy; they don't sort as strings, but GNU sort has a -n option. The only trouble is it's a bunch of twisty little numbers that look alike.
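For what it's worth, the "primitive tools" point carries over to code as well: assuming everything is normalized to UTC, ISO 8601 strings sort chronologically as plain strings and the fields can be sliced out positionally (a small TypeScript sketch):

const stamps = ["2025-07-10T09:48:27Z", "2024-12-31T23:59:59Z", "2025-01-01T00:00:00Z"];
stamps.sort();                        // lexicographic order == chronological order, same offset assumed
const year = stamps[0].slice(0, 4);   // "2024" - the same trick awk does with substr()
const month = stamps[0].slice(5, 7);  // "12"
const epoch = Date.parse(stamps[0]);  // milliseconds since the Unix epoch, for arithmetic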
"2025-07-10T09:48:27+01:00"
That contains, by my quick glance, at least 8 fields of information. I would argue the one field it does not carry but probably should is the _name_ of the timezone it is for.

Since ISO 8601 costs 133 CHF, I suspect hardly anybody has actually read it. I think if you wanted something that supports all the weird stuff, you might find somebody wrote it in 390 assembly.
What do you think primitive types are supposed to be?
It’s a bit odd to say Fielding “won the war” when for years he had a blog pointing out all the APIs doing RPC over HTTP and calling it REST.
He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.
If that’s what you call victory, I guess Marx can rest easy.
I'm not sure the "name was stolen"; rather, the zealot concept never got any traction in production environments due to all the problems it creates.
That is a false dichotomy. Fielding gave a name to a specific concept / architectural style, the concept got ignored (rightly or wrongly, doesn’t matter) while the name he coined got recycled for something essentially entirely unrelated.
What I object to about eg xml-rpc is that it layers a second RPC protocol over HTTP so now I have two of them...
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing theses as if they were Monty Python skits?
Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
Of course, OpenAPI (and perhaps to some extent now AI) also means that clients don't need to be written; they are just generated.
However it is important perhaps to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.
Of course, you don't need any of this. So people built APIs they did need that were not RESTful, but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.
That was the theory, but it was never true in practice.
The oft comparisons to the browser really missed the mark. The browser was driven by advanced AI wetware.
Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (say vs. gRPC, etc.)
Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
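Spelled out, the "obvious mapping" most people converge on looks something like this (the resource name and error shape are just illustrations):

GET    /items        -> list            (200)
POST   /items        -> create          (201 + Location: /items/{id})
GET    /items/{id}   -> read            (200, 404 if absent)
PUT    /items/{id}   -> replace/update  (200 or 204)
DELETE /items/{id}   -> delete          (204, idempotent)

with errors carrying a body like { "error": { "code": "not_found", "message": "..." } } alongside the status code.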
People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.
In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and this brings absolutely no improvement in the way clients are developed.
So why bother?
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to server, where it belongs. I have built plenty of them, it does require certain mindset, but it does make many things easier. What problems are you talking about?
That solves no problem at all. We have Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Ultimately it's all about nitpicking.
Because words have specific meanings. There’s a specific expectation when using them. It’s like if someone said “I can’t install this app on my iPhone” but then they have an android phone. They are similar in that they’re both smartphones and overall behave and look similar, but they’re still different.
If you are told an api is restful there’s an expectation of how it will behave.
Few people actually use the word RESTful anymore, they talk about REST APIs, and what they mean is almost certainly very far from what Roy had in mind decades ago.
People generally do not refer to all smartphones as iPhones, but if they did, that would literally change the meaning of the word. Examples: Zipper, cellophane, escalator… all specific brands that became ordinary words.
And today, for most people in most situations, that expectation doesn’t include anything to do with HATEOAS.
Only because we never had the tools and resources that, say, GraphQL has.
And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well
GraphQL promised to solve real-world problems.
What real world problems does HATEOAS addresses? None.
HATEOAS didn't need to "promise" anything since it was just describing already existing protocols and capabilities that you can see in the links I posted.
And that's how you got POST-only GraphQL which for years has been busily reinventing half of HTTP
When it’s just yours and your two pizza team, contract-first-design is totally fine. Just make sure you can version your endpoints or feature-flag new API’s so it doesn’t break your older clients.
Because what got backnamed HATEOAS is the very core of what Fielding called REST: https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
Everything else is window dressing.
Because September isn't just for users.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
Have you ever heard of HTTP's OPTIONS verb?
https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
Follow-up trick question: how come you never heard of it and still managed quite well to live without it?
Yes, I'm aware of this header and know the web standards well enough.
In a hypermedia API you communicate to the client the list of all operations in the context of the resource (note: not ON the resource), which includes not only basic CRUD but also operations on adjacent resources (e.g. on a user account you may have an operation of sending a message to this user). Yes, in theory one could use OPTIONS with a non-standard response body to communicate such operations, which cannot be expressed as plain HTTP verbs in the Allow header.
However such a solution is not practical, because it requires an extra round trip for every resource. There's a better alternative, which is to provide the list of operations with the resource, using one of the common standards - HAL, JSON-LD, Siren etc. The example in my other comment in this thread is based on HAL. If you wonder what that is, look no further than Spring - it has supported HAL APIs out of the box for quite a long time. And of course there's an RFC draft and a Wikipedia article (https://en.wikipedia.org/wiki/Hypertext_Application_Language).
{
  ... resource model ...
  "_links": {
    "delete": { "href": "." }
  }
}

In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform this operation on the resource's self link.

So that never got done (because it's complex), and people started building apps like "my airline reservation app", but then realized that to build that domain app you don't need all the abstraction of a full REST system.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... But a full analysis of bandwidth to the client would have to factor that you have to ship over a whole framework to implement those rules and keep those rules synchronized between client and server implementation.
More links here: https://news.ycombinator.com/item?id=44510745
Neo4j's old REST API was really good about that. See e.g. get node: https://neo4j.com/docs/rest-docs/current/#rest-api-get-node
Perhaps the real issue was that XML is awful and a much thinner resource representation simplifies most of the problems for developers and users.
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
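That is essentially the JSON-RPC shape; roughly (the method name and params are purely illustrative):

POST /api
{ "jsonrpc": "2.0", "method": "getItem", "params": { "itemId": "123" }, "id": 1 }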
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
You still have to "hard code" somewhere what action anything needs to do over an API (and there is more missing metadata, like icons, translations for action description...).
Mostly to say that any thought of this approach being more general is only marginal, and really an illusion!
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
...that was written before Swagger/OpenAPI was a thing. Now there's a real spec with real adoption and real tools, and folks can let the whole rest-epic-semantic-fail be an early chapter of web devs doing what they do (like pointing at a remotely relevant academic paper to justify what they're doing at work).
I don’t find this method of discovery very productive and often regardless of meeting some standard in the API the real peculiarities are in the logic of the endpoints and not the surface.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
I cannot even recall a time where the surface caused me enough issues to even think about it later on; the real peculiarities are in the business logic. I have had moments where I thought something was strange in an Elasticsearch API, but again it was of no consequence.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
It seems that nesting isn't super common in my experience. Maybe two levels if completely composite but they tend to be fairly flat.
And then you get a list of all buildings for this company.
Every building has a url like: /buildings/:buildingId
So you constantly get back to the root.
Only exception is generally a tenant id which goes upfront for all requests for security/scoping purposes.
E.g. GitHub /repos/:owner/:repo/pulls/comments/:comment_id
But flat is better than nested, esp if globally unique IDs are used already (and they often are).
Also it is possible to embed a sub resource (or part of it).
Think a blog post.
/blogPost/:blogPostId
You can embed a blog object with the url and title so you can show the blogpost on a page with the name of the blog in one go.
If you need more details on the blog you can request /blogs/:blogId
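Something like this for the blog example (a sketch; field names are illustrative):

GET /blogPost/123

{
  "id": "123",
  "title": "Some post title",
  "blog": {
    "href": "/blogs/42",
    "title": "Some blog name"
  }
}

so the page can render the post with the blog's name in one go, and only follow /blogs/42 when it needs the full blog resource.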
Sometimes that's a pragmatic choice too. I've worked with HTTP clients that only supported GET and POST. It's been a while but not that long ago.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
It makes me wish we had stuck with XML-based stuff; it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
This is the kind of slippery slope where pedantic nitpickers thrive. The start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending representations other than JSON (and/or different representations in JSON) is not only acceptable, but is really a part of REST.
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
Lol. Have you read them?
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common thing was for one software stack not to be able to use a service built with another stack.
I really wish people just used 200 status code and put encoded errors in the payloads themselves instead of trying to fuse the transport layer's (which HTTP serves as, in this case) concerns with the application's concerns. Seriously, HTTP does not mandate that e.g. "HTTP/1.1 503 Ooops\r\n\r\n" should be stuffed into the TCP's RST packet, or into whatever TLS uses to signal severe errors, for bloody obvious reasons: it doesn't belong there.
Like, when you get a 403/404 error, it's very bloody difficult to tell apart the "the reverse proxy before the server is misconfigured and somebody forgot to expose the endpoint" and "the server executed your request to look up an item perfectly fine: the DB is functional, and the item you asked for is not in there" scenarios. And yeah, of course I could (and should) look at and try to parse the response's body but why? This "let's split off the 'error code' part of the message from the message and stuff it somewhere into the metadata, that'll be fine, those never get messed up or used for anything else, so no chance of confusion" approach just complicates things for everyone for no benefit whatsoever.
"...and therefore using different status codes in the responses is mostly pointless. Therefore, use 200 and put "s":"error" in the response".
> Being able to tell apart if the failure was due to reverse proxy or database or whatever is the server's concern.
One of the very common failures is for the request to simply never reach "the server". In my experience, one of the very first steps in improving the error-handling quality (on the client's side) is to start distinguishing between the low-level errors of "the user literally has no Internet connection" and "the user has connected somewhere, but that thing didn't really speak the server protocol", and the high-level errors of "the client has talked with the application server (using the custom application protocol and everything), and there was an error on the application server's side". Using HTTP status codes for both low- and high-level errors makes such distinctions harder to figure out.
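A sketch of what that distinction can look like on the client, assuming the application always returns a JSON error envelope (names invented):

// classify failures: transport (never reached anything), infrastructure (proxy/LB), application
async function callApi(url: string): Promise<unknown> {
  let res: Response;
  try {
    res = await fetch(url);
  } catch {
    throw new Error("transport error: no connection, DNS failure, etc.");
  }
  if (!res.ok) {
    const body = await res.text();
    let envelope: { code?: string; message?: string } | null = null;
    try {
      envelope = JSON.parse(body); // the application's own error format
    } catch {
      // not JSON: most likely a proxy or load balancer answered, not the application
    }
    if (envelope) {
      throw new Error(`application error ${res.status}: ${envelope.message ?? envelope.code}`);
    }
    throw new Error(`infrastructure error ${res.status}: ${body.slice(0, 200)}`);
  }
  return res.json();
}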
I think a more precise term for what you're describing is transport errors vs business errors. You're right that you don't want to model all your business errors as HTTP status codes. Your business scenarios are most certainly numerous and need to be much more fine-grained than what the standard offers. But the important thing is that all errors, business or transport, eventually need to map to an HTTP status code, because that's the protocol you're ultimately speaking.
Yes, pretty much.
> But the important thing is that all errors, business or transport, eventually need to map to an HTTP status code, because that's the protocol you're ultimately speaking.
"But the important thing is, all errors, business or transport, eventually need to map to the set of TCP flags (SYN, ACK, FIN, RST, ...) because that's the protocol you're ultimately speaking". Yeah, they do map, technically speaking: to just an ACK. Because it's a payload, transported agnostically to its higher-level meaning. It's a yet another application of the end-to-end principle.
But to me, "REST" means "use the HTTP verbs to talk about resources". The whole point is that for resource-oriented APIs, you don't need another layer. In which case serving 404s for things that don't exist, or 409s when you try to put things into a weird state makes perfect sense.
I had to chuckle here. So true!
I think good rest api design is more a service for the engineer than the client.
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500". They just wrote 500 in the message body; the return code remained 200.
Some developers just do not understand http.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
>>Clients shouldn’t assume or hardcode paths like /users/123/posts
Is it really net better to return something like the following just so you can change the url structure.
"_links": { "posts": { "href": "/users/123/posts" }, }
I mean, so what? We've create some indirection so that the url can change (e.g. "/u/123/posts").
If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.
It's brittle and will break some time in the future.
It isn't clear what insurance you are really buying here. You can't possibly mean another physical server. Obviously that happens all the time with any site but no one is changing links to point to the actual hardware - just use a normal load balancer. Is it domain name change insurance? That doesn't add up either.
>> If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
Normally you would just fix the problem instead of doing weird application level encryption stuff.
>> The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes
If those "frontend" developers are paying customers as in the case of AWS, OpenAI, Anthropic then you probably want to make your API as simple as possible for them to understand.
This is a bad approach. It prevents your frontend proxies from handling certain errors better. Such as: caching, rate limiting, or throttling abuse.
(devil's advocate, I use http codes :))
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.
SEARCH is from RFC 5323 (WebDAV).
REST purists will not be happy, but that's reality.
We're talking JSON APIs -- HTML forms are incompatible with that no matter the verb.
> Basic redirect support is pretty universal, but things quickly fall apart on most browsers when you do tricky things like use non-GET/POST methods on redirecting resources.
There were other things too, I'm not sure CORS supported anything but GET and POST early on either. Wanting consistency and then sticking to it isn't an inherently bad thing, there's a lot to know, and people don't update knowledge about everything (I'm speaking generally as well as including my self here).
[0] https://www.mnot.net/blog/2006/01/23/test_xmlhttprequest
Sounds like this reality is not the recent one.
401 Unauthorized. When the user is unauthenticated.
403 Forbidden. When the user is unauthorized.
I can assure you very few people care
And why would they? They're getting value out of this and it fits their head and model view
Sweating over this takes you nowhere
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with standard scaffolding for the replies so we can encode the errors and forget about status codes. Then the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are there, also accept queries in both the URL and the body parameters.
I haven't done REST apis in a while, but I came across this recently for standardizing the error response: https://www.rfc-editor.org/rfc/rfc9457.html
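For reference, RFC 9457 ("Problem Details for HTTP APIs") standardizes an application/problem+json body along the lines of the RFC's own out-of-credit example:

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json

{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "status": 403,
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc"
}

The type/title/status/detail/instance members are standard, and you can add your own extension members next to them.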
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
So I say: GET or POST.
That's how we got POST-only GraphQL.
In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
It's just absurd to mention idempotency when the state gets altered.
Of course there is
> DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id
Which is most operations
> Should you do something like delete by email or product, you have to use another operation,
Erm.... No, you don't?
> which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
Why would you want to cache a delete operation?
You may have an API for example that updates one object and inserts another one, or even deletes an old resource and inserts a new one
The verbs are only very clear for very simple CRUD operations. There is a lot of nuance otherwise that you need documentation for and having to deal with these verbs both as the developer or user of an API is a nuisance with no real benefit
I don't. I could deliver a diatribe on how even the common arguments for differentiating GET & POST don't hold water. HEAD is the only verb with any mild use in the base spec.
On the other hand:
> correct status codes and at least a few are used contrary to the HTTP spec
This is a bigger problem than verb choice & something I very much care about.
HEAD allows the server to send the metadata without the (potentially very large) body. That could have been solved without a verb (as if HEAD is a verb in this case!), of course, but it has its uses.
Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!
I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
This is an insightful observation. It happens with pretty much everything
As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]
This article also tries to make the distinction of not focusing on the verbs themselves. That the RESTful dissertation doesn’t focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation, and libraries across platforms may implement PUT, PATCH and DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be strict REST adherents, since you'll hit a wall.
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
In a server holding a "deck of cards," there might be a "HTTP GET <blah-de-blah>/shuffle.html" call with the side-effect of performing a server-side randomization operation.
I just made that up because I don't want to impugn anyone. But I've seen API sets full of nonsense just like that.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
How can you idiomatically do a read-only request with complex filters? For me both PUT and POST are "write" operations, while GET is assumed to be read-only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
So ... how does one do it?
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate-limiting and access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
POST /complex
value1=something
value2=else
which then responds with 201 Created
Location https://example.com/complex/53301a34-92d3-447d-ac98-964e9a8b3989
And then you can make GET request calls against that resource. It adds in some data expiration problems to be solved, but it's reasonably RESTful.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...
Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
Or stop worrying and just use POST. The computer isn't going to care.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
Haha yes! Is it even a dev team if they haven't had an overly heated argument about which 4xx code to return for an error state?

Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used, because they block queries longer than 4k chars.
When I think about some of the RESTy things we do, like returning part of the response as different HTTP codes, they don't really add that much value vs. keeping things on the same layer. So maybe the biggest value-add so far is JSON, which thanks to its limited nature prevents complication, and the OpenAPI ecosystem, which grew kinda organically to provide pretty nice codegen and clients.
More complexity lessons here: look at oneOf support in OpenAPI implementations, and you will find half of them flat out don't have it, and the other half are buggy even in YOTL 2025.
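For anyone who hasn't hit it: oneOf is the JSON Schema / OpenAPI construct for "exactly one of these shapes", e.g. (schema names hypothetical):

"PaymentRequest": {
  "oneOf": [
    { "$ref": "#/components/schemas/CardPayment" },
    { "$ref": "#/components/schemas/BankTransfer" }
  ],
  "discriminator": { "propertyName": "kind" }
}

and it's exactly this discriminated-union part that a lot of generators and validators mishandle.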
While I generally agree that REST isn’t really useful outside of academic thought experiments: I’ve been in this about as long as you have, and it really isn’t hard. Try reading Fielding’s paper once; the ideas are sound and easy to understand, it’s just with a different vision of the internet than the one we ended up creating.
If you work off "widely accepted words" when there is disagreeing primary literature, you are probably mediocre.
Sometimes it really is bad and "everybody" can be very wrong, yes. None of us are native English speakers (most don't speak English at all), so these foreign sounding words all look the same, it's a forgivable "offence".
Using Fielding's term to refer to something else is an extra source of confusion, which kinda makes the term useless. Nobody knows what the speaker exactly refers to.
It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.
2. So just "HTTP API". And that would suffice. Adding "restful" is trying to be extra-smart, or to fit in when everyone around is being extra-smart.
This doesn't seem like a useful line of conversation, so I will ignore it.
> 2. So just "HTTP API".
No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.
Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.
Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.
Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.
Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.
Most of us are not writing proper RESTful APIs because we’re dealing with legacy software, weird requirements, and the egos of other developers. We’re not able to build whatever we want.
And I agree with the featured article.
I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.
However I'd argue the people who use the term to describe it the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
If my API is supposed to rely on content types, how many different representations do I need? JSON is a given these days, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper; good enough to get the job done is going to have to be good enough.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
There is no such thing as "misusing language". Language changes. It always does.
Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.
Because language always changes.
Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.
You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thoughts in that area only serve to make yourself miserable, since humanity and language at large just aren't consistent.
And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.
Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adopted by the wider population, the more its meaning gets widened and/or changed.
One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.
Really? What if somebody else wants to get some information to you? How do you know what to work on?
The vision of API that is self discoverable and that works with a generic client is not practical in most cases. I think that perhaps AWS dashboard with its multitude of services has some generic UI code that allows to handle these services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command-line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
You said what I've thought about REST better than I could have put it.
A true implementation of a REST client is simply not possible. Any client needs to know what all those URLs are going to do. If you suddenly add a new action (like /cansofspam/123/frobnicate), a client won't know what to do with it. The client will need to be updated to add frobnication functionality, or else it just ignores it. At best, it could present a "Frobnicate" button.
This really is why nobody has implemented a REST server or client that actually conforms to Fielding's paper. It's just not realistic to have a client that can truly self-discover an API without being written to know what APIs to expect.
Sure it is, it's just not very interesting to a programmer. It's the browser. That's why there was no need to talk about client implementations. And why it's hypermedia driven. It's implicit in the description that it's meant to be discoverable by humans.
AirBnb rediscovered REST when they implemented their Server Driven UI Platform. Once you strip away all the minutiae about resources and URIs, the fundamental idea of HATEOAS is: ship the whole UI from the server and have the client be generic (the browser). Now you can't have the problem where the frontend gets desynced from the backend.
This cannot be overstated.
I'm watching with some interest to see if the LLM/MCP crowd gradually reinvents REST principles. LLMs are the only software we have invented yet which is smart enough to use a REST interface.
Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. For clients that are familiar with your API, they are using hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
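To make the reflection analogy concrete, here is a minimal sketch of a generic client walking everything reachable from a starting resource. It assumes a HAL-style _links object with absolute, non-array hrefs; the URL and media type are illustrative, not something the comment above prescribes.
async function crawl(url, seen = new Set()) {
  if (seen.has(url)) return seen;
  seen.add(url);
  const res = await fetch(url, { headers: { Accept: "application/hal+json" } });
  const doc = await res.json();
  for (const rel of Object.keys(doc._links || {})) {
    // assumes absolute hrefs and single (non-array) link objects
    await crawl(doc._links[rel].href, seen);
  }
  return seen;
}
crawl("https://api.example.com/");   // discovers every linked resource with no prior URL knowledge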
Sure this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering the methods and calling them with discovered parameters. For things like a debugger, a REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware of what the available methods do and needs to be intentionally designed to provide intuitive ways of calling them.
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server-end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, as clients no longer depend on specific URL formats (URLs are meaningful only to the server), but depend only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API which adds all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI because these servers don't even have to be physically located behind a load balancer.
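As a rough illustration of that idea, the entry document might hand out whichever host the server wants each client to use; the hostnames and HAL-style link format here are invented for the example:
{
  "_links": {
    "search":  { "href": "https://search-eu-2.example.com/search{?q}", "templated": true },
    "account": { "href": "https://accounts-1.example.com/me" }
  }
}
A client that always follows these links never notices when "search-eu-2" is swapped for another host in the next response.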
Was the client of the service that you worked on fully generic and application independent? It is one thing to be able to change URLs only on the server, without requiring a client code change, and such flexibility is indeed practical benefit that the REST architecture gives us. It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This goal is something that REST architecture tried to address, but IMO it was not realized in practice.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.
Web browsers do exactly this!
Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar application and messaging application entry points provide application-specific code for implementing calendar or messaging app functions. I don't think this is what was proposed in the REST paper, otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
The HTML/hypermedia returned is never generic, that's why HATEOAS works at all and is so flexible.
The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.
You can start programming any app with a plain hypermedia version and then add JS to make the user experience better, which is the approach that HTMx is reviving.
In all these discussions, I didn't see an article that would actually show an example of a successful application that does REST properly, all elements of it.
I agree that not many frameworks encourage "true" REST design, but I don't think it's too hard to get the hang of it. Try out htmx on a toy project and restrict yourself to using literally no JS and no session state, and every UI-focused endpoint of your favoured server-side framework returns HTML.
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
You don't need a full web browser. Fielding published his thesis in 2000, browsers were almost trivial then, and the needs for programming are even more trivial: you can basically skip any HTML that isn't a link tag or form data for most purposes.
> baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON formatters. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.
And to be clear, if JSON had links as a first class type rather than just strings, then that would qualify as a hypermedia format too.
> Why aren't you worried about badly formatted JSON?
Because the json spec is much smaller than the HTML spec so it is much easier for the parser to prevalidate and reject invalid JSON.
Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, see if it makes more sense to me.
If you extended JSON so that URLs (or URIs) were first-class, something like:
url ::= "<" scheme ":" ["//" authority] path ["?" query] ["#" fragment] ">"
it would form a viable hypermedia format because then you can reliably distinguish references from other forms of data. I think the only reason something like this wasn't done is that Crockford wanted JSON to be easily parsable by existing JS interpreters. You can work around this with JSON Schema to some extent, where the schema identifies which strings are URLs, but that's just way more cumbersome than the distinction being made right in the format.
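Purely to illustrate the proposal above (this is not valid JSON today, and the fields are made up), a response using that url production might look like:
{
  "id": 123,
  "status": "shipped",
  "customer": <https://api.example.com/users/7>,
  "cancel": <https://api.example.com/orders/123/cancel>
}
A parser that understands the extension can tell that "customer" and "cancel" are references to other resources rather than opaque strings, which is exactly the property a hypermedia format needs.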
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
What was wrong with mapping all nouns and verbs to POST (maybe sometimes GET), where HTTP response codes other than 200 mean your request failed somewhere between the client code and the application server code? HTTP 200 means the application server processed the request, and you can check the payload for an application-level indicator of success, failure, and/or partial success. If you work with enough systems, you end up going back to this, because the least common denominator works everywhere.
Either way, anything that isn't ***** SOAP is a good start.
Those things aren't always necessary. However API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work) or this can be done with HATEOAS (just check if server returned the endpoint).
HTTP 500 retriable sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporary".
It's actually a very analogous complaint to how object-oriented programming isn't how it was supposed to be and that only Smalltalk got it right. People now understand what is meant when people say OOP even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
REST includes allowing code to be part of the response from a server, there are the obvious security issues, but the browsers (and the standards) have dealt with a lot of that.
https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...
In fact, there are plenty of reasons not to use DELETE and PUT. Middleboxes managed by incompetent security people block them, they require that developers have a minimum of expertise and don't break the idempotency rule, lots of software stacks simply don't support them (yeah, those stacks are bad, which still doesn't change anything), and most of the internet just doesn't use the benefit they provide (because they don't trust the developers behind the server not to break the rules).
Notably, the term "discoverable" doesn't even appear in TFA.
Other than things like this, the browser makes very few assumptions about how a website works; it just loads what the HTML tells it to load and shows the content to the user. Imagine the alternative where the browser by default assumed that special pages example.com/login and example.com/logout existed and would sometimes navigate you there by itself (like with a prompt "do you want to log in?")
If you wanted to design a new improved html alternative from scratch you likely would want the same properties.
The issue with REST APIs is that most of what we call APIs are not websites, and most of their clients are not browsers but servers or the JavaScript in the browser, where IDs are generally more useful than links.
REST is incredibly successful: HTML is REST, CSS is REST, even JavaScript itself is REST. But we do not call APIs that return HTML/CSS/JS/media "APIs"; we call them websites.
But it does though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered to the user with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
Exactly, yes! The first few sentences from Wikipedia...
"REST (Representational State Transfer) is a software architectural style that was created to describe the design and guide the development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave." -- [1]
If you are designing a system for the Web, use REST. If you are designing a system where a native app (that you create) talks to a set of services on a back end (that you also create), then why conform to REST principles?
You kind of could, but it's a bad idea. A core tenet of the REST architecture is that it supports a network of independent servers that provide different services (i.e. webpages) and users can connect to any of them with a generic client (i.e. a web browser). If your mission is to build a specialized API for a specialized client app (a JS web app in your example), then using REST just adds complexity for no reason.
For example, you could define a new content-type application/restaurantmenu+json and build a JS client that renders the content-type like a restaurant's homepage. Then you could use your restaurant browser JS client to view any restaurant's menu in a pretty UI... except your own restaurant's server is the only one that delivers application/restaurantmenu+json, so your client is only usable on your own page and you did a whole lot of additional work for no reason.
> does REST require a switch to HTML representation ... How such HTML representation can even use PUT and DELETE verbs
Fielding's REST is really just an abstract idea about how to build networks of services. It doesn't require using HTTP(S) or HTML, but it so happens that the most significant example of REST (the WWW) is built on HTTPS and HTML.
As in the previous example, you could build a REST app that uses HTTP and application/restaurantmenu+json instead of HTML. This representation could direct the client to use PUT and DELETE verbs if you like, even though these aren't a thing in HTML.
Nowadays there are just so many use cases where an architecture is more suited to RPC (and POST). And trying to bend the architecture to be "more RESTful" just serves to complicate.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
- Government portals for filing forms and other interactions
- Open data initiatives like Wikipedia and OpenStreetmap
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.
The funny thing is, that perfectly describes HTML. Here’s a document with links to other documents, which the user can navigate based on what the links are called.
Because if it’s designed for users, it’s called a User Interface. If it’s designed for application programming, it’s called an Application Programming Interface. This is why HATEOAS is kinda silly to me. It pretends APIs should be used by Users directly. But we already have that, it’s called a UI.
It's also useful when you're programming a client that is not a web page!
You GET a thing, you dereference fields/paths in the returned representation, you construct a new URI, you perform an operation on it, and so on.
Consider a directory / database application. You can define a RESTful, HATEOAS API for it, write a single-page web application for it -or a non-SPA if you prefer-, and also write libraries and command-line interfaces to the same thing, all using roughly similar code that does what I described above. That's pretty neat. In the case of a non-SPA you can use pure HTML and not think that you're "dereferencing fields of the returned representation", but the user and the user-agent are still doing just that.
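A sketch of what that looks like for a non-browser client of the hypothetical directory application, assuming HAL-style responses; every URL after the entry point comes from a response, and all names here ("people", "edit", the host) are invented for illustration:
const entry = await (await fetch("https://directory.example.com/", {
  headers: { Accept: "application/hal+json" },
})).json();
// "people" is a documented link relation, not a URL pattern we hard-code
const people = await (await fetch(entry._links.people.href)).json();
// dereference fields of the returned representation and follow an "edit" affordance
const alice = people._embedded.person.find(p => p.name === "Alice");
await fetch(alice._links.edit.href, {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: alice.name, title: "Director" }),
});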
Yes, and it's so nice when done well.
> Most web APIs are not designed with this use-case in mind.
I wonder if this will change as APIs might support AI consumption?
Discoverability is very important to an AI, much more so than to a web app developer.
MCP shows us how powerful tool discoverability can be. HATEOAS could bring similar benefits to bare API consumption.
One additional point I would add is that making use of the REST-ful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page, the end user of that API is the person using a browser. MCP is a more recent example, that protocol is only needed because they want agents talking to APIs they don't own and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping for much the same reason: I'm building both ends of the app and it's faster to ignore separation of concerns.
edit: typo
It "was perceived as" a barrier because it is a barrier. It "felt easier" because it is easier. The by-the-book REST principles aren't a good cost-benefit tradeoff for common cases.
It is like saying that your microwave should just have one button that you press to display a menu of "set timer", "cook", "defrost", etc., and then one other button you use to select from the menu, and then when you choose one it shows another menu of what power level and then another for what time, etc. It's more cumbersome than just having some built-in buttons and learning what they do.
I actually own a device that works in that one-button way. It's an OBD engine code reader. It only has two buttons, basically "next" and "select" and everything is menus. Even for a use case that basically only has two operations ("read the codes" and "clear a code"), it is noticeably cumbersome.
Also, the fact that people still suggest it's indispensable to read Fielding's dissertation is the kind of thing that should give everyone pause. If the ideas are good there should be many alternative statements for general audiences or different perspectives. No one says that you don't truly understand physics unless you read Newton's Principia.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
During that same time, the business also wanted to use the fact that our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends.
Backenders read about API design, they get the idea they should be REST like (as in, JSON, with different HTTP methods for CRUD operations).
And of course we weren't going to have two separate APIs, that we ran our frontends on our API was another selling point (eat your own dog food, proof that the API can do everything our frontend can, etc).
So: the UI runs on a REST API.
I'm hoping that we'll go back to Django templates with a sprinkle of HTMX here and there in the future, but who knows. That will probably be a separate backend that runs in front of this API then...
It is a selling point. A massive one if you're writing enterprise software. It's not merely about "being technical", but mandatory for recurring automated jobs and integration with their other software.
Returning purely data means being able to transform it in any way you want, no matter where you use it. And depending on your usecase, it also means being able to sell access to it.
1. UX designers operate at every stage of the software development lifecycle, from product discovery to post-launch support (validation of UX hypotheses); they do not exercise control - they work within constraints as part of the team. The location of a specific action in the UI and the interaction triggering it are orthogonal to the availability of this action. Availability is defined by the state. If state restricts certain actions, UX must reflect that.
2. From an architectural point of view, once you encapsulate the state-checking behavior, the following will work the same way: "if (state === something)" and "if (resource.links["action"] !== null)". The latter approach will be much better, because in most cases any state-changing actions will require validation on the server, and you can implement the logic only once (on the server).
I have been developing HATEOAS applications for quite a while and maintain HAL4J library: there are some complexities in this approach, but UI design is certainly not THE problem.
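A minimal sketch of the pattern from point 2 above, assuming a response whose links field lists the currently allowed actions (field and relation names are illustrative): the server includes a link only when the action is permitted, so the client never re-implements the business rule.
function renderOrderActions(order) {
  const actions = [];
  if (order._links?.cancel) actions.push({ label: "Cancel", href: order._links.cancel.href });
  if (order._links?.refund) actions.push({ label: "Refund", href: order._links.refund.href });
  return actions;   // the UI shows exactly what the server says is currently possible
}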
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed next enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
In this case specifically, everybody's lives are worse because of that.
I am using it to enter this reply.
The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
https://htmx.org/essays/hypermedia-clients/
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
AI may change this at some point.
Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it, although it does change how useful the aspects of REST (in particular, the uniform interface) will be to that client.
I'd say that my web browser is not using hypertext. It is merely transforming it so that I can use the resulting hypermedia, and thereby interface with the remote host. That is, my browser isn't the one that decides how to interface with the remote host; I am. The browser implements the hypertext protocol and presents me a user interface to the remote host.
Fielding might have a peculiar idea of what an "API" is, so that a "human + browser" is a programmatic application, but if that's what he says, then I think his ideas are just dumb and I shouldn't bother listening to him.
> Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it
There's no way for a "script client" to use hypertext without implementing a fixed protocol on top of it, which is allegedly not-RESTful. Unless you count a search engine crawler as such a client, I guess, but that's secondary to the purpose of hypertext.
> An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software.[1] A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.
The server and browser are two different computer programs. The browser understand how to make an API connection to a remote server and then take an HTML response it receives (if it gets one of that media type) and transform it into a display to present to the user, allowing the user to choose actions found in the HTML. It then understands how to take actions by the user and turn those into further API interactions with the remote system or systems.
Because the browser waits for a human to intervene and make choices (sometimes, consider redirects) doesn't make the overall system any less of a distributed one, with pieces of software integrating via APIs following a specific network architecture, namely what Fielding called REST.
Your intuition that this idea doesn't make a lot of sense for a script-client is correct:
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
That is, the browser may be communicating with the remote server (using APIs provided by the local OS), but it is not itself interfacing with the server, i.e., being offered a service for its own benefit. It may possibly be said that the whole system of "user + browser" interfaces with the remote server, but then it is no longer an application.
(Of course, this is all assuming the classical model of HTML web pages presented to the user as-is. With JS, we can have scripts and browser extensions acting for their own purposes, so that they may be rightly considered "client" programs. But none of these are using a REST API in Fielding's sense.)
I don't know what "for its own benefit" means.
Let alone ux affordances, branding, etc.
Ironic that Fielding's dissertation contained the seed of REST's destruction!
I thought the “problem” was that no one was building proper restful / HATEOAS APIs.
It can’t go both ways.
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
The application is then auto-discoverable. We have links to new endpoints, URLs, that progress or modify the application state. Humans can navigate these, yes, but other programs, like crawlers, can as well.
With REST you need to know a few things like how to find and parse the initial content. I need a browser that can go from a URL to rendered HTML, for example. I don't need to know anything about what content is available beyond that though, the HTML defines what actions I can take and what other pages I can visit.
RPC APIs are the opposite. I still need to know how to find and parse the response, but I need to deeply understand how those APIs are structured and what I can do. I need to know schemas for the API responses, I need to know what other APIs are available, I need to know how those APIs relate and how to handle errors, etc.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides those semantics. Moving it from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all of the server APIs are defined in terms of URL patterns routing to handlers. This is explicit hardcoded encouragement to think in a style in opposition to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic APIs written in a purely HATEOAS style.)
/user/123/orders
How is this fundamentally different than requesting /user/123 and assuming there’s a link called “orders” in the response body?
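For concreteness, the link-in-the-body version the question describes might look like this (HAL-style, purely illustrative):
{
  "id": 123,
  "name": "Jane Doe",
  "_links": {
    "self":   { "href": "/user/123" },
    "orders": { "href": "/user/123/orders" }
  }
}
Today the href happens to match what you would have guessed; the difference only shows up later, when the server wants to move orders somewhere else and a link-following client keeps working while a URL-constructing client breaks.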
The world of programming, just like the real world, has a lot of misguided doctrines that looked really good on paper, but not on application.
For example:
"_links": {
....
"cancel": { "href": "/orders/123/cancel", "method": "POST" }
}
Why "POST"?And what POST do you send? A bare POST with no data, or with parameters in it's body?
What if you also want to GET the status of cancellation? Change the type of `method` to an array so you can `"method": ["POST", "GET"]`?
What if you want to cancel the cancellation? Do you do `POST /orders/123/cancel/cancel HTTP/...`, or `DELETE /orders/123/cancel HTTP/...`?
So, people adapt, turning an originally very pure and "based" standard into something they can actually use. After all, all of those things are meant to be productive, rather than ideological.
Now you have a noun, and some of the confusion around GET and DELETE etc. goes away
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
I have yet to see an API that was improved by following strict REST principles. If REST describes the web (a UI, not an API), and it’s the only useful example of REST, is REST really meaningful?
This is very obviously not true. Take search engine crawlers, for example. There isn’t a human operator of GoogleBot deciding which links to follow on a case-by-case basis.
> I have yet to see an API that was improved by following strict REST principles.
I see them all the time. It’s ridiculous how many instances of custom logic in APIs can be replaced with “just follow the link we give you”.
> our clever thinker invents a new, higher, broader abstraction
> When you go too far up, abstraction-wise, you run out of oxygen.
> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
REST is the opposite. REST is “We did this. It worked great! This is why.” And web developers around the world are using this every single day in practical projects without even realising it. The average web developer uses REST, including HATEOAS, all the time, and it works great for them. It’s just when they set out to do it on purpose, they often get distracted by some weird fake definition of REST that is completely different.
But the core of Joel Spolsky's three posts on Architecture Astronauts is his expression of frustration at engineers who don't focus on delivering product value. These "Architecture Astronauts" are building layer on layer of abstraction so high that what results is a "worldchanging" yet extremely convoluted system that no real product would use.
A couple choice quotes from https://www.joelonsoftware.com/2008/05/01/architecture-astro...:
> "What is it going to take for you to get the message that customers don’t want the things that architecture astronauts just love to build."
> "this so called synchronization problem is just not an actual problem, it’s a fun programming exercise that you’re doing because it’s just hard enough to be interesting but not so hard that you can’t figure it out."
But you’re assuming that there is a real contradiction between shipping features and RESTful design. I believe that RESTful design can in many cases actually increase feature delivery speed through its decoupling of clients and servers and more deeply due to its operational model.
Notice that both of those are plural words. When you have many clients and many servers implementing a protocol, a formal agreement on the protocol is required. REST (which I will not claim to understand well) makes a formal agreement much easier, but you still need some agreement. However, when there is just one server and just one client (I'll count all web browsers as one since the browser protocols are well defined enough), you can go faster for a long time by just implementing both sides and testing that they work.
REST = Hell No
GQL = Hell No.
RPC with status codes = Grin and point.
I like to get stuff done.
Imagine you are forced to organize your code files like REST. Folder is a noun. Functions are verbs. One per folder. Etc. It would drive you nuts.
Why do this for API unless the API really really fits that style (rare).
GQL is expensive to parse and hides information from proxies (200 for everything)
Yes. All endpoints POST, JSON in, JSON out (or whatever) and meaningful HTTP status codes. It's a great sweet spot.
Of course, this works only for apps that fetch() and createElement() the UI. But that's a lot of apps.
If I don't want to use an RPC framework or whatever I just do:
{
  "method": "makeBooking",
  "argument": {
    "one": 1,
    "two": "too"
  },
  ...
}
And have a dictionary in my server mapping method names to the actual functions. All functions take one param (a dictionary with the data), validate it, use it, and return another single dictionary along with an appropriate status code.
You can add versions and such but at that point you just use JSON-RPC.
This kind of setup can be much better than REST APIs for certain use cases.
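A minimal sketch of that dispatch table, assuming the single-POST-endpoint shape described above; the method names, fields, and status codes are invented for the example:
const methods = {
  makeBooking: async (arg) => {
    if (!arg.roomId || !arg.date) return { status: 400, body: { error: "missing fields" } };
    // ... create the booking ...
    return { status: 200, body: { bookingId: 42 } };
  },
  getBookings: async () => ({ status: 200, body: { bookings: [] } }),
};
// one POST endpoint: JSON in, JSON out, meaningful HTTP status codes
async function handle(requestBody) {
  const fn = methods[requestBody.method];
  if (!fn) return { status: 404, body: { error: "unknown method" } };
  return fn(requestBody.argument || {});
}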
This makes automating things like retrying network calls hell. You can safely assume a GET will be idempotent and safely retry on failure with a delay. A POST might or might not empty your bank account.
HTTP verbs are not just for decoration.
Still, they are just a convention.
When you are retrying an API call, you are the one calling the API; you know whether it's a getBookings() or an addBooking() API. So write the client code based on that.
Instead of the API developer making sure GET /bookings is idempotent, he is going to be making sure getBookings() is idempotent. Really, what is the difference?
As for the benefits, you get a uniform interface, no quirks with URL encoding, no nonsense with browsers pre-loading, etc. It's basically full control with zero surprises.
The only drawback is with cookies. SameSite=Lax depends on you using GET for idempotent actions and POST for unsafe actions. However, I am advocating the use of this only for "fetch() + createElement() = UI" kinds of apps, where you will use tokens for everything anyway.
https://www.jsonrpc.org/specification#request_object
Commonly, servers shouldn't accept duplicate request IDs outside of unambiguous do-over conditions. The details will be in the implementations of server and client, as they should be, i.e. not in the specification of the RPC protocol.
That’s got nothing to do with REST. You don’t have to do that at all with a REST API. Your URLs can be completely arbitrary.
> The widespread adoption of a simpler, RPC-like style over HTTP can probably attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.
If anyone wants to learn more about all of this, https://htmx.org/essays and their free https://hypermedia.systems book are wonderful.
You could also check out https://data-star.dev for an even better approach to this.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
If the client application only understands media types and isn’t supposed to know anything about the interrelationships of the data or possible actions on it, and there is no user that could select from the choices provided by the server, then it’s not clear how the client can do anything purposeful.
Surely, an automated client, or rather its developer, needs a model (a schema) of what is possible to do with the API. Roy Fielding doesn't address that aspect at all. At best, his REST API would provide a way for the client to map its model to the actual server calls to make, based on configuration information provided by the server as "hypertext". But the point of such an indirection is unclear, because the configuration information itself would have to follow a schema known and understood by the client, so again wouldn't be RESTful in Roy Fielding's sense.
People are trying to fill in the blanks of what Roy Fielding might have meant, but in the end it just doesn’t make a lot of sense for what REST APIs are used in practice.
Fielding was absolutely not saying that his REST was the One True approach. But it DOES mean something.
The issue at hand here is that he coined REST and the whole world is using that term for something completely unrelated (eg an http json api).
You could start writing in binary here if you thought that that would be a more appropriate way to communicate, but it wouldn't be English (or any humanly recognizable language) no matter how hard you try to say it is.
If you want to discuss whether hypermedia/rest/hateaos is a better approach for web apps than http json APIs, I'd encourage you to read htmx.org/essays and engage with that community who find it to be an enormous liberation.
I’m only mildly interested in discussing hypothetical hypermedia browsers, for which Roy Fielding’s conception might be well and good (but also fairly incomplete, IMO). What developers care about is how to design HTTP-based APIs for programmatic use.
You don't seem to have even the slightest idea of what you're talking about here. Again, I suggest checking out the htmx essays and their hypermedia.systems book
If you need an http json api for bots to consume, go for it. They are not mutually exclusive.
Let's say you've got a non-interactive program to get daily market close prices. A response returns a link labelled "foobarxyz", which is completely different to what the API returned yesterday and the day before.
How is your program supposed to magically know what to do? (without your input/interaction)
I suspect that your misunderstanding is because you're still looking at REST as a crud api, rather than what it actually is. That was the point of this article, though it was too technical.
https://htmx.org/essays is a good introduction to these things
> Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives.
Seems like you're contradicting yourself here.
If a non-interactive client isn't supposed to know anything and just "render" whatever it gets back, how can it perform useful work on the result?
If it can't, in which sense does REST still make sense for non-interactive clients?
It's all HTTP API unless you're actually doing ReST in which case you're probably doing it wrong.
ReST and HATEOAS are great ideas until you actually stop and think about it, then you'll find that they only work as ideas in some idealized world that real HTTP clients do not exist in.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack have ("client decoupling, evolvability, dynamic interaction").
But for a client, UI or otherwise, to make use of a dynamic set of URIs/verbs would require it to either look for a specific keyword (hard coding the intents it can satisfy) or be able to semantically understand the API (which is hard, requires a human).
Oddly, all this stuff is full circle with the AI stuff. The MCP protocol is designed to give AIs text-based descriptions of APIs, so they can reason about how to use them.
Also, who determined these rules are the definition of RESTful?
> Also, who determined these rules are the definition of RESTful?
Roy Fielding.
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
It is a fundamentally flawed concept that does not work in the real world. Full stop.
As such, JSON-driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means that there's no way to implement a hypermedia client which can present those controls to the user and facilitate interactions. Is there such an implementation? Yes, HTML is the hypermedia, <input>s and <button>s are controls, and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions and overcomplication.
The Richardson maturity model is a clear indication of those problems; I see it as an admission of "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas - resource-based URLs are nice, using features of HTTP is reasonable - but under the name REST it leads to constant arguments between the "dissertation" crowd and "the industry has moved on" crowd. The worst/best part is that both those crowds are totally right, and this argument will continue for as long as we use HTTP.
> As such, JSON driven APIs can't be REST
I made it sound like JSON APIs can't be REST in principle, which is of course not true. If someone were to create a hypermedia control specification for JSON and implement a hypermedia client for it, it would of course match the definition. But since we don't have such a specification and compliant client at this time, we can't do REST as it is defined.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them and move on.
We ended up with what I consider to be a solid design guide rooted in the correct use of Web standards. Not REST but RESTful. Clear and understandable, uniform, etc.
At the end of the day though the real challenge was more to make people adhere to those conventions. Why? Because most developers don't care at all. They want to finish their "Agile" sprint on time. They don't care about architecture, correctness, enterprise-wide homogeneity etc. Beyond the lack of ONE actual standard, that's the other real major problem.
https://github.com/NationalBankBelgium/REST-API-Design-Guide...
Some examples:
It should be far more common for HTTP clients to have well-supported and heavily used cookie jar implementations.
We should lean on Accept headers much more, especially with multiple MIME types and/or wildcards.
HTTP clients should have caching plugins to automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
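As a small sketch of "leaning on HTTP" rather than reimplementing it - content negotiation via Accept and revalidation via ETag / If-None-Match - with an invented endpoint:
const first = await fetch("https://api.example.com/report/2024", {
  headers: { Accept: "application/json, text/csv;q=0.8" },
});
const etag = first.headers.get("ETag");
const report = await first.json();
// later: revalidate instead of re-downloading
const again = await fetch("https://api.example.com/report/2024", {
  headers: { Accept: "application/json", "If-None-Match": etag },
});
if (again.status === 304) {
  // unchanged: keep using `report`
}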
/draw_point?x=7&y=20&r=255&g=0&b=0
/get_point?x=7&y=20
/delete_point?x=7&y=20
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by writing it directly into the url bar), the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark. This is also how HN does it:
/vote?id=44507373&how=up&auth=...
REST APIs, then, are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best API's I've seen mix and match both patterns. RESTful API endpoints for data, "function call" endpoints for often-used actions like voting, bulk actions and other things that the client needs to be able to do, but you want the API to be in control of how it is applied.
I don't disagree, but I've found (delivering LoB applications) that they are not homogenous: The way REST is implemented, right now, makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e. greenfield application) you can do better (in terms of reliability, product feature velocity, etc.) by not using a tree exchange form (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Surely you're not advocating mutating data with GET?
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
See this post for example:
https://news.ycombinator.com/item?id=22761897
Quotes:
"Voting ring detection has been one of HN's priorities for over 12 years"
"I've personally spent hundreds of hours working on this"
The story url only would have to point to a web page that creates the upvote post request via JS.
CORS is a lot less strict around GET as it is supposed to be safe.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, think about that the JS could also have for example created a form with the vote page as the target and clicked on the submit button. All completely unrelated to CORS.
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
Anyway, this is lame low effort trolling for some unknown purpose. Stop it.
Reading your original comment I was thinking "Sure, as long as you have a good reason of doing it this way anything goes" but I realized that you prefer to do it this way because you don't know any better.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method—that is a convention and, yes, a malicious bot would try to send POST and DELETE requests. For the latter, this is why you authenticate users but this is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it except if it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
Why?
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
I think LLMs are going to be the biggest shift in terms of actually driving more truly ReSTful APIs. Though LLMs are probably equally happy to take ReST-ish responses, they are able to effectively deal with arbitrary self-describing payloads.
MCP at its core seems to be designed around the fact that you've got an initial request to get the schema and then the payload, which works great for a lot of our not-quite-ReST APIs, but you could see over time just doing away with the extra ceremony and doing it all in one request, effectively moving back in the direction of true ReST.
While I agree it's an interesting idea in theory, it's unnecessary in the real world and has a lot of downsides.
Django Rest Framework seems to do this by default. There seems very little reason not to include links over hardcoding URLs in clients. Imagine just being able to restructure your backend and clients just follow along. No complicated migrations etc. I suspect many people just live with crappy backends because it's too difficult to coordinate the rollout of a v2 API.
However, this doesn't cover everything. There's still a ton of "out of band" information shared between client and server. Maybe there's a way to embed Swagger-style docs directly into an API and truly decouple server and client, but it would seem to take a lot more than just using links over IDs.
Still I think there's nothing to lose by using links over IDs. Just do it on your next API (or use something like DRF that does it for you).
Maybe gRPC or something like that will fill the gap ...
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way. Say I have a TODO api, how do I make it so that it uses HATEOAS (also who’s coming up with these acronyms…smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide context for the advantages of "true" ReST architecture, and contrived.
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on demand," as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it comes to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
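To make that concrete for the TODO example asked about above, a hypermedia-style response might look something like this (a sketch, not a prescribed format):

{
  "title": "Buy milk",
  "completed": false,
  "_links": {
    "self":     { "href": "/todos/7" },
    "complete": { "href": "/todos/7/complete" },
    "delete":   { "href": "/todos/7" }
  }
}

The actions available in the current state arrive with the representation, the way the vote links do on HN, rather than being hardcoded into the client.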
The folly is in thinking that there's some design pattern out there that's better than REST, without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and hard-coded it in a curl command, instead of conscientiously and meticulously reading documentation and writing a client that looks up the URL semantically at runtime, follows redirects, and handles failures gracefully.
API quality is often not relevant to the business after it passes the “mostly works” bar.
I’ll just use plain http or RPC when it’s not important and spend more time on things that make a difference.
why not do everything in POST?
I did not find it interesting. I found it excessively theoretical and proscriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")
1. I can submit a request via HTTP
2. data is returned as JSON in the response
3. only the minimal amount of HTTP (and pagination) necessary is required
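In practice that's little more than (a sketch; the endpoint and payload are made up):

const createItem = async (payload) => {
  const res = await fetch('/items', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
};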
query ($name: String!) {
greeting(where: {name: $name}) {
response
}
}
or
mutation ($input: CreatePostInput!) {
createPost(input: $input) {
id
createTime
title
content
tags {
id
slug
name
}
}
}
and so on, instead of having to manually glue together responses and relations. It's literally SQL over the wire without needing to write SQL.
The payload is JSON, the response is JSON. EZ.
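On the wire it's a single POST carrying the query and its variables (the /graphql path is the usual convention, not a requirement):

const runQuery = async (query, variables) => {
  const res = await fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors.map((e) => e.message).join('; '));
  return data;
};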
My impression is that it's far too flexible. Connecting it up to a database means you're essentially running arbitrary SQL queries, which means whoever is writing the GraphQL queries also needs to know how those queries will get translated to SQL, and therefore what the database structure/performance characteristics are going to be. That's a pain if you're using GraphQL internally and your queries are now spread out, potentially across multiple codebases. But if you expose the GraphQL API publicly, now you don't even know what queries people are going to want to use.
Mostly these days we use RPC-style APIs for internal APIs where we can control everything and be really precise about what gets called when and where. And then more "traditional" REST/resource-oriented endpoints for public APIs where we might have more general queries.
If the only consumer is your own UI, you should use a much more integrated RPC style that helps you be fast. Forget about OpenAPI etc: Use a tool or library that makes it dead simple to provide data the UI needs.
If you have a consumer outside your organization: a RESTish API it is.
If your consumer is supposed to be generic and can "discover" your API, RESTful is the way to go.
But no one writes generic ones anymore. We already have the ultimate one: the browser.
Adding actions to it!
POST api/registration / api/signup? All of this sucks. Posting or putting on api/user? Also doesn't feel right.
POST to api/user:signup
Boom! Full REST for entities + actions with custom requests and responses for actions!
How do I make a restful filter call? GET request params are not enough…
You POST to api/user:search, boom!
(I prefer to use the description RESTful API, instead of REST API; everyone fails to implement pure REST anyways, and it's unnecessarily limited.)
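A rough sketch of what the :search action looks like on the wire (path and filter fields are made up):

POST /api/user:search
Content-Type: application/json

{ "filter": { "country": "DE", "signedUpAfter": "2024-01-01" }, "pageSize": 50 }

The response is then whatever custom shape the action needs, rather than being forced into the resource's representation.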
So then one gets to bike-shed if "signup" it is in the request path, query parameters, or the body. Or that since the user resource doesn't exist yet perhaps one can't call a method on it, so it really should be /users:signup (on the users collection, like /users:add).
Provided one isn't opposed to adopting what was bike-shedded elsewhere, there is a fairly well specified way of doing something RESTful, here is a link to its custom methods page: https://google.aip.dev/136. Its approach would be to add information about signup in a request to the post to /users: https://google.aip.dev/133. More or less it describes a way to be RESTful with HTTP/1.1+JSON or gRPC.
But that's not a difference between /user/signup and /user:signup .
I assumed most readers of my comment would get the idea that /users/signup is ambiguous as to whether it's supposed to be another resource, while /users:signup is less so.
what's the confusion? you're creating a new user entity in the users collection.
On the other hand, agents could just as well understand an OpenAPI document, as the description of each path/schema can be much more verbose than HATEOAS. There is a reason why OpenAPI-style APIs are favored: less verbosity in the payload. If the cost of agents is based on their consumption/production of tokens, verbosity matters.
[1] ok it's not an internet adage. I invented it and joke with friends about it
If the backend provides a _links map which contains “orders” for example in the list - doesn’t the front end need to still understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
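A minimal sketch of that client-side lookup and cache (endpoint and field names are made up):

const userCache = new Map();

const getUser = async (userId) => {
  if (userCache.has(userId)) return userCache.get(userId);
  const res = await fetch(`/api/users/${userId}`);
  if (!res.ok) throw new Error(`failed to load user ${userId}: ${res.status}`);
  const user = await res.json();   // e.g. { name, avatarUrl, ... }
  userCache.set(userId, user);     // user info changes infrequently, so cache it
  return user;
};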
To handle authentication "properly" you have to use cookies or sessions, which inherently make apps not RESTful.
The term has caused so much bikeshedding and unnecessary confusion.
Likewise, if the founders of the web took one look at a full-on React-based site, they would shriek in horror at what's now the de facto standard.
But no, a service account in GCP has no less than ~4 identifiers. And the API endpoint I wanted to call needed to know which resource, so the question then is "which of the 4 identifiers do I feed it?" The right answer? None of them.
The "right" answer is that you need to manually build a string, a concatenate a bunch of static pieces with the project ID and the object's ID to form a more IDer ID. So now we need the project ID … and projects have two of those. So the right answer is that exactly 1 of the 8 different permutations works (if we don't count the constant string literals involved in the string building).
Just give me a URI, and then let me pass that URI, FFS.
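For the curious, the string building looks something like this (identifiers are placeholders and the exact format is from memory):

const projectId = 'my-project';
const serviceAccountEmail = 'runner@my-project.iam.gserviceaccount.com';
const resource = `projects/${projectId}/serviceAccounts/${serviceAccountEmail}`;
// ...instead of the API simply accepting (or returning) one opaque URI.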
There are plenty of valid criticisms, but that is not one of them; in fact, that's where it shines.
It’s interesting that Stripe still even uses form-post on requests.
So your payloads look like this:
{
"id": 1,
"href": "http://someplace.invalid/things/1",
"next-id": 3,
"next-href": "http://someplace.invalid/things/3",
}
And rather than just using next-href, your clients append next-id to a hardcoded things base URL? That seems like way more work than doing it the REST way.
REST includes code-on-demand as part of the style; HTTP allows for that with the "Link" header and HTML via <script>.
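In code, the difference is roughly this (THINGS_BASE_URL and thing refer to the hardcoded base and the payload above):

// Following the link the server handed back:
const next = await fetch(thing['next-href']).then((r) => r.json());

// vs. rebuilding the URL from an id and a hardcoded base:
const alsoNext = await fetch(`${THINGS_BASE_URL}/${thing['next-id']}`).then((r) => r.json());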
I mean .. ok, you have the bookmark uri, aka the entrypoint
From there, you get links to stuff. The client still needs to "know" their identifiers, but anyway.
But the params of the routes .. and I am not only speaking of their type, I am also speaking of their meaning .. how would that work?
I think it cannot, so the client code must "know" them, again via out-of-band mechanisms.
And at that point, the whole thing is useless and we just use OpenAPI.
I used to get caught up in what is REST and what is not, and that misses the point. It's similar to how Christopher Alexander's ideas about pattern languages get used now in a way that misses the point. Alexander was cited in the introductory chapter of Fielding's dissertation. These are all very big ideas with broad applicability and great depth.
When combined with Promise Theory, this gives a dynamic view of systems.
LMAO at all the companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases which are not resources or are not stateless. Sure, you can find some obtuse way to make them into resources, but at times that either leads to bad abstractions that don't convey the vocabulary of the underlying system (and thus, over time, creates a rift in context between the interface and the underlying logic), or it exposes underlying implementation details because they're easier to model as resources.
What the heck does this mean? Does it mean that my API isn’t REST if it can’t interpret “http://example.com/path/to/resource” in the same way it interprets “COM<example>::path.to.resource”? Is it saying my API should support HTTP, FTP, SMB, and ODBC all the same? What am I missing?
Has any other system done this, where you send the whole application along with each state? Project Xanadu?
I do find it funny how Fielding basically said "hey, look at the web, isn't that a weird way to structure a program, let's talk about it," and everyone sort of suffered a collective brain fart and replied "oh, you mean HTTP, got it."
I tend to use REST-like methods to select mode (POST, GET, DELETE, PATCH, etc.), but the data is usually a simple set of URL arguments (or associated data). I don't really get too bent out of shape about ensuring the data is an XML/JSON/Whatever match for the model structure. I'll often use it coming out, but not going in.
Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.
But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally developed against the common use case of a static, non-evolving client such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.
This might come across as a rant, but it is meant to be informative so I'll tell you what REST and HATEOAS are good for: Building micro browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORM into a html form or an android form or a form inside your MMO which happens to have a registration form built into the game itself, rather than say the launcher.
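For reference, a HAL-FORMS response looks roughly like this (abbreviated, from memory; see the spec for the exact shape):

{
  "_links": { "self": { "href": "/users" } },
  "_templates": {
    "default": {
      "title": "Sign up",
      "method": "POST",
      "contentType": "application/json",
      "properties": [
        { "name": "email",    "prompt": "E-mail",   "required": true },
        { "name": "password", "prompt": "Password", "required": true }
      ]
    }
  }
}

The client renders whatever form the template describes; it never hardcodes the fields.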
Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions by equating the simplest and most irrelevant problem with the hardest problem, in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than an API where the "/user/{id}/orders" link happens to be stored under _links. The server is allowed to give the "/user/{id}/orders" link a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure oblivious way
You mention swagger. Swagger is an anti-REST tech. Defining a media type is the REST equivalent of writing a swagger API description.
If you can define an API in swagger, you can define one via a media type. It's just that the latter is generally not done because to do it requires a JSON schema (or similar) and people mostly don't use that or think of that as how one defines an API.
Boss: we need an API for XYZ
Employee: sure thing boss, I'll write it in swagger and implement by Friday!
Well, besides that, I don't see how REST solves the problem it says it addresses. So your user object includes an activate field that describes the URI you hit to activate the user. When that URI changes, the client doesn't even notice, because it queries for a user and then visits whatever it finds in the activate field.
Then you change the term from "activate" to "unslumber". How does the client figure that out? How is this a different problem from changing the user activation URI?
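Concretely, the payload being discussed is something like (a sketch):

{
  "id": 123,
  "name": "John Doe",
  "_links": {
    "self":     { "href": "/users/123" },
    "activate": { "href": "/users/123/activate" }
  }
}

A client that blindly follows _links.activate never notices the href changing, but it is still coupled to the "activate" name itself.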
We're using actual REST right now. That's what SSR HTML uses.
The rest of your (vastly snarkier) diatribe can be ignored.
And, yet, you then said the following, which seems to contradict the rest of what you said before it...
> Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.