Now when I look at the documentation for many JavaScript (and even Python) libraries, it's just examples. That's great if I'm trying to throw something together as quickly as possible, but not if I need to fix a problem or actually learn the library.
Also, having examples is fine, but they should be considered a bonus, not documentation.
Good documentation is hard, and very rare these days.
And it's a blurry line, since type definitions are a good form of documentation. It's just that type-system tooling has mostly replaced the need to go read through the docs for that. I expect to get it easily and obviously in whatever editor or IDE I've configured.
I think the prevalence of example based documentation is because of this trend - don't waste time manually writing the same thing type tooling is going to give your users anyway.
When I hit docs - I'm much less interested in the specific arguments, and I'm very interested in "Capabilities": Problems this tool can solve.
Examples are a great showcase for that. Especially if I have a specific problem in mind and I just want to know if a "tool" can solve that problem, and how to use it to do so.
---
If I have a type system: I want examples first.
If I don't have a type system: 1) I'm not happy. 2) I want examples first, followed by the detailed arguments/return/data structure docs you're referring to.
It's the API specification. It's not just the functions and their parameters, it's also an explanation of what they do.
> I'm much less interested in the specific arguments, and I'm very interested in "Capabilities"
And that's exactly what an API specification provides you, and that examples do not. Examples only tell you how to use the API in the same way that the author is using it.
It's much harder to imagine everything a tool can do from the specs alone, and it's not clear what I'm missing. Examples make it concrete.
With Python, it could be a mysterious class that isn't explicitly mentioned.
For Rust, you often need to know 4 layers deep of nested types for various error and flow control, and once generics are introduced with the expectation of implemented traits, it all goes out the window.
If I need to declare the type of a value ahead of time, or know its shape to serialize, check, etc., I want a very clear "this is what it returns, this is how it's expected to be used".
This way the examples must always be valid and runnable.
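To make that concrete with a minimal sketch (Python, with hypothetical names): a clearly annotated signature states the return shape up front, and because it's code rather than prose, it can't quietly drift out of date the way a written description can.

    from typing import TypedDict

    class UserRecord(TypedDict):
        """Shape of the value returned by fetch_user (hypothetical example)."""
        id: int
        name: str
        active: bool

    def fetch_user(user_id: int) -> UserRecord:
        """Return the stored record for user_id.

        The declared return type tells callers exactly what shape to expect
        when serializing, validating, or type-checking the result.
        """
        # Stub body for illustration only.
        return {"id": user_id, "name": "example", "active": True}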
Examples, like quickstart guides, should absolutely be required. They serve as the quick entry point and show the general use of the library. If I only had the code docs I would have no idea how I'm supposed to use the API; that was my main complaint about many Java libraries historically, since they only shipped a Javadoc.
1. If you want an example and the docs have an example, great.
2. If you want technical details and the docs have technical details, great.
3. If you want an example but the docs only have technical details, you can still get what you want by spending longer to read and understand the documentation. It's inefficient but it works. Third party examples are also frequently available.
4. If you want technical details but the docs only have examples, you're SOL. You have to hope that the code is available and readable, or just try stuff and hope it works. Third party technical details are rare.
I'm not sure it's inefficient. It gives you knowledge of what is and isn't possible, as well as the semantics of what you're actually doing, all of which you'll need later anyway. Relying only on examples is a great way to miss unintended side-effects or other consequences.
I don't understand the resistance to examples being a part of documentation (and not a bonus).
I think harimau777 was just expressing his opinion, like the blog owner.
I think that examples fit into the first or the second kind; I'm not too concerned which category they belong to, because the point is that all kinds have their place.
Source: https://docs.divio.com/documentation-system/ - someone has already linked it in another comment, but I think it's worth replying because the comments in this thread are arguing about which kind they like the most.
But there's no reason that a document cannot contain both a specification and examples. That's the type I prefer, and that's the type I mostly encounter nowadays. Usually a specification on the left with some examples on the right, maybe even interactive examples.
The best documentation is type references and examples and a line explaining what the function does. Not one or the other.
A Javadoc-esque API doc is the minimum requirement for a serious language/library. Examples and tutorials are nice too, but the API doc can be automatically generated in every serious language so not having it is just plain unacceptable.
Tutorials, articles and examples etc are fine too, they can be combined with the API doc like Java does. But that's gravy, on top. You don't have just gravy, it needs to be on top of something and that something is a proper API doc.
This isn't even a discussion in my mind, it's just the right way to do it. It's a solved problem. You optimize for power users not amateurs. 99% of the time when I look up docs what I need is a plain boring API doc, I only need examples in my first week.
Sure, but you can optimize for one without excluding the other. Adding examples _allows_ amateurs to gain experience and _become_ power users, and imposes near-zero cost on power users who can just skip a couple lines and get into the "real" API doc.
"No true Scotsman would want examples."
I'm far from an amateur and I want to see several concise examples in documentation. That doesn't mean that's all I want to see.
They're not the best. The best is thorough api docs. The rest is nice to have.
And I'm not no true scotsmaning, I'm just saying most devs are ass. I know, I see people with 20 years of experience on me write buggy trash code that I have to fix. I see people whine about the java docs as if they aren't some of the best damn docs available anywhere. Because they're ass and they can't properly use the tools they're given. They'd rather have examples and copy snippets from SO or have ChatGPT write it for them.
Any professional will appreciate proper documentation. It is literally impossible to create examples for every situation. API docs, by their nature, cover all use cases because they simply describe what exists, not how to use it. So for those of us with the knowledge and intelligence to actually just look at the parts and put them together on our own, i.e. professionals, API docs are king. Examples are useful sometimes, API docs are useful all the time.
But of course if you don't even know the language well enough to read the documentation, you prefer examples.
That's exactly what you said.
And I'm not no true scotsmaning
You absolutely were. Now you're trying to backpedal.
If you don't even know the language well enough to read docs I'd say you're an amateur. And if you're one of those people who just chronically don't read docs you're definitely an amateur. I mean you may have a job and you may even build stuff that kind of works, but unless you're sitting down and looking closely at the details you're not doing things right. You can't be. Doing things right requires looking closely at the details.
And I know from experience that there's a vast number of employed amateurs who write nothing but legacy code.
E.g. "Python rstrip" (without quotes) in DuckDuckGo does not lead me to the Python docs on page 1.
Now, languages with common / short names? All the time. Go, C, C++, D, etc; even Rust (especially since all the libraries have similarly ambiguous names related to iron oxide).
Am I uncommon in that I search for the website/domain I intend to read first and then look for the specific page?
Case in point: requests. Google always drops me on pages like the Quickstart[0], which are full of examples. But they are useless for advanced users! Yes, even my limited brain can remember that you call "get" to issue HTTP GET. What other options does it take? Does it take a timeout? How do I pass in content-type? Does raise_for_status ignore 204?
Both have their merits, but if the developer only has time for one, I'd go for the proper docs.
[0] https://requests.readthedocs.io/en/latest/user/quickstart/
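For what it's worth, these are exactly the details I end up reconstructing by hand; a minimal sketch against the real requests API, with the behavior as I understand it:

    import requests

    # get() accepts per-request options beyond the URL:
    resp = requests.get(
        "https://api.example.com/items",         # placeholder URL
        params={"page": 1},                      # query-string parameters
        headers={"Accept": "application/json"},  # headers= is also how you set content-type
        timeout=5,                               # seconds; there is no timeout by default
    )

    # raise_for_status() raises requests.HTTPError for 4xx/5xx responses.
    # A 204 No Content is a success status, so nothing is raised for it.
    resp.raise_for_status()
    print(resp.status_code)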
Ultimately, creating a small example that I know should've worked and changing things in a way that I could narrow down the problem worked best.
The point of all this is that examples often encapsulate multiple things in a condensed format that is extremely high fidelity (since it's typically a code example that runs and has output). API documentation, in comparison, typically covers a huge surface area and gets far less maintenance effort.
Also for the record, I absolutely don't trust libraries on behavior that isn't extremely obvious, explicit and so absolute that it's almost effortless to find out (I am talking first thing on README, entire sections showing that one thing, or definition through types). I just assume that things like behavior around status codes are going to change, and do my best in the calling code to mitigate issues as much as possible.
The developer has time for neither or both. Once the productivity barrier for one has been broken, the other is a tiny extra effort. In that case, they must provide both, except in the small minority of cases that are exceptionally self-describing, where one (usually docs) is sufficient.
The number of times I’ve opened FFmpeg’s man page must number in the hundreds. I think I’m pretty good conceptually, but I can’t remember all the flags, e.g. that -s is frame size, while -fs is file size.
And while that man page does have some examples, these days I tend to ask an LLM (or, if it’s going to be simple, Google) for an example.
No, not really, or I should say simple examples are for beginners.
Typically when I'm writing example pages I write them in this order:
Simple example
Simple example
Simple example
Intermediate example
Intermediate example
Complex example.
Having complex examples can really help those that are looking at going beyond the basics.
That's exactly what I'm pointing to in the post:
>When jumping between projects, languages and frameworks, it takes a considerable amount of mental energy to restore the context and understand what is going on.
Of course we want a full list of public functions with all the info. But with just a list of functions it's often still not quite clear how you're supposed to set up and call them, unless you dig through the whole list and try to understand how they interact internally. A few short examples on "How do I init the library/framework so I can start using it", "How do I recover from a failure case", etc. make the docs infinitely more accessible and useful!
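As a rough sketch of how little that takes (the client below is entirely made up, purely to stand in for "the library"):

    # Hypothetical client standing in for "the library" -- all names are invented.
    class ExampleError(Exception):
        pass

    class Client:
        def __init__(self, api_key: str):
            self.api_key = api_key
            self.connected = True

        def reconnect(self) -> None:
            self.connected = True

        def fetch(self, key: str) -> dict:
            if not self.connected:
                raise ExampleError("not connected")
            return {"key": key}

    # The two snippets I actually want near the top of the docs:
    client = Client(api_key="...")          # "How do I init the library?"
    try:
        result = client.fetch("report-42")  # normal call
    except ExampleError:
        client.reconnect()                  # "How do I recover from a failure case?"
        result = client.fetch("report-42")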
The most frustrating things are those that aren't documented at all and only have examples. What the hell kind of shitty documentation is just examples, with no idea of what the parameters could or shouldn't be?!
If I tried to do that with man... I'd have to read (in alphabetical order) the documentation for 6 different command flags. That's how far I'd go to read about the flag -- to actually use it, I'd have to experiment with the command flag until I figured out the actual syntax using my imagination.
Let's say instead that I am in a rush... so I'm skipping that time-consuming process and instead scrolling further down for some practical examples... one full page later I find a bunch! ... Except there are only 4 and none of them demonstrate what I need (the syntax is completely different). The docs wasted my time exactly when I most needed them NOT to do that. If I'd instead just googled "xargs replacement example", I would have gotten something I can copy/paste in seconds and could have gotten on with my life!
The moral of the story is this: Don't tell without showing. Don't show without telling. Do both if your goal is to be understood.
You know you don't need to skim a man page like a physical directory, right? It's on a computer; just search for the flag you want to use.
[0] https://www.man7.org/linux/man-pages/man1/man.1.html#DESCRIP...
https://stackoverflow.com/documentation
> We have shut down Stack Overflow Documentation. Documentation was our attempt at improving existing reference materials by focusing on examples. The beta ran from July 21st, 2016 until August 8th, 2017. For more details on why we ended it, please see our post on meta. Thank you to everyone that participated. As always, the content contributed by our community is available under CC BY-SA.
I learned recently that tldr also has separate pages for subcommands.
Examples let you grasp immediately how to use the library and provide you good starting point for your integration.
Detailed explanation of all params and configurations allows you to solve more complex problems and understand the full capabilities of the tool.
I am miserable when either of the two kinds of documentation is missing.
The only exceptions are very simple libraries, where the example tells you already everything there is to know.
When feasible, I'd add REPL environments. The ability to quickly modify and test the example can make up for a lot of the shortcomings documentation might have.
> Every project needs a varying ratio of each component and stacking all forms in a single website in the same format is very ineffective
I think Diátaxis _is_ mostly a conceptual framework. It helps me tremendously, totally implementation agnostic. From the site itself:
> Diátaxis strongly prescribes a structure, but whatever the state of your existing documentation - even if it’s a complete mess by any standards - it’s always possible to improve it, iteratively.
Examples are crucial to documentation, but they aren't — at least in this view — a kind of documentation, so much as a technique for documenting.
Not sure who came up with it first, but your link is similar to the author's link at the conclusion of the article:
> Since even major software projects rarely offer [4 distinct kinds of documentation][1], I am often hesitant to click on a "Documentation" link
Personally, I prefer documentation written as an explanation. If I understand how the thing works or why it's done that way, everything else is easy. Examples/tutorials/how-to guides only help me develop a conceptual framework by experiencing it, and I know that's how some people learn, but a skilled technical writer or educator can help me generate that conceptual framework correctly the first time.
I think the reason API references are so common is that they're really best suited for someone trying to build a wrapper or fully-compatible alternative implementation...which is what the original authors were doing, before the project existed. Many projects started as an internal, private API reference, and generating documentation can be as easy as making that public.
For example:
https://perldoc.perl.org/bigrat
https://perldoc.perl.org/Archive::Tar
If you're writing docs for your project, consider following the Perl documentation style. Fortunately, that style is itself well documented:
https://perldoc.perl.org/perldocstyle#Description-and-synops...
I don't know what it is, it's definitely not mathematical purity or consistency, but there's something there with Larry Wall's linguistics background and general attitude that just stays with you.
Instead, many of them are almost like exercises to help you work through the ideas that the documentation is presenting. Some of them help you see how a syntax or other language rule applies. Sometimes the documentation presents two pieces of example code and walks you through why they're equivalent, helping you understand the language better. Or why they're almost equivalent and what the differences are.
In a lot of documentation, examples give you a superficial understanding but are a dead end if you want to go beyond superficial. In the Perl documentation, examples are often a bridge to a deeper understanding.
Wait a second, do people use $ for `$job` because it's how they earn money and not, as I've always thought, as a variable name?
Stop throwing my entire world view out of order please.
Also, many, many countries use $, like Canada.
(I'm 99% sure it's intended to be reminiscent of that sort of syntax. Example use: https://news.ycombinator.com/item?id=42761939)
... but the entire point of the "$JOB" etc. slang is that it's a string interpolation...
`echo $JOB` and `echo ${JOB}` are identical, though the latter is more flexible (allowing e.g. `echo ${JOB:-unemployment}` or `echo ${JOB}SUCKS`). Sometimes people put it in angle brackets: <JOB>. I have no idea what systems use variables like that.
In particular, I don't want to have to learn half a dozen footguns because of a leaky abstraction.
You can intuit all you want from a method signature, and then you will fail to produce working code because you missed a config, or a preparation step, or don't understand how to process the results, or...
If you have a `Foo` object, it should always be able to do every possible thing a `Foo` can do.
If you need to do something first, it should be a required argument to whatever lets you construct a `Foo` in the first place.
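A minimal sketch of that principle (hypothetical `Foo` in Python): the preparation step becomes a constructor argument, so an unconfigured `Foo` simply can't exist.

    class Connection:
        """Stand-in for whatever preparation a Foo is supposed to need."""
        def __init__(self, url: str):
            self.url = url

    class Foo:
        def __init__(self, conn: Connection):
            # The required dependency is part of construction, not a separate
            # setup() call you have to discover somewhere in the docs.
            self.conn = conn

        def do_thing(self) -> str:
            return f"doing the thing via {self.conn.url}"

    foo = Foo(Connection("https://example.com"))  # impossible to forget the setup step
    print(foo.do_thing())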
If the specific values of parameters of your mutable state are material to the outcome, then the example is incomplete if they are not specified. Similarly, you wouldn't use `fib(x) = 3` as an example without specifying `x = 4` in your context.
> don't understand how to process the results
Not sure when this would be the case. Do you have an example in mind?
Real world is usually a combination of
- read data
- transform data
- obtain auth
- prepare data in proper formats
- <the only step usually shown in examples here>
- get results back. Perhaps an error? A stream? A specific format? Could we do retries?
That's why a full example would look like this: https://developer.auth0.com/resources/labs/actions/sync-stri... or https://github.com/stripe-samples and not a "it's enough to deduce parameters from method signature"
You don't quote me well. Here's what I actually wrote:
"If the technical spec of a method cannot be intuited from the signature and a handful of canonical examples of usage, then..."
> <the only step usually shown in examples here>
Then those are poor examples. I'm not defending that poor examples beat out good technical specs, or even that bad examples beat out bad technical specs. Merely that, if a user-facing function is not misplaced at the architectural level, then good examples often make good technical specs redundant (for users) and provide strictly greater value (for users).
I'm aware of real world practices. I'm a real world programmer working with real world functional codebases, where parameters material to computations are always exposed explicitly, or at least their monadic context is. It is no pipe dream to highlight these parameters in full examples.
What often is a pipe dream is attempting to formalize and then update English language technical documentation. Ironically, one of the biggest offenders I might point to is the Rocq proof assistant. Their documentation has decent example coverage, but where it doesn't have coverage, and instead only has authoritative-sounding technical specs, it is abysmal. I have gone down rabbit holes to find out the real-world implications of typeclass resolution flags like this one (https://rocq-prover.org/doc/V8.19.0/refman/addendum/type-cla...), for example, only to find out that its behavior has changed from version to version without any update to its technical spec. This is because the spec, being mere English, was not specific enough to even distinguish between these different behaviors.
It's merely that I think unstructured English is not a good language for communicating knowledge of how to use an API.
I am tired of rudimentary docs that only have examples.
If I at least have some examples to get me started, then I can go digging through other documentation (or even just the code) to figure how to do more advanced things... but being unable to even get started is a much bigger frustration IMO.
One time my company paid $5k for a commercial x264 license, and when we asked for some documentation on how to get started with encoding some video frames using their C API, they simply responded "the code is the documentation." With no real examples to go on (at the time in the 2000s there actually wasn't much available online at all), this set me back a good two weeks that I felt was completely unnecessary and completely turned me off to interacting with them at all for the future.
We ended up switching to a hardware solution that was much easier, faster, well-documented, and way friendlier people to talk to.
I agree that automatically generated API docs from code is nearly useless.
I agree that examples are important and useful.
But examples are not "the best documentation." Or I should say hyperbole is poor documentation of one's point of view.
Why? Because if you don’t explain how to actually use something, all the fine-grained details are pointless.
Classic example: try looking up the Java docs around 2003–2005 to figure out how to display an image in a Swing application. Endless pages about Graphics2D and Image and double buffering and whatnot, but not a single mention of the real solution:
Just put it in a JLabel.
[0] https://fizyka.umk.pl/~jacek/docs/javatutorial/uiswing/compo...
We advise developers to include examples in their documentation, often under their own ## Examples heading. To ensure examples do not get out of date, Elixir's test framework (ExUnit) provides a feature called doctests that allows developers to test the examples in their documentation.
Doctests work by parsing out code samples starting with iex> from the documentation. You can read more about them at ExUnit.DocTest.
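Python's doctest module is the closest analogue I know of: examples prefixed with >>> in a docstring are parsed and executed as tests, so they fail loudly the moment they drift from reality.

    def double(n):
        """Return n multiplied by two.

        Examples
        --------
        >>> double(2)
        4
        >>> double(-3)
        -6
        """
        return n * 2

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # re-runs the examples above and reports any mismatch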
An example not to follow: https://developer.mozilla.org/en-US/docs/Web/API/Web_compone...
Here we are trying to explain templates and slots, with "slot" as an HTML attribute, and the dummy data is... "slot". This kind of confusion is useless and makes it more difficult. Almost as if the goal was to be difficult.
I go on to explain each of the arguments, however, which this blog post doesn't necessarily suggest. I think we need both examples and explanations.
My draft is still early, but it’s already useful for everyday tasks like resizing, optimizing, and layering.
Even better are the challenges at the end of their examples which force you to rethink what you just read.
There are a couple of other maladies that make for unreadable docs. Another is documenting only the last layer added by the Open Source collaborator and not documenting anything below, for example if I'm making an HTTP client:
http.get() makes an HTTP get request
http.set_tcp_params(param=value) sets the TCP parameter param to value
When you could do:
http.get() makes an HTTP GET request, which should be a read-only request that doesn't affect the state of the server. Parameters are typically passed in the URL with ?param=value&param2=value2 syntax as defined in RFCxxx
http.set_tcp_params() sets parameters of the tcp layer, as defined in RFCxxx, these can include most importantly the MTU which is the max packet size, which might be useful for fitting webpages... (<-- actually the reason the function was implemented at all, but then it was generalized just as an added bonus)
One way to fail in this regard is to use one of those autodocs things; you've just lost the battle.
PLEASE provide examples and PLEASE provide documentation of every API call's inputs, outputs, and effects.
Feature-list-type documentation is orders of magnitude less useful than workflow-type documentation. The majority of the time, users are concerned with accomplishing tasks, which, if the software is well designed, can be grouped, composed, and illustrated with examples. Key examples can be chosen to both showcase features and illustrate the main paths from "raw materials" to "finished workpiece", at various scopes.
This gap is too often filled by random youtube tutorials of varying quality. It's really telling of the massive blind spot afflicting the makers; they're too close to the details of what they build to see the whole, especially from the perspective of a user.
Yes, for API/token interactions I love examples that show how to generate a token with only the permissions needed for what you are doing, then pulling said data from the API and interacting with it. At least that's how the users of the product I work on tend to do it. Security, workflow, and cleanup are great examples.
This means so long as you run the tests before shipping you basically can't ship with examples you forgot to update after an API change because they either don't compile or don't work.
And you might think, well, nobody would ship examples that don't even compile, right? But on more than one occasion I've used a Microsoft C# library that comes with examples clearly hand made by their engineers (yes, this was long enough ago that it couldn't be LLM output) without checking they work. Somebody has sketched out what ought to work, they haven't tested it, and a reviewer has seen that it looks right and OK'd it, but again, never tested that it works. This is extremely frustrating because the documentation is wrong, but now what?
I try to write good documentation for my programs, although it helps if other people (who do not already know the working of the program) will review it in order to suggest improvements.
When I was learning php in school, the examples / discussions on every documentation page were always the most helpful bit.
When I am learning something, I do tend to start with the examples; it's the best way to quickly get a feel for the shape of the domain and its usage. But then I end up in the reference in order to understand and apply what I just learned.
If you are deeply knowledgeable about some library or language, examples become less valuable and the straightforward API documentation becomes essential.
The best docs with examples I've seen are Qt (probably the best documentation overall), Rust and Go.
- it puts a simple usage example into the documentation
- you get an easy-to-write, quick-to-run little assertion test that ensures basic functionality is preserved
[0] https://littlegreenviper.com/testing-harness-vs-unit/#sample
Examples are very nice. One way to create examples is with test cases. Test cases can also be packaged with source code!
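A sketch of what that can look like (pytest-style, with a made-up function): the test doubles as the smallest possible usage example, shipped right alongside the source.

    # parse_version is invented for illustration; the point is that the
    # assertion below is also a copy-pasteable usage example.
    def parse_version(text: str) -> tuple[int, int, int]:
        major, minor, patch = text.split(".")
        return int(major), int(minor), int(patch)

    def test_parse_version():
        assert parse_version("1.4.2") == (1, 4, 2)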
It's like googling "how to do XYZ with grep" instead of just typing "man grep" in the terminal.
If you only have N hours to work on "documentation", I say spend it on examples.
Also AI can probably turn a quality example into a doc whereas it can't turn a doc into a quality example.
Another thing to contemplate. I stand here as a 75 kilo entity - an example. I am, IMHO, a good example. The documentation on this example is so far many millions of pages long, is very incomplete, certainly wrong, and could not be used to create said example.
https://news.ycombinator.com/item?id=15779382
As the above 8y old discussion and today’s blog post both say: yes, examples are but one part of a complete set of learning materials — practicals, theory, recipes, reference.
However, in a toss up, the greatest of these is the one you always want first: picking the thing up, tossing it about to get a feel of it, and getting your hands dirty!
For code, worked examples are particularly helpful because your dev environment can jump to the function definition, which typically has its own reference documentation.
I just LOVE IT when I read a device datasheet, perform the steps, and things don't work. Then I have to download a couple of GB or so of framework and configurator to generate a project that shows me all the details they forgot to put in the datasheet: order of operations, missing requirements, all that stuff.
Even better when the examples are not updated after the device/libraries/framework are updated.
* Absolute beginners/newbies: These users are best helped by showing a "happy path" through the tools, with plenty of examples to show you why things are that way, so they build up an understanding of how things are supposed to work.
* Regular users: These are best helped by a "topic" oriented style of documentation. Topics can take one element of a tool and go really in-depth on it. For example, a HTTP request library might have a topic page entirely dedicated to session authentication, explaining how to persist headers and cookies between requests in the library and what a session object does.
* Power-users/developers: These users are served by reference docs the most. Just a big index of classes, functions and argument doc strings that refer to each other is enough here, because anyone reaching for reference docs is usually doing it because they have a very specific issue.
As for real-world examples I'm familiar with: NodeJS libraries often only have beginner docs, and the moment you go off that happy path it becomes very difficult to reason about what you're doing (not helped by a lot of frameworks preferring you use CLI tools to build up your code, though maybe this has changed); the Python ecosystem often has good topical docs at the cost of poor reference documentation; and C libraries very often ship with only reference documentation.
Programming-wise, it's easy to make reference docs (since they can be derived from comments in the source code) and beginner docs (because you probably have a mental model on how someone is meant to start using something), while writing good topical documentation is an entire skill on its own (since it requires understanding where someone might struggle with something.)
The best documented tools usually have all three covered at the same time, while poorly documented tools usually only have beginner or reference documentation. Only having beginner docs makes it impossible for someone to really learn a tool ("draw the rest of the fucking owl"). Only having topical docs makes it hard to figure out where to start. Only having reference docs is hard to reason about because there's nothing explaining to you how the references are meant to fit together.
It has one advantage over documented examples: they are _guaranteed_ to be correct.
Useless documentation means half-arsed is better.
The world of IT is broken; what sort of idiot gives Linux to their parents when, even as a trained developer, man is so useless?
It's just excuse after excuse. "Unit tests are documentation" sort of garbage.
That's what's mind blowing about LLMs, IT devs are so bad LLMs are better. Hacker News comments also confirm this.
If the audience is teammates, the _code_ itself is usually the best documentation.
Tests lie about edge cases. A test _might_ be demonstrating an edge case, or the suite might ignore large sections of edge cases.
Documentation is worse, it rarely documents edge cases. Country Y passes legislation and you need to implement it? Yeah, the docs aren't getting updated.
If your audience is a different team, API references (the headers/RPC message schema/javadoc/etc.) are better than written docs. Again, they are less likely to mislead you and are much more likely to work.
Unless, of course, the interfaces are telling fibs about what is allowed. Interfaces that are big buckets of parameters can do that.
Only if your audience is an external organization would documentation be a good primary reference. It meets them where they are, selling them on the capabilities of the software and bootstrapping them to the code.