The problem with filtering out debug logs is that you don’t need them, until you do. And then trying to recreate an event you can’t even debug is often impossible. So it’s far easier to retrieve those debug logs if they’re already there, just hidden.
Also, as I pointed out elsewhere, modern observability platforms give you a way to keep those debug logs in an archive that can optionally be ingested after an incident, without eating into your regular quota of indexed logs. That gives you the best of both worlds: all the logging, but without the expense or the flood of debug messages in your daily logs.
I’ve been on-call, and I think you’re cherry picking. The world has too many devs who still debug with log statements. Those logs never had any value to anyone but the original author.
I’ve also seen too many devs who are perfectly happy trying to write vastly complex Splunk queries to generate charts, and those charts tend to break in a production incident because a bunch of people load them at once and run into Splunk’s rate limiting. I’ve almost never had this problem with Grafana. It’s true that you can make a dashboard with long-term trends that will fall over, but you wouldn’t use that dashboard for triage, and if you’ve built one that tries to do both, the solution is to split it into two dashboards.
If you want to build an organization that scales, you need a way for new members to join your core of troubleshooters without pulling resources away from solving the trouble. That means they can’t demand time, resources, or attention that the core group has in short supply.
Grafana fits that yardstick much better than log analyzers.
You’re making a case that cryptic log messages are bad. And I agree.
You’re also making a case that logs are only one piece of the telemetry ecosystem. And I agree there too.
What I’m arguing is that there isn’t a need to filter logs based on cost because you can still work with them in observability platforms in a cost effective way.
Lastly, I didn’t say everything should be instantly available. Long term logs shouldn’t be in the same expensive storage pool as recent logs. But there should be a convenient way to import from older log archives into your immediate log querying tools (statement here is intentionally vague because different observability platforms will engineer this differently and call this process by different names)
As for complex queries, regardless of how easy to use your observability platform is, however many saved queries and dashboards you have built, there’s always going to be a need for upskilling your staff. That’s an inescapable problem.
This is often easier said than done. And there are ginormous costs associated with logging everything. Money that can be better spent elsewhere.
Also, logging everything creates yet another security hole to worry about.
If you use a tool that defaults the log spew to a cheap archive, samples into the fast store, and offers a way to pull from the archive on demand, much of that is resolved. FWIW I think most orgs get scared when they see $$$ in their cloud bills, but don't properly account for the time engineers spend rummaging around for data they need but don't have.
This is a tricky one that's come up recently. How do you quantify the value of a $$$ observability platform? Anecdotally I know robust tracing data can help me find problems in 5-15 minutes that would have taken hours or days with manual probing and scouring logs.
Even then you have the additional challenge of quantifying the impact of the original issue.
If your org checks these boxes:
- Reliability as a cost center
- Vendor costs are to be limited
- CIO-driven rather than CTO-driven
Then it's going to be a given that they prioritize costs that are easy to see, and will do things like force a dev team to work for a month to shave ~2k/month off of a cloud bill. In my experience, these orgs will also sometimes do a 180 when they learn that their SLAs involve paying out to customers at a premium during incidents, which is always very funny to observe. Then you talk to some devs and they say things like "we literally told them this would happen years ago and it fell on deaf ears" or something.
exactly. high-cardinality, wide structured events are the way.
> Also, logging everything creates yet another security hole to worry about.
I think the real problem isn’t logging, it’s the fact that your developers are logging sensitive information. If they’re doing that, then it’s a moot point if those logs are also being pushed to a third party observability platform or not because you’re already leaking sensitive information.
If developers think “log everything” means “log PII” then that developer is a liability regardless.
Also, this is the sort of thing that should get picked up in non-prod environments before it becomes a problem.
If you get to the point where logging is a risk then you’ve had other failures in processes.
Good automatic tiering for logs is very valuable, since the most recent logs tend to be the most useful. I like NVMe -> hard disk -> tape library. LTO tape storage is cheap enough that you don't need to delete data until it is VERY old.
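To make the tiering idea concrete, here's a rough sketch of age-based migration between tiers, assuming logs land as compressed files under hypothetical mount points; in practice you'd normally lean on the storage or observability stack's own lifecycle policies rather than a script like this:

```python
import shutil
import time
from pathlib import Path

# Hypothetical mount points and age thresholds, purely for illustration.
TIERS = [
    (7 * 86400,  Path("/mnt/nvme/logs"), Path("/mnt/hdd/logs")),         # older than 7 days: NVMe -> HDD
    (90 * 86400, Path("/mnt/hdd/logs"),  Path("/mnt/tape-stage/logs")),  # older than 90 days: HDD -> tape staging
]

def migrate() -> None:
    now = time.time()
    for max_age, src, dst in TIERS:
        dst.mkdir(parents=True, exist_ok=True)
        for f in src.glob("*.log.gz"):
            if now - f.stat().st_mtime > max_age:
                shutil.move(str(f), dst / f.name)   # demote the file to the colder tier

if __name__ == "__main__":
    migrate()
```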
“Better to have hoarding disorder than to need a fifty year old carrier bag full of rotting bus tickets and not have one” really should need more justification than a quote about how convenient it is to have what you need. The reason caches exist as a thing is so you can have what you probably need handy because you can’t have everything handy and have to choose. The amount of things you might possibly want or need one day - including unforeseen needs - is unbounded, and refusing to make a decision is not good engineering, it’s a cop-out.
Apart from the storage cost, there's the time and money you spend indexing, cataloging, and searching it all. How many companies are going to run an internal Google-2002-sized infrastructure just to search their old hoarded data?
Step one: add log severity to your log messages (pretty much every log library supports this out of the box).
Step two: add a log archive (you should have this anyway so that logs can be retained past the initial retention period of your log querying tools. Eg you might have a compliance requirement to keep logs for two years but you obviously wouldn’t want anything that old stored in your expensive fast log search)
Step three: create a way to ingest your archived logs (again, something your business should have, otherwise what’s the bloody point in having an archive)
Step four: have a rule that pushes logs of high severity straight into your log ingestion pipeline, and logs of lower severity into your archive.
Step four seems to be the piece that most people are oblivious to. But it’s generally really easy to implement, particularly so if you’re using a reputable observability platform.
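To illustrate step four, here's a minimal sketch of the routing rule in Python. The send_to_index/send_to_archive functions are hypothetical stand-ins for whatever your shipper or platform actually exposes; in a real setup this rule would usually live in the shipper's config (Vector, Fluent Bit, or the platform's own pipeline rules) rather than in application code:

```python
import json
import logging

SEVERITY_THRESHOLD = logging.WARNING  # WARN and above go straight to the indexed store

def send_to_index(record: dict) -> None:
    # stand-in: e.g. POST to your platform's ingestion endpoint
    print("index:", json.dumps(record))

def send_to_archive(record: dict) -> None:
    # stand-in: e.g. append to a batch destined for cheap object storage
    print("archive:", json.dumps(record))

def route(record: dict) -> None:
    # logging.getLevelName maps a level name ("ERROR") to its number (40)
    level = logging.getLevelName(record.get("severity", "INFO"))
    if isinstance(level, int) and level >= SEVERITY_THRESHOLD:
        send_to_index(record)
    else:
        send_to_archive(record)  # unknown or low-severity levels go to the archive

route({"severity": "ERROR", "msg": "payment failed", "order_id": 42})
route({"severity": "DEBUG", "msg": "cache miss", "key": "user:42"})
```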
People who think “log everything” means “log PII” or “stick everything in the same log ingestion pipeline” are simply doing logging wrong. I’m not normally one to say “you’re doing it wrong” but when it comes to logging, these tools are long since mature now. The problem isn’t the tooling, it’s people’s awareness of it.
This has never been a source of significant issues for me.
Having it is pointless if your SNR is so low that digging through it costs more money than simply waiting for the bug to come up again.
IMO, if a bug never surfaces again, that's not a bug I care about anyway. Keeping all generated data in case someone wants to see the record from a bug 3 months ago is absolutely pointless - if it hasn't surfaced again in the last three weeks, you absolutely have more high-priority things to look at!
I want to see this mythical company, where a paid employee is dedicated by the company to look at a log from 3 months ago, to solve a bug that hasn't resurfaced in that three month period!
Seriously, storing petabytes of logs is a guarantee that someone on your team will write sensitive data to the logs and/or violate regulations.
The problem is, if you knew what was going to go wrong, you'd have fixed it already. So when there's a report that something did not operate correctly and you want to find out WTF happened, the detailed logs are useful, but you don't know which logs are useful for that unless you have recurring problems.
God why do we keep these fire extinguishers around, they sit unused 99.999% of the time.
And there’s a lot of scanning blindness out there. Too much extraneous data can hide correlations between other log entries. And there’s a half-life to the value of logs written for bugs that have already been closed, and it’s fairly short.
I prefer stats because of the way they get aggregated. Though for GIL languages some models like OTEL have higher overhead than they should.
Everything else I could write is just turning various trade-off knobs, which is why I'd guess you haven't seen an out-of-the-box offering that does what you're describing. There's not just one solution to it that would be reasonable for all audiences
“Can’t do X, doesn’t work.”
“Look, it’s easy. Did you even RTFM? http://blah.example.com/doc/articleb#section2”
“Uh, no, because the search engine took me to http://blah.example.com/doc/articleg#section7”
It's better to have all data and not need it, than to need it and not have it. Assuming you have the resources to ingest it in the first place, which seems like the focus of the optimization work they did.
I'm sure someone somewhere is working on an AI that predicts whether a given log is likely to get looked at based on previous logs that did get looked at. You could store everything for 24h, slightly less for 7d, pruning more aggressively as the data gets stale so that 1y out the story is pretty thin--just the catastrophes.
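In case it helps picture that schedule, here's a toy sketch of age-based pruning where errors are always kept and everything else is sampled more aggressively as it ages; the thresholds and rates are invented for illustration:

```python
import random

def keep_probability(age_days: float) -> float:
    """Invented retention curve: full fidelity for 24h, thinning out over time."""
    if age_days <= 1:
        return 1.0      # keep everything for 24h
    if age_days <= 7:
        return 0.5
    if age_days <= 90:
        return 0.1
    return 0.01         # a year out, the story is pretty thin

def should_keep(record: dict, age_days: float) -> bool:
    if record.get("severity") in ("ERROR", "FATAL"):
        return True     # just the catastrophes survive long term
    return random.random() < keep_probability(age_days)

print(should_keep({"severity": "DEBUG", "msg": "cache miss"}, age_days=200))
```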
> As you’ll read below, this saves us millions of dollars a year and allows us to scale out our ClickHouse Cloud service without having to be concerned about observability costs, or make compromises on the log data we retain.
https://clickhouse.com/blog/building-a-logging-platform-with...
You don't understand why DataDog has a $44 billion market cap. It's yet another instance of Finance complaining that the transition to The Cloud gave every engineer a corporate credit card with no spend controls or a way for Finance to turn off the spigot.
business events + error/tail-sampled traces + metrics
... and logs in rare cases when none of the above works. logs are a dump of everything. why would you want to have so many logs in the first place? and then build a whole infra to scale that? and who reads all those logs, and how? they build metrics on top of them? so might as well just build metrics directly and purposefully. with such high volume, even LLMs would not read them (too slow and too costly).. and what would an LLM even tell from those logs? (they may be sparse/low signal, hard to decipher without tool-calling, like creating metrics)
> If a service is crash-looping or down, SysEx is unable to scrape data because the necessary system tables are unavailable. OpenTelemetry, by contrast, operates in a passive fashion. It captures logs emitted to stdout and stderr, even when the service is in a failed state. This allows us to collect logs during incidents and perform root cause analysis even if the service never became fully healthy.
Can you search log data in this volume? ElasticSearch has query capabilities for small scale log data I think.
Why would I use ClickHouse instead of storing log data as json file for historical log data?
(Context: I work at this scale)
Yes. However, as you can imagine, the processing costs can be potentially enormous. If your indexing/ordering/clustering strategy isn't set up well, a single query can easily end up costing you on the order of $1-$10 to do something as simple as "look for records containing this string".
My experiences line up with theirs: at the scale where you are moving petabytes of data, the best optimizations are, unsurprisingly, "touch as little data as few times as possible" and "move as little data as possible". Every time you have to serialize/de-serialize, and every time you have to perform disk/network I/O, you introduce a lot of performance cost and therefore overall cost to your wallet.
Naturally, this can put OTel directly at odds with efficiency because the OTel collector is an extra I/O and serialization hop. But then again, if you operate at the petabyte scale, the amount of money you save by throwing away a single hop can more than pay for an engineer whose only job is to write serializer/deserializer logic.
There are multiple reasons:
1. Databases optimized for logs (such as ClickHouse or VictoriaLogs) store logs in a compressed form, where the values of every log field are grouped and compressed individually (aka column-oriented storage). This results in smaller storage space compared to plain files with JSON logs, even if those files are compressed (see the toy sketch after this list).
2. Databases optimized for logs perform typical queries much faster than grep over JSON files. Performance gains may be 1000x or more because these databases skip reading unneeded data. See https://chronicles.mad-scientist.club/tales/grepping-logs-re...
3. How are you going to grep 100 petabytes of JSON files? Databases optimized for logs allow querying such amounts of logs because they can scale horizontally by adding more storage nodes and storage space.
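As a toy illustration of point 1, the snippet below (Python, with synthetic records invented for the example) compresses the same data as JSON lines versus per-field columns; the column layout usually compresses far better because similar values sit next to each other, though real engines add a lot more on top of this:

```python
import json
import zlib

# Synthetic, repetitive log records, invented purely for illustration.
records = [
    {"ts": 1700000000 + i, "level": "INFO", "service": "api",
     "msg": "request handled", "status": 200 if i % 7 else 500}
    for i in range(10_000)
]

# Row-oriented: one JSON object per line, compressed as a whole.
row_blob = "\n".join(json.dumps(r) for r in records).encode()
row_size = len(zlib.compress(row_blob))

# Column-oriented: group values per field and compress each group separately.
col_size = sum(
    len(zlib.compress("\n".join(str(r[field]) for r in records).encode()))
    for field in records[0]
)

print(f"row-oriented:    {row_size} bytes compressed")
print(f"column-oriented: {col_size} bytes compressed")
```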
In the article, they talk about needing 8k CPUs to process their JSON logs, but only 90 CPUs afterward.
For the latter, I have a very hard time believing we’ve squeezed most of the juice out of compression already. Surely there’s an absolutely massive amount of low-rank structure in all that redundant data. Yeah, I know these companies already use inverted indices and various sorts of trees, but I would have thought there are more research-y approaches (e.g. low rank tensor decomposition) that if we could figure out how to perform them efficiently would blow the existing methods out of the water. But IDK, I’m not in that industry so maybe I’m overlooking something.
100PB is the total volume of the raw, uncompressed data for the full retention period (180 days). compression is what makes it cost-efficient. on this dataset, we see ~15x compression, so we only store around 6.5PB at rest.
Let's take the example of an SFU-based video conferencing app, where user devices go through multiple API calls to join a session. Now imagine a user reports that they cannot see video from another participant. How can such problems be effectively traced?
Of course, I can manually filter logs and traces by the first user, then by the second user, and look at the signaling exchange and frontend/backend errors. But are there better approaches?
What I am saying is that I really dislike working in Clickhouse with all of the weird foot guns. Unless you are using it in a very specific, and in my opinion, limited way, it feels worse than Postgres in every way.
It blows my mind that a high availability system would purposefully prevent availability as a “feature”.
[0] https://martin.kleppmann.com/2015/05/11/please-stop-calling-...
A partition is when some nodes can’t reach other nodes.
Zookeeper instead has an issue where it does try to restart but the timeout (why?!) is too short, something like 30 seconds. If the majority of your nodes don’t all start within a certain time window the whole cluster stays down until someone manually intervenes.
I discovered this fun feature when keeping non-prod systems off to save money in the cloud.
It also has an impact when making certain big bang changes in production.
Other data that is ETL’d and might need to update? That sucks.
Anyway, yes, if your data is highly mutable, or you cannot do batch writes, then yes, Clickhouse is a wrong choice. Otherwise... it is _really_ hard to ignore 50x (or more) speedup.
Logs, events, metrics, rarely updated things like phone numbers or geocoding, archives, embeddings... Whoooop — it slurps entire Reddit in 48 seconds. Straight from S3. Magic.
If you still want really fast analytics, but have more complex scenarios and/or data loading practices, there's also Kinetica... if you can afford the price. For tiny datasets (a few terabytes), DuckDB might be a great choice too. But Postgres is usually a wrong thing to make work.
A Data Warehouse consists of Slowly Changing Dimensions and Facts. None of these require updates.
The problem is, the people who created the spec are just following a trial-and-error approach, which is insane.
Are they just basically large hash tables?
- Smart placement of the data on disk, which allows skipping the majority of data and reading only the needed chunks (and these chunks are stored in a compressed form in order to reduce disk read IO usage even more). This includes column-oriented storage and LSM-like trees.
- Brute-force optimizations all over the place, which allow processing the found data at maximum speed by employing all the compute resources (CPU, RAM, disk IO, network bandwidth) in the most efficient way. For example, ClickHouse can process more than a billion rows per second per CPU core, and the scan speed scales linearly with the number of available CPU cores.
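So, not hash tables so much as sorted, chunked columns with small summaries that let the engine skip whole chunks. Here's a toy sketch of that data-skipping idea (block size and layout invented for illustration; real engines do this per compressed column chunk, plus much more):

```python
# Build fake rows sorted by timestamp, chunk them into blocks, and keep a
# tiny min/max "index" per block so queries can skip blocks entirely.
BLOCK_ROWS = 4096
rows = [{"ts": i, "msg": f"event {i}"} for i in range(100_000)]

blocks = []
for start in range(0, len(rows), BLOCK_ROWS):
    chunk = rows[start:start + BLOCK_ROWS]
    blocks.append({"min_ts": chunk[0]["ts"], "max_ts": chunk[-1]["ts"], "rows": chunk})

def query(ts_from: int, ts_to: int):
    hits, scanned = [], 0
    for b in blocks:
        if b["max_ts"] < ts_from or b["min_ts"] > ts_to:
            continue                      # whole block skipped without touching its rows
        scanned += 1
        hits.extend(r for r in b["rows"] if ts_from <= r["ts"] <= ts_to)
    return hits, scanned

hits, scanned = query(50_000, 50_100)
print(f"matched {len(hits)} rows while scanning {scanned} of {len(blocks)} blocks")
```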
(one of my immense frustrations with kubernetes - none of the commands for viewing logs seem to accept logical aggregates like "show me everything from this deployment").
k9s (k9scli.io) supports this directly.
Even when it comes to logging in the first place, I have rarely seen developers do it well, instead logging things that make no sense just because it was convenient during development.
But that touches on something else. If your logs are important data, maybe logging is the wrong way to go about it. Instead think about how to clean, refine and persist the data you need like your other application data.
I see log and trace collecting in this way almost as a legacy-compatibility thing, analogous to how Kubernetes and containerization let you wrap up any old legacy application process into a uniform format: just collecting all logs and traces is backwards compatible with every application. But in order to not be wasteful and only keep what is valuable, a significant effort would be required afterwards. Well, storage and memory happen to be cheap enough to never have to care about that.
Would you delete a text file that's a few KB from a modern device in order to save space? It just doesn't make any sense.
Sure, we should cut waste, but compression exists for a reason. Dropping valuable observability data to save space is usually shortsighted.
And storage isn't the bottleneck it used to be. Tiered storage with S3 or similar backends is cheap and lets you keep full-fidelity data without breaking the budget.
My centrist take is that data can be represented wastefully, which is often ignored.
Most "wide" log formats are implemented... naively. Literally just JSON REST APIs or the equivalent.
Years ago I did some experiments where I captured every single metric Windows Server emits every second.
That's about 15K metrics, down to dozens of metrics per process, per disk, per everything!
There is a poorly documented API for grabbing everything ('*') as a binary blob of a bunch of 64-bit counters. My trick was that I then kept the previous such blob and simply took the binary difference. This set most values to zero, so then a trivial run length encoding (RLE) reduced a few hundred KB to a few hundred bytes. Collect an hour of that, compress, and you can store per-second metrics collected over a month for thousands of servers in a few terabytes. Then you can apply a simple "transpose" transformation to turn this into a bunch of columns and get 1000:1 compression ratios. The data just... crunches down into gigabytes that can be queried and graphed in real time.
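For anyone curious what that looks like, here's a toy Python sketch of the delta-plus-RLE idea; the real thing worked on Windows performance counter blobs and was surely more careful, so the encoding below is deliberately simplistic and the values are made up:

```python
import struct
import zlib

def delta_rle(prev: bytes, curr: bytes) -> bytes:
    """XOR against the previous snapshot, then run-length encode the zero runs."""
    diff = bytes(a ^ b for a, b in zip(prev, curr))
    out = bytearray()
    i = 0
    while i < len(diff):
        if diff[i] == 0:
            run = 0
            while i < len(diff) and diff[i] == 0 and run < 255:
                run += 1
                i += 1
            out += bytes([0, run])        # marker 0 + length of zero run
        else:
            out += bytes([1, diff[i]])    # marker 1 + literal changed byte
            i += 1
    return bytes(out)

# 15K metrics as 64-bit counters; only a couple change between samples.
prev_vals = list(range(15_000))
curr_vals = list(prev_vals)
curr_vals[42] += 7
curr_vals[9001] += 1

prev = struct.pack("<15000Q", *prev_vals)
curr = struct.pack("<15000Q", *curr_vals)

encoded = delta_rle(prev, curr)
print(len(curr), "->", len(encoded), "->", len(zlib.compress(encoded)), "bytes")
```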
I've experimented with Open Telemetry, and its flagrantly wasteful data representations make me depressed.
Why must everything be JSON!?
OTEL can do gRPC and a storage backend can encode that however it wants. However, I do agree it doesn't seem like efficiency was at the forefront when designing OTEL
Google was doing something comparable internally and this spawned some fun blog titles like “I have 64 cores but I can’t even move my mouse cursor.”
While not difficult, I am just curious how others approached it.
That's a bit of a blanket statement, too :) I've seen many systems where a lot of stuff is logged without much thought. "Connection to database successful" - does this need to be logged on every connection request? Log level info, warning, debug? Codebases are full of this.