They have support for many languages https://grafana.com/docs/pyroscope/latest/configure-client/l... (also based on eBPF).
C++ from Meta/FB is much more pleasant to read than C++ from ... other, older big tech companies. I appreciate that.
This makes things so, so, so much easier. Otherwise, a lot of effort has to be put into creating an unwinder in eBPF code, essentially porting the .eh_frame CFA/RA/BP calculations.
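To illustrate why frame pointers make this so much easier: with frame pointers preserved, each stack frame starts with [saved caller frame pointer][return address], so the stack is just a linked list. A minimal sketch (the struct layout and names are illustrative; a real eBPF unwinder does this same walk over the target's stack with bpf_probe_read_user, while an .eh_frame unwinder must instead evaluate per-PC CFA/RA rules):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical in-memory view of one frame under -fno-omit-frame-pointer:
// the saved frame pointer (e.g. rbp on x86-64) points at the caller's pair.
struct Frame {
    const Frame* caller;    // saved frame pointer of the caller
    uintptr_t return_addr;  // address to symbolize later
};

// Walk the frame-pointer chain, collecting return addresses leaf-first.
std::vector<uintptr_t> walk_frame_pointers(const Frame* fp,
                                           int max_depth = 128) {
    std::vector<uintptr_t> stack;
    while (fp != nullptr && max_depth-- > 0) {
        stack.push_back(fp->return_addr);
        fp = fp->caller;  // follow the saved frame pointer
    }
    return stack;
}
```

The whole unwinder is a bounded pointer chase, which is why it fits comfortably within eBPF's verifier constraints, unlike interpreting DWARF CFI.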
They claim to have event profilers for non-native languages (e.g., Python). Does this mean they use something similar to https://github.com/benfred/py-spy ? Otherwise, it's not obvious to me how they can read Python interpreter state.
Lastly, the GitHub repo https://github.com/facebookincubator/strobelight is pretty barebones. I wonder when they'll update it.
1) native unwinding: https://www.polarsignals.com/blog/posts/2022/11/29/dwarf-bas...
2) python: https://www.polarsignals.com/blog/posts/2023/10/04/profiling...
Both available as part of the Parca open source project.
(Disclaimer I work on Parca and am the founder of Polar Signals)
I have multiple questions if you don’t mind answering them:
Is there significant overhead to native and Python unwinding in eBPF? eBPF needs to constantly read and copy from user space to read data structures.
I ask this because unwinding with frame pointers can be done by reading without copying in userland.
Python can be run with different engines (CPython, PyPy, etc.) and versions (3.7, 3.8, …), and compilers can reorganize offsets. Reading from offsets seems handwavy to me. Does this work well in practice, and when has it failed?
Overhead ultimately depends on the frequency; it defaults to 19 Hz per core, at which it's less than 1%. That's tried and tested with all sorts of super-heavy Python, JVM, Rust, etc. workloads. Since it's per core, it tends to produce plenty of stacks to build statistical significance quickly. The profiler is essentially a thread-per-core model, which certainly helps for perf.
The offset approach has evolved a bit; it's mixed with some disassembling today, and with that combination it's rock solid. It is dependent on the engine, and in the case of Python it only supports CPython today.
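The "read interpreter structs at version-specific offsets" idea can be sketched as below. Everything here is invented for illustration: a real profiler ships one offset table per supported CPython release (since field layout changes between versions) and reads the target process's memory with bpf_probe_read_user or process_vm_readv rather than a local buffer.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical per-version offset table; not real CPython offsets.
struct OffsetTable {
    size_t current_frame;  // offset of the frame pointer in thread state
    size_t lineno;         // offset of the line number within a frame
};

// Read a field of type T at a byte offset inside a struct we can only
// see as raw bytes. In eBPF this would be a bounded bpf_probe_read_user
// into a stack buffer instead of a plain memcpy.
template <typename T>
T read_field(const uint8_t* base, size_t offset) {
    T value;
    std::memcpy(&value, base + offset, sizeof(T));
    return value;
}
```

The fragility the parent asks about lives entirely in the offset table: if the deduced offsets don't match the running interpreter's actual layout, the reads return garbage, which is presumably why mixing in disassembly to confirm offsets makes it "rock solid".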
Also, what's really cool to see is that Facebook's internal UI actually looks decent. I've never worked at a company anywhere close to that size, but internal tooling always looks like something a dog puked up.
That said, as with anything from Meta, I take this with a grain of salt, and the fact that I can't tell what they stand to gain from it makes me suspicious.
Meta is one of the biggest contributors to FOSS in the world. (React, PyTorch, Llama, …). They stand to gain what every big company does, a community contributing to their infra.
You’ll note that nobody is open-sourcing their ad recommender; that’s the one you should be skeptical about if you ever see it. You don’t share your secret sauce.
Actually... (2019) https://ai.meta.com/blog/dlrm-an-advanced-open-source-deep-l...
Source code:
https://github.com/facebookresearch/dlrm
Paper:
https://arxiv.org/abs/1906.00091
Updated 2023 blog post, though it's solely about content recommendation; ads recommendation is ~90% the same:
https://engineering.fb.com/2023/08/09/ml-applications/scalin...
It's a little out of date, but the internal one is built with the same concepts, just more advanced modeling techniques and data.
Why not exactly? Between Meta’s great contributions to the open-source ecosystem and Mark behaving more like a normal man nowadays, right now is the only time in a long time that I’ve considered applying to go work at Meta. I’ve heard several of my colleagues and friends say the same thing in recent months.
You’re certainly entitled to your opinions and ad hominems. Many folks, including myself, disagree with you, so there’s that.
But man is that dude a bad example of how to be a human.
I'll cut him some slack for growing up in public with stupid money and no one to regulate his impulses, but uff da.
Wake me up when he's old enough for his lagging prefrontal cortex to catch up with the rest of him.
Seeing the title and the domain, I thought this was user profiling, and I was wondering why Meta would be publishing this.
Perhaps a contributing factor is how HN shows only the final non-eTLD [0] label of the domain. If it showed all labels, you'd have seen "engineering.fb.com" which, while not a dead giveaway, implies that the problem space is technical.
It would be nice if this aggressive truncation were applied only above a certain threshold of length.
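The truncation described above, keeping only the label just left of the eTLD, can be sketched like this. This is deliberately simplified: it treats only the final label as the eTLD, whereas a real implementation consults the Public Suffix List to handle multi-label suffixes like "co.uk".

```cpp
#include <string>

// Return the label immediately left of a single-label eTLD:
// "engineering.fb.com" -> "fb". Simplified; ignores suffixes
// like "co.uk" that need the Public Suffix List.
std::string site_label(const std::string& host) {
    size_t tld_dot = host.rfind('.');
    if (tld_dot == std::string::npos || tld_dot == 0) return host;
    size_t prev_dot = host.rfind('.', tld_dot - 1);
    size_t start = (prev_dot == std::string::npos) ? 0 : prev_dot + 1;
    return host.substr(start, tld_dot - start);
}
```

Under this rule both "fb.com" and "engineering.fb.com" display as "fb", which is exactly the ambiguity being complained about.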
[1] https://www.polarsignals.com/
(Disclaimer: founder of polar signals)
I'm curious if anyone is working on "self-healing" systems where the optimization feedback loop is closed automatically rather than requiring human engineers to parse complex profiling data.
Fractal compute expense modelling is hard.
One may do well applying fluid dynamics (of a kind we cannot maintain in our heads) to compute requirements; it will be funny once we realize that everything is micro (pico) fluid dynamics in general.
> A seasoned performance engineer was looking through Strobelight data and discovered that by filtering on a particular std::vector function call (using the symbolized file and line number) he could identify computationally expensive array copies that happen unintentionally with the ‘auto’ keyword in C++.
> The engineer turned a few knobs, adjusted his Scuba query, and happened to notice one of these copies in a particularly hot call path in one of Meta’s largest ads services. He then cracked open his code editor to investigate whether this particular vector copy was intentional… it wasn’t.
> It was a simple mistake that any engineer working in C++ has made a hundred times.
> So, the engineer typed an “&” after the auto keyword to indicate we want a reference instead of a copy. It was a one-character commit, which, after it was shipped to production, equated to an estimated 15,000 servers in capacity savings per year!
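The bug described is the classic range-for copy: plain `auto` deduces a value type, so every iteration deep-copies the element, while `auto&` binds a reference. A minimal illustration with an invented element type (the post doesn't show the real code), using a copy counter to make the difference observable:

```cpp
#include <cstddef>
#include <vector>

// Invented stand-in for the heavy element type in the anecdote.
struct Record {
    std::vector<float> features;
    Record() = default;
    Record(const Record& other) : features(other.features) { ++copies; }
    static inline size_t copies;  // counts deep copies (C++17 inline var)
};

// Bug: `auto` deduces `Record`, so each iteration copies the element
// (including its features vector).
size_t sum_by_value(const std::vector<Record>& records) {
    size_t n = 0;
    for (auto r : records) n += r.features.size();
    return n;
}

// One-character fix: `auto&` deduces `const Record&`, no copies.
size_t sum_by_ref(const std::vector<Record>& records) {
    size_t n = 0;
    for (auto& r : records) n += r.features.size();
    return n;
}
```

Both functions return the same answer; only the copy count differs, which is exactly why the bug is invisible in code review but shows up in a profiler.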
1. The copy was needed initially.
2. This structure wasn’t as heavy back then.
… over time the code evolved so that the structure became heavy and the copy became unnecessary. That’s harder to find without profiling to guide things.