I use this approach because on macOS, with SIP, Bear no longer works as it is forbidden from injecting its hooks into the system-protected clang shim. There may be other solutions.
(I haven't explored the space in a few years; such tools have historically fared poorly with embedded targets, where one file may be compiled many different ways for different platforms.)
- headers generated during the build: tools may find the wrong header, or fail when the header isn't found
- generated sources: does the source file in the database even exist yet?
For those cases, one still needs to consider the "out of date" situation, where a previous build left a stale version lying around. That's something tools have no real way to detect today.
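The "missing generated source" half of this is at least mechanically checkable. A minimal sketch (the `stale_entries` helper is hypothetical; it assumes only the standard `compile_commands.json` layout) that flags database entries whose source file doesn't exist on disk yet:

```python
import json
import os

def stale_entries(db_path):
    """Return source paths listed in a compilation database that don't
    exist on disk (e.g. generated sources that haven't been built yet)."""
    with open(db_path) as f:
        db = json.load(f)
    missing = []
    for entry in db:
        # "file" may be relative to "directory" per the compile_commands.json spec;
        # os.path.join leaves absolute paths untouched.
        src = os.path.join(entry["directory"], entry["file"])
        if not os.path.exists(src):
            missing.append(src)
    return missing
```

Note this only catches files that are absent entirely; the "out of date" case (a stale generated file left by a previous build) can't be detected from the database alone.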
Beyond that, modules present an issue. The command line gives no indication of what `import foo;` means to the TU in question (and in CMake's case, the file which does carry that information is generated at build time anyway). Even if it did, the BMI may be in an incompatible format (e.g., using GCC for the build, but running `clang-analyzer` against the database), so the tool needs to go and make its own BMI. I gave a talk at CppCon about "build databases", which is basically "compilation databases, but consider what it means to support modules". (The video isn't available yet.)
I'm also working on getting a paper on them through ISO C++'s SG15 (Tooling) for proper specification.
FD: CMake developer
> In fact I can't even think of a reasonable case where it would be even a minor problem.
On my last project, a build took 2+ hours on a 128-core Threadripper. It wasn't every day that the lowest-level stuff changed, but it was probably once a sprint or so. Waiting 2 hours for the compilation database to be ready isn't tenable; Rider and Visual Studio could generate a usable model of the project in 2-3 minutes.
Almost nobody is writing Ninja files by hand; if you have to write something along those lines by hand, Makefiles would make more sense than Ninja. So even if Ninja supports exporting commands from hand-written files, that's a use case that barely matters, because almost everyone uses CMake-generated Ninja files.
Also, that's kind of the point of mentioning Ninja. Inherently, anything that uses Ninja is in the same boat and doesn't need to start a build.
FD: CMake developer
I use this often in combination with direnv and Nix flakes for a good developer experience. (Note that if you need to keep the Nix flake VCS-ignored, you'll need to tell direnv explicitly not to use a Git fetcher for Nix, with something like `use flake path://$PWD`. This isn't needed if you can just reuse a Nixpkgs expression, e.g. `use flake nixpkgs#wineWow64Packages.wine-unstable`.)
One thing that sucks is that it doesn't seem to be easy to handle cross-compilation stuff. Wine is a particularly challenging case, as many of the binaries are now cross-compiled with MinGW. It still provides better completion than nothing, but I do wish I could get it to be perfect.
When using Nix with MinGW I struggled even more, because Nix's MinGW is even weirder... But with enough flag mangling, you can even convince clangd on Linux to give somewhat decent completions for a MinGW compilation.
One obvious disadvantage is that you more or less need a full rebuild to get the compilation database to come out correct. At least in my experience, running Bear on a partial build doesn't seem to work additively, though maybe I was holding it wrong.
These days I've found my peace just writing my own Python scripts to parse verbose build logs. You just need to extract the filename, base directory, compiler, and arguments. Sometimes you're lucky: the build system supports a dry run, and you don't even have to run a real build to get the logs.
It's way less invasive, there's no need to ask devops for additional tools, and you can adapt it to any build system.
Whenever I start with a new toolchain, I spend a couple of hours tweaking the parser and I'm good to go.
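The core of such a script can be quite small. A rough sketch (the regex, the `parse_build_log` name, and the GCC/Clang-style invocation format are all assumptions here; real logs need the per-toolchain tweaks described above):

```python
import json
import re

# Matches GCC/Clang-style compile lines, e.g. "gcc -c -Iinclude -o foo.o src/foo.c".
COMPILE_RE = re.compile(r"^(?P<compiler>\S*(?:g\+\+|gcc|clang\+\+|clang))\s+(?P<args>.+)$")

def parse_build_log(log_text, build_dir):
    """Turn a verbose build log into compile_commands.json-style entries."""
    entries = []
    for line in log_text.splitlines():
        line = line.strip()
        m = COMPILE_RE.match(line)
        if not m:
            continue
        # Pick out the source file(s) from the argument list by extension.
        for arg in m.group("args").split():
            if arg.endswith((".c", ".cc", ".cpp", ".cxx")):
                entries.append({"directory": build_dir, "command": line, "file": arg})
    return entries

log = """\
gcc -c -Iinclude -o foo.o src/foo.c
mkdir -p build
clang++ -std=c++20 -c src/bar.cpp -o bar.o
"""
print(json.dumps(parse_build_log(log, "/home/me/project"), indent=2))
```

This deliberately ignores linker lines and anything that doesn't look like a compiler invocation; quoting, response files, and wrapper scripts (`ccache`, `distcc`) are where the real per-toolchain effort goes.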
https://develop.spacemacs.org/layers/+lang/c-c++/README.html