16 points by mooreds 2 days ago | 7 comments
  • NickGerleman a day ago
    PCHs left a sour taste in my mouth after working on a project that very liberally added commonly imported headers to one huge one. In practice, it meant each TU was indirectly including many more headers than it needed, and it was harder for humans (or IDEs) to reason about the real dependency chain. While builds were faster, they also started using much more memory.

    You can also end up needing to rebuild the world when touching a header that is part of the PCH, even if it isn't really needed by all the targets.

    Modules and header units were supposed to solve these problems a lot more cleanly, but they are still not well supported.

  • o11c a day ago
    This blog post has a few inaccuracies, so the situation isn't as bad as it seems. (It is still annoying and requires thought though).

    I'm just going to ignore the part where Clang apparently sabotages itself by requiring `-include-pch`. You really shouldn't be using Clang in production, because it has all sorts of hard limitations, so I am not at all surprised at hitting another one; even if this particular one gets fixed, you'll still run into several others, whether you realize it or not (since usually it silently does the wrong thing). Your `./configure` should already be detecting "is this an actually-working GCC" anyway, so you can use that as the condition for enabling the PCH logic at all.
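    For reference, the Clang workflow in question looks roughly like this (file names here are hypothetical); the PCH has to be named explicitly on every compile line rather than being discovered through the include path the way GCC does it:

    ```shell
    # Build a PCH with Clang, then consume it. A plain '#include'
    # will not find the .pch via -I; it must be passed explicitly
    # with -include-pch on each compilation.
    printf '#include <vector>\n' > all.hpp
    printf 'int main() { return 0; }\n' > main.cpp
    clang++ -x c++-header all.hpp -o all.pch
    clang++ -include-pch all.pch -c main.cpp -o main.o
    ```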

    The "can only use one PCH per compilation" limitation also exists in GCC and has been well documented there since I started using it (maybe in 2010?), but as noted it is not a major limitation. Assuming you're practicing sufficient rigor in your build process, you basically have three options: one PCH for the whole project; one PCH per directory-ish unit (assuming each directory is a semi-independent part of the project; this is probably sanest); or precompiling every header (which has performance implications whenever you edit a header).

    The "build vs src" purity problem has a simple solution - just use `-I` to specify an "overlay" include directory that is in the build directory, and have your PCH-making rule specify that in the first place. That's it.
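    A minimal sketch of that overlay trick with GCC (directory names here are hypothetical); the TU includes the header with angle brackets so the `-I` order, rather than the including file's own directory, decides what gets found first:

    ```shell
    mkdir -p demo/src demo/build/inc && cd demo
    # An umbrella header in the source tree, and a TU that includes it.
    printf '#include <vector>\n#include <string>\n' > src/all.hpp
    printf '#include <all.hpp>\nint main() { return 0; }\n' > src/main.cpp
    # Precompile the header INTO the build tree, not next to the source.
    g++ -x c++-header -c src/all.hpp -o build/inc/all.hpp.gch
    # -Ibuild/inc comes first, so GCC finds all.hpp.gch there before
    # it ever looks at src/all.hpp; the source tree stays pristine.
    g++ -Ibuild/inc -Isrc -c src/main.cpp -o build/main.o
    ```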

    • aw1621107 a day ago
      > You really shouldn't be using Clang in production because it has all sorts of hard limitations

      Wait, do you mean you shouldn't be using Clang PCHs in production, or shouldn't be using Clang in production at all?

      • o11c a day ago
        At all. Clang has a lot of footguns, and filing bugs about "regression compared to GCC" does not actually get them fixed.

        Remember, the whole point of Clang is so that people can make their proprietary compiler forks. Everything else, including quality, is secondary.

        • nuudlman a day ago
          Do you have any specific examples here?

          While no compiler is perfect (e.g., pointer provenance), one could just as easily argue that Clang has higher quality: most modern C/C++ tooling is built on it, sanitizers are brought up in Clang and sometimes ported to GCC later, and all of the modern safety work is in Clang (-Wbounds-safety, the various thread safety approaches, lifetime analysis, Sean's borrow-checked C++, Fil-C). The Clang static analyzer has also been used in production for over a decade, whereas -fanalyzer is still experimental and limited to C, …

          I have the feeling that the bugs that aren't being fixed are either ones where the bug report is unactionable (a 3-year-old Ubuntu Franken-Clang is slower than a 3-year-old Franken-GCC) or where the problem is very non-trivial (e.g., the recent discussion about enabling dependence analysis by default, the aforementioned pointer provenance issues stemming from C, or ABI/lowering warts around large _BitInt).

  • pjmlp a day ago
    Naturally they don't care about Windows and then blame precompiled headers.

    Traditionally they have always sucked on UNIX systems; the only place I have been able to fully enjoy the improvements they bring to the table has been on PC systems, since Windows 3.x, across OS/2 and the Borland/Microsoft compilers ever since.

    Additionally, on my C++ hobby coding, since I am on Windows, I am fully into modules.

  • pyuser583 a day ago
    Wow, I didn't know Squid was still around. Not the kind of software that gets much publicity.
  • mayoff a day ago
    I hate CMake, but this is something CMake does well in my experience. I had to write a Godot 4 plugin, and Godot has many, many header files. I made a project header that #included all the Godot headers, and a single target_precompile_headers directive in CMakeLists.txt was enough to get it working on Mac and Linux (and I think on Windows, but I didn't need to run it there).
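    As a sketch (the target and file names here are made up, not the actual plugin's), the whole setup can be as small as:

    ```cmake
    # Hypothetical GDExtension target; godot-cpp is assumed to be
    # pulled in elsewhere (e.g. via add_subdirectory(godot-cpp)).
    add_library(my_plugin SHARED src/plugin.cpp)
    target_link_libraries(my_plugin PRIVATE godot-cpp)

    # src/godot_pch.hpp is an umbrella header that #includes the Godot
    # headers the plugin uses; CMake emits the right per-compiler flags
    # (GCC's .gch lookup, Clang's -include-pch, MSVC's /Yc and /Yu).
    target_precompile_headers(my_plugin PRIVATE src/godot_pch.hpp)
    ```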
  • limoce a day ago
    Do C++ modules solve this problem?
    • pjmlp a day ago
      Kind of. The current problem is that they aren't fully mature; you get a good experience with VC++ and Clang (vLatest), alongside MSBuild/CMake/ninja.

      VS IDE tooling is still kind of broken because it relies on EDG instead of VC++, and fixing module IntelliSense has been low priority for EDG, and apparently for MS as well, though I would assume a $4 trillion valued company would care about their suppliers.

      CLion has better support in that regard.

      GCC is still not fully there, and as for Apple clang, well, Apple is happy with module header maps for their Swift/Objective-C integration, so let's see.

    • o11c a day ago
      C++ modules solve exactly one of the problems - the "one PCH limit" - at the cost of introducing several more. Certainly they are not more compiler-independent!
  • xonre 20 hours ago
    Tl;dr: it's an autoconf problem

    The article also misses a comparison of full-rebuild speeds. PCH gives a massive speedup in the edit-compile-debug cycle, or when several modules can share a single PCH.