This but unironically.
This is a great line.
constexpr and std::execution seem like neat ideas, maybe I'll give them a shot if I build an AI harness around the compiler so it doesn't make me feel like a hopeless idiot for trying new things.
D's equivalent to "constexpr" is "compile-time function evaluation": in any context where it only makes sense to run code at compile time, it will do so. This makes it trivial to do some pretty complex things at compile time. I put together an example that shows creating static arrays, dynamic arrays, and a dynamic array with a partial Fibonacci sequence, all at compile time[0].
[0] https://gist.github.com/SuperDoxin/d9fcc68b73c035cbde7f0bd08...
The standard does require that if work was done at compile time, the compiler must tell you if that work was nonsense, but (a) C++ is so complicated that your compiler likely has many bugs in this respect, and (b) you probably aren't sure the compiler actually did the work you expected at compile time; knowing all the excuses requires considerable expertise.
If all you want to do is bake data into a compiled program, there is the #embed feature added to C++26.
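A minimal sketch of how that looks, assuming a C++26 compiler with #embed support and a file named "lut.bin" next to the source (both names made up for illustration):

  // #embed expands to a comma-separated list of integer literals,
  // so it can directly initialize a constexpr array.
  static constexpr unsigned char lut[] = {
      #embed "lut.bin"
  };

  static_assert(sizeof(lut) > 0); // the data is baked in at compile time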
New (and delete) can be used in constexpr functions; however, memory "allocated" like that cannot leave the constexpr "sandbox", so to speak. Therefore a std::vector cannot be produced at compile time, but a std::array can.
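Roughly like this, as a sketch (names are illustrative):

  #include <array>
  #include <cstddef>
  #include <vector>

  // A std::vector may exist *during* constant evaluation, but its
  // allocation must not escape, so the result is copied into a
  // fixed-size std::array before being returned.
  constexpr std::array<int, 3> make_table() {
      std::vector<int> v{1, 2, 3};   // fine inside the "sandbox"
      std::array<int, 3> out{};
      for (std::size_t i = 0; i < out.size(); ++i) out[i] = v[i];
      return out;                    // fixed-size result can escape
  }

  constexpr auto table = make_table();          // OK
  // constexpr std::vector<int> bad{1, 2, 3};   // error: allocation escapes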
If you are working with fixed-size data like LUTs, just use std::array [1].
[1] Make sure not to use std::to_array when embedding 200KB+ files: it's a mere constexpr function, not a language construct, and will exceed constexpr evaluation limits. Either specify the size explicitly or use a C-style array in that case.
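For illustration, a sketch of both options (data and sizes made up):

  #include <array>

  // Fine for small tables: std::to_array deduces the size.
  constexpr auto small = std::to_array<unsigned char>({0x01, 0x02, 0x03});

  // For very large blobs, spell the size out (or use a C-style array)
  // so the compiler doesn't have to constexpr-evaluate std::to_array
  // over hundreds of thousands of elements.
  constexpr std::array<unsigned char, 3> big = {0x01, 0x02, 0x03};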
This does not look like a productive way to get things done.
You will lose many nice features like fancy strings and easy array resizing (which may or may not be acceptable to you), but you don’t have to pay for it if you don’t use it. (Mostly)
This does seem pretty complicated. And I doubt I will ever use it. But for some the trade off is worth it, and they get to make the choice.
In the real world I would think trying to do any of the things discussed in this article should be an automatic commit rejection on any project.
In the real world, failing to understand what you're reading and eagerly generalising to the entire language should be an automatic hiring rejection in any team.
There's a point at which "pragmatism" starts being anything but, and it was around C++11, give or take a standard. How on earth do you use it day to day and not feel that the schizophrenic non-design is a generalized property of the whole language?
For real though, defend constexpr two-stepping as a real use case for serious people.
Or did you just get a little bit confused and think the criticism here is actually coming from people who are out of their depth from hearing "compile time optimization" or don't know what reflection is?
Yep, definitely failure of understanding. :)
> For real though, defend constexpr two-stepping as a real use case for serious people.
Of course, here's a use case: I am a serious person developing a library that provides a nice API to solve a real-world problem using C++26 reflection. As part of the internal library machinery, I need some temporary storage for some compile-time algorithm (e.g. building a graph for automatic parallelization, or some other thing like that). In an internal helper function of my library, I use constexpr two-stepping to solve the problem without imposing hard limits on my algorithms and keeping the final API as simple as possible for the end user.
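Concretely, the technique looks something like this (a sketch, all names made up): evaluate the computation once just to learn the result's size, then a second time to materialize it into a fixed-size array that may escape constant evaluation.

  #include <array>
  #include <cstddef>
  #include <vector>

  // Some compile-time computation whose result size isn't known up front.
  constexpr std::vector<int> compute() {
      std::vector<int> v;
      for (int i = 1; i <= 5; ++i) v.push_back(i * i);
      return v;
  }

  // Step 1: run it just for the size (the allocation dies within the
  // full-expression, so this is a valid constant expression).
  constexpr std::size_t n = compute().size();

  // Step 2: run it again, copying into a fixed-size array.
  constexpr auto result = [] {
      std::array<int, n> out{};
      auto v = compute();
      for (std::size_t i = 0; i < n; ++i) out[i] = v[i];
      return out;
  }();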
Then I submit my PR, but htobc reviews it and immediately rejects it, ignoring the real business value of the library because I made a conscious engineering decision to use a niche technique to solve a language limitation as part of my library implementation.
Then my startup fails.
(There are far simpler ways to do that.)
The reasonable engineer understands why they exist. The engineer with a hate boner or negative bias for a particular technology jumps on the opportunity to make a generalized statement.
I'd rather not be friends with the second kind of engineer. :)
I think it's telling that you are quick to respond to my superficial meta discussion, but have left the one reply up the chain about this being the "third reflection standard from the C++ committee" conspicuously unaddressed. Gotta dodge saying anything real at all costs? Or just trying to get your rocks off with an internet fight?
I find it ironic that you try to "avoid a language flame-war" by implying that anyone writing code like that in the article, no matter how niche the reason or situation, is a moron.
I did not say anyone using this is a moron; I said I believed a commit exercising the niche edge cases demonstrated in the article should be rejected. In fact, had you asked (instead of jumping to saying I was too stupid to understand what I was reading), I would have said you need to be very smart to write code like the examples in the article, and that's the reason I would reject it.
Sure it's fine and even fun to exercise these relatively recently released features and really explore the niche edge cases in a personal project, but they are a future maintenance headache in team based software development. The kind of compile errors walked through in the video are simply not where I would want my team spending effort debugging on any given day of the week.
Metaprogramming was not invented in 2020. There are a dozen different ways to get those optimizations in front of the compiler. They might not be as theoretically pure, or might not be using first order features of the language, or might have to sacrifice parts of the static analysis flow, but the trade offs are more than worth it when it comes to delivering.
But even in a real-world project, using this technique might have value as part of an internal implementation detail of a useful component that leverages C++26 reflection.
"trying to do any of the things discussed in this article should be an automatic commit rejection on any project" is a very strong statement that suggests you won't even care about the particular situation. It implies that, as soon as you see something that you deem overly weird, you immediately reject it without considering why it is weird and if that weirdness is needed to achieve a valuable goal.
Even in a mature and robust project there is room for esoteric techniques if they are well justified, clearly commented, and weighed against alternatives.
To give you a realistic example: you can reflect on struct data member names even in C++20 if you parse __PRETTY_FUNCTION__ at compile time. This is what Boost.PFR, Lahzam, and my own MiniPFR libraries do.
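To make that concrete, here's a minimal sketch of the underlying trick (GCC/Clang only; the parsing below is my own simplification, not PFR's actual implementation), shown for type names since member names use the same mechanism:

  #include <string_view>

  // The compiler bakes the template argument into __PRETTY_FUNCTION__,
  // e.g. "... type_name() [with T = Point; ...]" on GCC or
  // "... type_name() [T = Point]" on Clang, and we slice the name
  // out at compile time.
  template <typename T>
  constexpr std::string_view type_name() {
      std::string_view s = __PRETTY_FUNCTION__;
      auto start = s.find("T = ") + 4;
      auto end = s.find_first_of(";]", start);
      return s.substr(start, end - start);
  }

  struct Point { int x, y; };
  static_assert(type_name<Point>() == "Point");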
It's what a common developer would refer to as "disgusting" or "weird", and what would prompt the average C++ hater to take the opportunity to bash the language and remind the world how they think the Standards Committee is a bunch of incompetent people. Not saying this is what you think, but you see where I'm coming from.
If people had applied blanket statements like yours and decided that any code parsing __PRETTY_FUNCTION__ at compile time "should be an automatic commit rejection on any project", we wouldn't have a genuinely useful feature that allows us to reflect data member names in C++20, six years prior to having official reflection.
Yes, it's an esoteric technique. It's a hack, a workaround for a language limitation or language design failure. It's "experts-only" or whatever. Exactly the same way you feel about what the article demonstrates.
Yet, there was a good use case for it -- a very useful and valid one.
Outright banning certain techniques because you cannot understand how they could be valuable in very specific scenarios is honestly infuriating. Obviously, the use of niche/dangerous workarounds should not be liberally promoted. At the same time, automatically rejecting any commit is an insult to the intelligence of the person who wrote that code: perhaps they had a good reason to write it that way.
The problem with this discussion is that you pulled "because you cannot understand how they could be valuable" out of nowhere. They're talking about rejecting it despite the value. You should not be infuriated for that reason.
If there's an insult based on intelligence, it's saying they're too smart and need to write something dumber so that more people can maintain it.
LLVM and many conferences want nothing to do with him. gcc and the ISO standards committee still invite him to contribute, although not in a leadership role.
O'Dwyer is a remote consultant on C++. It's like the ideal role for a sex offender since it doesn't require him to go outside or put him in a room alone with others.
I don't think he's ever been accused of rape, sexual abuse, or even harassment since being released from jail. This is remarkable given that ISO C++ committee meetings have a reputation for drunkenness, for being unsafe, and for harassment of women. Many other members have been credibly accused of harassment.
It's easy to say "remote work is a great idea for sex offenders" but difficult to say "in my profession".
  constexpr std::vector<int> f() { return {1,2,3}; }
  constinit std::vector<int> p = f(); // error
in D!

  const(int)[] f() { return [1,2,3]; }
  immutable int[] aaa = f();

And the object file, look ma, aaa[] is statically allocated:

  internal:
  db 001h,000h,000h,000h,002h,000h,000h,000h ;........
  db 003h,000h,000h,000h,000h,000h,000h,000h ;........
  __D5test63aaayAi:
  db 003h,000h,000h,000h,000h,000h,000h,000h ;........
  db 000h,000h,000h,000h,000h,000h,000h,000h ;........

We're unavoidably going to learn at compile time how big the array is, and knowing the size is always the same or better, so we should always take advantage of that.

  int[f().length] array;

should do it.