The other thing is that high-level optimizations tend to be hard to come by in hardware. Most datapath hardware is not highly fixed-function; instead it consists of somewhat general blocks that contain a few domain-specific fused ops. So we either have hardware specifications in natural language, or RTL specifications that are too low level to do meaningful design exploration. Newer RTL languages and high-level synthesis tools _also_ tend to be too low level for this kind of thing; it's a pretty challenging problem to design a formal specification language that is simultaneously high level enough and yet allows a compiler to do a good job of finding the optimal chip design. Approximate numerics are the most concrete example of this: there just aren't really any good algorithms for solving the problem of "what is the most efficient way to approximate this algorithm with N% precision", and that's not even counting the flexibility-vs-efficiency tradeoff, which requires something like human judgement, or the fact that in many domains it's hard to formulate an error metric that isn't either too conservative or too permissive.
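To make the search problem concrete, here is a deliberately naive sketch (everything in it — the target function, the bit-width range, and the "cost = output width" model — is a toy assumption of mine, not from any real tool): exhaustively pick the narrowest fixed-point reciprocal that meets a 1% worst-case relative error bound. Even this trivial case needs full simulation of every input, which hints at why the general problem has no good algorithm.

```python
# Toy search: cheapest fixed-point width approximating y = 1/x to 1%.
# "Cheapest" here naively means fewest fractional bits; a real flow
# would need actual synthesis/P&R metrics, which is the point above.

def approx_recip(x, frac_bits):
    # Fixed-point reciprocal: round 1/x to `frac_bits` fractional bits.
    scale = 1 << frac_bits
    return round(scale / x) / scale

def worst_rel_error(frac_bits, inputs):
    # Worst-case relative error, found by exhaustive simulation.
    return max(abs(approx_recip(x, frac_bits) - 1 / x) * x for x in inputs)

inputs = range(1, 256)   # all nonzero 8-bit operands
target = 0.01            # "within 1%"

# Brute force over candidate widths; keep the narrowest that passes.
best = min((bits for bits in range(1, 25)
            if worst_rel_error(bits, inputs) <= target),
           default=None)
print(best)
```

Swap in a different error metric (mean error, error on likely inputs only) and `best` changes — which previews the ambiguity discussed further down the thread.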
Say someone wrote this code:
wire [31:0] a, b, c;
genvar i;
generate
  for (i = 0; i < 32; i = i + 1) begin : and_bits
    assign c[i] = a[i] & b[i];
  end
endgenerate
it sounds like this paper is about recognizing that it could be implemented as something akin to this:
wire [31:0] a, b, c;
assign c = a & b;
Both will produce the exact same gates, but the latter form will compile and simulate faster. Maybe it doesn't help Design Compiler turn your shitty design into gold, but faster verification is an unalloyed good.
This paragraph goes hard. And this is exactly why design space exploration is essential. I think you're right that, basically, simplistic delay/area models are insufficient and the exploration must be driven by actual metrics from complete P&R flows.
> [...] it's a pretty challenging problem to design a formal specification language that is simultaneously high level enough and yet allows a compiler to do a good job of finding the optimal chip design
My experience in this domain is that the challenging problem isn't so much the design of the formal language itself. The challenge lies primarily in expressing your design in a way that both generalizes over and meaningfully exposes the freedom in the design space.
> [...] solving the problem of "what is the most efficient way to approximate this algorithm with N% precision"
> [...] in many domains it's hard to formulate an error metric that isn't either too conservative or too permissive.
I think the latter comment alludes to my objection about the former: It all depends on what "N% precision" means.
Does it mean that for every input/output pair, the output is always within N% of the correct value?
Or does it mean that for N% of the inputs, the output is correct? Is that weighted by the likelihood distribution of getting those inputs?
Or does it mean that the total mean squared error over all input/output pairs is within N%? Etc. etc.
In other words, I think it goes beyond even conservative vs. permissive; simply, the devil is in the details, and digital circuit design is a difficult multi-objective optimization problem.
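The ambiguity above is easy to demonstrate numerically. Here is a small sketch (the approximate multiplier and all names are invented for illustration) that applies three different readings of "N% precision" to the same circuit — an 8x8 multiplier that truncates each operand to its top 4 bits — and gets three wildly different-looking answers:

```python
# Three readings of "N% precision" for one toy approximate multiplier.

def exact(a, b):
    return a * b

def approx(a, b):
    # Hypothetical approximate multiplier: drop the low 4 bits of each
    # operand, multiply the 4-bit halves, shift the result back up.
    return ((a >> 4) * (b >> 4)) << 8

pairs = [(a, b) for a in range(256) for b in range(256)]

# Reading 1: every output within N% of the correct value (worst case).
worst = max(abs(exact(a, b) - approx(a, b)) / exact(a, b)
            for a, b in pairs if exact(a, b) != 0)

# Reading 2: fraction of inputs whose output is exactly correct
# (unweighted; weighting by an input distribution would change it again).
frac_exact = sum(exact(a, b) == approx(a, b) for a, b in pairs) / len(pairs)

# Reading 3: mean squared error over all input/output pairs.
mse = sum((exact(a, b) - approx(a, b)) ** 2 for a, b in pairs) / len(pairs)

print(worst, frac_exact, mse)
```

The worst-case relative error is 100% (small operands truncate to zero), while only about 1% of input pairs are exact — the same circuit looks terrible or tolerable depending entirely on which metric "N%" refers to.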