The industry sold the gullible on the idea that a bunch of arbitrary pattern-matching rules can make any app more secure.
Not everyone can do that because of business realities. Legacy software, vendor software, no budget, no dev bandwidth, etc., etc.
All security is a compromise based on realities, and implementing a WAF is one. Tuning a WAF is a further exercise in security compromises. They have value, but they aren't a panacea. A good security model has many layers, and a WAF is one layer you can choose: it addresses a wide variety of attacks your application may (or may not) be vulnerable to, and which you may (or may not) have the budget or bandwidth to actually fix.
I’ve seen that even in some large (non-FAANG or whatever) companies, security budgets are always very tight or simply not there. Practically, it’s easier to kick the can down the road with a WAF.
For enterprise applications deployed for specific clients, if issues do arise because of the WAF, they’d quickly bubble up through standard support channels.
Just last year we had React2Shell (CVE-2025-55182) which allowed RCE for many apps using React Server Components. Within 24 hours the big WAF providers rolled out rules capable of blocking requests matching the exploit pattern.
Yes, a patch was available, and patching is always the primary fix for critical vulnerabilities, but a WAF can step in as crucial temporary protection until patching can happen.
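To make the "virtual patching" idea concrete, here's a toy sketch of what a signature-based WAF rule amounts to: scan each incoming request body against regexes for known exploit payloads and drop matches before they reach the app. The signature below is invented for illustration; it is not the actual React2Shell pattern or any vendor's real rule.

```python
import re

# Hypothetical exploit signatures (NOT the real CVE-2025-55182 pattern).
EXPLOIT_SIGNATURES = [
    re.compile(rb'__proto__\s*"?\s*:'),  # invented payload marker for illustration
]

def waf_should_block(body: bytes) -> bool:
    """Return True if the request body matches a known exploit signature."""
    return any(sig.search(body) for sig in EXPLOIT_SIGNATURES)

# A request matching the signature is dropped before hitting the app...
print(waf_should_block(b'{"__proto__": {"polluted": true}}'))  # True
# ...while ordinary traffic passes through untouched.
print(waf_should_block(b'{"user": "alice"}'))  # False
```

Real WAF rules (ModSecurity CRS, the cloud vendors' managed rules) are far more elaborate, with normalization, anomaly scoring, and false-positive tuning, but the core mechanism is this kind of pattern match, which is exactly why a rule can ship within hours of an exploit becoming public.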
That said, this article describes something you learn on day 1 of studying any cloud provider's WAF offerings. For such a complex topic, it's surprisingly remedial to show up here.
All that said: there's a lot of dumb shit that ends up being configured in the cloud, and articles like this are good reminders for people to check for dumb shit.
I have a feeling my brain chemistry has been permanently altered and I will forever be distracted by subconsciously rating the “LLM-ness” of everything I read.