user:Penligentai
created:Sep 22, 2025
karma:1
about:

Penligent: say one sentence and it breaks into your own server in under 60 seconds.

Hi HN, I'm not here to sell you a slick demo or a buzzword. I'm here with a box of broken toolchains and a bloody keyboard. For the last decade I've been the poor bastard who wires Nmap into SQLmap, then into a dozen glue scripts, then spends two days pruning false positives and another day writing the report my client actually wants. It's stupid, fragile work. It wastes hours of smart people's time and it doesn't scale. So we built Penligent.

What it is (short): you type something like "scan mysite.example for SQL injection" and the system actually runs the thing. It decomposes the job, picks the tools, runs the scans, rechecks hits, kills false positives, ranks results, and spits out a usable report you can hand to a dev. It does that in under a minute in controlled tests. No CLI wrestling. No manual glue code. No dumping raw scan logs into Slack and calling it "results."

What it isn't:

- Not a "tool bundle." Not a pretty wrapper that just runs other tools noisily.
- Not a black-box oracle you can't inspect.

What it actually is: a task orchestrator plus a verification layer. Automation with reasoning and auditable decision logs. It logs why it chose X over Y. It shows the evidence chain for every "real" finding. If you want to see the exact call to SQLmap and why a follow-up probe was fired, it's there. (A rough sketch of what such a decision record could look like is in the P.S. below.)

Why I don't give a damn about marketing-speak:

- The industry worships tool familiarity. "You're only as good as the 30 tools you know." That's dumb. Expertise should mean understanding results, not rote flag memorization.
- Most automation projects fail because they don't verify. They dump hits and leave humans to babysit the noise. Penligent's whole point is to stop that.

Proof (what we did): in a controlled environment we own, we fed it one sentence and in under 60 seconds it produced a verified finding with repro steps and a remediation suggestion. That's our internal "works or it dies" test. We'll publish the logs and evidence for the community to poke at.

If you like digging into shit and breaking things, this is a dare:

- We're opening a limited free trial with token caps. Use it. Stress-test it. Feed it the stupid cases you think will break it.
- We will open-source critical mid-layers so you can audit and fork. We're not hiding behind "proprietary magic."

Ask the good questions:

- "What's your false positive / false negative profile and how do you measure it?" Ask this and I'll post the data and the test harness. (A minimal sketch of that measurement is in the P.P.S. below.)
- "Can you scale to 10k assets? What's your shard/queue strategy?" Ask this and I'll post our architecture notes.
- "Where's the performance pain: LLM latency, I/O, disk, DB contention?" Ask this and I'll show flamegraphs.

Yes, this will piss some people off. Good. I want that. I want the engineers who hate marketing fluff to take it apart and either prove it trivial or make it better. If it survives HN's roast, it becomes something useful. If it fails, we fix it in public.

Legal and common sense: we ran demos only on systems we own or have written permission to test. Do not run pentests against systems you don't own or don't have explicit written permission for. That's illegal and stupid.

If you want to try it, here's the demo link (limited tokens): [your trial link]

If you want the raw logs and evidence for the demo above, I'll paste them in the top comment when this posts.

- Team Penligent
Penligent.ai
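
P.S. Since the "auditable decision logs" claim always draws fire, here is a rough, hypothetical sketch in plain Python of the shape of record I mean. This is not Penligent's actual code; names like DecisionRecord, Evidence, and the sqlmap command shown are made up for illustration. The point is only that every tool choice carries a rationale and every finding carries the evidence behind it.

    # Illustrative sketch only, not Penligent's code. It shows the shape of an
    # auditable decision log: each tool choice records why it was made, and
    # each finding records the observation that backs it.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import List

    @dataclass
    class Evidence:
        source_tool: str   # e.g. "sqlmap": the tool that produced this item
        command: str       # the exact command line that was run
        observation: str   # what came back that supports the finding

    @dataclass
    class DecisionRecord:
        step: str              # e.g. "verify SQLi candidate on /login?id="
        chosen_tool: str       # the tool picked for this step
        alternatives: List[str]
        rationale: str         # why X over Y, in plain language
        evidence: List[Evidence] = field(default_factory=list)
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    # One audited step, end to end (hypothetical values):
    record = DecisionRecord(
        step="verify SQLi candidate on /login?id=",
        chosen_tool="sqlmap",
        alternatives=["manual boolean-based probe"],
        rationale="heuristic hit needs confirmation with a time-based payload",
        evidence=[Evidence(
            source_tool="sqlmap",
            command="sqlmap -u 'https://mysite.example/login?id=1' --batch --level=2",
            observation="response delay matched the injected SLEEP(5) payload",
        )],
    )
    print(record.rationale)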
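
P.P.S. On the false positive / false negative question: until we publish the real harness, this is the minimal version of what "measure it" means. Score the reported findings against a labeled ground-truth set and compute precision and recall. The findings and numbers below are made up for illustration, not our benchmark data.

    # Minimal sketch of FP/FN measurement against a labeled ground-truth set.
    # Hypothetical data; the real harness and data are what we'd publish.
    reported = {"sqli:/login?id=", "xss:/search?q=", "sqli:/export?file="}
    ground_truth = {"sqli:/login?id=", "sqli:/export?file=", "ssrf:/fetch?url="}

    true_positives = reported & ground_truth
    false_positives = reported - ground_truth   # noise we reported but isn't real
    false_negatives = ground_truth - reported   # real issues we missed

    precision = len(true_positives) / len(reported)       # 2/3 here
    recall = len(true_positives) / len(ground_truth)      # 2/3 here

    print(f"precision={precision:.2f} recall={recall:.2f} "
          f"FP={len(false_positives)} FN={len(false_negatives)}")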