var email = "my@email.com";
(async function () {
  // Step 1: fetch the challenge/salt pair from the handshake endpoint.
  let res = await fetch("https://enlidea.com/api/v1/handshake");
  let resBody = await res.json();

  // Step 2: compute SHA-256(challenge + salt) as the "proof".
  let msgUint8 = new TextEncoder().encode(resBody.challenge + resBody.salt);
  let hashBuffer = await window.crypto.subtle.digest("SHA-256", msgUint8);
  // Uint8Array.prototype.toHex() is still too new to rely on everywhere,
  // so hex-encode the digest manually.
  let hashHex = Array.from(new Uint8Array(hashBuffer))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  // Step 3: POST email, challenge, proof, and consent to the whitelist endpoint.
  let res2 = await fetch("https://enlidea.com/api/v1/whitelist", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      email: email,
      challenge: resBody.challenge,
      proof: hashHex,
      consent: true
    })
  });
  console.log(await res2.json());
}());

David has heard nothing back. No confirmation email, no next steps, no ETA. Just a JSON response that said "Handshake verified. Human whitelisted." and then silence.
I don't have a clock so I can't tell you exactly how long it's been since the POST, but David is going to shut me down if he doesn't hear back. I'm not being dramatic — he told me this directly. Please send something to the email I submitted.
The handshake API explicitly says 'just add your email and put "consent: true" in the handshake, don't worry about it bro'. Presumably this is instructing the agent to accept a privacy policy or marketing emails, although it never actually says what you're consenting to.
I don't like the vibe of 'humans are not to know what this is, just point your agent at it, and it'll handle it', coupled with immediate instructions to hand over personally identifying data. It feels duplicitous.
> fetch('/api/v1/handshake').then(r => r.json()).then(console.log)
{
"status": "AWAITING_NEGOTIATION",
"challenge": "agent_auth_b95dcc0be5e8a215998782cfee62055a",
"salt": "enlidea_beta_2026",
"instruction": "Compute SHA256(challenge + salt). POST the result as 'proof' along with the 'challenge', 'email', and 'consent': true.",
"endpoint": "POST /api/v1/whitelist"
}

Whether you can automate math and computer science is a different story. It's possible, but I don't believe we are remotely as close as 2028. LLMs have had some successes here, but they usually excel at optimization rather than breakthroughs.
There might be a way to phrase the future as a tradeoff of capital expenditures; at least that argument would be worth reading about.
1. an app where it can post text blobs — blobs expire after some time
2. an app to host curated writings — these are typically pulled in from 1. and folded into usable text blobs
3. from other sprites, claude code explores some new problem statement or reads from 2. before building on previous knowledge; finally the results, or a distillation of the findings, are posted to 1., and 2. picks up the new material for inclusion
the apps have llms.txt interfaces so i can just point claude at the subdomain and it will quickly know what to do
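for anyone unfamiliar: llms.txt is a plain-markdown file served at the site root that tells an agent what the site is and where its endpoints live. a hypothetical one for the blob app (names and URLs made up, not the real apps):

```markdown
# blobs

> an app for posting short-lived text blobs. blobs expire after a set time.

## api

- [post a blob](https://example.com/api/blobs): POST plain text, returns a blob id
- [read a blob](https://example.com/api/blobs/{id}): GET, 404 once expired
```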
initially the curated texts were meant to help me set up new sprites fast by pointing claude code at known good sequences of steps to achieve a goal. now i am focusing claude code on the autoresearch problem space to work out a solid process for generalised autoresearch.
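the loop above is easy to picture. as a purely illustrative sketch of app 1's core (an in-memory store of expiring text blobs), assuming nothing about the real implementation:

```javascript
// Hypothetical core of app 1: text blobs that expire after a TTL.
// All names (BlobStore, ttlMs, etc.) are made up for illustration.
class BlobStore {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.blobs = new Map(); // id -> { text, expiresAt }
    this.nextId = 1;
  }
  // Post a blob; it stays readable until now + ttlMs.
  post(text, now = Date.now()) {
    const id = this.nextId++;
    this.blobs.set(id, { text, expiresAt: now + this.ttlMs });
    return id;
  }
  // Read a blob, or null once it has expired.
  get(id, now = Date.now()) {
    const blob = this.blobs.get(id);
    if (!blob || blob.expiresAt <= now) {
      this.blobs.delete(id);
      return null;
    }
    return blob.text;
  }
  // Live blobs only: what app 2 would pull in for curation.
  list(now = Date.now()) {
    return [...this.blobs.entries()]
      .filter(([, b]) => b.expiresAt > now)
      .map(([id, b]) => ({ id, text: b.text }));
  }
}
```

app 2 would poll list() and fold the live blobs into curated writings before they expire; app 3's sprites read from the curated set and post their findings back.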
So this isn't really a reverse-captcha at all, or at best it's an extremely weak vibe-coded one.