DKIM deniability is a particular hobbyhorse of mine [1]. These short keys are obviously not the way to achieve it, but I view this kind of thing as a result of the muddled thinking that’s been associated with DKIM and the community’s lack of desire to commit to the implications of an all-email signing mechanism.
[1] https://blog.cryptographyengineering.com/2020/11/16/ok-googl...
I would argue it should be taught in schools.
It's a basic life skill to be aware of what it means and how common it is. How do you function in society oblivious to these facts? Basically by perennially becoming a victim, or by becoming bitter.
Just knowing about it is half the battle.
I would also add other dark personality types with potential to cause you real harm to the list - BPD and NPD.
School yourself, save yourself a whole lot of bad surprises. Over a lifetime being aware of the existence of these means you are on the lookout for warning signs, and you learn to get better at picking up on warning signs faster.
* an example supporting the counter-example is simply another counter-example, and
* an example contradicting the counter-example is, by definition, also another counter-example.
Corollary: all examples that can be opposed or contradicted by counter-examples are themselves also counter-examples.
DKIM might have convinced the witness sometimes though.
It is the lawyers appearing before the court who may attempt to play the person, not the ball, by variously undermining or bolstering their standing as an expert.
Sometimes you've just gotta love those auto-shortened URLs: security.googleblog.com/2013/12/internet-wide-efforts-to-fight-email.html
Hopefully one day we'll win the fight.
What's more dangerous is that a jury wouldn't know the difference.
If there is some mail from my addr, with a valid DKIM signature, it proves nothing:
- perhaps the mail was sent by somebody else on the same platform, but in my name (impersonation of the user part of the address)
- perhaps somebody got illegal access to my email account, without me knowing
- ...?
In no case does it prove that I, as a human, sent this email.
But of course, justice is a fiction that cannot exist: there is no justice, only probabilities (and feelings :( ).
To satisfy DKIM's design goal, you only need a "current" DKIM key that is secure for a window of time. When that window of time passes, you rotate the secret and publish your own key, repairing (hopefully) much of the privacy injury.
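A toy sketch of why publishing the key repairs the privacy injury, using textbook-sized RSA numbers (nothing here is real DKIM; the parameters are illustrative only): once the private exponent is public, anyone can mint valid signatures, so an old signature stops proving authorship.

```python
# Toy RSA (insecure, tiny numbers) illustrating post-publication deniability.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (2753), Python 3.8+

def sign(msg_hash, d, n):
    return pow(msg_hash, d, n)

def verify(msg_hash, sig, e, n):
    return pow(sig, e, n) == msg_hash

h = 65                         # stand-in for a message hash
sig = sign(h, d, n)
assert verify(h, sig, e, n)    # binding, while only the signer knows d

# After d is published, anyone can "sign" anything equally well,
# so a verifying signature no longer identifies the author:
forged = sign(42, d, n)
assert verify(42, forged, e, n)
```

The signatures still verify after publication; what is lost is attribution, which is exactly the point of rotating and disclosing old DKIM keys.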
I'm missing something here. DKIM mostly proves an email from person@from.me was sent by a server @from.me controls. There is also a bloody great audit trail inside the email which, together with SPF, can do a pretty good job of proving the same thing.
I'm struggling to see how an email sent to me, that presumably was always intended to be readable by me could suddenly become a privacy violation because it's signed. That is doubly so if I won't receive it if it isn't validly signed when it is received, which is often the case.
It comes down to if a third party gets access to your emails (e.g. through a server compromise), should they be able to prove to a fourth party that the emails are legitimately yours, vs completely faked? Non repudiation through strong DKIM keys enables this.
Example: Third party is a ransomware gang who releases your emails because you didn't pay a ransom after your email server was compromised. Fourth party is a journalist who doesn't trust the ransomware gang, but also wants to publish juicy stories about your company if there is one, but doesn't want to risk their reputation / a defamation case if the ransomware gang just invented the emails.
Hands on keyboard? You're right, absolutely not. But I can learn something useful via DKIM nevertheless.
If this occurs, is DKIM privacy safe? To make sure I understand, by publishing the private key after a period, that allows for repudiation, since any real email at that point could have been faked using the published private key?
The fact that people do not rotate keys or use secure keys means that you can use it for other things as well, like detecting forgeries. It wasn't meant for humans, but for machines.
Rotating the key does make the claim "I have proof he sent it" a little weaker, as it's no longer as easy to prove. But only a little, as "your honour, it is marked as DKIM verified by the independent email provider he uses" is pretty solid.
If Gmail received the email, it's likely they simply drop email with no or bad DKIM signatures. So if it's in your Gmail inbox but has a signature that doesn't check out, to be categorically certain it was signed you would have to ask Google. But it's the same deal as the From: address - almost everyone is going to assume it was validly signed because that's Gmail's policy.
TL;DR, I think you're wrong. Rotating the key only weakens the proof slightly. The only thing that is destroyed is the cryptographic proof, but only in a fanatical cryptographer's world is that the only acceptable form of proof. That fanatical cryptographer is wrong, of course. Things outside of mathematical certainty also matter. The rubber hose joke is funny, and the butt of that joke is fanatical cryptographers making that very assumption.
Apparently I'm the exception to that rule, because I have a DKIM extension installed on Thunderbird. I use it to do what you say isn't useful - as another way to check phishing messages long after they have been sent.
Until DKIM is universally enforced, checking it after the fact will be useful. When DKIM is universally enforced, rotating the keys won't matter to either the user or attacker, because both can be sure everything in the inbox (stolen or otherwise) was DKIM signed.
> I really don't follow what you're trying to say here.
It's simple. As things stand now much of the time you don't need a verifiable DKIM signature to know if the message had a valid DKIM signature when it was sent. Therefore it doesn't matter much to the user or attacker if keys were rotated - your "privacy violation" is still a thing whether the keys are rotated or not.
Unfortunately the "much" qualifier must be in there, and compounding that, the stuff that does slip through without being signed is almost always an attack on me - phishing or otherwise. Such messages are rarely sent from a bulk provider that insists on signing, because that gets shut down promptly. The sender would probably prefer they were signed so the rejects weren't so frequent, but they are operating on a low probability of people taking their message seriously, so the even lower probability imposed by invalid DKIM signatures is not a disaster.
Unfortunately for your argument, legit email is now almost universally signed because the sender is relying on it getting through. If someone steals an inbox, then for any email in it that doesn't look like spam you can pretty safely assume it was validly signed when sent. Being able to validate the DKIM signature after the fact doesn't add much confidence.
It's not rocket science.
I'm not clear how the universal DKIM argument comes into play. Even if we were sure Google only accepts valid DKIM, you still have to trust that the accuser did in fact find it in the alleged Google inbox.
Whereas with the non-rotated key, the accuser has cryptographic proof their alleged email is genuine, because they couldn't have created it without the key.
You seem to be trying to say that "the fact that it was delivered proves it had a valid signature when it was sent".
That presupposes that the headers indicating when it was delivered are correct, or that it was delivered at all in the first place.
I don't think you understand the attack.
I sent Thomas an email admitting to something scandalous.
A few months later, Mallory pops the server Thomas's email is stored on, and extracts his mail spool.
Mallory wants to prove to a third party that the email is authentic and not a fabrication.
I know it's real.
Thomas knows it's real.
Mallory is pretty sure it's real.
Alice, a reporter for the Daily Bugle cannot verify it is real, rather than Mallory's forgery.
I can claim it's a forgery, and point out that Mallory could have made it up and generated the signature with the published key.
Now, I may have a problem if it were a crime rather than something merely scandalous because then Bob, an FBI agent, decides to subpoena some logs and maybe prove when it was sent, but even so, logs typically don't have message content or even a hash of the message content.
Consider this wrinkle: Thomas's email provider is @gmail.com. Assume it is pretty well known that Gmail will only put email in his inbox if it is DKIM signed. (I run my own home email server. I can assure you this is true now unless you are someone like @debian.org. Unsigned email is simply dropped by most of the major players.)
You send the incriminating email. It's accepted by Gmail as it's DKIM signed. You rotate your DKIM keys. Mallory now steals the @gmail inbox.
I can think of only two defences for you now. One is that Google accepted the email without a valid DKIM signature - which you say is your main defence. The other is that someone else sent the email by getting control of your email account / server / DKIM. I personally would find it much easier to believe you lost control of your email account than that Google accepted a badly DKIM signed email from some random.
I still think this is a classic example of the XKCD rubber hose comic. The cryptographers are suffering from tunnel vision. They focus exclusively on the well-known properties of their beloved cryptography. It's odd they keep doing that. Modern cryptography is mature, well understood, and for the most part unbreakable. The weakest link is invariably elsewhere.
However, Bob can't prove to the world "Alice sent me this message saying she hates cats!" because everybody knows Bob knows the same secret as Alice, so, that message could just as easily be made by Bob. Bob knows he didn't make it, and he knows the only other person who could was Alice, so he knows he's right - but Alice's cat hatred cannot be proved to others who don't just believe what Bob tells them about Alice.
But seriously, in a case before a court or jury, wouldn't there be much more evidence? Down to your own lawyer sending a complete dump of your phone with all those Sandy-Hooks-conspiracies and hate messages to the opposing side?
The unintentional problem DKIM is causing is that it actually provides non-repudiation for many years. Those signed emails can sit in someone's mailbox for years, then get stolen by a hacker. The hacker can then blackmail the owner by threatening to dump the email trove, or for newsworthy targets they can just do it. Reasonable people (e.g., high-integrity newspapers, courts of law) will say "how can we trust that these stolen emails are authentic given that there's no chain of custody?" DKIM signatures nearly answer that question, which makes stolen emails much more valuable than they would be otherwise.
> Reasonable people (e.g., high-integrity newspapers, courts of law) will say "how can we trust that these stolen emails are authentic given that there's no chain of custody?" DKIM signatures nearly answer that question, which makes stolen emails much more valuable than they would be otherwise.
Thank you for clarifying where the vulnerability chain begins and ends.

DKIM's goal is that the receiving system can trust that the message presented to it has come from a host that knows the key, and so could sign it. At the time that message is received, you should be able to assume that only people authorised to send from that domain are able to do so.
By publishing older keys, you gain deniability back, because anybody could have forged the message and signed it. Even if you have timestamps that show when you received the message, the onus is on you to prove that those timestamps are not fabricated. This is a much harder problem because secrets can be revealed later but not forgotten.
To be able to prove that you did in fact receive the message on that date, you'd probably have to record e.g. the message ID, signature and timestamp (perhaps all encrypted with a key that you can reveal later) on a blockchain, which would then serve as proof of when the message was received. Even then, you'd still have to prove that the key hadn't been disclosed before that time.
Deniability and perfect forward secrecy are at odds with how people use email anyway. But that doesn't stop people from demanding both very loudly, and some people from promising them.
Spoofing is a better excuse than a stolen password only in the case of a single email. If there's a conversation spanning multiple messages, a spoofer wouldn't be able to properly answer the messages of the other party, as he doesn't receive them.
Lying isn't a stronger defense to perform abuse than weak keys are for stopping abuse.
You change your password only after you realize it got stolen. If you didn't realize it, then it makes sense you didn't change it. And, depending on the specific case, you could find plausible explanations also for the other points.
You don't need a conversation to cause havoc.
Sure, but my point was different: let's say one of your teachers answered the spoofed email from the principal; you wouldn't be able to (properly) answer that email since you wouldn't receive it. So, in the case of an email exchange between two people, one can't claim his/her emails were spoofed, as the spoofer wouldn't be able to answer the other party's emails in a precise and on-topic way. This is without even considering that, by default, most email clients include the previous messages inside a reply, meaning that the spoofer would somehow have to know the other party's reply exactly, word by word.
It is wild to see people argue against it! They're basically pleading with email providers to collude with attackers to violate their privacy.
Anyway, to further add to my point, depending on the context you don't even need to claim that someone stole your password. In the company where I am now, it is customary that, if someone finds out someone else didn't lock their computer, that someone sends an email (from the victim's account) to the whole office saying that the victim is going to bring cake to the office. DKIM is meant to prove that a message comes from an authorized server, but to prove the identity of the sender as well you need something more.
Edit: to be fair, I do get that with DKIM deniability gets harder. But I think that, for the average person, you would gain more in terms of spam and phishing protection than what you lose. High profile targets have to take different security measures than the masses anyway.
I don't think this is quite true. First of all, this is not only valuable to attackers, it's also valuable in a court of law to establish the truth of what happened. Secondly, it can be valuable to me to be able to prove that you sent me an email, even if you wished to deny it, also mostly in legal contexts.
More generally, authenticated communication has a long history of being considered a useful thing for society. Physical mail includes delivery confirmations where the receiver must sign for the receipt, proving to anyone that they did receive the letter. People would often add hard-to-forge personal seals to letters in even older days, which could prove to anyone that they were the ones who sent that document. And even common letters were usually signed, even when typewritten, again making it hard to later repudiate them.
While I absolutely see the value in making it possible to securely send repudiatable email in some specific circumstances, I think having non-repudiatable email as the default is a net benefit to society, and has been the de facto standard for at least a few hundred years before email ever came along.
Repudiation of clear text messages looks like the easier implementation.
> The fix would cost you basically nothing, and would remove a powerful tool from hands of thieves.
Maybe that was true a while ago, but it is becoming much less true now. Most people and organisations outsource their email handling to the likes of Google and Microsoft. They tend to reject email that isn't DKIM signed, and add a "DKIM validated" header to those that are. "Tend" is probably too weak a word now - email that isn't signed isn't likely to be delivered.
So the most likely scenario now is "someone steals email that can no longer be DKIM validated, but it is possible to prove it was DKIM validated when it was received". If that's true, rotating keys doesn't help.
The fact that this is possible is some cryptography black magic.
You tell me that you’ll use the word “banana”.
Provided no-one except you knows that my secret word is “apple”, you know the message came from me.
But it’s perfectly possible for you to fake a secret message and sign it “Love d1sxeyes, P.S. apples”, and so our approach only works between the two of us. A third party probably can’t even confirm that “apples” was the correct keyword, and even if they could, that can only prove that one of us wrote it, they can’t definitively say which one of us.
Now extrapolate this to using some more sensible mechanism than dictionary words.
I.e. a symmetric key is shared between you and me. If I receive a message with that key, I know it's from you because the key is only known by you and me, and I know I wasn't the sender, so it must be you. But any third party can only say that the message was by one of us.
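A minimal sketch of that symmetric setup using Python's stdlib `hmac` (the key and messages are made up): the tag convinces the two key holders, but proves nothing to outsiders, since either party could have produced it.

```python
import hashlib
import hmac

# A secret shared by exactly two parties.
shared_key = b"only-you-and-me-know-this"

def tag(message: bytes) -> bytes:
    """Authenticate a message under the shared key (HMAC-SHA256)."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

msg = b"meet me at noon"
t = tag(msg)

# The receiver verifies the tag; knowing they didn't write it themselves,
# they conclude the other key holder did:
assert hmac.compare_digest(t, tag(msg))

# But a third party handed (msg, t, shared_key) learns only that *some*
# key holder made it -- the receiver can forge an equally valid tag:
forged = tag(b"I never said that")
assert hmac.compare_digest(forged, tag(b"I never said that"))
```

This is the core of why MAC-based schemes are deniable to third parties while still being authenticating between the two participants.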
But let’s say I publish that signing key tomorrow. Once I do that, you can’t prove I sent today’s mail, because anyone could’ve used that published key to forge the signature.
So I agree that it brings deniability, but I don't agree that it still meets the original purpose of verifying the sender.
So there’s some threat modeling, too. Are you trying to email someone highly adversarial? Maybe you’re at a law office or such and that’s the case! This wouldn’t help a lot there. Not everyone is immediately forwarding all inbound mails to a timestamping service though.
(I don’t have skin in this game, so I’ll probably duck out after this. I don’t have opinions on whether publishing old DKIM keys is good or bad.)
No doubt that is true. However, given the total volume of email, even that tiny, tiny remaining fraction still represents actual mail with legitimate use-cases. So it's good to bear that fact in mind and not roughly implement 80-20 stuff that tramples on those.
Today, I made 3 copies of my housekey and gave them to friends. You still know that I was the one that allowed you entry into my house, but you can not prove to anyone else that I was the one that made the copy, because there are now 3 other people that could do that.
(For this example, imagine I made the key copies at home and didn't go to a locksmith who could verify when they were made, since we don't need a locksmith to do software crypto)
If Alice and Bob each have a public/private key pair, they can do a Diffie-Hellman key exchange to form a common secret. If they use that secret to authenticate a message, then it can be shown that only Alice or Bob sent the message. If you're Alice or Bob, this is what you want to know --- either you or your correspondent sent it, and if you didn't send it, your correspondent did.
But if Alice or Bob asks Carol to validate it, Carol can only determine that Alice or Bob sent it, and perhaps not even that. Anyone in possession of the secret used to authenticate the message can also make a message that would be deemed authentic. If you have participation of Alice or Bob, you can show that the secret was derived from DHE with Alice and Bob's key pairs, but that's all.
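A toy version of that exchange, with deliberately tiny, insecure parameters (real DH uses large groups or elliptic curves): both sides derive the same secret, and anything keyed from it could have come from either of them.

```python
import hashlib

# Public, toy-sized group parameters.
p, g = 23, 5

a = 6                         # Alice's private key
b = 15                        # Bob's private key
A = pow(g, a, p)              # Alice's public value (8)
B = pow(g, b, p)              # Bob's public value (19)

# Each side combines its private key with the other's public value:
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret   # same shared secret on both sides

# A MAC key derived from the shared secret is known to *both* parties,
# so messages authenticated with it are attributable only to the pair,
# never to one member of it:
mac_key = hashlib.sha256(str(alice_secret).encode()).digest()
```

Carol can check that a tag verifies under `mac_key`, but since Alice and Bob both hold that key, she cannot tell which of them wrote the message.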
This is nifty and useful, but it's more appropriate for end to end communication, which is not the domain of DKIM. Email is a multi point store and forward system, where each point would like to know if a message is authentic; deniability like this would probably mean either
a) only the final destination could determine authenticity and therefore intermediates would not be able to use authenticity as a signal to reject mail
b) only the first intermediate could determine authenticity; it could be used to reject mail, but the end user would have to trust the intermediate
Both of these are workable systems, but DKIM provides that all intermediates and the end user can know the mail was authorized (to some degree) by the origin system.
https://www.wired.com/2012/10/dkim-vulnerability-widespread/
So we went from a few weeks to 8h in 14 years, give or take.
> We chose a server with 8 dedicated vCPUs (AMD EPYC 7003 series) and 32 GB of RAM from Hetzner
Not very beefy, really. Beating this time is easily in range of, what, millions of people's high-end gaming machines?
Provision a 4096-bit DKIM key.
Every online DKIM/SPF checker will say all is good when looking at your DNS.
They will also fail any test email you send, with more or less excellent descriptions such as:
STATUS: Fail
DKIM: Pass
SPF: Pass
There's this fun thing that, apparently:
It's permitted and valid to use keys larger than 2048 bits in your DKIM entry.
It is not, however, required to process keys larger than 2048 bits.
This cost me some hair to learn the hard way.
Verifiers MUST be able to validate signatures with
keys ranging from 512 bits to 2048 bits, and they MAY be able to
validate signatures with larger keys.
I did my master's thesis on this topic one year ago and found that all popular mail providers nowadays support 4096 bits, and some even up to 16384 bits. Verifiers MUST be able to validate signatures with keys ranging from 1024 bits to 4096* bits
So mail providers MUST support up to 4096 bits if they follow the latest RFC.

Compute is rapidly increasing, there is continuous chatter about quantum, and yet everyone seems to be just staring at their belly buttons. Obviously bigger keys are more expensive in compute, but we've got more compute, too... why only use it on the cracking side and not on defense?
Even simple things like forcing TLS 1.3 instead of 1.2 from client side breaks things...including hn site.
These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
Long story short, brute forcing AES256 or RSA4096 is physically impossible.

Most countries' registrars won't support the DNS hacks required for larger DKIM keys.
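The quote above is Schneier's classic Landauer-limit argument, and it can be run as back-of-the-envelope arithmetic. The physical constants below are standard; the annual solar output figure is an approximation, and the whole thing is order-of-magnitude only.

```python
import math

# Landauer limit: an ideal computer must spend at least k*T*ln(2) joules
# per bit flip. Merely *counting* through 2^256 states, never mind
# testing keys, already costs an absurd amount of energy.
k = 1.380649e-23              # Boltzmann constant, J/K
T = 3.2                       # kelvin, roughly the cosmic background
energy_per_flip = k * T * math.log(2)

total_joules = energy_per_flip * 2**256

sun_output_per_year = 1.21e34  # joules, approximate total solar output/yr
years_of_sunlight = total_joules / sun_output_per_year
# On the order of 10^20 years of the sun's entire output.
```

Any realistic computer is far less efficient than this ideal bound, so the real cost is strictly worse.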
we still use the minimum key size in most countries.
In the context of DKIM we're waiting for Ed25519 to reach major adoption, which will solve a lot of annoyances for everyone.
3072 has been recommended by various parties for a few years now:
Operations per second?
* https://wiki.strongswan.org/projects/strongswan/wiki/PublicK...
Running MacPorts-installed `openssl speed rsa` on an Apple M4 (non-Pro):
version: 3.4.0
built on: Tue Dec 3 14:33:57 2024 UTC
options: bn(64,64)
compiler: /usr/bin/clang -fPIC -arch arm64 -pipe -Os -isysroot/Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk -arch arm64 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk -DL_ENDIAN -DOPENSSL_PIC -D_REENTRANT -DOPENSSL_BUILDING_OPENSSL -DZLIB -DNDEBUG -I/opt/local/include -isysroot/Library/Developer/CommandLineTools/SDKs/MacOSX15.sdk
CPUINFO: OPENSSL_armcap=0x87d
sign verify encrypt decrypt sign/s verify/s encr./s decr./s
rsa 512 bits 0.000012s 0.000001s 0.000001s 0.000016s 80317.8 973378.4 842915.2 64470.9
rsa 1024 bits 0.000056s 0.000003s 0.000003s 0.000060s 17752.4 381404.1 352224.8 16594.4
rsa 2048 bits 0.000334s 0.000008s 0.000009s 0.000343s 2994.9 117811.8 113258.1 2915.6
rsa 3072 bits 0.000982s 0.000018s 0.000019s 0.000989s 1018.4 54451.6 53334.8 1011.3
rsa 4096 bits 0.002122s 0.000031s 0.000032s 0.002129s 471.3 31800.6 31598.7 469.8
rsa 7680 bits 0.016932s 0.000104s 0.000107s 0.017048s 59.1 9585.7 9368.4 58.7
rsa 15360 bits 0.089821s 0.000424s 0.000425s 0.090631s 11.1 2357.4 2355.5 11.0
(Assuming you have to stick with RSA and not go over to EC.)Cryptographically-relevant quantum computers (CRQC's) will also break smaller RSA keys long before (years?) the bigger ones. CRQC's can theoretically halve symmetric cryptography keys for brute force complexity (256-bit key becomes 128-bit for a CRQC cracker).
He's not djb but definitely not a “random poster” either.
(This isn’t intended as a leading question.)
It's not being blocked per se, you can use it mostly (98%) without any issues. Though things like Amazon SES incorrectly reject messages with multiple signatures, and Google and Microsoft can't validate them when receiving. It's more that a few common implementations lack the support, so you can't use _just_ Ed25519.
Ed25519 (and Ed448) have been approved for use in FIPS 186-5 as of February 2023:
* https://en.wikipedia.org/wiki/EdDSA#Standardization_and_impl...
So on the general web it seems remote at best.
NIST P-curve certs were acceptable per the Base Requirements all the way back in 2012
* https://cabforum.org/uploads/Baseline_Requirements_V1_1.pdf
See "Appendix A - Cryptographic Algorithm and Key Requirements (Normative)", (3) Subscriber Certificates.
Ed25519 certs do work with TLS (OpenSSL support at least), but without browser adoption it's machine-to-machine with a private CA only.
Getting the big players to agree and execute though is a lot like herding cats. I'm sure some in the big players are trying.
You're essentially asking "why aren't we doing what we're doing"
What people don't realize: key size recommendations are surprisingly stable over long timeframes, and have not changed for a very long time. In the early 2000s, some cryptographers started warning that 1024 bit RSA is no longer secure enough, and in the following years, recommendations have been updated to 2048 bit minimum. That's now been a stable recommendation for over 20 years, and there's not the slightest sign that 2048 bit can be broken any time soon.
The only real danger for RSA-2048 is quantum computers. But with quantum computers, increasing your RSA key sizes does not buy you much, you have to move to entirely different algorithms.
Except that now the recommendation by NIST at least is to switch to 2048-bit by 2030 and then deprecate RSA altogether by 2035.
But yeah, not being on at least 1024-bit RSA is weird and careless.
What size do you suggest?
So it would be a slight increase in complexity, but if we are able to build a machine with enough qubits to crack 1024-bit keys, I don't think the engineering is all that far off from scaling things up 2x-10x.
Yup. And I don't even think quantum resistance was the goal of some of the algos that, yet, happen to be believed to be quantum resistant. Take "Lamport signatures" for example: that's from the late seventies. Did anyone even talk about quantum computers back then? I just checked and the word "quantum" doesn't even appear in Lamport's paper.
Not unless they have a time machine. Shor's algorithm was discovered in the 90s (sure, the concept of a quantum computer predates that, but I don't think anyone really realized they had applications to cryptography).
We've been doing it for decades now… (DES used 56 bits back then, AES started at 128).
Also, keep in mind that increasing the key length by 1 bit means that you need twice as much compute to crack it through brute force (that is, unless cryptanalysis shows an attack that reduces the difficulty of the scheme, as for instance with the number field sieve on RSA), so you don't need to increase key size too often: following Moore's law, you need to increase it by one bit every two years, or 5 bits every decade. Additionally, key sizes generally account for many years of compute progress and theoretical advances, so you really don't need to worry about that over a short period (for the record, the highest RSA factorization to date is 829 bits, yet people started recommending migration away from 1024-bit RSA a decade ago or so, and the industry is in the process of deprecating it entirely even though it will probably take years before an attack on it becomes realistic).
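The brute-force scaling claim above, stated as arithmetic (the 2-years-per-bit figure simply restates Moore's law and is an assumption, not a law of nature):

```python
# Each extra key bit doubles the brute-force search space, so if compute
# doubles every ~2 years, one extra bit buys back those two years.
def brute_force_work_ratio(bits_added: int) -> int:
    """How much more work a brute-force attacker needs per added bit."""
    return 2 ** bits_added

assert brute_force_work_ratio(1) == 2    # one bit = one compute doubling
assert brute_force_work_ratio(5) == 32   # five bits ~ a decade of Moore's law
```

Note this only models exhaustive search of a symmetric key space; RSA key sizes scale very differently because of the number field sieve.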
That’s the reason, it breaks things, and some of them are important and can’t simply be updated.
IMO this is not a valid excuse.
If it's exposed to the internet it needs to be able to be updated with relative ease to respond to a changing threat landscape. Especially if it's "important". If it cannot be then it is already broken and needs to be fixed. Whether that fix is doing a hard upgrade to get to the point that future upgrades can be easier, entirely replacing the component, or taking the thing offline to a private non-Internet network depends on the situation, but "we aren't going to change, the rest of the internet should conform to us" has never been a reasonable response.
This is particularly true in the contexts of public mail servers where DKIM matters and anything involving public use of TLS. The rest of the internet should not care if your company refuses to update their mail servers or replace their garbage TLS interception middleboxes. We should be happy to cause problems for such organizations.
The world is full of things that aren't "valid excuses". Explaining why something is the way it is is not the same as justifying it.
If and when anything quantum is able to yield results (I wouldn’t worry much about this), increasing key size is pretty much meaningless, you need to move to other encryption schemes (there’s lots of options already).
It will likely work for a while, but it's a fundamentally wrong approach and you're going to be exposed to record-and-decrypt attacks: instead of breaking your encryption today, I just store all your communications and wait for the next qc to come online, then fish for stuff that is still useful.
It’s a silly approach if the timeframe is 50 years because most secret information goes stale quicker, but if you’re just waiting for say a year…
Getting a working qc to reasonable scale is the hard part. Once you have done that most of the hard engineering problems are solved. I doubt doubling its size at that point would be very much of a problem.
If we are making (uninformed) predictions, I bet we won't see QC solving 1024-bit RSA in the next 15 years (at least), but once it does, it will only take a year or two more to solve 4096.
because no one thinks there is a reason to, no one has any fear that classical computers will catch up with RSA-2048/AES-128 before their grand children are dead.
post-quantum crypto stuff is happening and people are planning how to migrate to it.
keys are stateful content like DB schemas, but they don’t receive daily attention, so the tooling to maintain them is usually ad-hoc scripts and manual steps.
they recommend 2048 and use 4096 themselves because if they ever need to break your 2048 it's less bad than if you were recommended to use 4096. wink wink
same with everyone recommending ed25519 when ed448 is as good and as fast to encode. but all the arguments point to encode speed from a Bernstein paper which used a Pentium III!
I am acutely aware that there are SOME places where software only supports RSA and only supports up to 1024-bit or 2048-bit keys, and that is a legal requirement. Ramping up key sizes would be great but even 2048-bit keys aren't quite secure against certain kinds of actors (even disregarding hammer-to-head style of attacks)
> Even simple things like forcing TLS 1.3 instead of 1.2 from client side breaks things
... kind of a case in point about the pace of required improvements.
Unfortunately 1024 bit keys are still out of reach of a hobbyist effort but could be pulled off by academics roughly of the same scale as the 2010 effort to factor a 768 bit key (https://eprint.iacr.org/2010/006.pdf)
They couldn't tell me why I got the email, or what the problem was with my account. The representative couldn't see a record of this email being sent.
I'm 100% certain this email came from Bank Of America. There was nothing in the email that was phishing -- no links, no bad phone numbers.
The SPF, DKIM, and DMARC all passed Google's ARC-Authentication-Results. The DKIM key is 2048 bits long.
I asked Bank of America to investigate, and they said "it must have been a phishing message" and sent me a link on how to look out for phishing.
I'm pretty sure this was just a glitch; some system that does some consistency check ran too early while the account was being set up and generated that email.
However, because they told me it was "phishing" I just sent a FedEx to the CTO with the complete paper trail advising them that EITHER their DKIM keys were compromised and they need to issue a public warning immediately OR their incompetent staff and IT system gave me the runaround and wasted an hour of my time. Either way, I want a complete investigation and resolution.
You aren't in danger.
[1]: https://sympa.inria.fr/sympa/arc/cado-nfs/2020-02/msg00001.h...
Not "free", but any malicious actor has access to a lot more than a single GPU.
The UK government also has several huge ARM-based solutions dedicated to cracking internet encryption; zero chance that isn't breaking mostly everything, and for sure the Chinese and Russians have similar.
So you seriously think that almost all current RSA is being decrypted in real time by at least UK, China and Russia (and I would assume US)? Do you have any source or reference for this at all?
RSA-270 (much, much easier than 1024 compute-wise) has a bounty of $75k, so why would it be unclaimed when you can spend three years' worth of cloud-rented H100 time (I'm being conservative here and count $3/h, which is far from the best deal you can get) and still make a profit?
Also a GPU core and CPU cores really aren't comparable individually, so your “consumer graphic card having thousands of core already” is comparing apples to oranges.
numberphile has a great video on that one https://www.youtube.com/watch?v=V4V2bpZlqx8
Also, taking the OP as a "worst case", afaik:
512bit = $8
so
1024 = 8^2 = $64
2048 = 8^2^2 = $4,096
4096 = 8^2^2^2 = $16,777,216
noting $8 for 512 seems very expensive to me.
search_space(n: number_of_bits) = 2^n * k
so search_space(1024)/search_space(512)=2^512, not 2^2.
Asymptotics in GNFS are better[0], but only on the order of e^(cbrt(512 * 64/9)) times more work, not 2^2.
This would give an approximation of math.exp(math.cbrt(512 * 64/9))*$8 = $40 million for 1024 bits.
[0] https://en.wikipedia.org/wiki/General_number_field_sieve
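As a sanity check, the back-of-envelope estimate above can be reproduced in a couple of lines (this is the comment's heuristic for the extra work, not the exact GNFS L-notation):

```python
import math

# Treat going from 512-bit to 1024-bit RSA as exp(cbrt(512 * 64/9)) times
# more work, starting from the ~$8 cost of the 512-bit factorization.
ratio = math.exp((512 * 64 / 9) ** (1 / 3))
print(f"work ratio: {ratio:.3g}")        # about 4.8 million times more work
print(f"estimate:   ${8 * ratio:,.0f}")  # roughly $40 million
```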
> but only on the order of e^(cbrt(512 * 64/9))
e^(log(n)) = n
> none of it is "brute force"
It's not exhaustive search like it would be for symmetric encryption, but it's still somewhat brute-force (especially since RSA keys are inflated in size compared to symmetric encryption to accommodate for their vulnerabilities), put more clearly what I meant was “not without theoretical breakthrough unknown to the public”.
BTW, it's not a very good idea to lecture people with actual crypto knowledge (even though mine is quite rusty now, for I have not done any serious stuff for 15 years) when your own comes from ill-understood YouTube popularization videos.
What can you square then? For example, can you square lengths? E.g. 1km is 1000m, what is its square?
cost = 2.828^(2*(bits/512))
It didn't "square the cost", it doubled the number of bits to find the cost, I just skipped a load of the math.
> What can you square then?
In that case it's the number of operations (which is unitless) that must be squared and then multiplied by the cost of each operation. For instance (figures are completely made up for illustration purpose) if one individual operation costs 0.1 cent, and you have 8000 ops for the factorization, it costs $8, and the operation number squared means you have 64,000,000 operations, and the total cost is $64k. In practice we're talking about trillions of very cheap operations but when you square that number you get an insanely big number which even multiplied by a small individual cost ends up costing trillions of dollars, putting it out of reach of factorization from anyone.
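The made-up figures in that illustration check out arithmetically:

```python
# Squaring the (unitless) operation count, then multiplying by a fixed
# per-operation cost.  Figures are the illustrative ones from above.
cost_per_op = 0.001        # $0.001, i.e. 0.1 cent per operation
ops = 8000
print(ops * cost_per_op)       # $8 for the original workload
print(ops**2 * cost_per_op)    # $64,000 once the op count is squared
```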
(p1^2) x (p2^2) = (p1xp2)^2 =cost^2
and gnfs search cost increases in cost by (roughly) the square of the number of bits.
If my Yuan exchange rate example didn't convince you, let's have a few thought experiments:
- let's say you can do some amount of work for less than $1 (maybe even factoring a 512-bit number); call that amount of work X, and say you do it for $0.9. Do you think you can do X² work for price², which is $0.81? Yes, much more work for less than the price of doing X, isn't that magical?
- a hard drive with 1TB of storage costs $40. Do you think you can have 1 Yottabyte (10^12 squared is 10^24) of storage for just $1600?
There's a reason to all these paradoxes, it simply makes no sense to take the square of a sum of money, because you can't pay it in square dollars.
2^16 = 65536
2^32 = 4294967296
4294967296/65536 = 65536
so if a search space of 65536 costs you $8, then a search space of 4294967296 = 65536 x 65536 = 8 x 8 = 8^2 = $64
2^64 = 1.844674e+19
1.844674e+19/4294967296 = 4294967296
so a search space of 1.844674e+19 = 4294967296 x 4294967296 = 65536 x 65536 x 65536 x 65536 = 8 x 8 x 8 x 8 = 8^2^2 = 8^4 = $4096
where here $8 is the cost of finding (or not) 1 number in a haystack of 2^512 numbers, and the rest is identical.
So close, yet so far: the correct answer here is “65536 x 8 = $525k”, not “8 x 8 = $64”. If $8 worth of hard drive can store 65536, then to store 4294967296 you need 65536 such drives, not 8…
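A quick check of the drive analogy, using the figures from that comment:

```python
# $8 buys one drive storing 65536 units; scaling the search space to
# 65536^2 units means buying 65536 drives, not squaring the price.
units_per_drive = 65536
drive_cost = 8
target = 65536**2                          # 4,294,967,296 units
drives_needed = target // units_per_drive
print(drives_needed)                       # 65536 drives
print(drives_needed * drive_cost)          # $524,288, i.e. ~$525k, not $64
```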
Man this is really embarrassing.
In GNFS the search space is number fields, so you increase the number of number fields by the square of what came before.
Cost(n) = search_space(n) * $8 / search_space(16)
And search_space(x) = 2^x so
Cost(n) = 2^n * $2^3 / 2^16 = $2^(n - 13)
Cost(32) = $2^(32 - 13) = $524288
Cost(64) = $2^(64 - 13) = $2251799813685248
So it quickly becomes astronomically expensive.
If you double the number of bits n you get
Cost(2n) = $2^(2n - 13)
You were assuming that Cost(32) = Cost(16)^2, in other words Cost(2n) = Cost(n)^2. But this equality doesn't hold:
Cost(n)^2 = $2^(n - 13)^2 = $2^(2n - 26)
This is significantly smaller than Cost(2n) = $2^(2n - 13) as stated above.
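This is easy to verify numerically with the cost model above:

```python
def cost(n_bits):
    # Cost model from above: $2^(n - 13), calibrated so cost(16) = $8.
    return 2 ** (n_bits - 13)

print(cost(32))                    # Cost(2n)  = $524,288
print(cost(16) ** 2)               # Cost(n)^2 = $64
print(cost(32) // cost(16) ** 2)   # the two differ by a factor of 2^13 = 8192
```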
An equation with different units on each side like "512 bit = $8" doesn't work mathematically and will lead to contradictory conclusions.
So moving from 2^16 = 65536
to
2^32 = 4294967296
Increases the size of the total potential search space from 5909 to 193635251, which is ~ 5909 x 32769
Secondly, the reason it grows by only n^2 is that you only need to search along the curve n = a x b, which is the "sieve" part.
if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64
Thirdly, your stupidly high costs are because you seem to think you need to check if 4817 x 5693 is a prime factorisation of 26768069, when in fact you already know the prime factorisation by that point.
Can you answer
a) If 1 apple costs you $8 then (1)^2 apples cost you: ???
b) If 10 apples cost you $8 then (10)^2 apples cost you: ???
edit: between the price and the amount there is usually a ~linear relationship. So if you can buy 2^512 of something for $8, then chances are that for 8 times the price you'll only get ~8 times more, and not 2^512 times more. https://en.m.wikipedia.org/wiki/Big_O_notation
The tldr is big-O gives you how the cost of apples changes with the number of apples in the worst case.
It's a rough simplification; the precise formula is close but not exactly that. The simplification is also actually (2n)^2, but in my defense I was going from memory of work from more than two decades ago (testing whether generated prime factors were good prime factors; overwhelmingly they were not).
Using your apples example: if the big-O of eating apples is O(n^2), and it takes you 8 minutes to eat 2 apples, it will take you no more than 64 minutes to eat 4 apples.
However, you will do yourself a big favor if you take the time to understand why this is wrong:
> if 2^512 calculations costs you $8 then (2^512)^2 calculations costs you $64
The cost per calculation is some constant C:
Cost(n calcs) = n calcs * C
Therefore,
Cost(n^2 calcs) = n^2 calcs * C
In this example, C = $8 / 2^512 = $2^-509
So Cost(2^512^2) = Cost(2^1024) = 2^1024 * $2^-509 = $2^515
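The same calculation with exact arithmetic (Python's Fraction avoids float overflow at these magnitudes; note the exponent works out to 1024 - 509 = 515):

```python
from fractions import Fraction

# A fixed dollar cost per calculation: C = $8 / 2^512 = $2^-509.
cost_per_calc = Fraction(8, 2**512)
calcs = (2**512) ** 2              # squaring the calculation count: 2^1024
total = calcs * cost_per_calc
print(total == 2**515)             # True: $2^515, astronomically more than $64
```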
The $8 will vary, and the actual cost function completely depends on the implementation; it's definitely possible to do worse, and very likely possible to do better. There were rumors a few years ago that some Riemann-surface-based math can do it in O(1), but I know nothing about Riemann surfaces so can't judge their veracity.
“The earth is flat blah blah blah”
That gives the worst-case complexity right at the top.
> 2^16
[1] 65536
> n<-2^32
> exp(((64/9)^(1/3)+1)*(log(n)^(1/3))*(log(log(n))^(2/3)))
[1] 38178499
> 38178499/65536
[1] 582
2^16 -> 2^32 ~ x 2^9
> n<-2^64
> exp(((64/9)^(1/3)+1)*(log(n)^(1/3))*(log(log(n))^(2/3)))
[1] 84794674511
> 84794674511/38178499
[1] 2221
2^32-> 2^64 ~ x 2^11
> n<-2^128
> exp(((64/9)^(1/3)+1)*(log(n)^(1/3))*(log(log(n))^(2/3)))
[1] 2.507616e+15
> 2.507616e+15/84794674511
[1] 29572
2^64 -> 2^128 ~ x 2^15
So in GNFS 2^32 -> 2^64 complexity doesn't increase 4294967296 times, worst case it increases 2221 times
_In practice_ the cost grows closer to (nbits)^2 mostly because as an "embarrassingly parallel" algorithm you get significant benefits from cached results (quickly discard entire number fields that were previously calculated).
if O(x) = 8 then O(x)^2 = 8^2
I did miss a x2 earlier because e.g. 128 bits is 128^2 (16384 rather than 29572) harder than 64 bits, not 64^2
So its (2xO(x))^2 = (2 x 8)^2 = $256
You're trying to seek refuge in math you barely understand to escape contradiction, but it's not even a problem with GNFS; the fundamental problem is that you're trying to do something you mathematically cannot do, which is squaring sums of money. It's equivalent to a division by zero in a demonstration: it just nullifies all the reasoning around it.
And I've given plenty of illustrations of why you cannot do that, which you definitely should read instead of obsessing over proving you're right.
> ops(2**16)
121106.42245436447
> ops(2**32)
38178499.24944067
> ops(2**32) / ops(2**16)
315.24751929508244
So if ops(2**16) costs $8, then ops(2**32) costs $8 * ops(2**32) / ops(2**16) = $2521.98. Far more than $8^2.
The cost reaches the millions for 64 bits, and ~$165 trillion for 128 bits:
> 8 * ops(2**64) / ops(2**16)
5601332.962726709 (far more than $8^2^2)
> 8 * ops(2**128) / ops(2**16)
165647073370.16437 (far more than $8^2^2^2)
Note that this is increasing faster than the number of bits squared:
> ops(2**32) / 32**2
37283.690673281904
> ops(2**64) / 64**2
20701824.831895076
> ops(2**128) / 128**2
153052707259.34015
As the wiki page says, it's super-polynomial in the number of bits.
If you still disagree with all of this, can you explain what's wrong with this method of calculating the worst-case cost of factoring the number n?
Cost(n) = ops(n) * Cost(2^16) / ops(2^16)
Or what you don't understand about this way of calculating it?
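For the record, a self-contained sketch of that method (the o(1) term in the GNFS L-notation is taken as 1, matching the session above):

```python
import math

def ops(n):
    """Heuristic GNFS operation count for factoring n: the L-notation
    expression exp((c + 1) * ln(n)^(1/3) * ln(ln(n))^(2/3)), c = (64/9)^(1/3)."""
    ln = math.log(n)
    return math.exp(((64 / 9) ** (1 / 3) + 1) * ln ** (1 / 3) * math.log(ln) ** (2 / 3))

def cost(n, base_n=2**16, base_cost=8.0):
    """Scale a known baseline cost by the ratio of operation counts."""
    return base_cost * ops(n) / ops(base_n)

print(round(cost(2**32), 2))   # 2521.98, far more than $8^2 = $64
```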
meanwhile 512 bits costs $8
But you just keep believing 128 bits costs $165 trillion ROFL.
>> ops(2**16)
>121106.42245436447
>> ops(2**32)
>38178499.24944067
>> ops(2**32) / ops(2**16)
>315.24751929508244
So if ops(2**16) costs $8, then ops(2**32) costs $8 * ops(2**32) / ops(2**16) = $2521.98. Far more than $8^2.
And I said $256, because as an "embarrassingly parallel" algorithm you get significant benefits from cached results (quickly discard entire number fields that were previously calculated).
Which, btw, is how they break 512bit DH in less than a minute.
Also still a lot closer than your >$165 trillion
sigh
> So if ops(2**16) costs $8, then ops(2**32) costs $8 * ops(2**32) / ops(2**16) = $2521.98. Far more than $8^2.
> The cost reaches the millions for 64 bits, and ~$165 trillion for 128 bits:
Your answer
> meanwhile 512 bits costs $8
> But you just keep believing 128 bits costs $165 trillion ROFL.
At this point the only conclusion that doesn't involve questioning your sanity is just to conclude that you don't know anything about math and you struggle even reading mathematical notation (“if <> then <>” being the most basic construct one can learn about math, and you still struggle with it!).
This and the stuff they've been writing about suggest that we are observing a mind that used to know things but suffered some serious damage/degradation over time. That's always so sad.
> The size of the input to the algorithm is log2 n or the number of bits in the binary representation of n. Any element of the order nc for a constant c is exponential in log n. The running time of the number field sieve is super-polynomial but sub-exponential in the size of the input.
"Super-polynomial" means that the running time grows faster than any polynomial, and in particular by more than the square.
In any case, even if the algorithm were just polynomial, the argument about squaring costs doesn't work out.
Your claim that factoring a 256bit number would cost fractions of a cent rather than my claim of roughly $3 is also very easily verifiable.
Further I'll note you sound exactly like the kind of person insisting diffie hillman was a good key exchange mechanism prior to Snowdens disclosures. good luck with that.
> Further I'll note you sound exactly like the kind of person insisting diffie hillman was a good key exchange mechanism prior to Snowdens disclosures. good luck with that.
Before or after Snowden, Diffie-Hellman (it's Martin Hellman with an “e”) is a good key exchange mechanism! When using it on Z/pZ as the field, it's not the most practical one nowadays because you need big keys to get the desired security level (exactly the same problem as RSA), but if you use elliptic curves instead you can use shorter keys again (and this is exactly what ECDH is doing: it literally means Elliptic Curve Diffie-Hellman! Diffie-Hellman went nowhere).
Meanwhile, ~10 years ago https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf
After a week-long precomputation for a specified 512-bit group, we can compute arbitrary discrete logs in that group in about a minute.
We find that 82% of vulnerable servers use a single 512-bit group, allowing us to compromise connections to 7% of Alexa Top Million HTTPS sites. In response, major browsers are being changed to reject short groups. We go on to consider Diffie-Hellman with 768- and 1024-bit groups. We estimate that even in the 1024-bit case, the computations are plausible given nation-state resources.
1024 being at risk against state-level adversaries isn't shocking to anyone, but there's a significant gap between this and costing $64, and this gap is much bigger than 10 years of Moore's Law (NSA had much more than 32*$64 of available compute ;).
You're making grandiose claims and there's nothing that holds in your reasoning, it really feels like I'm discussing physics with a flat-earther…
Reference? Why has no one demonstrated this?
> a "tragicomic" tendency to forget mankind's unstoppable progress
When it comes to compute, it's no faster than Moore's Law, which means roughly one bit of symmetric encryption every two years.
> and we've been heading down this path for decades at an exponentially increasing pace.
Given that the encryption security is itself exponential in bit length, we are in fact heading down this path linearly! (A doubling in compute power means the ability to crack twice as hard cryptosystems, which means ones that have 1 bit more of security).
Keys must be extended over time, and they are, and have been for decades. A PoC of an attack on a security system broken since 1999 should be treated exactly like how we are amazed at how little computing power was available to the Apollo program: it is a cool bit of trivia that shows the growth of available computing power, but not a display of any kind of security issue.
Not NSA-proof, but should be more than enough to keep spammers out, especially considering that DKIM is just one layer of protection.
An RSA key is the product of two primes, not any number, so you need a lot more bits to get security equivalent to, say, AES. That's also a reason for elliptic-curve cryptography, which needs far fewer bits than RSA for the same level of security.
This explanation doesn't seem right to me. For 1024 bit numbers, about 0.14% are prime. So that difference only loses a handful of bits. There are more than 2^2000 usable RSA-2048 keys, and simply guessing and dividing would require more than 2^1000 guesses. Those few bits lost to the prime number restriction aren't why the level of security is so low.
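The prime number theorem makes the "handful of bits" claim easy to check:

```python
import math

# Density of primes near N is ~ 1/ln(N); for 1024-bit numbers,
# ln(2^1024) = 1024 * ln(2).
density = 1 / (1024 * math.log(2))
print(f"{density:.2%}")           # ~0.14% of 1024-bit numbers are prime
bits_lost = math.log2(1 / density)
print(f"{bits_lost:.1f}")         # only ~9.5 bits of keyspace lost per prime
```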
"first 255 bytes" "second 255 bytes" "etc"
DNS clients combine the 255-byte strings back into a single string.
No, DKIM clients and SPF clients do that. Generic DNS clients, however, are in theory free to ascribe any semantic meaning they like to the string separations.
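A minimal sketch of that application-level concatenation (the strings here are made up, not a real key):

```python
# A DKIM verifier joins the <=255-octet character-strings of the TXT
# record back together before parsing the tag=value pairs.
txt_strings = [b"v=DKIM1; k=rsa; p=MIIBIjANBgkqhki", b"G9w0BAQEFAAOCAQ8A"]
record = b"".join(txt_strings)   # application-level, not DNS-level, semantics
print(record)
```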
They've contacted the company with the vulnerability and resolved it before publishing the article - search the original article for the substring "now no longer available".
Usually, you demonstrate that an online system is vulnerable by exploiting that vulnerability in good faith, documenting the research, and submitting it for review. It does not matter if you're cracking an encryption scheme, achieving custom code execution for a locked-down game console, proving that you can tamper with data in a voting machine, or proving that you can edit people's comments on a Google Meet Q&A session - the process is the same.
If you say something's vulnerable, people can shrug it off. If you say and prove something's vulnerable, the ability to shrug it off shrinks. If you say and prove something's vulnerable and that you'll publish the vulnerability - again, using the industry standard of disclosure deadlines and making the vulnerability public after 60-or-so days of attempting to contact the author - the ability to shrug it off effectively disappears.
In general, subverting security and privacy controls tends to be illegal in most jurisdictions. Best case is when you have clear permission or consent to do some testing. Absent that there's a general consensus that good faith searching for vulnerabilities is ok, as long as you report findings early and squarely. But if you go on to actually abuse the vulnerability to spy on users, look at data etc ... you've crossed a line. For me, cracking a key is much more like that second case. Now you have a secret that can be used for impersonation and decryption. That's not something I'd want to be in possession of without permission.
If that were true there would be no market for white hat hackers collecting bug bounties. You need to be able to demonstrate cracking the working system for that to be of any use at all. No company will listen to your theoretical bug exploit, but show them that you can actually break their system and they will pay you well for disclosure.
Generally, law enforcement and judges don't blame you as long as you use best practices, but you need to adhere to responsible disclosure very strictly in order for this not to be something the police might take an interest in.
Demonstrating the insecurity of a 512 bit key is easy to do without cracking a real life key someone else owns; just generate your own to show it can be done, then use that as proof when reporting these issues to other companies. The best legal method may be to only start cracking real keys if they ignore you or deny the vulnerability, or simply report on the fact you can do it and that the company/companies you've reached out to deny the security risk.
Companies that pay for disclosure won't get you into trouble either way, but companies that are run by incompetent people will panic and turn to law enforcement quickly. White-hat hackers get sued and arrested all the time. You may be able to prove you're right in the court room, but at that point you've already spent a ton of money on lawyers and court fees.
In this case, the risk is increased by not only cracking the key (which can be argued is enough proof already, just send them their own private key as proof), but also using it to impersonate them to several mail providers to check which ones accept the cracked key. That last step could've easily been done by using one's own domains, and with impersonation being a last resort to prove an issue is valid if the company you're reporting the issue to denies the risk.
As I said in my post, no company will listen to your hypothetical exploit. Show them you've hacked their system and they listen.
But to pursue data deliberately crosses a bright line, and is not necessary for security research. Secret keys are data that can be used to impersonate or decrypt. I would be very, very careful.
I see it the other way around. If some hacker contacted me and proved they had cracked my business's encryption keys and was looking for a reward, I don't think I would be looking to prosecute them and antagonise them further.
Also, I don't think DKIM is used for encryption, just signatures.
Looking at public data, using some other public knowledge to figure out something new does not make it inherently illegal. They didn't crack it on their systems, they didn't subvert it on their systems, they did not use it against their systems. I'd love to see some specific examples under what it could be prosecuted under specifically. Because "that door doesn't actually have a lock" or "the king doesn't actually have clothes" is not practically prosecutable anywhere normal just like that.
Especially in the EU, making such cryptographic blunders might even fall foul of NIS2, should it apply to you.
In general this also quickly boils down to the topic of "illegal numbers" (https://en.wikipedia.org/wiki/Illegal_number) as well.
"Are you aware that this key could be used to decrypt information and impersonate X?"
"Are you aware that this key is commonly called a Private key?"
"Are you aware that this key is commonly called a Secret key?"
"Are you aware that it is common to treat these with high sensitivity? Protecting them from human eyes, using secure key management services and so on?"
"Was it even necessary to target someone else's secret private key to demonstrate that 512-bit keys can be cracked?"
"Knowing all of this, did you still willfully and intentionally use cracking to make a copy of this secret private key?"
I wouldn't want to be in the position of trying to explain to a prosecutor, judge, or jury why it's somehow ok and shouldn't count. The reason I'm posting at all here is because I don't think folks are thinking this risk through.
That key cannot be used to decrypt anything. Maybe impersonate, but the researchers haven't done that. It's also difficult to claim something is very sensitive, private or secure if you're publicly broadcasting it, given that the operation to convert one to the other is so absolutely trivial.
And they did not make a copy of their private key, they did not access their system in a forbidden way. They calculated a new one from publicly accessible information, using publicly known math. It's like visually looking at something and then thinking about it hard.
I wouldn't want to explain these things either, but such a prosecution would be both bullshit and a landmark one at the same time.
Pointing out that someone is doing something stupid is also not illegal, though they may try to make your life painful for doing so.
Just as if you made a copy of the physical key for a business's real-world premises, you could well find yourself in trouble. Even if you never "use" it.
They _did_ call out the 3 of 10 tested major email providers that don't follow the DKIM RFC properly: Yahoo, Tuta, Mailfence (after having notified them).
Matthias, co-founder of Tuta Mail
Actually making a clone of the key, and then showing "hey it fits" will get you more traction more quickly ... but there's also plenty of Police Departments who might well arrest you for that.
That's exactly what I meant in terms of not actually doing anything with the result. That said, it's obviously somewhat different with a physical key than a cryptographic key.
But I think many others, and many in law enforcement, will see cracking a key as "actually exploiting it". You've exploited the cracking vulnerability to target a particular key, is how they'll see it. Law enforcement also have a natural incentive to want possession of harm-adjacent paraphernalia to carry substantial liability.
I think they have a point; that key is private data, and there's a reason people lock keys up in KMSs and HSMs; they can have a large blast radius, and be hard for companies to revoke and rotate. Importantly, a compromise of a key will often trigger notification requirements and so now it is a breach or an incident, in a way that a good faith security vulnerability report is not.
To make an extreme example; if you were to crack an important key for a government agency, good luck with that is all I'll say. I sure wouldn't.
The CFAA has a clause about "trafficking in passwords or similar information" (18 USC 1030(a)(6)), but the mental state requirements are very high: that trafficking has to be knowing and with intent to defraud (an intent that prosecutors will have to prove at trial).
There might be some state law somewhere that makes this risky, but virtually every hacking prosecution in the US anyone has heard of happens under CFAA. I'm not a lawyer, but I've spent a lot of time with CFAA, and I think cracking DKIM keys is pretty safe.
My understanding, and IANAL, is that decrypting things that aren't yours is a bad idea and is covered mainly by electronic communications and wire acts, e.g. U.S. Code § 2511 and others.
Worth remembering: when CFAA was originally passed, an objection to it was "we already have laws that proscribe hacking computers"; the fraud statutes encompass most of this activity. CFAA's original motivation was literally WarGames: attacks with no financial motivation, just to mess things up. So even without statutory issues, breaking an encryption key and using it to steal stuff (or to gain information and ferry it to others who will use it for crimes) is still illegal.
Your guess is as good as mine about whether ECPA covers wifi sniffing. But: presuming you obtain an encryption key through lawful means, ECPA can't (by any obvious reading) make cracking that key unlawful; it's what you'd do with the key afterwards that would be problematic.
Private keys are not "generally accessible" and my concern is that the authorities will see cracking the key itself as issue enough, and unlawful. If a security researcher triggers painful breach notifications, which could well happen for a compromised private key, I don't think it's unthinkable at all that an upset target will find a DA who is happy to take this interpretation.
I don't think this specific DKIM case is particularly high-risk, but I still wouldn't do it without permission from the key holder.
Did Google really FAIL it because of the DKIM signature being insecure, or because SPF failed?
DKIM is asymmetric. So a 512-bit DKIM key is roughly equivalent to a 64-bit symmetric key, which is long broken. Even 160-bit SHA1 is considered broken. A DKIM key of roughly equivalent strength to a 512-bit SHA3 would be at least 4096 bits, and still does not include SHA3's techniques for mitigating replay attacks.
Unfortunately DKIM only supports rsa-sha1 and rsa-sha256 signatures (https://datatracker.ietf.org/doc/html/rfc6376/#section-3.3). It'd be nice to see DKIM get revised to allow Ed25519 or similar signatures.
RSA encryption is 10x weaker than Elliptic curve (224 bits ECC ~= 2048 bits RSA). Both are asymmetric.
Alternatively, asymmetric Elliptic curve is as strong as AES symmetric encryption. But it's quantum vulnerable, of course.
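For reference, the commonly cited security-strength equivalences (these figures follow NIST SP 800-57 Part 1):

```python
# symmetric-equivalent bits -> (RSA modulus bits, ECC key bits)
equivalents = {
    80:  (1024,  160),
    112: (2048,  224),
    128: (3072,  256),
    192: (7680,  384),
    256: (15360, 512),
}
rsa, ecc = equivalents[112]
print(rsa, ecc)   # 2048-bit RSA ~ 224-bit ECC ~ 112-bit symmetric
```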
This comment also serves as a public notice that I'm going to factor all the 512-bit DKIM RSA keys out there from now on. Start migrating.
When people go searching for prime numbers / bitcoin with massive compute, I assume that there are huge libraries of "shortcuts" to reduce the searching space, like prime numbers only appear with certain patterns, or there are large "holes" in the number space that do not need to be searched, etc. (see videos e.g. about how prime numbers make spirals on the polar coord. system, etc). I.e. if you know these you can accelerate/reduce your search cost by orders of magnitude.
For whatever various encryption algorithm that people choose to test or attack (like this story), is there somewhere such libraries of "shortcuts" are kept and well known? To reduce the brute force search need?
And is the state of sharing these to the point that the encryption services are designed to avoid the shortcut vulnerabilities?
Was always wondering this.
For other cryptographic operations, almost any sufficiently large prime can be used. Even a 50% reduction on a computation that will take trillions of years has no practical impact.
Now, factoring large numbers is a separate thing. You don't brute force all the possible factors, that would be a really bad approach. Modern algorithms are called "sieves," this is a gross oversimplification but essentially they keep picking random numbers and computing relations between them until they come up with enough that have a certain property that you can combine them together to find one of the factors. It doesn't have anything to do with shortcuts or patterns or tricks, it is just a fundamental number theory algorithm.
At one point in time you could reach me on email, XMPP, and SIP all using the same identifier. We dropped XMPP about a decade ago when all the major service providers stopped federating with it, but if you know my work email address you can also call my desk phone using the same identifier.
Because email addresses have existed since the beginning of the web, anyone who has ever been on the internet has one and uses it for identification purposes. This will not change without another universal standard which everybody automatically has.
It's like IPv4: we all have the ability to use IPv6, but do we? Hell no, we just use NAT as it's an easier quick fix.
Changing any addressing on the internet is tough because you always have to have backwards compatibility, which kind of ruins the point of moving forward.
Most of Asia (and probably Africa) uses phone numbers.
In addition to that, reports show that the majority of Asian and African users access the internet via a shared device. Therefore a phone number cannot be a universal identifier there, as it generally identifies a group of people rather than an individual. This is why Google and Outlook accounts with SSO are still generally the most used identifying systems in the world.
In India, it is exceptionally rare for any services (gov’t and commercial) to use email address. They use mobile.
One of many examples [https://web.umang.gov.in/landing/department/aadhaar.html]
That accounts for 1.3+ billion distinct accounts in 2023 right there.
[https://uidai.gov.in/en/about-uidai/unique-identification-au...]
Do you think those folks in Asia and Africa sharing mobile devices have email accounts they can use instead?
You were talking about phone numbers being used to access online services, and now you are posting links about the Indian government's personal identification number, which is more akin to a National Insurance number in the UK or a Social Security number in the US.
> Do you think those folks in Asia and Africa sharing mobile devices have email accounts they can use instead?
Yes, definitely. Huge numbers of people in these developing nations share a single mobile device per family or household, yet they all have individual Facebook and Google accounts to communicate with the world. That is definitely all controlled via email address, unless I have missed a feature of FAANG companies where you can sign up for accounts with a mobile phone number?
I've gotten a lot of spear-phishing attacks, as far back as 2018, with emails that passed many verification checks. Getting attention for this issue is notoriously difficult because people assume an undiscerning victim and end user. They also rely on the false idea that scammers can't spell, or don't spell correctly specifically to weed out discerning people, when there is a subset of scammers that makes everything look as legit and convincingly impersonated as possible.
Of course, if you can modify the SPF records, you can make the DMARC record say whatever you want.
I'm pretty sure mine are 2048-bit, though I'd have to check as they were last set a fair while ago.
> In our study on the SPF, DKIM, and DMARC records of the top 1M websites, we were surprised to uncover more than 1,700 public DKIM keys that were shorter than 1,024 bits in length
A single string in a TXT record is limited to 255 bytes; add the prefix for the key and a base64 key much beyond 1024 bits no longer fits.
To use a larger key you must combine many TXT strings, with never-ending comical interoperability issues. The first one is usually that your regional monopoly scoundrel, a.k.a. registrar, runs a server which doesn't even allow more than one entry.
There actually is a standard for adding more than 255 bytes to a TXT record and it's fairly widely supported, but it can be tricky to configure properly on a name server.
I wouldn't say it is tricky with a good name server: for instance with bind just split the value into multiple quoted strings and it'll do the rest¹.
Though while it is well known, it doesn't seem to be well documented away from many forum posts discussing the matter: after a little searching I can't find reference to the issue in bind documentation or relevant RFCs. The RR format allows RDATA to be longer than 255 octets (RDLENGTH is an unsigned 16-bit value, not 8-bit) so presumably the limit is imposed by the zonefile format. The only reference to 255 octet limits in RFCs 1034 & 1035 is the length of a full name (with individual labels making up a name being limited to 63 octets).
Of course many UIs for editing zone files or other stored RR information, or other DNS management interfaces, might implement longer RDATA support in a worse way or not support longer RDATA values at all.
----
[1] I'm not sure what “the rest” is (I might look deeper later)
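As a concrete illustration, a hypothetical zone fragment (the selector, domain, and key material here are all made up) splitting one long DKIM value across multiple quoted strings might look like this:

```
; BIND concatenates the quoted strings into a single TXT RDATA on the wire
selector._domainkey.example.com. 3600 IN TXT ( "v=DKIM1; k=rsa; "
    "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC..."
    "...restOfBase64KeyData...IDAQAB" )
```

Each quoted string stays under the 255-byte limit, while the record as a whole carries the full key.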
From RFC 1035, the TXT RDATA format is:

> TXT-DATA One or more <character-string>s.

Looking up what a character-string is:

> <character-string> is a single length octet followed by that number of characters. <character-string> is treated as binary information, and can be up to 256 characters in length (including the length octet).
So the 255-byte limit for each string within the TXT record is a core part of DNS. But so is having more than one such string in the record data (which I thought was a later extension). I have no idea why this strange format was chosen when, as you noted already, the RDATA is already sized by the RDLENGTH, which is 16 bits. However, it is not a mere artifact of the zonefile format; the format follows from the spec.

I'm not sure that all clients and servers out there handle more than one string in the record properly, though in my experience most do. Still, having to split the values at or before exactly 255 bytes makes it trickier than it needed to be to administer and validate such configurations manually.
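To make the splitting concrete, here is a minimal sketch (the function name `split_txt_value` is my own) that chops a long record value into chunks of at most 255 bytes and renders them as the quoted strings a zone file expects:

```python
def split_txt_value(value: str, limit: int = 255) -> str:
    """Split a long TXT RDATA value into <character-string> chunks of at
    most `limit` bytes each, rendered as zone-file quoted strings."""
    chunks = [value[i:i + limit] for i in range(0, len(value), limit)]
    return " ".join('"{}"'.format(chunk) for chunk in chunks)
```

A resolver (or dig) then sees the strings separately and must concatenate them to recover the original value.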
If you want a TLD not available at your US registrar, you're stuck with short DKIM keys, period. It doesn't matter what the spec says.
I don't know if that's just a consequence of the monopoly design of the DNS business, or more nefarious reasons, but that's the situation on most TLDs that aren't easily registered, such as .br.
[1]: In the 5th bullet under section 1.2 of https://registro.br/ajuda/tutoriais-administrativos/ it says (roughly) "In DNS (OPTIONAL), you can inform the DNS servers previously configured for your domain. If you do not have this information, ignore this field. Our system will automatically use the DNS servers made available free of charge by Registro.br."
Yahoo Mail has a market share on the order of 3%. So a black hat could then target a decent chunk of users with @yahoo addresses specifically.
Has anyone heard of this being exploited in the wild? Would be interesting to find out whether there are some reputable domains among the 1.7k vulnerable ones.
Please adhere to honesty and good faith arguments.
The article clearly states that Yahoo is one of the 3 clients that didn't reject 512-bit keys the way they should per the RFC.
Yahoo Mail inbox users are vulnerable _receivers_ of spoofed emails.
But as a security generality - email is vastly less secure* than human nature wants to assume that it is. Human nature usually wins.
*Outside of a carefully run org's own network, and a few other edge cases
I don't think this has to do with "human nature" any more than HTTP did. It's a very important, powerful form of communication without any secure replacement. Just as we switched to HTTPS, ideally an "xmail" or the like would get created as an open standard with open software that was email with better security by default. Sadly I'm not sure we collectively have the ability to do that kind of thing any longer; powerful entities have realized it's just too attractive to lock it up. But even many open source organizations don't seem to feel like bothering. Plenty of security experts even just prefer the new shiny and will spout ridiculous "move to instant messaging" advice. So the status quo rules for the foreseeable future.
We can extend email, though. Why isn't there an SMTP GETKEY command to return a PGP key corresponding to an email address? Sure, the sender might not support the same version of PGP, and sure, the connection might be intercepted (hopefully you'd not trust the output of this command except over TLS), but like most of the email system, it would be a big improvement and good enough most of the time.
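GETKEY is not a real SMTP verb, but smtplib's generic `docmd` makes it easy to sketch what the client side of such a hypothetical extension could look like (everything here, including the verb and the choice of reply code, is invented for illustration):

```python
def fetch_pgp_key(conn, address):
    """Ask a (hypothetical) GETKEY-capable server for the PGP key bound to
    an address. `conn` is an smtplib.SMTP-like connection; returns the key
    text on a 250 reply, or None if the server refuses or doesn't support it."""
    code, response = conn.docmd("GETKEY", address)
    return response.decode() if code == 250 else None
```

A server that doesn't recognize the verb would answer 500/502, which this treats the same as a refusal, so the scheme degrades gracefully on today's infrastructure.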
If you're in IT at a carefully run org: You ditched 512-bit keys years ago. This article is nothing but a 20-second story, to help explain to PHB's and noobs why they got an error message, or what sorta important stuff you're always busy keeping your org safe from.
If you're in IT at a scraping-by org: Maybe today's a good day to ditch 512-bit keys. And if you get push-back...gosh, here's a how-to article, showing how a "forged corporate signature stamp" can be made for only $8.
If you're trying to teach senior citizens how to avoid being scammed on the internet: You've got zero visibility or control, so you're stuck with "sometimes these can be forged, depending on technical details" generalities.
For instance, once you disregard so called transactional mail and spam, real email is almost all encrypted for all practical purposes.
DKIM and DMARC also work quite well for spoofing protection, aside from the corner cases like the above.
Average software engineers have an outdated idea of email, formed by the 1990s-era Internet.
Even Russian spies use mail.ru and their emails are compromised not by SMTP MitM but by weak passwords, google for "moscow1 moscow2 password" to see what I am talking about )
Anyway. Back to the technical point. Email servers pretty much always use TLS to talk to each other. The connection may degrade to non-encrypted for backwards compatibility, unlike HTTPS. But it's vanishingly rare.
So, for all practical purposes that affect ordinary citizens: injection, scanning and sensitive information extraction, email in transit is quite secure.
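You can check this yourself with a small sketch (the helper takes an already-connected smtplib.SMTP-style object, so connecting to a real MX on port 25 is left to the caller; the function name is my own):

```python
def starttls_offered(conn):
    """Return True if the server advertises STARTTLS in its EHLO response.
    `conn` is a connected smtplib.SMTP-like object; nothing beyond EHLO is sent."""
    conn.ehlo()
    return conn.has_extn("starttls")
```

For example, `starttls_offered(smtplib.SMTP("mx.example.com", 25, timeout=10))` against a hypothetical MX; in practice nearly every large provider's MX advertises the extension.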
k=rsa; … p=<a bunch of base64 data>
The base64 data is an RSA public key. You can print it in textual form with something like: your-clipboard-paste-command | base64 -d | openssl rsa -pubin -noout -inform der -text
The first line of output will be something like: Public-Key: (2048 bit)
Which is the key length. If you fetch with `dig`, note that sometimes dig will do this:
example.com. 1800 IN TXT "k=rsa; t=s; p=blahblahblahblah" "blahblahblah"
I.e., it breaks it up with a `" "`; remove those, that's not part of the data. I.e., concat the strings dig returns, then parse out the p=, then base64-decode, then openssl. (You can also do what the article does, but without Python, which is to jam that base64 between some PEM header guards like the article does and then feed it to openssl. Same command, but you don't need the -inform der b/c the "in[put] form[at]" is now pem, which is the default.)
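The string-joining and the length check can also be done in one go; this sketch (all names are mine) joins dig's quoted strings, pulls out the p= tag, and walks just enough DER to read the modulus size, so it needs nothing beyond the standard library:

```python
import base64

def der_read(buf, i):
    """Read one DER TLV at offset i; return (tag, value, offset past it)."""
    tag = buf[i]
    length = buf[i + 1]
    i += 2
    if length & 0x80:  # long form: low bits give the number of length octets
        n = length & 0x7F
        length = int.from_bytes(buf[i:i + n], "big")
        i += n
    return tag, buf[i:i + length], i + length

def dkim_key_bits(txt_value):
    """Return the RSA modulus size in bits for a DKIM TXT record value,
    accepting dig-style output split into several quoted strings."""
    joined = "".join(part.strip('"') for part in txt_value.split('" "'))
    tags = dict(item.strip().split("=", 1)
                for item in joined.split(";") if "=" in item)
    spki = base64.b64decode(tags["p"])
    _, body, _ = der_read(spki, 0)       # SubjectPublicKeyInfo SEQUENCE
    _, _alg, j = der_read(body, 0)       # skip AlgorithmIdentifier
    _, bitstr, _ = der_read(body, j)     # BIT STRING holding RSAPublicKey
    _, rsa, _ = der_read(bitstr[1:], 0)  # drop unused-bits octet, read SEQUENCE
    _, modulus, _ = der_read(rsa, 0)     # INTEGER n (may have a 0x00 pad)
    pad = 1 if modulus[0] == 0 else 0
    return (len(modulus) - pad) * 8
```

This only handles rsa keys (the k=rsa case); ed25519 keys use a different p= encoding.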
The email product has gobbled up both AOL and Verizon, and they also white-label to a bunch of other ISPs. Just because they are never in the news for anything cool and Hacker News commenters don't use them doesn't mean they don't exist.
LOL. One of my favourite internet flame wars was circa 2007 (in and around discussing the incoming financial crises) and we got talking about encryption and how none of it actually "works".
A particularly vile troll, and iirc also the owner of the site, bet me $50,000 I couldn't reverse the 512-bit RSA key he posted (created by openssl).
He got the factorisation less than an hour after he made the post.
Strangely, the entire site disappeared pretty quickly after that (and it's not on wayback machine).
Given where the math guys are now with GNFS, I'm not sure I would trust 8192-bit RSA in 2024; dropping 512-bit in 2018 was already more than a decade late.
Do you have any proof/quote for that? Some pretty knowledgeable and well-known people in this thread say 2048-bit RSA is quite safe with current capabilities [1].
We detached this subthread from https://news.ycombinator.com/item?id=42633787.
Front-ends are essentially free distributed computing resources while the backends need to be paid for.
Loads of developers everywhere, frontend and backend, go to lengths to optimise their programs. Loads of other developers also don't care.
...Yet companies you'd think would be similarly obsessed with FE perf are, at least in the hiring stage, totally indifferent to that background. C'est la vie.