http://www.cs.bc.edu/~straubin/crypto2017/heys.pdf
My recommendation: print it out to a PDF with huge margins so you can make notes, and then work through all the worked examples.
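If differential cryptanalysis is what you're stuck on, the single most useful exercise in that tutorial is computing the difference distribution table (DDT) of the 4-bit S-box yourself. A minimal sketch in Python; the S-box values are the ones I remember from the tutorial, so double-check them against the PDF:

    # DDT for a 4-bit S-box: entry [dx][dy] counts the inputs x with
    # S(x) ^ S(x ^ dx) == dy. Entries much larger than the average are
    # exactly what a differential attack exploits.
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

    ddt = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            ddt[dx][SBOX[x] ^ SBOX[x ^ dx]] += 1

    for row in ddt:
        print(" ".join(f"{v:2d}" for v in row))

Once the big entries in that table jump out at you, the rest of the tutorial (chaining them through the permutation layer into a multi-round characteristic) reads much more easily.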
Although cryptanalysis of DES did reveal some flaws, the actual breaks, including the ones you'd use today if tasked with breaking DES at scale, all target exactly the two deliberately chosen weaknesses from when it was designed 50 years ago: its key is too small and its block is too short. That's all, and that's enough.
In AES neither of these flaws is present, hence I am not alone in thinking we probably won't ever build another one.
You might think: well, in 50 years we'll have better computers, so that's dumb. Nope. Unless there's an actual mathematical break, what runs out isn't mere human ingenuity, it's plain physics.
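A back-of-envelope version of that physics point, under the most attacker-friendly assumptions I can think of (computation at the Landauer limit, room temperature, and no energy spent on anything except flipping the bits of a counter); the figure for yearly world electricity is a rough round number of mine:

    import math

    # Landauer limit: erasing one bit costs at least k*T*ln(2) joules.
    k = 1.380649e-23            # Boltzmann constant, J/K
    T = 300                     # room temperature, K
    e_bit = k * T * math.log(2)

    # Merely cycling a counter through all 2^128 key candidates, with zero
    # energy spent on actually testing any of them:
    joules = (2 ** 128) * e_bit
    world_electricity_per_year = 1e20   # rough, in joules
    print(f"{joules:.1e} J, ~{joules / world_electricity_per_year:.0%} of a year of world electricity")

And real hardware sits many orders of magnitude above that theoretical floor, before you even account for the work of testing each key.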
While this is most likely sound (due to the sensitivity to initial conditions, aka the avalanche effect), there is a small chance that this creates a mathematical structure that will one day get exploited.
AES is even more exposed to this risk than most designs because it actually uses clean mathematical functions for several of its components (the S-box and the 32-bit linear permutation). No one has been able to exploit this combination yet, though.
Contrast this with SHA-2, for example: it's an unbalanced Feistel permutation with a lot of 'random' nonlinear crap thrown in. SHA-2 can actually be used as a block cipher (SHACAL-2), but there is no hardware acceleration for the inverse permutation, so you would be limited to CTR-like modes.
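To make the 'CTR-like modes' remark concrete: counter mode never invokes the inverse permutation, for either encryption or decryption, so a primitive whose inverse has no hardware acceleration is still perfectly usable there. A generic sketch, where block_encrypt stands in for whatever forward permutation you have (the name is mine, not a real API):

    import struct

    def ctr_xcrypt(block_encrypt, block_size, nonce, data):
        # CTR mode: encrypt a counter block, XOR the result into the data.
        # Only the forward permutation is ever called, and the same routine
        # serves for both encryption and decryption.
        assert len(nonce) == block_size - 8
        out = bytearray()
        for i in range(0, len(data), block_size):
            keystream = block_encrypt(nonce + struct.pack(">Q", i // block_size))
            chunk = data[i:i + block_size]
            out += bytes(c ^ k for c, k in zip(chunk, keystream))
        return bytes(out)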
There are several ways to achieve this. An example would be to replace a few of the XOR (addition modulo 2) operations that introduce subkeys between the AES rounds with another kind of addition instruction, e.g. addition modulo 2^64.
These additions do not commute with the operations in the Galois field used by AES, and they completely destroy the system of equations over that field that is equivalent to standard AES, making any algebraic attempt at breaking AES impossible. Moreover, if the additions replaced XOR in irregular places, some of the rounds would become distinct from the others, breaking any attack that depends on all the rounds being identical. By quasi-randomly replacing the XOR operations with either addition modulo 2^64 or addition modulo 2^32, it is possible to make all the AES rounds distinct, with no pair of identical rounds and only a negligible throughput decrease.
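A sketch of just the key-injection step being described, treating the 128-bit AES state as a Python integer (a toy illustration of the idea, not a drop-in AES modification; the function names are mine):

    MASK64 = (1 << 64) - 1

    def add_round_key_xor(state, round_key):
        # Standard AES: the subkey is mixed in with XOR (addition modulo 2).
        return state ^ round_key

    def add_round_key_mod64(state, round_key):
        # The variant described above: split the 128-bit state into two
        # 64-bit halves and add the subkey halves modulo 2^64. Unlike XOR,
        # this operation does not commute with the GF(2^8) algebra of the
        # AES round, which is what destroys the clean system of equations.
        lo = (state + round_key) & MASK64
        hi = ((state >> 64) + (round_key >> 64)) & MASK64
        return (hi << 64) | lo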
Also, using the usual hardware AES instructions, it is simple to implement the Rijndael variant with a block size of 256 bits, which is much stronger than the standard AES with its 128-bit block (this was described in an Intel application note when the AES instructions were introduced with the Intel Westmere CPUs in 2010).
So any possible advances in the cryptanalysis of AES will only matter for the decryption of old recordings of encrypted information.
Future encrypted information can easily be protected against any such advances with software changes alone.
There are more promising Feistel-like constructions on top of AES operations such as Areion: https://eprint.iacr.org/2023/794
The shuffle of Rijndael-256 is suboptimal, being a derivative of the shuffle designed for a 128-bit block, so it is certainly possible to devise something better when designing specifically for a 256-bit block. Rijndael-256's only advantage is that it is a quasi-standard, which received some cryptanalysis during the AES competition.
I have not studied Areion, but at first glance I agree with you that it seems promising.
The SHACAL-2 permutation, though, is much less mathematically structured than AES. It's an augmented ARX unbalanced Feistel design (with additional non-linearities). It's hard to imagine you could reconstruct any usable mathematical structure in that mess. It also has a strong key schedule that is not vulnerable to related-key attacks (AES's is), which is by design, given its hashing application. It has a 512-bit key space too, which allows for easy nonce integration.
The only vulnerabilities of the iterated construction appear when a weak method is used for generating the round keys from the cipher key (i.e. when the so-called key schedule is weak), so that there are predictable relationships between the round keys.
There exists an alternative (and equivalent) construction for a block cipher, in which the same key is introduced in every round, but in this case all the round functions must differ from each other (instead of iterating the same function).
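A toy sketch of that second shape, purely to show the structure (nothing here is a real cipher): the same key enters every round, and per-round constants are what make the round functions differ from each other.

    MASK = (1 << 64) - 1
    # Nothing-up-my-sleeve round constants (e.g. hex digits of pi); these are
    # what distinguishes the rounds when the key itself never changes.
    ROUND_CONSTANTS = [0x243F6A8885A308D3, 0x13198A2E03707344,
                       0xA4093822299F31D0, 0x082EFA98EC4E6C89]

    def toy_round(x, rc):
        x = (x + rc) & MASK                      # round-specific constant
        x ^= ((x << 13) | (x >> 51)) & MASK      # rotate-and-XOR, toy diffusion
        return (x * 0x9E3779B97F4A7C15) & MASK   # odd multiplier, invertible mod 2^64

    def toy_cipher(block, key):
        state = block
        for rc in ROUND_CONSTANTS:               # same key in every round,
            state = toy_round(state ^ key, rc)   # but no two rounds identical
        return state ^ key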
It took almost two decades to explain that to sitting presidents and Congress.
'We' have been telling 'them' that shit won't scale since at least the 90's.
Everyone forgets about Al Gore's black sheep - the Clipper Chip.
BTW, if you are keeping score: stop. Every country has things they do well and things they do badly. Look for and fix your local bads; this isn't a game of who is better, it is a game everyone should win.
I had my first tech job thanks to his NSF funding as Senator Gore, but he was also a spook, and in the end it evened out.
> BTW, if you are keeping score: stop.
Yeah maybe not doing so good at that. But you always have to watch out for that time when your 'friends' will accidentally sell you out if you don't reiterate your social and professional boundaries with them from time to time.
I would quibble with the deliberate part for the block length (and perhaps strengthen your point). DES came out in the 70s and people were still designing 64-bit ciphers in the 90s. Examples: IDEA, CAST5, Blowfish. I think it is most likely that the 56-bit key length was really the only factor intended to make DES deliberately weak. Gigabyte-scale files/sessions were just not a thing back then.
Therefore there is no doubt that both the reduction of the key length and the reduction of the block length were deliberate.
The only thing that can be argued is that the motivation for the reduction in the block length may have been less about weakening the cipher and more about reducing the cost of the hardware, on the assumption that the cipher would be used only to encrypt amounts of data that are very small by today's standards, but which could have seemed big in the mid seventies.
1. Someone would have an encrypted session or file that ran into the 10s of GB (see the back-of-envelope after this list).
2. The signals intelligence entity would have to somehow have gotten access to that much data. Did they tap a telephone line and intercept a 300 bps modem connection for 30 years straight? Did they get access to half a billion encrypted punch cards?
3. The signals intelligence entity would have enough processing power and memory to actually find the collisions.
4. The signals intelligence entity would be able to extract some useful information out of 1 or 2 colliding 8-byte blocks randomly chosen out of billions in the same file/session. The extreme unlikeliness of that is why such collisions are still of little practical use today.
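The back-of-envelope for point 1: with a 64-bit block, matching ciphertext blocks only become likely around the birthday bound of 2^32 blocks under a single key.

    # Birthday bound for a 64-bit block cipher: collisions among ciphertext
    # blocks become likely around 2^(64/2) = 2^32 blocks under one key.
    blocks = 2 ** 32
    block_bytes = 8
    print(blocks * block_bytes / 2 ** 30, "GiB")   # -> 32.0 GiB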
On the other hand, a short block size greatly complicates the use of a block cipher in other useful algorithms, e.g. key derivation algorithms, RNG algorithms, MAC algorithms, etc.
The necessity of using a block cipher for such purposes as well became very obvious almost immediately after the standardization of DES, but by then it was too late.
Such applications were seriously hindered for a couple of decades, until the mid nineties, by the lack of any other publicly known strong block ciphers. Many workarounds were published with the purpose of using DES even where its short block length was unacceptable, but all of them were less efficient than using a block cipher with a larger block size. They were designed only because it was expected that using the already existing hardware integrated circuits that computed DES would be more efficient than implementing a block cipher in software on the slower computers of that time.
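As one concrete example of the kind of construction that was being hindered: a plain CBC-MAC is just CBC encryption with a zero IV where only the last block is kept as the tag, and its security degrades roughly at the birthday bound of the block size, i.e. at the same ~32 GiB point computed above for a 64-bit block. A minimal sketch with AES-128 via the Python cryptography package (fixed-length messages only; real designs like CMAC add tweaks for variable lengths and padding):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cbc_mac(key, message):
        # Toy CBC-MAC: CBC-encrypt under a zero IV and keep only the final
        # ciphertext block as the tag. Secure only for fixed-length messages.
        assert len(message) % 16 == 0
        enc = Cipher(algorithms.AES(key), modes.CBC(b"\x00" * 16)).encryptor()
        ciphertext = enc.update(message) + enc.finalize()
        return ciphertext[-16:]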
Yes, people can upgrade but nobody fucking will until you impress upon them how stupid they're being by gambling the entire company on carrying that debt for another year.
Of course the max RSA key lengths on the card weren't up to it anyway (kids: if you buy crypto hardware and don't use it immediately, don't warehouse it looking for a problem for your solution), but at least I got to put my foot down and we only shipped with SHA-1 and SHA-2 support.
I'm currently taking Cryptography at university and I find the resources online to be quite scarce. I mostly find myself reading Wikipedia. I don't know if I'm missing some background knowledge but some of the math notations tend to be quite difficult to understand. I have spent around 10 hours trying to understand Differential Cryptanalysis unsuccessfully!
The standard recommendation these days is Aumasson's Serious Cryptography. I like David Wong's Real-World Cryptography as well.
I just checked and it has been a whopping 12 years since I purchased/read the book, so I retract my recommendation.
I would actively recommend against using it as a guide in 2025. But you're not crazy to have liked it before. Funny enough, 12 years ago, I wrote a blog post about this:
https://sockpuppet.org/blog/2013/07/22/applied-practical-cry...
I checked my blog and I also wrote a post about some crypto related things shortly after I purchased the book. It's a post about a bug in the JDK that I stumbled across, which I am certain I would not have understood without Bruce's book:
https://blog.heckel.io/2014/03/01/cipherinputstream-for-aead...
Btw, I was a bit of a fanboy back then and I got the signed copy of the book, haha.
It's true that practical side-channel leaks on software block cipher implementations tend to be microarchitectural (e.g. cache timing), but that's only because the "easier" attacks are already mitigated or considered out of scope (e.g. no physical access).
It's not quite the same "lab exercises" format but the CTF challenge files are public, so you can try it for yourself.