There's a lot of multiplication of numbers in parallel, so it makes sense to try to fit that to matrices.
Cryptography is built bottom-up, but likewise it makes sense to exploit data structures that already exist in silicon.
That sounds interesting. Where have you heard about that? Or is this your own research?
This is exactly the point. I was disappointed that I had to scroll so far down the page until I saw the word "entropy." There is a deep connection between machine learning and encryption and compression in information theory. As Shannon demonstrated, the one-time pad's encrypted output is maximum entropy, and so would data compressed to the Shannon limit. Such an optimal compressor learns the underlying probability distribution of the data to represent it with the fewest bits possible, which is exactly the goal of machine learning. A trained ML model can be seen as a lossy compression of the training data. Autoencoding models make the link between ML and compression (and thus encryption) explicit.
I think the contrast is more interesting: exact discrete trajectories in cryptography versus approximate continuous function approximation in neural networks.
In cryptography, you usually want a state space so large that nobody can accidentally find, reconstruct, or predict the same path you took.
In neural networks, you want an immense initial search space because NNs need to model the real world, which is highly complex and contains patterns that appear unpredictably. One aspect I think is often overlooked is that NNs are mostly deletive: they start with a very broad representational space and become progressively more specific by discarding what the NN perceives as irrelevant distinctions.
I think this puts the article's point about complexity and mixing in a clearer light. The same class of procedures achieves almost opposite effects. In neural networks, you want mixing so the model can approximate many possible paths at once. In cryptography, you want mixing so the path taken is unpredictable and hard to trace. The key difference is that, for NNs, an approximate path can be good enough. In cryptography, an approximate path is as useless as a very distant one.
- "Why are neural networks and cryptographic ciphers so similar? (2025)" https://reiner.org/neural-net-ciphers .. https://news.ycombinator.com/item?id=47983467
- "Show HN: ResonanceNet – Proof-of-Training Blockchain" https://news.ycombinator.com/item?id=47386901
- "Cryptographic hashing as a transformer attention head" https://github.com/ffr1/unbounded-context-attention
- 'Implementation for NDSS'2025 paper: "TensorCrypt: Repurposing Neural Networks for Efficient Cryptographic Computation"' https://github.com/OSUSecLab/TensorCrypt :
> More specifically, with a program translation framework that converts traditional cryptographic algorithms into NN models, our proof-of-concept implementations in TensorFlow demonstrate substantial performance improvements: encryption speeds for AES, Chacha20, and Salsa20 show increases of up to 4.09×, 5.44×, and 5.06×, respectively, compared to existing GPU-based cryptographic solutions written by human experts.
I've already finished the Cryptography I course on Coursera. Can't recommend it enough.
2) Cryptography: Theory and Practice by Douglas Stinson et al. This is a more mathematical treatment and hence a nice complement to the Paar book above - https://www.routledge.com/Cryptography-Theory-and-Practice/S...
3) For understanding how cryptography is used in networks, see the classic Network Security: Private Communication in a Public World by Radia Perlman et al. The 2nd edition is where I started my journey into the network security/cryptography needed for my job at the time. Highly recommended - https://www.amazon.com/Network-Security-Charlie-Kaufman/dp/0...
The first two books give you the "mechanisms" (and theory) of cryptography, i.e. the building blocks. The last book puts everything together to implement "policies" via practical applications (e.g. IPSec/SSL etc.) for the real world. They are complementary and hence should be studied together to get the full picture.
For those feeling like this guy is being a dick, that's a normal reaction, but try to understand that this attitude was cultivated and used by enlightened individuals back when many people thought they were making secure software without even salting and hashing their users' passwords. People thought md5 was good enough, https was barely being used, people had to be convinced to use ssh instead of telnet and ftp, and so forth. Drilling the idea into people that cryptography and security is difficult and that you should listen to experts had to be done. Don't take it personally.
So yes, as you study cryptography, do keep in mind that it's extremely unlikely that you'll learn enough to come up with something better on your own, that you will very likely make mistakes if you try to write your own implementation of any cryptographic algorithms, and that you should still just use existing libraries and the recommendations of experts on best practices.
He's a perfectly nice guy with a lot to say about information security and its intersection with public policy. But I think it's been plural decades since he basically declared himself outside of modern cryptography (you could call it at the point where he said he didn't "trust the math" of elliptic curves, which he left out of Practical Cryptography, over 26 years ago).
It's not so much that you should or shouldn't take "Applied Cryptography is bad" personally; rather: if you think Applied Cryptography is a useful reference or learning tool, it's pretty important to know that it is not.
Any recommendations for a technically competent person whose math knowledge trails off at Calc 2?
It's straightforward to get yourself to a place where you can do cryptographic things and feel somewhat comfortable with what's happening. Truly understanding it to the point where you can reason safely about it is deceptively hard.
For example, the proof that there are infinitely many primes looks simple [0], but it's still pretty hard to understand, let alone derive yourself independently. And the other important cryptography/number theory theorems like Euler's totient theorem [1] are even trickier.
Along those lines, you do not need to understand the proof of Euler's totient theorem to understand cryptography. It is a distraction. All you need (at most) is to know that the result is true, and even then it's only fundamentally important for RSA, which you likely shouldn't bother learning about these days. RSA simultaneously
1. looks very simple (though the simple version is horrendously insecure), and
2. does not have particularly good performance, and
3. does not have particularly good security (either post or pre quantum), and
4. has been in the process of being phased out for quite some time now.
this is not a good combination of properties. The fact that cryptography textbooks cover it is mostly due to historical tradition. I would personally argue it is time to omit it from instruction materials.
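To make the "looks very simple" point concrete, here is a toy sketch of textbook RSA with tiny made-up numbers (my own illustration, not anything you should ever use): Euler's totient theorem is exactly what makes the decrypt step undo the encrypt step, and the simplicity you see here (deterministic, no padding, tiny modulus) is also why the naive version is horrendously insecure.

```python
# Toy textbook RSA with tiny numbers, purely to show how simple it looks.
# The same simplicity (deterministic, unpadded, tiny modulus) is why the
# naive version is insecure.
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent: e*d = 1 (mod phi)

m = 42                      # "message" as an integer < n
c = pow(m, e, n)            # encrypt: c = m^e mod n
assert pow(c, d, n) == m    # decrypt works because of Euler's totient theorem
```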
Well, I had to when I learned cryptography, but I learned it from a class offered by the math department, so I guess that's rather unsurprising :).
> even then it's only fundamentally important for RSA […] this is not a good combination of properties
Strong agree here.
1. comfort with logical operations/arithmetic over F2 (a tiny example of this follows below)
2. discrete probability over finite sets
3. some basic complexity theory (mostly to reason about running time, though being familiar with proofs by reduction can help as well if you actually want to do security proofs).
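For the first item, a minimal illustration (my own, with made-up helper names) of why arithmetic over F2 is just bitwise logic: addition is XOR and multiplication is AND, which is why those operations show up everywhere in cipher designs.

```python
# Arithmetic over F2 (the field {0, 1}): addition is XOR, multiplication is AND.
def f2_add(a: int, b: int) -> int:
    return a ^ b   # 1 + 1 = 0 in F2

def f2_mul(a: int, b: int) -> int:
    return a & b

assert f2_add(1, 1) == 0 and f2_mul(1, 1) == 1
# The same idea lifts to whole words: XOR of two bitstrings is
# componentwise vector addition over F2, e.g. 0b1010 ^ 0b0110 == 0b1100.
```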
a decent idea might be to take some "good" undergraduate cryptography class's course resources and use those. For example, Mihir Bellare is an extremely accomplished cryptographer. The course materials for his F2018 undergrad course are
https://cseweb.ucsd.edu/~mihir/cse107/slides.html
He's also written a longer series of lecture notes on cryptography that's freely available. I don't know where it is on his webpage these days, but you can find it below
https://www.cs.tufts.edu/comp/165/papers/Goldwasser-Bellare-...
the difficult part with this approach is not being able to ask questions that easily. To "fix" this, you can either
* use AI, though that has its own issues, or
* use some community forum, such as crypto.stackexchange.com
if you want a full book, the typical (undergraduate) one that roughly matches the above syllabus is "An Introduction to Modern Cryptography" by Katz and Lindell.
I've also heard good things about Mike Rosulek's The Joy of Cryptography
https://joyofcryptography.com/
Boneh and Shoup have a decent (freely available, and very comprehensive) textbook at the graduate level
but it is following (roughly) the standard undergraduate curriculum, so if the slides I linked to are too sparse at some point, you could look up that topic in Boneh and Shoup (or use Boneh and Shoup as context to ask an LLM more targeted questions).
That all being said, the main difficulty for someone in your position is likely determining "what to learn" in cryptography. The easy thing would be to follow the standard undergraduate track, but if you're interested in any particular topic there are likely better routes to take.
A large part of this book is aimed at the readers who want to know why we designed Rijndael in the way we did. For them, we explain the ideas and principles underlying the design of Rijndael, culminating in our wide trail design strategy.
E.g. graphs, matrices, linear algebra systems, digital photos (and much more) can all be perfectly transformed into each other; put another way, these are all different ways to look at the same data. (This is also not just hypothetical, see *2.)
As a side effect, saying things are similar "because they are just matrix algorithms" is meaningless, because most things are "just a matrix algorithm". (It's also meaningful for the same reason, as it means you can transform many problems into ones with well-understood solutions.)
And the high-level abstraction of the "encoder -> state -> decoder" structure is another of these "too generic/meaningless" things. Since state can be anything, encoding is just "process input to generate state", and decoding is just "generate output from state", we can model most algorithms with it. For example, the identity function is now `encode(input): state=input; decode(state): output=state` (and indeed, as long as the state is large enough wrt. the input (or unbounded), you can train an encoder-decoder network to do exactly that, as meaningless as that seems).
Similarly, you can treat everything as everything else within cryptography itself: all of hash, PRNG, and stream cipher can easily (kinda, *3) be built from one another, and like most algorithms in existence they can be formulated as "consume data and then produce output", i.e. as an encoder-decoder pattern ;)
So IMHO it's mostly a combination of observer bias due to how we like to model things (from a very high-level POV), and a construction bias from that plus what computers can compute well (from a slightly less high-level POV, when looking at somewhat "arbitrary choices").
I know some of the examples here might sound a bit ridiculous, but they reflect some of the most important insights in CS:
- a lot of algorithms can be transformed (or modeled as) a lot of other algorithms, take advantage of it
- just because something looks alike, it neither means it is alike nor that it being alike has any deeper meaning (e.g. two graphs might look alike, but only for the subset of sample data you happen to use to plot them)
---
(* 1): Anything related to quantum physics seems to be especially badly affected by it.
(* 2): your navigation system might navigate by mapping a navigation graph to a matrix and then computing on the matrix, where the steps involved might treat it as a linear algebra system solved using some fast approximation.
(* 3): At least some of the conversion directions are easy. Some are a bit less intuitive. Also, this assumes you either have perfect properties on the building block used, or don't have to estimate how its imperfect properties map to properties of the created construct... Oh, and naturally you can use some simple operations in the transformation (add, xor) as long as they are used in a straightforward way. E.g. PRNG(seed, offset) = HASH(encode(seed, offset)), with encode being a bijective pairing function (e.g. `bytes_128bit_le(seed)+bytes_64bit_le(offset)`). E.g. for encryption you chunk the data, xor every chunk with a one-time-use key, and generate that key using HASH(encode(key, nonce, chunk_idx)), or PRNG(encode(key, nonce), chunk_idx). That is how AES-CTR/AES-GCM does encryption. It (roughly) uses XOR(AES(key, encode(nonce, offset)), chunk). (Yes, AES is used more like a hash than a block cipher in most modern AES ciphers...)
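To make that footnote concrete, here is a minimal sketch (my own illustration, using SHA-256 as the hash; not a vetted cipher and not how AES-GCM is actually implemented) of a CTR-style stream cipher built from a hash: the per-chunk keystream is HASH(key || nonce || chunk_idx), and encryption and decryption are the same XOR.

```python
# Illustration only, NOT a vetted cipher: a CTR-style stream cipher from a hash.
import hashlib

def keystream_block(key: bytes, nonce: bytes, index: int) -> bytes:
    # "PRNG(encode(key, nonce), chunk_idx)": hash the encoded state
    # (key, nonce, block index) to get a one-time 32-byte key per chunk.
    state = key + nonce + index.to_bytes(8, "little")
    return hashlib.sha256(state).digest()

def xor_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation: XOR each chunk
    # with its per-chunk keystream block.
    out = bytearray()
    for i in range(0, len(data), 32):
        chunk = data[i:i + 32]
        ks = keystream_block(key, nonce, i // 32)
        out.extend(c ^ k for c, k in zip(chunk, ks))
    return bytes(out)

msg = b"hello, stream cipher built from a hash"
ct = xor_ctr(b"k" * 32, b"n" * 12, msg)
assert xor_ctr(b"k" * 32, b"n" * 12, ct) == msg  # XOR twice recovers the plaintext
```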
Carcinization is usually described as arising from a kind of "shared condition." After reading this article, that is what I felt here.
In other words, from the perspective of shared conditions, isn’t it possible that systems receive similar pressures when they need to mix information?
1. There is a state space.
2. Each part of the input affects many parts of the output.
3. A simple rule is not enough, so nonlinearity becomes necessary.
4. But the hardware cannot be allowed to stall, so the system evolves toward a structure where simple transformations are repeated many times.
Ultimately, even across different fields, the core question is how to decompose complexity into atomic units. The choice of those units tends to converge under the pressures imposed by the underlying substrate. This seems to be the central thesis of the article.
This feels similar to how humans solve nonlinear differential equations.
If so, perhaps the structure of human cognition itself works in a similar way: when facing nonlinearity, we break it into smaller structures and design around those smaller parts.
Because my academic background is limited, I find it difficult to express this properly in language. But I think this kind of pressure can also be applied to programming and software theory.
When I think about software engineering, it also often starts from the smallest element that does not change easily, and then builds larger systems by composing those elements. In OOP, that unit is the conceptual object. In FP, it is the function. In DOP, it is data.
FP is mathematical. DOP is aligned with the data that computers store and transmit. OOP is connected to our abstract model of the world. That may be why different people are good at different paradigms.
OOP compresses the world into objects and responsibilities. FP compresses the world into functions and composition. DOP compresses the world into data and transformable structures. Ultimately, it is a question of how we cut complexity, and what we choose as the minimal unit of decomposition.
Then what should this idea be called? And if we apply this to AI coding, what would it imply?
I have thoughts, but because I did not study enough, I feel frustrated that I cannot express them more fully. I wish I had learned more.
These are well-expressed thoughts on a really hard subject.
The central problem of cryptology is to prevent inference about either the key or the plaintext, despite the requirement to be able to reconstruct the plaintext from the ciphertext+key. So ciphers have to almost perfectly mix information.
Machine learning is possible because in the absence of perfect mixing, inference is possible (given many input output pairs), even if the information is many decibels down below the noise. So the information about what parameters need changing is present in the output despite many subsequent layers of processing. This means that a lot of mixing can be tolerated, and it's needed because you don't know in advance what the data flow should look like in detail, so the NN has to provide as many options as possible.
yesn't
most modern stream ciphers basically use XOR for encryption with one-time-use keys per chunk (e.g. AES-CTR, AES-GCM, AEGIS, ChaCha20, etc.)
no mixing of bits is needed there, just high-entropy, uniformly distributed one-time-use keys generated per block, i.e. you need a "good enough" PRNG
practically, the easiest way to get them is by doing something similar to a hash on the state (key, nonce, index) in some form, which is likely done by mixing up information, hence the yes in yesn't.
but any PRNG with sufficient properties would do, and there probably are some which use clever math that you wouldn't describe as "mixing information".
It's just that "shuffling bits" + "bad one-way function" is often "sufficiently" secure and faster than the alternatives.
And historically, many ciphers (e.g. the AES block cipher) come from a time when we didn't yet have great frameworks/know-how for assessing security properties and writing cryptography. Hence why they used all kinds of ways of mixing information and chaining, which sometimes are quite... arbitrary.
It might be easy to assume AES stuck around because it's "just great", but that is plain wrong. It stuck around because it spread everywhere (including standards/requirements) before we knew how best to do things, and due to that it ended up with hardware acceleration support on most chips. But no one would design it that way anymore (it is prone to side-channel attacks if you don't have HW acceleration or use bitslicing trickery, which makes it slow). But because everything has AES hardware acceleration, it became a very fast building block. Hence why most modern ciphers still use (part of) it, and even some hashes and other algorithms use it... It's another example of how a "good enough" and widespread technology often wins, not the best.