jandrewrogers 19 hours ago [-]
This is surprisingly common.
The security of UUIDv4 is based on the assumption of a high-quality entropy source. This assumption is invalidated by hardware defects, normal software bugs, and developers not understanding what "high-quality entropy" actually means and that it is required for UUIDv4 to work as advertised.
It is relatively expensive to detect when an entropy source is broken, so almost no one ever does. They find out when a collision happens, like you just did.
UUIDv4 is explicitly forbidden for a lot of high-assurance and high-reliability software systems for this reason.
LocalH 18 hours ago [-]
This is why CloudFlare has done what they did with the lava lamp wall. Not that the wall is such a great source of entropy on its own - I'm sure it's not their only source, but you can never have too many sources of entropy - but it makes it visible in a way that can grab those who don't fully understand the concepts of RNGs and how entropy plays into that.
The more sources of entropy, the more closely you approach "perfect" randomization. And a large chunk of those entropy sources need to be non-deterministic. Even at the small scale, local applications running on local systems, like games, can use things like mouse coordinates, the timings between button presses, or the exact frame count since game start before the player presses Start, to greatly enhance randomness while still using PRNGs under the hood.
Yes, for the latter, that's technically deterministic (and the older the game in question, the more deterministic it is; see TAS runs of old games obliterating the "RNG"). But when you have fifty different parameters feeding into the initial seed, that's fifty things an attacker would have to perfectly predict or replay (and there are other ways to avoid replay attacks that can be layered on top)
If CloudFlare had less than 100 different sources of entropy, I'd be disappointed. And that's assuming their algorithm for blending those entropy sources into a single seed value is good
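To make that blending step concrete: a minimal sketch (assuming a Node-style environment) of folding many weak sources into one seed with a cryptographic hash. The sources listed are placeholders, not Cloudflare's actual inputs:

    import { createHash } from 'node:crypto';

    // Blend arbitrarily many entropy sources by hashing them together.
    // If any single input is unpredictable, the 32-byte digest is too.
    function mixSeed(sources: Array<string | number | bigint>): Buffer {
      const h = createHash('sha256');
      for (const s of sources) h.update(String(s) + '|'); // '|' delimits fields
      return h.digest(); // 32-byte seed
    }

    // Example inputs: wall clock, high-resolution timer, process id, ...
    const seed = mixSeed([Date.now(), process.hrtime.bigint(), process.pid]);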
greiskul 14 hours ago [-]
> you can never have too many sources of entropy
This is so true. And the beauty is that with algorithms, we don't even need to know much about the entropy to be able to extract it.
There is the Von Neumann method of generating an unbiased coin from a biased coin: throw it twice and check whether you got HT or TH, completely discarding all HH or TT results. It doesn't matter whether the coin you are using is 20% or 80% heads; the result will be a true 50/50.
There are more modern algorithms that can be even better (in that they need less coin tosses if you have a very unbalanced coin).
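A minimal sketch of the Von Neumann trick described above; biasedBit is a stand-in for any fixed-bias 0/1 source (Math.random is used here only to simulate one):

    // Simulated biased coin: returns 1 with probability p.
    function biasedBit(p = 0.8): number {
      return Math.random() < p ? 1 : 0;
    }

    // Von Neumann extractor: toss twice, keep the first bit only when the
    // two tosses differ. P(10) = p(1-p) = P(01), so the output is 50/50.
    function unbiasedBit(): number {
      for (;;) {
        const a = biasedBit();
        const b = biasedBit();
        if (a !== b) return a; // 10 -> 1, 01 -> 0, each equally likely
        // 11 or 00: discard the pair and toss again
      }
    }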
And then there is modern cryptographic hashing. Feed it all the bits you can. Collisions end up only happening in the real world if every single one of those bits is identical. So if you have actual entropy being fed, that cannot be controlled, predicted, or replicated, modern cryptography tells you that the end result is unique.
ms_menardi 7 hours ago [-]
> There is the Von Neumann method of generating an unbiased coin from a biased coin. Of throwing it twice, and checking if you got HT or TH. And completely discarding all HH or TT results. It doesn't matter if the coin you are using is 20% or 80%, the result will be a true 50/50.
This blew my mind. Thank you!
I had to think about it a bit, so for anyone scratching their head right now trying to figure it out, consider it this way:
what matters is the ordering, of heads-then-tails, or tails-then-heads.
It doesn't matter that it's biased one way or the other, if you keep flipping pairs until you get a result with two different values, it's a 50/50 chance whether the less-likely result comes first, or second.
You might only have a 20% chance of any particular pair having a tails (for example), but in the cases where you do have a tails, it's a 50/50 chance that it comes first or second.
susam 6 hours ago [-]
And for people who like equations, here is my attempt at explaining it.
Assume each flip is independent and the bias remains the same on each flip.
You don't need conditional probability here, as the flips are independent.
It's just p(H)p(T).
And p(H)p(T) = p(T)p(H), thus 2*p(H)p(T) = 2p(1-p).
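Or, fully spelled out in LaTeX:

    P(\mathrm{HT}) = p(1-p) = P(\mathrm{TH})
    \quad\Longrightarrow\quad
    P(\mathrm{HT} \mid \mathrm{HT} \cup \mathrm{TH})
      = \frac{p(1-p)}{2\,p(1-p)} = \frac{1}{2}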
aws_ls 2 hours ago [-]
Thanks for your explanation. I did not get it in the first read, and was too lazy to think, until saw your comment.
Just want to point out that when one actually does the experiment with a biased coin, the pairs must be non-overlapping.
E.g. with a coin that is heavily biased, say .9 H and .1 T: take the flips strictly in pairs (1st-2nd, 3rd-4th, ...), discarding every HH or TT pair and moving on to the next pair. If you instead scan a run like HHHHT and greedily pick out the first HT you see (pairing the 4th and 5th flips even though the 4th flip already belonged to an HH pair), the experiment becomes HT-biased.
taegee 5 hours ago [-]
Afaics it's just basic commutativity –
p(H)p(T) = p(T)p(H) – since instances are independent.
Same, of course, holds for flipping it multiple times. But there you get more outcomes than Head or Tail (binomial coefficients, C(n, k)).
throwaway89865 7 hours ago [-]
Not very random if it's only TH or HT. Trivial to brute force with no more than two tries!
hun3 46 minutes ago [-]
(Note that this still assumes that each biased-coin toss is i.i.d.)
victorbjorklund 17 hours ago [-]
If I understand it correctly, the lava lamps are 90% PR/fun. They have a lot of other entropy sources that scale better.
pverheggen 15 hours ago [-]
Yes, they also have wave machines, pendulums, and mobiles :)
Wouldn’t thermal noise in a resistor make more practical sense?
nyrikki 11 hours ago [-]
The original from SGI back in the mid 90's, before CPUs had RDRAND instructions etc., was an actually practical solution.
At the time I was at the Internet company that originally got online-gaming banned in the US, we were looking at CCDs and Cesium emitters that required a license etc...
While I am not sure, it seems cloudflare basically implemented one after SGI's[0] patent expired.
The patent and the licensing cost of dealing with SGI were a major blocker for us doing it; the startup closed before we found a real solution. The best PRNGs like Blum Blum Shub were way too slow at the time, though things did improve quickly.
Would a CRT TV tuned to channel 3 and no RF input be a good source?
ssl-3 57 minutes ago [-]
In the sense that RF noise can be a source of entropy: Sorta*. But one doesn't need the whole thrift-store television set to do that; the visual aspect of a CRT displaying analog video snow just adds style points**.
*: Sorta, because if someone discovers that the entropy is derived from an analog TV tuned to channel 3, then they also know how to influence it from outside.
**: Style points can have value; it's OK to have fun with work. But that's a secondary function.
unilynx 16 hours ago [-]
The noise probably makes the lava lamp wall just as effective as pointing the camera at the Mona Lisa - the lamps themselves are not that unpredictable frame-to-frame.
LocalH 15 hours ago [-]
For the record, the lamps and camera are present in their lobby afaik, so you can actually go there, stand in front of them, and slightly affect the entropy.
A cool parlor trick, certainly.
throw-the-towel 15 hours ago [-]
Speaking of ants, Fourmilab (i.e. John Walker, of Autodesk fame) used to provide a random number generator powered by background radiation: https://www.fourmilab.ch/hotbits/
You can get entropy just by plugging an oscilloscope into a pile of dirt and cranking the gain up.
adrian_b 14 hours ago [-]
Any high-gain amplifier can be used, with its input connected to a resistor or a diode.
For instance you can use the microphone input of a PC, together with an additional external amplifier made with an audio amplifier integrated circuit or an operational amplifier integrated circuit and with a diode or a resistor at its input. The microphone input of PCs provides a 5 V voltage that can be sufficient as a power supply for a noise source plugged in it.
Such a true RNG can be made on a small PCB with an audio jack, so you can plug it into any PC with a microphone input and have a true RNG that you can trust better than the RNG included in modern Intel and AMD CPUs. In the past, many AMD CPUs had defective internal RNGs. Moreover, both for Intel and for AMD it is impossible to verify whether the internal RNG does what it claims to do or whether it generates predictable pseudo-random numbers.
tliltocatl 2 hours ago [-]
Meh. The problem is that it might start receiving your local radio station and end up deterministic enough to screw you. So you need to shield the dirt properly.
bmitc 3 hours ago [-]
> This is why CloudFlare has done what they did with the lava lamp wall.
Interesting. I wonder how true it actually is that they use it like they claim here: https://www.cloudflare.com/learning/ssl/lava-lamp-encryption.... It's in one of their lobbies, so doesn't that make it susceptible to an attack in some way? I'm not knowledgeable enough to know, but I figured if they actually used that method, they'd have a more controlled environment.
I also don't fully understand it. A large part of that wall is static. And the camera isn't going to pick up on the stochastic properties of the lava as much as exists in the real world. So it feels like their images will be very statistically similar.
Yep - I've seen legitimate-looking dups on bad hardware, and "there are a ton of trailing zeros" is also an incredibly common duplicate mode for some UUID libraries (like earlier Go ones that didn't validate the "requested N bytes, returned 3, you must re-request to get N-3 more" return values). It doesn't happen on most hardware or OSes, so people never check it, so it just comes up in production some day with tens of thousands of collisions.
thecloud 18 hours ago [-]
Thanks for the insight! Mind expanding on what alternatives are being used in high reliability systems instead of UUIDv4?
jandrewrogers 18 hours ago [-]
In high-reliability systems a criterion for identifier design is easy detection of defective identifiers. This includes buggy systems and adversarial manipulation.
The problem with UUIDs that rely on entropy sources is that it is computationally expensive to detect if the statistical distribution of identifiers is diverging from what you would expect from a random oracle. I've written systems that can detect entropy source anomalies but you'll want to turn it off in production.
It is pretty cheap to sanity check most non-probabilistic identifier schemes. UUIDs that use broken hash algorithms (e.g. UUIDv3/5) or leak state (e.g. UUIDv7) are exposed to adversarial exploitation.
The identifier scheme is dependent on the use case. Does the uniqueness constraint apply to the instance of the object or the contents of the object? Is the generation of identifiers federated across untrusted nodes? How large is the potential universe of identifiers?
The basic scheme I've seen is a 128-bit structured value that has no probabilistic component. These identifiers can be encrypted with AES-128 when exported to the public, guaranteeing uniqueness while leaking no internal state. The benefit of this scheme is that it is usually drop-in compatible with standard UUID even though it is technically not a UUID and the internal structure can carry useful metadata about the identifier if you can decrypt it.
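A minimal Node sketch of that encrypt-on-export step, assuming a 16-byte internal structured ID and a service-held AES-128 key (all names here are hypothetical):

    import { createCipheriv, createDecipheriv } from 'node:crypto';

    // One AES block is a keyed permutation of 128-bit values, so distinct
    // internal IDs always map to distinct public tokens, and back again.
    function exportId(internalId: Buffer, key: Buffer): Buffer {
      const c = createCipheriv('aes-128-ecb', key, null); // single block, no IV
      c.setAutoPadding(false); // input is exactly 16 bytes
      return Buffer.concat([c.update(internalId), c.final()]);
    }

    function importId(publicToken: Buffer, key: Buffer): Buffer {
      const d = createDecipheriv('aes-128-ecb', key, null);
      d.setAutoPadding(false);
      return Buffer.concat([d.update(publicToken), d.final()]);
    }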
Federated generation across untrusted nodes requires a more complex scheme, particularly if the universe of identifiers is extremely large. These intrinsically have a collision risk regardless of how the identifiers are generated.
All of the standardized UUIDs really weren't designed with the requirements of scalable high-reliability systems in mind. They were optimized for convenience and expedience, which is a perfectly reasonable objective. Most people don't need an identifier system engineered for extreme reliability, even though there is relatively little cost to having one.
eaf7e281 16 hours ago [-]
> leak state (e.g. UUIDv7)
But according to PostgreSQL, UUIDv7 provides better performance in the database, so is this essentially a trade-off between security and speed?
jubilanti 16 hours ago [-]
Yes, because UUIDv7 gives up some random bits in order to include the timestamp, which is done in a way that makes UUIDv7s quick to sort by timestamp.
ai_slop_hater 14 hours ago [-]
How does including the timestamp expose me to adversarial exploitation?
danpalmer 12 hours ago [-]
It reveals the time you created the UUID, for one. That can lead to a bunch of problems.
goalieca 11 hours ago [-]
I’ve not come across any.
filcuk 18 hours ago [-]
The latest UUID (7?) uses half random gen, half timestamp. This not only makes it sortable by creation time, but would also make a collision like this impossible.
stanmancan 18 hours ago [-]
It's still possible in most implementations of UUIDv7.
UUIDv7 assigns the first 48 bits to the timestamp in milliseconds. You can generate a lot of UUIDs in a millisecond though!
Then you have another 12 bits that you can use as you wish: "rand_a". The spec suggests a few ways to use these bits, including 12 bits of random data, sub-millisecond timestamps, or a monotonic counter, but each has its downsides:
- Purely random data means you can still run into collisions, and anything within the same millisecond is unordered
- With sub-millisecond timestamps you can still run into collisions; there's nothing stopping you from generating two UUIDs with the same 62 bits of rand_b data in the same sub-millisecond timestamp
- Monotonic counters can overflow before the next tick, then what? Rollover? Once you roll over it's no longer monotonic and you can generate the same random data within the same monotonic cycle. Also, it's only monotonic to the system that's generating the UUID. If you have a distributed system and each node has its own monotonic cycle, then you'll be generating UUIDs with the same timestamp + monotonic counter and, again, relying on not generating the same random data
You can steal some of the 62 bits in rand_b if you want as well; you can use rand_a for sub-millisecond accuracy, and then use a few bits of rand_b for a monotonic counter. There's still a chance of collision here, but it's exceedingly low at the expense of less truly random data at the end.
If you want truly collision free, you'd also need to assign a couple of bits to identify the subsystem generating the UUID so that the monotonic counter is unique to that subsystem. You lose the ordering part of the monotonic counter this way though, but I guess you could argue that in nearly 100% of cases the accuracy of sub-millisecond order in a distributed system is a lie anyways.
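For reference, a minimal sketch of the v7 bit layout with rand_a left as plain random bits (one of the options discussed above, per RFC 9562; not any particular library's implementation):

    import { randomBytes } from 'node:crypto';

    // Layout: 48-bit ms timestamp | 4-bit version | 12-bit rand_a
    //         | 2-bit variant | 62-bit rand_b
    function uuidv7(): string {
      const b = randomBytes(16); // start with 128 random bits
      const ms = BigInt(Date.now());
      for (let i = 0; i < 6; i++) { // big-endian timestamp into bytes 0..5
        b[i] = Number((ms >> BigInt(8 * (5 - i))) & 0xffn);
      }
      b[6] = (b[6] & 0x0f) | 0x70; // version 7 in the high nibble
      b[8] = (b[8] & 0x3f) | 0x80; // RFC variant bits
      const h = b.toString('hex');
      return [h.slice(0, 8), h.slice(8, 12), h.slice(12, 16),
              h.slice(16, 20), h.slice(20)].join('-');
    }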
naniwaduni 16 hours ago [-]
I think by the time you're building a system that needs to generate (and persist!) billions of identifiers per millisecond, you're solidly past the point where all your design decisions need to be vetted for whether they make sense on your extremely exotic setup.
tremon 12 hours ago [-]
But 12 bits is not "billions of identifiers" -- it's 4096. Once you exhaust that counter in the same millisecond, you are still relying on a gamble that your random source will not generate the exact same bit sequence for the previous same counter value. And this thread started out with the OP explaining that random collisions are much more common than we'd like them to be, for various reasons.
rootlocus 17 hours ago [-]
We have a dedicated snowflake id generator service that returns batch ids. It's also distributed, each service adds its own instance number to the id. When it overflows it just blocks for the next ms. For our traffic, it's never a bottleneck.
ralferoo 14 hours ago [-]
Something I use on my own distributed system (where I wanted 64-bit IDs) is to use 32 bits for the time in seconds (with an epoch from 2020, so good until 2088), 8 bits for the device ID, and 24 bits for a serial number (reset to 0 every time the seconds increments).
That's generally enough IDs per second for most of my edge nodes, but the central worker nodes need more, so I give them a different split and use 4 bits for the device ID and 28 bits for serial number instead.
If a node overflows its serial number that second, I kind of cheat and increment the seconds field early. Every time this happens, I persist the seconds field to the database, and when the app restarts, it starts its seconds count at the last persisted seconds plus one. If the current time in seconds is greater than the last used seconds, I also update it and reset the serial number. Works remarkably well for smoothing out very occasional spikes in ID generation while still approximately remaining globally sortable.
I also "waste" a bit of the 32-bit time field by considering it to be signed, even though it's not really because I don't expect this system to last long enough to reach times where the MSB gets set. But if I ever change my system, I'll set that bit and everything will stay ordered. I'll probably reset the epoch at that point too.
ffsm8 18 hours ago [-]
Considering the context I think it's worth pointing out that it's technically not impossible - it's just even less likely.
Everything in crypto is always a probability - never a certainty
nitsky 18 hours ago [-]
True, but it makes the specific collision the post observed completely impossible.
stanmancan 18 hours ago [-]
I left a more detailed comment on the parent, but it's definitely not impossible!
ryanmonroe 17 hours ago [-]
The scenario in this post is that the first uuid was created one year before the duplicate uuid. That isn’t possible with v7
ffsm8 17 hours ago [-]
You're heavily leaning on "collision like this" to relate to the exact time stamps for your statement to be true.
It's equally possible to interpret the "like this" as referring to the collision itself, without a focus on the 1-year distance between the creation dates.
So I guess both views are valid.
calfuris 12 hours ago [-]
The inclusion of a timestamp in v7 makes collisions impossible unless the generating systems think that the time is the same down to the millisecond, which makes the temporal distance quite relevant.
stanmancan 12 hours ago [-]
Plenty of systems end up generating multiple UUIDs in a single millisecond.
The issue with UUIDv7 is that you also have significantly less entropy, since you only have 62 bits (sometimes fewer, depending on implementation) of "random" data. So while the time component of the format lowers the chance of collisions overall, two UUIDv7s generated in the same millisecond (depending on implementation) have a significantly higher chance of colliding than two UUIDv4s.
It's still incredibly unlikely, but it's also incredibly unlikely that you generate two matching UUIDv4s, and yet it does happen.
TLDR: It's possible to generate matching UUIDv7s; don't assume otherwise.
stanmancan 15 hours ago [-]
The scenario being the collision itself, the time period isn’t particularly relevant aside from it occurring much quicker than expected.
JamesSwift 17 hours ago [-]
Surely the scenario where he generates the same number of items as he did between 2025 and now, but does so within a single v7 timestamp tick, also runs into it?
majorchord 11 hours ago [-]
The spec doesn't require the use of actually random numbers though.
matt-p 17 hours ago [-]
UUIDv7 is arguably better, because it is entropy plus time.
otherme123 15 hours ago [-]
It is what I usually use for its sorting, but some people don't want to leak time info.
majorchord 11 hours ago [-]
Entropy is not a requirement in the UUID spec.
lazide 18 hours ago [-]
Sequences, generally.
perching_aix 18 hours ago [-]
How is UUIDv4 to blame for a broken source of entropy? Or am I misinterpreting your words?
hmry 18 hours ago [-]
I wouldn't say it's "to blame", but it is more susceptible to bad RNG.
If the RNG is bad, you'll get more benefit from adding non-random bits than you would from additional badly RNG'd bits.
The probability of future collisions also rises the more IDs you generate. If you incorporate non-random bits, you can alleviate that:
- timestamps make the collision probability not grow over time as you accumulate more existing UUIDs that could collide
- known-distinct machine IDs make the collision probability not grow as you add more machines
jandrewrogers 17 hours ago [-]
I never blamed UUIDv4 for broken entropy sources. A broken entropy source breaks UUIDv4 even if you are using it correctly.
There is a long history of broken entropy sources showing up in real systems. No matter how hard people try to prevent this it keeps happening. Consequently, a requirement for high-quality entropy sources is correctly viewed as an unnecessary and avoidable foot-gun in high-reliability software systems.
hombre_fatal 18 hours ago [-]
Presumably they mean using randomness as unique IDs.
adonovan 8 hours ago [-]
For a while we’ve been fixing telemetry-reported crash bugs in the project I maintain, and now hardware bugs are showing up with some frequency. I was amazed how common they are. Sometimes data values (e.g. SP register) are corrupted, but other times even infallible operations (e.g. loads of rodata constants) crash, indicating that the instruction itself was corrupted. So, yeah, I believe you’ll eventually see UUID collisions, but not because the underlying cryptanalysis was wrong.
Hizonner 16 hours ago [-]
> UUIDv4 is explicitly forbidden for a lot of high-assurance and high-reliability software systems for this reason.
Hmm. What do those systems do for cryptography? Just assume it won't work and not rely on it at all?
jandrewrogers 15 hours ago [-]
In these kinds of systems the cryptographic components often aren't even accessible from the software. It isn't a thing you need to worry about.
This makes it easier to audit for use of entropy sources in the software since there really isn't a valid use case for it.
erikerikson 16 hours ago [-]
Super simple to detect and try again.
jandrewrogers 16 hours ago [-]
A collision is simple to detect but it requires you to actually check, which is expensive at scale. The entire point of UUIDv4 is that you don't have to check for collisions because it should never happen. But if you don't check and it does happen you are in UB territory which is generally very bad.
A risk of collision before it happens is non-trivial to detect but this is really what you'd want.
erikerikson 13 hours ago [-]
Only expensive if you have unsorted keys or lack an index. Neither of which are unscalable.
jandrewrogers 12 hours ago [-]
You must have missed the “at scale” part. There is nothing inexpensive about extra network hops, cache misses, and page faults implied by your solution. Indexing at scale is almost always lossy for performance reasons. The location where you insert a new record is frequently not the same location as where you have to search for an existing record.
It is resource amplification all the way down. In a lot of systems that index these keys the cost of that check is several times that of doing a blind insert.
erikerikson 12 hours ago [-]
No I didn't miss it.
DynamoDb works fine, using CQRS if necessary.
keeganpoppen 12 hours ago [-]
literally the whole point of randomly generating UUIDs is that you don't need to check for collision. that's what the "U"s are for. that is the abstraction that is supposedly being provided. "using <insert Amazon AWS Certification Test Answer #7>" is not in any way a "scalable solution" for that with no other context. nor is just throwing out <random Martin Fowler concept #27>. the whole point is that it is a global (well, per name, "universal") abstraction that can, in practice, have holes that make it so you can't use it "universal"-ly.
erikerikson 11 hours ago [-]
I totally appreciate what you are complaining about. It's always been part of the documentation for a UUID. Having had Martin Fowler as a colleague and meeting with him weekly for a bit, I'd expect him to nod along with what I've written. It's standard knowledge and part of the technical corpus. As is actually distributed unique ID generation which is also not hard.
orf 12 hours ago [-]
AKA centralising a decentralised identifier generator?
erikerikson 12 hours ago [-]
There are better approaches, like pre-avoiding collisions, but generating tends to be more expensive than checking.
orf 12 hours ago [-]
In what world is generating a UUID more expensive than checking for duplicates? at any scale?
Walk me through that please
erikerikson 12 hours ago [-]
Yeah, that was a little sloppy; what I meant is that generating is more expensive than not generating. In more words: generating an id and validating uniqueness is more expensive than only validating uniqueness.
keeganpoppen 12 hours ago [-]
exactly lmao. that is exactly what is being presented as "scalable <full stop>". sigh.
erikerikson 11 hours ago [-]
No one has yet defined the scale but almost all of the real world scenarios people are actually encountering would be handled by either of the offered solutions.
squirrellous 9 hours ago [-]
In this specific case. In the case of trace IDs (an example of which is [1]) where the equivalent of UUIDs are explicitly used to avoid coordination, it’s hard to imagine how you’d reliably detect and retry.
A lot of databases have a uniqueness constraint that is basically a register-level compare-and-replace. Others have an if_not_exists, which is nearly the same. If you're not targeting a serious throughput use case, that's enough. If you are, then there are lots of solutions/alternatives that completely avoid coordination. On the other hand, maybe tracing protocols are robust to out-of-order delivery. If that won't do, then sequence numbers tied to monotonic sequence IDs should be plenty. If not, then I'd need very serious conversations to be convinced you're not wasting everyone's time
ranger_danger 11 hours ago [-]
Reading the UUID spec leads me to believe that good entropy is not even a requirement for any version:
> Implementations SHOULD utilize a cryptographically secure pseudorandom number generator (CSPRNG) to provide values that are both difficult to predict ("unguessable") and have a low likelihood of collision ("unique").
So I don't think technically we can say entropy or random numbers at all are even "required for UUIDv4 to work as advertised."
throwaway_19sz 1 days ago [-]
Funny story no one will believe, but it’s true. A good friend of mine joined a startup as CTO 10 years ago, high growth phase, maybe 200 devs… In his first week he discovered the company had a microservice for generating new UUIDs. One endpoint with its own dedicated team of 3 engineers …including a database guy (the plot thickens). Other teams were instructed to call this service every time they needed a new ‘safe’ UUID. My pal asked wtf. It turned out this service had its own DB to store every previously issued UUID. Requests were handled as follows: it would generate a UUID, then ‘validate’ it by checking its own database to ensure the newly generated UUID didn’t match any previously generated UUIDs, then insert it, then return it to the client. Peace of mind I guess. The team had its own kanban board and sprints.
roryirvine 23 hours ago [-]
I've seen similar, buried deep within a major SV tech co.
Their process was a bit more complex because the master list of in-use UUIDs was stored in an external CMDB service run by a different department. They got a daily dump of that db, so were able to check it when generating a "provisional" id. Only once it had been properly submitted to the CMDB did it become "confirmed".
They had guardrails in place to prevent "provisional" ids being used in production, and a process for recycling unused "confirmed" ids. Oh, and they did regular audits which were taken very seriously by management.
Last I heard, they were 18 months into a 6 month project to move their local database cache to Zookeeper...
wongarsu 23 hours ago [-]
At some point someone optimizes the system to a global company-wide incrementing 128 bit counter. Instead of needing a costly database lookup against a growing database the microservice just fetches the current counter, increments it by one and hands out the new value. Easy, fast O(1) operation.
This even allows you to shard the service to provide high availability and distribute the service globally to reduce latency. Just give each instance a dedicated id range it can hand out. I'd suggest reserving some of the high bits to indicate data center id, and a couple more bits for id-generator instance within that dc.
Wait a second, this starts to look familiar ... does Twitter still do that, or did they eventually switch?
franktankbank 1 days ago [-]
Who has the balls to form that team? Were they disbanded?
CodesInChaos 17 hours ago [-]
This is usually caused by an insufficiently seeded PRNG.
Are you generating the UUID in the backend, or the frontend? The frontend is fundamentally unreliable for many reasons, including deliberate collisions. So in that case you'll need to handle collisions somehow. Though you can still engineer around common sources of collisions, the specifics depend on the environment.
On the other hand making a backend reliable is feasible. What kind of environment is your code running in? Historically VMs sometimes suffered from this problem, though this should be solved nowadays. Heavily sandboxed processes might still run into this, if the RNG library uses an unsafe fallback. Forking processes or VMs can cause state duplication and thus collisions.
danpalmer 12 hours ago [-]
I remember hearing that Segment (the analytics company) had their entire product based around UUIDs generated in web browsers. There were collisions all over the place; the product was seemingly incapable of producing useful data at a fundamental level because of it. Hopefully they've fixed that now.
_kst_ 16 hours ago [-]
This reminds me of a passage from the book "Pro Git".
"Here’s an example to give you an idea of what it would take to get a SHA-1 collision. If all 6.5 billion humans on Earth were programming, and every second, each one was producing code that was the equivalent of the entire Linux kernel history (6.5 million Git objects) and pushing it into one enormous Git repository, it would take roughly 2 years until that repository contained enough objects to have a 50% probability of a single SHA-1 object collision. Thus, an organic SHA-1 collision is less likely than every member of your programming team being attacked and killed by wolves in unrelated incidents on the same night."
Deliberate collisions are addressed in the following paragraph.
SHA-1 hashes are not random, so the issue of poor pseudo-random number generation doesn't apply as it does to uuidv4. And SHA-1 hashes are 160 bits, vs. 128 for uuidv4.
But I love the idea of unrelated wolf attacks.
mega_dean 12 hours ago [-]
Reminds me of this page with an example for understanding how many permutations there are for a shuffled deck of cards: https://czep.net/weblog/52cards.html
> So, just how large is it? Let's try to wrap our puny human brains around the magnitude of this number with a fun little theoretical exercise. Start a timer that will count down the number of seconds from 52! to 0. We're going to see how much fun we can have before the timer counts down all the way.
Shall we play a game?
> Start by picking your favorite spot on the equator. You're going to walk around the world along the equator, but take a very leisurely pace of one step every billion years. The equatorial circumference of the Earth is 40,075,017 meters. Make sure to pack a deck of playing cards, so you can get in a few trillion hands of solitaire between steps. After you complete your round the world trip, remove one drop of water from the Pacific Ocean. Now do the same thing again: walk around the world at one billion years per step, removing one drop of water from the Pacific Ocean each time you circle the globe. The Pacific Ocean contains 707.6 million cubic kilometers of water. Continue until the ocean is empty. When it is, take one sheet of paper and place it flat on the ground. Now, fill the ocean back up and start the entire process all over again, adding a sheet of paper to the stack each time you’ve emptied the ocean.
Do this until the stack of paper reaches from the Earth to the Sun. Take a glance at the timer, you will see that the three left-most digits haven’t even changed. You still have 8.063e67 more seconds to go. 1 Astronomical Unit, the distance from the Earth to the Sun, is defined as 149,597,870.691 kilometers. So, take the stack of papers down and do it all over again. One thousand times more. Unfortunately, that still won’t do it. There are still more than 5.385e67 seconds remaining. You’re just about a third of the way done.
dalmo3 7 hours ago [-]
Damn, I got the paper stack wet with all that ocean water. Guess I'm starting again from scratch...
swiftcoder 15 hours ago [-]
On the other hand, it turns out that collision attacks are quite feasible, and as several people who have thoughtlessly committed the collision-attack test case files to git can attest… quite problematic
TacticalCoder 14 hours ago [-]
Hasn't the Git team been hard at work to optionally offer other hashes, like SHA256, in addition to SHA-1?
> FWIW, I just tested crypto.getRandomValues() behavior on googlebot and it is also deterministic(!)
D2OQZG8l5BI1S06 12 hours ago [-]
That makes sense. I'm not sure why anybody would generate UUIDs in browsers though, it seems to defeat the purpose.
danpalmer 12 hours ago [-]
Tell that to Segment. Hopefully they've fixed that, but they didn't seem to think it was a problem years ago (spoiler: it was a big problem).
adyavanapalli 1 days ago [-]
What you're talking about is so extremely rare that it's much more likely that the entire Earth is destroyed by an asteroid right this inst...
delichon 23 hours ago [-]
About as rare as an asteroid typing an ellipsis and clicking the add comment button.
juancn 22 hours ago [-]
Something off on how the RNG is initialized?
Lack of entropy?
If the rng is not customized it will use:
    const rnds8 = new Uint8Array(16);

    export default function rng() {
      return crypto.getRandomValues(rnds8);
    }
getRandomValues doesn't specify a minimum amount of entropy.
Hizonner 22 hours ago [-]
It's a near certainty that something is badly wrong with the RNG, and, yes, probably in how it's seeded.
It's probably messing up the cryptography, too.
Onavo 19 hours ago [-]
But defaults should be sane and safe. RNG isn't the sort of thing you want to be messing up. Every JS dev was taught that Math.random is not safe by default, but the crypto package is.
Geee 1 days ago [-]
According to the many-worlds interpretation of quantum mechanics, there's bound to be one branch of universe where every UUID is the same. Can you imagine what those guys are thinking?
mittermayr 1 days ago [-]
I fully agree. It makes no sense. Yet...
The only guess I have is that we originally generated UUIDv4s on a user's phone before sending them to the database, and the UUID generated this morning that collided was created on an Ubuntu server.
I don't fully know how UUIDv4s are generated and what (if anything) about the machine it's being generated on is part of the algorithm, but that's really the only change I can think of: it used to be generated on-device by users, and for many months now it has been generated on the server.
wongarsu 23 hours ago [-]
If it was two on-device generated UUIDs I could see a collision happening. There have been instances of cheap end devices not properly seeding their random number generators, leading to colliding "random" values. And cases of libraries using cheap RNGs instead of a proper cryptographic RNG, making it even worse
But on a server that shouldn't happen, especially not in 2026 (in the past, seeding the rngs of VMs used to be a bit of an issue). Even if one UUID was badly generated, a truly random UUID statistically shouldn't collide with it. You'd need an issue in both generators
AntiUSAbah 1 days ago [-]
You let users generate a UUID?
To be honest, the chance that you are doing something weird is probably higher than you experiencing a real UUID conflict.
How did your database 'flag' that conflict?
wongarsu 23 hours ago [-]
If it's UUIDv4 and you validate that the UUID is valid and not conflicting I don't really see the issue with user-generated UUIDs. Being able to generate unique keys in an uncoordinated manner is the main selling point of UUIDs
Sure, it's something I'd flag in any design to spend two minutes to talk about potential security implications. But usually there aren't any
mittermayr 1 days ago [-]
user-generated (as in: on the user's phone) was only at the very early stages of this product, and we've since moved to on-server. It's a cash-register type of app, where the same invoice must not be stored twice. So we used to generate a fresh invoice_id (uuidv4) on the user's device for each new invoice, and a double-send of that would automatically be flagged server-side (same id twice). This has since moved on to a server-only mechanism.
The database flagged it simply by having a UNIQUE key on the invoice_id column. First entry was from 2025, second entry from today.
stubish 1 days ago [-]
The UUIDv4 collision is statistically extremely unlikely. What is more likely is both systems used the same seed. This might be just a handful of bytes, increasing the chance of collision to one in billions or even millions.
lazyjones 1 days ago [-]
Better check what crypto.js is actually doing in your exact setup. Weak polyfills exist...
If the entire universe were turned into a giant computer and did nothing but generate uuids until its heat death, how many bits would you need for the ID space?
"But are you worried that every human on Earth will be hit by a meteorite right now? That probability is also non-zero, yet it is so infinitesimally small that we treat it as an impossibility."
This might be a bad example because one meteorite could take out the world and given enough time is likely to.
beejiu 18 hours ago [-]
Are your UUIDs generated client side or server side? If it's client side, it could be due to a crawling bot. Googlebot for example executes Javascript using deterministic "randomness".
Yeah, the answer almost certainly has to be this, or that they were using an old version of the package that didn't use the system RNG correctly (the current version appears to do it correctly, but I didn't dive into older versions), or their project has loaded an old broken polyfill re-implementing the JS crypto API, or they were running on a hosting setup that does something janky like resuming the same VM snapshot (with its RNG state) on multiple servers. This category of explanation is many orders of magnitude more likely than a true random collision.
pif 3 hours ago [-]
All the comments I've been able to read are missing the elephant in the room: no high-quality entropy source can turn a "should" into a "must".
If you want something that is difficult to guess, ask the cryptography guys. But if you need something that is _guaranteed_ unique, you must build it yourself.
merlindru 21 hours ago [-]
Gotta be a seeding issue. If it's not, and you can prove it, you're about to be a little famous probably :P
leni536 1 days ago [-]
It's not happening by chance, there is a bug somewhere.
From what I skimmed the package should just call to the js runtime's crypto.randomUUID(). I think it should always be properly seeded.
I think it is extremely unlikely that the runtime has a bug here, but who knows? What js runtime do you use?
jbverschoor 19 hours ago [-]
Most plausible cause: uuid package depends on some random number generator package, which has recently been compromised in order to make “random” numbers predictable. As a result, many crypto (ssl + currency) projects are compromised due to a supplychain attack.
jbverschoor 19 hours ago [-]
Changed 3 weeks ago:
uuid/src/rng.ts: the random array is const. Every call will share the same buffer. A subsequent call will overwrite your old random bytes, so if you generated something important... good luck.
The old code used to do a slice(), which creates a new copy.
Might be unintentional. Although I have no idea how this would pass any tests, as you would think to test generating 2 random numbers and hope they are not the same.
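Roughly, the two patterns being contrasted (a reconstruction of the described change, not the package's verbatim source):

    const rnds8 = new Uint8Array(16);

    // New pattern: fills and returns the SAME array on every call, so a
    // caller holding the previous result sees it silently overwritten.
    export default function rng(): Uint8Array {
      return crypto.getRandomValues(rnds8);
    }

    // Old pattern: each caller gets its own copy of the bytes.
    export function rngCopy(): Uint8Array {
      crypto.getRandomValues(rnds8);
      return rnds8.slice(); // fresh Uint8Array, safe to hold onto
    }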
jbverschoor 18 hours ago [-]
Didn't actually want to write a test myself.. but Claude confirmed it. Pretty concerning.
Synchronous / serial calls:
    import rng from './rng';

    const a = rng();
    console.log('a after first call:  ', Array.from(a));
    const b = rng();
    console.log('a after second call:', Array.from(a));
    console.log('b after second call:', Array.from(b));
    console.log('a === b (same reference)?  ', a === b);
    console.log('a equals b (same contents)? ', a.every((v, i) => v === b[i]));
output:
    a after first call:  [101, 193, 125, 19, 142, 136, 181, 140, 209, 224, 176, 153, 179, 248, 246, 166]
    a after second call: [4, 29, 48, 215, 162, 60, 64, 23, 78, 137, 2, 186, 230, 249, 70, 224]
    b after second call: [4, 29, 48, 215, 162, 60, 64, 23, 78, 137, 2, 186, 230, 249, 70, 224]
    a === b (same reference)?   true
    a equals b (same contents)? true
and asynchronous calls:
    import rng from './rng';

    async function getId() {
      const bytes = rng();
      await new Promise(r => setTimeout(r, 0)); // yield to the event loop
      return Array.from(bytes);
    }

    const [id1, id2] = await Promise.all([getId(), getId()]);
    console.log('id1:', id1);
    console.log('id2:', id2);
    console.log('identical?', id1.every((v, i) => v === id2[i]));
Shouldn't your test follow the pattern of how rng() is actually being used in the uuid.ts code internally?
Your test is more-or-less contrived to fail, given the tradeoff to avoid repeated memory allocations, but that doesn't say much about the actual usage in uuid generation, since rng() is not exported for general-purpose use.
Presumably they had some hot path somewhere where rng() is called in a loop and this optimization made sense, with awareness that it could be misused (as in your example, breaking the contract ensuring randomness), which (hopefully) they're not actually doing anywhere.
Unless I'm missing something, replacing the package over this with a less-vetted implementation seems excessive and possibly even counterproductive.
jbverschoor 15 hours ago [-]
I don't believe so. Sure it's not an issue after some checks, but it's very easy to shoot yourself in the foot like that. I get the micro-optimization for the allocation.. But it's not clear / documented. At the minimum, the function should be renamed to reflect the inner workings.
The function is a module, and it doesn't do what you'd expect.
Welp.. time to patch and update everything again. Another day, another npm-package headache. Very odd()
Attack vector: call rng(), and send the result somewhere. You have now overwritten someone else's "random number" and you know it. The fun things you can do with those numbers!
jbverschoor 18 hours ago [-]
Seems to be "safe" because of it's not exported, and the results get used in a different way. Still is a bug in my book.
tumdum_ 1 days ago [-]
Poorly seeded prng.
jdthedisciple 1 days ago [-]
most likely the culprit indeed
nswango 1 days ago [-]
But I used nonstandard nonces!
serf 1 days ago [-]
1 in 4.72 × 10²⁸
1 in 47.2 octillion.
i'd be suspecting a race condition or some other naive mistake, otherwise I'd be stocking up on lottery tickets.
(lol at the other user posting at the same time about the lottery ticket.. great minds and all that.)
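For anyone checking that figure: it's the birthday approximation with the ~15,000 records mentioned elsewhere in the thread and the 122 random bits of a v4 UUID:

    P(\text{collision}) \approx \frac{n(n-1)/2}{2^{122}}
      = \frac{15000 \cdot 14999 / 2}{2^{122}}
      \approx \frac{1.12 \times 10^{8}}{5.32 \times 10^{36}}
      \approx 2.1 \times 10^{-29} \approx \frac{1}{4.7 \times 10^{28}}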
petee 1 days ago [-]
I've always looked at it the other way - being that lucky would mean you have even less chance of something else lucky happening, so it's a good time to save your money
k4rli 23 hours ago [-]
The lottery ticket part makes no sense. Statistically if such an improbable event just happened to him, then chance of it happening again should be even more improbable.
xyzzy123 7 hours ago [-]
I had dup uuids causing soak test failures in a Linux based distributed system. After a long investigation it turned out there was a kernel bug (race condition) that meant two processes on an MP system reading from /dev/random at the same time could (very rarely, like 1 in a million) get the same bytes when reading the device.
I'd look at rng initialisation first.
jordiburgos 1 days ago [-]
Please, do not use b6133fd6-70fe-4fe3-bed6-8ca8fc9386cd, I checked my database and I was using it already.
rich_sasha 1 days ago [-]
I always thought generating UUIDs at random was insane. I now only use LLMs. The prompt is: "generate a UUID. Make sure no one ever used it anywhere in their code or database. Check your work and think hard about each step. Do not output any reasoning or plain English, only the UUID itself".
You're welcome.
mittermayr 1 days ago [-]
I knew it, we're all getting the same cheap UUIDs and the good ones are reserved for the big dogs.
Galanwe 1 days ago [-]
uuid.uuidv4() recently switched to "adaptive entropy" instead of "xmax entropy" in an effort to save costs on non-premium users.
robshep 1 days ago [-]
I'm using 16b55183-1697-496e-bc8a-854eb9aae0f3 and probably some more too.
I suppose if we all post our list here, then we can all check for duplicates?
We should all send our already-generated UUIDs to a shared database, we could just put it on Supabase with a shared username/password posted on HN, so we can all ensure that after generating a UUIDv4 locally, it's not used by anyone else. If it's in the database, we know it's taken.
It's a super simple mechanism, check in common worldwide UUID database, if not in there, you can use it. Perhaps if we use a START TRANSACTION, we could ensure it's not taken as we insert. But that's all easy, I'll ask Claude to wire it up, no problem.
broken-kebab 1 days ago [-]
But then I will claim I have already used all the UUIDs in my spreadsheets, and my lawyer will send cease&desist letters to every database.
That UUID should have my name sticker on it. Don't your UUIDs have name stickers?
smokel 16 hours ago [-]
Multiple times have I blamed compilers, cosmic rays, quantum effects, or at the very least an obscure kernel bug, before realizing that I was the source of a bug.
A collision at 15,000 records is so unlikely that I would first suspect something else. Duplicate processing, replayed requests, reused objects, misleading logs, or another code path reusing the identifier.
Could you share a bit more of the surrounding code so we can check?
latentframe 5 hours ago [-]
One of the most dangerous phrases in engineering is "statistically impossible".
At enough scale, edge cases stop being theoretical and start becoming production events.
sedatk 13 hours ago [-]
> Duplicate UUIDs (Googlebot)
> This module may generate duplicate UUIDs when run in clients with deterministic random number generators, such as Googlebot crawlers. This can cause problems for apps that expect client-generated UUIDs to always be unique. Developers should be prepared for this and have a strategy for dealing with possible collisions, such as:
> - Check for duplicate UUIDs, fail gracefully
> - Disable write operations for Googlebot clients
Is the uuid generated in the frontend or backend? If frontend, I’d wager the likeliest explanation is that the client code or request was messed with to inject a previously known uuid rather than an entropy issue.
baq 19 hours ago [-]
the vm you're running on virtualized all the entropy away.
There are a bunch of constraints that must be strictly held for UUIDs to be collision resistant, I'd guess there is a problem with your random number generator.
nu11ptr 18 hours ago [-]
Ultimately it comes down to your entropy source. I always generate and insert in a loop for this reason; if there is a collision, I handle it gracefully.
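A sketch of that loop, assuming Postgres-style error codes and a hypothetical db.query helper:

    import { randomUUID } from 'node:crypto';

    type Db = { query: (sql: string, params: unknown[]) => Promise<void> };

    async function insertWithFreshId(db: Db, data: unknown): Promise<string> {
      for (let attempt = 0; attempt < 3; attempt++) {
        const id = randomUUID();
        try {
          await db.query('INSERT INTO items (id, data) VALUES ($1, $2)', [id, data]);
          return id; // insert succeeded, so id was unique
        } catch (e: any) {
          if (e.code !== '23505') throw e; // 23505 = unique_violation in Postgres
          // duplicate key: fall through and try a fresh id
        }
      }
      throw new Error('could not insert a unique id after 3 attempts');
    }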
sbuttgereit 19 hours ago [-]
> I thought this is technically impossible
No, very technically possible... though, with good randomness, very, very unlikely.
But nothing technically prevents a UUIDv4 from generating a duplicate value.
radial_symmetry 8 hours ago [-]
Glad to be reading the comments here because I also had this happen to me once and thought I must have been going insane.
glaslong 1 days ago [-]
Buy some lava lamps
mdavid626 18 hours ago [-]
Or there is some other explanation, e.g. somebody messed with the request manually, or with the db.
beardyw 1 days ago [-]
Just a stupid question, but why not append the date, even in seconds as hex. It's just a few bytes and would guarantee that everything OK now will be OK in the future?
flohofwoe 1 days ago [-]
You can just use a different UUID variant which includes timestamp data instead (e.g. v1 or v7), there are also variants which include the MAC address.
mittermayr 1 days ago [-]
yeah, any sort of additional semi-random data could've helped prevent this, I'm sure. That, however, is also kind of the idea of UUIDv4; it has lots of randomness and time built in already.
beardyw 23 hours ago [-]
But surely hashing the date still allows for a future collision. Leaving the date as is means it will never collide after that one second has passed.
flohofwoe 1 days ago [-]
UUID v4 consists of only random bits, no timestamp info.
mittermayr 1 days ago [-]
oh, interesting, I didn't know that and this could possibly be part of the problem perhaps depending on what's used as the seed.
pan69 1 days ago [-]
> but why not append the date
And use uuid v5 to hash it :)
sudb 19 hours ago [-]
This is the first time I have experienced some vindication that choosing CUID2[1] for one of my projects was actually a good idea.
A check inside the generator function is the best way I've found to avoid this. Wrap uuid or whatever random generator with a check against an ID cache. If it already exists, just run the generator recursively.
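A sketch of that wrapper; the seen cache here is an in-memory stand-in for whatever ID store the application already has:

    const seen = new Set<string>();

    function uniqueId(generate: () => string): string {
      const id = generate();
      if (!seen.has(id)) {
        seen.add(id);
        return id;
      }
      return uniqueId(generate); // collision: recurse with a fresh draw
    }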
NKosmatos 1 days ago [-]
> I thought this is technically impossible
Actually it's not impossible, but very very improbable.
Were the chances that an npm package is crap factored in?
wg0 1 days ago [-]
Would UUIDv7 be more collision-proof? Hard to say. It takes time into account, but the number of entropy bits is reduced, so UUIDs generated at exactly the same time draw from a much smaller space and could collide more easily.
Thoughts?
AntiUSAbah 1 days ago [-]
Every millisecond opens up a new block. Should be even more unlikely
nozzlegear 18 hours ago [-]
> I thought this is technically impossible, and it will never happen,
In an eternal universe, even the most unlikely of events will happen an infinite number of times.
sqquima 18 hours ago [-]
Meta, but if I had a question like this, I'd likely have asked on Twitter or Reddit first. I'll keep in mind using HN as an alternative Q&A site.
danfritz 18 hours ago [-]
Always let your db generate uuids. On Postgres this is easy; since v18 it supports UUIDv7!
There is no need to set uuids through javascript or node imo
hx8 18 hours ago [-]
There's plenty of reasons to set a unique identifier before database save, or to want a unique identifier that doesn't have a 1-to-1 relationship with your object.
For example, in the idempotent kafka consumer pattern we set a unique ID in the header of every kafka message at the time of message publishing. We then have our consumers do a quick check of the ID against their data store to see if they have processed the message before or not. This way there is no impact if a consumer sees the same message twice. This allows us more flexibility during rebalancing events or replaying old offsets.
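A sketch of that consumer-side check; the message shape and the store are stand-ins (a real consumer would check against its durable datastore, ideally in the same transaction as the work):

    const processed = new Set<string>(); // stand-in for a durable store

    async function doWork(payload: string): Promise<void> {
      /* the actual business logic */
    }

    async function handle(msg: { id: string; payload: string }): Promise<void> {
      if (processed.has(msg.id)) return; // seen before: skip, stay idempotent
      await doWork(msg.payload);         // process the message
      processed.add(msg.id);             // record only after success
    }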
Cantthink1029 11 hours ago [-]
Not every application uses a DB you know, there are other reasons to use a UUID
not_math 1 days ago [-]
Reminds me of some code I saw running in production. Every time we added a new entry, we were pulling all the UUIDs from this table, generating a new UUID, and checking for collisions up to 10 times.
shortercode 18 hours ago [-]
Fun thing about randomness is that these things happen. UUIDv7 is less prone to this as it includes both a time component and randomness. I've been using ULID in a few projects, which has similar attributes to UUIDv7 but is more space efficient.
BugsJustFindMe 13 hours ago [-]
This is like one of the hardest things for people to understand. Even the best randomness guarantees fuck all. Entropy-based IDs are collision-resistant not collision-proof.
lyfeninja 1 days ago [-]
Although incredibly rare, it's not impossible, so it's probably best to just plan for collisions. A simple retry should suffice. But I agree, it feels like something is going on somewhere else ...
dist-epoch 17 hours ago [-]
It's much more likely that you hit an "impossible bug" due to a bit flip somewhere.
Imagine the database having the old UUID in a memory buffer due to a recent index scan, and a bit flip happened somewhere in the logic which basically copied the old UUID into the memory location of the new UUID, or some buffer addresses got swapped, or the operation which allocated the new UUID received a memory buffer containing the old one, and due to a bit flip the memcpy operation was skipped, or something along that line.
Facebook wrote extensively about this, stuff like "if (false) { do_x(); }" and do_x being called anyway. For example, their critical RocksDB kv store has extensive redundant protections to defend against such "impossible bugs".
AndreyK1984 1 days ago [-]
Why not have a timestamp-based UUID instead?
dgellow 1 days ago [-]
How confident are you that your machines clocks are in perfect sync? What about the risk of clock drift + correction, or hardware issues?
croon 24 hours ago [-]
Not GP, but: not confident. How confident would I be to avoid a (slightly lower entropy) UUID collision while also avoiding a clock desync landing on the exact same logged millisecond? Very, which is how confident I was about not encountering an UUID collision before this thread, so very++ I guess.
nhumrich 16 hours ago [-]
> technically impossible
Not at all! Just very unlikely. It's about odds and statistics. Not physics.
ASalazarMX 15 hours ago [-]
This undersells the word unlikely. It is very, very, very, very unlikely.
OutOfHere 23 hours ago [-]
This is why I prefer to use a random base32 string over UUID. At least you get a proper 128 bit entropy instead of just a 122 bit entropy as with UUIDv4. That's a 64x difference in collision probability. I always thought UUIDs were a toy, not for serious use. If you control the strings, you can even make a longer ID.
Also, numerous applications that use a unique ID per record frequently need to check for ID collisions. I know I do for a short URL generator.
naikrovek 1 days ago [-]
The chance of a UUIDv4 collision is very low, but it is never zero.
If everything is done properly, then this is very likely the one and only time anyone involved in the telling or reading of this account will ever experience this.
dalmo3 1 days ago [-]
Classic gambler's fallacy!
zuzululu 16 hours ago [-]
just uuidv5
QuercusMax 13 hours ago [-]
I lost all confidence in the infallability of software RNG when I was working on an assignment for Data Structures a million years ago (2000?). The assignment was simple: simulate a 2D random walk where you randomly go NSEW, and run 100 cases, collecting stats as to how long it takes to return to the origin.
Super easy assignment, wrote it up probably in C++ (maybe just C?), and ran it on my linux box (probably Debian potato). It finished super quick and gave me an average of like 5.6 steps to return to the origin or something. Cool!
I copied it over to my account on the department's HP-UX machines where I was supposed to run and submit it to my instructor. Compiled fine. And then it... just ran forever. I was doing rand() % 4 or something, and the HP-SUX RNG had crazy bias in its last 2 bits, and it just walked away forever, never returning to the origin. Well crap!
Got an A for my writeup, though!
ares623 1 days ago [-]
Buy a lottery ticket
kittikitti 16 hours ago [-]
Almost all pseudo-random number generators are absolute garbage. They need you to believe they work because the NSA needs backdoors and foolproof ransomware attacks. This isn't surprising at all to me.
uncircle 1 days ago [-]
Statistically speaking, does extremely unlikely mean impossible? If it were replicable I'd raise my eyebrow, otherwise it's fair game, no?
As someone that enjoys the interminable complaints about RNG in the video game scene, I would never trust any human's rationalization of random outcomes.
mschild 1 days ago [-]
> Statistically speaking, does extremely unlikely mean impossible?
No, it means extremely unlikely. Collisions can occur, as OP just found out, but the chances are so abysmally small that most people don't care.
In any application I have worked on, I always had a pre-save check to see if the UUID was already present, and generated a new one if it was. I don't think it ever triggered unless a bug was introduced somewhere, but it's good practice anyway.
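A minimal sketch of that pre-save check, with a hypothetical IdStore interface standing in for whatever uniqueness lookup the real database provides:

    import { randomUUID } from 'node:crypto';

    // Hypothetical store interface; in practice this is a SELECT on a unique column.
    interface IdStore { exists(id: string): Promise<boolean>; }

    // Bounded retries so a broken RNG fails loudly instead of spinning forever.
    async function freshId(store: IdStore, maxAttempts = 5): Promise<string> {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const id = randomUUID();
        if (!(await store.exists(id))) return id;
      }
      throw new Error('repeated UUID collisions; suspect the entropy source');
    }

A unique constraint in the database is still the real guard against a race between check and insert; the pre-save check just makes a broken RNG observable.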
nubg 1 days ago [-]
You are replying to an AI bot
harperlee 1 days ago [-]
Would be cool to have a plugin that shows % of bot per user, based on their history of comments.
ashleyn 1 days ago [-]
There could be a problem with the way the system generates entropy for randomness.
nubg 1 days ago [-]
Question to fellow HNers, do you recognize that this comment was written by AI?
prakka 1 days ago [-]
No, to be honest. However, as soon as it was pointed out, I checked again and it made sense.
In my opinion, these kinds of intuitions have to grow over time. And every time it’s pointed out, you learn. So please, keep pointing it out :).
tirutiru 1 days ago [-]
I did not. Post-conditioning by your comment and the other one, I can see some signs, such as attempting to be unusually comprehensive. The 'atoms in your liver' bit could be an awkward human trying to be poetic about scale.
I still don't see idiomatic markers of AI so that's scary if your claim is correct.
uncircle 1 days ago [-]
I guess not, and I feel dirty now. I'm logging off for the day.
nottorp 1 days ago [-]
Interestingly enough, I skipped it when scrolling through the comments the first time. I think I instinctively do that for most karma-whoring comments, whether manual or LLM-generated.
Only noticed it because I did another pass and saw the replies talking about "AI".
piva00 1 days ago [-]
Yes, but as a feeling (hunch?), not as something my brain analysed to reach a conclusion.
Weird how I'm already somewhat conditioned to spot it on an intuitive level.
mschild 1 days ago [-]
Kind of. It reads a bit too much like tech support you'd get when asking one for help.
ssenssei 1 days ago [-]
when it started going on about all the different cases in the second bullet point... yeah
speedgoose 1 days ago [-]
Yes, stupid comparison with atoms in the liver and a bullet list below? I stopped reading.
This is why it’s stupid to assume a randomly generated ID is unique just because it is random.
Lammy 15 hours ago [-]
> I thought this is technically impossible, and it will never happen
I always hated this meme/mindset, because if you dig into the history of them you'll see that their original purpose was to collide. They were labels to identify messages in Apollo's distributed computing architecture. UIDs and later UUIDs were a reversible way to mark an intersection point between two dimensions.
Any two nodes in a distributed system would generate the same UID/UUID for the same two inputs, and a recipient of an identified message could reverse the identifier back into the original components. They were designed as labels for ephemeral messages so the two dimensions were time and hardware ID (originally Apollo serial number, later 802.3 hwaddress etc).
I think a lot of the confusion can be traced to the very earliest AEGIS implementation where the Apollo engineers started using “canned” (their term, i.e. static or well-known) UIDs to identify filesystems. Over time the popular usage of UUID fully shifted from ephemeral identifiers where duplicates were intentional toward canned identifiers where duplicates were unwanted and the two dimensions were random-and-also-random.
It doesn't matter that it's biased one way or the other, if you keep flipping pairs until you get a result with two different values, it's a 50/50 chance whether the less-likely result comes first, or second.
You might only have a 20% chance of any particular pair having a tails (for example), but in the cases where you do have a tails, it's a 50/50 chance that it comes first or second.
Assume each flip is independent and the bias stays the same on each flip.
Let p(H) = p and p(T) = 1 - p.
Then p(HT) = p(H)p(T) = p(1 - p), and p(TH) = p(T)p(H) = (1 - p)p.
Since p(HT) = p(TH), a kept pair is HT or TH with equal probability, and the chance that any given pair is kept at all is p(HT) + p(TH) = 2p(1 - p).
Just want to point out that when one is actually doing the experiment with a biased coin, one must discard rejected pairs entirely and never reuse their flips.
E.g. with a heavily biased coin, say .9 H and .1 T, one should throw away each HH pair and start the next pair at the next odd index. Otherwise, greedily pairing the trailing H of a discarded pair with the following T (as in HHHHT) picks up an HT that straddles two pairs, which biases the experiment toward HT.
The same holds, of course, for flipping several coins at once. But there you get more outcomes than just heads or tails (the binomial coefficients C(n, k) come into play).
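A minimal sketch of that pairing discipline, with a hypothetical biasedBit() source standing in for the coin:

    // Von Neumann extractor: draws NON-overlapping pairs from a biased 0/1
    // source, discards 00 and 11, and keeps the first bit of 01 or 10.
    function unbiasedBit(biasedBit: () => number): number {
      for (;;) {
        const first = biasedBit();   // each iteration draws a fresh pair,
        const second = biasedBit();  // so discarded flips are never reused
        if (first !== second) return first; // P(10) == P(01): a fair bit
      }
    }

    // Example biased source: 1 with probability 0.9 (demonstration only;
    // Math.random() is itself a PRNG, so this just illustrates the algebra).
    const coin = () => (Math.random() < 0.9 ? 1 : 0);
    console.log(unbiasedBit(coin)); // 0 or 1, each with probability 1/2

Because each loop iteration consumes two fresh flips, no HT can straddle two pairs, which is exactly the greedy-pairing mistake described above.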
https://blog.cloudflare.com/harnessing-office-chaos/
https://blog.cloudflare.com/chaos-in-cloudflare-lisbon-offic...
At the time I was at the Internet company that originally got online-gaming banned in the US, we were looking at CCDs and Cesium emitters that required a license etc...
While I am not sure, it seems Cloudflare basically implemented one after SGI's[0] patent expired.
The patent, the licensing cost, and dealing with SGI were a major blocker for us doing it, and the startup closed before we found a real solution. The best PRNGs at the time, like Blum Blum Shub, were way too slow. But things did improve quickly around then.
[0] https://patents.google.com/patent/US5732138A/en
https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist_noise
*: Sorta, because if someone discovers that the entropy is derived from an analog TV tuned to channel 3, then they also know how to influence it from outside.
**: Style points can have value; it's OK to have fun with work. But that's a secondary function.
A cool parlor trick, certainly.
You can get entropy just by plugging an oscilloscope into a pile of dirt and cranking the gain up.
For instance, you can use the microphone input of a PC together with a small external amplifier, built from an audio-amplifier IC or an op-amp IC, with a diode or a resistor at its input as the noise source. The microphone input of PCs provides a 5 V supply that can be sufficient to power the noise source plugged into it.
Such a true RNG can be made on a small PCB with an audio jack, so you can plug it into any PC with a microphone input and have a true RNG that you can trust more than the RNG included in modern Intel and AMD CPUs. In the past, many AMD CPUs had defective internal RNGs. Moreover, for both Intel and AMD it is impossible to verify whether the internal RNG does what it claims to do or generates predictable pseudo-random numbers.
Interesting. I wonder how true it actually is that they use it like they claim here: https://www.cloudflare.com/learning/ssl/lava-lamp-encryption.... It's in one of their lobbies, so doesn't that make it susceptible to an attack in some way? I'm not knowledgeable enough to know, but I figured if they actually used that method, they'd have a more controlled environment.
I also don't fully understand it. A large part of that wall is static, and the camera isn't going to pick up the stochastic properties of the lava as fully as they exist in the real world. So it feels like their images will be statistically very similar.
The problem with UUIDs that rely on entropy sources is that it is computationally expensive to detect whether the statistical distribution of identifiers is diverging from what you would expect from a random oracle. I've written systems that can detect entropy-source anomalies, but you'll want to turn that checking off in production.
It is pretty cheap to sanity check most non-probabilistic identifier schemes. UUIDs that use broken hash algorithms (e.g. UUIDv3/5) or leak state (e.g. UUIDv7) are exposed to adversarial exploitation.
The identifier scheme is dependent on the use case. Does the uniqueness constraint apply to the instance of the object or the contents of the object? Is the generation of identifiers federated across untrusted nodes? How large is the potential universe of identifiers?
The basic scheme I've seen is a 128-bit structured value that has no probabilistic component. These identifiers can be encrypted with AES-128 when exported to the public, guaranteeing uniqueness while leaking no internal state. The benefit of this scheme is that it is usually drop-in compatible with standard UUID even though it is technically not a UUID and the internal structure can carry useful metadata about the identifier if you can decrypt it.
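A minimal sketch of that encrypt-on-export idea; the field layout and the names packId/exportId/importId are illustrative assumptions, not the actual scheme described above:

    import { createCipheriv, createDecipheriv } from 'node:crypto';

    // Pack a structured 128-bit internal ID (illustrative layout).
    function packId(shard: number, seq: bigint): Buffer {
      const id = Buffer.alloc(16);
      id.writeUInt32BE(shard, 0);   // 32-bit shard
      id.writeBigUInt64BE(seq, 4);  // 64-bit sequence; 4 bytes left spare
      return id;
    }

    // One AES-128 block is exactly 128 bits, so encrypting a single block is
    // a permutation: distinct internal IDs yield distinct opaque external IDs.
    function exportId(key: Buffer /* 16 bytes */, id: Buffer): string {
      const c = createCipheriv('aes-128-ecb', key, null).setAutoPadding(false);
      return Buffer.concat([c.update(id), c.final()]).toString('hex');
    }

    function importId(key: Buffer, externalId: string): Buffer {
      const d = createDecipheriv('aes-128-ecb', key, null).setAutoPadding(false);
      return Buffer.concat([d.update(Buffer.from(externalId, 'hex')), d.final()]);
    }

ECB is only reasonable here because each plaintext block is unique by construction; the usual ECB caveats apply the moment inputs could repeat.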
Federated generation across untrusted nodes requires a more complex scheme, particularly if the universe of identifiers is extremely large. These intrinsically have a collision risk regardless of how the identifiers are generated.
All of the standardized UUID really weren't designed with the requirements of scalable high-reliability systems in mind. They were optimized for convenience and expedience which is a perfectly reasonable objective. Most people don't need an identifier system engineered for extreme reliability, even though there is relatively little cost to having one.
But according to PostgreSQL, UUIDv7 provides better performance in the database, so is this essentially a trade-off between security and speed?
UUIDv7 assigns the first 48 bits to a millisecond timestamp. You can generate a lot of UUIDs in a millisecond, though!
Then you have another 12 bits that you can use as you wish: "rand_a". The spec suggests a few ways to use these bits, including 12 bits of random data, sub-millisecond timestamps, or a monotonic counter, but each has its downsides:
- Purely random data means you can still run into collisions, and anything within the same millisecond is unordered
- Sub-millisecond timestamps can still collide; there's nothing stopping you from generating two UUIDs with the same 62 bits of rand_b data at the same sub-millisecond timestamp.
- Monotonic counters can overflow before the next tick, and then what? Roll over? Once you roll over it's no longer monotonic, and you can generate the same random data within the same monotonic cycle. Also, it's only monotonic on the system generating the UUID. If you have a distributed system and each node has its own monotonic cycle, then you'll be generating UUIDs with the same timestamp + monotonic counter and, again, relying on not generating the same random data.
You can steal some of the 62 bits in rand_b if you want as well: use rand_a for sub-millisecond accuracy, then use a few bits of rand_b for a monotonic counter. There's still a chance of collision here, but it's exceedingly low, at the expense of less truly random data at the end.
If you want truly collision-free, you'd also need to assign a couple of bits to identify the subsystem generating the UUID so that the monotonic counter is unique to that subsystem. You lose the ordering property of the monotonic counter that way, though I guess you could argue that in nearly 100% of cases the accuracy of sub-millisecond ordering in a distributed system is a lie anyway.
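A minimal sketch of one of those layouts (the 12-bit monotonic counter in rand_a, per the RFC 9562 field positions); this is my reading, not a vetted implementation, and the rollover caveat above applies exactly as written:

    import { randomBytes } from 'node:crypto';

    let lastMs = -1;
    let counter = 0; // 12-bit counter in rand_a; wraps within a hot millisecond

    function uuidv7(): string {
      const now = Date.now();
      counter = now === lastMs ? (counter + 1) & 0xfff : 0;
      lastMs = now;
      const b = randomBytes(16);      // rand_b stays random
      b.writeUIntBE(now, 0, 6);       // 48-bit big-endian ms timestamp
      b[6] = 0x70 | (counter >> 8);   // version 7 + counter high nibble
      b[7] = counter & 0xff;          // counter low byte
      b[8] = (b[8] & 0x3f) | 0x80;    // RFC 9562 variant bits (10xx)
      const h = b.toString('hex');
      return [h.slice(0, 8), h.slice(8, 12), h.slice(12, 16), h.slice(16, 20), h.slice(20)].join('-');
    }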
That's generally enough IDs per second for most of my edge nodes, but the central worker nodes need more, so I give them a different split and use 4 bits for the device ID and 28 bits for serial number instead.
If a node overflows its serial number that second, I kind of cheat and increment the seconds field early. Every time this happens, I persist the seconds field to the database, and when the app restarts, it starts its seconds count at the last persisted seconds plus one. If the current time in seconds is greater than the last used seconds, I also update it and reset the serial number. Works remarkably well for smoothing out very occasional spikes in ID generation while still approximately remaining globally sortable.
I also "waste" a bit of the 32-bit time field by considering it to be signed, even though it's not really because I don't expect this system to last long enough to reach times where the MSB gets set. But if I ever change my system, I'll set that bit and everything will stay ordered. I'll probably reset the epoch at that point too.
Everything in crypto is always a probability - never a certainty
It's equally possible to interpret the "like this" as referring to the collision itself, without focusing on the one-year gap between the creation dates.
So I guess both views are valid.
The issue with UUIDv7 is that you also have significantly less entropy, since you only have 62 bits (sometimes fewer, depending on implementation) of "random" data. So while the time component of the format lowers the chance of collisions overall, two UUIDv7s generated in the same millisecond (depending on implementation) have a significantly higher chance of colliding than two UUIDv4s.
It's still incredibly unlikely, but it's also incredibly unlikely that you generate two matching UUIDv4s, and yet it does happen.
TL;DR: It's possible to generate matching UUIDv7s; don't assume otherwise.
If the RNG is bad, you'll get more benefit from adding non-random bits than you would from additional badly RNG'd bits.
The probability of future collisions also rises the more IDs you generate. If you incorporate non-random bits, you can alleviate that:
- timestamps make the collision probability not grow over time as you accumulate more existing UUIDs that could collide
- known-distinct machine IDs make the collision probability not grow as you add more machines
There is a long history of broken entropy sources showing up in real systems. No matter how hard people try to prevent this it keeps happening. Consequently, a requirement for high-quality entropy sources is correctly viewed as an unnecessary and avoidable foot-gun in high-reliability software systems.
Hmm. What do those systems do for cryptography? Just assume it won't work and not rely on it at all?
This makes it easier to audit for use of entropy sources in the software since there really isn't a valid use case for it.
Detecting the risk of a collision before it happens is non-trivial, but that is really what you'd want.
It is resource amplification all the way down. In a lot of systems that index these keys the cost of that check is several times that of doing a blind insert.
DynamoDB works fine, using CQRS if necessary.
Walk me through that please
[1] https://news.ycombinator.com/item?id=48033853
> Implementations SHOULD utilize a cryptographically secure pseudorandom number generator (CSPRNG) to provide values that are both difficult to predict ("unguessable") and have a low likelihood of collision ("unique").
From https://www.rfc-editor.org/rfc/rfc9562.html#unguessability
So I don't think technically we can say entropy or random numbers at all are even "required for UUIDv4 to work as advertised."
Their process was a bit more complex because the master list of in-use UUIDs was stored in an external CMDB service run by a different department. They got a daily dump of that DB, so they were able to check it when generating a "provisional" ID. Only once it had been properly submitted to the CMDB did it become "confirmed".
They had guardrails in place to prevent "provisional" ids being used in production, and a process for recycling unused "confirmed" ids. Oh, and they did regular audits which were taken very seriously by management.
Last I heard, they were 18 months into a 6-month project to move their local database cache to ZooKeeper...
This even allows you to shard the service to provide high availability and distribute the service globally to reduce latency. Just give each instance a dedicated id range it can hand out. I'd suggest reserving some of the high bits to indicate data center id, and a couple more bits for id-generator instance within that dc.
Wait a second, this starts to look familiar ... does Twitter still do that, or did they eventually switch?
Are you generating the UUID in the backend or the frontend? The frontend is fundamentally unreliable for many reasons, including deliberate collisions. So in that case you'll need to handle collisions somehow. Though you can still engineer around common sources of collisions, the specifics depend on the environment.
On the other hand, making the backend reliable is feasible. What kind of environment is your code running in? Historically, VMs sometimes suffered from this problem, though it should be solved nowadays. Heavily sandboxed processes might still run into it if the RNG library uses an unsafe fallback. Forking processes or VMs can cause state duplication and thus collisions.
<https://git-scm.com/book/en/v2>
"Here’s an example to give you an idea of what it would take to get a SHA-1 collision. If all 6.5 billion humans on Earth were programming, and every second, each one was producing code that was the equivalent of the entire Linux kernel history (6.5 million Git objects) and pushing it into one enormous Git repository, it would take roughly 2 years until that repository contained enough objects to have a 50% probability of a single SHA-1 object collision. Thus, an organic SHA-1 collision is less likely than every member of your programming team being attacked and killed by wolves in unrelated incidents on the same night."
Deliberate collisions are addressed in the following paragraph.
SHA-1 hashes are not random, so the issue of poor pseudo-random number generation doesn't apply as it does to uuidv4. And SHA-1 hashes are 160 bits, vs. 128 for uuidv4.
But I love the idea of unrelated wolf attacks.
> So, just how large is it? Let's try to wrap our puny human brains around the magnitude of this number with a fun little theoretical exercise. Start a timer that will count down the number of seconds from 52! to 0. We're going to see how much fun we can have before the timer counts down all the way. Shall we play a game?
> Start by picking your favorite spot on the equator. You're going to walk around the world along the equator, but take a very leisurely pace of one step every billion years. The equatorial circumference of the Earth is 40,075,017 meters. Make sure to pack a deck of playing cards, so you can get in a few trillion hands of solitaire between steps. After you complete your round the world trip, remove one drop of water from the Pacific Ocean. Now do the same thing again: walk around the world at one billion years per step, removing one drop of water from the Pacific Ocean each time you circle the globe. The Pacific Ocean contains 707.6 million cubic kilometers of water. Continue until the ocean is empty. When it is, take one sheet of paper and place it flat on the ground. Now, fill the ocean back up and start the entire process all over again, adding a sheet of paper to the stack each time you’ve emptied the ocean. Do this until the stack of paper reaches from the Earth to the Sun. Take a glance at the timer, you will see that the three left-most digits haven’t even changed. You still have 8.063e67 more seconds to go. 1 Astronomical Unit, the distance from the Earth to the Sun, is defined as 149,597,870.691 kilometers. So, take the stack of papers down and do it all over again. One thousand times more. Unfortunately, that still won’t do it. There are still more than 5.385e67 seconds remaining. You’re just about a third of the way done.
https://github.com/uuidjs/uuid/issues/546
Eg:
> FWIW, I just tested crypto.getRandomValues() behavior on googlebot and it is also deterministic(!)
If the rng is not customized, it falls back to the runtime's crypto.getRandomValues().
getRandomValues doesn't specify a minimum amount of entropy. It's probably messing up the cryptography, too.
The only guess I have is that we originally generated UUIDv4s on a user's phone before sending them to the database, whereas the UUID generated this morning that collided was created on an Ubuntu server.
I don't fully know how UUIDv4s are generated and whether anything about the machine it's generated on is part of the algorithm, but that's really the only change I can think of: it used to be generated on-device by users, and for many months now it has been generated on the server.
But on a server that shouldn't happen, especially not in 2026 (in the past, seeding the RNGs of VMs used to be a bit of an issue). Even if one UUID was badly generated, a truly random UUID statistically shouldn't collide with it. You'd need an issue in both generators.
To be honest, the chance that you are doing something weird is probably higher than you experiencing a real UUID conflict.
How did your database 'flag' that conflict?
Sure, it's something I'd flag in any design to spend two minutes to talk about potential security implications. But usually there aren't any
The database flagged it simply by having a UNIQUE key on the invoice_id column. First entry was from 2025, second entry from today.
If the entire universe were turned into a giant computer and did nothing but generate uuids until its heat death, how many bits would you need for the ID space?
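A back-of-envelope answer via the birthday bound, assuming (generously) something like $N = 10^{120}$ IDs ever generated, a commonly cited upper bound on the total computation available to the observable universe:

    P(\text{collision}) \approx 1 - e^{-N^2 / 2^{b+1}}
    \text{want } N^2 / 2^{b+1} \ll 1 \;\Rightarrow\; b \gtrsim 2\log_2 N
    N = 10^{120}: \quad 2\log_2 N = 240\log_2 10 \approx 797 \text{ bits}

So on the order of 800 random bits would keep even that universe-computer collision-free with high probability; 1024-bit IDs would leave a comfortable margin.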
This might be a bad example because one meteorite could take out the world and given enough time is likely to.
If you want something that is difficult to guess, ask the cryptography guys. But if you need something that is _guaranteed_ unique, you must build it yourself.
From what I skimmed, the package should just call the JS runtime's crypto.randomUUID(). I think it should always be properly seeded.
I think it is extremely unlikely that the runtime has a bug here, but who knows? What js runtime do you use?
uuid/src/rng.ts: the random array is a module-level constant, so every call shares the same buffer of random bytes. A subsequent call will overwrite your old random values, so if you generated something important... good luck.
The old code used to do a slice() which creates a new copy.
Might be unintentional, although I have no idea how this would pass any tests; you would think to test generating 2 random numbers and check they are not the same.
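An illustration of the hazard being described (not the uuid package's actual code): a module-level buffer that is refilled and returned on every call means earlier return values get silently overwritten:

    import { randomFillSync } from 'node:crypto';

    const shared = new Uint8Array(16); // one buffer reused across calls

    function rng(): Uint8Array {
      return randomFillSync(shared);   // refills and returns the SAME buffer
    }

    const a = rng();       // `a` is a view of the shared buffer
    const b = rng();       // refilling for `b` also rewrites `a`'s contents
    console.log(a === b);  // true: both names alias one buffer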
Synchronous / serial calls and asynchronous calls, with their outputs (code elided).
Your test is more-or-less contrived to fail given the tradeoff to avoid repeated memory allocations, but that doesn't say much about the actual usage in uuid generation, since it's not exported for general-purpose use.
Presumably they had some hot path somewhere where rng() is called in a loop and this optimization made sense, with awareness that it could be misused (as in your example, breaking the contract ensuring randomness), which hopefully they're not actually doing anywhere.
Unless I'm missing something replacing the package over this with a less vetted implementation seems excessive and possibly even counterproductive.
The function is a module, and it doesn't do what you'd expect.
(before/after code elided; see the commit)
https://github.com/uuidjs/uuid/blob/f2c235f93059325fa43e1106...
Welp.. time to patch and update everything again. Another day, another npm-package headache. Very odd()
Attack vector: call rng() and send the result somewhere. You have now overwritten someone else's "random number" and you know what it is. The fun things you can do with those numbers!
1 in 47.3 octillion.
I'd be suspecting a race condition or some other naive mistake; otherwise I'd be stocking up on lottery tickets.
(lol at the other user posting at the same time about the lottery ticket.. great minds and all that.)
I'd look at rng initialisation first.
You're welcome.
It's a super simple mechanism: check the common worldwide UUID database, and if it's not in there, you can use it. Perhaps if we use a START TRANSACTION, we can ensure it's not taken as we insert. But that's all easy; I'll ask Claude to wire it up, no problem.
A collision at 15,000 records is so unlikely that I would first suspect something else. Duplicate processing, replayed requests, reused objects, misleading logs, or another code path reusing the identifier.
Could you share a bit more of the surrounding code so we can check?
> This module may generate duplicate UUIDs when run in clients with deterministic random number generators, such as Googlebot crawlers. This can cause problems for apps that expect client-generated UUIDs to always be unique. Developers should be prepared for this and have a strategy for dealing with possible collisions, such as:
> - Check for duplicate UUIDs, fail gracefully
> - Disable write operations for Googlebot clients
https://github.com/uuidjs/uuid/commit/91805f665c38b691ac2cbd...
Something tangentially cool which is related: https://eu.mouser.com/new/leetronics/leetronics-infinite-noi...
There are a bunch of constraints that must be strictly held for UUIDs to be collision resistant, I'd guess there is a problem with your random number generator.
No, very technically possible... though, with good randomness, very, very unlikely.
But nothing technically prevents a UUIDv4 from generating a duplicate value.
And use uuid v5 to hash it :)
1. https://github.com/paralleldrive/cuid2
Actually it's not impossible, but very very improbable.
P.S. You should buy a lottery/Powerball ticket
P.P.S. Whenever I use the word improbable, the https://hitchhikers.fandom.com/wiki/Infinite_Improbability_D... comes in mind
Thoughts?
In an eternal universe, even the most unlikely of events will happen an infinite number of times.
There is no need to set uuids through javascript or node imo
For example, in the idempotent kafka consumer pattern we set a unique ID in the header of every kafka message at the time of message publishing. We then have our consumers do a quick check of the ID against their data store to see if they have processed the message before or not. This way there is no impact if a consumer sees the same message twice. This allows us more flexibility during rebalancing events or replaying old offsets.
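A minimal sketch of that consumer-side check; the Set stands in for the consumer's durable data store, and the message shape and names (Msg, makeMessage, handle) are assumptions for illustration:

    import { randomUUID } from 'node:crypto';

    interface Msg { headers: { messageId: string }; payload: unknown }

    // Publisher side: stamp a unique ID into the header at publish time.
    function makeMessage(payload: unknown): Msg {
      return { headers: { messageId: randomUUID() }, payload };
    }

    const processed = new Set<string>(); // stand-in for a durable data store

    // Consumer side: skip anything already seen, so redelivery from
    // rebalances or offset replays has no effect.
    function handle(msg: Msg, process: (p: unknown) => void): void {
      if (processed.has(msg.headers.messageId)) return;
      process(msg.payload);
      processed.add(msg.headers.messageId); // record after successful processing
    }

In a real system the "seen" check and the processing result should land in the same transactional store, so a crash between the two can't cause a double-apply.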
Why? There's a built-in for this.
https://nodejs.org/api/crypto.html#cryptorandomuuidoptions