Yet here we are, with what looks like a massive undertaking for vibe coding.
Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.
It's probably a bit of both.
Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.
Bold of you to assume they have the expertise.
Submitting patches is joining forces and helping out.
--------
[1] And align with the project's direction. This part is of course much more subjective so could very easily be an honest misunderstanding of the situation.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
I love Rust, but you couldn't pick a language with slower compile times... XD
Linking is also slow, and the extreme amounts of metadata produced for LLVM almost serves as a benchmark for LLVM's throughput, but that's all in an effort to produce faster, better binaries in the end.
On godbolt.org, Hello World compiles and runs in about 250ms. Zig's Hello World compiles and runs in 600ms. Of course Zig is still an unfinished language so optimisations like these are probably hardly a priority, but when it comes to lines of code per second, the difference isn't as big as people make it out to be.
What will make the most difference is how many crates the rewrite will pull in. The PORTING.md file specifies "No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`" for the second phase, which should definitely get rid of the excessive compile time many people associate with Rust projects.
I guess it's all relative.
I find Rust's compile times abhorrent, and it's objectively slower than many other languages that also pull in dependencies left, right, and center. I guess that just means Rust scales very badly with the amount of code.
I'd put it at a bit better than Haskell, but honestly not by much.
I really wish Rust would focus much more on compile times, or on making smaller parallel compilation units. It's quite a chore to have to keep splitting your program into smaller and smaller crates just to not sit and wait for an eternity.
As a comparison my CI job for Rust takes 14m running on a 16vCPU machine while my much larger TypeScript project compiles in 1m on a 2vCPU machine. I know people that have to spend quite a lot of work on keeping compile times manageable for Rust (nix, smaller crates, aggressive caching, etc etc).
Rust still brings me enough value that I'll stick with it, but one can still dream of a better future :)
So the practical delta is around half an hour to three quarters of an hour per day, or multiple hours per week. That directly affects flow state and experimentation speed. Over the span of a month that's 2 full days' worth of work spent waiting for the compiler. Or, if you take my company's evaluation of the average engineer's hourly cost, it's roughly 2550 per month, or almost 30k per year. Obviously it's a bit exaggerated, you don't spend a full year refactoring and working like that, but even a tenth of that is still a big lump of money if you scale it to a few teams.
Now it needs to be taken with a huge pinch of salt, because Rust provides other benefits that offset the fact that it's painfully slow to compile, but it's still worth noting.
So it's definitely a faster feedback loop and honestly completely bearable, but it's not 200ms.
The patch would have been rejected either way because it was out of date and conflicted with other work going on.
LLMs promote a decoupling of mental models and the actual codebase.
As much as some may want to believe, just reviewing what the LLM outputs is not equivalent to thinking about implementation details, motivations, exactly how and why things are, and how and why they work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.
Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!
(Though I don't know if this particular patch series would get accepted on its own merits.)
split into a bunch of much smaller changes?
There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.
That is, if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make any difference, because it's hard to optimize code in those languages for how the CPU and memory work.
The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one who could write the whole thing from scratch, so they have enough judgement to review generated code.
Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.
It's possible to know in general terms, how computers work, and what assembly is without "knowing assembly" in the sense of being familiar with using/debugging it as a programming language.
Then it's sufficient to know assembly, but not necessary.
This is compatible with "[developers] that still understand assembly to this day tend to be better developers", but not with "[on developers who] don’t know assembly, which speaks to [their] quality".
Vibe-coders often don't read, let alone understand, the code they submit in PRs.
- the scale of how much and how fast you can generate code with AI vs how fast can you write code for compiler
- the mental model of what is being generated and how much the contributor understands and owns the generated code
High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
So it is not, by your own admission, "exactly, literally the same".
If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.
Zig, as a programming language, is a multiplier codebase: a bug may affect a significantly larger portion of users than most libraries or binaries would, since it's a fundamental building block of everything that uses Zig. That alone could be worth the extra scrutiny on every individual commit.
There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.
Couldn't you say exactly the same about Bun?
I guess there are 2 philosophies in software development: move fast and break things and move at a pace that guarantees everything is rock solid.
Most commercial software, Anthropic included, is taking the former path, while most infrastructure teams are taking the latter.
I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.
Both appear to be[1][2]. FreeBSD doesn't have a formal policy yet, but they appear to be leaning towards admitting some degree of LLM contribution.
[1]: https://docs.kernel.org/process/coding-assistants.html
[2]: https://forums.freebsd.org/threads/will-freebsd-adopt-a-no-a...
PostgreSQL, a famously slow and rock solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.
Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!
You can be against a particular technology without being "anti-technology".
See DRM/surveillance/bad self driving implementations.
Just because a thing exists doesn’t mean you have to use it for everything. You don’t use asbestos blanket? Why are you so against asbestos?
So the next step will be that Bun is directly rewritten from scratch at every iteration, and the repository will only contain the specs for the LLMs.
Caching the generated code locally will be authorized for some transition period, but as it's obviously very dangerous to let people tweak what exactly computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, thus the enactment of the companion law is an obvious go-to.
The emitted AST has a lower defect rate since it incorporates strong types and in-built error handling. Other pros include native code and portability, but downside is the compile time.
People say the same about Go as well: that its type system and limited feature set make it the most AI-friendly language. But there too, it seems like a hunch rather than a proven fact.
Let me elaborate further - it's like the proficiency of LLMs in writing English vs writing Swahili or Kurdish.
The types of a program are like Swahili or Kurdish, or even worse, because those languages at least still have a sizeable chunk of the Internet and digital archives behind them, while the types of a program are very specific to it.
Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.
The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.
I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.
The last time I had a go with Haskell, the errors reminded me so much of hellish terminal compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.
On the other hand, the compile-time downside is somewhat offset once you're using agents (and especially parallel agents) anyway. Since every edit already costs a round-trip API call to a third-party server, you can accept a slightly slower compile step.
No, they were prevented from doing so because the Zig devs didn't like the proposed changes and are preparing a more comprehensive improvement.
Lock the syntax/api together for a couple of years. Allow AI code in Zag.
Review after a few years, see which is better.
And will Rust team accept their vibe coded patches?
I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...
They didn't.
fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding. You have an existing architecture that works well and a test suite that you can test against.
Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster
That's because it's not vibe coding - stingraycharles doesn't seem to understand what vibe coding is. Vibe coding was defined here https://x.com/karpathy/status/1886192184808149383
> There's a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
This is very far from Anthropic's migration plans.
My benchmark is basically, "are you letting the AI drive."
In this case, an AI appears to have written the migration guide...
And then that leaks outside their social and age groups, because other people hear the incorrect usage, get confused, and incorporate that confusion into their own use of the term.
with superpowers, i see a lot of specs -> impl plan -> execute plan
Inventing a term doesn't give you exclusive rights to provide the definition.
They recently proposed some of their internal tools to be the official Rust implementation[0] of Connect RPC[1]. As a protobuf based library set, this includes a new Rust-based protobuf compiler, Buffa[2].
[0]: https://github.com/orgs/connectrpc/discussions/7#discussionc...
Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.
I imagine these are not really concerns with rust at this point.
In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.
I think this is true regardless of what language you’re using.
I’ve built a lot in Zig and there’s no difference between vibing stuff in it versus TypeScript/React. Claude can “one-shot” them both, and will mimic existing code or grep the standard library to figure everything out.
Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.
> and guide it closely (in which case it's useful for focused work)
It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach to that so far has been to write test blocks describing the desired interface + semantics, and asking Claude to (`zig test` -> fix errors) in a loop until all the tests pass.
Here, I just did a quick test with claude.
1. "make a simple tcp echo server that uses rust"
compiles and runs - took a few seconds to generate.
2. "make a simple tcp echo server that uses zig"
result: compile error, took literal minutes of spinning and thinking to generate
response: "ziglang.org isn't in the allowed domains. Let me check if there's another way, or just verify the code compiles conceptually and present it clean."
/opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
    @compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
3. "make a simple tcp echo server that uses zig 0.16"
result: compile error:
zig build-exe main.zig
main.zig:30:21: error: no field named 'io' in struct 'process.Init.Minimal'
    const io = init.io;
4. "make a simple tcp echo server that uses zig 0.15"
result: compile error
zig build-exe main.zig
/nix/store/as1zlvrrwwh69ii56xg6yd7f6xyjx8mv-zig-0.15.2/lib/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it
    @compileError("ambiguous format string; specify {f} to call format method, or {any} to skip it");
Rust took seconds and just works. Zig examples took minutes and don't work out of the box. The DX & velocity isn't even close.
1. the language and stdlib are written by people who know what they're doing
2. packages in the ecosystem, at the barest level, are written by those who didn't leave after a few compile errors they couldn't reason about
I think the changes are improvements, but there's a real cost to language churn, and every time it happens, the graveyard of projects grows just that little bit larger.
Virtually all crates are still at version 0.x and introduce constant breaking changes: https://00f.net/2025/10/17/state-of-the-rust-ecosystem/
If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model that. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.
LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.
What you are talking about used to be a pain point, but is now pretty much gone.
Rust can be a real superpower for AI-assisted dev work, because the compiler outputs very good errors, and the type system catches most safety bugs.
Zig is a great language and I want to see it succeed, but this is a prudent move for Bun.
Sometimes it is worth it, but it may also kill projects. A risky move. And AI doesn't help its cause. AI can save a lot of time when making ports, it is one of the things it does best, but it doesn't protect from regressions.
I am not using Bun in production, but if I was, I would consider it a risk. Not because of Rust vs Zig, but for changing things that work.
> The regular pull requests for bun are wild too: https://github.com/oven-sh/bun/pulls?q=is%3Apr+
> Most are created autonomously by @robobun, checked for duplicates with a GitHub action (powered by Claude), reviewed by @coderabbitai and @claude. Meanwhile the CI is broken and @robobun finally closes a portion of its own PRs because they duplicate other PRs it has written. (Merging into main is still done by a human.)
How is it an incorrect interpretation? Jarred is indeed pitching/suggesting/predicting that human contribution will not be allowed in the near future, i.e. banned.
The person upthread should have said "predicting".
Among them:
- much easier to iterate on (due to the language being simpler and compilation much faster)
- native C/C++ interops (Zig can compile C and C++ and mix it with Zig) which is crucial for a node-replacement runtime that runs an open source JS engine
- fewer dependencies and trivial static linking
I guess that now that they've been acquired by Anthropic there's this combination of having both in-house Rust talent, AI which does better on Rust, and the funding and resources necessary to undertake such a migration.
I'm struggling to figure out how to even start interrogating this notion. What does this mean?
So the difference is not in writing new stuff but in maintaining the existing codebase. Rust's rigidity makes it potentially harder to break stuff compared to Zig's general flexibility. As a project grows and matures, different types of contributors naturally come in and it's unreasonable to expect everyone to learn about historical footguns that may have accumulated.
I think there are even longer term plays that Anthropic should be looking at, in this space, but it seems like they've decided rust is the right thing, so fair play. I would be (am!) thinking about making an LLM optimized high level language that you can generate / train on intensively because you control the language spec.
Claude struggling at Zig: the above + memory safety issues if you run “fast” mode.
It is generally true that Rust code tends to be written in a way that the compiler catches the issue at compile time. The same is not as true for Zig, Python or JS
something JS-adjacent could certainly be more known than an obscure language but are that many people using drop-in node replacements?
But I can’t reconcile the reasoning about “strong, thorough compiler” with the fact that LLMs are also fantastic at Ruby.
They also write really great posix shell (including very sophisticated scripts) and python.
Something more subtle is going on.
Has anyone made any cross language benchmarks for LLMs? I wonder if rust's conceptual complexity makes it harder for LLMs to write? If all you care about is working software, which language is best for LLMs? Python, because there's more example code? Go or Java, because they're simpler languages? Ruby because its terse? Rust because of the compiler? I'd love to see a comparison!
I believe we now have it all, but we fail at choosing.
Sorry if I’m being pedantic, but I’m not aware of Bun having made any statements about AI assisted coding before.
It doesn’t look like that at all. Do you think that all use of AI is vibe coding?
https://github.com/oven-sh/bun/compare/claude/phase-a-port
This single commit is 65k lines of additions
https://github.com/oven-sh/bun/commit/ffa6ce211a0267161ae48b...
There's a decent article by Simon Willison that talks about this: https://simonwillison.net/2025/Mar/19/vibe-coding/
> I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.
But pointing your AI at an entire codebase to transpile pretty much entirely by itself? Yeah vibe coding is a fitting term.
Even if you wrote it a small essay on how to Rust. That improves the situation but doesn't change the core autonomy/hope of the task.
> (programming, neologism) A method of programming in which a developer generates code by repeatedly prompting a large language model.
As much as I find the word "vibe" generally annoying (in all contexts), I actually really like "vibe coding" as "LLM did everything and I didn't even look at it". It's a succinct, useful way to describe that mode of doing things. Diluting it down to "LLM-assisted coding" makes it useless.
It sort of surprises me how uptight people are getting about a term that was mentioned on X last year and has since been tossed around to loosely imply that a machine did between zero and all of the work. Just because it doesn't match exactly does not mean it's useless; it maps to a concept, and if the details are important and ambiguous, then elaborate.
You're absolutely right.
"+27,939Lines changed: 27939 additions & 0 deletions"
of new rust code
This is obviously very different from that, but the way the commit looks doesn't make it so.
Why? Do you think large changes not made by LLMs are also reviewed line by line?
I think the most commonly-accepted definition of "vibe coding" is when you "forget that the (generated) code even exists"[0]. So vibe-ness entirely hinges upon whether you're manually reviewing. If you make/prompt changes based on what you observe in the generated code (rather than only based on runtime behavior), then you're not "vibe coding".
I think the other things you mentioned are orthogonal to vibe-ness.
Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".
Of course if you have a good idea for how to deal with allocations etc "idiomatically" already maybe that works out well. And to the credit of the port guide writer bun seems to have its explicit allocations that are already mapping pretty well to Rust.
My only experience with ports so far is Python to Go, and it's been near flawless (just enough stupid shit to make me feel justified to be in the loop).
Especially for memory management, the right or wrong abstractions in Rust can lead to a factor of 5 or 10 extra difficulty. With the right memory management abstraction your code can be a straight-line port (or even cleaner!); with the wrong one you're going to spend a lot of tokens watching a machine spin around in circles trying to untie itself.
GC'd languages don't have this problem, though obviously you can still generate a stupid amount of pain for yourself by doing something wrong.
The slides: https://go.dev/talks/2015/gogo.slide#3
An interesting similarity:
>We had our own C compiler just to compile the runtime.
The Bun team maintain their own fork of Zig too
The LLM is non-deterministic. You could have it independently do the conversion 10 times, and you'd get 10 different results, and some of them might even be wildly different. There's no way to validate that without reviewing it fully, in its entirety, each time.
That's not to say the human-written deterministic conversion tool is going to be perfect or infallible. But you can certainly build much more confidence with it than you can with the LLM.
The problem is not that we get 10 solutions. I think you should draw out your implications and state them directly, because they're already either solved or being actively iterated on by industry, and we (well, not me) can address them if you're willing to speak them.
This would require a robust test suite though.
One of the cases where vibe coding might actually be useful, writing a throwaway tool.
Should you use the LLM to do the thing directly, or use the LLM to implement a tool that does the thing?
I tend to reach for the latter, it’s easier to reason about.
But none of these properties are what let you perform a successful port. The port is going to rely entirely on oracle testing.
Have the best of both worlds.
[0]: https://github.com/oven-sh/bun/compare/claude/phase-a-port
-------------------------------------------------
Language       files     blank    comment      code
-------------------------------------------------
Zig             1298     79693      60320    571814
TypeScript      2600     67434     115281    471122
JavaScript      4344     36947      37653    290873
C++              583     27129      19117    215531
C                111     21577      83914    199576

Which is needed, as making things safe often requires refactoring that isn't localized to a single function/code block, and doing that while transpiling isn't the best idea. In general I would recommend a non-LLM-based transpilation (if possible), then use an LLM to do bit-by-bit, as-localized-as-possible, bottom-up refactoring to get rid of unsafe code, potentially at some runtime performance cost, followed by another top-down refactoring to make things nice and fast. Plus human supervision to spot parts where paradigms clash so hard that you have to make some larger changes already during the bottom-up step.
Anyways, that means segfaults would likely stay segfaults in the initial transpiled version.
I've had more success vibe coding Rust than I have in more dynamic languages. I suspect the strictness of the Rust compiler forces the AI agent to produce better code. Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
I am in the middle of porting TypeScript to Rust and learned a ton doing this. You can check out the work in progress here https://github.com/mohsen1/tsz/
Happy to share my learnings on this
My way of compensating for my own inability to do detailed code reviews is making sure the tests, integration tests, end to end tests, cover everything I care about. Without that, you can't be sure it is not skipping detail work. I've also made it do some bench marking and stress testing and then analyze the code base for potential bottlenecks. After it found and fixed a few issues, it got better. Finally, prompting it to do critical reviews, look for refactoring opportunities, etc. can give you a nice list of stuff to fix next. Having it run memory leak checkers and static code analysis tools also is a good strategy. Once you start running low on issues you find this way, the code is probably not horrible. Or at least you hit some sort of local optimum.
The lack of code reviews sounds pretty horrible. But it is now quickly becoming the biggest bottleneck in AI assisted coding. Eliminating that bottleneck is scary but it enables a few step changes in volume of code that becomes possible. Using strict compilers and strict memory management helps eliminate a few categories of bugs and issues.
I was previously doing this with languages I do understand. Once you start routinely dealing with larger and larger commits, reviews become a problem.
I expect working with larger code bases like this will get a lot easier and better over time. I noticed that the main headaches I face with this type of engineering are the tendency of models to keep deliberately cutting corners, only doing happy path testing, or deferring essential work for later. I suspect a lot of the models are simply biased to conserving token usage. Pretty annoying but also easy to compensate for with follow up prompts and testing. And probably something that becomes less of an issue as the models get tuned to behave better without additional prompting.
My rewrite has been running stable in production for two weeks with a 50x speedup, which has made the doomed old solution viable again.
Wonder what this will mean for future legacy projects, and how we should structure programs to fit inside a "rewrite with LLM" size? Maybe a renaissance for microservices?
Dunning Kruger effect. At least you admit it.
> Not sure. It could be just that I am less familiar with Rust so it feels like it's doing a better job.
Ya think?
I had similar code written in Zig and C++, and cold compilation was many times faster in C++, while incremental compilation was instant in C++.
I think the reason most Rust projects compile slowly is the excessive use of dependencies, as well as the excessive use of metaprogramming in the code.
Zig doesn’t have multiple compilation units so it doesn’t parallelize compilation
So, Anthropic acquires the Bun team because claude-code uses Bun. They port Bun from Zig to Rust presumably because Rust "is better" (imagine big air quotes here). Again presumably, they want to make claude-code "better". Why make it so complicated? With all the power of LLMs they have, surely they can make claude-code the best possible by writing it in Rust directly.
It's easy to just see Bun as a marketing stunt, as well.
Claude Code itself is already heavily written by LLMs[0], so I'm not sure what's "this" here. You mean LLMs are okay for writing code but not porting?
[0]: No, it's not just marketing. The codebase was leaked and anyone who glanced at it would realize the claim is likely true.
What I said is that "they know that LLMs are not the right tool for this" is not the answer, as CC is already vibecoded so it'd be very weird to believe you can't vibecode a port of CC.
The actual answer is, of course, that the whole discussion is making a mountain out of a molehill. Bun is not committed to a Rust rewrite, vibed or not.
Rust on the other hand is pretty established by now and has less breaking changes. It also has more compile-time safety-guarantees that makes vibe-coding a bit more confident.
On top of that, Zig has rejected their upstream contributions. So they'd have to maintain their own compiler in the long run, which is probably just technical debt.
[0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
Normal, emotionally stable people do sometimes make decisions about what businesses to patronize based on the political leanings of the business owners. Same thing happens with art appreciation, movie/TV watching, and plenty of other things. Zig might not be a business, but the same rules apply.
You may think that's foolish, and not make your decisions that way, but it's a perfectly valid way to make decisions.
Maybe with issues like abortion or racial discrimination, but not tariffs.
Such as React Native? :D
Whether or not they can clean it up is an interesting question.
But also, telling an LLM to do a line-by-line translation of a file is guaranteed to never truly be a line-by-line translation, due to how LLMs work. That's fine: you don't tell it "line-by-line" to actually make it work line by line, but to "convince" it not to do the opposite (moving things around wholesale, completely rewriting components based on guesses about what they're supposed to do, etc.). In other words, it makes the result more likely to be behavior-compatible (including logic bugs), even though it isn't literally line-by-line. And that then allows you to fuzz the behavior for discrepancies in the initial step, before doing any larger refactoring that may include bug fixes.
Through tbh. I would prefer if any zip -> terrible rust part where done with a deterministic, reproducible, debug-able program instead of a LLM. The LLM then can be used to support incremental refactoring. But the initial "bad" transpilation is so much code that using an LLM there seems like an horror story, wrt. subtle hallucinations and similarr.
(would teach me a little about Zig, about which i know 0)
#1 boils down to “can the LLM solve the pointer aliasing here?” and #2 is translating between metaprogramming paradigms. Could work but a line-by-line translation is a pipe dream.
Line-by-line ports to idiomatic Rust are usually not possible because of the borrow checker and Rust's ownership rules. That's the reason the Typescript compiler was ported to Go instead of Rust.
I would guess dealing with breaking changes is a big motivation for this.
https://github.com/oven-sh/bun/compare/claude/phase-a-port#d...
that isn't particularly surprising, but the point is I would expect getting things more stable than the Zig version to take a while.
I'm not sure I would take this kind of path; I would focus much more on refactoring the project into small, easily translatable components with tight boundaries. But it's cheap to try things.
I get a "nodejs not found" error when running the opencode command in the terminal. I installed it via bun too.
What is most interesting here for me is:
- a vibe coding project with a big, clear outcome and acceptance criteria, on
- a public, working, high performance, full featured, production codebase, by
- the leading LLM model maker, known for the strongest coding ability
A good example whether it succeeds or not.
As a fan of the language, I hope it leads to some reflection on things that might need to change moving forward.
Both their AI policy and their rejection of Bun's performance PR were level-headed and well-reasoned. And the link seems more like a proof-of-concept than anything else.
It's true corporate sponsors are a big help with language development, but not at the expense of conceptual integrity.
[1] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...
In addition, the link in the comment you replied to explains why the PRs Bun opened to Zig would have lowered the quality of the compiler and how Zig has achieved even greater speedups, with more widely applicable features like incremental compilation and the self-hosted backend.
I think people here are reading too much into it.
But I'm excited for the (I think inevitable) stage where the shoggoth starts to reach outside those constraints -- rewriting, patching, renaming, rebuilding libraries, DLLs, binaries -- and we move into a regime where the libraries dissolve, the application floats on top of the shifting sands of an ever more efficient, secure, unified and totally inhuman technology stack.
Obviously this is a horrifying idea in some ways (interpretability, security, etc.), but it's also not obvious to me that it can't work, especially if there are dedicated, centralized efforts to do this. It's also not clear that interpretability is necessarily mutually exclusive with full slopification/machine rewrite of decades of foundational, incremental development.
Will everything eventually be rewritten in Rust and we finally achieve utopia?
OK I'm sorry, I'll see myself out.
Not sure about vibe-coding it. While they aren't using v8, LLMs made it easier to understand v8 quirks and update v8 as they make weird changes every now and then. It couldn't write the runtime without help though.
For those curious: https://github.com/alshdavid/ion
It seems there was an issue where the image API ignored the ICC profile (now fixed). Any developer with experience implementing image formats would almost certainly have avoided this mistake. This is a problem that cannot be solved with vibe coding. In this situation, users are merely guinea pigs for bug fixes.
Sounds like responsible open source software development to me. That's what pre-releases are for.
April 27th - Zig contributor mlugg clarifies why the specific optimizations Bun did were ill advised and wouldn't have been accepted in Zig, regardless of AI use [1]
May 4 - Bun is looking into Rust as an alternative.
This, to me, seems like total whiplash. Has anyone at Bun made a statement on why they're making such dramatic changes? It seems like the lesson to internalize from mlugg is not "switch to Rust"
[1] https://lobste.rs/s/ifcyr1/contributor_poker_zig_s_ai_ban#c_...
It was always a risky proposition to use Zig, unless those people were philosophically committed to helping the language develop, or die-hard fans. If not, their jumping to some other language should not be such a big surprise.
They may come to the conclusion that Zig is incapable of delivering on its promises or is deficient at satisfying their requirements.
Haha, is it really okay not to retract the caricature criticizing Rust that the official account previously posted?
I'm not a rust dev but even I kind of notice that tokio is kind of shunned in most projects. Why is that? Is it just bad or what?
Source: I worked on Deno, competed directly with Bun on HTTP performance (and won on some metrics).
Edit: and of course I typed future instead of task (aka "spawned future"). Thanks, child commenters below. Much of Deno was built on spawning futures that mapped to promises and doing it as fast as possible. I spent ages writing a future arena to optimize this stuff..
Edit: and tasks.
You'd much rather have this runtime you're building manage task scheduling and allocation and all that. It's the most natural design choice to make.
However, there are reasons why you might not want to use it:
- You don't need async at all
- You want to own the async execution polling completely
- You want some alternative futures executor like io uring (even though tokio-uring is a thing)
But as soon as you need something that doesn’t fit neatly into the abstractions they provide, even something as seemingly simple as proactively reusing or cancelling sessions, things quickly become extremely complicated, inefficient, and unreliable.
For high-performance servers, where you really care about raw performance, DoS resistance, and taking advantage of modern kernel features, these abstractions can become a major limitation.
It’s a bit like using an ORM that gives you no easy way to send raw SQL queries. It works fine for common cases, even if it’s not always optimal. But when you really want to take advantage of what the database can do, you usually avoid the ORM.
I think avoiding async entirely might be a mistake, and I'm not entirely convinced anything better than a general-purpose async runtime exists for a JS runtime (which is itself general purpose, after all).
Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
Trying to run it as a replacement for node in persistent backend/api scenarios is just plain broken.
RSS grows unbounded under Bun: https://discord.com/channels/876711213126520882/148058965798...
On nodejs: `tokei src`: 98,333 LOC of C++
On bun: `tokei src`: 573,572 LOC of Zig
On deno: `tokei libs cli runtime`: 289,573 LOC of Rust
This seems wrong, though, so I'd appreciate it if someone who knows the structure of these projects could correct me on the folder names.
Doing `tokei lib src test deps` gives more than 5M LOC, but I'm not sure if that is fair.
As an aside, I've been bitten by Zig's breaking changes on my own projects as well. It's taken the shine off of Zig and I'm looking at alternatives.
I've really enjoyed Bun the past year or so, but the acquisition by Anthropic, Bun's codebase and documentation increasingly becoming AI slop, and this impulsive complete rewrite - all of it has ruined it for me and I'm actively moving off of Bun. I don't feel comfortable relying on it any longer.
This makes me respect Zig team's stance more, that it's a technical decision more than an ideological one.
I was hopeful for this project, and I've reported crashes & bugs in the bundler with the hope that it will stabilize over time, but this is just silly - I'm not going to risk them pulling the rug under me and replacing the runtime with 1 million lines of vibecoded rust.
Hm does that actually work?
Edit: in a way that can be verified, and not the AI tool saying it did
If they did, I guess they would rewrite deno in C++
Everyone wants to be a Rustee these days.
https://bun.com/blog/bun-joins-anthropic
"I got obsessed with Claude Code"
So the bad, bad Zig that opposes the clanker mania has to be punished, even if top comments deny it.
Anthropic is one of the most evil companies in existence today. Whenever someone produces something, they steal it.
Company A buys company B. A's management decrees that henceforth B's acquihired team must comply with company A's standards.
Second system effect kicks in. Bugs multiply.
Half of original company B devs leave.
I'm investigating whether future projects should revert to using Deno.
Problem is fanboys like YOU.
This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
I’m curious to see what a working version of this looks like, what it feels like, how it performs, and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.
> if/how hard it’d be to get it to pass Bun’s test suite and be maintainable
Every month brings new opportunities to completely abstract the process of porting code with agents, all using linguistics. What an exciting time.
For those looking for a similarly interesting (and interestingly similar) example, see Cloudflare's port of Next.js[0], "vinext", from a couple of months ago. It had some teething problems at the start but I'm using it in a few production projects now with minimal issues.
[0] - https://github.com/cloudflare/vinext
If people get worked up about experimentation, that's their problem, not yours.
You can delete your social media accounts and just keep working on what you want to, for one. Nobody is forcing you to use social media.
I don't think the tone was the problem.
[there was some sarcasm there, BTW, if anyone has a faulty detector that didn't pick up on it]
By working in public on a popular open source project, you are communicating intent and purpose to your users and the general public through your commit messages, branch names, and documentation. You’ll save yourself a lot of grief if you act accordingly.
Doesn't matter if it's "experimental", it's a dumb experiment that shouldn't exist.
Do you think the same about bitcoin? Where do you draw the line as to what programs are allowed to be written?
While the concerns many have about Bun's potential future direction are valid IMO, of the posts on this thread the one you are criticising is one of the more constructive.
Recently Bun's latest version had memory leaks which, from my understanding, crashed production code. Add to that their attitude[0] of saying OSS will allow no human contribution, now doing these Zig-to-Rust ports, the years of decision making that went into choosing Zig in the first place, and this code basically being vibed (there is no way they are reviewing it) while being VC funded/bought by Anthropic.
These are all genuine issues that cause the hate. You can say people are hating because they rely on it, but the truth is that this also looks like a bait and switch: people switched from node.js to Bun (maybe even getting locked into Bun), only for the maintainers to make these highly questionable decisions. That is why people are starting to hate on Bun.
At least that's my interpretation right now, reading this whole thread.
[0]:https://x.com/jarredsumner/status/2048434628248359284: "I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026."
- Jarred Sumner
e.g. `Box::leak(Box::new( ... ))`
Who is to say that it’s wrong?
Bun raised millions of dollars and was acquired by a commercial entity which bragged in the same blog post of reaching $1B. They’re not a guy with an eyepatch and a tin can out on the street.
Open-source developers should be compensated, but they don’t have to be. You can’t reasonably offer your work for free then complain someone isn’t paying you. If you want to be paid, charge for it.
Signed: A long time open-source developer who has dedicated years of full-time work to useful projects without compensation or raising VC money or being acquired.
We are all software engineers on here (or at least many of us are), we all know how project management and prioritisation works right? We can't work on everything all at once.
That is not what the question is about, which you’ll see if you engage with it properly in good faith. There is a single question in the comment (indicated, as one does in English, by a question mark):
> How do you feel about all the constant concerns being raised about the quality of the project lately?
Everything else is context and opinion to explain the question.
At some point it needs to be made clear; it's not a legal obligation, but a reputational challenge.
What aspect do you think dominates?
For what it's worth, in my last experience with Bun[0] I ran into a couple of bugs where it seemed Rust could have helped, e.g. using Bun.write
[0]: https://mastrojs.github.io/blog/2025-10-29-what-struggled-wi...
I've had surprisingly good results from getting AI agents to take a script in shell, python or typescript and have it translate it into those other programming languages, including rust versions. Or swapping from one build system to another.
Or take on an additional/related feature (like Redis grepping over the new array data types). Because you can be relatively sure the borders are stable and you can limit the surface/scope.
Personally, I find this experiment interesting and I’m curious to see how it develops. Writing idiomatic rust requires a shift in mindset, so it’ll be worth watching how well LLMs adapt to that over time.
I don't understand why this mentality is so common. Zig and Rust are both fine languages with markedly different design goals and they can coexist.
I hope you end up with code that's elegant, and not only maintainable but future-friendly and performant.
While you're here, can you elaborate on the method chosen? For example, why not write a conversion script for phase A? I mean, the same Anthropic model would produce one in no time, and prompting for it has the same cognitive load, but you'd get a deterministic result.
I'm sure recasting Bun in a new mold is going to be hugely informative about the structure of Bun itself, regardless of the outcome.
would love to read a postmortem
> This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
Trying to pass off a blunder like this as no big deal is an insult to your users. You made a dumb mistake. Own it, be transparent, and correct the problem that started this; namely, put some form of experimental tag in the commit message. Then say you made a simple mistake, apologize, and move on. Being dismissive is a defense mechanism that can arouse suspicion, as in: are you now lying about the experimental state to quench the flame war? Not that I believe that, but it can certainly turn into conspiracy. Again, you can avoid all that with transparency.
It’s their repo, let them do what they want lol
It could get even worse if they get Second System Syndrome[1] and try to add features as they rewrite it. Considering Bun's rapid development cycle, this seems likely.
[0] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[1] https://en.wikipedia.org/wiki/Second-system_effect
A commit message on a random branch is not an obligation. Not telling random internet users what side projects they're working on is not a blunder. It quite frankly doesn't matter what you think looks official, it doesn't give you the right to treat people like this.
It's so embarrassing to be a programmer some times, so many of my peers behaving like spoiled rotten brats.
The majority of the community feels this way, which says something. The author's reaction is to publicly display being upset and dismissive of the community's reaction. That is just making it worse.
When you work on a project this big, more care is needed. The commit was an innocent mistake. The blunder is blowing off the community's response as overblown, which it would have been had the commit been tagged experimental. But it wasn't. And the author did themselves no favor blowing it off.
If the author was smart, their reply would simply have been:
Hello. To clarify, this is an experimental branch only. There are no plans to port, only to experiment. I will tag the repo as such to ensure people understand its intention and avoid future misunderstandings.
Nothing difficult to understand here.
Yes, it says that those people are spoiled rotten brats and the community needs to start calling it out to improve itself.
They aren't contributors. They aren't employees. They aren't paying customers. Bun is not a web standard. They benefit from a free product that they chose to opt into over the standard ecosystem.
And for some reason they feel they have a right to know every decision and experiment everyone who does work on that project is making a priori. And, God forbid, if somebody even so much as starts working on something in an off branch that doesn't affect them in any way without getting their approval, they're going to throw an absolute hissy fit.
And to criticize the person actually doing their job for feeling slighted that hundreds of people have verbally accosted them over it, because one feels they don't recognize an "implied responsibility" to those folk, is silly.
I'll also push back, though. The majority of the community doesn't seem to be doing anything.
Props for the effort man, but people have already picked up on the Zig-to-Rust transition.
Poor Zig folks ...
You may even be an OK programmer, but IF YOU AREN'T ABLE TO DO THE WORK I DON'T WANT TO USE IT.
Not worth your time? Not worth my time.
Not actually pointing at you or anyone in particular here, to be clear. And if the answer were "not much more than forgetting to turn off the light when leaving the toilet", then certainly I'd cheer "go have fun".
But otherwise we collectively have to keep in mind that the prompts we throw mindlessly, without perceiving any direct negative feedback, are possibly not harmless.
So if you can measure it, come back with those numbers too, so we can all take them into consideration the next time the thrill of running it just to see what happens rises in our minds. Thanks.
> Showing 1,808 changed files with 790,916 additions and 151 deletions.
Just looking at the git diff [0].
I looked at one of these Rust port files [1]. It's 827 LOC and apparently 7,576 tokens, i.e. roughly 9 tokens per line. That gives a first-order guess that the ~790k added lines are around 7 million output tokens. Obviously there are tool calls, reasoning, reads of the Zig version, and compile-error fixing as overhead, so multiplying by 5, maybe this is around 35-40 million tokens?
If we guess that's around $200 to $500 in token spend, it probably emits about the same as burning $100 worth of gasoline, on the order of 50 kg of CO2?
[0] https://github.com/oven-sh/bun/compare/main...claude/phase-a...
[1] https://github.com/oven-sh/bun/blob/dacc59c62a8f93eabe6d9998...
It feels odd that the same message can be downvoted like this and yet prompt such a courteous response, with reasoning, metrics, and values.
Glory to your kindness and informative way of reacting.
Probably less than the impact of having dozens/hundreds of actual developers, each with a dedicated computer running for months/years in what it would take for a similar effort.
If you want to go live in the woods and farm/hunt for yourself, feel free. I'd suggest you stay away from the museums with paint and not glue yourself to a car mfg.
Having people working together toward a goal is not going to create the same social structures as pointing LLMs at the same goal. That's missing the ecosocietal forest for the digital output.
Actually, at the societal level, no, people are not free to go into gather-and-hunt mode; that doesn't scale. Sure, some individuals can do it on the margin, but by definition that won't make the mainstream societal impact disappear.
As for social structures in creating software... the social structures around creating software shouldn't be a goal... software serves to scratch an itch or serve a purpose... and that purpose can even be social or entertainment... but the creation of the software itself doesn't need to serve any other purpose and if it can be done via automation, or partly automation, all the better.
As to going into hunter/gatherer mode... have you tried? My brother isn't even online and regularly hunts and fishes... so did my dad. They weren't wealthy people and still managed to get by. A lot of people do and did through history... because most people wouldn't be willing to do it... I realize that some countries and regions are more populated... but there's plenty of space in the US to achieve this kind of lifestyle.
For that matter, there's absolutely very little standing in your way if YOU want to take on the goals of creating cleaner energy or pairing with "responsible" data centers.
But I really think you're just virtue signaling and grandstanding to try to shame others, because you feel guilty about things you aren't actually responsible for.