totalperspectiv 15 hours ago [-]
Having written a lot of Mojo over the last two years, just for fun, it's a really cool language. An ownership model adjacent to Rust's, comptime that is more powerful than Zig's, a rich type system, first-class SIMD support, etc.
Performance-wise it's the first language in a long time that isn't just an LLVM wrapper. LLVM is still involved, but they are using it differently than, say, Rust or Zig.
Very excited for Mojo once it's open sourced later this year.
ainch 1 days ago [-]
As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs. The last time I booted it up, I tried to do some basic string manipulation just to test stuff out, but spent an hour puzzling out why `var x = 'hello'; print(x[3])` didn't work, and neither did `len(x)` (turns out they'd opted for more specific byte-vs-codepoint representations, but the docs contradicted the actual implementation).
Hopefully they get Mojo to a good place for more general ML, but at the moment it still feels quite limited - they've actually deprecated some of the nice builtins they had for Tensors etc... For now I'll stick with JAX and check in periodically, fingers crossed.
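A minimal sketch of the byte-vs-codepoint surprise described above, assuming current Mojo semantics where len() on a String counts UTF-8 bytes (the exact String APIs have shifted between releases):

    fn main():
        var x = String("héllo")
        # "héllo" is 5 codepoints but 6 UTF-8 bytes ("é" encodes as 2 bytes),
        # so len() reports 6 rather than the 5 a Python dev might expect.
        print(len(x))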
sureglymop 1 days ago [-]
Mojo is cool but I just don't understand the python backwards compat thing. They're holding themselves back with that.
All the flaws I can think of in Kotlin are due to the Java compatibility. They could've made it work here by being more explicit but the way it currently works seems doomed.
pjmlp 23 hours ago [-]
Same story with C and Objective-C, C and C++, JavaScript and TypeScript, Java and Scala, Java and Clojure, ...
Yes, the underlying platform they based their compatibility on is the reason they got some design flaws, some more than others.
However, that compatibility is the reason they won wide adoption in the first place.
tasuki 23 hours ago [-]
They coulda made it Scala!
coldtea 1 days ago [-]
>As someone in ML who's interested in performance, I'm keen for Mojo to succeed - especially the prospect of mixing GPU and CPU code in the same language. But I do wonder if the changes they're making will dissuade Python devs.
Unless it's open sourced, it's a moot point, as most Python devs wont come anyway.
Certhas 1 days ago [-]
This is a bit ironic, given that people seem to have no problem using CUDA all over the place... Plus they promise to open source with the 1.0 release. We'll see...
_aavaa_ 23 hours ago [-]
I don’t see irony there. We’re locked into CUDA due to past decisions. And in new decisions we don’t want to repeat that mistake.
MohamedMabrouk 1 days ago [-]
I think the plan is to open source the compiler with 1.0, which is expected this summer, so in ~3-4 months' time.
armchairhacker 1 days ago [-]
> We have committed to open-sourcing Mojo in Fall 2026.
https://docs.modular.com/mojo/faq/#will-mojo-be-open-sourced
Sadly for them, Nvidia didn't stay still in the meantime and created the next generation of CUDA, CuTile for Python and soon for C++, through CUDA Tile IR (using a similar compiler stack based on MLIR).
Even though it's not portable, it will likely have far greater usage than Mojo just by being heavily promoted by Nvidia, integrated into dev tools, and working alongside existing CUDA code.
Tile IR was more likely a response to the threat of Triton rather than Mojo, at least from the POV of how easy it is to write a decently performing LLM kernel.
pjmlp 1 days ago [-]
And so as not to stay behind, Intel and AMD are doing similar efforts, and then we have the whole CPython JIT finally happening after so many attempts.
Not to mention efforts like GraalPy and PyPy.
And all these efforts work today in Windows, which is quite relevant in companies where that is the assigned device to most employees, even if the servers run Linux distros.
I keep wondering if this isn't going to be another Swift for Tensorflow kind of outcome.
melodyogonna 1 days ago [-]
People keep mistaking Mojo for merely a nicer syntax for writing GPU code, and so imagine that Nvidia's Python frameworks already do the same thing. But... would CuTile work on AMD GPUs and Apple Silicon? Whatever Nvidia does will still have vendor lock-in.
pjmlp 1 days ago [-]
Indeed, but Intel and AMD are also upping their Python JIT game, and in the end Mojo code isn't portable anyway.
You always need to touch the hardware/platform APIs at some level, because even if the same code executes everywhere, the observed performance, or in the case of GPUs the numeric accuracy, has visible side effects.
melodyogonna 22 hours ago [-]
It is portable in that you can write code to target multiple platforms in the same codebase. Mojo has powerful compile-time metaprogramming that allows you to tell the compiler how to specialise using a compile-time conditional, e.g. https://github.com/modular/modular/blob/9b9fc007378f16148cfa...
Of course, this won't be necessary in most cases if you're building on top of abstractions provided by Modular.
You don't get this choice using vendor-specific libraries; you're locked into this or that.
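A hedged sketch of the compile-time specialisation described above, using Mojo's `@parameter if` (the is_apple_silicon() helper is assumed to live in sys.info; exact stdlib names may differ across releases):

    from sys.info import is_apple_silicon  # assumed helper; check the current stdlib

    fn kernel():
        # `@parameter if` is evaluated at compile time: only the branch
        # selected for the target platform is ever code-generated.
        @parameter
        if is_apple_silicon():
            print("running the Apple Silicon path")
        else:
            print("running the generic path")

    fn main():
        kernel()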
pjmlp 22 hours ago [-]
Yes you do, you get PyTorch or whatever else, built on top of those vendor-specific libraries.
That is the thing with Mojo: by the time it arrives at 1.0, LLM progress and the investment being made in GPU JITs for Python will make it largely irrelevant for large-scale adoption.
Sure, some customers might stay around and keep Modular going; the golden question is how many.
brcmthrowaway 1 days ago [-]
Interesting, how big an impact is CuTile having?
modeless 1 days ago [-]
When I first heard about Mojo I somehow got the impression that they intended to make it compatible with existing Python code. But it seems like they are very far away from that for the foreseeable future. I guess you can call back and forth between Python and Mojo but Mojo itself can't run existing Python code.
ainch 1 days ago [-]
In their original pitch that was definitely part of it: take Python code, add type hints, get a big speedup. As they've built it out it seems to have diverged.
dtj1123 1 days ago [-]
They also advertised a 36,000x speedup over equivalent Python if I remember correctly, without at any point clarifying that this could only be true in extreme edge cases. Feels more like a pump-and-dump crypto scheme than an honest attempt to improve the Python ecosystem.
boxed 1 days ago [-]
Well... the article made self-deprecating fun of the clickbait title, showed the code every step of the way, and actually did achieve the claim (albeit with wall-clock time, not CPU/GPU time).
And it wasn't "equivalent python", whatever that means, they did loop unrolling and SIMD and stuff. That can't be done in pure python at all, so there literally is no equivalent python.
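A minimal Mojo sketch of the kind of thing that has "no equivalent Python": SIMD is an ordinary value type, so lane-wise vector arithmetic is written directly (lane count and values chosen arbitrarily here):

    fn main():
        # 8 float32 lanes packed into one first-class value; arithmetic on it
        # maps to vector instructions rather than a Python-style loop.
        var v = SIMD[DType.float32, 8](1, 2, 3, 4, 5, 6, 7, 8)
        print(v * 2.0)  # lane-wise multiply, printed as a vector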
Certhas 1 days ago [-]
If you paid very close attention it was actually clear from the start that the idea was to build a next gen systems language, taking the lessons from Swift and Rust, targeting CPU/GPU/Heterogeneous targets, and building around MLIR. But then also building it with an eye towards eventually embedding/extending Python relatively easily. The Python framing almost certainly helped raise money.
Chris Lattner talked more about the relationship between MLIR and Mojo than Python and Mojo.
pjmlp 1 days ago [-]
So basically Chapel, which is actually being used in HPC.
Certhas 22 hours ago [-]
I don't know Chapel in detail, I was more thinking Hylo. I don't think Chapel has a clear value/reference semantics or ownership/lifetime story? Am I wrong here?
The Mojo docs include two sections dedicated to these topics:
https://mojolang.org/docs/manual/values/
https://mojolang.org/docs/manual/lifecycle/
The metaprogramming story seems to take inspiration from Zig, but the way comptime, parameters and ownership blend in Mojo seems relatively novel to me (as a spectator/layman):
https://mojolang.org/docs/manual/metaprogramming/
I was sort of paying attention to all these ideas and concepts two-three years ago from the sidelines (partially with the idea to learn how Julia could potentially evolve) but it's far from my area of expertise, I might well be getting stuff wrong.
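As a small taste of the parameter/comptime blend mentioned above, a hedged sketch: Mojo's square-bracket parameters are compile-time values, and `@parameter for` unrolls a loop at compile time.

    fn repeat[count: Int](msg: String):
        # `count` is a compile-time parameter; this loop is unrolled
        # by the compiler rather than executed at runtime.
        @parameter
        for i in range(count):
            print(i, msg)

    fn main():
        repeat[3]("hello")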
mastermage 1 days ago [-]
That was what was originally advertised; they wanted to be what Kotlin is to Java, but for Python. They quickly turned tail on this.
That, and the not-completely-open-source development model, is what has always felt very vaporware-y to me.
victorio 1 days ago [-]
From the site:
Python interop
> Mojo natively interoperates with Python so you can eliminate performance bottlenecks in existing code without rewriting everything. You can start with one function, and scale up as needed to move performance-critical code into Mojo. Your Mojo code imports naturally into Python and packages together for distribution. Likewise, you can import libraries from the Python ecosystem into your Mojo code.
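In practice, the Python-import direction quoted above looks roughly like this sketch, based on the documented `python` module (numpy is just an arbitrary example of an imported package):

    from python import Python

    fn main() raises:
        # Mojo drives an embedded CPython interpreter for interop,
        # so any installed Python package can be imported.
        var np = Python.import_module("numpy")
        var a = np.arange(15).reshape(3, 5)
        print(a.shape)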
pansa2 1 days ago [-]
> they intended to make it compatible with existing Python code
That was the original claim, but it was quietly removed from the website. (Did they fall for the common “Python is a simple language” misconception?).
Now they promise I can “write like Python”, but don’t even support fundamentals like classes (which are part of stage 3 of the roadmap, but they’re still working on stage 1).
Maybe Mojo will achieve all its goals, but so far has been over-promising and under-delivering - it’s starting to remind me of the V language.
samuell 1 days ago [-]
The communication had me try to run some very simple Python code (reading files line by line), assuming it of course would run, which didn't work at all.
For me this was a big disappointment, and I wonder how much this has backfired across developers.
kjsingh 1 days ago [-]
isn't that achieved by Codon?
haskman 1 days ago [-]
Really the only thing good about Python is its ecosystem.
coldtea 1 days ago [-]
Nah, it's also a very fine language for getting an idea down quickly.
Might not have the niceties purists like, but perhaps that's exactly why it's a great language for that.
It's like executable pseudocode, and unlike other languages, all the ceremony is optional.
People flocked to it way before it became a "must" for ML and CS thanks to that ecosystem becoming dominant.
mastermage 1 days ago [-]
But that ecosystem is really good.
smartmic 1 days ago [-]
Advertising prominently with "AI native" seems necessary today, at least for some folks. To me, that's kind of off-putting, since it doesn't really say anything.
Can any of the AI enthusiasts here explain why, or what is meant by:
> As a compiled, statically-typed language, it's also ideal for agentic programming.
jpnc 1 days ago [-]
It's been really interesting to see all the desperation on hero pages for all these products and services ever since AI came into prominence. I think the funniest for me was opening IBM DB2 product page and seeing it labeled as 'AI database'. Hysterical.
> why, or, what is meant by
More errors caught at compile time means an agent can quickly check their work statically without unit and other tests.
chillfox 1 days ago [-]
I don’t really consider myself an “AI enthusiast”, but I do use it.
So, agents tend to do better the more feedback they can get. Type checking is pretty good for catching a bunch of dumb mistakes automatically.
The point is more hints for the agent is more better most of the time.
phyrog 1 days ago [-]
So just like for humans...
Reubend 1 days ago [-]
I don't know what they meant by it, and I share your opinion that "AI native" is somewhat meaningless for a programming language like this.
Regarding compilation and static typing, it's extremely helpful to be able to detect issues at compile time when doing agentic programming. That way, you don't run into as many problems at runtime, which of course the agent has more difficulty addressing. Unit tests can help bridge the gap somewhat but not entirely.
What's not stated on their website is that Mojo is likely a bad choice for agentic programming simply because there isn't much Mojo training data yet.
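The compile-time feedback loop being described amounts to something like this hypothetical two-liner, sketched in Mojo: the agent gets a type error from the compiler instead of a runtime crash.

    fn add(a: Int, b: Int) -> Int:
        return a + b

    fn main():
        print(add(1, 2))
        # add(1, "two") would be rejected at compile time, since a string
        # is not an Int - the agent sees the mistake before anything runs.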
boxed 1 days ago [-]
I've recently used Claude to write quite a bit of Mojo (https://github.com/boxed/TurboKod) and I can quite confidently say that Claude will write deprecated Mojo syntax a lot, but the compiler tells it and it fixes it pretty fast too. The only reason I notice is that I watch Claude while it's working and see the compilation warnings (and sometimes Claude is lazy and doesn't compile, so I have to watch for it).
But yeah, writing Mojo 1.0 code correctly even after getting errors might take a new training round, so next or even next-next models.
rmnclmnt 1 days ago [-]
Because a coding agent (when instructed well) will try to make a piece of code work in a loop. Static typing and compilation help in the process (no more undefined variables discovered at runtime for instance). But that’s not bullet proof at all as most of us know
Mojo looks neat but I'm pretty satisfied with Julia at this point for high performance numerical computing across CPU, GPU, etc. I can't help but feel this niche is already mostly solved beyond having Python like syntax. Even Python has things like Numba and Triton that are effective for less complicated / more self contained type problems.
andriamanitra 3 hours ago [-]
The changelog [1] looks reasonable; it seems like the language is moving in the right direction. I'm excited to try it out once it's open sourced. The dependency on glibc is an unfortunate limitation, as it means you can't install Mojo on musl-based systems.
[1] https://mojolang.org/releases/v1.0.0b1/
Julia is more mature for the same purposes, and since last year NVidia has had feature parity between Python and C++ tooling on CUDA.
The Python cuTile JIT compiler allows writing CUDA kernels in straight Python.
AMD and Intel are following up with similar approaches.
Whether Mojo will still arrive in time to gain wider adoption remains to be seen.
bobajeff 10 hours ago [-]
I've been keeping my eye on Mojo. Honestly though, the thing I least like about Python is its syntax.
Someone else here is bringing up Julia, which I think is a fine language, but the compiler error messages and the library documentation are not what I would want in a language as far along as it is. I'm also worried about the correctness issues I read about in a blog a while back. Also, I don't feel like I can make the kind of Python module I want with it (because of binary size and time to first x).
That being said I'm only hoping that Mojo can become an option. But I really like to use a REPL and I like the dynamicness of Python. So I might not ever get around to doing anything outside of maybe Numpy for performance.
archargelod 8 hours ago [-]
> the thing I least like about Python is its syntax.
For me it's the opposite - the only thing I like about Python is its syntax. That's why I really like Nim - you get C speed, "comptime", metaprogramming, a powerful type system, memory safety, and code that is often short and elegant.
Mojo seems interesting too, but so far they're mostly focused on ML stuff and not general programming. And I believe the compiler is still not open source?
Recurecur 10 hours ago [-]
I’m a big fan of Mojo’s design. It isn’t comparable to Julia since it has deterministic memory management.
I also think Mojo is more focused on being an industrial-strength language. I was shocked to see that the first iteration of Julia's ahead-of-time compilation did not provide file I/O.
taylorallred 17 hours ago [-]
I know Mojo is aimed at ML, but I'm actually really interested in trying it for game development :)
totalperspectiv 14 hours ago [-]
Me too! I've been using it for bioinformatics related work, and it is absolutely fantastic. I can't wait for it to hit fully open source status so it can be easily recommended.
momojo 12 hours ago [-]
I work in bioimaging. What kind of bioinformatics are you doing that requires Mojo-level power?
totalperspectiv 11 hours ago [-]
"requires" is a strong word, but I implemented an alignment kernel that can do alignments on the GPU.
Overall I think there is going to be a lot of "old" GPU compute hanging around, and now that writing kernels is a lot easier than it has been, we might as well try and see what algorithms we can get working there.
I originally picked up Mojo for the SIMD, not for the GPU kernels. The SIMD usability in Mojo is outstanding.
Paper on the tool I wrote: https://doi.org/10.1093/bioadv/vbaf292
What's "alignment" in your context? In bioimaging it usually refers to aligning something to a reference atlas (like the Allen Reference Mouse Brain Atlas) or aligning two microscope channels (like the red channel and green channel).
simplyvibecode 14 hours ago [-]
Full open source Mojo 1.0 coming this fall!
Timot05 1 days ago [-]
I’m relatively new to programming but I wish they had used a functional language syntax rather than an object oriented one as the basis for mojo.
From my experience, AI revolves a lot around building up function pipelines, computing their derivatives, and passing tons of data through them, which composability and higher-order functions from functional programming make a breeze to describe.
I also feel that fields other than AI are moving towards building up large functional pipelines to produce outputs, which would make Mojo suitable for those fields as well. I’m building in the space of CAD, for example, and I’d love to use a “functional Mojo” language.
Revanche1367 1 days ago [-]
The vast majority of real-world ML code today is written in languages like Python and C++. Relatively few people outside of academia and online forums are functional language enthusiasts. It also looks like most actual coding in the industry is going to be done by LLMs going forward, so it makes little sense to design new languages with a niche potential user base, since LLMs need a ton of training data. I think that was a factor in deciding to base Mojo on Python, along with the other reasons they state.
Timot05 1 days ago [-]
Agree with all of this. Though I'd say: since the language is mostly read by humans rather than written, in my opinion it makes even more sense to have a language syntax that actually matches intent. In the case of machine learning, that's mostly connecting functions together and acting on them, which matches functional syntax.
LLMs are also already very effective at writing ML-family syntax (like OCaml or F#) as they have plenty of data to train on, so they would have been effective from day one if a similar syntax had been chosen.
arikrahman 1 days ago [-]
I'm in the same boat. That would've put it in the family of the first language that neural nets and AI were created with decades ago, Lisp. Coming from the awesome project that was Swift, which, to their credit, took a massive undertaking to convince Apple execs, I was still hoping for a functional language approach like Haskell with the practicality of Clojure.
dllu 1 days ago [-]
I remember reading about this 4 years ago as the new Chris Lattner project and was super excited, though a little skeptical.
I think that nowadays with vibe/agentic coding, high performance Python-like languages become ever more important. Directly using AI agents to code, say, C++, is painful as the verbose nature of the language often causes the context window to explode.
boxed 1 days ago [-]
Not to mention that C++ basically can't be made to be safe. But Rust is probably fine.
coppsilgold 15 hours ago [-]
Python is basically the master glue language at this point.
If more than a few percent of execution time is spent in Python you are probably doing it wrong.
Personally I don't even understand why Cython is a thing; just write performance-critical functions in other languages:
<https://pypi.org/project/rustimport/>
<https://pypi.org/project/import-zig/>
Note that you can even start threads in those languages and use function calls as pseudo-RPC. All without an overly complex build system.
sirfz 13 hours ago [-]
Cython is a no-brainer really. You write the same language with immense speedup (matching what the "other languages" can achieve, with much less effort).
Also, tools like numba can beat them all with way less effort.
Imho, dropping into other languages should be a last resort in any project.
MohamedMabrouk 15 hours ago [-]
Mojo aims to be this "other language", arguably with an easier programming model than Rust, familiar syntax for Python devs, and a modern design in general. Its stated goal now is to be the easiest way to extend Python; it provides the same interface for zero-hassle import of .mojo files.
physicsguy 15 hours ago [-]
Cython, PyBind, and Nanobind are good for wrapping an existing library written in C++ and crafting an interface that doesn’t feel like a C++ one. They were a big step up from ctypes and SWIG.
pjmlp 4 hours ago [-]
Plus, even if it has a Perl-like feeling to it, C++26 reflection will make this even easier.
Already available on GCC 16.
fiedzia 10 hours ago [-]
> If more than a few percent of execution time is spent in Python you are probably doing it wrong.
Every program that starts with 1% of Python writes more Python and gets to 20, 40, 60 and then 99% of it.
pjmlp 1 days ago [-]
Still phase one, doesn't do native Windows.
Meanwhile Julia is more mature for the same purposes, and since last year NVidia has had feature parity between Python and C++ tooling on CUDA.
The Python cuTile JIT compiler allows writing CUDA kernels in straight Python.
AMD and Intel are following up with similar approaches.
So will Mojo still arrive on time to gain wider adoption?
Time will tell.
chrismsimpson 1 days ago [-]
I do wonder if Mojo was a great idea just a little too late to the party. Porting ‘prototypes’ from Python to lower level languages is fairly trivial now with LLMs.
insumanth 1 days ago [-]
I was excited when Mojo launched and thought it might grow big quick. I don't see much traction. The pitch is compelling. What could be the issue?
samuell 1 days ago [-]
As someone who would have strong reasons to invest time in Mojo (a simple, high-performance language for implementing bioinformatics scripts), I would say primarily the worry that development might be too tied to Modular, the startup behind it, which might eventually pivot to other priorities.
One would want to see either a strong community build up around it, or really hard evidence for a long-term commitment to the language from Modular. And the latter will take a long time to be assured of I think.
Also, editing tools need to catch up before very wide adoption of a language with a lot of new syntax.
kstrauser 1 days ago [-]
I have no time for or interest in proprietary compilers. The standard library is Apache 2, but the license link on their home page is to a long terms of service thing. I’d like to be wrong because it looks interesting. Until then, this doesn’t exist in my world.
I bet that’s true for a great many people. There are too many wonderful FOSS languages to bother with one you can’t fix or adapt or share.
williamstein 1 days ago [-]
Mojo is still NOT open source (the standard library is, but not the compiler). Open source is table stakes for a modern programming language.
pjmlp 1 days ago [-]
- Doesn't support Windows, which is what many companies give their employees, outside Silicon Valley-like cultures
- The MLIR approach, which was also designed by Chris Lattner while at Google, has proven quite valuable for creating Python JIT DSLs
- The Python ecosystem is now being taken seriously by the main GPU vendors, thanks to MLIR, as all their proprietary compilers are based on LLVM
- Others remember Swift for Tensorflow
tweakimp 1 days ago [-]
When it was announced it was not generally available for everyone to try out. There was a waitlist phase.
rienbdj 4 hours ago [-]
Does anyone know if Mojo is more suitable for functional programming than Python?
Things like optimizing away object allocations, pure function inlining, tail call optimization?
jadar 7 hours ago [-]
Congrats to Chris Lattner and crew! Seems like a neat project. I listened to him talk about Mojo on some podcast a while back and was really impressed. Swift was a runaway success, and I hope he can pull it off again with Mojo! :)
0xpgm 1 days ago [-]
Right now the majority of beginners start programming with a high-level language, say Python or JavaScript - then for more advanced system-level tasks pick up C/C++/Rust/Zig etc.
If Mojo succeeds, it could be the one language spanning across those levels, while simplifying heterogeneous hardware programming.
tveita 1 days ago [-]
Is there any project that showcases Mojo for running neural network models on the GPU - like ideally something like llama.cpp that could run one or more existing models to showcase the readability and performance?
sriram_malhar 1 days ago [-]
Doesn't anyone here have _one_ kind word to say about its features? Everyone seems to be starting with "on the other hand".
pjmlp 1 days ago [-]
Many of us were already around during Swift for Tensorflow.
noduerme 1 days ago [-]
Am I old or remembering this wrong... didn't Zuck write the first iteration of Facebook in PHP, and then spend millions to hire people to write something that converted the code to C++?
Does it have the indentation thing? That would be a no-go for a lot of people.
IceDane 1 days ago [-]
Only incredibly inexperienced people think indentation in python is a problem.
tasuki 23 hours ago [-]
Yes, indeed, indentation is one of the very few things in Python which aren't problematic!
vga1 1 days ago [-]
I have tons of experience with python, possibly more actual work experience than any other language, and I do think the indentation is a bit of a problem. Obviously not a huge one, but still something I wished they had done differently. Because I like to have a robust format-on-save wired into my editor, and you just cannot quite have that when indentation is meaningful.
Sure, black's pretty good and definitely better than nothing.
Just wanted to provide an easy counterpoint to the logical fallacy by IceDane.
AbuAssar 14 hours ago [-]
> AI native
What’s that supposed to mean?
jjice 14 hours ago [-]
My guess is that they expose first-party skills and maybe other agent-friendly docs? Not positive though.
DeathArrow 19 hours ago [-]
>As a compiled, statically-typed language, it's also ideal for agentic programming.
Since there is not much Mojo code in the wild that LLMs could have been trained on, I wonder how it will work in practice.
Probably the agents will make lots of mistakes and you will spend 10x the tokens compared to using a language the models are well versed in.
runarberg 1 days ago [-]
I am actually on the lookout for a low-level language which compiles to WebAssembly, to write a (relatively small) supervised learning model which I plan to be good enough for 5-year-old phone CPUs. I have a working prototype in Julia and was planning on (eventually) rewriting it in Rust, mostly for the WebAssembly target. I come from a high-level language background, so the thought of rewriting in Rust is a little daunting. So I was excited to learn about Mojo and find out if they had a WebAssembly target in their compiler.
But then I read this:
> AI native
> Mojo is built from the ground up to deliver the best performance on the diverse hardware that powers modern AI systems. As a compiled, statically-typed language, it's also ideal for agentic programming.
Well, no thank you. I know the irony here but I want nothing to do with a language made for robots.
kstrauser 1 days ago [-]
I’ve written Python for the past 25 years or so. I dig it. But I don’t think I’ve started a new Python project since starting to experiment with Rust. A lot (not all!, but a lot) of Rust patterns look a lot like Python if you squint at it just right. I also think that writing lots of Rust has made me better at writing Python. The things Rust won’t let you get away with are things you shouldn’t be doing almost anywhere else.
Go on, give it a shot. It stops being intimidating soon! And remember that the uv we all love was heavily influenced by Cargo.
frizlab 1 days ago [-]
If you’re searching for a language that has the same strong memory safety as Rust but is a bit easier to write, you should give Swift a go.
pjmlp 1 days ago [-]
I can't go get coffee so many times per day; there are better compiled languages to choose from, while offering Python-like ergonomics.
runarberg 1 days ago [-]
I actually have written Rust, but it has been a minute. I think my last project was 10 years ago (a backend for a massive online multiplayer theremin jam session; the site is no longer up, but the HN discussion still exists: https://news.ycombinator.com/item?id=10875211).
I remember Rust very fondly in fact. And I had the same experience as you: learning Rust made me a better JavaScript programmer. Let's see if a little neural network can be as fun.
Certhas 1 days ago [-]
Mojo has suffered in its communication from targeting VCs rather than users. They never actually had a clear "Mojo extends Python" MVP, or even a strategy to get to an MVP anytime soon. And the language started developing before AI agents were a thing; it has more to do with building around state-of-the-art LLVM tooling than AI agents. But I guess "easier lifetime semantics than Rust and native access to MLIR intrinsics" doesn't raise money...
DeathArrow 1 days ago [-]
>No more choosing between productivity and performance - Mojo gives you both.
That's a very big claim.
logicchains 1 days ago [-]
Very bold of them expecting people to use a language with a closed source compiler in the 2020s.
evertheylen 1 days ago [-]
If you're looking for a language that aims to solve the "two-language problem" like Mojo, but want something more open, more mature and less influenced by VC funding, check out Julia: https://julialang.org/
runarberg 1 days ago [-]
I used Julia a lot when I was studying statistics (which I dropped out of) back in 2015, but I recently (like last weekend) came back to it to write a prototype of a supervised learning model, and I have to say, coming back to it was pure joy. And my model prototype was indeed fast enough for me.
Now I will probably rewrite the model in rust if I want to do anything with it (mostly for the web assembly target as I want this thing to run in browsers) but I will for sure be using Julia for further experimentation. Lovely language.
ainch 1 days ago [-]
They've said they'll open source the compiler alongside the 1.0 release.
walterlw 1 days ago [-]
From what I understand, the goal for now is not to get people to use it, but for enthusiasts to try it.
kstrauser 1 days ago [-]
What enthusiast worth getting feedback from is going to tinker with a locked up language?
melodyogonna 1 days ago [-]
You'd be surprised. Anyway, the compiler will be opened with the 1.0 release; that's why reaching beta is exciting.