I've felt before that compilers often don't put much effort into optimizing the "trivial" cases.
Overly dramatic title for the content, though. I would have clicked "Async Rust Optimizations the Compiler Still Misses" too, you know.
Yes, we can have async in traits and closures now. But those are updates to the type system, not to the async machinery itself. Wakers are a little bit easier to work with, but that's an update to std/core.
As I understand it, the people who landed async Rust were quite burnt out and became less active, and no one has picked up the torch. (Though there's 1 PR open from some Google folks that will optimize how captured variables are laid out in memory, which is really nice to have.) Since I and the people I work with are heavy async users, I think it's maybe up to me to do it, or at least start it. Free as in puppy, I guess.
So yeah, the title is a little baitey, but I do stand behind it.
Great to see people wanting to get involved with the project, though. That’s the beauty of open source: if it aggravates you, you can fix it.
Retrospectively, I think everyone is satisfied with the adopted syntax.
Maybe it’s a case of agree and commit, since it can’t really be walked back.
await using db = await sqlite.connect(await ctx.getConfig("DB_CONN"));
It's not so bad when you have one `await foo` vs `foo.await`; it's when you have several of them on a line in different scopes/contexts. Another one I've seen a lot is...
const v = await (await fetch(...)).json();
Though that could also be... const v = await fetch(...).then(r => r.json());
In any case, it still gets ugly very quickly.

```dart
(await taskA()).doSomething()
(await taskB()) + 1
(await taskC()) as int
```

vs.

```rust
taskA().await.doSomething()
taskB().await + 1
taskC().await as i32
```
It gets worse if you try to compose:
```dart
(await taskA(
  (await taskB(
    (await taskC()) as int
  )) + 1
)).doSomething()
```
This often leads to trading the await syntax for `then`:
```dart
await taskC()
  .then((r) => r as int)
  .then(taskB)
  .then((r) => r + 1)
  .then(taskA)
  .then((r) => r.doSomething())
```
But this is effectively trading the structured await syntax for a callback one. In Rust, we can write it like this:

```rust
taskA(taskB(taskC().await as i32).await + 1).await.doSomething()
```
This is a code block
HN has never used markdown, so the triple-tick does nothing but create noise here. I was worried about features that I still don't love like `.match` etc (I'm more open to these now).
Post-fix macros would have been very complex. Scoping alone is complex.
`.await` kinda just works. It does everything you want and the one cost is that it looks like a property access but it isn't. A trivial cost in retrospect that I was a huge baby about, and I'll always feel bad about that.
Postfix macros had some very tricky issues and it would have delayed things a lot to figure out the right resolution.
You may also need to set up a large stack frame for each C FFI call.
As an Elixir + Erlang developer I agree it’s a great programming model for many applications, it just wasn’t right for the Rust stdlib.
One problem I have with systems like gevent is that it can make it much harder to look at some code and figure out what execution model it's going to run with. Early Rust actually did have a N:M threading model as part of its runtime, but it was dropped.
I think one thing Rust could do to make async feel less like an MVP is to ship a default executor, much like it has a default allocator.
By providing a default, I think you're going to paint yourself into a corner. Maybe have one or two opt-in executors in the box... one that is higher resource, like tokio, and one that is meant for lower-resource environments (like embedded).
Claim-2: async versus sync is a fundamental division in CS
Discuss amongst yourselves. I lean towards thinking both are probably true (P~70%, P~90%)
See: “What Color is Your Function?” by Bob Nystrom (2015). https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
Hi. The article calls Rust async an MVP. You should expect strong reactions when you frame it like that.
"MVP" has a generally understood meaning; distorting that is unhelpful and confusing. Rust's async was not an MVP when it was released in 2019. It was the result of a lot of earlier work.
Rust async: (a) works well for a lot of people and orgs in production settings and (b) is arguably better designed than most (all?) other async implementations. Calling it an MVP is far from "simply the truth". It is an opinion -- and frankly a pretty clickbaity one. I appreciate your article's attention to detail, but the title is straight up shameful sensationalism.
I strive to not reflexively defend the status quo, but I get really chafed when people conveniently blur the difference between fact and opinion.
Please argue the narrowest correct claims available. The current title overstates your claims and undermines the article's overall credibility. Your central claim (as I read it) is that for embedded software there are opportunities for async improvement in Rust. Yeah, this might sound boring, but I think it's accurate.
My other main criticism of your article is when it claims Rust async breaks the "zero cost abstraction" principle. I don't buy this claim, because you do not show that hand rolling the code provides the same guarantees. A lot of people misunderstand what "zero cost" means; your article wouldn't be the first to give the wrong impression.
Writing is hard (different audiences bring different backgrounds), and I commend anyone who puts their ideas out into the world. Please take this as constructive feedback: please agree or disagree with me on the merits. Ask and engage where I'm unclear.
The team literally described it as such.
One of the main architects of Rust’s async/await, withoutboats, left a comment on lobsters:
> It's just the truth. Neither in the language design nor in the compiler implementation has hardly any progress been made in the now 7 years since we shipped the MVP. The people primarily involved in delivering the MVP all become less active in the project around the same time and delivery since then has stalled out.
>
> I hope this person receives the support to do this work.
Rust’s async is great, and I feel you around some of the less informed criticisms. But it’s been called an MVP for a decade now, it’s not an insulting characterization. Just because it’s been an MVP does not mean it’s not good or useful.
Even if MVP is the correct term for its current state, it has a connotation that less informed folks will take the wrong meaning from, so perhaps it's not useful to continue propagating it even if true.
This is both true and not true. It's no secret that the async ecosystem has had deep social rifts for a very long time, and that's made it very tough to actually make progress by anyone, regardless of the desire to.
It is true that async massively gave a boost to Rust's adoption, and it is good enough for many users. It is a monumental technical achievement. At the same time, that doesn't mean it's perfect.
The author seems to be obsessing over the overhead for trivial functions. He's bothered by the overhead of the "panicked" and "returned" states. That's not a big problem. Most useful async blocks are big enough that the overhead for the error cases disappears.
He may have a point about lack of inlining. But what tends to limit capacity for large numbers of activities is the state space required per activity.
Is it really though?
In my experience many Rust applications/libraries can be quite heavy on the indirection. One of the points from the article is that contrary to sync Rust, in async Rust each indirection has a runtime cost. Example from the article:
async fn bar(blah: SomeType) -> OtherType {
foo(blah).await
}
I would naively expect the above to be a 'free' indirection, paying only a compile-time cost for the compiler to inline the code. But after reading the article I understand this is not true, and it has a runtime cost as well. This may look like a case of over-optimization, but given how many times I've seen this pattern, I assume it builds up to a lot of unnecessary fluff in huge codebases. To be clear, in that case, the concern is not really about runtime speed (which is super fast), but rather about code bloat for compilation time and binary size.
Most useful async blocks are deeply nested, so the overhead compounds rapidly. Check the size of futures in a decently large Tokio codebase sometime
Depends somewhat on your expectations, I suppose. Compared to Python or Java, sure, but Rust of course strives to offer "zero-cost" high-level concepts.
I think the critique is in the same realm of C++'s std::function. Convenience, sure, but far from zero-cost.
Not just too dramatic: given that all the things they list are non-essential optimizations, and some fall under "micro-optimizations I wouldn't be sure Rust even wants", and given how far the current async is from its old MVP state, it's more like outright dishonest than overly dramatic.

It's the kind of clickbait that says the author cares neither about respecting the reader nor about honest communication, which for someone wanting to do open source contributions is kinda ... not so clever.
Though in general I agree Rust should have more HIR/MIR optimizations, at least in release mode. E.g. it's very common that an async function is not pub and is directly awaited at every call site (or can otherwise be proven to only be called once); in that case neither `Returned` nor `Panicked` is needed, as the future can't be polled again after either. Similarly, `Unresumed` is not needed either, as you can directly run the code up to the first await (and with such a transform, their points about "inlining" and "async fns without await still having a state machine" would also "just go away"TM, at least in some places). Similarly, the whole `.map_or(a, b)` family of functions is IMHO an anti-pattern: it introduces more functions with unclear argument ordering, removes the signaling `unwrap_`, and offers no benefit beyond minimally shortening `.map(b).unwrap_or(a)` plus a micro-optimization, which is not productive in an already complicated language. Guaranteed optimizations for the kind of patterns `.map(b).unwrap_or(a)` inlines to would be much better instead.
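A hedged sketch of the non-pub, awaited-exactly-once pattern described above (the function names are made up for illustration):

```rust
// Illustrative only: a non-pub async fn that is awaited exactly once.
async fn helper(input: &str) -> usize {
    input.trim().len()
}

pub async fn handle(input: String) -> usize {
    // Sole call site, immediately awaited: after this completes, the helper
    // future can never be polled again, so `Returned`/`Panicked` (and
    // arguably `Unresumed`) carry no information the compiler couldn't
    // in principle prove away.
    helper(&input).await
}
```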
So threads were the right programming model.
Now language runtimes prefer “green threads” for portability and performance but most languages don’t provide that properly. Instead we have awkward coloring of async/non-async and all these problems around scheduling, priority, and no-preemption. It’s a worse scheduling and process model than 1970.
Not really. I've observed async code is often written in such a way that it doesn't maximize how much concurrency can be expressed (e.g. instead of writing "here are N I/O operations, do them all concurrently" it's "for operation X, await process(x)"). However, in a threaded world this concurrency problem gets worse because you have no way to optimize towards such concurrency - threads are inherently and inescapably too heavyweight to express concurrency in an efficient way.
This is not a new lesson - work-stealing executors have long been known to offer significantly lower latency with more consistent P99 than traditional threads. This has been known since forever - in the early 00s this is why Apple developed GCD. Threads simply don't give the kernel scheduler any richer information about the workload, and kernel threads are an insanely heavy mechanism for achieving fine-grained concurrency, even worse when that concurrency is I/O or a mixed workload instead of pure compute that's embarrassingly easy to parallelize.
Do all programs need this level of performance? No, probably not. But it is significantly more trivial to achieve a higher performance bar and in practice achieve a latency and throughput level that traditional approaches can’t match with the same level of effort.
You can tell async is directionally kind of correct in that io_uring is the kernel’s approach to high performance I/O and it looks nothing like traditional threading and syscalls and completion looks a lot closer to async concurrency (although granted exploiting it fully is much harder in an async world because async/await is an insufficient number of colors to express how async tasks interrelate)
Well, we know how to make "traditional threads" fast, with lower latency and more consistent P99 since forever^2, in the early 90s. [1]
Sure, we can't convince that Finnish guy this is worthwhile to include in THE kernel, despite similar ideas having been running in Google datacenters for idk how many years, 15 years+? But nothing stops us from doing it in userspace, just as you said, a work-stealing executor. And no, no coloring.
Stack is all you need. Just make your "coroutines" stackful. Done. All those attempts to be "zero-cost" and change the programming model dramatically to avoid a stack introduced much more overhead than a stack and a piece of decent context-switch code.
> You can tell async is directionally kind of correct in that io_uring is the kernel’s approach
lol, it is very hard to model anything proactor like io_uring with async Rust due to its defects.
> lol, it is very hard to model anything proactor like io_uring with async Rust due to its defects.
Not really. People latched on to async cancellation issues as intractable due to one paper but I’m not convinced it’s unsolvable whether due to runtimes that consider the issue more fundamentally or the language adding async drop which lets the existing runtimes solve the problem wholesale.
The point I’m making is that I/O and hardware is fundamentally non-blocking and we will always pay a huge abstraction penalty to try to pretend we have a synchronous programming model.
Stack-based coroutines are one way to do it. A relevant trade-off here is overhead: requiring a runtime and narrowing the potential use cases this can serve (i.e. embedded real-time stuff).
If you don’t care about supporting such use cases you can of course just create a copy of goroutines and be pretty happy with the result.
I guess this is referring to https://www.youtube.com/watch?v=KXuZi9aeGTw ?
But as you observed, async/await fails to express concurrency any better. It’s also a thread, it’s just a worse implementation.
A thread pool is not a research project.
* Cooperative vs. preemptive scheduling
* Userspace vs. kernel scheduling
* Stackless vs. stackful
* Easy control over waiting/blocking behavior vs. none
* Easy fan out + join vs. maybe, with some work and thread spawn overhead
* Can integrate within a single-threaded event loop vs. not really
Depending on what you're doing they may be interchangeable, or you can only go one way. The basic case where you're doing basically synchronous work in a thread/task is no different either way, other than having colored functions with async/await, and efficiency. If you're doing some UI work then event handlers are likely running in a single-threaded event loop, which is the only thread you can interact with the UI on, and which you can't block or the UI is going to freeze.
Your premise is wrong. There are many counterexamples to this.
There are many ways to implement and manage threads. In Unix-like and Windows systems a "thread" is the above, plus a bunch of kernel context, plus implicit preemptive context switching. Because Unix and Windows added threads to their architectures relatively late in their development, each thread has to behave sort of like its own process, capable of running all the pre-existing software that was thread-agnostic. Which is why they have implicit scheduling, large userspace stacks, etc.
But nothing about "thread" requires it to be implemented or behave exactly like "OS threads" do in popular operating systems. People wax on about Async Rust and state machines. Well, a thread is already a state machine, too. Async Rust has to nest a bunch of state machine contexts along with space for data manipulated in each function--that's called a stack. So Async Rust is one layer of threading built atop another layer of threading. And it did this not because it's better, but primarily because of legacy FFI concerns and interoperability with non-Rust software that depended on the pre-existing ABIs for stack and scheduling management.
Go largely went in the opposite direction, embracing threads as a first-class concept in a way that makes it no less scalable or cheap than Rust Futures, notwithstanding that Go, too, had to deal with legacy OS APIs and semantics, which they abstracted and modeled with their G (goroutine), M (machine), P (processor) architecture.
Go uses userspace threads. It's also interesting that Go and Java are the only mainstream languages to have gone this route. The reason is that it imposes a huge penalty when calling into FFI code that doesn't use green threads, whereas this cost isn't there for async/await.
Sure, but once you involve the kernel and OS scheduler things get 3 to 4 orders of magnitude slower than what they should be.
The last time I was working on our coroutine/scheduling code creating and joining a thread that exited instantly was ~200us, and creating one of our green threads, scheduling it and waiting for it was ~400ns.
You don't need to wait 10 years for someone else to design yet another absurdly complex async framework, you can roll your own green threads/stackful coroutines in any systems language with 20 lines of ASM.
2. Unchecked array operations are a lot faster. Manual memory management is a lot faster. Shared memory is a lot faster.
Usually when you see someone reach for sharp and less expressive tools it’s justified by a hot code path. But here we jump immediately to the perf hack?
3. How many simultaneous async operations does your program have?
For example, if you don't explicitly call the java.awt.Toolkit.sync() method after updating the UI state (which according to the docs "is useful for animation"), Swing will in my experience introduce seemingly random delays and UI lag because it just doesn't bother sending the UI updates to the window system.
Eclipse uses SWT instead, which wraps the platform's native widgets.
I thought it was because they could copy chromium.
Which inputs are getting latency? The keyboard? The files?
> the non blocking nature
You’ve literally argued against yourself without realizing.
But God help you if you have to change the code. Async threads are a way to organize it and make it workable for humans.
Absolutely not
In this context the interesting thing to measure would be doing IO in your green threads vs OS threads.
A stronger theoretical performance argument for async io is that you can do batching, ala io_uring, and do fewer protection domain crossings per IO that way.
OS Threads are for compute parallelism, async with stackless coroutines (ideally) or green threads is for IO parallelism. It’s pretty straight forward.
And IMO, Zig has shown how to do async IO right (the foundational stuff). Other languages could add better syntax for ergonomics.
The core of your async implementation doesn't have to care about I/O - as long as it has a way to block/schedule fibers, it's easy to implement io_uring/IOCP based I/O on top of that - it's a matter of sticking a single IO poll in your main loop, and when you get a result, schedule the fiber that's waiting for it.
Another thing you get almost for free is an accurate Sleep(0.3) - your Sleep pushes the current fiber in a global vector with the time to be resumed, and you loop over that vector in your main loop.
We're writing a game engine so WaitForNextFrame() is another useful one - the implementation is literally pushing the current fiber to a vector and resuming it the next tick.
It depends on what you are doing. Threads are the right model for compute-bound workloads. Async is the right model for bandwidth-bound workloads.
Optimization of bandwidth-bound code is an exercise in schedule design. In a classic multithreading model you have limited control over scheduling. In an async model you can have almost perfect control over scheduling. A well-optimized async schedule is much faster than the equivalent multithreaded architecture for the same bandwidth-bound workload. It isn't even close.
Most high-performance code today is bandwidth-bound. Async exists to make optimization of these workloads easier.
Why can’t a scheduler be written which optimizes around IO? What additional information is present in code that has async/await annotations?
To use a custom scheduler you must first disable the existing schedulers your code is using by default for both execution and I/O. That means no OS scheduling. Thread-per-core architectures with static allocation and direct userspace I/O is the idiomatic way to do this regardless of programming language.
Optimal scheduling is a profoundly intractable problem -- it is AI-Complete. A generic scheduler is always going to be deeply suboptimal because a remotely decent schedule isn't practically computable in real systems. A more optimal scheduler must continuously rewrite the selection and ordering of thousands of concurrent operations in real-time. Importantly, this dynamic schedule rewriting is based on a model that can see across all operations globally and accurately predict both future operations that haven't happened yet and any ordering dependencies between current and future operations. A modern system can handle tens of millions of these operations per second, so the scheduling needs to be efficient.
A generic scheduler has to allow for almost arbitrary operation graphs and behavior. However, if you are writing e.g. a database engine, you have almost the entire context of how operations relate to each other both concurrently and across time. The design of a somewhat optimal scheduler that only understands your code becomes computationally feasible. It isn't trivial -- scheduler design is properly difficult -- but you build it using async style.
What does "I/O optimized scheduling" look like to you, and does it end up with the same sort of compiler hints, like "async / await"? Or is it different?
I think it's still basically doing epoll behind the scenes [1], but you have straightforward sequential code in the process and the actual implementation is invisible to the user, and you can use old boring blocking code with an object that is a drop-in replacement for Thread.
I personally still kind of prefer the explicit async stuff with Futures and Vert.x since I kind of like the idea that async is encoded into the type itself so you're more directly aware of it, but I'm definitely an outlier for that.
[1] Genuinely, please correct me if I'm wrong, it's very possible that I am.
You are not. I prefer the same and that's how my product works right now. My HTTP API is Vert.x-only with futures. My particular use case is thousands of devices sending small packages to the API at undefined intervals or in bursts, so I find Vert.x event-loop performance quite a good match for my use case. In fact, customer feedback has been very positive thus far.
Background tasks in my app are processed in a different module, which uses a plain old ScheduledExecutorService-based thread pool to poll. The tasks are visible in the UI as well. I still haven't switched to VTs, because I don't know what load implications that may have on the database pool. The JEP says `Do not pool virtual threads` [0]. I assume if a db connection is not available in the pool, the VT will get parked, but I feel this isn't quite what a background scheduler should look like, e.g., hundreds of "in-process" tasks blocked while waiting for a db connection to free up. Testing has been on my todo list for some time now.
The JEP doesn't mention epoll, but there is a write up about that on github: `On Linux the poller uses epoll, and on Windows wepoll (which provides an epoll-like API on the Ancillary Function Driver for Winsock)` [1]
0 - https://openjdk.org/jeps/444#Do-not-pool-virtual-threads
1 - https://gist.github.com/ChrisHegarty/0689ae92a01b4311bc8939f...
It makes sense that they would use epoll under the covers; I would have been surprised if they weren't using epoll or io_uring/kqueue.
When it comes time to test your concurrent processing, to ensure you handle race conditions properly, that is much easier with callbacks because you can control their scheduling. Since each callback represents a discrete unit, you see which events can be reordered. This enables you to more easily consider all the different orderings.
Instead with threads it is easy to just ignore the orderings and not think about this complexity happening in a different thread and when it can influence the current thread. It isn't simpler, it is simplistic. Moreover, you cannot really change the scheduling and test the concurrent scenarios without introducing artificial barriers to stall the threads or stubbing the I/O so you can pass in a mock that you will then instrument with a callback to control the ordering...
The problem with callbacks is that the call stack when captured isn't the logical callstack unless you are in one of the few libraries/runtimes that put in the work to make the call stacks make sense. Otherwise you need good error definitions.
You can of course mix the paradigms and have the worst of both worlds.
Node.js has a problem where every standard library function has a callback and blocking version. At least they just committed to doing both.
Every explanation of the feature starts with managing callback hell.
Threads offer concurrent execution, async (futures) offer concurrent waiting. Loosely speaking, threads make sense for CPU bound problems, while async makes sense for IO bound problems.
JK, obviously callbacks became prominent as a result of folks looking for creative solutions to the C10K[0] problem, but threads have a long history of haters[1][2][3].
[0] https://en.wikipedia.org/wiki/C10k_problem
[1] https://brendaneich.com/2007/02/threads-suck/
[2] https://web.stanford.edu/~ouster/cgi-bin/papers/threads.pdf
[3] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-...
There is one hill I'll die on, as far as programming languages go, which is that more people should study Céu's structured synchronous concurrency model. It specifically was designed to run on microcontrollers: it compiles down to a finite state machine with very little memory overhead (a few bytes per event).
It has some limitations in terms of how its "scheduler" scales when there are many trails activated by the same event, but breaking things up into multiple asynchronous modules would likely alleviate that problem.
I'm certain a language that would support the "Globally Asynchronous, Locally Synchronous" (GALS) paradigm could have its cake and eat it too. Meaning something that combines support for a green threading model of choice for async events with structured local reactivity a la Céu.
Francisco Sant'Anna, the creator of Céu, actually has been chipping away at a new programming language called Atmos that does support the GALS paradigm. However, it's a research language that compiles to Lua 5.4, so it won't really compete with the low-level programming languages there.
If your threads are "free" you can just run 400 copies of synchronous code, and blocking in one just frees the thread to work on others. async within the same goroutine is still very much opt-in (you have to manually create a goroutine that writes to a channel that you then receive on); it just isn't needed where "spawn a thread for each connection" costs you barely a few kB per connection.
except when a RAM fetch is so expensive a load is basically an async call - and it's a single machine code instruction at the same time
For problems that aren't overly concerned with performance/memory, yes. You should probably reach for threads as a default, unless you know a priori that your problem is not in this common bucket.
Unfortunately there is quite a lot of bookkeeping overhead in the kernel for threads, and context switches are fairly expensive, so in a number of high performance scenarios we may not be able to afford kernel threading
But what you said about kernel implementation is true. But are we really saying that the primary motivation for async/await is performance? How many programmers would give that answer? How many programs are actually hitting that bottleneck?
Doesn't that buck the trend of every other language development in the past 20 years, emphasizing correctness and expressiveness over raw performance?
Of course - what else would it be? The whole async trend started because moving away from each http request spawning (or being bound to) an OS thread gave quite extreme improvements in requests/second metrics, didn't it?
What I question is whether 1. Most programs resemble that, so that they make it an invasive feature of every general purpose language. 2. Whether programmers are making a conscious choice because they ruled out the perf overhead of the simpler model we have by default.
The original motivation for not using OS threads was indeed performance. Async/await is mostly syntax sugar to fix some of the ergonomic problems of writing continuation-based code (Rust more or less skipped the intermediate "callback hell" with futures that Javascript/Python et al suffered through).
It's all nuanced and what to choose requires careful evaluation.
Most stacks are tiny and have bounded growth. Really large stacks usually happen with deep recursion, but it's not a very common pattern in non-functional languages (and functional languages have tail call optimization). OS threads allocate megabytes upfront to accommodate the worst case, which is not that common. And a tiny stack is very fast to copy. The larger the stack becomes, the less likely it is to grow further.
>cannot have pointers to stack objects
In Go, pointers that escape from a function force heap allocation, because it's unsafe to refer to the contents of a destroyed stack frame later on in principle. And if we only have pointers that never escape, it's relatively trivial to relocate such pointers during stack copying: just detect that a pointer is within the address range of the stack being relocated and recalculate it based on the new stack's base address.
Yes, you're not getting Rust performance (though a good part of that is their own compiler vs using all the LLVM goodness), but performance is good enough and the benefits for developers are great; having goroutines be so cheap means you don't even need to do anything explicitly async to get what you want.
The computational cost of context-switching threads at yield points is often many times higher than the actual workload executed between yield points. To address this you either need fewer yield points, which reduces concurrency, or you need to greatly reduce the cost of yielding. An async architecture reduces the cost of yielding by multiple orders of magnitude relative to threads.
I would say this is often 1% of cases. As for the Rust ecosystem, it doesn't make much sense to add so much complexity and so many inconvenient abstractions to cover 1% of use cases.
Did you know you can get even more performance if you manually manage memory and don’t use virtual functions?
"Green threads" only exist in crappy interpreted languages, and only because they have stop-the-world single-threaded garbage collection.
Now, the languages that don't offer a choice are another matter.
I also want to address something that I've seen in several sub-threads here: Rust's specific async implementation. The key limitation, compared to the likes of Go and JS, is that Rust attempts to implement async as a zero-cost abstraction, which is a much harder problem than what Go and JS do. Saying some variant of "Rust should just do the same thing as Go" is missing the point.
Yes, you can lint out unwraps and other panicking operations, but if there were a subset of no-panic Rust, a large part of the issue detailed in this post would go away. It's frustrating working with a language that has so many operations that can, in theory, panic even if in practice they should only do so if a bit flips. Like proving an array is non-empty, or working with async. You either end up with a lot of error handling for situations which will never happen, or really strange patterns like the non-empty-list pattern (a structure with a first field and then your list), which of course ends up adding its own bloat.
The Rust-in-Linux folks are working on this with things like fallible memory operations. It's required for their own use. Increased use of proof (such as proving that an array is non-empty) is also slowly being worked on.
Rust's goal is memory safety. Panics are perfectly memory safe.
I tire so much of complainers who want someone else to make all their tools infallible yet want to do nothing. Let's just full-stop there. They not only want to avoid working on the tools. They prefer if the tool does everything for them, and they prefer having things done for them without bound.
Complainers want easy APIs. When the API isn't easy enough, they want easy Kubernetes containers "programmed" by YAML. When that isn't easy enough, it's all point-and-click hosted services on GCP and Amazon. You people don't want to program. You want apps. Infallible apps. You want to be consumers, fed from the sky like little birds who endeavor only never to fledge, never to fly. And you want to pay nothing for it.
The secret you people need to figure out is that the lifestyle you think is sustainable is actually a commensal relationship with people building things for you. There is no vast alliance to wrest power from corporations, to dissolve capitalism, no grass roots movement to "shake things up." There is food falling from higher in the water column from an ecosystem filled with people who do things. Those above do not have time to look down, but if they did, all they would feel is overwhelming contempt, so they only look across at the horizon.
But why do people seek to confirm comments like this? Because Rust scary. Churn on, little ant mill. Let be free any who understand the pointlessness of this performance.
I never really liked the viral nature of async in rust when it was introduced.
I wish rust the best of luck and with more people like this rust could have a brighter future.
For now the best option to write code that wants to live in both worlds is sans-io. Thomas Eizinger at Firezone has written a good article about this pattern[1]. Not only does it nicely solve the sync/async issue, but it also makes testing easier and opens the door to techniques like DST[2].
I have my own writing on the topic[3], which highlights that the problem is wider than just async vs sync due to different executors.
0: https://github.com/rust-lang/effects-initiative
1: https://www.firezone.dev/blog/sans-io
2: https://notes.eatonphil.com/2024-08-20-deterministic-simulat...
Algebraic effects are the way forward, but that's a long way off.
Broadly I think there are three approaches:
1. For frequent and small CPU heavy tasks, just run them on the IO threads. As long as you don't leave too long between `.await` points (~10ms) it seems to work okay.
2. Run your sans-io code on a dedicated CPU thread and do IO from an async runtime. This introduces overhead that needs to be weighed against the amount of CPU work.
3. Have the sans-io code output something like `Output::DoHeavyCompute { .. }` and later feed the result back as `Input::HeavyComputeResult { .. }`, in the middle run the work on a thread pool.
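To make approach 3 a bit more concrete, here's a hedged sketch of the message shapes involved; the enum and field names are illustrative, not an existing API:

```rust
// The sans-io core never blocks: it emits a request for heavy work as an
// Output and later consumes the answer as an Input, so the caller is free
// to run the actual computation on a thread pool.
#[allow(dead_code)]
enum Output {
    DoHeavyCompute { job_id: u64, payload: Vec<u8> },
    // ... other protocol outputs (bytes to send, timers to arm, ...)
}

#[allow(dead_code)]
enum Input {
    HeavyComputeResult { job_id: u64, result: Vec<u8> },
    // ... other protocol inputs (bytes received, timer fired, ...)
}
```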
Thanks for sharing! Reading the articles, it looks to me like it's a kind of manual reimplementation of the state machine generated by async? This also makes the code harder to reason about. I am unsure if it is worth the complexity.
I somehow missed noticing that in C++, and I have no idea how it works in other domains.
My only gripe is that a lot of it is feeling a bit kick-starter-y, with each of the goals needing specific funding. Is that the best model we've found so far?
IMO the term "project goals" is quite misleading for what this actually is. A project goal is a system for one person (or a small group of people) to express that they'd like to work on something and ask for Rust project volunteers to commit ongoing time and effort to supporting them through code review, answering questions, etc. It doesn't mean that the Rust project itself has set the goal, or even necessarily endorsed it.
So it's not quite right to treat it as a formal roadmap for Rust, just a "there are some contributors interested in working on these areas".
There seems to be some consensus even within the C++ ISO committee that the evolution process of that language is somewhat broken, mostly due to its size and the way it is organized.
> My only gripe is that a lot of it is feeling a bit kick-starter-y, with each of the goals needing specific funding. Is that the best model we've found so far?
Sadly, this seems to be the way things go once a technology catches on, commercially. Can't blame large donors for sponsoring only the parts they are interested in. Fortunately, considerable funding of TweedeGolf comes from (Dutch) government, I think.
You can 'sell' new features. They cost money to create, but they solve real problems. Those problems also cost money and if that's more than the cost of creating the feature, companies are willing to put in money (generally).
Maintenance is harder. But there are now some maintainer funds! Like the one from RustNL: https://rustnl.org/maintainers/ These are broader ongoing work and backed by many orgs chipping in a little bit.
Idk if it's the best model, but at least it seems to kinda work
In my programming language I wrote a custom pass for inlining async function calls within other async functions. It generally works and allows removing some boilerplate, but it increases the resulting binary size a lot.
Technically Rust can do the same.
A lot of code doesn't follow these guidelines because the authors don't care about efficiency and don't need it. But there are numerous projects that care about performance and efficiency, and they realize the pitfalls once the code runs in production (ScyllaDB is one example).
LLMs don't help either, generating everything async up to main, using the wrong primitives, and not properly designing the system.
The examples in the blog seem too simple to draw any conclusions from.
So yes, it does really matter. Keep in mind that optimizations stack. We're preventing LLVM from doing its thing. So if we make the futures themselves smaller, LLVM will be able to optimize more. Small changes really compound.
async fn bar(input: u32) -> i32 {
let blah = input > 10; // Preamble
let result = foo(blah).await;
result * 2 // Postamble
}
> If only we were allowed to execute the code up to the first await point, then we could get rid of the Unresumed state. But "futures don't do anything unless polled" is guaranteed, so we can't change that.

Is that actually valid reasoning? If we know that foo(blah) doesn't do "anything" until polled, then why can't bar call foo without polling it before foo itself is polled? After all, there's no "anything" that will happen.
So if I call a foo() that violates the rule, it seems odd to complain that the generated bar() also violates it.
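For reference, a minimal self-contained illustration of the guarantee under discussion (the function name is made up):

```rust
async fn side_effect() {
    // This body only runs once the future is polled.
    println!("ran");
}

fn main() {
    let fut = side_effect(); // nothing printed: constructing the future is inert
    drop(fut);               // still nothing: the body never executed
}
```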
Whether your function has the `async` keyword attached or has a function argument of type `IO` doesn't really change anything substantial.
The whole "function color" argument seems pretty overblown to me. You can't call `foo(int, string)` if you don't have both an int and a string, so is it now a different "color" than the function `bar(int)`? If you want to call `foo` from `bar`, you have to somehow procure a string, and the same is true for `IO` in Zig, and the same is true for async in Rust, where what you have to procure is an async executor.
The `async` keyword can be seen as syntactic sugar for introducing a hidden function parameter (very literally, it's called `&mut std::task::Context`), as well as rewriting the function as a state machine.
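For the curious, here's a rough hand-written sketch of that desugaring for a trivial async fn; it's illustrative, not the exact code the compiler generates:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Roughly what `async fn answer() -> u32 { 42 }` becomes: a state-machine
// type whose `poll` receives the "hidden parameter" `&mut Context`.
struct Answer;

impl Future for Answer {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // A real async fn with .await points would have one state per
        // suspension; this trivial one completes immediately.
        Poll::Ready(42)
    }
}

fn answer() -> impl Future<Output = u32> {
    Answer
}
```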
Having to write copies of List.map and List.async_map in the stdlib is a smell, but the real cost is potentially having to duplicate every function in your code that calls either.
E.g, if you have the 'async' effect, List.map can work with async functions or synchronous ones, without modification. It's the caller's responsibility to provide that async handler/environment at whatever level of abstraction makes sense, instead of explicitly wiring IO or async all the way through for a function that may or may not need it. The compiler (or runtime, if necessary) will keep you from calling a function that requires the async effect if you don't have a handler for it.
I will note, though, that this particular example (iterating over a list asynchronously) is actually where you might really want to do something else, because async allows you to do something fundamentally different: run all in parallel.
Async really shines when you have code that does two or more things and you don’t care about the order in which they finish, and that isn’t really feasible without it.
I will take Go concurrency over rust async any day of the week.
Technically speaking Rust didn't have to use a keyword (and in fact didn't for quite some time between 1.0 and when async was added), but the ergonomics of the library-based keyword-less solutions was considered to be less than optimal compared to building in support to the language.
> This is why new languages should build in green threading from the start.
This, just like most other decisions one can make when designing a language, is a tradeoff. Green threads have their niceties for sure, but they also have drawbacks which made them a nonstarter for what the point in the design space the Rust devs were aiming for. In particular, the Rust devs wanted something that did not require overhead for FFI and also did not require foreign code to know that something async-related is involved. Green threads don't work here because they either have overhead when copying stuff between the green thread stack and the foreign stack or need foreign code to understand how to handle the green thread stack.
The problem is that "nearly every time" bit. There are times when you are looking at the code and you absolutely want to be aware of where the function is suspending. Similar to the use of ? in error handling to surface all fallible operations that might do an abnormal return.
Then you shouldn't be using a low-level systems language? You can simply choose a higher-level abstraction language that better matches your programming preferences.
Look at how the borrow checker works. Most of the time, the compiler can _suggest_ the fix... and instead of applying it silently, they want you to fix it.
This is the design choice they made.
there is the async book but it is largely unfinished
you can watch Jon Gjengset's Crust of Rust on async, Decrusting Tokio, and "The Why, What, and How of Pinning in Rust"
then there are tokio-lessons and tokio tutorial which teach how to use tokio runtime
and there are also good blog posts by phil-opp and Rose Wright on how async works
https://doc.rust-lang.org/book/ch17-00-async-await.html https://google.github.io/comprehensive-rust/concurrency/welc...
https://rust-lang.github.io/async-book/intro.html
https://youtu.be/ThjvMReOXYM https://youtu.be/o2ob8zkeq2s https://youtu.be/DkMwYxfSYNQ
https://github.com/freddiehaddad/tokio-lessons https://tokio.rs/tokio/tutorial
https://os.phil-opp.com/async-await/ https://dev.to/rosewrightdev/from-futures-to-runtimes-how-as...
Add the async keyword to functions, add .await when calling them, use tokio in your main function (easy to look up), and use the async recursion crate if you need recursion but don't want to box everything.
There are some bonuses like calling functions in parallel, but there you go.
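A minimal sketch of those steps (the function is made up; assumes tokio as a dependency with its macros enabled):

```rust
// Step 1: add the async keyword to functions (in real code this would do I/O).
async fn fetch_value() -> u32 {
    42
}

// Step 3: use tokio in your main function.
#[tokio::main]
async fn main() {
    // Step 2: add .await when calling async functions.
    let v = fetch_value().await;
    println!("{v}");
}
```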
can't for the love of dog parse the meaning of this - what do you mean? a callback that is async passed to a sync api? you refer to the complexity of sync<->async bridging? ...or?
The risk they took was very calculated. Unfortunately they’re bad at math and chose the wrong trade-offs.
Ah well. Shit happens.
They chose the exact same tradeoffs as C++'s async/await (and the same overall model as Python/NodeJS), so I'm not sure what that says about programming as a whole.
Not to mention Tokio (most popular runtime for Rust) is multi-threaded by default. So you have to deal with multithreading bugs as well as normal async ones. That is not the case with most async languages. For example both Python and NodeJS use a single thread to execute async code.
Python still has pluggable eventloops - this is sort of mandatory to interact with weird things like GUI toolkits, and Python's standard event loop was standardised pretty late in the game. Early on there was even an ecosystem split between Twisted and competing event loops implementations.
> For example both Python and NodeJS use a single thread to execute async code
I'd argue this is more a historical artefact of how the languages functioned before futures were introduced, rather than an inherent limitation.
Or you can schedule your thread-local tasks in a LocalSet to run them all on the owning thread, while keeping the other threads around to handle tasks that are intentionally parallel.
The general theme here is that tokio (and C++ equivalents) provide you the flexibility to do more things than the native Python/Node runtime does (and yes, the defaults take advantage of this). But the underlying intention is the same (and post-GIL we expect to see some movement in this direction on the Python front as well).
Source: am professional C++ developer
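A small sketch of the LocalSet pattern mentioned a couple of comments up, assuming tokio's `LocalSet` and `spawn_local`:

```rust
use std::rc::Rc;
use tokio::task::LocalSet;

#[tokio::main]
async fn main() {
    let local = LocalSet::new();
    local
        .run_until(async {
            // Rc is !Send, which is fine here: spawn_local keeps the task
            // on the thread that owns the LocalSet.
            let shared = Rc::new(42);
            let shared2 = Rc::clone(&shared);
            tokio::task::spawn_local(async move {
                println!("thread-local task sees {shared2}");
            })
            .await
            .unwrap();
        })
        .await;
}
```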
You could've deduced that from the fact that someone who puts this amount of energy in a detailed article about intricacies of an area of "foo", quite certainly does not "hate on foo".
The article is fine besides the bait title.
I don't know enough about the domain to be objectively helpful, so it's all wishy-washy feelings on my part. I keep reaching for orchestrating things with threads in Rust where most people would probably reach for async these days. The only language where I've felt fine embracing the blessed async system is Haskell and its green threads (which I understand come with their own host of problems).
I still don't have enough experience to have a strong opinion on Rust async, but some things did stand out.
On the good side, it's nice being able to have explicit runtimes. Instead of polluting the whole project to be async, you can do the opposite: be sync first and use the runtime at IO "edges". This was a great fit for a project that I'm working on, and it seems like a pretty similar strategy to what Zig is doing with IO code. This largely solved the function coloring problem in this particular case. Strict separation of IO-bound and CPU-bound code was a requirement regardless of the async stuff, so using the explicit IO runtime was natural.
On the bad side, it seems crazy to me how much the whole ecosystem depends on tokio. It's almost as if Java's GC were optional, but in practice everyone just used the same third-party GC runtime, and pulling in any library forced you to use that runtime. This sort of central dependency is simply not healthy.
The system requirements for an async runtime on a workstation processor compared to say, an RP2040 look very different. But given the ability to swap out the backend, when I write async IO code for a small ARM M0 microcontroller, that code looks almost identical to what I'd be writing outside that context, but with an embedded focused runtime, ie embassy.
I can focus less on the runtime specifics as they use the same traits and interfaces. Compare this with say, using a small RTOS or rolling your own async environment, it's quite nice.
Much of what I need to learn to write the async code in embassy can cross over to other domains.
But maybe my fears are unfounded.
Traits in the stdlib for common functionality like "spawn" (a task) and things like async timers. Then executors could implement those traits and libraries could be generic over them.
We could have something similar for a global async executor which can be overridden. Or maybe you launch your own executor at startup and register it with std, but after that almost all async spawn calls go through std.
And std should have a decent default executor, so if you don't care to customise it, everything should just work out of the box.
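A hedged sketch of the kind of std traits being imagined here; nothing like this exists in the standard library today, and the names are hypothetical:

```rust
use std::future::Future;
use std::pin::Pin;
use std::time::Duration;

// Hypothetical: an executor-agnostic spawn interface that std could define
// and runtimes (tokio, smol, embassy, ...) could implement.
pub trait Spawn {
    fn spawn(&self, fut: Pin<Box<dyn Future<Output = ()> + Send + 'static>>);
}

// Hypothetical: an async timer interface in the same spirit, so libraries
// could sleep without depending on a particular runtime.
pub trait Timer {
    fn sleep(&self, dur: Duration) -> Pin<Box<dyn Future<Output = ()> + Send + 'static>>;
}
```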
Yeah, I think that's the current status. I believe it was for a long time (and possibly still is) blocked on language improvements to async traits (which didn't exist at all until relatively recently and still don't support dyn trait).
This is true, but perhaps not uniquely so when compared to the platform dependence of the standard library already. File semantics, sync primitive guarantees and implementations, timers and timer resolutions, etc. have subtle differences between platforms that the Rust stdlib makes no further guarantees about.
How nice would it be if there were ReadAsync and WriteAsync traits in the standard library.
Right now, every executor (and the futures crate) implements their own and there are compat crates to bridge the gaps.
There are issues in particular with core traits for IO or Stream being defined in third-party libraries like tokio, futures or its variants. I've seen many cases where libraries have to reexport such types, but they are pinned to the version they have, so you can end up with multiple versions of basic async types in the same codebase that have the same name and are incompatible.
To spawn a future on tokio, it has to implement `Send`, because tokio is a work-stealing executor. That isn't the case for monoio or other non-work-stealing async executors, where tasks are pinned to the thread they are spawned on and so do not require `Send` or `Sync`, so you can use Rc/RefCell.
Moreover, the way that async executors schedule execution can be _different_. I have a small executor I made that is based on the runtime model of the JS event loop. It's single-threaded async, with explicit creation of new worker threads. That isn't a model that can "slot in" to a suite of traits that adequately represents the abstraction provided by tokio, because the abstraction of my executor and the way it schedules tasks are fundamentally different.
Any reasonably-usable abstraction for the concept of an async runtime would impose too many constraints on the implementation in the name of ensuring runtime-generic code can execute on any standard runtime. A Future, for better or worse, is a sufficiently minimal abstraction of async executability without assuming anything about how the polling/waking behavior is implemented.
Tokio's default executor is a work-stealing multi-threaded executor, but it also has a local executor and a current-thread executor, which can run !Send futures.
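A tiny illustration of that Send bound on the default multi-threaded runtime (the String version compiles; the commented-out Rc version would not):

```rust
#[tokio::main]
async fn main() {
    // Fine: this future is Send, so the work-stealing runtime may move it
    // between worker threads at any .await point.
    let owned = String::from("hello");
    tokio::spawn(async move { println!("{owned}") })
        .await
        .unwrap();

    // Rejected by the compiler, because Rc is !Send and tokio::spawn
    // requires the future to be Send:
    //
    // let rc = std::rc::Rc::new(1);
    // tokio::spawn(async move { println!("{rc}") });
}
```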
The hardest parts for me to grok really came down to lifetime memory management, for example a static/global dictionary as a cache, but being able to evict/recover entries from that dictionary for expired data... This is probably the use case that IMO is one of the least well documented, or at least lacking in discoverable tutorials etc.
As for slf4j, I still don't see any justification for an abstraction layer on top of logging. I never, ever migrated from one logger to another, and even if I did need to do it - it is very easy as most loggers are very similar. E.g. that's why I decided to use log4j2 in my latest project.
Web/server frameworks have to bind to a runtime because they have to make decisions about how to connect to a socket. Hyper is sufficiently abstract that it doesn't require any runtime, but using hyper directly provides no framework-like benefits and requires that you make those decisions and provide a compatible socket-like implementation for sending requests.