On our last product, we decided to start switching from TypeScript to Rust on the backend because we got tired of crashes. I consider that one of the greatest technical mistakes I've ever made, as our productivity slowed massively. I'll share just two time-draining issues that only occur in Rust: (1) Writing higher-order functions (e.g. a function to open a database connection, do something, and then close it -- yes, I know you can use RAII for this particular example), which is trivial in Haskell and TypeScript and JavaScript and C++ and PHP, turned out to be so impossible in Rust [even after asking Rust-expert friends for help] that I learned to just give up and never try, though it sometimes worked to write a macro instead. (2) It happened many times that I would attempt a refactoring, spend all day fixing type errors, finally get to the top-level file, get a type error that was actually caused somewhere else by basic parts of the design, and conclude that the entire refactoring I had attempted was impossible and everything had to be reverted.
On top of that, Rust is the only modern language I can name where using a value by its interface instead of its concrete type lies somewhere between advanced and impossible, depending on what exactly you're doing.
I came away concluding that application code (as opposed to systems or library code) should, to a first approximation, never be written in Rust.
For example, I use the deadpool-postgres crate for database pooling. Getting a connection looks like this:

```rust
let conn = self.pool.get().await?;
```
Because of RAII, you don't need a higher-order function helper, but if you really wanted to make one:

```rust
async fn with_conn<T>(
    &self,
    f: impl AsyncFnOnce(Object<Manager>) -> Result<T>,
) -> Result<T> {
    let conn = self.pool.get().await?;
    f(conn).await
}
```

Now you can do:

```rust
with_conn(|conn| async move {
    conn.query("SELECT 1") // Or whatever
}).await
```
If you know TypeScript, this shouldn't be too difficult to read or write. The gnarliest stuff here is knowing the type signature of the function argument; because of async, it must be AsyncFnOnce, for one, and you need to know that the type the deadpool crate returns is called Object<Manager> (which doesn't sound like a connection, to be fair). Determining the exact concrete type to satisfy type constraints is sometimes a chore, but TypeScript is surely no different here! If you don't know Rust too well, the "move" part will be a little mysterious, to be sure.
For this family of examples, I had been completely stymied by AsyncFnOnce not being released yet. IIRC it had been in the works for several years, was still an experimental feature when I was trying to use it, and I gave up after much frustration trying to get a version of Rust with experimental features working under devenv (nix).
A subtraction, then, from my frustrations with Rust -- though I'd still be very wary of doing this, having seen how fragile higher-order functions have been in the past.
```rust
f: impl FnOnce(Object<Manager>) -> impl Future<Output = Result<T>>
```

However, I share your conclusion: outside scenarios where having automated resource management as the main approach is either technically impossible, or a waste of time spent trying to change a pervasive culture, I don't see much need for Rust.
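As written, that signature won't quite compile, since nested `impl Trait` isn't accepted in argument position, but the same idea works with named generic parameters. A minimal runnable sketch, using hypothetical `Pool`/`Conn` stand-ins for deadpool's types and a toy executor in place of tokio (assumes Rust 1.85+ for `Waker::noop`):

```rust
use std::future::Future;
use std::task::{Context, Poll, Waker};

// Hypothetical stand-ins for deadpool's Pool and Object<Manager>;
// only the shape of the API matters for this sketch.
pub struct Conn;
impl Conn {
    pub async fn query(&self, _sql: &str) -> Result<i32, String> {
        Ok(1) // pretend this hit the database
    }
}
pub struct Pool;
impl Pool {
    pub async fn get(&self) -> Result<Conn, String> {
        Ok(Conn)
    }
}

// Stable-Rust spelling of the helper: a plain FnOnce returning a
// Future, with the future's type named as a generic parameter.
pub async fn with_conn<T, F, Fut>(pool: &Pool, f: F) -> Result<T, String>
where
    F: FnOnce(Conn) -> Fut,
    Fut: Future<Output = Result<T, String>>,
{
    let conn = pool.get().await?;
    f(conn).await
}

// Toy single-threaded executor so the sketch runs without tokio.
pub fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let mut cx = Context::from_waker(Waker::noop());
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}
```

Then `block_on(with_conn(&Pool, |conn| async move { conn.query("SELECT 1").await }))` yields `Ok(1)`.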
In fact, for those who write comments about wanting a Rust without the borrow checker: the answer already exists.
Way back as an undergrad in 2011, I contributed to Plaid, a JVM language whose main feature is based on affine and linear types. I'm one of the very few people in the world who knew what borrowing was before Rust had it. So I know first-hand that borrow-checking is perfectly compatible with garbage collection.
This is also not strange for those in the Rust community with type systems experience, hence the Roadmap 2026 proposals for a more ergonomic experience.
Thus we have Linear Haskell, Swift 6 ownership, D ownership, Koka, Hylo, Chapel, OxCaml, Scala Capabilities, Ada/SPARK proofs, Idris, F*, Dafny,....
Am familiar with Linear Haskell (and actually went on a walk through Tokyo with one of the authors just a few months ago). IIRC: still no resources allocated to add the things that would actually make it useful. Had not been aware of most of those, except the dependently-typed ones. Cool to know about the others. Yay linear types becoming known.
You seem interesting, and I'm curious about your background now. All I can find so far is that you're from Europe and used to lead C++ in some major corporation.
Just happen to be a nerd with interests across systems programming, languages, and graphics, who would rather read books and papers than watch dull TV shows.
Day job is boring enterprise consulting across the usual stacks you might imagine.
Here is the roadmap.
https://rust-lang.github.io/rust-project-goals/2026/goals.ht...
Have found in that roadmap things that would help me personally like making async functions dyn-compatible. Have not found the deeper stuff I thought you were hinting at.
Last year, I wanted to use the 5-line Result::flatten function, and then found it had been stuck in experimental... for 5 years. That left me with no confidence in the language's dev velocity.
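For reference, the function in question is small enough to hand-roll while waiting on stabilization; a sketch of the same behavior via `and_then`:

```rust
// Hand-rolled equivalent of Result::flatten: collapses a nested
// Result<Result<T, E>, E> into a Result<T, E>.
pub fn flatten<T, E>(r: Result<Result<T, E>, E>) -> Result<T, E> {
    r.and_then(|inner| inner)
}
```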
(2) In such situations the compiler (type system or borrow checker) is telling you that what you wanted to do has hidden bugs, and therefore refuses to compile. Usually that's a good thing.
(3) &dyn Trait
(2) No, it stems from a compiler limitation (imposed in large part by the need for static memory layout), not because there's anything intrinsically buggy about doing this.
(3) Look up "dyn-compatibility", for the largest, but not the only, problem with doing this.
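To make the terse pointer above concrete, here is a sketch (trait and type names are made up for illustration): the first trait can be used as `&dyn Trait`, while the second cannot, because its generic method has no single vtable entry:

```rust
use std::fmt::Display;

// Dyn-compatible: methods take &self and have no generic parameters.
trait Speak {
    fn speak(&self) -> String;
}

struct Dog;
impl Speak for Dog {
    fn speak(&self) -> String {
        "woof".to_string()
    }
}

// Using the value through its interface, not its concrete type.
fn announce(s: &dyn Speak) -> String {
    format!("it says: {}", s.speak())
}

// NOT dyn-compatible: a generic method would need one vtable slot
// per instantiation, so `&dyn SpeakTo` is rejected by the compiler.
trait SpeakTo {
    fn speak_to<A: Display>(&self, audience: A) -> String;
}
```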
Aside from having vibes of "I've chosen to get hit weekly in the face with a baseball bat, but have learned to like it, and so should you" it's also seldom true.
All three of these examples are also quite easy to do with C and C++. It's not about garbage collection.
Rust makes the tradeoffs explicit, and Rust programmers tend to obsess over minimizing those tradeoffs to get abstractions that are zero-cost. So doing it “the rust way” is often very complicated and tricky to get right while satisfying the borrow checker and type system, but once found is lean, fast, clean, and safe.
No, boxing everything does not magically make things more dyn-compatible. It will not magically solve the issue that tokio does a whole-program transformation that does its most restrictive checking only after all local checks have been resolved. It will not magically allow more reuse between datatypes. It will solve none of the problems I encountered... because if beginner-Rust could solve any of these problems, then they would have ceased to be problems for me by the time I became intermediate.
> Rust programmers tend to obsess over minimizing those tradeoffs to get abstractions that are zero-cost. So doing it “the rust way” is often very complicated and tricky to get right while satisfying the borrow checker and type system, but once found is lean, fast, clean, and safe.
You and I must be using very different definitions of "lean." For me, "complicated" and "lean" do not go together.
It's okay if your language has problems (I have plenty of criticisms of my favorite languages), but I find it odd and concerning how frequently I've seen Rust programmers try to deflect instead of engaging in criticism.
I actually have a huge systems programming background and identify as a systems programmer. C and C++ by and large do not have the problems I've written about. These things are Rust problems, not systems problems.
The only other language that I think gets close to Rust's ergonomics is Kotlin, but it suffers from having too many possibilities for abstractions.
In my line of work we don't do web servers from scratch; we use lego pieces, as with enterprise integrations.
Think Sitecore, Dynamics, SharePoint, Optimizely, Contentful, SAP, MongoDB, Stripe, PayPal, Adobe, SQL Server, Oracle, DB2, ...
Axum offers very little over existing .NET, Java, nodejs SDKs provided by those vendors.
For high-performance and high-reliability systems code, Rust is much more of a mixed bag. In a systems context it lacks the ability to easily and ergonomically express idiomatic constructs important for safety and performance that are trivial to express in e.g. C++. When you run into these cases it can get pretty ugly.
Most people don't write this kind of systems code. What most people call "systems code" is really more like low-level applications code, where Rust excels. It is software like highly-optimized kernel-bypass database engines and similar where the limitations start to show.
It’s my favorite language to write, and it gets much easier over time. As a first approximation, if you’re doing something and it feels insanely difficult like the GP is talking about, try to think of a different way to do it rather than fighting it. There’s usually a way to do almost anything, but it’s more pleasant to lean into the grooves the language pushes you towards.
or less resource-hungry software
The things I found quite difficult or impossible in Rust were, to me, such basic patterns for modularity and removing duplication that it's really shocking these complaints are not more common.
I currently have but two hypotheses for why.
First, the second problem I mentioned only comes from using tokio, which causes your top-level program to secretly be using a defunctionalized continuation data type, derived from where exactly in other files you put your await's, that might not be Send. If you're not using tokio, you won't experience that issue.
Second... I was kinda told to just give up on deduplication and have lots of copy+pasted code. This raises the very uncomfortable hypothesis that Rust aficionados are some combination of (a) people who came to Rust early, never learned traditional software design, and don't know what they're missing, and (b) people who were raised on traditional good software engineering but then got hit with Rust's metaphorical baseball bat of lack-of-modularity over and over until they got used to being hit with a baseball bat as a normal pain of life.
I don't like either of these explanations (esp. with tokio seeming quite dominant), so I'm awaiting an explanation that makes more sense. https://xkcd.com/3210/
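The mechanism behind that first hypothesis can be illustrated in a few lines (hypothetical names; a noop-waker helper stands in for tokio, assuming Rust 1.85+ for `Waker::noop`). Whether a value is live across an `.await` point determines whether it lands in the generated state machine, and therefore whether the future is Send:

```rust
use std::future::Future;
use std::rc::Rc;
use std::task::{Context, Poll, Waker};

// Stand-in for the Send bound that tokio::spawn imposes.
fn assert_send<F: Future + Send>(f: F) -> F {
    f
}

// The Rc lives only inside the inner block, which ends before the
// .await, so it is not stored in the future's state machine and the
// future stays Send. Hold the Rc across the .await instead and this
// stops compiling.
fn send_ok() -> impl Future<Output = usize> + Send {
    assert_send(async {
        let len = {
            let data = Rc::new(vec![1, 2, 3]);
            data.len()
        }; // Rc dropped here, before any await point
        std::future::ready(len).await
    })
}

// Poll the (immediately ready) future once to extract its value.
fn run_once<F: Future>(fut: F) -> Option<F::Output> {
    let mut fut = Box::pin(fut);
    let mut cx = Context::from_waker(Waker::noop());
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}
```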
This is definitely not the case and is unnecessarily insulting.
The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yay, everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
I know that it's insulting! And it doesn't make sense, because I generally think Rust programmers are smart people. But right now, it's the only explanation I've got, so it is alas necessarily insulting. So please, please, please give me a better explanation that actually makes sense.
> The truth is that some things are harder in Rust but a) often those things are best avoided anyway (e.g. callbacks), and b) it's worth the trade-off because of the other good things it allows.
This sounds like the seeds of a better explanation, but it needs a lot more to actually suffice. E.g.: why are callbacks best avoided anyway, when they're virtually required for a large number of important programming patterns? (In more technical language: they're effectively the only way to eliminate duplication in non-leaf-expressions. In even more technical language: they're the way to do second-order anti-unification.)
> Surely as a Haskell user of all things you must understand that sometimes making things harder is worth the trade-off. Yeay everything is pure! Great for many reasons. Now how do I add logging to this deeply nested function?
And this is a great illustration of the difference. First, you will seldom find Haskell programmers trying to argue that, actually, things like deeply-nested logging that everyone wants are actually "best avoided anyway." Second, you'll actually get a solution if you ask about them -- in this case, to either use MTL-style, to use a fixed alias for your monad stack, or that unsafePerformIO isn't actually that bad.
BTW, similar to my unpleasant conclusion for Rust above, I have another unpleasant conclusion for Haskell: Haskell is incredible for medium-sized programs, but it has its own missing modularity features that make it non-ideal for large programs (e.g.: >50k lines). But this is a much smaller problem than it sounds because Haskell is so compact that, while many projects can be huge, very few individual codebases will need to approach that size.
Look up "callback hell". Basically they encourage spaghetti.
> you'll actually get a solution if you ask about them
You got solutions to your problems didn't you? Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
If you were expecting Rust to work perfectly in every situation... well it doesn't. GUI programming in particular is still awkward, and async Rust has more footguns than anyone is happy with.
Despite that it's still probably the best language we have for a surprisingly large range of domains.
Ah. I think you're confusing the general idea of a callback with one particular style of use. "callback hell" refers to the deep indentation that occurs when trying to program in monadic style in languages without syntactic support for monads. It was mostly solved by adding async/await syntax, aka syntactic support for the continuation monad. "Callback hell" is not spaghetti in any deep sense, merely syntactically cumbersome.
But a "callback" is a more general term, sometimes a synonym for "function parameter," sometimes for more narrow kinds of function parameter (e.g.: void function, invokable once). Many people will refer to the function argument of the `map` function as a callback, but no-one would refer to that as "callback hell."
Callbacks are quite universal, and most uses do not lead at all to callback hell. I've engaged with this topic a little bit at https://us16.campaign-archive.com/?u=8b565c97b838125f69e75fb... , above the header "Serious Business."
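As a concrete baseline, here is the benign, universal form of a callback in Rust: a plain function parameter that factors out a repeated wrap-around pattern (the helper is made up for illustration, not from any library):

```rust
// A callback in the general sense: a function parameter used to
// factor out a repeated "do something before and after" pattern.
fn with_logging<T>(label: &str, f: impl FnOnce() -> T) -> T {
    println!("begin {label}");
    let result = f();
    println!("end {label}");
    result
}
```

Used as `with_logging("sum", || 1 + 2)`, which logs around the closure and returns its result; nothing here resembles callback hell.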
> You got solutions to your problems didn't you?
Mostly no :(
And when I did, I largely got it by figuring stuff out myself, while being told by multiple Rust experts that I either shouldn't care about the verbosity and lack of modularity, or that if I have a problem like "using the interface instead of the implementation" it must be because I'm a Haskeller.
Well, my ultimate solution was to start working on a new product, and to not use any Rust, except for some performance-heavy libraries. With the first product, the market had changed too much by the time we were ready for prime-time, and I'd put somewhere between 25% and 70% of the reason for that delay on our choice to start building new parts of the backend in Rust.
> Macros are a perfectly reasonable thing to use in Rust, even if they are best avoided where possible. Exactly like unsafePerformIO.
Good comparison!
> Despite that it's still probably the best language we have for a surprisingly large range of domains.
I agree with this. I just don't agree that that list of domains has a very large intersection with the set of applications.
FWIW I write primarily rust, and I do not agree with the advice given in your second point, so I’d take it with a grain of salt were I you.
Many things are plainly not permitted, either because the borrow-checker isn't clever enough, or the pattern is unsafe (without garbage collection and so on).
Many functional/Haskell patterns simply can not be translated directly to Rust.
A deeply-baked assumption of Rust is that your memory layout is static. Dynamic memory layout is perfectly compatible with manual memory management, but Rust does not readily support it because of its demands for static memory layout.
A very easy place to see this is the difference in decorator types between Rust and other languages like Java. Java's legacy File/reader API has you write things like `new PrintWriter(new BufferedWriter(new FileWriter("foo.txt")))`, where each layer adds some functionality to the base layer. The resulting value has principal type `PrintWriter` and can be used through the `Writer` interface.
The equivalent code in Rust would give you a value of type `PrintWriter<BufferedWriter<FileWriter>>` which can only be passed to functions that expect exactly that type and not, say, a `PrintWriter<BufferedWriter<StringStream>>`. You would solve this by using a template function that takes a `T where T: Writer` parameter and gets compiled separately for every use-site, thus contributing to Rust's infamous slow build times.
It would be perfectly sane, and desirable for application code, to be able to pass around a PrintWriter value as an owned pointer to a PrintWriter struct which contains an owned pointer to a BufferedWriter struct which contains an owned pointer to a FileWriter struct. You could even have each pointer actually be to a Writer value of unknown size, and thus recover modularity.
In Rust, there is sometimes a painful and very fragile way to do this: have each writer type contain a Box<dyn Writer>, effectively the same as the Java solution above. This works, except that, if one day you want to add a method to the Writer trait that breaks dyn-compatibility, then you will no longer be able to do this, and will need to rewrite all code that uses this type.
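The trade-off can be sketched with std::io::Write playing the role of the Writer interface (the PrefixWriter decorator is made up for illustration):

```rust
use std::io::{self, Write};

// A decorator in the Java style: each layer wraps another writer.
struct PrefixWriter<W> {
    inner: W,
}

impl<W: Write> Write for PrefixWriter<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.inner.write_all(b"> ")?;
        self.inner.write_all(buf)?;
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        self.inner.flush()
    }
}

// Type-erased use: one compiled function serves every writer stack,
// but only while the trait remains dyn-compatible.
fn greet(w: &mut dyn Write) -> io::Result<()> {
    w.write_all(b"hi")
}

// Statically typed stacking: the concrete type names every layer.
fn demo() -> Vec<u8> {
    let mut w = PrefixWriter { inner: Vec::new() };
    greet(&mut w).unwrap();
    w.inner // contains b"> hi"
}
```

Swapping `Vec::new()` for any other `Write` impl changes the concrete type of `w`, but `greet` keeps working unchanged, which is exactly the modularity the `&mut dyn Write` parameter buys.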
Mostly, this works out well enough: dyn compatibility pretty much just insists your methods can in fact work with just a reference to an unknown variant of the type.
But I see the forums (and I also tried some toy stuff at times) plagued with rigidity problems that have obvious solutions in C++.
For example, I am not going to fight the borrow-checker all the way up the stack to get a 0.0005% perf improvement, if any, when I can use smart pointers.
I am not going to use Result everywhere when I can throw an exception and be done with it, instead of refactoring all the way up the stack for the intermediate return types (though I use expected and optional and like them, but it is a choice depending on what I am doing).
I am not going to elaborate safe interfaces for my arrays of data I need to send to a GPU: there is no value in it, and I can get it wrong anyway; it is ceremony. I assume this kind of code is unsafe by nature.
I find C++ just more flexible. Yes, it has warts, but I use all warnings as errors, clang tidy and have a lot of flexibility. I use values to avoid any trace of dangling and when it is going to get bad, I can, most of the time, switch to smart pointers.
I really do not get why someone would use Rust except for very niche cases like needing absolutely no memory unsafety (but this is not free either, as some reports show: you need to be really careful about reviewing unsafe if your domain is unsafe by nature or uses bindings, to keep Rust's invariants -- or you write only safe code, in which case, if memory safety is critical, it does give you something).
But I do not see Rust good for writing general application code. At least not compared to well-written C++ nowadays.
In no way I am saying it is useless. I just see niche uses for it compared to alternatives.
Rigidity is a trade off: it can make initial development slower but refactors significantly easier, just as an example.
I don’t think any of your examples show it to be niche. It operates well in most of the space where C++ is a good option, and a bit beyond that (embedded, firmware, but also higher level things where you want performance but don’t want to worry about memory safety).
Well, at the cost of wearing a straitjacket. Result without the option of exception handling is an example. You need to refactor all the way up if you notice mid-refactor that you suddenly need a Result because a new error appears that could not happen before, or you need to preventively spam Result everywhere from the start. You need to handle those all the way up the stack. The borrow checker is also rigid. I do know why it exists. I understand its value. I am just talking about the toll it imposes while coding, and wondering if it is a good default (I think most of the time it is not, but when you need it, it is invaluable; however, those cases are a minority).
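For what it's worth, part of the "refactor all the way up" toll can be softened by the From conversions that `?` performs. A small sketch with a made-up application error type:

```rust
use std::num::ParseIntError;

// A hypothetical application error type. Adding a new variant plus a
// From impl is usually enough; call sites already using `?` are
// untouched.
#[derive(Debug, PartialEq)]
enum AppError {
    Parse,
}

impl From<ParseIntError> for AppError {
    fn from(_: ParseIntError) -> Self {
        AppError::Parse
    }
}

// `?` converts ParseIntError into AppError via the From impl above.
fn parse_and_double(s: &str) -> Result<i32, AppError> {
    let n: i32 = s.trim().parse()?;
    Ok(n * 2)
}
```

This doesn't eliminate the signature changes when a previously infallible function becomes fallible, but it does keep error-type growth local to one enum and its conversions.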
Another insight is that when you really go low-level, most of the time you are working with unsafe interfaces probably. At that time, you are using unsafe and now you have to satisfy Rust's borrow checker. How? By hand. So you lost part of the value proposition.
Can you recover it? Yes. How? By reviewing that code. But if I have to review that code anyway, what is gained by choosing a language (in this situation, I mean; there are situations where Rust is the better choice) over one where I can understand the invariants in unsafe code better, and where I have linters and a lot of established guidelines that are not difficult to follow? And by "not difficult to follow" I mean they are embedded in tooling like clang-tidy, not that I can follow them because I know a lot.
So for me it is not so obvious at all, especially in the presence of quite a few unsafe blocks. If you want it safe, at that time, you are starting to compete with other unsafe languages: you need human review anyways... if there is tooling in Rust for unsafe blocks (I can imagine there could be something), that improves things competitively for Rust in unsafe blocks. But if you need careful review, you are stuck again in the non-magical real world: things are safe if you checked absolutely everything.
> Rigidity is a trade off: it can make initial development slower but refactors significantly easier, just as an example.
That is certainly true. It is also true that in areas where you put this extra effort and quickly refactor, it makes things more difficult.
Refactoring, if you mix it with unsafe, needs a much more strict review than just pretending things are safe because you refactored and put things behind an unsafe interface and present it as safe.
I am not convinced at all this is what you need in most scenarios. The productivity impact is relatively high IMHO.
OTOH, if I really want correctness (real correctness!) but not absolute full speed, I think I can reach to Ocaml (very practical) or Haskell (this one is also a bit too rigid actually sometimes).
So I am left in a situation where Rust just seems to be appealing for places where the most absolute memory safety is needed. But memory safety is still a composed characteristic of a running program: you have to take into account unsafe interfaces, bindings, etc.
So the only way to get real safety is anyways to review everything (if that is what you really want to deliver), probably proving your code, which anyway requires human intervention. Did we ever (even if less often) see crashes for invariant violations in code advertised as safe in Rust? Certainly yes. I acknowledge it is usually an improvement, but still not a guarantee.
So if it is not a guarantee and I can reach other tools where anyway the guarantee is there through GC or other mechanisms and where it is not I am equal to Rust, then, why bother?
Probably the only place where I see Rust appealing is where you need both max. performance and absolute memory safety (but you will still need the kind of reviews I mentioned if you spam unsafe and interact with bindings anyway). Those are niche cases, not the norm.
I see like a suboptimal choice to write much of the application code in Rust, even when you need speed, compared to C++. C++ has very good tools for compile-time programming, expression templates, good warnings and linters, a big ecosystem and it is way more voluble (exceptions and results can be used, invariants in unsafe code are easier to follow since a borrow checker does not need to be satisfied "by hand").
So I am not sure at all Rust is the reply for a more or less mainstream general-purpose application language.
I think that Rust is valuable in things like OS hardened interfaces, etc. But even there CVEs were found! Right? https://news.ycombinator.com/item?id=46302621
There is no magic bullet here, but I do know that when coding in Rust, the productivity toll I am paying is not negligible and I can reach for tools and techniques that make me very close or equal to that productivity.
This was one of the example approaches I gave. This works...at first. The problem is that, if you want to add a new function to the Writer trait which makes Writer no longer dyn-compatible, such as, say, any async function, then you can no longer write `Box<dyn Writer>` and need to rewrite all code that uses it.
(although you can dig under the hood and specify a pinned-down Future type, covering one kind of awfulness with another)
Oh please. I came to Rust late, after using plenty of other languages. I have never had a problem with “modularity”. You express surprise that these issues haven’t been talked about more, well the null hypothesis is that you are just not very good at Rust and it hasn’t clicked for you - that’s fine, I doubt I would be very good at Haskell. But don’t insult us, and don’t assume your experience is de facto everyone’s and we’re just in denial. It’s incredibly arrogant.
Please respect the fact that I do not understand how one can like Rust without giving up caring about modularity.
I have pretty good reason for thinking this is a possibility. Directly, because the Rust experts I asked for help did in fact advocate giving up modularity. Indirectly, in that I've seen something similar but much stronger in a different language. That culminated in a conversation with several of the creators of that language, in which they argued their language was great for generic programming...while revealing some pretty basic holes in their understanding of generic programming (while calling me a "Haskell weenie," among other abuse). I am very confident in my conclusion about this other language community and not interested in getting into the details of this event. I am just sharing where I'm coming from, in not immediately dismissing this idea as too absurd to consider even when I have no other hypothesis.
I acknowledge that you feel personally insulted by me having a hypothesis that requires a large group of people behave in ways I consider strange. I have evidence for that hypothesis, and have been unable to find a better one. I hope you can see how the comment I am responding to crosses an extra line and is directly personally insulting.
I would very much like to come away from this discussion no longer believing that Rust programming is at odds with modularity. I have shared some fairly basic and detailed criticisms, the ones I still remember after a year out of the language. Perhaps you can play a role in offering solutions to them -- or admit that these are indeed problems you hadn't noticed before or had gotten used to.
I think they meant that in Haskell it is very easy to write externally unreadable code.
As a customer of Mercury, it's truly one of the critical companies in my toolkit, and I just can't help but feel that their choice of Haskell made their progress, development, and overall journey that much better. I realize that you can make this argument with most languages, and it's not to say that an FP lang like Haskell is a recipe for success, but this intentional decision, particularly pre-"vibe coding" and the LLM era, seems especially prescient, of course combined with the engineering culture that was detailed in the post.
I might further argue that the startup-y fintech culture led to good tech culture. The fact that they didn't start as a bank (as opposed to say SVB) means that they didn't have to be as conservative, or integrate with some horrific ancient tech stack.
I'm pleased they've had such success with Haskell, but much like Jane Street and OCaml, I think the language choice is almost accidental*, as much as the companies would like you to believe otherwise.
I would like to know however what they're doing for front-end. I would guess that all of this Haskell is back-end only.
*EDIT by "accidental" I mean to the business side. Jane St had some good trades, Mercury had great focus and execution. They also have some good tech :)
That was my sense reading the article - that the author would be running a successful engineering org using any language really.
In any case, I think the "Haskell tax" concept (where you can pay well-paid programmers less if you have a Haskell shop) is stale by now. Rust attracted away a lot of FP-ers, plus mainstream langs like C++, Java and even Typescript got smarter. Haskell's biggest problem by far is the tiny labor pool, which Mercury seems to wisely avoid.
Looks to me like you can build amazingly robust pieces of software with functional programming.
However, I am divided.
I have a backend that works in NiceGUI for a product. It does the job. The code is reasonable and MVVM. The most important task it does is connecting to a websocket per customer and consume data to present some analytics.
I will not have a great deal of customers, maybe in the tens or maximum hundreds visiting the website.
I also want a REPL and/or hot reload, but I am aware that as I grow features (user admin panels, more analytics, etc.), maybe functional programming can do a good job transforming data pipelines.
But Haskell and OCaml are static. I guess if I later want something that grows and scales and is still dynamic, Clojure or Elixir should be a good choice. But at the same time I am afraid that if at some point I need to refactor, things will go wrong.
Currently I use Python with Mypy. All is written in the backend: the frontend is generated by NiceGUI from the backend.
Haskell is so, so correct that it tends to get a bit in the way, and you tend to encode everything in the type system. This is a blessing for correctness and a curse for other stuff (tracing, debugging, adding side-effects).
This is the reason why I am looking at OCaml instead of Haskell: not so pure, more pragmatic, and it supports imperative programming well.
As I said, it is double-edged.
In functional/declarative style, you generally describe how things should be, not how things are made, and you let the language piece everything together to get the expected result in the end. It is all well and good (and even better) if you did everything right, but what if you didn't and you don't get the expected result? How do you find the bug?
In a language like C, it is relatively straightforward: go line by line, look at the execution state (the RAM, essentially) between each step and if it isn't as expected, something wrong must happen at that line, so you step in and progress like that. Harder to do when the language goes out of its way to hide the state from you, as it is the case for functional programming.
It is interesting that the longest section of the article is about this problem: "design for introspection", where the author has to go out of his way to make his code debuggable. A good insight on the often overlooked practical use of Haskell.
No other (mainstream) language comes close.
But what about situations where the code cannot be written in such a form (like shared memory concurrency)? I use transactions for that.
No other (mainstream) language comes close.
And that's without the low hanging fruit of no nulls, no implicit integer casts, etc.
It is absolutely true that debugging Haskell code is harder than debugging other languages. If you took away the bottom 90% of footguns, how could it not be?
There are interesting trade-offs in REPL-oriented debugging. One of the big things is that in a language like C you might often start first from whole program debugging and breakpoints to try to hit exactly where you think the problem is. In a REPL-oriented world you often try to build the components of your program in a way that you can test more units of it directly in the REPL.
Your module/API/type boundaries in a REPL world come to mirror your debuggability story. There is sometimes more pressure to get those right and easy to use than in imperative languages like C/C++, because you might want to reach for them directly in a REPL.
But yes, a tradeoff versus whole-program-first debugging is that it sometimes becomes harder to isolate complex integration issues between your units in strange real-world scenarios. However, that REPL-first approach often encourages minimizing your integration "surface" to a bare minimum, so FP languages often don't exhibit some of the same integration effects you see in imperative languages.
> Harder to do when the language goes out of its way to hide the state from you, as it is the case for functional programming.
Functional programming languages aren't really hiding any state from you. They also are running on imperative hardware and still dealing with real hardware states. At some point there is a translation between the "worlds" (which also likely aren't as different as you seem to think that they are). You still have those imperative breakpoints and imperative debuggers to fallback on.
That's why the term is "REPL-guided" debugging. You can use a REPL to pinpoint the problematic unit (the exact module/API/function) and the problematic input giving you the surprising output. If you can't see the bug in the source as written, you can still send it to an imperative debugger and watch nearly the same "line-by-line" experience, and hope it provides the missing context. Even better, by that point you probably don't need to choose good "breakpoints", because you've already isolated the problem enough in the REPL to have "natural breakpoints": the unit you are debugging may be small and narrow enough that stepping through just that unit is all you need.
> It is interesting that the longest section of the article is about this problem: "design for introspection", where the author has to go out of his way to make his code debuggable.
I think you took the wrong message from that section. That section wasn't about debuggability; it was about observability. It was about connecting logging/telemetry systems correctly, mocking fakes during testing, adding retries/circuit-breakers at a systematic/app-wide level rather than relying on individual libraries to get it right. In the imperative world these aren't debugging issues either: these are Dependency Injection issues. These are middleware-installation issues. These are factoring concerns, like using Abstract Interfaces over Concrete Classes at your public API boundary.
The design suggestions are factorings. They don't impact debuggability; they impact how easy it is to install observability middleware around someone else's public API.
> The problem is that we cannot trust code we cannot instrument. If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace except by squinting at it and developing theories. So we write our own. More work upfront, but the clients we write are observable by construction, because we built them that way from the start.
Given that tracing etc. is IO, are they just threading IO through the entirety of all their Haskell code?
In any case, anywhere they’re doing HTTP calls they are already threading IO, so they don’t have to pay an additional cost.
Some people call this "high-level," too.
I will say, though, that 2 million lines of code is much less code than it sounds like at first glance, especially for a company in a highly-regulated space like finance, plus a few years of progress.
If anything, having your entire company’s codebase be 2M loc and it be a functional product seems reasonably efficient to me.
Lisp is traditionally not so terse, but still expressive.
a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.
b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
Absolutely not an objective metric, but I have found that Haskell just has a different "aspect ratio". Line count may be somewhat lower, but the word count is largely the same as in more imperative OOP languages.
We didn't create many bugs, and usually functionality could be added very rapidly (e.g., we were the first to achieve a certain certification for hosting sensitive data on AWS).
Though occasionally functionality had to be added more slowly, because we had to write from scratch what would be an off-the-shelf component in a more popular platform. But once we did it, it worked, and we were back to our old velocity, and not slowed by the bloat and complexity of dozens of off-the-shelf frameworks. We could also adapt rapidly because we controlled a manageable platform, which is how we were able to move fast to AWS when there was a need.
The system also had some technical bits of architectural secret sauce from the start (for complex data, and Web interaction), which enabled a lot of rapid development of functionality, and also set the tone for later empowering smartness.
One difference with our system, from the Haskell fintech, was that our team size was very small (only 2-3 software engineers at a time, and someone who managed all the ops). So we didn't have the challenges of hundreds of people trying to coordinate and have a coherent system while getting their things done. Instead, there was usually one person doing more technical and architectural changes to the code, and a prolific other person doing huge amounts business logic functionality for complex processes.
With careful use of current/near-term LLM-ish AI tools, software development might find some related efficiencies of very small and incredibly effective teams. But the model that comes to mind is having a small number of very sharp thinkers keeping things on an empowering and manageable path -- not churning massive bloat to knock off story points and letting sustainability be someone else's problem.
The advantages to Haskell are theoretically obvious. The downsides are harder to intuit.
The temptation is to model _everything_ as types. The codebase itself becomes a _business specification_, not an application. Every policy change is a major refactor (some of which are shockingly high-touch thanks to Haskell's safety).
The lesson is you cannot have your cake and eat it too. Eventually you become trapped by your types.
Haskell is really impressive and powerful, perhaps especially at this scale. However it brings its own unique problems. The temptation to model business logic as types leads to rigid structures. And the safety these structures bring can blind you to other classes of risk.
I interned at Jane Street years ago and they seemed to do a great job of walking that line (in OCaml rather than Haskell, but same difference). They moved remarkably quickly despite working in an area with a lot of inherent complexity and where reliability and correctness are an existential concern to the business. (Which, perhaps surprisingly, is massively more the case for a trading firm than for a Mercury-like neobank...) In hindsight, a key thing Jane Street did was hire some experienced OCaml programmers with great taste (like Stephen Weeks, the author of MLton) and let them build the core libraries and guide the whole codebase from the beginning.
Unfortunately, this is one of the things that Mercury didn't do anywhere near as well.
Tbvh the biggest downside of a Turing complete type system is that you can theoretically implement an application that compiles to dust.
Mercury is pretty tame by comparison I’ll admit.
In languages with option types, if you want to weaken the type requirement for a function parameter, or strengthen the guarantee for a return type, you have to change the code at every call site. E.g, if you have a function which you can improve by changing
- a parameter Foo to Option<Foo> or
- a return value Option<Bar> to Bar
you would have to change the code at all call sites. Which could be anything between annoying and practically impossible.
In languages that solve null pointer errors instead with untagged union types (like TypeScript or Scala 3), this problem doesn't occur. So you can change
- a parameter Foo to Foo | Null or
- a return value Bar | Null to Bar
and all call sites of the function can remain unchanged, since the type system knows that weakening the type requirement for a parameter, or strengthening the promise for a return type, is a safe change that can't cause a type error.
So yes, option types do avoid null pointer exceptions, but they solve the issue in a very suboptimal way.
Actually, I think you can just change the concrete argument `Foo` to a type constraint in Haskell as well, using a type class. So the function would be something like `foo :: ToMaybeFoo a => a -> .. ->`. And you would implement a `ToMaybeFoo` instance for `Foo` and `Maybe Foo`.
Agree that this is more involved than typescript, but you get to keep `null` away from your code...
> but you get to keep `null` away from your code...
I don't think this would be desirable once we have eliminated null pointer exceptions with untagged unions.
It is quite simple. Instead of accepting a concrete type `Foo`, the function is changed to accept types that can be converted to `Option<Foo>`. Since both `Foo` and `Option<Foo>` can be converted to `Option<Foo>`, the existing call sites that pass `Foo` would not require changing.
If you were calling a function which might return null (String | Null), you will already have null handling at the call site, but if you now change that function such that it never returns null (String), you still have the (now unnecessary) null handling, but this doesn't hurt and you don't have to change anything at the call site.
Likewise, if you were passing a String to a function that doesn't accept null (String), the call site already made sure that the parameter isn't null, and if you change the function so that it does now accept null (String | Null), again nothing needs to be changed at the call site.
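Both directions described above can be sketched in TypeScript (the types and names here are purely illustrative):

```typescript
type Foo = { id: number };
type Bar = { name: string };

// Suppose the function originally had the signature
//   lookup(input: Foo): Bar | null
// and was later improved to accept Foo | null and always return Bar.
// Both changes only widen what is accepted and narrow what is returned,
// so every existing call site still type-checks unchanged.
function lookup(input: Foo | null): Bar {
  return { name: input ? `foo-${input.id}` : "default" };
}

// An old-style call site: passes a definite Foo and defensively
// handles a null result. Bar is assignable to Bar | null, and Foo is
// assignable to Foo | null, so nothing here needs to change.
const result: Bar | null = lookup({ id: 1 });
if (result !== null) {
  console.log(result.name); // now-unnecessary check, but harmless
}
```

With `Option<T>` instead of `T | null`, the same change would force every caller to wrap or unwrap explicitly, which is the asymmetry the comment above is pointing at.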
I must admit I’ve never had this problem in application development. In fact, I do want to change my callers because strengthening the contract is an opportunity to simplify the callsites - they no longer have to handle the optionality. The change might carry some semantic meaning too, why are you getting x instead of Maybe x all of the sudden? Are there some other things you should reconsider in the callers? I can see how it could be useful in library development, but there are also patterns to account for this that are idiomatic to Haskell.
I don't think Clojure has untagged union types like TypeScript or Scala.
> but null is a high price to pay for this convenience.
Why would it be? Untagged unions prevent null pointer errors just as much as option types do, only they don't have the discussed disadvantages of option types.
The null is a high price to pay because eventually someone will make some type assertion somewhere in the TS codebase that will end up biting you. Sure, you can be diligent, but will every contributor during the lifetime of a project be?
Not sure about Scala, but I did see NullPointerException every so often, and what is the practical advice to handle them in Scala? It’s to use Option[T]
Type assertions and untagged union types are entirely independent. Supporting untagged unions doesn't imply supporting type assertions, and not supporting untagged unions doesn't imply not supporting type assertions.
> Not sure about Scala, but I did see NullPointerException every so often, and what is the practical advice to handle them in Scala? It’s to use Option[T]
Scala only supports untagged unions since version 3, so that's probably the reason why they are not used everywhere yet.
That's literally what they explain in the rest of the comment.
In any case, the fact that the compiler knows what code needs to be updated is a real superpower.
I've been this person, and I've worked with this kind of person, and been the victim of this kind of person. They love language X, or framework Y, and are convinced that so many problems in front of them are shaped in a way that would be solved through the application of it.
They now have a hammer and they go searching for nails to hit with it.
I've been in shops that used Haskell, and it was... fine? It's I guess nice for people who enjoy writing in it -- I prefer other FP languages personally. I like nerdy things like that and used to hang out on Lambda the Ultimate or whatever. But I don't think there's any real secret powers in Haskell or most other tools. I've been burned too many times by that kind of approach.
it's so easy to scout when a company has this haskell philosophy. either by the interviewers themselves or by the bloggers they hired to guide their team.
the trick? i just..lie. "oh yeah i'm super pragmatic. i'm not hardline about haskell. i don't think you should be fancy." see how easy it is? i am suddenly hired and got a fat raise. and if the company moves off haskell? i quit immediately, get another haskell job, and talk to my former coworkers on the way out to embolden them to do the same.
it helps that i have the "real world" stuff on my resume.
i rode the 2010s job hopping ride as a haskeller doing this. each time a 20-30% raise. and i get to still write haskell. and i am always a top percentile haskeller at the company so i can code however tf i want lolol. suddenly - singletons, Generics, HKD!
so here's to earning another million bucks "noodling around with Haskell" :cheers:
Congrats I guess? Not sure where the abuse/guilt comes from.
I've made all my money over a decade in Haskell. Millions. Paid for all my stuff.
It all started with a recruiter on LinkedIn
Yup, I've seen too many of these engineers, trying to shoehorn a poorly thought-out half-working Haskell into every language they come in contact with :)
If only cross-compilation became easy, so that I could develop on my Apple-silicon Macs and deploy on x64/AMD Linux servers.
>statically linking Haskell binaries is quite a challenge
>build requirements really slow down the process. I have to use dockers to help cache dependencies and avoid recompiling things that have not changed, but it is still slow and puts out large binaries.
Also, the Docker-based deployment takes a lot of time, as it needs to recompile each module. While you can cache some of it, it's still slow.
Meanwhile with Go it's painless. And I am not the only one having this issue:
https://news.ycombinator.com/item?id=47957624#47972671
Such a shame: Haskell is a beautiful and performant language, yet builds are still slow.
>A couple million lines of Haskell, maintained by people who learned the language on the job, at a company that moves huge amounts of money? The conventional wisdom says this should be a disaster, but surprisingly, it isn't. The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five days,1 through regulatory examinations, through all the ordinary and extraordinary things that happen to a financial system at scale.
This one is quite telling. Do people have counter examples?
Obviously Mercury is successful, and obviously Haskell is how they did it. So it's essential to their success. Would it be instrumental to anyone else's anywhere else doing anything else? Can't possibly know, I don't think.
You can still compare lines of code and bug rate over the same period of time.
You can reason about frequency of particular types bugs, such as null pointers or overflow, or whether those bugs can occur at all.
[1] https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Being able to minimize boilerplate and have strong refactoring and bug resistant types is a huge edge.
The only problem is their ecosystems are limited so you might spend more time than you like implementing an API or binding a system library.
> This is not a complaint about volunteer maintainers. It is simply one of the ambient risks of building serious systems on a smaller ecosystem.
And so instead of paying the lib authors who already have domain expertise and know their codebase, they chose to rewrite it from scratch/fork without contributing back. So classic.
what does that mean?
> The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five day.
This is a strange way to describe a transaction processing system: by the total amount of money it processed. Reliability and scalability are not measured in O(n) dollars. In theory, $248bln or $2bln can be done in one transaction, although I know that is not the case in reality. It would be impressive to see the typical system-design properties: transactions per second, latency distribution, etc.
The Mercury site also looks way better than most other banks I have ever used (load speed is also very good.) On the danger of seeming like a shill (I'm not), I'm tempted to try them out.
I’ve been using Mercury for 5 years. In that time, I’ve been able to wire transfer money without having to worry it might disappear (functionally impossible at certain other banks), created hundreds of virtual debit cards each with their own limit and pulling from different accounts, created dozens of accounts (a “place to put money”) named by function (each of my household utilities gets its own account, with an automatic rule to pull in money whenever it gets paid out), and… well, I think that covers everything.
This has given me unprecedented insight into my financial life. I know exactly how much I spend on groceries, on each utility, and on entertainment. I can project ahead and get a burn rate for my household. And my ex wife uses it too, on the same login, which is as easy as “make an account named with her first name” and a corresponding virtual debit card.
I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
You have to pay for personal banking (a couple hundred a year iirc), but the business banking is free. If you want to try them out, you can start an LLC for a few dollars (at least in Missouri) and get overnight access to Mercury. All that’s required is your EIN.
They’ve been one of the single best products I’ve ever used. The sole wrinkle was when they canceled all their existing virtual cards due to reasons, which threw my recurring billing into chaos. But every great company is allowed at least one mega annoyance, and that one was a blip.
If you’re wondering whether to try them out, the answer is yes, and I’m excited for you to discover how cool it is. https://www.mercury.com
Very well could be true because I had no idea who or what they are.
Do they have strong low level automation support for the customer programmatically even for personal accounts? I use ledger for plaintext accounting for both personal and business and sync of data is slightly annoying, perhaps Mercury’s products solve that trivially?
I made this to solve it https://sras.me/accounts/
Feel free to use it as it stores data on your browser's local storage only. For syncing between devices, you would be able to use Google firebase's free tier and export your accounts (after compressing and encrypting) there and import from another device. Let me know if you want to try it..
You could try them out while still having your US Bank account.
I thought that was the whole point of banks - does money randomly disappear when people do wire transfers?
That’s how I found Mercury. I was looking for a bank that wasn’t amateur hour. Once I had an account the money showed up overnight.
I still don’t know if Nat even got his original $10k back. I hope so. But yes, the failure mode is very much “if your wire transfer details are wrong, the money is just gone”. Apparently.
It seems obvious to me, if you send money to the wrong account, it's not your money anymore.
Try a better programming language next time, dagnabbit!!!
(There will be downvotes I suppose. More lines of code the better?)
Building banking apps? Well, even if it's Haskell, the Haskellers were dreaming of GPU compiler jobs, not banking front ends. So you're probably down to literally 5 qualified people on earth who want your job.
But then 3 of those 5 don't want to relocate or have other operational desires that require you to re-think how you run your team, and 2 of those 5 believe so strongly in supply-demand that their salary should be 3x the industry average.
Many companies, including Jane Street, come to the same conclusion: If you really want developers of a niche language, you have to be very good at finding smart people who don't know the language and training them.
Haskell is admittedly, probably the most powerful widely (or even somewhat widely) used language for doing this, but this general pattern works really well in Rust and TypeScript too and is one of my very favorite tools for writing better code.
I also really like doing things like User -> LoggedInUser -> AccessControlledLoggedInUser to prevent the kind of really obvious AuthZ bugs people make in web applications time and time again.
I've found this pattern to be massively underutilized in industry.
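A minimal TypeScript sketch of that User -> LoggedInUser chain, using branded types (all names, checks, and the password are hypothetical): each privilege level is a distinct compile-time tag, so a handler that demands the stronger type cannot be handed a weaker one.

```typescript
type User = { name: string };
// Branded refinements: structurally a User, plus compile-time-only tags
// that never exist at runtime.
type LoggedInUser = User & { readonly __auth: "loggedIn" };
type AdminUser = LoggedInUser & { readonly __role: "admin" };

// The only way to obtain the refined types is through functions that
// perform the actual verification (stubbed out here for illustration).
function logIn(u: User, password: string): LoggedInUser | null {
  return password === "hunter2" ? (u as LoggedInUser) : null;
}
function requireAdmin(u: LoggedInUser): AdminUser | null {
  return u.name === "alice" ? (u as AdminUser) : null;
}

// Handlers state their requirement in the signature. Passing a bare
// User here is a compile error, not a forgotten runtime AuthZ check.
function deleteAccount(caller: AdminUser): string {
  return `deleted by ${caller.name}`;
}

const u = logIn({ name: "alice" }, "hunter2");
const admin = u && requireAdmin(u);
if (admin) console.log(deleteAccount(admin)); // deleted by alice
```

The runtime cost is zero: the brands are erased, and only the two check functions execute real code.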
Imagine you have to distinguish between unescaped and escaped strings for security purposes. Even with a dynamically typed language, you can keep escaped strings as an Escaped class, with escape(str)->Escaped and dangerouslyAssumeEscaped(str)->Escaped functions (or static methods). There's a performance cost to this, so that's a tradeoff you have to weigh, but it is possible.
Another way of doing this is Application Hungarian[1], though that relies on the programmer more than it does on the compiler.
[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
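The Escaped-class idea can be sketched in TypeScript (class and method names are illustrative, and the escaping rules are deliberately minimal). A private constructor means the only ways to obtain an Escaped are the two blessed entry points:

```typescript
class Escaped {
  // Private constructor: callers cannot forge an Escaped directly.
  private constructor(readonly value: string) {}

  // The safe path: actually escape the input.
  static escape(raw: string): Escaped {
    return new Escaped(
      raw.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
    );
  }

  // The audited unsafe path: the scary name stands out in code review.
  static dangerouslyAssumeEscaped(s: string): Escaped {
    return new Escaped(s);
  }
}

// Sinks accept only Escaped, never a bare string, so an unescaped
// string can't reach the output by accident.
function renderHtml(body: Escaped): string {
  return `<p>${body.value}</p>`;
}

console.log(renderHtml(Escaped.escape("<script>"))); // <p>&lt;script&gt;</p>
```

In a fully dynamic language the same shape works, but the "wrong argument type" mistake surfaces at runtime rather than at compile time, which is the performance/safety tradeoff the comment mentions.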
This just isn't true.
In any dynamic language you would not get these guarantees at compile time. You'd get random failures at runtime. That's not safety of any kind.
Also, part of the goal of languages like Haskell is that they help you think about your code before it runs. All of that is lost.
> Imagine you have to distinguish between unescaped and escaped strings for security purposes
That would be a nightmare in many languages. You'd have to rewrite large parts of the code to be compatible with one or both. And in many languages you'd have to duplicate your code entirely.
In other languages, the result would so ugly, you would never want to touch that code. Imagine doing this with say, templates in C++.
>There's a performance cost to this
There is no performance cost in Haskell! This is entirely undone by the compiler.
Also, because the compiler understands what's going on at a much higher level, you can do things like deriving code. You can say that your classified strings behave like your regular strings in most contexts, like say, they're the same for the purpose of printing but not for the purpose of equality, in one line.
That part is (de facto) required for dynamically typed languages, but not for statically typed ones where the newtype constructor/deconstructor can be elided at compile time. Rust and C++ especially both do the latter by having true value types available for wrappers that evaporate into zero extra machine code.
But then just this moment I wondered: do any major runtimes using models with no static type info manage to do full newtype elision in the JIT and only box on the deopt path? What about for models with some static type info but no value types, like Java? (Java's model would imply trickiness around mutability, but it might be possible to detect the easy cases still.) I don't remember any, but it could've shown up when I wasn't looking.
For a single value, it should reliably be free in any reasonable statically typed language that meet your other criteria.
For a collection, it may still be de facto required. Unwrapping a set of addresses into a set of strings takes unnecessary cycles, an unsafe coercion, sufficiently sophisticated affordances around coercion that it can be safe, or a smart enough optimiser. At least some static languages have at least one of these; I'm not sure all do, and certainly not all have always had.
As for other JVM languages like Kotlin and Scala, they have basically what "newtype" is, but it can only be completely erased in the byte code when they have a single field.
What I'm imagining for my curiosity about the dynamic case would look more like “JS/Lua/whatever engine detects that in frob(x) calls, x is always shaped like { foo: ‹string› } and its object identity is unused, so it replaces the calling convention for frob internally, then propagates that to any further callers”, and it might do the same thing when storing one of those in fields of other objects of known shapes, etc. until eventually it hits a boundary where the constraint isn't known to hold and has to be ready to materialize the wrapper object there.
Kotlin and Scala sound like they're doing the Rust/C++ thing at the bytecode level, if it's being “erased”, so just the static case again but with different concrete levels for machine vs language.
Haskell's type system controls effects available to the code. This can be used to implement programs adhering to specific formulas of Linear Temporal Logic or implementing a protocol specification, where operations on any given phase and side are restricted by type system.
I used Haskell's type system to prevent crossing of clock domains in the hardware description eDSL. Also, it was of great help in the CPU simulator description, fixing available commands and resources for different CPU models.
Even the logic of Rust's borrow checker was expressible in Haskell as early as February 2009 - there was HList, there was ParameterizedMonad and that's about what one needs for implementation of borrow checker.
This comes with some design drawbacks. I think Rust's borrow checker would be implementable but unreasonable in Haskell: Haskell already does lazy-evaluation on types to enable its arbitrary depth of type expressivity. But the borrow checker also wouldn't really make sense for Haskell because the default programming model uses a GC. I think Linear Haskell might be a kind of Rust-in-Haskell, though.
Again, caveat emptor.
In principle, the more developed the type system, the closer it comes to not distinguishing between types and values. The caveat is that its type inference gets worse.
So, in those more developed languages, you could have type-level proofs (guarantees) that your calculator produces correct results, as a proof, as theorems. That 2+3 will be 5, not as runtime test assertion, but as a theorem, that no other result is possible no matter what happens. Or that your parser will never fail on valid JSON etc. but nobody guarantees it's going to be a pleasant thing to write, maintain, and debug. And compile times will probably be terrible.
> The rank-2 type (that is, the type s is scoped within the parenthesis and can't escape) of runST ensures that the mutable references created inside the computation cannot escape due to being tagged with the type s. Internally, all sorts of imperative nonsense may occur. Externally, the function is pure. The world outside the boundary gets none of the mutation, only the result.
C does not have parametric polymorphism, nor rank-2 quantifications, so no, this cannot be done in C.
Regardless, you can also have some limited parametric polymorphism in C with macros. This is very poor, but parametric polymorphism in Rust is based on monomorphization so it is also quite poor. You can also have higher-order polymorphism in C but then you need to use subtype polymorphism.
You can do it in Assembly. That doesn't mean it's cost effective.
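For what it's worth, the rank-2 escape trick quoted above can be loosely approximated in TypeScript, whose generic function types give a limited form of higher-rank polymorphism. This is a toy sketch under that assumption (`STRef` and `runST` are hypothetical names here, not a real library, and the guarantee is far weaker than GHC's):

```typescript
// A toy mutable cell. The phantom brand S tags cells created inside
// one particular computation; it exists only at the type level.
interface STRef<S, A> {
  readonly _brand?: S;
  value: A;
}

// runST takes a computation that must work for *every* brand S
// (a rank-2 type). Because the result type T is fixed before S is
// chosen, T cannot mention S, so refs can't (easily) escape.
function runST<T>(comp: <S>(newRef: <A>(init: A) => STRef<S, A>) => T): T {
  return comp(<A>(init: A) => ({ value: init }));
}

// Internally imperative, externally pure: mutation stays inside.
const total = runST(<S>(newRef: <A>(init: A) => STRef<S, A>): number => {
  const acc = newRef(0);
  for (const n of [1, 2, 3, 4]) acc.value += n;
  return acc.value;
});
console.log(total); // 10
```

The point being debated stands, though: this relies on parametric polymorphism and quantifier scoping, neither of which C's type system has.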
The Confucian philosophy that people act like water coming down a mountain, seeking the path of least resistance, comes into play.
Haskell, OCaml, F#, and their ilk can yield beautiful natural domain languages where using the types wrong is cost prohibitive. In languages without those guarantees every developer needs discipline to avoid shortcuts, and review needs increase, and time-pressure discussions rehashed.
If a tool can help enforce some ways of doing things, or if it doesn't constrain people much, that has consequences for the type of work that gets done with them and the systems you encounter running out there that you might be invited or find the need to work with.
"I can do it" is exactly the wrong answer. "How can I guarantee that others will do it" is the point being made.
Some people, teams and orgs can benefit from it. "I don't need it" is missing the point. "Not everybody needs it" is missing the same point from a different direction.
And of course Rust and TypeScript were heavily influenced by Haskell... they just don't mention it and call things differently, to avoid the "monads are scary, I need to write a tutorial" effect. Though it's less about monads and more about things like type classes.
Imitation is the sincerest form of flattery.
Personally, never enjoyed Haskell's syntax (or lack of it) and tendency to overthinking. But I did enjoy SML/NJ and OCaml to some extent.
Rust's Wikipedia page says otherwise.
Haskell type classes are not classes (like Java or PHP classes); they are comparable to Rust traits -- which are different from PHP traits which are comparable to Java/C# interfaces (with default impls; if you just want contracts you have... PHP interfaces).
A fundamental difference is that you can instantiate/implement a type class (or Rust trait) for any* type, compared to interfaces where each class declares the interfaces it implements. You can therefore create generic (forall) instances, higher kinded type classes, etc.
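The "instance for any type" point can be illustrated in TypeScript by passing an explicit instance dictionary, a common encoding of type classes (all names here are illustrative):

```typescript
// The "class": a dictionary of operations, like `class Show a`.
interface Show<T> {
  show: (value: T) => string;
}

// An "instance" for a built-in type we cannot modify: with interfaces,
// number would have had to declare `implements Show` at its definition.
const showNumber: Show<number> = { show: n => n.toFixed(2) };

// A generic (forall) instance: Show<T[]> for *any* T that has Show<T>.
const showArray = <T>(inner: Show<T>): Show<T[]> => ({
  show: xs => "[" + xs.map(inner.show).join(", ") + "]",
});

// A constrained generic function, like `Show a => a -> String`.
function render<T>(dict: Show<T>, value: T): string {
  return dict.show(value);
}

console.log(render(showArray(showNumber), [1, 2.5])); // [1.00, 2.50]
```

The difference from Haskell or Rust is that the compiler does not find the dictionary for you; the caller threads it by hand. But it demonstrates the retroactive, forall-style instances that interface-based designs lack.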
Actually in modern Java you can simulate type classes approach with a mix of interfaces and default methods implementations.
In C#, you can get a more straightforward experience with the extension types introduced in C# 13.
Then we have yet another way to approach type classes in Scala, with traits and implicits.
And so on, as I haven't yet run out of examples.
Can you? The beauty of traits/type classes is that you can attach them to any type - in a world where 90% of the functionality of any piece of software is supplied by dependencies - external types which you cannot change - this is a vital feature.
It isn't pretty, but one can try to achieve a similar approach.
https://godbolt.org/z/TjPha3obs
Failing that, there are always Clojure, Kotlin and Scala on the JVM, which expose language features to achieve the same, which you naturally can mix and match with plain old Java.
I was responding to your original claim and I’m well aware of such facilities in both Kotlin and Scala - having used both extensively. I was genuinely curious if the latest Java was in the process of adding support for trait/typeclasses - so I don’t understand why you’d bother to reply with something that completely changes your original claim.
my experience is that ocaml is more powerful than rust for enforcing this sort of type safety, because you have gadts that give you more expressive power, and polymorphic variants and object types (record row types) that give you more convenience. and the module system and functors of course.
you also avoid some abstraction limitations/difficulties that come from the rust borrow checker for places where garbage collection is just fine
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
The point of using the type system to do something like distinguish between sanitized and unsanitized strings is specifically to prevent these kinds of security breaches.
Erlang was designed for traditional telecom, where reliability of connections was the biggest factor, not security. I fail to see how Erlang’s approach can deal with the issue of security breaches or corrupted user data.
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Edit: another thought I had is that type mistakes do not always cause crashes. Silent corruption can be much more insidious, e.g. from confusing types that mean something different but are identical at the primitive level (a string, number, or UUID).
The amount of error silencing (implementer error, but quite prevalent) I've seen in TypeScript codebases is horrifying. Essentially "try the happy path, catch everything else, and return a generic error": the result is mostly the same for the user, but night and day for me when I'm trying to fix it.
There are some expectations where that's a reasonable response to a violation, but there are many where the violation implies a bug elsewhere, and crashing the process does nothing to address it that wouldn't have been better accomplished with stronger compile-time checking.
This seems wrong; the type spelled `Symbol` refers to the boxed interface for symbols[0]. I suspect you meant to write `unique symbol` there, but it can't be used in that position.
I'm not sure if `NewType` in your comment is supposed to stand in for a specific newtype (in which case it probably doesn't need to be generic[1]) or if it's supposed to be a general-purpose type constructor for any newtype (in which case it should take a second type parameter to let me distinguish e.g. `EmailAddress` from `Password`[2]). The use of `unique symbol`s is also only really necessary if you want to keep the brand private to force users to go through a validation function or whatnot, otherwise you can just use string literal types.
I agree these incantations aren't big problems (it all falls out naturally from knowledge of TypeScript's type system, and can be abstracted away as per my comment in [2]), but the fact that you goofed in the very comment where you were trying to make that point is causing me to second-guess myself.
[0]: https://github.com/microsoft/TypeScript/blob/v6.0.3/src/lib/...
[1]: https://tsplay.dev/N7rvBw
[2]: https://tsplay.dev/Ndep0m
There are helper libraries to ease this (zod supports branded types, I think?), but I guess my general point is that while typescript might give you the ingredients you need to implement type safety in cases like this if you try really hard and remember all your rules everywhere, it doesn't come naturally so it's hard to maintain at scale.
I think the point still stands - is this really a big problem? I guess I couldn't recite the syntax from memory, because I usually use a utility type for this
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
You do not need Haskell for that; it works in Python too, e.g. via pydantic or attrs data classes.
In my backend system I represent users with different variant states, to make a lot of invalid states unrepresentable.
As for underutilization, I think only functional languages, Rust, and C++ support variants, and that might be one reason: people just make blobs of state and choose which fields to use, instead of encoding states and making some combinations unrepresentable. JavaScript, Java, C#, and Python do not have variant types, to the best of my knowledge. In OCaml and Haskell, with pattern matching, they are very natural. In Rust, with enums, same. In C++ they are so-so, but still usable compared to the languages that lack them.
In my load tests, since I launch thousands of clients, I even went with a Boost.MSM state machine to drive the test behavior. One state machine per user.
I believe `const` functions in Rust are actually guaranteed to be pure, though I haven't followed that feature closely and there may be nuances.
In most languages purity is a norm rather than enforced by static analysis. I definitely agree that it's much safer to assume that an arbitrary Haskell function is pure than it is to assume that of an arbitrary TypeScript function.