* explains the reasons (financials, AI enablement)
* talks about what folks who are leaving get in detail (first) and thanks them
* talks to the folks who are staying
Layoffs are hard, no doubt, and I am not sure he's making the right choice. I see plenty of doubt about some of the actions in other comments that echoes mine. I certainly wouldn't want to have 15 direct reports and also ship production code regularly. But as CEO, it's his job to make these kinds of choices.
The proof is in the pudding as they say. We'll see how Coinbase does with this new orientation in the next year or so and that will determine if this was a wise or foolish move. Is there a flood of talent leaving? Major breaches? Business as usual with better than expected profits?
Time will tell.
It's all lip service - either AI-generated or hand-written.
I don't think this is true. Humans typically prefer "thanks for the hard work, here's your severance" to "you suck, here's your severance, loser."
Humans like being treated with respect, and words are a big part of that. Money is nice, but it's not the only thing we care about.
I'm not convinced a polite but AI-written email hits the same note. At the very least it's unintentionally disrespectful, which is an odd middle ground: your boss doesn't care enough to write an email by hand, but also doesn't care enough to burn bridges and insult you.
There is ZERO CHANCE they used AI unintentionally.
> also doesn't care enough to burn bridges and insult you.
By actively using AI they are stating that you are so far beneath them that even a personal "eff you" is not worth the time. One would have to actively try to poke some personally hurtful areas to come off as more insulting than the use of AI.
There's a difference between your boss not caring about you (does any boss really care?) and your boss actively disliking you enough to call you a loser when they expect to gain nothing from it.
In the former case, disrespect is a side effect of laziness, while in the latter it is the whole point.
If you ask AI to generate a hundred different paragraphs and choose the one which best conveys what you actually feel and want to communicate, is it still a perfect nothing?
If you are bad enough with words that you can't write an authentic message, you are also bad enough with words that you won't understand the options with enough nuance to know what you are saying. The bot will put words in your mouth that aren't true.
It is generally better to write poorly and from the heart than to outsource your heart to a really big algorithm. What you accidentally say from the heart will still echo your thoughts, while the AI will not. ChatGPT can't suddenly remember the time when you and your wife went to the beach together and saw a penguin, and she was worried it wouldn't be able to reach the ocean, and then it was totally fine and she got embarrassed, but you felt really in love with her because she cared so much.
You do get how that's worse, right? The person would rather spend their time arguing with the clanker than thinking about the person and putting those thoughts into words, however unstructured they are.
So essentially you have three choices:
1. Spend time writing (or have written by a copywriter) in corporate fluff dialect, where the actual message is still understandable by all parties. At the cost of appearing tone deaf.
2. Spend time iterating with a bot that speaks some undefined sub-dialect of LLMinese, where the reception of the message is unknown. At the cost of appearing even more tone-deaf and insulting than a corporate cog.
3. Spend time restructuring the message in your genuine voice. At the cost of maybe being heard more harshly than intended.
I fail to see how option 2 can be perceived as anything but the worst, unless you assume that the target audience does not distinguish LLMinese from actual speech.
And yeah, I know my tone is harsh and appears to lack empathy, and I have only my writing skills and a lack of time to blame. That said, I won't be the one to throw it into an LLM for "refinement" - otherwise how would I improve? I'm not sure LLMs are to communication what forklifts are to lifting and moving stuff.
As a side note, the general advice regarding code review, in my experience, was not to take it personally, and it's kind of funny to me, for reasons I can't pinpoint, how people (like me) have started giving unsolicited advice or criticism about writing, when in actuality both code and writing reflect personally on the human on the other side of the screen.
Anyway, I pretty much went off on my own tangent here with an apparent lack of empathy to boot but if we end up disregarding such fundamental human skills then what's to stop us from becoming dunces in a few generations? Sure, I'll add another abstraction layer even if it has a lot in common with reading tea leaves because it's not like I manually flip switches to input a program but I'll try my best to keep my individuality where it matters to me, specifically when it comes to expressing myself.
Thank you for coming to my TED rant.
Example: A friend has died and consolation is given. No amount of consolation makes the death a good thing for you, but there is still a difference in how that consolation is presented to you.
The only talk that has real value is "Hey, hire this guy - he's excellent and did an incredible job!"
Which of course leads workers to treat their employers as disposable.
One group is the folks who are staying. They lose teammates, they have to restructure work, and they fear whether there will be another round soon that may hit them.
And then there are customers, investors, ... who need to be assured they are not dealing with a failing company.
Who actually is required?
.. fundamentally, it's only the person collecting payment.
People with dignity.
For sure this part screams LLM
Wow. That’s my cue to never use Coinbase again.
"We’re not building Skynet, we’re cutting costs and putting the survivors on prompt duty"
Anything in that format gives that AI feel
Is there a flood of talent leaving after this one? Major breaches? Only time will tell.
Buckle up, and don’t forget your pudding!”
Except for that tone-deaf part at the end, where right after he talks to the people who "will be leaving" (that is, the people getting kicked out), he says that Coinbase will be stronger and healthier for this. Which makes it hard not to draw the conclusion that the people "leaving" are part of the unhealth.
The CEO probably does not even think that, and just wants to reduce costs. But from what was written, the implications are decidedly suboptimal.
But the reality is that it's a standard MBA-driven "bottom x%" cull dressed up with some 4D chess strategy.
Is this code for "we're firing all the old people"? As I understand it, I can say I'll only hire proficient English speakers (a "bona fide occupational requirement"), but I can't say I'll only hire native speakers, as that would discriminate against various protected groups. This seems like the same thing—proficiency may be a bona fide requirement, but expecting they learned this year's workflow first is age discrimination.
I don't expect ethical conduct from crypto companies and will not be sad if they are sued into oblivion.
This sounds suboptimal to me - probably the kind of employee I would avoid for as long as possible.
I see AI-native as those who have embraced it, and are learning to leverage it appropriately.
Age-ism is reinforced by senior people resisting the notion that they need to change and adapt. I'm not like that (I'm 51). But I'm having a lot of tedious debates with people lately about how they don't want to use AI tools, how their job is somehow special so they can't use it, etc. Many of those people are actually quite a bit younger than me. There definitely is a pattern here of people that are a bit set in their ways not adapting and being a bit stubborn. Age-ism is unfair to people that are actually putting in the work to learn and adapt. But life is unfair.
Nobody actually has more than 6-12 months of experience with agentic coding tools at this point because the tools were pretty much unusable before then. I was using ChatGPT and a few other tools before that for occasionally copy pasting bits of code or figuring out bugs. But that's not really the same thing.
Half a year is not a huge gap to bridge if for whatever reason you are a bit behind on this. So, get on with it. It should not take you that long to catch up. Especially if you are a bit older, the best way to counter age-ism is showing that you have all the skills already.
It's always been that someone higher up the ranks wants meetings, training, or something dumb because his golf buddy sold him on Kafka support contracts in inappropriate situations, or an architect needs to shoehorn some tech in so they can have it in their designs, ready for their next job role. I probably spend more time in meetings than coding.
Why can't I have an AI that takes my meetings for me?
unfortunately you still have to show up to the meeting and engage with your friends and colleagues for half an hour.
Why am I not surprised.
The simple truth is that I had to constantly learn something new and this is how it is in this profession. We’ve been in the trenches and we did it over and over again.
Now I’m using AI full time, doing same thing I always did - shipping products.
Newcomers with their first set of skills don't understand the meta-responsibility in this field - it's never coding something; it's shipping products to solve business needs.
It is even more abstraction, even harder to follow the code I'm "writing" with AI.
Also I have a fear that if/when the AI tide recedes, I'll be the one caught with my pants down since I have been forced to vibe code the majority of my career. As opposed to greybeards who can fall back on their decades of knowledge.
The best complement to AI will be a human who is part architect (they know not to build the new system on lovable, and they understand the company's digital assets) and part business analyst (can communicate effectively and tease out and distill requirements from customer team).
That indicates someone who has top-notch communication skills and also quite a bit of experience, i.e., someone older.
Congratulations. But you completely missed my point. I didn't say old people can't be in tune with AI.
> I see AI-native as those who have embraced it
That's not what the word "native" means. In the human language situation I referred to, it's about the language you learned first. It's not a synonym of proficient or fluent. If you learned to code first without AI tools, you are not AI-native by any definition I would understand, no matter how good at using AI you may be.
It's not just "English-native" that makes me think they have this meaning in mind. It's also the term "digital native" that gets thrown around a lot and is absolutely about how old you are. https://en.wikipedia.org/wiki/Digital_native
Another somewhat reasonable interpretation occurred to me later: that they're using "AI-native" as a shorthand for "AI-native systems" aka systems designed with AI / to take advantage of AI from the start, and thus "AI-native talent" as a shorthand for "people talented in creating those systems", rather than the people themselves being AI-native. But again, given who said it, I'm not going to assume that's what they meant.
scoot's comment [1]: "I'm not sure exactly which children they're planning to replace all their staff with, nor how they plan to get around the child labour laws" sounds exactly right to me.
The term that best suits "people who embrace AI-assisted programming" is AI-first programmers, which is what they literally mean by the looks of it. Clearly, they just use what they think sounds cooler.
I'm not sure exactly which children they're planning to replace all their staff with, nor how they plan to get around the child labour laws.
I'm actually shocked that people could take my comment at face value and not realize it was obviously sarcastic. That is eye opening.
I can see how it was sarcastic on re-reading. It can be hard to tell online!
Sorry for the misunderstanding
{1} scottlamb: "I suspect their lofty stated goal of X is a lie, to disguise their true goal of Y, which is something common which companies find much easier and more-desirable."
{2} CityOfThrowaway: "You are wrong, because it's obvious that X is achievable... if you define 'native' in a certain way."
{3} Terr_: "Uh, what? That doesn't make sense. The feasibility of X isn't part of scottlamb's argument. Even if we assume X is possible, it isn't evidence they actually intend X over Y."
It's totally random to accuse them of using "AI-native" to fire old people.
1. What statistics support this assumption? (Either for Coinbase specifically, or "tech companies" in general.)
2. Nobody has to be a literal greybeard in order to be in the crosshairs of downsizing. Just look at Amazon's "make them quit before vesting finishes" pattern.
Either it's badly named or people are trying to be included (?).
Huh? If it came out this year then everybody had a chance to learn it this year?
You might assume they aren't going to be so stupid as to try to exclude everyone who isn't new to programming. I wouldn't. They're a crypto business.
See also "digital native", a popular term which is absolutely about growing up after the technology in question was ubiquitous. https://en.wikipedia.org/wiki/Digital_native
Is Brian here? Can he speak more to this? What exactly are non-technical people shipping to production?
I've got no position in Coinbase, but is that a wise thing to say as a public company? I'd be alarmed if I were a shareholder.
They hear this from the sellside, from activists, from the guys managing their private market allocations etc.
- big institutional allocators
- activists
- the sellside
- guys managing their private market allocations
Sounds tight I love the direction industry is heading lol.
Your support will be provided by an AI bot almost as smart as Clippy because it was trained on the marketer's corpus of emails.
They could even use one of many headless CMSs combined with a static generator. Claude Code in the hands of non-technical users deploying to prod regularly seems like one of the worst possible ways to do it (except for the "cool" value telling people about it).
At my company the internal devs don't even have access to wherever the company site is hosted, it's a WordPress CMS and marketing can make updates safely with a couple clicks and zero day-to-day development oversight required. IT just helps keep the box updated but otherwise it's entirely their own thing.
As difficult as it is to use CSS to centre a field, the stakes are in a different ball park.
[1] Of course permissions are such that the tools can't do anything that would damage any of the systems.
I'd love to hear more about the positive effects of designers and PMs using AI, especially more on the PM side, if you care to go into more detail
But also the type of investor who is into crypto in the first place will probably love this
Crypto bros :handshake: AI bros
As someone who lived through multiple rounds of layoffs at big tech companies this seemed quite generous.
I got laid off 3 years ago and got a mere 2 weeks + 1 month of COBRA. It was a tech company, but not a big one.
However, I don't think this is that unusual in SV layoff packages.
Either way, I'd still be shitting my pants. 16 weeks is not a lot of time to find another job in today's environment. I know devs who have been out of work for years and had to resort to stocking shelves at Home Depot to tread water.
Everyone should do their damnedest to get 6 months' worth of bills into savings. This should be easy for well-paid tech workers.
I've been making tech money ($200-250K) for about 5 years now, and my savings is enough that I could ride out a job loss for at least a full year with no change in lifestyle. With some minor belt tightening (I eat out WAYYYY too much), I could go 2 years before I had to start worrying.
If we are not employed, then we have N months until we are broke. This is true for what, 99.9% of us? Whether that N is a high or low number, the slope of the line is still downward and that makes it an emergency. Unless you are retired, and are hoping for N to be greater than your life expectancy.
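To make the N-months point concrete, the runway arithmetic is trivial; here's a sketch with hypothetical figures (the savings and monthly-burn numbers are made up for illustration, not anyone's real situation):

```python
def runway_months(savings: float, monthly_burn: float) -> float:
    """Months until broke with no income: N = savings / monthly burn."""
    return savings / monthly_burn

# Hypothetical figures: $60k saved, $5k/month in bills.
print(runway_months(60_000, 5_000))  # → 12.0
```

Whatever the inputs, the slope is still downward until income resumes - which is the point.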
I've done my share - after buying one smaller apartment some 12 years ago and paying all legal fees, taxes, and a full reconstruction, I was, overall, -1500 euros in net worth, and now with 2 parallel mortgages on my shoulders. I had to take a short-term employer's loan to get back into positive numbers (that loan would be conveniently ignored if I were fired/let go, so that has been my main motivation for taking it; otherwise it's a dumb move on its own).
Getting fired during that period, and maybe the next 6-12 months afterwards, would still be devastating for me; I don't have rich parents/family to fall back on, and smart, moral, hard-working folks didn't get paid well during socialism/communism. This is where rich kids have a massive non-obvious advantage - like, e.g., Gates, they can go take big risks that are not that big for them, and come crying to rich daddy if they screw up, or be a hero if lucky. Folks like me have to risk everything to even get the chance to play the game (which has its own risks, which luckily didn't materialize).
I see it even now with my colleagues - nobody would take any big risk; all are very risk-averse because they can afford to be. My risks, though, took me further than they managed to get with a massively better starting position. Sometimes, austerity is a great motivator.
But it was a temporary dip, and I had a bit of luck through it. To be in software engineering and have no long-term savings, that's... a bad life strategy in most cases.
> COBRA is the Consolidated Omnibus Budget Reconciliation Act. It gives workers and their families who lose their health benefits the right to choose to continue group health benefits provided by their group health plan for limited periods of time under certain circumstances such as voluntary or involuntary job loss, reduction in the hours worked, transition between jobs, death, divorce, and other life events. [0]
Americans are not remotely free.
While AI is likely a productivity boost, the underlying reason is not AI.
And something else I don't get about these AI related layoff announcements: if AI was a productivity boost wouldn't you hire more engineers and technical staff to capture the value? Or else you're basically saying "we're a tech company that has no idea what to do with more super-engineers".
They aren't saying that they don't know what to do with the AI productivity boost, but rather they think it worth taking a huge productivity hit right now so they can invest in the future. Whether their vision of the future is realistic...
Execution of unrelated ideas seems like a natural follow on, and having managed several such "labs" efforts, it's actually a good idea but it inevitably grinds up against the lack of will to continue investing in the face of headwinds, especially since the main business line is several orders of magnitude larger than anything labs can deliver in a foreseeable timeframe.
https://www.businessinsider.com/ai-isnt-killing-software-cod...
The only way I can rationalize that so many people refuse to believe this is happening is that they are on the seller side, not the buyer side, of engineering labor. This means they have blind spots regarding the buyer's view of the market (some sort of information asymmetry), and secondly, they exhibit cognitive dissonance to protect their self-esteem as sellers.
This is an interesting response when faced with concrete data that the buy-side of engineering is actively heating up in direct correlation with LLM adoption.
An alternative interpretation of your observation is that perhaps your company has particular traits that are helped more by LLMs than the average eng org. There's a growing SWE consensus that LLMs boost productivity by 10-20%. However, there are contributing factors that can make LLMs much more of a human replacement:
* Selling labour & services, rather than engineered software. ie an agency that builds customized versions of well-understood software, rather than net new capabilities.
* Selling software that has a low ceiling of complexity and a short half-life, such that LLMs can realistically architect & maintain it over its useful lifetime.
It is perfectly plausible that hiring would be even higher without LLMs, so that data is not proof of what the article claims it to be.
It would be slop, but the market would love it
They’ve added tokens and altcoins to the platform, but I don’t think that’s a particularly strong long-term bet.
The competition is also stiff with decades of experience and network effects
The truth is these crypto shops have a pretty poor reputation in the traditional finance industry. Nobody in trading tech goes to work for them unless they offer insane salaries, because they (we) know it's an unstable place to be.
The worst part of using something like Coinbase is having to do yet another bank transfer, waiting for it to clear, doing KYC/AML yet again, etc., for what, for most people, is just buying one or two assets (probably BTC or maybe ETH). Instead, just click buy in Robinhood or Schwab along with everything else.
A friend of mine works for one of the major crypto firms and they're starting to deploy algorithmic trading bots on their own exchange.
The spreads on these markets can be diabolical
If interest in tokens and altcoins wanes, Coinbase may be in a weak position.
If you look at Coinbase in 2020 they had roughly 1,200 employees. By 2022 they had roughly 4,500 employees.
They over-hired and now they are paring back; that's all this is.
It's because crypto goes in a cycle and now it's down. You should expect layoffs from them again in 2029/30.
It has poisoned more than one company (especially startups). It's the "go big or go home" mentality. The "the market is ours to take if we just put more fuel on this fire" mentality.
I was in a startup once (Reid was an investor). The CEOs bought into blitzscaling and told the whole company we were going to "blitzscale". They hired 2 directors (with 0 reports) and had ambitions of hiring hundreds of engineers. Then reality struck: there was no revenue and no path to revenue (because these were the early days of AI). The blitzscaling was "paused". The directors each had 1 EM reporting to them. You can imagine what happened in the months after that.
what a tone-deaf way to name a business. yuck.
I mean, I want to work... and I absolutely despise the push to keep dev wages down, even at higher levels. But the reality is, at least from my own experience, that most software orgs and projects are actually over-staffed and would operate better with fewer, more experienced staff. Rather than filling hundreds of butts in seats.
No, you didn't. You watched engineers use AI to ship in days something that looks like what used to take a team weeks. After enough rounds of feature evolution, you'll realise that what they actually shipped isn't at all the same. Anthropic's C compiler, which also seemed like a good start that would have taken people much longer to deliver, ended up being impossible to turn into something actually workable.
In a year or so, software developed by "AI-native talent who can manage fleets of agents to drive outsized impact" - which is another way of saying people who ship code they don't understand and therefore haven't fixed the architectural mistakes the agents make - will become impossible to evolve, and then things will get very interesting.
AI can help software developers in many ways, but not like that.
Except that he got good at his short game by the end. LLMs will get there sooner than we think.
I think LLMs are great, and I think people who can use them to get to the green in one and take it from there will soar, just like people who could identify a problem and solve it themselves did in the past.
We do this every day. I'm sorry to say, we are indeed shipping in days what used to take weeks.
I do systems programming. Before AI feature development roughly went like, design, implement, test, review with some back edges and a lot of time spent in test and review.
AI has made the implementation part much faster, at the cost of even more time spent testing and reviewing, though still an improvement overall.
We do not see the weeks to days improvement though. The bottleneck before was testing and reviewing, and they are even bigger bottlenecks now.
What kind of work do you do, and what kind of workflow were you using before and after AI to benefit so much?
I'll stop you right there. AI is not good at systems programming, it's good at CRUD web development, which is where most people are seeing the gains.
AI has solved simple CRUD, yes, but CRUD was easy before.
Now there may be an additional corner case, or 20, where it's still valid, but those are not your typical software engineering work.
I also have your experience: even a 100x code-delivery improvement would barely move the needle of project delivery in our place. Better, more automated integration and end-to-end functional tests which reflect real-world usage/data flows would actually make a much bigger difference; there's no reason to think LLMs couldn't deliver this in the near future.
For things like web frontends/backends, though, it works beautifully. I ship things in days that would take me weeks to write by hand, and I'm very fast at writing things by hand. The AI also ships many fewer bugs than our average senior programmer, though maybe not fewer bugs than our staff programmers.
The boost is for what are glorified CRUD apps, where it makes the tedious work 1000x faster. However, the choices it makes along the way quickly blow up without cleaning. Seniors know how to keep their workstation clean - or they should.
Maybe they're using AI for testing and reviewing more than you are, not just for coding?
In my experience, the generated code handles the happy path, but isn't great about edge cases or writing clean code, even with explicit instruction in the initial prompt.
We usually end up doing multiple iterations with what claude/codex output, pointing out issues, asking for changes, etc.
It's glue, custom business workflows, and basic web CRUD stuff. We build almost everything on Rails unless there's a critical reason not to (e.g., maintaining an existing system versus building from scratch.)
With very few exceptions our team composition is one senior engineer paired to a business. So we get to avoid a large amount of SDLC busywork which is inter-team communication. This leaves more time for client<->engineer communication which has a host of additional benefits. We also build with a "North Star" methodology which keeps everyone, including the client, laser focused on the work at hand.
To answer your final question about how we're benefiting so much from AI, I think it's primarily that we're leaning into it for implementation, testing, and review alike. I know it's a sin to let AI review AI, but... it works. I'm actively skeptical of it myself, but our error rate and rework rates don't lie.
And we've got clients in various stages of development and/or long-term support. It's not like we're just hammering a bunch of stuff out and then bouncing. Most of these are multi-year tightly-integrated projects with our clients and we don't see a lack of trust or frustration that you'd expect to see if you were shipping slop. Our Honeybadger errors typically stay at zero, our performance metrics are acceptable across the board, and most importantly our clients love the work we're doing.
I can't think of any other way to measure the quality of what we're doing. And by those metrics, AI has made us better, not worse.
I should write a blog post to outline more of this in detail.
Maybe they're using AI for testing and reviewing more than you are?
Obviously it's hard to measure this objectively, but I can't imagine having done this pre-AI with zero downtime and having replaced those SaaS applications in that timeframe.
(Not the exact same chart but similar idea, I guess it's sort of a meme: https://imgur.com/a/YrNGYOR)
So I looked at the most recent CC release notes on Github and the majority look like this:
- Fixed /clear not resetting the terminal tab title after a conversation
- Fixed session title chip from /rename disappearing while a permission or other dialog is active
- Fixed agent panel below the prompt being hidden when subagents are running (regression in 2.1.122)
- Fixed external-editor handoff (Ctrl+G) blanking the conversation history above the prompt
- Fixed /context dumping its rendered ASCII visualization grid into the conversation, wasting ~1.6k tokens per call
- Fixed OAuth refresh race after wake-from-sleep that could log out all running sessions
- Fixed 1-hour prompt cache TTL being silently downgraded to 5 minutes
- Fixed cache-miss warning appearing spuriously after /clear or compaction when changing /effort or /model
I'd be extremely interested to know what percentage of these were just fixing last week's Claude Code-written PR that no human ever set eyes on. But hey, all that churn looks great on charts being circulated on social media as free advertising for their flagship product (and consequently the company's valuation), so never mind, LGTM!
I have an example in my line of work. Full service rewrite in a new language. Would have taken forever without AI. AI makes it easier, faster. The service has better throughput, uses less machines. Having a complete full test harness that allows us to ensure we are meeting all the functionality of the previous service is key. AND we are keeping the old service on standby because we know we don't know what might be wrong with the new one.
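For what it's worth, the "complete full test harness" pattern described above can be sketched very simply (the service functions and request shapes below are hypothetical stand-ins, not the actual system): replay the same inputs through the old and the new implementation and diff the results.

```python
def old_service(req):
    # Stand-in for the legacy implementation being replaced.
    return {"total": sum(req["items"]), "currency": "EUR"}

def new_service(req):
    # Stand-in for the AI-assisted rewrite in the new language.
    return {"total": sum(req["items"]), "currency": "EUR"}

def parity_check(cases):
    """Replay each request through both services; collect any mismatches."""
    mismatches = []
    for case in cases:
        old, new = old_service(case), new_service(case)
        if old != new:
            mismatches.append((case, old, new))
    return mismatches

cases = [{"items": [1, 2, 3]}, {"items": []}]
assert parity_check(cases) == []  # the rewrite matches the legacy output
```

Keeping the old service on standby, as described, is the safety net for whatever this kind of harness can't cover.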
What's your example?
> Our projects are closed source due to our clients owning the code, but I can offer anecdote. We have a client whose business operates on 2-3 very niche SaaS applications in the veterinary/animal medicine space. In a span of about 6 months, we completely ripped out 2 of those 3 and are working on replacing the 3rd one right now. We've done this with a single senior engineer working with the client between 20-40 hours per week with no major regressions. The business has been able to continue working as usual with no disruptions throughout this process.
> Obviously it's hard to measure this objectively, but I can't imagine having done this pre-AI with zero downtime and having replaced those SaaS applications in that timeframe.
The difference between "it's working now" and "it will continue working in two years" is exactly the problem with AI-generated code, because the tests can't tell you that, and you don't know which one you have if you don't look really carefully.
Because of that, we don't typically spend a lot of time on accessibility because it's internal facing software. As far as I'm aware, these businesses don't have individuals who need those accommodations. Of course, if that changed, it is something we'd need to consider.
> We do this every day. I'm sorry to say, we are indeed shipping in days what used to take weeks.
I've been searching for months for evidence of this kinda thing. Do you have receipts you can share? Or is it more of the same "just trust me bro"?
Of course, it's not just shipping, it's shipping stably in a way that doesn't disrupt the day-to-day operations of the businesses we're working for. One client that comes to mind has 2-3 niche SaaS applications that they used independently for various workloads. We completely replaced 2 of those without any disruptions to their business in about 6 months (no, we did not replace it feature-for-feature; we just built what they needed.)
Is there some other metric I should be measuring our code by?
There are strengths, but if you think it's writing a stream of code that you can just use as is, I would LOVE to compete against you.
Most devs aren't working on cutting-edge, low-level, mission-critical systems. AI is great for everything else. Every company I personally know has been fast-shipping features that are being used daily by millions of people for the past 7 months.
We have the same thing on my team, and we also understand the limitations of AI-generated code. If you're more or less experienced, you can easily see the "good" and "bad" sides of it. So you kinda plan it out in a way that lets you "evolve AI-generated software". I wouldn't have said the same thing in January 2025, but these are much different times. Things are already working.
If you're truly "managing fleets of agents" there's no way you're able to sift through the good and the bad in the output. If your AI-generated code is evolvable (which is hard to tell right now) then you're not writing it with "fleets of agents". If you are writing it with fleets of agents, I would bet it's not evolvable; you just haven't reached the breaking point yet.
If you review the code and tell the agent to revert when it gets things wrong (not functionally but architecturally) you're fine. That's not what I was responding to.
If you aren't, it's a skill issue on your part
I was saying how much more productive LLMs make developers unless you use them in the way Armstrong advocates. Coding agents are amazingly helpful but not when you use them through "fleets" or "swarms". People who know how to be most productive with coding agents know that, but Armstrong doesn't.
Isn't this wrong? I thought engineered systems meant something designed with limits.
I have literally built and shipped multiple things that would have taken me many many months to do and I’ve done it in under a week.
Many of these are LLM-heavy features where the LLM can literally self-evaluate and self-optimize. I start with a general feature; it will generate adversarial, synthetic data, build the feature, optimize it, then figure out new places to improve. A year ago, this would have taken an entire team months to do; now it's 2 or 3 days of work.
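For what it's worth, the generate/evaluate/improve loop described above can be sketched in a few lines. This is a hypothetical illustration, not the poster's actual pipeline; `call_llm` is a stand-in stub for whatever model API you'd really use.

```python
# Hypothetical sketch of a self-evaluating feature loop.
# call_llm is a stub so the control flow runs without any model API.

def call_llm(prompt: str) -> str:
    # A real implementation would call a model API here.
    return f"response to: {prompt}"

def build_feature(spec: str, rounds: int = 3) -> list[str]:
    """Draft a feature, then repeatedly generate adversarial data,
    critique the draft against it, and revise."""
    history = []
    draft = call_llm(f"Implement feature: {spec}")
    for _ in range(rounds):
        cases = call_llm(f"Generate adversarial test data for: {spec}")
        critique = call_llm(f"Evaluate this draft against {cases}:\n{draft}")
        draft = call_llm(f"Revise the draft given this critique:\n{critique}")
        history.append(draft)
    return history

drafts = build_feature("parse ISO-8601 dates", rounds=2)
print(len(drafts))  # 2 revision rounds -> 2 drafts kept
```

The interesting design question is the stopping condition: a fixed round count as above, or stopping when the critique step reports no remaining failures.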
I have experienced areas where high productivity can be had without much loss in quality. So I can believe it. But it really depends on what you’re doing and I firmly believe many companies will run out of easy stuff that we can blaze through with AI fairly quickly. At least that’s where we seem to be heading
Then, about people using high-level languages like C.
Then, about people using C++.
Then, about people using "toy"/"scripting" languages like PHP and Python.
About people who use ORMs instead of writing SQL directly.
About people who use JavaScript ("not a real programming language" was the dis).
People used to argue how it was the mark of a tourist to use anything more visual than Emacs.
This slight won't stick, nobody cares, and it might end up sounding stupid later. You can't usefully insult a professional engineer in 2026 by pointing out that they haven't memorized ASCII or the Arm instruction set.
Look at the best models from Spring 2025, and compare with now (and similarly for Springs 2024 and 2025). Armstrong and lots of others are betting that this trend will continue, and if it does, the LLMs will ship code the LLMs understand, and whether any human specifically understands any particular part will mostly not matter.
I find this particularly funny. There were more than a couple Star Trek episodes where some alien planet depends on some advanced AI or other technology that they no longer understand, and it turns out the AI is actually slowly killing them, making them sterile, etc. (e.g. https://en.wikipedia.org/wiki/When_the_Bough_Breaks_(Star_Tr... )
Sure, Star Trek is fiction, but "humans rely on a technology that they forget how to make" is a pretty recurrent theme in human history. The FOGBANK saga was pretty recent: https://en.wikipedia.org/wiki/Fogbank
It just amazes me that people think "Sure, this AI generated code is kinda broken now, but all we need is just more AI code to fix it at some unknowable point in the future because humans won't be able to understand it!"
Not sure that's going to age well.
This is not even limited to code. I've seen people justifying AI datacenters using fossil fuels because AI will solve fusion power plants at some unknowable point in the future.
The problem is that executives could take the 15-20% productivity boost and be content, but they read stuff like this, get greedy, and they don't understand the risk they're taking.
If the average programmer is this bad, then there must be better-than-average programmers reviewing the code. The problem with agents is that they can produce code at a far higher volume than the average programmer.
Anyway, I don't know how well the average programmer programs, but if you commit agent-generated code without careful review, your codebase will be cooked in a year or two.
This is how I feel. It’s building things for me that work. I don’t care how it works under the hood in many cases.
Just a minute ago 5.5 looked at some human-written code of mine from last year and while it was making the changes I asked for it determined the existing code was too brittle (it was) and rewrote it better. It didn't mention this in its summary at the end, I only know because I often watch the thinking output as it goes past before it hides it all behind a pop-open.
I also find I need to run an LLM code review or two against any code it produces to even get to the point where it's ready for human review.
In any case they served as an extremely valuable tool.
Geeks who never even stood near professional sports should really shut up about anything sport-related, lol.
I would really like to see a professional, established coach running around with young prodigies at the peak of their biology.
> - AI-native pods: We’ll be concentrating around AI-native talent who can manage fleets of agents to drive outsized impact. We’ll also be experimenting with reduced pod sizes, including “one person teams” with engineers, designers, and product managers all in one role.
And AI clowns will cheer and applaud this, not seeing that they're now doing the job of 5(!) people with the same salary. Why is nobody talking about this?
Also, I find it really bizarre that these neo-feudal lords see their companies as just livestock to count. They don't even count people; they just see them as numbers to reduce or scale up. Modern tsardom, but instead of being tied by official decree, you're now tied by your lifestyle and family.
"Some of you may die, but that is a sacrifice I am willing to make"
Player-coach used to be a thing in professional sports a long, long time ago. There's a reason you don't have it anymore. A coach can't be expected to take the long-term view while also expecting to contribute. Most examples were players near the end of their career and they didn't tend to do very well.
The only place you see it is in fun adult leagues. Perhaps the message then is that Coinbase wants to be less professional and more amateur-like?
Actually, these scenarios happen in hockey as well. Teams will pick up character guys who have been through it all who are expected to contribute more off ice than on it. Corey Perry is one who comes to mind lately but they're never given a "coach" title. It's entirely possible though that these players may be expected to be a go-between guy between the coach and younger players to help them manage the pressure or to help with encouragement. They're definitely not getting prime minutes though.
I guess that would be roughly the expectation of a manager who still codes. I can't see them doing anything critical. It's likely picking up some minor bugs or nice-to-have, low-priority feature work. I was a manager before, and while I never reached 15 reports, I was up to 12 at one time. There's just really none of the focus time you need for coding. Maybe that's a bit different with AI, but even then you still need to find time to make changes and validate, and that's time taken away from higher-impact things you could be doing for the team.
In the end, everyone is replaceable. But a king is a bit more difficult to replace, as historically shown.
With very rare exceptions, professional athletes are just not as good athletically at 40/50 as they were at 20. They may be smarter in some ways--which maybe means they'd be better as coaches.
I'm not sure this carries over well to engineering unless you mean that the young people are willing to grind for a lot more hours on nights and weekends.
Not sure the focus should be on athletic sports. Chess is a better analogy for software, I think.
When building software, if you can state an unambiguous goal and what rules apply you are more than halfway done. It's not uncommon to work on something for a year and discover you have been building the wrong thing. Navigating that ambiguity is where all the value in software engineering is.
He won the 2004 Euro Championship, the 2005 FIFA Beach Soccer World Cup along with a number of top 4 places over his 15 years as player and/or coach.
But managers should mostly be about two things IMHO:
> Facilitating for ICs.
> COACHING. To elevate ICs and help propagate the desired "culture".
There's a reason for this change. As players became elite and specialized by position, the budget for specialization expanded. At the top, teams could afford a distinct role focused purely on coaching. Since the stakes are really high (the difference between 1-3 points is measured in dozens of millions of dollars of impact due to relegation, a concept missing from most US elite sports), the drive toward specialization is sky-high at elite levels.
Thus, soccer player-coaches have mostly disappeared at the elite level. But the role is alive and well in the semipro tiers.
In roles where there's no binary, extreme outcome from specialization, like in semipro soccer or an engineering role at a random company, it is only natural to have someone wear multiple hats and not specialize.
The payoff to being elite at a valuable skill is enormous. Teams generally benefit more from combining players with distinct, elite strengths than from relying on broad generalists who are not truly elite at anything.
This isn’t always possible if you can’t afford to build a team of specialists, or those specialists don't exist at your level of competition. But if you have the resources and coordination (and in sports, the roster depth and cap space) to cover each specialist’s weaknesses, specialization is pretty much always the stronger composition.
And I don't think they're trying this thing that Coinbase is trying either.
“We at the coding company LovelyBeeBunny should be like the samurai of old, willing to draw our swords and die for the emperor…” etc. And it is always riddled with complete misunderstanding of the analogous subject, whether sports, history, or warfare.
When I grew up those were the very definition of "not girly". Our math and comp sci faculties at uni would bend over backwards for any of the girl students.
I would agree though that academics in general were "not manly" and at school at least streams of "academic" or "sporty" existed. For boys anyway.
For the girls (less fascinated by sports) the top sporties were often top academics as well.
History has shown that being academic is always better than being sporty (if you have to pick one). The "status" given to sports is often an acknowledgment that it's a poor financial path, but we can offer "status" instead.
Yes, sports metaphors can be amusing, but it's the winners we're smiling at.
> Also, I find it really bizarre that those neo feudal lords see their companies as just a life stock to count. They don't even count people, just see them as numbers to reduce/scale up. Modern tsardom, but instead of being tied via official decree you're now tied by your lifestyle and family.
People don't work somewhere like Coinbase if they're concerned about morality or mitigating the harms done to society.
The GP post describes a common problem in _most_ workplaces in the market today. It’s not specific to crypto, AI, or anything in between.
It is not specific to a crypto company. But the fact that it is a crypto company cannot be ignored. Crypto companies are not like ordinary businesses; they have very unique qualities, as does the crypto industry as a whole. Ever been to a crypto conference, for example? I've read about them and seen the videos. These things have the highest concentration of scammers and the gullible anywhere.
Actually, it sounds like you’re the one who hasn’t been to a crypto conference :)
The crypto market winter that started in Q4 last year led to Coinbase's ~worst quarter ever ($667M loss). Crypto has not recovered. Coinbase has done nothing to stem the outflows. That same quarter HOOD showed a net profit of $605M; and showed a $346M profit last week. COIN and HOOD are two very similar companies.
COIN's earnings are in two days. They preceded the earnings call with layoffs, which is always a bad sign. And HOOD's net income has dropped by something like 40%, though they're still at least profitable. You should be prepared for COIN to announce a similar drop; except COIN wasn't even profitable before. It's going to be a bloodbath.
Edit: it’s because the loss is an accounting loss due to mark to market adjustment, while the company is operationally profitable.
I assume that's still not great, but not nearly as dire as the reported loss suggests, and not a sign of a dying company.
The macro is not great right now. The world economy is on a razor's edge. If things unwind, we could all be in for a world of economic hurt. There aren't many levers to pull us out this time around, either.
Crypto is in an even worse state. Investors want liquidity for the uncertainty. Plus there's the looming Q-day that keeps getting pushed earlier and earlier by the experts while we're also inching nearer and nearer on the clock.
This cycle is about max extraction and fraud, legitimized by the presidential family cashing out billions in meme coins, insider trading, and forks of existing protocols.
Hacks have also been hitting hard. North Korea has stolen 500m this year alone and 2b last year.
So… no thriving. Quite the opposite. Dying is the more appropriate word at this time. Some would call this an opportunity. I see more pain ahead.
No wonder Coinbase is laying off people with the excuse of AI. The reality is that volume is zero. At this stage only me and a bunch of other retail weirdos keep on buying bitcoin paycheck by paycheck…
Crypto volume comes from institutional liquidity, not retail. All of that liquidity has moved from crypto to AI. It turns out that the liquidity wasn't actually interested in the technology or the philosophy; they were interested in outsized ROI. Think of BTC not as a currency, but as a share of stock in the crypto technology sector.
That's the problem with building your castle on quicksand, on fundamentals that aren't in the same order of magnitude as the market cap you command. When all you truly offer is gambling, eventually a shinier casino will open up and eat your lunch.
I'm reminded of when I went out for drinks with a startup-consultant friend and she mentioned that one founder she spoke with referred to his staff as "biological units" when addressing the use of proceeds to hire additional staff.
A company _is_ the sum of its people and their talents, aligned behind a mission statement.
This is so misguided, I can't help but think this "biological unit" of a founder won't last long.
This is a really strange nit. You are aware it's an analogy about skill and role. To reduce this to being about biology and the impacts of senescence on ability is weird, and doesn't really apply here.
E.g. you can't just spew nonsense like "let's work together like a bee hive, everything for the Queen/CEO, no matter the personal cost to an individual" without others pointing out the stupidity of comparing humans with bees.
You can't just come up with a desirable adjective and start coming up with random scenarios in which those characteristics may occur. "Let's make the company strong as a gorilla, big as an elephant, smart as Von Neumann, bright as a Sun, as courageous as young guys from youtube fails compilations." This makes no sense whatsoever.
Sure, there are good player-coaches, but there are also great pure leaders. There are also very bad player-coaches. A coach who tries too hard, and goes too deep, to be a player when they are less "fit" (or skilled) has historically led to many problems in many cases.
There's not much equivalent to "fit" here, just skill, and they decided they don't want the pure leaders, they want ones that are knuckle deep in the sausage.
Good decision or not, that very basic analogy is completely fine.
Like the guy who "just gets math" is often NOT a good teacher.
https://en.wikipedia.org/wiki/Unionization_in_the_tech_secto...
The benefits of unionization extend beyond this particular situation or company.
They can help shift the balance of power back to the employee and help them guard against being squeezed by their employer to produce more or take on more work for less benefits or compensation.
American tech workers have been fortunate to avoid such aggressive practices, but working conditions will only deteriorate from here, with workers crushed between LLMs and offshoring.
F these leaders.
And then this person leaves, leaving no documentation or workflow. That's ok though, another ai agent will pick up right back and add slop on top of that until the codebase is a black box interacting with another black box.
Oh and this company handles other people's money? That's going to end well.
Well today is your lucky day!
https://en.wikipedia.org/wiki/List_of_NBA_player-coaches
https://en.wikipedia.org/wiki/List_of_Major_League_Baseball_...
https://en.wikipedia.org/wiki/Pete_Rose
https://en.wikipedia.org/wiki/Player-coach#Player-coaches_in...
"Though primarily known as a dominant forward "Mr. Hockey" for the Detroit Red Wings, he came out of retirement in 1973 at age 45 to play with his sons and took on coaching responsibilities with Houston."[1]
[1] Gordie Howe, playing on the same NHL team as his two sons.
Reggie Dunlop is ready for duty, he'll get the job done.
An experienced, high-IQ player in a team sport could also be considered a player-coach. Players like LeBron James or Nikola Jokic come to mind.
Bill Russell is (was) the guy you’re looking for and he is arguably the greatest basketball player of all time.
The CEO is looking at revenue and at costs. He can see what will happen if current burn rate isn’t reduced. Doesn’t it come (in part) to numbers, which must be reduced/scaled as needed? (Along with other costs)
Do they not see that this will drastically change their lives for the worse? I'm in Europe, none of them has ever earned "fuck you" money.
Exactly. People are too naive these days
The Marxist view of everything valuable being a product of a person's labor is tired and debunked.
That could be an incentive to keep companies small, but high-scale companies do have unique benefits to society.
This is absolutely not true. It never has been at any point in history. Not even CEOs would claim such a thing until the 1980s, and they were wrong then as now.
Even today, Costco and other businesses are thriving.
Stop drinking the Koolaid.
sounds stupid to me
Not by me. I know you'll go out of business if you pay employees 2.5x your competition.
It will even turn out ok if the other 4 people find new work that pays the same. But if everyone fires 4 out of 5 employees because they're focused on "run my business more efficiently" to the exclusion of everything else...it's not going to end well for any society.
For example, the last obvious inefficiency I remember was sysadmins: the most worthless, self-aggrandizing group of people at any company. They got mostly wiped out (the best now work for the cloud engineering companies), and I think it was for the better!
engineers today handle deployments, and it is far better.
Too bad AI is not about efficiency. It's about headcount reduction, which is exactly what Coinbase is doing here. AI just gives them plausible cover.
Feels like a problem that will solve itself. There are more cars today than people ever had horses.
I’ve worked with many mids but most people were really good. They’re all even better now.
In both technical and non technical roles.
I think people who are average skill at their jobs are about to be rocked if I’m honest.
I don't think anyone is applauding this. The only people applauding stuff like this are the CEOs of Anthropic (because that means more tokens/profit). Most other CEOs in big tech have toned down the rhetoric big-time.
The job of 5 people being done for the same salary is a function of the job market. It's an employer's market now, so stuff like this happens. If you had an employee's market, this wouldn't happen.
fwiw - and this is a separate topic. If health insurance were de-linked from employment most people would flee the job market on their own.
That would be visible in all major markets outside of the US, no?
Experimenting or cost-cutting? Are these one-person "teams" going to be paid more for having multi-domain roles, regardless of how fast AI can churn out pseudo-MVPs?
We're going to see this become a trend beyond Coinbase, IMO. The idea that companies just want employees to be more productive is a farce. The C-suite would prefer to make no profit, have few to no employees, and get personally richer in the process.
Plenty of us here can conceive, design, architect, build, ship and own things from soup to nuts, and feel a lot more invested in the result as a consequence.
If the compensation is good, and it feels less shackled and less bureaucratic, is that necessarily a bad thing?
Many founders recycle into tech jobs after they discover exactly why failure rates of startups are so brutal. Apparently 15-25% of employees aged 30–39 at major SV companies have a failed or acquihired startup in their history. Golden handcuffs can appear very pretty after you've missed out on striking gold by yourself.
However, I understand the rationale, as the money was not flowing in fast enough.
---- edit ----
When I read about "AI-native talent who can manage fleets of agents", I want to shout: hire me. I will tell you why this won't work.
Let OP make his “hire me and I’ll tell you why your AI first approach is bunk” market.
---- edit ----
TBH I will post an article; I'm finishing it. But it won't be so doomy, focusing rather on what to avoid so you don't fail.
If engineers already know up front, with clarity, what they need to build, and leadership is very focused and concentrates resources on doing a few things, then increasing the rate at which LOC is written is not beneficial, because getting the product built right is what matters.
I'm beginning to realise that people who are too concentrated on one dimension (e.g. software engineering) can't see how things actually fit together. You only know what you know, I guess, but it's blindingly obvious to me.
What's the theory on this? It seems to be common conclusion, but I don't understand why AI changes the situation here.
I understand that AI means you can do more with fewer people. Fewer people means less coordination overhead and fewer managers and fewer layers. What I don't get is why you want your managers to be doing IC work more so with AI than before. I don't see why anything changes about needing roughly 1 first line manager for every 6-8 people, or why it would be more beneficial now that the managers have production programming responsibilities.
Both before and after AI it's important that managers have real technical knowledge of the codebase. Having managers do actual production IC work in my experience has been a bad allocation of resources, though, and I don't see why AI changes that.
(a) Someone has to do the management tasks. Why do we think that isn't a full time job anymore?
(b) When managers do production IC work, in my experience it increases the load on ICs in review, because the manager one would _expect_ to not be _as_ expert as pure ICs on the codebase, and yet they are perceived as "senior". ICs then have overhead in having to manage that power imbalance in review. I have known a few extremely productive manager/ICs… but the effect on their teams was not super great. It made the manager into something of a micromanager and the actual ICs lacked autonomy.
This is going to end poorly for them. The only good managers I've had over around 20 years in the industry were 100% people managers and had no IC type of role expectations.
I've personally walked away from multiple manager role interview loops when I ask about the split only to find that they expected managers to also take on partial roles with IC engineering work. I know I can't be effective in either when having to juggle two entirely different hats, and in my anecdotal experience I've never seen anyone else do it well either.
Crypto was a big hype of last decade.
Every year that goes by there are fewer people interested in an old hype, and therefore a smaller and smaller market for coinbase.
Coinbase is on a path to death. It might take 20 years, but the decline has already begun.
Or maybe they have to start designing shoes first, IDK.
What happens when this person inevitably leaves and they have no one who knows even a little bit about the process or tools used?
The extreme being people that produce only one report a month and that more than justifies their income + bonus.
I would forget half the processes I use if I didn't document them all religiously. The benefit now is that I can save myself significant time by having an LLM help me write the docs.
/s
As someone who did have 15 direct reports for a while, it’s a joke.
You basically are their manager in name only. Your time is so split you can't give any one direct report the attention they deserve. Quarterly and annual reviews are a farce because you genuinely don't really know how people are doing except through the signals you can pick up when you're not in a meeting with one of your 15 reports.
Just goes to show how far up their own asses some CEOs are. Meanwhile real people just want a boss who cares. Hope Brian feels happier with an extra billion dollars or whatever this year!
> You basically are their manager in name only. Your time is so split you can’t give any one direct reports the attention they deserve. Quarterly and annual reviews are a farce because you genuinely don’t really know how people are doing except the signals you can receive when you’re not in a meeting with one of your 15 reports.
Don't forget "No pure managers". So, it's 15+ direct reports while also being "a strong and active individual contributor".
With the amount of tech leaders blabbering about this, I came to the conclusion that the profession of the future is going to be Security Engineer.
It almost makes me wish there were legal requirements to provide proof backing up the stated reason. It doesn't need to be an actually good or noble one, just in the sense that accurate information is being put into the world. I imagine this could be sold as part of financial transparency laws.
Because as of now, it really seems like companies are using AI as a cover to fire people.
Presumably investors and those shorting the company would benefit from more accurate information about a company. So the market as a whole would be healthier and less prone to inflationary claims.
I also don’t think that excuse would really hold up under scrutiny: “we fired 14% of our workforce to maximize shareholder value” isn’t exactly a straightforward answer. Right now the answer seems to be latching onto whatever’s trendy and blaming the layoffs on that.
If there is an expectation that reasons will be investigated, then I think you’d just get more accurate information in the market, tldr.
"We took out some huge debt and need to pay it off asap so ...."
"I made a strategic mistake, so ...."
"I'm hoping to get a huge rise in the stock price and make money off it somehow so ...."
I'm just joking but I think the point is that the smug person doing the firing wants to make themselves look good rather than bad and HAS to try to make the company look good to shareholders even if it's not.
OP is discussing firings.
And yeah, there's crossover but they're not 1 to 1. At the same time, if a company is taking two people of equal position and firing one, or keeping the other, the honesty in how they came to that conclusion through transparency has value. Was the decision one of seniority? Performance? Geographical relevance? Was it favoritism masked in another reason? The person receiving the pink slip deserves to know the truth, especially in cases where legal matters could be of question where a company may say one thing, but be acting on another.
Laughs in labour protections.
In many countries (the vast majority of developed countries, and plenty of developing ones), you can't lay off employees for any reason, and reasons can be scrutinised and sued over.
E.g. in France, I can be fired for performance after I've been written up and given an opportunity to improve, or fired immediately if I steal money or harass someone at work. But my employer cannot invent a reason. If the reason they want to let me go is that they're going through economic headwinds, or they no longer need my position, they have to document that and give me the opportunity to find another job within the company if possible; and if they're lying (e.g. immediate replacement with someone younger and cheaper), I can sue with almost guaranteed success.
Boy that's scary for a company that's effectively fintech...
The question remains, if there are no pure managers, then is this CSM / Sales shipping production code? If yes, then it's indeed scary...
> No pure managers: Every leader at Coinbase must also be a strong and active individual contributor. Managers should be like player-coaches, getting their hands dirty alongside their teams.
YMMV, I suppose, but this combined with the AI nonsense just makes the dislike even stronger.
I noticed it was especially bad for on-call and incident response; these managers get pulled in to all the incidents because of their status and supposed involvement, but are not particularly useful in those rooms, adding even more cooks to the already crowded kitchen.
This went on for about a year, getting worse each week, before I left.
Knowing what you don't know and knowing how to get qualified information from people around you makes up for a lot of not having a programming background.
If anything, the managers with technical backgrounds who weren't active programmers tended to significantly underestimate the difficulty of doing something because back in their day, things were different or some such nonsense.
It can certainly overlap with what makes a great engineer, but not most of the time.
This has always been the case where I work, long before AI.
And surely the place you work hired with this in mind. Many places have not, and yet now expect PMs who haven’t coded in years, or in many cases not at all, to contribute to their products’ codebases.
Why not? Managers should be like left-handed specialist relievers: they come in for a short time to handle a specific issue and otherwise leave the team alone.
> Over the past year, I've watched engineers use AI to ship in days what used to take a team weeks. Nontechnical teams are now shipping production code and many of our workflows are being automated.
So on one hand they are the most secure business on the Internet and on the other hand YOLO!
Do fintech customers share your ideals as to what is "critical stuff" and what isn't? How much of this business could _plausibly_ be "non critical?"
Internal tools keep the lights on and allow customer facing code to function!
Operational tooling also isn’t a sexy thing, but it’s vital for any company to function.
Have fun trying to get your funds out of Coinbase. I managed after about 3 days and 10 support tickets. The process seems intentionally broken. What a nasty company.
But the few years to come are going to be wild for a lot of folks out there.
I don't expect Coinbase to publish a "we're hiring everyone back" in 5 years from now, but I hope at some point the media will spot those trends as they - I have no doubt - will happen, and propagate that tune.
For the end user it looks like an evil cash-grab, but really it's the company protecting itself from regulatory vengeance.
Your coins get frozen with no reason given, even internally, except for "machine said no" - and no one gets any slap on the wrist unless you sue real hard, happen to win, and most likely that'll be just a scratch that won't be noticed enough to change any attitudes.
The Man sees someone they don't like transferring their coins through the fintech company - that's what those companies are really concerned about, because that would be a punch in the gut the company will feel.
Thus, the incentives. Current social design doesn't punish for false positives (until they hit really high levels), only false negatives.
What licenses of theirs were terminated? Seems to me that the regulatory oversight is a joke.
Just vague nonsense about compliance, which magically aligns with padding their float. In reality they are using compliance and regulatory language as a shield to prop up their numbers. They are using KYC/AML to hold your funds hostage, as it's the most plausible explanation that also allows them to seize funds under a legal-sounding pretext. The fact that they do have to perform KYC/AML, and that there are penalties for not doing so, just happens to make it a valid-enough-sounding excuse when it's used overly aggressively, because it lines up with other goals.
If they set the trigger to freeze funds 2x as often as they need to against innocent false positives to pass compliance checks, it falls under plausible deniability - and even better, when the regulator comes they can say some insane bullshit about how good their KYC/AML is. If they freeze funds less often but instead just steal some for a little while and then return it, it's more obvious a crime has been committed. It's obvious what they're up to.
Of course the KYC/AML/regulatory officers are probably just pawns in this. The executives in the crypto and fintech space tell these people they need to set the sensitivity up to the 9s, which does increase KYC/AML 'true positives', but the unspoken part is that money is now locked up in the company's accounts, which creates a moral hazard in their fiduciary duty. They know damn well that what this actually does is inflate their float, at the cost of a bunch of false positives. In theory that satisfies AML, because a side effect is that you trigger more true positives, but in reality it's merely holding money to increase floats, not actually optimizing to meet the cutoffs to keep your license. But no one is actually going to come out and say this. It will probably take a class action suit, which I have little doubt will eventually happen when someone admits one day that these compliance triggers were intentionally set on the sensitive side for non-regulatory reasons.
As far as I understand, they're often not allowed to disclose that. E.g.,
https://www.bitsaboutmoney.com/archive/seeing-like-a-bank/
> In the specific case of “Why did the bank close my account, seemingly for no reason? Why will no one tell me anything about this? Why will no one take responsibility?”, the answer is frequently that the bank is following the law. As we’ve discussed previously, banks will frequently make the “independent” “commercial decision” to “exit the relationship” with a particular customer after that customer has had multiple Suspicious Activity Reports filed. SARs can (and sometimes must!) be filed for innocuous reasons and do not necessarily imply any sort of wrongdoing.
> SARs are secret, by regulation. See 12 CFR § 21.11(k)(1) from the Office of Comptroller of the Currency...
When someone gets their money frozen for a month only to have to perform a simple KYC check, and these kinds of outcomes are common over years, it's obvious that even if the KYC check was legitimate, the delay was the result of a business decision that increased their float.
I think you're conflating the requirements of the BSA with how executives are using it in a hostile way against customers. They can make the deliberate decision to slow down KYC/AML officers and checks after a trigger, while putting those triggers on a hair trigger, all while citing secrecy under the BSA. That is the regulatory nonsense under which they are dressing up a business, non-regulatory decision. It's there to provide plausible deniability.
The compliance officer in this case is plausibly just following the law but in reality they're just running cover for increasing the float -- maybe even unwittingly.
They are legally prevented from telling you by the regulators, at least in the US.
Put otherwise, suppose I run a bank and you deposit your paycheck. I decide our reserves are a little low, so I set the KYC/AML triggers even more sensitively, such that an extra 0.2% of innocent paychecks get held up an extra 4 weeks (I have also conveniently slowed down / underhired customer service), which also causes me to catch 1 or 2 more real criminals. That's not KYC/AML, even though that's the mechanism by which I claim to have held the funds. I'm not bound by BSA secrecy in such a case, since the underlying trigger was there to increase the float rather than for actual KYC/AML compliance.
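To make the incentive concrete, here is a minimal sketch of the hypothetical bank above. Every number (deposit volume, yield) is an assumption chosen purely for illustration, not a real figure from any company:

```python
# Hypothetical float-padding arithmetic, per Little's law:
# average extra float = deposit throughput * fraction held * hold time.
# All inputs below are assumptions for illustration only.

annual_deposits = 50_000_000_000   # $50B/year flowing through (assumed)
false_positive_rate = 0.002        # extra 0.2% of deposits held up
hold_weeks = 4                     # extra hold duration
annual_yield = 0.05                # 5% risk-free yield on the float (assumed)

extra_float = annual_deposits * false_positive_rate * (hold_weeks / 52)
free_interest = extra_float * annual_yield

print(f"Average extra float held: ${extra_float:,.0f}")
print(f"Annual interest earned on it: ${free_interest:,.0f}")
```

Even a tiny false-positive bump on a large deposit flow yields a multi-million-dollar standing float under these assumptions, which is the moral hazard being alleged.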
------- re: below due to throttling ---------
I am accusing fintech and crypto businesses in general of committing mass fraud through intentionally setting KYC/AML on an artificially sensitive trigger to increase their floats, yes.
I do not know if Coinbase specifically does that -- my limited experience with them is they are one of the few fintech companies that hasn't fucked me over.
I have an absolutely massive body of evidence that leads me to that conclusion, through my own transactions and frozen funds as well as studying a wide amount of CS complaints that show evidence that KYC/AML checks on frozen funds are stalled for weeks to months without any plausible explanation of what is happening which is not a KYC/AML regulatory action but rather an intentional choice to raise floats for free interest and padding their numbers.
Of course, what's extraordinarily ironic here is that when fintech claims you violated KYC/AML, it's "the law says we provide no evidence" - but if you turn around and accuse them, the industry shills will scream "without evidence" while simultaneously saying your counterparty doesn't have to provide any! They are hypocrites! The very people accusing you of making claims without evidence betray their own sins by doing the same! They were the ones who set the bar that evidence need not be presented, not me.
Just one rebuttal ago, it was explained why it was okay to freeze customer funds without providing any evidence.
Now we are Jekyll-and-Hyde'ing back to getting upset about an accusation without evidence. That was the crux of my entire case! I am being damned for, allegedly, using the same standard of evidence as my accuser (though I dispute that I am presenting as little as them)!
If that's your case, then you have concluded and rested my case for me in my favor. The entire KYC/AML argument falls apart because it fails your requirement to present evidence at accusation.
Either accusation without presenting evidence is bad, in which case KYC/AML as it is used (stalling people for weeks to months without providing evidence) totally falls apart and I rest my case; or that standard of evidence is OK, in which case I've presented at least as much evidence as fintechs provide in their accusations against customers (nothing), and in that instance I also rest my case.
Whichever of these last two Jekyll and Hyde responses we pick, it isn't working against me.
What I'm really intrigued by is the non technical staff deploying code to production. Now that's a gamble I want to see in the crypto space.
4 months basic severance pay + 1 month per 2 years of employment is nice? So that's 5 months total severance after 2 years of working for them, or only 6 months after 4 years.
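Assuming the formula is 4 months base plus 1 extra month per 2 full years of tenure (this commenter's reading of the package; the actual terms may differ), the totals work out as:

```python
# Hypothetical severance formula: 4 months base + 1 month per 2 full
# years of tenure. This is an assumed reading, not the official terms.
def severance_months(years_employed: int) -> int:
    base_months = 4
    tenure_bonus = years_employed // 2  # 1 extra month per 2 years
    return base_months + tenure_bonus

print(severance_months(2))  # 5 months after 2 years
print(severance_months(4))  # 6 months after 4 years
```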
Let me guess, you're from the US if you think this is nice. As a European I would say this is fairly standard, nothing to brag about; 3 months should be the bare minimum by law.
That doesn’t make one model universally better. There are clear tradeoffs on both sides. But it is part of the equation worth considering in response to your point.
All I wanted to say was that I don't find 4 months particularly "nice" as a European, though I'm sure there are even some Europeans who would find it nice, since they work for crappy companies in countries with less protection. They're in a lose-lose situation: no US benefits (salary/taxes), no European benefits (severance pay/notice period).
I must live in a different Europe then. I'd say this would be EXTREMELY generous for Europe.
1. You get fired with a 2-month notice period and they tell you not to bother coming in anymore = 2 months of de facto severance; you can sit at home and look for a job for 2 months on full salary.
2. On top of this you also get an extra 2 months of severance pay.
So in total, de facto 4 months of severance pay. I understand shitty companies will expect you to work even during the notice period (especially if they're firing you) and somehow expect the same results; smarter companies know the reality when they're firing someone and just tell them not to bother coming in. That was my case in the last 1-2 jobs I had more than 10 years ago when I was still an employee (plus they wanted to give me 1 month of severance pay, but I argued about the years I'd worked there and certain operational practices which could be published, so I got 2 months, unlike my less assertive colleagues). I'm nowadays a contractor/freelancer for companies outside Europe, so no legal protection for me.
My wife is always employed as an employee and got fired this winter under the conditions I mentioned in points 1 & 2, getting 2+2 months after 1 year of work. Two jobs ago she was fired without severance but didn't need to work during the notice period.
Plus I found the mention of 6 months of COBRA as some benefit funny; in Europe you're covered by insurance regardless of your job status. Whether employed or unemployed, you're always covered by universal healthcare.
Sure, you can earn more, but there are plenty of benefits to working in Europe. For instance, how many days of vacation do you get by law in the US? What's the point of more money if your employer works you to death with no work/life balance?
I found the mention of COBRA for 6 months amusing; in most of the EU that's a permanent benefit for all citizens, not something given by an employer. Your care is just paid from universal healthcare, and it doesn't matter whether you're employed or unemployed. In the US you can end up in a situation where you don't earn enough to have good health insurance but earn too much to be covered by insurance for low-income people; no such thing is possible in the EU (though this doesn't really affect the IT field).
It'd be looking a gift horse in the mouth to whine about "well they get 22+% at XYZ"
If you're making 2x or more what a European developer makes, you're responsible for your own emergency fund. You ignore that at your own risk. I'll take that trade.
https://www.cryptopolitan.com/user-tricked-grok-bankrbot-to-...
As a security engineer, this statement fills me with dread.
2021 | 3,730 employees
2022 | 4,706 employees
2023 | 3,416 employees
2024 | 3,772 employees
2025 | 4,951 employees
2026 | 4,250*
*Estimated following May 2026 layoffs.
So the reduction gets them closer, but still higher than where they were in 2024. Given the fact that the crypto business doesn't seem to be growing much over the last few years it can be argued that they over hired in 2025 and going back to 2024 numbers just makes sense. And as others have said in the comments, they haven't turned a profit so likely this makes business sense and the AI shine is trying to make the news less ugly for investors.
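A quick sanity check on the figures above (using the headcounts as given, with 2026 being the post-layoff estimate):

```python
# Headcount figures from the table above; 2026 is the estimated
# post-layoff number, per the footnote.
headcount = {2021: 3730, 2022: 4706, 2023: 3416,
             2024: 3772, 2025: 4951, 2026: 4250}

cut_vs_peak = 1 - headcount[2026] / headcount[2025]
delta_vs_2024 = headcount[2026] - headcount[2024]

print(f"Estimated cut vs the 2025 peak: {cut_vs_peak:.1%}")   # ~14.2%
print(f"Still above 2024 headcount by: {delta_vs_2024}")      # 478
```

So the layoff is roughly a 14% cut from the 2025 peak, yet leaves them a few hundred heads above 2024, consistent with the over-hiring read.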
Oof. That smacks of hubris and valley-buzzwordism.
> Leaders will own much more, with as many as 15+ direct reports.
> Every leader at Coinbase must also be a strong and active individual contributor.
So, a manager who's managing 15 people AND expected to ship -- that sounds awful for both sides.
Right?? I saw that too. My first thought is that any good managers left will be racing for the exit. You can't fake "managing 15 people" with AI. You have to actually have the 1:1s and do the performance calibrations. How are they going to have time left for IC work??
They'll switch to async communication for everything, and ideally have a bot that answers "Mm-hmm" like a psychologist in his chair.
More seriously, the solution is to move to a flatter org, but that's a drastic change with unknown consequences for most companies.
I feel like managers should be able to contribute. Managing a good team isn't that hard, though managing a bad team (or a good team in the midst of a ton of bad processes) is a nightmare.
Notable is what they're not doing--annual reviews. This duty is now handled by the all seeing "intelligence" machine that can evaluate employees in real-time.
Freedom for who, exactly? Coinbase's executives, I suppose.
You know, hire, stop hiring, then start firing
Also, it is clear at this point that tech thought leaders decide, probably over group chats mere mortals are not allowed in, which messages to deliver for a few days, urbi et orbi: introspection is overrated; the leaders-followers dichotomy; and now, the disdain for "people managers," as if they were imposed by the Galactic Empire instead of being people whom their organizations hired for years.
And, like, what sort of message is that to send when announcing layoffs: "from now on, teams will have not 14 but 15 ICs (whatever the numbers); the new IC will be the manager, who will continue to be a manager but will also do some IC work"?
It is high-school all over again.
Since roughly 2018 I reckon, at least.
And I suspect that over the coming year, we'll be watching the consequences of this unfold.
Some of the biggest AI adopting companies are still shipping garbage (Meta, Amazon, Microsoft, etc), and I’m desperately curious what infinite AI resources are actually doing for them.
More reports for accounting? What?
> Non-technical teams are now shipping production code
if you vibe code financial systems this cannot mean anything good for your business
Have some empathy for people losing their jobs because of upper management’s incompetence.
Have some empathy for the misled retail investors who gambled their savings away to thieves?
Did I miss some news where Coinbase literally stole people’s money, or at least did something that could reasonably be called evil?
Crypto is always about to take off. If the company is sitting so well, and is facing imminent growth, then they don't need to do layoffs, they want to. Or the company is not sitting so rosy and they're not too sure about their future.
> Non-technical teams are now shipping production code
What could go wrong?
Given Coinbase is a financial platform this doesn't make me feel great. Hopefully they're contributing in areas that don't affect security or money.
If you're a leader and you've said that your company is too big and has to downsize by 10+%, then you're the problem.
Firstly, the business needs active business lines and new initiatives. If you are not supporting that, you've failed.
If you're so inefficient that you need that extra 14%, you made that mistake.
If you "overhired" and didn't find a way to use that extra capacity to grow the business, you are the problem.
And if you say that AI has changed your business, then 14% more people means 14% times the AI lift in extra capacity to accomplish greater things.
It's not the talent, and it's not the talent's fault you have these issues. A lot of people assume that layoffs mean removal of bad performers. That's not the reality.
Heh. This is the kind of phrasing that just begs to be misunderstood.
Can anyone share how and when they see the market getting into better shape?
Specifically, I am curious how we would be working with AIs even if the market does improve.
As a reward, people driving the productivity have now received a reduction in their colleague pool.
Terrifying.
However, do we really need them to AI-wash the fact that as a lot of companies, this company over-hired during ZIRP? Do we really need them to AI-wash the fact that the crypto hype is gone, therefore their business is smaller? “Company as intelligence” and “AI productivity” are just buzzwords so their stock price doesn’t suffer.
Companies above a certain scale - let's use Dunbar's Number as a good threshold - need full-time managers to handle the necessary information flow through the company. Middle manager is actually a job that AI can't do yet, because their main task is to figure out what everyone around them needs to know (inside and outside their team), which requires a theory of mind that current LLMs just don't have. Is this policy change worth telling your team about? Is this feature creep worth telling other teams about? That is the decision managers have to make dozens of times a day, and it requires a model of what various people know, in order to judge whether something is important to them or not.
It wasn't that long ago that, in SV, the dominant values were humility, kindness, and openness to all views (even if behind the scenes there was the ruthlessness demanded by capitalism). The last few years have seen this value system corrode, and it seems like it's hurting everyone: from the tech workers constantly churning for no good reason, to the tech executives sequestered in their own thought bubbles until reality finally hits them (usually too late to change).
This resonates but I can't put my finger on why for the founders of AirBnB. Do you have examples? Obviously true for Elon.
It seems like the previous generation of founders were always paranoid that their companies could/would fail in an instant, which led to the management styles of Andy Grove, Gates, Jobs etc (and I'd argue Larry and Sergey as well). That mindset meant they knew they couldn't afford to be surrounded by yes man and their egos were secure enough when challenged by their underlings.
Despite the intensity of all three, you hear stories of how Gates only respected people who could credibly argue back against him, how Jobs empowered his team, etc. The current generation of founders seems to believe their own mythical BS to such an extent that anyone who disagrees with them is culled from the organization, resulting in a natural selection effect where only the yes-men survive.
Today, not a single mention in that email.
I can't help but feel that there is a superficial chasing of trends at play here (adopting the same playbook that Block used earlier).
Question is, where will we all be in 3 years from now?
I was shocked at how easy it was to train and develop a model that can replace senior leadership in a company.
The CEO was the easiest. I simply loaded the model with as much corporate jargon and double talk as possible, plus the ability to talk down to people. The model nearly wrote itself.
Then, simply by ingesting the Wall Street Journal, Barron's, the Financial Times, and SEC 10-K and annual reports, I was able to compile the perfect CFO. It was able to spit out regulatory reports and answer questions on investor calls.
Strangely, the one component of the model I had to write in-house was the ability to give up part of the bonus to keep key people employed. It seems that in all of those financial reports, there were no examples of anyone the model could leverage.
How long will it take for people to realise that they are playing "pass the parcel" with a ticking explosive?
This reads like typical MBA efficiency idiocy taken to the extreme. Clearly this guy is so deeply isolated from the actual work that he cannot even begin to comprehend just how utterly stupid this idea is. It's one thing to push for 100x engineering "output" with "AI", but something completely different to expect a single person to be 3-4 people in one. Pure schizophrenia - but at least companies like Coinbase that adopt the AI-first illusion will burn themselves out faster and leave room for something new and genuinely innovative.
Maybe you don’t have to make comments like this?
It takes one massive breach and theft from the exchange as a result of this and they are cooked.
Exchanges never recover after billions of dollars get stolen from the exchange.
Generally engineers are not well placed to be building UIs.
Rookie mistake by your AI; otherwise it did a flawless job, and the glaze it's been giving you is 100% accurate. You are the bestest.
If one more AI calls me "insightful" or says that my question "really cuts through the noise" or "gets to the heart of the matter"...
Why would non-programmers need to ship production code in a financial context?
The Tether narrative has just been broken and Iranian assets have been frozen:
https://edition.cnn.com/2026/04/24/politics/us-freezes-crypt...
This of course means that the primary use case of Bitcoin, sanctions' evasion, is no longer secure.
It becomes clearer and clearer that Lutnick and Trump are actually the deep state, and the big boys mean it. Further crackdowns on China and Russia are coming, and it does not look good for Bitcoin.
But by all means, cite AI nonsense as a favor to fellow founders to pump up their valuations.
Good luck to those (human) teams when the stuff hits the fan thanks to an AI hallucination... oh wait, the Active Individually-contributing leaders will be there to lend a hand, right?
There is nothing that can go wrong with having non-tech people vibecode slop and push it to production... and certainly not when money (or monetary equivalents) are at play.
Print it all out and bring it to the meeting please.
The AI bullshit is CEO feel-good talk.
I think all of us are a bit sad now that AI has essentially removed what it means to be a coder.
There will never again be the time like we had, the golden age of being a nerd. We nerds had it all, and then we destroyed it by making something too smart!
As a Texan, it's kind of like cowboys. Coders were wrangling the computer, but now we have been replaced by industry and mechanics.
Having read the twitter post, it was raw and honest, and I want to share some ideas about life that I feel are relevant.
The first one is that when you work, you should always do something you believe in, because nobody can take that away from you.
If you worked for the money, or because someone told you you could be a part of a cool team, your whole world falls apart when you get let go.
But if you work because you truly believe your work is worthwhile, you will always be glad you did it.
I feel that people on here continually complain about capitalism and how bad corporations are. I challenge all of you to check yourselves and ask what you are doing to be a part of the system. If you accept employment at a 9-5, you are part of the system and making it stronger.
I have always refused to have a job. At age 32, I have only ever worked at one company as an employee, and that only for a short time, and the person was a genuine friend of mine.
I ask each person here to quit working at a company. I think all of us should choose to only ever work at a nonprofit.
Fundamentally Capitalism can't be defeated if we complain and then try to negotiate the biggest salary or benefits.
It's logically stupid for us to be saying they are evil, when we do the exact same thing with a salary.
Instead, each of us should work at a nonprofit, and we should NEVER accept a salary but instead ask them to give to us when they have something left over.
Ultimately, friends, I chose to tell my boss one day (the guy whose small company I ended up being an employee at for a bit) that I didn't want a salary - just donate if you want.
Ever since then, I have been happy.
I hated life when I worked for money. But now, I love it. I have gotten to code on many fun projects, but for the first time I felt alive.
It was terrifying with a wife, a kid and a mortgage to say that. But I am a true believer that the universe, or God has a plan for everyone, and that if you stop worrying and doing what you are told, and just go out and love people, it will all work out.
What I found is that the pay you get working for free is better than the pay you could ever get with money.
You can finally live with yourself when you just love everybody, every day.
If you pay me, and I did great work, you will never know if I love you. But if I did it for free, for all of eternity, you will know that you know that I care about you. And that, to me, is worth more than all the money in the world.
That's why I never accept a salary when I work. I just let people give as they feel fit.
Yes, it is hard, and it doesn't always feel fun. But it is 1000X worth it.
Thank you for reading, God bless you and have a great day!
I think this will be commonplace in the not too distant future.
Some disasters will happen, just like they did before AI. Skeptics will gleefully point out these failures while more and more non-technical teams ship code.
Technical teams still need to design and build out the infra.
Technical teams still need to think about how to design and secure the backend systems.
The only thing that changes is that non technical people can now build UIs and internal tools on top of your core assuming you have solid APIs, MCPs, docs, and components to build on top of.
If you're allowing non-technical teams to deploy mission-critical software, then you're not doing it right.
No one wakes up the frontend dude at 2am because the JS is doing something weird in the browser... All of the core infra and backend should still belong to technical teams.
I'm sure Coinbase understands this and when they say non-technical people are shipping software they don't mean they're vibe coding terraform infra and deploying full-stack user-facing applications.
And due to this it deserves even more mockery.
Oof. So not only are they giving their remaining managers more reports, but those managers will be expected to do lots of other, non-management work.
Sure, nothing can go wrong there... Even if they didn't have non-managerial work to do, 15+ direct reports is just too many. They're not going to get to spend enough time meeting each report's needs, not a chance.
I think as layoffs emails go, it's a pretty good one (as the current top comment points out[0]), but boy, I would not want to be working at a company like what Coinbase is turning into. Non-technical teams shipping code to prod? No thanks. "AI-native pods"? No thanks. I do like the idea of one-person teams; I was at my most productive when I was in that kind of role (though I'm not sure my experience generalizes). I get that companies are still struggling to figure out how to adapt to LLMs, but... damn.
Pretty solid severance package for the folks being laid off, though.
[0] https://news.ycombinator.com/item?id=48021843
Before that, the manager was essentially the best engineer on the team (or the one who wanted to get promoted). Being a manager meant you were respected directly for your skills, and you were expected to still be a full-time contributor. Director meant you were one of the best ICs out there. Now, being a manager or a director sometimes means you did an MBA in an unrelated field. This brought a ton of politics and nonsense meetings (because the most visible output for managers is more meetings where they can posture).
Let's go back to what it used to be. We don't need weekly 1:1s to check on feelings. We don't need a full layer of managers syncing with each others and taking political decisions that will mainly advance them. We don't need another layer of gatekeepers.
I'm not saying all managers are bad, but this charade has been pushed a bit too far.
As a manager that does weekly 1:1s, I agree with that statement. But I do need 1:1s to check on progress, uncover blockers that people haven’t surfaced on their own, make continuous small decisions, offer support, assess performance, collect status information for my manager, and last but not least give employees the opportunity to share feelings frequently. They do, and it’s not very often, but it’s important to have a dedicated place for it otherwise devs often don’t share until damage is being done.
I’ve also watched devs who didn’t have weekly check-ins go pretty far off the rails. One dev I remember would go off by himself for weeks designing clever code and over-engineering things that weren’t needed. I thought to myself that someone should be checking in with him, and then months later I got stuck doing overtime before a delivery deadline with dozens of other devs on a weekend chasing an intermittent release-only runtime crash that turns out he caused by trying to get tricky with copy constructors. A quick 1:1 could have prevented this bug that ended up costing tens or hundreds of thousands before it ever happened.
BTW, the best managers I’ve ever had were technical contributors, and they tended to be more relaxed about check-ins than the non-technical managers, in part because they had a better sense of where things sat. Personally I also feel like a better manager when I’m contributing technically to a project, and devs seem to respect that more.
I constantly reiterate to people, whether they're reporting to me or not, that they need to speak up when there's a blocker. I feel it's a very telling skill whether engineers can communicate issues effectively and urgently, and figure out the best course of action to get unblocked.
I've heard tales of 300k/yr engineers that just sit there and wait for a manager to ask if they're blocked, or just sit there until they're told what to do.
This is widely presumed to reflect reality within a 1-2 degrees of separation from myself as well as from many of the people I speak to. Part of the problem is that there is always plausible deniability. Like the adage of how unwise it is to fire custodians just because you never see a mess and therefore you never actually see the custodians do anything, it may be "unwise" to lose the presence of these 300k/yr engineers just because they somehow actually keep things going smoothly.
> I constantly reiterate to people, whether they're reporting to me or not, that they need to speak up when there's a blocker.
This presumes a particular, healthy culture where open communication is valued, appreciated, and not punished. That is not always the case, and an "objective description of a blocker" can result in bruised egos when it transforms into blame on some person or team for being or causing the blocker. People who have experienced such cultures may be waiting for private conversations (such as 1-on-1s) that minimize the risk, and they may be waiting to identify you (or whomever they are talking to) as a person who could communicate the nature of the blockage in a politically favorable or neutral manner. All of this may be happening without the people involved being consciously aware of this behavior of pushing information out through private conversations. And this maintains plausible deniability for ALL parties. The person who is blocked is never identified. The person who may have been the blocker is never identified. And hopefully everything gets fixed before anything is actually worth escalating.
I could be this person who appears not to communicate, but the reason is that I've never had a manager who could unblock me faster than if I told no one and just did it myself. For the longest time, every manager I ever had was mostly useless for unblocking issues; it took quite a few years before I got an EM who actually makes shit happen. Only then did it become a habit I had to break.
It doesn't make sense to tell someone who can't or won't help you that you're blocked on something. Eventually you just default to never asking.
I can see why this happens, and I was (and still sometimes I am) guilty of thinking as long as I have a little more time, I can solve the problem myself. We are all capable of figuring out how things work, we all want to learn, and we all have fears that admitting spending time spinning our wheels might reflect poorly or reveal weaknesses, and/or might be used against us in reviews.
Part of making people feel comfortable surfacing blockers is making sure the environment is supporting that behavior. Devs need to be rewarded for working together, and rewarded for being proactive about telling their manager or the team they need help on something. If these highly paid engineers have had negative experiences in the past, they might have learned not to bring otherwise important issues to light. Occasionally there are also people who learn what they can get away with and will optimize for the minimum.
IMO, the environment also needs to allow devs some space to go slow for a while, solve unfamiliar problems, and learn new things - so for me there’s a certain amount of being okay with blockers, when people are still being proactive. I’d rather talk about them than not and make a conscious decision, but I do try to be sensitive to what I label as blocker.
In some particular cases, with SMALL groups, I do think a manager is unnecessary, especially if everything is working out and the group is responsible enough to present usable information to others in the hierarchy. But if not, please stop fighting it, and only complain when the manager is genuinely annoying.
If you think they are robbing you of valuable time, time it. Time it and tell them with hard data you're being robbed of at least a certain % of your working time, which means you can probably deliver less if they want X action from you.
That's why, historically, having managers who were strong ICs removed that need: they already KNEW the progress and issues on the ground.
Modern BigTech created that artificial layer of managers that need to know about all the blockers and progress, just to report it to another layer of managers. That layer is so big that they need 8 hours of meetings to sync with each other. This is purely an organizational issue.
I'm saying a lot of the managerial work is busywork that got pushed as a need by BigTech's drive toward empire building.
Calling it a feature of empire building might be somewhat accurate, in the sense that, yes, all companies and most groups aim to make money or otherwise grow and succeed. Still, that seems like a pessimistic way to put it. Even small companies, church groups, libraries, and PTA associations have presidents and treasurers. Middle managers appear as soon as group size hits a certain limit.
But the state right now is that:
- Those management layers are now more disconnected than ever from the actual work. Most managers are career managers who have given up on any actual technical work. Even if they used to be technical ICs, most big tech companies actively discourage managers from doing anything remotely technical. I still cannot understand this.
- There are way more managers per IC than there ever were. (At the last big tech company I worked in, my org had ~5 ICs per manager. I couldn't believe it, and I have no idea what those people actually did all day besides meet with each other!) This is a direct result of directors deciding to hire managers: they have an incentive to grow the number of layers and the number of people in their pyramid. Even if they don't mean to, directors and VPs are managers as well, and as such they have a built-in belief, and incentive to believe, that more management is better.
IMO face time is very important and serves more purposes than the explicit information transfer. It’s also a much faster, more efficient, and clearer way to have a conversation, when back-and-forth is needed (which may be more often than you assume.)
In my experience, devs (including younger me) often argue for what's easiest or most comfortable for themselves, but sometimes they don't see what's actually best for themselves, or most effective for the organization, and they sometimes don't care what's best for the manager. (I'm not suggesting they should have to care what's best for their manager, just pointing it out.) Nobody likes a budget or oversight. Nobody wants to track time, be watched, have to explain themselves, and have to compromise in order to finish tasks. Still, budgets are sometimes good for us and sometimes produce better results, when money is limited and focus is needed. Budgets also inhibit risk taking, which can be good or bad; sometimes we need risks and exploratory work... so, yeah, the right tool for the job.
Why do you assume not talking by default is better? Have you considered the downsides of your instinct to save yourself a few minutes a week? One thing a lot of devs don't seem to realize when they push back on communication is the opportunities they're missing: to effect change in the group, to convince managers to invest in things they need or want to work on in the future, to change team communication practices, and to brag about what they've done, how hard they've worked, and what they really care about.
Yes, have them. Once a week.
I will again say that most 1:1s are BigTech cargo-cult rituals where you talk about your path to the "next level" or "how you are feeling this week," but they're also a lot of project management and busywork that mainly exists because the manager is in the picture. Without so many managers, most of those self-sustaining meetings would go away.
I like to think of it this way: a manager isn't required to have technical expertise (it can help, it can hurt), but they have to be a leader. Junior and mid-career engineers are required to have technical expertise, but not leadership, though leadership would certainly help them be high-impact and thought leaders in their space.
The more senior I get, the more like a manager I am: less hands-on, more coaching, guiding, teaching, and setting direction. Meetings and docs, rather than code, become my tools. When I'm writing code I'm only increasing the output of one person: me. Everything else is force multiplication. I just don't have to do the bullshit performance management.
Having a manager manage performance is the worst organizational option, except for all the others.
Good managers understand they (like senior ICs) are the grease between the working gears of a large company.
Bad managers think it means status.
I haven't had many excellent experiences with project managers, but the one good one I had, dang was he good at keeping me unblocked.
In my BU there were directors with 2 direct reports. Even one level up, the number of non-IC directs is only in the high single digits. There are many managers who were already engaging technically with the product (not PRs, but playing an active role in planning work), and they have no idea what the directors actually do, aside from attending meetings with other directors.
Almost all decision-making capacity has been moved outside of teams, which has resulted in almost no actual work getting done (because everything needs to be cleared by someone with no engagement with the product) and in people leaving (because promo decisions are made by people who have no idea what anyone is contributing; the worst ICs are, of course, the only ones they can retain).
It is a terrible environment to work in.
I don't necessarily think the manager should be the best IC, but definitely someone genuinely talented, with sufficient scope and responsibility to make good decisions and add value for ICs. There are way too many passengers today.
Also, this is true of higher-level ICs. At my work, they have no real engagement with the product, so they exert influence through ambiguous statements about general direction that get passed around like the word of God. None of these decisions, so far, has been helpful or relevant.
A decade or so ago, the high level ICs I interacted with were much more technical.
They were the kind who would perhaps not invent truly novel things (though plenty did, at the right companies), but they had mastered their domains and genuinely solved thorny problems that others struggled with.
Nowadays, they are more political and less involved. I have met many that do not code or barely code. I've been in months of meetings to decide to do something fairly obvious just to ensure "alignment" even though no parties actually disagreed, just wanted to nitpick minor details that could just be a comment on a PR.
I'm not sure that's ever been true.
Google and the other adtechs are not hard tech; that's why they have so many managers.
An underappreciated reason for this is empire building: someone needs to be promoted to Senior Director, and one way to do this is to add a layer of management. Adding 5 heads of headcount that essentially do busywork makes it easier to argue that your org is very important and that you should be promoted.
- entry level dev
- senior dev (start being groomed for management)
- senior dev/leader (take on 25% management duties)
- manager - management track.
Once you're on a management track, you're essentially taken off of any dev work, and then how well you've networked determines how fast you move up the management chain. Some companies, like Target, groom and promote anybody they see potential in relatively fast.
The only exceptions I've seen in my career are either startups or medium sized companies where there is no management track. You're a developer from the day you're hired until you either get fired, laid off or leave the company.
When I was an entry-level dev, I left three companies because they wanted to start grooming me to move up into management. I was way more into being a developer and writing code than managing people.
A good manager is worth their weight in gold even if they produce zero technical output. I've had managers who were absolutely instrumental in my career as a programmer, and they did close to zero IC work.
>>Before that the manager was essentially the best engineer in the team
Yes, and it was absolutely awful. Keep the best engineer in the team as the best engineer on the team. Call them experts, distinguished, senior++, whatever, don't make them managers.
>>Let's go back to what it used to be
God, please don't.
>>We don't need weekly 1:1s to check on feelings.
Speak for yourself, please. I find weekly 1:1s extremely important for the entire team, especially in fully remote roles.
The two extremes of company culture are status cultures and service cultures.
In a status culture the product is the internal status hierarchy. External products are largely incidental goals, and customers and markets are only valued to the extent they create metrics that can be exploited by status seekers. Likewise employees.
In a service culture the goal is customer service through high quality output and employee development.
US corps lean far more to status culture than service culture. This is excellent for short termism, but the culture often becomes dysfunctional, if not outright abusive, and sooner or later it implodes, because status cultures aren't good at accepting reality, or at accurately reading it when they do accept it.
And status cultures tend to cargo cult management, where the C-suite is comparing its status to other C-suites, and copying apparent status-raising actions without thinking them through.
In good times a status culture will overhire, because hiring more employees looks like growth. In bad times status cultures will overfire because "cutting the slack" is lowest common denominator status management.
AI is the same on steroids. You get the promise of more growth with fewer employees, and that's hard to resist, even though it's entirely speculative and could easily be catastrophic. (Company results, and especially lasting company results, are orthogonal to whether some employees get good results with AI, because what actually affects results is how predictable the improvements are, whether there are likely downsides, and whether they're structurally in the right places.)
Whether managers should also be ICs is a side issue.
Please tell me where these 'managers make a lot of money and do nothing but approve timesheets' companies are, I'd kill to work for one!
On the other hand, I've had managers just the same: they cannot understand why anything is difficult, and they certainly wouldn't waste their time trying to help you.
It's just people, they're sympathetic or not. Determined or not. They care about the outcome for more than themselves or not.
But what we are saying here is that they are essentially an artificial layer of busywork that adds very little value. This is what decades of empire building and organizational issues have created.
It's slowly changing and people are realizing a lot of the manager work is self-created and sustained.
My prediction is that most tech companies will go through flattening cycles now that we start realizing that adding managers adds a similar amount of busywork.
The places where pure managers just existed (i.e., manager, senior engineering manager, director, VP, etc.) just added unnecessary overhead.
The places that flourished had a manager who was also an IC and who reported directly to the CTO,
which meant I was one layer away from the CTO.
I worked in several BigTech companies that had managers who were excellent at talking and posturing in meetings but did no, or even negative, work.
The issue is that managers are hired and judged by other managers, not by ICs, not by producers. This creates a managerial class that makes itself self-important.
It's as if people are saying they want a direct democracy in which every issue is voted on directly by the participants, with just one layer between the people and the "prime minister." Good luck with that when the group size exceeds 50 people, and one realizes that people don't want to vote on every issue affecting a larger society or organization.
I was employed before the 2000s, and that was the norm before AdTech started empire-building around managers.
Hard agree. One-on-ones are one of the silliest fads in our industry lately. Why would you wait until a weekly scheduled meeting to bring something up? Your manager's job is to be available when you need something, not just once a week. And if they want to know how you're feeling... they should ask; putting it on an agenda feels very disingenuous.
My direct manager (a C-level) and I tried weekly 1:1s for a full year and ended up giving up on them because they were clearly unproductive cargo-culting.
The dirty secret is that most employees in the software field are not capable of that level of maturity and forthrightness.
I’ve had employees like that, and our 1:1s switched to a monthly cadence and were frequently skipped when it felt like there was nothing to talk about.
For the majority of the others they had some level of anxiety about discussing problems and needed the structure of a scheduled meeting to feel safe enough to bring up issues.
I see comments in this thread being dismissive about discussing feelings and I assume they would be terrible managers who couldn’t handle the first time they had a direct report break down in tears in front of them while struggling with some task and feeling worthless.
As a result, people pop in my office regularly to start these conversations, which I prefer because it leads me to believe I am approachable, which is by far one of the most important things a manager should be.
I prefer the exact opposite, especially when working remote.
When I was a manager, I saved non-urgent topics for a weekly 1-1 instead of pestering busy people with "Quick chat?" or "Do you have a minute?" messages. I wish others would do the same.
I'm also quite aware of what my people are working on, so it's never a "what are you doing?" conversation. Some of my folks are remote; sometimes I am remote. If you do it right, it really is just natural.
My own boss seemed to see the time as an opportunity to apply pressure so of course I utterly hated them and wanted them to end ASAP. I didn't want it to be like that for my team. I thought I should be a source of help.
I do think the most efficient form of team is a "cell" of three people. A cell of one is a little unstable.
Cheap money went away which caused companies to start asking hard questions about productivity and how much those dedicated managers were contributing.
It is pretty stupid and pathetic tbh, but easier than making an effective organization.
In most sports you've retired by the age of 40, and most coaches are older than that. I would say that's the reason it's common in sports; but that's the exception, not the rule.
Don’t know if it will work with this weird arbitrary cap, because 15 is fine for some things and way too many for others.
Mandates do be like that.
Hope they're well paid!
IMO it's just efficient to use any excuse to say "what's up, how did the house move go?" or whatever and make sure that you do that with everyone and that you behave in such a way that they don't fear or hate to have a 5 minute chat and know you are ready to listen if they want to say more. i.e. to take an actual interest in each person.
The popular conception of what a manager is, is wildly unambitious.
Weekly 1:1 is performative and useless. It's not what makes a good manager. What makes a good manager is:
... etc ... If a manager is doing these things well, I don't need a standing meeting at all. Or we can meet quarterly to check in.
Email is a thing.
But the thing is, this makes no sense. Tech issues always turn into people issues: when there is a disagreement, who adjudicates? How can a manager adjudicate something they don't understand? And how will engineers respect and follow the decision?
And people issues invariably become tech issues. How can you hire the right people if you don't understand the tech? How will you know when to fire?
This setup makes no sense to me, and I have very rarely seen it work. It seems like a product of an earlier time when there was a lot of money floating around, and it provided a way to (a) shield senior eng from people problems they just didn't want to deal with, and (b) provide cushy jobs to professional managers who didn't know much about the tech.
But it doesn't work. There's no way to do the shielding well and a person with hiring/firing power needs to know what the fuck is going on.
Really good eng leaders must be both good at tech and good at people. That's the job.
People management is about managing the company's resources to achieve goals. If you are not the one leading the implementation of those goals, you are not going to be able to do that effectively.
You will be completely dependent on a technical lead who does have that information. So then what is your independent role? Just to shuttle information between the technical lead and others?

I got to know him much better through these productive interactions than through awkward small talk in a 1:1.
And it kind of makes sense to meet privately quarterly, since perf reviews are also quarterly, and that's the only reason I can really think of for a privately scheduled face-to-face.
Of course, I could always just ask for a private meeting anytime I wanted, which I guess I did from time to time. But it was always for a product reason: a tough tech choice I was wrestling with, or similar.
Plus I think the regularity/cadence of it is supposed to provide some psychological safety. Asking for a one-off meeting feels like overkill for a normal 1:1, and yet a little intimidating for the type of 1:1 that you really need to have a 1:1 for (like discussing interpersonal issues).
* I suppose if everyone's fully remote, in theory the water-cooler talk moves to Slack.
It seems quite counterproductive to assume such a system would scale to everyone else, or that everyone else could possibly implement this. This is cowboy levels of human resource management, not careful engineering.
If you can do it w/ the first model why on earth would you not?
Hell, Dunbar's number is 150 people, and you expect to have 50 directs? That's literally a third of your 150 occupied by directs. It seems clearly infeasible the more you think about it.
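The span-of-control arithmetic in this subthread (5 directs per manager vs. 15, Dunbar's 150) can be sketched in a few lines. The function below is purely illustrative, not from any comment here, and assumes the unrealistic ideal of a uniform span at every layer:

```python
import math

def layers_needed(headcount: int, span: int) -> int:
    """Management layers above the ICs, assuming every manager
    has exactly `span` direct reports."""
    layers = 0
    groups = headcount
    while groups > 1:
        # Each layer of managers groups the level below it.
        groups = math.ceil(groups / span)
        layers += 1
    return layers

# With a span of 5 (the ~5 ICs per manager mentioned upthread),
# an org of 10,000 needs 6 layers of management:
print(layers_needed(10_000, 5))   # 6
# Widening the span to 15 cuts that to 4:
print(layers_needed(10_000, 15))  # 4
```

The point of the sketch: the number of layers grows only logarithmically in headcount, so mandating a wider span mostly removes whole layers at the top rather than individual managers at the bottom.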
Different roles though.
What needs? If you squeeze people hard enough there are no needs anymore, only responsibilities and urgent+important backlogs that have no bottom.
Welcome to 2026.
It is on every single worker to make sure they don't please the system beyond what is reasonable. Often the problem is people who overwork themselves to please, setting the bar above a reasonable amount of work. Still, when the majority does not raise their output to an unhealthy level, that must be accepted as the ceiling.
In real organizations, people tend to raise their performance to the (often unreasonable) level of expectations, even when the situation stops being sustainable long-term for the whole group.
Suggesting that people should simply avoid overperforming assumes a level of control they don’t really have.
What do you think will actually happen at Coinbase now? Is it more likely that people will start saying hard “no,” or that they would stretch to meet the new expectations despite the personal cost?
Of course, as a manager my normal workload was reduced to account for the managerial tasks, because that's what most industries outside of tech do.