Since Cursor often relies on Claude models, some of those services will flow back to their own datacenter compute, especially if there are, let's call them, "customer demand load-balancing optimization agreements" that make those Cursor services prioritize Claude models using the app keys that get load-balanced onto the SpaceX datacenter.
Did SpaceX just spend $10B to rent out its own datacenter, juicing their recurring revenue metrics with their own AI services investment?
It is publicly known that the vast majority of deals in the AI space are circular in nature, without any of it needing to be explicitly encoded in a legal contract or even a tacit agreement.
e.g. Nvidia has invested significantly in many AI companies, including both Anthropic and OpenAI, which rely heavily on Nvidia's hardware and will undoubtedly put some of said investment towards that end.
SpaceX is getting dressed for their debutante ball and is putting on the makeup to make a grand entrance on the auction floor.
Is there a difference? I legitimately have no idea. You are right that we can add another entry to the list of interconnected circular dealmaking. All this ain't gonna end well the next time the music stops playing.
My point was that there is a lot of this happening, it is not a unique statement nor is it surprising to see at this point.
I made no attempt to dismiss or justify any of it.
> I don't think it's the conspiracy theory that you're making it out to be.
Which is it then?
Companies appear to be spending endless billions on AI but ultimately it's a huge wank.
It's also not a good sign, because he should be able to leverage that compute for Grok, his billion-dollar investment, instead of renting it out to Anthropic. But hey, what does it matter to investors? If the IPO explodes, it is clear that people either can't read, don't care, or don't understand.
Why do you say that? I was under the impression that everyone in the datacenter business was printing money.
Every once in a while, declare peace. It confuses the hell out of your enemies.
If he could fill his datacenter with Grok usage, he would make a lot more money.
This is not a good sign at all.
Investors in the SpaceX IPO are buying a call option on Musk.
At least he doesn't come across as a happy person...
Though I'm really curious when someone might hit him back, after all the garbage he did and still does.
From Elon on X: ... After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.
I am worried about Google and Microsoft, yes.
I use Gemini models daily. JetBrains tells me when they are overloaded and switches to an alternative (usually OpenAI, which turns everything to shit). I'd say it happens about fortnightly.
It's a good litmus and forecaster for AI demand and I wish we had more visibility.
Then you've got SpaceX buying 1,200 Cybertrucks from Tesla, so it's serving as a failure-laundering vehicle for all his endeavors.
Which would be fine by me if Tesla weren't a publicly traded company and SpaceX weren't about to IPO. Juicing companies in a way that affects the open stock market feels very inappropriate.
And now SpaceX investors are going to be left as the bag holders for X.ai/Twitter.
Overall though, to classify the work he's done and the impact on the world as unsuccessful is just insane. It almost always comes from someone who hasn't even managed to lead a team of 10 through one project.
But he also plays in areas where market disruption can't be done by many people at all.
But look at Tesla: he did the Cybertruck debacle. He tanked Tesla as a brand, he is burning money on xAI and Twitter, he destroyed Twitter, a beloved brand. He did the Boring Company garbage.
The only thing this shows is some kind of masterclass in manipulation, public ignorance, luck, high-investment high-risk economics, and risk-averse industries.
Starlink, a low-margin business, doesn't scale very well, especially now that Amazon and the others are joining the club.
xAI is just a loss.
Twitter is probably still a loss.
Tesla made a lot of money with CO2 certificates, and in a market where people were quite ignorant for a long time.
SpaceX he wants to push to the death, without a real end plan. He now talks about Mars and datacenters in space like there is any real business up there.
Anthropic gets the compute they so desperately need to keep growing. Elon rents out compute that xAI couldn't make use of due to little demand for Grok. SpaceX gets revenue on the books for IPO.
PS. I want to translate this part:
We’re very intentional about where we’ll add capacity—partnering with democratic countries whose legal and regulatory frameworks support investments of this scale
To real speak: we're putting profits above everything else. Yes, Elon is a far-right guy who supported Trump, a president who isn't very democratic, but we're just really desperate for more money. We're also trying to make you forget that xAI is funded by Middle Eastern non-democratic governments. Heck, we'll even buy compute from China if we can sell Anthropic models there.

Considering that Anthropic mass-bans Chinese users' accounts based on VPN use (VPNs being used to circumvent the Chinese firewall) and then demands an ID or a residence permit from a country where Claude officially works to ensure that the user doesn't live in China, that seems unlikely.
https://www.wsj.com/tech/ai/anthropic-ai-defense-department-...
China can get plenty of value from Claude without needing to use it for anything similar.
They very specifically avoided a trap where Claude would very obviously get the blame the next time the US blows up a school full of children.
What's the problem here, exactly? Are you insinuating that any non-democratic government is bad and evil, and that only democratic governments are the correct and right way to govern? Sort of like: "there is only one true prophet, and it's the one I follow, and all the others are false!"
The ones run by people who chop up journalists certainly are.
My point is that Anthropic cares a lot about "democracy" but will buy compute from a data center mostly funded by non-democratic nations.
But assuming there are people that care, if a government doesn't derive its right to govern from the will of the people it governs, under what definitions can it be considered legitimate? Divine right of kings?
America could do so much to compel the world to work from a human-rights perspective rather than petrodollars. I can't imagine any serious person would say the average American benefits from US imperialism. All US politicians did was trade away a secure middle-class lifestyle for cheaper widgets, hardly anything worth caring about.
Who benefits from American petrodollar policies? Not Americans, all the wealth gets extracted to the elites while civilians suffer from the imperial blowback/boomerang.
Look at what the New Deal coalition brought in, and how it nearly burnt out enough to allow neoliberalism to flourish during its fall. What do we have in return? No universal healthcare, no universal childcare, a broken welfare system, increasing income inequality, and the loss of the ability to make a better life.
Anthropic is either taking this space business more seriously than the general public, or posting this sentence was part of the deal to get the compute.
This 100%
I assume privately they may not share that opinion, but it’s not in Anthropic’s interest to talk about this (very little to gain, and may ruffle a lot of feathers if they say the wrong thing).
If you're someone with a lot of money, who dislikes governments meddling in your business, and often pisses off governments...
... oh, I see why this is an Elon talking point now.
I suppose if you are desperate to justify a large investment this what you would do - frame the story in a particular way.
Once compute constraints ease up, you will see much larger models. The reason LLM progress seems to have stalled a bit is that there just isn't enough compute.
You have more people using AI, which requires more compute; you want to build larger models, which requires more compute; and you have limited compute. What do you do?
"The reason LLM progress seems to have stalled a bit is that there just isn't enough compute."
lol okay mate.
and yet now we have far bigger rooms with far bigger computers anyway
Hardware may improve exponentially, but demand for compute increases double-exponentially. We'll always need more, bigger computers.
There is no doubt that it's not a serious idea.
That claim seems reasonable. I have zero knowledge of the economics of launching and maintaining satellites though.
When people say 'running it hot is bad for reliability', they mean 'running it hot and then bringing it back to room temp from time to time will eventually kill it'.
That leaves only two kinds of people left who are still talking excitedly about datacenters in space: The uninformed and the grifters.
There’s very little research work needed to make this happen; it’s all about engineering some satellite buses and having them fly in close formation to get a “data center”. This group of satellites in sun-synchronous orbit would relay to a comms constellation (e.g. Starlink itself) and operate as a global-scale data center. The heat management and orbital mechanics are all straightforward, really.
Are we overloading the term "datacenter"? Or is it not overloaded but somehow able to achieve datacenter-like speeds / (tail) latency even when distributed across satellites?
AI calculations may handle wrong results better than CPUs, where software will tend to panic.
The space data center hypothesis relies on compute supply growing faster than power supply. (Both are bottlenecked on parts of the supply chain that will take ages to scale.)
Even if you believe that's the case, the point at which orbital data centers start making sense is incredibly sensitive to the exact growth rates.
The economics are vastly different when opex is near zero for these things
H100 rental prices are still as high as when the cards were brand new. The prices vastly exceed the power costs.
In a world where power or DC permits are the current bottleneck those H100s would be getting retired in favor of Blackwells. But they aren't. They are instead being locked in for years long contracts.
If silicon were relatively abundant and power/DC space scarce, you'd get an order of magnitude more bang for the Watt by replacing the H100s with newer GPUs.
But nobody is doing that. Blackwells are being installed as additional capacity, not Hopper replacements.
So it is pretty clear that silicon is the primary bottleneck.
That said eventually they can be lifted to higher orbits and have robots deliver and swap updated compute (if not made in space itself!).
LEO is high-risk, and Starlink satellites deorbit or burn up all the time. Not good from a capex POV on graphics cards.
It's still very dumb because of economics, logistics, serviceability, and more.
Things get cheaper.
Related: US readers should call their reps and ask them to support a successor to EPRA, the Energy Permitting Reform Act. The vast majority of the generation that’s waiting for approval is from clean energy sources. It nearly got over the line before the last Congress ended, and, combined with electrifying various carbon-intensive activities, it’s one of the most impactful things we can do to combat climate change.
This is a self defeating argument. Neither can space!
Any scenario in which you can get data centers and power into orbit is easier on land.
And the hardest part of my home solar install, by far, was the counterparties (inspectors, power company, and subcontractors). My understanding is that it's much worse when you're trying to get a grid scale install online, the interconnection queue is currently years long. This avoids most counterparties except the ones they're already routinely dealing with.
How much power do starlink sats draw and how does it compare to say 8x H200s?
27,500 satellites need launching - fast! - just for Claude to meet a demand spike?
So clearly not a problem for them.
All that gets you 70kW of cooling. Radiating to vacuum isn't very efficient.
Not efficient, and it doesn’t have to be, because the cooling system has zero opex cost. And capex clearly can be made to work.
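As a rough sanity check on the 70kW radiator claim, here is a back-of-envelope Stefan-Boltzmann estimate. The radiator temperature and emissivity are my own illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Assumptions (mine): one-sided panel at 300 K, emissivity 0.9,
# deep-space background radiation ignored.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w, temp_k=300.0, emissivity=0.9):
    """Panel area needed to radiate `power_w` watts to vacuum."""
    return power_w / (emissivity * SIGMA * temp_k**4)

area = radiator_area_m2(70_000)  # the 70 kW figure from the thread
print(round(area, 1))  # on the order of 170 m^2 under these assumptions
```

Roughly 170 m^2 of one-sided panel per 70kW under these assumptions, which is why "not very efficient" and "zero opex" can both be true: the panels are big, but passive.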
Why are we not building it on land again in some abandoned mall's parking lot?
Physics still gets a say.
And SpaceX has already proven they can launch sort-of datacenters 10k times over, by launching Starlink (up to 20kW of solar each, IIRC).
FWIW Musk should support Bernie Sanders more. Putting moratoriums on datacenters would make space based ones far more economical.
It's not that you can't put a server in space, but the costs to do it almost assuredly don't make any sense. Because, if you can do it in space you can do it easier on the ground and save yourself millions in launch cost and extra complexity. Your cooling challenges are way cheaper and simpler in an atmosphere.
There's nothing much being in space really gets you, other than it makes it harder for a government to take your computers away. Not impossible, just harder.
The economics don't work unless Starship is doing flights in quantity, and it has met or exceeded its cost targets.
Roughly, a single rack plus solar to power it in the $15m+ range just to launch. (This assumes power dissipation is handled via some means that does not require launch to orbit. Also does not include batteries.) Choose your own hardware for the rack, but call it < $5m.
SpaceX earning $15m every time someone launches a $5m rack would be a great business for SpaceX.
Use your own calculator/LLM, but mine is suggesting that the ~$7B Colossus 1 data center in TFA would be around $50B if launched on Falcon 9 (still ignoring cooling and batteries).
(There are obviously a lot of other asterisks. I'm ignoring power storage and heat dissipation. Maintenance probably doesn't matter given 75% of cost is in the launch. Network bandwidth could be a problem considering how DCs are used. Competition - if Company A spends $100B for $25B of actual AI infra, how competitive will they be against Company B who gets $100B for their $100B by spending it in Canada or Mexico, which they can do right now? Etc.)
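The per-rack arithmetic above can be sketched out directly; all figures are the commenter's rough ballparks, not real pricing:

```python
# Rough orbital-DC cost arithmetic using the thread's ballpark figures.
launch_cost = 15e6   # $ to launch one rack + solar (commenter's estimate)
rack_cost = 5e6      # $ of hardware per rack (commenter's estimate)

orbital_rack_total = launch_cost + rack_cost        # all-in cost per orbital rack
launch_fraction = launch_cost / orbital_rack_total  # share of cost that is launch

print(orbital_rack_total)  # 20000000.0 -> $20m per rack in orbit vs $5m on the ground
print(launch_fraction)     # 0.75 -> "75% of cost is in the launch"
```

That 4x-per-rack multiplier (before cooling, batteries, and maintenance) is what drives the $7B-to-$50B ground-vs-orbit comparison in the parent comment.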
None of this works without Starship, which has not yet set a date for its first LEO insertion test. And the whole point of orbital DCs is that nothing on the ground can move fast enough, hence the rush to orbit... which can't really move at all right now.
No, it doesn't make any sense.
If it happens it happens, if not, it doesn't.
This is stupid. I don't understand what's happening... specifically, what mental virus is spreading that lowers everybody's IQ by 10-20 points, evidently including my own. Put the data centers in the ocean, powered by solar and networked with Starlink or LEO. Put them in the desert. Put them 20 miles south of Nowhere, Idaho.
But space?!
Elon claims (which I take with a huge grain of salt because he's made endless broken promises in investor calls and interviews) that he disagrees with the administration's stance on solar and would use it to power his DCs if he could, but contends that permitting is a huge problem.
The US needs to figure out how to build again.
> This is stupid. I don't understand what's happening... specifically, what mental virus
"Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes"
And you don't need permits in international waters, any more than you need them in orbit. Lease space on container ships.
If the DC is for training or text inference, latency seems irrelevant, so go where you can quickly plop down power.
It's fraught to make a DC for a single purpose because it reduces the value of the DC. A DC that serves multiple purposes can handle other workloads. Moreover even if inference is slow, latency does still matter, and it costs quite a bit to light up net capacity (you still have to run fiber to an interconnect and depending on how far you are, this can get expensive fast.)
Supposedly some of the behind the meter gas turbines that have been getting installed are rated for a ten year service life. The DCs are burning them out in 10 months from rapid cycling. If they are willing to treat $10-100 million generators as disposable, cost seems irrelevant.
I want to be clear, I do think that one day something like that will exist, I just don't think it's anywhere close to being a reality, much like FSD.
Also it costs them, almost [0], nothing to say it and then later come up with some reason why they are no longer interested.
[0] Maybe a little bit of respect
All it says is expressed interest.
That's like a casual asking "how are you"...
You honestly expect this trajectory to continue unabated?
Knowing humanity's history, yes. Not sure we're ever going to see a second French Revolution. People are pacified and are not rioting. And they really should. Most of us are kind of privileged. I know people out there who are barely holding on and the recent fuel + food price increases might push them over the edge to actual poverty.
I'm just a software engineer; all I need to know is that SpaceX is aggressively pursuing this. That's enough for me to believe it's viable.
SpaceX operates literally orders of magnitude more satellites than anyone else. If anybody understands the physics and engineering of space compute, it's SpaceX. Lay people debating this online are just showing their ignorance, as far as I'm concerned, and it mostly comes from an emotional place of wanting Musk enterprises to fail.
Ironically it is Elon who has said that anthropic is evil:
https://gizmodo.com/elon-musk-teams-up-with-anthropic-a-comp...
And if you need to be reminded: https://hsph.harvard.edu/news/usaid-shutdown-has-led-to-hund...
Everything Elon does is somehow stupid or evil. Actually, that reminds me of Thunderf00t YouTube streams where he was (or still is?) betting Starship would fail miserably every test flight, and he'd talk about how evil and stupid Elon is for 3 hours with chatters, watch the flight then say something like "it's still bullshit."
I think it's a mixture of cope and a little bit psyop from adversaries like Russia who are being crippled in Ukraine because of Starlink.
I also hope that the fact I had OpenClaw in my sandbox once is not why I hit these limits so damn fast. I don't use it anymore and I've tried to rid my sandbox of anything "openclaw" but it is in my git history in various places on various projects. Claude doesn't seem to be transparent about this limitation.
- Codex
- OpenCode Go
- Ollama Cloud
All are very useful, still a subscription, but with higher usage limits.
Specific providers also offer their own subscriptions, like Z.ai for the GLM models.
Using DeepSeek, Kimi etc. through OpenRouter or from them directly is also great, here you pay per token but it's still more usage overall.
If you're using it 24/7 then yes, I'm sure the weekly limit is more of a concern.
If you're just using it during working hours - ie. you only use two 5-hour windows per day - then you probably, like me, struggle to hit the weekly limit even if you do max out some 5-hour windows.
Based on the size and complexity of the task, as well as any inter-task dependencies, the orchestrator deploys one or more subagents (sometimes 5 or 6 subagents) to work on these mini tasks. Once all tasks are completed, the orchestrator initiates verification and launches a review workflow. This workflow uses the original prompt, acceptance criteria, repository internal guidelines, and relevant skills to conduct a thorough review of the agents’ work.
Typically, there are one or two review iterations, during which the review agent identifies any issues. Sometimes, I may also notice issues and have to "steer" the orchestrator. The time required for a slice to complete ranges from 30 minutes to 4 or 5 hours, depending on its size, complexity, and the number of subtasks it contains.
Only if I run about three such orchestrations in parallel can I reach the hourly limit.
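The orchestrator loop described above can be sketched roughly as follows. Every name here (Task, run_subagent, review) is a hypothetical stand-in for illustration, not a real Claude Code API:

```python
# Minimal sketch of the orchestrator -> subagents -> review loop described above.
from dataclasses import dataclass, field

@dataclass
class Task:
    prompt: str                # the original prompt
    acceptance_criteria: str   # used by the review workflow
    subtasks: list = field(default_factory=list)

def run_subagent(subtask):
    """Stand-in for dispatching one subagent on a mini-task."""
    return f"result of {subtask}"

def review(task, results):
    """Stand-in review pass: returns a list of issues (empty when clean).
    A real reviewer would check results against the acceptance criteria,
    repo guidelines, and relevant skills."""
    return []

def orchestrate(task, max_review_iterations=2):
    # Fan out: one subagent per mini-task (sometimes 5 or 6 in parallel).
    results = [run_subagent(s) for s in task.subtasks]
    # Typically one or two review iterations.
    for _ in range(max_review_iterations):
        issues = review(task, results)
        if not issues:
            break
        results = [run_subagent(i) for i in issues]  # fix-up round
    return results

demo = Task("add feature X", "tests pass", subtasks=["api", "ui", "docs"])
print(orchestrate(demo))
```

The human "steering" the commenter mentions would sit outside this loop, interrupting between review iterations.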
The 5h windows are frustrating because I can go through them quickly if I have a more complex task. I haven't yet met the weekly limit. I'd say there are many cases similar to mine.
On heavy weeks I probably am using it consistently for at least 6+ hours a day.
Although, I’m pretty rigorous about always keeping my sessions under 200-250k tokens.
Mentally, I think about the weekly usage in terms of usage per day, so about 14% per day, which results in me not using that much early in the week so I can kinda "burn freely" later on. Which leads me to a spot where, usually on the final two days, I'm sorta thinking about how I can expend that usage I've "saved".
The 5-hour windows make this harder. Sometimes on the final day of the week I'm trying to get that 10% in every 5-hour window of my waking hours, and I HATE that. I wanna work when I am most productive, not around some ridiculous window of time. I don't wanna think "I am gonna be utilizing Claude the most around 11am, so I should send a dumb message to Haiku to get my 5-hour window started at 7:30am so I can have it roll over at 12:30."
So I'm happy about this change, sure. But it is 100% them creating a problem and pretending that some relief from that problem is them doing their users a favor. I understand they are doing it to lower peak-hours usage and all that; I still despise it.
Using Advisor [1], you can use Sonnet most of the time; Sonnet can hand off work it can't handle to Opus. When Opus is done, you automatically go back to Sonnet.
[1]: https://www.mindstudio.ai/blog/claude-code-advisor-strategy-...
So with stock Sonnet I get the chatty, confidently wrong Sonnet instead of a strict, crafted agent. Stock Opus is a lot more reasonable, and hands off simple tasks to crafted Sonnet agents with the stricter, less chatty workflows, so I guess I'm literally doing the opposite (closer to what that old article describes).
I hit my weekly limit around day 4, with 2 maxed out windows per day (and sometimes a bit of usage at night).
I completely understand why people would use Opus for everything, it’s much more thorough and effective. Sonnet as well, but on Pro it’s gonna be Haiku all the time.
I have a pretty nailed-down .claude/ where the goal is single sources of truth, so agent md files all reference the relevant files for whatever domain they are working within, with that domain's conventions, structure, etc. I think keeping this stuff up to date is a massive compounding context saving, and just better for performance, because it keeps all agents' context windows free of noise by helping them load in only what is actually needed.
I've never really messed with Haiku for anything besides absolute low-end repetitive tasks; it's usually an agent I have crafted for when I want it to generate a bunch of seed data or generic questions for tests or something similar. My assumption is that it would just be terrible, and even though it's super cheap, it still inevitably means bringing the final results back to the better models; if those aren't valuable tokens, then I'm wasting the Haiku tokens plus the handoff to the better models on work that will be repeated anyway.
20%, there are 5 work days in a week, not 7.
However you see it, it's an improvement for the consumer.
>The following three changes—all effective today—are aimed at improving the experience of using Claude for our most dedicated customers.
>First, we’re doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.
>Second, we’re removing the peak hours limit reduction on Claude Code for Pro and Max accounts.
>Third, we’re raising our API rate limits considerably for Claude Opus models,
Looks like Elon's finally giving up on xAI and just selling the compute.
I don't think that's certain yet, but I do think that the open-source models like Gemma and Qwen are getting so good so fast that even Anthropic has real risk around the long-term value of their models and tooling.
Basically, if I'm Anthropic or xAI, I try to get revenue whenever and wherever possible and see what sticks. There's no value in playing for monopolistic control when everything is so volatile.
I'd run agents consuming hundreds of millions of tokens for less than a hundred dollars.
But even then, I suspect their hands were tied in some areas because Elon had some expectations from his AI.
Meta engineers on the other hand, couldn't wait to jump ship. But that only reinforces the B team theory.
I'm just speculating, but a particularly killer offering Elon wouldn't be able to refuse would be if Anthropic agreed to give them some training data / technology.
Or is that actually his main motivation. Hard to know. Either way it's a win win win for him.
I guess losing a ton of money and then trying to get some of it back makes you a genius...
On the other hand, power and compute are limited. Ridiculous as orbital compute sounds, land/power on earth is not easily scalable. There are too many limiting factors, chief among which in the US is regulation. But in space, if you make one satellite work, you just get more resources and launch more. This also leads naturally to Tesla's plan for a chip fab.
So if you squint, Musk might not be that crazy.
-Elon
The scale is just mindboggling here. Are there any blog posts or anything discussing what kind of infrastructure is used for even just the inference side (nevermind the training) for SotA models like Opus? I would have thought it might be secret, but given that you can actually run the models yourself on AWS Bedrock doesn't that give an indication?
> It’s regulation with the utilities. There are ramp rates, there are all of these things that you’re supposed to do to not screw up the grid. Data centers have been in gross violation of that. When you think about what’s wrong with data centers, they have load volatility, which we just talked about, then they decide to power it with behind-the-meter natural gas generators. These natural gas generators, their shaft is supposed to last for seven years. It’s lasting 10 months because of all the cycling.
https://www.volts.wtf/p/doing-data-centers-the-not-dumb-way
On the compute infrastructure, there are standard NVIDIA reference designs like this:
https://www.nvidia.com/en-us/technologies/enterprise-referen...
I haven't bothered to look but I'd guess Mellanox GPU-to-GPU networks, and massive custom code for splitting tensors across GPUs, and for shuttling activations across GPU nodes.
That's not exactly how it works. Anthropic are hosting their models in AWS Bedrock as a managed service. Customers call those LLMs just like calling any other API. There's no visibility into what kind of AWS infrastructure is serving that API request.
The massive scale is all massively parallel: test-time compute for users, test time compute for RL rollouts (and probably increasingly environments for those rollouts), other synthetic data generation, research experiments, …
That’s just for the SpaceX part (over provisioning for grok, lol).
The Amazon and Google deals are each over an order of magnitude larger! Pretty wild indeed!
While this is good news, I'm not coming back. Anthropic just lost me with too many wrongs in too short of a time period.
Opus has been replaced with GPT 5.5, DeepSeek, Kimi, Qwen and they all allow me to use my own, single harness and switch models easily if any of them start treating me the same.
The only certainty is that you can swap models quickly and painlessly.
Having said that, Anthropic’s position is fully understandable, as Sam took a very large risk here, and OpenAI’s future is all but certain.
Yes.
To quote:
> Anthropic CEO Dario Amodei said his company tried to plan for 10-fold growth. But revenue and usage increased 80-fold in the first quarter on an annualized basis, which he says explains why it’s been so hard to keep up with demand.
> “That is the reason we have had difficulties with compute,” Amodei said Wednesday at his company’s developer conference in San Francisco. Amodei added that the company is “working as quickly as possible to provide more” capacity and will “pass that compute on to you as soon as we can.
https://www.cnbc.com/2026/05/06/anthropic-ceo-dario-amodei-s...
I think "scrambling" is a fair characterization of the CEO saying "we have had difficulties with compute" and "working as quickly as possible to provide more"
They've also signed new compute deals with Google and AWS recently.
Once we've gone through the AI equivalent of the dot.com crash, will Anthropic still be scrambling for more capacity, or will they have more than they can profitably use, like the dark fiber we were left with last time?
At the moment compute providers are charging more for outdated H100 capacity than when the H100s were new. That capacity is going to the smaller labs, not the frontier labs.
That hardware has already been depreciated financially, so even if all those small labs disappeared it wouldn't send compute providers bankrupt; they can just cut prices, and as long as they can charge more than electricity and maintenance, they'll keep them running.
The fine-print omission appears to be that weekly limits are not doubled. The progressive 5-hour rate-limit shrinking was indeed an efficiency blocker that finally convinced me to cancel, but only being able to get 4 full sessions a week as opposed to 8 doesn't compel me to resubscribe.
At this point it feels like, if you properly scope your work, open-weight LLMs are adequate.
https://en.wikipedia.org/wiki/Colossus_(supercomputer)#Envir...
Now I have to avoid Claude too.
That’s what virtue signaling is I guess - the action you’re taking is pointless, the only point is to tell everyone you’re taking it therefore feed the narrative forward?
The entire economy runs off gas turbines, yet this is the thing you boycott?
But more than that, the emissions generated by the Colossus data centers are far worse than typical combined-cycle gas plants or data centers that buy renewable: these turbines emit NOx, fine particulates, carbon monoxide, and formaldehyde into a population-dense area.
I thought people knew about this already. Post from last year: https://simonwillison.net/2025/Jun/12/xai-data-center/
Deciding not to spend money with a company you don't like is not pointless. The point is that you're not participating in something that you judge to be wrong.
The world is full of things I feel are wrong yet have near zero power to stop. That does not mean I should willingly support those things.
Hopefully Elon lets you into his glass bubble when the s*** hits the fan.
This is nothing like burning coal.
https://www.msn.com/en-in/news/world/zohran-mamdani-faces-ma...
A minor risk, taking out of the ground in a few hundred years what took 200 million years to put there?
Righteo, I guess I better suspend my white privilege.
SpaceX/xAI also has Colossus 2, with double or more the GPUs
Seems xAI will still be around
Certainly an interesting day for xAI.
He literally did a Nazi salute on stage, twice! Check the video, and tell me what you see.
edit: https://giphy.com/gifs/elon-musk-nazi-salute-8W0ItVv7T1kRdwb...
https://old.reddit.com/r/gifs/comments/1i7w4nz/comparison_of...
Could you join me in that statement?
300MW is peanuts compared to their multiple 50GW+ deals, to the point where you start to wonder why just 300MW makes enough of a difference in their capacity that they can increase limits this much. Also, why couldn't their many existing multi-billion-dollar deals allow them to expand capacity?
When you take this into account and then read their statement about orbital compute, it starts to smell quite fishy.
There aren’t that many 300MW+ datacenters in the world. Relative to the capacity Anthropic has online, it’s a lot, probably in the 20% range.
* Inference becomes cheap
  - Specialty accelerators hit the market and the race to the bottom begins
* Training remains expensive
  - This works out for Anthropic/OpenAI; they go into the business of training
* Models become rental units or purchasable assets that you run on inference hardware
  - Rent or own inference hardware
* Or you pay someone to do all of the above for you, at a premium
Groq (acqui-hired by Nvidia) came up with a different processor architecture: metric shit-tons of SRAM attached to a modest single-core deterministic processor. No HBM needed on this card, and 32x faster inference than today's best GPUs!
These LPUs are pretty useless for training, though, which is a problem for companies training models! Training is expensive, inference is cheap (someday, not now).
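A speedup of that order is at least plausible if you model batch-1 decode as purely memory-bandwidth-bound: generating each token requires streaming every weight once, so throughput is roughly bandwidth divided by model size. A minimal roofline sketch, with illustrative bandwidth and model-size numbers I've chosen for the example (not vendor-verified figures):

```python
def decode_tokens_per_s(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Roofline estimate for batch-1 autoregressive decode:
    every weight is read once per token, so token throughput is
    bandwidth / model size. Ignores KV cache, compute, and overlap."""
    return bandwidth_bytes_per_s / model_bytes

# Hypothetical 70B-parameter model in fp16 (2 bytes per parameter).
model = 70e9 * 2

hbm_gpu = decode_tokens_per_s(model, 3.35e12)  # ~3.35 TB/s HBM (H100-class, illustrative)
sram_rig = decode_tokens_per_s(model, 100e12)  # ~100 TB/s aggregate on-chip SRAM (illustrative)

print(f"HBM GPU : {hbm_gpu:6.1f} tok/s")
print(f"SRAM rig: {sram_rig:6.1f} tok/s ({sram_rig / hbm_gpu:.0f}x)")
```

With these assumed numbers the SRAM design comes out roughly 30x faster on decode, which is the same ballpark as the 32x claim; the exact multiple depends entirely on the real aggregate bandwidths.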
There's also a Canadian company that _literally burned the model as a silicon mask_ on a chip. It's unbelievably (1000x) fast, but not flexible of course: https://chatjimmy.ai
Today they say this, then tomorrow they'll silently reduce limits and argue with anyone who calls them on it.
SpaceX is uniquely positioned to crush the rest of the world combined when it comes to orbital data centers.
Sure, as long as your data center is 3x4m, the size of a Starlink satellite (think Spinal Tap Stonehenge). Anything bigger than that (i.e. actual data-center sized) is going to require some assembly.
I've heard TeslaBot is good at folding shirts, and serving drinks (at least while teleoperated) - perhaps it can help?
In any case, it appears that Musk can't even generate enough AI demand to utilize his own ground based data center. Maybe he can add "data centers in space" to part of his Mars colonization plan. Maybe have Tesla Bots driving around in Cybertrucks too ?
1. GPUs create heat. There's no efficient way to get rid of heat in space (vacuum is an insulator). 2. Die shrinks make modern processors and memory more and more susceptible to radiation; shielding is possible, but adds mass, which adds cost.
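Point 1 can be made concrete with the Stefan-Boltzmann law: in vacuum the only way to shed heat is radiation, P = εσAT⁴, so radiator area for a given thermal load grows quickly. A rough sketch, with assumed emissivity and radiator temperature (real radiators do worse, since they also absorb sunlight and Earth albedo):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float = 300.0,
                     emissivity: float = 0.9) -> float:
    """Ideal one-sided radiator area needed to reject `power_w`
    at radiator temperature `temp_k` into deep space (~0 K background)."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# A modest 1 MW compute pod at a 300 K radiator temperature:
print(f"{radiator_area_m2(1e6):.0f} m^2")  # roughly 2400 m^2 of radiator
```

That's several tennis courts of radiator per megawatt under generous assumptions, before any shielding mass, which is why the heat argument is the one orbital-datacenter pitches have to answer first.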
So, they handed over all of their data center to Anthropic; Grok wasn't using it much?
xAI has added about 500MW of Nvidia GPU capacity since ~April and will add another 500MW before the end of the year, totaling about 2GW.
[0] https://wccftech.com/xai-using-just-11-percent-gpus-while-me...
Staying with Claude is like going back to the restaurant where you got food poisoning: you kinda get what you deserve next time you get sick.
To me this is the mind-bending piece. It's not like a datacenter has a plug-and-play interface with a well-written spec and an international standard.
On the plus-side, it told me how much cheaper Deepseek is and that it's on parity for reverse engineering work.
Ok I guess, this was a bit of a hassle, but you're not increasing my weekly allowance, you're just not annoying me as often.
> Second, we’re removing the peak hours limit reduction on Claude Code for Pro and Max accounts.
It wasn't a limit reduction (as in, I didn't have a lower 5-hour limit), it was "tokens are more expensive" and it ate my weekly limits faster. This should never have been instituted to begin with.
> Third, we’re raising our API rate limits considerably for Claude Opus models, as shown in the table below:
Meh.
This is why I don't care for all the "it's a subscription, you're free to not use it!" arguments here. It's not an all-you-can-eat subscription with some generous fair use limits, it's a "X tokens per month for $Y", and they keep lowering the X unilaterally and in secret.
If you think that's fine, I have access to an all-you-can eat buffet to sell you for only $2000 a year, it's a steal.
This might be a good time to drop Claude.
1. https://wccftech.com/xai-using-just-11-percent-gpus-while-me...
I have got xAI blocked in OpenRouter as I do not want to support any business controlled by Musk.
My first impression of this post was "what the hell are they thinking?", but actually it seems like a decent move by them.
They basically made it so that normal users can better utilize their plan while not benefitting the backgroundagentmaxxers and stealth openclaw abusers in the ranks of their subscription audience. Making their plan more attractive to the people they actually want to sell to.
Hopefully this leads to a loosening of harness restrictions later.
*Buys compute from actual fascist Elon Musk in a failing democracy during the death throes of late-stage capitalism.
I'm starting to think the problem with "ethical" AI was always that no company could ever act ethically in the long term. They are and always will be a cancer to society and AI will only serve to amplify this further.
I'm posting immediately after cancelling my claude subscriptions.
CEO that accelerated space industry by 10+ years
CEO that accelerated HCI industry by 10 years
so what?
Nobody is 100% evil
Musk helped dismantle USAID, which led to many people's deaths.
China was doing this regardless. It was a national security issue for them.
Which is kind of like the exception that proves the rule hahaha
People haven't been saying "GabeN can do no wrong" for awhile.
https://theconversation.com/a-million-new-spacex-satellites-...
FTA: "SpaceX has done a lot of engineering work to make its Starlink satellites fainter. They are still too bright for research astronomy, but thanks to new coatings, their brightness has not increased dramatically even as SpaceX has launched larger and larger satellites."
Also, who said pollution has to be harmful? Light pollution is a thing, and this is the same class of problem.
Why dont they dip the satellites in vantablack to make them truly invisible?
Light pollution is borderline, but not actually acceptable, as it does cause harm. It disrupts sleep quality and sleeping patterns, and generally affects plants and animals negatively.
> It’s a public resource that a private company is stealing from all of us
Just because the government can't accomplish what private industry is doing doesn't mean it's "stealing".
Elon doesn't figure out anything. He pays people to do it and then tries to take the credit.
what are we even talking about
But, if you will pardon a little rant: I hate the idea of subscription inference plans, and also 'dumping' by subsidizing non-profitable products. Inference should be pay-as-you-go, and dumping should be illegal.
So you can put Anthropic on your list of companies that like to talk big about safety, but when the rubber hits the road, profits matter more than safety.
"The company began operations at its first site, Colossus 1, in June of 2024 and used as many as 35 unpermitted gas turbines to power the facility. Despite receiving intense public pushback over the use of illegal turbines and the lack of public input and transparency around Colossus 1, xAI officials said it planned on “copying and pasting” its unlawful turbine strategy to power Colossus 2."
"xAI removed its unpermitted turbines at the Colossus 1 data center after SELC, on behalf of the NAACP, sent a notice of intent to sue under the Clean Air Act. The company obtained permits for its remaining 15 turbines."
[0] https://www.selc.org/news/xai-built-an-illegal-power-plant-t...
CO2 is bad for us long term. But there are plenty of other nasty combustion products that are extremely bad for humans in the short term. Which is why we have pollution and air quality regulations.
Portable generators don’t meet any of the stronger requirements that utility-scale systems have to meet, because it’s assumed they’re only operated in small numbers for short periods of time. They’re not designed to be safe to operate in large numbers over long periods of time in the same place. For that you need proper pollution controls.
[0] https://techcrunch.com/2025/06/18/xai-is-facing-a-lawsuit-fo...
[1] https://www.theguardian.com/technology/2026/jan/15/elon-musk...
[2] https://www.selc.org/news/xai-built-an-illegal-power-plant-t...
Not sure how much it hurts then compared to blocking openclaw though.
It is similar to the xAI gas turbines in that it tarnished their image - at least amongst those naive people who saw them as a plucky startup rather than a profit seeking corporation who don't like competition.
I agree with you that the ethics are very different.
sources: https://www.tba.org/?pg=Hastings2025AIX (Tech, Toxins, and Memphis: Evaluating the Environmental Footprint of the xAI Facility)
> "The xAI facility has already deployed *nearly 20 gas turbines, including four large units with a combined capacity of 100MW*, to power its AI system Grok... There are plans to add *15 more gas turbines between June 2025 and June 2030*, and the turbine application projects *annual emissions of around 11.51 tons of hazardous air pollutants*."
> "it is currently *running gas turbines without the necessary permits from the Shelby County Health Department*"
> "findings from the Southern Environmental Law Center indicate that the facility has 'installed' gas turbines. This suggests that new industrial systems are in place and that *xAI is obligated to comply with the new NSPS* [New Source Performance Standards] *to avoid violating the Clean Air Act*"
> "NSPS are authorized under *Section 211 of the Clean Air Act*... All new sources must comply with the *Best System of Emission Reduction (BSER)*, which mandates the use of state-of-the-art technology to minimize air pollutants."
> "there is a history of Elon Musk's companies, such as *SpaceX and the Boring Company, being fined thousands of dollars for violating environmental law* to circumvent regulation"
1. https://www.linkedin.com/pulse/nox-reduction-technologies-ga...
2. PDF: https://www.ifc.org/content/dam/ifc/doc/1990/handbook-nitrog...
Just the other day we had news that a California environmental protection agency denied SpaceX permits for political reasons rather than by following objective rules, as ruled by a judge. So the fact that some permits were not issued doesn't tell me anything.
Edit: I re-read https://www.tba.org/?pg=Hastings2025AIX and yes, it seems that xAI never applied for permits related to the gas turbines as they're making the argument that the permits aren't required.
For some facts, the colossus data center is next-door to a steel mill and city sewage treatment plant, a vacated gigawatt scale coal power plant complete with nasty Coal Ash Ponds, and a brand new combined cycle gas power plant. The area is at the far edge of Memphis city limits up against the river, in a heavy industrial area. There’s even a major Valero oil refinery right there too.
Memphis has trillions and trillions of gallons of water, both in gigantic underground aquifers and in the Mississippi River itself. xAI has agreed to shed load in case of impending brownouts. The fear mongering is out of control.
They had a ton of portable turbines that were operating under a temporary permit, and that was the disputed part. However, the blame should rest with TVA and/or Memphis Light, Gas and Water for not being able to run an appropriate high-voltage connection, less than 1 mile from the plant to the data center, in a timely manner. And what difference does it make whether the natural gas is burned at a TVA plant or in very similar gas turbines on site, in the same neighborhood? Environmental groups and the county health department tried suing and were struck down; xAI works closely with the state, but the whining continues. xAI is paying gargantuan taxes to the city, with no tax breaks.
These environmental groups do not care about the nasty unregulated cars burning oil that I have to breathe every day. We terminated our motor vehicle inspection requirements due to the “burden” they place on the low-income population. So they can burn their oil in my face, but then these groups sue to stop a state-of-the-art turbine in an industrial area? There are junkyards in these same areas that burn their piles of waste tires every year or so, “on accident”. No lawsuits there either.
[0] https://www.datacenterknowledge.com/regulations/how-are-data...
> Thus, when it comes to income tax, at least, many data centers – especially hyperscale data centers owned by large companies – don’t generate tax revenue because they don’t generate direct operating income.
At the end of the day, people are paying money to utilize the servers within the datacenter. That money is revenue. That revenue ought to be taxed by the state.
For all the big talk from U.S.-Americans on European 'overregulation', they sure seem to have much more dystopian societal failure modes materialize.
Qwen-coder-next is considered SOTA for things you could actually run locally.
The plan was to develop a recycled-wastewater facility, which will pull arsenic from contaminated shallow aquifers and pump that into the drinking water supply's aquifers.
Source: https://www.datacenterknowledge.com/sustainability/4-strateg...