No Creds Notes #18
Humanoid skeptics, HALEU gaps, and grid mechanics
👋 Hey!
Hope you all enjoyed my piece earlier this week on the importance of pursuing deep expertise! Check it out below in case you missed it:
Outside of Uncredentialed and my job, I got a nasty sunburn sitting by the pool (which sucks, but it's also kinda cool to be in a place where I can get sunburned at home in March) and then, on Tuesday night, I finally watched Dune 2! Seeing that the Dune 3 trailer had come out put the pressure on. Really enjoyed it, though, so check out the Dune movies if you haven't yet.
Anyways, I’ve got 3 stories for you this week: the anti-humanoid coalition is adding serious voices, advanced nuclear has a fuel supply chain problem that isn’t getting enough attention, and NERC wants data centers to become proper grid citizens. It’s time.
But first! If someone sent you this post, subscribe here to cut out the middle man! And to returning readers, use the share button below to become the middle man that gets cut out! One more subscriber means I’ll hit 200!
Two Legs Bad
The anti-humanoid case is picking up some credible voices. Late last week, Mark Cuban went on TBPN with a pretty bold take: humanoid robots fail as a category within 5-10 years. He's not alone. MIT robotics legend and iRobot co-founder Rodney Brooks has been making the same argument from the technical side for years. This week, even Morgan Stanley added a cautionary note drawing a line between "robots that dance" and "robots that do useful work."
The bull side is equally loud. Musk is promising Tesla Optimus at scale. Figure and Physical Intelligence are raising enormous rounds on the premise that a humanoid form factor is a universal key, effectively arguing that human workplaces are already built for human bodies, so a general-purpose humanoid can slot into any job without redesigning the environment around it.
The bear case is that the universal key framing is a category error. To the bears, the most capable industrial robots in the world don't look like people. They look like the autonomous mobile robot (AMR) wheeling through an Amazon fulfillment center, the surgical arm with 6 degrees of freedom built specifically for a laparoscopic port, the delta robot running 150 picks per minute above a conveyor line. These machines are co-designed with their operating environments, allowing the system to run cheaper, faster, and more reliably than any humanoid forced to navigate a world it wasn't shaped for.
Brooks’ core argument is that forcing human form on a robot adds constraints that don’t add value. If you’re building a warehouse, the shelves don’t need to be at human-standing height. If you’re doing automotive assembly, the work cell doesn’t need to accommodate bipedal locomotion. The environments we’ve optimized for humans exist because humans are doing the work, not because that’s the most efficient geometry for the task.
I wrote about this in Human(oid) in the Loop and still think the right place to land is the middle. Unstructured environments like residential care and construction sites are places where adapting the environment isn't really an option, so humanoids should win there. The issue is, if those are the only spaces they win, the market is much narrower than the bulls are pricing in.
Most importantly, I think Cuban is overoptimistic in expecting entire environments to be adapted to non-humanoid robotics within the next decade. Sure, economic incentives will drive most industrial use cases to rethink, rebuild, and adapt to the new efficient equilibrium, but the domestic sphere will be much slower to follow. Especially at the (relatively) affordable price point many humanoids are expected to sell for (~$20k-$30k, comparable to a new car), I expect consumers will be dramatically slower to adapt their environments than to just buy a humanoid that can work in the environment they already have.
Ultimately, I believe non-humanoids will come to dominate the business side of robotics, while humanoids will win the consumer side.
Running on (HALEU) Fumes
Advanced nuclear is having a moment. Reactor designs are getting certified, sites are being scoped, capital is flowing into the sector, and uncredentialed writers are sharing 101 guides. Spend a little more time learning about the space, though, and you're sure to hear about a supply chain problem that's often glossed over in headlines: the fuel.
Most next-generation reactor designs, including Oklo's Aurora and Kairos's fluoride salt-cooled reactor, run on HALEU (High-Assay Low-Enriched Uranium), enriched to between 5% and 20% U-235, with most designs targeting the top of that band. Conventional light-water reactors run on fuel enriched to roughly 3-5%. That higher enrichment is what makes advanced reactors commercially attractive, enabling smaller, more efficient cores. It also means the entire existing commercial fuel supply chain essentially doesn't apply.
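To put the enrichment gap in numbers, here's a back-of-the-envelope sketch using the standard separative work unit (SWU) formulas. The natural-uranium feed assay (0.711%) and the tails assay (0.25%, a typical commercial value) are assumptions on my part, not figures from any specific enricher:

```python
# Rough SWU comparison: standard LEU vs. HALEU product.
# Textbook separative-work formulas; tails assay of 0.25% is an assumed,
# typical commercial value, not a figure from any specific enricher.
from math import log

def value(x):
    """Separative potential of material at U-235 mass fraction x."""
    return (2 * x - 1) * log(x / (1 - x))

def swu_per_kg(product_assay, feed_assay=0.00711, tails_assay=0.0025):
    """SWU needed per kg of product at the given enrichment level."""
    feed = (product_assay - tails_assay) / (feed_assay - tails_assay)  # kg feed per kg product
    tails = feed - 1.0                                                 # kg tails per kg product
    return value(product_assay) + tails * value(tails_assay) - feed * value(feed_assay)

leu = swu_per_kg(0.0495)    # ~5%, typical light-water reactor fuel
haleu = swu_per_kg(0.1975)  # ~19.75%, the top of the HALEU band
print(f"LEU:   {leu:.1f} SWU/kg")
print(f"HALEU: {haleu:.1f} SWU/kg")  # roughly 5x the separative work per kg
```

Each kilogram of HALEU takes roughly five times the separative work (and feed material) of a kilogram of conventional LEU, which is part of why you can't just repurpose existing enrichment capacity at the margins.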
HALEU requires specialized equipment for enrichment, different transport containers (higher enrichment demands tighter criticality controls), and different fabrication facilities. Commercial infrastructure for all 3 doesn’t exist at scale.
Centrus Energy is currently the only US company licensed to enrich to HALEU grade, running a demonstration cascade at Piketon, Ohio. Demonstration-scale output is not commercial-scale supply. There are no HALEU transport containers in mass production. There are no certified commercial-scale fabrication facilities. The whole chain needs to be built from scratch because, until now, there were no commercial customers to sell fuel to, and there could be no commercial customers because the fuel didn't exist. Classic chicken-and-egg.
Earlier this month we took a step in the right direction when Oklo signed a JV with Centrus to produce HALEU specifically for Oklo’s Aurora reactors. 3 days prior, the DOE announced a restart of HALEU production at the Savannah River Site, marking another concrete step. Commercial scale is still years away, though, and the deployment timelines being discussed in advanced nuclear assume fuel supply catches up.
There’s no ribbon-cutting moment when a transport container gets its regulatory approval, so this bottleneck won’t get the same coverage that reactor design milestones do, but the Oklo-Centrus JV and Savannah River restart are truly exciting steps towards closing the current gap. Time will tell if they can move fast enough to meet the aggressive timelines of advanced reactor projects.
Too Big to Unplug
NERC (North American Electric Reliability Corporation), which sets reliability standards for the North American grid, is proposing to formally register large data centers as Registered Entities, giving grid operators visibility into when they connect to and disconnect from the grid.
The proposal responds to a grid mechanics problem that most people haven’t thought through.
I think people are pretty familiar with the opposite of this problem. Whether due to maintenance, nature, or anything else, a power plant goes offline, generation drops below demand, frequency falls, and automatic responses ramp up reserves. The grid was engineered to handle this; generators carry mandatory frequency response obligations that make it a non-issue.
The new problem is the reverse. Generation falling off is manageable. Demand falling off, suddenly, unexpectedly, and at scale, is not.
A hyperscale data center campus draws 1,500 MW under normal operations. When a grid disturbance hits, its uninterruptible power supply (UPS) systems automatically switch to battery and disconnect from the grid, doing exactly what they’re designed to do to protect equipment from unstable energy supply. 1,500 MW of demand disappears in milliseconds. Now supply exceeds demand, frequency overshoots, and neighboring generators start tripping their overfrequency protection relays. A data center acting rationally to protect its own equipment can amplify a grid event rather than absorb it.
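The mechanics above can be sketched with a first-order swing-equation model. This is an illustration, not a real grid study: the system size (100 GW online), inertia constant (H = 4 s), and governor droop (5%) are all assumed round numbers:

```python
# Illustrative sketch (not a real grid model): frequency response of a large
# interconnection to a sudden 1,500 MW loss of demand. All system parameters
# below are assumptions chosen for illustration.

F0 = 60.0         # nominal frequency, Hz
H = 4.0           # aggregate inertia constant, s (assumed)
S_BASE = 100_000  # online generation, MW (assumed ~100 GW interconnection)
DROOP = 0.05      # 5% governor droop (assumed)

def simulate_load_loss(dp_mw, t_end=5.0, dt=0.001):
    """Euler-integrate the frequency deviation after dp_mw of demand vanishes."""
    df = 0.0  # deviation from 60 Hz
    for _ in range(int(t_end / dt)):
        governor = (S_BASE / DROOP) * (df / F0)  # primary frequency response, MW
        imbalance = dp_mw - governor             # surplus generation, MW
        df += dt * F0 * imbalance / (2 * H * S_BASE)
    return df

rocof = F0 * 1500 / (2 * H * S_BASE)  # initial rate of change of frequency
df_settled = simulate_load_loss(1500)
print(f"initial ROCOF: {rocof:.3f} Hz/s upward")
print(f"settles near 60 + {df_settled:.3f} Hz")
```

Under these assumptions one campus tripping offline pushes frequency up at roughly a tenth of a hertz per second before governors catch it. The key sensitivity is the denominator: on a smaller or lower-inertia system, or with several campuses tripping on the same disturbance, the overshoot grows fast, which is exactly the scenario NERC wants visibility into.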
Campuses at 1,500 MW are no longer hypothetical. Several hyperscale facilities are at or approaching that range, and the pipeline of 1-3 GW campuses is growing. Each one is a potential frequency event that grid operators currently have no visibility into until the load disappears from aggregate figures.
Formal NERC registration would require data centers above a threshold to share load profiles with operators and participate in coordinated response programs during stress events. This added insight should help grid operators smoothly manage our infrastructure and prevent data-center-driven surprises from amplifying a crisis.
No Creds Reads
Sadly, Niko McCarty announced that Asimov Press is going on hiatus
Sam Boboev shared on the challenges and promise of agentic AI in fighting financial fraud
Noah Smith chatted with Claude
I finally found time to read Packy McCormick’s deep dive co-essay on World Models
If you enjoyed this post, share it with a friend! And if you haven’t yet, subscribe below!



