No Creds Notes #16
Brains-as-a-Chip, Fusion, and Data Centers!
👋 Hey!
Three fun ones this week! We’re going from brain-like chips a couple weeks ago to brains-as-chips today, along with yet another exciting breakthrough in fusion and a new entrant to the data center tech stack. Let’s gooooo!
If this is your first time reading Uncredentialed, welcome! If you want to be a captive audience to my meanderings on tech, startups, and strategy, you should subscribe below! My lawyer friend said he found it useful once, so you might even enjoy it.
Your Brain on DOOM
A few weeks ago I wrote about neuromorphic chips that are designed to behave like a brain, but still use silicon architecture. Cortical Labs is working on a sort of inverse of this, putting actual brain matter onto chips.
Their CL1 chip grows 200,000 human neurons derived from stem cells (typically converted from donated skin or blood samples) on top of a high-density electrode array. The chip sends electrical signals to the neurons; the neurons fire back, engaging in a two-way conversation at sub-millisecond latency.
This week, an independent researcher used Cortical Labs’ Python API to teach those neurons to play Doom. Instead of using the neural net/backpropagation/gradient descent/etc. techniques we’ve all heard plenty about in the age of AI, the neurons learned the same way your brain does: synaptic plasticity. When the character moved in a useful direction, the neurons got positive feedback. They adjusted based on this feedback and gradually figured out how to play the game.
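The feedback loop is conceptually simple, even if the biology isn't. Here's a toy Python sketch of the idea — the `NeuronArray` class is a software stand-in I made up, not Cortical Labs' actual API, and real synaptic plasticity is far messier than a weight update:

```python
import random

random.seed(0)  # deterministic, for the sake of the example

class NeuronArray:
    """Toy software stand-in for the neuron array: firing biases drift
    toward whatever was recently rewarded (a crude caricature of
    synaptic plasticity -- NOT Cortical Labs' real API)."""

    def __init__(self, n_actions):
        self.weights = [1.0] * n_actions

    def act(self):
        # Pick an action in proportion to current firing biases
        return random.choices(range(len(self.weights)), weights=self.weights)[0]

    def feedback(self, action, reward):
        # Rewarded pathways strengthen; punished ones weaken (floored at 0.1)
        self.weights[action] = max(0.1, self.weights[action] + reward)

ACTIONS = ["forward", "turn_left", "turn_right", "shoot"]
neurons = NeuronArray(len(ACTIONS))

for step in range(1000):
    a = neurons.act()
    # Hypothetical reward: +1 when the move helped, a small penalty otherwise
    reward = 1.0 if ACTIONS[a] == "forward" else -0.05
    neurons.feedback(a, reward)

# After training, "forward" dominates the array's behavior
best = max(range(len(ACTIONS)), key=lambda i: neurons.weights[i])
print(ACTIONS[best])
```

No gradients, no backprop — just "that worked, do more of it," which is roughly the bet Cortical Labs is making with wetware.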
To be clear, they kinda suck at it. Cortical Labs’ chief scientist admitted they “play like a beginner who’s never seen a computer.” Critiques aside though, they show legitimate evidence of seeking out enemies, shooting, and intentionally navigating, which is pretty cool considering it’s a literal brain on a chip.
Where things get really interesting, though, is energy. Each CL1 rack draws under 1 kilowatt, compared to the tens to hundreds of kilowatts drawn by a modern high-density GPU rack. The human brain runs on roughly 20 watts (less than a light bulb) by co-locating memory and processing, firing only when there's something new to process, and learning from feedback rather than brute-forcing through data. If you could scale biological compute and keep the cells alive reliably, the power math gets extremely interesting.
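To make "interesting" concrete, here's the back-of-envelope math (the GPU rack figure is my assumption for a dense modern AI rack, not a number from Cortical Labs):

```python
# Back-of-envelope power comparison (illustrative figures only)
cl1_rack_watts = 1_000      # Cortical Labs: each CL1 rack draws under 1 kW
gpu_rack_watts = 120_000    # assumed ~120 kW for a dense modern GPU rack
brain_watts = 20            # rough estimate for a human brain

print(gpu_rack_watts / cl1_rack_watts)  # GPU rack vs. CL1 rack -> 120.0
print(gpu_rack_watts / brain_watts)     # GPU rack vs. one human brain -> 6000.0
```

Two orders of magnitude per rack, if (big if) the compute ever gets competitive.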
Near-term, Cortical Labs is most excited about physical AI; think drones and robots operating in unpredictable environments, with the reasoning being that those are exactly the conditions biological neural systems evolved for. They’ve already built a prototype biological data center in Melbourne (120 CL1 units) and are working on a Singapore facility targeting ~1,000 units by this fall.
Worth flagging: keeping cells alive at scale is still a significant engineering challenge, and the underlying research hasn’t completed full peer review. This isn’t replacing GPUs anytime soon, but it is a novel compute paradigm that I’m excited to watch evolve in the coming years.
Also, the idea of it being an actual brain sitting on a server rack doing our thinking for us feels like actual science fiction. And a little Matrix-y. Will be interesting to see where public opinion falls on it.
Look EAST For Fusion News
If you read “A Sun in a Box” last month, you know how a tokamak works: plasma heated to over 100 million degrees Celsius, suspended in midair by magnetic fields, never allowed to touch the reactor walls. The second it touches those walls, it cools, ending the reaction in an event called a disruption.
Until recently, physicists have operated under the assumption of a hard density ceiling (often called the Greenwald limit). The theory went that if you packed too many particles into the plasma, turbulence at the plasma-wall boundary (the scrape-off layer) would become uncontrollable, leading to lots of disruptions. Conventional wisdom said that once you hit that ceiling, energy output would crater, so there was no point pushing past it.
China’s EAST (Experimental Advanced Superconducting Tokamak) just proved that ceiling isn’t fixed. By carefully managing what happens at the scrape-off layer (how the plasma interacts with the wall material), the team sustained stable plasma at densities previously considered disruption territory.
Why does density matter? More particles in the plasma means more fusion reactions per unit volume. Higher density is one of the key levers toward Q > 1 (net energy output) and toward the economics of a commercial plant. Reactor designers have been working around an assumption that turns out to be conditional, not absolute.
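To see why density is such a big lever, here's the textbook power-density scaling in a quick sketch (the reactivity value is an order-of-magnitude figure for D-T fuel at fusion-relevant temperatures, not an EAST measurement):

```python
# Fusion reactions per unit volume scale with the SQUARE of density.
# For a 50/50 deuterium-tritium plasma: rate = (n/2)**2 * <sigma*v>
sigma_v = 1.1e-22                  # assumed D-T reactivity (m^3/s), order of magnitude
E_PER_FUSION = 17.6e6 * 1.602e-19  # joules released per D-T fusion

def fusion_power_density(n):
    """Fusion power (W/m^3) for total ion density n (ions per m^3)."""
    rate = (n / 2) ** 2 * sigma_v  # reactions per m^3 per second
    return rate * E_PER_FUSION

# Doubling the density quadruples the power density:
p1 = fusion_power_density(1e20)
p2 = fusion_power_density(2e20)
print(round(p2 / p1, 1))  # 4.0
```

That quadratic payoff is why a movable density ceiling matters so much for reactor economics.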
EAST has been running since 2006. It’s a government research machine, not a VC-backed startup racing toward a Microsoft power contract (cough, cough Helion cough, cough). The physics it discovers feeds into how everyone models plasma behavior, benefiting the entire global fusion field.
AI Networking Makes its Next Hop
In 2024, Anshul Sadana left his job as COO of Arista Networks. Arista builds the switching equipment that runs inside AI data centers (as a reminder, switching equipment is the hardware that routes data between thousands of GPUs). Arista is one of the leaders in the space, arguably the leader, making it all the more interesting to see an exec like Sadana leave to start a rival.
His company, Nexthop AI, just raised $500 million at a $4.2 billion valuation, led by Lightspeed and Andreessen Horowitz. The pitch: the switching architecture that works for today’s AI clusters won’t work for tomorrow’s.
Here’s the problem. AI clusters today connect tens of thousands of GPUs and the direction they’re heading is hundreds of thousands. As clusters scale, the spine (the top-level switching layer that ties all the pods and racks together) becomes a ceiling. Traditional spine architecture is built around an expensive chassis that uses proprietary software and is not designed for the traffic patterns that large-scale AI training generates.
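To see why the spine becomes a ceiling, here's a rough sketch of the capacity math for a classic non-blocking two-tier leaf-spine fabric (idealized on purpose — real AI fabrics oversubscribe, add tiers, and rail-optimize):

```python
# Idealized two-tier leaf-spine: each leaf switch splits its ports half
# down (to servers / GPU NICs) and half up (one uplink to each spine).
# The switch radix caps how big the whole fabric can get.

def max_hosts(radix):
    """Max hosts a non-blocking 2-tier leaf-spine supports when every
    switch has `radix` ports."""
    max_leaves = radix       # each spine port connects to one leaf
    down_ports = radix // 2  # half of each leaf's ports face hosts
    return max_leaves * down_ports

print(max_hosts(64))   # 64-port switches  -> 2048 hosts
print(max_hosts(128))  # 128-port switches -> 8192 hosts
```

Past that point you need bigger radix, more tiers, or a beefier spine — which is exactly the layer Nexthop is attacking.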
Nexthop’s answer is to blow up the chassis. Their Disaggregated Spine breaks the traditional chassis into purpose-built functional tiers. Each layer is independently designed, swappable, and customizable. It runs on open-source network operating systems (SONiC and FBOSS) instead of vendor-locked software, so hyperscalers can tune it themselves rather than depending on a vendor’s roadmap. Nexthop claims this cuts cost and power by around 30% versus legacy systems.
The parallel is whitebox servers. For years, companies bought branded servers from Dell and HP. Hyperscalers figured out they could strip out the proprietary software layer, run their own OS on commodity hardware, and get more control at lower cost. That same logic has been steadily moving up the stack. Nexthop is betting it reaches spine switching next.
Whether they can pull this off, displacing Cisco, Arista, and HPE in one of the most demanding networking environments in existence, is of course not a sure thing, but $650 billion in projected hyperscaler capex in 2026 should create lots of opportunities for exciting innovations like Nexthop’s to prove themselves.
No Creds Reading List
Figure Helix 02 cleans an entire living room autonomously
James Wang shared an intro to AI agents for non-technical people. If you haven’t experimented with this stuff yet, PLEASE give it a read
Brian Naughton wrote a primer on AI antibody design
Max Bray put together a thesis on turning the UK into a nation of entrepreneurs
If you enjoyed this, send it to your friend. Or parent or sibling. Or anyone. Let’s get some eyeballs on it! And if you haven’t already, consider subscribing!


