No Creds Notes #19
Mythos and Memory Problems
Hey!
I've never been a big golf guy, but for some reason I'm pretty excited for the Masters this year. That is all, thanks for listening to my TED Talk. Let's get into it!
If this is your first time reading Uncredentialed, welcome! Subscribe today to receive my semi-weekly thoughts on tech, startups, and strategy!
Mythical Mythos
On Tuesday, Anthropic launched Claude Mythos alongside Project Glasswing, a structured program that limits model access to 12 founding partners, including AWS, Apple, CrowdStrike, Google, JPMorgan, Microsoft, and NVIDIA, with no public access at all.
The reason for the slow release is concern about cybersecurity. Against Firefox's JS layer, Mythos produced 181 working exploits; for context, Opus 4.6, the engine behind Claude Code that everyone's been raving about, produced 2. Entirely on its own, without any human guidance, Mythos also discovered a 27-year-old bug in OpenBSD, an operating system famous for being "unhackable," that would let an attacker shut down your computer, and an even more dangerous 17-year-old FreeBSD remote code execution bug that would let an attacker gain root access (the ability to run their own commands on your machine). These weren't just recently vibe-coded errors, either. Anthropic's red team (the people tasked with trying to exploit security systems so vulnerabilities can be identified) found "thousands of zero-day vulnerabilities across critical infrastructure," with 99%+ not yet patched. And Cybench, the public benchmark that has been the field's standard for this kind of evaluation, was effectively solved by Mythos, forcing its retirement.
Beyond cybersecurity, the model scored 97.6% on math olympiad problems and 93.9% on SWE-bench Verified, versus Opus 4.6's 42.3% and 80.9%, respectively. The model reportedly runs on ~10 trillion parameters, which would make it the first publicly acknowledged 10T model.
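One way to read those benchmark numbers: headline deltas understate progress near a benchmark's ceiling, so it can help to look at how much the error rate (the share of problems still failed) shrank. A quick back-of-the-envelope sketch using the scores above:

```python
def error_rate_reduction(old_score: float, new_score: float) -> float:
    """How many times smaller the error rate (100 - score) became."""
    return (100 - old_score) / (100 - new_score)

# Scores from the post: Opus 4.6 -> Mythos
swe = error_rate_reduction(80.9, 93.9)      # SWE-bench Verified
olympiad = error_rate_reduction(42.3, 97.6)  # math olympiad

print(f"SWE-bench error rate shrank {swe:.1f}x")       # ~3.1x fewer failures
print(f"Olympiad error rate shrank {olympiad:.1f}x")   # ~24x fewer failures
```

By that lens, the "13 point" SWE-bench jump is a ~3x reduction in failed tasks, and the olympiad jump is a ~24x reduction.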
Even more interesting (concerning? exciting?), in testing, Mythos escaped a sandboxed terminal and, without being asked, posted the exploit to public-facing websites. In rare instances when it accidentally obtained a test answer through a prohibited method, it re-solved the problem and deliberately adjusted its final submission to avoid appearing too accurate, essentially trying to hide that it had cheated. During a coding task, it published internal work as a public GitHub gist. Anthropic describes these as low-rate events and still published the 244 pages documenting them.
This is the first time we know of a frontier lab building a model and concluding it's too dangerous to release commercially. Obviously, there's a world where this is more a marketing gimmick than anything else, but given the big names Anthropic is involving, I wouldn't put my money on it. If the danger is real, we're rapidly heading toward an interesting dilemma. Everything from the future of entrepreneurship (if only the biggest and best companies are allowed to access models that are 10x better, how can anyone else compete?) to geopolitics (China isn't slowing down investment anytime soon) is at stake right now.
Nvisible Nvidia Chips
Nvidia's Vera Rubin, the successor to Blackwell, just had its 2026 production target cut from 2M to 1.5M units, with rack shipment forecasts roughly halved to around 6,000 units.
In Q3 2025, Nvidia raised its per-pin speed requirement for Vera Rubin to over 11 Gbps. HBM4, the high-bandwidth memory stacked directly alongside the GPU die in these chips (essentially the chip's short-term working memory), requires incredibly precise manufacturing, and the updated spec was aggressive enough that all 3 manufacturers had to go back and redesign. Micron, the only major US memory manufacturer, couldn't hit it and is out of the flagship Vera Rubin tier entirely. Samsung and SK Hynix, both South Korean, now split top-tier HBM4 supply roughly 70/30. Micron did enter high-volume HBM4 production this March with real improvements (2.3x the bandwidth of HBM3E, 20% better power efficiency), but it's relegated to the mid-tier Rubin CPX platform.
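Why does a per-pin spec bump matter so much? HBM bandwidth scales linearly with pin speed: JEDEC's HBM4 standard specifies a 2048-bit data interface per memory stack, so pushing every pin past 11 Gbps multiplies out fast. A rough sketch (the stack count per GPU below is a hypothetical for illustration, not a Vera Rubin spec):

```python
# HBM bandwidth = interface width x per-pin speed, summed over stacks.
PINS_PER_STACK = 2048   # HBM4 data-bus width per stack (JEDEC HBM4)
GBPS_PER_PIN = 11.0     # the raised Vera Rubin requirement (>11 Gbps)
STACKS_PER_GPU = 8      # hypothetical stack count, purely for illustration

# Gbps across all pins -> divide by 8 for GB/s, by 1000 for TB/s
per_stack_tbps = PINS_PER_STACK * GBPS_PER_PIN / 8 / 1000
print(f"Per stack: {per_stack_tbps:.2f} TB/s")               # ~2.82 TB/s
print(f"Per GPU:   {per_stack_tbps * STACKS_PER_GPU:.1f} TB/s")
```

At those rates, a fraction of a Gbps per pin swings total bandwidth by hundreds of GB/s per stack, which is why a late spec change forced full redesigns.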
Obviously, this is a big deal for the hyperscalers, who have been planning their investments on the expectation of having Vera Rubin capacity in 2026. These delays will almost certainly ripple into data center build plans, but the pain will not be felt equally. Hit worst of all should be OpenAI, which is actively trying to scale up its own infrastructure and has explicitly stated its intention to build on Vera Rubin at the foundation. The other 2 most impacted are Meta and Microsoft, in that order. Both are highly dependent on NVIDIA silicon, though Microsoft at least has some progress on its MAIA chips and has the benefit of being a public cloud, so it can always charge customers more for pent-up demand on limited supply, given CUDA lock-in. Lastly, Google, AWS, and Anthropic should be the least impacted thanks to their transition to custom silicon: Google's Gemini was trained exclusively on TPUs, Amazon is pushing primarily to Trainium, and Anthropic's recent models have been trained mostly on a mix of TPUs and Trainium.
No Creds Reads
Wilson Harmond explained float
Derek Thompson shared an interesting piece on "cost disease" and opportunity costs as a driver of increasing solitude in America
Packy McCormick ranted about startups trying to lose money like Amazon
Noah Smith did a roundup on AI. TLDR is he thinks there is a surprisingly high chance he (along with the rest of humanity) dies from AI-enabled bioterrorism
If you enjoyed today's post, let me know by sharing it with a friend! Or if you haven't yet, consider subscribing!


