Build the most powerful model on earth. Tell everyone about it. Lock it away from the masses. Let a select few access it based on self-defined gating mechanisms.
It used to be that $20 a month bought anyone who could afford it access to state-of-the-art AI. GPT-4. Claude. Fortune 500s and three-person startups ran the same frontier models. AI was going to be the great democratizer of knowledge work.
This started to shift when OpenAI introduced ChatGPT Pro at $200 a month. Power users needed more tokens. The labs needed better economics. It made sense that the best model no longer cost the same as what the broader public was paying. Freemium-to-enterprise tiering is the SaaS model we were all comfortable with.
That model broke last week. Anthropic released Claude Mythos Preview with a 244-page model card and a safety narrative about why the model couldn’t fall into the wrong hands. Mythos found thousands of zero-day vulnerabilities in production software without being trained to do so. Anthropic’s response: restrict access to roughly 50 hand-picked organizations through Project Glasswing. Apple. Microsoft. CrowdStrike. JPMorgan. $100 million in API credits for the consortium. Nothing for anyone else.
Days later, OpenAI followed with GPT-5.4 Cyber. Limited release. Vetted partners only. And just today, Anthropic released Opus 4.7.
The safety rationale isn’t fabricated. According to virtually every expert, and Jamie Dimon himself, we are entering a new era of AI-led cybersecurity. A model that finds 27-year-old flaws in OpenBSD marks a real capability threshold, one every CISO is holding emergency meetings about as we speak.
But the tension between safety and access isn’t new. It was there in the days of the printing press, electricity, nuclear power, and the internet. It matters who gets to define what safety means, and right now it looks like we’re leaving that up to Dario and Sam.