When OpenAI rolled out GPT-5.5 in April 2026, it did something odd: it announced a more open approach to cybersecurity than Anthropic, its chief rival, at the same moment Anthropic was restricting access to its own technology. Two companies that started from opposite premises on transparency are now swapping positions. If you follow AI, you have probably noticed this pattern before. The stated values shift; the rival-watching stays constant. But headline coverage treats each announcement as its own event, disconnected from the institutional pressures that produced it. The decisions these labs make about openness, safety, and commercial ambition shape how billions of people will interact with machine intelligence. And those decisions have a history that product launches are designed to obscure.
Most coverage of OpenAI treats GPT-5.5 as a benchmarking exercise: how fast, how capable, how it compares to the last model. What you almost never get is a way to explain why OpenAI makes the structural choices it does. Why did it shift from nonprofit to commercial engine? Why does its stance on openness keep moving? The same blind spot applies to DeepMind and its absorption into Google. Without that organizational history, every new announcement floats free of context, and the pattern of rivals mirroring each other looks accidental. The missing method is biographical and institutional, not technical.
Parmy Olson's *Supremacy* fills that gap by reconstructing the parallel rise of OpenAI under Sam Altman and DeepMind under Demis Hassabis. Olson, a Bloomberg technology writer, drew on access to high-ranking sources at both labs, and the resulting narrative is built on specific boardroom decisions, funding negotiations, and hiring battles rather than broad pronouncements about AI's future. The book's explanatory power comes from its biographical method.
Olson traces how Altman and Hassabis each believed they were building the most consequential technology in human history, and how that conviction shaped everything downstream: recruiting strategies, tolerance for compromise, willingness to take corporate money. When OpenAI accepted Microsoft's backing, the reasoning reflected a specific theory about maintaining independence while scaling. When Hassabis sold DeepMind to Google, he negotiated structural protections that were supposed to preserve research autonomy. Olson documents both negotiations in enough detail that you can see the logic and the self-deception operating in the same room.
This produces a transferable insight about the GPT-5.5 announcement. OpenAI's current willingness to be more transparent about security than Anthropic fits a pattern Olson identifies early: the company's identity has always been defined against whichever rival matters most at the time. First DeepMind, now Anthropic. The positioning shifts because the rival shifts. The cybersecurity and openness postures these companies adopt in 2026 are downstream of governance choices made years ago.

One real cost: Olson's access to OpenAI's internal deliberations appears stronger than her access to DeepMind's. The DeepMind chapters sometimes read as the view from outside the room, filling gaps with inference where you want sourced detail. The portrait of Hassabis feels thinner than the portrait of Altman, which weakens a book that stakes its structure on parallelism. If you are coming to this primarily for the Google side of the story, you will feel the asymmetry.

What holds up best is the documentation of rivalry as an institutional driver. The competition between OpenAI and DeepMind accelerated timelines, reshaped safety commitments, and turned hiring into a zero-sum contest. *Supremacy* shows that this dynamic is baked into the funding models and governance structures each organization chose early on. That analysis is what makes the book useful past its publication date: it gives you a way to read each new product announcement as a move within an ongoing institutional game, with rules set long before anyone typed a prompt into ChatGPT.
If GPT-5.5 or the next Anthropic release has you trying to sort signal from positioning, *Supremacy* gives you the institutional backstory that most coverage skips. Its DeepMind side could be stronger, and it will not make predictions for you. But it will change the questions you bring to the next headline about openness, safety, or rivalry, and better questions are worth the cover price.
