Google acquired the AI company DeepMind in 2014 and, in April 2023, merged it with Google Brain into a single division, Google DeepMind. DeepMind has long been a prominent player in the AI sector, with a reputation built on the quality of its scientific publications and its ability to attract some of the field’s brightest researchers. However, this approach may change soon.
According to several sources cited by the Financial Times, Google DeepMind is delaying the publication of papers it deems “strategic” or sensitive in the field of generative AI. The strategy aims to protect the company’s competitive edge and prevent rivals such as OpenAI from capitalizing on its latest and most valuable developments.
The Transformer Architecture: The Star of the Show
You can’t fully understand the evolution of generative AI without acknowledging Google’s contributions. One pivotal moment was the 2017 publication of the paper “Attention Is All You Need.” Authored by eight researchers, it introduced the Transformer architecture, which replaced step-by-step sequential processing with an attention mechanism, allowing models to handle vast amounts of data in parallel and far more efficiently.
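For readers curious what that mechanism looks like in practice, here’s a minimal, illustrative sketch of scaled dot-product attention, the core operation described in the paper. The function name, toy dimensions, and NumPy implementation are assumptions made for this example, not code from Google or OpenAI.

```python
# Illustrative sketch of scaled dot-product attention (the Transformer's core operation).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, embedding_dim)."""
    d_k = K.shape[-1]
    # Compare every query against every key, scaled to keep values numerically stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns the scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of all value vectors.
    return weights @ V

# Toy usage: a "sentence" of 4 tokens, each represented by an 8-dimensional vector.
tokens = np.random.rand(4, 8)
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (4, 8)
```

Because every token attends to every other token in a single matrix operation, the whole sequence can be processed at once, which is a large part of why Transformer-based models scale to massive datasets more efficiently than earlier recurrent approaches.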
The Transformer architecture served as the foundation for models like Google’s Bidirectional Encoder Representations from Transformers (BERT). The company integrated BERT into its search engine in 2019 to improve natural language understanding. The Transformer architecture also played a crucial role in developing pre-trained systems like OpenAI’s GPT, including the current GPT-4 and GPT-4.5 models.
As one of the world’s largest publicly traded companies, Google boasts substantial financial resources and access to critical technology. However, the launch of ChatGPT, also built on the Transformer architecture, took Google by surprise. This led the tech giant to declare a “code red” and reorganize internally to become competitive again in an AI sector that OpenAI currently leads.

It’s widely recognized that when a company becomes a major player in the tech industry, it tends to lose some of the dynamism that characterized its early days as a startup. Big Tech giants often find it challenging to move fast and break things: they need to protect substantial assets and manage complex operations that can’t afford failures, which makes taking risks significantly harder.
Despite this, it’s impressive how quickly Google has managed to catch up in the AI race. In a short time, the company has launched a wide array of AI products based on advanced language models. Notable offerings include Gemini, a direct competitor to ChatGPT, and Gemini Live, aimed at rivaling OpenAI’s Advanced Voice Mode. Meanwhile, Gems work much like OpenAI’s custom GPTs, and NotebookLM has carved out its own niche as an AI-powered research and note-taking tool.
Google’s New Approach
In recent years, Google has implemented significant internal changes. One of the most notable affects its policy on publishing scientific papers: the company now imposes a six-month embargo on research it deems strategic before it can be released to the public. The division, led by Nobel laureate Demis Hassabis, has also tightened its internal processes with stricter reviews.
Google’s new approach may create some unease within the scientific community. A current researcher shared their concerns about this shift with the Financial Times. “I cannot imagine us putting out the Transformer paper for general use now,” they said. Meanwhile, a former DeepMind researcher told the outlet, “The company has shifted to one that cares more about product and less about getting research results out for the general public good.”
Images | Pawel Czerwinski | Google