During London Tech Week, Prime Minister Rishi Sunak revealed that Google DeepMind, OpenAI, and Anthropic have reached an agreement to grant the U.K. government access to their AI models for research and safety purposes. This move aims to facilitate the development of better evaluations and a deeper understanding of the potential opportunities and risks associated with these systems.
Sunak’s speech at the event celebrated the transformative potential of AI in sectors like education and healthcare, positioning the U.K. as a hub for innovation. He expressed particular enthusiasm about combining AI models with quantum computing, emphasizing the extraordinary possibilities that could arise from this convergence.
However, Sunak also acknowledged the importance of safety in AI development. While the U.K. government previously took a pro-innovation approach in its AI white paper, Sunak has more recently emphasized the need to establish safeguards, or “guardrails,” around the technology.
The U.K.’s ambition, as Sunak framed it, extends beyond serving as an intellectual center for AI safety: the goal is for the country to become the physical home of global AI safety regulation. While no specific regulatory proposals were disclosed, he unveiled plans for a global summit on AI safety, likened to the UN COP climate change conferences. In addition, a Foundation Model Taskforce, backed by £100 million in funding, will pioneer research on AI safety and assurance techniques.
Sunak also highlighted other key areas of focus for the U.K., including semiconductors, synthetic biology, and quantum technology. He argued that the country’s agile, balanced regulatory approach will continue to make it an attractive destination for investment. Recent moves, such as Anthropic and OpenAI establishing European headquarters in the U.K. and Palantir announcing an AI research hub there, further reinforce the country’s position in the AI landscape.