Existential risk? Regulatory capture? AI for one and all? A look at what’s going on with AI in the UK

chikicik
4 min read · Nov 8, 2023


Artificial intelligence (AI) is currently the subject of intense debate. Some believe AI holds the potential to address pressing health issues, bridge educational disparities, and serve various other benevolent purposes. However, concerns about its implications for warfare, security, and the spread of misinformation have become equally pervasive. AI has not only captured the attention of businesses but has also become a mainstream fascination for the general public.

For all its capabilities, AI has yet to replace the vibrancy of in-person gatherings. This week, the United Kingdom is hosting a groundbreaking event, the “AI Safety Summit,” at Bletchley Park, the historic site renowned for its role in World War II codebreaking and now home to the National Museum of Computing.

The Summit, several months in the making, seeks to explore the long-term questions and risks associated with AI. Its objectives are lofty, aiming for a “shared understanding of the risks posed by frontier AI and the need for action,” “a forward process for international collaboration on frontier AI safety,” and “appropriate measures for organizations to enhance frontier AI safety.”

This high-level aspiration is mirrored in its attendees, featuring top government officials, industry leaders, and prominent thinkers in the AI field. The guest list includes figures like Elon Musk, though several world leaders, including President Biden, Justin Trudeau, and Olaf Scholz, have opted not to attend.

The Summit is an exclusive gathering with limited access, prompting various other events and news developments to emerge alongside it. These additional activities encompass talks at the Royal Society, the “AI Fringe” conference taking place across multiple cities throughout the week, announcements of task forces, and more.

While the division of AI discussions between the exclusive Bletchley Summit and other events has raised concerns, it also presents an opportunity for stakeholders to convene and address broader AI-related issues.

A recent example of this collaborative approach was a panel at the Royal Society, featuring participants from diverse backgrounds, including Human Rights Watch, a trade union, a tech-focused think tank, a startup specializing in AI stability, and a computer scientist from the University of Cambridge.

The AI Fringe, though seemingly on the periphery, has effectively expanded its scope to complement the Bletchley Summit. Organized by the PR firm Milltown Partners, it spans multiple locations and offers both in-person and streaming components, allowing a wider audience to engage with AI-related discussions.

However, criticism has arisen due to the exclusion of various stakeholders from the Bletchley Park event. A group of trade unions and rights campaigners sent a letter to the prime minister, expressing concerns that their voices were being marginalized in the AI conversation.

Marius Hobbhahn, an AI research scientist, suggests that smaller, focused gatherings can be more productive, as larger groups may struggle to reach conclusions or have meaningful discussions.

The AI Summit serves as a pivotal part of the broader ongoing conversation about AI. The UK’s Prime Minister, Rishi Sunak, recently announced plans to establish an AI safety institute and research network. Additionally, a group of renowned academics published a paper on “Managing AI Risks in an Era of Rapid Progress,” while the United Nations launched its own task force on AI implications. President Biden also issued an executive order to set AI security and safety standards.

Debates about AI’s potential “existential risk” have fueled discussions, with some arguing that these concerns have been exaggerated to divert attention from more immediate AI issues. Misinformation is a notable area of concern, where AI poses potential short- and medium-term risks. The Royal Society conducted an exercise focusing on misinformation in science to understand AI’s role in this context.

The UK government appears to acknowledge the multifaceted nature of AI, aiming to facilitate international collaboration and a shared understanding of its risks. However, concerns about “regulatory capture,” in which industry leaders steer discussions of risk, and the rules that follow, toward their own interests, persist.

The business perspective on AI, while distinct from safety and risk considerations, is essential. The UK seeks to position itself as a hub for AI businesses by hosting these discussions. However, the road to AI investment is not without challenges, with companies realizing the significant time and resources required for reliable AI outputs. Despite AI’s evolving capabilities, many projects still demand human oversight, and risks associated with AI applications, including data security, remain.

The discrepancy between business interests in AI implementation and the safety and risk discussions at Bletchley Park underscores the complexity of the AI landscape. While the Summit’s focus may be on high-level safety questions, it has prompted conversations about practical AI applications, such as in healthcare. This diversity of perspectives offers valuable opportunities to address AI’s future and its implications.
