UK Shifts Focus of AI Safety Institute to Crime and Security

From safety to security

The UK’s AI Safety Institute has been rebranded to focus on crime and national security, moving away from its original remit of bias and freedom of speech. It has been renamed the AI Security Institute (ASI) to reflect the change.

The AI Safety Institute was established in 2023 by then-prime minister Rishi Sunak. It originally sought to raise public awareness of AI and to investigate risks such as bias and misinformation.

The ASI will now prioritise monitoring the security risks posed by AI, including cyberattacks, fraud, and the development of biological weapons.

Peter Kyle, the UK secretary of state for science, innovation, and technology, made this announcement at the Munich Security Conference. He spoke prior to the event about how the upcoming plans will ensure responsible AI development in the UK. 

“The work of the AI Security Institute won’t change,” Kyle explained in a statement, “but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values and way of life.”

A Shift From Safety To National Security

This rebranding comes days after the US and UK declined to sign an international artificial intelligence agreement at the AI Action Summit in Paris.

The event, held earlier this week in the French capital, saw 60 countries sign a statement pledging a safe and ethical approach to AI. The two Western states, however, were not among the signatories.

At the summit, US Vice President JD Vance criticised increased regulation on artificial intelligence, sharing that he believes “pro-growth AI policies” should take precedence over safety measures.

The UK government explained that its refusal stemmed from national security concerns, in line with the ASI’s renewed focus.

In a statement, a UK government spokesperson said the declaration “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security”.

The UK government has adjusted its approach to AI as the technology becomes more deeply embedded in the public sector. A new agreement between the UK’s sovereign AI unit and AI startup Anthropic aims to use AI to improve the lives of UK citizens.

“The main job of any government is ensuring its citizens are safe and protected,” Kyle said. “And I’m confident the expertise our institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”
