News

Britain Opts For ‘Adaptable’ AI Rules, With No Single Regulator


Britain plans to split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

AI, which is rapidly evolving with advances such as the ChatGPT app, could improve productivity and help unlock growth, but there are concerns about the risks it could pose to people’s privacy, human rights or safety, the government said.

It said it wanted to avoid heavy-handed legislation that could stifle innovation and would instead take an adaptable approach to regulation based on broad principles such as safety, transparency, fairness and accountability.

The European Union is tackling the issue head-on by attempting to devise landmark AI laws and create a new AI office. The speed at which the technology is advancing, however, is complicating its efforts, sources have said.

Britain said its approach, outlined in a policy paper published on Wednesday, meant it could adapt its rules as the technology developed.

It said that over the next 12 months, existing regulators would issue practical guidance to organisations, as well as other tools and resources like risk assessment templates.

It said legislation could later be introduced to ensure regulators were applying the principles consistently.

(Reporting by Paul Sandle; Editing by Mark Potter)
