In recent weeks, the UK government has showcased its ambition to lead in AI safety, but its approach to regulation has raised concerns. While pledging £100 million for AI safety research, it has declined to pass new legislation governing AI applications, favouring instead a "pro-innovation" stance.
The Ada Lovelace Institute's report on the UK's regulation of AI highlights several contradictions. The government's reliance on existing regulators, without providing new legal powers or funding, leaves AI safety rules open to interpretation and risks regulatory inconsistency.
The report calls for a comprehensive definition of AI safety to address the various harms that AI systems can cause today. It emphasizes the need for effective domestic regulation to provide a solid foundation for the UK's AI economy and to ensure safeguards extend across all sectors.
The UK's deregulatory reform of the national data protection framework raises concerns about data privacy and AI safety. The proposed changes risk undermining the government's AI regulatory proposals and could provide lower levels of protection for individuals affected by AI-informed decisions.
The Institute recommends that the government rethink elements of the data protection reform bill and introduce strict transparency and accountability obligations for regulators. It also suggests exploring the introduction of an AI ombudsperson to support those adversely affected by AI.
On foundation model safety, the report suggests mandatory reporting requirements for UK-based developers, including notifying the government of large-scale training runs. This would enhance oversight and allow policymakers to prepare for the impact of advances in AI capabilities.
To strengthen the UK's AI regulation, the report urges the government to invest in pilot projects, clarify laws around AI and liability, and introduce a statutory duty for regulators to have regard to AI principles. A robust regulatory regime is essential for the UK's global AI leadership.
The UK government's ambition to be an AI leader is commendable, but it must bolster its domestic regulatory regime if it is to be taken seriously and achieve its global ambitions. Striking a balance between innovation and AI safety is crucial, requiring a comprehensive definition of AI safety, strict transparency obligations, and collaboration with regulators. Only then can the UK establish itself as a world-leading authority on AI regulation.