AI and Regulation: Advocating for an Informed and Collaborative Approach

Posted by Madhav Chinnappa

A regulator, a technologist and a politician walk into a conference room: the beginning of a very geeky joke or, more accurately, the start of the second Thomson Talks with Madhav Chinnappa, focused on the specific subject of AI/News & Regulation.

  • Lexie Kirkconnell-Kawana, CEO of IMPRESS, offered perspectives on media regulation, the complexity in identifying harm, regulatory methodologies such as state versus self-regulation, and keeping in mind who would be the beneficiaries of any regulation.
  • Jennifer Beroshi, Head of Policy at Google DeepMind, discussed the challenges of regulating AI, particularly in distinguishing between applications of AI and the models underpinning them, and the need for forward-thinking governance. She posited that while this is a broad, controversial and complex area at a high level, there must be specific smaller issues on which we can agree quickly and build from there.
  • Lord (Chris) Holmes of Richmond MBE advocated for agile and transparent regulation, emphasising the importance of public engagement and ethical considerations in AI governance.

The discussion that followed was filled with much divergence of opinion on many issues, including the foundational one of whether there should be regulation at all. The answers ranged from Yes to No to It-Is-Already-Here.


Regulation: Yes v No

The “Yes to Regulation” camp argued that the harms are real, exist today, and therefore demand action. On a more practical level, there are areas of less controversy that we should be working on now, and the urgency calls for collaboration. Election integrity was raised and, given the timing in the UK, the need for an immediate voluntary code of conduct between the major political parties, tech companies and publishers was outlined.

The “No” (or perhaps “Maybe/Be Careful”) camp raised the lack of understanding among regulators, the risk of stifling research and innovation, and the potential unintended consequences of hasty regulation. A point was made that GDPR has hindered identifying bad actors in disinformation troll networks. Additionally, existing laws around copyright and data privacy remain relevant to the AI debate.

Globalisation: There were a number of concerns about how the global nature of AI impacts regulation on both a practical level and an equity level. For example, given the EU AI Act and the US’s AI Bill of Rights, how can the UK act in concert with those initiatives rather than being sidelined? Additionally, so much of the thinking comes from a Global North perspective, while the harms might be even more damaging for the Global South, especially if mishandled. How do we address this imbalance?

Democracy: The impact on democracy overall was discussed, with the argument that democracy itself, rather than journalism or incumbent players in the information ecosystem, is what we should be trying to protect. What is the public good and how can we protect that?

Agentic: Regulation could be even more difficult in an “agentic” world where AI acts without humans in the loop, so there are no true ‘good’ or ‘bad’ human actors: the AI itself is its own agent.

Process: While perspectives varied on the necessity and efficacy of regulation, there was some agreement that a collaborative, dialogue-based approach was needed to achieve the balance between innovation and safeguarding against harm. An interesting point was made by a regulator that one of the most effective techniques they have is human-to-human dialogue to create change in companies without the need for regulatory interventions. A further point was made that involving the public is critical and the process used around IVF could be a model as it was a groundbreaking technology with many ethical issues that required bringing the public on board during the development of regulation.

The second Thomson Talks, held on 24th February 2024 at Thomson's London headquarters, provided valuable insights into the multifaceted issues surrounding AI regulation in the news industry. It highlighted the importance of informed and collaborative approaches to the emergent regulatory landscape, to ensure the development of good regulation rather than merely fast and potentially damaging regulation.

The third Thomson Talks will be held in Germany in June 2024.



Madhav Chinnappa, the former Director of News Ecosystem Development at Google and veteran news executive, leads our Thought Leadership events Thomson Talks.
