This is a summary of the inaugural Thomson Talks with Madhav Chinnappa held on 30 November 2023, focused on the specific subject of AI/News & Disinformation.
The event was kicked off with three expert ‘stimulators’ from the fields of academia, technology and publishing respectively.
The academic made the point that AI is not the primary driver of misinformation, and that we should focus on strengthening norms and technological solutions to mitigate the risk of misinformation.
The technologist discussed the potential for AI to be used in autonomous and data-driven systems to build agents that seek rewards and game the systems they interact with, which could pose new challenges for news organisations.
The publisher highlighted the dangers of AI deepening polarisation, polluting the idea of what's true, and polluting the question of who to believe.
These summaries were aided by the use of AI via Google's Bard.
The group discussion touched on many different subjects:
Given that in many elections/countries existing polarisation is such that swinging just 2% of an electorate can change the outcome, AI may end up being a negative force overall; it could also super-charge and reinforce existing polarisation in society.
The counter to this is that many countries, unlike the US, have a multi-party system, which mitigates this somewhat.
Also, have publishers themselves sufficiently embraced social media and micro-formats (like TikTok) that their audiences are consuming?
While there was much discussion of bad actors, there was also concern about “neutral actors” where there is a belief that so-called “hallucinations” are a feature of the system not a bug.
A point was made that we should stop using the word "hallucination" as it anthropomorphises the tech and dodges responsibility; bad outputs are "misinformation" — all else is "disinformation".
Additionally, as the AI systems are predominantly being built by white men in California, this will exacerbate the concentration of power there, continuing the "Californication" of technologies.
This should be an opportunity to move out of a defensive mode that has crept in since the beginning of the internet age, into a more “attack mode” which proudly makes the case for human-based journalism.
In a world where the majority of content is AI-produced, human-made content could/should have a premium.
A true culture change in the news ecosystem is needed, especially among senior news management, who until ChatGPT launched had underinvested in and undervalued AI tech.
What do we want the outcome to be and how do we get there? Believing in governments alone to mitigate the harms is not enough.
We should be creating new products that include collaborations between tech companies, journalists, NGOs and governments.
JournalismAI is a good start but needs more support and publicity throughout the ecosystem.
Strong points were made about how AI tech and LLMs in particular are not diverse enough and contain existing societal biases.
For example, the AI-faked audio in Slovakia talked about buying the Roma vote, feeding on anti-Roma prejudices. And how can small languages be part of LLMs while retaining agency and not being disintermediated/de-powered?
A counterpoint was made that AI can actually help address minority audiences who may not have been catered to or focused on by mainstream output due to the operational burden.
There were two differing views on small newsrooms and AI.
Smaller newsrooms may not have the bandwidth, expertise or time to engage, as they are doing all they can simply to keep the lights on. However, they have an advantage over larger newsrooms in that their decision-making is quicker, so commitment can translate into positive change much faster.
The event concluded by returning to the point that this should be an opportunity to go into “attack mode” around the value of human-based journalism.
The next Thomson Talks is scheduled for February 2024 around AI/News and Regulation.