By Olivia Le Poidevin
GENEVA (Reuters) - Companies must use advanced tools to detect and stamp out misinformation and deepfake content to help counter growing risks of election interference and financial fraud, the United Nations’ International Telecommunication Union urged in a report on Friday.
Deepfakes such as AI-generated images and videos, and audio that convincingly impersonates real people, pose mounting risks, the ITU said in the report released at its “AI for Good Summit” in Geneva.
The ITU called for robust standards to combat manipulated multimedia and recommended that content distributors such as social media platforms use digital verification tools to authenticate images and videos before sharing.
“Trust in social media has dropped significantly because people don’t know what’s true and what’s fake,” Bilel Jamoussi, Chief of the Study Groups Department at the ITU’s Standardization Bureau, noted. Combating deepfakes was a top challenge due to generative AI’s ability to fabricate realistic multimedia, he said.
Leonard Rosenthol of Adobe, a digital editing software leader that has been addressing deepfakes since 2019, underscored the importance of establishing the provenance of digital content to help users assess its trustworthiness.
“We need more of the places where users consume their content to show this information… When you are scrolling through your feeds you want to know: ‘can I trust this image, this video…’” Rosenthol said.
Dr. Farzaneh Badiei, founder of digital governance research firm Digital Medusa, stressed the importance of a global approach to the problem, given there is currently no single international watchdog focusing on detecting manipulated material.
“If we have patchworks of standards and solutions, then the harmful deepfake can be more effective,” she told Reuters.
The ITU is currently developing standards for watermarking videos – which make up 80% of internet traffic – to embed provenance data such as creator identity and timestamps.
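The provenance model described here, binding a creator identity and timestamp to the content itself so any later edit is detectable, can be illustrated with a minimal signed-manifest sketch. Everything below is a hypothetical illustration, not the ITU’s draft standard: the record fields, the shared-secret HMAC signing (real systems would use public-key certificates), and all function names are assumptions.

```python
import hashlib
import hmac
import json

# Placeholder signing key; production provenance systems use PKI, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def make_provenance_record(content: bytes, creator: str, timestamp: float) -> dict:
    """Build a record binding creator and timestamp to a hash of the content."""
    payload = {
        "creator": creator,
        "timestamp": timestamp,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
    return payload

def verify_provenance(content: bytes, record: dict) -> bool:
    """Re-derive the signature and re-hash the content; both must match."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    canonical = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

video_bytes = b"\x00\x01fake-video-frames"
record = make_provenance_record(video_bytes, "newsroom@example.org", 1720000000.0)
assert verify_provenance(video_bytes, record)             # untouched content verifies
assert not verify_provenance(video_bytes + b"x", record)  # any edit breaks verification
```

The key property the sketch demonstrates is tamper evidence: because the content hash is inside the signed payload, modifying either the media bytes or the claimed creator/timestamp invalidates the record.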
Tomaz Levak, founder of Switzerland-based Umanitek, urged the private sector to proactively implement safety measures and educate users.
“AI will only get more powerful, faster or smarter… We’ll need to upskill people to make sure that they are not victims of the systems,” he said.
(Reporting by Olivia Le Poidevin; Editing by Hugh Lawson)