OPENAI JOINS AI CONTENT ALLIANCE AS ELECTION COMMISSION WARNS OF DEEPFAKES

New Delhi: In a significant move towards building consensus on identifying and tracking artificial intelligence (AI)-generated content online, OpenAI, the developer of ChatGPT, has joined the steering committee of the US-based Coalition for Content Provenance and Authenticity (C2PA).

This marks OpenAI's second global collaboration amid mounting concerns about AI's growing influence on content of all forms, especially during, and in the run-up to, elections in major economies such as India and the US.

OpenAI's decision to join C2PA comes a day after the Election Commission of India issued a notice to political parties, outlining the potential penal actions that could be initiated under various provisions of the law for deploying deepfakes in political campaigns. 


The penalties, covering sections 66C and 66D of the Information Technology Act, 2000; section 123(4) of the Representation of the People Act, 1951; and sections 171G, 465, 469 and 505 of the Indian Penal Code, could lead to several years of imprisonment and fines for perpetrators.

However, global enforcement of such regulations has so far been challenging, considering the evolving nature of AI and digital content. 

20 tech firms signed accord in February to identify AI-altered political content

To address these challenges, 20 leading technology companies, including Adobe, OpenAI, IBM, LinkedIn, Snap, and TikTok, signed an accord on 16 February in Munich to identify AI-altered political content, limit its distribution, and enhance “cross-industry resilience” to identify “deceptive AI-based election content”.

The C2PA initiative, which counts Adobe, Google, Intel, Microsoft and Sony among its members, seeks to advance these efforts further. Its central aim is to develop standards for content credentials, attaching “tamper-proof metadata” that reveals details such as the origin of a piece of content, whether in text, image, video, or audio formats.
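To illustrate the general idea behind such tamper-evident provenance metadata, here is a minimal conceptual sketch in Python. It is not the C2PA format itself (which uses signed manifests and X.509 certificate chains); the key names, signing key, and helper functions below are hypothetical, chosen only to show how binding a signed digest of the content to a record of its origin makes any later alteration detectable.

```python
# Conceptual sketch only: real content credentials rely on public-key
# certificate chains, not a shared HMAC key. The principle shown here is
# the same: the signature covers both the content hash and the origin
# claims, so editing either one invalidates the record.
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret-key"  # hypothetical stand-in for the issuer's key

def attach_credentials(content: bytes, origin: str) -> dict:
    """Create a provenance record whose signature covers content + claims."""
    claims = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credentials(content: bytes, record: dict) -> bool:
    """Return True only if the content and its claims are unmodified."""
    claims = record["claims"]
    if hashlib.sha256(content).hexdigest() != claims["sha256"]:
        return False  # the content was altered after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

image = b"original pixel data"
record = attach_credentials(image, origin="example-camera")
print(verify_credentials(image, record))                   # True
print(verify_credentials(b"doctored pixel data", record))  # False
```

In this toy scheme, a verifier can confirm both where the content claims to come from and that it has not been edited since the record was issued, which is the property standards bodies describe as "tamper-proof" metadata.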


While efforts to identify content sources have faced intense scrutiny, the dangers posed by deepfakes, altered versions of existing videos crafted to convey alternative political messages, have prompted governments and the industry to move against the menace, especially as AI features become increasingly integrated into mainstream social media and mobile apps.

Amid the ongoing seven-phase general election in India, public figures including Aamir Khan, Ranveer Singh, and Union home minister Amit Shah have been forced to file police complaints after deepfake videos featuring altered speeches circulated in their names.

Such developments have prompted many public policy leaders to call for a universal standard to trace AI-generated content worldwide and bridge the technical divides between the services of various tech firms.

Rohit Kumar, founding partner at public policy research firm Quantum Hub, said: “There is a pressing need to intensify public outreach efforts to address and identify deepfakes. Building public resilience and advocating for critical evaluation of all content is going to be essential in this election cycle to discourage blind trust in information.”

According to a statement issued by Anna Makanju, vice-president, global affairs, OpenAI, the decision to join C2PA could “advance shared standards around digital provenance”. 

“Existing adoption, advocacy, and ongoing commitment to content credentials will bring an important voice to working efforts to guide development (of the common standard),” Andy Jenks, chairperson, C2PA, said in the statement.

While discussions on developing such standards are ongoing, implementing them has so far been a challenge. However, last September, in an interview with Mint, Nick Clegg, vice-president, global affairs, Meta Platforms, had said that globally intercommunicable policies were “not only possible… they were, in fact, very much desired”.
