WASHINGTON: An influential American Senator on Friday asked US social media companies what preparations they have made for elections in India, where social media platforms, including Meta-owned WhatsApp, have a long track record of amplifying misleading and false content.
The letter, written by Senator Michael Bennet, a member of the Senate Intelligence and Rules Committees, which have oversight over US elections, comes on the eve of the announcement of elections in India by the Election Commission of India (ECI).
In the letter, Bennet seeks information from the leaders of Alphabet, Meta, TikTok, and X about their companies' preparations for elections in various countries, including India.
“The dangers your platforms pose to elections are not new – users deployed deepfakes and digitally altered content in earlier contests – but now, artificial intelligence (AI) models are poised to exacerbate risks to both the democratic process and political stability. The proliferation of sophisticated AI tools has lowered previous barriers to entry by allowing virtually anyone to generate alarmingly realistic images, video, and audio,” Bennet wrote.
With over 70 countries holding elections and more than two billion people casting ballots this year, 2024 is the “year of democracy”.
Australia, Belgium, Croatia, the European Union, Finland, Ghana, Iceland, India, Lithuania, Namibia, Mexico, Moldova, Mongolia, Panama, Romania, Senegal, South Africa, the UK, and the US are expected to hold major electoral contests this year.
In his letter to Elon Musk of X, Mark Zuckerberg of Meta, Shou Zi Chew of TikTok and Sundar Pichai of Alphabet, Bennet requested information on the platforms’ election-related policies, content moderation teams, including the languages covered and the number of moderators on full-time or part-time contracts, and tools adopted to identify AI-generated content.
“Democracy’s promise – that people rule themselves – is fragile,” Bennet continued. “Disinformation and misinformation poison democratic discourse by muddying the distinction between fact and fiction. Your platforms should strengthen democracy, not undermine it,” he wrote.
“In India, the world’s largest democracy, the country’s dominant social media platforms – including Meta-owned WhatsApp – have a long track record of amplifying misleading and false content. Political actors that fan ethnic resentment for their own benefit have found easy access to disinformation networks on your platforms,” the Senator wrote.
Bennet then asked for details of the new policies and personnel the companies have put in place for the Indian elections. “What, if any, new policies have you put in place to prepare for the 2024 Indian election? How many content moderators do you currently employ in Assamese, Bengali, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Sanskrit, Sindhi, Tamil, Telugu, Urdu, Bodo, Santhali, Maithili, and Dogri?” he asked.
“Of these, please provide a breakdown between full-time employees and contractors,” Bennet said.
The Senator told the social media CEOs that beyond their failures to effectively moderate misleading AI-generated content, their platforms also remain unable to stop more traditional forms of false content.
“China-linked actors used malicious information campaigns to undermine Taiwan’s January elections. Facebook allowed the spread of disinformation campaigns that accused Taiwan and the United States of collaborating to create bioweapons, while TikTok permitted coordinated Chinese-language content critical of President-elect William Lai’s Democratic Progressive Party to proliferate across its platform,” the letter said.
According to the Senator, he has heard from the heads of the US Intelligence Community that the Russian, Chinese, and Iranian governments may attempt to interfere in US elections.
“As these and other actors threaten peoples’ right to exercise popular sovereignty, your platforms continue to allow users to distribute fabricated content, discredit electoral integrity, and deepen social mistrust,” he wrote.