Deepfake Audio Of Philippine President Urging Military Action Against China Sparks Concerns

The Presidential Communications Office (PCO) issued a public warning

In an alarming incident, a fabricated audio clip featuring Philippine President Ferdinand Marcos Jr. instructing his military to respond to China has raised serious concern among government officials in Manila. They warn that it could have implications for the country’s foreign policy.

The manipulated audio features a deepfake of Marcos Jr.’s voice, in which he purportedly directs his military to intervene if China poses a threat to the Philippines. He adds that he cannot tolerate further harm to Filipinos by Beijing.

Deepfake technology involves the use of artificial intelligence to replace aspects of a person’s appearance or voice with those of another person in synthetic media.

“We cannot compromise even a single person just to protect what rightfully belongs to us,” says the voice in the faked audio, which was reportedly released via a YouTube channel with thousands of subscribers. The audio was accompanied by a slideshow of images showing Chinese vessels in the South China Sea, the South China Morning Post reported.

On Tuesday night, the Presidential Communications Office (PCO) issued a public warning about the manipulated media and confirmed that it was entirely fake.

“It has come to the attention of the Presidential Communications Office that there is video content posted on a popular video streaming platform circulating online that has manipulated audio designed to sound like President Ferdinand R. Marcos Jnr,” the PCO said in a statement.

“The audio deepfake attempts to make it appear as if the President has directed our Armed Forces of the Philippines to act against a particular foreign country. No such directive exists nor has been made,” it added.

The PCO said that it is actively working on measures to combat fake news, misinformation, and disinformation through its Media and Information Literacy Campaign.

“We are also closely coordinating and working with government agencies and relevant private sector stakeholders to actively address the proliferation and malicious use of video and audio deepfakes and other generative AI content,” it said.

 
