
Creating illusions… How are countries leveraging AI in their media agendas?

Last November, Olga Loiek, a Ukrainian student at the University of Pennsylvania, launched her own YouTube channel.

Olga did not find the audience she was hoping for, but soon after the channel went online she discovered that most of her viewers were on Chinese social media platforms and, more importantly, that it was not her in the clips at all: her likeness had been cloned and manipulated by artificial intelligence.

Digital lookalikes like “Natasha” pretend to be Russian women who speak fluent Chinese, thank China for supporting Russia against the West, and try to make money by selling products such as Russian candy.

What’s even more surprising is that these fake accounts have hundreds of thousands of followers in China, more than Olga herself has on YouTube.

The incident is one of many examples on Chinese social media of what appear to be Russian women who profess their affection for China in fluent Chinese and say they want to support Russia's war effort by selling goods imported from their homeland. None of them is real.

Experts said the images of the women were generated by artificial intelligence that imitates and repurposes videos of real women found online, often without their knowledge, and that the fake videos were used to market products to men in China, Reuters reported.

Jim Zhai, chief executive of Exmove, a company that develops artificial intelligence technology, said the technology to produce such images had become “very popular because a lot of people in China use it”.

More broadly, the incident has drawn attention to the dangers of rapidly evolving artificial intelligence, raising concerns about its role in spreading misinformation, fake news, and targeted narratives or propaganda. Those concerns have grown in recent months with the surging popularity of generative AI systems such as the chatbot ChatGPT.

The proliferation of generative AI systems such as the ChatGPT chatbot (Shutterstock)

The liar's dividend

Governments, political parties, and other influential actors around the world, in democracies and authoritarian regimes alike, use AI to generate text, images, and videos aimed at swaying public opinion in their favor and at automating the censorship of online content that criticizes them.

In their annual report released last October, researchers at the human rights group Freedom House documented the use of generative AI in 16 countries to “sow doubt, discredit opponents, or influence public conversation.”

The report found that global internet freedom has declined for 13 consecutive years, partly due to the spread of artificial intelligence technology.

The researchers said one of their most important findings concerns changes in how governments use artificial intelligence: political actors continue to use these technologies, as they grow more sophisticated, to amplify misleading information that serves their interests.

While these developments are not necessarily surprising, the report said one of its most striking findings is that the widespread use of generative AI could undermine confidence in verifiable facts.

As AI-generated content spreads across the internet, it can lead people to doubt even reliable, accurate information, a phenomenon known as the "liar's dividend": heightened vigilance against fakery breeds public skepticism of true information, especially in times of crisis or conflict when false claims proliferate.

Artificial intelligence makes tasks easier

Artificial intelligence technology is automating the creation of fake news, leading to a surge in website content that mimics real articles but spreads false information about elections, wars and natural disasters.

According to a report from NewsGuard, an organization that tracks misinformation, the number of websites publishing misleading AI-generated articles has increased by more than 1,000% in the past year, from 49 websites to more than 600.

Historically, propaganda operations have relied on armies of low-wage workers or highly coordinated intelligence organizations to create websites that appear authentic.

But artificial intelligence technology has made it easier for almost anyone — whether a member of a spy agency or a teenager in a basement — to create these sites and generate content that is sometimes difficult to distinguish from real news.

Experiments in a new research paper published last February also showed that language models can generate text that is just as persuasive to American audiences as content produced by real covert foreign propaganda campaigns.

Human-AI collaboration strategies, such as refining a chatbot's prompts and curating its outputs, produced articles that were as persuasive as, or more persuasive than, the original propaganda.

OpenAI, the company that developed the ChatGPT chatbot, says the service now has more than 100 million weekly active users. Its tools make it easy and fast to generate large volumes of content, which propagandists can also use to mask language errors and fabricate engagement.

Governments and influential political parties around the world use AI to generate text, images and videos (Shutterstock)

Covert influence operations

At the end of May last year, OpenAI revealed five covert influence operations from Russia, China, Iran, and Israel that used the company's artificial intelligence technology to try to sway global public opinion, operations the company said it had disrupted, according to Time magazine.

In its new report, the company explains how these groups, some of them linked to well-known propaganda campaigns, used its technology in a range of "deceptive activities." These included posting comments, articles, and images on multiple social media platforms in different languages, creating names and biographies for fake accounts, debugging software code, and translating and proofreading text.

These operations focused on a range of issues, including defending the war on the Gaza Strip and Russia's invasion of Ukraine, criticizing opponents of the Chinese government, and commenting on Indian, European, and American politics in attempts to influence public opinion there.

The examples cited by OpenAI's analysts show these actors using AI in the same kinds of online influence operations they have run for a decade, relying on fake accounts, comments, and articles to shape public opinion and manipulate political outcomes.

"These trends reveal a threat landscape that is evolutionary, not revolutionary," wrote the researchers who lead OpenAI's intelligence and investigations team. "These actors are using our platform to improve the quality of their content and operate more efficiently."

For example, the company’s report details how Stoic, a Tel Aviv-based political marketing firm, used OpenAI tools to generate pro-Israel content about the ongoing genocidal war in the Gaza Strip.

The campaign, dubbed "Zero Zeno," targeted audiences in the U.S., Canada, and Israel. Meta said it had removed 510 Facebook accounts and 32 Instagram accounts linked to the same Israeli company.

The group of fake accounts — which included ones posing as African Americans and students in the U.S. and Canada — often responded to prominent people or media organizations in posts praising Israel, criticizing anti-Semitism at U.S. universities and denouncing “radical Islamist” trends.

According to OpenAI's report, however, the campaign does not appear to have achieved any significant engagement.

One Russian influence campaign the company disrupted, dubbed "Bad Grammar," used its AI models to debug the code for a bot on the Telegram app that posted short political comments in English and Russian. The company said the operation targeted Ukraine, Moldova, the United States, and the Baltic states.

Another Russian operation, known as Doppelganger, which the US Treasury has linked to the Kremlin, used OpenAI models to generate news headlines, convert newspaper articles into Facebook posts, and write comments in English, French, German, Italian, and Polish.

The well-known Chinese network Spamouflage also used OpenAI tools to research public social media activity and to generate text in Chinese, English, Japanese, and Korean, which it published on platforms including X, Medium, and Blogspot.

