WASHINGTON, Nov 30: Someone in China created thousands of fake social media accounts designed to appear to be from Americans and used them to spread polarizing political content in an apparent effort to divide the US ahead of next year’s elections, Meta said Thursday.
The network of nearly 4,800 fake accounts was attempting to build an audience when it was identified and eliminated by the tech company, which owns Facebook and Instagram. The accounts sported fake photos, names and locations to look like everyday American Facebook users weighing in on political issues.
Instead of spreading fake content as other networks have done, the accounts were used to reshare posts from X, the platform formerly known as Twitter, that were created by politicians, news outlets and others. The interconnected accounts pulled content from both liberal and conservative sources, an indication that the network’s goal was not to support one side or the other but to exaggerate partisan divisions and further inflame polarization.
The newly identified network shows how America’s foreign adversaries exploit US-based tech platforms to sow discord and distrust, and it hints at the serious threats posed by online disinformation next year, when national elections will occur in the US, India, Mexico, Ukraine, Pakistan, Taiwan and other nations.
“These networks still struggle to build audiences, but they’re a warning,” said Ben Nimmo, who leads investigations into inauthentic behavior on Meta’s platforms. “Foreign threat actors are attempting to reach people across the internet ahead of next year’s elections, and we need to remain alert.”
Meta Platforms Inc., based in Menlo Park, California, couldn’t definitively link the Chinese network to the Chinese government, but it did determine the network originated in that country. The content spread by the accounts complements Chinese government propaganda and disinformation that has sought to inflate partisan and ideological divisions within the US.
Asked about its ad policy, the company said it is focused on future elections, not past ones, and will reject ads that cast unfounded doubt on upcoming contests.
And while Meta has announced a new artificial intelligence policy that will require political ads to bear a disclaimer if they contain AI-generated content, the company has allowed other altered videos created with more conventional editing tools to remain on its platform, including a digitally edited video of President Joe Biden that falsely claims he is a pedophile.
“This is a company that cannot be taken seriously and that cannot be trusted,” said Zamaan Qureshi, a policy adviser at the Real Facebook Oversight Board, an organization of civil rights leaders and tech experts who have been critical of Meta’s approach to disinformation and hate speech. “Watch what Meta does, not what they say.”
Meta executives discussed the network’s activities during a conference call with reporters on Wednesday, the day after the tech giant released its policies for the upcoming election year — most of which were put in place for prior elections.
“This is important ahead of 2024,” Nimmo said. “As the war continues, we should especially expect to see Russian attempts to target election-related debates and candidates that focus on support for Ukraine.” (AP)