Report claims China funded ‘army’ of fake accounts designed to discredit US in Philippines
A new report has shown exactly how China attempted to secretly manipulate a nation crucial to its ambitions in the Pacific.
A new report has claimed a Chinese embassy secretly funded a marketing firm to run fake social media accounts aimed at undermining a foreign nation’s alliance with the United States.
A Reuters investigation found that several fake accounts commissioned by the Chinese embassy in Manila were sowing dissent with a wide array of online tactics, from disparaging Western vaccines to downplaying the Philippines’ maritime claims.
The revelation is another grim and timely reminder of the dangers of coordinated online disinformation, and of how AI could supercharge such propaganda campaigns.
Internal documents released this week show the Chinese embassy paid InfinitUs Marketing Solutions to manage a covert online campaign using fake profiles to spread pro-Beijing propaganda and attack critics.
The report claims the firm also built a media outlet disguised as Filipino-run and amplified content from local figures who had received cash awards linked to Beijing.
“They came from fake accounts paid for by the diplomatic mission,” one document states.
Employees created an “army” of fake Facebook profiles to post pro-Beijing talking points, cheerlead the Chinese ambassador and smear outspoken critics.
“Army always supports the advocacies and activities of the Chinese ambassador’s page,” one embassy report proudly declared.
Another bragged the trolls had spread videos “about the cons of the Typhon missile of the US being deployed (sic) [in] the Philippines.”
The accounts also promoted content from Ni Hao Manila, which was supposedly a Filipino-run media outlet, but in reality was just another InfinitUs creation. With hundreds of thousands of followers across TikTok and YouTube, the page pumped out high-resolution videos praising Beijing’s military power and mocking Philippine security ties with the US.
The accounts were later removed by Facebook’s parent company, Meta, for violating policy.
But the campaign didn’t stop at just bots. Prominent Filipinos, including politicians, media figures and academics, were reportedly showered with cash awards by the Association of Philippines-China Understanding (APCU), a group re-established with the help of the Chinese Communist Party.
Among the alleged recipients was Rommel Banlaoi, a counter-terrorism expert whose 2022 bid to become deputy national security adviser was blocked by wary security officials.
APCU said the embassy-backed awards came with thousands of dollars attached, a sum far exceeding the average monthly wage in the Philippines.
Other awardees included provincial governor Manuel Mamba, who admitted accepting a plaque but denied knowledge of a cash prize, and Regina Tecson, an aide to Vice President Sara Duterte, who said she donated her award money to charity.
Meanwhile, Beijing has brushed off the revelations.
“We will not, and have no interest in, interfering in Philippine elections,” a Chinese foreign ministry spokesperson insisted, further claiming that accusations of meddling “have failed and instead have backfired.”
InfinitUs itself has previously denied engaging in “illicit digital activity.”
The revelations land at a critical moment in history, both in geopolitical and technological terms.
Under President Ferdinand Marcos Jr, Manila has tilted sharply back toward Washington, strengthening military ties and confronting Beijing more aggressively in the South China Sea.
Both superpowers see the country as a crucial battleground. With Taiwan just across the water and Manila bracing for a potential regional conflict, influence in the Philippines is seen as pivotal to the region’s future.
Polls show support for America remains strong, but Beijing’s efforts may have made a dent. The frontrunner for the 2028 presidential election is Sara Duterte, a staunch critic of Marcos Jr’s pro-US policies and daughter of China-friendly former president Rodrigo Duterte.
AI could supercharge propaganda campaigns
A new study that tested whether large language models (LLMs) can convincingly imitate state-backed disinformation has warned just how effective AI-generated propaganda can be at shifting readers’ views.
The research paper, “How Persuasive Is AI-Generated Propaganda?”, published in PNAS Nexus, surveyed more than 8,000 Americans and compared articles written by OpenAI’s GPT-3 with real propaganda pieces from Iran and Russia.
Both proved highly persuasive.
Original propaganda nearly doubled agreement with its message, while AI-generated articles lifted support almost as strongly, raising agreement by 19 percentage points compared with a control group.
In many cases, the machine-written content was almost as effective as the real thing, influencing readers across political lines, demographics and media habits.
The danger increases when humans refine the AI’s output. By editing prompts or curating the best articles, researchers found GPT-3 propaganda could be as persuasive, or even more persuasive, than authentic state-produced material.
Many respondents also rated the AI text as more credible and better written, suggesting it could blend easily into today’s disinformation-saturated online environment.
But experts warn the threat is likely greater than the study shows.
The researchers used GPT-3, a system long since superseded by more capable models, and tested only the effect of a single article. In reality, AI makes it simple and cheap to flood the internet with endless variations of the same narrative, exploiting the “illusory truth effect”, where repetition makes falsehoods feel true.
The authors argue that future work should focus on detecting the infrastructure used to spread AI propaganda, such as bot networks and fake outlets, and explore interventions like clearly labelling AI content.
Meanwhile, the 2025 Real Concerns report, commissioned by Real Insurance, found more than half of Australians (52 per cent) are now “very” or “extremely” concerned about fake news.
Alarmingly, nearly a quarter (23 per cent) admitted they “rarely” or “never” fact-check political stories before sharing them, leaving them especially vulnerable to manipulation.
