How foreign influence campaigns manipulate your social media feeds

We estimate that at least 10,000 accounts like these were active daily on the platform, and that was before X CEO Elon Musk dramatically cut the platform’s trust and safety teams. We also identified a network of 1,140 bots that used ChatGPT to generate humanlike content to promote fake news websites and cryptocurrency scams.

In addition to posting machine-generated content, harmful comments and stolen images, these bots engaged with each other and with humans through replies and retweets. Current state-of-the-art large language model content detectors are unable to distinguish between AI-enabled social bots and human accounts in the wild.

Model misbehavior

The consequences of such operations are difficult to evaluate: collecting the relevant data at scale is hard, and experiments that would influence online communities raise ethical problems. It is therefore unclear, for example, whether online influence campaigns can sway election outcomes. Yet it is vital to understand society's vulnerability to different manipulation tactics.

In a recent paper, we introduced a social media model called SimSoM that simulates how information spreads through the social network. The model has the key ingredients of platforms such as Instagram, X, Threads, Bluesky and Mastodon: an empirical follower network, a feed algorithm, sharing and resharing mechanisms, and metrics for content quality, appeal and engagement.
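
To make those moving parts concrete, here is a minimal illustrative sketch in Python of the same ingredients: a follower network, bounded feeds, posting and resharing, and per-message quality and appeal scores. This is our own simplification, not SimSoM's actual code; every name and parameter value below is an assumption for illustration.

```python
import random
from collections import deque

class Agent:
    """One account: a bounded, newest-first feed and a list of followers."""
    def __init__(self, agent_id, feed_size=15):
        self.id = agent_id
        self.followers = []                  # IDs of agents who see this agent's posts
        self.feed = deque(maxlen=feed_size)  # bounded feed; old items fall off the end

def activate(agent, agents, p_reshare=0.5):
    """One activation: reshare a feed item (weighted by appeal) or post anew."""
    if agent.feed and random.random() < p_reshare:
        # Engagement mechanism: more appealing messages get reshared more often.
        message = random.choices(list(agent.feed),
                                 weights=[m["appeal"] for m in agent.feed])[0]
    else:
        # A new message with independent quality and appeal scores in [0, 1].
        message = {"quality": random.random(), "appeal": random.random()}
    for follower_id in agent.followers:
        agents[follower_id].feed.appendleft(message)

# Toy follower network: 100 agents, each following 10 random others.
agents = {i: Agent(i) for i in range(100)}
for a in agents.values():
    for j in random.sample([k for k in agents if k != a.id], 10):
        agents[j].followers.append(a.id)  # a follows j, so a sees j's posts

for _ in range(10_000):                   # run the simulation
    activate(random.choice(list(agents.values())), agents)
```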

SimSoM allows researchers to explore scenarios in which the network is manipulated by malicious agents who control inauthentic accounts. These bad actors aim to spread low-quality information, such as disinformation, conspiracy theories, malware or other harmful messages. We can estimate the effects of adversarial manipulation tactics by measuring the quality of information that targeted users are exposed to in the network.
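
Continuing the sketch above, one simple way to operationalize that measurement, again our own simplification rather than SimSoM's actual metric, is the mean quality of the messages currently sitting in the targeted users' feeds:

```python
def average_exposure_quality(agents, target_ids):
    """Mean quality of messages currently in the targeted users' feeds."""
    messages = [m for i in target_ids for m in agents[i].feed]
    return sum(m["quality"] for m in messages) / max(len(messages), 1)

targets = range(20)  # hypothetical targeted community
print(f"Average feed quality for targets: "
      f"{average_exposure_quality(agents, targets):.3f}")
```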

We simulated scenarios to evaluate the effect of three manipulation tactics. First, infiltration: having fake accounts create believable interactions with human users in a target community, getting those users to follow them. Second, deception: having the fake accounts post engaging content, likely to be reshared by the target users. Bots can do this by, for example, leveraging emotional responses and political alignment. Third, flooding: posting high volumes of content.
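
In a simulation like the sketch above, each tactic becomes a knob on a bot agent's behavior. The mapping below is our own illustration of that idea, not the paper's actual parameterization; sweeping these knobs and re-measuring exposure quality is how one would estimate each tactic's marginal effect.

```python
def activate_bot(bot, agents, infiltration=0.01, deception=1.0, flooding=3):
    """One bot activation, with one illustrative knob per tactic:
    infiltration -- chance of gaining a new human follower this step
    deception    -- appeal of the bot's messages (their quality is always 0)
    flooding     -- number of messages posted per activation
    """
    # Infiltration: occasionally persuade a random human to follow the bot.
    if random.random() < infiltration:
        bot.followers.append(random.choice(list(agents)))
    # Deception + flooding: push many appealing, zero-quality messages.
    for _ in range(flooding):
        message = {"quality": 0.0, "appeal": deception}
        for follower_id in bot.followers:
            agents[follower_id].feed.appendleft(message)
```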
