In an era where information is power and disinformation can be a weapon, social media and comment bots paired with artificial intelligence (AI) are starting to play a pivotal role in shaping and dividing public opinion. Recent events have highlighted the growing threat of AI-driven disinformation campaign bots, such as the one disrupted by the U.S. Justice Department, which was attributed to the Russian government. In this blog post, we will examine the insidious use of AI bots in disinformation efforts, how social media platforms can protect their APIs (Application Programming Interfaces) and websites from this emerging threat, and how law enforcement can help mitigate the most egregious bots.
Understanding AI-driven disinformation campaigns.
Goals of disinformation bot herders.
Some of the advantages of free societies are the freedom of speech for their citizens and the exchange of ideas. Everybody has an opinion they can state publicly. This leads to a consensus on issues, the biggest example of which is the election of public officials.
However, rogue countries see this as a weakness. If everybody has an opinion, it could lead to chaos, as fringe ideas or the sheer volume of negative sentiment drown out public debate and the consensus process. Not satisfied with merely disagreeing with the philosophy, some of these rogue countries will use freedom of speech as a tool against the countries that espouse it.
One of the ways that rogue nations can fragment the consensus process is with disinformation bots. By supporting extremist views on both sides of contentious issues, they can create divisions among citizens and destabilize the united front of democratic nations like the US, EU, and their allies and partners. When the global community stands united against rogue nations and their governments, they can effectively respond to these types of geopolitical threats.
The mechanics of disinformation bots.
As with any threat, the key to disruption is understanding the process that the attackers follow.
In the case of AI-backed disinformation bots, the following tools and methods are used:
- Building a Bot: Bot herders rent cloud services or exploit vulnerabilities in internet-facing devices, uploading and installing software on them to carry out specific goals. Bots can be as simple as automated web-scraping scripts or as complex as a bot that manipulates the social media mobile application.
- Solve for Geolocation: Positioning bots within specific geographical regions helps them blend in with the normal user population in the target country. This can be done through proxies inside the target country, GPS spoofing on a mobile device running the application, or compromising systems inside the target country.
- Social Media Integration: Connecting bots to social media platforms, video sites, news outlet comment pages, or online forums—any service that allows free and open debate—through APIs or web scraping gives them access to read, like, subscribe, and post.
- Create Bot Accounts and Simulate User Behavior: This ensures that bots interact frequently with other bots and real users to create a “lived-in” appearance. It might involve stealing or reposting content from other accounts or adding simple, short comments like “good job.”
- Give the Bots Political Content: Early versions of disinformation bots used templates and standard key phrases for their posts. Media-savvy humans could easily pick out bots in their social media feeds.
- Incorporating AI Large-Language Models: Feeding bots AI models trained on political rhetoric and general-purpose writing and grammar allows them to generate convincing content that is unique to each post and mimics the language of real social media users.
Social media is API-driven.
Social media and other online forums have evolved over the past 10 years to be very API-driven. There are three primary ways that social media and their cousins use APIs:
- Mobile applications are API-Driven. Programmers use APIs to develop and deploy apps that provide an interface for social media services. These APIs do not have to be queried from the mobile application itself: a smart bot can replicate the mobile application's API behavior to read and post content.
- Most Social Media Websites are a skin over APIs. This technique is called a "Single-Page Application" (SPA): a nearly blank base page that is populated inside the browser from the results of API queries.
- Advertising Integration. Social media platforms rely heavily on advertising revenue, leading them to partner with advertisers to integrate their ads into the user experience through APIs.
Bots love APIs.
Most people think of bots as web scrapers that request the HTML and text content on a website and then parse through that content to derive data. While this is true, and those bots still exist, most bots today are built to use APIs. There are several reasons for this trend:
- APIs are designed to be used by friendly bots and are supported by a client library built by the social media site. This changes the challenge from one of identifying bots to one of identifying good versus bad bots.
- API responses are built to be easily consumed by bots. This reduces the amount of HTML parsing a bot must do to consume content off a website.
- APIs do not have contextual items such as JavaScript or browser events that help to identify users in web browsers. This makes it harder to identify requests from bots in API traffic.
- APIs have smaller requests and responses, which means that a bot can send more requests in less time.
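The contrast between scraping and API consumption is easy to see in code. The sketch below, using only hypothetical payloads (the JSON shape and the HTML classes are invented for illustration, not taken from any real platform), extracts the same post two ways: the API response is one `json.loads` call, while the HTML requires a stateful parser.

```python
import json
from html.parser import HTMLParser

# The same post delivered two ways: as an API response and as rendered HTML.
api_response = '{"posts": [{"author": "user123", "text": "good job"}]}'
html_page = (
    '<html><body><div class="post">'
    '<span class="author">user123</span><p>good job</p>'
    '</div></body></html>'
)

# Consuming the API takes one line: the response is already structured data.
api_posts = json.loads(api_response)["posts"]

# Consuming the HTML requires a parser that tracks where it is in the page.
class PostScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_author = False
        self.in_text = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") == "author":
            self.in_author = True
            self.posts.append({"author": "", "text": ""})
        elif tag == "p":
            self.in_text = True

    def handle_endtag(self, tag):
        self.in_author = self.in_text = False

    def handle_data(self, data):
        if self.in_author:
            self.posts[-1]["author"] += data
        elif self.in_text:
            self.posts[-1]["text"] += data

scraper = PostScraper()
scraper.feed(html_page)
```

Both paths yield identical data, but the scraper breaks whenever the page layout changes, while the API client keeps working — one reason bot herders prefer APIs.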
Combating AI-driven disinformation bots.
The role of law enforcement.
As we have seen with other large-scale threats such as LockBit ransomware, DDoS booter services, and malware infections such as DNS Changer, law enforcement can conduct surveillance and takedown operations to disrupt cybercriminal operations. The news that the DoJ has started to prioritize takedowns of disinformation bots is a positive sign.
However, social media and related sites are all privately owned. This means that social media operators need to request law enforcement assistance and cooperate with investigators in a way that still preserves the rights and privacy of their user populations. This typically means that the social media site needs to analyze bots and their supporting infrastructure and only hand bot information over to law enforcement with a high degree of assurance.
Bot management for social media.
Several mechanisms in modern Bot Management solutions can detect bots that interact with social media APIs and websites:
- Bot Signatures and Intent: Detecting bots by analyzing their behavior and identifying malicious patterns.
- Enforcing Workflows: Requiring an API client to interact with several APIs in sequence. For instance, a client must read a social media post before it can reply to it.
- Requiring Authenticated Use: Implementing strict authentication measures to correlate actions with user accounts, which can be disabled if found suspicious.
- Rate-Controls or Equivalent for Critical Functions: By setting limits on the velocity of account creation, login, and posting, bots can be slowed.
- Identifying and Mitigating Vulnerabilities: Vulnerabilities allow bots to bypass other controls such as authentication and account creation.
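Of the mechanisms above, rate control is the most straightforward to illustrate. The sketch below is a minimal sliding-window rate limiter, not the implementation any particular bot-management product uses: it tracks recent event timestamps per key (account, source IP, etc.) and rejects events that exceed a velocity limit, which is the core idea behind throttling account creation, login, and posting.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateControl:
    """Sliding-window rate control keyed by account or source IP."""

    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self.events = defaultdict(deque)  # key -> timestamps of recent events

    def allow(self, key: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.events[key]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_events:
            return False  # over the velocity limit: throttle or challenge
        q.append(now)
        return True

# Example: allow at most 3 posts per account per 60 seconds.
limiter = RateControl(max_events=3, window_seconds=60)
results = [limiter.allow("acct-1", now=t) for t in (0, 1, 2, 3, 70)]
# The fourth post (at t=3) is rejected; by t=70 the window has cleared.
```

In production, the same velocity check would typically live at the API gateway and feed a challenge (CAPTCHA, step-up authentication) rather than a hard block, to reduce impact on legitimate bursts of activity.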
Once a bot is identified.
There are also several actions that social media operators can take on their platform when they discover bots:
- Pivot Analysis: When a bot is discovered, the social media service can trace its connections, friends, and followers to find other bots in its network.
- Network Analysis: By identifying the source IP addresses of social media bots, operators can discover additional accounts that the same bot infrastructure is using.
- Blocking Bots: Bots can be blocked by disabling their logins or by blocking their access to the API and website.
- Law Enforcement Takedown: For larger botnets, site operators can cooperate with law enforcement to arrest bot herders and seize their infrastructure.
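Pivot analysis, the first action above, is essentially a bounded graph traversal from a confirmed bot account. The sketch below is illustrative only: the follower graph and account names are invented, and a real platform would weight edges by behavioral signals rather than flag every account within reach. It performs a breadth-first walk to collect accounts within a few hops of the seed for manual review.

```python
from collections import deque

# Hypothetical follower graph: account -> set of accounts it follows.
follows = {
    "bot-seed": {"bot-2", "real-user-1"},
    "bot-2": {"bot-3", "bot-seed"},
    "bot-3": {"bot-2"},
    "real-user-1": {"real-user-2"},
    "real-user-2": set(),
}

def pivot(graph, seed, max_hops=2):
    """Breadth-first traversal from a confirmed bot, flagging accounts
    within max_hops as candidates for review (not automatic suspension)."""
    seen = {seed: 0}  # account -> hop distance from the seed
    queue = deque([seed])
    while queue:
        acct = queue.popleft()
        if seen[acct] == max_hops:
            continue  # do not expand beyond the hop limit
        for neighbor in graph.get(acct, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[acct] + 1
                queue.append(neighbor)
    return {acct for acct in seen if acct != seed}

candidates = pivot(follows, "bot-seed")
```

Note that the traversal deliberately returns candidates, not verdicts: `real-user-1` lands in the result set simply by proximity, which is why pivot analysis feeds a review queue rather than a block list.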
UltraAPI Bot Manager detects and blocks bots.
Vercara’s UltraAPI Bot Manager provides a solution to detect and mitigate automated attacks against APIs and websites, such as AI-driven disinformation bots, providing value in several areas:
- Real-time Protection: Continuous monitoring and protection for APIs and web applications.
- Multi-dimensional Machine Learning: Leveraging the largest API threat database to adapt and respond to new threats.
- Enhanced Security Posture: Providing complete visibility into attacks, reducing incident response time, and improving overall security.
- Cost Efficiency: Saving money and IT resources with minimal false positives and efficient management.
- Quick Deployment and Broad Coverage: Immediate effectiveness without requiring complex integrations, protecting a wide range of applications and devices.
The future of democracy and free society.
The disruption of the Russian government-backed disinformation campaign, which aimed to spread falsehoods about the war in Ukraine, underscores the threat that bot-based social media campaigns pose. By creating fictitious social media profiles that appear to belong to authentic Americans, these bots disseminate propaganda, influence public opinion, and undermine democratic processes.
The integrity of democratic processes and open, honest debate hinges on our ability to combat AI-driven disinformation. Vercara UltraAPI Bot Manager offers robust protection against the sophisticated threats posed by AI-driven disinformation campaigns, preserving the integrity of our digital interactions. Contact us to set up a demo today to learn more about how UltraAPI Bot Manager can safeguard your social media APIs and websites.
Stay informed, stay protected, and let us work together to defend the future of democracy.