Fighting E-Commerce Bots


The online marketplace has consistently been a competitive arena where each request, click, hover, and transaction represents a critical moment to earn revenue, and potentially a loyal customer, for an eCommerce merchant. However, automated programs such as inventory scraping, scalping, and hoarding bots threaten to disrupt these vital moments and jeopardize online transactions.

These bots lead to a wide range of problems such as server unavailability and poor customer experience. eCommerce Application Programming Interfaces (APIs) are no exception; in fact, some bots have started to focus exclusively on APIs because of the advantages they bring.

Types of eCommerce bots. 

There are several types of bots that can cause significant issues for eCommerce companies and disrupt the customer experience. These bots include: 

Inventory scraping bots.  

These bots systematically map all SKUs, product descriptions, and prices on an eCommerce storefront. This data can be used to establish competitive pricing and marketing strategies for a rival merchant or to support a reseller or dropshipping business. 

Scalping bots.  

These bots acquire high-value, limited-release items such as sporting event tickets, sneakers, collectible memorabilia, and gaming consoles by allowing the bot handler to “jump the queue” and purchase these items before other users. These items are then resold on secondary markets at an increased price. Notably, scalping bots were used to buy toilet paper and paper towels during the COVID-19 pandemic.

Inventory hoarding bots.  

These bots create shopping carts and fill them with products but never proceed to checkout. This ties up inventory in the shopping cart system, preventing genuine shoppers from purchasing items that are physically available in the warehouse but not reflected in the online inventory. 

Account takeover and cashout bots.  

These bots test a large number of user logins—typically harvested via phishing or data breaches—to see if they are valid on the site. Once they compromise an account, cyber criminals use the compromised accounts to buy anything they can, such as electronic items or prepaid gift cards. 

Vulnerability scanners.  

These bots test the application to see if it is vulnerable to exploits such as SQL injection or cross-site scripting. This helps bad actors be more effective when they eventually launch an attack against an API. 

And there are millions of other bots with uses such as feeding artificial intelligence models, powering cost-of-living calculators, running translation farms, and generating SEO spam.

APIs for eCommerce are growing. 

Venture capitalist Marc Andreessen famously said, “Software is eating the world.” Today, that could just as easily be “APIs are eating the world” because of how tightly coupled APIs are with modern software. An API, or application programming interface, enables programs, whether running on desktops, mobile devices, servers, or Internet of Things (IoT) devices, to talk to other programs. Internet-accessible APIs are growing rapidly because of the interoperability they enable between services.

Today’s eCommerce company uses three important types of APIs:

API-driven websites.  

These are websites where the base page is blank and is populated via API calls. These APIs can also be called independently to refresh components of the page based on user input. 

Mobile applications.  

Mobile applications are driven heavily by APIs. As a company grows and uses more functionality in their mobile applications, this increases the types of APIs that they deploy to support those applications. 

Business-to-business APIs.  

These APIs are used to connect the application servers of one company with another. For example, an eCommerce provider might use APIs to allow affiliate outlets to promote products or to perform order fulfillment. 

Bots love and use APIs. 

Just as APIs make it easier for automation to interact with a storefront and the inventory and payment systems behind it, APIs also enable unwanted bots to do the same. 

There are several reasons why APIs are harder to secure:

APIs are faster.  

APIs are fast and machine-readable. As a result, a bot using an API can accomplish its goals faster than crawling through a website and dealing with HTML content. 

Lacking context.  

Viewed on a per-request basis, APIs generally lack context: API requests carry only the minimum amount of data necessary.

No knowledge base on protection.  

API developers do not have the years of cumulative knowledge about protection that website developers now do.

Rapid deployment.  

APIs are built and deployed more rapidly than other applications, and security teams are often unaware of the velocity of releases. This can create vulnerabilities that bad actors can then exploit.

No client-side bot detection.  

APIs are mostly not used by web browsers, so in-browser bot protections such as JavaScript telemetry-gathering do not work. 

The impacts of bots on eCommerce sites and APIs. 

All this bot traffic on websites and APIs impacts the company that depends on them for eCommerce. These impacts include: 

Server overload.  

Bots, whether a single bot running aggressively or a large number of low-volume scraping bots, generate many requests to their target. At lower scale, this can create unpredictable server performance, and at higher levels it can cause a denial-of-service condition.

Product unavailability.  

Scalping and inventory hoarding bots reduce the amount of available inventory by buying it or otherwise tying it up so that real users are unable to purchase it.

Unsatisfactory user experience.  

When bots consume server resources and inventory, genuine users often face delays and erratic platform behavior, resulting in lost sales. With a direct connection between page load times and conversion rates, this can have a particularly damaging impact on organizations.  

Rising infrastructure costs.  

Widespread bot intrusion can lead to increased costs for retailers and businesses as they struggle to keep up with the demand, causing a cascading effect on their bottom line. This manifests itself either as higher usage rates and Operational Expenses (OPEX) from network, bandwidth, or CDN (Content Delivery Network) or as increased Capital Expenditures (CAPEX) on servers, routing and switching, and software licenses. 

Brand reputation damage.  

With bots causing chaos on eCommerce sites, businesses run the risk of losing credibility and customer trust, which could affect their future sales and revenue. 

Distorted analytic data.  

With bots masquerading as visitors, website analytics become increasingly detached from reality and no longer reflect true customers and their behavior.

Not all bots are bad. 

There are many legitimate uses for bots, especially when it comes to APIs. In fact, most API clients are friendly bots that you allow into your APIs to help your business grow.

Some good examples of bots: 

Partner and supplier IT systems.  

It is common to have partner and supplier IT systems, such as Enterprise Resource Planning applications, that access your APIs. 

Caching or security bots.  

Some bots are used for their ability to cache content or services to offload expensive API calls from the application server. This helps organizations save on operating costs while improving performance.  

Search engine crawlers.  

Google, Bing, Yahoo, and others crawl the internet to index it. They do not aimlessly browse websites all day long; rather, they act as bot clients requesting pages and parsing the code, looking for links and keywords.

Monitoring services.  

Many businesses use monitoring tools like Pingdom or New Relic that periodically make requests against your site to ensure its online availability. 

Vulnerability scanners.  

PCI-DSS requires that you use an Approved Scanning Vendor (ASV) for vulnerability scanning of your web applications to identify and remediate vulnerabilities. 

Affiliates.  

Affiliate bots are used to monitor certain aspects of your website, such as pricing changes, so the affiliate can update their site and keep it fresh with your latest and greatest offerings. 

Social media sites.  

Many businesses use social media automation tools like Hootsuite or Buffer to schedule posts on various social platforms and to get a preview or thumbnail of the content. 

Your own sub-sites or mobile apps.  

It is common to have multiple sub-sites or mobile apps that all use the same API for data retrieval and processing. 

Other uses for bots include customer service chatbots, virtual assistants, and automated messaging bots. These can help streamline communication with customers and provide quick responses to inquiries. Bots are also becoming more prevalent in eCommerce, with many retailers using them for tasks such as product recommendations, order tracking, and personalized shopping experiences.

This should change your approach to bot management on your APIs and web applications. Not all bots should be automatically banned from accessing your online properties. It is important for businesses to carefully consider the purpose of each bot request and determine its potential impact on their operations before taking any action. By understanding the diverse types of good bots and their uses, businesses can effectively manage their bot traffic without negatively affecting their business operations, profitability, or reputation.  

Detecting bots on eCommerce APIs and websites. 

The first step in defending your organization against bad bots is detecting them. Some effective strategies include:  

IP and CIDR reputation.  

Many bots come from cloud service providers or hosting providers. As a result, IP and CIDR reputation provides an effective way to screen out a large amount of bad bot traffic. This method allows you to block or restrict access to IP ranges that have a history of malicious activity. 
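As a minimal sketch of this technique, the check below tests a client IP against a blocklist of CIDR ranges using Python's standard `ipaddress` module. The ranges shown are placeholder documentation networks; a real deployment would load continuously updated ranges from a commercial or open-source reputation feed.

```python
import ipaddress

# Hypothetical blocklist of CIDR ranges with a history of bot traffic.
# These are IETF documentation ranges used as stand-ins for a real feed.
BLOCKED_CIDRS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client IP falls inside any blocked CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_CIDRS)

print(is_blocked("203.0.113.55"))  # True: inside a blocked /24
print(is_blocked("192.0.2.10"))    # False: not on the list
```

In practice this lookup runs in middleware or at the edge, before the request reaches the application server.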

User-agent analysis.  

By analyzing the user-agent header in HTTP requests, businesses can identify and block bots that are using outdated or fake user-agents. User-agent analysis can also help differentiate between legitimate human users and automated bots. 
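A simple form of user-agent analysis can be sketched as pattern matching against known automation signatures. The patterns below (common HTTP client libraries and the empty user-agent) are illustrative only; production lists are far longer and maintained continuously.

```python
import re

# Hypothetical deny patterns: common automation clients plus the
# missing/empty user-agent, which is itself a strong bot signal.
SUSPICIOUS_UA_PATTERNS = [
    re.compile(r"python-requests", re.I),
    re.compile(r"curl/", re.I),
    re.compile(r"^$"),  # matches only an empty user-agent string
]

def is_suspicious_user_agent(user_agent: str) -> bool:
    """Return True if the user-agent matches a known automation pattern."""
    return any(p.search(user_agent) for p in SUSPICIOUS_UA_PATTERNS)
```

Note that sophisticated bots spoof browser user-agents, so this check is best combined with the other signals described in this section.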

Rate controls.  

Another common method for detecting bots is by implementing rate controls, which limit the number of requests a single IP or user can make within a specific period. This helps prevent abusive bot behavior and protects against DDoS (Distributed Denial of Service) attacks. However, it is important to note that legitimate users may also be affected by rate controls if they exceed the set limit. 
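One common way to implement a rate control is a sliding-window limiter. The sketch below, with hypothetical limits (100 requests per IP per 60 seconds), keeps recent request timestamps per IP and rejects anything over the threshold.

```python
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 60
MAX_REQUESTS = 100  # hypothetical per-IP limit; tune to real traffic

_recent = defaultdict(deque)  # ip -> timestamps of requests in the window

def allow_request(ip: str, now: Optional[float] = None) -> bool:
    """Sliding-window limiter: at most MAX_REQUESTS per WINDOW_SECONDS per IP."""
    if now is None:
        now = time.monotonic()
    window = _recent[ip]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit: reject, throttle, or challenge
    window.append(now)
    return True
```

In a distributed deployment the counters would live in shared storage such as Redis rather than in process memory, but the windowing logic is the same.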

Behavior-based detection.  

Some businesses use machine learning algorithms and behavioral analysis to detect suspicious activity and differentiate between human and bot behavior. This approach looks at factors such as mouse movement, keyboard typing patterns, and browsing history to determine if a request is coming from a bot or a real user. 

Authentication and authorization.  

APIs and web applications can also require bots to authenticate themselves before accessing certain resources. This can be done through various methods, such as API keys, OAuth tokens, or other authentication mechanisms. By only allowing authorized bots to access sensitive data and functionalities, businesses can prevent unauthorized bot activity. 
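A bare-bones API key check might look like the sketch below. The key store and key names are hypothetical; production systems keep hashed keys in a database or secrets manager, and the constant-time comparison guards against timing side channels.

```python
import hmac

# Hypothetical key store mapping issued API keys to client identities.
ISSUED_KEYS = {
    "key-partner-erp-001": "partner-erp",
    "key-mobile-app-002": "mobile-app",
}

def authenticate(api_key: str):
    """Return the client identity for a valid key, or None if unknown."""
    for known_key, client in ISSUED_KEYS.items():
        # hmac.compare_digest avoids leaking key length via timing.
        if hmac.compare_digest(known_key, api_key):
            return client
    return None
```

Once a request is tied to an identity, per-client authorization rules and rate limits can be applied to it.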

Workflow detection.  

Another approach to detecting bots is by analyzing the workflow or pattern of requests. Bots often follow a specific set of actions and requests, which can be identified and blocked by businesses. This method is especially useful in preventing automated attacks such as credential stuffing, where bots attempt to use stolen login credentials to access user accounts. 
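As one illustrative workflow rule (an assumption, not a universal signal): real shoppers almost always browse products or search before checking out, while purchase bots often jump straight to checkout. The path prefixes below are hypothetical.

```python
# Hypothetical "browsing" endpoints a genuine shopper would normally hit
# before reaching checkout.
EXPECTED_BROWSE_PREFIXES = ("/product", "/search", "/category")

def is_suspicious_workflow(request_paths: list) -> bool:
    """Flag sessions that reach checkout without any browsing step."""
    reached_checkout = any(p.startswith("/checkout") for p in request_paths)
    browsed = any(p.startswith(EXPECTED_BROWSE_PREFIXES) for p in request_paths)
    return reached_checkout and not browsed
```

Credential-stuffing detection works similarly: a session that hits only the login endpoint, hundreds of times, matches no human workflow.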

Business logic enforcement.  

Businesses can also enforce specific business logic to detect and prevent bot activity. This involves creating rules and thresholds for certain actions or requests that are not considered normal behavior for a legitimate user. For example, a rule could be set to flag any login attempts from multiple IP addresses within a brief time span, which is often indicative of bot activity. 
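The multiple-IP login rule mentioned above can be sketched as follows. The window length and threshold are hypothetical values that a business would tune to its own traffic.

```python
from collections import defaultdict

WINDOW_SECONDS = 300   # hypothetical: 5-minute window
MAX_DISTINCT_IPS = 3   # hypothetical threshold per account

_login_attempts = defaultdict(list)  # account -> [(timestamp, ip), ...]

def record_login(account: str, ip: str, now: float) -> bool:
    """Record a login attempt and return True if the account should be
    flagged: too many distinct IPs attempted login within the window."""
    attempts = _login_attempts[account]
    attempts.append((now, ip))
    recent_ips = {addr for ts, addr in attempts if now - ts <= WINDOW_SECONDS}
    return len(recent_ips) > MAX_DISTINCT_IPS
```

Flagged accounts can then be routed to step-up authentication rather than blocked outright, which limits friction for legitimate travelers on changing networks.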

CAPTCHA challenges.  

This is also known as the Completely Automated Public Turing test to tell Computers and Humans Apart. It is a challenge-response test designed to determine if the user is human. This method has been widely used on eCommerce sites to prevent bots from accessing sensitive information and completing transactions. 

Bot detection services.  

For businesses that do not have the resources or expertise to implement their own bot detection methods, there are several third-party services available that specialize in detecting and blocking malicious bots. These services use a combination of techniques such as IP reputation, behavioral analysis, and machine learning algorithms to effectively identify and block bots. 

As the tactics used by malicious bots constantly evolve, it is critical for businesses to continuously monitor their traffic and update their bot detection strategies accordingly. 

Effective bot management means profitability. 

While bots have their positive uses in the eCommerce world, they can also pose significant challenges for businesses, especially when it comes to APIs. By implementing various bot detection techniques, businesses can better protect their business model, inventory, and IT resources. Not all bots are bad, but it is crucial to have measures in place to differentiate between helpful and harmful ones. Success for modern eCommerce merchants lies in managing bots effectively, minimizing the downsides of malicious bots, and optimizing the abilities of friendly bots.

To learn how Vercara’s UltraAPI solutions can help you detect and manage bots, visit our solutions page.

Published On: June 25, 2024
Last Updated: July 24, 2024