Some bots are useful and operate entirely within legal bounds, as is the case with search engine crawlers. Malicious ones, however, can disseminate spam, clutter websites’ feedback forms, overburden servers, and congest critical communication channels. These programs pursue purely nefarious objectives: they can harvest sensitive information, make fraudulent requests to financial systems, steal passwords and promo codes, and execute DDoS attacks. In addition, some bots are capable of collecting and stealing personally identifiable information (PII), credentials, or system files. This data can be weaponized for phishing, spamming, or planning high-profile cyberattacks later on.
To make matters worse, these programs are getting smarter and can bypass basic security mechanisms to facilitate various cyberattacks.
Luckily, bot mitigation and bot management systems exist that allow webmasters to specify which automated programs may access their resources. This way, benign bots can seamlessly interact with a website while malicious ones are blocked.
Bot management plays an important role in maintaining the stable performance and robust security of a website. If malicious bots aren’t barred from accessing it, they can drain server resources, causing a denial-of-service condition or slowing down connections for normal users.
At the same time, bot management systems that mistake good bots for bad ones can badly impact an organization’s business workflows. For example, blocking search engine crawlers can cause the traffic, conversions, and revenue to take a nosedive.
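One practical way to avoid misclassifying search engine crawlers is the reverse-then-forward DNS check that major engines document for verifying their bots. The sketch below assumes an illustrative list of trusted crawler domains; the resolver functions are injectable so the logic can be exercised without network access.

```python
import socket

# Illustrative list of domains that trusted crawlers resolve into.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

def is_verified_crawler(ip,
                        reverse_dns=lambda ip: socket.gethostbyaddr(ip)[0],
                        forward_dns=socket.gethostbyname):
    """Return True only if the reverse DNS of `ip` lands in a trusted domain
    AND the forward lookup of that hostname resolves back to the same `ip`."""
    try:
        host = reverse_dns(ip)
    except OSError:
        return False
    if not host.endswith(TRUSTED_SUFFIXES):
        return False  # claimed crawler does not belong to a trusted network
    try:
        return forward_dns(host) == ip
    except OSError:
        return False
```

A client that merely spoofs a crawler’s User-Agent fails this check, because the attacker does not control the reverse DNS records for its IP range.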
Systems that protect websites against bots
The evolution of anti-bot systems is keeping pace with the ever-advancing capabilities of these malicious automated programs and the ways attackers use them. Nowadays, the task of bot management is twofold: to identify intrusive bots that increasingly mimic the actions of humans; and to distinguish malicious bots from legitimate ones, which can be hugely important for an organization’s day-to-day operations.
The following three approaches are currently used to detect and manage bots:
- Static approach examines the headers and other content of web requests. Since this is a passive method, it can only spot known bots whose signatures are already documented.
- Challenge-based approach actively tests clients with checks that are easy for humans but hard to automate, such as CAPTCHA puzzles or JavaScript execution tests, which helps flag unknown bots as well.
- Behavioral approach assesses a user’s activity and compares it with known behavior patterns. This method uses multiple profiles to categorize behavior and distinguish between human users, harmless bots, and dodgy bots.
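To make the first and last approaches concrete, here is a minimal sketch that combines a static User-Agent signature check with a simple behavioral rate heuristic. The signature list and the threshold of 30 requests per 10-second window are illustrative assumptions, not values from any particular product.

```python
import time
from collections import defaultdict, deque

KNOWN_BAD_AGENTS = ("sqlmap", "python-requests", "masscan")  # assumed signatures

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 30  # assumed human-like ceiling

_history = defaultdict(deque)  # ip -> timestamps of recent requests

def classify_request(ip, user_agent, now=None):
    """Return 'bot' or 'human-like' for a single incoming request."""
    now = time.monotonic() if now is None else now
    # Static approach: match the User-Agent header against known signatures.
    if any(sig in user_agent.lower() for sig in KNOWN_BAD_AGENTS):
        return "bot"
    # Behavioral approach: compare the request rate against a human profile.
    q = _history[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # discard requests outside the sliding window
    return "bot" if len(q) > MAX_REQUESTS_PER_WINDOW else "human-like"
```

Real bot management systems replace these two rules with large signature databases and machine-learned behavior profiles, but the layering principle is the same.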
The most effective protection strategies combine all these methods to identify bots as efficiently as possible, including unknown ones or those that exhibit dynamic behavior. Bot management systems additionally use a mix of security and machine learning technologies to ensure maximum accuracy of blocking malicious activity while giving the green light to inoffensive bots.
Let’s now zoom into several types of bot attacks that can entail serious consequences for any organization.
DDoS attacks hinge on numerous compromised devices to send requests to servers in bulk, thus draining the bandwidth or overloading the computation power. As a result, websites, applications, or services can become unavailable to authorized users.
Credential stuffing is an attack in which criminals use bots to automatically enter leaked or stolen usernames and passwords in an attempt to break into accounts on a web resource. The bot cycles through different combinations until a match is found. Such attacks succeed because users tend to reuse sign-in credentials across multiple accounts on different services.
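A telltale sign of the cycling behavior described above is one source IP attempting logins with many different usernames. The following sketch flags that pattern; the limit of five distinct usernames per IP is an illustrative assumption.

```python
from collections import defaultdict

DISTINCT_USERNAME_LIMIT = 5  # assumed threshold before the IP is challenged

_attempts = defaultdict(set)  # ip -> set of usernames tried from that ip

def record_login_attempt(ip, username):
    """Record one login attempt; return True when the IP has cycled through
    too many different usernames and should be challenged or blocked."""
    _attempts[ip].add(username)
    return len(_attempts[ip]) > DISTINCT_USERNAME_LIMIT
```

Production defenses typically pair a counter like this with CAPTCHAs, login rate limits, and checks of submitted passwords against known breach corpora.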
Cybercriminals can also use bots to enter promo codes or generate fake gift cards. Such codes or cards can then be converted into money.
Furthermore, bad bots can scan websites, social networks, or forums to find people’s personal information. Attackers can abuse this information to boost the effectiveness of phishing and banking fraud schemes.
Finally, cybercriminals use bots to retrieve commercial secrets from corporate servers. These can include branding content, product design details, or partnership offers. E-commerce portals are particularly susceptible to such exploitation.
The state of the global market for anti-bot systems
At this point, the countermeasures for bot attacks run the gamut from simple mechanisms, such as CAPTCHA, hidden fields on web pages, and form fill-out time assessment, to more complex ones. The latter include traffic filtering services, Web Application Firewalls (WAFs), digital fingerprinting of devices, anti-malware, network security solutions, and behavior analysis systems. Many vendors use a comprehensive approach and apply a series of these techniques at once.
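Two of the simple mechanisms just mentioned, hidden fields and form fill-out time assessment, can be sketched in a few lines. The hidden field acts as a honeypot: browsers never show it, so any submitted value betrays a bot. The field name `website` and the 2-second minimum fill time are illustrative assumptions.

```python
MIN_FILL_SECONDS = 2.0  # assumed minimum time a human needs to fill the form

def looks_automated(form_data, rendered_at, submitted_at):
    """form_data: dict of submitted fields, including the hidden 'website'
    honeypot field that legitimate browsers leave empty.
    rendered_at / submitted_at: timestamps in seconds."""
    if form_data.get("website"):
        return True  # honeypot field was filled in: almost certainly a bot
    if submitted_at - rendered_at < MIN_FILL_SECONDS:
        return True  # form returned at superhuman speed
    return False
```

As the surrounding text notes, checks this simple are easily defeated by modern bots, which is why they are layered with traffic filtering, fingerprinting, and behavioral analysis.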
In its report “Hype Cycle for Application Security, 2021”, technology research company Gartner identified the following key driving forces for the contemporary bot protection market:
- The need to prevent large-scale automated attacks executed with simple bot programs.
- The realization by InfoSec and IT executives that the benefits of bot management systems are not limited to security but also extend to the key business areas, which propels the demand for these systems.
- Concerns about direct financial and potential reputational damage caused by a scarce ability to differentiate bots from normal users and customers.
- The capability of bot management systems not only to detect bad bots but also to maintain a decent level of the user experience by allowing legitimate apps and benign crawlers to work properly.
According to Gartner’s findings, the penetration of these products in the information security market is fairly high, ranging from 20% to 50%. The above-mentioned report says that the bot management market segment is emerging from the trough of disillusionment caused by excessive expectations and will reach the plateau of productivity over the next few years.
Forrester, another well-known market research firm, also analyzed the bot management market and presented a chart in its report “The Forrester New Wave: Bot Management, Q1 2020”, in which it ranked the vendors according to their strategy, current offering, and market presence.
Netacea, Akamai Technologies, Imperva, and PerimeterX were among the leaders. DataDome and White Ops (now rebranded as Human) were named strong performers. Their main contenders included Shape Security and Radware. All others (Cloudflare, Alibaba Cloud, Instart, AppsFlyer, and Reblaze) fell into the category of challengers.
What does the future of anti-bot systems hold?
The problem of malicious bots is not going anywhere in the foreseeable future. These automated programs are becoming increasingly complex. Although simple mechanisms like CAPTCHA, hidden fields, and the evaluation of form completion time used to stop this foul play in its tracks, they can’t do the trick anymore. Consequently, more sophisticated systems based on traffic analysis, digital fingerprinting of devices, and behavioral mechanisms are required.
As previously stated, the number of bots on the Internet is growing steadily. This trend applies to both legitimate bots and troublemaking ones. Therefore, the use of purpose-built solutions to manage and protect against bots is a necessity, and this demand will continue to increase.
The problem of separating the wheat from the chaff – that is to say, benign bots and regular users from malicious programs – will also remain relevant, and it is increasingly important to strike a balance between security, the accessibility of web resources, and the quality of the services provided to normal users.