As the World Wide Web turns 25, bots reign supreme

As the World Wide Web turns 25, we’ve just had another reminder of how far we’ve come. On March 10th, just two days before this milestone, Cryptome reported that Twitter had been overrun by a massive botnet that was sending thousands of spam tweets. Although the party responsible for this creative attack, along with the exact volume of spam distributed, remains unclear, a public assault of this kind raises questions about bots on the Internet and what they’re capable of.

Botnets are, at their core, herds of hijacked computers engineered to be operated remotely. The bots masquerade as humans while performing repetitive automated tasks far faster than any person could. Botnets make headlines around the world, however, because they have evolved into the tool of choice for the modern hacker community.

The most damaging characteristics of a botnet are its speed and global distribution. Often, by the time an attack has been noticed, the damage is already done, and the response is entirely reactive: focused on damage control, not on uncovering the who or the why.

While the Twitter botnet has yet to distribute any malicious content, its existence (and widespread proliferation) undermines the legitimacy of the social media giant. Twitter’s user base has swelled considerably in recent years to approximately 250 million users, roughly half of Facebook’s, and is still climbing. In tandem with this growth, however, the company has been beset by accusations of “fake accounts,” or bots. The exact number of these accounts can only be speculated upon, but their existence is potentially harmful to other users: they often link to insecure websites and diminish the credibility of legitimate users.

In December 2013, after tracking 1.45 billion visits to websites over a 90-day period, Incapsula reported that bots now account for 61.5 percent of website traffic, a 21 percent increase from 2012. The majority of that growth was attributed to good bots (i.e. certified agents of legitimate services, like search engines), whose share of traffic rose from 20 percent to 31 percent in 2013; even so, more than half of the bots detected were malicious.

While the report found fewer spammers and hacking tools, the number of “impersonators,” unclassified bots with hostile intentions, increased by eight percent. Unlike other malicious bots, which originate from known malware threats with a dedicated developer, a GUI, a “brand” name, and a patch history, impersonators are typically custom-made, designed and targeted for a single malicious attack or a series of coordinated ones.

So how do companies design a security strategy against bot attacks? It takes an overall skepticism and constant attention to profiling a website’s traffic. When dealing with bots, you always have to play “devil’s advocate” and assume that everything you see is false until it is reliably proven otherwise.
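To make that concrete, a first pass at traffic profiling can be as simple as tracking per-client request rates and treating anything inhumanly fast as suspect. The Python sketch below illustrates that guilty-until-proven-otherwise posture; the window and threshold values are illustrative assumptions, not recommendations.

```python
import time
from collections import defaultdict, deque

# Illustrative values -- real traffic profiling is tuned per site.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120

_recent = defaultdict(deque)  # client IP -> timestamps of recent requests

def is_suspicious(client_ip, now=None):
    """Treat every client as suspect until its pace looks human.

    Records the request and returns True if this IP has exceeded a
    plausibly human request rate within the sliding window.
    """
    now = time.time() if now is None else now
    window = _recent[client_ip]
    window.append(now)
    # Discard timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS_PER_WINDOW
```

A real profiler would fold in many more signals (user agents, navigation patterns, JavaScript support), but the skeptical posture is the same.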

Keeping your online business or web properties bot-free is similar to ensuring that your house is protected: keep “locks” on your website’s doors by using secure passwords and stronger forms of authentication, and prevent strangers from snooping around by interrogating their behavior with a Web Application Firewall.
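For a sense of what that interrogation looks like, here is a toy request filter in the spirit of a Web Application Firewall. The signature list and function name are assumptions for illustration only; production WAFs ship far richer rule sets.

```python
import re

# Illustrative signatures only -- nowhere near a real WAF rule set.
INJECTION_PATTERNS = [
    re.compile(r"union\s+select", re.IGNORECASE),  # SQL injection probe
    re.compile(r"<script", re.IGNORECASE),         # reflected XSS probe
    re.compile(r"\.\./"),                          # path traversal probe
]
SCRIPTED_AGENT_PREFIXES = ("python-requests", "curl", "libwww")

def waf_allows(path, query, user_agent):
    """Return True if the request looks safe enough to pass through."""
    payload = f"{path}?{query}"
    if any(p.search(payload) for p in INJECTION_PATTERNS):
        return False  # known attack signature in the URL
    ua = user_agent.strip().lower()
    if not ua or ua.startswith(SCRIPTED_AGENT_PREFIXES):
        return False  # empty or obviously scripted user agent
    return True
```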

Take advantage of the “web neighborhood watch” by using IP reputation and known-threat services that track hackers across the globe. Last but not least, make sure you know who the good bots are, like Google’s. Keeping them out is like shooing away your local postman or grocery delivery service.
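Telling the good bots apart is itself a technical exercise, since user-agent strings are trivially forged. Google, for one, documents a reverse-then-forward DNS check for verifying that a visitor claiming to be Googlebot really is one; the minimal Python sketch below follows that approach (the function name is ours).

```python
import socket

def is_verified_googlebot(ip):
    """Verify a claimed Googlebot IP with a reverse-then-forward DNS check."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        # A genuine crawler reverse-resolves into Google's crawl domains.
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward-resolve the hostname to guard against spoofed PTR records.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except (socket.herror, socket.gaierror):
        return False
```

A visitor that fails this check while wearing Googlebot’s user agent is a prime candidate for the “impersonator” bucket described above.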
