
Interview: Inside the cybercriminal gig economy for bots

05 Feb 2019

Cybercriminals are moving towards a gig economy model as the internet makes it increasingly convenient for hackers to buy and sell services from one another.

This ‘cybercriminal gig economy’ is driving specialisation and marketisation across different attack verticals.

Techday spoke to Akamai Asia Pacific security technology and strategy head Fernando Serto about how organisations are being impacted by the rise of the cybercriminal gig economy and specialised bot attacks.

What changes in trends have enabled the development of a cybercriminal gig economy?

The shift to a gig economy has been enabled by the launch of task-oriented platforms, where specialisation is rewarded, and finding skills has become as easy as opening an app and making a request.

We’ve seen a similar behaviour on marketplaces in the dark corners of the web, powering the ‘cybercriminal gig economy’.

These marketplaces operate much like legitimate apps: specific jobs are posted, and attackers are ranked by a rating system that evaluates the accuracy of the data or the efficacy of the tools they sell.

One example is the marketplace for validated credentials, where sellers provide lists of credentials they have already gone to the effort of validating.

The accuracy of that data is therefore extremely important to buyers who intend to launch account takeover attacks, and eventually commit fraud.

In addition, anonymous cryptocurrencies have also contributed to a shift in behaviour.

How can businesses distinguish between bots that benefit their sites vs bots that negatively impact their business?

For a business to be able to answer this question, it’s paramount that it has visibility into which bots are hitting its applications, and once it does, what exactly they are accessing and how often.

Even bots that benefit a business, such as search engine crawlers, site monitoring services or content aggregators, can have a negative impact on applications.

For example, if an application is getting too many hits from known ‘good bots’, there can still be a negative impact on the business from an application performance perspective at peak times.

It’s a lot easier for an organisation to identify good bots, as they typically identify themselves with a static User Agent, as well as a URL to their company.

Identifying bad bots, on the other hand, is far more challenging, as they tend to use highly distributed IP addresses, varied User Agents and behaviour that mimics real browsers.
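The good-bot check described above can be sketched in a few lines of Python. The token list and URLs below are purely illustrative, and a User-Agent match alone should never be trusted, since the string is trivial to spoof:

```python
# Minimal sketch: classify a request by matching its User-Agent against
# a short, illustrative list of well-known "good bot" tokens. A match
# only means the client *claims* to be that crawler.
KNOWN_GOOD_BOT_TOKENS = {
    "Googlebot": "http://www.google.com/bot.html",
    "Bingbot": "http://www.bing.com/bingbot.htm",
}

def classify_user_agent(user_agent: str) -> str:
    """Return 'good-bot' when a known crawler token appears, else 'unknown'."""
    ua = user_agent.lower()
    for token in KNOWN_GOOD_BOT_TOKENS:
        if token.lower() in ua:
            return "good-bot"
    return "unknown"
```

In practice, search engines generally recommend confirming a claimed crawler out-of-band (for example with a reverse-DNS lookup of the source IP) rather than relying on the header alone.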

What evasion tactics are hackers who use bots employing?

Bot operators are extremely creative and continuously come up with new attempts to evade security defences.

There are several techniques that range in effort and complexity.

A very simple technique is to change certain characteristics, such as the User Agent or other HTTP header values, in an attempt to impersonate a real user.

Operators will also use multiple IP addresses to avoid IP address-based security controls.

This technique is also used to launch “low and slow” attacks, which are a lot harder to detect as the application owners don’t see any spike in traffic or anything that leads them to believe they are under attack.

Other techniques include the use of VPNs and Tor in an attempt to bypass any geo-fencing controls customers may have in place.

How can organisations mitigate this threat?

When we’re talking about the simplest techniques for evasion, an organisation can block the IP addresses of known bad bots.
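Such a blocklist can be as simple as a CIDR membership check with Python's standard `ipaddress` module. The ranges below are RFC 5737 documentation addresses used purely for illustration:

```python
import ipaddress

# Hypothetical blocklist of ranges attributed to known bad bots.
BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_blocked(ip: str) -> bool:
    """Return True when the source IP falls inside any blocklisted range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKLIST)
```

As the interview notes, this only addresses the simplest bots; operators rotating through thousands of addresses render a static list ineffective almost immediately.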

However, as soon as an organisation is targeted by more complex bots, the level of effort and difficulty required to mitigate them goes up significantly.

We also see several of our customers targeted by multiple bots, some of which are capable of combining evasion techniques.

For example, bots that leverage thousands of IP addresses, randomise User Agents, impersonate browsers and replay sessions.

These evasion tactics add a high level of complexity and increase the level of effort to mitigate. When bots are very complex, it’s not feasible to apply the same security controls anymore.

Bots can also change their behaviour if they think they’ve been detected.

Therefore it’s important to accurately differentiate a bot from a real user.
