The top four cloud IT security misconfigurations and how to fix them
Article by ExtraHop A/NZ regional sales manager Glen Maloney.
In recent years, reports of large-scale data breaches have become depressingly common. Banks, healthcare providers, retailers and governments have all become targets of cybercriminals looking to extract data or cause disruption.
At the dawn of a new decade, this situation is likely only to get worse. With the value of data increasing by the day, its appeal to criminals will continue to grow.
Another contributing factor is organisations' increasing use of Infrastructure as a Service (IaaS) platforms. Research by security firm McAfee has shown that, in recent years, almost 70% of breached data records (a total of 5.4 billion) were exposed through unintentional internet exposure caused by misconfigured services and portals.
As the research found, the vast majority of these misconfigurations go unreported and often even unnoticed, which means the problem is likely to be even larger than people might think. Thankfully, there are some effective steps that can be taken to overcome four of the most common security issues, thereby reducing the attack surface. The top four misconfiguration issues are:
1. No restrictions on outbound access
To ensure effective IT security, outbound data traffic from a cloud platform should always be configured according to the principle of least privilege. Many users of cloud platforms tend to configure only the inbound ports in security groups and don't pay the same attention to outbound ports. However, limiting outbound traffic ensures data is only made available to the applications and users authorised to use it. By doing this, the security team can reduce the risk of internal network scans and lateral movement.
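As a sketch of what this can look like in practice (assuming an AWS EC2 environment; the security group ID below is a placeholder, and the allowed ports are examples), the default allow-all egress rule can be revoked and replaced with only the outbound traffic the workload actually needs:

```shell
# Remove the default allow-all egress rule (the group ID is hypothetical)
aws ec2 revoke-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]'

# Re-allow only outbound HTTPS, so the instance can reach external APIs
# but cannot freely scan or exfiltrate over arbitrary ports
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]'
```

Equivalent egress controls exist on other IaaS platforms; the point is the same in each case: start from deny-all and add only what is required.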
2. Failure to restrict access to ports other than HTTP and HTTPS
While web servers are primarily designed to host websites and web services exposed to the internet, the same machines often also run services such as SSH or RDP for management, or databases. When they do, it becomes vital to block access to those services from the public internet.
If these ports are left open or improperly configured, an organisation can find itself exposed to attackers prepared to use brute force to gain access to its systems. If, for some reason, the ports must be open to the internet, it's vital to ensure they are configured to accept traffic only from specific, pre-determined IP addresses.
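Restricting a management port to a known source address is a one-line change in most cloud firewalls. A minimal sketch, again assuming AWS EC2 (the group ID is a placeholder, and 203.0.113.10 stands in for a trusted management address):

```shell
# Allow SSH (port 22) only from a single trusted management address,
# rather than from 0.0.0.0/0 (the whole internet)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.10/32
```

The /32 suffix limits the rule to exactly one IP address; a small trusted range (e.g. a VPN egress subnet) can be expressed the same way with a wider CIDR.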
3. Not placing restrictions on inbound access on seldom-used ports
It’s often said that ‘security through obscurity’ never really works. Some services within an IT infrastructure run on high-numbered TCP or UDP ports to obfuscate what is running, but this does little to improve security. It certainly won’t offer any protection from a determined cybercriminal looking for access. Access to high-numbered ports should always be carefully restricted so that only the systems that need them can reach them.
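On a host-level firewall, this restriction is most robust as a default-deny inbound policy with narrow exceptions. A hedged sketch using iptables (the port number 28015 and the 10.0.1.0/24 trusted subnet are illustrative assumptions):

```shell
# Default-deny all inbound traffic
iptables -P INPUT DROP

# Allow replies to connections this host initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow the service this host is actually meant to expose
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# A high-numbered service port, reachable only from a trusted subnet --
# obscurity alone is not the control, the source restriction is
iptables -A INPUT -p tcp --dport 28015 -s 10.0.1.0/24 -j ACCEPT
```

With a default-deny policy, a forgotten high-numbered service is unreachable by default instead of silently exposed.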
4. Having unrestricted ICMP Access
Experience shows that ICMP is a useful networking protocol; however, if it is left open to the internet, it can expose an organisation to some rather straightforward attacks.
One of the most common uses of ICMP is to use ICMP Echo to verify that servers are online and responsive. While ICMP Echo is an excellent diagnostic tool for security teams, it's also a popular tool among cybercriminals. A simple ping scan of the internet, using Nmap or Fping, can alert criminals to the fact that there is a server online with the potential to be attacked.
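To illustrate how little effort such a sweep takes, here are the ping-scan invocations for both tools mentioned above, shown against the documentation-reserved 192.0.2.0/24 range rather than any real network:

```shell
# Nmap host discovery only: -sn skips the port scan and just reports
# which hosts answer
nmap -sn 192.0.2.0/24

# fping equivalent: -g generates targets from the range, -a lists only
# the hosts that respond
fping -a -g 192.0.2.0/24
```

Any server that answers these probes has announced itself as a potential target, which is exactly why unrestricted ICMP is worth revisiting.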
Attackers can also use ICMP in other ways. For example, a ping flood overwhelms a server with ICMP messages, creating an effective Denial of Service (DoS) attack. It’s best to make blocking unnecessary ICMP traffic part of regular security activity.
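The blocking step above can be sketched with two iptables rules that preserve ICMP Echo as an internal diagnostic tool while dropping it from the public internet (the 10.0.0.0/8 internal range is an assumption; substitute your own networks):

```shell
# Keep ping working from internal monitoring and admin networks
iptables -A INPUT -p icmp --icmp-type echo-request -s 10.0.0.0/8 -j ACCEPT

# Drop echo requests from everywhere else, defeating both ping sweeps
# and simple ping floods from the outside
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
```

Note that only Echo Request is dropped here; other ICMP message types (such as those used for path MTU discovery) are left alone, since blocking ICMP wholesale can break legitimate connectivity.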
The increasing role of Network Detection and Response (NDR)
As more organisations take advantage of cloud services and resources, the complexity of their infrastructures will continue to increase. As this complexity rises, so too does the challenge of keeping it secure.
Indeed, while the ability to quickly build servers and services on a cloud platform delivers significant operational advantages, it also brings with it some big security risks. In a complex environment, it’s easy to miss a single setting or configuration option that can open the entire infrastructure to attack.
One of the biggest reasons security for cloud platforms has lagged behind that of more traditional on-premise infrastructure is that capturing and parsing network traffic in the cloud has been very difficult. Thankfully, this situation is now changing.
Increasing numbers of organisations are discovering that it’s possible to monitor network communications in real time by using a Network Detection and Response (NDR) tool. They find it’s the easiest way to stay on top of complex and dynamic IT environments that include cloud-based components.
Consider whether putting an NDR tool to work within your infrastructure might allow you to enjoy the advantages of the cloud without having to contend with complex and unwieldy security requirements.