The three Rs of enterprise security: Rotate, repave, and repair
Mon, 10th Jul 2017

The most forward-looking technology companies in the world all want to deliver applications at a faster pace and are willing to try new tools, techniques and processes to get there. However, they also know that security is a top concern, both with their existing infrastructure and their next generation cloud infrastructure.

Many of them use previously defined tools and methodologies to help ensure the appropriate level of security and often, these tools are calcified within the organisation. Some are helpful, some aren't.

When evaluating cloud infrastructure, the single most important concept for an enterprise security organisation is to challenge the status quo. This will dramatically and immediately improve the security posture of any IT organisation.

The trap of resisting change to mitigate risk

The traditional approach to mitigating risk is to choose between moving fast and accepting unbounded risk, or slowing down to try to mitigate it. The general assumption is that slowing down is the better approach.

The sad truth is that the foundation of traditional enterprise infrastructure centres on resisting change.

Firewall rules support this hypothesis, and so do long-lived Transport Layer Security (TLS) credentials. It's natural to expect that an enterprise security team, after decades of consuming infrastructure that resists change, would also embrace the same approach.

Unfortunately, we can no longer adopt the reluctance-to-change mindset that prevails in traditional organisations.

The dreaded mega-breach

At or near the top of security concerns in the data centre is something called an Advanced Persistent Threat (APT). An APT gains unauthorised access to a network and can stay hidden for a long period of time. Its goal is usually to steal, corrupt, or ransom data. It's the dreaded, front-page newsworthy mega-breach.

A lot has been written about the anatomy of an APT. Unfortunately, APT has become an umbrella buzzword; it means different things to different people. Specific to this article, APT is defined as an attack that worms its way into the data centre, sits in the network, observes, and then does something malicious.

These types of attacks need at least three resources in order to blossom: time, leaked or misused credentials, and misconfigured and/or unpatched software.

Time gives the malware more opportunity to observe, learn, and store. Credentials provide access to other systems and data, possibly even an ingress point. Vulnerable software provides room to penetrate, move around, hide, and gather more data. These are like sunlight, water, and soil to a plant. Remove one or more and it's not likely to mature.

Now, consider the relationship between the calcified enterprise security system and attacks.

First and foremost, there's lots and lots of time. For example, credentials seldom rotate. If an attacker can find some, they are likely to remain valid and useful for a long time.

Few organisations regularly repave their services or applications from a known, good state. As such, it's not uncommon for an enterprise to leave a server vulnerable for five months or more. It also often takes months to deploy patches to operating systems and application stacks, even in a virtualised world. Organisations often apply incremental changes, and the slate almost never gets wiped clean.

The combination of the enterprise software vendors' approach to mitigating security risk and the trap of the rigid enterprise creates rich, fertile, undisturbed pastures for attacks to flourish.

The Acme pattern

The enterprise accreditation process is there for a good reason, but it also has a nasty side effect. At Acme Corporation (a hypothetical but typical enterprise), the accreditation required on every major software release takes about six months from the time the process begins to installation, and it is typical to factor in another two to three months of delay. That window gives an attack plenty of time, and all the resources it needs, to grow from a seedling into a monster.

There's more. Based on typical industry best practice, Acme would most likely require that its software vendor keep prior versions of the software patched for a long period of time. This further complicates improvements, because the vendor must devote non-trivial resources to that effort. Those resources can't be used to improve the product, so releasing new versions of the software can take even longer.

The cycle perpetuates and grows, and it inadvertently feeds the attacker. This cycle needs to be broken.

Why faster is better

The traditional mindset dictates an environment where organisations are resigned to living with slowly changing infrastructure. Much of today's IT budget is spent monitoring for change and hoping for the best. This is akin to treating the symptoms rather than the disease.

Many security monitoring solutions embody a self-fulfilling prophecy. Since updating a system often looks like an attack, IT teams either stop paying attention to alerts or they resist updates. This is not to dismiss monitoring solutions, but they are palliative treatments at best.

If the above reasoning is sound, then the logical approach is to starve attacks of the resources they need before they grow into monsters.

The three Rs of enterprise security: Rotate, repave, and repair

If you identify with the above reasoning, then it's natural to wonder about the cure. The 3R approach of rotate, repave, and repair is not the be-all and end-all of enterprise security, but the key to this path is to starve attacks of the resources they need to grow into monsters.

  • Rotate your data centre credentials every few minutes or hours.
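
To make that first R concrete, here is a minimal sketch of scheduled credential rotation. The CredentialStore class, the rotation interval, and the in-memory secret are illustrative assumptions standing in for whatever secrets backend your platform actually provides; the point is only that every credential has a short lifetime and is replaced automatically rather than by a human on a ticket.

```python
# A minimal sketch of scheduled credential rotation. The in-memory store is
# a stand-in for a real secrets backend (an assumption, not any product's API).
import secrets
import time
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class Credential:
    value: str
    issued_at: datetime
    ttl: timedelta

    def expired(self) -> bool:
        return datetime.utcnow() - self.issued_at >= self.ttl


class CredentialStore:
    """Hypothetical stand-in for a secrets backend."""

    def __init__(self, ttl: timedelta):
        self.ttl = ttl
        self.current: Optional[Credential] = None

    def rotate(self) -> Credential:
        # Issue a fresh random secret; anything an attacker captured earlier
        # stops working once the old value is retired.
        self.current = Credential(secrets.token_urlsafe(32), datetime.utcnow(), self.ttl)
        return self.current


def rotation_loop(store: CredentialStore, check_every_seconds: int = 60) -> None:
    """Rotate whenever the current credential is missing or expired."""
    while True:
        if store.current is None or store.current.expired():
            cred = store.rotate()
            print(f"rotated credential at {cred.issued_at.isoformat()}Z")
        time.sleep(check_every_seconds)


if __name__ == "__main__":
    # Rotate roughly every 15 minutes; tune to minutes or hours as needed.
    rotation_loop(CredentialStore(ttl=timedelta(minutes=15)))
```

In a real deployment the platform would drive the rotation and push the new value to its consumers automatically, so no person ever handles or even sees the secret.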

At high velocity, the three Rs starve attacks of the resources they need to grow. It's a complete 180-degree turn from the traditional approach of carefully avoiding change in order to mitigate risk. Go fast to stay safer. In other words, speed reduces risk.

To an attacker, it's like playing a nearly unsolvable video game. They need to get to level 100, but they can't get past level 5 because there isn't enough time. On top of that, what worked on the first try no longer works on the 20th.

  • Repave every virtual machine in your data centre from a known good state every few hours, without application downtime.

When your applications are deployed on this kind of infrastructure, the application containers are also repaved every few hours. Security patches can then be rolled out across your organisation's data centre with just a few clicks.

Every virtual machine in a Pivotal Cloud Foundry cluster is built from a base image called a stemcell, which makes it possible to repave every virtual machine in the cluster on a regular interval.
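
To illustrate the shape of such a scheduled repave, here is a minimal sketch of a rolling rebuild: each instance is recreated from a known good image one at a time so the application stays available. The Instance type, the health check, and the image names are illustrative assumptions, not Cloud Foundry's actual API.

```python
# A minimal sketch of a rolling repave: rebuild each instance from a known
# good ("golden") image one at a time so the rest keep serving traffic.
# The rebuild and health-check steps are placeholders for whatever your
# platform actually exposes.
import time
from dataclasses import dataclass


@dataclass
class Instance:
    name: str
    image_id: str  # the image the instance is currently running


def rebuild_from_image(instance: Instance, golden_image_id: str) -> None:
    # Placeholder: a real platform would destroy the VM and recreate it from
    # the golden image rather than patching it in place.
    instance.image_id = golden_image_id
    print(f"repaved {instance.name} from {golden_image_id}")


def healthy(instance: Instance) -> bool:
    # Placeholder health check; a real one would probe the application.
    return True


def rolling_repave(instances: list[Instance], golden_image_id: str) -> None:
    """Repave one instance at a time to avoid application downtime."""
    for instance in instances:
        rebuild_from_image(instance, golden_image_id)
        while not healthy(instance):
            time.sleep(5)  # wait for the fresh instance before moving on


if __name__ == "__main__":
    fleet = [Instance(f"web-{i}", image_id="base-2017-01") for i in range(3)]
    rolling_repave(fleet, golden_image_id="base-2017-07-patched")
```

Run on a timer every few hours, a loop like this keeps wiping the slate clean, removing the long stretches of undisturbed time an attack depends on.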

  • Repair vulnerable operating systems and application stacks consistently within hours of patch availability.
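
To make the repair step concrete, the sketch below compares what a fleet is running against the newest patched base image and schedules a rebuild for anything that is behind. The advisory mapping and the schedule_repair function are hypothetical placeholders; in practice this would be wired into the platform's patch pipeline so repairs land within hours of a patch being published.

```python
# A minimal sketch of the repair step: flag any host whose base image is
# older than the newest patched image for its OS line. The advisory mapping
# and repair trigger are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Host:
    name: str
    base_image: str  # e.g. "ubuntu-xenial-20170601"


def schedule_repair(host: Host, target_image: str) -> None:
    # Placeholder: a real system would enqueue a repave of the host from the
    # patched image rather than patching it in place.
    print(f"repair scheduled: {host.name} {host.base_image} -> {target_image}")


def repair_fleet(hosts: list[Host], latest_patched: dict[str, str]) -> None:
    """Compare each host's base image against the newest patched image."""
    for host in hosts:
        os_line = host.base_image.rsplit("-", 1)[0]  # e.g. "ubuntu-xenial"
        target = latest_patched[os_line]
        if host.base_image != target:
            schedule_repair(host, target)


if __name__ == "__main__":
    latest_patched = {"ubuntu-xenial": "ubuntu-xenial-20170710"}
    fleet = [Host("api-0", "ubuntu-xenial-20170601"),
             Host("api-1", "ubuntu-xenial-20170710")]
    repair_fleet(fleet, latest_patched)
```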

Faster is safer. This is not a fantasy: the tools exist today to make most of this a reality and to dramatically improve the enterprise security posture.

When evaluating the security of your cloud platform, ask whether the tools your organisation is investigating or using help you achieve the three Rs. If they don't, it is time to rethink your security approach.