Today’s approach to securing IT infrastructure is passé. In a dynamic world of unpredictable and frequent infrastructure change, the traditional approach to security falls short. It is no longer sufficient to scan for vulnerabilities periodically and then try to interpret the results in real time without human error.
Additionally, despite smart analytics, illuminating security issues and remediating them this way is extremely time-consuming. How many organizations can really claim to have identified and fixed all of their vulnerabilities? None!
Automation has brought agility and consistency to infrastructure and other workflow services. Security can and should see similar gains. In this blog we explore some of the reasons organizations are vulnerable and provide guidance on how they can better secure their infrastructure and applications.
You are never stronger than the weakest link
Adversaries don’t care about the latest in fashion! Whether the open backdoor is offered by the latest shiny RHEL release or one of its aging predecessors, attackers will take it. You are never stronger than your weakest link, which means every system must be under continuous scrutiny and protection at all times.
All running systems need active protection. It doesn’t matter that the team once responsible for a few boxes in the far corner of your data center no longer works for the company, or that the couple of virtual instances someone on the outsourcing team launched years ago no longer seem to be in active use. Any active system must be secured at all times, because the main approach of adversaries is to identify and exploit the weakest links.
Defy unintentional weak links
Just as adversaries don’t care about fashion, neither should the security industry. This will forever be a cat-and-mouse game, so IT operations benefits from a layered approach: the more layers of security, the less weak, the more hidden, and the more obscure the weakest link becomes.
However, security reports that are not acted upon are just as futile as management reports no one reads. Who cares whether you were aware of the vulnerability that was exploited? Unfortunately, the amount of work needed to stay secure always seems to exceed the available resources.
With AI still in its infancy, and the “smartness” offered by security analytics tools rarely extending beyond simple heuristics, using automation to actively enforce a security baseline can go a long way. With automation, asserting that all machines comply with PCI-DSS (or any other compliance framework) becomes plausible. The extra work needed to improve security is minimal and mostly requires tighter cooperation between operations and security teams.
Knowing that every running RHEL machine, whether it is the latest shiny release, a forgotten corner box, or a cloud instance, continuously complies with your security baselines improves security in ways no scanning system can.
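To make baseline enforcement concrete, here is a minimal sketch in CFEngine’s policy language. The two baseline items (sshd_config permissions and a running auditd) are illustrative choices, not requirements of any particular compliance framework, and the `mog` permissions body is assumed to be available from CFEngine’s standard library:

```cf3
bundle agent security_baseline
{
  files:
      # Promise that the SSH daemon configuration is owned by root
      # and not readable by others; repaired automatically if it drifts.
      "/etc/ssh/sshd_config"
        perms => mog("0600", "root", "root");

  processes:
      # Promise that auditd is running; if it is not, define a class
      # that triggers the repair command below.
      "auditd"
        restart_class => "restart_auditd";

  commands:
    restart_auditd::
      "/usr/bin/systemctl" args => "start auditd";
}
```

Because cf-agent evaluates its promises on a regular schedule, drift from this baseline is corrected continuously rather than merely reported.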
Dynamic organizations need security automation the most
A startup with a small, homogeneous infrastructure has vastly different security and operations needs than a large company with a heterogeneous one. Each additional layer of complexity, whether in the form of new operating systems, middleware, application teams, environments (dev, test, prod, etc.) or new lines of business, multiplies the security challenge. The more dynamic the organization, the greater the need for automated security. The traditional mode of reading vulnerability reports and then manually acting upon them rapidly becomes impractical.
For large organizations, the only true constant is change, and each change is a potential new vulnerability. The more people there are, the stronger the urge for manual system intervention (“I just need to…”), and the more lines of business with disparate architectures and technology stacks, the greater the complexity and the potential for serious security gaps, thanks to the weakest-link syndrome.
Large organizations have thousands of applications and millions of configuration settings. Add to this that speed wins and security comes second, and the case for security through automation makes itself.
Security issues don’t always emanate from within the organization. Acquisitions and outsourcing are two major sources of security challenges. Once external consultants or systems gain internal access, or an acquired organization’s IT operations are merged in, whole new sets of security gaps emerge. Again, automation provides a solid way to ensure hygiene and basic defense.
Automated security with CFEngine
CFEngine’s autonomous architecture and self-healing capabilities fit the bill perfectly for the role of an automated security mechanism.
By defining and dividing your IT infrastructure into smaller autonomous units, CFEngine will, thanks to its lightweight, cross-platform agent, continuously ensure security compliance, whether a system is an old RHEL box in a faraway corner or a shiny new RHEL instance on AWS. As long as security baseline policies exist for each software version, and new versions are rejected for provisioning until their policies do, CFEngine meticulously does all the work. There is no manual intervention and no humongous report to worry about.
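Version-specific baselines can be selected using CFEngine’s built-in hard classes, which the agent defines automatically from the detected operating system. A minimal sketch, where the per-version bundle names are hypothetical:

```cf3
bundle agent baseline_dispatch
{
  methods:
    # Hard classes such as redhat_7 are set automatically by cf-agent
    # based on the platform it discovers at runtime.
    redhat_7::
      "rhel7_baseline" usebundle => security_baseline_rhel7;

    redhat_9::
      "rhel9_baseline" usebundle => security_baseline_rhel9;
}
```

With this pattern, adding support for a new RHEL release is just a matter of writing its baseline bundle and adding one dispatch line; hosts pick up the right baseline without any manual classification.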
Making it mandatory to have CFEngine installed and running on every connected system establishes an invaluable extra layer of security.