DevOps is driving ownership of the entire environment away from the legacy server and network admin teams and over to the development team, and DevOps teams have enough to do already. Ensuring these workloads, applications, and identities are managed appropriately has to be an automatic part of the process if risk is to be adequately mitigated.
Modern development processes have complicated application environment security
Developers and operations teams are rapidly becoming wholly responsible for the entire environment where cloud applications run, and they already have their hands full juggling plates and firefighting features and code. From networks and devices all the way up the application stack, developers may be responsible, directly or indirectly, for entire complicated environments that historically took several teams many weeks to build. And thanks to the ephemeral nature of the DevOps world, the device and application churn in these environments is enough to make any CMDB quiver.
Unfortunately for most enterprises, the new DevOps support staff is neither trained nor tooled to support an around-the-clock, business-critical function. However, by pairing behavioral analytics with automated workflow execution, we can now ensure that all network segmentation, device configurations, and identities stay in line with the expected norms of the enterprise.
While configuration drift is expected and managed within legacy enterprise infrastructures, the pace at which drift occurs in the cloud is virtually unmanageable without some type of automation. Developers will spin up new cloud environments at will, adjusting segmentation rules, system configuration, and account access to suit their individual needs. Being able to monitor, and automatically correct, any missteps is a crucial capability for any enterprise leveraging cloud resources.
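The monitor-and-correct loop described above can be sketched in a few lines. This is a minimal, in-memory illustration; the baseline fields, rule tuples, and function names are hypothetical stand-ins for whatever a real system would pull from a cloud provider's API:

```python
# Illustrative drift detection: compare an agreed baseline against the
# observed state of an environment, then auto-correct any deltas.
# All field names and values here are assumed for the example.

BASELINE = {
    "ingress_rules": {("tcp", 443, "0.0.0.0/0"), ("tcp", 22, "10.0.0.0/8")},
    "encryption_at_rest": True,
}

def detect_drift(observed: dict) -> dict:
    """Return the differences between observed state and the baseline."""
    drift = {}
    extra = observed["ingress_rules"] - BASELINE["ingress_rules"]
    missing = BASELINE["ingress_rules"] - observed["ingress_rules"]
    if extra or missing:
        drift["ingress_rules"] = {"unexpected": extra, "missing": missing}
    if observed["encryption_at_rest"] != BASELINE["encryption_at_rest"]:
        drift["encryption_at_rest"] = observed["encryption_at_rest"]
    return drift

def remediate(observed: dict) -> dict:
    """Auto-correct: drop unexpected rules, restore missing settings."""
    corrected = dict(observed)
    corrected["ingress_rules"] = set(BASELINE["ingress_rules"])
    corrected["encryption_at_rest"] = BASELINE["encryption_at_rest"]
    return corrected

# A developer opened RDP to the world and disabled encryption:
observed = {
    "ingress_rules": {("tcp", 443, "0.0.0.0/0"), ("tcp", 3389, "0.0.0.0/0")},
    "encryption_at_rest": False,
}
drift = detect_drift(observed)
```

In practice the remediation step would be an automated workflow (a ticket, an alert, or an API call), but the shape of the loop, observe, diff against baseline, correct, is the same.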
Fast-moving environments have rendered legacy controls ineffective
As with any dynamic environment, attempting to wrap static controls around a transient cloud environment typically fails spectacularly. To adequately support dynamic cloud environments, the controls need to be dynamic as well, monitoring current activity and adjusting accordingly. Network segmentation is a perfect use case for a well-thought-out behavioral analytics model: any drift from expected norms can be managed in real time, eliminating, or at least alerting on, unapproved network connectivity as it occurs. This type of real-time modeling is becoming ever more critical given the almost daily onslaught of attackers whose job is to move laterally through an environment. The ability to identify new, malicious traffic and immediately shut it down gives today's enterprise a fighting chance.
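A behavioral baseline for workload-to-workload traffic can be sketched simply: learn which flows are normal during a baselining window, then flag anything outside that set. The class name, threshold, and workload labels below are illustrative assumptions, not a real product's API:

```python
from collections import Counter

class FlowBaseline:
    """Learn normal workload-to-workload flows, then flag deviations."""

    def __init__(self, min_count: int = 3):
        # A flow must be seen at least min_count times during
        # baselining before it is considered "expected" (assumed threshold).
        self.counts = Counter()
        self.min_count = min_count

    def observe(self, src: str, dst: str) -> None:
        """Record one observed connection during the learning window."""
        self.counts[(src, dst)] += 1

    def is_anomalous(self, src: str, dst: str) -> bool:
        """A flow never (or rarely) seen while baselining is suspect."""
        return self.counts[(src, dst)] < self.min_count

baseline = FlowBaseline()
for _ in range(5):
    baseline.observe("web", "api")   # normal tier-to-tier traffic

# A "web" workload suddenly talking straight to "db" would be the kind
# of lateral movement the text describes:
suspect = baseline.is_anomalous("web", "db")
```

A production model would of course weigh ports, volumes, and time of day rather than a raw count, but the principle, alert on connectivity outside the learned norm, is the same.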
This same approach can be taken when we evaluate system configuration and file integrity. By developing workload baselines that define what is acceptable configuration drift and what is not, security teams can focus on the handful of unexpected changes rather than investigating thousands of routine ones. Additionally, especially in regulated industries, clearly documenting file changes is a critical operational control, even though file integrity reporting has been onerous and time-consuming for decades, typically arriving in 'batch' reports from overnight processing. Through continual file integrity monitoring, workloads remain in a far more stable and secure state, providing a higher level of confidence in the control and a cleaner regulatory reporting mechanism.
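The core of continual file integrity monitoring is a hash baseline plus a drift classification. A minimal sketch, in which the "acceptable" path prefixes stand in for whatever drift policy the teams have agreed on:

```python
import hashlib

def hash_file(path: str) -> str:
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths) -> dict:
    """Baseline: map each monitored path to its current digest."""
    return {p: hash_file(p) for p in paths}

def diff(baseline: dict, current: dict,
         acceptable_prefixes=("/var/log",)) -> dict:
    """Classify changed files as acceptable drift or needing review.

    acceptable_prefixes is an assumed example policy: paths under these
    prefixes are expected to change; everything else gets reviewed.
    """
    findings = {"acceptable": [], "review": []}
    for path, digest in current.items():
        if baseline.get(path) == digest:
            continue  # unchanged
        bucket = ("acceptable" if path.startswith(acceptable_prefixes)
                  else "review")
        findings[bucket].append(path)
    return findings
```

Run continuously (on file-change events rather than overnight batches), this yields the real-time, review-only-the-unexpected reporting the paragraph describes.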
Service accounts and automation bring access risks that Access Management misses
Finally, arguably one of the most pervasive risks within a modern DevOps cloud environment is the lack of controls around access. Identity management in general, and 'roles' and 'service accounts' specifically, are wreaking untold havoc in most cloud environments. Because cloud environments manage access fundamentally differently than legacy, on-prem environments, the 'old way' of managing access does little to mitigate access risk. Thanks to the way cloud environments inherit access, trust often spiders out uncontrollably, leaving cross-workload relationships that were neither intended nor desired.
If there were ever a need for a machine learning-based analytical solution, understanding the complex relationships between access rights, roles, workloads, and devices is it. There is tremendous value not only across network segments but within individual devices as well. Layering behavior analytics on top of a real-time file integrity monitoring solution gives security teams insight they rarely otherwise have.
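The "spidering" trust problem is, at bottom, a graph problem: model each role assumption as a directed edge and compute what each identity can transitively reach. The principals and role names below are invented for illustration:

```python
from collections import defaultdict

def transitive_trust(edges):
    """Given (principal, assumable_role) pairs, return what each
    node can transitively reach via inherited trust."""
    graph = defaultdict(set)
    for src, dst in edges:
        graph[src].add(dst)
    reach = {}
    for node in list(graph):
        seen, stack = set(), [node]
        while stack:  # depth-first walk of the trust graph
            cur = stack.pop()
            for nxt in graph.get(cur, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        reach[node] = seen
    return reach

# Hypothetical environment: a deploy role was later granted access to
# the production database, silently extending everyone who can assume it.
edges = [
    ("ci-service", "deploy-role"),
    ("dev-user", "deploy-role"),
    ("deploy-role", "prod-db-role"),
]
reach = transitive_trust(edges)
# reach["dev-user"] now includes "prod-db-role", a cross-workload trust
# relationship that was likely never intended.
```

An analytics layer would then compare these computed reach sets against the access each identity actually exercises, surfacing exactly the unintended inherited trust the text warns about.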
A shared source of truth between Application and Security teams is a must
However, while behavior analytics-based automation can play a pivotal role in mitigating risk, it requires a solid set of parameters agreed upon by both the DevOps and security teams, and that is not always easy to come by. Developing an environmental baseline that includes not only system configurations and access controls but network segmentation rules as well is a crucial factor in a successful DevOps/Security partnership founded on trust. Both teams can achieve their ultimate goals by developing a model that defines acceptable drift versus non-acceptable drift. When evaluating behavioral-based solutions, one that can evaluate the environment and provide a 'current state' model is invaluable for finding that common ground between DevOps and security.
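The agreed-upon model can itself be a small, versioned artifact that both teams own. A sketch of the idea, with change types and classifications that are purely illustrative:

```python
# Shared "acceptable drift" model: one artifact, co-owned by DevOps and
# security, that defines which classes of change are routine and which
# require review. The entries below are assumed examples.

DRIFT_POLICY = {
    "package_version": "acceptable",   # routine patching
    "log_level": "acceptable",         # operational tuning
    "ingress_rule": "review",          # segmentation change
    "iam_binding": "review",           # access change
}

def classify(change_type: str) -> str:
    """Anything not covered by the agreed model defaults to review."""
    return DRIFT_POLICY.get(change_type, "review")
```

Because the policy is explicit, disagreements between the teams become edits to one file rather than incident-time arguments, which is exactly the common ground the paragraph calls for.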