What to Look For in the Cloud During 2016
Much has changed during the ten years since Amazon Web Services was first introduced. But if there’s one common thread, it’s this: We increasingly understand that public clouds aren’t a metaphorical reimagining of a pervasive standardized electric grid. Rather, they’re part of an IT toolkit that needs to be thoughtfully applied to business problems within organizations that are increasingly shaped by digital technology.
This level of nuance, the interplay with other aspects of IT and the business, and a requirement to operate as part of a complete IT delivery chain all play into many of the trends we see happening in cloud computing today.
The first such trend is the clear preference for hybrid IT models (as opposed to all public or all private) by the vast majority of enterprises. We see enterprises employing a combination of on-premise systems, dedicated hosting, public clouds, and a variety of Software-as-a-Service offerings from a wide range of vendors. There’s little evidence of a shift toward a one-size-fits-all world. We also see this trend reflected at the public cloud providers themselves. In a number of cases, they’ve invested heavily in enterprise sales forces and hired executives with enterprise software credentials.
One aspect of hybrid IT is sometimes called “bimodal” or “two speed” IT. This approach recognizes two fundamentally different rates of IT change and avoids a “timid middle” that just attempts to split the difference. A compromise approach moves too quickly for classic IT to handle while still being insufficiently agile to deliver new applications and services at the pace modern businesses require.
Another way to think about the distinction between IT modes is that the focus with “Mode 1” is to modernize and renovate while the focus with cloud-native “Mode 2” is to innovate and move quickly. Whatever exact terms one uses, the basic concept is resonating with many CIOs because they recognize the need to embrace modern infrastructures and practices but can’t tolerate the disruption and cost that would be associated with ripping out and replacing all their existing investments.
Another significant trend is the increasing sophistication of security discussions. In the early days of public clouds, headlines like “The cloud is unsafe” were common as were pronouncements like “no one will ever run production workloads in a public cloud.” By and large, we’ve moved beyond that sort of binary thinking.
The reality is that IT organizations need to meet security, risk management, and compliance goals across their entire environments. This includes addressing many of the same basic requirements for mitigating vulnerabilities, implementing configuration management, and establishing access controls that have long existed.
That said, practices do need to be amped up for a higher velocity and volume of threats, an IT architecture that’s far more open to the world, and infrastructures that are heterogeneous and hybrid. It’s a measure of the seriousness and sophistication of attacks that strategic Chief Information Security Officer (CISO) positions are becoming more common and that Incident Response plans are starting to look more like those associated with traditional life safety professions such as firefighting.
Security needs to be approached as a business concern rather than just a technology problem. This means, for example, defining the business’ risk appetite in terms of loss tolerance. A credit card issuer knows that it’s going to have losses due to fraud; the only alternative is to make using credit cards so onerous that hardly anyone will use them. Its goal is instead to put controls in place that keep using credit cards a mostly streamlined process while holding losses to a level that is acceptable as a business outcome.
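The loss-tolerance idea above can be sketched in a few lines of arithmetic. This is a hypothetical illustration; the function name, the figures, and the 0.1% tolerance threshold are all invented for the example, not drawn from any real issuer’s policy.

```python
def within_risk_appetite(transaction_volume, fraud_rate, loss_tolerance_pct):
    """Check whether expected fraud losses stay inside the business's
    stated risk appetite, expressed as a percentage of transaction volume.
    (Illustrative only -- real risk models are far more involved.)"""
    expected_loss = transaction_volume * fraud_rate
    tolerance = transaction_volume * (loss_tolerance_pct / 100.0)
    return expected_loss <= tolerance

# A hypothetical issuer processing $500M with a 0.07% fraud rate,
# measured against a 0.1% loss-tolerance policy:
acceptable = within_risk_appetite(500_000_000, 0.0007, 0.1)  # True here
```

The point of framing it this way is that the control discussion becomes a tuning exercise (how much friction buys how much loss reduction) rather than a futile attempt to drive losses to zero.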
While reflexive fears about a lack of security in public clouds may be naive, public and hybrid clouds do introduce risk and compliance considerations and challenges that are different in degree—and sometimes in kind—from traditional on-premise datacenters. It’s important to understand which areas you’re still responsible for when using public clouds. For example, in the case of Infrastructure-as-a-Service, you will need to exercise the same care in sourcing (e.g., from certified repositories) and maintaining your operating system and applications as if you were running them on-premise—and deploy appropriate cloud management platforms to enforce policies.
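One small, concrete instance of that sourcing responsibility is checking that workloads come only from repositories your organization has certified. This is a minimal sketch under assumed conventions: the registry names in the allowlist are placeholders invented for illustration, and `is_certified_source` is a hypothetical helper, not part of any real cloud management platform.

```python
# Hypothetical allowlist of repositories the organization has certified.
CERTIFIED_REPOSITORIES = {
    "registry.internal.example.com",  # assumed in-house registry
    "certified.vendor.example.org",   # assumed certified vendor registry
}

def is_certified_source(image_ref):
    """Return True if an image reference (e.g. 'registry/app:tag')
    points at a repository on the certified allowlist."""
    registry = image_ref.split("/", 1)[0]
    return registry in CERTIFIED_REPOSITORIES

# A policy engine could reject anything that fails this check:
ok = is_certified_source("registry.internal.example.com/billing/app:1.2")
```

In practice this kind of check would be one policy among many enforced by a cloud management platform, alongside patch levels, configuration baselines, and access controls.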
Frameworks are available to help IT executives and architects evaluate and mitigate risk associated with using public cloud providers. A good example is the Cloud Controls Matrix (CCM) from the Cloud Security Alliance (CSA). The CSA CCM provides a controls framework across 16 domains including business continuity management and operational resilience, encryption and key management, identity and access management, mobile security, and threat and vulnerability management.
Development and operational practices are evolving alongside infrastructures that increasingly also use containers and Docker to simplify application packaging and improve portability. Collectively, these practices are often referred to as DevOps.
DevOps means somewhat different things to different people depending upon which aspect they’re most focused on. But you can think of DevOps as an approach to culture, automation, and platform design for delivering increased business value and responsiveness through rapid, iterative, and high-quality IT service delivery. In a sense, it’s an evolution of agile and continuous integration/continuous delivery practices. A culture of collaboration values openness and transparency. Automation accelerates and improves the consistency of application delivery. A software-defined, programmable platform provides a foundation that’s dynamic enough for DevOps to be most effective.
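The automation aspect can be sketched as a simple promotion gate: a build only advances when each automated check passes. This is an illustrative toy, not a real pipeline; the stage names and the `promote` function are invented for the example.

```python
def promote(build):
    """Advance a build through hypothetical pipeline stages,
    halting at the first check that has not passed."""
    stages = ["unit-tests", "integration-tests", "security-scan"]
    for stage in stages:
        if not build["checks"].get(stage, False):
            return f"halted at {stage}"
    return "promoted to production"

# A build that has passed every automated check:
build = {"checks": {"unit-tests": True,
                    "integration-tests": True,
                    "security-scan": True}}
result = promote(build)  # "promoted to production"
```

The value of encoding the gate in automation rather than a manual sign-off is consistency: every build faces the same checks, which is what makes rapid, iterative delivery safe enough to sustain.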
What is simultaneously perhaps most exciting and most challenging about the cloud space broadly is how quickly things are changing. Containers were on few radar screens even two years ago. Today, the only real question is the degree to which they replace, rather than augment, legacy hardware virtualization. New orchestration, automation, and management projects pop up seemingly weekly. Applications are evolving to more loosely-coupled and API-centric architectures. The challenge for enterprise IT shops is consuming this tsunami of innovation and for vendors it’s packaging that innovation to make consumption possible.