This is the 13th article in the “Real Words or Buzzwords?” series about how real words become empty words and stifle technology progress, also published on SecurityInfoWatch.com.
By Ray Bernard, PSP, CHS-III
I first heard the words “True Cloud” at a security conference where Dean Drako, founder and owner of Eagle Eye Networks and owner of Brivo Systems, explained why he coined the term. Previously, there was no terminology to distinguish between a simple server application hosted in the cloud and a server application engineered for the cloud. At first, I thought this was just a cute marketing ploy. But having later encountered systems that were not specifically engineered for cloud deployment, yet were labeled as “cloud offerings,” I realized that this was a much more important topic, especially for security design consultants.
Thinking About True Cloud Systems
Cloud computing offers new capabilities that premises-based systems can’t provide at a reasonable cost. However, without understanding what a true cloud system is, and without knowing how any particular cloud offering is architected and secured, how could an end-user customer or an integrator fully evaluate product offerings?
Furthermore, true cloud systems are continually advancing and evolving. You have probably experienced this with your smartphone. Continually evolving applications make product roadmaps much more relevant and important. Under modern cloud software development practices, cloud application improvements are implemented automatically on a regular basis, typically bi-weekly or monthly, to enhance features, performance and security. Users notice most feature improvements and some performance improvements, but may not notice security improvements at all.
Given the current state of cloud adoption in the physical security industry, continual software improvement means that, prior to subscribing, new customers have the opportunity to influence the priority, and sometimes the feature capabilities, of roadmap items critically important to them.
At first glance, cloud offerings seem to present a significant challenge for security system designers and specifiers. Pre-cloud, security design consultants either selected or designed the system architecture based upon the capabilities of the system software. They specified the computing equipment and operating systems to be used, and thus could assure the desired system capacities and performance. Furthermore, they could conduct proof-of-concept tests prior to purchasing, to make sure that critical functionality worked as needed, and perform acceptance tests to prove out system capabilities prior to customer acceptance of the system. Additionally, customers knew exactly what computer and network security elements were in place, because they owned them.
Under cloud computing, system designers and customers can still achieve the same or better levels of understanding and certainty regarding cloud system design and performance as in the pre-cloud days. It just takes a different approach.
Client-Server vs. Cloud Software
This article presents the NIST and ISO/IEC definitions of cloud computing, with some examples of how our product design thinking needs to change to fit the nature of cloud offerings. The first and most significant change in thinking concerns the cloud provider’s application deployment, because in a true cloud offering, all customers (subscribers) share the same single instance of the software. A client-server security application typically has perhaps a dozen users in a small company and up to a few hundred in a very large enterprise. A single cloud-based application, in contrast, will have thousands to hundreds of thousands of users, or millions of users if most of a subscriber’s employees or building occupants will be mobile users. Supporting millions of users in the client-server world simply means having millions of software application downloads. In the world of cloud computing, it is an entirely different situation: a single application instance must support all users in a major geographic region.
Defining Cloud Computing
The Cloud Security Alliance’s recently released Security Guidance for Critical Areas of Focus in Cloud Computing v4.0 provides an excellent description of cloud computing, included in the paragraphs below. Version 4.0 is the first major update to the CSA’s guidance document since 2011. This significant rewrite makes the 4.0 version less of an academic document and more of a real-world conversation. It contains three times as many illustrations as version 3.0, and includes guidance for cloud-related technologies such as DevOps, IoT, mobile, and Big Data.
In its guidance document, the Cloud Security Alliance states:
“Cloud computing is a new operational model and set of technologies for managing shared pools of computing resources.”
“It is a disruptive technology that has the potential to enhance collaboration, agility, scaling, and availability, as well as providing the opportunities for cost reduction through optimized and efficient computing. The cloud model envisages a world where components can be rapidly orchestrated, provisioned, implemented and decommissioned, and scaled up or down to provide an on-demand utility-like model of allocation and consumption.”
The key word in the paragraph above is “potential”. Cloud computing will not be disruptive in the physical security industry unless manufacturers design and develop applications that take maximum advantage of cloud-computing capabilities, so that customers can experience benefits that client-server-based systems cannot provide.
If physical security cloud applications are engineered to pass cloud computing’s capabilities along to application users, then cloud-based video storage, for example, would be offered in a utility-like model: you could store as much recorded video as you like, and pay only for the storage that you use. Under such a model, cloud-based video management system (VMS) subscribers would specify only how many days of video retention they require (as opposed to terabytes of disk storage), such as 30 days of retention. The cloud VMS would assure that 30 days of retention was always achieved, automatically adjusting the billing to reflect the storage used in the previous month.
So, for example, if it rained for two weeks, and outdoor cameras were set up to record on motion, it wouldn’t matter if three times as much video as usual was recorded by a particular subscriber—the storage would automatically expand to assure 30 days of retention. A month later, when the amount of recorded video dropped back to normal, so would the amount of storage allocated to that subscriber.
The Service Level Agreement would include a guarantee that the subscriber-requested video retention period would always be met—and for countries that impose a limit on security video retention—would never be exceeded.
This kind of capability is not automatic just because VMS software is running in the cloud. It has to be engineered by the VMS manufacturer, whose software would now include billing functionality for video storage—something not included in client-server VMS software.
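The retention-based billing model described above can be sketched in a few lines of code. This is a hypothetical illustration only, not any vendor’s actual implementation; the function name, pricing, and recording volumes are all invented for the example:

```python
# Hypothetical sketch of retention-based elastic storage billing for a
# cloud VMS. All names, prices, and volumes are illustrative assumptions.

def monthly_storage_bill(daily_gb_recorded, retention_days, price_per_gb_month):
    """Return (peak GB held, monthly charge) under a retention policy.

    daily_gb_recorded: GB of video recorded on each day of the month.
    The cloud VMS keeps exactly `retention_days` of video, so storage
    expands and contracts with recording volume; the subscriber pays
    only for the average storage actually held.
    """
    daily_usage = []
    for day in range(len(daily_gb_recorded)):
        # Storage held on a given day = the most recent N days of video.
        window = daily_gb_recorded[max(0, day - retention_days + 1): day + 1]
        daily_usage.append(sum(window))
    avg_gb_held = sum(daily_usage) / len(daily_usage)
    return max(daily_usage), round(avg_gb_held * price_per_gb_month, 2)

# Two rainy weeks triple motion-triggered recording; the storage pool
# simply grows to preserve the 30-day retention guarantee.
normal = [10] * 16   # 10 GB/day in normal weather
rainy = [30] * 14    # 30 GB/day during the rainy stretch
peak, bill = monthly_storage_bill(normal + rainy, retention_days=30,
                                  price_per_gb_month=0.02)
```

The point of the sketch is that the billing logic follows directly from the retention policy and the recorded volume; neither the subscriber nor the integrator ever specifies terabytes of disk.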
A contract-based guarantee of video retention could be part of a disruptive cloud-based VMS feature set. So could on-demand and scheduled use of video analytics, which many school districts would be happy to pay for. Holidays, major sports games and other school events, and the month surrounding graduation often see a spike in prohibited high-risk activity, which could be curtailed using the new generation of advanced video analytics. However, most schools would only want to pay for such usage during the times they actually need it. These and other capabilities simply aren’t possible with premises-based systems, yet they have been slow to arrive in cloud-based security video systems.
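The scheduled-analytics idea above amounts to metered billing against a calendar of high-risk windows. Here is a minimal sketch of that idea, assuming invented dates and a simple window list; none of this reflects an actual product’s API:

```python
# Hypothetical sketch: run paid video analytics only during scheduled
# high-risk windows (e.g., school events), billing only for hours used.
# The dates below are illustrative assumptions.

from datetime import datetime

# (start, end) windows during which advanced analytics should run.
ANALYTICS_WINDOWS = [
    (datetime(2018, 5, 25), datetime(2018, 6, 25)),    # month around graduation
    (datetime(2018, 10, 12), datetime(2018, 10, 13)),  # homecoming game
]

def analytics_enabled(now):
    """True if the given time falls inside any scheduled window."""
    return any(start <= now < end for start, end in ANALYTICS_WINDOWS)

def billable_hours():
    """Total analytics hours the subscriber would be billed for."""
    return sum((end - start).total_seconds() / 3600
               for start, end in ANALYTICS_WINDOWS)
```

A school district paying only for these windows buys roughly 768 analytics-hours in this example, instead of an always-on annual license.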
NIST defines cloud computing as:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
The ISO/IEC definition is similar:
“Paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand.”
The Cloud Security Alliance states it this way, with my comments inserted in square brackets for this article:
“A (slightly) simpler way of describing cloud is that it takes a set of resources, such as processors and memory, and puts them into a big pool (in this case, using virtualization). Consumers [meaning security system applications, and for our example, VMS systems] ask for what they need out of the pool, such as 8 CPUs and 16 GB of memory, and the cloud assigns those resources to the client [VMS system application], who then connects to and uses them over the network. When the client is done, they can release the resources back into the pool for someone else to use.”
Applied to video surveillance, these capabilities would mean that when there is a lot of activity in the cameras’ fields of view, the cloud-based video analytics application would request additional CPUs and computing memory to handle the analytics processing. When the activity is over, the additional CPUs and memory would be released. This requires two new dimensions of application development unique to the cloud: allocating specific computing resources (such as CPUs), and billing each subscriber for its portion of the computing resources allocated. This is not easy development work.
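The CSA’s pool model and the billing dimension described above can be sketched together. This is an illustrative toy, not a cloud provider’s real allocator; the class, the pool sizes, and the subscriber name are assumptions for the example:

```python
# Hypothetical sketch of the CSA's "pool" model: a consumer (here, a VMS
# analytics job) requests CPUs and memory from a shared pool, uses them,
# and releases them for someone else. Per-allocation logging supports the
# subscriber billing the article describes. All names are illustrative.

class ResourcePool:
    def __init__(self, cpus, mem_gb):
        self.free_cpus, self.free_mem = cpus, mem_gb
        self.usage_log = []  # (subscriber, cpus, mem_gb) for billing

    def allocate(self, subscriber, cpus, mem_gb):
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            raise RuntimeError("pool exhausted")
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.usage_log.append((subscriber, cpus, mem_gb))
        return (cpus, mem_gb)

    def release(self, cpus, mem_gb):
        self.free_cpus += cpus
        self.free_mem += mem_gb

pool = ResourcePool(cpus=64, mem_gb=256)

# Heavy scene activity: the analytics job asks for 8 CPUs / 16 GB of
# memory (the CSA's own example figures), then returns them afterward.
grant = pool.allocate("subscriber-42", 8, 16)
# ... analytics processing runs here ...
pool.release(*grant)
```

After release, the pool is back to full capacity, while the usage log still records what “subscriber-42” consumed, which is exactly the raw material a billing function needs.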
To be fair to security industry manufacturers, cloud computing capabilities have evolved significantly over the past five years. Cloud provider application frameworks, meaning the capabilities that cloud service providers such as Amazon and Microsoft make available to cloud application developers, have not always supported the kind of specific computing resource allocation capabilities described above.
The continuing evolution of cloud technology means that the scope of “true cloud” keeps evolving, if the definition of true cloud means “continually making maximum use of evolving cloud computing capabilities for the benefit of cloud application subscribers.” That’s not exactly what Dean Drako meant when he originally coined the term, but it should be a goal of security industry cloud application providers. It is a way to keep increasing the value provided to customers, and consequently the profits earned by the cloud application provider.
With cloud computing capabilities, there are many new opportunities specific to individual business sectors, to design and price security operational capabilities in ways that client-server on-premises computing cannot provide. These would be true cloud offerings, because they take the capabilities that cloud computing platforms offer, and make maximum use of them to create affordable security applications with features that can’t possibly be achieved in client-server on-premises systems.
This topic requires further discussion, to be continued in the next article in this series, which discusses the six key characteristics of cloud computing. (NIST defines five, and ISO/IEC 17788 adds one more.) The article will address proof of concept testing, acceptance testing, feature trials, and the role of cloud application roadmaps.
Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities (www.go-rbcs.com). He is the author of the Elsevier book Security Technology Convergence Insights available on Amazon. Mr. Bernard is a Subject Matter Expert Faculty of the Security Executive Council (SEC) and an active member of the ASIS International member councils for Physical Security and IT Security.