This is the 14th article in the award-winning “Real Words or Buzzwords?” series about how real words become empty words and stifle technology progress, also published on SecurityInfoWatch.com.
By Ray Bernard, PSP, CHS-III
Cloud-native applications are the only way to achieve the high performance and cost-effectiveness required for large-scale (i.e., high subscriber count) cloud-based systems. They are also the only way to achieve the rapid application advancement required to keep up with accelerating trends in technology, business and security. This is why it is important to know about the six key characteristics of cloud computing. Cloud application vendors should be able to explain to integrators and consultants exactly how they are using these cloud computing characteristics to support the features and capabilities of their applications.
A native application is one that has been developed for use on a specific platform or device, and executes more quickly and efficiently because it makes maximum use of the capabilities built into (i.e. native to) that platform or device, and doesn’t require any extra layers of translation or interface to run there. Thus, we see the terms “native iOS app” and “native Android app” used to refer to mobile apps whose software code is written just for Apple’s iOS or Google’s Android operating system.
In 2015 the Cloud Native Computing Foundation was founded with the specific purpose of creating and driving the adoption of cloud-native design.
Cloud-native refers to an application that has been designed and built to take maximum advantage—based on the purpose of the application—of the key characteristics of cloud computing. These computing characteristics are very well defined and each provides specific benefits not available in standard server-based computing.
Cloud computing is not simply one or more virtual servers that can be scaled up in real time from a virtual “standard server” to a virtual “humongous server” with gigantic CPU power, RAM and database storage. Scaling up every aspect of a virtual server, or firing up multiple instances of a server, is not what cloud scaling and on-demand services are about. It is inefficient and costly because it wastes cloud resources.
Cloud computing takes a set of computing resources, such as processors and memory, and puts them into a big pool, typically using virtualization. Thus, a cloud-native security system application experiencing a spike in user activity due to a critical security incident would, for example, allocate from the pool exactly the resources it needs, such as 32 CPUs to process analytics for many video streams. The cloud infrastructure instantly assigns those resources to the application, so the user-responsiveness of the system is not affected by the spike in demand. There is no “maxing out” at 100% CPU utilization. When incident response activity is done, the application releases the resources (in this case, virtual CPUs) back into the pool for someone else to use.
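That allocate-and-release pattern can be sketched in a few lines of Python. The `ResourcePool` class, pool size, and CPU counts below are hypothetical illustrations of the concept, not any cloud provider's actual API.

```python
class ResourcePool:
    """A toy model of a cloud provider's shared CPU pool (hypothetical)."""

    def __init__(self, total_cpus):
        self.available = total_cpus

    def allocate(self, cpus):
        # The infrastructure assigns resources only if the pool can supply them.
        if cpus > self.available:
            raise RuntimeError("pool exhausted")
        self.available -= cpus
        return cpus

    def release(self, cpus):
        # When the spike ends, resources return to the pool for other tenants.
        self.available += cpus


pool = ResourcePool(total_cpus=128)

# Incident begins: the application requests exactly what it needs.
burst = pool.allocate(32)  # e.g., video-analytics workers for many streams

# ... incident response runs at full responsiveness, no 100% CPU ceiling ...

# Incident over: the application hands the CPUs back.
pool.release(burst)
```

The point of the sketch is the last line: a cloud-native application gives resources back the moment it no longer needs them, which is what makes the pay-for-what-you-use billing described next possible.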
When this happens, cloud-application service providers pay only for the computing power that is used, and this resource-cost minimization is passed along to the subscribers. This kind of cost effectiveness can’t be duplicated with on-premises systems, and it can’t be duplicated in the cloud with non-cloud-native applications.
Why the Key Cloud Characteristics Are Important
Cloud-native applications are the only way to achieve the high performance and cost-effectiveness required for large scale (i.e. high subscriber count) cloud-based systems. They are also the only way to achieve the rapid application advancement required to keep up with accelerating trends in technology, business and security. There is zero future-proofing in non-cloud-native systems.
This is why it is important to know about the six key characteristics of cloud computing. Cloud application vendors should be able to explain to integrators and consultants exactly how they are using these cloud computing characteristics to support the features and capabilities of their applications. For example, two cloud-based video management systems, the Axis Video Hosting System solution and the Eagle Eye Cloud Security Camera VMS, have users specify video storage retention as the number of days of recorded video to retain, rather than the number of gigabytes of video storage space to reserve. The cloud applications manage the storage automatically, expanding and contracting it as needed to maintain the video retention requirement.
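The retention-driven storage behavior can be sketched as follows. The function name, segment IDs, and 30-day window are illustrative assumptions, not either vendor's implementation.

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # the subscriber specifies days, not gigabytes


def prune_recordings(recordings, now):
    """Keep only video segments younger than the retention window.

    `recordings` maps a segment ID to its recording timestamp. The cloud
    application reclaims (contracts) storage as segments age out, and
    simply draws more storage from the pool as new segments arrive.
    """
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return {seg: ts for seg, ts in recordings.items() if ts >= cutoff}


now = datetime(2018, 1, 31)
recordings = {
    "cam1-old": datetime(2017, 12, 1),  # older than 30 days: reclaimed
    "cam1-new": datetime(2018, 1, 30),  # within the window: retained
}
kept = prune_recordings(recordings, now=now)
```

The subscriber never thinks in gigabytes; the application translates a business requirement (days of video) into storage allocation behind the scenes.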
The Six Key Characteristics of Cloud Computing
The NIST Definition of Cloud Computing provides five key characteristics, and ISO/IEC 17788 adds a sixth. The original language is liberally paraphrased below to make it less technical and more user-friendly. In the paragraphs that follow, the original NIST broad term “consumer” is replaced with “cloud-native application”, “user” (meaning a customer or integrator using the cloud application) or “subscriber” (meaning the integrator’s customer) to specify which type of consumer is being referred to.
From the NIST Definition: Cloud computing is a model for enabling anywhere, anytime, convenient, on-demand network access to applications that run using a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or cloud service provider interaction—typically, automatically provisioned and released based upon the level of subscriber use of the cloud-based application.
However, since that definition was written, cloud server virtual resources have been further refined to enable just parts of a server to scale up, such as CPUs or RAM. Such capability has led to serverless computing, a broad category that refers to any situation where the cloud user doesn’t manage any of the underlying hardware or virtual machines, and just accesses exposed computing functions. It’s this kind of fine-grained resource virtualization that the Cloud Native Computing Foundation was formed to enable.
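The serverless model can be made concrete with a generic sketch: the cloud user writes only the function below, and everything underneath it (provisioning, scaling, billing per invocation) is the provider's concern. The event fields and threshold are hypothetical, and no specific provider's runtime is implied.

```python
def handle_event(event):
    """The only code a serverless developer manages: pure business logic.

    No servers, virtual machines, or scaling settings appear anywhere in
    this file; the provider provisions compute per invocation.
    """
    camera_id = event["camera_id"]
    motion_score = event["motion_score"]
    # Exposed computing function: flag high-motion events for follow-up.
    return {"camera_id": camera_id, "alert": motion_score > 0.8}


# The platform, not the developer, invokes the function on each event:
result = handle_event({"camera_id": "lobby-3", "motion_score": 0.93})
```

Everything outside the function body is invisible to the cloud user, which is exactly the "doesn't manage any of the underlying hardware or virtual machines" property described above.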
- Resource pooling. The computing resources of the cloud infrastructure provider (such as Microsoft Azure, Amazon AWS or Google Cloud Services) are pooled to serve multiple subscribers with different physical and virtual resources dynamically assigned and reassigned according to subscriber demand. Examples of resources include storage, processing, memory, and network bandwidth.
- On-demand self-service. A cloud-native application can on its own automatically provision computing capabilities, such as server computing time and network storage, as needed, without requiring human interaction with service providers.
- Broad network access. Cloud-native application capabilities are available over a network and accessed through standard mechanisms, such as Internet Service Provider connections and corporate networks that provide Internet access, which enables use by various kinds of client devices (e.g., mobile phones, tablets, laptops, and workstations).
- Rapid elasticity. Capabilities can be elastically provisioned and released, preferably automatically, to scale rapidly up and down commensurate with demand. To the user, the capabilities often appear to be unlimited and can be appropriated in any quantity at any time. For example, if a subscriber needed to send out an emergency notification to 5,000 employee mobile users, the network capabilities to establish several thousand mobile device connections at once would be automatically allocated to that subscriber, for the 5- or 15-minute duration of the message broadcast and the mobile users’ return responses. Network traffic of other subscribers would not be affected.
- Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability within the cloud infrastructure, at some level appropriate to the type of service (e.g., storage, processing, network bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the cloud-application provider and the subscriber to the utilized cloud service.
- Multitenancy. Multitenancy (sometimes hyphenated as “multi-tenancy”) is a software architecture in which a single instance of software serves multiple subscribers, referred to as tenants. A tenant is a group of users (for a security system, this could be subscriber employees and building occupants) who share common access, with specific privileges, to the software instance and associated database storage. Under the multitenant model, each subscriber’s allocated cloud resources are separate and distinct from those of other subscribers, so that subscribers can only access their own data and will only use their own allocation of computing resources (necessary for accurate billing). This contrasts with giving each subscriber separate virtual servers, applications, and databases. In a cloud-native application, each cloud data center runs only a single instance of the application software and any databases, shared by all subscribers.
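The tenant-isolation rule — one shared software instance, with each subscriber able to reach only its own rows — can be sketched as a tenant-scoped query layer. The schema, tenant names, and event details below are hypothetical.

```python
import sqlite3

# One shared database instance serves all tenants; every row is tagged.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (tenant_id TEXT, detail TEXT)")
db.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("acme", "door forced"), ("acme", "badge denied"), ("globex", "tailgating")],
)


def events_for(tenant_id):
    """Every query is scoped by tenant: a subscriber sees only its own data."""
    rows = db.execute(
        "SELECT detail FROM events WHERE tenant_id = ? ORDER BY rowid",
        (tenant_id,),
    ).fetchall()
    return [detail for (detail,) in rows]


# Each tenant gets a private slice of the single shared instance.
acme_events = events_for("acme")
globex_events = events_for("globex")
```

In a real multitenant application the scoping is enforced in a shared data-access layer rather than repeated in every query, so that no code path can ever omit the tenant filter.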
Implications of Cloud-Native Systems
Cloud-native system architecture is very different from the client-server architectures of traditional on-premises security systems. With client-server systems, it was easy to perform lab-based proof-of-concept (POC) testing for systems integrations, and site-based security system acceptance testing. Now that the computing infrastructure has moved to the cloud, accomplishing the objectives of such testing requires different approaches. These and other cloud-native subjects are generally not being discussed within the security industry. True cloud computing, at least for electronic physical security systems, should take these things into account, because cloud computing is intended to improve the system experience, not negatively constrain it. True Cloud (Part Three) will directly address these subjects.
Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities (www.go-rbcs.com). He is the author of the Elsevier book Security Technology Convergence Insights available on Amazon. Mr. Bernard is a Subject Matter Expert Faculty of the Security Executive Council (SEC) and an active member of the ASIS International member councils for Physical Security and IT Security.