This is the 45th article in the “Real Words or Buzzwords?” series about how real words become empty words and stifle technology progress.
By Ray Bernard, PSP, CHS-III
Examining the differences between Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) technologies and how they’re being leveraged in security applications today
Real Words or Buzzwords?
The Bi-Weekly Article Series
#1 Proof of the buzzword that killed tech advances in the security industry—but not other industries.
#2 Next Generation (NextGen): A sure way to tell hype from reality.
#3 Customer Centric: Why all security industry companies aren't customer centric.
#4 Best of Breed: What it should mean to companies and their customers.
#5 Open: An openness scale to rate platforms and systems.
#6 Network-friendly: It's much more than network connectivity.
#7 Mobile first: Not what it sounds like.
#8 Enterprise Class (Part One): To qualify as an Enterprise Class system today is worlds beyond what it was yesterday.
#9 Enterprise Class (Part Two): Enterprise Class must be more than just a top-level label.
#10 Enterprise Class (Part Three): Enterprise Class must be 21st century technology.
#11 Intuitive: It’s about time that we had a real-world testable definition for “intuitive”.
#12 State of the Art: A perspective for right-setting our own thinking about technologies.
#13 True Cloud (Part One): Fully evaluating cloud product offerings.
#14 True Cloud (Part Two): Examining the characteristics of 'native-cloud' applications.
#15 True Cloud (Part Three): Due diligence in testing cloud systems.
#16 IP-based, IP-enabled, IP-capable, or IP-connectable?: A perspective for right-setting our own thinking about technologies.
#17 Five Nines: Many people equate high availability with good user experience, yet many more factors are critically important.
#18 Robust: Words like “robust” must be followed by design specifics to be meaningful.
#19 Serverless Computing – Part 1: Why "serverless computing" is critical for some cloud offerings.
#20 Serverless Computing – Part 2: Why full virtualization is the future of cloud computing.
#21 Situational Awareness – Part 1: What products provide situational awareness?
#22 Situational Awareness – Part 2: Why system designs are incomplete without situational awareness.
#23 Situational Awareness – Part 3: How mobile devices change the situational awareness landscape.
#24 Situational Awareness – Part 4: Why situational awareness is a must for security system maintenance and acceptable uptime.
#25 Situational Awareness – Part 5: We are now entering the era of smart buildings and facilities. We must design integrated security systems that are much smarter than those we have designed in the past.
#26 Situational Awareness – Part 6: Developing modern day situational awareness solutions requires moving beyond 20th century thinking.
#27 Situational Awareness – Part 7: Modern day incident response deserves the help that modern technology can provide but doesn’t yet. Filling this void is one of the great security industry opportunities of our time.
#28 Unicity: Security solutions providers can spur innovation by envisioning how the Unicity concept can extend and strengthen physical access into real-time presence management.
#29 The API Economy: Why The API Economy will have a significant impact on the physical security industry moving forward.
#30 Future-Proof: What does Future-Proof mean in an era of managed services, continuous delivery, and ever-accelerating technology advancement?
#33 Software-Defined: Cloud-computing technology, with its many software-defined elements, is bringing self-scaling real-time performance capabilities to physical security system technology.
#34 High-Performance: How the right use of "high-performance" can accelerate the adoption of truly high-performing emerging technologies.
#35 Erasure Coding: Why RAID drive arrays don’t work anymore for video storage, and why Erasure Coding does.
#36 Presence Control: Anyone responsible for access control management or smart building experience must understand and apply presence control.
#37 Internet+: The Internet has evolved into much more than the information superhighway it was originally conceived to be.
#38 Digital Twin: Though few in physical security are familiar with the concept, it holds enormous potential for the industry.
#39 Fog Computing: Though commonly misunderstood, the concept of fog computing has become critically important to physical security systems.
#40 Scale - Part 1: Although many security-industry thought leaders have advocated that we should be “learning from IT,” there is still insufficient emphasis on learning about IT practices, especially for large-scale deployments.
#41 Scale - Part 2: Why the industry has yet to fully grasp what the ‘Internet of Things’ means for scaling physical security devices and systems.
#42 Cyberspace - Part 1: Thought to be an outdated term by some, understanding ‘Cyberspace’ and how it differs from ‘Cyber’ is paramount for security practitioners.
#43 Cyber-Physical Systems - Part 1: We must understand what it means that electronic physical security systems are cyber-physical systems.
#44 Cyberspace - Part 2: Thought to be an outdated term by some, understanding ‘Cyberspace’ and how it differs from ‘Cyber’ is paramount for security practitioners.
#45 Artificial Intelligence, Machine Learning and Deep Learning: Examining the differences in these technologies and their respective benefits for the security industry.
#46 VDI – Virtual Desktop Infrastructure: At first glance, VDI doesn’t seem to have much application to a SOC deployment. But a closer look reveals why it is actually of critical importance.
#47 Hybrid Cloud: The definition of hybrid cloud has evolved, and it’s important to understand the implications for physical security system deployments.
#48 Legacy: How you define ‘legacy technology’ may determine whether you get to update or replace critical systems.
#49 H.264 - Part 1: Examining the terms involved in camera stream configuration settings and why they are important.
#50 H.264 - Part 2: A look at the different H.264 video frame types and how they relate to intended uses of video.
More to come about every other week.
At the ASIS GSX 2019 event I participated in a panel session titled, “How IoT, the Cloud and AI Deliver Business Intelligence.” The video/audio recordings will be available online from ASIS in a few weeks. I had been holding off on writing about artificial intelligence (AI), machine learning (ML) and deep learning (DL) – the subjects of this article – because offerings powered by AI technologies were newly emerging and the related vendor vocabularies were still evolving.
However, the #1 question for our panel session was, “What’s the difference between AI, machine learning and deep learning?” Nearly all of the attendees nodded their heads to indicate their strong interest in the question.
I’m going to define the terms here and then provide references to well-written articles that will let you delve as deeply as you like into the topics. Before presenting the definitions, you should know – and maybe you do already – that machine learning is a type of artificial intelligence, and deep learning is a type of machine learning and thus is also a type of artificial intelligence.
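To make that nesting concrete, here is a hypothetical sketch – the class names are invented for illustration and come from no real library – that expresses the relationship as a Python class hierarchy:

```python
# Illustrative only: AI > ML > DL nesting expressed as class inheritance.
class ArtificialIntelligence:
    """Any technique performing tasks thought to require human intelligence."""

class MachineLearning(ArtificialIntelligence):
    """AI that improves from experience rather than explicit programming."""

class DeepLearning(MachineLearning):
    """ML built from multi-layer neural networks."""

dl = DeepLearning()
print(isinstance(dl, MachineLearning))          # → True: every DL system is ML
print(isinstance(dl, ArtificialIntelligence))   # → True: and therefore also AI
```

The reverse does not hold: an AI system is not necessarily a machine learning system, just as a machine learning system is not necessarily a deep learning system.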
An Important Aspect of AI History
The history of AI research goes like this:
- Artificial Intelligence – 1950s to 1980s and still ongoing
- ⇛ Machine Learning – 1980s to 2010s and still ongoing
- ⇛ ⇛ Deep Learning – 2010s to now and still ongoing
For decades, AI researchers weren’t getting the results they wanted, and eventually many leading AI scientists wondered if there was something wrong with their approaches to AI software. The actual issue was not the software, but the capabilities of the hardware they were using to design and run it.
The hardware simply wasn’t capable of handling what the AI scientists wanted to do. This has become obvious now that – thanks to the exponential advancement of computing technologies, including hardware virtualization and cloud computing – we have hardware capable of supporting software that performs amazing numbers of processing tasks in parallel. That’s the story behind the story of the development of machine learning, and then its refinement into deep learning.
A lot of this story, and some very well-written explanations about the technology, are available on Nvidia’s company blog (links follow later in this article). Nvidia is one of the companies that makes the computer cards holding multiple high-speed parallel-processing computer chips that make the running of machine learning and deep learning software feasible.
Such processing involves handling massive amounts of data, and so today’s CPU chip designs include support for massive data throughput to and from the GPU (graphics processing unit) chips that Nvidia and others make. GPUs were initially developed for video gaming, because video displays – especially for 3D games – had to process the three-dimensional visual aspects of hundreds or thousands of objects on the display screen or virtual reality headset. They also had to process all the computer code behind the visual displays, such as what’s required to realistically bounce a ball across a floor. That’s a lot of parallel processing power, and soon AI researchers found that they could use these chips to run various parts of their AI software in parallel.
That processing capability made such a great difference in AI results that Nvidia and other chip makers began designing chips specifically to support the kind of software that AI scientists wanted to create. Somewhere along the line the scientists realized that their theories about what AI software could do were correct – they just needed hardware that could support the vast amounts of data processing their software required.
These topics involve incredibly complicated logistics and software design that go far beyond the ways we’re used to thinking. We can easily deal with two-dimensional and three-dimensional concepts because we live in a three-dimensional world that is often represented visually in two dimensions. But what about twelve-dimensional data, where many dozens of tiny software programs are all exchanging data with each other at the same time? Now multiply that by a million or two, and you have computers performing data processing tasks that are literally mind-boggling for humans. This article discusses mind-boggling technology – but we don’t need to understand the mind-boggling parts. We just need an accurate description of what the AI software does, and how we can use it for security system applications. That’s the scope of this article’s discussion.
Artificial Intelligence is the computer performance of tasks that have been thought to require human intelligence. Such tasks are composed of processes like learning (getting not just information but also the rules for how to use the information), reasoning (using the rules to reach exact or approximate conclusions), and self-correction (for which software uses feedback loops that enable the software to evaluate the results of the reasoning that it applied).
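As a rough illustration of those three processes, here is a hypothetical sketch – the function names, thresholds, and feedback data are all invented – of a detector that reasons with a learned rule and self-corrects through a feedback loop:

```python
# Illustrative sketch of reasoning plus a self-correction feedback loop.
def reason(threshold, motion_score):
    """Apply the learned rule: a score at or above threshold means 'alert'."""
    return motion_score >= threshold

def self_correct(threshold, feedback, step=0.05):
    """Feedback loop: nudge the rule toward fewer wrong conclusions."""
    for motion_score, was_real_event in feedback:
        predicted = reason(threshold, motion_score)
        if predicted and not was_real_event:      # false alarm -> raise the bar
            threshold += step
        elif not predicted and was_real_event:    # missed event -> lower the bar
            threshold -= step
    return threshold

# Each pair is (motion score, whether a real event actually occurred).
feedback = [(0.9, True), (0.2, False), (0.4, True), (0.3, False)]
tuned = self_correct(0.5, feedback)   # the rule adjusts itself from results
```

Real AI systems use far more sophisticated feedback mechanisms, but the principle is the same: the software evaluates the results of its own reasoning and adjusts.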
Just as some people are smarter than others, especially in particular subject areas, some AI software is “smarter” than other AI software across various subject areas. While that’s interesting, the most important question is whether or not the AI software can do what it is intended to do, with the data it’s intended to use. Can the AI software get the results we want?
Machine Learning involves using AI software to give systems the ability to automatically learn and then improve from experience, without the system having been specifically programmed for those improvements. Machine learning originally required structured data. An email is a piece of structured data – sender, recipient, subject, and message body – and even the message body can contain a structure, such as salutation, message content, and signature. Thus, it was possible to feed machine learning software thousands of emails classified as spam or good, give it rules about the various parts of the data structure, and train it (by feeding it examples) to use those rules to tell the difference. That was how some types of spam filtering software came about.
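The training-by-example idea described above can be sketched in a few lines. This toy example – the field names and word-counting rule are invented for illustration; real spam filters are far more sophisticated – learns from labeled emails and then classifies new ones:

```python
# Toy sketch: learn spam vs. good from labeled structured email data.
from collections import Counter

def train(labeled_emails):
    """Count how often each subject-line word appears in spam vs. good mail."""
    spam_words, good_words = Counter(), Counter()
    for email, is_spam in labeled_emails:
        words = email["subject"].lower().split()
        (spam_words if is_spam else good_words).update(words)
    return spam_words, good_words

def classify(email, spam_words, good_words):
    """Label mail as spam when its words were seen more often in spam."""
    words = email["subject"].lower().split()
    spam_score = sum(spam_words[w] for w in words)
    good_score = sum(good_words[w] for w in words)
    return spam_score > good_score

training = [
    ({"subject": "win free money now"}, True),
    ({"subject": "free prize winner"}, True),
    ({"subject": "meeting agenda for monday"}, False),
    ({"subject": "monday project status"}, False),
]
model = train(training)
print(classify({"subject": "free money prize"}, *model))   # → True
print(classify({"subject": "monday meeting"}, *model))     # → False
```

Note that nobody wrote an explicit rule saying “free money” is spam; the software derived that from the examples – which is exactly what “learning from experience without being specifically programmed” means.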
Deep Learning is a type of machine learning that can go beyond structured data. It is built using software called neural networks, which are made up of pieces of software code (called nodes) that exchange data with each other in specific ways. Each node has its own data evaluation task to perform, and the outputs of those tasks become inputs to other nodes’ tasks. This mimics the way that scientists think the human brain’s nerve cells work (that’s why it’s called a neural network). Each group of software nodes that performs its processing in parallel is called a layer. Technically speaking, a neural network with more than three layers is called deep learning.
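The node/layer structure described above can be sketched in plain Python. The weights here are made up purely for illustration – a real network learns them from training data – but the flow shows how each layer’s outputs become the next layer’s inputs:

```python
# Bare-bones sketch of nodes, layers, and a forward pass through a network.
import math

def neuron(inputs, weights, bias):
    """One node: weighted sum of its inputs, squashed to a 0..1 activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))      # sigmoid activation

def layer(inputs, layer_weights, layer_biases):
    """A layer: every node processes the same inputs in parallel."""
    return [neuron(inputs, w, b) for w, b in zip(layer_weights, layer_biases)]

def forward(inputs, network):
    """Feed data through each layer in turn, outputs becoming inputs."""
    for layer_weights, layer_biases in network:
        inputs = layer(inputs, layer_weights, layer_biases)
    return inputs

# Four layers of made-up weights: by the rule of thumb above, this is "deep".
network = [
    ([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]),   # layer 1: two nodes
    ([[0.3, 0.7], [-0.6, 0.4]], [0.2, 0.0]),   # layer 2: two nodes
    ([[0.9, -0.1], [0.2, 0.2]], [0.0, 0.0]),   # layer 3: two nodes
    ([[1.0, 1.0]], [-1.0]),                    # layer 4: one output node
]
output = forward([0.6, 0.4], network)          # a single 0..1 score
```

Since every node in a layer computes independently of its neighbors, all of a layer’s nodes can run at the same time – which is why the parallel-processing GPUs discussed earlier made deep learning practical.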
But how many layers of parallel processing there are is irrelevant to us, the security system designers, manufacturers and end users. What matters is the end result, and that result is in our security domain, not in the domain of data science and computer processing.
Getting Real About AI
The greatest immediate impact of AI for us is in video analysis, both in real time and in after-the-fact review and data extraction. The big advantage we have – one that information technology folks working with business data AI systems don’t have – is that video data is already visual. We don’t have to convert the data into charts and graphs for visualization. So AI-enabled physical security products can be easily evaluated just by seeing whether the results match the video images. This is one reason why AI-enabled security systems will be easier to adopt, and will succeed faster, than AI for business data.
We don’t have to learn anything new to evaluate AI-enabled products or platforms, except how the new products and systems work.
However, the technology behind it is interesting – especially to those of us who deal directly with it – and so here are some links to broader and deeper discussions about AI, ML and DL technology:
- What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning? This Nvidia blog article expands on the discussion above.
- What’s the Difference Between Deep Learning Training and Inference? This Nvidia blog explains the technology that makes possible what I wrote about in my most recent Convergence Q&A column titled, “The Move to Enable Proactive AI in Security Operations.”
Future Real Words or Buzzwords? articles will discuss more aspects of AI, ML and DL as products using them continue to emerge.
Ray Bernard, PSP, CHS-III, is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities (www.go-rbcs.com). In 2018 IFSEC Global listed Ray as #12 in the world’s Top 30 Security Thought Leaders. He is the author of the Elsevier book Security Technology Convergence Insights, available on Amazon. Mr. Bernard is a Subject Matter Expert Faculty member of the Security Executive Council (SEC) and an active member of the ASIS International member councils for Physical Security and IT Security. Follow Ray on Twitter: @RayBernardRBCS.
© 2019 RBCS