
Real Words or Buzzwords?: Intuitive


This is the 11th article in the award-winning “Real Words or Buzzwords?” series about how real words become empty words and stifle technology progress, also published on SecurityInfoWatch.com.

By Ray Bernard, PSP, CHS-III



It’s about time that we had a real-world testable definition for “intuitive”, since so many product specifications proudly include “intuitive user interface” in their Design Criteria and Performance requirements!


Over the past few months, my work has put over a hundred product A&E specs in front of me. Almost all the specs for web or mobile-device applications contained the phrase "intuitive user interface". I had been away from A&E specs for a while, and I had forgotten how flawed specs can sometimes be. Not that writing good specs isn't challenging; it is, especially when the specifications are for software products.

Early in my engineering education, the words "testable specifications" were practically unavoidable, and almost every book or paper about writing requirements and specifications contained the mandate, "Specifications and requirements must be testable." Otherwise, how do you know you have matched the specifications and conformed to the requirements?

I don't think anyone in the physical security industry disagrees with that. At least, not until it comes to software. Non-testable terms abound, such as "user-friendly software". The problem is not confined to the security industry. Whenever I would ask what the criteria were for user-friendliness, the most common answer was, "You can't define it, but you can tell when you see it." Given the number of products that end users don't find so friendly, I think we need a better answer than that, don't you?

    The Importance of Being Intuitive

    There are several technology trends that are relevant to products being intuitive to use.

• Consumerization: the reorientation of product and service designs to focus on, and market to, the end user as an individual customer, in contrast with an earlier period of designing for and focusing on organizations as customers. Organizations have IT departments; individual customers don't. That has meant the end of training classes, long learning curves and the need for skilled technical deployment teams.
    • Do It Yourself (DIY): This trend’s product requirements include: auto-discovery, instant connection, auto-configuration, no-training-necessary, simplified operation.

    All the aspects of these two trends have intuitive product interaction as a key requirement.

So, products should be getting more intuitive, and if they are, "intuitive" should be a common product claim. It should appear in specifications.

    CSI Specifications

    The Construction Specifications Institute (CSI) publishes guidelines and standards for specification writing, including guidance and requirements for the famous three-part CSI specification, with the very familiar PART ONE, PART TWO and PART THREE headings.

    What does this have to do with “Intuitive” software? I’ll get right to the point, per the CSI standards.

    The following quoted text is taken from the SectionFormat™/PageFormat™ 2009 document, from Section Format page 21, in a slightly different order than in the document.

    “Any section that specifies a product should include ‘Design and Performance Criteria’ in Part 2.”

    “Include performance-related characteristics of products.”

    “Performance characteristics may apply to systems, assemblies, components, and materials.”

    “Include appropriate methods of substantiating performance characteristics . . .”

    “Performance should be capable of being verified by observations or tests and stated as: (1) a property name; (2) a value and units of measure, if applicable; and (3) a method of evaluating or verifying performance, such as a test method.”

    So, let’s see how those three performance verification items (1, 2, and 3 above) apply.

    Testing “Intuitive”

    For #1, the property name would be “Intuitive”. That was easy, and completes our testing element #1.

    For #2, we need “a value and units of measure”. Since “intuitive” covers a lot of ground, we should have multiple values, each with units of measure.

    For #3, we need “a method of evaluating or verifying performance, such as a test method”.

    These are completely doable, so let’s address our testing element #2 next.
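To make that concrete, here is a minimal sketch in Python, with hypothetical field names that are not part of any CSI standard, of how those three verification elements might be captured as a structured requirement record. The value, units, and test method for "Intuitive" are developed later in this article.

```python
from dataclasses import dataclass

@dataclass
class PerformanceRequirement:
    """A testable performance characteristic, per the three CSI elements.

    Field names are illustrative only, not part of any CSI standard.
    """
    property_name: str  # element 1: the property being specified
    value: float        # element 2: the required value...
    units: str          # ...and its units of measure
    test_method: str    # element 3: how performance is verified

# Hypothetical example, anticipating the rating method developed below.
intuitive = PerformanceRequirement(
    property_name="Intuitive",
    value=2.0,
    units="average pairwise rating on a 1-3 scale",
    test_method="paired-comparison evaluation by end users",
)
print(intuitive)
```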

    Defining Intuitive

    Although the term “intuitive UI” has been in use for decades, it wasn’t defined until June of 2010, when Everett McKay published the definition below, based upon Wikipedia’s definition of “intuition”:

    A UI is intuitive when users understand its behavior and effect without use of reason, experimentation, assistance, or special training.

    However, we still need help connecting the dots between that conceptual definition, which states the design goal, and the real-world software characteristics needed to achieve the result. So, Everett developed such a definition.

    Everett’s Definition of Intuitive

Everett uses the term affordance, which means an aspect of a product or object that provides clues as to how it could possibly be used. An outstanding discussion of affordances, complete with excellent examples, is provided in this article about affordances from the Encyclopedia of Human-Computer Interaction, 2nd Edition. Jump to that article, look at the first three figures, and then come back to read the detailed definition below.

    Here is Everett’s definition.

    A UI is intuitive when it has an appropriate combination of:

    • Affordance. Visually, the UI has clues that indicate what it is going to do. Users don’t have to experiment or deduce the interaction. The affordances are based on real-world experiences or standard UI conventions.
    • Expectation. Functionally, the UI delivers the expected, predictable results, with no surprises. Users don’t have to experiment or deduce the effect. The expectations are based on labels, real-world experiences, or standard UI conventions.
• Efficiency. The UI enables users to perform an action with a minimum amount of effort. If the intention is clear, the UI delivers the expected results the first time so that users don't have to repeat the action (perhaps with variations) to get what they want.
    • Responsiveness. The UI gives clear, immediate feedback to indicate that the action is happening, and was either successful or unsuccessful.
    • Forgiveness. If users make a mistake, either the right thing happens anyway or they can fix or undo the action with ease.
    • Explorability. Users can navigate throughout the UI without fear of penalty or unintended consequences, or of getting lost.
• No frustration. Emotionally, users are satisfied with the interaction.

Of these elements, the first two reflect the conceptual dictionary definition Everett first provided, and the others are the extra attributes that go beyond that definition to provide values that can be utilized in a test method.

    To this list I add one more characteristic: Consistency. Some companies break down their developer assignments so that each part of the application has a different developer, and when the above characteristics are not taken into account uniformly by the development team members, the user experience varies from one part of the product to another. I have found this to be true even with products described as a “unified platform”. Here is the description of that characteristic:

    • Consistency. The points above must be applied to a consistent depth or level throughout the user interface, so that the user experience is consistent throughout the entire product or platform.

When the Consistency factor is low, there can be a persistently high level of frustration for users with the part of the application that they use the least. If functions that are used infrequently, such as emergency or incident response functions, are of critical importance, it is important to give Consistency sufficient consideration.

These characteristics provide eight points of intuitive design we can use to rate the degree of intuitive user interaction for any product. The characteristics serve as the "values" described by the CSI guidance. Below is a scale that provides "units of measure" to rate them on, which completes our testing element #2.

    Rating Intuitive User Interaction

    I have used this rating approach successfully in several dozen critical system design projects, about half of which were electronic physical security systems, and all of which had the following challenges:

    • There were multiple user categories, each with differing use requirements.
    • None of the candidate products were ideal, but one of them had to be selected.
• The user base contained personnel equally spread among novices, intermediates and experts in terms of operations tasks and familiarity with the type of software applications being evaluated.
    • The customer team had tried to select a product on its own, and could not come to agreement.
• None of the reference customers for any of the products were using their system in the same way this team would, nor at the same scale as the client trying to choose a product.
• They were hoping that my firm would select a product for them and provide them with a rationale for the selection, expecting that this would leave them satisfied, which we knew would be a recipe for disaster.

    Addressing these challenges led me to the following evaluation method, to identify the “most intuitive” product.

Important note: To be successful in terms of the overall project scope, the most-intuitive product evaluation step must be the last step in the product evaluation process. To make a successful selection based upon the intuitiveness of the products, every candidate product must be otherwise acceptable for use. The purpose of the final test is to select the most intuitive from a set of otherwise qualified products. Sometimes this means there are only two or three candidates.

    Evaluation Method

Our testing element #3 is the evaluation method. The scientific method at the basis of this evaluation is called "pairwise comparison" or "paired comparison". It scales up to any number of product comparisons and any number of features to compare, in situations where a small group of evaluators (we have used up to 15) must reach a consensus decision. It allows for subjective evaluation by observation or even "gut feel", and it works even where some of the evaluators don't have an opinion because the particular item or feature under consideration is outside their realm of knowledge or experience: they can simply score such comparisons as a tie, and their scores will then be neutral when combined with those of the other evaluators.

    Of course, one person alone can use this method to satisfy his or her own requirements, but security systems typically have a variety of user roles, such as the basic set: administration, operations, and investigations.

In a pairwise comparison, a whole list of items is ranked by comparing items only two at a time and assigning a numerical rating to each comparison. Adding up the ratings produces a score for each item. Some items score higher than others; the numbers document the ranking.

This is a divide-and-conquer approach: for each pair of items, a simple comparison is made. The approach is so simple that any group of end users can do it, regardless of the depth of their experience. Why is this true? Because the entire purpose of this process is not to find the universally "best product" of all products. It is to find the product that best fits the specific users and helps them get their work done in the ways that are easiest and best for them.
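As a minimal sketch of that tallying, with hypothetical products and ratings rather than data from any real evaluation (the template mentioned below may organize the bookkeeping differently), here is one way a single evaluator's pairwise ratings roll up into a ranking:

```python
products = ["Product A", "Product B", "Product C"]

# 3/2/1 rating of the first product in each pair against the second:
# 3 = better, 2 = equal (tie), 1 = worse. Values are hypothetical.
pair_ratings = {
    ("Product A", "Product B"): 3,  # A rated better than B
    ("Product A", "Product C"): 2,  # A and C rated equal
    ("Product B", "Product C"): 1,  # B rated worse than C
}

# One way to tally: the first product gets the rating, and the second
# gets the mirror-image rating (3 <-> 1, a tie stays 2), i.e. 4 - rating.
scores = {p: 0 for p in products}
for (first, second), rating in pair_ratings.items():
    scores[first] += rating
    scores[second] += 4 - rating

for product, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(product, score)
# Product A 5 / Product C 5 / Product B 2 -> A and C lead, B trails
```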

    To see what the product comparison chart looks like, see this example pairwise comparison chart.

    Simple Comparison

    1. Establish Final Say. Elect someone to the tie-breaker role in case consensus cannot be reached on any individual rating score.
    2. Determine the Critical Application Functions. For each category of user, select the most critical application functions. Usually there are four to six, but there could be as many as nine or twelve.
3. Independently Rate the Critical Application Functions. For each critical function, start with one product and compare it one-by-one with each of the remaining products. Rate each pair of products on the eight "intuitive" characteristics, using the three-point rating criteria listed below. Keep the definitions of the characteristics above on hand, so that you can reference them during the comparison.

    1. Affordance

    • 3  Better use of affordance
    • 2  Equal use of affordance
    • 1  Worse use of affordance

    2. Expectation

    • 3  Meets functionality expectations better
    • 2  Meets functionality expectations equally
    • 1  Meets functionality expectations worse

    3. Efficiency

    • 3  Higher efficiency
    • 2  Equal efficiency
    • 1  Lower efficiency

    4. Responsiveness

    • 3  More responsive
    • 2  Equally responsive
    • 1  Less responsive

    5. Forgiveness

    • 3  More forgiving
    • 2  Equally forgiving
    • 1  Less forgiving

    6. Explorability

    • 3  Better explorability
    • 2  Equal explorability
    • 1  Worse explorability

    7. Frustration

    • 3 Lower frustration level
    • 2 Equal frustration level
• 1 Higher frustration level

    8. Consistency

    • 3 Better consistency
    • 2 Equal consistency
    • 1 Worse consistency
4. Generate a Tentative Final Rating. Add up the individual rating scores and divide by the number of evaluators to get a chart with the overall rating (a small scoring sketch follows this list).
5. Collectively Finalize the Rating. In a group session, review the overall rating. If any evaluator disagrees with the final rating, allow them to present their case to the group and consider the factors on which evaluators differ. Come to a consensus for a final score. Rely on the Final Say role to keep the process advancing if needed.
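As a small sketch of steps 3 and 4 above, here is how several evaluators' ratings for one product pair could be averaged into a tentative overall rating. All names and numbers are hypothetical, and a real evaluation template may lay this out differently.

```python
# Averaging three evaluators' ratings for one pairwise comparison
# (Product A vs. Product B) across the eight "intuitive" characteristics.
CHARACTERISTICS = [
    "Affordance", "Expectation", "Efficiency", "Responsiveness",
    "Forgiveness", "Explorability", "Frustration", "Consistency",
]

# Each evaluator rates A against B on the 3/2/1 scale per characteristic;
# a 2 ("equal") is the neutral score for comparisons outside an
# evaluator's experience, as described earlier. Values are hypothetical.
evaluator_ratings = [
    {"Affordance": 3, "Expectation": 2, "Efficiency": 3, "Responsiveness": 2,
     "Forgiveness": 1, "Explorability": 3, "Frustration": 2, "Consistency": 3},
    {"Affordance": 2, "Expectation": 2, "Efficiency": 3, "Responsiveness": 3,
     "Forgiveness": 2, "Explorability": 2, "Frustration": 2, "Consistency": 2},
    {"Affordance": 3, "Expectation": 1, "Efficiency": 2, "Responsiveness": 2,
     "Forgiveness": 2, "Explorability": 3, "Frustration": 3, "Consistency": 2},
]

# Step 4: add up the individual scores and divide by the number of
# evaluators to get the tentative overall rating per characteristic.
tentative = {
    c: sum(r[c] for r in evaluator_ratings) / len(evaluator_ratings)
    for c in CHARACTERISTICS
}

for c, avg in tentative.items():
    print(f"{c:15s} {avg:.2f}")  # >2 favors A, <2 favors B, 2 is a tie
```

The per-characteristic averages become the chart that the group reviews in step 5, where disagreements are argued out and a consensus final score is recorded.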

The simplicity of this method hides its power. Even the most feature-rich products can be evaluated this way.

    Request a Product Evaluation Template

    If you are interested in applying this method, email me using this link to request a product evaluation template that we have used successfully in our projects.

    Note to Manufacturers and Sales People

If you really want to create a breakthrough "next generation" product, have your customers and prospects compare your next-generation designs with your existing product. This is one way to get powerful results from your user groups or focus groups. For each business sector that is important to you, set up 1-day product evaluation sessions where your customers and prospects evaluate your products using this method. You will get more valuable feedback from this method than from any other approach you have used in the past. Try it with your most important target market, hopefully before your competitors do.

    Raising Security Industry Success

We owe it to our security industry customers to do a better job of helping them to evaluate, select, acquire and deploy products. Isn't that how industry companies earn their revenue? The product evaluation process has long been a frustrating part of the role of security specifiers, partly because there is no universally "best product". The product selection process must be tailored to each customer's needs. Most consultants and design engineers know how to perform risk assessments and apply their results to system design. It's the product selection part that is the challenge, especially in this day and age when technology advancement continues to accelerate.

    This is one tool that can support manufacturers, consultants and integrators in helping their customers more easily meet their security technology objectives.

    We can all win with highly intuitive products and systems.

    Ray Bernard, PSP CHS-III, is the principal consultant for Ray Bernard Consulting Services (RBCS), a firm that provides security consulting services for public and private facilities (www.go-rbcs.com). He is the author of the Elsevier book Security Technology Convergence Insights available on Amazon. Mr. Bernard is a Subject Matter Expert Faculty of the Security Executive Council (SEC) and an active member of the ASIS International member councils for Physical Security and IT Security.