
Pass or Fail: The Out-of-the-Box Experience


By Ray Bernard, James Connor and Rodney Thayer

Originally appeared in Security Technology Executive Magazine

 

Much has been written and said at security conferences, in magazines, and in online forums about network equipment requirements for putting physical security systems onto corporate networks. The majority of the discussions center on security video. This is to be expected, given that networked video has higher bandwidth requirements than all of the other physical security technologies combined (such as access control, intercom, and intrusion detection monitoring). Other discussions cover related topics like collaborating with IT.

There is very little discussion about the wider scope of best practices for deploying physical security technology on enterprise networks. Such practices are needed because many security devices and systems were designed on the assumption that the equipment would be deployed on a completely independent security network rather than in an enterprise network environment.

In many organizations, fully independent networks for security systems require a level of duplication and cost that, at least for some systems and technologies, would be not only unwanted but needless. For organizations with enterprise-wide networking in place, an infrastructure exists to make security information and control available at all points in the organization where that makes sense for security, safety or business operational purposes.

Thus the authors have formed the Bp.IP Initiative to advance best practices for deploying IP-based security systems in enterprise environments, including practices that compensate for, or work around, the network environment shortcomings found across the spectrum of security products currently available.

Best Practices and Standards

When it comes to placing physical security systems onto an enterprise network, what defines best practice? Ultimately, best practices should produce better results than would otherwise be achieved in performance, cost, compliance with standards, compatibility with existing network equipment and devices, and the level of effort required to deploy.

Standardization covers people, process and technology aspects of computing systems and networks. In the IT world, there are best practices that cover software design, development, deployment, maintenance, and administration—including testing and validation. Specific standards exist for computing and network devices, and for the security of networks and applications. Standards abound for telecommunications and network infrastructure.

IT practitioners learned long ago that standards and best practices allow them to deploy and manage very large and complex networks across geographic as well as language and cultural boundaries with these results:

  • Highest quality, reliability and performance
  • Lowest cost to deploy and maintain
  • High scalability and adaptability
  • Compatibility and interoperability among different brands
  • Overall infrastructure that can evolve as areas of technology advance
  • Stability even while undergoing improvements and upgrades
  • Highest achievable ROI for money invested

When deploying physical security systems on an enterprise network, failing to follow applicable IT standards and good practices not only means walking away from many of these benefits, it can also mean introducing problems that raise network management costs and even interfere with other systems.

Why Best Practices are Needed

When putting security systems and equipment onto an enterprise network, best practices are needed to:

  • prevent the systems and equipment from interfering with other systems (a basic segmentation check like the one sketched after this list is one way to verify this);
  • keep device traffic that can resemble “network attack behavior” off network segments that are monitored to catch and stop such attacks;
  • enable security networks, security systems and devices to benefit from existing network scan and monitoring programs;
  • facilitate troubleshooting in the enterprise environment; and
  • facilitate support from IT to leverage the organization’s existing investment in IT resources (including expertise) as well as to reduce response and recovery time.
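
As a concrete illustration of the first two points above, the following is a minimal sketch (in Python, with placeholder addresses and ports) of the kind of reachability test an integrator or IT group might run from the security-device segment to confirm that it really is isolated from other corporate segments. It is an illustrative check under assumed addressing, not a substitute for the network team’s own test plan.

```python
# Minimal sketch: verify that hosts on a (hypothetical) security-device segment
# cannot reach services on other corporate segments. Addresses and ports are
# placeholders; a real test plan would come from the IT network team.
import socket

# Corporate hosts/ports that security devices should NOT be able to reach.
FORBIDDEN_TARGETS = [
    ("10.20.0.15", 445),   # file server (SMB)
    ("10.20.0.20", 1433),  # database server
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Run this from a workstation placed on the security-device segment.
    for host, port in FORBIDDEN_TARGETS:
        status = "ISOLATION FAILURE" if reachable(host, port) else "blocked (as expected)"
        print(f"{host}:{port} -> {status}")
```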

This is why the authors have collaborated to identify a selected set of criteria to use in establishing “best practice” examples. These are not necessarily “advanced practices” or “difficult to learn” methods. They are basic approaches that address issues and questions commonly of concern to enterprise IT departments.

These criteria are intended to address the fact that most installed security systems and devices, and many new security devices, are not fully network-ready. They were not designed to co-exist on a network with the many types of non-security systems and devices to be found on an enterprise network.

Some security system products lack desirable features. Some have mistakes in their implementation of IT standards and protocols. Some severely violate a network standard or protocol. Some correctly follow standards and protocols to a “T”, but can’t be managed in the way that IT groups want or need to manage networked devices. The result is quite a varied landscape of physical security system deployments.

Managing Networked Systems

With good reason, IT operations personnel have come to value the management protocols and capabilities that are built into their network devices. Their value becomes clear when you look at their operational benefits. Instead of 50 or 100 cameras, IT has hundreds or thousands of PCs and other devices to manage (including printers, scanners, wireless access points, network switches and routers, etc.). They use network management software to tell when something is offline—before an operational problem results.
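
As a rough illustration, the sketch below (Python, with a hypothetical device inventory) shows the kind of reachability polling that commercial network management platforms automate at far larger scale, adding SNMP health data, alert routing and history on top of it.

```python
# Minimal sketch of the reachability polling that network management tools
# automate at scale. Device names and addresses are hypothetical; real
# monitoring platforms add SNMP health checks, alert routing, and history.
import subprocess

DEVICES = {
    "lobby-camera-01": "10.50.1.11",
    "dock-camera-02": "10.50.1.12",
    "idf-3-switch": "10.50.0.2",
    "badge-panel-07": "10.50.2.31",
}

def is_online(ip: str) -> bool:
    """Single ICMP echo; assumes a Linux-style ping with -c/-W options."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for name, ip in DEVICES.items():
        if not is_online(ip):
            # In production this would open a ticket or page the on-call tech
            # before an operational problem results.
            print(f"ALERT: {name} ({ip}) is not responding")
```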

Managing with Murphy in Mind

Physical security operations folks are used to the “Murphy’s Law” scenario in which a new card reader is installed, and the one card that doesn’t work at that reader belongs to the CEO or another high-level executive who is trying to escort the board of directors into the building and out of a rainstorm. Is there an equivalent case in the IT world? There are many. Here is a story about printing a critical report.

Printing Saga

About 20 minutes prior to a critical meeting, a senior executive instructs an executive assistant to print an important confidential report that she has just received. The assistant discovers that he can’t print it because the printer seems to be offline. But when the assistant walks over to the printer, everyone else seems to be printing to it without any trouble! What gives? The assistant, fearing a reprimand for being late with the report, copies the report to a USB drive and takes it to a friend in a different department whom he saw at the printer retrieving a document. This is a violation of the executive’s specific instructions (as well as company policy) about the handling of that type of confidential material.

After delivering the printed report to a frustrated boss, well after the meeting has started, the assistant calls an IT support person to make sure that the document is not being retained in any buffer or storage area in the printer. The result: job stress for the assistant, lost work time, violations of confidential information protection, and an unhappy senior executive who appears to peers to have been lax in preparing for the critical meeting.

Avoiding Murphy Consequences

The way that IT folks prevent that kind of scenario from happening includes monitoring the health of workstations, servers, printers and other network devices. By being alerted to an offline or malfunctioning network switch or other device (in this example, a device on the network between the assistant’s computer and the printer), the problem can be remedied quickly and trouble prevented. With thousands or tens of thousands of computers and network devices to manage, high-risk systems or products (those with a likelihood of labor-intensive maintenance or troubleshooting) are avoided like the plague—at least when those who have to support the technology are listened to.

The comparatively small scale of physical security system deployments (independent from enterprise IT infrastructure) has allowed a much more lax approach to deployment than IT can accept or tolerate. Now that networked security systems are scaling up in most organizations, the lax approach is no longer feasible, especially in today’s budget-constrained operating environment.

For example, many networked physical security system deployments are done using unmanaged network switches. These are switches that can’t report their health or status to network monitoring software. Thus if a camera’s video stream is lost, someone has to physically go to the camera and to network equipment rooms to troubleshoot the problem. Many video systems are not set up with real-time video loss alarms, and not all cameras are closely monitored by personnel. This often results in problems going undiscovered for days, weeks or months, as many TV news stories report each year, including stories about two major airports this year.
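
To make the contrast concrete, here is a minimal sketch, assuming the open-source pysnmp library (version 4.x “hlapi” interface), SNMP v2c, and placeholder community string and interface index values, of how monitoring software asks a managed switch whether a given port (for example, the one feeding a camera) is up. An unmanaged switch has no SNMP agent and simply cannot answer this question.

```python
# Minimal sketch (assumes the pysnmp library, v4.x hlapi): ask a managed switch
# for the operational status of one port. Community string, switch address, and
# interface index are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

# IF-MIB ifOperStatus for interface index 3 (hypothetical camera port); 1 = up.
IF_OPER_STATUS_OID = "1.3.6.1.2.1.2.2.1.8.3"

def port_is_up(switch_ip: str, community: str = "public") -> bool:
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),  # SNMP v2c
            UdpTransportTarget((switch_ip, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity(IF_OPER_STATUS_OID)),
        )
    )
    if error_indication or error_status:
        # The switch did not answer -- itself a reason to raise an alert.
        return False
    return int(var_binds[0][1]) == 1  # 1 means "up" in IF-MIB

if __name__ == "__main__":
    print("Camera port up:", port_is_up("10.50.0.2"))
```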

Sound Engineering

Systems must be designed and deployed in a manner that facilitates their management and maintenance. This is not a new concept. In 1877, marine engineer Alfred Holt told a meeting of the Institution of Civil Engineers (referencing both the yet-unnamed “Murphy’s Law” and best engineering practice):

“It is found that anything that can go wrong at sea generally does go wrong sooner or later…. Sufficient stress can hardly be laid on the advantages of simplicity. The human factor cannot be safely neglected in planning machinery. If attention is to be obtained, the engine must be such that the engineer will be disposed to attend to it.”

This is what engineers in the IT domain have learned: that their equipment and networks must be designed and deployed so that technicians will be disposed (inclined) to attend to them. That can’t be said, for example, for video deployments where “black video” goes undetected and unattended to for days, weeks or months, and where system setup and troubleshooting is complicated.

Security manufacturers, systems integrators and security consultants who do not take sound deployment factors into consideration can’t excuse themselves by saying or thinking that these are “new IT topics” that the physical security industry is just a little late in catching up on. As Alfred Holt’s words indicate, these critical system success factors were known to systems engineers in the 1800s. In reality, the physical security industry is well over a century late in taking sound deployment engineering into account. It is only because the industry’s customers are not engineers that the industry’s comparatively low caliber of deployment engineering practice is commonly accepted.

To get a good look at best practices in a related industry, see the excellent white paper by TAC titled, “Smart Facility Automation Solutions for Regulatory Compliance”. In particular look at pages 11 and 12 that deal with Good Automated Manufacturing Practice (GAMP). Nearly all of these practices apply to security systems. (Download from: http://tinyurl.com/andover-gamp-paper)

The point is that IT’s design and operations practices are much more than an IT-specific way of doing things. They are universally sound engineering practices applied to information technology deployment.

Evaluation

Enterprise IT groups look to have systems and equipment that can be deployed quickly and accurately, with a minimal amount of effort, and that can be operated at low cost and low risk. IT groups have personnel who are assigned the task of evaluating candidate technologies to see how well they comply with these general requirements.

Evaluation Criteria

The first step in such an evaluation is informally referred to as judging the “out-of-the-box experience”.  What does it take to unpack, connect and “fire up” the system or device? What kind of problems can be anticipated? What are the general characteristics of its network traffic? How accurate and complete is the documentation?

The key question is: Will the product PASS or FAIL the out-of-the-box experience?

Most security industry manufacturers, integrators and consultants are surprised to learn what IT evaluators can conclude from the out-of-box experience. Tables 1 and 2 are charts showing some simple evaluation actions for networked appliances and end devices. They include example conclusions that can be drawn, expressed in informal language, for the initial evaluation steps from opening the box to examining the documentation.
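
To make the idea concrete, the following hypothetical sketch (Python) shows one way an evaluator might record a handful of out-of-box checks and roll them up into an overall pass/fail result. The step names echo Tables 1 and 2 below, but the scoring rule is illustrative only.

```python
# Hypothetical sketch: record out-of-box evaluation steps and roll them up
# into an overall PASS/FAIL. Step names mirror Tables 1 and 2; the scoring
# rule (any failed critical step fails the product) is illustrative only.
from dataclasses import dataclass

@dataclass
class Check:
    step: str
    passed: bool
    critical: bool = True
    notes: str = ""

def evaluate(checks: list[Check]) -> str:
    failed_critical = [c for c in checks if c.critical and not c.passed]
    return "FAIL" if failed_critical else "PASS"

if __name__ == "__main__":
    checks = [
        Check("Open box / verify contents against parts list", passed=True),
        Check("Documentation complete", passed=True),
        Check("Tech support contact info present", passed=True),
        Check("Release notes show maintenance history", passed=False,
              notes="no release history supplied with the unit"),
    ]
    print("Out-of-box result:", evaluate(checks))
    for c in checks:
        if not c.passed:
            print(f" - failed: {c.step} ({c.notes})")
```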

Most vendors and security practitioners have never heard of such an evaluation. Yet in July 2010 a Google search on the term out-of-the-box-experience returned 97.5 million results. These results include a wealth of product reviews, not limited to computer or network products, as well as two Wikipedia entries on the topic:

http://en.wikipedia.org/wiki/Out-Of-Box_Experience
http://en.wikipedia.org/wiki/Out_of_Box_Failure

It is important to note the valid conclusions that are likely to be drawn by the IT evaluator. These are the points that physical security manufacturers and system providers, and their physical security practitioner customers, are generally not aware of.

Although these example evaluation criteria are presented in an apparently formal fashion in Tables 1 and 2, such evaluations are often not very formal. They are done mostly against the background of common experience. The more experienced an evaluator is, the less forgiving the evaluator will be, because those points of forgiveness are likely to be points of pain and regret somewhere down the line. “Once bitten, twice shy” is an old and common expression. But “thrice bitten—no way!” is a more likely scenario for an experienced evaluator. If the security product will be used in any way to achieve regulatory compliance, the evaluation bar will be set particularly high, and with good cause (see the GAMP white paper referenced above).

The authors have spoken with vendors who contend that it would be unfair to judge their products on the out-of-the-box experience, because they have many successful deployments. But are they defining “success” in the same way that enterprise customers do? It is completely fair to the customer to judge the likely product deployment costs and efforts in large part on the out-of-the-box experience. It’s not just the product that is being evaluated. It’s the vendor as well, based upon how well the vendor enables its customers to be successful with low-effort deployments and cost-effective customer internal support.

Table 1. Common Out-of-Box Evaluation—From Opening Box to Documentation Check – Part One

Action: Open Box
Steps: Take out contents.
Favorable Results: Looks like the right stuff.
Favorable Conclusions: Good so far.
Unfavorable Results: Wrong product.
Unfavorable Conclusions: FAIL EVALUATION.

Action: Find Documentation
Steps: Is there a list of parts and pieces (separate or in the installation guide) against which the contents can be checked? Identify all parts and verify that the contents are complete.
Favorable Results: Contents complete.
Unfavorable Results: Can’t verify contents as complete.
Unfavorable Conclusions: Continue evaluation until a question can’t be answered or another shortcoming is encountered. Then FAIL for poor product and packaging, due to high likelihood of excessive time required for support.

Action: Identify Documentation
Steps: Look for paper and/or CD:

  • installation guide
  • user manual
  • release notes
  • disclosures
  • data sheets
  • application notes
  • network hardening guidance
  • tech support contact info

Favorable Results: All items are found and are a match for the product that has been provided for evaluation.
Favorable Conclusions: All documentation is found and is correct for the product.
Unfavorable Results: Documentation is incomplete.
Unfavorable Conclusions:

  • If there is no tech support contact info – FAIL, because there is no way to determine whether or not the evaluation can be completed.
  • If tech support cannot be reached in a reasonable time – FAIL, as the evaluation will probably take too long, and support will probably be too troublesome.

Action: Examine release notes
Steps: Check the release notes for:

  • sufficient history
  • bug fixes
  • known issues
  • incompatibilities

Favorable Results: Able to identify these items.
Favorable Conclusions: Vendor has active maintenance processes and will be able to support a deployment.
Unfavorable Results: No release information available, old releases, mention of lock-in to legacy technology.
Unfavorable Conclusions: Vendor shows insufficient evidence of maintenance processes and/or investment in the product.

Action: Check release note history
Steps: Check for:

  • things being fixed
  • things being upgraded and/or enhanced
  • notifications of compatibility with new devices

Favorable Results: Able to confirm that at-risk technologies are being developed and maintained. (Having a bug WITH A SOLID REPAIR HISTORY is a GOOD thing, not a bad thing.)
Favorable Conclusions: When (not if, be realistic) bugs are found, the vendor is likely to make a good effort to correct them.
Unfavorable Results: Lack of release history or other technical data indicates poor support habits on the part of the vendor supply chain.
Unfavorable Conclusions: “BAD MARKS” go to this vendor – can’t trust the vendor to “be there” if we had to call with a problem; can’t expect a timely resolution. FAIL if any more “bad marks” accumulate.

Action: Check release note bug fixes
Steps: Check the bug fix lists for each release against online discussion forums.
Favorable Results: Bug fixes match complaints in forums and discussions in a timely manner. Details are available on the vendor’s website. Unaddressed issues appear in the “known issues” list.
Favorable Conclusions: Continue evaluation because the vendor seems to be responsive and responsible in addressing and reporting bug fixes.
Unfavorable Results: Few or no bug fixes reported, and forums contain serious complaints that span multiple releases. A track record on the web of vendor belligerence towards bug reporters generally indicates a lack of commitment to delivering technically solid products.
Unfavorable Conclusions: FAIL – Vendor releases products without sufficient testing (i.e., uses customers as guinea pigs) and is not responsive enough in addressing problems. Product will be troublesome to utilize and may fail critically at some point.

Action: Check release note known issues
Steps: Check the documentation and vendor website for workarounds for the known issues.
Favorable Results: Workarounds are published at least for critical issues. Forums report that the workarounds are satisfactory.
Favorable Conclusions: Continue evaluation because the vendor seems to be responsive and responsible in addressing and reporting bug fixes. A track record of dialog on resolving issues shows it’s worth working with the vendor if there are limitations, compensating controls required, or workflow modifications required.
Unfavorable Results: Critical known issues have no workarounds in vendor documentation or in online forums. YouTube has videos of how to compromise the product, and there are no published workarounds.
Unfavorable Conclusions: FAIL – Vendor releases products without sufficient testing (i.e., uses customers as guinea pigs) and is not responsive enough in addressing problems. Product will be troublesome to utilize and may fail critically at some point.

 

Table 2. Common Out-of-Box Evaluation—From Opening Box to Documentation Check – Part Two

Action: Check release note incompatibilities, or check for vendor-supplied documentation on known compatible products (in sufficient detail).
Steps: Check the list of incompatible items against the security department’s technology roadmap, IT and security department standards, and the lists of installed products in that category. If such information doesn’t exist, provide the security technologists with the list of incompatible products and have them respond as to whether or not there is a conflict with existing or planned equipment deployments.
Favorable Results: Incompatible items ARE NOT FOUND in existing or planned deployments—OR—incompatible items ARE FOUND but can easily be replaced per cost/benefit analysis.
Favorable Conclusions: Continue evaluation. Give the vendor “good marks” for listing incompatibilities.
Unfavorable Results: (A) Incompatible products are found in existing or planned deployments, and will not be changed out. (B) No incompatibilities are listed; check online forums for incompatibilities, and check with vendor tech support. (C) Significant incompatibilities are found to exist but are not documented.
Unfavorable Conclusions: If (A), FAIL based upon incompatibilities. If (B) and no incompatibilities are known to exist, PASS based upon sufficient technical information being available to avoid or work around problems. If (C), FAIL based upon likelihood of serious problems.

Action: Check documentation
Steps: Confirm that the documentation for the specific product configuration as specified is identifiable, and check it for usability and validity.
Favorable Results: Able to find the necessary information (how to set up, how to harden, how to reset to factory defaults, how to enable/confirm use of required features, how to integrate with an enterprise network infrastructure).
Favorable Conclusions: Documentation appears sufficient to support evaluation/confirmation of vendor-delivered features, deployment, and ongoing operations.
Unfavorable Results: Multiple gaps in documentation when compared with state-of-the-art enterprise infrastructure deployments; explicit feedback from vendor support about undocumented items; clearly visible error messages that are totally undocumented.
Unfavorable Conclusions: Insufficient or no documentation (will make deployment risky/difficult/expensive and will make operations difficult/unreliable).

Action: Check “getting started” guidance
Steps: Check to confirm that a brief evaluation can be performed to evaluate the product.
Favorable Results: Information is provided through some appropriate channel (might be a text file, might be an online website, might be a YouTube video, might be a tech sales rep explaining how they’ll install it in front of you with narration…).
Favorable Conclusions: Product is likely to meet requirements, sufficient to merit a technical validation exercise; also, sufficient information is present to develop a sound evaluation plan.
Unfavorable Results: Vendor has never heard of a “getting started” guide; no demo is available; no online information can be found; additional vendor-proprietary equipment (connectors, power supplies, special cables, etc.) that is not provided is required to perform an evaluation.
Unfavorable Conclusions: FAIL – Vendor sales process limits or prevents customer access to known technical limitations in order to close the deal; support is likely to be poor; engineering is likely to be underfunded/shoddy and unresponsive to customer needs. Deploying this product would be a technical risk.

Action: Evaluate Product
Steps: Perform a brief (one week, in an office lab environment) set-up and exercising of the product.
Favorable Results: The product can be deployed without excessive vendor support, and/or the demonstration provides sufficient technical detail to assure deployment and operations teams of the product’s viability; substantial technical validation via live exercises produces data to back up the customer management buying decision process.
Favorable Conclusions: PASS – Usable technical results to support the project team’s product selection process. No serious issues with network traffic or behavior on the network.
Unfavorable Results: Product does not work, product has bugs, product is hard to use, and/or evidence contraindicates that the product fits the desired use cases. Causes unacceptable network problems.
Unfavorable Conclusions: FAIL – Product not appropriate for deployment in the intended use cases.
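
The “Evaluate Product” step above includes confirming that the unit causes no serious issues with network traffic or behavior. As one illustrative approach (assuming tcpdump is installed and substituting placeholder interface and address values), an evaluator might capture a short traffic sample from the lab unit for later review in a protocol analyzer:

```python
# Minimal sketch: capture a short sample of a device-under-evaluation's traffic
# for later review. Assumes tcpdump is installed and the evaluator has capture
# privileges; the interface and lab address shown are placeholders.
import subprocess

DEVICE_IP = "192.168.100.50"    # lab address of the unit under evaluation
INTERFACE = "eth0"              # capture interface on the evaluation workstation
CAPTURE_FILE = "oobe-capture.pcap"

# Capture the first 5,000 packets to or from the device, without DNS lookups.
subprocess.run(
    ["tcpdump", "-i", INTERFACE, "-n", "-c", "5000",
     "-w", CAPTURE_FILE, "host", DEVICE_IP],
    check=True,
)

print(f"Wrote {CAPTURE_FILE}; review it for unexpected destinations, "
      "broadcast/multicast volume, and cleartext protocols.")
```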

 

Conclusions

Tables 1 and 2 are only a partial listing of considerations and findings from such an evaluation.

If you are an enterprise security practitioner who likes the performance or specifications of a specific networked security product, you would do well to have that product approved, or even established as a standard, by your IT department.

Before you or your systems provider submit the product to IT for approval or evaluation, be sure in advance that all of the out-of-the-box experience ingredients are included in the evaluation package. If you can’t obtain them all, ask yourself whether or not the vendor cares enough about supporting your company’s ease of deployment in an enterprise networked environment.