SURVEILLANCE & SECURITY: The Surveillance Dashboard

Steering the casino operation in the right direction

Nothing is more critical to casino operations than meaningful data to assist in steering the ship. Every department within the operation bears the responsibility not only to provide useful data, but to continually improve the value of the data it provides.

Surveillance has long been credited as the eyes of the gaming operation, yet it is often overlooked as one of the most powerful business intelligence sources in the entire operation. The ability to monitor and review every activity on video, combined with auditing operations in real time, means every surveillance department is rich in valuable data.

An effective surveillance dashboard boils mountains of surveillance data down to useful, actionable information capable of steering casino operations.

The 21st century presents an unprecedented challenge: crunching and analyzing staggering volumes of data to produce useful and actionable information. The sheer influx of data and the speed with which it is received today make it seem out of reach to separate the wheat from the chaff and extract value. There is a heavy dose of irony in the fact that within this fire hose of information exists an unsatisfied need for more information.

Trying to tame the mountain of data is a daunting task, even for experts who spend every day thinking about big data. It brings to mind a bit of wisdom: “Don’t boil the ocean.”

This simple advice could not be more apt when trying to make sense of all that data. Avoid attempting to analyze all of the available data; focus instead on a starting point that provides value quickly and easily. Start by extracting high-level information that is useful and actionable. As that information starts to demonstrate its value to the operation, questions will occur naturally. Those questions, and the quest for answers, will increase the depth of analysis and set the direction over time.

Even if the entire ocean of data could be boiled at once, the results would be nothing more than a snapshot of a single day. To create a powerful data-driven surveillance department, a foundation of data extraction must be developed to feed growth and provide questions. This is a multi-dimensional problem for which the solution starts with a very powerful analytic paradigm—the data cube.

The cube represents three (or more) dimensions of data, usually in a time series. This can be visualized as taking a stream of events over a period of time and extracting three or more data points from each event. These events are collected over time in multiple dimensions, creating a cube of data for analysis.
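
As a concrete illustration, here is a minimal sketch in Python using the pandas library, assuming surveillance events have already been logged with a timestamp and classified by department, risk and value. All column names and sample rows are hypothetical.

    import pandas as pd

    # Hypothetical event stream: one row per logged surveillance event.
    events = pd.DataFrame({
        "occurred_at": pd.to_datetime(["2024-01-03", "2024-01-17", "2024-02-09", "2024-02-21"]),
        "department":  ["Slots", "Cage", "Table Games", "Cage"],
        "risk":        ["Low", "High", "Medium", "Low"],
        "value":       ["$0-$500", "$500-$1,000", "$0-$500", "$0-$500"],
    })

    # Collapse the stream into a cube: month x department x risk x value,
    # with a simple event count as the measure.
    cube = (events
            .assign(month=events["occurred_at"].dt.to_period("M"))
            .groupby(["month", "department", "risk", "value"])
            .size()
            .rename("event_count"))

Each intersection of the cube, for example high-risk Cage events in February, can then be read off directly when building a dashboard.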

A data cube is a very effective foundation for a data-driven surveillance department, and it can drive a very powerful dashboard for casino operations. The question becomes: what data points create a strong foundation for the cube?

The data points for the initial cube should be high-level and abstract enough to be relevant for any piece of information while still providing useful data for analysis. A strong high-level foundation for analysis is department, risk and value, which apply to virtually every piece of surveillance data.

These data points may not all be readily available in the typical stream of data, but that should not be a deciding factor. Modern reporting solutions can “infer” these data points by inspecting each piece of data using a set of rules. The rules evaluate the raw data to determine the correct value for the data point.

The selection of data points should always be based on what will empower the operation, not what is easily available in the raw data. Experts in data analysis often avoid injecting static data points into the data stream and instead “infer” the data points by evaluating the data in real time when the cube is generated. This has the added value of being able to change the data classification logic as needed without modifying legacy data.

Department is probably the easiest data point for a surveillance department to determine. The source of the data or the action being taken will clearly attribute the data point to a given department. In fact, most surveillance departments are already classifying their data by department, and if not, it is a very easy change to make.
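
A minimal sketch of that kind of rule-based inference, again in pandas; the source-system and action fields, and the rules themselves, are purely illustrative assumptions.

    import pandas as pd

    def infer_department(row):
        # Rules are evaluated when the cube is generated; nothing is written
        # back to the legacy data, so the logic can change at any time.
        if row["source_system"] == "slot_accounting":
            return "Slots"
        if any(word in row["action"].lower() for word in ("drop", "fill")):
            return "Cage"
        return "Unclassified"

    logs = pd.DataFrame({
        "source_system": ["slot_accounting", "dispatch", "dispatch"],
        "action":        ["Jackpot review", "Table fill escort", "Guest injury"],
    })
    logs["department"] = logs.apply(infer_department, axis=1)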

Determining a risk value is a much more subjective process that could lead to hours of internal debate. Starting with a basic assessment of low, medium and high is an easy starting point that can be adjusted later. Low-risk items could be standard operational activities such as drops, fills, escorts, etc. Medium-risk items are things like policy and procedure violations, soft count visitors, or even surveillance visitors. High-risk items would include the most significant events such as larceny, internal theft, injuries, etc.

Estimating value is even more subjective, but everything surveillance does revolves around value. It is an interesting exercise to put a value on every action taken in surveillance. Even if the classification is generalized to low, medium and high, as with risk, the results are no doubt enlightening.

The lofty task of recording real dollar amounts into surveillance data can demonstrate the value of surveillance in terms that non-surveillance personnel can much more easily digest. A good balance is recording value in buckets such as $0-$500, $500-$1,000, etc. These approximations can be applied to everything recorded by surveillance with a little vision and minimal effort.
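
If an approximate dollar figure is recorded with each entry, the bucketing itself is trivial to automate. A sketch follows, with hypothetical amounts and bucket edges.

    import pandas as pd

    amounts = pd.Series([125.0, 740.0, 60.0, 3200.0])   # hypothetical estimates

    # Bucket the raw dollar estimates into dashboard-friendly ranges.
    value_bucket = pd.cut(
        amounts,
        bins=[0, 500, 1000, 5000, float("inf")],
        labels=["$0-$500", "$500-$1,000", "$1,000-$5,000", "$5,000+"],
        include_lowest=True,
    )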

The task of determining values for data points such as department, risk, value or others is the basic process of data classification for analysis. This brings up a very important point about these processes and the resulting dashboards. After all, this is a statistical science. Statistics are a world unto themselves, and can be twisted to validate opposing opinions using the same data.

The magic of statistics lies in both the interpretation of the data (classification) and the ever-present margin of error. The margin of error will be present no matter how clean and well-organized the data becomes.

When classifying risk and value, remember that this margin is always going to be there. Be prepared to approximate, accept those approximations, review them regularly, adjust them as necessary, and always be prepared to explain them.

Having an analytic strategy starting with a data cube of risk, value and department is a strong foundation, but it still lacks two critical components—software tools and skill set. Thankfully, in the information age, there are many self-service options for meeting these requirements, as well as commercial solutions that are very powerful and often easier to use thanks to vendor support.

The skill set to use the tools effectively can be one of the most challenging requirements to fill. The underlying skill set that needs to be developed is an understanding of databases, how to connect to them, and how to select data.

All tools used for data analysis are going to have the capability to connect to a wide variety of data sources which can include databases, documents, spreadsheets and more. A basic knowledge of the way data is stored is a requirement to unlock the power of the analysis tools. This is where training and support from vendors can play a critical role in using data effectively.

Selection of analytic software is equally important, and there are a variety of free and commercial solutions available with varying levels of usability, training and support. The key requirements are that the software can connect to various data sources, supports computed columns for inferring data points, supports generation of data cubes and, finally, can generate dashboards.

A true mastery of Microsoft Office products can accomplish a lot toward this effort when used creatively; Excel is a very powerful analytic tool assuming the data can be manually gathered. Many available commercial tools streamline the steps required and provide both training and support.

A strategy used in many businesses is to implement a data warehouse, which is effectively a single database with no purpose other than to collect data from other databases into a single location. The data structure of the data warehouse is typically designed to support generating reports and dashboards.

ETL tools (“Extract, Transform and Load”) can be used to copy and convert data from one database to another on a routine schedule. Often, ETL tools can infer data point values as part of their transform step to populate the data warehouse. A data warehouse may not be an option for the typical surveillance room today, but in the future it may be commonplace, as well as having a dedicated surveillance database administrator.
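
A simplified ETL pass might look like the following sketch; the connection strings, table names and the transform rule are all placeholders.

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connections; real servers, drivers and credentials will differ.
    source = create_engine("postgresql://surv_reader:secret@logs-server/surv_logs")
    warehouse = create_engine("postgresql://etl_user:secret@dw-server/surveillance_dw")

    # Extract: pull log entries from the operational database.
    df = pd.read_sql("SELECT entry_id, occurred_at, action, amount FROM log_entries", source)

    # Transform: infer a classification data point as part of the pass.
    df["risk"] = df["amount"].apply(lambda amt: "High" if amt >= 1000 else "Low")

    # Load: append the classified rows into the warehouse reporting table.
    df.to_sql("fact_surveillance_events", warehouse, if_exists="append", index=False)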

No matter the choice of software tools, a fairly standard set of steps will be followed, starting with defining data sources, data sets, computed columns, data cubes and finally, dashboard design.

Data sources are configured for each source of data that is to be included in the data analysis. The data source will instruct the software on where the data is stored, what type of data is stored there, and what credentials to use to access the data. There is no limit to the number of data sources which may be defined to access all of the data. This can include databases for surveillance logs, incident reports, slot data, access controls and more.

Data sets are defined to retrieve specific data from a data source. A data set retrieves data from a configured data source based on criteria specified within the data set, such as the last 90 days, a typical window for a running dashboard. Data sets can reference as many data sources as are necessary to extract all the data needed for the dashboard.
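
In code, the two concepts stay just as distinct. The sketch below uses a placeholder connection string, table and column names.

    import pandas as pd
    from sqlalchemy import create_engine

    # Data source: where the data lives and how to reach it.
    surv_logs = create_engine("postgresql://dash_reader:secret@db-server/surv_logs")

    # Data set: the specific slice the dashboard needs, here a rolling
    # last-90-days window of log entries.
    query = """
        SELECT entry_id, occurred_at, source_system, action, amount
        FROM log_entries
        WHERE occurred_at >= NOW() - INTERVAL '90 days'
    """
    dataset = pd.read_sql(query, surv_logs)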

Computed columns are where the magic of inference takes place as described above. Computed columns are found inside the data set and allow custom logic to be defined that assigns values such as risk or value assessments as needed. A custom rule could be defined to analyze the dollar amount of a data point, for example, or perhaps a risk value could be determined based on whether an EMT was dispatched.

Using computed columns for classifications not only eliminates the need to store classifications, but also provides the freedom to easily change how data is classified in the future.
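
Continuing the sketch, here is a computed risk column based on the two example rules above; the EMT flag, dollar amounts and thresholds are hypothetical.

    import pandas as pd

    dataset = pd.DataFrame({
        "action":         ["Guest injury", "Soft count visitor", "Jackpot review"],
        "emt_dispatched": [True, False, False],    # hypothetical dispatch flag
        "amount":         [0.0, 2500.0, 400.0],
    })

    def computed_risk(row):
        # Evaluated each time the cube is built, never stored, so the
        # classification logic can be revised without touching legacy data.
        if row["emt_dispatched"]:
            return "High"
        if row["amount"] >= 1000:
            return "Medium"
        return "Low"

    dataset["risk"] = dataset.apply(computed_risk, axis=1)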

Defining data sources, data sets and computed columns culminates in the definition of a data cube that includes all of the classifications. The data cube will include department, risk assessment and value, all spanning the time period defined in the data set. A well-classified data cube is nothing short of a powerful, precomputed block of data, ready to drive dashboards from any intersection of the data within the cube.
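
With the classifications in place, generating a dashboard-ready view of the cube and pulling any intersection from it takes only a few lines. This sketch reuses the hypothetical columns from the examples above.

    import pandas as pd

    classified = pd.DataFrame({
        "occurred_at": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-11", "2024-02-19"]),
        "department":  ["Cage", "Slots", "Cage", "Cage"],
        "risk":        ["High", "Low", "Medium", "High"],
        "value":       ["$500-$1,000", "$0-$500", "$0-$500", "$1,000-$5,000"],
    })

    # Event counts per month and department, broken out by risk tier.
    cube = pd.pivot_table(
        classified.assign(month=classified["occurred_at"].dt.to_period("M")),
        index=["month", "department"],
        columns="risk",
        values="value",
        aggfunc="count",
        fill_value=0,
    )

    # Any intersection is a simple slice, e.g. the Cage department over time.
    cage_trend = cube.xs("Cage", level="department")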

The real point of any dashboard is to provide actionable measurements—in this example, of risk and value by department—over time. Visualizing risk and value over time generates trend lines worthy of any dashboard, showing how operations vary and how they drive value and risk against one another.
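
A bare-bones version of such a trend line, drawn with matplotlib from hypothetical monthly counts:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical monthly high-risk event counts by department.
    trend = pd.DataFrame({
        "Cage":        [4, 6, 3, 8],
        "Slots":       [2, 2, 5, 4],
        "Table Games": [7, 5, 6, 9],
    }, index=pd.period_range("2024-01", periods=4, freq="M"))

    ax = trend.plot(marker="o", title="High-risk events per month by department")
    ax.set_xlabel("Month")
    ax.set_ylabel("Event count")
    plt.tight_layout()
    plt.show()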

The inevitable peaks, valleys and changes shown by department over time will generate questions. Finding answers to those questions will require exploring the data further and will promote better understanding that leads to improved dashboards and reports as a result.

Building a foundation for a data-driven surveillance department is a big investment on many levels. Ensuring these new processes and measurements are reproducible, documented and hopefully automated keeps efforts in line with the standard expectations of doing more with less. Additionally, automation and documentation promote quality control for both generating reliable dashboards and improving the data processes within the department.

No department in the casino operation has access to more operational data than the surveillance department. Turning that operational data into useful dashboards, with a strategy for long-term growth, is guaranteed to help steer the organization.

Jason Riffel is co-founder, president and CEO of CIP Reporting, a supplier of software that provides customized solutions for security, risk management and incident reporting.