In the Wild West of ATM Environments, New Tools and Methods are Shaking Up the Game for Good
How do you manage technical information about your ATM fleet? Not the fleet itself, but rather the data about the fleet. Terminal performance. Fault records. Jams. Firmware and patch info. The age of individual modules and replacement parts.
Managing that information, and crunching that data the right way, can completely transform the way your fleet operates.
ATMs in the Wild, Wild West
Whether you’re managing five ATMs or 5,000, there’s a vast amount of information about each unit that can be used to paint an individual picture—one that will likely look very different from its identical twins out in the wild.
When thinking about how to manage ATM technology and hardware, we often look to how IT has managed networks and data centers for decades as a reference point. However, the data-center model is virtually useless as a parallel. In that curated, controlled environment, it’s far more feasible to understand what inputs are occurring and how the technology is functioning at an aggregate level, because the equipment sits in similar environments around the world.
The ATM environment, on the other hand, is incredibly dynamic. ATMs are exposed to the elements, the general public, extreme temperature fluctuations and a wide variety of attack vectors. Think about the difference between everything your car goes through on a daily basis, versus, say, your refrigerator.
In the past, to proactively address potential faults at an ATM, we focused on tracking the time between failures. A component was extensively tested in the lab to determine its average time between failures; planning for service then became a relatively simple matter of placing a counter on the component and making a service call when the count approached the failure threshold.
That’s a little better than a shot in the dark, but not especially relevant when you consider that the exact same terminal model could be sitting somewhere as cold as North Dakota or as hot and humid as Bangkok, Thailand. Differences in the internal and external ecosystem, media types and usage patterns all contribute to a high level of variability.
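The counter-based approach described above can be sketched in a few lines. This is an illustrative toy, not actual ATM firmware; the component name, cycle threshold and 90% service margin are invented for the example.

```python
from dataclasses import dataclass

# Toy sketch of the legacy time-between-failures model: each component
# carries a usage counter, and a service call is scheduled once the
# count nears a lab-derived mean cycles between failures (MCBF).
# All names and numbers here are illustrative assumptions.

@dataclass
class ComponentCounter:
    name: str
    mean_cycles_between_failures: int
    cycles: int = 0
    service_margin: float = 0.9  # schedule service at 90% of MCBF

    def record_cycle(self) -> None:
        self.cycles += 1

    def needs_service(self) -> bool:
        return self.cycles >= self.service_margin * self.mean_cycles_between_failures

dispenser = ComponentCounter("note_dispenser", mean_cycles_between_failures=100_000)
for _ in range(95_000):
    dispenser.record_cycle()
print(dispenser.needs_service())  # True: 95,000 >= 90,000
```

The weakness the article points out is visible in the code: the threshold is a single lab average, blind to whether the unit is freezing in North Dakota or sweltering in Bangkok.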
There’s a New Sheriff in Town
With expansive use of artificial intelligence and machine learning tools, and massive data pooling made practical by powerful, flexible cloud computing, we can begin to see the individual personalities of our hardware. By gathering deep sensor- and firmware-level data and packaging it into lightweight data transmissions, DN AllConnect℠ Data Engine accesses module-level and engineering-level data to build a profile over time that’s hyper-specific to a terminal. Simultaneously, we collect data at the aggregate level to surface patterns, so we can provide service that is truly condition-based and predictive in nature.
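To make “lightweight, module-level data transmission” concrete, here is a hypothetical sketch of what such a payload might look like. The field names are invented for illustration and are not the actual DN AllConnect Data Engine schema; the point is what the payload contains (engineering-level readings) and what it deliberately omits (no card data, no PINs, no transaction counts).

```python
import gzip
import json

# Hypothetical module-level telemetry payload. Every field name here is
# an invented example, NOT the real DN AllConnect Data Engine format.
payload = {
    "terminal_id": "ATM-0042",
    "firmware": "9.1.3",
    "modules": [
        {"module": "note_dispenser", "motor_rpm": 1480, "supply_voltage_v": 24.1,
         "sensor_states": {"shutter": "closed", "note_path": "clear"}},
        {"module": "card_reader", "supply_voltage_v": 5.02,
         "sensor_states": {"media_present": False}},
    ],
}

# Serialize and compress to keep the daily transmission slim.
raw = json.dumps(payload).encode("utf-8")
compressed = gzip.compress(raw)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
```

Batching readings like this and compressing them is one plausible way a vendor keeps per-terminal traffic down to a few megabytes per day.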
Yep, predictive. That’s what takes fleet management to the next level—and it’s only possible using modern automation that crunches extremely large amounts of data in seconds, rather than the days or weeks it would take a team of people to sort through.
Think of it this way: Instead of going to the doctor when you’re sick, getting a prescription and lying in bed for a few days, what if your doctor monitored your vitals every day and alerted you before you started showing symptoms and got sick? That’s the level of precision that can only be achieved when modern technology is paired with the proprietary data engineered into our systems and tools. It’s the difference between crunching fault and repair ticket data (reactive) and accessing machine-level data (proactive).
Answers to Your Burning Data-Related Questions
When we meet with banks and credit unions and the conversation turns to ATM service, we typically get three questions:
1. What type of data are you accessing (i.e., how secure is all this)?
2. What is the volume of data you’re going to push through my network?
3. Is this going to impact the performance of my terminal?
I love answering each one of them, because over the past few years our engineers have thought through each topic ad nauseam. Here’s what I tell them:
First, the data we pull is strictly at the engineering level, which means there are zero implications around any data privacy regulations (GDPR, PCI, etc.). We don’t access any customer data, card numbers, or PINs. In fact, the data we pull through DN AllConnect Data Engine won’t even tell you how many transactions have occurred at a particular ATM; it just isn’t set up that way. We are looking at deep technical data such as sensor states, motor speeds and voltages, etc.
Second, we’ve focused our efforts on making our data packets as slim and concise as possible. Typically, they’re around 4 MB per day for our DN Series terminals, or about the same as a single picture on your smartphone. We strive to be partners with our clients, and being a good steward of their networks is important to us.
And lastly, the short answer is an emphatic NO. We’re trying to ensure higher uptime and availability for your fleet. The last thing we want to do is introduce something that’s going to adversely affect your terminal. DN AllConnect Data Engine is a lightweight agent that runs lean and clean.
Learn more about how we’re transforming service for our clients: visit DieboldNixdorf.com/AllConnect or access our recent webinar on Connected Services.