We live in a paradoxical time, where data is abundant, and yet true, actionable insight is scarcer than ever.
To help fleet managers get to those precious nuggets of insight, we turned to Eddie Garza, who leverages data on a daily basis as the safety manager for LeFleur Transportation. Eddie shared ways other fleet managers can use data for the greatest potential to improve safety and get the most out of their resources.
In this first of a three-part series, we’ll dive into how Eddie gets the most relevant insights out of his DriveCam® data. In the second part, we find out how Eddie turns those insights into action, specifically through more-effective driver coaching. And the third and final part tells the story of how Eddie was able to sell the value of data directly to drivers. (Those stories about how drivers aren’t interested in numbers? Not true, Eddie said.)
The following are six pieces of advice from Eddie on working with data in the field.
Make the data work for you. To make sure the data work for you and not the other way around, Eddie advises managers to look for root causes, rather than events. For example, an event is a curb hit. The root cause may be that the driver was drowsy or distracted, or maybe even forced into a situation he or she couldn't control.
“People tend to just look at events,” Eddie said. “Focusing solely on the events can lead to the data driving you. But it’s the root causes that give you the best insights and help you make changes that lead to better performance.”
Look at more than one type of data. Data don’t exist in a vacuum. They often relate to each other, and it’s in those relationships that Eddie often gets his best insights. Late response, for example, often goes hand in hand with following distance: drivers who follow closer than the 4-second rule allows tend to show late responses as a behavior issue. For Eddie, this is an opportunity to examine multiple indicators to get at root causes.
Sounds complicated, but he has a simple trick that takes some of the guesswork out of the equation: he takes the line graphs of multiple indicators and lays them on top of one another. Then he looks for patterns: things that spike at the same time, or spikes in one indicator that are closely followed by spikes in another. A spike in close following distance, for example, followed by a surge in late responses suggests a causal link. “No driver will have the same issues. No one event is the same,” he said. “You have to approach it as a whole.”
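Eddie’s overlay trick can be sketched in a few lines. The indicator names, weekly counts, and spike threshold below are all invented for illustration; real platforms expose this data differently:

```python
import statistics

# Made-up weekly event counts for two indicators.
close_following = [3, 4, 3, 9, 4, 3, 8, 4]
late_response   = [2, 2, 3, 3, 8, 2, 3, 7]

def spikes(series, factor=1.5):
    """Indices where a value exceeds factor * the series mean."""
    mean = statistics.mean(series)
    return {i for i, v in enumerate(series) if v > factor * mean}

cf = spikes(close_following)
lr = spikes(late_response)

# Close-following spikes followed one week later by a late-response spike.
paired = sorted(i for i in cf if i + 1 in lr)
print(paired)  # → [3, 6]
```

The threshold of 1.5× the mean is an arbitrary placeholder; in practice a manager would tune it to what counts as a spike for their fleet.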
Resist the temptation to freak out when the monthly numbers go up. It’s almost second nature to be alarmed when key metrics such as collisions rise within a short time frame. But fluctuations are normal, Eddie said. Instead of panicking, Eddie applies his first piece of advice: look for root causes instead of focusing on the events themselves. Let’s say, for example, a fleet logs a 25 percent uptick in late responses in June compared to May. Although that’s a substantial spike, the root cause may be an increase in the number of vehicles on the road ferrying families off on summer holidays. If a closer look at the data shows that this June’s incident rate mirrors those from the same month of the previous five or so years, then there is less cause for alarm. But if this year’s numbers are substantially higher than prior years’, then further investigation may be needed. Trends reveal what is normal, and what needs a closer look.
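The seasonal check Eddie describes amounts to comparing against two baselines: last month, and the same month in prior years. The numbers below are made up to mirror the example, a sharp month-over-month jump that looks far less alarming against prior Junes:

```python
# Hypothetical late-response counts; all figures are invented.
history = [48, 52, 50, 47, 53]  # June of each of the prior five years
this_june = 55
this_may = 44

# Month-over-month: looks like a 25% spike.
month_over_month = (this_june - this_may) / this_may

# Versus prior Junes: only 10% above the seasonal norm.
historical_mean = sum(history) / len(history)
vs_history = (this_june - historical_mean) / historical_mean

print(round(month_over_month, 2), round(vs_history, 2))  # → 0.25 0.1
```

A large jump over last month paired with a small gap versus prior years points to seasonality rather than a new problem.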
Use average safety scores as a baseline. The more established analytics platforms track and report fleet safety scores in a variety of flavors, and in some cases, a lower score is better. Averages are useful because they establish a baseline. Once there’s a baseline, managers can do two things. The first is to monitor how that number changes, up or down, over time, Eddie advises. This is a decent gauge of how the company is doing overall. The second is to look for drivers who are significantly above or below that company average. If the company’s average safety score is 4.5 and a driver comes in with a score of 9, that driver may need coaching, depending on the root causes of the recorded events and the individual driver, Eddie said.
Know the difference between the average safety score and the total safety score. Safety scores also come in a second flavor—the total safety score. In this case, the score isn’t averaged, but added together to produce a sum total. Why look at both? Let’s say a fleet had an average safety score of 4.2 in January. In February, the score is 4.1, a decrease of 0.1 points. That’s good, right? Not so fast.
When we take a look at the total score for the same period, we see that there were 10 incidents in January, each with a safety score of 4.2. If we multiply those 10 incidents by 4.2, we get a total safety score of 42. In February, we recorded 20 incidents, each with a score of 4.1. The total safety score for February is 82, nearly double January’s!
If we only looked at average safety scores (4.2 compared to 4.1), we would miss the fact that something happened in February that caused the number of events to spike (from 10 to 20).
“If that’s the case, the question managers have to ask is ‘Why are we having more events? Did we add new regions or new vehicles?’ If not, then a look into the other metrics may be required to find the root cause,” Eddie advised.
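The average-versus-total arithmetic from the example above is easy to verify directly:

```python
# Figures from the January/February example.
jan_events, jan_avg = 10, 4.2
feb_events, feb_avg = 20, 4.1

jan_total = jan_events * jan_avg
feb_total = feb_events * feb_avg

assert feb_avg < jan_avg      # averages alone look like an improvement
assert feb_total > jan_total  # totals reveal the spike in event count

print(round(jan_total), round(feb_total))  # → 42 82
```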
Data segmenting is your friend. Before managers head out to conquer their data, Eddie has one last piece of advice: Use data segmenting to fine-tune your analytics. For example, if a company operates in multiple regions, segment the data by location. That can help managers isolate and focus on areas that appear to consistently underperform and help get at the root cause. Perhaps it’s simply a matter of one area having more-congested traffic, as in New York City, while other areas have wide open roads, such as you find in Omaha. Perhaps one group has a stronger safety culture, while another group needs its culture to be reinforced. Either way, segmenting is a powerful way to get at root causes.
Segmenting is also useful when a company adds new territory. Eddie recommends keeping that territory’s data separate for a few months. New operations can take three to six months to work out kinks and hit their strides, he said. Until then, keeping their data separate will prevent their scores from skewing those of the rest of the company.
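A minimal sketch of segmenting, with invented region names and scores, shows how holding a new territory’s data out keeps it from skewing the established company average:

```python
from collections import defaultdict

# (region, event safety score) pairs; all values are made up.
events = [
    ("NYC",   5.1), ("NYC",   4.8), ("NYC",   5.4),
    ("Omaha", 3.2), ("Omaha", 3.0),
    ("NewTerritory", 7.9), ("NewTerritory", 8.3),  # still working out kinks
]

by_region = defaultdict(list)
for region, score in events:
    by_region[region].append(score)

# Company average excluding the new territory, so its ramp-up
# period doesn't drag down the established fleet's numbers.
established = [s for r, scores in by_region.items()
               if r != "NewTerritory" for s in scores]
company_avg = sum(established) / len(established)

for region, scores in by_region.items():
    print(region, round(sum(scores) / len(scores), 2))
print("company (excl. new):", round(company_avg, 2))  # → 4.3
```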
Next up in part two of this series, we go over how Eddie turns the insights he derives from working with data into action.
# # #