At Geotab Connect 2026 in Las Vegas last week (10-12 Feb), more than 4,000 transportation professionals gathered to hear how AI is reshaping commercial fleet management.
On stage, Geotab CEO Neil Cawse delivered a keynote charting the company’s 26-year growth trajectory to 3,500 employees across 22 global offices. He pointed to more than $200m in annual R&D investment and described how AI now underpins vehicle utilisation, predictive maintenance and safety optimisation. AI systems, he noted, already outperform humans in certain coding tasks, a signal of how rapidly the technology is advancing.
As the event shifted to breakout sessions and the exhibition hall, discussion turned to connected fleets, product design, vibe coding and digital twins. Yet behind the product launches and partner announcements lay a more fundamental question, one that goes to the heart of the Geotab Safety Center and its broader safety proposition: what kind of AI architecture should power collision-risk analysis?
That question surfaced most clearly in a private conversation on the sidelines of the conference. Mark Miller, CEO of InsureVision, argued that much of the industry is building on the wrong technical foundations.
This is not a niche debate between engineer-CEOs. Europe’s dashboard camera market is valued at roughly $1.41bn in 2026—about a third of global revenue—and millions of vehicles now stream video into telematics platforms. The architecture chosen to interpret that footage carries significant operational and commercial consequences.
On one side are the world’s top telematics providers such as Geotab, Samsara, Powerfleet and Verizon Connect. Their systems combine sensor fusion, video analytics and convolutional neural networks (CNNs), refined through years of large-scale deployment.
In Geotab’s case, the stack also includes XGBoost, an open-source machine-learning model optimised for structured data. While video systems detect and classify events, XGBoost analyses speed, braking, acceleration and driver history to estimate collision risk. The combination allows Geotab to rank drivers by risk level and prioritise serious events without relying solely on large, compute-intensive models.
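Geotab has not published its exact feature set or model configuration, but the general pattern it describes can be sketched in a few lines. The example below is purely illustrative, with hypothetical features and synthetic data, of how a gradient-boosted model such as XGBoost scores collision risk from structured telemetry and ranks drivers by that score.

```python
# Illustrative only: Geotab has not published its features or model
# configuration. This sketch shows the general pattern described above,
# i.e. a gradient-boosted model scoring collision risk from structured
# telemetry rather than raw video. Feature names are hypothetical.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical per-driver features: mean speed (km/h), harsh-brake
# events per 100 km, harsh-acceleration events per 100 km, and prior
# collisions on record.
X = rng.normal(
    loc=[80.0, 1.5, 2.0, 0.2],
    scale=[15.0, 1.0, 1.2, 0.4],
    size=(1_000, 4),
)
# Synthetic labels standing in for "collision within the next year".
y = (X[:, 1] + X[:, 3] + rng.normal(0, 0.5, 1_000) > 2.5).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# Rank drivers by predicted risk so the highest-risk ones surface first.
risk = model.predict_proba(X)[:, 1]
ranked = np.argsort(risk)[::-1]
print("Top five highest-risk drivers:", ranked[:5])
```

The appeal of this design is that inference is cheap: scoring a driver amounts to a handful of tree traversals rather than a GPU forward pass.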
On the other side is InsureVision, an AI start-up founded in 2022, contending that incumbents are solving the wrong AI problem.
An AI company, not an IoT company
Miller speaks quickly, intensely and passionately, framing collision-risk modelling as a fundamentally different discipline from traditional telematics. “Understanding risk is sophisticated and requires complicated machine learning,” he says, adding: “Doing so requires an AI company, not an IoT company.”
In his view, most platforms still centre on accelerometer and GPS data, combined with CNN-based video triggers. Even advanced CNN variants, he argues, tend to classify objects or events rather than interpret the full driving context. InsureVision’s alternative is an end-to-end transformer architecture trained on tens of thousands of crashes and near-misses. The system analyses forward-facing video and learns temporal patterns directly from data.
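InsureVision has not disclosed its model internals, so the sketch below is only a minimal PyTorch illustration of the general pattern Miller describes: per-frame embeddings from forward-facing video passed through a transformer encoder that learns temporal structure, with a head producing a continuous risk score. All dimensions and names are invented.

```python
# A minimal sketch of the general idea, not InsureVision's actual model:
# per-frame embeddings pass through a transformer encoder so the network
# can learn temporal patterns (drift, closing distance) directly from
# video, rather than firing on discrete object detections.
import torch
import torch.nn as nn

class VideoRiskTransformer(nn.Module):
    def __init__(self, frame_dim=512, n_heads=8, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=frame_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(frame_dim, 1)  # scalar risk per clip

    def forward(self, frames):  # frames: (batch, time, frame_dim)
        ctx = self.encoder(frames)
        # Pool over time, then squash to a 0-1 risk score.
        return torch.sigmoid(self.head(ctx.mean(dim=1)))

# 8 clips of 30 frame-embeddings each (a real system would produce the
# embeddings with a vision backbone; random tensors stand in here).
clips = torch.randn(8, 30, 512)
print(VideoRiskTransformer()(clips).shape)  # torch.Size([8, 1])
```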
“We never trained our models about red lights,” Miller says. “But they understand that driving through red lights is risky.” What is attracting attention to the offering, Miller says, is its automated fleet safety reviews, which scale dramatically as more of the review process is handed to the AI. One fleet customer, he says, previously reviewed around 200 dashcam clips per day. After deploying InsureVision’s system, that figure rose to roughly 5,000 clips daily, with AI scoring and categorising events so managers can focus on genuinely high-risk behaviour and coaching.
Having launched the product only recently, Miller is now focused on building traction, offering free pilot programmes to fleet operators. The system, he says, “integrates with multiple different dashcam companies. It doesn’t replace the dashcam, it’s just a layer that sits on top,” adding an AI-driven analytics layer to existing camera infrastructure rather than requiring new hardware.
For Miller, the advantage of transformers lies in contextual reasoning. To illustrate the claim, he flips open his laptop and loads a video clip shot by an unnamed telematics company. “I downloaded it and reprocessed it through our system,” he says. “We applied our AI risk analysis to the exact same footage.”
The forward-facing dashcam view shows a straight stretch of road. A vehicle in the adjacent lane begins edging towards the host driver. A few seconds in, Miller pauses the clip. On his overlay, a large red “87” dominates the screen, his system’s risk score.
“At this point,” he says, tapping the screen, “their system is silent. Ours is already screaming at you: you’ve got a big problem.”
According to Miller, his model has detected the early indicators of a cut-off manoeuvre: subtle lateral drift and closing distance, before any abrupt movement occurs. “We’ve got a sensor that understands risk,” he says. “Not just events.”
The video resumes. The neighbouring car completes the cut-in, forcing the host driver to brake sharply. The other vehicle accelerates away. No collision occurs.
As the braking event unfolds, Miller points again to his overlay. The risk score drops rapidly, eventually returning to zero as the threat dissipates.
“Now look,” he says. “At the moment the other vehicle speeds away, our score is back to zero because the risk has passed. That’s when their system flags a problem.”
In his telling, the competing platform reacts only after the accelerometer registers harsh braking, not when the visual cues first signal danger.
“So the only reason this became an event clip,” Miller says, “is because there was a harsh brake. The accelerometer detected it, not the computer vision.”
For Miller, the demonstration encapsulates his broader argument: traditional telematics platforms are event-led and sensor-triggered, while AI-native systems aim to model risk continuously, before and after a measurable incident occurs.
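A toy comparison makes the distinction concrete. Every number below is invented to mirror the cut-in demo described above: the event-led pipeline stays silent until deceleration crosses a harsh-brake threshold, while the continuous scorer peaks before the braking and decays once the threat passes.

```python
# A toy contrast of the two paradigms; all numbers are invented to
# mirror the cut-in scene described above.
HARSH_BRAKE_G = 0.4  # hypothetical accelerometer trigger threshold

# Per-second samples: (longitudinal decel in g, vision risk score 0-100)
scene = [
    (0.02, 12),  # adjacent car begins to drift, no braking yet
    (0.03, 55),  # lateral drift plus closing distance
    (0.05, 87),  # cut-off imminent: vision risk peaks, decel still low
    (0.55, 60),  # host driver brakes hard: only now does the trigger fire
    (0.10, 0),   # other car accelerates away, risk back to zero
]

for t, (decel, vision_risk) in enumerate(scene):
    event_triggered = decel >= HARSH_BRAKE_G
    print(f"t={t}s decel={decel:.2f}g vision_risk={vision_risk:3d} "
          f"event_triggered={event_triggered}")
```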
It is also worth noting that InsureVision’s comparison was conducted on archived footage that had been downloaded and reprocessed, rather than in a live, real-time deployment.
Edge versus data centre
The divide is not only architectural but also a question of where the computation runs.
Edge processing means running AI models directly on or near the vehicle. These systems operate with limited power and memory and must deliver decisions in milliseconds, for example, issuing a real-time warning if a driver is tailgating.
Data-centre processing, by contrast, involves transmitting footage or telemetry to remote servers equipped with powerful GPU clusters. These systems can run larger models but consume more energy and introduce latency.
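The constraint can be made tangible with a trivial example. The headway rule and latency budget below are assumptions for this sketch, not figures from any vendor; the point is that an edge model must fit into on-vehicle compute and answer within a tight time budget, while a data-centre round trip adds upload, queueing and inference time on top.

```python
# Illustrative only: the headway rule and latency budget below are
# assumptions for this sketch, not figures from any vendor.
import time

EDGE_BUDGET_MS = 100  # assume an in-cab alert must land within ~100 ms

def edge_tailgating_alert(distance_m: float, speed_mps: float) -> bool:
    # A rule small enough for on-vehicle compute: flag a following
    # distance of under two seconds of headway.
    return distance_m / max(speed_mps, 0.1) < 2.0

start = time.perf_counter()
alert = edge_tailgating_alert(distance_m=18.0, speed_mps=25.0)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"tailgating={alert}, computed in {elapsed_ms:.3f} ms "
      f"(budget: {EDGE_BUDGET_MS} ms)")

# A data-centre path adds video upload, queueing and large-model
# inference on top of this, which is why heavyweight models tend to be
# used for after-the-fact review rather than split-second warnings.
```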
Miller’s approach leans toward large, end-to-end models capable of deep contextual inference. Cawse questions whether such systems are practical at fleet scale.
Later at the conference, Cawse addressed the suitability of transformers for real-time safety with a measured, deliberate tone and quiet confidence. “Transformer architecture is effectively a text-token output model,” he says.
While powerful, it is not automatically the best fit for analysing telemetry or issuing split-second alerts. “You can’t turn measures like hard braking and speed change into text tokens. It’s not the right tool for the job. The older models are still the best,” he adds.
His concerns centre on latency, cost and power consumption. A large transformer model, he suggests, could require eight to ten high-end GPUs, draw around 30 kilowatts of power and take several seconds to generate an answer. “You can’t run that at the edge,” Cawse says. “When somebody’s getting too close to the vehicle in front, you can’t wait five seconds.”
Such infrastructure could approach $200,000 in hardware costs, he added, difficult to justify for in-vehicle deployment.
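Taking Cawse’s figures at face value (they are his estimates, not independently verified numbers), the arithmetic is straightforward:

```python
# Back-of-envelope check of Cawse's figures as stated, not verified:
# eight to ten high-end GPUs, ~30 kW of draw, ~$200,000 in hardware.
gpus = 10                # upper end of Cawse's range
hardware_usd = 200_000   # his estimated hardware cost
power_kw = 30            # his estimated total draw

print(f"~${hardware_usd / gpus:,.0f} per GPU slot (incl. servers, cooling)")
print(f"~{power_kw / gpus:.1f} kW per GPU slot")

# Running that draw around the clock:
mwh_per_year = power_kw * 24 * 365 / 1000
print(f"~{mwh_per_year:.0f} MWh per year for one such deployment")
```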
The sustainability dimension was taken up by Mike Branch, Geotab’s vice president of data & analytics. “And from a sustainability perspective, you want to compute what that takes,” Branch says, pointing to the energy intensity of large AI systems operating in data centres.
“Ultimately, we’ll pick the best model for the job to achieve those outcomes,” he added. “The models that we’ve created so far, you can look at the outcomes we’ve already created for customers. That’s what we really look for. Is the technology and the models we are choosing right now going to fit the bill for achieving that reduction of collisions? And it already has, and it will continue to evolve.”
