Milestone’s Mauser talks Hafnia expansion, product roadmap

‘What we’re actually doing is augmenting real-world data, not replacing it,’ says Mauser

By Ken Showers, Managing Editor
Updated 12:58 PM CDT, Wed March 18, 2026
YARMOUTH, Maine — Milestone Systems is pushing deeper into the AI infrastructure space with a major expansion of its Hafnia platform, adding new tools designed to dramatically accelerate the development of computer vision models. By introducing synthetic data capabilities and preparing a training‑as‑a‑service offering with NVIDIA, Milestone is aiming to solve one of the industry’s biggest bottlenecks: access to large‑scale, compliant, high‑quality datasets.
To understand the scope of what these updates unlock – particularly for emerging smart city and public‑sector deployments – Security Systems News spoke with Edward Mauser, director of Hafnia. In the conversation, he laid out just how significantly these services could change the time, cost, and complexity of building next‑generation video analytics and what this direction means for Hafnia’s broader roadmap.
SSN: Milestone is introducing a significant expansion of developer tools within Hafnia. From your vantage point, what does this release unlock for the platform, and how does it change what developers can achieve with it?
Edward Mauser: What we’re doing with Hafnia – and everything you’re seeing coming out of it – is accelerating the computer vision industry through responsible AI. Another way to say that is, Hafnia accelerates video analytics development.
We’re building the world’s largest real-world, compliant video dataset. From that, we’re unlocking services that can dramatically accelerate both time-to-market and the quality of video analytics development.
SSN: With synthetic data now being integrated into Hafnia, what kinds of real-world and edge-case scenarios do you anticipate being able to emulate that were previously difficult or impossible to capture?
Edward Mauser: I think it gets dangerous when you talk about synthetic data in isolation. What we’re actually doing is augmenting real-world data, not replacing it. We’re not generating completely artificial datasets; instead, we’re enhancing existing ones.
We focus on rare or underrepresented conditions — things like specific weather scenarios, traffic patterns, regional vehicle types, or unusual events such as debris in the road — that are difficult to capture in real-world data. With synthetic augmentation, we can simulate variations of these scenarios, allowing models to better detect and respond to them.
In this way, synthetic data complements real-world data by filling gaps and improving overall model performance.
SSN: Training as a service is a major new direction. How does it simplify the process for developers and integrators without deep AI infrastructure?
Edward Mauser: Training as a service is designed to enhance the existing video analytics market. Today, nearly all deployed video analytics solutions rely on classical models. Our goal is to improve their performance and drive broader adoption.
We handle the most complex and resource-intensive parts of the process, especially data preparation, which accounts for roughly 80% of the effort and cost in model development.
That includes data capture, regulatory compliance, anonymization, and the structuring and balancing of datasets. By taking care of all that, developers and integrators need only a basic level of data science knowledge to train effective models.
SSN: How do these new tools fit into the broader roadmap for Hafnia and Milestone’s vision for AI?
Edward Mauser: Hafnia has a growing roadmap of services – all built on the data library we’re developing. As you know, we already offer VLM as a service and training as a service. You can expect additional complementary services, including more domain-specific VLMs, as well as continued expansion of the data library to support a broader range of use cases and improve model performance.