GSX teaser: Security leaders weigh in on power of AI in ESRM

By Ken Showers
Updated 12:16 AM CDT, Wed August 13, 2025
NEW ORLEANS — To combat new threats, enterprise security risk management (ESRM) must keep pace with the broader evolution being driven by artificial intelligence (AI). But while AI has advanced security applications, it has also raised questions about ethics, governance and the impact of future computing advances.
That’s where experts Paul Mercer, managing director at HawkSight SRM Ltd, and Rachelle Loyear, VP of Integrated Security Solutions at Allied Universal, come in. Their panel at GSX, “AI-Enhanced ESRM: Charting the Future of Data-Focused Security Risk Management,” will examine real-world examples, such as automating incident detection and providing faster risk insights, while addressing privacy, bias and oversight concerns.
SSN: Can you tell me some more about your topic and what you’ll be discussing?
Loyear: Paul Mercer and I will be talking about how artificial intelligence can impact and enhance each phase of the ESRM cycle. We will include real examples of AI tools in use today, discuss what’s coming next, and ground all of it within the principle that ESRM remains a risk-led discipline. AI should support decision-making, not replace the human judgment and context that security leaders bring to the table.
SSN: What has been the biggest hurdle to the adoption of AI in the ESRM space?
Loyear: The biggest challenge is trust. Yes, trusting the technology, but even more so trusting the AI process and the outcomes it produces. Security leaders are used to owning every step of a risk assessment or incident response; handing part of that to a machine can feel uncomfortable without a clear understanding of how it works and how biases or data quality could affect the output. Layer onto that the challenge of integrating AI into legacy systems, navigating privacy and compliance requirements, and proving return on investment, and you have a complex adoption path for many people.
SSN: If there was one thing you hope your audience takes away from this session, what would it be?
Loyear: That AI isn’t an optional “nice to have” for the future. It is rapidly becoming a core requirement for security risk management. Used well, AI lets us see patterns and act on risks at a speed and scale no human team can match, freeing security professionals to focus on the human-driven aspects of protection that machines can’t replicate. AI will enhance our programs, and it’s incumbent on all of us to leverage the tools we have at our disposal.