
SNG discussion – AI called the ‘topic of the year’ during Security Megatrends panel

Steve Van Till, Tara Dunning, and Kasia Hanson touch on AI’s impact on the security industry during SIA’s executive conference


NEW YORK—It’s safe to say that artificial intelligence (AI) has emerged as one of the top buzzwords in the security industry over the last couple of years.

At the recent Securing New Ground (SNG) conference, the Security Industry Association’s (SIA’s) premier annual executive gathering in NYC, the topic of AI and its impact on the security sector came up during several sessions at the two-day event.

In fact, AI has been at or near the top of SIA’s Security Megatrends report – the association’s annual publication that analyzes the top 10 trends affecting security industry businesses and practitioners – for the last two years. AI stood atop the trends list over No. 2 Cybersecurity in the 2022 Security Megatrends report, while Cybersecurity took the top spot from AI in the 2023 report.

One of the SNG sessions that focused on AI and how it continues to reshape the security industry – in terms of adding value to security and safety operations, as well as impacting the processes and people who work in the industry – was “Security Megatrends – Taking the Industry’s Pulse and Vitals.”

Brivo CEO Steve Van Till; Tara Dunning, vice president of global security at Wesco; and Kasia Hanson, global director of security ecosystem strategy and partnerships at Intel Corp., focused on the impact of generative AI on security, the evolution of the camera as the “ultimate sensor,” and ethical and trustworthy Internet of Things (IoT) innovation.

Van Till used a slide presentation during the “Security Megatrends” session to demonstrate how his first SNG presentation from eight years ago evolved from “Big Data is Like Teen Dating” to “AI is Like Teen Dating,” with the following musings – “Everybody talks about it. Nobody knows how to do it. Everyone thinks everyone else is doing it. So everyone claims they are doing it.”

“AI has become, I would say, the topic of the year,” Van Till said. “If we look at the Megatrends from last year, AI was No. 2 and Cyber was No. 1. I think those are going to flip this year. Certainly, with the release of a lot of the generative AI tools that have come out in this past year, it is even more a topic of conversation than ever before because the generative tools, both textual and graphic, do such amazing things. They really are close to magic and people are completely enchanted with them.”

The Ultimate Sensor

In discussing the evolution of the camera as the “ultimate sensor,” Dunning pointed out that “AI is becoming the apex of the Megatrends, and certainly cybersecurity of our physical assets is very much a part of that.”

She noted that the proliferation of sensors cracked the top 10 on the 2023 Megatrends list for the first time. “Sensorization of things is really what has enabled the camera to operate in a very different way.”

Dunning also pointed out that the camera has become the “everything tool” in the industry – more than just a recording device used for security purposes – and discussed what that means in terms of creating exponential value at the solution level, whether it’s 1X, 10X, 100X, or even 1,000X.

Dunning noted that video analytics and AI are fundamentally transforming the role of the camera, going back to when the first IP network camera was produced in 1996.

“AI and analytics and the compute on the network and on the smart edge is fundamentally transforming the way we think about video surveillance and physical security solutions,” she explained. “This is really driven by two main drivers, and again I think back to 1996 moving from an analog to an IP-based network camera. The integration of the network and IoT is really one of the biggest drivers and again, proliferation of sensors and then the intelligent edge.”

Dunning cited another fact supporting the increased value of the camera: nearly 80 million security cameras were shipped globally in 2022, during the height of the supply chain crisis brought on by the COVID-19 pandemic.

“That's more than two-and-a-half times the installed base in the U.S. and North America alone,” she said. “Exponential sea change leads to advancement in our industry. Truly an exciting time.”

She added that some 17 billion IoT-connected devices sit today on the same networks as most of the world’s cameras, with that number expected to grow to 28 to 30 billion in the next five years. “So again, exponential sea change leads to transformation of our industry,” Dunning said.

Another Megatrend that Dunning pointed out was the elimination of industry boundaries, which ranked No. 6 on the list.

“The camera is no longer the sort of purpose-built, singular-focus use case. It's really becoming the everything tool,” she explained. “With AI and video analytics on the network connected to multiple devices, including sensors, it becomes a transformative tool to solve business challenges and very specific use cases across any number of functions, so think beyond security. Think safety, compliance, building controls, revenue optimization, usefulness in a retail environment, queueing of customers. All these use cases are possible, and the opportunities are absolutely limitless.”

Generative AI

Van Till outlined the impact of generative AI on security using what he called “three buckets” – tools, training, and treachery – citing anomaly detection, predictive threat analysis, and dynamic access control as examples of generative AI tools.

“A lot of tools are coming out of this that are useful for all kinds of things that we want to accomplish in the security world,” he said. “We have a lot of situations in security where a person or a small group of people has to make a decision fairly quickly about what's the right thing to do right now. And with the kinds of inputs and the kinds of training that these tool sets can accomplish now, this is something that can be an active assist to people who are in critical roles, whatever they may be, and we think that that's coming.”

In terms of generative AI’s impact on training, Van Till noted, “Who among us in our companies doesn't feel we need more training, or that the training is old? This is something that is a constant need, so I point to this as a very, very useful thing that it does for security because like any other industry, good training is something we need.”

As far as treachery, Van Till pointed out that “there's certainly a lot of fear mongering that goes on about AI. We've all heard it. AI is going to crush humanity. If it's in the hands of a good person, there will be good outcomes. If it's in the hands of the wrong people, there could be deepfakes or mimicry of people.

“If you played at all with any of the machines that capture your vocal tonality and reproduce you, and not only reproduce you in the same languages that you know but also reproduce you in any language spoken on Earth and sync it with your video, this kind of thing is very easy to do now with tools that are off the shelf, all but free. So as people who are guarding the castle and who are subject to social engineering attacks, these are very relevant capabilities.”

Van Till concluded by saying, “Every generation of technology has been met with skepticism. ‘It's going to destroy our youth.’ People thought of books when they first came out that they’re going to displace regular learning. I'm sure the internet and social media were met with those same kinds of concerns. But the thing about all those other categories of threats and the new technologies is that they don't improve themselves, whereas artificial intelligence can act on itself to be better than it was, and now it's better at acting on itself.”

Security of AI

Hanson noted the importance of using ChatGPT correctly and ethically.

“People and companies are starting to look at how do I take my information and have ChatGPT help me write it? You need to put in guardrails that are very serious and have serious ramifications if people use it incorrectly in your organization,” she said.

Another AI concern that Hanson touched on was hallucinations – AI-generated responses that might contain false or misleading information presented as fact.

“You may think it's simple to just use AI to develop something for you, but hallucinations are real and can give you fake data and fake information,” she explained. “If you don't oversee that information and you use it, it could cause serious harm.”

Hanson also discussed AI regulations, namely the proposed EU AI Act, which, if passed, would govern the sale and use of AI across the European Union’s 27 member states.

“The EU is developing regulations, their EU AI Act, and it's really about people oversight, right? They want transparency. Sure, you can use AI, but we want to make sure there's people to oversee that AI and what it's generating,” she said. “I'm sure the U.S. will start to adopt some of that, but it's good to understand what that looks like as part of your business, but also how you're delivering that to your customers.

“My guess is over the next couple of years, customers will start to have a cybersecurity clause in the RFPs, as well as an AI clause. They will want to know what your posture is and they will want to know what your AI policies are, and will want to know if you're using generative AI, and you're going to need to be able to respond to that to all of your customers.”  

Hanson also discussed AI trust and ethical backlash.

“Especially in this industry, with the use of video surveillance and things like facial recognition, there are ethics involved and questions that we need to be developing and really being more transparent around the ethical use of some of the technology and especially AI,” she explained. “The other piece of it is that there is a lot of opportunity, right? I see a lot of opportunity to create services, but also you have to know what you're doing and how it's being used.”

Hanson then contrasted the security of AI with AI for security.

“There are a couple levels of that. One is security of AI - how secure is the AI? So, if you think about machine learning and models, the bad guys are using AI already. They've been using it for a while and they're very well-funded, obviously. So, they know how to get into a model and poison that model, right?

“A lot of the models that are being used today are public models, and they don't have the security around them that you would think they do. And so being able to understand where the models are coming from, how secure is it, is really important. That's why you're seeing companies that are developing tools or have developers/data scientists to scan the models and make sure they're clean.”

She continued, “AI is also being used for what I like to call helping the defenders defend, so AI is absolutely being used to detect and respond and identify threats much faster. It’s a really important tool for the defenders to defend.”

Hanson concluded by discussing other important components of AI.

“The ethics piece is an important component, and there's a lot in the news about the ethics in AI,” she said. “It's really important for us to understand what those are and what types of inputs we have for that as well, so the idea around ethical and trustworthy IoT innovation is something that I think is important. The sustainability piece is also important, and then fairness and bias. Those I think are really critical as we talk about ethics and trustworthy IoT engagement.”
