
Pro-Vigil releases “concerning” report on AI bias in video surveillance

More than one-third would do nothing about AI bias in their video surveillance systems, as long as they deter crime


SAN ANTONIO—Pro-Vigil, a provider of remote video monitoring, management and crime deterrence solutions, published an eye-opening research report that found companies are more concerned about their Artificial Intelligence (AI)-powered video surveillance system’s ability to deter crime than any potential AI bias issues. 

Pro-Vigil surveyed 100 users of AI-powered video surveillance systems at companies across a variety of commercial verticals, including construction and auto. The results were outlined in a report titled “Perceptions of Artificial Intelligence in Video Surveillance.”

Jeremy White, Founder of Pro-Vigil, told Security Systems News that the goal when putting together the AI Security Survey was to determine how existing and potential customers view AI, how much they know about AI, and how it is being used in their video surveillance systems, as well as their opinions on AI bias. 

“With so much talk around artificial intelligence (AI) in the world right now, the goal was to survey our existing customer base, as well as potential customers, to find out how they see AI,” he explained. “We wanted to find out if they understand it, if they think about it, and if so, how they think about it. We wanted to understand their comfort level and any concerns around AI in general, especially as it relates to crime deterrence and video surveillance.”


Pro-Vigil’s survey showed that 62 percent of respondents do not care, or are not sure if they care, if their AI is biased in their video surveillance systems.

“The data clearly shows that a majority don't understand what AI is, or that it could be used unethically, and it’s due to the education gap,” White explained. “AI is evolving and being deployed at record speeds across industries and platforms, quicker than decision-makers and stakeholders can even get comfortable with or understand. And there’s a very small group of people who actually know what the biases are and hold the keys to that. We hear comments around biases and that AI can be used in wrong ways, but what we don’t realize is how much it’s already affecting our lives, like on social media for example. Lack of education and knowledge out there is most concerning.”

When asked if they would do anything if their AI video system was doing a good job deterring crime, but was using unethical algorithms, 37 percent of respondents said they would do nothing.

When asked if organizations are just throwing ethics out the door if all they care about is crime deterrence, White responded, “No, I don't believe so. I believe a majority have not been exposed and just don't understand what artificial intelligence is and how it can be used.

“What they do understand is that it's doing something to help deter crime and that's positive. Even those that said they would do nothing likely believe that because it’s out of their hands. They don’t know what to ask or who to ask, and they don’t understand the biases that could exist.”

Another startling result in the report is that 89 percent of respondents would not even know how to check whether their AI video surveillance systems were biased. That raises the question of what should be done to ensure ethical standards are met when using these systems, such as adding staff to monitor AI practices for potential bias.
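To make “checking for bias” concrete, one common starting point is comparing error rates across demographic or site groups on a labeled evaluation set. The sketch below is a hypothetical illustration, not Pro-Vigil's methodology; the record format and group labels are assumptions.

```python
# Hypothetical sketch of one basic bias check: comparing false-positive
# rates across groups on a labeled evaluation set. A "false positive" here
# is a clip the system flagged even though no flag was warranted.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged: bool, warranted: bool)."""
    fp = defaultdict(int)         # unwarranted flags, per group
    negatives = defaultdict(int)  # clips where a flag was NOT warranted
    for group, flagged, warranted in records:
        if not warranted:
            negatives[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

# Assumed example data: large gaps between groups' rates are a red flag
# that the system treats otherwise-similar footage differently.
eval_set = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rates(eval_set))  # {'group_a': 0.5, 'group_b': 0.0}
```

A real audit would use far more data and additional metrics (detection rate, precision), but even this simple comparison answers the question most respondents couldn't: how one would begin to check a system for bias.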

“I don't know that companies need to hire additional resources to manage AI and the practices of AI, but it should be included in the organization’s security IT governance practices,” White suggested. “Some best practices applied to other areas should be followed to protect and secure the data collected by the artificial intelligence. If they don't have those resources in house, an alternative to hiring a full-time employee would be to hire consultants to help with this strategy – how to set it up, manage it and teach what you need to succeed.”

AI Trends

Based on the survey results, respondents’ knowledge of and opinions about AI varied widely. White believes this trend will “completely change” as education and awareness of how AI works and how it can be used improve.

“Right now, AI is a buzzword – education and awareness are not progressing as fast as AI is being deployed,” he pointed out. “And as more AI is deployed, five years from now I believe we will see completely different results than this initial survey represents today.”

With the AI knowledge, or lack of knowledge, that was outlined in the report, White stressed the need for organizations to be transparent when ensuring that their AI video systems operate ethically all the time.

“Companies need to be transparent about how they're using artificial intelligence and need to have a well-documented plan that dictates the usage and storage of all this data,” he explained. “You have to keep tabs on what these systems are doing; you can’t set them up and let them go. It’s necessary to redirect them and fine-tune them from time to time.

“Keeping tabs on AI and its performance is extremely important. This is where that human element is so critical to ensure ethical performance of the AI.”

AI Awareness

White noted that he was not surprised by the findings in the AI Security Survey because of the lack of knowledge of how AI works and its effects on a company.

“There is very little awareness of how artificial intelligence works and how it can impact an organization,” he said. “We know the general population does not understand AI at this point, but they will. It’s just a matter of time as education around AI improves and awareness is raised. This survey is a way to start having that conversation to understand how AI can have a positive impact, but also to understand the ethics that need to be addressed.”

The Pro-Vigil founder added, however, that the data AI-powered video surveillance systems gather could carry bias around race, gender and other attributes, because “it absolutely could be trained and used in those ways.”

“I like to use the analogy that if a picture is worth a thousand words, what is a high-definition video stream running through an advanced artificial intelligent program worth?” he explained. “All of those things are data points that you send in a certain direction – you tell it what is good and what isn’t. You have to stop and say how do I use this responsibly? Also how do we safeguard ourselves from unethical practices? Most of us are thinking about how to leverage it for the positives, but I don’t think we’re having enough conversation around how we do it responsibly.”

Balance Between Ethics, Crime Deterrence

One conclusion that was made in a summary of the survey’s findings was that “while 37 percent of respondents said they would do nothing about unethical AI in video surveillance as long as it was deterring crimes, there is no need to make that kind of tradeoff.”

White noted that there could be a balance achieved between ethics and crime deterrence so both objectives are reached.  

“I absolutely believe balance can be achieved between providing effective surveillance services, and at the same time, doing it in an ethical way,” he said. “It goes back to having and enforcing best practices for the use of the artificial intelligence, and transparency with those that it's being deployed for. And as long as a human is part of the final decision-making process that’s actioned upon, then there is a way to find balance.”

While the report pointed out some glaring concerns about what companies know, and whether they even care, about AI bias, White noted the importance of remaining transparent and cautious in how the data these AI-powered systems gather is used.

“I think that in the remote video surveillance industry, we need to be transparent with what we’re using AI for, what we’re collecting and how we’re storing that data – and we need to err on the side of caution,” he noted. “While there’s no governing body or best practices in place just yet, it’s important to ensure the level of services that we’re providing at this time are safe, fair, and ethical.”
