Managing the Risks of Public Safety Artificial Intelligence

[Image: A robot sitting in front of a computer using a keyboard]

John G. Peters, Jr., Ph.D., and John Black, D.B.A.

©2024. All rights reserved.

Artificial Intelligence (AI) is advancing at such an accelerated rate that it is becoming increasingly challenging to stay up to date.

AI is the simulation of human intelligence in machines: algorithms designed to perform complex tasks that, historically, only humans could do, such as learning, reasoning, and problem-solving.

Like any technology, AI can be, and has been, deliberately misused (e.g., deepfakes, solicitation of money) or inadvertently misused (e.g., uploading protected data, failing to verify results). Legal briefs have been filed containing nonexistent cases, facial recognition systems have misidentified people, and sensitive and/or personal information uploaded to AI tools has reached the World Wide Web. When AI “hallucinates,” it produces inaccurate or false information, which requires public safety agencies to have policies and training in place to reduce these and other risk management concerns. Human verification of accuracy is an absolute necessity and requirement.

Policy

Data from a seminal 2024 survey on AI use in U.S. law enforcement agencies, conducted by the Institute for the Prevention of In-Custody Deaths, Inc. (IPICD), found that 74% of respondents (n=150) reported not having an AI policy, 15% (n=31) did not know whether their agency had an AI policy, and a meager 10.8% (n=22) reported having an AI policy. This is a risk management concern because employees using AI daily without AI policy guidance could trigger Monell liability issues (Monell v. Department of Social Services of the City of New York, 436 U.S. 658 [1978]), not to mention associated risk management training concerns. Policy training cannot take place unless there is a policy, and plaintiffs may argue that the absence of one shows a municipality’s deliberate indifference toward providing direction and guidance to employees about AI, and/or that the municipality was negligent.

Shadow AI

With or without an AI policy, employees often use AI daily (think Alexa, Siri, ChatGPT, etc.), and a “policy” prohibiting the use of AI is not a policy but rather a rule, according to trial attorney James E. “Jeb” Brown, Esq., who has defended law enforcement agencies during his 30-year career as a California municipal lawyer. Employees who use AI at work, regardless of policy, are known as “Shadow AI” users. In short, they are using AI in the “shadows” and will probably continue to do so.

Training

IPICD AI survey respondents (n=250) identified an equally hazardous risk management matter: lack of AI training. Of the 159 respondents answering this survey question, 93% (n=148) indicated their agency had provided NO training on using AI; only 7% (n=11) indicated their agency had provided such training. When employees use AI in their daily work environments, it can be argued that such AI use is a core task under the Supreme Court of the United States (SCOTUS) decision in City of Canton, Ohio v. Harris, 489 U.S. 378 (1989). It can also be argued that any constitutional harm caused by AI resulted from the municipality’s and/or management’s deliberate indifference toward training employees about its proper and improper uses in their respective assignments. A collateral argument can be made that this failure to train constitutes negligence by the municipality and/or its agency leadership.

AI Qualified Curators

Associated with both policy and training development are the qualifications of the policy and training curators. Simply reading literature about AI and/or attending a general introductory online AI seminar is not enough to give these curators the breadth and depth of AI knowledge necessary for writing comprehensive policies, procedures, and rules, or for developing training. Policy and training developers must understand “what’s under the hood” of AI so they can craft AI policy and AI training that will not be defeated because the curators lacked AI competency.

Internal Affairs

During a recent Internal Investigations/Discipline seminar produced by the Americans for Effective Law Enforcement, Inc. (AELE), real-time demonstrations showed attendees how easily, quickly, and accurately AI can be used to identify policy violations. However, policy and training must be in place for internal investigators to guide them and limit their discretion when using AI for such inquiries. Collective bargaining associations (unions) are sure to make the use of AI a contract issue because of its use and/or potential misuse in investigations of alleged officer misconduct and subsequent discipline.

Robots

Drones and robotic devices (e.g., robotic dogs such as Spot, the four-legged robot developed by Boston Dynamics) are classified as AI robots and require a policy, followed by training on both that policy and the robot(s), before employees are authorized to use them in tactical or daily activities. For example, how low can a drone fly over a person’s backyard before a search warrant is necessary? What about a rogue officer who flies a drone to watch people as they sunbathe or to peek into high-rise apartments? Comprehensive AI policies and training will address these and similar risk management, tactical, and operational concerns.

Body-worn Camera Generated Incident Reports

While space does not allow identification of every potential AI risk management issue, a growing concern is allowing Body-worn Camera AI (B-WC AI) to generate an officer’s “draft” incident report. Several risk management concerns are associated with B-WC AI, including policy, training, legal issues, B-WC limitations, and testifying.

Policy: Municipalities and agencies that permit B-WC AI to author “draft” reports must have a written policy and training in place identifying when such “draft” reports are authorized and how they must be reviewed before a final report is authored, including the role of supervisors. Recently, GeekWire (September 26, 2024) reported on a prosecutor who told the King County (WA) Police Chiefs’ and Sheriff’s Association not to use AI for police reports because of “the potential for AI hallucinations.” One example identified a report that referred to an officer who was not at the scene. Missing such an AI “hallucination” could be fatal to a criminal case and/or cause other officer and factual credibility problems in both criminal and civil matters.

Another news article reported that some agencies in Oklahoma and Indiana permit B-WC AI reports; one officer told reporters that, after reviewing the B-WC AI “draft” report, he remembered things he had initially “forgotten.” What else might the officer have “forgotten” or not known until reviewing the AI-generated report?

Another serious concern about B-WC AI reports is perspective. Per Graham v. Connor, 490 U.S. 386 (1989), the reasonableness of an officer’s use of force must be judged from the officer’s “perspective” on the scene. A B-WC AI report is most likely not based upon the officer’s “perspective” or field of view, but upon that of the camera. Retired Henderson (NV) Police Department Sergeant James Borden conducted a limited analysis of video evidence distortion. Using an Axon 4 B-WC, his team compared the camera’s fisheye lens view with that of a 50mm lens, which approximates the human eye. They found significant distortions in perspective, field of view, appearance, and distance, as well as inaccuracies in documentation and visual representation, all of which can impact an Internal Affairs inquiry, discipline, and criminal and/or civil trials.

First-line supervisors may unintentionally become involved in inaccurate B-WC AI “draft” reports when officers who are eager to get off shift, or who simply do not critically read the “draft” reports, submit them as final reports. Supervisors are usually required to review reports and initial them as complete, accurate, and thorough. However, have supervisors been trained by the municipality and/or the agency to identify important missing information and to review reports for AI hallucinations that may impact accuracy?

Fact! A B-WC, with or without AI, will not record smells or a suspect’s muscle flexing or tensing. It will not record what the officer sees when looking in a different direction from where the camera is pointing. These limitations are in addition to what Borden and his team discovered, and there are likely more.

Summary

AI is an exceptionally useful tool that can assist public safety employees in the performance of their tasks, but it is not perfect and has limitations. Policy and training curators must learn about AI’s benefits and shortcomings before drafting policies or lesson plans, or teaching colleagues how it can be used to make their jobs more efficient and themselves more productive. Unfortunately, the “bad guys” often take the time to learn AI capabilities, knowing that many public safety employees will not understand AI. At this moment, the IPICD survey results show that most public safety agencies lack both AI policies and AI training, and many public safety employees are beginning to embrace and use AI without fully understanding what lies in front of them. Other important technologies have been adopted and later restricted by court mandate because of improper use, lack of training, and the like; with AI, there is still time to do it right.

John G. Peters, Jr., Ph.D., serves as president and chief learning officer of the Institute for the Prevention of In-Custody Deaths, Inc. and as executive director of the Americans for Effective Law Enforcement, Inc. He has curated AI programs, webinars, and online training, and has spoken on AI topics before several organizations, including the International Municipal Lawyers Association. He is a graduate of several AI programs, including those offered by the MIT Sloan School of Management and the Computer Science & Artificial Intelligence Laboratory (CSAIL), and served as project manager of the IPICD AI public safety survey.

John Black, D.B.A., served more than 30 years in U.S. Army Special Forces and 20 years in municipal law enforcement. A judicially qualified expert witness, he is a graduate of AI programs, including one from the MIT Sloan School of Management & Executive Education, an IPICD Board member, and co-designer of the IPICD AI public safety survey.