According to research conducted by North Carolina State University, American law enforcement agencies that use AI for predictive policing, facial recognition, and gunshot detection often have an insufficient understanding of the technology and its limitations.
The study's primary goal was to investigate how artificial intelligence (AI) influences community-police relations, and it included interviews with 20 law enforcement professionals. According to the study, officers' limited understanding of AI capabilities hinders their ability to recognize the limitations and ethical risks associated with these technologies.
The Capitol riot of January 6, 2021, according to the study, spurred increased use of facial recognition technology in law enforcement. The Government Accountability Office surveyed 42 federal agencies, 20 of which reported using facial recognition in criminal investigations.
The study emphasizes the importance of comprehensive AI education and training programs for law enforcement officers. By deepening their awareness of AI's capabilities and limits, police personnel can better manage the ethical problems the technology poses, fostering stronger community-police relations.
According to the report, if new AI technologies are properly regulated and applied, they have the potential to increase community trust in law enforcement and the criminal justice system. Participants, however, raised concerns about issues such as algorithmic bias, empathy replication, privacy, and trust.
The sources for this piece include an article from Fox News.