Law enforcement is learning how to use AI more ethically

By Simon Osuji
July 16, 2025


Image: police body camera. Credit: Pixabay/CC0 Public Domain

As more and more sectors experiment with artificial intelligence, one of the areas that has most quickly adopted this new technology is law enforcement. It’s led to some problematic growing pains, from false arrests to concerns around facial recognition.

However, a new training tool is now being used by law enforcement agencies across the globe to ensure that officers understand this technology and use it more ethically.

Based largely on the work of Cansu Canca, director of responsible AI practice at Northeastern University’s Institute for Experiential AI, and designed in collaboration with the United Nations and Interpol, the Responsible AI Toolkit is one of the first comprehensive training programs for police focused exclusively on AI. At the core of the toolkit is a simple question, Canca says.

“The first thing that we start with is asking the organization, when they are thinking about building or deploying AI, do you need AI?” Canca says. “Because any time you add a new tool, you are adding a risk. In the case of policing, the goal is to increase public safety and reduce crime, and that requires a lot of resources. There’s a real need for efficiency and betterment, and AI has a significant promise in helping law enforcement, as long as the risks can be mitigated.”

Thousands of officers have already undergone training using the toolkit, and this year, Canca led a training session for 60 police chiefs in the U.S. The U.N. will soon be rolling out additional executive-level training in five European countries as well.

Uses of AI like facial recognition have attracted the most attention, but police are also using AI for simpler things like generating video-to-text transcriptions for body camera footage, deciphering license plate numbers in blurry videos and even determining patrol schedules.

All those uses, no matter how minor they might seem, come with inherent ethical risks if agencies don’t understand the limits of AI and where it’s best used, Canca says.

“The most important thing is making sure that every time we create an AI tool for law enforcement, we have as clear an understanding as possible of how likely this tool is to fail, where it might fail, and how we can make sure the police agencies know that it might fail in those particular ways,” Canca says.

Even if an agency claims it needs or wants to use AI, the more important question is whether it’s ready to deploy AI. The toolkit is designed to get law enforcement agencies thinking about what best suits their situation. A department might be ready to develop its own AI tool like a real-time crime center. However, most that are ready to adopt the technology are more likely to procure it from a third-party vendor, Canca explains.

At the same time, it’s important for agencies to also recognize when they aren’t yet ready to use AI.

“If you’re not ready—if you cannot keep the data safe, if you cannot ensure adequate levels of privacy, if you cannot check for bias, basically if your agency is not able to assess and monitor technology for its risks and mitigate those risks—then you probably shouldn’t go super ambitious just yet and instead start building those ethics muscles as you slowly engage with AI systems,” Canca says.

Canca notes that the toolkit is not one-size-fits-all. Each sector, whether policing or education, has its own ethical framework and requires a slightly different approach, one sensitive to that sector’s specific ethical issues.

“Policing is not detached from ethics” and has its own set of ethical questions and criticisms, Canca says, including “a really long lineage of historical bias.”

Understanding those biases is key when implementing tools that could re-create those very biases, creating a vicious cycle between technology and police practice.

“There are districts that have been historically overpoliced, so if you just look at that data, you’re likely to overpolice those areas again,” Canca says. “Then the question becomes, ‘If we understand that’s the case, how can we mitigate the risk of discrimination, how can we supplement the data or ensure that the tool is used for the right purposes?’”
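To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. The district names, starting counts, allocation rule and detection model are all assumptions for the example, not any real predictive-policing system. It shows how, when patrols are allocated in proportion to last year’s recorded incidents and incidents are only recorded where officers are present, a historically skewed record can keep reproducing itself even though every district’s true incident rate is identical.

```python
# Illustrative sketch only: the districts, counts and parameters are hypothetical
# assumptions, not data or logic from any real predictive-policing system.
# Every district has the same true incident rate, but district A's historical
# records are inflated by past over-policing. Patrols follow the records, and
# incidents only enter the data where patrols are present, so the skewed
# records keep reproducing themselves.

import random

random.seed(0)

TRUE_RATE = 10                          # same real incident rate in every district
recorded = {"A": 30, "B": 10, "C": 10}  # historical records, skewed toward A


def allocate_patrols(records, total_patrols=30):
    """Naive allocation: patrols proportional to recorded incidents."""
    total = sum(records.values())
    return {district: total_patrols * count / total
            for district, count in records.items()}


for year in range(1, 6):
    patrols = allocate_patrols(recorded)
    new_records = {}
    for district, patrol_count in patrols.items():
        # An incident is recorded only if a patrol is there to observe it.
        detection_prob = min(1.0, patrol_count / 20)
        new_records[district] = sum(
            random.random() < detection_prob for _ in range(TRUE_RATE)
        )
    recorded = new_records
    rounded = {district: round(p, 1) for district, p in patrols.items()}
    print(f"year {year}: patrols {rounded}, recorded {recorded}")
```

In this toy setup, district A tends to keep drawing about three times the patrols of B and C year after year, and the recorded data never reveals that the underlying rates are equal. That self-confirming pattern is the kind of risk the toolkit asks agencies to assess and mitigate before deployment.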

The goal of the toolkit is to avoid those ethical pitfalls by making officers aware that humans are still a vital component of AI. An AI system might be able to analyze a city and suggest which areas might need more assistance based on crime data, but it’s up to humans to decide if a specific neighborhood might need more patrol officers or maybe social workers and mental health professionals.

“Police are not trained to ask the right questions around technology and ethics,” Canca says. “We need to be there to guide them and also push the technology providers to create better technologies.”

Provided by Northeastern University

This story is republished courtesy of Northeastern Global News (news.northeastern.edu).

Citation: Law enforcement is learning how to use AI more ethically (2025, July 16), retrieved 16 July 2025 from https://techxplore.com/news/2025-07-law-ai-ethically.html





