
UK government must show that its AI plan can be trusted to deal with serious risks when it comes to health data

by Simon Osuji
February 3, 2025
in Artificial Intelligence


Credit: Pixabay/CC0 Public Domain

The UK government’s new plan to foster innovation through artificial intelligence (AI) is ambitious. Its goals rely on better use of public data, including renewed efforts to maximize the value of health data held by the NHS. Yet this could involve the use of real data from NHS patients. Such use has been highly controversial in the past, and previous attempts to share this health data have at times come close to disaster.


Patient data would be anonymized, but concerns remain about potential threats to this anonymity. For example, the use of health data has been accompanied by worries about access to data for commercial gain. The care.data program, which collapsed in 2014, had a similar underlying idea: sharing health data across the country to both publicly-funded research bodies and private companies.

Poor communication about the more controversial elements of this project and a failure to listen to concerns led to the program being shelved. More recently, the involvement of the US tech company Palantir in the new NHS data platform raised questions about who can and should access data.

The new effort to use health data to train (or improve) AI models similarly relies on public support for success. Yet perhaps unsurprisingly, within hours of this announcement, media outlets and social media users attacked the plan as a way of monetizing health data.

“Ministers mull allowing private firms to make profit from NHS data in AI push,” one published headline reads.

These responses, and those to care.data and Palantir, reflect just how important public trust is in the design of policy. This is true no matter how complicated technology becomes—and crucially, trust becomes more important as societies increase in scale and we’re less able to see or understand every part of the system. It can be difficult, if not impossible, to make a judgment as to where we should place trust, and how to do that well. This holds true whether we are talking about governments, companies, or even just acquaintances—to trust (or not) is a decision each of us must make every day.

The challenge of trust motivates what we call the “trustworthiness recognition problem”: determining who is worthy of our trust is a challenge as old as human social behavior itself. The problem stems from a simple issue: anyone can claim to be trustworthy, and we often lack a sure way to tell whether they genuinely are.

If someone moves into a new home and sees ads for different internet providers online, there is no sure way to tell which will be cheaper or more reliable. Presentation need not reflect, and often does not reflect, anything about a person’s or group’s underlying qualities. Carrying a designer handbag or wearing an expensive watch doesn’t guarantee the wearer is wealthy.

Luckily, work in anthropology, psychology and economics shows how people—and by consequence, institutions like political bodies—can overcome this problem. This work is known as signaling theory, and explains how and why communication, or what we can call the passing of information from a signaler to a receiver, evolves even when the individuals communicating are in conflict.

For example, people moving between groups may have reasons to lie about their identities. They might want to hide something unpleasant about their own past. Or they might claim to be a relative of someone wealthy or powerful in a community. Zadie Smith’s recent book, “The Fraud,” is a fictionalized treatment of this popular theme, set among the aristocracy of Victorian England.

Yet some qualities are simply not possible to fake. A fraud can claim to be an aristocrat, a doctor, or an AI expert. The signals these frauds unintentionally give off will, however, betray them over time. A false aristocrat will probably be unable to fake his demeanor or accent convincingly enough (accents, among other signals, are difficult to fake for audiences familiar with them).

The structure of society is obviously different from that of two centuries ago, but the problem, at its core, is the same, as, we think, is the solution. Much as there are ways for a truly wealthy person to prove wealth, a trustworthy person or group must be able to show they are worth trusting. How this is possible will undoubtedly vary from context to context, but we believe that political bodies such as governments must demonstrate a willingness to listen and respond to the public’s concerns.

The care.data project was criticized because it was publicized via leaflets dropped at people’s doors that did not contain an opt-out. This failed to signal to the public a real desire to alleviate people’s concerns that information about them would be misused or sold for profit.

The current plan around the use of data to develop AI algorithms must be different. Our political and scientific institutions have a duty to signal their commitment to the public by listening to them, and through doing so develop cohesive policies that minimize the risks to individuals while maximizing the potential benefits for all.

The key is to commit sufficient funding and effort to signaling, and thereby demonstrating, an honest motivation to engage with the public about their concerns. Government and scientific bodies have a duty to listen to the public, and further to explain how they will protect people’s data. Saying “trust me” is never enough. You have to show you are worth it.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: UK government must show that its AI plan can be trusted to deal with serious risks when it comes to health data (2025, February 3), retrieved 3 February 2025 from https://techxplore.com/news/2025-02-uk-ai-health.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
