UK government must show that its AI plan can be trusted to deal with serious risks when it comes to health data

By Simon Osuji
February 3, 2025
in Artificial Intelligence


Image credit: Pixabay/CC0 Public Domain

The UK government’s new plan to foster innovation through artificial intelligence (AI) is ambitious. Its goals rely on better use of public data, including renewed efforts to maximize the value of the health data held by the NHS. Yet this could involve using real data from NHS patients, something that has been highly controversial in the past; previous attempts to use this health data have at times come close to disaster.

Patient data would be anonymized, but concerns remain about potential threats to that anonymity. In particular, the use of health data has long been accompanied by worries about access to data for commercial gain. The care.data program, which collapsed in 2014, was built on a similar idea: sharing health data across the country with both publicly funded research bodies and private companies.

Poor communication about the more controversial elements of this project and a failure to listen to concerns led to the program being shelved. More recently, the involvement of the US tech company Palantir in the new NHS data platform raised questions about who can and should access data.

The new effort to use health data to train (or improve) AI models similarly relies on public support for success. Yet perhaps unsurprisingly, within hours of this announcement, media outlets and social media users attacked the plan as a way of monetizing health data.

“Ministers mull allowing private firms to make profit from NHS data in AI push,” read one headline.

These responses, like those to care.data and Palantir, reflect just how important public trust is to the design of policy. This is true no matter how complicated technology becomes; crucially, trust matters more as societies grow in scale and we become less able to see or understand every part of the system. It can be difficult, if not impossible, to judge where we should place our trust and how to do so well. This holds whether we are dealing with governments, companies, or mere acquaintances: to trust or not is a decision each of us must make every day.

The challenge of trust motivates what we call the “trustworthiness recognition problem”, a problem rooted in the origins of human social behavior. It comes down to a simple issue: anyone can claim to be trustworthy, and we can lack sure ways to tell whether they genuinely are.

If someone moves into a new home and sees ads for different internet providers online, there is no sure way to tell which will be cheaper or more reliable. Presentation need not reflect, and often does not reflect, anything about a person or group’s underlying qualities. Carrying a designer handbag or wearing an expensive watch doesn’t guarantee that the wearer is wealthy.

Luckily, work in anthropology, psychology and economics shows how people, and by extension institutions such as political bodies, can overcome this problem. This body of work, known as signaling theory, explains how and why communication (the passing of information from a signaler to a receiver) can evolve even when the parties communicating are in conflict.

For example, people moving between groups may have reasons to lie about their identities. They might want to hide something unpleasant about their past, or claim to be a relative of someone wealthy or powerful in a community. Zadie Smith’s recent novel, “The Fraud,” offers a fictionalized take on this popular theme, exploring aristocratic life in Victorian England.

Yet some qualities are simply impossible to fake. A fraud can claim to be an aristocrat, a doctor, or an AI expert, but the signals such frauds unintentionally give off will betray them over time. A false aristocrat will probably not manage to fake the demeanor or accent convincingly enough; accents, among other signals, are difficult to fake in front of those familiar with them.

The structure of society is obviously different from that of two centuries ago, but the problem, at its core, is the same, and so, we think, is the solution. Just as there are ways for a truly wealthy person to prove their wealth, a trustworthy person or group must be able to show they are worth trusting. How this is done will undoubtedly vary from context to context, but we believe that political bodies such as governments must demonstrate a willingness to listen and respond to the public’s concerns.

The care.data project was criticized because it was publicized via leaflets dropped at people’s doors that contained no opt-out form. This failed to signal to the public any real desire to alleviate concerns that information about them would be misused or sold for profit.

The current plan to use data to develop AI algorithms must be different. Our political and scientific institutions have a duty to signal their commitment to the public by listening to it, and in doing so to develop cohesive policies that minimize the risks to individuals while maximizing the potential benefits for all.

The key is to commit sufficient funding and effort to signaling, that is, to demonstrating, an honest motivation to engage with the public about its concerns. The government and scientific bodies have a duty to listen to the public, and further to explain how they will protect it. Saying “trust me” is never enough. You have to show you are worth it.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: UK government must show that its AI plan can be trusted to deal with serious risks when it comes to health data (2025, February 3), retrieved 3 February 2025 from https://techxplore.com/news/2025-02-uk-ai-health.html
