Replacing front-line workers with AI can be a bad idea. Here’s why

by Simon Osuji
October 31, 2023
in Artificial Intelligence


Image: chatbot. Credit: Sanket Mishra from Pexels

AI chatbots are already widely used by businesses to greet customers and answer their questions—either over the phone or on websites. Some companies have found that they can, to some extent, replace humans with machines in call center roles.


However, the available evidence suggests there are sectors—such as health care and human resources—where extreme care needs to be taken regarding the use of these front-line tools, and ethical oversight may be necessary.

A recent, and highly publicized, example is that of a chatbot called Tessa, which was used by the National Eating Disorder Association (NEDA) in the US. The organization had initially maintained a helpline operated by a combination of salaried employees and volunteers. This had the express goal of assisting vulnerable people suffering from eating disorders.

However, this year, the organization disbanded its helpline staff, announcing that it would replace them with the Tessa chatbot. The reasons for this are disputed. Former workers claim that the shift followed a decision by helpline staff to unionize. The vice president of NEDA cited an increased number of calls and wait times, as well as legal liabilities around using volunteer staff.

Whatever the case, after a very brief period of operation, Tessa was taken offline over reports that the chatbot had issued problematic advice that could have exacerbated the symptoms of people seeking help for eating disorders.

It was also reported that Dr. Ellen Fitzsimmons-Craft and Dr. C Barr Taylor, two highly qualified researchers who assisted in the creation of Tessa, had stipulated that the chatbot was never intended as a replacement for an existing helpline or to provide immediate assistance to those experiencing intense eating disorder symptoms.

Significant upgrade

So what was Tessa designed for? The researchers, alongside colleagues, had published an observational study highlighting the challenges they faced in designing a rule-based chatbot to interact with users who are concerned about eating disorders. It is quite a fascinating read, illustrating design choices, operations, pitfalls and amendments.

The original version of Tessa was a traditional rule-based chatbot, albeit a highly refined one: a chatbot that follows a pre-defined structure based on logic. It could not deviate from the standardized, pre-programmed responses calibrated by its creators.

Their conclusion included the following point: “Rule-based chatbots have the potential to reach large populations at low cost in providing information and simple interactions but are limited in understanding and responding appropriately to unanticipated user responses.”
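The limitation the researchers describe can be sketched in a few lines of code. The following is a hypothetical illustration of a rule-based chatbot, not Tessa's actual rules or content: every response is pre-programmed, and any input the designers did not anticipate falls through to a generic fallback.

```python
# Minimal sketch of a rule-based chatbot. The rules below are invented
# for illustration only; they are not drawn from Tessa or NEDA.
RULES = {
    "hello": "Hi! I can share information about healthy eating habits.",
    "what can you do": "I can answer a fixed set of pre-programmed questions.",
}

def respond(user_input: str) -> str:
    """Return a canned response if the input matches a rule, else a fallback."""
    key = user_input.strip().lower().rstrip("?!.")
    return RULES.get(key, "I'm sorry, I don't understand. Could you rephrase?")

print(respond("Hello"))                      # matches a rule
print(respond("I feel anxious about food"))  # unanticipated input -> fallback
```

This is what makes such systems cheap and predictable, but also why they are "limited in understanding and responding appropriately to unanticipated user responses": nothing outside the rule table can be handled.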

This might appear to limit the uses for which Tessa was suitable. So how did it end up replacing the helpline previously used by NEDA? The exact chain of events is under discussion amid differing accounts, but, according to NPR, the hosting company of the chatbot changed Tessa from a rules-based chatbot with pre-programmed responses to one with an “enhanced questions and answers feature.”

The later version of Tessa was one employing generative AI, much like ChatGPT and similar products. These advanced AI chatbots are designed to simulate human conversational patterns with the intention of giving more realistic and useful responses. Generating these customized answers relies on large databases of information, which the AI models are trained to “comprehend” through a variety of technological processes: machine learning, deep learning and natural language processing.

Learning lessons

Ultimately, the chatbot generated what have been described as potentially harmful answers to some users’ questions. Ensuing discussions have shifted the blame from one institution to another. However, the point remains that the ensuing circumstances could potentially have been avoided if there had been a body providing ethical oversight, a “human in the loop” and an adherence to the clear purpose of Tessa’s original design.

It's important to learn lessons from cases such as this against the background of a rush towards the integration of AI in a variety of systems. And while these events took place in the US, they contain lessons for those seeking to do the same in other countries.

The UK would appear to have a somewhat fragmented approach to this issue. The advisory board to the Centre for Data Ethics and Innovation (CDEI) was recently dissolved and its seat at the table was taken up by the newly formed Frontier AI Taskforce. There are also reports that AI systems are already being trialed in London as tools to aid workers—though not as a replacement for a helpline.

Both of these examples highlight a potential tension between ethical considerations and business interests. We must hope that the two will eventually align, balancing the well-being of individuals with the efficiency and benefits that AI could provide.

However, in some areas where organizations interact with the public, AI-generated responses and simulated empathy may never be enough to replace genuine humanity and compassion—particularly in the areas of medicine and mental health.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Replacing front-line workers with AI can be a bad idea. Here’s why (2023, October 31)
retrieved 31 October 2023
from https://techxplore.com/news/2023-10-front-line-workers-ai-bad-idea.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

