AI tools promise efficiency at work, but they can erode trust, creativity and agency

By Simon Osuji
October 8, 2025
in Artificial Intelligence


Image: AI keyboard. Credit: Unsplash/CC0 Public Domain

What if your biggest competitive asset is not how fast AI helps you work, but how well you question what it produces?


Business leaders tend to prioritize efficiency and compliance in the workplace. It’s one of the reasons why so many are drawn toward incorporating generative AI technologies into their workflows. A recent survey found 63% of global IT leaders worry their companies will be left behind without AI adoption.

But in the rush to adopt AI, some organizations are overlooking the real impact it can have on workers and company culture.

Most organizational strategies focus on AI’s short-term efficiencies, such as automation, speed and cost saving. What tends to be overlooked are the impacts AI has on cognition, agency and cultural norms. AI is fundamentally restructuring not only what we know, but how we know it.

As AI becomes more integrated, it will continue to influence organizational tone, pace, communication style and decision-making norms. This is why leaders must set deliberate boundaries and consciously shape organizational culture in relation to AI integration.

Once embedded into workflows, AI influences workplace defaults: which sources appear first, what tone a memo takes and where managers set the bar for “good enough.” If people don’t set these defaults, tools like AI will instead.

As researchers who study AI, psychology, human-computer interaction and ethics, we are deeply concerned with the hidden effects and consequences of AI use.

Psychological effects of AI at work

Researchers are beginning to document a number of psychological effects associated with AI use in the workplace. These impacts expose current gaps in epistemic awareness—how we know what we know—and how those gaps can weaken emotional boundaries.

Such shifts can affect how people make decisions, calibrate trust and maintain psychological safety in AI-mediated environments.

One of the most prominent effects is known as “automation bias.” Once AI is integrated into a company’s workflow, its outputs are often internalized as authoritative sources of truth.

Because AI-generated outputs appear fluent and objective, they can be accepted uncritically, creating an inflated sense of confidence and a dangerous illusion of competence.

One recent study found that knowledge workers—those who turn information into decisions or deliverables, like writers, analysts and designers—accepted AI outputs without any scrutiny in 40% of tasks.

The erosion of self-trust

A second concern is the erosion of self-trust. Continuous engagement with AI-generated content leads workers to second-guess their instincts and over-rely on AI guidance, often without realizing it. Over time, work shifts from generating ideas to merely approving AI-generated ones, diminishing personal judgment, creativity and original authorship.

One study found that users have a tendency to follow AI advice even when it contradicts their own judgment, resulting in a decline in confidence and autonomous decision-making. Other research shows that when AI systems provide affirming feedback—even for incorrect answers—users become more confident in their decisions, which can distort their judgment.

Workers can end up deferring to AI as an authority despite its lack of lived experience, moral reasoning or contextual understanding. Productivity may appear higher in the short term, but the quality of decisions, self-trust and ethical oversight may ultimately suffer.

Emerging evidence also points to neurological effects of over-reliance on AI. One recent study tracked professionals' brain activity over four months and found that ChatGPT users exhibited 55% less neural connectivity than those working unassisted. They struggled to remember essays they had co-authored only moments earlier and showed reduced creative engagement.

So what can leaders and managers do about it?

What leaders and managers can do

Resilience has become something of a corporate buzzword, but genuine resilience can help organizations adapt to AI.

Resilient organizations teach employees to effectively collaborate with AI without over-relying on its outputs. This requires systematic training in interpretive and critical skills to build balanced and ethical human-AI collaboration.

Organizations that value critique over passive acceptance will become better at thinking critically and adapting knowledge, and will build stronger ethical capacity. One way of achieving this is by shifting from a growth-oriented mindset to an adaptive one. In practical terms, this means workplaces should seek to do the following:

  1. Train people to separate fluency from accuracy and to ask where information comes from rather than passively consuming it. With better epistemic awareness, workers become active interpreters who understand not only what an AI tool is saying, but also how and why it's saying it.
  2. Teach people to monitor their thinking processes and question knowledge sources. A recent study showed professionals with strong metacognitive practices, like planning, self-monitoring and prompt revision, achieved significantly higher creativity when using AI tools, while others saw no benefit. That means metacognition could be the “missing link” for productive LLM use.
  3. Avoid a one-size-fits-all approach and consider levels of automation by task stage. AI tool developers should be encouraged to define clear roles for when the model drafts or analyzes, when the human leads and when verification is mandatory. Consider adding AI use to responsibility and accountability charts.
  4. Create workplace cultures that encourage workers to question AI outputs, track those challenges as quality signals and budget time for verification. Workplaces should publish style norms for AI-assisted writing, set confidence thresholds and evidence requirements by function, and specify who signs off at each risk level.
  5. Hold quarterly “drift reviews” to spot shifts in tone, reliance or bias, before they calcify into organizational culture.

Efficiency will not decide the winners

As we are starting to see, the drive for efficiency will not decide which firms are most successful; the ability to interpret and critically assess AI outputs will.

The companies that pair speed with skepticism and protect judgment as a first-class asset will handle volatility better than those that treat AI as an autopilot. Speed may get you to the next decision, but judgment keeps you in business.

Ethical intelligence in organizations requires an ongoing investment in epistemic awareness, interpretive skill, psychological safety and active value-driven design.

Companies capable of balancing technological innovation with critical thinking and deep ethical understanding will be the ones to thrive in the AI era.

Provided by
The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
AI tools promise efficiency at work, but they can erode trust, creativity and agency (2025, October 8), retrieved 8 October 2025 from https://techxplore.com/news/2025-10-ai-tools-efficiency-erode-creativity.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




