Saturday, November 8, 2025
LBNN
  • Business
  • Markets
  • Politics
  • Crypto
  • Finance
  • Energy
  • Technology
  • Taxes
  • Creator Economy
  • Wealth Management
  • Documentaries
No Result
View All Result
LBNN

AI as the attack surface

Simon Osuji by Simon Osuji
November 5, 2025
in Artificial Intelligence
0
AI as the attack surface
0
SHARES
0
VIEWS
Share on FacebookShare on Twitter


Boards of directors are pressing for productivity gains from large-language models and AI assistants. Yet the same features that makes AI useful – browsing live websites, remembering user context, and connecting to business apps – also expand the cyber attack surface.

Tenable researchers have published a set of vulnerabilities and attacks under the title “HackedGPT”, showing how indirect prompt injection and related techniques could enable data exfiltration and malware persistence. Some issues have been remediated, while others reportedly remain exploitable at the time of the Tenable disclosure, according to an advisory issued by the company.

Removing the inherent risks from AI assistants’ operations requires governance, controls, and operating methods that treat AI as a user or device, to the extent that the technology should be subject to strict audit and monitoring

The Tenable research shows the failures that can turn AI assistants into security issues. Indirect prompt injection hides instructions in web content that the assistant reads while browsing, instructions that trigger data access the user never intended. Another vector involves the use of a front-end query that seeds malicious instructions.

The business impact is clear, including the need for incident response, legal and regulatory review, and steps taken to reduce reputational harm.

Research already exists that shows assistants can leak personal or sensitive information through injection techniques, and AI vendors and cybersecurity experts have to patch issues as they emerge.

The pattern is familiar to anyone in the technology industry: as features expand, so do failure modes. Treating AI assistants as live, internet-facing applications – not productivity drivers – can improve resilience.

How to govern AI assistants, in practice

1) Establish an AI system registry

Inventory every model, assistant, or agent in use – in public cloud, on-premises, and software-as-a-service, in line with the NIST AI RMF Playbook. Record owner, purpose, capabilities (browsing, API connectors) and data domains accessed. Even without this AI asset list, “shadow agents” can persist with privileges no one tracks. Shadow AI – at one stage encouraged by the likes of Microsoft, who encouraged users to deploy home Copilot licences at work – is a significant threat.

2) Separate identities for humans, services, and agents

Identity and access management conflate user accounts, service accounts, and automation devices. Assistants that access websites, call tools, and write data need distinct identities and be subject to zero-trust policies of least-privilege. Mapping agent-to-agent chains (who asked whom to do what, over which data, and when) is a bare minimum crumb trail that may ensure some degree of accountability. It’s worth noting that agentic AI is susceptible to ‘creative’ output and actions, yet unlike human staff, are not constrained by disciplinary policies.

3) Constrain risky features by context

Make browsing and independent actions taken by AI assistants opt-in per use case. For customer-facing assistants, set short retention times unless there’s a strong reason and a lawful basis otherwise. For internal engineering, use AI assistants but only in segregated projects with strict logging. Apply data-loss-prevention to connector traffic if assistants can reach file stores, messaging, or e-mail. Previous plugin and connector issues demonstrate how integrations increase exposure.

4) Monitor like any internet-facing app

  • Capture assistant actions and tool calls as structured logs.
  • Alert on anomalies: sudden spikes in browsing to unfamiliar domains; attempts to summarise opaque code blocks; unusual memory-write bursts; or connector access outside policy boundaries.
  • Incorporate injection tests into pre-production checks.

5) Build the human muscle

Train developers, cloud engineers, and analysts to recognise injection symptoms. Encourage users to report odd behaviour (e.g., an assistant unexpectedly summarising content from a site they didn’t open). Make it normal to quarantine an assistant, clear memory, and rotate its credentials after suspicious events. The skills gap is real; without upskilling, governance will lag adoption.

Decision points for IT and cloud leaders

Question Why it matters
Which assistants can browse the web or write data? Browsing and memory are common injection and persistence paths; constrain per use case.
Do agents have distinct identities and auditable delegation? Prevents “who did what?” gaps when instructions are seeded indirectly.
Is there a registry of AI systems with owners, scopes, and retention? Supports governance, right-sizing of controls, and budget visibility.
How are connectors and plugins governed? Third-party integrations have a history of security issues; apply least privilege and DLP.
Do we test for 0-click and 1-click vectors before go-live? Public research shows both are feasible via crafted links or content.
Are vendors patching promptly and publishing fixes? Feature velocity means new issues will appear; verify responsiveness.

Risks, cost visibility, and the human factor

  • Hidden cost: assistants that browse or retain memory consume compute, storage, and egress in ways finance teams and those monitoring per-cycle Xaas use may not have modelled. A registry and metering reduce surprises.
  • Governance gaps: audit and compliance frameworks built for human users won’t automatically capture agent-to-agent delegation. Align controls according to OWASP LLM risks and NIST AI RMF categories.
  • Security risk: indirect prompt injection can be invisible to users, passed from media, text or code formatting, as shown by research.
  • Skills gap: many teams haven’t yet merged AI/ML and cybersecurity practices. Invest in training that covers assistant threat-modelling and injection testing.
  • Evolving posture: expect a cadence of new flaws and fixes. OpenAI’s remediation of a zero-click path in late 2025 is a reminder that vendor posture changes quickly and needs verification.

Bottom line

The lesson for executives is simple: treat AI assistants as powerful, networked applications with their own lifecycle and a propensity for both being the subject of attack and for taking unpredictable action. Put a registry in place, separate identities, constrain risky features by default, log everything meaningful, and rehearse containment.

With these guardrails in place, agentic AI is more likely to deliver measurable efficiency and resilience – without quietly becoming your newest breach vector.

(Image source: “The Enemy Within Unleashed” by aha42 | tehaha is licensed under CC BY-NC 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Source link

Related posts

Gear News of the Week: Fairphone Lands in the US, and WhatsApp Is Finally on the Apple Watch

Gear News of the Week: Fairphone Lands in the US, and WhatsApp Is Finally on the Apple Watch

November 8, 2025
Why Are We All Still Carrying Around Car Keys?

Why Are We All Still Carrying Around Car Keys?

November 8, 2025
Previous Post

Hydrobox: Microgrid Solutions for Rural Africa

Next Post

University of Pennsylvania confirms hacker stole data during cyberattack

Next Post
University of Pennsylvania confirms hacker stole data during cyberattack

University of Pennsylvania confirms hacker stole data during cyberattack

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

RECOMMENDED NEWS

Wisconsin Investment Board Sells Off $350M Stake In Bitcoin ETF

Wisconsin Investment Board Sells Off $350M Stake In Bitcoin ETF

6 months ago
LLMs are becoming more brain-like as they advance, researchers discover

LLMs are becoming more brain-like as they advance, researchers discover

11 months ago
Europe-based units are learning from Ukraine, officers say

Europe-based units are learning from Ukraine, officers say

1 year ago
Minor Hotels in deal with Soma Bay Hotel for Egypt expansion

Minor Hotels in deal with Soma Bay Hotel for Egypt expansion

6 months ago

POPULAR NEWS

  • Ghana to build three oil refineries, five petrochemical plants in energy sector overhaul

    Ghana to build three oil refineries, five petrochemical plants in energy sector overhaul

    0 shares
    Share 0 Tweet 0
  • The world’s top 10 most valuable car brands in 2025

    0 shares
    Share 0 Tweet 0
  • Global ranking of Top 5 smartphone brands in Q3, 2024

    0 shares
    Share 0 Tweet 0
  • Top 10 African countries with the highest GDP per capita in 2025

    0 shares
    Share 0 Tweet 0
  • When Will SHIB Reach $1? Here’s What ChatGPT Says

    0 shares
    Share 0 Tweet 0
  • Privacy Policy
  • Contact

© 2023 LBNN - All rights reserved.

No Result
View All Result
  • Home
  • Business
  • Politics
  • Markets
  • Crypto
  • Economics
    • Manufacturing
    • Real Estate
    • Infrastructure
  • Finance
  • Energy
  • Creator Economy
  • Wealth Management
  • Taxes
  • Telecoms
  • Military & Defense
  • Careers
  • Technology
  • Artificial Intelligence
  • Investigative journalism
  • Art & Culture
  • Documentaries
  • Quizzes
    • Enneagram quiz
  • Newsletters
    • LBNN Newsletter
    • Divergent Capitalist

© 2023 LBNN - All rights reserved.