Professor discusses the risks international organizations face with AI and data protection

By Simon Osuji
September 30, 2024
Artificial Intelligence


Credit: CC0 Public Domain

Policymakers across the globe are grappling with the vast implications of advances in artificial intelligence technology, including how these tools will be used by international organizations.

Aaron Martin, an assistant professor at the University of Virginia’s School of Data Science, was recently invited to share his insights on this critical topic during a panel discussion at the International Organizations Workshop on Data Protection, which was held at the World Bank in Washington and co-hosted by the European Data Protection Supervisor. This year’s gathering marked the first time the conference took place outside of Europe.

Martin, who joined the School of Data Science faculty in 2023 with a joint appointment in UVA’s Department of Media Studies, specializes in data governance and how international bodies establish transnational policy, particularly as it relates to technology.

At the panel discussion, Martin shared his thoughts on the challenges global institutions face with data protection and why it is vital that they work to address them.

He recently chatted about his experience at the conference and his views on some of the many facets of this rapidly evolving global issue.

Your panel focused on AI use by international organizations. Broadly speaking, what is your impression of how widely used AI systems are by these agencies?

Suffice it to say, international organizations—including those that were represented at the workshop in D.C.—are very diverse in terms of their missions and mandates.

These range from U.N. agencies with development or humanitarian missions to organizations like NATO or Interpol, which facilitate security and law enforcement cooperation internationally. Each of them is exploring the use of AI in different ways, and my impression is that currently, their approach is a cautious one, which is encouraging.

A key feature of IOs is that they enjoy what are known as legal immunities and privileges—these help ensure their independence and effective functioning. What this means in practice is that national laws and regulations for data protection (like the General Data Protection Regulation) and AI (like the EU AI Act) won’t apply to IOs as they do to government bodies or commercial firms.

This becomes a real governance issue—how do we ensure that IOs are processing data and using new technologies responsibly? Most of these organizations have established policies for privacy and data protection, but AI introduces a new set of challenges that they need to grapple with. The point of this workshop is for the organizations to work together to develop good guidance and practices for data, and increasingly, AI.

The discussion in part focused on the risks of these systems. What should international organizations prioritize when it comes to mitigating the risks of AI to the many constituencies affected by their work?

Recently, I’ve been struck by news reports about the challenges AI companies face in terms of their access to new sources of quality data. There’s growing anxiety that AI models will become less useful and less reliable if they aren’t fed with more and more data—these models are “hungry,” as one of my co-panelists described it. There are fears that AI models will begin to collapse if they’re trained on too much synthetic (i.e., fake) or AI-generated data, so AI companies are scrambling for new data partners.

At the workshop, I focused my intervention on raising awareness about the varied risks of IOs’ oversharing data with AI companies. IOs have incredibly rich and diverse data, for example, about development indicators, global conflict, and humanitarian affairs.

They also have data from parts of the world that are very underrepresented online, which is where AI companies typically go to scrape data. IOs need to think carefully about how to ensure the confidentiality of this data and to take steps to protect it from misuse and toxic AI business models.

International organizations, as you mentioned, are not a monolith, and the audience for your panel was composed of representatives from diverse groups. To what extent should various types of international organizations think about these issues differently based on their mission?

There will be some common challenges—every IO has a human resources department, for example, so enterprise applications exist in IOs just like they do in any other organization. And many, if not most, IOs have important budgetary considerations that will shape and possibly limit their use of AI tools, including generative AI.

What I’m particularly interested in, including in my research, is how the use of AI by humanitarian IOs may impact the recipients of aid—so-called beneficiaries. Should IOs rely exclusively on AI to make decisions about who receives food aid, for example? What are the risks of doing so? These are hard questions that require engagement with a range of stakeholders, including those directly impacted by these decisions.

You’ve done a lot of work looking at technology’s impact on historically marginalized communities, particularly refugees. When it comes specifically to humanitarian organizations and AI, what are your biggest concerns?

Humanitarian organizations are generally being pretty thoughtful about their approach to AI. “Do no digital harm” is their mantra, which means they’re very sensitive to the potential and actual harms that AI might inflict on refugees and others impacted by conflict and crisis.

I do worry about what’s been referred to as “AI snake oil” in the aid sector, and organizations being sold technology that simply can’t deliver on the hype. It’s important that we continue engaging with these organizations to help them understand the possibilities and the risks.

What were some of your main takeaways from the other speakers on your panel, and from others you heard at the conference?

Well, it was Chatham House Rule, so I ought to be careful here, but I was quite impressed by the strategic thinking that IOs are undertaking to incorporate AI into their organizations. I’ve attended other conferences where it feels like folks are mindlessly fishing for AI use cases, and that’s usually the wrong approach.

Another panelist explained how his organization is using AI to document human rights abuses around the world, which is a fascinating application and speaks to the potential for AI to have a positive impact in the world.

Provided by University of Virginia

Citation: Q&A: Professor discusses the risks international organizations face with AI and data protection (2024, September 30), retrieved 30 September 2024 from https://techxplore.com/news/2024-09-qa-professor-discusses-international-ai.html


