IEEE-USA’s New Guide Helps Companies Navigate AI Risks

By Simon Osuji
September 19, 2024
in Artificial Intelligence

Organizations that develop or deploy artificial intelligence systems know that the use of AI entails a diverse array of risks including legal and regulatory consequences, potential reputational damage, and ethical issues such as bias and lack of transparency. They also know that with good governance, they can mitigate the risks and ensure that AI systems are developed and used responsibly. The objectives include ensuring that the systems are fair, transparent, accountable, and beneficial to society.

Even organizations that are striving for responsible AI struggle to evaluate whether they are meeting their goals. That’s why the IEEE-USA AI Policy Committee published “A Flexible Maturity Model for AI Governance Based on the NIST AI Risk Management Framework,” which helps organizations assess and track their progress. The maturity model is based on guidance laid out in the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (RMF) and other NIST documents.

Building on NIST’s work

NIST’s RMF, a well-respected document on AI governance, describes best practices for AI risk management. But the framework does not provide specific guidance on how organizations might evolve toward the best practices it outlines, nor does it suggest how organizations can evaluate the extent to which they’re following the guidelines. Organizations therefore can struggle with questions about how to implement the framework. What’s more, external stakeholders including investors and consumers can find it challenging to use the document to assess the practices of an AI provider.

The new IEEE-USA maturity model complements the RMF, enabling organizations to determine their stage along their responsible AI governance journey, track their progress, and create a road map for improvement. Maturity models are tools for measuring an organization’s degree of engagement or compliance with a technical standard and its ability to continuously improve in a particular discipline. Organizations have used the models since the 1980s to help them assess and develop complex capabilities.

The maturity model’s activities are built around the RMF’s four pillars, which foster dialogue, understanding, and concrete activities for managing AI risks and developing trustworthy AI systems responsibly. The pillars are:

  • Map: The context is recognized, and risks relating to the context are identified.
  • Measure: Identified risks are assessed, analyzed, or tracked.
  • Manage: Risks are prioritized and acted upon based on their projected impact.
  • Govern: A culture of risk management is cultivated and present.

A flexible questionnaire

The foundation of the IEEE-USA maturity model is a flexible questionnaire based on the RMF. The questionnaire has a list of statements, each of which covers one or more of the recommended RMF activities. For example, one statement is: “We evaluate and document bias and fairness issues caused by our AI systems.” The statements focus on concrete, verifiable actions that companies can perform while avoiding general and abstract statements such as “Our AI systems are fair.”

The statements are organized into topics that align with the RMF’s pillars. Topics, in turn, are organized into the stages of the AI development life cycle, as described in the RMF: planning and design, data collection and model building, and deployment. An evaluator who’s assessing an AI system at a particular stage can easily examine only the relevant topics.
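As a rough illustration of that structure, the Python sketch below shows one way the questionnaire’s statements could be represented and filtered by life-cycle stage. The field names, stage labels, and filtering helper are assumptions made for illustration; only the example statement text comes from the model itself.

```python
# Hypothetical sketch: representing questionnaire statements and filtering
# them by life-cycle stage. Field names and stage labels are illustrative,
# not taken from the IEEE-USA maturity model.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str    # concrete, verifiable action the organization performs
    pillar: str  # "map", "measure", "manage", or "govern"
    topic: str   # topic grouping aligned with the RMF pillar
    stage: str   # "planning_and_design", "data_and_model_building", "deployment"

statements = [
    Statement(
        text="We evaluate and document bias and fairness issues caused by our AI systems.",
        pillar="measure",
        topic="bias_and_fairness",
        stage="data_and_model_building",
    ),
]

def statements_for_stage(stage: str) -> list[Statement]:
    """Return only the statements relevant to the stage being assessed."""
    return [s for s in statements if s.stage == stage]
```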

Scoring guidelines

The maturity model includes these scoring guidelines, which reflect the ideals set out in the RMF:

  • Robustness, extending from ad hoc to systematic implementation of the activities.
  • Coverage, ranging from engaging in none of the activities to engaging in all of them.
  • Input diversity, ranging from having activities informed by inputs from a single team to diverse input from internal and external stakeholders.

Evaluators can choose to assess individual statements or larger topics, thus controlling the level of granularity of the assessment. In addition, the evaluators are meant to provide documentary evidence to explain their assigned scores. The evidence can include internal company documents such as procedure manuals, as well as annual reports, news articles, and other external material.
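A minimal sketch of what a single scored entry might look like, assuming a simple 1-to-5 scale, follows. The scale, field names, and example values are hypothetical; the evidence list mirrors the kinds of documents described above.

```python
# Hypothetical sketch: one assessment entry scoring a statement (or a whole
# topic) on the three guidelines and recording the documentary evidence
# behind the score. The 1-5 scale and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Assessment:
    target: str           # statement or topic being scored
    robustness: int       # 1 = ad hoc ... 5 = systematic
    coverage: int         # 1 = none of the activities ... 5 = all of them
    input_diversity: int  # 1 = single team ... 5 = internal and external stakeholders
    evidence: list[str] = field(default_factory=list)

example = Assessment(
    target="bias_and_fairness",
    robustness=3,
    coverage=4,
    input_diversity=2,
    evidence=["Internal model-review procedure manual", "2024 annual report"],
)
```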

After scoring individual statements or topics, evaluators aggregate the results to get an overall score. The maturity model allows for flexibility, depending on the evaluator’s interests. For example, scores can be aggregated by the NIST pillars, producing scores for the “map,” “measure,” “manage,” and “govern” functions.


The aggregation can expose systematic weaknesses in an organization’s approach to AI responsibility. If a company’s score is high for “govern” activities but low for the other pillars, for example, it might be creating sound policies that aren’t being implemented.

Another option for scoring is to aggregate the numbers by some of the dimensions of AI responsibility highlighted in the RMF: performance, fairness, privacy, ecology, transparency, security, explainability, safety, and third-party (intellectual property and copyright). This aggregation method can help determine if organizations are ignoring certain issues. Some organizations, for example, might boast about their AI responsibility based on their activity in a handful of risk areas while ignoring other categories.
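As a rough sketch of that aggregation step, the snippet below averages hypothetical per-statement scores grouped either by RMF pillar or by responsibility dimension. The sample statements, scores, and simple averaging are assumptions; the model leaves the choice of aggregation to the evaluator.

```python
# Hypothetical sketch: aggregating per-statement scores by pillar or by
# responsibility dimension to surface weak spots. Sample data and the use
# of a plain average are assumptions for illustration.
from collections import defaultdict
from statistics import mean

# (statement_id, pillar, dimension, score) rows produced by an evaluation
scored = [
    ("bias_eval", "measure", "fairness", 4),
    ("privacy_review", "measure", "privacy", 2),
    ("incident_process", "manage", "safety", 3),
    ("governance_policy", "govern", "transparency", 5),
]

def aggregate(rows, key_index):
    """Average scores grouped by the chosen key (pillar or dimension)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key_index]].append(row[-1])
    return {key: mean(values) for key, values in groups.items()}

by_pillar = aggregate(scored, 1)     # low "manage" vs. high "govern" suggests unimplemented policies
by_dimension = aggregate(scored, 2)  # low scores expose neglected risk areas
print(by_pillar, by_dimension)
```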

A road toward better decision-making

When used internally, the maturity model can help organizations determine where they stand on responsible AI and can identify steps to improve their governance. The model enables companies to set goals and track their progress through repeated evaluations. Investors, buyers, consumers, and other external stakeholders can employ the model to inform decisions about the company and its products.

When used by internal or external stakeholders, the new IEEE-USA maturity model can complement the NIST AI RMF and help track an organization’s progress along the path of responsible governance.


