NIST’s ARIA Program and its Implications for AI Safety

NIST ARIA Program for AI Safety

Table of Contents

As AI continues to revolutionize industries even as it impacts our daily lives, ensuring safe operations of AI systems has become the most critical area of concern. Now, the National Institute of Standards and Technology (NIST) is one giant leap closer to this direction with the launch of the Assessing Risks and Impacts of Artificial Intelligence (ARIA) program. This effort will advance the measurement science necessary to make AI robust and trustworthy by filling the gaps in AI evaluation.

Goals of the ARIA Program 

The goals of the ARIA program are to:

  • Fill Gaps in AI Evaluation: Methods for evaluating state-of-the-art AI often come up short in understanding how AI systems embed into the real world and its societal consequences. ARIA will help fill the gap in broader developments of AI about individual people and society. 
  • Societal Understanding: By eliciting AI systems in realistic scenarios, ARIA will strive to understand how these technologies impact people and communities in practice and how individuals engage with AI and adapt to its functionalities. 
  • Develop Metrics and Guidelines: The program will develop scalable guidelines, methods, tools, methodologies, and metrics that would enable agencies to deploy strategies to ensure safety in their AI systems. This work is germane to the design, development, and responsible deployment of AI technologies.

Learn more about the ARIA program, its evaluations, and goals

Evaluation Approach 

ARIA undertakes multi-tier evaluation to comprehensively review AI systems:

  1. Model Testing: This involves testing the technical capabilities and functionalities of AI models. It consists of the comparison of system outputs with known outcomes to establish the accuracy of performance.
  2. Red-Teaming: This consists of the stress test of AI systems for potential of vulnerabilities and adverse results. It helps bring into light biases, inaccuracies, and harmful outputs.
  3. Field Testing: The most extensive level, refers to how AI systems perform under practical conditions with real users. This level goes a step further in illustrating how human beings interact with AI and what impact those interactions have on society.

Visit the ARIA FAQ document for more on the NIST ARIA Evaluation Levels.

Implications for the Future Regulatory Landscape

ARIA will be a vital contributor in creating the future regulatory landscape of AI technologies. Empirical results and in-depth evaluations of AI systems under ARIA will:

  • Inform Policy Development: Insights from ARIA will help policymakers design regulations to ensure AI technologies are safe, reliable, and beneficial to society. 
  • Guiding Industry Standards: ARIA, through its deliverable standards and related metrics, will help to set industry standards on how to maintain safe and trustworthy AI in practice, thus promoting best practices across all economic sectors. 
  • Increasing Public Trust: There will be more public trust in the presentation of rigorous and transparent evaluations of the reliability of AI technologies concerning their solicited concerns related to potential risks and societal impacts of the AI technologies.

To continue learning more about ARIA, you can join the ARIA email distribution list by signing up here

Cyber Testing and Assessments for AI

The NIST, in addition to the ARIA program, has developed the AI Risk Management Framework (RMF) to help organizations govern the risks related to AI. This model is to be adopted voluntarily and seeks to imbibe an aspect of trustworthiness in AI systems. It comprises:

Generative AI Risk Management: A working document published in draft in April 2024 aiming to be the framework for the management of new risks that are associated with generative AI, laying down over 400 actions the developers should take to reduce such.

Crosswalks and International Alignment: NIST has developed crosswalks between the AI RMF and international guidance to foster collaboration and global alignment.

The AI RMF facilitates the design, development, operation, and assessment of AI offerings, services, and systems. It comes with a playbook, roadmap, and perspectives to make operationalization simple.

How TestPros Can Support

As an independent IT assessment service, TestPros is positioned to help the business community and governmental entities tackle the issues involved in AI safety and compliance. Our services offer a complete lifecycle evaluation of your AI systems for safety, security, and reliability at each step.

  • Customized AI Assessment: We provide our clients with the assessments necessary, which follow the ARIA standards and measures under the program, throughout identifying and mitigating an organization’s faced risks. 
  • Regulatory Compliance: TestPros assists organizations in developing an understanding and compliance with emerging AI regulations to enable AI technologies that are both efficient and compliant. 
  • Advanced AI Governance: Our independent assessment supports sound AI governance frameworks with essential insights that allow organizations to operationalize AI in responsible ways.

To learn more about how we can support you in conducting AI assessments, send us a message.

SHARING IS CARING

Get In
Touch

Our pool of certified engineers, subject matter experts, and IT support staff remove the burden of IT—freeing you up to be the best at what you do.

Ready To Experience TestPros ?

*All fields are mandatory.

Related Posts

Skip to content