
AI Safety Updates: The Need for Rigorous Testing Amid Recent NIST, Government Initiatives



Artificial intelligence (AI) has quickly emerged as a critical area of development, with the prospect of delivering immense benefits across many sectors. With each new breakthrough, however, the associated risks evolve. Responsible and safe deployment therefore demands careful testing and evaluation of AI systems, a priority reflected in recently released initiatives and commitments from the Biden-Harris administration and from leaders at technology companies around the world.

The Biden-Harris Administration’s Commitment to AI Safety

The administration is building on the voluntary commitments of major U.S. AI companies, which now include Apple, and on the Executive Order President Biden signed nine months ago articulating his vision for positioning the U.S. at the forefront of AI innovation while managing its risks.

Federal agencies have also been deeply involved in addressing AI safety and security risks. The Executive Order laid out measures for federal agencies to carry out, including the release of technical guidelines by the AI Safety Institute (AISI) and frameworks by the National Institute of Standards and Technology (NIST).

For instance, NIST's AI Risk Management Framework has been widely adopted to help individuals and organizations manage AI risks effectively, benefiting society at large. The Department of Energy (DOE) has also developed AI testbeds to evaluate model safety and security with regard to potential threats to critical infrastructure and national security.

NIST Director and Under Secretary of Commerce for Standards and Technology Laurie E. Locascio stated: “For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”

Why Thorough Testing is Vital in AI Deployment

Rigorous testing of AI systems is fundamental to identifying risks and ensuring safe deployment. Testing helps developers uncover vulnerabilities that could be exploited for malicious purposes. The latest initiatives of the Biden-Harris administration prioritize the development and expansion of testbeds and model evaluation tools as critical for assessing the safety and reliability of AI models.

Additionally, the Department of Defense (DoD) and the Department of Homeland Security (DHS) have launched pilot AI programs to defend critical government software, addressing vulnerabilities in national security and civilian government networks. These are part of a larger interagency pilot effort across the federal government to increase trust and transparency in AI applications.

Advancing Responsible AI Innovation

Responsible AI innovation requires a balance between harnessing AI’s potential and safeguarding against its risks. The administration has taken several steps to promote responsible AI development, including awarding research teams access to computational resources through the National AI Research Resource (NAIRR) pilot. These projects target challenges such as deepfake detection and AI safety.

In addition, the Department of Education has released a guide for designing safe, secure, and trustworthy AI tools for educational use to benefit students and teachers. 

The Importance of Industry Commitment

The voluntary commitments of major AI companies, including Apple, underscore the industry’s important role in advancing responsible AI development. Such commitments lay a foundation for collaboration between government and the private sector in addressing AI challenges on terms beneficial to all. The AI Talent Surge, part of the Executive Order, has brought hundreds of AI professionals into government, further strengthening the government’s capacity to manage AI risks.

According to the White House Fact Sheet released on July 26, agencies have published new technical guidelines from the AISI for public comment. These guidelines assist leading AI developers in evaluating and managing the risk that dual-use foundation models are misused. They detail how developers can prevent increasingly advanced AI systems from being misused to harm individuals, public safety, and national security, and provide strategies for increasing transparency about their products.

Key Insights from NIST and the Department of Commerce

The Department of Commerce, through NIST, is centrally involved in developing new guidance and tools to ensure AI is used safely and responsibly. NIST recently released several key initiatives and updates:

Preventing Misuse of Dual-Use Foundation Models

NIST released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1) to help mitigate risks from generative AI and dual-use foundation models. This guidance outlines voluntary best practices for developers to protect their systems from being misused for harmful purposes.

Testing AI Systems Against Adversarial Attacks

NIST developed Dioptra, an open-source software toolset that lets users measure how adversarial attacks can degrade AI system performance. It is intended to help AI developers and users test how robust their models are against different types of attacks.
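To make the idea concrete, the sketch below shows the kind of robustness check that a tool like Dioptra automates: it measures a model's accuracy before and after inputs are perturbed by the classic fast gradient sign method (FGSM). This is a minimal illustration in PyTorch, not Dioptra's own API; the model and data are hypothetical stand-ins.

```python
# Minimal sketch of an adversarial robustness check, in the spirit of what
# Dioptra automates. Uses PyTorch and the FGSM attack; this is NOT Dioptra's
# API, and the model and data below are illustrative stand-ins.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Return inputs perturbed by the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    return (x + epsilon * x.grad.sign()).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-in classifier and data; a real evaluation would use a trained
    # model and a held-out test set.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    x, y = torch.randn(256, 20), torch.randint(0, 3, (256,))

    clean_acc = accuracy(model, x, y)
    adv_acc = accuracy(model, fgsm_perturb(model, x, y, epsilon=0.25), y)
    print(f"clean accuracy: {clean_acc:.2%}  adversarial accuracy: {adv_acc:.2%}")
```

The gap between the two accuracy numbers is the quantity of interest: a robust model degrades little under perturbation, while a brittle one collapses. Dioptra generalizes this pattern across many attack types and experiment configurations.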

Global Engagement on AI Standards

NIST’s publication A Plan for Global Engagement on AI Standards (NIST AI 100-5) makes the case for increased international collaboration in developing and promulgating AI-related consensus standards, engaging stakeholders worldwide in a spirit of cooperation and information exchange.

Managing Generative AI Risks

NIST has released two guidance documents on managing risks associated with generative AI, such as chatbots and tools that create text, images, and video.

The Path Forward

As AI continues to shape the future, testing and evaluation are key to ensuring it is applied safely and ethically. The steps taken by the Biden-Harris administration, along with industry commitments, set a solid foundation for continued AI innovation while managing risks. Rigorous testing and collaboration will be essential to successfully navigating this complex landscape of AI development.
