FACT SHEET: Biden-Harris Administration Announces New AI Actions and Receives Additional Major Voluntary Commitment on AI | The White House

Nine months ago, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI).

This Executive Order built on the voluntary commitments he and Vice President Harris received from 15 leading U.S. AI companies last year. Today, the administration announced that Apple has signed onto the voluntary commitments, further cementing these commitments as cornerstones of responsible AI innovation.

In addition, federal agencies reported that they completed all of the 270-day actions in the Executive Order on schedule, following their on-time completion of every other task required to date. Agencies also progressed on other work directed for longer timeframes.

Following the Executive Order and a series of calls to action made by Vice President Harris as part of her major policy speech before the Global Summit on AI Safety, agencies all across government have acted boldly. They have taken steps to mitigate AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more. Actions that agencies reported today as complete include the following:

Managing Risks to Safety and Security
Over 270 days, the Executive Order directed agencies to take sweeping action to address AI’s safety and security risks, including by releasing vital safety guidance and building capacity to test and evaluate AI. To protect safety and security, agencies have:

  1. Released for public comment new technical guidelines from the AI Safety Institute (AISI) for leading AI developers in managing the risk of misuse of dual-use foundation models. AISI’s guidelines detail how leading AI developers can help prevent increasingly capable AI systems from being misused to harm individuals, public safety, and national security, as well as how developers can increase transparency about their products.
  2. Published final frameworks on managing generative AI risks and securely developing generative AI systems and dual-use foundation models. These documents by the National Institute of Standards and Technology (NIST) will provide additional guidance that builds on NIST’s AI Risk Management Framework, which offered individuals, organizations, and society a framework to manage AI risks and has been widely adopted both in the U.S. and globally. NIST also submitted a report to the White House outlining tools and techniques to reduce the risks from synthetic content.
  3. Developed and expanded AI testbeds and model evaluation tools at the Department of Energy (DOE). DOE, in coordination with interagency partners, is using its testbeds to evaluate AI model safety and security, especially for risks that AI models might pose to critical infrastructure, energy security, and national security. DOE’s testbeds are also being used to explore novel AI hardware and software systems, including privacy-enhancing technologies that improve AI trustworthiness. The National Science Foundation (NSF) also launched an initiative to help fund researchers outside the federal government design and plan AI-ready testbeds.
  4. Reported results of piloting AI to protect vital government software. The Department of Defense (DoD) and Department of Homeland Security (DHS) reported findings from their AI pilots to address vulnerabilities in government networks used, respectively, for national security purposes and for civilian government. These steps build on previous work to advance such pilots within 180 days of the Executive Order.
  5. Issued a call to action from the Gender Policy Council and Office of Science and Technology Policy to combat image-based sexual abuse, including synthetic content generated by AI. Image-based sexual abuse has emerged as one of the fastest-growing harmful uses of AI to date, and the call to action invites technology companies and other industry stakeholders to curb it. This call flowed from Vice President Harris’s remarks in London before the AI Safety Summit, which underscored that deepfake image-based sexual abuse is an urgent threat that demands global action.

Bringing AI Talent into Government
Last year, the Executive Order launched a government-wide AI Talent Surge that is bringing hundreds of AI and AI-enabling professionals into government. Hired individuals are working on critical AI missions, such as informing efforts to use AI for permitting, advising on AI investments across the federal government, and writing policy for the use of AI in government.

  1. To increase AI capacity across the federal government for both national security and non-national security missions, the AI Talent Surge has made over 200 hires to date, including through the Presidential Innovation Fellows AI cohort and the DHS AI Corps.
  2. Building on the AI Talent Surge 6-month report, the White House Office of Science and Technology Policy announced new commitments from across the technology ecosystem, including nearly $100 million in funding, to bolster the broader public interest technology ecosystem and build infrastructure for bringing technologists into government service.

Advancing Responsible AI Innovation
President Biden’s Executive Order directed further actions to seize AI’s promise and deepen the U.S. lead in AI innovation while ensuring AI’s responsible development and use across our economy and society. Within 270 days, agencies have:

  1. Prepared and will soon release a report on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, including related policy recommendations. The Department of Commerce’s report draws on extensive outreach to experts and stakeholders, including hundreds of public comments submitted on this topic.
  2. Awarded over 80 research teams’ access to computational and other AI resources through the National AI Research Resource (NAIRR) pilot—a national infrastructure led by NSF, in partnership with DOE, NIH, and other governmental and nongovernmental partners, that makes available resources to support the nation’s AI research and education community. Supported projects will tackle deepfake detection, advance AI safety, enable next-generation medical diagnoses, and further other critical AI priorities.
  3. Released a guide for designing safe, secure, and trustworthy AI tools for use in education. The Department of Education’s guide discusses how developers of educational technologies can design AI that benefits students and teachers while advancing equity, civil rights, trust, and transparency. This work builds on the Department’s 2023 report outlining recommendations for the use of AI in teaching and learning.
  4. Published guidance on evaluating the eligibility of patent claims involving inventions related to AI technology, as well as other emerging technologies. The guidance by the U.S. Patent and Trademark Office will help those inventing in the AI space protect their AI inventions and assist patent examiners reviewing applications for patents on AI inventions.
  5. Issued a report on federal research and development (R&D) to advance trustworthy AI over the past four years. The report by the National Science and Technology Council examines an annual federal AI R&D budget of nearly $3 billion.
  6. Launched a $23 million initiative to promote the use of privacy-enhancing technologies to solve real-world problems, including those related to AI. Working with industry and agency partners, NSF will invest through its new Privacy-preserving Data Sharing in Practice program in efforts to apply, mature, and scale privacy-enhancing technologies for specific use cases and establish testbeds to accelerate their adoption.
  7. Announced millions of dollars in further investments to advance responsible AI development and use throughout our society. These include $30 million invested through NSF’s Experiential Learning in Emerging and Novel Technologies program—which supports inclusive experiential learning in fields like AI—and $10 million through NSF’s ExpandAI program, which helps build capacity in AI research at minority-serving institutions while fostering the development of a diverse, AI-ready workforce.

Advancing U.S. Leadership Abroad
President Biden’s Executive Order emphasized that the United States must lead global efforts to unlock AI’s potential and meet its challenges. To advance U.S. leadership on AI, agencies have:

  1. Issued a comprehensive plan for U.S. engagement on global AI standards. The plan, developed by NIST, incorporates broad public and private-sector input, identifies objectives and priority areas for AI standards work, and lays out actions for U.S. stakeholders, including U.S. agencies. NIST and other agencies will report on priority actions in 180 days.
  2. Developed guidance for managing risks to human rights posed by AI. The Department of State’s “Risk Management Profile for AI and Human Rights”—developed in close coordination with NIST and the U.S. Agency for International Development—recommends actions based on the NIST AI Risk Management Framework to governments, the private sector, and civil society worldwide, to identify and manage risks to human rights arising from the design, development, deployment, and use of AI.
  3. Launched a global network of AI Safety Institutes and other government-backed scientific offices to advance AI safety at a technical level. This network will accelerate critical information exchange and drive toward common or compatible safety evaluations and policies.
  4. Launched a landmark United Nations General Assembly resolution. The unanimously adopted resolution, with more than 100 co-sponsors, lays out a common vision for countries around the world to promote the safe and secure use of AI to address global challenges.
  5. Expanded global support for the U.S.-led Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. Fifty-five nations now endorse the political declaration, which outlines a set of norms for the responsible development, deployment, and use of military AI capabilities.

The table below summarizes many of the activities that federal agencies have completed in response to the Executive Order:

[Table: summary of agency actions completed under the Executive Order]

###
