Executive Actions

The government is responding to AI. The federal government has undertaken hundreds of AI-related actions, and agencies have implemented comprehensive frameworks and programs spanning national security, financial regulation, transportation safety, environmental protection, and international cooperation. An ongoing list of those major actions is detailed below.

Please send me additions.


Key documents #

  • Winning the Race: America’s AI Action Plan (July 10, 2025) — The AI Action Plan is Trump’s flagship AI policy document. “America’s AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security. The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.”
  • NIST AI 600-1: AI RMF Generative AI Profile (April 29, 2024) — NIST released a draft publication based on the AI Risk Management Framework (AI RMF) to help manage the risks of generative AI. The draft AI RMF Generative AI Profile can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management aligned with their goals. It was developed with input from a public working group of more than 2,500 members, and focuses on 12 risks and 400+ actions developers can take. For more information, see NIST’s landing page.

White House Actions under Trump (2025-present) #

  • Adversarial Distillation of American AI Models (April 23, 2026) — As OSTP Director Michael Kratsios explained, “The U.S. has evidence that foreign entities, primarily in China, are running industrial-scale distillation campaigns to steal American AI. We will be taking action to protect American innovation. These foreign entities are using tens of thousands of proxies and jailbreaking techniques in coordinated campaigns to systematically extract American breakthroughs. Foreign entities who build on such fragile foundations should have little confidence in the integrity and reliability of the models they produce.”
  • President Donald J. Trump Advances Energy Affordability with the Ratepayer Protection Pledge (March 4, 2025) — The White House brought the leading AI companies and hyperscalers together to sign the Ratepayer Protection Pledge, ensuring they protect Americans from electricity price hikes due to data center energy requirements now and in the long run, take action to further strengthen the grid, and ensure that all Americans benefit from the oncoming technological boom.
  • 2025 Federal AI Use Case Inventory (April 2026) — This 2025 Federal Agency Artificial Intelligence (AI) Use Case Inventory repository consolidates AI use case inventories from across U.S. Federal agencies, consistent with Section 5 of Executive Order (EO) 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” and pursuant to the Advancing American AI Act and OMB Memorandum M-25-21, “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust.” This repository demonstrates American leadership in AI and provides transparency into how Federal agencies are using AI technology to improve their services to the public.
  • Adjusting Imports of Semiconductors, Semiconductor Manufacturing Equipment, and their Derivative Products into the United States (January 14, 2026) — This Proclamation uses Section 232 to impose a new 25% import duty on certain semiconductors, semiconductor manufacturing equipment, and their derivative products. The Proclamation includes exclusions for U.S. data centers, U.S. research and development, consumer electronics, industrial applications, and the public sector.
  • Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles (December 11, 2025) — Large Language Models (LLMs) procured by the Federal Government must produce reliable outputs free from harmful ideological biases or social agendas. Section 4 of the E.O. requires the Director of the Office of Management and Budget (OMB) to issue guidance to agencies to implement these principles. This memorandum fulfills this requirement.
  • Executive Order on Ensuring a National Policy Framework for Artificial Intelligence (December 10, 2025) — This EO establishes a federal policy to sustain and enhance U.S. AI leadership through a minimally burdensome national policy framework and to limit conflicting state requirements. Specific directives include: an FCC proceeding to consider a federal reporting and disclosure standard for AI models; an FTC policy statement on how the FTC Act applies to AI models and could preempt certain state laws; an evaluation of conditions on federal funding provided to states; and the creation of a DOJ AI Litigation Task Force to challenge state AI laws inconsistent with that policy.
  • Launching the Genesis Mission (November 24, 2025) — This EO establishes a plan to develop an integrated AI platform that harnesses federal scientific datasets to “train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs." The mission will be implemented within the Department of Energy, overseen by the Secretary of Energy, and use DOE facilities.
  • U.S. - Korea Technology Prosperity Deal (October 29, 2025) — This Memorandum of Understanding aims to enable collaboration on AI research and innovation between the U.S. and Korea, including joint efforts to reduce barriers, promote U.S. and Korean AI exports, support AI safety standards and alignment, and promote AI education.
  • U.S. - Japan Technology Prosperity Deal (October 28, 2025) — This Memorandum of Cooperation seeks to promote collaboration between the United States and Japan on research and development in science and technology: “The Participants intend to collaborate closely on promoting pro-innovation AI policy frameworks, promoting exports across our full AI stack, ensuring the rigorous enforcement of existing protection measures while acknowledging the importance of strengthening such measures related to critical and emerging technologies, advancing shared work on industry standards, and safeguarding our children’s digital wellbeing, with a shared commitment to promoting a secure and trustworthy AI ecosystem in a mutually beneficial manner.”
  • Memorandum of Understanding Between the Government of The United States of America and the Government of The United Kingdom of Great Britain and Northern Ireland Regarding the Technology Prosperity Deal (September 18, 2025) — The Memorandum of Understanding aims to promote cooperation in science and technology between the United States and the United Kingdom, including on AI: “The Participants intend to collaborate closely in the build-out of powerful AI infrastructure, facilitate research community access to compute, support the creation of new scientific data sets, and harness their expertise in metrology and evaluations to enable adoption and advance our collective security. The Participants intend to leverage this infrastructure and the AI expertise across industry and elsewhere, to deliver transformational AI-driven change for our societies and economies.”
  • Major Organizations Commit to Supporting AI Education (September 9, 2025) — Following the President’s AI in education executive order in April, numerous organizations have committed to providing AI education resources, including Google, Code.org, IBM, Pearson Education, HP, Zoom, NVIDIA, MasterCard, Dell Technologies, Microsoft, Amazon, Apple, Adobe, xAI, OpenAI, Anthropic, Meta, Siemens, ScaleAI, MagicSchool, Learning.com, Arist, Palo Alto Networks, AT&T, Cisco, Qualcomm, ARM, Charter Communications, Salesforce, Cengage Group, McGraw Hill Education, Houghton Mifflin Harcourt, Kyndryl, Mason Contractors Association of America, SAP America, Silver Lake, Accenture, Walmart, Intuit, Deloitte, Booz Allen, ServiceNow, Roblox, Cognizant, Software & Information Industry Association, Business Software Alliance, ISACA, Micron Technology, Groq, Intel, and Snap.
  • Executive Order on Preventing Woke AI in the Federal Government (July 23, 2025) — While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas. Building on Executive Order 13960 of December 3, 2020, this order helps fulfill that obligation in the context of large language models.
  • Executive Order on Accelerating Federal Permitting of Data Center Infrastructure (July 23, 2025) — This EO facilitates expedited permitting for data centers and related infrastructure, energy, and manufacturing projects in numerous ways, including changes to the Clean Water Act, the Clean Air Act, the National Environmental Policy Act, and Fixing America’s Surface Transportation Act.
  • Promoting The Export of the American AI Technology Stack (July 23, 2025) — This order is meant to “establish and operationalize a program within DOC aimed at gathering proposals from industry consortia for full-stack AI export packages. Once consortia are selected by DOC, the Economic Diplomacy Action Group, the U.S. Trade and Development Agency, the Export-Import Bank, the U.S. International Development Finance Corporation, and the Department of State (DOS) should coordinate with DOC to facilitate deals that meet U.S.-approved security requirements and standards.”
  • Winning the Race: America’s AI Action Plan (July 10, 2025) — The AI Action Plan is Trump’s flagship AI policy document. “America’s AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security. The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so.”
  • Advancing Artificial Intelligence Education for American Youth – The White House (April 23, 2025) — The executive order establishes the White House Task Force on Artificial Intelligence Education, composed of the department heads of various federal agencies. The task force is charged with establishing the Presidential Artificial Intelligence Challenge, encouraging AI adoption and achievement in education across different geographical areas; seeking public-private partnerships and identifying Federal funding for AI education; and generally increasing AI proficiency and literacy through AI education. Additionally, the Secretary of Education is charged with issuing guidance on the use of funds for this purpose, as well as enhancing training for educators on AI. Finally, the Secretary of Labor is tasked with increasing participation in AI-related Registered Apprenticeships.
  • OMB Memo M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust + OMB Memo M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government (April 3, 2025) — According to the White House’s fact sheet, the OMB AI Use and AI Procurement Memos rescind and replace OMB memos on AI use and procurement issued under President Biden’s Executive Order 14110 and shift U.S. AI policy to a “forward-leaning, pro-innovation, and pro-competition mindset” that will make agencies “more agile, cost-effective, and efficient.”
  • Public Comment Invited on Artificial Intelligence Action Plan (February 25, 2025) — President Trump’s recent Artificial Intelligence (AI) Executive Order shows that this Administration is dedicated to America’s global leadership in AI technology innovation. This Order directed the development of an AI Action Plan to sustain and enhance America’s global AI dominance. Today, the American people are encouraged to share their policy ideas for the AI Action Plan by responding to a Request for Information (RFI), available on the Federal Register’s website.
  • Removing Barriers to American Leadership in Artificial Intelligence (January 23, 2025) — The executive order tasks the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs with reviewing all policies, directives, regulations, and other actions pursuant to Executive Order 14110. It charges them to suspend, revise, or rescind any such actions in accordance with law to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The order also tasks the aforementioned officials with developing an Artificial Intelligence Action Plan in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, and the Director of the Office of Management and Budget.
  • Initial Rescissions Of Harmful Executive Orders And Actions (January 20, 2025) — This catch-all executive order revokes a large number of the Biden Administration’s executive orders, including Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

White House Actions under Biden (2021-2025) #

White House Actions under Trump (2017-2021) #

Department of Commerce #

  • American AI Exports Program (October 28, 2025) — The U.S. Department of Commerce published in the Federal Register a request for information (RFI) to solicit public comment on questions relating to the American AI Exports Program (Program). Through that RFI, the Department is seeking information from the public on the request for proposals that the Department will issue pursuant to Executive Order (E.O.) 14320, “Promoting the Export of the American AI Technology Stack.” The Department has determined that an extension of the comment period until December 13, 2025 is appropriate.
  • Commerce Strengthens Restrictions on Advanced Computing Semiconductors, Semiconductor Manufacturing Equipment, and Supercomputing Items to Countries of Concern (October 17, 2023) — The BIS updated and strengthened the 2022 controls to close loopholes and address evolving tech. The updated rules expanded the scope of chips covered (introducing a new “performance density” metric to catch chip designs that might circumvent prior thresholds) and imposed a worldwide license requirement on any export of controlled AI chips to companies headquartered in any country of concern (preventing proxy routing). It also added 43 more countries to the list requiring notification for exports of less advanced (but still sensitive) chips, beyond just China/Macau.
  • Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the People’s Republic of China (PRC) (October 7, 2022) — The Commerce Department’s Bureau of Industry and Security (BIS) imposed sweeping export controls on advanced computing chips and semiconductor technology that underpin high-end AI systems. This seminal rule requires licenses for exporting to China and other adversary nations.

Consumer Financial Protection Bureau (CFPB) #

  • Quality Control Standards for Automated Valuation Models (June 24, 2024) — The CFPB, OCC, FRB, FDIC, NCUA, and FHFA adopted a final rule to implement the quality control standards mandated by the Dodd-Frank Wall Street Reform and Consumer Protection Act for the use of automated valuation models (AVMs) by mortgage originators and secondary market issuers in determining the collateral worth of a mortgage secured by a consumer’s principal dwelling. Under the final rule, institutions that engage in certain credit decisions or securitization determinations must adopt policies, practices, procedures, and control systems to ensure that AVMs used in these transactions to determine the value of mortgage collateral adhere to quality control standards designed to ensure a high level of confidence in the estimates produced by AVMs; protect against the manipulation of data; seek to avoid conflicts of interest; require random sample testing and reviews; and comply with applicable nondiscrimination laws.
  • inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) — “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
  • CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior (April 25, 2023) — The Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission released a joint statement outlining a commitment to enforce their respective laws and regulations.
  • CFPB Issue Spotlight Analyzes “Artificial Intelligence” Chatbots in Banking (June 6, 2023) — The Consumer Financial Protection Bureau (CFPB) released an issue spotlight on the expansive adoption and use of chatbots by financial institutions. Chatbots are intended to simulate human-like responses using computer programming and help institutions reduce the costs of customer service agents. These chatbots sometimes have human names and use popup features to encourage engagement. Some chatbots use more complex technologies marketed as “artificial intelligence,” to generate responses to customers.
  • Consumer Financial Protection Circular 2023-03: Adverse action notification requirements and the proper use of the CFPB’s sample forms provided in Regulation B (September 19, 2023) — When using artificial intelligence or complex credit models, may creditors rely on the checklist of reasons provided in CFPB sample forms for adverse action notices even when those sample reasons do not accurately or specifically identify the reasons for the adverse action? No, creditors may not rely on the checklist of reasons provided in the sample forms (currently codified in Regulation B) to satisfy their obligations under ECOA if those reasons do not specifically and accurately indicate the principal reason(s) for the adverse action. Nor, as a general matter, may creditors rely on overly broad or vague reasons to the extent that they obscure the specific and accurate reasons relied upon.
  • Consumer Financial Protection Circular 2022-03: Adverse action notification requirements in connection with credit decisions based on complex algorithms (May 26, 2022) — When creditors make credit decisions based on complex algorithms that prevent creditors from accurately identifying the specific reasons for denying credit or taking other adverse actions, do these creditors need to comply with the Equal Credit Opportunity Act’s requirement to provide a statement of specific reasons to applicants against whom adverse action is taken? Yes. ECOA and Regulation B require creditors to provide statements of specific reasons to applicants against whom adverse action is taken.
  • Chatbots in consumer finance (June 6, 2023) — This research conducted by the Consumer Financial Protection Bureau (CFPB) explores how the introduction of advanced technologies, often marketed as “artificial intelligence,” in financial markets may impact the customer service experience. The purpose of this report is to explain how chatbot technologies are being used by financial institutions and the associated challenges endured by their customers.
  • Report on Copyright and Artificial Intelligence (July 31, 2024 & May 9, 2025) — Copyright and Artificial Intelligence analyzes copyright law and policy issues raised by artificial intelligence (AI). This Report is being issued in several Parts. Part 1 was published on July 31, 2024, and addresses the topic of digital replicas. Part 2 was published on January 29, 2025, and addresses the copyrightability of outputs created using generative AI. On May 9, 2025, the Office released a pre-publication version of Part 3 in response to congressional inquiries and expressions of interest from stakeholders. A final version of Part 3 will be published in the future, without any substantive changes expected in the analysis or conclusions.

Department of Defense, Department of War (DOD, DOW) #

  • Memo on “Artificial Intelligence Strategy for the Department of War” (January 9, 2026) — As is noted in the first paragraph of this memo: “In the national security domain, AI-enabled warfare and AI-enabled capability development will re-define the character of military affairs over the next decade. This transformation is a race - fueled by the accelerating pace of commercial AI innovation coming out of America’s private sector. The United States Military must build on its lead over our adversaries in integrating this technology, established during President Trump’s first term, to make our Warfighters more lethal and efficient. To this end, aligned with America’s AI Action Plan, I direct the Department of War to accelerate America’s Military AI Dominance by becoming an ‘AI-first’ warfighting force across all components, from front to back.”
  • CDAO and DIU Launch New Effort Focused on Accelerating DOD Adoption of AI Capabilities (December 11, 2024) — The Chief Digital and Artificial Intelligence Office (CDAO) in partnership with the Defense Innovation Unit (DIU) announced the formation of a new AI Rapid Capabilities Cell (AI RCC) focused on accelerating DoD adoption of next-generation artificial intelligence (AI) such as Generative AI (GenAI). The AI RCC will focus on executing pilots in the primary use case areas identified by Task Force Lima, including warfighting and enterprise management. The executive summary of Task Force Lima’s findings can be found on CDAO’s website.
  • Evaluation of the Effectiveness of the Chief Digital and Artificial Intelligence Office’s Artificial Intelligence Governance and Acquisition Process (November 14, 2024) — The Chief Digital and Artificial Intelligence Office (CDAO) was established in December 2021 and was tasked with developing an AI strategy and policy for the DoD, and the acquisition and development of AI products and services. Focusing exclusively on evaluating the CDAO’s strategy and policy development, the report found that the implementation plan for the AI Adoption Strategy and the DoD’s AI policy were past due. The CDAO was supposed to have submitted these plans in the form of a DoD chartering directive and an accompanying DoD Instruction; however, this did not happen. As of June 2024, the implementation plan was still in draft form. The evaluation recommended that the Chief Digital and Artificial Intelligence Officer publish the implementation plan and coordinate with the Director of Administration and Management to review existing guidance that could be incorporated into the DoD Directive and DoD Instruction.
  • Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence (October 24, 2024) — This memorandum fulfills the directive set forth in subsection 4.8 of Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). This memorandum provides further direction on appropriately harnessing artificial intelligence (AI) models and AI-enabled technologies in the United States Government, especially in the context of national security systems (NSS), while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities. A classified annex to this memorandum addresses additional sensitive national security issues, including countering adversary use of AI that poses risks to United States national security.
  • Replicator Initiative (August 28, 2023) — The first iteration of Replicator (Replicator 1), announced in August 2023, will deliver all-domain attritable autonomous systems (ADA2) to warfighters at a scale of multiple thousands, across multiple warfighting domains, within 18-24 months, or by August 2025.
  • Task Force Lima (Generative AI) (August 10, 2023) — Established the CDAO Generative AI task force, which operated until being sunset in December 2024 when the AI Rapid Capabilities Cell launched.
  • Data, Analytics, and Artificial Intelligence Adoption Strategy: Accelerating Decision Advantage (June 27, 2023) — This Strategy builds upon and supersedes the 2018 AI Strategy and the 2020 Data Strategy to continue the Department’s digital transformation. As it notes, “The Department cannot succeed alone. Our integration of data, analytics, and AI technologies is nested within broader U.S. government policy, the network of private sector and academic partners that promote innovation, and a global ecosystem. We need a systematic, agile approach to data, analytics, and AI adoption that is repeatable by all DoD Components. This strategy outlines our approach to improving the organizational environment within which our people can deploy data, analytics, and AI capabilities for enduring decision advantage.”
  • DOD Directive 3000.09 Update: Autonomy in Weapon Systems (January 25, 2023) — This document updated the foundational 2012 policy on autonomous weapons, establishing policy and assigning responsibilities for developing and using autonomous and semiautonomous functions in weapon systems, as well as guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements. It also establishes the Autonomous Weapon Systems Working Group.
  • Department’s Responsible Artificial Intelligence Strategy and Implementation Pathway Maps the Journey to a Trusted AI Ecosystem (June 22, 2022) — Deputy Secretary of Defense Kathleen Hicks has signed the Department’s Responsible Artificial Intelligence Strategy and Implementation Pathway, which guides the Department of Defense’s journey to its goal of a trusted artificial intelligence (AI) ecosystem. The DoD must transform itself into an AI-ready organization, with responsible artificial intelligence (RAI) as a prominent feature to maintain its competitive advantage.
  • Chief Digital and Artificial Intelligence Office (December 8, 2021) — The Chief Digital and Artificial Intelligence Office (CDAO) is established.
  • DOD Adopts Ethical Principles for Artificial Intelligence (February 24, 2020) — The U.S. Department of Defense adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board. The recommendations came after 15 months of consultation with leading AI experts in commercial industry, government, academia and the American public that resulted in a rigorous process of feedback and analysis among the nation’s leading AI experts with multiple venues for public input and comment. The adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems.

Department of Education #

Department of Energy (DOE) #

  • Energy Department Launches ‘Genesis Mission’ to Transform American Science and Innovation Through the AI Computing Revolution (November 25, 2025) — The Genesis Mission will transform American science and innovation through the power of artificial intelligence (AI), strengthening the nation’s technological leadership and global competitiveness. The ambitious mission will harness the current AI and advanced computing revolution to double the productivity and impact of American science and engineering within a decade. It will deliver decisive breakthroughs to secure American energy dominance, accelerate scientific discovery, and strengthen national security.
  • Artificial Intelligence Strategy (October 2025) — DOE’s AI Strategy outlines how the Department plans to harness the power of artificial intelligence with a focus on smart adoption, responsible use, and real-world impact that addresses our nation’s greatest challenges.
  • PermitAI Tool (July 10, 2025) — Pacific Northwest National Laboratory (PNNL) is building a one-stop data platform and a powerful suite of artificial intelligence (AI) tools to streamline and accelerate the review process for critical federal infrastructure.
  • DOE Announces Roadmap for the Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative (July 16, 2024) — Through FASST, DOE and its 17 national laboratories aim to build the world’s most powerful integrated scientific AI systems for science, energy, and national security, in collaboration with academic and industry partners.
  • DOE Established Artificial Intelligence Advancement Council (May 20, 2022) — The DOE established an Artificial Intelligence Advancement Council (AIAC), the first of its kind at the Department. Chartered by Deputy Secretary David Turk, the AIAC coordinates AI activities across DOE’s extensive enterprise and defines Department-wide AI priorities. It brings together top DOE leaders (Science, Nuclear Security, Intelligence, General Counsel, etc.) to provide recommendations on a comprehensive DOE AI strategy led by the DOE’s Artificial Intelligence and Technology Office (AITO).
  • DOE’s Artificial Intelligence landing page.

Equal Employment Opportunity Commission (EEOC) #

  • Compliance Plan for OMB Memorandum M-25-21 (September 2025) — The Equal Employment Opportunity Commission is actively engaged in efforts to align its internal principles, guidelines, and policies to ensure the responsible and trustworthy deployment and use of AI. This document outlines the Agency’s plans to meet those requirements applicable to a non-CFO Act Federal agency related to M-25-21’s main goals of (1) driving AI innovation; (2) improving AI governance; and (3) fostering public trust in federal use of AI.
  • iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit (September 11, 2023) — iTutorGroup, three integrated companies providing English-language tutoring services to students in China, will pay $365,000 and furnish other relief to settle an employment discrimination lawsuit filed by the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency announced.
  • inactive Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (May 18, 2023) — This publication is part of the EEOC’s ongoing effort to help ensure that the use of new technologies complies with federal EEO law by educating employers, employees, and other stakeholders about the application of these laws to the use of software and automated systems in employment decisions.
  • inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 3, 2023) — As the statement reads, “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
  • inactive Artificial Intelligence and Algorithmic Fairness Initiative (2021) — In 2021, U.S. Equal Employment Opportunity Commission (EEOC) Chair Charlotte A. Burrows launched an agency-wide initiative to ensure that the use of software, including artificial intelligence (AI), machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.

Federal Aviation Administration (FAA) #

Federal Communications Commission (FCC) #

  • FCC Proposes First AI-Generated Robocall & Robotext Rules (August 8, 2024) — The Federal Communications Commission proposed new consumer protections against AI-generated robocalls and robotexts. The proposal seeks comment on the definition of AI-generated calls; on requiring callers to disclose their use of AI-generated calls and text messages; on supporting technologies that alert and protect consumers from unwanted and illegal AI robocalls; and on protecting positive uses of AI that help people with disabilities use telephone networks. These proposed rules follow from other FCC actions:
    • AI-Generated Voices in Robocalls Illegal (February 8, 2024) — The FCC announced the unanimous adoption of a Declaratory Ruling that recognizes calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA).
    • FCC Launches Inquiry into AI’s Impact on Robocalls and Robotexts (November 15, 2023) — The FCC adopted a Notice of Inquiry (NOI) that seeks comment to better understand the impact of emerging artificial intelligence (AI) technologies as part of the FCC’s efforts to protect consumers from unwanted and illegal telephone calls and text messages.
  • The Opportunities and Challenges of Artificial Intelligence for Communications Networks and Consumers (July 13, 2023) — The Federal Communications Commission and the National Science Foundation co-hosted this half-day workshop that convened stakeholders to discuss the opportunities that artificial intelligence (AI) presents for spectrum management and network resiliency, and the challenges AI brings to vital consumer issues like robocalls/robotexts and digital discrimination.

Federal Energy Regulatory Commission (FERC) #

  • FERC to Act on Large Load Interconnection Docket by June 2026 (April 16, 2026) — The Federal Energy Regulatory Commission (FERC) said that it will take action by June 2026 on the Advance Notice of Proposed Rulemaking (ANOPR) proceeding initiated by the U.S. Secretary of Energy. The ANOPR directs the Commission to consider potential reforms designed to ensure the timely, orderly, and equitable integration of significant electrical loads—such as the increasing demand from data centers—into the nation’s transmission infrastructure.
  • FERC PJM Co-Location Order (December 18, 2025) — On February 20, 2025, FERC initiated a show cause proceeding into whether the sections of PJM’s tariff that govern co-location of generation with loads—including data centers and industrial facilities—are just, reasonable, and not unduly discriminatory or preferential. The show cause order raised concerns that PJM’s tariff lacks clarity on rates, terms, and conditions that would apply to co-location arrangements.
  • FERC Talen-Amazon Rejection (November 1, 2024) — FERC rejected PJM’s amended interconnection service agreement expanding Amazon’s co-located data center load to 480 MW from Talen’s Susquehanna nuclear plant, setting an important precedent for AI data center co-location.

Federal Trade Commission (FTC) #

Food and Drug Administration (FDA) #

  • Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions (August 2025) — The FDA issued this guidance to provide recommendations for predetermined change control plans (PCCPs) tailored to artificial intelligence (AI)-enabled devices. The recommendations in this guidance are intended to support iterative improvement through modifications to AI-enabled devices while continuing to provide a reasonable assurance of device safety and effectiveness. This guidance recommends that a PCCP describe the planned device modifications, the associated methodology to develop, validate, and implement those modifications, and an assessment of the impact of those modifications. The recommendations in this guidance apply to AI-enabled devices, including the device constituent part of device-led combination products, reviewed through the 510(k), De Novo, and PMA pathways.
  • Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products (January 2025) — This guidance provides recommendations to sponsors and other interested parties on the use of artificial intelligence (AI) to produce information or data intended to support regulatory decision-making regarding safety, effectiveness, or quality for drugs. Specifically, this guidance provides a risk-based credibility assessment framework that may be used for establishing and evaluating the credibility of an AI model for a particular context of use (COU).
  • Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles (June 2024) — In 2021, Health Canada, the U.S. Food and Drug Administration (FDA) and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) jointly identified 10 guiding principles for good machine learning practice (GMLP). GMLP supports the development of safe, effective and high-quality artificial intelligence/machine learning technologies that can learn from real-world use and, in some cases, improve device performance. The FDA, Health Canada, and MHRA have further identified guiding principles for transparency for machine learning-enabled medical devices (MLMDs).
  • Artificial Intelligence/Machine Learning-Based Software as a Medical Device Action Plan (March 18, 2024) — FDA outlined its approach to AI/ML-based software as a medical device, including cross-center coordination and future regulatory planning.
  • Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together (March 15, 2024) — This paper describes four areas of focus for CBER, CDER, CDRH, and OCP regarding the development and use of AI across the medical product life cycle.
  • CDER’s Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative (November 1, 2023) — Advanced manufacturing technologies have the potential to improve the reliability and robustness of the manufacturing process and supply chain and increase timely access to quality medicines for the American public. CDER established the Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) initiative to prepare a regulatory framework to support the adoption of advanced manufacturing technologies that could bring benefits to patients.
  • Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles (October 2023) — In 2021, the U.S. Food and Drug Administration (FDA), Health Canada, and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA) jointly identified 10 guiding principles that can inform the development of Good Machine Learning Practice (GMLP). GMLP supports the development of safe, effective, and high-quality artificial intelligence/machine learning technologies that can learn from real-world use and, in some cases, improve device performance. In this document, the FDA, Health Canada, and MHRA jointly identified 5 guiding principles for predetermined change control plans. These principles draw upon the overarching GMLP guiding principles, in particular principle 10, which states that deployed models are monitored for performance and re-training risks are managed.
  • Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products (May 10, 2023) — To fulfill its mission of protecting, promoting, and advancing public health, the FDA’s Center for Drug Evaluation and Research (CDER), in collaboration with the Center for Biologics Evaluation and Research (CBER) and the Center for Devices and Radiological Health (CDRH), including the Digital Health Center of Excellence (DHCoE), published this document to facilitate a discussion with stakeholders on the use of artificial intelligence and machine learning in drug development, including in the development of medical devices intended to be used with drugs, to help inform the regulatory landscape in this area.
  • Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions: Guidance for Industry and Food and Drug Administration Staff (April 3, 2023) — This guidance is intended to provide a forward-thinking approach to promote the development of safe and effective AI-enabled devices.
  • Good Machine Learning Practice for Medical Device Development: Guiding Principles (October 2021) — The U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) have jointly identified 10 guiding principles that can inform the development of Good Machine Learning Practice (GMLP). These guiding principles will help promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML).
  • Artificial Intelligence and Machine Learning Software as a Medical Device Action Plan (January 2021) — This AI/ML-Based Software as a Medical Device Action Plan was developed in direct response to the stakeholder feedback and it builds on the Agency’s longstanding commitment to support innovative work in the regulation of medical device software and other digital health technologies.
  • Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Discussion Paper and Request for Feedback (April 2, 2019) — This discussion paper proposed a framework for modifications to AI/ML-based SaMD that is based on the internationally harmonized International Medical Device Regulators Forum (IMDRF) risk categorization principles, FDA’s benefit-risk framework, risk management principles in the software modifications guidance, and the organization-based TPLC approach as envisioned in the Digital Health Software Precertification (Pre-Cert) Program. It also leverages practices from current premarket programs, including the 510(k), De Novo, and PMA pathways.

Department of Health and Human Services (HHS) #

Department of Homeland Security (DHS) #

  • Directive Number: 139-08: Artificial Intelligence Use and Acquisition (January 15, 2025) — The purpose is to advance AI innovation and governance while managing risks from the use of AI, particularly those affecting the safety or rights of individuals.
  • AI Cybersecurity Collaboration Playbook (January 14, 2025) — The AI Cybersecurity Collaboration Playbook provides guidance to organizations across the AI community, including AI providers, developers, and adopters, for sharing AI-related cybersecurity information voluntarily with the Cybersecurity and Infrastructure Security Agency (CISA) and other partners through the Joint Cyber Defense Collaborative (JCDC).
  • DHS Playbook for Public Sector Generative Artificial Intelligence Deployment (January 6, 2025) — The DHS GenAI Public Sector Playbook encapsulates the lessons learned from DHS’s pilot programs and offers a series of actionable steps for the responsible adoption of GenAI technologies in the public sector.
  • 2024 DHS Artificial Intelligence Roadmap (March 2024) — This document outlines DHS’s AI initiatives and the technology’s potential across the homeland security enterprise: “It is the most detailed AI plan put forward by a federal agency to date, directing our efforts to fully realize AI’s potential to protect the American people and our homeland, while steadfastly protecting privacy, civil rights, and civil liberties.”
  • Establishment of the Artificial Intelligence Safety and Security Board (April 29, 2024) — Pursuant to Executive Order (E.O.) 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” dated October 30, 2023, the Department of Homeland Security, through the Office of Partnership and Engagement, has established the Artificial Intelligence Safety and Security Board (the Board). The Board will provide the Secretary of Homeland Security information, advice, and recommendations to advance the security and resilience of our nation’s critical infrastructure in its use of artificial intelligence (AI).
  • DHS Announces New Policies and Measures Promoting Responsible Use of Artificial Intelligence (September 14, 2023) — Building on the task force’s work, DHS announced new policies to ensure the responsible use of AI. Notably, Secretary Mayorkas appointed CIO Eric Hysen as DHS’s first Chief AI Officer to coordinate AI innovation and safety across the agency. DHS also issued:
    • Policy Statement 139-06, “Acquisition and Use of Artificial Intelligence and Machine Learning by DHS Components,” sets department-wide AI principles. It mandates that DHS’s use of AI align with EO 13960 (promoting trustworthy AI in government) and all applicable laws, prohibits AI systems that engage in illegal discrimination, and requires that AI adoption demonstrably improves mission effectiveness. DHS affirmed it “will not collect, use, or disseminate data used in AI activities” or deploy AI systems that make decisions based on sensitive characteristics like race, sex, or religion, echoing a commitment to minimize bias.
    • Directive 026-11, “Use of Face Recognition and Face Capture Technologies,” imposes strict oversight on DHS’s use of facial recognition AI. All such systems must undergo extensive testing to ensure no unintended bias or disparate impact, with review by DHS’s Privacy Office and Office for Civil Rights/Civil Liberties. The directive also gives U.S. citizens the right to opt out of face recognition for non-law-enforcement uses and prohibits using face recognition as the sole basis for any law enforcement action. Together, these policies ensure DHS harnesses AI’s benefits for security while protecting privacy and civil rights.
  • DHS Artificial Intelligence Task Force (April 21, 2023) — DHS Secretary Alejandro Mayorkas launched the Department’s first-ever AI Task Force (AITF) to drive specific applications of AI in critical homeland security missions. The task force is applying AI to enhance supply chain screening (detecting forced-labor goods), counter the flow of fentanyl (identifying illicit shipments and precursor chemicals), bolster cybersecurity and critical infrastructure protection, and aid investigations of child exploitation by analyzing large volumes of data.

Department of Housing and Urban Development (HUD) #

  • inactive Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing (April 29, 2024) — This guidance from HUD’s Office of Fair Housing and Equal Opportunity explains how the Fair Housing Act protects certain rights of applicants for rental housing. It discusses how housing providers and companies that offer tenant screening services can screen applicants for rental housing in a nondiscriminatory way and recommends best practices for complying with the Fair Housing Act. This guidance may also help applicants understand their rights and recognize when they might have been denied housing unlawfully.
  • inactive Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms (April 29, 2024) — This guidance from HUD’s Office of Fair Housing and Equal Opportunity explains how the Fair Housing Act (“Act”) applies to the advertising of housing, credit, and other real estate-related transactions through digital platforms. In particular, it addresses the increasingly common use of automated systems, such as algorithmic processes and Artificial Intelligence (“AI”), to facilitate advertisement targeting and delivery.

Department of Justice (DOJ) #

  • Justice Department and National Economic Council Partner to Identify State Laws with Out-Of-State Economic Impacts (August 15, 2025) — The Justice Department and the National Economic Council announced an effort to identify State laws that significantly and adversely affect the national economy or interstate economic activity and to solicit solutions to address such effects. They invite public comments to support the Administration’s mission to address laws that hinder America’s economic growth, including those that burden industry and our small businesses.
  • AI Inventory (January 21, 2025) — The October 30, 2023, Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” required federal agencies to report on their use of AI by conducting an annual inventory of their AI use cases.
  • Justice Department Issues Final Rule Addressing Threat Posed by Foreign Adversaries’ Access to Americans’ Sensitive Personal Data (December 27, 2024) — The Justice Department issued a comprehensive final rule carrying out Executive Order (E.O.) 14117 “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.” The E.O. charged the Justice Department with establishing and implementing a new regulatory program to address the urgent and extraordinary national security threat posed by the continuing efforts of countries of concern (and covered persons that they can leverage) to access and exploit Americans’ bulk sensitive personal data and certain U.S. Government-related data. The Final Rule will take effect 90 days from the date of the Final Rule’s publication, with certain affirmative due diligence, reporting, and auditing requirements taking effect 270 days after publication.
  • inactive Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems (April 4, 2024) — According to the statement, “Existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices. The Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, the Equal Employment Opportunity Commission, and the Federal Trade Commission are among the federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections. We take seriously our responsibility to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws, and each of our agencies has previously expressed concern about potentially harmful uses of automated systems.”
  • Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring. (May 12, 2022) — This guidance explains how algorithms and artificial intelligence can lead to disability discrimination in hiring. The Department of Justice enforces disability discrimination laws with respect to state and local government employers. The Equal Employment Opportunity Commission (EEOC) enforces disability discrimination laws with respect to employers in the private sector and the federal government. The obligation to avoid disability discrimination in employment applies to both public and private employers.
  • How to Avoid Unlawful Discrimination and Other Form I-9 Violations When Using Commercial or Proprietary Programs to Electronically Complete the Form I-9 or Participate in E-Verify (December 1, 2023) — This fact sheet discusses what employers should keep in mind if they use private sector commercial or proprietary products to electronically complete, modify, or retain the Form I-9. Although this document refers to these products collectively as Form I-9 software programs, the information here also applies to employers who use these programs to participate in E-Verify. The Form I-9 software programs discussed in this fact sheet do not include programs that the Department of Homeland Security directly oversees and administers, such as E-Verify.
  • Readout of Justice Department’s Interagency Convening on Advancing Equity in Artificial Intelligence (July 10, 2024) — The Justice Department’s Civil Rights Division convened principals of federal agency civil rights offices and senior government officials to foster AI and civil rights coordination. This was the third such convening hosted by the Civil Rights Division following President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence (EO 14110), which tasks the Civil Rights Division with coordinating federal agencies to use our authorities to prevent and address unlawful discrimination and other harms that may result from the use of AI in programs and benefits, while preserving the potential social, medical and other advances AI may spur.
  • Justice Department Files Statement of Interest Supporting Private Parties’ Right to Bring Voting Rights Act Challenge to Robocalls (July 25, 2024) — The Justice Department filed a statement of interest in the U.S. District Court for the District of New Hampshire supporting the right of private plaintiffs to bring a lawsuit challenging robocalls as intimidating, threatening or coercive in violation of Section 11(b) of the Voting Rights Act. This brief is one of several filed by the Justice Department explaining the prohibition against voter intimidation in Section 11(b) and supporting the longstanding principle that private plaintiffs can sue to vindicate important rights protected by the Voting Rights Act.
  • Statement of Interest (SOI) (January 9, 2023) — The Department of Justice and the Department of Housing and Urban Development filed a Statement of Interest under 28 U.S.C. § 517 to assist the Court in evaluating the application of the Fair Housing Act (FHA), 42 U.S.C. § 3601 et seq., in challenges to an algorithm-based tenant screening system. The United States has a strong interest in ensuring the correct interpretation and application of the FHA’s pleading standard for disparate impact claims, including where the use of algorithms may perpetuate housing discrimination.
  • U.S. v. Regents of the University of California (December 2, 2022) — In November 2022, the Civil Rights Division filed a consent decree resolving allegations that the Regents of the University of California on behalf of the University of California, Berkeley, failed to provide much of its online content (such as courses, lectures, and conferences) in an accessible manner to individuals with disabilities, including through the use of inaccurate automated captioning technology for people with hearing impairments. On December 2, 2022, the district court approved the decree. Under the decree, among other things, the University will not rely solely on YouTube’s automated AI-based technology and will provide accurate captions for its online content.
  • United States Attorney Resolves Groundbreaking Suit Against Meta Platforms, Inc., Formerly Known As Facebook, To Address Discriminatory Advertising For Housing (June 21, 2022) — Damian Williams, the United States Attorney for the Southern District of New York, along with Kristen Clarke, Assistant Attorney General for the Justice Department’s Civil Rights Division, announced that the Justice Department has entered into a settlement agreement resolving allegations that Meta Platforms, Inc., formerly known as Facebook, Inc., engaged in discriminatory advertising in violation of the Fair Housing Act (FHA). The agreement would resolve a lawsuit filed today in the U.S. District Court for the Southern District of New York alleging that Meta’s housing advertising system discriminates against Facebook users based on their race, color, religion, sex, disability, familial status, and national origin. The proposed settlement is subject to the review and approval by a district judge in the Southern District of New York.
  • Justice Department Secures Settlements with 16 Employers for Posting Job Advertisements on College Recruiting Platforms That Discriminated Against Non-U.S. Citizens (June 27, 2022) — The Department of Justice announced that it signed settlement agreements requiring 16 private employers to pay a total of $832,944 in civil penalties to resolve claims that each company discriminated against non-U.S. citizens in hiring. According to the department, each company posted at least one job announcement excluding non-U.S. citizens on an online job recruitment platform operated by the Georgia Institute of Technology (Georgia Tech). One employer posted as many as 74 discriminatory advertisements on Georgia Tech’s platform, while several of the employers posted discriminatory advertisements on other college or university platforms as well. The department determined that the advertisements deterred qualified students from applying for jobs because of their citizenship status, and in many cases the citizenship status restrictions also blocked students from applying or even meeting with company recruiters.
  • Justice Department Settles with Large Health Care Organization to Resolve Software-Based Immigration-Related Discrimination Claims (August 25, 2021) — The Department of Justice announced that it reached a settlement with Ascension Health Alliance (Ascension), a Missouri-based health care organization with more than 2,600 sites, including 146 hospitals and more than 40 senior living facilities, in 19 states and the District of Columbia. The settlement resolves the department’s claims that Ascension violated the Immigration and Nationality Act (INA) when it discriminated against work-authorized non-U.S. citizens because of their citizenship status by requesting more or different documents than necessary when attempting to reverify their continued work authorization.
  • The Landing Page for Artificial Intelligence and Civil Rights at the DOJ.

Department of Labor (DOL) #

National Artificial Intelligence Advisory Committee (NAIAC) #

During the Biden Administration, NAIAC issued a number of reports, which are no longer available on the official website.

National Institute of Standards and Technology (NIST) #

  • Center for AI Standards and Innovation (CAISI) — Formerly the AI Safety Institute, the Center for AI Standards and Innovation (CAISI) serves as industry’s primary point of contact within the U.S. government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems. The blog is located here.
  • NIST AI 100-1: NIST AI Risk Management Framework (April 29, 2024) — NIST released a draft publication based on the AI Risk Management Framework (AI RMF) to help manage the risks of generative AI. The draft AI RMF Generative AI Profile can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management aligned with their goals. Developed with input from a public working group of more than 2,500 members, the profile focuses on 12 risks and identifies 400+ actions developers can take. For more information, check NIST’s landing page.
  • TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (December 1, 2022) — This Joint Roadmap aims to guide the development of tools, methodologies, and approaches to AI risk management and trustworthy AI by the EU and the United States and to advance our shared interest in supporting international standardization efforts and promoting trustworthy AI on the basis of a shared dedication to democratic values and human rights.

National Institutes of Health (NIH) #

  • Artificial Intelligence in Research: Policy Considerations and Guidance (February 2025) — Advancements in artificial intelligence (AI) are spurring tremendous progress in medical research to enhance human health and longevity. To that end, NIH has a robust system of policies and practices that guide stakeholders across the biomedical and behavioral research ecosystem. The policies, best practices, and regulations listed reflect this framework and should be considered before, during, and after development and use of AI in research. This is not an exhaustive list of all policies and requirements that may apply to any NIH-supported research project but can serve as a guide for the research community.
  • Landing Page for NITRD (April 2, 2026) — The Networking and Information Technology Research and Development (NITRD) Program coordinates Federal R&D to identify, develop, and transition into use the secure, advanced IT, high-performance computing, networking, and software capabilities needed by the Nation, and to foster public-private partnerships that provide world-leading IT capabilities.
  • Landing Page for Bridge2AI (April 23, 2026) — The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) program will propel biomedical research forward by setting the stage for widespread adoption of artificial intelligence (AI) that tackles complex biomedical challenges beyond human intuition.
  • Landing Page for AIM-AHEAD Program (December 30, 2025) — The NIH’s AIM-AHEAD program will establish mutually beneficial and coordinated partnerships to empower researchers and communities across the United States in the development of AI/ML models and enhance the capabilities of this emerging technology, beginning with electronic health record (EHR) data.
  • Landing Page for NIH’s AI Initiatives (May 3, 2024) — Across NIH’s 27 institutes and centers, AI/ML technologies are being developed. This page collects those projects.
  • Landing Page for Advancing Health Research through Multimodal AI (April 23, 2026) — Multimodal AI has the potential to capture the complexity of biomedical and behavioral systems and improve clinical decision‑making, but realizing this promise requires new innovations in data fusion, model training, evaluation, and application. The purpose of this program is to develop ethically focused and data-driven multimodal AI approaches to more closely model, interpret, and predict complex biological, behavioral, and health systems and enhance our understanding of health and the ability to detect and treat human diseases.
  • inactive Using AI in Peer Review Is a Breach of Confidentiality (June 2023)
  • The Landing Page for Artificial Intelligence at NIH.

National Science and Technology Council #

  • Select Committee on AI — The Select Committee on AI, created in June 2018, advises The White House on interagency AI R&D priorities and improving the coordination of Federal AI efforts to ensure continued U.S. leadership in this field. Members focus on policies to prioritize and promote AI R&D, leverage Federal data and computing resources for the AI community, and train the AI-ready workforce.

National Science Foundation (NSF) #

  • Request for Information on the Development of a 2025 National Artificial Intelligence (AI) Research and Development (R&D) Strategic Plan (April 29, 2025) — The Office of Science and Technology Policy (OSTP) and the Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO) invite input from all interested parties on how the previous administration’s National Artificial Intelligence Research and Development Strategic Plan (2023 Update) should be rewritten so that the United States can secure its position as the unrivaled world leader in artificial intelligence. The goal is R&D that accelerates AI-driven innovation, enhances U.S. economic and national security, and promotes human flourishing, while focusing on the federal government’s unique role in AI R&D over the next three to five years.
  • NSF’s National Artificial Intelligence Research Institutes — Launched in 2020; consists of 25 AI institutes connecting over 500 funded and collaborative institutions globally.
  • National Artificial Intelligence Research Resource Pilot — The National Artificial Intelligence Research Resource (NAIRR) will provide a shared national research infrastructure that connects U.S. researchers and educators to AI resources — computation, data, software, models, and training and educational materials — to advance research, discovery, and innovation. As directed by Winning the Race: America’s AI Action Plan, the launch of solicitation NSF 25-546 begins the transition from the NAIRR pilot to a scalable and sustainable NAIRR.

National Telecommunications and Information Administration (NTIA) #

  • AI Accountability Policy Request for Comment (April 11, 2023) — NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems. Much as financial audits create trust in the accuracy of a business’ financial statements, so for AI, such mechanisms can help provide assurance that an AI system is trustworthy. Just as financial accountability required policy and governance to develop, so too will AI system accountability.
  • Dual-Use Foundation Models with Widely Available Model Weights Report (July 30, 2024) — This Report provides a non-exhaustive review of the risks and benefits of open foundation models, broken down into the broad categories of Public Safety; Societal Risks and Wellbeing; Competition, Innovation, and Research; Geopolitical Considerations; and Uncertainty in Future Risks and Benefits. It is important to understand these risks as marginal risks—that is, risks that are unique to the deployment of dual-use foundation models with widely available model weights relative to risks from other existing technologies, including closed weight models and models that are not considered dual-use foundation models under the EO definition (such as foundation models with fewer than 10 billion parameters).

Securities and Exchange Commission (SEC) #

  • SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence (March 18, 2024) — The Securities and Exchange Commission announced a settlement with two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their purported use of artificial intelligence (AI). The firms agreed to pay $400,000 in total civil penalties.
  • Regulating AI in Securities Markets (July 26, 2023) — The SEC voted to propose new rules addressing conflicts of interest arising from broker-dealers’ and investment advisers’ use of predictive data analytics (PDA) and AI when interacting with investors. Concerned that firms’ AI-driven platforms might optimize for the firm’s benefit (e.g. maximizing fees or trading volume) at the expense of investors, the SEC’s proposal (Release No. 34-97990) would require firms to: (1) evaluate whether any AI/PDA usage places the firm’s interest ahead of the client’s, and (2) eliminate or neutralize the effect of any such conflict.

Department of State #

  • Department of State Launches Pax Silica Fund (March 26, 2026) — The U.S. Department of State announced that it intends, working with Congress, to allocate $250 million in foreign assistance funding for a new Pax Silica Fund initiative to support critical minerals extraction, processing, critical infrastructure, and manufacturing assets that support secure and reliable semiconductor supply chains.
  • Pax Silica Initiative (December 12, 2025) — Pax Silica is the Department of State’s flagship effort on AI and supply chain security, advancing new economic security consensus among allies and trusted partners.
  • Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (January 2024) — The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy provides a normative framework addressing the use of these capabilities in the military domain. Launched in February 2023 at the Responsible AI in the Military Domain Summit (REAIM 2023) in the Hague, the Declaration aims to build international consensus around responsible behavior and guide states’ development, deployment, and use of military AI. The Declaration provides a basis for exchanging best practices and building states’ capacities, which will allow endorsing States to share experience and ideas.

Department of Transportation (DOT) #

  • Trump’s Transportation Secretary Sean P. Duffy Unveils New Automated Vehicle Framework as Part of Innovation Agenda (April 24, 2025) – U.S. Transportation Secretary Sean P. Duffy unveiled the National Highway Traffic Safety Administration’s (NHTSA) new Automated Vehicle (AV) Framework as part of his transportation innovation agenda. The new framework will unleash American ingenuity, maintain key safety standards, and prevent a harmful patchwork of state laws and regulations.
  • NHTSA Finalizes First Occupant Protection Safety Standards for Vehicles Without Driving Controls (March 2022) — The U.S. Department of Transportation’s National Highway Traffic Safety Administration issued a first-of-its-kind final rule to ensure safety of occupants in automated vehicles. This rule updates the occupant protection Federal Motor Vehicle Safety Standards to account for vehicles that do not have the traditional manual controls associated with a human driver because they are equipped with automated driving systems.
  • Standing General Order 2021-01 (June 2021) – This order mandates that manufacturers and operators of vehicles equipped with automated driving systems (ADS) or certain advanced driver-assistance systems (ADAS) promptly report any serious crashes to NHTSA. In April 2025, NHTSA updated and extended this Order to streamline reporting burdens while preserving the requirement that firms report the most serious incidents within 5 days.
  • Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0 (January 2020) — In October 2018, Preparing for the Future of Transportation: Automated Vehicles 3.0 (AV 3.0) introduced guiding principles for AV innovation for all surface transportation modes, and described the USDOT’s strategy to address existing barriers to potential safety benefits and progress. Building upon these efforts, AV 4.0 details 10 U.S. Government principles to protect users and communities, promote efficient markets, and to facilitate coordinated efforts to ensure a standardized Federal approach to American leadership in AVs.
    • Preparing for the Future of Transportation: Automated Vehicles 3.0 (October 2018) — Preparing for the Future of Transportation: Automated Vehicles 3.0 introduces guiding principles and describes the Department’s strategy to address existing barriers to safety innovation and progress. It also communicates the Department’s agenda to the public and stakeholders on important policy issues, and identifies opportunities for cross-modal collaboration.
    • Automated Driving Systems: A Vision for Safety 2.0 (September 2017) — A Vision for Safety replaces the Federal Automated Vehicle Policy released in 2016. This updated policy framework offers a path forward for the safe deployment of automated vehicles by: (1) Encouraging new entrants and ideas that deliver safer vehicles; (2) Making Department regulatory processes more nimble to help match the pace of private sector innovation; and (3) Supporting industry innovation and encouraging open communication with the public and with stakeholders.
  • USDOT’s Automated Vehicles Comprehensive Plan (January 11, 2021) — This comprehensive plan lays out U.S. DOT’s multimodal strategy to promote collaboration and transparency, modernize the regulatory environment, and prepare the transportation system for the safe integration of automated vehicles. It illustrates how the Department’s work extends beyond government to meet the challenges of a modern transportation system by providing real-world examples of how the Department’s operating administrations collaborate to address the needs of emerging technology applications.

Department of the Treasury #
Last updated: April 29, 2026