EXECUTIVE SUMMARY
On November 24, 2025, President Trump signed Executive Order 14363, Launching the Genesis Mission, establishing a coordinated national effort to harness artificial intelligence for scientific discovery and energy innovation. The order sets an ambitious goal to double the productivity and impact of federally funded research and development within a decade. To accomplish this, the Department of Energy (DOE) has identified 26 National Science and Technology Challenges spanning biotechnology, critical materials, nuclear energy, fusion, quantum information science, grid modernization, and more. The executive order explicitly directs agencies to launch prize competitions to incentivize private-sector participation in AI-driven research aligned with Mission objectives.
Prizes are a proven, high-leverage policy tool uniquely suited to the Genesis Mission. They lower barriers to entry beyond traditional government funding recipients, pay only for results, and capture public imagination. Federal prize competitions have launched thousands of startups, unlocked new markets, and benchmarked innovative solutions (e.g., the American-Made Net Load Forecasting Prize). Rather than biasing toward incumbents, prizes serve as an equalizer, drawing the best ideas from across the nation.
We propose DOE design and launch the AI for American Innovation (AI2, “AI-Squared”) Prize Program, a series of competitions that advance Genesis Mission goals across three tracks: AI for Science, AI for Energy Innovation, and AI for DOE Operations. The illustrative prize examples in the Appendix use a variety of prize designs and funding amounts; designs vary by objective, and each should be carefully tailored to maximize impact on its specific challenge area. Anticipated outcomes include new AI benchmarks, startup creation, and measurable advances on Genesis challenge metrics. The program leverages private sector sponsorship, including compute resources from technology firms and philanthropic funds from the Foundation for Energy Security and Innovation (FESI), enabling DOE to rapidly generate, evaluate, and advance new ideas while signaling to Congress and the innovation community the breadth of programs the Genesis Mission can deploy.
THE CASE FOR URGENCY
The Genesis Mission’s 60-day challenge identification window and annual reporting requirements create natural deadlines for action. Prize competitions can be designed and launched faster than traditional federal funding mechanisms, providing early wins that demonstrate momentum. With 26 defined challenges, clear prize authority under Section 5, and the Administration’s stated commitment to AI-driven innovation, the conditions for a successful prize program are in place. What is needed is the decision to launch.
Note: The authors do not believe prizes should be the only mechanism used for the Genesis Mission. This paper illustrates that prizes can be a useful part of the overall programmatic strategy.
STRATEGIC RATIONALE
The policy foundation is in place. Executive Order (EO) 14179, Removing Barriers to American Leadership in Artificial Intelligence (January 23, 2025), established the overarching policy to sustain and enhance America’s global AI dominance for economic competitiveness and national security. The Genesis Mission EO builds on this foundation by directing agencies to apply AI tools to the nation’s most pressing scientific and technological challenges. Section 5 of the Genesis EO specifically authorizes prize competitions across participating agencies, creating clear legal authority and policy alignment for a DOE prize program.
The science and energy challenges are defined. DOE has published 26 National Science and Technology Challenges under Section 4 of the Genesis EO, spanning domains from advanced manufacturing and biotechnology to fusion energy, grid modernization, critical minerals, and quantum computing. Each challenge identifies a technical problem, an AI solution pathway, DOE’s unique justification for leadership, and the expected national impact. These challenges provide a ready-made framework for structuring prize competitions with clear, measurable objectives.
Prizes complement the rest of the funding toolkit. Traditional grants and cooperative agreements are well-suited to sustaining established research programs but less effective at surfacing novel concepts from non-traditional innovators. Prizes complement these mechanisms by creating demand pull from the private sector, attracting participants who would not otherwise navigate the federal grants process.
Prizes generate a pipeline of vetted ideas that can inform subsequent program investments as innovation milestones are met. Sequenced prizes can also serve as a means to build a scientific or commercialization program roadmap, using ideation competitions to reveal where the best private-sector thinking is concentrated, guiding DOE’s allocation of larger resources.
Precedent supports the approach. Federal prize authority under the America COMPETES Act (15 U.S.C. § 3719) has been used across government to drive innovation. DOE’s own American-Made Challenges have awarded over $100 million in prizes, supporting more than 4,500 innovators and launching hundreds of energy companies. The proposed AI2 Prize builds on this proven infrastructure while responding directly to the Genesis EO’s directive to use prize competitions as a tool for Mission execution.
PROPOSED SOLUTION: THE AI FOR AMERICAN INNOVATION (AI2) PRIZE PROGRAM
The AI2 Prize Program is a series of competitions designed to generate, test, and scale AI-driven solutions aligned with DOE’s 26 Genesis Challenges. The program structure mirrors DOE’s existing American-Made prizes while incorporating features tailored to the AI innovation landscape. Alternative designs beyond the American-Made prize construct could also be developed to better align with competitor needs and motivations.
Prize Tracks. The competition is organized into three tracks that map to the Genesis Mission’s scope:
- Track A – AI for Science: Enabling and Accelerating Discovery — Autonomous experimentation, literature synthesis, scientific foundation models, and experiment design. The Appendix has examples of prizes covering: Achieving AI-Driven Autonomous Laboratories, Discovering Quantum Algorithms with AI, and Accelerating Materials Discovery.
- Track B – AI for Energy Innovation: Accelerating Commercialization and National Competitiveness — Grid operations, materials discovery, energy forecasting, and industrial process optimization. The Appendix has examples of prizes covering: Scaling the Grid to Power the American Economy, Predicting U.S. Water for Energy, Accelerating Delivery of Fusion Energy, and Delivering Nuclear Energy That is Faster, Safer, Cheaper.
- Track C – AI for DOE Operations: Improving Government Efficiency and Outcomes — Compliance review, application processing, research gap identification, and international technology intelligence. Addresses DOE’s internal capacity constraints while demonstrating AI’s dual value for both generating and managing innovation. The Appendix includes examples of prizes covering low- to high-dollar use cases that can increase efficiencies across DOE, and spur scientific discovery and energy commercialization and deployment.
Funding Flexibility. Prize concepts in the attached appendix range from $10,000 (competitive tool evaluations for DOE operations) to $100 million (grand challenges in fusion AI or scientific foundation models). This range allows DOE to scale its resources to the level of effort required for different prizes. Figures are based on experience from other DOE prizes, funding opportunities, and grand challenges. The appendix provides illustrative examples across all three tracks at different funding levels; they are meant to show what is possible. Should DOE pursue any of these ideas, each prize should be customized to the goals and outcomes most relevant to Genesis. DOE should also identify other topics well-suited to prizes.
ANTICIPATED IMPACT AND VALUE
Advancing National Interests. Successful prizes do more than solve individual technical problems—they catalyze entire ecosystems of innovation. As previously mentioned, DOE’s American-Made Challenges, launched in 2018, have already spawned hundreds of companies and supported thousands of energy innovators. An AI-focused prize program has the potential to accelerate scientific discovery across disciplines, strengthen U.S. competitiveness in the global AI and energy technology races, and spur the development of new industries and jobs.
For a relatively modest investment, DOE can demonstrate measurable progress toward Genesis goals while building a pipeline of ideas that informs larger programmatic investments. This will rapidly increase the number of scientific discoveries, energy innovations and their path to market, and efficiencies across the Department.
Benefit to the Administration. By launching a prize series—particularly with private sector partners engaged from the start—DOE signals that the Genesis Mission is actively soliciting cutting-edge ideas from across the American innovation ecosystem that will increase energy dominance, scientific leadership, national security, and economic competitiveness.
Private Sector Collaboration. Prizes create natural entry points for private sector sponsorship, which can increase prize visibility and potentially bring in additional financial or technical resources. Creative models include compute sponsorship from AI and technology firms, co-funded prize purses (i.e., the amounts paid to prize winners), mentorship from industry leaders, and philanthropic partnerships. FESI is well-positioned to develop innovative sponsorship models not previously available to DOE, including leveraging philanthropic capital to advance the Administration’s AI mission.
Build AI Capacity and Capabilities. The prize program itself serves as a vehicle for building AI capacity within DOE, from increasing familiarity with AI and its applications to removing red tape and optimizing project management. Running AI-focused competitions requires developing internal expertise in AI evaluation, data infrastructure, and technology assessment. For example, high-volume submissions can be managed through AI-assisted review platforms for triage, reviewer matching, and duplication detection—tools that corporations like Google are already developing for DOE’s Office of Critical Materials and Energy Innovation. Note: The authors firmly believe any AI review tool should be thoroughly tested and quality-checked against human reviews before leading any evaluation process.
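To make the duplication-detection idea concrete, here is a minimal, purely illustrative sketch (not any vendor's product) of how a review platform might flag near-duplicate submissions for human triage, using Jaccard similarity over word shingles; all function names and the threshold are assumptions for illustration only.

```python
# Illustrative sketch only: flag near-duplicate prize submissions for human
# review using Jaccard similarity over k-word shingles. Names and the
# similarity threshold are hypothetical, not an actual DOE or vendor tool.
import re


def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles from lowercased text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def flag_duplicates(submissions: dict, threshold: float = 0.6) -> list:
    """Return pairs of submission IDs whose similarity exceeds the threshold."""
    sets = {sid: shingles(text) for sid, text in submissions.items()}
    ids = sorted(sets)
    return [(x, y) for i, x in enumerate(ids) for y in ids[i + 1:]
            if jaccard(sets[x], sets[y]) >= threshold]
```

In practice a production platform would use more robust methods (embeddings, locality-sensitive hashing), but even this simple approach shows why flagged pairs should go to human reviewers rather than be auto-rejected.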
Inter- and Intra-Agency Coordination. The Genesis EO directs coordination within and across agencies. When prize models prove effective, they can be expanded to include other agencies via joint programs, consistent with Section 5’s directive to launch coordinated funding opportunities or prize competitions across participating agencies. Given the multidisciplinary nature of Genesis challenges, programming can bring together a diverse set of offices to execute common goals using a common tool (prizes).
NEXT STEPS
We recommend convening a working session with DOE Genesis Mission leadership, relevant program offices, national laboratory representatives, prize program design experts, and potential private sector partners to explore how prize competitions could advance specific Genesis Challenges. This session would assess which challenges are best suited to a prize approach, identify available funding authorities and sponsorship models, and define the parameters for an initial competition.
APPENDIX: Genesis Mission Illustrative Prize Concepts
AI FOR SCIENCE
Discovering Quantum Algorithms with AI
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| Discovering Quantum Algorithms with AI | Discovering Quantum Algorithms with AI (p.8) | $1M |
Description
Teams use AI agents to autonomously discover, construct, and benchmark novel quantum algorithms for DOE-relevant computational problems, such as molecular simulations, materials discovery and optimization, and nuclear physics calculations. Judged on provable quantum advantage, generalizability, and degree of AI autonomy. Single-phase, 6–9 month hackathon-to-demo format benchmarked against DOE quantum testbeds at Argonne or Oak Ridge.
AI-Enabled Autonomous Discovery Prize
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| AI-Enabled Autonomous Discovery Prize | Achieving AI-Driven Autonomous Laboratories (p.13) | $10M |
Description
Student, early-career, and professional teams build AI agents that close the loop between hypothesis generation, experimental design, and scientific interpretation using real DOE user facility data (APS, EMSL, JGI). Two concurrent tracks: a Launchpad track for student/early-career teams and an Autonomous Discovery Sprint open to all sectors. 270-day demonstration window maps to DOE’s platform demonstration mandate.
Genesis Scientific Foundation Model Grand Challenge
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| Genesis Scientific Foundation Model Grand Challenge | Multiple DOE challenges | $100M |
Description
Flagship prize. Teams build domain-specific scientific foundation models trained on DOE’s federated datasets that outperform general-purpose AI models (e.g., Claude, Gemini, or OpenAI models) used by scientific experts on a standardized benchmark of scientific reasoning, prediction, and experimental design across at least three DOE mission areas. Three phases: Phase 1 ($10M, 20 teams) for data strategy and architecture; Phase 2 ($30M, 10 teams) for model training on DOE compute (Frontier, Aurora, NERSC); Phase 3 ($60M final prize pool) for deployment at national labs measuring actual scientist productivity improvements. 28–36 months.
How this could actually work: $10M AI-Enabled Autonomous Discovery Prize.
The AI-Enabled Autonomous Discovery Prize challenges student, early-career, and professional teams to build AI agents capable of closing the loop between hypothesis generation, experimental design, and scientific interpretation—using real DOE user facility data and infrastructure. The prize directly advances the Administration’s AI and science priorities by accelerating discovery in DOE’s six priority research domains while building the next-generation scientific workforce. The prize runs two concurrent tracks under a unified competitive framework, converging at a shared final demonstration event.
Track 1 — Launchpad (Student & Early-Career Teams). Open to teams with at least 75% student or early-career participants (within 5 years of terminal degree). Teams build AI agents that autonomously design, execute, and interpret experiments using published DOE user facility datasets from facilities such as the Advanced Photon Source (APS), Environmental Molecular Sciences Laboratory (EMSL), or Joint Genome Institute (JGI). This track runs as a structured hackathon-to-demo sequence over 12 months, with cohort-based mentorship from researchers at national labs. Seed awards support 20 selected teams through a proof-of-concept phase, with finalist awards for the top 5 teams advancing to the live demo. A grand prize is awarded at the final event.
Track 2 — Autonomous Discovery Sprint (Open Track). Open to teams from any sector—universities, national labs, startups, and industry. Teams demonstrate AI systems capable of running a complete closed-loop experimental cycle in one of DOE’s six priority domains, producing either a verifiably novel scientific finding or a measurable acceleration of an active research program.
- Phase 1 concept papers select 15 teams for seed awards.
- Phase 2 proof-of-concept using DOE facility data or validated simulation environments narrows to 8 teams.
- Phase 3 is a live demonstration at an operating DOE user facility, with grand-prize and runner-up awards. This track’s 270-day demonstration window maps directly to DOE’s platform demonstration mandate.
AI FOR ENERGY INNOVATION
Water-for-Energy Forecast Challenge
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| Water-for-Energy Forecast Challenge | Predicting U.S. Water for Energy (p.17) | $1M |
Description
Teams build AI tools that outperform current NOAA CFS and Bureau of Reclamation baseline forecasts for monthly streamflow at 50 USGS gauge stations across five energy-critical river basins. Phase 1: hindcast competition. Phase 2: live real-time forecasting. DOE provides E3SM outputs and HPC allocations. 18–24 months.
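To illustrate how a hindcast competition like this might be scored, here is a minimal sketch of a standard forecast skill score: one minus the ratio of the team's mean squared error to the baseline's (e.g., NOAA CFS). This is an assumed metric for illustration, not DOE's official scoring rubric.

```python
# Illustrative scoring sketch (not an official DOE metric): a hindcast skill
# score comparing a team's monthly streamflow forecasts against a baseline
# forecast at a single gauge. Skill > 0 means the team beats the baseline;
# 1.0 would be a perfect forecast.

def mse(pred, obs):
    """Mean squared error between forecast values and observed streamflow."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)


def skill_score(team, baseline, observed):
    """1 - MSE_team / MSE_baseline, relative to the same observations."""
    return 1.0 - mse(team, observed) / mse(baseline, observed)
```

A leaderboard could then average this score across the 50 gauge stations, with the live Phase 2 simply swapping hindcast data for real-time observations.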
AI Nuclear Licensing Accelerator
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| AI Nuclear Licensing Accelerator | Delivering Nuclear Energy That is Faster, Safer, Cheaper (p.5) | $1M |
Description
Teams build AI tools that accelerate NRC safety analysis review by automatically identifying issues in reactor license applications. Judged on accuracy against historical NRC review dockets and time reduction (≥50%). Phase 1 scored against past applications; Phase 2 piloted with an advanced reactor applicant. 6–12 months.
AI Mineral Prospector
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| AI Mineral Prospector | Securing America’s Critical Minerals Supply (p.4) | $10M |
Description
Teams build tools that “integrate geophysical data, process optimization, cost estimation, and economic modeling.” The prize targets the discovery and identification of high-priority sites and resources, which is critical to solving the full spectrum of challenges in this space. The prize includes a second-stage verification phase.
AI Grid Interconnection Accelerator
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| AI Grid Interconnection Accelerator | Scaling the Grid to Power the American Economy (p.18) | $10M |
Description
Teams automate interconnection engineering studies—power flow analysis, stability assessment, and cost allocation—to address the 2,600 GW queue backlog. Phase 1: validated against completed historical studies at a partner ISO/RTO. Phase 2: run in parallel with a real ongoing study (≥90% accuracy, ≥10x speed improvement). Phase 3: utility adoption incentives over ~12 months. 8–12 months for technical phases.
Grand Challenge Fusion Breakthrough Prize
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| Grand Challenge Fusion Breakthrough Prize | Accelerating Delivery of Fusion Energy | $100M |
Description
Teams demonstrate AI systems that reduce the fusion design-experiment-analyze cycle for plasma control or materials qualification by an order of magnitude. Includes AI controller performance in high-fidelity plasma simulation, deployment on real experimental devices, and transfer test for applicability beyond the machine used to train the controller.
How this could work: $100M Grand Challenge Fusion Breakthrough Prize.
AI-driven acceleration of fusion energy timelines. This is the “where we are today vs. the future we want” framing. Today, fusion confinement experiments take months to years of iterative design. The prize asks teams to demonstrate AI systems that reduce the design-experiment-analyze cycle for fusion plasma control or materials qualification by an order of magnitude.
- Phase 1 ($20M prize pool): AI controller performance in high-fidelity plasma simulation.
- Phase 2 ($30M prize pool): Deployment on real experimental devices—DIII-D, NSTX-U, or partner private facilities.
- Phase 3 ($50M final prize pool): The transfer test—controller trained on one tokamak configuration successfully controls plasma on a different machine, demonstrating the generalizability needed for commercial deployment.
Multi-year, phased structure with escalating barriers. AI-directed experiments at DOE fusion facilities. 30–36 months. The $100M scale is justified because fusion is explicitly called out in the EO, it’s a defining national challenge, and the commercial fusion sector (CFS, TAE, Helion) would bring private capital alongside the prize dollars. The framing is energy dominance through the ultimate energy source.
AI FOR DOE OPERATIONS
DOE Review Tool Buy (P-Card)
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| DOE Review Tool Buy (P-Card) | NA | $10k |
Description
Competitive evaluation of commercially available AI tools for accelerating merit review of grant applications, FOA applications, or compliance documents. Vendors submit products; DOE evaluates against a standardized test set. Winning vendor receives purchase order. Competitive acquisition via challenge authority—fast, cheap, immediately useful.
AI Compliance Accelerator
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| AI Compliance Accelerator | NA | $1M |
Description
Teams build AI systems accelerating NEPA review, export control compliance screening, or other DOE regulatory workflows. Judged on accuracy, speed improvement, and compatibility with DOE’s IT security environment. One to two phases, 6–12 months.
Smart DOE Challenge
| Concept | DOE Challenge | Cost Tier |
| --- | --- | --- |
| Smart DOE Challenge | NA | $10M |
Description
Broader scope: AI systems that measurably improve any major DOE operational workflow. Targets include research gap identification across the DOE portfolio, international technology intelligence, duplicative research detection across the 17 labs, and budget optimization. Open call for concepts, down-select, build, demonstrate. Output: tools DOE can deploy, judged on implementation readiness and projected efficiency gains.
How this could work: $1M AI Compliance Accelerator
Reduce DOE compliance review burden using AI trained on historical determinations. The challenge asks teams to develop a tool that conducts more efficient compliance reviews of applications. DOE would provide anonymized sample data from previous compliance reviews, which teams can use to train their tools. Tool outputs would then be compared to the human baseline from those solicitations, data DOE already holds.
- Phase 1 ($250K prize pool): Teams demonstrate accuracy on a small held-out set of anonymized prior DOE determinations. Up to 5 teams advance.
- Phase 2 ($750K prize pool): Full build-and-evaluate against larger data sets. Judged on accuracy vs. human baseline, speed improvement, and compatibility with DOE’s cybersecurity environment.
Single prize office execution, 9–12 months. Prize structured as payment-for-use rather than a one-time award. DOE pays per review until funds are exhausted, then exercises a sole-source follow-on contract justified by demonstrated performance. The $1M scale is appropriate for a workflow automation tool where DOE controls the training data, the evaluation benchmark, and the deployment environment. This is a build-to-spec challenge, not a moonshot.
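Because DOE controls both the training data and the benchmark, the core evaluation reduces to comparing tool determinations against the historical human baseline. Here is a minimal sketch of such a harness; the field names, determination labels, and output format are illustrative assumptions, not a DOE specification.

```python
# Hypothetical evaluation harness sketch for the AI Compliance Accelerator:
# score a tool's compliance determinations against the historical human
# baseline DOE already holds. Labels and structure are illustrative only.

def evaluate(tool_calls: dict, human_calls: dict) -> dict:
    """Compare tool determinations to human determinations, keyed by
    application ID, and report the agreement rate on the shared set."""
    ids = set(tool_calls) & set(human_calls)
    agree = sum(1 for i in ids if tool_calls[i] == human_calls[i])
    return {"n": len(ids), "agreement": agree / len(ids)}
```

In Phase 1 this would run on a small held-out set; Phase 2 would add speed and cybersecurity-compatibility criteria alongside the same agreement metric.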