AFRL is seeking white papers to identify and build a machine learning/artificial intelligence system that helps researchers efficiently find the conditions needed to optimize and discover new synthetic compounds using multi-system approaches.

Grand Challenge Overview

The Air Force Research Laboratory (AFRL) seeks partners to design an ML/active-learning framework, backed by a library of Python-based optimization tools released under an open-source license, that can use multi-system workflows to discover optimal conditions and accelerate research across many problem domains. Eligibility for the challenge competition is limited to U.S. citizens and U.S. permanent residents.


The Challenge

AFRL (The Seeker) is seeking white papers to identify and build a machine learning/artificial intelligence system that helps researchers efficiently find the conditions needed to optimize and discover new synthetic compounds using multi-system approaches. AFRL intends to use the results of this challenge as a building block for later efforts to demonstrate the viability of this approach in multiple domains.

To be eligible for the contract award, the winner must sign Appendix A granting AFRL Government Purpose Rights and assign an open-source license to the project. The winner of this challenge is eligible for a contract award of up to $500,000.


Grand Challenge Details

  • What: AFRL Active AI Planners for Chemistry/Materials Optimization and Discovery Grand Challenge
  • When: Sept. 29-Oct. 26 (white paper submission); virtual Q&A session: Oct. 7 at 12:00 p.m. ET
  • Where: Virtual
  • Who: Eligibility for this challenge competition is limited to U.S. citizens and U.S. permanent residents.
  • Why: Multi-disciplinary teams can build machine learning/artificial intelligence techniques to accelerate the optimization and discovery of new synthetic compounds. The winner of this challenge is eligible for a contract award of up to $500,000, awarded in four phases, and can help build a robust ecosystem across experimental and data science.

Automation is being broadly adopted as a strategy to accelerate scientific discovery and innovation. Current algorithms for navigating the high-dimensional parameter space of automated experiments, however, lack the flexibility to integrate information sources of different type, size, accuracy, and cost. Developing algorithms to address this challenge will accelerate materials discovery and lower barriers, making it possible to extend this strategy to other problem domains.

Traditional chemistry and materials laboratories perform iterative tests on the scale of 10-100 experiments to discover or optimize synthetic targets. This is an increasing concern given that the research investment needed to maintain consistent scientific GDP growth has doubled every 13 years.

  • One reason for this is that ideas are simply getting harder to find, experimentally validate, and optimize. Because the number of scientific publications is enormous, it takes more time and a more diverse team to absorb all this information and propose new experiments. There are some solutions to aid experimental scientists; however, many of them are limited to single-system input streams. For example, many existing solutions rely on one piece of synthetic equipment or one computational model, when in fact there are a number of systems available, all exhibiting varying degrees of fidelity (high/low), as sketched after this list.
  • The solutions ultimately required are those that work not with solitary systems, but with multiple inputs.
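As one way to picture the multi-system setting, the minimal sketch below describes each available system (instrument, simulation, or model) by a rough fidelity weight and a per-query cost, and shows a simple cost-aware rule for choosing which source to query next. The source names, numbers, and the `expected_gain` inputs are purely illustrative assumptions, not part of the challenge specification.

```python
from dataclasses import dataclass

@dataclass
class InformationSource:
    """One available system (instrument, simulation, or surrogate model)."""
    name: str
    cost: float      # relative cost of one query (time, money, material)
    fidelity: float  # rough trust weight in [0, 1]; 1.0 = ground-truth experiment

# Hypothetical examples of heterogeneous sources a planner could draw on.
SOURCES = [
    InformationSource("robotic_synthesis_platform", cost=10.0, fidelity=1.0),
    InformationSource("physics_simulation",         cost=2.0,  fidelity=0.7),
    InformationSource("empirical_surrogate_model",  cost=0.1,  fidelity=0.4),
]

def pick_source(expected_gain: dict) -> InformationSource:
    """Choose the source with the best fidelity-weighted gain per unit cost.

    `expected_gain` maps a source name to the planner's current estimate of
    how much querying that source would improve the model (assumed given).
    """
    return max(SOURCES, key=lambda s: s.fidelity * expected_gain[s.name] / s.cost)

if __name__ == "__main__":
    gains = {"robotic_synthesis_platform": 0.9,
             "physics_simulation": 0.6,
             "empirical_surrogate_model": 0.3}
    print("Next source to query:", pick_source(gains).name)
```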

Key Deliverables and Phases

Phase I: White Paper submission

In the first phase of this competition, participants will submit white papers describing their proposed approach to creating a machine learning framework that finds optimal experimental conditions. The white papers must describe how the team intends to test the algorithm and how the team will obtain data from iterative experiments for testing. The white papers from the viable entries (potential awardees) will be down-selected in accordance with the Evaluation Criteria, and the chosen proposers will then be given a one-hour time slot to pitch their concept to the evaluation team.

The winner of this Pitch Competition will be awarded 30% of the total contract award value. Specific instructions regarding the contents of the pitch will be provided with the invitation to pitch.

Administrative requirements for the White Paper are provided on the Solution Submission Form.

Phase II: Algorithm development

The team will develop an active learning algorithm that supports multi-system architectures to optimize or discover new compounds/properties and to determine appropriate noise levels and parameter constraints. The algorithm must be designed to handle single-objective and multi-objective optimization over multiple input systems and variable classes (continuous/categorical). The team will evaluate the system over iterative rounds of performed experiments, collecting offline data and suggesting the next combinations to maximize or minimize the desired outputs. These experiments can be performed one at a time or in batches (e.g., three at a time) on established datasets or new experiments. The team will present the results of this evaluation to AFRL. Upon AFRL approval that the concept is viable, the team will be awarded another 30% of the total contract award value and will move to Phase III.
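As one illustration of the kind of iterative loop Phase II describes (not a prescribed implementation), the sketch below fits a Gaussian-process surrogate from scikit-learn to the experiments performed so far and scores untried candidate conditions with expected improvement to suggest the next batch. The variable names, the toy objective, and the batch size of three are assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best_y, xi=0.01):
    """Expected improvement for maximization, guarded against zero variance."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_y - xi) / sigma
    return (mu - best_y - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_next(X_obs, y_obs, X_candidates, batch_size=3):
    """Fit a GP to observed experiments and return the top-scoring candidates."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)
    mu, sigma = gp.predict(X_candidates, return_std=True)
    ei = expected_improvement(mu, sigma, best_y=np.max(y_obs))
    top = np.argsort(ei)[::-1][:batch_size]
    return X_candidates[top]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for prior experiments over two continuous processing parameters.
    X_obs = rng.uniform(0, 1, size=(8, 2))
    y_obs = -np.sum((X_obs - 0.5) ** 2, axis=1)      # unknown objective to maximize
    X_candidates = rng.uniform(0, 1, size=(200, 2))  # conditions not yet tried
    print(suggest_next(X_obs, y_obs, X_candidates))  # e.g., three suggestions per round
```

Categorical variables would need an encoding step (for example, one-hot vectors) before being handed to a surrogate like this, and a production-quality batch strategy would typically penalize near-duplicate suggestions rather than simply taking the top-scoring points.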

Phase III: Robust algorithm evaluation

The team will then document the minimal number of experiments needed to find an optimum and provide context for how much more “quickly” those conditions were found. The algorithm should be able to optimize:

  • Processing conditions: Experimentally identify a synthetic route through the fewest (or cheapest) data collections.
  • Design space discovery: Build a high accuracy surrogate for the design space with the fewest (or cheapest) data collections.

The team may explore different multi-objective and augmented-response approaches for variable optimization. The team should also provide a rationale for why their approach could provide reliable results for other scientific problems/domains. Upon approval from AFRL that this will work, the team will be provided 35% of the total contract award value to move to Phase IV.
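To make the multi-objective case and the “how quickly” metric concrete, one common (though not mandated) approach is to scalarize several measured outputs into a single score and then report how many sequential experiments were needed before that score first reached a target value. The objective names, weights, and results below are hypothetical.

```python
import numpy as np

def scalarize(objectives: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted-sum scalarization of an (n_experiments, n_objectives) array.

    Each objective is min-max normalized first so the weights are comparable;
    larger values are assumed better for every objective (flip signs otherwise).
    """
    lo, hi = objectives.min(axis=0), objectives.max(axis=0)
    normalized = (objectives - lo) / np.where(hi > lo, hi - lo, 1.0)
    return normalized @ weights

def experiments_to_reach(scores: np.ndarray, target: float) -> int:
    """Number of sequential experiments before the score first meets the target.

    A simple way to report how 'quickly' acceptable conditions were found;
    returns -1 if the target was never reached.
    """
    hits = np.nonzero(scores >= target)[0]
    return int(hits[0]) + 1 if hits.size else -1

if __name__ == "__main__":
    # Hypothetical: yield and purity measured over 10 suggested experiments.
    results = np.array([[0.42, 0.90], [0.55, 0.88], [0.61, 0.93],
                        [0.70, 0.91], [0.76, 0.95], [0.81, 0.94],
                        [0.85, 0.96], [0.84, 0.97], [0.88, 0.96], [0.90, 0.98]])
    score = scalarize(results, weights=np.array([0.6, 0.4]))
    print("Experiments needed to reach a score of 0.9:",
          experiments_to_reach(score, 0.9))
```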

Phase IV: Build, evaluate, and document final program

The team will then develop a functional program that incorporates this active learning framework. The team will facilitate the transfer of their code to AFRL scientists for testing, provide a demonstration of the system, and produce a final report on the results. Upon successful demonstration of the system and receipt of the final report documenting the source code (which can be uploaded to GitHub) and the results of the testing, the team will receive the final 5% of the total contract award value.


Project Deliverables

  1. Presentation of the algorithm and approach, documenting its robustness, from Phase II.
  2. Report on the robustness evaluation from Phase III.
  3. Final report from Phase IV
  4. Demonstration of the functional system to the AFRL team from Phase IV.
  5. Source code for the final program and executable.

Contract Award

The winner will be issued a contract award of up to $500,000, distributed across four phases of effort.

Submissions will be evaluated on the ability of the concept to meet the requirements delineated under “The Solution.” If the evaluation team does not deem any submission viable for reaching a testable solution, then no winners will be selected.

The Seeker will issue the contract award winner 30% of the total contract award value (up to $150K) upon selection, 30% (up to $150K) upon successful completion of Phase II, 35% (up to $175K) upon successful completion of Phase III, and the final 5% (up to $25K) upon successful completion and demonstration of Phase IV. If AFRL's evaluation determines that the results of Phase II or Phase III are insufficient, the program shall be terminated and no additional money will be awarded to the winner.


Access Grand Challenge #2

To participate in the challenge, teams will need to create an account on IdeaScale. Upon registration, teams will receive a verification email; check your spam folder if the verification email is not received. Please contact challenges@nsin.mil for questions about IdeaScale registration. All relevant information regarding this challenge can be found via the following link to the IdeaScale platform:

IdeaScale


Partners


***Distribution A. Approved for public release: distribution unlimited. Case Number: AFRL-2022-3547