Abstract
Since the 1970s and the advent of recombinant DNA, biology has steadily become easier to engineer, and the pace of these advances is increasing. Many tools and capabilities for engineering biology are becoming more powerful, more affordable, and more widely available. These capabilities are critical for basic scientific research as well as for advances in health, agriculture, and a wide range of applications in the burgeoning bioeconomy. However, access to these tools also raises the possibility that they could be accidentally or deliberately misused to cause harm by enabling the development of toxins, pathogens, or other dangerous biological agents, including some not found in nature. Potential biological harms range from high-consequence events, such as the development and release of an engineered pathogen that causes a global catastrophe, to a wide range of lower-consequence, higher-likelihood events. To prevent this type of misuse, policy experts have recommended expanding customer screening practices and policy frameworks to include a broad range of life sciences products, services, and infrastructure (Carter and DiEuliis, 2019a). Recent advances in artificial intelligence (AI) have increased these risks and intensified calls for action (Carter et al., 2023; Helena, 2023).