Highlights
- President Biden's ambitious executive order aims to balance artificial intelligence innovation with safety and security, addressing a broad range of challenges posed by the technology
- The order introduces AI safety standards and addresses privacy protection, equity, and transparency, while emphasizing international collaboration and responsible government use of AI
- While the order is a significant step, federal legislation is still needed to create a comprehensive AI governance framework
President Joe Biden signed an executive order late last month addressing the complex challenges posed by the expansion of artificial intelligence (AI). This ambitious effort aims to thread the needle between harnessing the power of AI to spur innovation and mitigating the significant potential risks associated with AI technology.
The executive order, issued Oct. 30, addresses a broad array of issues, with a primary focus on AI safety and security. It introduces new standards and safety testing requirements for advanced AI systems, including adversarial testing often referred to as “red teaming.” Under the order, developers of high-stakes AI technology will be required to share their safety test results and other critical information with the U.S. government, a measure the administration says is intended to help ensure that AI systems are safe, secure, and trustworthy.
To enforce these requirements, the order leverages the Defense Production Act, which expands presidential authority in times of crisis.
Additionally, the executive order emphasizes the need for privacy protection and data security. It calls on Congress to pass bipartisan data privacy legislation to safeguard the privacy of all Americans, particularly children. Privacy-preserving techniques and cryptographic tools are promoted to protect sensitive data used in AI systems.
The order also addresses concerns related to equity and civil rights by providing clear guidance to prevent discrimination in AI applications across various sectors, including housing, education, and law enforcement. It seeks to ensure fairness in the criminal justice system by developing best practices for AI use in sentencing, parole, surveillance, and predictive policing.
The executive order supports consumers, patients, and workers by promoting responsible AI use in healthcare, drug development, and education, and aims to protect against AI-enabled fraud and deception. The order also calls for clear labeling of AI-generated content.
Recognizing the global nature of AI challenges, the order encourages international collaboration and the development of AI standards. It also emphasizes responsible government use of AI, addressing the risks of discriminatory or unsafe automated decisions.
While this executive order is a significant step in the right direction, it acknowledges the need for federal legislation to create a comprehensive framework for AI governance.
In light of these policies, companies seeking to harness AI's power to help develop their business should consider several key takeaways:
- Prioritize AI Safety and Security: In line with the executive order's focus on AI safety and security, businesses should consider implementing robust safety testing procedures, including “red teaming,” to identify and mitigate potential risks. Sharing safety test results with relevant stakeholders can further foster trust and confidence in their AI technology.
- Promote Transparency and Accountability: Businesses should promote transparent AI use by clearly labeling AI-generated content to inform consumers and users, and should assume accountability for the effects of AI in healthcare, education, and other fields to build trust and credibility.
- Commit to Privacy and Data Security: Companies should stay current on data privacy regulations and advocate for responsible data practices within their organizations. Using privacy-preserving techniques and cryptographic tools to protect sensitive data in AI systems demonstrates a commitment to safeguarding user information.
- Adopt Proactive Privacy Compliance: Given the increasing importance of data privacy in the AI landscape, businesses should consider establishing robust privacy compliance measures.
- Address Bias and Fairness: The order's guidance on preventing discrimination in AI applications underlines the importance of addressing bias and promoting fairness. Companies should consider implementing processes to detect and rectify biases in their AI systems. Developing and adopting best practices to ensure fairness in AI decision-making across various business sectors can enhance equity and trustworthiness.
By incorporating these steps into their AI strategies, businesses can not only navigate the evolving AI landscape, but also contribute to the responsible and ethical deployment of AI technologies in their operations, positioning themselves to successfully leverage AI.
For more information, please contact the Barnes & Thornburg attorney with whom you work or Brian McGinnis at 317-231-6437 or brian.mcginnis@btlaw.com.
© 2023 Barnes & Thornburg LLP. All Rights Reserved. This page, and all information on it, is proprietary and the property of Barnes & Thornburg LLP. It may not be reproduced, in any form, without the express written consent of Barnes & Thornburg LLP.
This Barnes & Thornburg LLP publication should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own lawyer on any specific legal questions you may have concerning your situation.