[USA] - Executive Order 14110
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
The Synopsis:
On October 30, 2023, the Biden Administration issued Executive Order [EO] 14110, an order to regulate artificial intelligence. Specifically, the order aims to regulate foundation models, attract AI talent through immigration, create AI leadership opportunities, and impose requirements and rules on agencies. Additionally, federal agencies must address roughly 150 requirements, including actions, reports, rules, and guidance, with deadlines beginning in 2023 and extending beyond.1
The Public Commentary:
Nicol Turner Lee, director of the Center for Technology Innovation at Brookings, has applauded the Biden Administration for signing an executive order on responsible A.I. However, Turner Lee has posed questions that have yet to be answered:
Q1: How is a civil rights violation identified in opaque AI systems, and who decides if the situation warrants punitive action?
Q2: What will be the recourse for individuals harmed by discriminatory AI systems?
Q3: Who is seated at the table in the design and deployment of A.I., especially academic or civil society experts who understand the lived experiences of the communities (including their trauma) that have become the primary subjects of existing and emerging technologies?2
Joseph Keller, a fellow in Foreign Policy at the Strobe Talbott Center for Security, Strategy, and Technology, has homed in on America's lead in A.I. implementation, especially its environmental impact. He states that A.I. systems consume substantial resources, yet there is a dearth of information on the effects. Keller envisions the U.S. becoming the leader in requiring companies to report environmental-impact information and metrics.2
Aaron Klein, the Miriam K. Carliner Chair, has expounded on how few A.I. responsibilities the EO places on the Treasury Department and financial regulators. He questions the future of financial markets, since A.I. could enhance banking regulation and help prevent another debacle like the Silicon Valley Bank collapse.2
The Analysis:
The Biden Administration’s issuance of the Executive Order is monumental because the federal government has taken the initiative to regulate A.I. However, these initiatives are in a nascent stage while A.I. is maturing exponentially. In the article, foundation models are addressed meticulously through four elements: Thresholds, Compute Monitoring, Content Provenance, and Open Foundation Models.
The third element, Content Provenance, piqued my interest because generative models such as GANs are being used to create content, like pictures. Interestingly, machine-generated and human-generated content can be indistinguishable because these models have become phenomenally precise. If I prompted a foundation model like ChatGPT to ‘create a picture of Albert Einstein in jeans, a t-shirt, and chukka boots in New York City’, it would create a realistic rendition of a ‘hip Einstein', and determining whether the photo was real [human-generated] or fake [machine-generated] would be difficult. More nefariously, GANs can be used to create deepfakes that have real-life implications, destroying families and livelihoods. Watermarking is posed as one solution to aid in distinguishing photos, but I am diffident about its implementation: there is software that can remove watermarks from photos, which may render watermark detectors useless.
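The fragility worried about above can be seen in a toy sketch. The snippet below is not any real provenance scheme; it is a deliberately simple least-significant-bit (LSB) watermark, a stand-in of my own construction, showing how even mild pixel noise (such as recompression or resizing might introduce) scrambles the embedded bits while leaving the picture visually unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 8-bit grayscale "image" and a binary watermark of the same shape.
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)

# Embed: overwrite each pixel's least significant bit with a watermark bit.
watermarked = (image & 0xFE) | mark

# Detect: read the LSBs back out of a clean copy.
recovered = watermarked & 1
assert np.array_equal(recovered, mark)  # the watermark survives intact

# "Remove": light uniform noise in [-2, 2], invisible to the eye on an
# 8-bit image, flips LSBs at random and destroys the embedded pattern.
noisy = watermarked.astype(np.int16) + rng.integers(-2, 3, size=(8, 8))
noisy = np.clip(noisy, 0, 255).astype(np.uint8)
damaged = noisy & 1

# The detector now reads a mix of correct and random bits.
match_rate = float(np.mean(damaged == mark))
```

Production watermarks are far more robust than LSB embedding, but the same arms race applies: whatever a detector reads, removal software can target.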
A.I. Talent through Immigration is promising; I believe all great minds, regardless of country of origin, are needed to ensure the ethical maturation of artificial intelligence. However, the EO does not address important logistics, such as national security, incentives, and interdisciplinary breadth. National security, I opine, is of utmost necessity because infiltration could lead to unforeseen vulnerabilities in America’s infrastructure.
Secondly, incentives are important in attracting foreign talent, but questions linger: what does America offer that I cannot find elsewhere? Why would I want to help America with its A.I. strategy? These questions have not been answered, and I believe they need to be thoroughly addressed before opening immigration applications for A.I. talent. Lastly, interdisciplinary expertise is integral to making connections in artificial intelligence; for instance, a philosopher and a linguist might contribute by creating a moral code and identifying vulgar, offensive terms, respectively. Sourcing only from A.I. or computer-science backgrounds is insufficient, and I believe the U.S. should recruit from all domains to gain a comprehensive view of A.I.
A.I. leadership is lacking, and federal agencies need leaders to drive ethical A.I. implementation. Still, I am skeptical of America’s leaders because capitalism is the American ethos, and values may sway if the endeavor does not yield respectable monetary benefits. The public sector typically pays less than the private sector, and I believe many capable A.I. leaders may leave the public sector for better pay, especially in an uncertain economic future.
The public commentary is correct: the promise of an ethical A.I. future is touted by the U.S. government, but specific questions linger and need to be addressed. If the U.S. government mishandles the EO, the country could be set back years in the ethical implementation of A.I., increasing the risk that unregulated A.I. destroys U.S. citizens’ livelihoods. We need to implement change judiciously and quickly to keep pace with A.I. Time is precious, and it is all we have in the race for ethical, responsible A.I.
The Terminology:
Foundation Model - A machine-learning model trained on broad, varied datasets so that it can be adapted to many different tasks;3 it generates output from human-language inputs or prompts.4
Example: ChatGPT is built on a foundation model.
Generative Adversarial Networks [GANs] - Deep-learning architectures that create novel data resembling an existing dataset: a generator creates images while a discriminator tries to distinguish the real photos from the fakes. The two networks train against each other until the discriminator can no longer tell real from fake.5
Example: Deepfakes [images of real people in A.I.-generated scenery] are a product of GANs.
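The adversarial loop described above can be sketched in a few lines. The example below is a toy of my own construction, not drawn from the EO or the cited sources: a one-dimensional "generator" g(z) = a·z + b learns to imitate samples from a Gaussian by playing against a logistic-regression "discriminator", using the standard non-saturating GAN objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1). The generator maps noise z ~ N(0, 1)
# through g(z) = a*z + b and must learn (a, b) so its output looks real.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # it learns to score real samples high and generated ones low.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake) (non-saturating loss):
    # it nudges (a, b) so its samples fool the updated discriminator.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)
```

With this seed, the generator's offset b drifts from 0 toward the real mean of 4, though the exact endpoint wobbles: each network chases a moving target, which is part of why GAN outputs are hard to audit and why deepfake generators keep improving against detectors.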
The Questions:
Q1: What is important to you in the Executive Order 14110? Are your values represented in the EO?
Q2: What would you want to be included in the EO?
The Bonus:
Stanford HAI has created an EO 14110 tracker to account for changes in A.I. policy: https://docs.google.com/spreadsheets/d/1xOL4hkQ2pLR-IAs3awIiXjPLmhIeXyE5-giJ5nT-h1M/edit?pli=1#gid=142633882
The Endnotes:
1 Rishi Bommasani, Christie M. Lawrence, et al., “Decoding the White House AI Executive Order’s Achievements,” Stanford University Human-Centered Artificial Intelligence, accessed Jan 23, 2024,
https://hai.stanford.edu/news/decoding-white-house-ai-executive-orders-achievements
2 Nicol Turner Lee, Joseph B. Keller, et al., “Will the White House AI Executive Order deliver on its promises?,” Brookings, accessed Jan 23, 2024,
https://www.brookings.edu/articles/will-the-white-house-ai-executive-order-deliver-on-its-promises/
3 Ben Lutkevich, “Foundation models explained: Everything you need to know,” TechTarget, accessed Jan 23, 2024,
https://www.techtarget.com/whatis/feature/Foundation-models-explained-Everything-you-need-to-know
4 Amazon AWS, “What are Foundation Models?,” AWS, accessed Jan 23, 2024,
https://aws.amazon.com/what-is/foundation-models/
5 Simplilearn, “List Of Generative Adversarial Network Applications,” Simplilearn, accessed Jan 23, 2024,
https://www.simplilearn.com/generative-adversarial-networks-applications-article