The Synopsis:
On December 8, 2023, the European Parliament and Council provisionally agreed to regulate A.I. In particular, Parliament and Council decided to ensure that democratic values and fundamental rights are protected, ban high-risk A.I., create guardrails for generative A.I., and lead A.I. advancements. Pertaining to high-risk A.I., MEPs [Members of the European Parliament]2 have included a fundamental rights impact assessment, and citizens have the right to appeal decisions made by those A.I. systems. Law enforcement's use of biometric categorisation systems is limited to a strictly listed, defined set of crimes. Additionally, failure of any entity to adhere to the rules, whether it be law enforcement or a startup, can lead to fines ranging from 7.5 million euros to 35 million euros, depending on the violation and the size of the agency. Nonetheless, the provisional agreement on the Artificial Intelligence Act must be formally adopted to become an EU mandate before the rules can be enforced.1
The Public Commentary:
The Artificial Intelligence Act has drawn commentary from those who have been closely following A.I. regulatory practices in the EU:
Ed Newton-Rex, founder of FairlyTrained, has weighed in on the A.I. Act, stating that data transparency requirements for foundation models will encourage licensing and proprietary regulation:
Source: Ed Newton-Rex - X
Michael Kove also had something to say about the A.I. Act, believing that VPN use would grow exponentially, since European markets will face stricter regulations on access to data and services:
Source: Michael Kove - X
The Analysis:
The Artificial Intelligence Act is a significant, positive step in A.I. regulation for the EU. Notably, banning high-risk A.I. and democratizing foundation models and A.I. systems are prominent features of the A.I. Act. High-risk A.I. must be curtailed because of its devastating effects on humans and society. For instance, you apply for a scholarship to attend university, yet you are denied because an algorithm has deemed you unqualified. Simplistically, such an A.I. system checks for matching keywords that the A.I. developer coded into the system, but it does not check the individual’s circumstances, background, and motivation. Humanness is essential in handling affairs that typically change a person’s destiny, and relegation of those affairs to an A.I. system, which cannot fathom the spectrum of emotion, behavior, history, and future of humanity, is asinine.
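The keyword-matching screening described above can be sketched in a few lines. Everything in this snippet, the keywords, the threshold, and the essay text, is invented for illustration; it is a hypothetical caricature of such a system, not any real scholarship screener:

```python
# Hypothetical sketch of naive keyword screening: the keywords, threshold,
# and essay are all invented for illustration.
REQUIRED_KEYWORDS = {"leadership", "research", "volunteer", "gpa"}
THRESHOLD = 3  # arbitrary cutoff: applicants matching fewer keywords are rejected


def screen_application(essay: str) -> bool:
    """Return True only if the essay mentions enough of the coded keywords."""
    words = set(essay.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= THRESHOLD


# The system never sees circumstances, background, or motivation:
essay = "I overcame hardship and hope to study medicine."
print(screen_application(essay))  # prints False: a compelling applicant is rejected
```

The point of the sketch is that the decision hinges entirely on surface word matches, exactly the blindness the paragraph above objects to.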
The democratization of A.I. systems is fundamentally a right of citizens; citizens have the right to know and fully understand the A.I. systems that are determining their lives, and they have the right to critique those systems. If citizens have an obligation to pay taxes, obey traffic laws, and vote in elections, the government has an obligation to fully explain those taxes, laws, voting requirements, and, more recently, A.I. foundation models.
The Public Commentary is interesting. Ed Newton-Rex remarks that data transparency would discourage scraping [“using an application to extract valuable information from a website”]4 and encourage licensing. However, in a competitive marketplace, would the quality of data diminish, since all information about a competitor is accessible? I think that agencies may attempt to circumvent data transparency for the sake of competition; agencies may become emboldened to create two foundation models: a public-facing model and a private, internal-facing one. The public-facing model would be submitted to an audit and accessible to the public, while the private, internal model would be the one actually in use. Simply put, the public-facing model would be a decoy. Pertaining to licensing, agencies will become highly selective of LLMs [Large Language Models] to avoid wasting resources, which may lead to an overuse or underuse of certain LLMs.
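The bracketed definition of scraping above can be illustrated with a minimal sketch. The HTML snippet, tag names, and extracted values here are invented for illustration; real scrapers fetch pages over HTTP before parsing them:

```python
# Minimal illustration of "scraping": extracting values from a page's HTML.
# The HTML snippet and the "price" class are invented for illustration.
from html.parser import HTMLParser

HTML = """
<ul>
  <li class="price">19.99</li>
  <li class="price">4.50</li>
</ul>
"""


class PriceScraper(HTMLParser):
    """Collects the text of every <li class="price"> element."""

    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price and data.strip():
            self.prices.append(float(data))
            self.in_price = False


scraper = PriceScraper()
scraper.feed(HTML)
print(scraper.prices)  # prints [19.99, 4.5]
```

It is exactly this kind of automated extraction, pointed at someone else's data at scale, that transparency and licensing rules are meant to rein in.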
Michael Kove makes a peculiar point; he states that VPNs will make a lot of money, since apps in Europe will not be natively available, and I do not think that is true. The A.I. Act came to ban high-risk A.I. systems that threaten society and to enforce transparency, explainability, and critique of those systems. I believe VPNs should also become subject to explainability and transparency audits to ensure compliance with A.I. regulation. If VPN providers are allowing users to access high-risk A.I. systems, then VPN providers and users should be sanctioned as well. Additionally, a healthy capitalist market takes risks and rebounds after failure, and if a ‘healthy’ capitalist market cannot rebound, then maybe it was not as healthy as it was believed to be.
As an A.I. ethicist, I have an obligation and will to protect the welfare of humanity from algorithmic harms, and if the capitalist market has to fail to retrieve humanity, then it must fail.
The Terminology:
High-Risk A.I. - A.I. systems that directly affect human lives, such as transportation, education, medicine, employment, essential private and public services, law enforcement, border-control protocols, and the judicial process.3
VPNs [Virtual Private Networks] - “VPNs create an encrypted tunnel for your data, protect your online identity by hiding your IP address, and allow you to use public Wi-Fi hotspots safely.”5
The Questions:
Q1: What is important to you in the A.I. Act? Do you think the A.I. Act has set a precedent for European countries outside the EU?
Q2: What do you think can be included in high-risk A.I. systems?
Q3: Have you been subject to a high-risk A.I. system? How did you respond?
Q4: What do you think of the public commentary? Do you agree, disagree, or neither with the commentators? What would you have asked or said to one of them if given the opportunity?
The Endnotes:
1 European Parliament, “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI”, News European Parliament, accessed Jan 25, 2024,
2 European Parliament, “Members of the European Parliament”, MEPs European Parliament, accessed Jan 25, 2024,
https://www.europarl.europa.eu/meps/en/home
3 European Commission, “Regulatory framework proposal on artificial intelligence”, Shaping Europe’s digital future, accessed Jan 25, 2024,
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
4 Cloudflare, “What is data scraping?”, Cloudflare, accessed Jan 26, 2024,
https://www.cloudflare.com/learning/bots/what-is-data-scraping/
5 NordVPN, “What is a VPN?”, NordVPN, accessed Jan 26, 2024,