The Five Key Takeaways from This Blog
- The European Union (E.U.) has been drafting the A.I. Act for years now. That tracks, given that ChatGPT, which prompted (pun not originally intended, but nonetheless endorsed by this writer) global awareness of A.I. and of just how far companies have been able to push this technology, has itself been available for years now.
- The A.I. Act will do a number of things to protect citizens from certain uses of A.I. and to restrict how companies create A.I. It has two main focus areas: the development and the deployment of A.I.
- The foundation of this legislation relates to the risks of A.I., with regulators identifying four categories for the risk assessment of A.I. systems: minimal risk, specific transparency risk, high risk, and unacceptable risk.
- Because the Act puts more of the onus on the developers and providers of A.I. systems, it could mean less of an oversight-related burden on companies that partner with A.I. providers. In other words, companies using A.I. in the E.U. could enjoy a greater guarantee that the A.I. they use is safer and more trustworthy than it would be in an E.U. without the A.I. Act in place.
- What is the relevance for American business owners who wish to use A.I.? The E.U.’s framework for A.I. regulation could strongly influence American legislators in the coming years as they decide what to regulate and what to permit.
A.I. in the E.U. and U.S.A.(I.)
As this writer wrote above in the Key Takeaways section, this regulatory framework that has gone live in the E.U. could prove to be a significant influence on A.I. legislation in the U.S.A.
What that could lead to is an American regulatory environment, a legal landscape if you will, that bears a resemblance to the European state of A.I. monitoring and restriction.
So, what would be the underlying concepts that serve as the foundation for such a legal landscape?
For this writer, it is probably best to consider the risk-assessment aspect of the regulation, as it places A.I. systems into broad categories, and placement in any of them brings a set of obligations that could severely limit how the system may be used.
So, without further ado, let us consider those four categories, which this writer first named in the above Key Takeaways section’s third bullet point.
Minimal Risk
This category is for A.I. systems that may not have any obligations under the A.I. Act at all. Instead, the E.U. will essentially entrust the developers and deployers of this class of A.I. to self-regulate.
One of the examples that the E.U. gives is spam-filter systems.
Since regulating the A.I. industry will be time- and cost-intensive, it makes sense that government entities will likely leave a certain portion of A.I. systems alone.
But do not mistake the label of “minimal risk,” dear reader, for “no risk.” Even something like an A.I. spam filter can be put to ill use, or simply suffer from poor development. Imagine, for instance, a spam filter that fails to detect much of the spam it receives; in some cases, that could actually increase the risk of, say, a ransomware intrusion, because a user had false confidence that an unfiltered email was not spam.
Specific Transparency Risk
Letting users know that they are interacting with A.I. is part of this category.
For instance, chatbots on websites that take on the guise of an actual person would indeed have to inform the human interlocutor that, no, Marissa from Best Mattresses is not a human being chatting with you about the best box-spring deals at 2 A.M. Marissa, in this case, is a chatbot, and the E.U. believes that users have the right to know that.
This category also entails watermarking A.I.-generated content to cut down on deception such as misinformation and disinformation.
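For readers who run customer-facing chatbots, here is a minimal, purely illustrative sketch (in Python, using hypothetical names such as `ChatMessage` and `with_ai_disclosure`) of what a transparency-style disclosure might look like in practice. This is this writer’s own illustration of the concept, not an implementation of any specific requirement spelled out in the A.I. Act itself.

```python
from dataclasses import dataclass

# Hypothetical example: attaching an "automated assistant" disclosure to
# chatbot replies so users know "Marissa" is software, not a person.

@dataclass
class ChatMessage:
    sender: str                  # display name shown to the user, e.g. "Marissa"
    text: str                    # the message body
    ai_generated: bool = False   # flag recorded alongside the message

def with_ai_disclosure(message: ChatMessage) -> ChatMessage:
    """Prepend a plain-language disclosure to any A.I.-generated message."""
    if message.ai_generated and not message.text.startswith("[Automated assistant]"):
        message.text = f"[Automated assistant] {message.text}"
    return message

if __name__ == "__main__":
    reply = ChatMessage(sender="Marissa",
                        text="We have a great box-spring deal tonight!",
                        ai_generated=True)
    print(with_ai_disclosure(reply).text)
    # [Automated assistant] We have a great box-spring deal tonight!
```

The design choice here is simply that the disclosure lives with the message itself, so it cannot be silently dropped somewhere between the chatbot and the user’s screen.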
High Risk
These are systems for which risk mitigation and human oversight will, in most cases, be required during both development and deployment.
Think of something like recruitment software that runs a high risk of discriminating against certain groups if developers and deployers are careless in creating, training, tailoring, and using the A.I.
Unacceptable Risk
There are some forms of A.I. that the E.U. will simply not permit to have an impact on private citizens.
Think of walking into a Walmart and seeing something like an A.I. knife-throwing machine available for the low, low price of $50.
That example literalizes the idea that some A.I. systems could be weaponized against private citizens, whether bodily or otherwise, and infringe upon their rights.
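To tie the four categories together, here is a small, hedged sketch (again in Python, with hypothetical names such as `RiskTier` and `classify`) of how a business might inventory its own A.I. systems against the Act’s four tiers. The example mappings merely echo the examples discussed above; they are illustrations from this writer, not legal classifications.

```python
from enum import Enum

# The four risk tiers described above, ordered from least to most restricted.
class RiskTier(Enum):
    MINIMAL = "minimal risk"                     # e.g. spam filters; largely self-regulated
    TRANSPARENCY = "specific transparency risk"  # e.g. chatbots; users must be informed
    HIGH = "high risk"                           # e.g. recruitment software; oversight required
    UNACCEPTABLE = "unacceptable risk"           # e.g. systems weaponized against citizens; not permitted

# Hypothetical inventory mapping each internal system to a tier.
SYSTEM_INVENTORY = {
    "email-spam-filter": RiskTier.MINIMAL,
    "marissa-sales-chatbot": RiskTier.TRANSPARENCY,
    "resume-screening-tool": RiskTier.HIGH,
}

def classify(system_name: str) -> RiskTier:
    """Look up a system's tier; unknown systems get flagged for review."""
    tier = SYSTEM_INVENTORY.get(system_name)
    if tier is None:
        raise KeyError(f"{system_name!r} has not been risk-assessed yet")
    return tier

if __name__ == "__main__":
    for name in SYSTEM_INVENTORY:
        print(f"{name}: {classify(name).value}")
```

The point of such an inventory is less the code than the exercise: knowing which tier each system would likely fall into is the first step toward knowing which obligations, if any, would apply.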