The Five Key Takeaways from This Blog
- The U.S. Justice Department is increasing its focus on A.I. and A.I. regulation.
- Computer scientist and lawyer Jonathan Mayer will be the U.S. Justice Department’s first Chief Artificial Intelligence Officer, advising the department’s top leadership about possible misuses and illegal uses of A.I.
- This is timely, as there has been recent controversy over the exploitation of generative A.I. to create misinformation as well as offensive or illegal content.
- Mayer’s term lasts 12 months, so expect many rules and regulations to be passed or put into motion during that period, promising a productive 2024 for A.I. regulation.
- Business owners should expect to see greater restrictions and qualifications on the use of generative A.I. in particular.
The Justice Department’s New Hire
A.I. regulations are inbound with the appointment of the U.S. Justice Department’s first Chief Artificial Intelligence Officer.
Jonathan Mayer is both a computer scientist and a lawyer who has had a career as an academic at Princeton University.
In this role, he will advise Attorney General Merrick Garland as well as the broader Justice Department.
Since A.I. is such a fast-growing technology, and lasting widespread public awareness of it is only a recent phenomenon, it makes sense that lawmakers would appoint experts in the field to advise on the regulation of these technologies.
Expect Regulators to Scrutinize Generative A.I. the Most
Generative A.I. will be perhaps the biggest target for lawmakers in the coming year.
Part of the reason is that generative A.I. is the technology most widely available for exploitation by private citizens.
This has resulted in some widely publicized incidents involving the use of A.I. for nefarious ends. One example came in early 2024, when X-rated deepfakes of Taylor Swift flooded the Internet, leading to panic and controversy.
And, more recently, Google faced its own controversy and bad press following the revelation that its newly released Gemini generative A.I. platform could be easily manipulated into producing offensive content.
To avoid further public backlash, Google barred any further public use of Gemini’s image-generating feature, but the damage was done. The message is also clear: if regulators don’t strictly regulate these platforms, the only thing preventing their exploitation is a controversy significant enough to prompt tech company leadership to self-regulate in order to preserve its reputation.
Another reason generative A.I. will be a large point of focus is that the exploitation of these systems can negatively impact lawmakers themselves.
For instance, generative A.I. can produce images, audio, and even video of events that never occurred — think fabricated media of politicians doing or saying things they never would, at least not in front of a camera and in public.
You can imagine, then, how anyone on the campaign trail might consider such technology potentially harmful. For the sake of maintaining some degree of image control in public relations, it will be in the interest of politicians to regulate the use of generative A.I.
Watermarks Are the Most Likely Solution in A.I. Regulations
An idea that has been floated numerous times already is the implementation of A.I. watermarks.
Tech companies seem to be on board with this, promising to develop A.I. watermarks for their own A.I. products.
This may alienate some business owners who want to use A.I. to present auto-generated content as original marketing material, posing a significant opportunity cost for tech companies.
However, the big bet here is that working to create A.I. watermarks will signal to politicians that the tech companies are willing to regulate the parts of A.I. that politicians are the most concerned about, a gesture that may pay dividends when it comes time to send some representatives to Washington to lobby for certain regulations.
The Impact on Business Owners
We hinted at this in the section above, but watermarks — one of the more likely A.I. regulations — could pose a challenge to businesses’ marketing plans in the upcoming year.
The crucial point here is that a watermark would definitively show viewers of an ad or other marketing content that A.I. created it.
It’s uncertain whether consumers will accept this or whether it will reduce engagement with marketing materials.
Additionally, business owners can probably expect to see a slowdown in the development of visual A.I. generation technology as lawmakers seek to limit this technology’s power and reach.
For this reason, it would be prudent for business owners not to pin too many of their content strategy’s short-term hopes on having unfettered access to image-generating A.I. working at the top of its powers.
Instead, they’ll have to keep depending on traditional marketing methods for now, possibly using A.I. for post-photoshoot tasks such as editing and retouching.