The Five Key Takeaways from This Blog Post

  • Kentucky lawmakers recently expressed concerns about the negative influence that A.I. could have on elections. One of the main concerns is disinformation and misinformation made and spread by artificial intelligence. 
  • Certain lawmakers point to Colorado’s recent legislation on artificial intelligence, which takes effect in 2026, as an indicator of what legislation in Kentucky may resemble. Colorado’s law specifically targets high-risk A.I. systems, such as those involved in decisions about people’s health and finances. 
  • There is also acknowledgement among some lawmakers that the E.U.’s A.I. legislation, which the E.U. has been decidedly swifter to enact than the U.S., is a model for many American legislators. Part of the reason is the recognition that A.I. companies looking to build global technologies need some level of consistency across A.I. regulations worldwide. 
  • One of the most high-profile examples that legitimizes politicians’ concerns was the series of robocalls that used an A.I.-generated imitation of Joe Biden’s voice to tell the people on the other end of the line not to bother voting in the primaries. 
  • Outside of A.I. applications directly related to Your Money or Your Life (YMYL) issues, generative A.I. is likely to be one of the most heavily regulated parts of the A.I. industry, because of its potential to create deepfakes that spread false information on which voters may base their choices, thereby negatively influencing election outcomes. 


A.I. and Elections: Kentucky’s Lawmakers Have Some Thoughts

In politics, many power players earn their positions by winning elections from a constituent body. As such, what you broadcast to those voters has a significant impact on whether you win an election. 

Naturally, then, politicians have a sizable, and justified, concern about the quality (read: veracity) of information about themselves that reaches the consciousness of voters. 

What A.I. has made possible is the fast, easy creation and proliferation of low-quality information about politicians. And given how much more capable these A.I. platforms are on the way to becoming, it is reasonable to expect that politicians will set clearly defined limits on what they permit A.I. companies to get away with. 

Much of this concern involves deepfakes, which can be audio recordings, photographs, or even videos. One can easily imagine untold numbers of political careers that can be un-made by the proliferation of deepfakes. 

In fact, younger generations of politicians are already dealing with this issue at the very start of their careers, with some examples being particularly execrable. Some must deal with an unprecedented uphill battle to juggle not only the usual challenges from the opposing candidates and parties, but a whole new ecosystem of unreality that is becoming a significant part of all Internet users’ shared media environment. 

What Kentucky Legislators May Seek to Do

The Colorado law earned recognition from certain Kentucky politicians as a likely model for what Kentucky law may resemble. And that Colorado law itself has the E.U.’s A.I. Act as a model and guiding influence. 

Much of that legislation prioritizes the regulation of high-risk A.I. uses that touch people’s health and finances. 

For instance, these regulations would not allow a technology to select candidates for loans at a bank without any human oversight. Given the problem of bias in A.I. systems, that particular use of A.I. could simply be too socially disastrous for many Americans. 

One can think of other obvious examples. For instance, letting A.I. chatbots diagnose mental-health conditions in lieu of an official diagnosis by a human psychiatrist is another problematic area. 

But another thing that the writer of this blog post believes readers should expect from politicians in Kentucky or elsewhere is a strong focus on A.I. watermarks. 

A.I. watermarks label A.I.-generated content as such, so that people will know outright whether something they are looking at was made by an algorithm. 
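To make the labeling idea concrete, here is a minimal sketch of how generated content could carry a tamper-evident "A.I.-generated" label. This is purely illustrative: the signing key, field names, and HMAC scheme are assumptions for the example, not any real watermarking standard (real systems, such as provenance-metadata standards, are considerably more sophisticated).

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the A.I. provider (an assumption for this sketch).
SECRET_KEY = b"provider-signing-key"


def add_watermark(content: str) -> dict:
    """Attach a label declaring the content A.I.-generated, plus an HMAC
    signature so the label itself cannot be silently altered."""
    label = {"content": content, "ai_generated": True}
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return label


def verify_watermark(label: dict) -> bool:
    """Recompute the HMAC over the labeled fields and compare it to the
    signature stored in the label."""
    payload = json.dumps(
        {"content": label["content"], "ai_generated": label["ai_generated"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(label.get("signature", ""), expected)


labeled = add_watermark("An A.I.-generated campaign statement.")
print(verify_watermark(labeled))  # True for an untampered label
```

The point of the signature step is that a bare "made by A.I." tag could simply be deleted or edited; binding the label to the content cryptographically is what would let platforms or voters detect tampering.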

The legitimacy of what you see on the Internet is becoming a major issue, and politicians recognize this. They especially recognize how such a technology could thwart the disinformation efforts that threaten their political self-preservation on the Internet and beyond. 

For this reason (i.e., the threat to politicians’ self-preservation in a notoriously brutal and competitive field), you can expect legislators to prioritize addressing the misinformation and disinformation that A.I. can amplify. 

Ultimately, the regulatory landscape regarding A.I. in the United States is still in many ways only beginning to develop, so we will need to stay tuned.