The Five Key Takeaways From This Article
- The California Civil Rights Council has proposed a series of amendments to the Fair Employment and Housing Act (FEHA) that seek to mitigate and prevent discrimination arising from the use of A.I. in employment decisions.
- These amendments come as the uses for A.I. in hiring practices are becoming clearer, not to mention much more sophisticated.
- Specific uses of A.I. the amendments target include predicting and measuring an applicant’s fitness for a role and pre-screening applications.
- The amendments would affect organizations with five or more people paid for work or services, so they would certainly reach small businesses.
- Practical advice for using A.I. in H.R. is to always have a human on staff review the A.I.’s decisions.
A.I. in the H.R. Department: Highly Likely in the Future
So you are thinking of using A.I. in your hiring process. Predicting and measuring fitness for a role and targeting certain groups for hiring are among the uses A.I. offers.
A.I. could even rank and prioritize job applicants based on their work-schedule availability.
Or perhaps you are simply curious about how A.I. will affect the future of human resources departments around the country.
Either way, the answer increasingly lies in using A.I. to screen and appraise the range of applicants for positions at your company.
A.I. has been shown to be prone to many troubling biases, some caused by training oversights and others inherent to the technology itself. It also often displays limited common sense, or none at all.
And so there is a virtual litany of potentially lawsuit-worthy discriminatory outcomes that a company or other organization may unintentionally produce by relying on A.I. in its hiring processes.
In light of this, legislators across the U.S. are drafting proposals governing A.I. use in hiring. These proposals aim to give business owners guidelines for mitigating or preventing bias when A.I. is used in hiring tasks.
California is the latest state to propose such guidelines. Below, we go over some of the areas the proposed guidelines cover and, in particular, a few ways A.I. could lead to discrimination in them.
Pre-screening Applications for Human Review
So here is a pretty simple one to understand.
A.I. is given a distinct set of characteristics to look for across applications. But A.I.’s singular focus on some of these characteristics may lead to discrimination down the line.
So, for instance, the A.I. may be tasked with finding applicants with college degrees.
If a degree is preferred but not required, this could lead to class-based discrimination. An applicant unable to afford college, yet capable of performing the role as well as a college-educated applicant, may be rejected by the A.I. simply for lacking a degree.
And so there is real motivation to ensure that hirable candidates are not knocked out of the hiring process simply because they could not, say, afford to earn a college degree.
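To make this concrete, here is a minimal, hypothetical sketch of how a naive pre-screening filter can encode exactly this bias. The Applicant fields, scores, and cutoff are illustrative assumptions, not any real vendor’s system:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    has_degree: bool     # a "preferred but not required" credential
    skills_score: float  # 0.0-1.0, e.g. from a work-sample assessment

def naive_prescreen(applicants):
    """Hard-rejects anyone without a degree, even though the
    posting lists the degree as preferred, not required."""
    return [a for a in applicants if a.has_degree]

def fairer_prescreen(applicants, cutoff=0.7):
    """Screens on demonstrated ability; the preferred credential
    is only a tie-breaker, never a gate."""
    qualified = [a for a in applicants if a.skills_score >= cutoff]
    return sorted(qualified,
                  key=lambda a: (a.skills_score, a.has_degree),
                  reverse=True)

pool = [
    Applicant("A", has_degree=True,  skills_score=0.72),
    Applicant("B", has_degree=False, skills_score=0.91),  # strongest performer
]

print([a.name for a in naive_prescreen(pool)])   # ['A'] -- B silently dropped
print([a.name for a in fairer_prescreen(pool)])  # ['B', 'A']
```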
As such, monitoring and reporting the A.I.’s outputs is a promising way to mitigate this potential for bias.
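One widely cited rule of thumb for that kind of monitoring is the EEOC’s four-fifths rule: if any group’s selection rate falls below 80% of the highest group’s rate, the screen deserves scrutiny. The sketch below applies it to made-up audit counts; it is an illustration, not legal advice:

```python
def selection_rates(outcomes):
    """outcomes: {group: (num_advanced, num_applied)} -> {group: rate}"""
    return {g: advanced / applied
            for g, (advanced, applied) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flags groups whose selection rate is below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()
            if rate / top < threshold}

# Hypothetical audit of one month of A.I. pre-screening decisions.
audit = {"group_x": (45, 100), "group_y": (20, 80)}
print(four_fifths_check(audit))  # {'group_y': 0.55...} -- flag for review
```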
Computer Vision and Audio Analysis of Voice, Face, and Word Choice in Online Interviews
This use applies to things like Zoom or phone interviews. An A.I. with computer vision and speech-analysis capabilities could analyze the footage or audio and make predictions about a candidate’s fitness for a certain position.
The A.I.’s training phase for such a task could already lead to problems, based on the developer’s criteria for what constitutes good candidate conduct as distinct from bad candidate conduct.
As such, the A.I. could develop arbitrary judgments as to what a candidate may say or do that would make them fit or unfit for the role. And this could be further exacerbated by machine learning techniques that let the A.I. derive additional standards of judgment on its own.
That is not to pretend that human beings are somehow free of arbitrary judgments formed from conscious or unconscious biases. The issue here is that a human conscience at least gives a human reviewer better odds of mitigating discriminatory judgments.
Even if A.I. could have a conscience (we will skip the philosophical arguments), it would be distinct from ours. Human conscience is informed by emotion and by a degree of common sense that machines lack.
As such, this is certainly one of the areas that legislation would do well to target for monitoring.
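And, circling back to the practical takeaway at the top of this article, one concrete safeguard is a human-in-the-loop routing rule: the A.I. may fast-track strong candidates, but it never rejects anyone on its own. The scores and cutoff below are hypothetical:

```python
def route_decision(candidate_id, ai_score, advance_cutoff=0.8):
    """Never auto-reject: candidates the model scores poorly (or is
    unsure about) are queued for a human reviewer, not dropped."""
    if ai_score >= advance_cutoff:
        return ("advance", candidate_id)
    return ("human_review", candidate_id)

print(route_decision("c-101", 0.92))  # ('advance', 'c-101')
print(route_decision("c-102", 0.25))  # ('human_review', 'c-102')
```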