HR Leaders is a digital media platform Shaping the Future of Work, for business and for the lasting benefit of society.


How to Avoid Legal Risks of AI in HR

Keith Sonderling, Commissioner on the U.S. EEOC, joins the HR Leaders Podcast to discuss AI's transformative role in HR. He highlights the challenges of balancing technological advancement with legal compliance and shares practical strategies for creating fair and unbiased employment decisions.

🎧 Subscribe on your favourite platform iTunes | Spotify | TuneIn | YouTube


In today's episode of the HR Leaders Podcast, we welcome Keith Sonderling, a Commissioner on the U.S. Equal Employment Opportunity Commission (EEOC). 

Keith shares groundbreaking insights into the integration of AI in HR and its legal implications, emphasizing the need for fairness and compliance in this tech-driven landscape.

πŸŽ“ In this episode, Keith discusses:

  1. How AI can revolutionize HR by eliminating human bias and making data-driven decisions.

  2. The legal pitfalls of AI in HR and strategies to ensure compliance with discrimination laws.

  3. Real-world examples of AI misuse and the importance of conducting bias audits.

  4. The critical role of HR leaders in fostering trust and transparency around AI implementation.

DISCOVER WHAT EMOTIONAL SALARY MEANS – AND HOW YOU CAN MOTIVATE EMPLOYEES BEYOND PAY.

Great recognition is more than just a thank you program. By leveraging frequent and meaningful recognition, Achievers drives business results that matter to organizations like retention, productivity, and engagement. Our platform makes it easy for employees to recognize each other anywhere, whether in-office, remote, or on-the-go.

The Achievers Workforce Institute reveals that two-thirds of employees have one foot out the door in 2024. The top reason for job hunting? Better compensation. But money isn’t the whole story. Employees are seeking not only monetary salary, but emotional salary too.

Keith Sonderling 0:00

There's some AI technology out there that entrepreneurs are developing to sell into HR departments. And that's when I realized that we really need to get involved here. Because now, as everyone knows, but even before, especially for large global organizations, it's no longer a question of are you going to use AI in HR. We're so far past that point. The question now is: what are you going to use? How are you going to use it? What purpose are you going to use it for within your organization? And most importantly, to my concern, how are you going to comply with long-standing laws that apply to those decisions, whether you're using technology to make that decision or you're using your human workers? That's where I'm trying to change the narrative on this. Because if you look at how these products are being sold, they're being sold on being more efficient, more economical, and making better employment decisions by eliminating the biggest problem that HR has faced since its existence, which is the human.

Chris Rainey 1:03

Keith, welcome to the show. How are you?

Keith Sonderling 1:04

Thanks for having me.

Chris Rainey 1:05

Thanks for making me look bad, by the way. You're looking way too sharp.

Keith Sonderling 1:08

It's the Washington DC costume that I've brought here to London.

Chris Rainey 1:13

Do they have them all in the exact same color, all day long?

Keith Sonderling 1:16

All dark, DC suits.

Chris Rainey 1:18

Is there, like, an actual dress code?

Keith Sonderling 1:21

If I didn't dress like this, you wouldn't believe that I'm in this role. So it's very important that I at least play the part.

Chris Rainey 1:29

Exactly, if you showed up really casual, people just wouldn't believe you. How are you, anyway?

Keith Sonderling 1:34

Good, good. I'm really excited for our convo.

Chris Rainey 1:36

Yeah. So apart from this show, what brought you to the UK?

Keith Sonderling 1:39

Just, you know, I get to Europe pretty often. As you know, a lot of these issues are global, which is what makes this practice so exciting. What we do in the United States is largely followed by global corporations in the HR space. So it's important for me to leave DC, see what's going on around the world in HR, and really talk to HR leaders across the globe, for me to be able to do my job better back in DC.

Chris Rainey 2:06

Before we get into the current role and what you're doing, tell us a little bit more about your background and the journey to where we are now.

Keith Sonderling 2:13

Yeah, so I was a labor and employment lawyer before joining government. This is all I've done in my career, just from a different angle: defending corporations and HR departments in labor and employment disputes. In 2017, I left private practice and joined the US government at the US Department of Labor, in the Wage and Hour Division, which handled overtime issues, pay issues, minimum wage, child labor, and immigration issues related to pay, a very diverse portfolio. Then in 2019, I was nominated to be a Commissioner on the EEOC, and I was confirmed by the US Senate in 2020. I've been in this role since then.

Chris Rainey 3:00

Amazing. What made you interested in transferring over from corporate to government?

Keith Sonderling 3:04

You know, I was really interested in the policy side, and I really wanted to see the other side of the practice. Like I said, moving to DC was a life-changing experience. Working for the agencies that deal with the issues you deal with in private practice on a daily basis, being on the other side, it's almost like getting a PhD in HR, in labor and employment. So it was a great experience for me in that regard.

Chris Rainey 3:33

It usually happens the other way around, right? People go from government and then to private practice, kind of the opposite direction.

Keith Sonderling 3:39

It gives me a unique perspective. It really allows me to see what HR leaders have to deal with on a daily basis. As you say, a lot of people in these roles start out in government and then go the other way, so they don't really see the day-to-day of how difficult the practice can be.

Chris Rainey 3:54

Got it. It's probably good for the audience to break down: what is the EEOC?

Keith Sonderling 3:59

So the EEOC is the United States Equal Employment Opportunity Commission. Everything in DC gets an acronym; we love good acronyms there. We are the federal agency responsible for enforcing all laws that prevent workplace discrimination and advance equal employment opportunity in the workforce. So we are the regulators of human resources. It's that simple. We are the government agency responsible for enforcing the laws that apply to HR. And our agency is very special. It was born out of the Civil Rights Act 60 years ago. When Martin Luther King marched in Washington, DC, the result was the Civil Rights Act of 1964, which created the EEOC and created the rights for workers in the United States, applicants and employees, to have these civil rights protections, which are some of the strongest in the world. Part of the reason I travel all across the world is that we were really one of the first agencies to set that groundwork for civil rights laws in the workplace. A lot of other countries have since used us as a model for how to create employment laws, how to create workplace protections, and what those protected characteristics are, which we'll get into. Especially for large global organizations, they're really going to be using the EEOC standard generally across the world in most places. So that's why our agency is extremely relevant. And like I say, we are the premier civil rights agency, really, in the world, because of how strong our protections are.

Chris Rainey 5:36

That's something I didn't realize until I saw you traveling and speaking. I was like, oh, there's more to this than just the US. You also have huge influence worldwide.

Keith Sonderling 5:47

Right. A lot of governments are going to look to what we've done, just because everyone understands the civil rights movement of the 1960s in the US. So we're a very, very impactful agency.

Chris Rainey 5:56

Could you break down what you do specifically? You mentioned some of the different areas.

Keith Sonderling 6:02

Yeah, so our mission is to prevent and remedy employment discrimination and advance equal employment opportunity in the workplace. We're really known for remedying employment discrimination. Again, we are the civil rights law enforcement agency in the United States that employees come to to complain of discrimination, and we have to investigate it. For the most part in the United States, whether you work for a private company, a state or local government, or even the federal government itself, you cannot go to court directly and sue your employer; you have to come to us. So we see almost every single case of employment discrimination in the United States, no matter what sector you're in, and that really puts us in a good position to see what the trends are in human resources.

Chris Rainey 6:45

It's like the opposite in the UK, where you go after the company directly, as opposed to going through yourselves. So you have all of the data.

Keith Sonderling 6:54

Right, they come to us, and we do the investigation. After a certain period of time, they can hire private lawyers and go to court themselves, but we get the first shot at all employment discrimination cases in the United States.

Chris Rainey 7:05

Interesting. What are some of the misconceptions people have about the work you do?

Keith Sonderling 7:08

You know, most HR professionals, which is such an important reason for me to get out there, think of the EEOC just as a law enforcement agency. For most HR professionals, and I saw this in private practice, their only interaction with the EEOC is that they get a charge of discrimination, which is what we call a complaint of discrimination. They're accused of doing something wrong, and they're already in a defensive position. But what a lot of people don't know is the other part of our mission statement, which is to prevent employment discrimination from occurring in the first place, and also to advance equal employment opportunity in the workforce, which gets to making sure that people are getting jobs based upon merit and skill and not having any other factor play an unlawful role in that decision. So I've really been trying to raise awareness that not only are we a law enforcement agency at our core to protect workers, we also have an important mission to make sure HR professionals understand the very complicated regulations they face in every type of decision they're making. And what's unique about our laws is that they apply to the A to Z of the employment relationship. A lot of the conversation in the HR space is about hiring, right? But we cover everything from hiring, firing, promotions, wages, training, and benefits to terminations. So really, the entire lifecycle of an employee is what my agency regulates, and that's very similar to the mission of HR departments. In this role I have been out there trying to talk to HR leaders and get them more involved in the regulatory aspect of this, because before, a lot of these conversations were just being had with lawyers and government officials, not the actual practitioners who are out there. Every aspect of an HR professional's daily job, we have some say in, on the regulation side and on the law enforcement side too.

Chris Rainey 8:58

Yeah. From a practical perspective, how do you support them? What are some of the specific things you're doing to help, beyond the reactive part that you mentioned?

Keith Sonderling 9:08

Well, as you know, a lot of HR is reacting to what happened, and we're in that same position too. What happens with us is that we get very dictated by what's happening in the global news, and it's on us to then put out guidance to make sure HR professionals can understand what the changes are, whether in the regulation or in hot topics. Let me take a step back and show how, just as HR leaders have been all over the place with their priorities, it's the same with us. After the recession in 2008, when the global economy crashed, we saw a lot of claims of age discrimination, because older workers were being laid off. So we started putting out guidance about how you can support older workers in the workplace and make sure they're not being laid off just because they're the highest paid, which leads to age discrimination cases. Then the MeToo movement happened. As an agency, we had to put all of our resources into investigating those sexual harassment claims, even though sexual harassment has been unlawful since the 1960s. When the spotlight was on it, because of the terrible news happening worldwide, we had to not only do law enforcement, we had to put out best practices and guidance to make sure you have the proper open-door policies to report sexual harassment, to prevent sexual harassment from occurring, and so on. Then other news: the US women's soccer team and pay equity, huge, and we were at the front end of that, dealing with everything coming out of it. And then, like for everyone else, COVID happened. It was so dynamic with COVID, too, if you think about it. In the beginning it was all about how do we protect our employees: are we allowed to take temperature checks of our employees, are we allowed to exclude employees from the office if they have symptoms of COVID, even though they may not have COVID. Then the vaccinations came out, and if you remember, there was a big push towards mandatory vaccinations, whether by governments or certain employers. We were involved in that, talking about the parameters: if you're going to have a mandatory vaccination policy, here's who potentially gets exemptions under the law. Then, if you remember, a lot of people weren't getting the vaccine, and a lot of companies were offering vaccination incentives, even a new car potentially if you got the vaccine. There are issues with that, because if people can't get the vaccine, it may be discriminatory to give an employment benefit only to those who are able to get it. Then long-haul COVID, and now some of the issues with return to work. It just never ends. That's my point: it's the same with HR, it never ends, and you never know what your priorities are going to be. So very much like HR professionals, we have to adapt, not only making sure the laws are being enforced in all these different areas without waiting for some global news story to put the spotlight on it, but also making sure HR professionals have the guidance to prevent the next MeToo movement, to prevent the next big pay equity case from occurring. We have a dual mission. I think that's the biggest misconception, that we're just a law enforcement agency.

Chris Rainey 12:03

Yeah. How do you and the team make decisions when there's so much being thrown at you, geopolitically, from the media, from politics, from so many different areas? How do you choose which battles to fight?

Keith Sonderling 12:23

Well, it is Washington, DC, so you have a lot of stakeholders on various sides: you have unions, you have civil rights groups, you have chambers of commerce, you have different trade associations. And what's so unique about our agency, and about the HR profession too, is that we're industry agnostic. We deal with every industry. The issues facing the manufacturing industries are different from the issues facing the retail industry, different from finance or knowledge-worker industries. So it's difficult in that sense: what do you have to prioritize, and what do you have to focus on? A lot of it, unfortunately, is driven by: well, now there's a huge spike in these claims, and why is that happening? We can measure that; we have the data, because they have to come to us. So that's really where I've been trying to talk to HR leaders and say: here's the data we're seeing, here are the issues we're seeing, and here's how we can get ahead of preventing an issue from happening.

Chris Rainey 13:19

Where do people find that information? Because I'm sure that's something many people are wondering.

Keith Sonderling 13:27

Oh, we have all this information. The good thing about our agency is that most of the information is widely available; we put out a lot of data, and I'll make sure to get the links for the recording, with all the different types of data. And it's so granular. For disability discrimination, we tell you the different types of disability discrimination occurring, especially with long-haul COVID: memory issues, brain fog, lung issues. We really break down how many cases we're getting in the various areas, whether it's physical disabilities or mental health issues. We have it all broken down on our website, which is eeoc.gov, with all the different statistics, so you can see, here's the rise in age discrimination, here's where it went down. After the COVID vaccines came out, a lot of people wanted exemptions related to religion, so there was a huge spike in religious claims. It's really year to year where you see a lot of the changes, and that's why we put the information out there. And for the real data geeks out there, we actually have a Tableau dashboard where they can download some of that data and put it into their own systems.

Chris Rainey 14:35

One of the misconceptions I can hear is that government is so slow to move and react. Take remote work, for example, or the pandemic: how quickly from that moment did you put a report out to share your insights and guidance?

Keith Sonderling 14:53

Well, during the pandemic we were doing it in real time through questions and answers, because we were watching what was happening, whether it was disabled workers needing accommodations, needing additional PPE, or some of the different arrangements related to returning to the office. In a sense, COVID was a little easier, just because it was the top news story and everyone was dealing with it. It gets a little trickier with the issues that aren't national news, such as pregnancy discrimination, or religious discrimination, or some of the other protected categories: national origin, race, color discrimination (which is different from national origin), discrimination based on disability, genetic information. It's so hard to keep up with what's going to be the bigger issue that year. That's why we really get ahead of that data and put it out there for the public.

Chris Rainey 15:49

And I'm sure the next wave is AI, artificial intelligence: are these tools going to be aligned with employment law? And talking about data, what about data security and people's personal data as well? I'm sure that must be a question that gets thrown at you.

Keith Sonderling 16:09

Here's where, since I've been at the Commission, I have really tried to slow everything down. Yes, for us, and the same for HR professionals, there's always going to be something in the news that distracts you, that brings claims of discrimination you're going to have to deal with on the compliance side. But we have to address AI in the workplace, because this is going to be bigger than any of the issues we just talked about. As you know, HR technology, whether it's AI or machine learning, way before generative AI, is exploding. When I first started looking at it, in late 2020 and early 2021, like many people I was confused as to what we were even talking about. I thought it was very industry specific: OK, manufacturing, retail, logistics, we actually have robots replacing human workers. That's not going to affect knowledge workers; that's not going to be across all industries. So I thought it was about doing the work. But when you dive into it, you see that there are HR tech vendors using AI to make all the different types of employment decisions that HR professionals would normally make: from the very beginning, drafting the job description, where to advertise, how to review the resumes, who to select for the interview, scheduling the interview, conducting the interview, onboarding, compensation, upskilling, reskilling, and even termination. Right across the board, the A to Z of the employment relationship: the candidate experience, the employee experience, the post-employment experience. There is AI technology out there that entrepreneurs are developing to sell into HR departments. And that's when I realized that we really need to get involved here. Because now, as everyone knows, but even before, especially for large global organizations, it's no longer a question of are you going to use AI in HR. We're so far past that point. The question now is: what are you going to use? How are you going to use it? What purpose are you going to use it for within your organization? And most importantly, to my concern, how are you going to comply with long-standing laws that apply to those decisions, whether you're using technology to make that decision or you're using your human workers? That's where I'm trying to change the narrative. Because if you look at how these products are being sold, and I'm not picking on any particular HR tech product, any of them, they're all being sold on being more efficient, more economical, and making better employment decisions, a lot of them on the skills-based side of it, which is what the law requires, right? To make decisions free of bias, free of any protected characteristics, on the actual merit, and to do it better than your current humans are doing it, by eliminating the biggest problem that HR has faced since its existence, which is the human: human bias. If we eliminate the human from employment decision making, then there will be no bias, everything will be perfect, and my agency will go out of business, which would be a good thing, right? Because then there's no employment discrimination. I don't actually want us to go out of business, but you know what I'm saying: a lot of it is sold on, look, there's bias in employment.

Chris Rainey 19:47

Bias in the data, bias in the way they code it as well. That's how you have to break it down.

Keith Sonderling 19:51

Yeah. And that's why I said, well, let's decipher this for HR professionals, as people who are not technologists, and really understand how this technology is being developed and how it's being used. That took a lot of work, because you have to understand where the market is coming from, what HR buyers want, and then the complexities of actually trying to implement these systems within your organizations, and those are all different concerns. That's when I came to the realization that for AI in HR to work, it has to be carefully designed and properly used, and those are two completely separate and distinct concepts. But here's the kicker: no matter who designs the program, whether you buy it or develop it yourself, your organization, under our laws, is going to be liable for whatever decisions it makes, because only the company can make an employment decision. And that's really where a lot of the interest in getting this right, from both the vendor side and the user side, comes into play.

Chris Rainey 21:05

Yeah, that was one of my questions. Who is responsible if the AI makes a discriminatory employment decision? Is the liability with the vendor or with the employer?

Keith Sonderling 21:15

Well, under current law, our laws in the United States, which a lot of countries follow (the EU, with the EU AI Act, is changing this, and you're seeing some states in the United States trying to get there as well), there are only three parties that can make an employment decision: employers, staffing agencies, and unions. That's our world; only they can make an employment decision. So when we show up for an investigation, we're only looking at the employment decision that employer made. And if that employer says, well, I don't know how this employment decision was made because we used a vendor, that doesn't matter to us. Our job is easy in that sense. So that's how we look at it: only the employer can make an employment decision, because the company you work for, the staffing agency you work for, or the union you belong to is the one hiring you, firing you, promoting you, demoting you, paying you. That's really where using AI in HR is just different from using it in other parts of your organization. If you're using AI to make your widgets faster or to make shipping routes faster, the vendor liability is potentially different there; there are a lot of contractual issues, not to sound too much like a lawyer. But the bigger issue is that, unlike using AI in other parts of your business, when you're using AI in HR you're dealing with people's fundamental civil rights to enter and thrive in the workforce, provide for their family, and not be discriminated against, which is no different from the human side. So I'm also trying to change the narrative there: that's why it's so important for HR buyers to understand, and we'll break down those two different concepts, that the AI has to be properly designed, and it also has to be carefully implemented and used within your organization.

Chris Rainey 23:06

What advice would you give to HR leaders when they're thinking about choosing different vendors? What are some of the questions they should be asking?

Keith Sonderling 23:13

Yeah, this is where it's really tough. The vendors are entrepreneurs; they understand technology. And we have to break it down into verbiage that HR professionals understand. When we talk about, well, what is the dataset? For us, it's your applicant flow, your current employees, the people whose resumes you see. And then, what are you asking the algorithm to do? There's a lot of confusion about whether we should use AI versus human decision making, and at the end of the day there's only a finite number of employment decisions: hiring, firing, wages, training, benefits, promotions, scheduling, all these issues. AI hasn't come up with a new type of employment decision yet. We're just asking AI to either make that decision for us, or give us more data and information to make that decision better. So a lot of it is reframing it: at the end of the day, you're already making the same employment decisions that you're asking an AI vendor to help you make. A lot of it is also just slowing down. You're already doing this one way or the other, so what are your current requirements when you make those employment decisions? That's where you can really get into the issues around the implementation of these AI systems, and how they can potentially completely remove bias from the process, or scale bias greater than any individual human can. Those are the two different concepts. So, on the design side, there are two different kinds of issues from a legal perspective. I think this is the value I bring to the equation, because it's going to be our investigators at the EEOC who show up to investigate these claims. There are really just two types of issues when you're dealing with the design of these programs. One is the more common dataset discrimination, which there's been so much talk about: when nearly all the resumes are from men and there are only a couple from women, you've heard the horror stories of the women getting lower scores just because men were overwhelmingly represented in the dataset. That's on the design of it, and that's what the algorithm is going to be looking at. So the algorithms have to be much more sophisticated than saying, well, because the vast majority of resumes are from men, being male must be the overwhelming qualification for the job, which of course is unlawful. How do we get to the point where these systems are designed to completely mask and not look at any of these protected characteristics, such as age, or any indicators that you may be American versus British, all the things you can't lawfully make an employment decision on? And how do we make these tools sophisticated enough that, even if the dataset is all men, they find the underlying skill that made those resumes rise to the top, outside of being male? And then how do we design those systems to look for that actual skill? That's really how you can use these tools to get to a skills-based approach to employment decision making.
So much of that is on the technical design: even if you have these discriminatory datasets, what's the next level of having algorithms look past that? And that's really where I think AI can help us a lot. There are long-standing studies showing that when a male and a female resume are submitted for the same job, the male is more likely to get selected. In the US, African Americans and Asian Americans who "whiten" their resumes by deleting any reference to their race are more likely to get selected than when those references are on their resumes. That's a problem to begin with. But if the AI is carefully designed, it can look at the resume, ignore all those factors that a human really can't ignore, and actually get to the bottom-line skills and qualifications. So much of that is on the design. So let's imagine an AI tool can do that: it doesn't look at race, sex, national origin, or religion, it just takes a skills-based approach to hiring and gets the most qualified candidates in there. The AI is doing its job. And of course, what you're asking the AI to look for, the skills, are either inferred from the dataset, or you're saying: here are the skills we believe are not discriminatory because they're necessary for the job, based upon our industry and our organization, which employers have an obligation to define. So you work with a vendor, the vendor does its job, and now you have an HR professional within your organization with more access to data than ever before. Instead of just PDF resumes and an Excel chart, now you have all these dashboards showing all these potential characteristics that you're not allowed to make an employment decision on. Think about the long-standing stat in talent acquisition: before this technology, a TA professional spent six and a half seconds reading a resume. Your whole life, six and a half seconds, when you're applying for a job. So if somebody had bias before this technology, say they didn't want to hire a woman for the job, they had to go through each resume: that's a female-sounding name, that's a women's college, in the trash. It took time to discriminate before technology. But now, with this technology, in 0.7 seconds you can eliminate hundreds of thousands of applications based upon gender or any of these protected categories, just because the information is so widely available, not only if that information is made available by the vendor on the design side, but also when that individual human injects their own bias into the algorithm, at a scale we've never seen before. And think about it too in employment advertising cases, where you're putting out a job description and advertising on the internet. If you're limiting it, whether through social media channels or otherwise, saying I only want to show this employment advertisement to certain groups, then everyone who doesn't see that ad, on the basis of not being in that age range or not being that gender, even if they were qualified for the job, can be discriminated against just by not seeing that job offer, potentially millions of people.
So you can see the scalability of these tools on both sides, on the data and on the actual individual use, and that's why these conversations are so important. And it's not unfamiliar to HR professionals to build parameters around these types of decisions.
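
To make the "masking" idea above concrete, here is a minimal, hypothetical sketch of redacting a few protected-characteristic proxies from resume text before a model scores it. The term lists, patterns, and sample resume are invented for illustration only; real screening systems have to handle far more proxies (names, schools, locations, employment dates), and masking alone does not guarantee an unbiased score.

```python
import re

# Illustrative lists only; a production system would need far richer signals
# and careful validation against its own applicant data.
GENDERED_TERMS = {"he", "she", "his", "her", "mr", "mrs", "ms", "fraternity", "sorority"}
AGE_PATTERN = re.compile(r"\b(19|20)\d{2}\b")  # graduation/birth years hint at age
PRONOUN_PATTERN = re.compile(r"\b(" + "|".join(GENDERED_TERMS) + r")\b", re.IGNORECASE)

def mask_resume(text: str) -> str:
    """Redact a few protected-characteristic proxies before a model scores the text."""
    text = AGE_PATTERN.sub("[YEAR]", text)          # hide years that reveal age
    text = PRONOUN_PATTERN.sub("[REDACTED]", text)  # hide gendered words
    return text

resume = "Ms Jane Smith, class of 1998, led her sorority's fundraising team."
print(mask_resume(resume))
# -> "[REDACTED] Jane Smith, class of [YEAR], led [REDACTED] [REDACTED]'s fundraising team."
# Note the female-sounding first name still slips through: simple redaction is not enough.
```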

Chris Rainey 29:53

Yeah. What are some of the specific things that you're doing at the EEOC to address some of those issues?

Keith Sonderling 29:59

So, number one, all these different HR technologies are within our purview. And what does it mean that they're within our purview? They're regulated, because they're making or assisting with an employment decision.

Chris Rainey 30:13

Do the vendors have to provide anything to you as part of that?

Keith Sonderling 30:17

They don't really have to provide anything to us until there's an investigation. So how do we prevent those investigations? It's not like some other governments, where they're requiring you to disclose. That's where a lot of this is going, and that's where you're seeing it go. I think there's going to be a big debate; I think federal law in the United States is going to be difficult on this. The Brussels effect, the EU AI Act, is certainly making waves here in London and in the United States as well, and a lot of companies are going to have to start complying, very much like with GDPR, with some of those requirements, not all of them, but maybe some of the disclosure requirements, some of the auditing requirements, which we'll talk about as best practices of what employers can do now. From our perspective, what we're doing at the EEOC, and what I'm doing specifically, is not only going to all these conferences but meeting with the vendors directly. And I have to say, in this area the vendors really believe in their products. They believe this is going to fundamentally change employment decision making, where there have been a lot of problems, which is why my agency exists. In the last two years, we've collected over 1.2 billion US dollars from employers for violating our laws. That's a two-year period, so roughly 600 million a year. So there are some issues. And that's where we have to be careful in this conversation: I don't want to just talk about the potential for these tools to discriminate. I also like to talk about the positive, how they can help us make better employment decisions. If not, then we're left with the processes we've been dealing with since the 1960s in HR, and that's not really promoting equal employment opportunity, better candidate experiences, or better workplace experiences. So I try to be very balanced in my position: look, you can use whatever tool you want; however, there are going to be certain issues under the law if it's not designed or used properly. So let's talk about an easy-to-understand example. A lot of high-volume hiring now is done through apps.

Chris Rainey 32:28

Yeah, we've had some of those founders on the show.

Keith Sonderling 32:33

So let's just break it down. I don't say it's great, and I don't say it's terrible; I just have to paint the picture of what the law requires. Think about walking into an interview, or now everything is on Zoom: what is the first thing you see about that candidate when they walk into the room? What they're wearing. You see them; you see everything about them. And you're not allowed to make an employment decision on the color of their skin, their potential religion if they're wearing religious garb, their sex, their national origin, their age, whether they're disabled, whether they're pregnant. In the back of an interviewer's mind, just by human nature, it's very difficult to forget that. So if a disabled candidate comes in to you, the interviewer, and you see that they're physically disabled: how much is this going to cost me in accommodations? If they're pregnant: how much leave will they take? Then somebody else comes in who's not going to make those requests. Historically, it was so easy for an interviewer to say, I'm going to go with that candidate, and I'm just going to send the other one, the disabled candidate, the pregnant candidate, a letter saying thank you for applying, we went with a qualified candidate, a typical rejection letter. At the end of the day nobody will ever know, because nobody would admit to violating the law like that. So think about all those individuals who historically could never get past that first point, even if they could get past the resume filters we were just talking about, because then somebody sees them. That's why a lot of these tools that do the entire interview on an app eliminate that.

Chris Rainey 34:08

A lot of companies now, especially for high-volume hiring, are using those onboarding agents where it asks you the questions.

Keith Sonderling 34:12

Right, and it speaks to a skills-based approach: I don't care what you look like, I don't care what your religion is; again, that's what the law requires. It's saying, we're going to ask you questions, and those questions are what we believe is most important for you to work in this organization. And we're going to use natural language processing. Let's say it's one that does a voice interview: here, answer this question, here's a real-life situation. As you know, the natural language processing goes through and looks at the words in your response, and if it meets the employer's qualifications, it's not about what you look like; nothing else is in there. That's good, right? You're just getting that raw transcript, and it's being graded based upon your answers, which you, as the employer, should be able to say are relevant to this job. Good. What's the bad? Well, again, it goes back to the design. Let's say the candidate has a very thick foreign accent. I speak fluent American English, you speak fluent British English, and the algorithms can pick us up 99% of the time, a near-perfect transcript. Somebody comes in with a very thick German accent, to use a classic example, and a rough translation into English: if it only picks them up 50% of the time because of the accent, then you and I, who speak fluent English, get a perfect transcript for the natural language processing to go through, versus that person with a thick foreign accent, or somebody with a disability who slurs or has a stutter, or somebody whose religious observance means their mouth is covered. They're going to be penalized based upon that protected characteristic. That person with a thick foreign accent may have given a hundred times better interview than we did, but if the tool only picks them up 25% of the time, that's where they're going to start out, and that's not equal opportunity. That's discrimination based on religion, national origin, disability, whatever the barrier to getting that transcript was. And then, forget about the technology: you're back into the same issues HR professionals have been dealing with, because these are just employment assessments, and employment assessments have been around since the Industrial Revolution. Before, they were done with pencil and paper. Now these tools allow a candidate to do an employment assessment at any hour of the night, and then you're going to have to show, well, is that really relevant to the job? And we've seen a lot of these employment assessments discriminate against certain groups. So a lot of these are the same concepts, and then you get to the next layer of it as well, which HR professionals are dealing with. But going back to the voice transcription: that's where we really have to rely on the vendors to show that they're able to account for a disability or an accent.
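
To illustrate the transcription gap Keith describes, here is a small, hypothetical sketch that compares how well an interview tool transcribes the same answer for different speakers. The sample sentences and group labels are invented, and the metric is a rough word-match rate rather than a formal word-error-rate calculation; a real audit would use many recordings and a proper metric.

```python
from difflib import SequenceMatcher

def word_accuracy(reference: str, transcript: str) -> float:
    """Rough word-level match rate between what was said and what the tool transcribed."""
    ref, hyp = reference.lower().split(), transcript.lower().split()
    matched = sum(block.size for block in SequenceMatcher(None, ref, hyp).get_matching_blocks())
    return matched / len(ref)

# Hypothetical samples: (speaker group, what was said, what the interview tool heard)
samples = [
    ("fluent US English", "i led the regional sales team for five years",
     "i led the regional sales team for five years"),
    ("thick accent",      "i led the regional sales team for five years",
     "i let the regional sails team for years"),
    ("speech disability", "i led the regional sales team for five years",
     "i the team for years"),
]

for group, said, heard in samples:
    print(f"{group:18s} transcription accuracy: {word_accuracy(said, heard):.0%}")

# A large gap between groups is a red flag that scores may end up penalizing a
# protected characteristic (accent, disability) rather than the content of the answer.
```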

Chris Rainey 36:59

Right, right. But even then, not everyone interviews well over video and audio. One of the best salespeople ever to work with me, during the interview process everyone was like, oh, he didn't interview well. But he was super knowledgeable. He wasn't enthusiastic, he wasn't charismatic, he wasn't all the things that make people go, oh, that's going to be a great salesperson, but he actually ended up being one of the best employees the company ever had.

Keith Sonderling 37:35

These go back to the same issues talent professionals have been struggling with to begin with, but when you integrate technology you see the scalability. Take the example I just gave, and a classic one: we want all our salespeople to smile, so we're going to use facial recognition on Zoom to see if they're looking at the camera and smiling, because that's what we want from our salespeople. Well, what happens if you can't smile because of a disability? What happens if, in the country you're from, it's inappropriate to smile during a sales meeting, but you'd be the best salesperson at that company? This gets back to a lot of the basic employment assessment issues we've been dealing with. But now that it's technology, and it's being implemented so quickly, we sometimes forget the principles you have to deal with in HR to begin with.

Chris Rainey 38:26

Are you collaborating with other agencies as well? Because obviously every agency right now has to be looking at AI and its impact. What does that collaboration look like?

Keith Sonderling 38:34

There's a lot of collaboration within the US government, and across the board, because this is a global issue. And I think that's true for HR more than other areas of AI use: because we're industry agnostic, the same HR software and AI is being used on every continent where there are workers. It's not a US-specific issue; it's a global issue. That's why you're seeing a lot of governments cooperate with each other in this area. You're also seeing the different US government agencies coming together: the FTC saying, well, if you're advertising AI employment products and making false advertising claims, we'll deal with it on that side; the EEOC deals with the buyer side; then there are the housing issues related to this, the finance issues related to this. It's really across the board, but a lot of the principles are the same. What we've been doing as an agency is putting out guidance. For instance, the first guidance we put out was about how workers with disabilities are going to use these tools, and what employers are supposed to be doing with bias audit testing. We've really been trying to see what the biggest issues are that employers who want to buy these programs are facing, the questions they have, and the questions of the people who want to develop them, and how we get ahead of that. Because, as I said at the start, this is just too big to deal with the ramifications after the fact; it's really going to impact every employee in the world.

Chris Rainey 40:06

Yeah, it goes back to your point earlier about the misconception that people only see you when there's something negative, a negative interaction with your agency.

Keith Sonderling 40:13

Right. And the difference here, unlike other areas, is that until some of these laws change, and you're seeing governments around the world doing this, employees don't know they're being subjected to these algorithms.

Chris Rainey 40:26

We're in this space, so we get it, but the average person has no idea. They probably don't even know to look into it whatsoever.

Keith Sonderling 40:35

They think, I'll just do the interview on my phone; I didn't hear anything back; I assume a human is going to look at it; I assume I'm going to meet somebody. They just don't know. So from a law enforcement perspective, if employees don't know that their civil rights are being violated by an algorithm, they can't come forward with a complaint of employment discrimination. That's why we have to be so proactive in saying, yes, these tools can work if they're properly designed and carefully used, and I really think that's the theme we need to be thinking of. Some of it you can control: having those policies and procedures in place, making sure all your HR employees who have access to these tools are properly trained in the laws and regulations of the areas where you're using them, and not only that, in your own company's culture as well. That's why you're seeing so many of these tech companies come out and say, here are our AI principles. The White House has put out a blueprint for an AI Bill of Rights saying, here's how the government is going to use AI, and you're seeing the big tech companies say, here's how we're going to use AI. I think that's really important within the organization too, and a lot of that is under your control. Some of the technical design stuff you're going to have to work on with the vendor, and they're going to have to show you how it's going to work on your workforce, with your applicant pool, for that job description, in that part of the world where you want to run that advertisement, how it's going to get you qualified candidates and not discriminate. And you train your employees to make sure they're doing it right. The second part, and this is where I really believe CHROs can take the lead, because you're doing it anyway: you have a people culture that you've built policies and procedures around, making sure you're not only compliant but aligned with your ethics and the trust within your organization. And now we're throwing all that out the window because, oh, AI is coming in, and we don't understand it as HR leaders because it's technology. That's not true. Just go back: what are you using it for, and what are your current policies and procedures around those same decisions? Now start integrating and amending those, saying: if AI is going to be in the equation, it's going to be no different from the way we've been doing it, in a lawful and ethical way. There's just a lot of chaos and noise because it's technology, and that's where I think HR leaders really need to take the lead, because you've developed those policies for your organization across the board for decades.

Chris Rainey 42:57

Yeah, it makes sense, what you were saying about using that as your solid foundation for decision making, despite the noise and every shiny object coming towards you and everything that's happening. You can still be grounded: despite the disruption and innovation, this is where we make decisions, and if something doesn't align with that, there's no way it can stay.

Keith Sonderling 43:17

That's your own culture, and that's your own organization, which may be different from others. And I think that's what's so important: that comfort level of saying, here's where we want to use AI, and here's where we don't feel comfortable using AI, because there are products out there for the A to Z of the employment relationship. You may say, we're comfortable using AI to review resumes and schedule interviews; or you may say, we're comfortable using AI to hire and fire employees and completely do performance reviews, to completely automate and outsource it. You're allowed to do that; you can delegate this to an algorithm. Now, whether there are going to be issues, whether you need a human in the loop, I'm not encouraging that. I'm just saying it goes back to: what are the principles of your organization, what kind of transparency and fairness do you already have within your organization, and how are you going to implement that with AI?

Chris Rainey 44:15

Just because you can do it doesn't mean you should. Have there been any large government or private enforcement actions related to AI?

Keith Sonderling 44:24

Yeah, like I just said, it's hard, because candidates and employees just don't know, when they're being rejected or terminated without anything face to face. What you're starting to see is things like New York City Local Law 144, which got international attention, and the EU AI Act. There's a proposal in California, and a bill was just passed in Colorado, essentially saying: if you're using high-risk systems, you're going to have to disclose it. And what is a high-risk system? Employment and HR fall into it for the most part; not scheduling an interview or filling out a form for your benefits, but making an employment decision or a salary decision would be in the higher-risk category, along with financial decisions, anything really impacting civil rights and civil liberties. Some go as far as saying you're going to have to tell the candidate: here's the software we're using, here's exactly how it's going to rate you, and here are your remedies if you want to opt out. That's the extreme; we don't have that at a national level in the United States, but you're starting to see certain areas, certain governments, require it. When that happens, and somebody doesn't get a job, and they talk to friends who look the same and are the same age who say, yeah, I didn't get a job there either, then you can say, well, maybe age played an unlawful role in the algorithm, because here are the factors it was looking for. So that's where I think a lot of it is going. If you look at the trends in the state and local laws, you're seeing consent, the requirement to opt in or the ability to opt out (a lot of these are just proposals as well), disclosure that you're being subjected to the algorithm, and bans on emotion recognition and facial recognition in some of these processes, because there's software out there that does just that, completely banning some uses of it. Some states in the United States already have this: in Illinois and Maryland there are acts that basically make facial recognition impossible to use during a candidate interview. So you're starting to see that patchwork: here are outright banned uses; if you're going to use it in certain areas, we're going to require you to tell the candidate, who is a consumer in our world, since a lot of these are consumer laws, what they're being subjected to. Until that happens, it's going to be very hard for candidates to know.

Chris Rainey 46:53

It's hard because, as you touched upon, there are so many headlines across the world about people calling for new laws for AI in the workplace. But it seems it's not as simple as it looks to make that happen, because you're talking about going state by state.

Keith Sonderling 47:08

State by state and country by country. But look at what the common themes are. I think one of the most important common themes, which the EEOC and the Department of Labor have come out and said, and even here in the UK, having worked with them, is about self-regulation and doing bias audits yourself, even if you're not required to. New York City is requiring bias audits, and the EU will require bias audits of some of these tools, to make sure they're doing what they say they're doing, pre-deployment and then yearly or as your dataset changes. And that's a good thing; we encourage it. Because, look, if you're buying an AI tool, you're saying here are the skills we believe are necessary for this job, and maybe that skill completely discriminates against a certain category. Before, in HR, you'd put out a job description and then see that not a single woman applied; it's too late at that point. Here, using this data and technology, you're able to assess in real time whether the tool is doing what it says it's supposed to do or whether it's discriminating, and fix it before you ever make a decision on someone's livelihood. That's the benefit of doing these pre-deployment audits, or yearly audits, or audits as the dataset changes. What does that mean for HR leaders? Well, if you're changing some of the job skills or job requirements, is that going to have an impact on certain groups? Being able to check that in real time and fix it before it goes live is something we haven't had before, and it will prevent discrimination and keep you in compliance with the law. You don't need a government to tell you to do that, even though a lot of them are starting to. But I think that's where a lot of this is going: if you want to use these tools, do an audit in advance, and make sure the skills you're requiring, and the way the algorithm looks at the qualifications, are actually going to get you the best candidates and not discriminate. And that's based upon long-standing law and principles.
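
One common starting point for the kind of pre-deployment bias audit described here is the "four-fifths rule" of thumb used in US disparate-impact analysis: a group whose selection rate falls below 80% of the highest group's rate is flagged for further review. Below is a minimal, hypothetical sketch; the applicant numbers are invented, and a real audit also involves statistical significance testing, job-relatedness analysis, and intersectional breakdowns.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected by the tool, number who applied)."""
    return {group: selected / applied for group, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio versus the most-selected group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical applicant flow from a resume-screening tool
outcomes = {"men": (120, 400), "women": (45, 300)}

for group, ratio in four_fifths_check(outcomes).items():
    flag = "  <-- below 0.80, investigate before going live" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```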

Chris Rainey 49:04

Yeah. I hadn't realized that AI in HR is being designated as high risk, but it makes complete sense given the civil rights at stake.

Keith Sonderling 49:15

You're dealing with fundamental rights, and they're called different things in different countries. But think about it: the ability to provide for your family. That's really up there. And that's why, going back to my agency, we're a civil rights agency, and that's what HR deals with. You're dealing with employees' civil rights not to be discriminated against.

Chris Rainey 49:39

Yeah. Do you think that we're ever going to reach a global consensus?

Keith Sonderling 49:43

I think you're starting to see a commonality between a lot of the proposals. If you look at the principles a lot of tech companies have put out, or governments have put out about their own use of it: transparency, accountability, fairness, not having bias. And you're having these state and local governments, and foreign governments, basically start to say, okay, if you're going to use AI, then disclosure, reporting in some places, auditing. Then it gets into the data world, which is a little out of my wheelhouse, around ownership of the data in the GDPR world. So I think you're going to start seeing a lot of commonalities between countries. And that's important because, like I said, especially in HR, this is global.

Chris Rainey 50:28

And most of the companies that we speak to are global companies, and different countries and continents have different laws and regulators, so it's hard to keep up as well. Things are moving so quickly. It's a real challenge.

Keith Sonderling 50:41

It is, and it's challenging a lot of people. Post-COVID, a lot of governments and a lot of groups have taken an interest in HR. We've talked so much, and you've talked on this podcast, about how HR took a huge lead during COVID and really showed the business value of the function. Well, with that comes additional scrutiny. If you look at some of the union pushes in the United States and across the world, labor, employment and HR are really taking a front seat, with employee activism and social media. These are big-ticket items for corporate boards now. And I think AI is playing such a big role in that, especially when you get into the generative AI equation.

Chris Rainey 51:26

And it's interesting to see how HR is leading the way with AI in the workplace. A few years back, people mostly talked about the applications in retail and manufacturing, and those made sense. But the applications affecting labor and the workforce itself, I think many people didn't see the real impact of, and now we're starting to see it play out.

Keith Sonderling 51:54

Right, especially with remote work and people applying for jobs all across the world, which we'd never dealt with before. You used to apply to a job at a company in your own city, and it was very easy for employers to say, well, what is the hiring pool within my city? What is its breakdown? We don't have many people from this national origin or this religion, and even if that has an impact, there's nothing we can really do about it. But now, because of the technology, essentially the whole world is your applicant pool. And that really changes the dynamics of making sure you're not excluding people, based upon a protected characteristic, at scale. So I think that's a really important concept.

Chris Rainey 52:37

How do you see this playing out over the next couple of years? You're in such a unique position, seeing it both internally and externally through the conversations you're having globally. It's great that you've taken the time to step out of the office, which many of us don't, to really understand the global impact as well.

Keith Sonderling 52:58

This technology is going to continue to develop, and the real struggle is no longer whether we use it but how we implement it properly with all these changing requirements. I think a lot of that comes down to testing. I know that sounds kind of boring, but no matter what you use it for, whether it's compensation, promotion, or even employee management, and don't forget there's a lot of software out there telling workers what to do that day and grading them with an algorithm on whether they made enough products, for each of those uses it's about doing that testing. And here's the problem: a lot of these tools are being sold as saving companies money, and undoubtedly they will, especially on the hiring side. But at the same time, we're asking companies to spend money on this advanced technology, on this transformation to AI, to remove bias and make fairer, more transparent employment decisions, all the buzzwords we have. And because the legal liability rests with the company, the next layer of spending, there's no other way to put it, is the governance program around it: making sure you have the policies and procedures in place, making sure the employees who have access to these tools get recurrent training in our laws so they're not using the tools to discriminate, and the audit component too. That's how it's going to work, and that's where the rubber is going to meet the road, because then you can use it confidently, knowing the tools are doing what they say they're doing, because you've tested it yourself.
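To make the point about re-testing as requirements or data change a little more concrete, here is a small hypothetical sketch that builds on the earlier adverse-impact idea: it re-runs a check whenever the screening configuration is updated, records the result, and holds deployment if any group is flagged. The function names, configuration labels, and numbers are all invented for illustration; they do not describe any specific vendor tool or the EEOC's own process.

```python
# Hypothetical sketch: re-run an adverse-impact check whenever screening
# criteria change, log the result, and hold deployment if anything is flagged.
import datetime

audit_log = []

def audit_before_deploy(config_name, outcomes, check):
    """Run the supplied check on hypothetical outcomes and record the result."""
    flagged = check(outcomes)
    audit_log.append({
        "config": config_name,
        "when": datetime.datetime.now().isoformat(),
        "flagged_groups": flagged,
    })
    if flagged:
        raise RuntimeError(
            f"Hold deployment of '{config_name}': possible adverse impact on {sorted(flagged)}"
        )
    return True

def simple_check(outcomes):
    # Toy check: flag groups whose selection rate is below 80% of the best group's rate.
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: round(r, 2) for g, r in rates.items() if r < 0.8 * best}

try:
    # Hypothetical results after tightening the job requirements.
    audit_before_deploy("job_requirements_v2",
                        {"group_a": (40, 100), "group_b": (20, 100)},
                        simple_check)
except RuntimeError as err:
    print(err)
```

The design choice here mirrors the governance point above: the audit runs before a changed configuration goes live, and the log gives HR and legal teams a record that the testing actually happened.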

Chris Rainey 54:43

Yeah, love it. Before I let you go, what would be your parting advice? And where can people reach out if they want to connect with you?

Keith Sonderling 54:53

Well, reaching out to me is easy: just find me on LinkedIn. As for advice, keep it simple. This is where understanding HR really matters, where you can play a huge role, and it's not as complicated as you think. I like to use this example: most companies have extremely robust sexual harassment policies. After the MeToo movement, from the board to the C-suite, everyone said, we are going to be a company that does not tolerate sexual harassment; if you harass someone, we don't care if you're the CEO or the highest sales producer, you're going to be fired. They had trainings and open-door policies, and HR led that movement. So how do we flip that and bring it into the AI equation? Number one, from the top, HR leaders need to make sure their boards and C-suite are saying, here's how we're going to ethically and lawfully use AI, especially when it comes to our employees. Number two, HR needs to build the policies saying that whether it's an applicant or a current employee being subjected to these tools, if somebody misuses them, just like if somebody sexually harasses you, there's an open door, and that person is going to be fired, just like for violating any other employee policy. So it goes back to where HR leaders can really thrive: those policies and procedures, and that blend of culture, ethics, compliance and trustworthiness. Because at the end of the day, if your employees don't trust that your pay is fair and your hiring practices are fair, that's going to lead to potential discrimination anyway. It's about going back to those basic principles and reinventing them for AI: here's how we're using it within our organization, here's how it's going to benefit you, and here's everything we're doing around it. A lot of it is building that trust within your organization, which HR leaders are very good at.

Chris Rainey 56:57

Wow. I feel like we need to do a whole series, Commissioner. I appreciate you coming on, and thank you so much for taking the time to join us. What you're doing is truly making an impact globally, and honestly it's quite refreshing that you take the time to travel and speak at events. You're really changing the stigma and the narrative around the agency, and more importantly helping HR professionals thrive and, through them, the millions of employees they serve around the world. So thank you for your time.

Keith Sonderling 57:34

It's my pleasure. And I want this technology to work, just like anyone else, because it can really help us eliminate bias. It can help us in our mission to prevent employment discrimination and advance equal employment opportunity in the workplace. That's really where I think the time and effort are worth it now, for all of us.

Chris Rainey 57:53

I agree. Thanks so much.

Keith Sonderling 57:54

Cool. Thanks for having me.
