By now, we’ve all seen enough empirical and anecdotal evidence of the business value of workplace diversity to dispel any notion that it’s just another HR inconvenience. But given that if you’re human, you’re biased, how can we take bias out of the hiring equation altogether? Can artificial intelligence help us accomplish that?
I recently had the opportunity to discuss that question with Brian Delle Donne, president of Talent Tech Labs, a talent acquisition tech incubator in New York. I opened the discussion by asking Delle Donne, who has been exploring the role of AI in talent acquisition, how pervasive he has found bias to be in the IT hiring process. He said it’s very pervasive:
It’s probably not because of the way hiring goes on, as much as a skewing in the candidate pool of qualified people. Of course, the IT community is largely male-dominated, so at least as it relates to gender, the pool started off skewed in that direction. I think the bigger question for today is, if there’s going to be a concerted effort to create diversity, how best to do that. The notion of diversity, as it’s being talked about now, has fallen back into the traditional EEOC terminology. We believe that’s a significant component of it, but we also think it’s important to achieve cognitive diversity. That doesn’t just map back to things like religion, race and gender, but also to work experience and collaboration with people who have had different experiences, which helps make for a better-performing organization.
So what is the role of artificial intelligence in diminishing bias in the tech hiring process? Delle Donne said there are two ways to look at it:
The way AI works is largely predictive, and has to do with pattern-recognition types of tools being applied. It’s very good at coming up with correlations or causalities that map back to the historical data set. So if you just take it on its face, it can be very limiting if you’re just trying to replicate or accelerate hiring based on the way things have been done in the past. There are some initiatives going on that are actually trying to eradicate that algorithmic bias — it’s all about the need to have transparency in algorithms for other people to be aware of what’s going on inside this black box. There’s a lot of academic work being conducted on ways to use AI to eliminate biases that are otherwise going to naturally grow up in the AI datasets. That whole body of thinking is one side.
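The limitation Delle Donne describes can be illustrated with a toy sketch. The data and function names below are hypothetical, not any vendor’s actual system: a model that simply scores candidates by how often “people like them” were hired in the past will reproduce whatever skew the historical data contains.

```python
# Minimal sketch (hypothetical data) of how a predictive tool trained on
# historical hiring outcomes replicates the skew already in that history.
from collections import Counter

# Hypothetical history: the qualified pool started off male-dominated.
historical_hires = ["male"] * 80 + ["female"] * 20

def naive_hire_score(groups, history):
    """Score each group by its frequency among past hires."""
    freq = Counter(history)
    total = len(history)
    return {g: freq[g] / total for g in groups}

scores = naive_hire_score(["male", "female"], historical_hires)
print(scores)  # the "predictions" mirror the historical 80/20 skew exactly
```

Nothing in the scoring logic is malicious; the bias lives entirely in the dataset, which is why the transparency and algorithm-auditing work he mentions focuses on what happens inside the black box.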
The other side, Delle Donne said, involves recognizing that candidates are coming with all types of backgrounds and diversity:
You can use AI to adjust the filters that you use when you do your searches and make your matches, where artificial intelligence helps to match applicants or personal profiles to roles that you’re trying to fill in an organization. One of the companies leading in that space, HiringSolved, has taken that approach, recognizing that bias is pre-existing. In order to help overcome it, you set up filters that you can manually toggle to overcome what your perceived biases are. So if you’re trying to get diversity into a particular hiring process, and you’re emphasizing women and minorities, you can actually set filters on the way the artificial intelligence scrapes the profiles and surfaces people who will fit those criteria. So it’s used to enhance the filtering by first recognizing you’ve got a deficiency that you’re trying to overcome.
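The manually toggled filter Delle Donne describes can be sketched roughly as follows. This is a hypothetical illustration, not HiringSolved’s implementation; the `Profile` fields and the `emphasize` toggle are invented names. The idea is that skill-based ranking stays primary, and the toggle acts as a tiebreaker that surfaces equally qualified candidates from an underrepresented group.

```python
# Hedged sketch of a toggleable diversity filter in candidate matching.
# All names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    skills: set
    group: str  # demographic attribute the recruiter has chosen to emphasize

def match(profiles, required_skills, emphasize=None):
    """Rank profiles by skill overlap; break ties in favor of the emphasized group."""
    def score(p):
        base = len(p.skills & required_skills)          # primary: qualifications
        boost = 1 if emphasize and p.group == emphasize else 0  # the toggle
        return (base, boost)
    return sorted(profiles, key=score, reverse=True)

pool = [
    Profile("A", {"python", "sql"}, "majority"),
    Profile("B", {"python", "sql"}, "underrepresented"),
    Profile("C", {"java"}, "majority"),
]
# With the toggle on, the equally qualified underrepresented candidate surfaces first.
top = match(pool, {"python", "sql"}, emphasize="underrepresented")
print([p.name for p in top])  # prints ['B', 'A', 'C']
```

Leaving skill overlap as the dominant sort key reflects the point in the quote: the filter doesn’t lower the bar, it counteracts a deficiency the recruiter has already recognized.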
Finally, I asked Delle Donne if he foresees a day when all jobs will be filled with AI alone, or if there will always need to be a human element in some form. His response:
I don’t think it’s all going to be solved by technology. I think bias-busting is going to mostly come to pass by a combination of technology, adapting human processes, and applying a strong dose of training. So I do believe the human element is going to continue to be there, not just for the adjudication of potential bias, but just because you are dealing with human capital. I don’t think the technology can go the last mile. But I can foresee a day when the machine-learning aspect of AI is trained in a way that it can eliminate some of the data contamination — that’s probably too strong a word — from the correlations that have historically prevailed. And I think we can foresee a day when you can design algorithms to oversee those algorithms and undo them when they come up. But keep in mind, those are going to be programmed by programmers who are going to have to be extremely mindful of the biases they unintentionally might bring to that situation.
A contributing writer on IT management and career topics with IT Business Edge since 2009, Don Tennant began his technology journalism career in 1990 in Hong Kong, where he served as editor of the Hong Kong edition of Computerworld. After returning to the U.S. in 2000, he became Editor in Chief of the U.S. edition of Computerworld, and later assumed the editorial directorship of Computerworld and InfoWorld. Don was presented with the 2007 Timothy White Award for Editorial Integrity by American Business Media, and he is a recipient of the Jesse H. Neal National Business Journalism Award for editorial excellence in news coverage. Follow him on Twitter @dontennant.