
    The Struggles & Solutions to Bias in AI

    Artificial intelligence (AI) is commonplace in today’s business world, with only 7 percent of businesses not yet using it. While AI is great for tasks like sales forecasting and inventory management, problems often arise when businesses use it for hiring or for predictive analytics involving people. However, AI isn’t the enemy here; the real problem is bias, and if organizations use AI correctly, it can actually help combat these biases.

    AI and Bias

    The Problems of AI Bias

    As helpful as AI is for businesses, organizations often run into problems because their AI models are biased. Flawed training data leads to inaccuracies, and historical data doesn’t always provide the full picture that AI needs to make an informed decision.

    Accuracy

    Unfortunately, AI isn’t always as accurate as developers would like it to be. For example, one facial recognition study found a maximum error rate of just 0.8 percent when identifying light-skinned men, while the error rate for identifying dark-skinned women was as high as 34.7 percent. That means roughly one in three dark-skinned women may be misidentified, compared with fewer than one in 100 light-skinned men.

    This discrepancy likely comes from the model’s training. Developers might train facial recognition software with more photos of light-skinned people than dark-skinned people, and with more photos of men than women, automatically making the platform better at identifying light-skinned men. The more training an AI model gets on images of minority groups, the more accurate its judgments will become.

    Using historical data to inform future decisions

    Many types of AI used in business applications rely on historical data to inform future decisions. That isn’t bad on its own, but when the decisions involve people, it often introduces bias. For example, a system might automatically reject a rental application from a black family in a majority-white neighborhood, or a healthcare algorithm might prioritize the needs of white patients over those of black patients.


    In hiring, this might lead an AI tool to favor men over women or white candidates over minorities, because that’s who the company has hired in the past. Companies especially need to be cognizant of this if they’ve been around a long time, say 50 or 60 years or more. Past bias informs future bias, and organizations can’t move forward until they address it.

    The Solutions to AI Bias

    For companies to use AI successfully, they need to implement it in objective processes where bias can’t affect the outcome, or use it to hide the factors that might introduce bias.

    Blind hiring

    While not intentional, managers often bring their own biases into hiring decisions. Race, gender, and even college affiliation can color their view of a candidate before the interview even starts. AI-enabled hiring platforms, like XOR or HireVue, can hide these details from a resume and evaluate candidates purely on skills and experience.

    “When used properly, AI can help to reduce hiring bias by creating a more accessible job application process and completely blind screening and scheduling process,” says Birch Faber, marketing VP for XOR. “Companies can also create an automated screening and scheduling process using AI so that factors like a candidate’s name, accent, or origin don’t disqualify a candidate early in the hiring process. Every candidate gets asked the same questions and is scored by the AI in the same exact fashion.”

    Removing discriminatory language


    Whether accidentally or on purpose, companies often include language in their job descriptions that is gendered or otherwise discriminatory. For example, phrases like “one of the guys” or defaulting to “he” often deter female candidates from applying, while requirements to lift a certain amount of weight rule out disabled candidates and aren’t necessary for many of the jobs they appear in.

    Text analysis tools use AI to examine job descriptions for gender bias and weak or choppy language. This not only helps organizations reach a more diverse audience, it also strengthens their job descriptions so they attract the right candidates. Gender Decoder is a free tool that examines text for gendered language; companies can paste in their job descriptions to make sure they’re not unintentionally driving applicants away.
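    The core of this kind of scan can be sketched in a few lines of Python. The word lists below are short, made-up samples for illustration only; tools like Gender Decoder use much longer, research-backed lists:

    ```python
    import re

    # Illustrative samples of gender-coded terms, not the actual lists
    # used by Gender Decoder or similar tools.
    MASCULINE_CODED = {"aggressive", "competitive", "dominant", "ninja", "rockstar"}
    FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal"}

    def scan_job_description(text):
        """Return the masculine- and feminine-coded words found in a posting."""
        words = set(re.findall(r"[a-z]+", text.lower()))
        return {
            "masculine": sorted(words & MASCULINE_CODED),
            "feminine": sorted(words & FEMININE_CODED),
        }

    posting = "We want a competitive, aggressive rockstar who is also collaborative."
    print(scan_job_description(posting))
    # {'masculine': ['aggressive', 'competitive', 'rockstar'], 'feminine': ['collaborative']}
    ```

    A real tool would also weigh word frequency and context, but even this simple matching surfaces the terms most likely to skew an applicant pool.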

    Good Business Examples of AI

    Despite its potential for bias, AI isn’t all bad and has some great use cases in business. Here are a couple of examples.

    Improving hiring diversity

    By pulling important information from resumes like work experience and relevant skills, AI can remove some of the bias from the early stages of hiring. It then presents that information to hiring managers while hiding the candidate’s name, gender, and college or university, forcing them to focus on the objective parts of an applicant’s resume.

    Using AI in this way promotes talent and experience over referrals, which often look like the referrer. It can also help organizations fix their job descriptions to remove gendered and non-inclusive language and give them a larger hiring pool. 
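    The redaction step described above can be illustrated with a minimal sketch. The field names here are a hypothetical schema; real platforms parse free-text resumes with far more sophisticated models:

    ```python
    # Hypothetical fields that could reveal protected characteristics.
    HIDDEN_FIELDS = {"name", "gender", "school"}

    def blind_candidate(profile):
        """Return a copy of a candidate profile with identifying fields removed."""
        return {k: v for k, v in profile.items() if k not in HIDDEN_FIELDS}

    candidate = {
        "name": "Jane Doe",
        "gender": "female",
        "school": "State University",
        "years_experience": 6,
        "skills": ["SQL", "forecasting"],
    }
    print(blind_candidate(candidate))
    # {'years_experience': 6, 'skills': ['SQL', 'forecasting']}
    ```

    The hiring manager then sees only experience and skills, which is exactly the framing blind-screening platforms aim for.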

    Also read: Why the Tech Industry Struggles with DE&I

    Employee scheduling

    The average manager spends 140 hours per year making shift schedules for manufacturing, retail, and service industry jobs. For an eight-hour workday, that’s almost 18 days that they could be spending to improve the business instead. AI scheduling programs can automate this process based on skill sets to take work off the manager’s plate. They can also forecast likely sales, telling businesses how many people they’ll need on staff each day.
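    The skill-based matching these tools automate can be sketched as a simple greedy assignment. The shifts, employees, and skill names below are invented for illustration; commercial schedulers also handle availability, labor rules, and demand forecasts:

    ```python
    # Invented example data: each shift needs one skill; employees have skill sets.
    shifts = [
        {"day": "Mon", "skill": "register"},
        {"day": "Mon", "skill": "stockroom"},
        {"day": "Tue", "skill": "register"},
    ]
    employees = {
        "Ada": {"register"},
        "Ben": {"stockroom", "register"},
    }

    def assign_shifts(shifts, employees):
        """Greedily give each shift to the least-loaded qualified employee."""
        load = {name: 0 for name in employees}
        schedule = []
        for shift in shifts:
            qualified = [n for n, skills in employees.items() if shift["skill"] in skills]
            if not qualified:
                schedule.append((shift["day"], shift["skill"], None))
                continue
            pick = min(qualified, key=lambda n: load[n])  # balance workloads
            load[pick] += 1
            schedule.append((shift["day"], shift["skill"], pick))
        return schedule

    print(assign_shifts(shifts, employees))
    ```

    Even this naive version removes the repetitive matching work; the AI layer in real products mainly improves the demand forecast that decides how many shifts to open in the first place.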

    Bad Business Examples of AI

    As of right now, AI isn’t accurate enough for every business use. Here are some applications companies should avoid until the technology improves.

    Predictive analytics for mortgage loans

    Some banks use predictive analytics to determine whether they should grant mortgages to people. However, minority groups often have less information in their credit histories, so they’re at a disadvantage when it comes to getting a mortgage loan or even approval on a rental application. Because of this, predictive models used for granting or rejecting loans are not accurate for minorities, according to a study by economists Laura Blattner and Scott Nelson. 


    Blattner and Nelson analyzed the amount of “noise” in credit reports, meaning data that institutions can’t use to make an accurate assessment. A credit score, for example, could overestimate or underestimate the actual risk an applicant poses, so the problem isn’t the algorithm itself but the data informing it. “It’s a self-perpetuating cycle,” Blattner says. “We give the wrong people loans and a chunk of the population never gets the chance to build up the data needed to give them a loan in the future.”

    Background checks with facial recognition software

    Some workplaces perform background checks on applicants that include a facial recognition component. However, facial recognition has high error rates for minorities and could cause applicants to be rejected based on mistaken identity. Evident ID is one company that offers facial recognition-enabled background checks, comparing applicant-provided images against online databases and government documents.

    Verifying that people are who they claim to be by comparing their photos to government IDs is a reasonable goal. However, these checks can be risky because glasses, scars, and even makeup can skew the results.

    Using AI Correctly in Your Business

    If you’re ready to implement AI in some of your business processes, start with objective work like forecasting, scheduling, and workflow automation. More advanced AI users can consider integrating it into their recruiting strategy to remove some bias from hiring and widen their pool of candidates. Until AI becomes more accurate, businesses shouldn’t use it to make decisions about people. It’s too easy for bias to creep in and perpetuate the diversity issues that many organizations already face.

    Read next: Using Responsible AI to Push Digital Transformation

    Jenn Fulmer
    Jenn Fulmer is a content writer for TechnologyAdvice, IT Business Edge, and Baseline currently based in Lexington, KY. Using detailed, research-based content, she aims to help businesses find the technology they need to maximize their success and protect their data.
