TL;DR
It is difficult to deny that humans make biased decisions. Unconsciously, we all make choices based on prejudices and flawed associations, and the bias we introduce into business decisions can trickle through entire organisations, from recruitment to market segmentation. AI, with its lack of consciousness, human experience and gut feelings, has the potential to remove bias from businesses, and yet all too often AI is found to exhibit the same biases that we do.
Where does AI bias come from?
An algorithm is only as good as the data it is trained on, and the source of bias in AI is frequently either biased data or biased sampling of that data. An algorithm trained only on data from Caucasian males will not make informed decisions about other ethnicities or genders. Some developers remove labels that can introduce bias, such as gender labels, only to find that the resulting algorithm has absorbed gender bias through a proxy variable, such as words used predominantly by one gender. The sketch below shows one way that kind of leakage can be spotted.
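As a rough illustration, assuming a Python and scikit-learn setup and a purely synthetic dataset (the column names, correlations and threshold below are invented for the example), one way to check for proxy leakage is to ask how well the remaining features can still predict the attribute that was removed:

```python
# A minimal sketch of a "proxy leakage" check: even after dropping an explicit
# gender column, the remaining features may still predict gender far better
# than chance, so a model trained on them can pick the bias back up.
# The data here is synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)                  # protected attribute (to be dropped)
word_count_sport = rng.poisson(3 + 4 * gender)  # proxy feature: correlates with gender
years_experience = rng.normal(8, 3, n)          # roughly independent feature

df = pd.DataFrame({
    "gender": gender,
    "word_count_sport": word_count_sport,
    "years_experience": years_experience,
})

# "Fairness through unawareness": drop the protected label before training.
features = df.drop(columns=["gender"])

# Audit: how well do the remaining features predict the dropped attribute?
leakage = cross_val_score(
    LogisticRegression(max_iter=1000), features, df["gender"], cv=5
).mean()
print(f"Gender recoverable from remaining features: {leakage:.2f} accuracy")
```

Accuracy well above chance suggests the dropped attribute is still encoded elsewhere in the data, and that simply deleting the label has not removed the bias.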
Why should you care about AI bias?
AI free from bias can support better decision-making, not just by weighing more variables more quickly than a human can, but also by avoiding the pitfalls of clouded human judgement. For example, a rigorous algorithm that has been audited for bias could examine a much wider pool of applicants and apply the same fair tests across your entire recruitment pipeline, finding the best possible candidate for the job rather than the candidate who best fits an outdated benchmark.
How can AI bias be removed?
Collaboration is key to developing ethical, unbiased AI. A bias-management strategy should be built into every step of the development process to catch bias before it is introduced. Following the Ethics Guidelines for Trustworthy AI, algorithms should be lawful, ethical, and robust. Common to all three is that AI should be auditable for bias, so that bias can either be removed or compensated for through targeted training and human intervention. A simple example of such an audit is sketched after this paragraph.
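As a minimal sketch (again in Python, with made-up shortlist decisions and group labels standing in for a real model's output), one common audit compares a model's positive-outcome rate across groups, often described as demographic parity or the disparate-impact ratio:

```python
# A minimal sketch of one auditable fairness check: demographic parity.
# The toy predictions, group labels and metric choice are illustrative
# assumptions, not a complete bias-management strategy.
import numpy as np

def demographic_parity_report(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-outcome rate per group, plus the ratio of the lowest to the highest rate."""
    rates = {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return {"selection_rates": rates, "disparate_impact_ratio": ratio}

# Example: shortlist decisions (1 = shortlisted) from some hiring model,
# alongside the demographic group of each applicant.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(demographic_parity_report(preds, groups))
# Expect selection rates of 0.6 (group a) vs 0.4 (group b), a ratio of about 0.67.
```

A ratio well below the conventional four-fifths mark is a rough but widely used flag for disparate impact, and would be one trigger for the targeted retraining or human intervention described above.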
Ultimately, it is easier to find and remove bias in an algorithm than in a human. However, a diverse team is more likely to understand and identify areas of bias in an algorithm, and an organisation that values fairness and equality is less likely to produce biased training data. For business leaders hoping to deploy unbiased AI, addressing existing areas of bias within the business is a good place to start.
With AI we can remove bias from our decisions, but only if we actively remove our own biases from AI.