Oscar Wilde once argued that life imitates art more than art imitates life. Strangely, that’s proving to be the case when it comes to AI development – but not in the way some had hoped.
On Star Trek: The Next Generation, the android Data was constantly striving to evolve his programming and become more human. Real-world AI is on a similar trajectory: systems have advanced to the point where people are starting to envision what a workforce augmented by robots might look like. But as AI has grown more humanlike, a distinctly human roadblock has emerged in the application of the technology: bias.
Wasn’t this supposed to be our shot at getting it right? Since human bias doesn’t appear to be going anywhere soon, technology was supposed to succeed at eliminating bias where human intelligence had failed miserably. Yet here we are, dealing with the same issues we’ve faced in the humans-only world, along with a crop of new challenges.
Addressing AI bias starts with understanding it, and that means understanding how bias is introduced into AI systems in the first place.
Read more here. (Originally published on Forbes)