Machine learning algorithms have become so complex that they are shaped by forces beyond our direct control. We use these tools to help us in our everyday lives. They help us make business decisions, they generate relevant search results, they show us articles on Facebook that we actually find interesting, they match us with prospective employers, and they even match us with prospective partners. In many ways, they improve our businesses, our health, our education, and our lives. But can these programs and software be discriminatory?
Although there is a widespread belief that these computer programs and algorithms are objective, there is no doubt that a significant degree of human influence is involved. Not only are the algorithms and software created by people, but they are also constantly adjusting themselves to adapt to and represent human behavior and attitudes. Therefore, they can also represent human prejudices.
For example, a study by Carnegie Mellon University showed that Google’s online advertising is sexist: an ad for a high-income job was shown more often to men than to women. When searching for “CEO” online, only about 1 of every 10 pictures is of a woman. Though you can easily find CEO Barbie on page 2.
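To see how a skew like this can emerge without anyone writing a discriminatory rule, consider a minimal sketch of an ad-serving policy that simply picks the ad with the highest observed click-through rate for each group. The click log below is entirely made up for illustration; the point is that a policy trained on a skewed log reproduces the skew.

```python
# Hypothetical click log: (group, ad) -> (clicks, impressions).
# The numbers are invented for illustration only.
historical_clicks = {
    ("men",   "exec_job_ad"): (90, 1000),
    ("men",   "generic_ad"):  (50, 1000),
    ("women", "exec_job_ad"): (40, 1000),
    ("women", "generic_ad"):  (60, 1000),
}

def choose_ad(group):
    """Pick the ad with the highest observed click-through rate for this group.

    No rule here mentions gender directly; the policy just maximizes clicks.
    """
    ctr = {ad: c / n for (g, ad), (c, n) in historical_clicks.items() if g == group}
    return max(ctr, key=ctr.get)

print(choose_ad("men"))    # exec_job_ad
print(choose_ad("women"))  # generic_ad
```

Because the historical data is skewed, the "neutral" objective of maximizing clicks ends up showing the high-income job ad mostly to men, which in turn generates more skewed data for the next round of learning.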
But it’s not just sexism; it’s racism too. Other research, from Harvard University, shows that ads for arrest records were much more likely to show up on searches for distinctively black names or a historically black fraternity.
But in the case of machine learning algorithms, it is difficult to point the finger. The responsibility does not necessarily lie in the hands of the creators, because the algorithms are designed to learn from human behavior, such as the searches and clicks of users. Therefore, instead of catalyzing the discrimination, they are simply “going along with it.” But the question is: does this reinforce discriminatory behaviors in our society? And if so, how can developers adjust the algorithm to counteract this?
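One starting point for developers is simply to measure the skew. A common audit metric, demographic parity, compares how often a system produces a favorable outcome (say, showing the high-income job ad) across groups. The sketch below uses invented data; `demographic_parity_gap` is a hypothetical helper name, not a standard library function.

```python
def demographic_parity_gap(decisions):
    """Largest difference in favorable-outcome rates across groups.

    decisions: dict mapping group name -> list of 0/1 outcomes
    (1 = favorable outcome, e.g. the ad was shown).
    A gap of 0 means every group receives the outcome at the same rate.
    """
    rates = {g: sum(v) / len(v) for g, v in decisions.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: whether the high-income job ad was shown.
shown = {
    "men":   [1, 1, 1, 0, 1],  # shown 4 of 5 times
    "women": [1, 0, 0, 0, 1],  # shown 2 of 5 times
}

gap = demographic_parity_gap(shown)
print(gap)  # 0.4
```

Once the gap is quantified, developers can set a threshold and adjust the system, for example by re-weighting training data or constraining the serving policy, until the measured gap falls within it.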