Way more dangerous than killer robots
With traditional computing, programmers write explicit instructions for a machine to follow, step by step. With machine learning, computers find their own solutions after crunching vast amounts of data. So how could a computer end up amplifying human prejudices? It’s just data, right?
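To make that difference concrete, here’s a minimal sketch. Everything in it is hypothetical (the spam-filter task, the example messages and the word-counting “training” step are all invented for illustration), but it captures the key point: a hand-written program does exactly what its programmer said, while a learned program does whatever its examples taught it.

```python
# Traditional computing: the programmer writes the rule by hand.
def is_spam_by_rule(text: str) -> bool:
    return "free money" in text.lower()

# Machine learning (toy version): the rule is inferred from labeled examples,
# so whatever patterns the examples contain become the rule.
examples = [
    ("free money now", True),
    ("lunch at noon?", False),
    ("claim your free money", True),
    ("meeting moved to 3pm", False),
]

# "Training": treat any word that appears only in spam examples as a marker.
spam_words = {w for text, label in examples if label for w in text.lower().split()}
ham_words = {w for text, label in examples if not label for w in text.lower().split()}
learned_markers = spam_words - ham_words

def is_spam_learned(text: str) -> bool:
    return any(w in learned_markers for w in text.lower().split())

# The learned rule reflects its data: "money" alone now flags a message,
# because this tiny data set never showed "money" in a legitimate context.
print(is_spam_learned("money for the school fundraiser"))  # True
```

Swap “spam” for “strong job candidate” and “words” for “names on past résumés,” and you can already see how human prejudice sneaks into a system that’s “just data.”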
Let’s step back for a minute and look at an example of machine learning gone wrong. On March 23, 2016, Microsoft released a social chatbot named “Tay” on Twitter. Touted as an experiment in “conversational understanding,” Tay was designed to engage in “casual and playful conversation” with other Twitter users.
The experiment didn’t turn out to be very casual or playful at all. Soon after Tay launched, Twitter trolls started tweeting racist, misogynistic and hateful comments at it; it wasn’t long before Tay began tweeting inflammatory responses right back. Sixteen hours and an astounding 96,000 tweets later, Microsoft pulled the plug on Tay and apologized for the bot’s offensive language.
The Tay debacle was just one of many times that machine-learning software has developed biases based on the data it was fed. Tay’s flameout didn’t have a long-term impact on society, but there are serious situations where machine-learning bias can have real-world consequences:
- A ProPublica investigation found that machine-learning software used to predict recidivism among criminal defendants falsely flagged black defendants as future criminals at nearly twice the rate of white defendants; of the people the software predicted would commit violent crimes, only 20 percent actually went on to do so.
- Human resources departments hope AI can remove human bias from the search for diverse job candidates. In practice, though, automatic screening software can latch onto proxy criteria that reintroduce bias: the algorithm ends up seeking out, say, European-sounding male names simply because previous hires had similar names (the sketch after this list shows how that happens).
- Some overseas credit-rating companies are using unethical machine-learning tactics to analyze the social networks of loan applicants: being connected to someone with good credit is a point in an applicant’s favor, while connections to friends with poor credit count against them. This can lead to denied loans, increased car-insurance rates and worse.
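That résumé-screening failure mode is easier to see in code. Below is a minimal, hypothetical sketch (the names, the hiring history and the frequency-based “model” are all invented): a screener that scores candidates by how often similar names were hired in the past simply converts yesterday’s bias into today’s policy.

```python
from collections import Counter

# Hypothetical historical records: (candidate's first name, was hired?)
history = [
    ("Greg", True), ("Anders", True), ("Greg", True), ("Jan", True),
    ("Lakisha", False), ("Jamal", False), ("Mei", False), ("Lakisha", False),
]

hires_by_name = Counter(name for name, hired in history if hired)
seen_by_name = Counter(name for name, _ in history)

def screening_score(name: str) -> float:
    """Score a candidate by the historical hire rate for their name.

    Nothing here measures qualifications; the "model" has quietly learned
    that candidates who resemble past hires are the safe bet.
    """
    if name not in seen_by_name:
        return 0.0  # unfamiliar names get no benefit of the doubt
    return hires_by_name[name] / seen_by_name[name]

for candidate in ("Greg", "Lakisha", "Priya"):
    print(f"{candidate}: {screening_score(candidate):.2f}")
# Greg: 1.00, Lakisha: 0.00, Priya: 0.00
```

Notice that nothing in the code mentions race, gender or ethnicity; the bias rides in entirely on a proxy feature, which is exactly what makes it so hard to spot in real systems.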
So if data is biased because the people who compile it are naturally biased, what can we do to curb that bias?
Pledge to do no harm
Launch broad ethics oversight
As machine learning advances, its effects will be felt throughout society, so it makes sense that its regulation shouldn’t be left entirely to engineers and tech companies. Watchdog groups such as the Ethics and Governance of Artificial Intelligence Fund get input from social scientists, ethicists, philosophers, faith leaders, economists, lawyers and more to make sure a wide range of perspectives is incorporated into their ethical guidelines.
Diversify the AI industry
Representation matters, especially when you’re trying to remove unconscious biases baked into data sets. Organizations such as Women in Machine Learning and Black in AI mentor and encourage the advancement of people underrepresented in the field. And just this month the African Institute for Mathematical Sciences announced a new machine learning master’s degree program, the first of its kind in Africa, which Facebook and Google are helping fund.
Get started