Amazon Shut Down Its Recruitment AI For Gender Discrimination


This is a most amusing little story from Amazon: they wanted to use AI to aid in hiring decisions and found that they weren’t getting much aid from it. Thus, of course, they stopped. The amusement comes from the fact that it proves what Gary Becker said all those years ago – discrimination is costly. Thus we’d expect a market-based economic system not to contain all that much discrimination. And that is indeed the reason Amazon shut it down: it was costly in the manner it discriminated.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars – much like shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

Which is amusing. That the AI has to be trained means it must use past data to find the patterns that become the basis of decision-making. But if that past data itself contains prejudice or bias, then the AI is going to be prejudiced or biased too.
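The mechanism is easy to demonstrate with a toy sketch – this is not Amazon’s actual system, and the resume keywords and scoring scheme here are entirely made up for illustration. A scorer trained on historical hiring outcomes inherits whatever bias those outcomes contain:

```python
# Toy illustration (hypothetical data, not Amazon's model): score resume
# words by how often they appeared in previously hired vs rejected resumes.
from collections import Counter

def train(history):
    """history: list of (resume_words, hired_bool). Returns per-word weights."""
    hired, rejected = Counter(), Counter()
    for words, was_hired in history:
        (hired if was_hired else rejected).update(words)
    vocab = set(hired) | set(rejected)
    # A word scores high if it showed up more among the hired (add-one smoothed).
    return {w: (hired[w] + 1) / (rejected[w] + 1) for w in vocab}

def score(weights, words):
    # Average the per-word weights; unseen words are neutral (1.0).
    return sum(weights.get(w, 1.0) for w in words) / len(words)

# Hypothetical history: equally qualified candidates, but past hiring skewed
# male, so the word "womens" happens to co-occur with rejection.
history = [
    (["python", "java", "captain", "chess"], True),
    (["python", "java", "chess"], True),
    (["python", "java", "womens", "chess"], False),
    (["python", "womens", "chess"], False),
]
weights = train(history)

a = score(weights, ["python", "java", "chess"])
b = score(weights, ["python", "java", "womens", "chess"])
print(a > b)  # True: "womens" drags the score down despite identical skills
```

The model never sees gender as a feature; it simply learns that a proxy word correlated with past rejections, which is reportedly much like what happened at Amazon.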

Now, normally, this would be a good thing. For we’re trying to train the AIs to be useful in this world, as people are, not as they should be:

This brings us to another common argument about AI — that it should not incorporate the things we know about actual human beings.

For example, we know that some to many humans are racist, misogynist, greedy and short-termist. AI too can pick up those foibles, and can definitely show what we would call prejudice.

Insisting they do not is to miss the point entirely. The only possible use of AIs is to provide us with knowledge about the world we live in, knowledge we cannot derive purely from logic but which can only be gained through data processing.

After all, the world is full of deeply prejudiced human beings. An AI which didn’t account for that would have little value in describing our world. That’s why we should not just want, but must absolutely insist that AIs do incorporate our errors.

This is one of those things which is generally true but which isn’t specifically so.

There is quite a movement out there to insist that all algorithms, all AIs, must be audited. That there can be no black boxes – we must know the internal logic and information structures of everything. This is so we can audit them to ensure that none of the either conscious or unconscious failings of thought and prejudice that humans are prey to are included in them.

But, as above, this fails on one ground – that we humans are prey to such things. Thus a description of, or calculation about, a world inhabited by humans must at least acknowledge, if not incorporate, such prejudices. Otherwise the results coming out of the system aren’t going to be about this world, are they?

Again, generally true. But not in this specific case. For one of the things Amazon would like to do is work out when it is being discriminatory to its own cost. Like, say, when it doesn’t hire qualified and nice and cheap – cheap because they’re being discriminated against – women? Making the same mistake as everyone else means losing money, or at least not making as much as you could. In a capitalist world this is a bad thing. And it’s that market bit – that others can be doing things a different way – which militates against your doing this.

So, here, Amazon would definitely like not to be taste discriminating against women candidates. They’d still like to be rationally discriminating, of course – discriminating between candidates is the very point of selecting people to be hired in the first place.

So, why did Amazon can the gender-discriminatory AI? Because it was losing Amazon money by not hiring qualified women. So why do we need regulation of AI when markets achieve it for us?