Bias baked in: AI, machine learning, and ethics

In its blog this week, DeepMind, Alphabet's machine learning subsidiary, announced that it is launching an initiative called 'Ethics & Society'. DeepMind writes, "Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work".

At a time when Silicon Valley faces constant criticism for bias, for intentionally deceiving regulators and for tone-deaf approaches to user privacy, the Ethics & Society initiative is a proactive effort to consider what society expects of technology and how citizens outside the tech community want the digital sphere to serve human needs.

As the University of Sydney's Sandra Peter explained on Q&A on Monday night, the problem is not that evil computers are coming to get us.

“The real problem with AI is that the second generation has in-built bias… We don’t train them to be biased, but they’re modelled on the real world. They creep into how we get our loans, they creep into who gets a job, who gets paroled, who gets to go to jail. And that’s the real fear with AI – those types of biases that we’re building in – and we don’t know, and there’s no easy way to fix them.”
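Peter's point can be made concrete with a toy experiment. The sketch below uses invented data, not any real lending system: a model is trained on historical loan approvals that carry a built-in penalty against one group, and even though the model never sees the group label, a correlated feature (here, a hypothetical postcode) lets it reproduce the bias anyway.

```python
# A minimal sketch (hypothetical data) of how bias "creeps in":
# a model trained on historically biased loan decisions reproduces
# the bias even when the protected attribute is left out, because
# a correlated feature acts as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# Income: identical distribution in both groups.
income = rng.normal(50, 10, n)

# Postcode correlates with group -- a proxy feature.
postcode = group + rng.normal(0, 0.3, n)

# Historical approvals: driven by income, but with a built-in
# penalty against group 1. This is the bias the data inherits.
historical_approval = (income - 8 * group + rng.normal(0, 5, n)) > 45

# Train only on "neutral" features: income and postcode.
X = np.column_stack([income, postcode])
model = LogisticRegression().fit(X, historical_approval)

# The model still approves group 0 far more often, despite never
# seeing the group label: the bias was baked into the training data.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

This is why, as Peter notes, there is no easy fix: simply dropping the sensitive attribute from the inputs does not help when other features stand in for it.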

Read more: DeepMind / Nature / ABC

Image: Sandra Peter on Q&A / ABC iView

This story is taken from the 6 October 2017 edition of The Warren Centre’s Prototype newsletter. Sign up for the Prototype here.