October 18, 2017
Imagine a future where you are regularly stopped and searched by the police, based simply on bad information fed into a computer.
That is the fear of one authority on the subject, who is concerned that human biases and errors are being programmed into machine learning systems.
Studies have already shown that AI systems, including predictive policing tools, can amplify human biases when put into practice.
Experts worry that, unless something is done now to prevent it, this could lead to a ‘toxic’ future in which machines make bad decisions on our behalf.
The claims were made in an in-depth report written for the Wall Street Journal (WSJ) by New York University (NYU) research professor Kate Crawford.
Along with colleagues from legal, economic, social science and other backgrounds, she is launching the AI Now Institute to study the complex social implications of this rapidly developing technology.