November 19, 2018
An MIT study has revealed that the way artificial intelligence systems collect data often makes them racist and sexist.
Researchers looked at a range of systems, and found many of them exhibited a shocking bias.
The team then developed a system to help researchers make sure their models are less biased.
‘Computer scientists are often quick to say that the way to make these systems less biased is to simply design better algorithms,’ said lead author Irene Chen, a PhD student who wrote the paper with MIT professor David Sontag and postdoctoral associate Fredrik D. Johansson.
‘But algorithms are only as good as the data they’re using, and our research shows that you can often make a bigger difference with better data.’
In one example, the team looked at an income-prediction system and found that it was twice as likely to misclassify female employees as low-income and male employees as high-income.
They found that if they increased the size of the dataset by a factor of 10, those mistakes would happen 40 percent less often.
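The kind of disparity described above can be checked by comparing a classifier's error rate within each group. The sketch below is purely illustrative: the data, labels, and numbers are invented for demonstration and are not from the MIT study.

```python
# Illustrative sketch: measuring group-wise misclassification rates
# for a binary income classifier. All data here is made up; it is
# NOT from the MIT study or any real income dataset.

def error_rate(y_true, y_pred, group, g):
    # Fraction of examples belonging to group g that the model got wrong.
    pairs = [(t, p) for t, p, s in zip(y_true, y_pred, group) if s == g]
    return sum(t != p for t, p in pairs) / len(pairs)

# Toy labels: 1 = high income, 0 = low income.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [0, 0, 0, 0, 1, 1, 0, 1]   # hypothetical model predictions
group  = ["F", "F", "F", "F", "M", "M", "M", "M"]

print(error_rate(y_true, y_pred, group, "F"))  # error rate on "F" examples
print(error_rate(y_true, y_pred, group, "M"))  # error rate on "M" examples
```

In this toy setup the model's error rate on one group is twice that on the other, which is the sort of gap the researchers describe; a larger or better-balanced dataset would be one way to try to narrow it.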