
'Self-driving cars more likely to hit black people'


SELF-DRIVING cars are more likely to hit black people, according to a new study.

Academics at Georgia Institute of Technology have revealed the possible deadly implications of what researchers have termed “algorithmic bias”.

Using the Fitzpatrick scale, a human skin tone classification system, authors of the study looked at how state-of-the-art object detection models identified people of varying skin colour.

The study, Predictive Inequity in Object Detection, concluded that standard models appeared to exhibit higher precision when detecting lighter skin tones.

When analysing the models' ability to correctly detect light-skinned people versus dark-skinned people, the researchers found the tools were five per cent less accurate at detecting dark skin tones. This accuracy gap remained when the study's authors altered the time of day or obstructed the view of the pedestrians, Vox reported.
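A minimal sketch of the kind of per-group comparison the study describes. The data, function name and numbers below are invented for illustration and do not come from the paper:

```python
# Hypothetical illustration of measuring a per-group detection accuracy gap.
# All data here is invented; 1 = pedestrian present/detected, 0 = absent/missed.

def group_accuracy(detections, labels):
    """Fraction of labelled pedestrians the model actually detected."""
    hits = sum(1 for d, l in zip(detections, labels) if d == l == 1)
    total = sum(labels)
    return hits / total if total else 0.0

# Invented ground truth and predictions for two skin-tone groups.
light_labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
light_preds  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]   # 9 of 10 detected
dark_labels  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
dark_preds   = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]   # 8 of 10 detected

gap = group_accuracy(light_preds, light_labels) - group_accuracy(dark_preds, dark_labels)
print(f"accuracy gap: {gap:.0%}")  # a positive gap means lighter tones are detected more often
```

In a real evaluation the same comparison would be run over bounding-box detections scored against annotated images, with each pedestrian tagged by Fitzpatrick category rather than a simple binary label.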

The motivation for the research came after its authors noted object detection technology had higher rates of errors in identifying people from certain demographics.

“We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models,” the study’s authors said.

While the researchers did not test any object detection models, sensors or cameras currently used by self-driving vehicles, did not use the training datasets used by manufacturers, and the study has not yet been peer reviewed, its findings have prompted debate among AI experts.

Kate Crawford, co-founder of AI Now Institute, said: “In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers. But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks.”

The issue of artificial intelligence bias is one that the digital industry has been called on to address.

A number of technology giants, including Microsoft, Google and Facebook, have been developing tools to detect bias in their systems.

In 2016, Joy Buolamwini launched the Algorithmic Justice League, after she discovered that her face was only detected by facial recognition technology if she wore a white mask.

