The results from Joy Buolamwini’s research on facial recognition accuracy are disappointing to say the least.
Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males.
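The key methodological point behind these numbers is disaggregation: instead of one overall accuracy figure, error rates are reported separately per demographic subgroup. A minimal sketch of that calculation, using entirely hypothetical prediction data (the group names, records, and function are illustrative, not the study's actual methodology):

```python
# Sketch: compute error rates disaggregated by demographic group,
# rather than a single overall accuracy. Data below is hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Per-group error rate = misclassified / total for that group
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical classifications: (subgroup, predicted label, true label)
sample = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
print(error_rates_by_group(sample))
# {'darker_female': 0.5, 'lighter_male': 0.0}
```

An aggregate accuracy over this tiny sample would be 75 percent, which hides the fact that one group is misclassified half the time. That is exactly the kind of gap a single headline number conceals.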
Those are bad numbers, but they shouldn’t be surprising—not when we’re training these algorithms on poorly constructed data sets.
One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.
The stakes are just too high for us to continue to build technology without making sure we’re taking off our blinders and accounting for our biases. Oversights like this leave people out, at best. At worst, they do real harm.