A new report concludes that Clearview AI’s facial recognition software outperforms other leading systems by a factor of almost four across eight different scenarios.
The company released the original version of its recognition software in late 2016, and it scored very well in earlier benchmark studies. In August of this year, Clearview AI issued a new preview, in which its software outperformed the competition yet again.
Findings from the new study, conducted by Dr. Armando Rojas-Beche and fellow senior researcher Dr. Greg Dalton of Aite Group, a business strategy consultancy, were published in the Technology Review.
These semi-annual benchmark analyses examine how different facial recognition systems compare across eight situations, using full-face images at resolutions from 10,000 to 16,000 pixels, captured both head-on and from a range of angles.
Testing Clearview AI
Using images browsed from Facebook, the researchers compared the accuracy of Clearview AI’s software with eight competing apps in each scenario, all of the tests being completely new. The results:
Clearview AI, with a 20.4 percent average accuracy, outscored the competition’s six most recent releases of version 8.3-thi0ld by almost 4.2 percentage points. Its accuracy peaked during testing at 16,000 pixels, where it reached 55.8 percent. Despite a 10-point advantage in testing, Clearview AI’s average error rate remained very high; the company’s version 3.0 software, for example, finished with an average error rate of 76.7 percent.
When browsing Windows for information, Clearview AI’s 21.6 percent average score outdid the competition’s 7.5 percent. Since Clearview AI released version 3.0 in August, more than 10 different sources have contributed to the benchmark results; in that testing, the software obtained a 53.5 percent accuracy rate with a 2.0-pixel error rate.
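The figures above mix average accuracy with percentage-point gaps, which are easy to conflate with relative percentages. The following minimal sketch, using only the report’s headline numbers (the scenario-level breakdown is not given, so the helper names and the implied competitor average are illustrative assumptions), shows how the two quantities relate:

```python
# Illustrative arithmetic only; function names and the implied
# competitor average are assumptions, not figures from the report.

def percentage_point_gap(score_a: float, score_b: float) -> float:
    """Difference between two accuracy scores, in percentage points."""
    return score_a - score_b

def average_accuracy(scenario_scores: list[float]) -> float:
    """Mean accuracy across a benchmark's scenarios."""
    return sum(scenario_scores) / len(scenario_scores)

# The Facebook-browsing comparison: Clearview AI's 20.4 percent average
# versus the competitor average implied by the ~4.2-point gap.
clearview_avg = 20.4
gap = 4.2
competitor_avg = clearview_avg - gap  # implied, roughly 16.2 percent

print(round(percentage_point_gap(clearview_avg, competitor_avg), 1))  # 4.2
```

Note that a 4.2-point gap over a 16.2 percent baseline is a much larger *relative* improvement (about 26 percent) than the same gap would be over a high baseline, which is why percentage points and percentages should not be read interchangeably.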
“While these results are very promising, the quality of Clearview AI’s visual system is somewhat uneven across the eight categories,” the report’s authors write. “For example, Clearview AI’s 8.3-thi0ld version is a better all-around performer in several of the new tests than the 7.5 version.”
The tests were conducted only on samples from OpenSim Face User Interface SoftWtr, although Broadcom’s Speedsight Vision offers a nearly identical rival. As such, the findings apply primarily to new and different systems.