AI Weekly: AI phrenology is racist nonsense, so of course it doesn’t work

June 13, 2020 | Big Data

In a paper titled “The ‘Criminality From Face’ Illusion,” posted this week on arXiv.org, a trio of researchers surgically debunked recent work claiming that AI can determine criminality from people’s faces. Their primary target is a paper whose authors claim to do just that, boasting results with accuracy as high as 97%.

But the authors representing the IEEE — Kevin Bowyer and Walter Scheirer of the University of Notre Dame and Michael King of the Florida Institute of Technology — argue that this sort of facial recognition technology is “necessarily doomed to fail,” and that the strong claims are primarily an illusory result of poor experimental design.

In their rebuttal, the authors show the math, so to speak, but you don’t have to comb through their arguments to know that claims about being able to detect a person’s criminality from their facial features are bogus. It’s just modern-day phrenology and physiognomy.

Phrenology is an old idea that the bumps on a person’s skull indicate what sort of person they are and what type and level of intelligence they can attain. Physiognomy is essentially the same idea, but it’s even older and infers who a person is from their overall physical appearance rather than the shape of their skull. Both are deeply racist ideas that fed “scientific racism” and served as justification for atrocities such as slavery.

And both ideas have been widely and soundly debunked and condemned, yet they’re not dead. They were just waiting for some sheep’s clothing, which they found in facial recognition technology.

The problems with accuracy and bias in facial recognition are well documented. The landmark Gender Shades work by Joy Buolamwini, Dr. Timnit Gebru, Dr. Helen Raynham, and Deborah Raji showed how major facial recognition systems performed worse on women and people with darker skin. Dr. Ruha Benjamin, author, Princeton University associate professor of African American Studies, and director of the Just Data Lab, said in a talk earlier this year that those who create AI models must take social and historical contexts into consideration.

Her assertion is echoed and unpacked by cognitive science researcher Abeba Birhane in her paper “Algorithmic Injustices: Towards a Relational Ethics,” which won the Best Paper Award at the Black in AI workshop at NeurIPS 2019. Birhane wrote in the paper that “concerns surrounding algorithmic decision making and algorithmic injustice require fundamental rethinking above and beyond technical solutions.”

This week, as protests continue all around the country, the social and historical contexts of white supremacy and racial inequality are on full display. And the dangers of facial recognition use by law enforcement are front and center. In a trio of articles, VentureBeat senior AI writer Khari Johnson detailed how IBM walked away from its facial recognition tech, Amazon put a one-year moratorium on police use of its facial recognition tech, and Microsoft pledged not to sell its facial recognition tech to police until there’s a national law in place governing its use.

Which brings us back to the IEEE paper. Like the work done by the aforementioned researchers in exposing broken and biased AI, these authors are performing the commendable and unfortunately necessary task of picking apart bad research. In addition to some historical context, they explain in detail why and how the data sets and research design are flawed.
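
To see why a headline accuracy number proves nothing here, consider the kind of design flaw the rebuttal describes: if the “criminal” and “non-criminal” images are gathered from different sources, a model can score near-perfectly by learning the source rather than anything about the face. The toy sketch below is a hypothetical illustration of that general failure mode on synthetic data, not a reproduction of the authors’ analysis.

    # Toy sketch (assumed setup, not the IEEE authors' analysis): a classifier hits
    # near-perfect accuracy on a "criminality" label it knows nothing about, purely
    # because a nuisance feature tied to the photo source tracks the label.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000

    # "Face" features: pure noise, statistically identical for both classes.
    faces = rng.normal(size=(n, 20))
    y = np.concatenate([np.ones(n // 2, dtype=int), np.zeros(n // 2, dtype=int)])

    # Confound: a property of the image source (say, brightness or expression)
    # that, in the flawed design, lines up with the label almost perfectly.
    confound = np.where(y == 1,
                        rng.normal(1.0, 0.3, size=n),
                        rng.normal(-1.0, 0.3, size=n))
    X = np.column_stack([faces, confound])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("accuracy with confound:", accuracy_score(y_te, clf.predict(X_te)))

    # Control: a test set where the source property no longer tracks the label.
    # The apparent skill evaporates to coin-flip accuracy.
    y_ctrl = rng.integers(0, 2, size=600)
    X_ctrl = np.column_stack([rng.normal(size=(600, 20)), rng.normal(size=600)])
    print("accuracy without confound:", accuracy_score(y_ctrl, clf.predict(X_ctrl)))

Run as written, the first number lands near 99% and the second near chance, which is exactly the pattern you would expect from a claim built on dataset artifacts rather than faces.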

Though they do discuss it in their conclusion, the authors do not engage directly with the fundamental moral problem of criminality-by-face research. By taking a technological and research-methodology approach to debunking the claims, they leave room for someone to argue that future technological or scientific advances could make this phrenology and physiognomy nonsense possible. Ironically, their approach carries a danger of legitimizing these ideas.

This is not a criticism of Bowyer, Scheirer, and King. They’re fighting (and winning) a battle here. There will always be battles, because there will always be charlatans who claim to know a person from their outward appearance, and you have to debunk them in that moment in time with the tools and language available.

But the long-running war is about that question itself. It’s a flawed question, because the very notion of phrenology comes from a place of white supremacy. Which is to say, it’s an illegitimate question to begin with.

Big Data – VentureBeat
