Google’s Inclusive Images Competition spurs development of less biased image classification AI

December 3, 2018 | Big Data

Bias is a well-established problem in artificial intelligence (AI): models trained on unrepresentative datasets tend to produce biased predictions. It’s a tougher challenge to solve than you might think, particularly in image classification tasks where racial, societal, and ethnic prejudices frequently rear their ugly heads.

In a crowdsourced attempt to combat the problem, Google in September partnered with the NeurIPS competition track to launch the Inclusive Images Competition, which challenged teams to use Open Images — a publicly available dataset of roughly 9 million labeled images drawn largely from North America and Europe — to train an AI system that would be evaluated on photos collected from regions around the world. The competition is hosted on Kaggle, Google’s data science and machine learning community portal.

Pallavi Baljekar, a Google Brain researcher, gave a progress update on Monday morning during a presentation on algorithmic fairness.

“[Image classification] performance … has [been] improving drastically … over the last few years … [and] has almost surpassed human performance [on some datasets],” Baljekar said. “[But we wanted to] see how well the models [did] on real-world data.”

Toward that end, Google AI scientists set a pretrained Inception v3 model loose on the Open Images dataset. One photo — a Caucasian bride in a Western-style, long and full-skirted wedding dress — resulted in labels like “dress,” “women,” “wedding,” and “bride.” However, another image — also of a bride, but of Asian descent and in ethnic dress — produced labels like “clothing,” “event,” and “performance art.” Worse, the model completely missed the person in the image.

“As we move away from the Western presentation of what a bride looks like … the model is not likely to [produce] image labels as a bride,” Baljekar said.
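For readers who want to run a similar spot check, here is a minimal sketch that uses the stock ImageNet-pretrained Inception v3 shipped with Keras to print the top labels for a local photo. It only approximates the experiment above — Google’s classifier was trained on Open Images rather than ImageNet, and the file name is a placeholder.

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# ImageNet-pretrained Inception v3 — a stand-in for the Open Images
# classifier used in Google's experiment, which isn't released in this form.
model = InceptionV3(weights="imagenet")

def top_labels(path, k=5):
    # Inception v3 expects 299x299 RGB inputs.
    img = image.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x, verbose=0)
    # Each entry is a (class_id, label, confidence) tuple.
    return decode_predictions(preds, top=k)[0]

# "wedding_photo.jpg" is a placeholder path, not part of any dataset.
for _, label, score in top_labels("wedding_photo.jpg"):
    print(f"{label}: {score:.2f}")
```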

Above: Wedding photographs labeled by a classifier trained on the Open Images dataset. (Image Credit: Google AI)

The reason is no mystery. Comparatively few of the photos in the Open Images dataset are from China, India, and the Middle East. And indeed, research has shown that computer vision systems are susceptible to racial bias.

A 2011 study found that facial recognition AI developed in China, Japan, and South Korea had more trouble distinguishing between Caucasian faces than between East Asian faces, and in a separate study conducted in 2012, facial recognition algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians. More recently, a House oversight committee hearing on facial recognition technologies revealed that algorithms used by the Federal Bureau of Investigation to identify criminal suspects are wrong about 15 percent of the time.

The Inclusive Images Competition’s goal, then, was to spur competitors to develop image classifiers for scenarios where data collection would be difficult — if not impossible.

To compile a diverse dataset against which submitted models could be evaluated, Google AI used an app that instructed users to take pictures of objects around them and generated captions using on-device machine learning. The captions were converted into action labels and passed through an image classifier, and the resulting labels were verified by a human team. A second verification step ensured that people were properly labeled in the images.
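Neither the app nor the review tooling is public, so the following is only a schematic sketch of the verification flow just described; `rater`, `confirms`, and `sees_person` are hypothetical stand-ins for the human review step, not a real API.

```python
def verify_labels(photo, machine_labels, rater):
    """Keep only machine-generated labels that a human rater confirms."""
    confirmed = [lbl for lbl in machine_labels if rater.confirms(photo, lbl)]
    # Second verification step: make sure any person visible in the photo
    # is explicitly labeled, mirroring the check described above.
    if rater.sees_person(photo) and "person" not in confirmed:
        confirmed.append("person")
    return confirmed
```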

In the first of two competition stages, in which 400 teams participated, Google AI released 32,000 diverse images sampled from geolocations and label distributions different from those of the Open Images data. In the second stage, Google released 100,000 images whose labels and geographic distributions differed from both the first-stage set and the training dataset.

Above: Examples of labeled images from the challenge dataset. (Image Credit: Google AI)

So in the end, what were the takeaways? The top three teams used ensembles of networks and data augmentation techniques, and saw their AI systems maintain relatively high accuracy in both stage one and stage two. And while four out of five of the top teams’ models didn’t predict the “bride” label when applied to the original two bride images, they did recognize a person in the image.
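The winning entries’ code isn’t reproduced here, but the two ingredients the article credits — train-time data augmentation and prediction-time ensembling — can be illustrated with a short Keras sketch. The backbone, class count, and layer choices below are illustrative assumptions, not the teams’ actual configurations.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Random augmentation layers; these are active only during training.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

def make_classifier(num_classes):
    # Illustrative backbone choice; the winning architectures varied.
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, pooling="avg")
    return models.Sequential([
        augment,
        base,
        # Sigmoid outputs because Open Images labels are multi-label.
        layers.Dense(num_classes, activation="sigmoid"),
    ])

def ensemble_predict(ensemble, images):
    # Average the member models' label probabilities.
    probs = [m.predict(images, verbose=0) for m in ensemble]
    return np.mean(probs, axis=0)

# e.g. three independently trained members averaged at evaluation time:
# ensemble = [make_classifier(num_classes=500) for _ in range(3)]
```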

“Even with a small, diverse set of data, we can improve performance on unseen target distributions,” Baljekar said.

Google AI will release a 500,000-image diverse dataset on December 7.


Source: Big Data – VentureBeat
