AI Weekly: Facebook’s discriminatory ad targeting illustrates the dangers of biased algorithms

August 30, 2020 · Big Data

This summer has been littered with stories about algorithms gone awry. In one example, a recent study found evidence that Facebook’s ad platform may discriminate against certain demographic groups. The team of coauthors from Carnegie Mellon University says the biases exacerbate socioeconomic inequalities, an insight applicable to a broad swath of algorithmic decision-making.

Facebook, of course, is no stranger to controversy where biased, discriminatory, and prejudicial algorithmic decision-making is concerned. There’s evidence that objectionable content regularly slips through Facebook’s filters, and a recent NBC investigation revealed that on Instagram in the U.S. last year, Black users were about 50% more likely to have their accounts disabled by automated moderation systems than those whose activity indicated they were white. Civil rights groups claim that Facebook fails to enforce its hate speech policies, and a July civil rights audit of Facebook’s practices found the company failed to enforce its voter suppression policies against President Donald Trump.

In their audit of Facebook, the Carnegie Mellon researchers tapped the platform’s Ad Library API to get data about ad circulation among different users. Between October 2019 and May 2020, they collected 141,063 advertisements displayed in the U.S., which they ran through algorithms that classified the ads according to categories regulated by law or policy — for example, “housing,” “employment,” “credit,” and “political.” Post-classification, the researchers analyzed the ad distributions for the presence of bias, yielding a per-demographic statistical breakdown.

The research couldn’t be timelier given recent high-profile illustrations of AI’s proclivity to discriminate. As was spotlighted in the previous edition of AI Weekly, the UK’s Office of Qualifications and Examinations Regulation used — and then was forced to walk back — an algorithm to estimate school grades following the cancellation of A-levels, exams that have an outsize impact on which universities students attend. (Prime Minister Boris Johnson called it a “mutant algorithm.”) Drawing on data like the ranking of students within a school and a school’s historical performance, the model lowered 40% of results from teachers’ estimations and disproportionately benefited students at private schools.

Elsewhere, in early August, the British Home Office was challenged over its use of an algorithm designed to streamline visa applications. The Joint Council for the Welfare of Immigrants alleges that feeding past bias and discrimination into the system reinforced future bias and discrimination against applicants from certain countries. Meanwhile, in California, the city of Santa Cruz in June became the first in the U.S. to ban predictive policing systems over concerns the systems discriminate against people of color.

Facebook’s display ad algorithms are perhaps more innocuous, but they’re no less worthy of scrutiny considering the stereotypes and biases they might perpetuate. Moreover, if they allow the targeting of housing, employment, or credit opportunities by age and gender, they could be in violation of the U.S. Equal Credit Opportunity Act, the Civil Rights Act of 1964, and related equality statutes.

It wouldn’t be the first time. In March 2019, the U.S. Department of Housing and Urban Development filed suit against Facebook for allegedly “discriminating against people based upon who they are and where they live,” in violation of the Fair Housing Act. When questioned about the allegations during a Capitol Hill hearing last October, CEO Mark Zuckerberg said that “people shouldn’t be discriminated against on any of our services,” pointing to newly implemented restrictions on age, ZIP code, and gender ad targeting.

The results of the Carnegie Mellon study show evidence of discrimination on the part of Facebook, advertisers, or both against particular groups of users. As the coauthors point out, although Facebook limits the direct targeting options for housing, employment, or credit ads, it relies on advertisers to self-disclose if their ad falls into one of these categories, leaving the door open to exploitation.

Ads related to credit cards, loans, and insurance were disproportionately sent to men (57.9% versus 42.1%), according to the researchers, in spite of the fact that more women than men use Facebook in the U.S. and that women on average have slightly stronger credit scores than men. Employment and housing ads were a different story: women saw approximately 64.8% of employment ads and 73.5% of housing ads, while men saw 35.2% and 26.5%, respectively.

Users who chose not to identify their gender or labeled themselves nonbinary/transgender were rarely — if ever — shown credit ads of any type, the researchers found. In fact, across every category of ad including employment and housing, they made up only around 1% of users shown ads — perhaps because Facebook lumps nonbinary/transgender users into a nebulous “unknown” identity category.

Facebook ads also tended to discriminate along age and education lines, the researchers say. More housing ads (35.9%) were shown to users aged 25 to 34 than to users in any other age group, with trends in the distribution indicating that the groups most likely to have graduated college and entered the labor market saw the ads more often.

The research allows for the possibility that Facebook is selective about the ads it includes in its API, and that the ads it omits might have corrected for the distribution biases observed. Still, many previous studies have established that Facebook’s ad practices are at best problematic. (Facebook claims its written policies ban discrimination and that it uses automated controls — introduced as part of the 2019 settlement — to limit when and how advertisers target ads based on age, gender, and other attributes.) But the coauthors say their intention was to start a discussion about when disproportionate ad distribution is irrelevant and when it might be harmful.

“Algorithms predict the future behavior of individuals using imperfect data that they have from past behavior of other individuals who belong to the same sociocultural group,” the coauthors wrote. “Our findings indicated that digital platforms cannot simply, as they have done, tell advertisers not to use demographic targeting if their ads are for housing, employment or credit. Instead, advertising must [be] actively monitored. In addition, platform operators must implement mechanisms that actually prevent advertisers from violating norms and policies in the first place.”

Greater oversight might be the best remedy for systems susceptible to bias. Companies like Google, Amazon, IBM, and Microsoft; entrepreneurs like Sam Altman; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.

For years, some U.S. courts used algorithms known to produce unfair, race-based predictions that were more likely to label African American inmates as at risk of recidivism. A Black man was arrested in Detroit for a crime he didn’t commit as the result of a facial recognition system. And for 70 years, American transportation planners used a flawed model that overestimated the amount of traffic roadways would actually see, resulting in potentially devastating disruptions to disenfranchised communities.

Facebook has had enough reported problems, internally and externally, around race to merit a harder, more skeptical look at its ad policies. But it’s far from the only guilty party. The list goes on, and the urgency to take active measures to fix these problems has never been greater.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
