The New York City Council today passed a privacy bill for commercial establishments that prohibits retailers and other businesses from using facial recognition or other biometric tracking without public notice. If signed into law by NYC Mayor Bill de Blasio, the bill would also prohibit businesses from selling biometric data to third parties.
In the wake of the Black Lives Matter movement, an increasing number of cities and states have expressed concerns about facial recognition technology and its applications. Oakland and San Francisco in California, as well as Somerville, Massachusetts, are among the metros where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including face images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts have advanced a suspension of government use of any biometric surveillance system within the commonwealth. More recently, Portland, Maine, approved a ballot initiative banning the use of facial recognition by police and city agencies.
The New York City Council bill, which was sponsored by Bronx Councilman Ritchie Torres, doesn’t outright ban businesses from using facial recognition technologies. However, it does impose restrictions on how brick-and-mortar locations like retailers, which might use facial recognition to prevent theft or personalize certain services, can deploy them. Businesses that fail to post a warning about collecting biometric data face fines of $500, while businesses found selling that data face fines of $5,000.
In this respect, the bill falls short of Portland, Oregon’s recently passed ordinance on biometric data collection, which bans all private use of biometric data in places of “public accommodation,” including stores, banks, restaurants, public transit stations, homeless shelters, doctors’ offices, rental properties, retirement homes, and a variety of other types of businesses (excepting workplaces). That ordinance is scheduled to take effect January 1, 2021.
“I commend the City Council for protecting New Yorkers from facial recognition and other biometric tracking. No one should have to risk being profiled by a racist algorithm just for buying milk at the neighborhood store,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project. “While this is just a first step towards comprehensively banning biometric surveillance, it’s a crucial one. We shouldn’t allow giant companies to sell our biometric data simply because we want to buy necessities. Far too many companies use biometric surveillance systems to profile customers of color, even though they are biased. If companies don’t comply with the new law, we have a simple message: ‘we’ll see you in court.’”
Numerous studies and VentureBeat’s own analyses of public benchmark data have shown facial recognition algorithms are susceptible to bias. One issue is that the data sets used to train the algorithms skew white and male. IBM found that 81% of people in the three face-image collections most widely cited in academic studies have lighter-colored skin. Academics have found that photographic technology and techniques can also favor lighter skin, including everything from sepia-tinged film to low-contrast digital cameras.
“Given the current lack of regulation and oversight of biometric identifier information, we must do all we can as a city to protect New Yorkers’ privacy and information,” said Councilman Andrew Cohen, who chairs the Committee on Consumer Affairs. Crain’s New York reports that the committee voted unanimously at a hearing earlier this afternoon to advance Torres’ bill to the full council.
The algorithms are often misused in the field as well, which tends to amplify their underlying biases. A report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects. The New York Police Department and others reportedly edit photos with blur effects and 3D modeling software to make them more conducive to algorithmic face searches. And according to the Star Tribune, police in Minnesota have been using biometric technology from vendors including Cognitec since 2018, despite denials issued that year.
Amazon, IBM, and Microsoft have self-imposed moratoriums on the sale of facial recognition systems. But some vendors, like Rank One Computing and Los Angeles-based TrueFace, are aiming to fill the gap with customers, including the City of Detroit and the U.S. Air Force.
Public comments show lingering problems with California’s data privacy law
Earlier this month, the California Office of the Attorney General (CAG) held hearings in four cities where the public could offer comments and feedback to lawmakers as part of the rulemaking process for the California Consumer Privacy Act (CCPA). The hearings drew speakers from a variety of industries, and their oral comments, along with written comments sent to the CAG’s office by Friday, December 6, are now available on the California Attorney General’s CCPA page.
While the hearings drew a number of concerns about the new data privacy law, which goes into effect January 1, four core issues emerged.
1. Crucial CCPA terms aren’t clearly defined
The most prominent concern to come out of the hearings was that terms central to the CCPA are unclear, making it difficult for companies to feel fully confident they are in compliance. At the San Francisco hearing alone, speakers said the definitions of personal information (PI) and service provider are unclear, as is what constitutes a “sale” of data. Speakers at the Los Angeles hearing made similar comments, adding that other terms like “business,” “reasonable security measures,” and “secure” transmissions of personal information were also unclear.
A common refrain was that the CCPA’s language is too vague, too broad, and overreaching. As a consequence, organizations have found key sections of the CCPA difficult to operationalize, and they worry that the ambiguity of these terms could have significant unintended consequences. For example, some argued that the broad definitions of PI and business may extend the reach of the CCPA to businesses the AG likely had no intention of regulating, such as small operations that serve fewer than 50,000 California customers but run high-traffic websites using cookies.
2. It’s unclear how the CCPA’s scope affects other industry-specific regulation
Several commenters expressed confusion over the CCPA’s scope as it applies to companies that are already subject to industry-specific privacy legislation. At the San Francisco hearing, one speaker representing a San Francisco credit union noted that the Gramm-Leach-Bliley Act (GLBA) and the California Financial Information Privacy Act define PI differently than the CCPA does. While the CCPA spells out exemptions for PI collected under the GLBA, she said, these inconsistent definitions have led to multiple interpretations of how the CCPA applies to the data credit unions collect. Similar confusion may surround other regulations like HIPAA. At the Sacramento hearing, a speaker asked for clarification on how de-identification under the CCPA differs from de-identification under HIPAA, and how data that is de-identified and exempt under HIPAA should be handled under the CCPA.
3. Smaller organizations will have trouble meeting the January 1 deadline
Given the extensive scope of the CCPA, it’s no surprise that small and medium businesses have expressed concerns about the law’s reach and implications. Some organizations have said publicly that they’ll have substantial difficulty meeting the January 1 compliance deadline. At the San Francisco hearing, two speakers requested the compliance deadline be moved to 2022 to ensure their organizations could build a robust compliance program.
4. The system for data requests could be open to abuse
Speakers at the Los Angeles and San Francisco hearings also raised concerns about the potential for abuse of the request system. For example, they said that if companies were required to take unverified opt-out requests seriously, bad actors could mount mass bot attacks, either online or by phone. It’s been argued elsewhere that such abuse could effectively amount to data request “denial of service” attacks, tying up an organization’s staff and infrastructure in responding to an unanticipated flood of fake requests. While tools exist to help automate data discovery and responses to data requests, some speakers argued that a “reasonable degree of certainty” should be the standard applied to requests, as that would give businesses more bandwidth to handle the issue.
What happens now?
Now that the hearings and the public comment period have passed, the CAG may use the comments to revise the current draft regulations, after which the public will have 15 days (or longer) to comment on the revisions. So even though the CCPA goes into effect January 1, 2020, organizations should still expect changes to the regulations. Stakeholders should follow the rulemaking process closely and submit any remaining concerns to the CAG during the next comment period. Enforcement of the finalized regulations will begin July 1, 2020; however, organizations must make good-faith efforts to comply starting January 1, 2020 and can be held liable for violations after that date.
Michael Osakwe is a tech writer and Content Marketing Manager at Nightfall AI.