
Tag Archives: lawmakers

AI Weekly: U.S. lawmakers decry the chilling effect of federal surveillance at protests

October 17, 2020   Big Data

There’s a thread that runs through police violence against Black people and connects to overpolicing, onerous and problematic tactics like facial recognition, AI-powered predictive policing, and federal agencies’ surveillance of protestors. It’s almost a loop; at the very least, it’s a knot.

For months, American citizens have tirelessly protested against police violence, largely in response to the police killings of George Floyd and Breonna Taylor. Numerous reports allege that federal agencies have conducted surveillance on protestors. According to some members of Congress, this is creating a chilling effect on First Amendment rights: This week, Representatives Anna Eshoo (D-CA) and Bobby Rush (D-IL), along with Senator Ron Wyden (D-OR), sent a letter asking the Privacy and Civil Liberties Oversight Board (PCLOB), an independent federal agency, to investigate those reports.

“The act of protesting has played a central role in advancing civil rights in our country, and our Constitution protects the right of Americans to engage in peaceful protest unencumbered by government interference. We are, therefore, concerned that the federal government is infringing on this right,” reads the letter’s introduction.

Specifically, they want the PCLOB to investigate:

  • The federal government’s surveillance of recent protests
  • The legal authority supporting that surveillance
  • The government’s adherence to required procedures in using surveillance equipment
  • The chilling effect that federal government surveillance has had on protesters

The alleged surveillance measures include aircraft surveillance from Customs and Border Protection (CBP) that involved devices that collect people’s cell phone data, the Department of Homeland Security (DHS) seizing phones from protesters with the intention of extracting their data (a request that apparently went unfulfilled), and the DHS compiling information on journalists covering the protests (a practice that seems to have stopped).

In a statement shared with VentureBeat, PCLOB board member Travis LeBlanc said, “I am deeply concerned by reports of the federal government’s surveillance of peaceful Black Lives Matter protesters exercising their constitutional rights. As the Privacy and Civil Liberties Oversight Board, we are empowered to conduct an independent investigation of any such government surveillance and I hope my fellow Board Members will join me in doing so promptly.”

The agency would not state what measures it may take as a result of its investigation, and indeed, its powers are somewhat limited. Established as an independent agency in 2007 by the Implementing Recommendations of the 9/11 Commission Act, the PCLOB has two chief responsibilities: to oversee “implementation of Executive Branch policies, procedures, regulations, and information-sharing practices relating to efforts to protect the nation from terrorism” and to “review proposed legislation, regulations, and policies related to efforts to protect the nation from terrorism,” advising the executive branch on how to meet its goals while preserving privacy and civil liberties. The PCLOB’s remit expanded beyond terrorism with Section 803 of the same act, which requires that federal agencies submit reports about privacy and civil liberties reviews and complaints.

The agency has deep reach, at least — access to documents, and the right to interview anyone in the Executive Branch. But though it can conduct reviews and make recommendations, the only real legal action it can take is to subpoena people through the U.S. Attorney General’s office.

Though this week’s letter directly engages the PCLOB, it’s by no means the first salvo from concerned lawmakers. Earlier this year, Reps. Eshoo and Rush sent a letter of concern about surveillance and its chilling effect, signed by 33 other members of Congress, to the heads of the FBI, Drug Enforcement Administration (DEA), National Guard Bureau, and CBP. “We demand that you cease any and all surveilling of Americans engaged in peaceful protests,” they wrote. They also demanded access to all documents these agencies hold that pertain to the protests and surveillance. (The agency responses came by the barrelful and were included in a media announcement this week.) And in their most recent letter, Eshoo and Rush listed a dozen other letters of concern that members of Congress have sent agencies and private companies expressing shades of these same concerns.

But the prior missives and responses have not, apparently, satisfied their concerns. In a statement to VentureBeat, Rep. Eshoo said, “It’s my hope that the PCLOB will conduct a thorough and independent investigation to uncover the facts about the allegations cited in my letter. These facts will help inform me and my colleagues about what actions Congress should take to prevent future abuses, update existing laws, and hold offenders accountable.”

The aforementioned thread continues through federal agencies’ protest surveillance to acts of aggression, intimidation, vigilantism, and in some cases violence. Unidentified agents in unmarked vehicles brazenly grabbed and detained protestors off the street in Oregon. Law enforcement directly or indirectly let an armed teenager roam the streets of Wisconsin, where he killed two protestors and injured a third. And the sitting President of the United States, during a nationally televised presidential debate, ominously told his supporters to watch the polls on election day — a thinly veiled overture to intimidate voters — and exhorted his white supremacist supporters to “stand by” and implied that they should be prepared to commit violence against leftist groups.

The chilling effect that Rep. Eshoo is so concerned about may follow the same path, from protests to the polls, which is all the more urgent given that the 2020 election is just over two weeks away. Preventing future abuses and holding offenders accountable is not merely the right thing to do; it’s crucial to the continued functioning of democracy.





From Washington state to Washington DC, lawmakers rush to regulate facial recognition

January 19, 2020   Big Data

Amid the start of an impeachment trial; talk of mounting hostility with Iran; new trade deals with China, Canada, and Mexico; and the final presidential debate before the start of the Democratic presidential primary season, you might’ve missed it, but it was also a momentous week for facial recognition regulation.

A bipartisan group in Congress wants action, roughly a dozen state governments are considering legislation, and overseas, news broke Thursday that the European Commission is considering a five-year moratorium on facial recognition among potential next steps. That would make the EU the largest government worldwide to halt deployment of the technology.

In Washington, DC this week, the House Oversight and Reform Committee pledged to introduce legislation in the “very near future” that could regulate facial recognition use by law enforcement agencies in the US. Just like in hearings held last summer, members of Congress exhibited a fairly unified, bipartisan position that facial recognition use by the government should be regulated and in some cases limited. There was talk of regulation, but until this week, the future of sweeping facial recognition regulation seemed uncertain.

Congress on Civil Rights, the Constitution, and facial recognition

Lawmakers seem to have a sense of urgency to take action for a variety of reasons, including a lack of standards for businesses; governments; and local, state, and federal law enforcement.

One major area of focus: violation of the First Amendment right to freedom of assembly, and the idea that facial recognition might be used to identify people at political rallies or track political dissidents at protests.

“It doesn’t matter if it’s a President Trump rally or a Bernie Sanders rally, the idea of American citizens being tracked and cataloged for merely showing their faces in public is deeply troubling,” said Rep. Jim Jordan (R-OH), the committee’s ranking member.

“The urgent issue we must tackle is reining in the government’s unchecked use of this technology when it impairs our freedoms and our liberties. Our late chairman Elijah Cummings became concerned about government use of facial recognition technology after learning it was used to surveil protests in his district related to Freddie Gray. He saw this as a deeply inappropriate encroachment on the freedom of speech and association, and I couldn’t agree more,” Jordan said.

Another reason lawmakers are anxious to regulate facial recognition: Civil rights protections and the great potential for racial discrimination.

Analysis published last month by the Department of Commerce’s National Institute of Standards and Technology (NIST) found that some facial recognition systems are anywhere from 10 to 100 times more likely to misidentify groups like the young, the elderly, women of color, and people of Asian or African descent.
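
To make that kind of disparity concrete, here is a minimal sketch (in Python, using invented counts rather than NIST’s benchmark data) of how a false match rate is computed per demographic group and how a figure like “100 times more likely” would be derived:

```python
# Illustrative only: synthetic counts, not NIST's data.
# False match rate (FMR) = false matches / impostor comparisons.

groups = {
    # group name: (false_matches, impostor_comparisons) -- hypothetical
    "baseline_group": (10, 1_000_000),
    "affected_group": (1_000, 1_000_000),
}

fmr = {name: fm / total for name, (fm, total) in groups.items()}

for name, rate in fmr.items():
    print(f"{name}: FMR = {rate:.6f}")

# The disparity lawmakers cite is the ratio between group error rates.
ratio = fmr["affected_group"] / fmr["baseline_group"]
print(f"affected_group is misidentified {ratio:.0f}x as often")  # 100x here
```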

Facial recognition systems that exhibit discriminatory performance, lawmakers contend, can exacerbate existing prejudices and overpolicing of schools and communities of color.

NIST’s analysis follows studies in 2018 and 2019 by AI researchers that found misidentification issues for popular facial recognition systems like Amazon’s Rekognition. Amazon has not agreed to have its AI analyzed by NIST, director Dr. Charles Romine told Congress this week.

Romine said talks between NIST and Amazon are ongoing on the subject of Rekognition review by the federal government.

If any one company seemed to be the main source of ire for the committee, it was Amazon.

Amazon has lobbied members of Congress on the subject and has stated a willingness to sell its facial recognition technology to any government agency. Amazon reportedly marketed Rekognition to ICE officials, but the extent to which facial recognition is sold to government agencies is still unknown. In a vote last summer, Amazon shareholders declined to halt sales of facial recognition services to governments.

One factor that lawmakers say motivates a sense of urgency: China. In the EU, Washington DC, and on the state and local level across the U.S., lawmakers frequently cite China’s use of facial recognition to strengthen an authoritarian state as a future they want to avoid.

Meredith Whittaker is a cofounder of the AI Now Institute and a former Google employee. In testimony earlier this week, she talked about how facial recognition is often used by those in power to monitor those without power and described the difference between usage in the U.S. and China. Last month, the AI Now Institute called for a ban on business and government use of facial recognition technology.

“I think it is a model for authoritarian social control that is backstopped by extraordinarily powerful technology,” Whittaker said about China’s use of facial recognition software. “I think one of the differences between China and the U.S. is that there, the technology is announced as state policy. In the U.S., this is primarily corporate technology that is being secretly threaded through our core infrastructures without that kind of acknowledgment.”

Washington state’s impact on facial recognition regulation

A handful of cities put facial recognition bans and moratoriums in place in 2019, but state legislatures in 2020 are already moving even faster to regulate the technology. Since the start of the year, 10 state legislatures have introduced bills to regulate the use of facial recognition software, according to the Georgetown University Law School Center on Privacy and Technology.

In the state of Washington, the stakes may be unlike anywhere else in the world. Legislation to regulate the use of facial recognition in Washington can be particularly influential, since the Seattle area is home to Amazon and Microsoft, two of the largest companies selling facial recognition software to governments. Axon, which makes police body cameras and provides video cloud storage, is also in Washington.

In Washington, state lawmakers this week started a second attempt to pass the Washington State Privacy Act. Known as SB 6281, the bill would regulate data privacy and require “meaningful human review” of facial recognition results when the technology is used by the private sector. In a press conference Monday, the bill’s chief sponsor, Sen. Reuven Carlyle, said Washington is moving forward because there isn’t time to wait for lawmakers in Washington DC to deliver privacy regulation to rein in business use of private data. Carlyle said the bill takes cues from CCPA in California and GDPR in Europe.

A different version of the Washington Privacy Act passed the Washington State Senate with a near unanimous vote in spring 2019 but died in the Washington State House of Representatives.

Lawmakers complained last spring that the legislative process was tainted by lobbying from tech companies like Microsoft, and that companies like Amazon and Microsoft played too much of a role in drafting the 2019 version of the Washington Privacy Act.

Also introduced this week in Washington is SB 6280, a bill to regulate government use of facial recognition. The bill’s chief sponsor, State Senator Joe Nguyen, is a senior program manager at Microsoft, according to his LinkedIn profile. Nguyen is also a cosponsor of the Washington Privacy Act.

Microsoft initially supported the Washington Privacy Act last year but came to oppose amendments to the bill, calling them too restrictive. Microsoft also opposed a moratorium proposed by the ACLU, one of the first such moratoriums to be considered by any state legislature.

Jevan Hutson leads facial recognition and AI policy at the University of Washington School of Law’s Technology and Public Policy Clinic. He testified in multiple hearings in Olympia, Washington this week in favor of HB 2363, a bill that would make biometric data the sole property of an individual, and in opposition to the latest iteration of the Washington Privacy Act.

He also helped introduce a bill known as the AI Profiling Act. The legislation, which he drafted with others at the University of Washington, would outlaw the use of AI to profile people in public places and in important decision-making processes for a number of industries, as well as to predict a person’s religious affiliation, political affiliation, immigration status, or employability.

His position is that facial recognition may have some legitimate use cases, but that it’s also a perfect surveillance tool, and that the real motivation of many facial recognition proponents is to create a new, invasive, surveillance-capitalism-driven marketplace.

As he did last year, he argues that the permissive regulatory framework in the Washington Privacy Act, which rejects the idea of a moratorium, is a product of the outsized influence of technology companies in Washington state that stand to profit from the widespread deployment of facial recognition.

He views Microsoft’s involvement and lobbying in 2019, and again in 2020, as an effort to establish an initial framework for what facial recognition regulation should look like, a model the company can then bring to other states and to Washington DC.

While speaking at Seattle University last year, Microsoft president Brad Smith said legislation passed in Washington could go on to shape facial recognition policy around the world.

As lawmakers in favor of the bill lay the necessary groundwork to attempt to pass the bill for a second time, politicians and advocates like Hutson argue legislation should take into account the demonstrated harm facial recognition can do and reject the idea that widespread use of facial recognition is inevitable. 

“I think legislators and advocates here are seriously concerned and recognize that we need to get out front,” he said. “I think that sort of gets to the question of why now; it’s so important that we act because things will be bought by governments, and businesses will begin to deploy these things if there is not a clear sign from regulators and legislators both at the federal and local level to say, ‘No, this is not a valid market given the dangers that it poses.’” 

Hutson also argues that acting soon is important in order to challenge the assumption, common in arguments against regulation, that stifling innovation is always a bad thing. Facial recognition is being used for payments and to arrest people accused of crimes in China, but it’s also being used to track or imprison ethnic minorities, a use case he says could also be considered innovative.

“Innovation in many ways is this sort of false religion right where it’s like innovation in and of itself is a perfect good, and it’s not,” he said. “This is innovation worth stifling. Like I don’t think we should be super innovative with nuclear weapons. We don’t need even more innovative forms of oppression to be legitimized and authorized by the state legislature.”

Finishing thoughts

As laws get hammered out, stories of outrage continue.

In recent days in Denver, where the city council is considering a facial recognition ban, advocates demonstrated that facial recognition software falsely matched all nine members of the council to people on the local sex offender registry at a 92% confidence threshold.
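
Demonstrations like Denver’s exploit a basic property of face matching systems: a gallery search always returns a nearest neighbor, and whether that neighbor counts as a “match” depends entirely on an operator-chosen similarity threshold. The sketch below (in Python, with random stand-in embeddings rather than a real face encoder) illustrates the mechanics under those assumptions:

```python
# A minimal sketch of gallery search in a face recognition system.
# Embeddings are random stand-ins; real systems use a learned face encoder.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "registry" of 5,000 enrolled faces as unit-norm embeddings.
gallery = rng.normal(size=(5000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# A probe photo of someone who is NOT in the registry.
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

scores = gallery @ probe              # cosine similarity to every entry
best = int(np.argmax(scores))
print(f"best 'match': entry {best}, similarity {scores[best]:.2f}")

# Every probe yields a best match, even for people who were never enrolled.
# Whether the system reports it as a hit depends on the threshold setting,
# and lowering the threshold trades precision for more "matches".
```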

There’s also the story of Clearview AI, a startup that allows people to upload an image of a person and then find where else on the web that person’s image appears.

“It’s creepy what they’re doing, but there will be many more of these companies. Absent a very strong federal privacy bill, we’re all screwed,” Stanford University professor Al Gidari told the New York Times.

Steps taken this week to introduce legislation are just the beginning.

Bills and regulations continue to percolate through state legislatures and the halls of Congress, and as they do, the string of stories that refresh outrage and first brought about the sense of urgency seems likely to continue.


U.S. lawmakers say Facebook’s steps to tackle deepfakes don’t go far enough

January 9, 2020   Big Data

(Reuters) — Facebook said it would remove “deepfake” and other manipulated videos from its website to combat the spread of false information ahead of the 2020 presidential race, but lawmakers said those and other changes it has recently announced do not go far enough. The comments, made during a hearing held by a House Energy & Commerce subcommittee, mark the latest effort by House lawmakers to probe Facebook’s digital defenses ahead of the November elections, four years after Russia used the site to spread misinformation during the 2016 presidential race.

Subcommittee Chairwoman Jan Schakowsky, a Democrat, said there is growing evidence that big tech has failed to regulate itself. “I am concerned that Facebook’s latest effort to tackle misinformation leaves a lot out,” she said.

Other lawmakers broadly pointed to Facebook’s inability to address the issues of data security, misinformation and foreign interference ahead of the elections. An internal company memo sent by a senior executive to employees addressed Facebook’s shortcomings and was published by the New York Times on Tuesday.

Ranking Republican member Cathy McMorris Rodgers said consumers are losing faith in online sources they can trust, but argued that the focus should be on innovation to combat falsified videos, not on more regulation.

Earlier this week, Facebook said it would remove deepfakes — which use artificial intelligence to create hyper-realistic but fake videos where a person appears to say or do something they did not — as well as other manipulated or misleadingly edited videos from its platform in a move to curb misinformation. It will not remove content deemed to be parody or satire.

Monika Bickert, Facebook’s vice president of global policy management, said the social media platform recognizes the risks of manipulated media and that “its latest policy is designed to prohibit the most sophisticated attempts to mislead people.” Bickert faced criticism from lawmakers for the company’s decision to not remove a heavily edited video that attempted to make House Speaker Nancy Pelosi seem incoherent by slurring her speech.

“Why wouldn’t Facebook simply take down the fake Pelosi video?” Florida Congressman Darren Soto, a Democrat, said.

Bickert said such videos will be labeled false and are subject to fact-checking. “Our enforcement is not perfect,” she said, but it has gotten better.

The company took down one network that spread false information in 2016; it removed 50 such networks in 2019, she said.

Facebook has been criticized over its content policies by politicians across the spectrum. Democrats have blasted the company for refusing to fact-check political advertisements, while Republicans have accused it of discriminating against conservative views, a charge it has denied.

California Congressman Jerry McNerney asked if Facebook would be ready for an independent third-party audit of its practices by June 1, the results of which would be visible to the public. Bickert did not answer the question.


AI Weekly: Companies and lawmakers need to agree on facial recognition policies before it’s too late

January 20, 2019   Big Data

After a summer-long saga of accusations, denials, and blockbuster reporting by the American Civil Liberties Union, the dust appeared to have settled on Amazon’s Rekognition scandal. But a letter from shareholders this week rekindled the flames, urging the company, which was worth an estimated $1 trillion in September 2018, to prohibit sales of facial recognition technology like Rekognition to governments unless its board independently concludes there is no risk of civil and human rights violations.

The shareholders further claim that Rekognition, which has been piloted by police in Florida and Oregon, threatens to negatively impact Amazon’s stock price. More than 450 employees have demanded that the Seattle company halt sales of Rekognition to law enforcement agencies, presenting the policy as a talent and retention risk. And the service’s unfettered deployment puts Amazon under increased scrutiny from the U.S. Government Accountability Office, which lawmakers tasked in June with studying whether “commercial entities selling facial recognition adequately audit use of their technology.”

When reached for comment, an Amazon spokesperson pointed to a pair of blog posts penned by Matt Wood, general manager of deep learning and AI at Amazon Web Services (AWS), this past summer. Here, Wood pointed out that there has been “no reported law enforcement abuse of Amazon Rekognition” and argued that Rekognition has “materially benefit[ed]” society by “inhibiting child exploitation … and building educational apps for children” and by “enhancing security through multi-factor authentication, finding images more easily, or preventing package theft.”

But any restraint current AWS customers have chosen to exercise is by no means a guarantee against future — or present — abuses.

Case in point: In September, a report in The Intercept revealed that IBM worked with the New York City Police Department to develop a product that allowed officials to search for people by skin color, hair color, gender, age, and various facial features. Using “thousands” of photographs from roughly 50 cameras provided by the NYPD, its AI learned to identify clothing color and other bodily characteristics.

Some foreign governments have gone further.

According to a report by Gizmodo, the European Union plans to trial an AI system — dubbed iBorderCtrl — that will vet “suspicious” travelers in Hungary, Latvia, and Greece, in part by analyzing 38 facial micro-gestures. The system can reportedly be customized according to gender, ethnicity, and language.

Later this year, Singapore agency GovTech plans to deploy surveillance cameras linked to facial recognition software on over 100,000 lamp posts. Yitu Technology — a Chinese company weighing a bid to supply the software — says its solution can identify over 1.8 billion faces.

China’s facial recognition plans are perhaps the most ambitious to date. Efforts have long been underway in the country of 1.3 billion — which has an estimated 200 million surveillance cameras — to build a nationwide infrastructure capable of identifying people within three seconds with 90 percent accuracy.

Singapore, the EU, and others claim that facial recognition technology has the potential to deter crime, perform crowd analytics, and aid in antiterrorism operations. But countless research efforts — including a 2012 study showing that facial algorithms from vendor Cognitec performed 5 to 10 percent worse on African Americans than on Caucasians — have demonstrated current systems’ imprecision and susceptibility to bias. And, as Microsoft president Brad Smith noted in a blog post late last year, the normalization of facial recognition is a slippery slope toward a totalitarian dystopia.

“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge,” he wrote. “Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies — like Minority Report, Enemy of the State, and even 1984 — but now it’s on the verge of becoming possible,” Smith said.

So what can be done about it?

At a December event in Washington, D.C. hosted by the Brookings Institution, Smith proposed that companies review the results of facial recognition in “high-stakes scenarios,” such as when it might restrict a person’s movements, and called on legislators to investigate facial recognition technologies and craft policies guiding their usage.

“In a democratic republic, there is no substitute for decision-making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms,” he said. “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology.”

Brian Brackeen, CEO of facial recognition software company Kairos, said in a hearing with the Congressional Black Caucus last year that standards should be put in place to ensure baseline accuracy, and to avoid misuse by foreign adversaries.

“We have to have AI tools that are not going to false-positive on different genders or races more than others, so let’s create some kind of margin of error and binding standards for the government,” he told VentureBeat in an interview.

There’s evidence the discourse has had a persuasive effect in at least a few cases. In July 2018, working with experts in artificial intelligence (AI) fairness, Microsoft revised and expanded the datasets it uses to train Face API, a Microsoft Azure API that provides algorithms for detecting, recognizing, and analyzing human faces in images. And Google recently said it would avoid offering a general-purpose facial recognition service until the “challenges” had been “identif[ied] and address[ed].”

“Like many technologies with multiple uses, [it] … merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes,” Kent Walker, senior vice president of global affairs, wrote in a blog post.

But there’s more work to be done. In the midst of government dysfunction in the U.S. and abroad and ethics-skirting advances in AI, it’s critical that regulators — and organizations — pursue mediating laws and policies before it’s too late.

For AI coverage, send news tips to Kyle Wiggers and Khari Johnson — and be sure to bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers
AI Staff Writer

P.S. Please enjoy this video of Ubtech’s walker robot from the 2019 Consumer Electronics Show.

From VB


Facebook and Stanford researchers design a chatbot that learns from its mistakes

In a new paper, scientists at Facebook AI Research and Stanford describe a chatbot that learns from its mistakes over time.


Robomart to roll out driverless grocery store vehicles in Boston area this spring

Robomart is partnering with the Stop & Shop grocery store chain to make deliveries with its driverless grocery store vehicles this spring.


Alexa can now read your news like a newscaster

Amazon’s Alexa assistant can now read the news in the style of a newscaster, thanks to a novel machine learning training technique.


Project Alias feeds smart speakers white noise to preserve privacy

Project Alias is a crowdsourced privacy shield for smart speakers that prevents intelligent assistants from listening in on conversations inadvertently.


Clusterone raises $2 million for its DevOps for AI platform

Clusterone raised $2 million to take care of DevOps for developers and data scientists more interested in AI than infrastructure management.


Badger will deploy robots to nearly 500 Giant, Martin’s, and Stop and Shop stores in the U.S.

Badger Technologies is teaming up with Retail Business Services to supply more than 500 Giant, Martin’s, and Stop & Shop stores with robot employees.


Beyond VB

A New Human Ancestor Has Been Discovered Thanks To Artificial Intelligence

For the first time, an international team of researchers has used deep learning algorithms to analyze human DNA for genetic clues to human evolution. (via IFL Science)


Facial and emotional recognition: how one man is advancing artificial intelligence

Scott Pelley reports on the developments in artificial intelligence brought about by venture capitalist Kai-Fu Lee’s investments and China’s effort to dominate the AI field. (via CBS News)


A country’s ambitious plan to teach anyone the basics of AI

In the era of AI superpowers, Finland is no match for the US and China. So the Scandinavian country is taking a different tack. (via MIT Tech Review)


The Weaponization Of Artificial Intelligence

Technological development has become a rat race. In the competition to lead the emerging technology race and the futuristic warfare battleground, artificial intelligence (AI) is rapidly becoming the center of global power play. (via Forbes)



U.S. lawmakers seek temporary extension to internet spying program

December 22, 2017   Big Data

(Reuters) — Republican leaders in the U.S. House of Representatives are working to build support to temporarily extend the National Security Agency’s expiring internet surveillance program by tucking it into a stop-gap funding measure, lawmakers said.

The month-long extension of the surveillance law, known as Section 702 of the Foreign Intelligence Surveillance Act, would punt a contentious national security issue into the new year in an attempt to buy lawmakers more time to hash out differences over various proposed privacy reforms.

Lawmakers leaving a Republican conference meeting on Wednesday evening said it was not clear whether the stop-gap bill had enough support to avert a partial government shutdown on Saturday, or whether the possible addition of the Section 702 extension would impact its chances for passage. It remained possible lawmakers would vote on the short-term extension separate from the spending bill.

Absent congressional action, the law, which allows the NSA to collect vast amounts of digital communications from foreign suspects living outside the United States, will expire on Dec. 31.

Earlier in the day, House Republicans retreated from a plan to vote on a stand-alone measure to renew Section 702 until 2021 amid sizable opposition from both parties that stemmed from concerns the bill would violate U.S. privacy rights.

Some U.S. officials have recently said that deadline may not ultimately matter and that the program can lawfully continue through April due to the way it is annually certified.

But lawmakers and the White House still view the law’s year-end expiration as significant.

“I think clearly we need the reauthorization for FISA, and that is expected we’ll get that done” before the end of the year, Marc Short, the White House’s legislative director, said Wednesday on MSNBC.

U.S. intelligence officials consider Section 702 among the most vital of tools at their disposal to thwart threats to national security and American allies.

But the program incidentally gathers communications of Americans for a variety of technical reasons, including if they communicate with a foreign target living overseas.

Those communications can then be subject to searches without a warrant, including by the Federal Bureau of Investigation.

The House Judiciary Committee advanced a bill in November that would partially restrict the U.S. government’s ability to review American data by requiring a warrant in some cases.


Will the Equifax data breach finally spur lawmakers to recognize data harms?

October 1, 2017   Big Data

This summer, 143 million Americans had their most sensitive information breached, including their names, addresses, Social Security numbers (SSNs), and dates of birth. The breach occurred at Equifax, one of the three major credit reporting agencies that conduct the credit checks relied on by many industries, including landlords, car lenders, phone and cable service providers, and banks that offer credit cards, checking accounts, and mortgages. Misuse of this information can be financially devastating. Worse still, if a criminal uses stolen information to commit fraud, it can lead to the arrest and even prosecution of an innocent data breach victim.

Given the scope and seriousness of the risk that the Equifax breach poses to innocent people, and the anxiety that these breaches cause, you might assume that legal remedies would be readily available to compensate those affected. You’d be wrong.

While there are already several lawsuits filed against Equifax, the pathway for those cases to provide real help to victims is far from clear. That’s because even as the number and severity of data breaches increases, the law remains too narrowly focused on people who have suffered financial losses directly traceable to a breach.

The law consistently fails to recognize other sorts of harms to victims. In some cases this arises in the context of threshold “standing” to sue, a doctrine that requires proof of harm (lawyers call it “injury in fact”) just to get in the door in federal court. In other cases the problem arises within the claim itself, where “harm” is a legal element that must be proven for a plaintiff to win the case. Regardless of how the issue of “harm” comes up, judges are too often failing to ensure that data breach victims have legal remedies.

The consequences of this failure are two-fold. First, there’s the direct problem that the courthouse door is closed to hundreds of millions of people who face real risk and the accompanying reasonable fears about the misuse of their information. Second, but perhaps even more important, the lack of legal accountability means that the companies that hold our sensitive data continue to have insufficient incentives to take the steps necessary to protect us against the next breach.

Effective computer security is hard, and no system will be free of bugs and errors.

But in the Equifax hack, as in so many others, the breach resulted from a known security vulnerability. A patch for the vulnerability had been available for two months, but Equifax failed to apply it even though the vulnerability was being actively exploited. This wasn’t the first time Equifax had failed to take computer security seriously.
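
In other words, this was at bottom a patch-management failure. Here is a minimal sketch (in Python) of the kind of automated audit that catches such lapses; the component names, versions, and advisory data below are hypothetical, for illustration only, not Equifax’s actual inventory:

```python
# Minimal sketch of a dependency patch audit. The advisory data and the
# deployed inventory are hypothetical, for illustration only.

def version_tuple(v: str) -> tuple:
    """Turn '2.3.5' into (2, 3, 5) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Security advisories: component -> first version containing the fix.
advisories = {"example-web-framework": "2.3.32"}

# What is actually running in production.
deployed = {"example-web-framework": "2.3.5"}

for component, running in deployed.items():
    fixed_in = advisories.get(component)
    if fixed_in and version_tuple(running) < version_tuple(fixed_in):
        print(f"VULNERABLE: {component} {running} predates fix {fixed_in}")
```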

Even if increasing liability only accomplished an increased incentive to patch known security problems, that alone would protect millions of people.

The High Bar to Harm

While there are exceptions, too often courts dismiss data breach lawsuits based on a cramped view of what constitutes “harm.” These courts mistakenly require actual or imminent loss of money due to the misuse of information that is directly traceable to a single security breach.

Yet outside of data breach cases, courts routinely handle cases where damages aren’t just a current loss of money or property. The law has long recognized harms such as the infliction of emotional distress, assault, damage to reputation, and harm to future business dealings. Victims of medical malpractice and toxic exposures can receive current compensation for the potential of future pain and suffering. As two law professors, EFF Advisory Board member Daniel J. Solove and Danielle Keats Citron, noted in comparing data breach cases to the recent claims of emotional distress brought by Terry Bollea (Hulk Hogan) against Gawker: “Why does the embarrassment over a sex video amount to $115 million worth of harm but the anxiety over the loss of personal data (such as a Social Security number and financial information) amount to no harm?”

For harms that can be difficult to quantify, some specific laws (e.g., copyright, wiretapping) provide for “statutory damages,” which set an amount per infraction. Under U.S. copyright law, for example, statutory damages range from $750 to $30,000 per infringed work, so a plaintiff need not prove actual losses.

The recent decision dismissing the cases arising from the 2014-2015 Office of Personnel Management (OPM) hack is a good example of these “data breach blinders.” The court required that the plaintiffs, mostly government employees, demonstrate that they faced a certain, impending, and substantial risk that the stolen information would be misused against them, and that they be able to trace any harm they alleged to the actual breach. The fact that data sufficient to impersonate them was stolen, and stolen due to OPM’s negligence, was not sufficient. The court then disappointingly found that the fact that the Chinese government, as opposed to ordinary criminals, is suspected of having stolen the information counted against the plaintiffs in demonstrating likely misuse.

The ruling is especially troubling because we know that it can take years before the harms of a breach are realized. Criminals often trade our information back and forth before acting on it; indeed there are entire online forums devoted to this exchange. Stolen credentials can be used to set up a separate persona that incurs debts, commits crimes, and more for quite a long time before the victim is aware of it. And it can be difficult if not impossible to trace a problem with credit or criminal activity misuse back to any particular breach.

How are you to prove that the bad data that torpedoed your mortgage application came from the breaches at Equifax as opposed to the OPM, Target, Anthem, or Yahoo breaches, just to name a few?

What the Future Holds

When data is being declared the ‘oil of the digital era’ and millions in venture capital funding await those who can exploit it, it’s time to reevaluate how to think of data breaches and misuse, and how we restore access to the courts for those impacted by them.

Simply shrugging shoulders, as the OPM judge did, is not sufficient. Courts need to start applying what they already know in awarding emotional distress damages, reputational damages, and prospective business advantage damages to data breach cases, along with the recognition of current harm due to future risks, as in medical malpractice and pollution cases. If the fear caused by an assault can be actionable, so should the fear caused by the loss of enough personal data for a criminal to take out a mortgage in your name. These lessons can and should be brought to bear to help data breach victims get into the courthouse door and all the way to the end of the case.

If the political will is there, legislatures, both federal and state, can step up and create incentives for greater security and a much steeper downside for companies that fail to take the necessary steps to protect our data.

The standing problem requires innovation in crafting claims, but even the Supreme Court, in its recent Spokeo decision, recognized that intangible harms can still be harms under the Constitution, and Congress can make that intention even clearer with proper legislative language. Alternately, as in copyright or wiretapping cases where the damages are hard to quantify, Congress can use techniques like statutory damages to ensure that those harmed receive compensation. Making such remedies clearly available in data misuse and breach cases is worthy of careful consideration. So far, the federal bills being floated in response to the Equifax breach and earlier breaches neither remove these obstacles to victims bringing legal claims nor ensure a private right of action.

Similarly, outside of the shadow of federal standing requirements, state legislatures can consider models of specific state law protections like California’s Lemon Law, formally known as the Song-Beverly Consumer Warranty Act. The Lemon Law provides specific extra remedies for those purchasing a new car that needs significant repairs. States should be able to recognize that data breach situations are special and may similarly require special remedies. Options to consider include giving victims easier (and free) ways to clean up their credit, rather than just the standard, insufficient credit monitoring schemes.

By looking at various options, Congress and state legislatures could spur a race to the top on computer security and create real consequences for those who choose to linger on the bottom.

Of course, shoring up our legal remedies isn’t the only avenue for incentivizing companies to protect our data better. Government agencies like the Federal Trade Commission and state attorneys general have a role to play, as does public pressure and media attention.

One thing is for sure: As long as the consequences for neglecting to protect user data are weak, data breaches like the Equifax breach will continue to occur. Worse, it will become increasingly difficult for victims to demonstrate which breach caused their credit rating to drop, their job prospects to dim, or their hopes for a mortgage to be dashed. It’s long past time for us to rethink the approach to harm in data breach cases.

This story originally appeared on the EFF’s blog.
