Category Archives: Big Data

Amazon, Google, Facebook, Microsoft: The scramble to beat Apple, dominate hardware, and own your future


Google’s $1.1 billion acquihire of HTC’s Pixel team is just the latest example of a megatrend that is turning the tech industry upside down and has enormous consequences for the role technology plays in all of our lives.

Put simply, the biggest of big tech companies have decided that to win, they must control every aspect of how we interact with technology. And that means building out Apple-like ecosystems of gadgets that capture as much of your data as possible and keep it within their very own walled gardens.

The promise to consumers is to create, like Apple, a system that offers ease and simplicity of use, and rewards users for investing more in a single brand. By the same measure, those systems become a trap that eliminates choice over time as they limit compatibility with competing systems.

In rocketing to becoming the world’s most valuable company, Apple has demonstrated the profound appeal of this integrated approach. With some exceptions now and then, Apple’s products are seductively simple to connect with each other, and it’s easy to manage a single account across those gadgets. At home, I have an Apple TV, an iPad, three iPhones, and a MacBook Air. These all communicate seamlessly. I haven’t taken the Apple Watch plunge, though, and have no plans to get one.

Now, Apple is extending that ecosystem with the HomePod, a voice-activated speaker powered by Siri. But if you want to listen to music, you’ll need to have an Apple Music subscription. There is some speculation that at some point, Apple could open it up to third-party developers. For the moment, it will be a closed system when it goes on sale in December.

Of course, for Apple, this integration of hardware and software has been a consistent approach. And because the company doesn’t overtly make money from users’ data, it’s a bit less ominous, I suppose.

On the other hand, for rivals like Google, Amazon, and Facebook, that data is already being used, and will increasingly be used, for ad targeting, marketing campaigns, and efforts to get you to buy more and more.

Google-Alphabet has been going down this road for some time now, as it has tried with mixed success to become a hardware company. Its acquisition of Nest, its Chromecast stick for TV, Nexus phones, Chromebooks, and Google Home have helped ease the pain of failures like Google Glass. Still, this isn’t enough for Google, which 18 months ago hired Rick Osterloh to be its senior vice president of hardware.

It was Osterloh who apparently oversaw the deal with HTC and the decision to basically bring in-house the production of the Pixel phone, its attempt to create a flagship Android device.

“Our team’s goal is to offer the best Google experience — across hardware, software and services — to people around the world. We’re excited about the 2017 lineup, but even more inspired by what’s in store over the next five, 10, even 20 years,” he wrote in a blog post about the HTC deal. “Creating beautiful products that people rely on every single day is a journey, and we are investing for the long run.”

It is an innocent-sounding statement that is less than innocent.

Google is already under fire in Europe for abusing its search engine monopoly to beat back competitive shopping services. The EU is also investigating whether it used its Android OS to force handset manufacturers and consumers to use various Google services on their phones.

Google is planning to make more announcements about its hardware plans at an event on October 4. Even if Google products allow third-party developers access to the hardware, as with Android, there are major concerns that with the default being Google’s voice assistant or search, the company is hoping to grab data that might otherwise flow to, say, Apple or any other competitor.

Amazon’s push to lasso the physical and the virtual also has troubling overtones. From its start with the Kindle reader, and later the Amazon Fire TV stick, the company has built a growing portfolio of products such as the Echo, now driven by its Alexa voice assistant. The company is also eager for other hardware developers to build Alexa capability into third-party hardware. And it’s attracting a growing number of developers who are creating skills for Alexa.

The company has other niche hardware products, like the Dash buttons for ordering more goods, the Dash Wand for reading barcodes at home, and the Echo Look connected camera. This is creating a massive storehouse of information about user behavior and user data. How Amazon mixes and matches that data is going to draw close scrutiny from regulators and privacy advocates, particularly as the company invests more in physical stores like Whole Foods.

And then comes Facebook, which for the moment has made its biggest splash with its acquisition of Oculus VR. But the company increasingly realizes where the game is headed. A year ago, Facebook publicly announced it had built a 22,000-square-foot hardware lab. Some of that work focuses on its internal IT projects, like servers and switches. And some focuses on its various efforts to expand connectivity to remote areas.

But earlier this year, the company unveiled its Surround 360 cameras. Now there are reports it’s building a video chat device and its own smart speaker, and possibly even a modular smartphone.

Microsoft, in contrast, has tempered its hardware ambitions after a big push a few years ago. Though its phones have flopped and its Nokia deal was a bust, it has gotten traction with its Surface tablet, and of course the Xbox has had a long run as a top video gaming platform. Likely, Microsoft remains chastened by its various antitrust battles over the years, along with some of these costly flops. And like Google’s Android, it’s a bit restrained by its need to manage its long-standing OEM relationships.

But it’s also in some ways trying to zig away from these other tech giants by becoming easier to use on other platforms, particularly with some of its Office apps, in an effort to extend its appeal. And it’s focusing more on its cloud services competition against Google and Amazon.

Still, overall, the direction is clear. Consumers are going to be facing tough choices not just about a single product or service, but about the larger implications of what it means in terms of when and where that service or product might work. They are going to be forced to grapple with a larger number of interoperability issues as they try to figure out which service works with which hardware, and which hardware can be paired with which other hardware.

The path of least resistance, over time, will be to simply buy hardware and services from one company. And once you do, the barrier to switching to other products will become higher and higher. Companies will talk about being “open,” but they will only kinda sorta mean it.

While U.S. antitrust and consumer protection enforcement has become relatively toothless in recent years, these companies can also likely look forward to more intense scrutiny and legal battles with the European Union. The EU has been far more aggressive in ensuring that these companies don’t use their might to limit consumer choices. The expanding walled gardens of hardware would seem to be painting a big red target on the backs of these tech companies.

In the meantime, consumers ought to remain vigilant. Each gadget unto itself can be fun, seductive, and even actually useful. But as they step further into the embrace of one company, consumers should make sure they aren’t entering into a relationship from which there is no exit.


Big Data – VentureBeat

New eBook Alert: Know What You Don’t Know About Your Customers

In the age of eCommerce, there is a great need for companies to verify the identity of a given customer in real time. From credit card fraud to missed opportunities, the issues of incomplete, false or unreliable customer data can have a big effect on the bottom line of your business.

Our latest eBook, “Know What You Don’t Know About Your Customers,” looks at how you can reduce your risk while capitalizing on data-driven opportunities in a three-step strategy to real-time customer data verification.


Download the eBook now!


Syncsort + Trillium Software Blog

Equifax says server first compromised on March 10


(Reuters) — Equifax Inc said on Wednesday that investigators had determined that an online dispute website at the heart of the theft of some 143 million consumer records was initially compromised by hackers on March 10, four months before the company noticed any suspicious activity.

It disclosed the findings after details of a report by cyber-security firm FireEye Inc that was sent to some Equifax customers were reported by the Wall Street Journal earlier on Wednesday.

The report, which was obtained by Reuters, described the techniques that the unknown attackers used to compromise Equifax, including exploitation of a vulnerability in software known as Apache Struts that was used to build the online dispute website.

It is not clear whether the March hackers were the same ones who later stole the vast cache of personal information. Equifax also said a previously reported incident in which some W-2 forms were compromised, also in March, was entirely unrelated.

The FireEye report said the firm was unable to determine who was behind the attack, and that it had never seen a hacking group employ the same tools, techniques and procedures as those used against Equifax.

A FireEye spokesman declined to comment on the report.

Equifax said in a statement to Reuters that a hacker “interacted with” the server on March 10, but that there was no evidence that the incident was related to the theft of sensitive consumer data that began in May.

The Wall Street Journal report said that hackers had roamed undetected inside Equifax’s network for four months before the massive breach was detected in July by the company’s security team. Equifax disputed that claim.

“There is no evidence that this probing or any other probing was related to the access to sensitive personal information” in the massive breach disclosed on Sept. 7, the company said in its statement.

Equifax shares have shed almost a third of their value since the disclosure of the breach. Critics have questioned why Equifax took so long to discover and disclose the breach.

One security expert who reviewed the FireEye report said that it was too soon to say whether the March 10 incident was related to the massive hack.

“They’ve had so much overlapping activity that it’s difficult to pick a single thread out of the noise,” said the expert, who was not authorized to discuss details of the confidential report.


Big Data – VentureBeat

Open Data is Great – But Only If You Ensure Data Quality

Open data is all around us these days, which is a great thing. To leverage open data effectively, however, you need to be prepared to address the data quality risks. Here’s why and how.

Defining Open Data

The open data movement takes its cues from the open source software movement.

Open source software refers to programs whose source code is available for the public to download, inspect, modify and, if desired, expand.


In a similar fashion, open data refers to data sets that anyone can access and use as they wish.

Open data is usually free of cost, although that is not the defining characteristic. Openness – that is, the quality of being openly accessible to anyone – is what makes open data what it is.

Government agencies have become one of the leading sources of open data. Governments like New York City’s and the federal government of the United States make data sets freely available online.

Scientific research projects also sometimes provide open data sets. The Human Genome Project makes a range of important data sets freely available, for example.


Why Use Open Data

Simply put, open data is a great resource. Companies can and should take advantage of open databases when the data fit their needs. In many cases, doing so is a fast and cost-effective way to gain access to data that can drive analytics engines and deliver important insights.

For example, imagine that your company wants to know what kind of public Wi-Fi infrastructure is available to customers to help predict how much bandwidth the company can expect an app to support for those customers. If the customers happen to be living in New York City, the company can grab open data related to Wi-Fi availability for residents. That’s a lot faster and easier than compiling all that data from scratch.


Open Data and Data Quality

As great as open data is, it comes with a caveat. In some cases, it may not provide the data quality required to make the data actionable.

This isn’t because most open data sets are inherently low in quality. The fact that they are (usually) free does not mean you can’t trust the data inside them. This may be the case with some open databases, but most open projects provide data that is as reliable as any you collect yourself. (Indeed, you can make the argument that because open data sets are available for anyone to inspect, they are likely to have fewer errors, because there are more people to notice that something is wrong.)

Still, no data set is perfect, and open databases are no exception. Take the open database related to Wi-Fi in New York City. The database includes the street address for each Wi-Fi access point, along with latitude and longitude coordinates. If it is important for you to know for certain exactly where each Wi-Fi access point is located, you’d want to cross-check this information to make sure the street addresses align with the map coordinates.

You’d also probably want to make sure that all the street addresses actually exist. Data entry errors, address changes or other problems could easily introduce flaws into this part of the database.
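To make this concrete, here is a minimal Python sketch of the kinds of automated checks you might run before trusting such a data set. The file name and column names are assumptions for illustration; adjust them to the actual schema of the open data set you download.

import pandas as pd

# Hypothetical local copy of the NYC open Wi-Fi data set
df = pd.read_csv("nyc_wifi_hotspots.csv")

# 1. Completeness: every access point should have a street address and coordinates
missing = df[df[["Location", "Latitude", "Longitude"]].isnull().any(axis=1)]
print(len(missing), "rows with a missing address or missing coordinates")

# 2. Validity: coordinates should fall inside a rough bounding box for New York City
in_nyc = df["Latitude"].between(40.49, 40.92) & df["Longitude"].between(-74.26, -73.70)
print((~in_nyc).sum(), "rows with coordinates outside the city")

# 3. Uniqueness: flag records that look like duplicate access points
dupes = df[df.duplicated(subset=["Location", "Latitude", "Longitude"], keep=False)]
print(len(dupes), "potential duplicate records")

Checks like these won’t catch an address whose coordinates point to the wrong block, but they quickly surface the obvious gaps and duplicates before the data feeds an analytics engine.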

Data quality tools – including the Trillium data quality solutions that are now part of Syncsort’s suite of Big Data solutions – can help you perform the checks you need to identify and fix potential data quality errors like these.

ASG and Trillium Software recently hosted an educational webcast that explored the need for improving data quality, as well as some common challenges:  Watch the replay now!



Syncsort + Trillium Software Blog

Kleiner Perkins leads $15 million investment in Incorta’s data analytics software


Incorta, which provides software to analyze data in real time, today announced that it has secured an additional $15 million in funding. Kleiner Perkins Caufield & Byers led the round, with participation from existing investors GV and Ron Wohl, Oracle’s former executive vice president. This new round of funding comes on the heels of a $10 million investment announced last March.

The San Mateo, California-based startup provides an analytics platform that enables real-time aggregation of large, complex sets of business data, such as for enterprise resource planning (ERP), eliminating the need to painstakingly prepare the data for analysis.

“With Incorta, there is no additional need to put data into a traditional data warehouse ahead of time,” wrote Incorta cofounder and CEO Osama Elkady, in an email to VentureBeat. “This reduces the time to build analytic applications from months to days.”

The startup claims that over the past year it has increased revenue more than 300 times and signed new customers, including Shutterfly.

“Incorta customers who used Exadata or Netezza appliances, or data engines like Redshift or Vertica, enjoyed performance gains of 50 to 100 times when they switched to Incorta, even on the most complex requests,” wrote Elkady.

The enterprise offering is licensed as an annual subscription on a per user basis and can be deployed on Google Cloud, Microsoft Azure, and Amazon Web Services (AWS). The startup is in talks with other cloud providers, according to Elkady.

Today’s fresh injection of capital will be used to further product development and increase sales and marketing. “It will also enable us to more quickly realize our vision for a third-party marketplace where vendors and content providers can build and distribute applications powered by Incorta’s platform,” wrote Elkady.

Founded in 2014, Incorta has raised a total of $27.5 million to date and currently has 70 employees.



Big Data – VentureBeat

How to Clean and Trust Your Big Data

You’ve probably witnessed it, and maybe are doing it. Many organizations are just dumping as much data into a data lake as they can, trying to get to every data source in an enterprise and putting all the data into the data lake. We see it here at Syncsort, with the vast amount of data from mainframe and other sources heading to the data lake for analytics and other use cases. But what if you can’t trust the data because it has errors in it, duplicated records (like customer records!), and is generally just “dirty data”?

You need to clean it! Syncsort just announced Trillium Quality for Big Data to do just that. To get more insight into the challenges the new product helps tackle, let’s walk through a real-world data quality example: creating a single view of a customer (or any other entity, like a supplier or product).


Parsing and Standardization are First Steps to Clean Big Data

A series of data quality steps must be taken to clean Big Data and de-duplicate it to get a single view. To create a single view of a customer or product, for instance, we need to have everything in a standard format to get the best match. Let’s talk through the first two steps: parsing and standardization.

Let me use a simple example of parsing and standardization.

100 St. Mary St.

As humans, we know that is an address, 100 Saint Mary Street, because we understand the position of each token. Did you know that postal address formats can vary from country to country? In Germany, for example, the house number comes after the street name.

Now think about all the different formats for names, company names, addresses, product names, and other inventoried items such as books, toys, automobiles, computers, manufacturing parts, etc.

Now think about different languages.
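To illustrate the idea (not how Trillium implements it), here is a toy Python sketch that parses a simple US-style address and standardizes the abbreviations, using position to decide whether “St” means “Saint” or “Street”:

ABBREVIATIONS = {"st": "Street", "ave": "Avenue", "rd": "Road", "blvd": "Boulevard"}

def standardize(address: str) -> dict:
    tokens = address.replace(".", "").split()
    house_number, *street = tokens
    # Position matters: a trailing "St" is a street type, but "St Mary" is "Saint Mary"
    if street and street[-1].lower() in ABBREVIATIONS:
        street[-1] = ABBREVIATIONS[street[-1].lower()]
    street = ["Saint" if token.lower() == "st" else token for token in street]
    return {"house_number": house_number, "street": " ".join(street)}

print(standardize("100 St. Mary St."))
# {'house_number': '100', 'street': 'Saint Mary Street'}

A real parser has to cope with thousands of such rules, country-specific field orders, and multiple languages, which is exactly why this step is usually handled by a dedicated data quality tool.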

Next Step to Draining the Swamp: Data Matching for the Best Single Records

Once we have all this data in a common, standard format, we can then match. But even this can be complex. You can’t rely on customer IDs, for instance, to ensure de-duplicated data. Think about how many different ways customers are represented in each of the source systems that have been polluting the data lake; matching is a hard problem.

Think about a name, Josh Rogers (I’ll pick on our CEO). The name could be in many different formats – or even misspelled – across your source systems and now in the data lake:

J. Rogers
Josh Rodgers
Joseph Togers


If you use the right data quality tools to clean Big Data in your data lake, your data won’t reside in murky waters.

As a marketing analyst, I have a new product to promote, and I must make sure I’m targeting the right customer/prospect. If Josh lives in a small town in zip code 60451 (New Lenox, IL – my home town!), he’s probably the only one on that street.

But if his zip code is 10023 (upper west side of NYC), there might be more than one person with that name at that address (think about the name Bob Smith!). Matching is a complex problem, especially dealing with the data volumes in a data lake.

The last step is survivorship: commonizing the matched records and surviving the best fields to make up the single best record.
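As a rough illustration of matching and survivorship (the threshold, field names, and single similarity score below are simplifications for the sketch, not Trillium’s actual matching rules), consider this small Python example:

from difflib import SequenceMatcher

records = [
    {"name": "Josh Rogers",  "zip": "10023", "email": "jrogers@example.com", "phone": None},
    {"name": "J. Rogers",    "zip": "10023", "email": None,                  "phone": "212-555-0100"},
    {"name": "Josh Rodgers", "zip": "10023", "email": "jrogers@example.com", "phone": None},
]

def similar(a, b, threshold=0.75):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Matching: group records whose names look alike within the same zip code
anchor = records[0]
matches = [r for r in records if r["zip"] == anchor["zip"] and similar(r["name"], anchor["name"])]

# Survivorship: build the single best record by surviving the first non-null value per field
golden = {field: next((r[field] for r in matches if r[field]), None) for field in anchor}
print(golden)
# {'name': 'Josh Rogers', 'zip': '10023', 'email': 'jrogers@example.com', 'phone': '212-555-0100'}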

Now, Let’s Run This in Big Data

Creating the single best, enriched record is exactly what Trillium Quality for Big Data does. The product allows the user to create and test the steps above locally, then leverage Syncsort’s Intelligent Execution technology to execute them in Big Data frameworks such as Hadoop MapReduce or Spark. The user doesn’t need to know these frameworks, and it’s also future-proofed for new ones which we all know are coming.
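With Trillium Quality for Big Data the user never writes this code, but purely to give a sense of the kind of job that runs in those frameworks, a hand-written Spark version of the de-duplication step might look roughly like this (the input path and column names are assumptions):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dedupe-sketch").getOrCreate()

# Hypothetical customer data already landed in the data lake
customers = spark.read.parquet("s3://datalake/customers/")

golden = (
    customers
    # Standardize the field used as a match key
    .withColumn("name_key", F.regexp_replace(F.lower(F.col("name")), "[^a-z ]", ""))
    # Group candidate duplicates and survive one best value per field
    .groupBy("name_key", "zip")
    .agg(
        F.first("name", ignorenulls=True).alias("name"),
        F.first("email", ignorenulls=True).alias("email"),
        F.first("phone", ignorenulls=True).alias("phone"),
    )
)

golden.write.mode("overwrite").parquet("s3://datalake/customers_golden/")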

So, what makes Trillium Quality for Big Data different?

  • The product has more matching capabilities than any other technology, ensuring you get that single view
  • For postal addresses, we have worldwide postal coverage and geocoding (latitude/longitude)
  • Performance and scalability, using Intelligent Execution to run in Big Data environments on large and growing volumes of data

Now it’s time to go clean the data swamp and make it a trusted, single view data lake!

Discover how today’s new data supply chain impacts how data is moved, manipulated, and cleansed – download our eBook The New Rules for Your Data Landscape today!



Syncsort + Trillium Software Blog

Next Week’s Strata Data Conference: What’s in a Name?

Next week, thousands of Big Data practitioners, experts and influencers will gather at New York’s Javits Center to attend the newly-branded Strata Data Conference. According to event organizers, the conference, which debuted in 2012 as Strata + Hadoop World, has been rebranded to more accurately reflect the scope of the conference beyond Hadoop.

The simplicity of the name belies the increasingly diverse and complex ecosystem of Big Data tools and technology that will be covered during three days packed with tutorials, keynotes and track sessions. It can be quite overwhelming – but here’s a sampling of what’s going on to help you plan your week.

Strata Data Conference Keynotes

Keynotes are always a great way to get energized for the day ahead, and this year looks to be no different, with titles including “Wild, Wild Data,” “Weapons of Math Destruction,” the cautionary “Your Data is Being Manipulated,” and the upbeat “Music, the window into your soul.”

We’re also looking forward to the presentation by Cloudera co-founder Mike Olson and Cesar Delgado, the current Siri platform architect at Apple.

Expanded Session Topics Connect Technology to the Business

While the main driver for dropping “Hadoop” from the conference title was to be more inclusive of the breadth of technology discussed, the new name appears to coincide with an expansion of topics that connect the technology to the business. In addition to Findata Day – a separate event curated for finance executives held on Tuesday – there is a “Strata Business Summit” track within the main conference, tailored for executives, business leaders and strategists.

Looking for more sessions that marry technology and business? You can filter on topics for Business Case Studies, Data-driven Business Management, Enterprise Adoption, and Law, Ethics & Governance.

Speaking of Governance … if you want to make sure the Big Data in your organization is actually trusted by the people who need and use it, be sure to attend “A Governance Checklist for Making Your Big Data into Trusted Data,” presented by our VP of product management, Keith Kohl, on Thursday at 2:05 pm.


Strata Data Conference Events You Won’t Want to Miss

Last, but not least, what’s a great conference without some great events? Here are a few favorites:

  • Ignite: Presenters get 5 minutes to present on an interesting topic – from technology to philosophy – that touches on the wonder and mysteries of Big Data and pervasive computing. Always a favorite, this event is free and open to the public. So stop by even if you don’t have a conference pass!
  • Strata Data Gives Back: Join Cloudera Cares and O’Reilly Media in assembling care kits for New York’s homeless and at-risk youth, in partnership with the Covenant House NY. Visit the Cloudera stand in the Expo Hall to get involved.
  • Booth Crawl and Data After Dark: Unwind after a day of sessions with fellow attendees, speakers and authors while you enjoy a vendor-hosted cocktail hour in the Expo Hall. Be sure to stop by Syncsort Booth #715 where you can enjoy a Mexican Fiesta and get our latest t-shirt! Ask our data experts how you can unlock valuable – and trusted – insights from your mainframe and other legacy platforms using our innovative Big Data Integration and Data Quality solutions! Then head to 230 Fifth, New York’s largest outdoor rooftop garden, for Data After Dark: City View.

Haven’t registered for Strata Data Conference yet? Get a 20% discount on us!


Syncsort + Trillium Software Blog

Intel and Waymo collaborate on self-driving compute platform


(Reuters) — Intel Corp on Monday announced a collaboration with Alphabet’s Waymo self-driving unit, saying it had worked with the company during the design of its compute platform to allow autonomous cars to process information in real time.

The world’s largest computer chipmaker said its Intel-based technologies for sensor processing, general compute and connectivity were used in the Chrysler Pacifica hybrid minivans that Waymo has been using since 2015 to test its self-driving system.

“As Waymo’s self-driving technology becomes smarter and more capable, its high-performance hardware and software will require even more powerful and efficient compute,” said Intel Chief Executive Brian Krzanich in a statement announcing the ongoing collaboration.

Intel, which announced the $15 billion acquisition of autonomous vision company Mobileye in March, is pushing to expand its real estate in autonomous vehicles, a fast-growing industry. A collaboration with Waymo, considered by many industry experts to be at the forefront of autonomous technology, adds to its portfolio.

The announcement marked the first time Waymo, formerly Google’s autonomous program, has acknowledged a collaboration with a supplier. The company has done most of its development work in-house.

Intel began supplying chips for what was then Google’s autonomous program in 2009, but that relationship grew into a deeper collaboration when Google began working with Fiat Chrysler Automobiles (FCHA.MI) to develop and install the company’s autonomous driving technology into the automaker’s minivans.

Waymo, which has developed its own sensors, is not using the autonomous vision system created by Mobileye.

Underscoring the non-exclusive partnerships and collaborations in the space, Mobileye and Intel are in an alliance with German automaker BMW (BMWG.DE) and Fiat-Chrysler to create an industry-wide autonomous car platform.

Waymo CEO John Krafcik said fast processing was crucial to the performance of its autonomous vehicles.

“Intel’s technology supports the advanced processing inside our vehicles, with the ability to manufacture to meet Waymo’s needs at scale,” Krafcik said in a statement.


Big Data – VentureBeat

IBM simulates chemical reactions on a quantum computer

IBM has unveiled a new approach to simulating molecules using a quantum computer, a breakthrough that could change materials science and chemistry. In the journal Nature, Big Blue said this will enhance our understanding of complex chemical reactions. IBM said that it could lead to practical applications, such as the creation of new materials, development of personalized drugs, and discovery of more efficient and sustainable energy sources.

Quantum computing holds a lot of promise because it represents a new way of doing computation with quantum bits, or qubits. Unlike conventional bits, a qubit can be a one or a zero or both. Using these qubits enables machines to make great numbers of computations simultaneously, making a quantum computer very good at certain kinds of processing. IBM has been researching quantum computing for years, and it is starting to report more progress.
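For a tiny worked illustration of that “one or a zero or both” idea (a classical NumPy simulation for intuition, not how IBM’s hardware works), a qubit can be written as a two-component state vector, and a Hadamard gate turns the definite state |0> into an equal superposition:

import numpy as np

zero = np.array([1.0, 0.0])                   # the |0> state: definitely "0"
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

superposed = hadamard @ zero                  # equal parts |0> and |1>
probabilities = np.abs(superposed) ** 2       # measurement probabilities (Born rule)
print(probabilities)                          # [0.5 0.5] -- a 50/50 chance of reading 0 or 1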

IBM scientists developed an algorithm that uses a seven-qubit quantum processor, and they actually used six qubits of that processor to measure the lowest energy state of beryllium hydride (BeH2), a key measurement for understanding chemical reactions. So far, that is the largest molecule simulated on a quantum computer. While this model of BeH2 can be simulated on a classical computer, IBM said its approach has the potential to scale to investigate larger molecules that classical computing methods can’t handle.

Above: IBM’s quantum computer. Image Credit: IBM

“Thanks to Nobel laureate Richard Feynman, if the public knows one thing about quantum, it knows that nature is quantum mechanical. This is what our latest research is proving — we have the potential to use quantum computers to boost our knowledge of natural phenomena in the world,” said Dario Gil, vice president of AI research and IBM Q at IBM Research, in a statement. “Over the next few years, we anticipate IBM Q systems’ capabilities to surpass what today’s conventional computers can do and start becoming a tool for experts in areas such as chemistry, biology, health care, and materials science.”

To help showcase how adept quantum computers are at simulating molecules, developers and users of the IBM Q experience are now able to access an open source quantum chemistry Jupyter Notebook. Available through the open-access QISKit GitHub repo, this allows users to explore a method of ground state energy simulation for small molecules, such as hydrogen and lithium hydride. A year ago, IBM launched the IBM Q experience by placing a five-qubit quantum computer on the cloud for anyone to freely access, and it most recently upgraded to a 16-qubit processor available for beta access.
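The quantity those notebooks estimate is, in essence, the lowest eigenvalue of the molecule’s Hamiltonian. Here is a toy classical sketch of that idea in NumPy; the 2x2 matrix is an arbitrary stand-in rather than a real molecular Hamiltonian, and the whole point of the quantum approach is that exact diagonalization like this stops being feasible as molecules grow:

import numpy as np

# Toy Hermitian "Hamiltonian" -- a placeholder, not real chemistry
H = np.array([[-1.0,  0.2],
              [ 0.2, -0.5]])

ground_state_energy = np.linalg.eigvalsh(H)[0]  # eigvalsh returns eigenvalues in ascending order
print(ground_state_energy)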

“The IBM team carried out an impressive series of experiments that holds the record as the largest molecule ever simulated on a quantum computer,” said Alán Aspuru-Guzik, professor of chemistry and chemical biology at Harvard University, in a statement. “When quantum computers are able to carry out chemical simulations in a numerically exact way, most likely when we have error correction in place and a large number of logical qubits, the field will be disrupted. Exact predictions will result in molecular design that does not need calibration with experiment. This may lead to the discovery of new small-molecule drugs or organic materials.”

Chemistry is one example of a broader set of problems that quantum computers are potentially well-suited to tackle. Quantum computers also have the potential to explore complex optimization routines, such as might be found in transportation, logistics, or financial services. They could even help advance machine learning and artificial intelligence, which relies on optimization algorithms. Earlier this year, IBM scientists and collaborators demonstrated that there is a defined advantage to running a certain type of machine learning algorithm on a quantum computer.

For future quantum applications, IBM anticipates certain parts of a problem being run on a classical machine while the most computationally difficult tasks might be off-loaded to a quantum computer. This is how businesses and industries will be able to adopt quantum computing into their technology infrastructure and solutions. To get started today, developers, programmers, and researchers can run quantum algorithms, work with individual quantum bits, and explore tutorials and simulations on the IBM Q experience. In addition, IBM has commercial partners exploring practical quantum applications through the IBM Research Frontiers Institute.


Big Data – VentureBeat

Equifax announces Chief Security Officer and Chief Information Officer have left


(Reuters) — Equifax said on Friday that it made changes in its top management as part of its review of a massive data breach, with two technology and security executives leaving the company “effective immediately.”

The credit-monitoring company announced the changes in a press release that gave its most detailed public response to date of the discovery of the data breach on July 29 and the actions it has since taken.

The statement came on a day when Equifax’s share price continued to slide following a week of relentless criticism over its response to the data breach.

Lawmakers, regulators and consumers have complained that Equifax’s response to the breach, which exposed sensitive data like Social Security numbers of up to 143 million people, had been slow, inadequate and confusing.

Equifax on Friday said that Susan Mauldin, chief security officer, and David Webb, chief information officer, were retiring.

The company named Mark Rohrwasser as interim chief information officer and Russ Ayres as interim chief security officer, saying in its statement, “The personnel changes are effective immediately.”

Rohrwasser has led the company’s international IT operations, and Ayres was a vice president in the IT organization.

The company also confirmed that Mandiant, the threat intelligence arm of the cyber firm FireEye, has been brought on to help investigate the breach. It said Mandiant was brought in on Aug. 2 after Equifax’s security team initially observed “suspicious network traffic” on July 29.

The company has hired public relations companies DJE Holdings and McGinn and Company to manage its response to the hack, PR Week reported. Equifax and the two PR firms declined to comment on the report.

Equifax’s share price has fallen by more than a third since the company disclosed the hack on Sept. 7. Shares shed 3.8 percent on Friday to close at $92.98.

U.S. Senator Elizabeth Warren, who has built a reputation as a fierce consumer champion, kicked off a new round of attacks on Equifax on Friday by introducing a bill along with 11 other senators to allow consumers to freeze their credit for free. A credit freeze prevents thieves from applying for a loan using another person’s information.

Warren also signaled in a letter to the Consumer Financial Protection Bureau, the agency she helped create in the wake of the 2007-2009 financial crisis, that it may require extra powers to ensure closer federal oversight of credit reporting agencies.

Warren also wrote letters to Equifax and rival credit monitoring agencies TransUnion and Experian, federal regulators and the Government Accountability Office to see if new federal legislation was needed to protect consumers.

Connecticut Attorney General George Jepsen and more than 30 others in a state group investigating the breach acknowledged that Equifax has agreed to give free credit monitoring to hack victims but pressed the company to stop collecting any money to monitor or freeze credit.

“Selling a fee-based product that competes with Equifax’s own free offer of credit monitoring services to victims of Equifax’s own data breach is unfair,” Jepsen said.

Also on Friday, the chairman and ranking member of the Senate subcommittee on Social Security urged the Social Security Administration to consider nullifying its contract with Equifax and consider making the company ineligible for future government contracts.

The two senators, Republican Bill Cassidy and Democrat Sherrod Brown, said they were concerned that personal information maintained by the Social Security Administration may also be at risk because the agency worked with Equifax to build its E-Authentication security platform.

Equifax has reported that for 2016, state and federal governments accounted for 5 percent of its total revenue of $3.1 billion.

400,000 Britons affected

Equifax, which disclosed the breach more than a month after it learned of it on July 29, said at the time that thieves may have stolen the personal information of 143 million Americans in one of the largest hacks ever.

The problem is not restricted to the United States.

Equifax said on Friday that data on up to 400,000 Britons was stolen in the hack because it was stored in the United States. The data included names, email addresses and telephone numbers but not street addresses or financial data, Equifax said.

Canada’s privacy commissioner said on Friday that it has launched an investigation into the data breach. Equifax is still working to determine the number of Canadians affected, the Office of the Privacy Commissioner of Canada said in a statement.


Big Data – VentureBeat