Category Archives: Big Data

3 lessons your company can draw from AI implementations outside the tech sector


It’s clear: Artificial intelligence has transformed the way we live. According to PwC, 55 percent of consumers would prefer to receive new media recommendations from AI — a development that illuminates how much we’ve integrated the technology into our lives.

Google, Amazon, and Microsoft are just a few of the obvious innovators embracing bot-powered business functions, but others are also taking notice. Artificial intelligence’s ability to synthesize and analyze data can easily improve business operations for many industries, including hospitality, restaurants, and travel. Such markets experience success when they revise their customer experience or marketing strategies with machine learning and chatbots.

Smaller companies can adopt AI too, with the right strategy, by looking to larger enterprises for insight into how business AI works. Rather than mimicking the latest trends, however, your smaller organization should consider taking small steps toward success. The following examples show how three large “non-tech” companies are embracing AI — and how to follow their lead.

1. Lemonade Insurance: Chatbots simplify business processes

New York-based startup Lemonade Insurance claims to disrupt the insurance industry with its flat-fee home, renters, and life insurance policies. Unlike its industry peers, Lemonade turns to machine learning and chatbots to deliver services, handle insurance claims, and reduce paperwork when generating quotes. Its customers benefit from shorter claims processes and supportive customer experiences that help them understand how insurance works.

Lemonade acknowledges that AI is ready to change the insurance industry. Its CEO and cofounder Daniel Schreiber recently shared how Lemonade uses chatbots to prevent the loss of useful data in the application process. He said that insurance companies have already taken note of how AI technology “transforms the user experience, appeals to younger consumers, and removes costs” and that the industry will take those innovations even further in the coming years.

Try it: Lemonade employs chatbots to simplify complex business processes (such as applying for an insurance quote) and clarify esoteric information. Consider building or outsourcing a chatbot platform to collect and communicate data to or from your customers.
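
A minimal sketch of what such an intake bot can look like under the hood, written in Python. The conversation script, field names, and QuoteRequest structure below are illustrative assumptions, not Lemonade’s actual system; the point is that a scripted bot captures answers as structured data from the start rather than as free-form paperwork.

```python
from dataclasses import dataclass, field

@dataclass
class QuoteRequest:
    """Structured record the bot fills in as the conversation progresses."""
    answers: dict = field(default_factory=dict)

# Scripted intake flow: each step pairs a field name with the question the bot asks.
INTAKE_SCRIPT = [
    ("full_name", "Hi! I can get you a renters quote in a minute. What's your name?"),
    ("zip_code", "Thanks! What ZIP code is the apartment in?"),
    ("coverage_amount", "Roughly how much coverage do you want, in dollars?"),
]

def run_intake(ask):
    """Walk the script, collect answers, and return a structured quote request.

    `ask` is any callable that sends a prompt to the user and returns the reply:
    console input here, but it could be a web chat widget or a messaging webhook.
    """
    request = QuoteRequest()
    for field_name, question in INTAKE_SCRIPT:
        request.answers[field_name] = ask(question).strip()
    return request

if __name__ == "__main__":
    quote = run_intake(input)  # swap `input` for your channel's send/receive function
    print("Collected:", quote.answers)
```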

2. United Airlines: Virtual assistants provide support

The third-largest U.S.-based airline became the first of its kind to integrate with Amazon Alexa in 2017. The United skill enables customers to interact with Alexa and get answers to common questions about United’s domestic flights: Just say, “Alexa, ask United [your query here].” Encouraged by its success, United Airlines later announced an integration with Google Assistant, allowing customers to access updates from their smartphones.

Smart devices and virtual assistant platforms affect how customers connect with notable brands. The virtual assistant trend has penetrated 56 percent of U.S. households, and United Airlines capitalized on the technology’s ubiquity. Any company willing to adapt to the ever-changing pace of customer behavior can meet evolving customer expectations.

Try it: Avoid reinventing the wheel. Take advantage of Amazon and Google’s AI infrastructure with tools like Alexa Skills and Actions on Google. Amazon and Google even offer professional support teams to guide companies through the integration process to make it as simple as possible.
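
If you go the Alexa Skills route, the handler that backs a custom skill can be surprisingly small. The sketch below is a bare-bones AWS Lambda handler written against the Alexa Skills Kit JSON request/response format; the FlightStatusIntent name and the spoken responses are illustrative placeholders you would replace with your own interaction model.

```python
def build_response(text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point AWS Lambda invokes with the skill's JSON request."""
    request = event.get("request", {})
    if request.get("type") == "LaunchRequest":
        return build_response("Welcome. Ask me about your flight status.", end_session=False)
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {}).get("name")
        if intent == "FlightStatusIntent":  # placeholder intent from your interaction model
            return build_response("Flight one twenty three is on time.")
    return build_response("Sorry, I didn't catch that.")
```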

3. Marriott Hotels: Branded apps improve data collection

The hospitality and event planning industries have started using intuitive, branded digital experiences to improve customer interactions. Artificial intelligence allows leaders like Marriott to oversee event management with digital applications. When event planners book at a Marriott hotel, the luxury chain presents them with the Marriott Meeting Services App to assist with planning, launching, and monitoring their event. Planners receive real-time updates on their event locations, catering, and hotel services. Event attendees can also interface with Marriott’s in-app chatbot for critical information, such as parking availability and itineraries.

Artificial intelligence implementations like this can build positive branded experiences. When customers trust their favorite brands to create enjoyable interactions, businesses benefit from the helpful data generated by better customer engagement. The data you gather can help you further improve the customer experience and your targeted marketing campaigns.

Try it: Marriott uses branded event apps to track and analyze digital engagement. Outfitting your own digital or mobile application with machine learning or a chatbot function can improve data collection while offering insights into your brand reputation.
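
As a concrete (and hypothetical) illustration of the data-collection payoff, the pandas sketch below aggregates chatbot interaction events, hard-coded here, into the kind of per-topic engagement summary a brand team could act on. The column names are assumptions for the example, not Marriott’s schema.

```python
import pandas as pd

# Assumed event log exported from the app's chatbot: one row per interaction.
events = pd.DataFrame([
    {"user_id": "u1", "intent": "parking_info", "satisfied": True,  "ts": "2018-04-02 09:15"},
    {"user_id": "u2", "intent": "itinerary",    "satisfied": False, "ts": "2018-04-02 09:20"},
    {"user_id": "u1", "intent": "itinerary",    "satisfied": True,  "ts": "2018-04-02 10:05"},
])
events["ts"] = pd.to_datetime(events["ts"])

# Which topics do guests ask about most, and how often does the bot resolve them?
summary = (events.groupby("intent")
                 .agg(requests=("user_id", "count"),
                      resolution_rate=("satisfied", "mean")))
print(summary)
```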

Artificial intelligence revolutionizes how the world’s leading companies conduct business. Smaller brands can join industry leaders by adopting popular AI technology such as chatbots and voice assistants. When done strategically, your business can use AI to simplify complex functions, collect more data, and discover key insights into your digital brand.

Darina Murashev is a digital journalist who contributes to Benzinga, Scoop.It, and Business Woman Media.


Big Data – VentureBeat

Expert Interview (Part 1): Tobi Bosede on What It Takes to Be a Machine Learning Engineer

At the Strata Data Conference in New York City in the fall, Paige Roberts of Syncsort had a chance to sit down with Tobi Bosede, Sr. Machine Learning Engineer, shortly after her presentation. In the first of this three-part blog series, Bosede explains what goes into being a Machine Learning Engineer, as well as some of the projects she is currently involved with.

Roberts: So, tell everyone a little about yourself.

Bosede: My name is Tobi Bosede. I got my graduate degree from Johns Hopkins, and my presentation was on the research I did for my graduate thesis. I am a Machine Learning Engineer, so I look at all sorts of data relating to finance, not necessarily relating to trades, and that includes staying up to date with current tools and technologies, such as Python libraries, or using things like Ansible. Implicitly, even though by title I’m a Machine Learning Engineer, I’m also a Data Engineer.


You have to be. Do you do a lot of your work on the cloud?

Yeah. Some of it is local on my machine but, yeah, a good amount is on the cloud, especially things that we productionize. Even when we are doing a demo and we are all trying to work together on something, we’ll put it on AWS, for instance. There are so many technologies, and it’s fast-changing and fast-moving, so a huge part, aside from research, is just staying up to date by going to conferences, talking to peers at various companies, and reading about current technologies, because it’s always changing.

Keeping up because it always changes. Yeah, I get that.

In addition to being a Machine Learning Engineer, I’ve been involved in NumFOCUS. I have done some extracurricular things with them. Have you heard of NumFOCUS?

No, I haven’t, tell me about it.

Well, it’s a non-profit. It was kind of spun off from Continuum to support Python libraries like NumPy, SymPy, and Pandas. Those are all NumFOCUS projects. So, basically, the idea is to fund open source projects that might otherwise have difficulty staying up to date and being maintained. It’s actually pretty time-consuming to do all that open source work. Wes McKinney created Pandas, but he also has a day job. The larger the number of consumers or users of your tool and your library, the more intensive the maintenance and upkeep on it is.

You have to have a fairly good community to keep it up.

Yeah, I’ve been on NumFOCUS’s DISC, which is their Diversity and Inclusion in Scientific Computing committee. They had an Unconference at PyData NYC in November 2017.

Make sure to check out part 2 where Bosede will explain predicting trade volumes and the correlation between volume and volatility.

Download our free eBook, “Mainframe Meets Machine Learning,” to learn about the most difficult challenges and issues facing mainframes today.


Syncsort + Trillium Software Blog

The 3 most valuable applications of AI in health care


Artificial intelligence could prove to be a self-running growth engine for the health care sector in the not-so-distant future.

A recent report from Accenture analyzed the “near-term value” of AI applications in health care to determine how the potential impact of the technology stacks up against the upfront costs of implementation. Results from the report estimated that AI applications in health care could save up to $150 billion annually for the U.S. health care economy by 2026.

The report focused on 10 AI applications with potential for near-term impact in medicine and analyzed each application to derive an associated estimated value. Researchers considered the impact of each application, likelihood of adoption, and value to the health economy in their evaluation.

Here are the top three AI applications with the greatest value potential in health care, according to the report’s findings.

1. Robot-assisted surgery: Estimated value of $40 billion

Robotic surgeries are considered “minimally invasive” surgeries – meaning practitioners replace large incisions with a series of quarter-inch incisions and utilize miniaturized surgical instruments.

Cognitive surgical robotics combines information from actual surgical experiences to improve surgical techniques. In this type of procedure, medical teams integrate the data from pre-op medical records with real-time operating metrics to improve surgical outcomes. The technique enhances a physician’s instrument precision and can lead to a 21 percent reduction in a patient’s length of hospital stay post operation.

The da Vinci technique allows surgeons to perform a range of complex procedures with greater flexibility and control in comparison to conventional techniques. Considered to be the world’s most advanced surgical robot, the da Vinci’s robotic limbs have surgical instruments attached and provide a high-definition, magnified, 3-D view of the surgical site. A surgeon controls the machine’s arms from a seat at a computer console near the operating table. This allows the surgeon to successfully perform surgeries in tight spaces and reduce the margin for error.

Also under the physician’s control is HeartLander – a miniature mobile robot that can enter the chest through an incision below the sternum. It reduces the damage required to access the heart and allows the use of a single device for performing stable and localized sensing, mapping, and treatment over the entire surface of the heart. In addition to administering the therapy, the robot adheres to the epicardial surface of the heart and can autonomously navigate to the directed location.

2. Virtual nursing assistants: Estimated value of $20 billion

Virtual nursing assistants could help achieve a reduction in unnecessary hospital visits and lessen the burden on medical professionals. According to Syneos Health Communications, 64 percent of patients reported they would be comfortable with AI virtual nurse assistants, listing the benefits of 24/7 access to answers and support, round-the-clock monitoring, and the ability to get quick answers to questions regarding medications.

Sensely, a San Francisco-based maker of virtual nurse assistants, recently raised $8 million in Series B funding to deploy fleets of AI-powered nurse avatars to clinics and patients. The key goals of the technology are to keep patients and care providers in communication between office visits and to prevent hospital readmissions. Sensely’s best-known nurse avatar is Molly, which uses a proprietary classification engine to listen and respond to users.

Care Angel’s virtual nurse assistant, Angel, is another good example in this category. The bot enables wellness checks through voice and AI to drive better medical outcomes at a lower cost, and it can manage, monitor, and communicate with patients using unique insights and real-time notifications.

3. Administrative workflow assistance: Estimated value of $18 billion

Automation of administrative workflow ensures that care providers prioritize urgent matters and can also help doctors, nurses, and assistants save time on routine tasks. Some applications of AI on the administrative end of health care include voice-to-text transcriptions that automate non-patient care activities like writing chart notes, prescribing medications, and ordering tests.

An example of this comes from Nuance. The company provides AI-powered solutions that rely on machine learning to help health care providers cut documentation time and improve reporting quality. Computer-assisted physician documentation (CAPD) like this provides real-time clinical documentation guidance that helps providers ensure their patients receive an accurate clinical history and consistent recommendations.

Another example of this is a five-year agreement between IBM and Cleveland Clinic that aims to transform clinical care and administrative operations. The collaboration uses Watson and other advanced technologies to mine big data and help physicians provide a more personalized and efficient treatment experience. Watson’s natural language processing capabilities allow care providers to quickly and accurately analyze thousands of medical papers to provide improved patient care and reduce operational costs.

Johns Hopkins Hospital made a similar move in its partnership with GE Healthcare Camden Group. This initiative aims to improve patient care and efficiency via the adoption of hospital command centers equipped with predictive analytics. The strategy will help health care professionals make quick and informed decisions about operational tasks like scheduling bed assignments and managing requests for unit assistance.

Bottom line

While advancements like those mentioned in this article will leave little room for human error and boost overall outcomes and consumer trust, reservations remain about AI’s practical applicability in health care. Patients and caregivers fear that a lack of human oversight and the potential for machine errors could lead to mismanagement of their health. Among many concerns, data privacy remains one of the biggest challenges for a health care system that may come to rely heavily on AI.

Despite these concerns, AI’s future in health care is inevitable, and if this report is any indication of its impact, the potential benefits might just outweigh the risks.

Deena Zaidi is a Seattle-based contributor for financial websites like TheStreet, Seeking Alpha, Truthout, Economy Watch, and icrunchdata.


Big Data – VentureBeat

Highly Available Data: Why High Availability Is Not Just for Apps

High availability is a buzzword in IT today. Usually, it refers to applications and services that are resistant to disruption. But it should also apply to your data. Here’s why.


What Is High Availability?

High availability refers to the ability of an application, service, or other IT resource to remain constantly accessible, even in the face of unexpected disruptions.

In an age when a cloud service that fails for even just a few hours can significantly impact the ability of businesses to maintain operations, and when vendors typically guarantee certain levels of uptime via SLA contracts, maintaining high availability is crucial.

Unlike in the past, when users expected infrastructure to fail from time to time, downtime is unacceptable in most contexts today.

In reality, virtually every type of service or resource will fail occasionally. One hundred percent uptime is not a realistic goal; even the best-managed services go down sometimes. But uptime on the order of 99.99 percent or higher is now standard (AWS, for instance, famously promises eleven 9s of durability for its S3 storage service, alongside a 99.99 percent availability design target). That’s the type of high availability that organizations strive for today.
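
The arithmetic behind those “nines” is worth keeping handy. A quick back-of-the-envelope calculation shows how an availability percentage translates into an annual downtime budget:

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_budget(availability_pct):
    """Minutes of downtime per year permitted by a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability allows {downtime_budget(pct):.1f} minutes of downtime per year")
# 99.99 percent works out to roughly 52.6 minutes of downtime per year.
```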

High Availability for Data

In most cases, when people talk about high availability, they’re thinking about applications and services. Using automated server failover, redundant nodes and other strategies, they design systems that allow applications and services to continue running even if part of their infrastructure fails.

Yet the high availability concept can and should be extended to data. After all, without data to crunch, many applications and services are not very useful. If you plan a high availability strategy that addresses only your applications, you fall short of ensuring complete business continuity.

Achieving Data High Availability


What does high availability for data look like in practice? All of the following considerations should factor into a data high availability strategy:

  • Servers that host data need to be resilient against disruption. You can, as noted above, achieve this by using redundant servers to host your data and/or automated failover (a minimal failover sketch follows this list).
  • Databases should be architected in such a way that the failure of one database node won’t cause the database to be inaccessible. Databases should also be able to restart themselves automatically if they do crash, in order to minimize downtime.
  • If you rely on the network to access data, which you probably do, network availability is an important component in data high availability.
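
Here is what an application-side piece of that resilience can look like, sketched in Python: try the primary database endpoint first and fall back to a replica if it is unreachable. The hostnames and the use of psycopg2 are assumptions for illustration; the same pattern applies to any database driver.

```python
import psycopg2  # any database driver with a connect() call works the same way

# Ordered list of endpoints: primary first, then replicas (hostnames are illustrative).
ENDPOINTS = [
    {"host": "db-primary.internal", "dbname": "app", "user": "app", "password": "secret"},
    {"host": "db-replica.internal", "dbname": "app", "user": "app", "password": "secret"},
]

def get_connection():
    """Return a connection to the first reachable endpoint, raising if all are down."""
    last_error = None
    for params in ENDPOINTS:
        try:
            return psycopg2.connect(connect_timeout=3, **params)
        except psycopg2.OperationalError as err:
            last_error = err  # endpoint unreachable; fall through to the next one
    raise RuntimeError("No database endpoint is reachable") from last_error
```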

Highly Accessible Data

Data high availability can be taken a step further, too. In addition to keeping your data infrastructure and services up and running, you can build an even more effective high availability strategy for data by ensuring that your data is highly accessible.

Highly accessible data is data that you can work with readily. It’s quality data that is consistent and available in the format or formats you need in order to work with it. It’s data that is compatible with whichever tools you are using for analysis and interpretation.

By aiming for high data accessibility as well as high availability, you ensure not only that you can always reach your data, but also that the data is ready to use.

To learn even more about the state of disaster recovery preparedness in organizations today, read Syncsort’s full “State of Resilience” report.


Syncsort + Trillium Software Blog

Webcast: Data Quality-Driven GDPR: Compliance with Confidence

If you happened to miss our recent webcast, “Data Quality-Driven GDPR: Compliance with Confidence,” it’s now available to view on demand.

Explore key insights on how data quality can help you achieve your GDPR compliance with confidence, including:

  • GDPR readiness: What companies must be prepared for
  • Why Data Quality is so critical for GDPR compliance
  • How to address data-related GDPR challenges through a practical, structured approach


Before regulations like GDPR, enterprise-grade Data Quality tools might have been viewed as “nice to have” or driven by pockets of the organization, such as marketing or sales. Today, Data Quality is easily recognized as a critically important data-based challenge that can jeopardize regulatory compliance.

Make sure to watch the on-demand webcast today!


Syncsort + Trillium Software Blog

GoDaddy wants to help small businesses compete using AI


GoDaddy may not spring to mind as a developer of cutting-edge AI technology, but the internet company is currently employing new tech to help small businesses compete with tech giants.

“If you have the local bookstore that has built their website on GoDaddy, that local bookstore needs to compete with Amazon,” GoDaddy director of engineering Jason Ansel told VentureBeat in an interview. “And Amazon’s using a lot of machine learning. Amazon is a machine learning powerhouse. [So] basically, how can we use our machine learning expertise at GoDaddy to help that little bookstore compete in an increasingly machine learning-dominated world?”

One of the most significant issues facing those small businesses is a shortage of data compared to their huge competitors. But Ansel says GoDaddy is in a position to pool information across its massive customer base to create intelligent systems that help them all.

The first project in that vein is a system the company claims can value internet domain names better than a human can. It uses new advancements in artificial neural networks to achieve superhuman results, with the goal of creating a valuation metric for domain names that works much as Zillow’s Zestimate works for homes.

“You have this domain valuation industry, which is dominated by a small number of experts who know a lot about how to value domains and have these really large portfolios,” Ansel said. “And it’s also an industry where there’s a huge variation in prices. People don’t really know what domains are worth. And so there’s a lot of people who either pay too much or pay too little.”

GoDaddy tested its system against a group of human experts using a random sample of domain names and their sale prices. Judged on root mean square error, the algorithm was more than 20 percent better than the human experts.
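
Root mean square error is simply the square root of the average squared gap between predicted and realized prices, so a lower score is better. The snippet below shows the metric on made-up numbers; the figures are illustrative only and have nothing to do with GoDaddy’s actual test set.

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean square error between predicted and realized sale prices."""
    predicted, actual = np.asarray(predicted, dtype=float), np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((predicted - actual) ** 2))

# Illustrative numbers only: model vs. expert estimates for the same five domains.
actual = [1200, 300, 15000, 800, 2500]
model  = [1000, 450, 12500, 900, 2100]
expert = [2000, 150, 20000, 400, 4000]
print("model RMSE: ", rmse(model, actual))
print("expert RMSE:", rmse(expert, actual))
```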

To get there, GoDaddy created a new system to better generate prediction ranges for the price of a domain, since many previous techniques assume data distributions that don’t match those the company has observed.

Fueling that system is a complex set of features designed to capture all the complexities of valuing a domain name. To begin, GoDaddy trained a model to separate and evaluate the words in a domain to help determine its value. That’s a complicated task in itself since domain names typically don’t have characters separating them. The system also considers over 100 other features beyond the words in the domain to understand additional factors that would affect its value.
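
GoDaddy hasn’t published how its segmentation model works, but the underlying problem of splitting a run-together label like “bestpizzadeals” into words can be illustrated with a classic dictionary-plus-memoization approach. The toy word list below stands in for the large, frequency-weighted lexicon a real system would use.

```python
# Toy dictionary; a real system would use a large word list with frequency weights.
WORDS = {"best", "pizza", "deals", "deal", "a"}

def segment(label, memo=None):
    """Split a domain label with no separators into dictionary words, or return None."""
    if memo is None:
        memo = {}
    if label == "":
        return []
    if label in memo:
        return memo[label]
    for i in range(len(label), 0, -1):  # try longer prefixes first
        prefix, rest = label[:i], label[i:]
        if prefix in WORDS:
            tail = segment(rest, memo)
            if tail is not None:
                memo[label] = [prefix] + tail
                return memo[label]
    memo[label] = None
    return None

print(segment("bestpizzadeals"))  # ['best', 'pizza', 'deals']
```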

“There’s also things like how long [the domain name is], which you’re better off encoding directly because the value of three-letter dot com often has nothing to do with the words, it’s just that it’s really short,” Ansel said.

In addition to length, GoDaddy’s AI analyzes factors like top-level domain use (.com, .net, .de, .pizza, etc.); which company hosts a particular domain; when it was sold; and where the sale took place. All of those factors help paint a better picture of the demand for a specific piece of digital real estate.

To create this system, GoDaddy needed a massive amount of data, which it has in spades. Ansel said the company has millions of historical data points on domain name sales, which can then be used to perform transfer learning on the resale data set, which has 250,000 data points.
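
Ansel doesn’t describe the model architecture, so the sketch below is only a generic illustration of the transfer-learning pattern he mentions: pretrain on a large primary-market history, then freeze the learned representation and fine-tune on the much smaller resale set. Keras and the synthetic data are assumptions made for the sake of a runnable example.

```python
import numpy as np
from tensorflow import keras

n_features = 100  # stand-in for the domain features described above
rng = np.random.default_rng(0)
X_primary, y_primary = rng.normal(size=(50_000, n_features)), rng.normal(size=50_000)
X_resale, y_resale = rng.normal(size=(2_500, n_features)), rng.normal(size=2_500)

# 1. Pretrain on the much larger primary-market history.
model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_primary, y_primary, epochs=2, batch_size=256, verbose=0)

# 2. Freeze the shared layers and fine-tune only the output head on resale data.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")
model.fit(X_resale, y_resale, epochs=5, batch_size=64, verbose=0)
```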

“So really, GoDaddy is one of the very few companies in the world that could produce this, because we’re the only ones with the data,” Ansel said. “GoDaddy in the United States is around 60 percent of the primary domain market, so we’re the largest domain name registrar in the world. In the domain name aftermarket — which is basically reselling domains from person to person — we’re also the largest by the count of sales.”

Right now, the domain name valuation tool is available in open beta.

Correction 9:40 a.m. Pacific: This story initially said that the domain valuation tool was not yet available. GoDaddy has released it in open beta.


Big Data – VentureBeat

How Alexa is spurring brand competition in the race for AI compatibility


Alexa may be the most important name in business this year. During the 2017 holiday shopping season, the Echo Dot device was the top-selling product on Amazon, putting Alexa in millions more living rooms. Software, electronics, and appliance companies are rapidly building in Alexa compatibility to stay relevant. If Amazon’s Alexa continues expanding its market dominance, the lack of such compatibility could spell doom for even the biggest brands.

Amazon offers several devices that utilize Alexa, including the Echo, Echo Dot, Echo Show, and the Echo Spot. Each device acts as the central nervous system in a smart home, allowing thousands of voice-activated commands and skills. Alexa’s use in American homes is exploding with tens of millions of devices sold in 2017. What’s more, roughly 16 percent of Amazon Alexa users have more than one device.

Alexa sales are so high for a reason. Hundreds of products from a range of brands integrate seamlessly with Alexa, allowing voice control on everything from light bulbs to washing machines. This connectivity lets customers build their own smart home according to what they need and want. The astounding number of compatible devices separates Alexa from other AIs (like Siri and Google Assistant), and the rate of growth in the AI market could leave brands that won’t integrate with Alexa in the dust.

Smart home tech

Alexa’s rapid success has spurred nearly every major brand to integrate a Wi-Fi-connected smart device into its lineup. Companies from all corners of the home appliance market are racing to add compatibility with Alexa’s growing list of over 15,000 skills.

Larger appliances are one of the big surprises in the smart home market. Many brands didn’t expect customers to want more conversation from their refrigerator, but the success of early adopters like Samsung quickly set a trend in the market. Today, appliances of every kind are Wi-Fi enabled and integrate seamlessly with an Echo hub, and all signs point to an expanding market in 2018.

Alexa also brings a few advantages to common home appliances. With Alexa, you can start the dryer just before you leave work and set the oven to preheat while you relax on the couch. Alexa can even keep up with refills of common home products like dish detergent and dryer sheets — about 90 percent of Echo owners are also Amazon Prime members.

Streaming services

Amazon, Google, and Netflix are vying for the top spot in video streaming services, and this growing viewership is challenging other entertainment giants from cable and satellite TV. Amazon announced in February that music streaming on its devices had tripled over the past year and that video-on-demand streaming was “up 9x year-over-year.” Alexa’s popularity as an entertainment hub is driving more and more users away from traditional services, and big investments from streaming giants are keeping pace with audience demand.

Cable companies are mostly seeing the writing on the wall. Dish Network now offers Alexa compatibility on its Hopper DVR and Wally receiver, carving out a niche with smart home users. Comcast still hasn’t built in compatibility, however, requiring users to jump through hoops with third-party remotes.

Amazon has no trouble sharing space with the TV veterans. The company has added several new Alexa tools for compatibility with cable companies, but it’s up to the companies to build the technology into their devices. Cable users who can’t integrate their TV services with Alexa are cutting the cord. It’s a win-win scenario for Amazon — and life or death for cable companies.

The next AI battleground

Despite Alexa’s dominance in the smart home market, Apple’s Siri and Google’s Assistant remain the leaders in the smartphone and computer markets. Amazon is taking notice, and the release of the HTC U11 with Alexa built in signals the next big competitive arena for the AI giants.

Google and Apple are not backing down without a fight, and 2018 could be the year that a new front-runner emerges. On the HTC U11, Google’s Assistant works right alongside Alexa, allowing users to choose freely between them. Meanwhile, Google and Apple are rapidly expanding their smart home lineups.

Google’s Assistant is furthest ahead in AI development; natural language capabilities and the ability to answer follow-up questions put it at the top of the ladder for ease of use. The Google Home smart speaker is compatible with some devices and appliances in just about every category and has IFTTT services for programming routines.

Apple is catching up, too. Its new HomePod speaker leverages what it claims to be the best sound quality on the market, and it’s betting on the entertainment market to compete with Alexa. Even Microsoft is getting in on the competition, with a new speaker from Harman Kardon that comes equipped with its AI, Cortana.

With top brands embracing the sea change in home technology, Wi-Fi connections are becoming as common as power cords on home devices. Innovation in the AI market could help brands pull ahead of their competition, while late adopters risk extinction in the face of the Internet of Things.

Allie Shaw is a freelance writer who writes for PopSugar, SWAAY, and WebDesignerDepot.


Big Data – VentureBeat

Data Integration Challenges in a Siloed World

Modernizing your infrastructure and operations means breaking down “silos” — including those that hamper your data integration processes. Here’s a look at the silos that typically stand in the way of data integration, and what businesses can do to tear them down.

“Breaking down silos” is lingo that you’ll hear if you follow the DevOps movement. Part of the point of DevOps is to eliminate the barriers that typically prevent different types of IT staff — such as the development and the IT Ops teams — from collaborating with each other.


According to the DevOps mantra, everyone should work in close coordination, rather than having each team operate in its own silo. Silos stifle innovation, make automation difficult and lead to the loss of important information as data is transferred between teams.

Silos and Data Integration


Although the DevOps movement focuses primarily on software development and delivery, rather than data operations, the value of tearing down silos is not limited to the world of DevOps.

The same concept can be applied to data integration operations — especially if you embrace the DataOps mantra, which extends DevOps thinking into the world of data management.

After all, the typical business’s data operations tend to be “siloed” for a number of reasons:

  • Businesses have many discrete sources of data, ranging from server and network logs to website logs, digital transaction records, and perhaps even ink-and-paper files. Because each type of data originates from a different source, building a single process for integrating all of the data into a common pipeline can be challenging.
  • Different teams within the organization tend to produce different types of information, and they may not share it with each other. For example, your marketing department might store data related to customer engagement in a recent offline ad campaign. That data might be able to provide insights to website designers, who could use it to determine how best to engage customers online. But chances are that your marketing team and web design team don’t communicate much, or share data with each other on a routine basis.
  • Modern IT infrastructure tends to be quite diverse. Your business may use a combination of on-premises and cloud servers, with multiple operating systems, Web servers, and so on in the mix. Each part of your infrastructure produces logs and other types of data in its own format, making integration hard.
  • Young data and historical data are usually stored in different locations. You probably archive historical data after a certain period, for example, possibly off-site. In contrast, real-time data and near-real-time data may remain in their original data sources. This also leads to data silos because data is stored in different places depending on its age.

How do you destroy these silos? The short answer is data integration.


Data integration refers to the process of collecting data from disparate sources and turning it into actionable information. Data integration typically involves data aggregation, data transformations, and data visualizations.
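
As a minimal, hypothetical illustration of that aggregate-and-transform step in Python with pandas: two silos, a web log and a transaction export, are normalized and combined into one customer-level view. The column names and values are invented for the example.

```python
import pandas as pd

# Two siloed sources with different shapes (inlined here; in practice read from logs, databases, or exports).
web_visits = pd.DataFrame([
    {"customer_id": "c1", "page": "/pricing", "visited_at": "2018-03-01 10:00"},
    {"customer_id": "c1", "page": "/docs",    "visited_at": "2018-03-02 11:30"},
    {"customer_id": "c2", "page": "/pricing", "visited_at": "2018-03-02 12:00"},
])
transactions = pd.DataFrame([
    {"customer_id": "c1", "amount": 120.0, "purchased_at": "2018-03-03"},
    {"customer_id": "c3", "amount": 80.0,  "purchased_at": "2018-03-04"},
])

# Transform: normalize types so the two sources line up.
web_visits["visited_at"] = pd.to_datetime(web_visits["visited_at"])
transactions["purchased_at"] = pd.to_datetime(transactions["purchased_at"])

# Aggregate: one row per customer that combines behaviour from both silos.
combined = (web_visits.groupby("customer_id").size().rename("visits").to_frame()
            .join(transactions.groupby("customer_id")["amount"].sum().rename("revenue"),
                  how="outer")
            .fillna(0))
print(combined)
```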

The end goal of data integration is to turn data into information that can deliver meaningful insights to people reviewing it, without forcing them to think too hard in order to find those insights.

If your data remains siloed, data integration is nigh impossible. You can’t achieve easy, obvious insights if you have to look in multiple places to find data, or if complementary data produced by different teams or machines is never combined.

Nor can you integrate data effectively if your organization is siloed. Everyone should be able to view collective data insights in the same place and at the same time, so that they can also communicate in the same place and at the same time.

Conclusion

In short, your data and the teams that work with it are probably spread across disparate locations. In other words, they are siloed.

Deriving maximum value from your data requires breaking down those silos through data integration.

For more Big Data insights, check out our recent webcast, 2018 Big Data Trends: Liberate, Integrate, and Trust Your Data, to see what every business needs to know in the upcoming year.


Syncsort + Trillium Software Blog

Why paying for Facebook won’t fix your privacy


For the first time, it seems that you might be able to pay Facebook money instead of giving it your personal information to show you ads. During Mark Zuckerberg’s Senate hearing last Tuesday, two senators, Orrin Hatch and Bill Nelson, asked Zuckerberg if he would let Facebook users pay for the service, instead of relying on collecting data to fuel the current ad-supported business model. To these questions, Zuckerberg responded that “there will always be a version of Facebook that is free.”

This implies that having two versions of Facebook, one paid and one free, is in fact a possibility. Shortly after, Facebook COO Sheryl Sandberg said in an interview that a paid product is not out of the question. This is big news because moving away from the purely ad-supported business model means moving away from how most of the internet is currently financed.

By now, many have suggested that we move from a targeted-ad-supported internet to a pay-for-the-service internet or that we should otherwise monetize our personal information. Some simply divide Facebook’s annual revenue by its number of users. So if Facebook made $40.1 billion last year, and it has 2.2 billion users, each could pay $18.50 annually — although to know how much your data is contributing to Facebook’s revenue, you should adjust that by region and every other demographic it has about you. Others point to companies with different business models, like Salon, which profits from using consumers’ computers to mine bitcoin instead of mining their data. In the House Committee hearing, Congressman Paul Tonko suggested another approach, asking Zuckerberg, “Why doesn’t Facebook pay its users for their incredibly valuable data?” The problem with these proposals, and with the suggestion of a paid and privacy-respecting Facebook that many hope might someday happen, is that you can’t simply monetize your data. Monetizing your Facebook data might be unworkable, it’s not profitable for Facebook, and it might even cause further problems.

Monetizing your Facebook data is difficult because, unlike money, data gives endless profit possibilities in the long run. For example, if Facebook served me ads for ultra-comfort sneakers, modern furniture, and sportswear (don’t judge me), it’s because Facebook has decided I might buy these products. But Facebook decided this not based on information it gathered this session, or this week, or this year. Facebook’s targeting choices are based on information collected about me since I joined the platform back in 2007. Whatever Facebook shows me now reflects years of mining my data. Therefore, what is most valuable to Facebook today is not the few minutes of my attention when I visited the website, but the accumulation of hundreds of those minutes in which it learned about me.

Moreover, monetizing your privacy is not only about what it’s worth to Facebook but also about what it’s worth to you, and this is difficult to see because pieces of information combine in unexpected ways to reveal more information about you. Facebook and other companies following the same business model, such as Google, can learn a host of unexpected things about us from the different pieces of information that we give them. For example, if you use a wearable, combining data from its accelerometer and gyroscope can determine whether your movements are steady or shaky, and thereby indicate your level of relaxation at any given time. A company that has as much information about us as Facebook and Google do takes this unexpected assembling of information to a whole new level. This is crucial for monetization because the way that information combines makes it close to impossible to say how much a given piece of information is worth to a company or to you. Of course, full knowledge of all future aggregations would solve this. But this is hardly possible because adding up what is collected now and what was collected yesterday depends on context, and no one knows what new information will arrive in the future.

As if that were not enough, your Facebook information combines not only with other information about you but also with information about others. Information about other users is crucial to how Facebook extracts profit from you because that is how it guesses what you might like. The company knows that a person with my demographics might like modern furniture because a lot of other people with my demographics have demonstrated that preference on Facebook. For that reason, if half of the people with these demographics drop out of an information-supported Facebook into a pay-per-month Facebook, the ad-supported profit derived from all the rest of us wouldn’t simply be half of what it currently is; it would be much lower.

Lastly, the value of my information to Facebook, and its value to me, depends not only on what information is collected, but also on how it’s used. The Cambridge Analytica scandal is the perfect illustration of this. Even if we overcame all of the difficulties mentioned above, to monetize someone’s personal information we would need to know exactly how it will be used.

So this is the bad news: While Facebook might offer an option to pay instead of having targeted ads shown, it’s also likely that people purchasing such an option will have their personal information collected anyway. Zuckerberg hinted there might be a version of Facebook that is not free, but he never hinted that he might stop collecting your data. And Facebook is only one example of a ubiquitous business model.

The reason this is bad news is that the most important problem with the current business model is not renting out spots for targeted ads. The problem is what is done to make these spots valuable: collecting users’ personal information indiscriminately. Not viewing ads might make for a more pleasant browsing experience, but it will not solve Facebook’s privacy problem.

The way forward is not to get rid of ads through payment or regulation but to better enforce norms on the appropriate collection and dissemination of our private information. Ridding social media of ads will not prevent the next Cambridge Analytica. Understanding the appropriate ways to collect and disseminate consumers’ personal information in their service just might.

Ignacio Cofone is a postdoctoral research fellow at the NYU Information Law Institute. His research focuses on information privacy law and behavioral economics. Before joining NYU, he was a resident fellow at the Yale Information Society Project and a legal advisor to the City of Buenos Aires.


Big Data – VentureBeat

Scality raises $60 million to accelerate development of its cloud-based storage tools


San Francisco-based Scality today announced that it has raised $60 million in venture capital, bringing its total raised to $152 million.

The company sells software to manage and protect data for companies using one or more cloud systems. The distributed approach to helping customers work across these different clouds is gaining traction as online data gathering and management becomes more complex.

“We are very proud that our customers are delighted by the reliability, performance, and cost-effectiveness of our solutions, and at the same time, they praise us for our forward thinking,” said Jerome Lecat, CEO of Scality, in a statement. “The Fourth Industrial Revolution is a real force, challenging every company in its business model and challenging every IT department.”

Founded in France almost nine years ago, Scality last raised money in 2015, when it closed a $57 million round of funding led by Menlo Ventures.

The round announced today was led by a new investor, Harbert European Growth Capital, and included participation from previous investors, including Menlo, Idinvest, and Bpifrance.

Scality says it now has more than 200 large enterprise customers. It plans to use the new money to invest in product development and marketing and to expand its staff.

“Scality’s leadership is apparent, not only through what we hear from Jerome Lecat and his team, but also through what the analysts are writing, and, most importantly, through what the company’s customers and partners are saying,” said Doug Carlisle, Partner Emeritus at Menlo Ventures, in a statement. “It’s exciting to see them grow and innovate, anticipating the truly important trends that incorporate real needs, like multi-cloud control and open source code. Scality has built a solid reputation as a leader, and they continue to prove their vision.”

Sign up for Funding Daily: Get the latest news in your inbox every weekday.


Big Data – VentureBeat