
How does the CMO role look different in the US when compared to the UK?


Analysis of CMOs from FTSE 100 companies and the Inc. 5000 list found that the typical UK CMO is male, British, and 44 years old, while US research suggests the typical US CMO is an American-born woman who has just turned 40.

In the UK and US, the path to becoming Chief Marketing Officer is often seen as a long one. But what does this coveted role really demand? What stories can we tell from the statistics available to us? To find out, we set about analysing those who hold that title – or its equivalent – across FTSE 100 companies, as well as the Inc. 5000 list of fastest-growing mid-size companies. Below are the results and key trends that emerged – in short, what it takes to become CMO supreme in both the UK and America.

Here are the five biggest takeaways from our research:

  • Success is a Waiting Game. Our research showed that on average, FTSE 100 CMOs had served their companies for around 8-9 years in total – having worked at at least three other companies beforehand, and spending around 5-6 years at each job prior to becoming CMO. In contrast, marketers in the US moved around fairly often, usually working at four or so companies for an average of 5 years, and typically serving in a company for around 5 years by the time they became CMO; a much shorter tenure than in the UK. This taught us two things: first, that a diverse resume or portfolio – proven experience across a range of titles and jobs – can go a long way in distinguishing CMO candidates, in the US especially; second, that it pays to stick around for longer in the UK, as that special promotion really could be just around the corner. It could be argued, of course, that things move slightly faster in the US job market, which is why the CMO role is awarded to people earlier in their careers. In the UK, marketers will often have to wait until their 44th birthday to become CMO, whereas in the US the big promotion tends to come around the 40th birthday. However, as Korn Ferry’s research has shown, it is of concern to a number of US firms that the CMO position has the highest turnover in the C-suite. CMOs stay in office for 4.1 years on average, whereas CEOs average 8 years; CFOs, 5.1 years; CHROs, 5; and CIOs, 4.3 years. On top of this, HBR found that 57% of CMOs in the US have been in their position for three years or less. It’s a disparity that raises important questions about how CMOs in the US are nurtured, enabled, and supported in their roles; whether, for instance, hiring should prioritize tenure (experience with the business) or longevity generally (a record of leadership roles longer than the 5-year average).
  • The Gender Gap – or lack of it: While the majority of CMOs in the UK FTSE 100 were men (60%), 56% of the companies analysed in the Inc. 5000 were led by female CMOs. It’s a difference that suggests, at least on its face, that US companies are going further in employing women in leadership roles; further research found that only 7% of FTSE 100 companies are led by a female CEO. But it’s worth remembering the historical trends that have made this possible – the number of women who have come to fill marketing and advertising roles in general (holding 60% of all positions at professional agencies) as these fields have naturally diversified. Also important to remember: per a 2016 report in Forbes, white women specifically are by and large in the majority, so US firms might have further still to go. All the same, it’s worth wondering just how long it will be before UK companies bring the male-female ratio to evens, given the very clear business benefits: diversity in the boardroom, it has been shown, usually results in “increased value for shareholders.”
  • Home Grown Talent – on both sides of the pond: In the UK, 60% of the CMOs researched were British, with 16% of these from a European background. In North America, all 70 CMOs analysed were born in the US. This tells us that both countries are keen to invest in local talent, and that the barriers to entry for outside contenders – executives born or educated outside the UK and US – are likely higher as a result; an important consideration for international companies looking to establish and grow a presence in either country.
  • Education: Is Oxbridge Irrelevant for the UK? In the UK, only 9% of the CMOs analysed as part of our research were Oxbridge educated. In fact, more CMOs studied in Ireland (10%) or at a major university in the North of England (15%) than at the two best universities in the country – suggesting, perhaps, that the prestige of Oxbridge has worn somewhat thin in recent decades. One could argue, in fact, that universities in the South of England (41%) or overseas (34%) were more reliable incubators for CMOs. Overall, the proportion of CMOs with advanced degrees (master’s degrees and higher) was about the same between the two countries – 27% of CMOs in the US, 30% of CMOs in the UK. These are figures that speak in part to larger trends in education; per the most recent US Census, more young adults than ever are pursuing postgraduate studies (9.3 percent, a steady increase over the last decade). But they may also reflect the specific advantages of a master’s for the modern marketer looking to distinguish themselves and remain competitive. A 2014 report by the U.S. Department of Education, for instance, found that the wide popularity of the MBA in particular could be explained by a perceived “return on investment”: a sense among potential employers that the degree translated directly to a job candidate’s success.
  • In-house is the way forward: A huge majority (86%) of UK CMOs moved up within the company rather than coming across from an agency, while in the US, 89% are not from an agency background. We might assume that some of those who head a company’s marketing division will have worked on ‘the dark side’, and therefore know how to work with agencies to really get the best from them. For the majority of these CMOs, however, that appears not to be the case.

Despite the research, we all know that everyone’s path to success is different. But our analysis into the British and American CMO has introduced some interesting facts that we’d be keen to keep an eye on in future – particularly in the UK. Will more women be introduced into the CMO role – and across the C-suite board in general? Will Brexit and EU negotiations signal a change in where the CMO is sourced? We’ll have to wait and see.

Check out our infographic for the full results and global comparisons.


Act-On Blog

Pence continues to look … presidential … again … he's measuring drapes!


Pence apparently decided there were no repercussions from his brief self-censored attempt to look presidential last week, so he went with a Reagan–Bush brush-clearing photo-op … recalling that such images, by contrast, make Agent Orange look even more doddering.

Trump’s own speech yesterday in MO got cut by CNN because it bordered on the illegal in terms of politicking against Claire McCaskill.

Will god-emperor Trump get nervous about Pence measuring the drapes and do his usual lashing-out? Expect some Breitbart noise.

Pence will need that work-out stamina for the coming legal battles. Unlike Trump’s lawyers, Pence’s have yet to argue against potential obstruction of justice charges. In Pence’s case, it will be because of his excuse that Mike Flynn lied to him.



moranbetterDemocrats

Inside Look: CEO’s Note to Employees on Syncsort & Vision Solutions Combination

Today, we announced the closing of Centerbridge’s $1.26 billion acquisition of Syncsort and Vision Solutions, with Syncsort CEO Josh Rogers leading the combined company under the Syncsort brand. We are sharing what most companies don’t often publicly disclose – our CEO’s note to employees. Josh provides some background on Vision and Syncsort, as well as outlining our key priorities going forward. Read on!

A Message to the Team: CEO’s Note to Employees

From: Josh Rogers
To: All Employees

Subject: A message to the team


Dear Colleagues,

For nearly fifty years, Syncsort has delivered data integration solutions for enterprises with a focus on helping them process large volumes of data as efficiently as possible. Whether the platform was the mainframe as it was starting in the late ’60s, Unix and Windows in the early ’90s, Linux in the early 2000s, or the Big Data and cloud platforms of today, Syncsort has always provided fast, efficient software that allows our customers to copy, move, aggregate, join, and generally manipulate massive data volumes as quickly as possible. Last year, with the acquisition of Trillium Software, we dramatically expanded our capabilities to include not only movement and transformation of data but improving the quality and integrity of that data.

Today we are closing on the previously announced combination with Vision Solutions. Over the past three decades, Vision has been a pioneer in the IBM i space. Nicolaas and his team have built a world-class organization that has delivered the market-leading portfolio of business resilience solutions for IBM i and AIX Power Systems environments including high availability, disaster recovery, data migration and data sharing.

At their core, these solutions are sophisticated data management products that center around the ability to replicate data in real time. This replication can be used to support data and application availability across machines and across locations. When I think about the addition of the capabilities and expertise to the Syncsort business, the possibilities for growth and innovation seem almost limitless.

As we announce the closing of the combination, I wanted to take a moment to lay out the key priorities we must pursue moving forward:

1. We will remain committed to our customers

The combined company serves more than 6,000 enterprise customers globally. These are the largest and most complex IT environments in the world. Our products are engineered into mission-critical systems that deliver products and services of every kind imaginable. Syncsort, Trillium and Vision have a long history of delivering award-winning support to these customers, and that will continue. We don’t just know how to serve customers; it is core to our culture and our DNA. The same can be said of our relationships with partners. The role our partners play in this business is core to our strategy and our ability to support our customers.

But our goal is not just to support our customers’ operations; it is to learn from them. Our engagement with customers is THE key source of insight on where we should take our products and what new products we should build. We are committed not only to supporting the current product portfolio but to expanding and evolving it. And we will do that through deep customer engagement. We are not just looking for ideas; rather, we want to partner with customers through the product definition and development process to make sure we get it right.

2. We will lead the Big Iron to Big Data market

Big Iron to Big Data is the daunting task of integrating your next-generation Big Data infrastructure with your traditional application systems that produce your critical data. The mainframe and IBM i Power Systems platforms run the transactional applications that support the global economy. That is not an exaggeration. When you swipe your credit card, inquire about a claim, make a call on your cell phone, check the app on your phone to see if your flight is on time, you are initiating a transaction that ultimately gets executed on the mainframe or IBM i Power Systems platform.

These systems are highly optimized for transaction workloads and still power the core business processes for enterprises across the globe. The Vision combination brings technology and expertise that will allow Syncsort to expand its market-leading Big Iron to Big Data solutions to include best-of-breed support for the IBM i Power Systems platform. The application and machine data generated on this platform is a critical input into a Big Data and advanced analytics strategy, and we will invest heavily here to make sure our customers can easily integrate these critical data assets into their next-generation infrastructure.

3. We will continue to expand our capabilities around data

The data industry is not new, but it is certainly at an inflection point. Customers are looking to leverage data as a strategic asset and are embracing new technologies to do so. At the same time, the vendor landscape remains fragmented. This combined organization’s scale and maturity provides a strong foundation to deliver and support mission-critical data management solutions. Our expertise in Big Data and Cloud, and our partnerships with leading next-generation platform providers are powerful assets that afford us the opportunity to acquire and modernize key data management capabilities and deliver a larger value proposition to our customers.

Just as we have extended Syncsort’s and Trillium’s core data management engines to run on Hadoop and Spark, we can apply this same approach to additional segments of data management. We will continue to search for highly differentiated, quality data management products where our expertise and global reach can help enterprises meet the next generation of data management challenges.

Big Data became mainstream, at least according to Google Trends, in 2012. Over the past five years, this desire to manage massive volumes of data, to enable businesses to ask bigger questions, has had a profound impact on the data management industry. It is a powerful trend. But in many ways, it is a symptom. There is an insatiable appetite for data. The industry is scrambling to keep up. As machine learning and artificial intelligence applications mature, this will only serve to increase the demand for data.

It is an honor to lead a team of some of the most experienced and talented data professionals in the industry. Collectively, we have played an important role in the evolution of this industry. Together, we will help shape the next era. I could not be more optimistic about what we will achieve together, and with our partners and customers. I am also really excited to meet many of our new colleagues when Nicolaas and I travel to Vision offices over the next week, and look forward to bringing everyone together later in the month for our All Hands call.

Regards,

Josh

For additional information, read the full press release: Centerbridge Completes $1.26 Billion Acquisition of Enterprise Software Providers Syncsort and Vision Solutions


Syncsort + Trillium Software Blog

A Whimsical Look At GDPR

Dan McCaffrey has an ambitious goal: solving the world’s looming food shortage.

As vice president of data and analytics at The Climate Corporation (Climate), which is a subsidiary of Monsanto, McCaffrey leads a team of data scientists and engineers who are building an information platform that collects massive amounts of agricultural data and applies machine-learning techniques to discover new patterns. These analyses are then used to help farmers optimize their planting.

“By 2050, the world is going to have too many people at the current rate of growth. And with shrinking amounts of farmland, we must find more efficient ways to feed them. So science is needed to help solve these things,” McCaffrey explains. “That’s what excites me.”

“The deeper we can go into providing recommendations on farming practices, the more value we can offer the farmer,” McCaffrey adds.

But to deliver that insight, Climate needs data—and lots of it. That means using remote sensing and other techniques to map every field in the United States and then combining that information with climate data, soil observations, and weather data. Climate’s analysts can then produce a massive data store that they can query for insights.


Meanwhile, precision tractors stream data into Climate’s digital agriculture platform, which farmers can then access from iPads through easy data flow and visualizations. They gain insights that help them optimize their seeding rates, soil health, and fertility applications. The overall goal is to increase crop yields, which in turn boosts a farmer’s margins.

Climate is at the forefront of a push toward deriving valuable business insight from Big Data that isn’t just big, but vast. Companies of all types—from agriculture through transportation and financial services to retail—are tapping into massive repositories of data known as data lakes. They hope to discover correlations that they can exploit to expand product offerings, enhance efficiency, drive profitability, and discover new business models they never knew existed.

The internet democratized access to data and information for billions of people around the world. Ironically, however, access to data within businesses has traditionally been limited to a chosen few—until now. Today’s advances in memory, storage, and data tools make it possible for companies both large and small to cost effectively gather and retain a huge amount of data, both structured (such as data in fields in a spreadsheet or database) and unstructured (such as e-mails or social media posts). They can then allow anyone in the business to access this massive data lake and rapidly gather insights.

It’s not that companies couldn’t do this before; they just couldn’t do it cost effectively and without a lengthy development effort by the IT department. With today’s massive data stores, line-of-business executives can generate queries themselves and quickly churn out results—and they are increasingly doing so in real time. Data lakes have democratized both the access to data and its role in business strategy.
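The schema-on-read idea behind a data lake can be shown with a toy sketch: structure is imposed only at query time, so structured and unstructured records can sit in the same store. All the record shapes, field names, and queries below are invented for illustration.

```python
from datetime import date

# A toy "data lake": raw records of mixed shapes land in one store,
# and structure is imposed only when a query runs (schema-on-read).
lake = [
    {"type": "sale", "amount": 120.0, "day": date(2017, 6, 1)},
    {"type": "email", "body": "Please reship order 4417 - arrived damaged."},
    {"type": "sale", "amount": 75.5, "day": date(2017, 6, 2)},
    {"type": "social", "body": "Loving the new checkout flow!"},
]

def total_sales(records):
    """Structured query: sum the 'amount' field of sale records."""
    return sum(r["amount"] for r in records if r.get("type") == "sale")

def mentions(records, keyword):
    """Unstructured query: free-text search over any record with a body."""
    return [r for r in records if keyword.lower() in r.get("body", "").lower()]

print(total_sales(lake))             # 195.5
print(len(mentions(lake, "order")))  # 1
```

The point of the sketch is that neither query required deciding a schema at load time; each imposed just the structure it needed.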

Indeed, data lakes move data from being a tactical tool for implementing a business strategy to being a foundation for developing that strategy through a scientific-style model of experimental thinking, queries, and correlations. In the past, companies’ curiosity was limited by the expense of storing data for the long term. Now companies can keep data for as long as it’s needed. And that means companies can continue to ask important questions as they arise, enabling them to future-proof their strategies.


Prescriptive Farming

Climate’s McCaffrey has many questions to answer on behalf of farmers. Climate provides several types of analytics to farmers including descriptive services, which are metrics about the farm and its operations, and predictive services related to weather and soil fertility. But eventually the company hopes to provide prescriptive services, helping farmers address all the many decisions they make each year to achieve the best outcome at the end of the season. Data lakes will provide the answers that enable Climate to follow through on its strategy.

Behind the scenes at Climate is a deep-science data lake that provides insights, such as predicting the fertility of a plot of land by combining many data sets to create accurate models. These models allow Climate to give farmers customized recommendations based on how their farm is performing.

“Machine learning really starts to work when you have the breadth of data sets from tillage to soil to weather, planting, harvest, and pesticide spray,” McCaffrey says. “The more data sets we can bring in, the better machine learning works.”
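As a purely illustrative sketch of that "breadth of data sets" idea, the snippet below joins hypothetical soil and weather records by a shared field ID and scores fertility with hand-picked weights. Climate's actual models are learned from data; every field ID, variable, and weight here is invented.

```python
# Hypothetical per-field data sets, keyed by a shared field ID.
soil = {"field-1": {"ph": 6.5, "organic_matter": 3.2},
        "field-2": {"ph": 5.1, "organic_matter": 1.8}}
weather = {"field-1": {"rain_mm": 420}, "field-2": {"rain_mm": 310}}

def fertility_score(field_id):
    """Join soil and weather for one field and score it.

    The weights are illustrative stand-ins for a trained model,
    not agronomic constants.
    """
    s, w = soil[field_id], weather[field_id]
    return 0.4 * s["organic_matter"] + 0.2 * (s["ph"] - 5.0) + 0.001 * w["rain_mm"]

scores = {f: round(fertility_score(f), 3) for f in soil}
print(scores)
```

The interesting part is the join, not the arithmetic: each new data set (tillage, planting, pesticide spray) would add another lookup by the same field ID, giving the model more breadth.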

The deep-science infrastructure already has terabytes of data but is poised for significant growth as it handles a flood of measurements from field-based sensors.

“That’s really scaling up now, and that’s what’s also giving us an advantage in our ability to really personalize our advice to farmers at a deeper level because of the information we’re getting from sensor data,” McCaffrey says. “As we roll that out, our scale is going to increase by several magnitudes.”

Also on the horizon is more real-time data analytics. Currently, Climate receives real-time data from its application that streams data from the tractor’s cab, but most of its analytics applications are run nightly or even seasonally.

In August 2016, Climate expanded its platform to third-party developers so other innovators can also contribute data, such as drone-captured data or imagery, to the deep-science lake.

“That helps us in a lot of ways, in that we can get more data to help the grower,” McCaffrey says. “It’s the machine learning that allows us to find the insights in all of the data. Machine learning allows us to take mathematical shortcuts as long as you’ve got enough data and enough breadth of data.”

Predictive Maintenance

Growth is essential for U.S. railroads, which reinvest a significant portion of their revenues in maintenance and improvements to their track systems, locomotives, rail cars, terminals, and technology. With an eye on growing its business while also keeping its costs down, CSX, a transportation company based in Jacksonville, Florida, is adopting a strategy to make its freight trains more reliable.

In the past, CSX maintained its fleet of locomotives through regularly scheduled maintenance activities, which prevent failures in most locomotives as they transport freight from shipper to receiver. To achieve even higher reliability, CSX is tapping into a data lake to power predictive analytics applications that will improve maintenance activities and prevent more failures from occurring.


Beyond improving customer satisfaction and raising revenue, CSX’s new strategy also has major cost implications. Trains are expensive assets, and it’s critical for railroads to drive up utilization, limit unplanned downtime, and prevent catastrophic failures to keep the costs of those assets down.

That’s why CSX is putting all the data related to the performance and maintenance of its locomotives into a massive data store.

“We are then applying predictive analytics—or, more specifically, machine-learning algorithms—on top of that information that we are collecting to look for failure signatures that can be used to predict failures and prescribe maintenance activities,” says Michael Hendrix, technical director for analytics at CSX. “We’re really looking to better manage our fleet and the maintenance activities that go into that so we can run a more efficient network and utilize our assets more effectively.”
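A "failure signature" of the kind Hendrix describes can be sketched, in a very reduced form, as a rule over recent sensor readings. CSX's real signatures are learned from history; the hard-coded thresholds and readings below are invented to show the shape of the idea.

```python
def flag_for_maintenance(temps_c, vibration_g, temp_limit=95.0, vib_limit=1.5):
    """Flag a unit if temperature trends steadily upward past a limit
    while vibration is also high - a toy stand-in for a learned signature."""
    rising = all(a <= b for a, b in zip(temps_c, temps_c[1:]))
    return rising and temps_c[-1] > temp_limit and max(vibration_g) > vib_limit

print(flag_for_maintenance([88, 91, 97], [0.9, 1.2, 1.8]))  # True
print(flag_for_maintenance([90, 89, 91], [0.8, 0.7, 0.9]))  # False
```

In a learned system the rule itself would be replaced by a model fit on the full history the data lake retains, which is exactly why keeping more than 10 days of raw data matters.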

“In the past we would have to buy a special storage device to store large quantities of data, and we’d have to determine cost benefits to see if it was worth it,” says Donna Crutchfield, assistant vice president of information architecture and strategy at CSX. “So we were either letting the data die naturally, or we were only storing the data that was determined to be the most important at the time. But today, with the new technologies like data lakes, we’re able to store and utilize more of this data.”

CSX can now combine many different data types, such as sensor data from across the rail network and other systems that measure movement of its cars, and it can look for correlations across information that wasn’t previously analyzed together.

One of the larger data sets that CSX is capturing comprises the findings of its “wheel health detectors” across the network. These devices capture different signals about the bearings in the wheels, as well as the health of the wheels in terms of impact, sound, and heat.

“That volume of data is pretty significant, and what we would typically do is just look for signals that told us whether the wheel was bad and if we needed to set the car aside for repair. We would only keep the raw data for 10 days because of the volume and then purge everything but the alerts,” Hendrix says.

With its data lake, CSX can keep the wheel data for as long as it likes. “Now we’re starting to capture that data on a daily basis so we can start applying more machine-learning algorithms and predictive models across a larger history,” Hendrix says. “By having the full data set, we can better look for trends and patterns that will tell us if something is going to fail.”


Another key ingredient in CSX’s data set is locomotive oil. By analyzing oil samples, CSX is developing better predictions of locomotive failure. “We’ve been able to determine when a locomotive would fail and predict it far enough in advance so we could send it down for maintenance and prevent it from failing while in use,” Crutchfield says.

“Between the locomotives, the tracks, and the freight cars, we will be looking at various ways to predict those failures and prevent them so we can improve our asset allocation. Then we won’t need as many assets,” she explains. “It’s like an airport. If a plane has a failure and it’s due to connect at another airport, all the passengers have to be reassigned. A failure affects the system like dominoes. It’s a similar case with a railroad. Any failure along the road affects our operations. Fewer failures mean more asset utilization. The more optimized the network is, the better we can service the customer.”

Detecting Fraud Through Correlations

Traditionally, business strategy has been a very conscious practice, presumed to emanate mainly from the minds of experienced executives, daring entrepreneurs, or high-priced consultants. But data lakes take strategy out of that rarefied realm and put it in the environment where just about everything in business seems to be going these days: math—specifically, the correlations that emerge from applying a mathematical algorithm to huge masses of data.

The Financial Industry Regulatory Authority (FINRA), a nonprofit group that regulates broker behavior in the United States, used to rely on the experience of its employees to come up with strategies for combating fraud and insider trading. It still does that, but now FINRA has added a data lake to find patterns that a human might never see.

Overall, FINRA processes over five petabytes of transaction data from multiple sources every day. By switching from traditional database and storage technology to a data lake, FINRA was able to set up a self-service process that allows analysts to query data themselves without involving the IT department; search times dropped from several hours to 90 seconds.

While traditional databases were good at defining relationships with data, such as tracking all the transactions from a particular customer, the new data lake configurations help users identify relationships that they didn’t know existed.
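The sort of previously invisible relationship described above can be sketched with a toy cross-entity query: finding pairs of accounts that repeatedly trade the same symbol within a short window of each other, something no single-account view reveals. Accounts, symbols, and timestamps are invented; real surveillance logic is far more involved.

```python
from collections import defaultdict

# Invented (account, symbol, timestamp) trade events.
trades = [
    ("acct-A", "XYZ", 100), ("acct-B", "XYZ", 103),
    ("acct-A", "QRS", 400), ("acct-B", "QRS", 404),
    ("acct-C", "XYZ", 900),
]

def correlated_pairs(trades, window=5):
    """Count how often two accounts trade the same symbol within `window`."""
    by_symbol = defaultdict(list)
    for acct, sym, t in trades:
        by_symbol[sym].append((t, acct))
    pairs = defaultdict(int)
    for events in by_symbol.values():
        events.sort()
        for (t1, a1), (t2, a2) in zip(events, events[1:]):
            if a1 != a2 and t2 - t1 <= window:
                pairs[tuple(sorted((a1, a2)))] += 1
    return dict(pairs)

print(correlated_pairs(trades))  # {('acct-A', 'acct-B'): 2}
```

Nothing about either account looks suspicious in isolation; the pattern only appears when the lake lets an analyst query across both.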

Leveraging its data lake, FINRA creates an environment for curiosity, empowering its data experts to search for suspicious patterns of fraud, market manipulation, and compliance violations. As a result, FINRA was able to hand out 373 fines totaling US$134.4 million in 2016, a new record for the agency, according to Law360.

Data Lakes Don’t End Complexity for IT

Though data lakes make access to data and analysis easier for the business, they don’t necessarily make the CIO’s life a bed of roses. Implementations can be complex, and companies rarely want to walk away from investments they’ve already made in data analysis technologies, such as data warehouses.

“There have been so many millions of dollars going to data warehousing over the last two decades. The idea that you’re just going to move it all into a data lake isn’t going to happen,” says Mike Ferguson, managing director of Intelligent Business Strategies, a UK analyst firm. “It’s just not compelling enough of a business case.” But Ferguson does see data lake efficiencies freeing up the capacity of data warehouses to enable more query, reporting, and analysis.

Data lakes also don’t free companies from the need to clean up and manage data as part of the process required to gain these useful insights. “The data comes in very raw, and it needs to be treated,” says James Curtis, senior analyst for data platforms and analytics at 451 Research. “It has to be prepped and cleaned and ready.”

Companies must have strong data governance processes, as well. Customers are increasingly concerned about privacy, and rules for data usage and compliance have become stricter in some areas of the globe, such as the European Union.

Companies must create data usage policies, then, that clearly define who can access, distribute, change, delete, or otherwise manipulate all that data. Companies must also make sure that the data they collect comes from a legitimate source.

Many companies are responding by hiring chief data officers (CDOs) to ensure that as more employees gain access to data, they use it effectively and responsibly. Indeed, research company Gartner predicts that 90% of large companies will have a CDO by 2019.

Data lakes can be configured in a variety of ways: centralized or distributed, with storage on premise or in the cloud or both. Some companies have more than one data lake implementation.

“A lot of my clients try their best to go centralized for obvious reasons. It’s much simpler to manage and to gather your data in one place,” says Ferguson. “But they’re often plagued somewhere down the line with much more added complexity and realize that in many cases the data lake has to be distributed to manage data across multiple data stores.”

Meanwhile, the massive capacities of data lakes mean that data that once flowed through a manageable spigot is now blasting at companies through a fire hose.

“We’re now dealing with data coming out at extreme velocity or in very large volumes,” Ferguson says. “The idea that people can manually keep pace with the number of data sources that are coming into the enterprise—it’s just not realistic any more. We have to find ways to take complexity away, and that tends to mean that we should automate. The expectation is that the information management software, like an information catalog for example, can help a company accelerate the onboarding of data and automatically classify it, profile it, organize it, and make it easy to find.”
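The automated profiling and classification Ferguson describes can be sketched in miniature: on ingest, infer each column's type and basic statistics. The inference rule below is deliberately naive and purely illustrative of what catalog software automates at scale.

```python
def profile_column(values):
    """Naive ingest-time profiling: try numeric first, fall back to text."""
    non_null = [v for v in values if v is not None]
    try:
        nums = [float(v) for v in non_null]
        return {"type": "numeric", "min": min(nums), "max": max(nums),
                "nulls": len(values) - len(non_null)}
    except ValueError:
        return {"type": "text", "distinct": len(set(non_null)),
                "nulls": len(values) - len(non_null)}

print(profile_column(["3.2", "7.5", None, "1.0"]))
print(profile_column(["NY", "TX", "NY", None]))
```

A real catalog would layer on pattern detection, lineage, and classification (is this a customer ID? a date?), but the principle is the same: metadata is computed automatically as data lands, not curated by hand afterwards.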

Beyond the technical issues, IT and the business must also make important decisions about how data lakes will be managed and who will own the data, among other things (see How to Avoid Drowning in the Lake).


How to Avoid Drowning in the Lake

The benefits of data lakes can be squandered if you don’t manage the implementation and data ownership carefully.

Deploying and managing a massive data store is a big challenge. Here’s how to address some of the most common issues that companies face:

Determine the ROI. Developing a data lake is not a trivial undertaking. You need a good business case, and you need a measurable ROI. Most importantly, you need initial questions that can be answered by the data, which will prove its value.

Find data owners. As devices with sensors proliferate across the organization, the issue of data ownership becomes more important.

Have a plan for data retention. Companies used to have to cull data because it was too expensive to store. Now companies can become data hoarders. How long do you store it? Do you keep it forever?

Manage descriptive data. Software that allows you to tag all the data in one or multiple data lakes and keep it up-to-date is not mature yet. We still need tools to bring the metadata together to support self-service and to automate metadata to speed up the preparation, integration, and analysis of data.

Develop data curation skills. There is a huge skills gap for data repository development. But many people will jump at the chance to learn these new skills if companies are willing to pay for training and certification.

Be agile enough to take advantage of the findings. It used to be that you put in a request to the IT department for data and had to wait six months for an answer. Now, you get the answer immediately. Companies must be agile to take advantage of the insights.

Secure the data. Besides the perennial issues of hacking and breaches, much data lake software is open source and less secure than typical enterprise-class software.

Measure the quality of data. Different users can work with varying levels of quality in their data. For example, data scientists working with a huge number of data points might not need completely accurate data, because they can use machine learning to cluster data or discard outlying data as needed. However, a financial analyst might need the data to be completely correct.

Avoid creating new silos. Data lakes should work with existing data architectures, such as data warehouses and data marts.
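The point above about measuring data quality for different users can be illustrated with a small sketch: an exploratory pipeline tolerates noise by discarding outliers, while a strict financial pipeline rejects a batch outright when anything fails validation. The thresholds and sample values are made up.

```python
# Two quality regimes over the same stream of readings. The robust,
# median-based cutoff and the strict validation rule are hypothetical.
from statistics import median

def exploratory_clean(values, k=5.0):
    """Tolerant path: keep values within k median absolute deviations
    of the median, silently dropping outliers for analysis."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return list(values)
    return [v for v in values if abs(v - med) <= k * mad]

def financial_validate(values):
    """Strict path: every value must be a non-negative number,
    otherwise the whole batch is rejected."""
    bad = [v for v in values if not isinstance(v, (int, float)) or v < 0]
    if bad:
        raise ValueError(f"rejected batch, invalid values: {bad}")
    return list(values)

readings = [10.1, 9.8, 10.3, 500.0, 9.9, 10.0, 10.2, 9.7]
print(exploratory_clean(readings))   # the 500.0 outlier is dropped
print(financial_validate([10.1, 9.8]))
```

A data scientist clustering sensor data might happily use the first function; a financial analyst closing the books needs the second.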

From Data Queries to New Business Models

The ability of data lakes to uncover previously hidden data correlations can massively impact any part of the business. For example, in the past, a large soft drink maker used to stock its vending machines based on local bottlers’ and delivery people’s experience and gut instincts. Today, using vast amounts of data collected from sensors in the vending machines, the company can essentially treat each machine like a retail store, optimizing the drink selection by time of day, location, and other factors. Doing this kind of predictive analysis was possible before data lakes came along, but it wasn’t practical or economical at the individual machine level because the amount of data required for accurate predictions was simply too large.

The next step is for companies to use the insights gathered from their massive data stores not just to become more efficient and profitable in their existing lines of business but also to actually change their business models.

For example, product companies could shield themselves from the harsh light of comparison shopping by offering the use of their products as a service, with sensors on those products sending the company a constant stream of data about when they need to be repaired or replaced. Customers are spared the hassle of dealing with worn-out products, and companies are protected from competition as long as customers receive the features, price, and the level of service they expect. Further, companies can continuously gather and analyze data about customers’ usage patterns and equipment performance to find ways to lower costs and develop new services.

Data for All

Given the tremendous amount of hype that has surrounded Big Data for years now, it’s tempting to dismiss data lakes as a small step forward in an already familiar technology realm. But it’s not the technology that matters as much as what it enables organizations to do. By making data available to anyone who needs it, for as long as they need it, data lakes are a powerful lever for innovation and disruption across industries.

“Companies that do not actively invest in data lakes will truly be left behind,” says Anita Raj, principal growth hacker at DataRPM, which sells predictive maintenance applications to manufacturers that want to take advantage of these massive data stores. “So it’s just the option of disrupt or be disrupted.” D!

Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Timo Elliott is Vice President, Global Innovation Evangelist, at SAP.

John Schitka is Senior Director, Solution Marketing, Big Data Analytics, at SAP.

Michael Eacrett is Vice President, Product Management, Big Data, Enterprise Information Management, and SAP Vora, at SAP.

Carolyn Marsan is a freelance writer who focuses on business and technology topics.


Digitalist Magazine

How Google, Amazon, and Facebook would look if they had started in the age of AI

In the past few years, artificial intelligence has come into its own, and lots of companies are grafting it onto their core businesses, marrying AI with search, ecommerce, social networking, cybersecurity — you name it. But what if those businesses had started out in an age of AI and had integrated it into their products from the very beginning?

Peter Relan addressed this speculative question for us at our MobileBeat 2017 conference this week. Relan is a well-known entrepreneur who started the YouWeb incubator, which spawned startups such as mobile gaming companies OpenFeint and CrowdStar. Now he’s CEO of Got It and an investor in the popular gaming chat app Discord.

Relan’s Got It is a new kind of search engine, and it uses AI to locate human experts who can answer your questions in a personalized way. He thinks this will yield better results, and it’s an example of the kind of business that is better because it was born in the AI boom.

Here’s an edited transcript of our interview.


Above: Peter Relan speaks with VentureBeat’s Dean Takahashi at MobileBeat 2017.

Image Credit: Michael O’Donnell/VentureBeat

VentureBeat: What if Google, Amazon, and Facebook had started with AI algorithms a long time ago, before they got hip to this subject more recently? Peter, why don’t you talk about that for us?

Peter Relan: The last speaker mentioned briefly that AI had gone through a popular phase in the ‘80s. I was in college at the time. The AI hype that started in the 80s continued into the ‘90s a bit, but by the 2000s we were much more focused on web 2.0, social commerce, and so on. You had this slew of companies, major tech giants, starting in that era without AI in their core spectrum.

VB: They were born in the AI bust.

Relan: Exactly, if we can imagine such a thing. I found it very curious, looking back over the last 35 years. Today, if you’re a startup like the ones in my portfolio, involving AI wouldn’t even be a question in your strategy. You would start with AI as a key part of what you do. So I chose a few companies to look at very closely and consider what they would be like if they’d started with AI as something core to what they do.

I can start by talking about Facebook. A lot of people here know this, but it’s a very broad, white-page company. Facebook’s content is generated by its community of users. If Facebook had started with AI, the number one problem it would solve is, how does content surface? We all remember this from the gaming craze in 2007 and 2008, when Facebook opened up its API. The number one complaint users had was spam. We all got FarmVille requests filling up our news feeds.


Above: Facebook cofounder and chief executive Mark Zuckerberg appears at the company’s F8 developer conference in San Jose on April 18, 2017.

Image Credit: Screenshot

VB: Too many sheep flying back and forth.

Relan: You had all this irrelevant content, and the core strategy of Facebook in the early days was simply to let the community sort it out, until it reached a breaking point where there was so much spam from FarmVille on Facebook — I remember meeting Mark Zuckerberg in 2010, and at this point he literally said, “I hate this.” It had completely destroyed the network. It’s interesting, because games are actually one of the most important applications on any new platform — consider the iPhone — but there’s obviously a risk of letting that get out of control.

Fast forward to today, what’s the bad content today? The bad content 10 years ago was game spam. Today it’s fake news. It’s on 10 times the scale that existed 10 years ago. FarmVille had 200 million users. Facebook has 2 billion users now. So how do you stop fake news — which we all agree is bad content — from maybe even throwing an election? We have a huge problem there.

You would think that Facebook’s natural instincts would be to stop fake news with — well, what was I just saying? Let the community handle it. But in 2016 they acknowledged that they had built AI machinery into the system. They’re using that to identify fake news in combination with the community. Which is a huge admission and a huge point for Facebook, which so deeply believes that the user community will take care of bad content.


Above: Got It uses AI to find you human experts.

Image Credit: Got It

VB: Do you feel like they’re turning a battleship here, to focus on AI and try to clean out garbage content from the network?

Relan: Facebook is more like an aircraft carrier when it tries to turn. We saw that when they went into mobile. I think it’s more that it’s inevitable. Even if you’re 100 percent in control of your company, you can’t avoid the fact that here is this technology, and you would literally have to say, “I do not want to use it.” The community isn’t going away.

We have a similar problem at our company called Discord. It’s a voice chat community for gamers. It’s, what, 50 million users? That’s one-fortieth the size of Facebook. It’s become a hub for people who create small communities to chat about the games they’re playing. But we’re finding that groups are certainly coming up that aren’t about games. They’re about all kinds of other topics. So we use image recognition to, for example, enforce our policies about porn on these channels. Image recognition is great for knocking that out. Even in a community platform, then, you would use AI if you were a startup today to stop bad content.

VB: Do you still eventually have to get to human curation, like in our earlier talk about eBay?

Relan: I think it’s sort of like exception panels. You ultimately have to allow the AI engines to do their job. And then you have a sort of exception flow where the AI says, “I don’t know exactly what to make of this.” Then you have a small operation, like we do at Discord, that deals with exceptions and edge cases and so on. But imagine the scale issues if you went the other way.

You also need tools for your users. Anybody in a group can flag a chat and say, “I found this unacceptable,” or “I found this objectionable.” That will also do it. Whether the community speaks first or whether the AI speaks first, you have to have them working together.

VB: So there’s a human in the loop, still. That takes us to your new startup, Got It. Tell us more about Got It and the human aspect of it.

Relan: Got It is a new company that’s saying, “If Amazon can give you a new virtual machine server as a service, why not allow knowledge to be a service?” If you want to know something, you can go through Google and search for it. You can browse forums and communities. But neither of those is really a service, because a service has to follow four key criteria.

One, it has to have a defined unit. With Google, you don’t know how many links you’re getting, whether it’s four or 45. Two, it must have a set price. Three, it must be on demand. And four, it must be guaranteed. If you look at Google or Quora, you find that neither meets all four criteria. Communities and forums and Q&A sites, you don’t know if somebody will ever answer. There’s no guarantee, and it’s certainly not on demand.

Got It is creating a version of that just like Amazon, which sets up a machine for you on demand, and the machine is well-understood. The pricing is well-understood. This is the spec, the CPU, and this is the price you pay for it. We have this notion of a 10-minute chat session, on demand, with an expert for your question. You have a question, you ask it, and the expert shows up.


Above: Peter Relan’s Got It uses AI to find the best human experts to answer your search question.

Image Credit: Michael O’Donnell/VentureBeat

VB: And this is a human expert.

Relan: Exactly. You have your 10-minute chat session, you go back and forth, and when you’re done, you evaluate your question. It could be something pretty technical. I’m working on a pivot table in Excel and it’s not pivoting right. Let’s work on this together for 10 minutes. We do believe that humans need to be in the loop. But the interesting thing is, the way you find the expert is using AI. I don’t think you can substitute for the sharing experience we get when two human beings connect and work on something together, when they explain something to each other. That’s irreplaceable, frankly. I don’t think any sort of content interaction replaces a person-to-person contact.

The cool thing is that finding the expert, which is not actually human-interaction-dependent, can be done using AI. We use the same algorithms that Google uses, which is PageRank. Google has a new system now called RankBrain, which is the first time they’ve acknowledged the use of AI in addition to content as a way of finding the best pages for you. We use what we call ExpertRank, which is an AI that asks, “For this problem, of all the millions or billions of people out there, who is the best person, the expert?” As long as experts are registered, they get a notification that tells them, “Somebody would like help with this problem.”

We all know that for any question we have, among the seven billion people in the world, there is someone who is perfectly matched for our question. We know that, intuitively. The notion of combining humans with AI, whether it’s at Facebook or — at Google it’s actually very interesting, because the search engine runs completely on servers, and the AI engine they’ve added to the search system is also completely running on servers. It’s truly a server-based system. But only 15 percent of Google’s queries are answered with the help of AI. The other 85 percent are still using the traditional PageRank.

When you don’t have humans in the loop, I think AI-assisted engines are extremely welcome. AI-only engines will get there in time, like the true self-driving car. But if you look at Tesla and Google’s Waymo, the two strategies are different. Tesla’s strategy is AI-assisted. In other words, it won’t drive without you there. The human has to be in the loop. I would argue that Tesla is collecting more data than Google and Uber are today, because they got ahead with the AI assistant, as opposed to pure self-driving.


Above: Amazon Prime

Image Credit: Paul Sawers / VentureBeat

VB: So there’s more efficiency in the answering if you have this combination.

Relan: You have more data. People have said here before that AI is all about data. The more data, the better your AI. If it’s bad content, the more your community generates bad content, in a strange way the more your users will pick up on what is bad content. In a strange way, the more fake news, the easier it is to fight the fake news, because you can train your AI to recognize it. If fake news were like trying to find a needle in a haystack, it would actually be harder to train the AI.

It’s the same with Google searches. The more searches, the better Hummingbird is going to get at understanding the query. Hummingbird is the RankBrain algorithm that actually understands the query. The more queries you can train it on and understand those results, the better.

VB: The question I think of here is how you scale the human part on your end. I may be the top expert on a given subject, but I’m not going to answer a query at three in the morning.

Relan: The vision of Got It is very simple. Today we have a very large social network with 2 billion people. I would argue that most of the communication that goes on in that social network is, well, social. It doesn’t emerge from the perspective of, “Hey, I have a problem and I need some knowledge to solve it.”

So imagine a world where we have 7 billion people. Just take a guess. How many cars are out there in the world? One billion? How many homes are there in the world? One billion? Then we can have on-demand services connecting those supplies of homes and cars to the demand for homes and cars. The companies doing that are doing very well, Airbnb and Uber. So how many human brains are out there in the world? How many companies or systems in the world need to connect to the right brain in a knowledge network to solve a specific problem?

There’s somebody out there who very specifically can give you the knowledge you need. But there isn’t yet a system for that. So the idea is, build a knowledge network as big as Facebook, but it’s not social now. The interesting thing about Got It is, if you own a home, there’s a mortgage cost. If you own a car, there’s a lease cost. But 10 minutes, in that 10-minute chat session—the knowledge you carry in your brain, the things that you know about, it costs you nothing to carry it in your brain, as far as I know. We have the world’s most underutilized resource, and it’s free. All that doesn’t exist is the knowledge network the size of Facebook to connect it.

The vision, then, is to get everybody to be an expert at something or another on the system, and build an AI engine that finds the right person for a problem. We’ve delivered more than 3 million sessions now. We have 12,500 ranked experts in the network. Two hundred more join every day. We have more than a quarter million applied. We have people like software engineers taking questions about Excel. They’re having lunch, this thing pops up, and it looks interesting.

The marginal cost of this inventory of brains is zero. All we need is the AI, because the humans exist. We have no shortage of humans. We don’t want to replace them. We want to find them.


Above: Peter Relan said that the big search, social network, and e-commerce companies are late in grafting AI to their businesses.

Image Credit: Michael O’Donnell/VentureBeat

VB: Where are you with this now? What’s your road map for where you need to go?

Relan: Today, as I say, we’ve delivered about 3 million sessions. Now we have data. One of the most interesting things is, we have 3 million chat sessions in our database between a client and an expert over some problem or another. Now we get to the point of, say, “How can we mine that data for our machine learning algorithms? How do we look at that data from the point of view of the expert?”

Our AI algorithm looks at every single session and adjusts the expert’s rank based on six factors. The first factor is politeness. We have processing that says, “Did the expert talk to the user politely?” This is a utility. The user pays for this. Second is empathy. Does the user feel — do they say something like, “Yes, I feel like you understand my problem”? Those are signals, in the chat session, that the user is feeling empathy from the expert. Third, of course, is accuracy. Did they answer the question? Did their Excel pivot table end up working? And fourth is personal information. Did they try to exchange personal information?

If you look at a 10-minute chat session and the richness of the human conversational content, it’s very large. You have politeness, empathy, accuracy, customer service at the end. Hey, are we done? Are you satisfied? All of those go into it, and as a result the expert’s rank will adjust. Right away, at the end of the session, they’ll be told, “Hey, that was a great session. Here’s your new rank.”
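A toy sketch of the rank adjustment Relan describes: each session is scored on the factors he names (politeness, empathy, accuracy, and whether personal information was exchanged), and the expert's rank is nudged toward that score. The weights, scales, and update rule here are entirely hypothetical, not Got It's actual algorithm.

```python
# Hypothetical per-session rank update. Factor names follow the
# interview; the weights and learning rate are invented for illustration.
WEIGHTS = {
    "politeness": 0.2,
    "empathy": 0.2,
    "accuracy": 0.5,
    "personal_info_exchanged": -0.5,  # exchanging contact info is penalized
}

def session_score(signals):
    """Combine per-session signals (each in 0.0-1.0) into one score."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

def update_rank(current_rank, signals, learning_rate=0.1):
    """Nudge the expert's rank toward the latest session's score."""
    return (1 - learning_rate) * current_rank + learning_rate * session_score(signals)

good = {"politeness": 1.0, "empathy": 0.9, "accuracy": 1.0,
        "personal_info_exchanged": 0.0}
rank = update_rank(0.5, good)
print(round(rank, 3))  # → 0.538
```

An exponentially weighted update like this lets one great (or terrible) session move the rank immediately while still reflecting the expert's history.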

Our road map is very clear. We will never replace human beings, but we will always be looking to AI for content, to weed out bad content, to provide good service, to promote empathy and understanding. When you start to describe these attributes in a chat session, it sounds pretty human, doesn’t it? We’re asking our AI engine to provide no bad content, the same as Facebook. We want it to find a relevant expert, the right person, same as Google — like relevant search results. And then this last thing is the human quality of the session. Which is delivered by a human, but we look at it and say, “Was it delivered in a way that made it a great one-on-one chat session?”


Above: Google Brain

VB: This is a service that can still get better over time, then.

Relan: It will only get better over time, because the data keeps improving. We’ll find more bad content. We’ll get more relevant experts for people’s problems. We’ll obviously have better results within the session, as long as we keep adding the data to train our AI.

Amazon, by the way, is very interesting. This is my challenge to Amazon, because it keeps building a black box. We’re all developers here, or many of us. We look for a service on demand. Say we need more compute power. We now know there’s an enormous variety of compute power you could get out there. So how do you know whether you’re getting the best resources for your problem? If it’s a large data analytics problem it’s probably a different set of resources than if you’re a transaction processing app or a machine learning app.

What is the relevance of an on-demand service? I think that’s an important aspect of Amazon’s future, even though most people are obsessing about Alexa and Echo. As a platform, I would want to make sure that when a request comes in on demand, I find the best resource for it. The sheer number of AI applications right now is exploding. How do you find the best resource for your particular application? Maybe they do and maybe they don’t, but we don’t really know. We don’t have the ability to manage that response.


Big Data – VentureBeat

Alphabet’s autonomous driving unit, Waymo, ordered to give Uber an inside look at its Lyft alliance


Waymo, the Google self-driving project that spun out to become a business under parent company Alphabet, has been ordered by a U.S. district judge to provide Uber with documents related to an alliance with rival Lyft.

The ruling, along with other court documents, was filed on Friday. Lyft declined to comment.

Waymo is suing Uber and Otto, the self-driving truck startup it acquired last year, for alleged patent infringement and stealing trade secrets. Uber has argued that the lawsuit filed in February is a strategy meant to delay the deployment of its self-driving vehicle technology.

This latest ruling isn’t a total win for Uber. The ride-hailing company wanted to be able to compel its competitor Lyft to share documents related to a deal with Waymo to bring self-driving car technology into the mainstream through pilot projects and product development efforts. The New York Times reported on the Waymo-Lyft alliance in May.

A judge granted Lyft’s motion for a protective order and to quash the subpoenas. This means Lyft won’t have to provide any sworn testimony or share its internal documents with rival Uber. However, a judge has ordered Waymo to turn over its documents related to the deal. Waymo has until July 13 to comply with the order.

Uber requested all communications with Waymo about past, current, or potential competition with Uber, documents analyzing Lyft as a potential acquisition target, and any “term sheet” related to the deal between the two companies. Uber also wanted documents that would identify the first date that Lyft began discussion of any potential merger or agreement with Waymo.

In a separate ruling filed Friday, the court said Uber will also be allowed to depose Alphabet CEO Larry Page as well as David Drummond, the company’s chief legal officer and senior vice president of corporate development.

This story originally appeared on Fortune.com. Copyright 2017


Big Data – VentureBeat

New Look Blog – Site-Under-Construction

Welcome to our new look blog. We are currently in the process of moving all of our blog posts from the old blogging platform to our completely new Oracle blogging platform. As the title of this post suggests, we are having some teething issues. What this all means is that we are currently in “site-under-construction” mode.



This new platform offers significant improvements over the old blogging software: 1) posts will display correctly on any size of screen, so you can now read our blogs on your smartphone as well as in your desktop browser; 2) simpler interaction with social media pages means it’s now much less painful for us to push content to you via all the usual social media channels; and 3) improved RSS feed capabilities.

While we get used to this new software, which has a lot of great features for us as writers, please cut us some slack over any layout, content, and formatting issues. We are still working through the migration process, which means that many of our existing posts look quite ugly at the moment. We are working hard to go through our old posts and resolve those formatting issues as soon as possible.

Enjoy our new blogging platform and please let us know what you think about the new-look blog. All feedback gratefully received.


Oracle Blogs | Oracle The Data Warehouse Insider Blog

Serving More Services: 5 Things to Look for at SuiteWorld 2017


Posted by Jack Bryant, Services Industry Marketing Lead

Companies that sell time or any combination of services and products are constantly confronted with questions around their business model and how they self-identify. How these companies market themselves and how they craft their offering may be in flux, but there is one certainty. No matter which direction you commit to or pivot from, operational efficiency is absolutely an imperative. Plans will never move forward if there is not a literal and figurative platform to manage operations.

As many know, NetSuite allows services companies to streamline operations across all functional areas, improving visibility and ensuring compliance on a global scale. At SuiteWorld this year, we are thrilled to “serve services” and showcase our commitment to the space. Among countless things to look forward to, here are five areas of particular interest:

New Specialties, New Keynotes

It’s no secret that there are many subsets of business types in our services vertical at NetSuite, but we are becoming more and more vertically specialized every year. In fact, our Advertising, Media and Publishing (AMP) customer base and our financial services contingent have outgrown the services keynote and paved their way into two new ones. That’s right, in addition to the services keynote, 2017 will be the inaugural year for keynote addresses in AMP and Financial Services.

NetSuite, The Global Professional Services Provider

By now, many people are familiar with who NetSuite is as the global leader of cloud ERP. However, few people see or realize the breadth of our company as a professional services provider. NetSuite’s services organization comprises more than 1,500 consultants, who delivered more than 1,000,000 hours of services in 2016. This year, Heather Miller, VP of Professional Services, will present on how NetSuite as an organization uses NetSuite as a product. The Services Keynote will take place on Wednesday, April 26, at 1:30 p.m.

Open Road for OpenAir 

We are also pleased to announce a dedicated track for OpenAir this year. In addition to the Project Manager/Consultant breakout sessions for service-oriented attendees, we will have 11 product-focused sessions on OpenAir. Outside of these two tracks, we will be hosting more than 150 other sessions over three days covering the different functional areas of NetSuite users, such as development, finance, operations, and more.

Play Bigger

During the Services keynote, we look forward to hosting Chris Lochhead, co-founder of Play Bigger, a management consulting firm that has pioneered a new marketing discipline around positioning and “category design.” We look forward to Chris breaking down the aforementioned marketing dilemma that many of us in the services industry face.

Humanizing People Management

Managing people as both resources and as employees will be a focus during SuiteWorld 2017. Among other customers that we will feature on stage during the Services Keynote, we will host Mark Baldwin, SVP of Administration and General Counsel at DSI Global. He will discuss DSI’s experience as a NetSuite customer from the point of view of an embedded services organization running SRP. He also has much to share about how DSI manages its human resources department.

Posted on Mon, April 10, 2017, by NetSuite


The NetSuite Blog

An Inside Look at Microsoft Dynamics 365 for Sales

With the vast amount of content on the web today (like this blog), buyers are more educated and further through the buying process than ever before.

How do businesses, and sales teams in particular, maximize efficiency and effectiveness? Even though salespeople are under tremendous pressure to win deals faster, sales reps spend more than 67% of their time on non-selling activities. Overall, workers lose 40% of their productivity when they switch tasks.

So, how does the new Microsoft Dynamics 365 create efficiencies and let sales focus on their customers?

Here are three key benefits:

Opportunity Management:
Make it easy for people in your organization to get the information needed to deliver great customer experiences.

Mobile Productivity:
Empower your sales team to do their best work from virtually anywhere, on any device.

Business Insight:
Get visibility into your organization to make informed decisions and grow your business.

Ledgeview Partners created an on-demand video that walks you through a “day in the life” scenario showing how sales professionals can get the most value from Microsoft Dynamics 365.

So whether you are exploring Microsoft Dynamics 365 (it’s okay to still call it CRM) or are a current user who isn’t yet realizing its full benefits, see how Microsoft Dynamics 365 can help sales professionals zero in, win deals faster, and provide amazing customer experiences.

>>Access Microsoft Dynamics 365 for Sales Video On-Demand


CRM Software Blog | Dynamics 365

NRF 2017 Keynote Crib Sheet: Look For The Future In Your Failures

We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to artificial intelligence (AI), we expect it to do the same, only better.

Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. AI, on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.


In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities—and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The mere fact of having to defend against an accusation of bias can linger long after the issue itself is settled.

Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.

That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and indicate that the outcomes they consider best—ethically, legally, and, of course, financially—are those that are free from bias, conscious or otherwise. That’s the only way AI can help us create systems that are fair, more productive, and ultimately better for both business and the broader society.

Bias: Bad for Business

When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can use these algorithms to apply sophisticated mathematical modeling techniques to see if the results can predict future outcomes, such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.

Using AI for automated decision making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable, or finding the most qualified candidates for jobs by helping HR look beyond the expected demographics.

As AI takes on these increasingly complex decisions, it can help reduce bias, conscious or otherwise. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions; they reveal imbalances; and they alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.

Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.

AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.

That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. An AI could apply corrective pressure by reminding the hiring manager that all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.


At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them—and other people in HR—that the company still has some remaining hidden biases against female candidates to address.

Look for Where Bias Already Exists

In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.

There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now co-founder of People Pattern, a startup for predictive demographics using social media analytics. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.
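The kind of check Baldridge describes can be sketched as a simple proxy-variable screen: measure how strongly each model input correlates with a protected attribute and flag anything above a threshold for human review. Everything below, including the data, the field names, and the 0.5 cutoff, is an illustrative assumption, not a regulator-approved method.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def flag_proxy_features(features, protected, threshold=0.5):
    """Flag inputs that correlate strongly with a protected attribute (0/1)."""
    flagged = {}
    for name, values in features.items():
        r = pearson(values, protected)
        if abs(r) > threshold:
            flagged[name] = round(r, 3)
    return flagged

protected = [1, 0, 1, 0, 1, 0, 1, 0]  # e.g. a protected attribute, encoded 0/1
features = {
    "zip_income_index": [9, 2, 8, 3, 9, 1, 7, 2],  # tracks the attribute closely
    "account_age_yrs":  [5, 5, 6, 6, 4, 4, 7, 7],  # roughly independent of it
}
print(flag_proxy_features(features, protected))
```

A real bank's review would go much further (conditional dependence, interaction effects, outside audit), but this captures the first-pass idea: surface candidate proxies before a model ever trains on them.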

Code Is Only Human

The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.

“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says Cathy O’Neil, whose best-selling book Weapons of Math Destruction was long-listed for the 2016 National Book Award. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.” (To learn more from O’Neil about transparency in algorithms, read Thinkers in this issue.)


Don’t Do What You’ve Always Done

To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias, or, rather, that the algorithm can recognize bias in that data and bring the bias to a human’s attention.

SAP has been working on an initiative that tackles this issue directly by spotting and categorizing gendered terminology in old job postings. Nothing as overt as “No women need apply,” which everyone knows is discriminatory, but phrases like “outspoken” and “aggressively pursuing opportunities,” which are proven to attract male job applicants and repel female applicants, and words like “caring” and “flexible,” which do the opposite.

Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
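The flag-and-suggest step described above can be sketched in a few lines: once humans have categorized biased terms, a tool scans a posting and proposes neutral alternatives. The word list and replacements here are illustrative assumptions, not SAP's actual lexicon.

```python
import re

# Hypothetical lexicon: terms humans have categorized as gender-coded,
# each paired with a more neutral suggestion. Entries are illustrative.
GENDERED_TERMS = {
    "outspoken": "communicative",
    "aggressively": "proactively",
    "caring": "supportive",
    "flexible": "adaptable",
}

def flag_gendered_language(posting):
    """Return (term, suggestion) pairs for flagged words found in the posting."""
    found = []
    for term, suggestion in GENDERED_TERMS.items():
        if re.search(rf"\b{term}\b", posting, re.IGNORECASE):
            found.append((term, suggestion))
    return found

posting = "We want an outspoken engineer, aggressively pursuing opportunities."
print(flag_gendered_language(posting))
```

In a production system the lexicon would be learned and continually updated from labeled examples rather than hard-coded, which is exactly the human-intervention bottleneck the paragraph above notes.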

Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.

Look Beyond the Obvious

AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a black female candidate living in Harlem simply because there are fewer black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.
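The "bump her up" behavior in the example above comes down to how the scoring function is weighted: a hard qualification the model is trained to prioritize should dominate mere similarity to past hires. This is a hypothetical sketch; the weights and candidate fields are assumptions for illustration.

```python
def score_candidate(candidate, phd_weight=10.0, similarity_weight=1.0):
    """Score a candidate; the prioritized criterion outweighs resemblance
    to the (possibly biased) historical hiring pool."""
    score = similarity_weight * candidate["similarity_to_past_hires"]
    if candidate["has_math_phd"]:
        score += phd_weight
    return score

candidates = [
    {"name": "A", "similarity_to_past_hires": 0.9, "has_math_phd": False},
    {"name": "B", "similarity_to_past_hires": 0.2, "has_math_phd": True},
]
ranked = sorted(candidates, key=score_candidate, reverse=True)
print([c["name"] for c in ranked])
```

Candidate B ranks first despite looking least like past hires, because the criterion the organization actually cares about carries the weight.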


To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work/life balance.

Many companies find it all too easy to conclude that women simply aren’t qualified for middle management. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.

That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to do X, so consider doing Y,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.

Context Matters—and Context Changes

Even though AI learns—and maybe because it learns—it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.


Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word “sick” to someone having health problems, but it’s also a popular slang term for something good or impressive, which could lead to an awkward experience if someone confuses the two meanings, to say the least. Correcting for that, or adding more rules to the algorithm, such as “the word ‘sick’ appears in proximity to positive emoji,” takes human oversight.
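A hand-written rule of the kind Baldridge mentions can be sketched as a proximity check: treat "sick" as positive slang when it appears near a positive emoji rather than as a health complaint. The emoji set and window size here are assumptions for illustration.

```python
# Illustrative disambiguation rule; a real system would combine many such
# signals (and eventually learn them from labeled data).
POSITIVE_EMOJI = {"😍", "🔥", "👍", "🎉"}

def sick_is_slang(tokens, window=3):
    """True if 'sick' occurs within `window` tokens of a positive emoji."""
    for i, tok in enumerate(tokens):
        if tok.lower() == "sick":
            nearby = tokens[max(0, i - window): i + window + 1]
            if any(t in POSITIVE_EMOJI for t in nearby):
                return True
    return False

print(sick_is_slang("that show was sick 🔥".split()))    # slang sense
print(sick_is_slang("I stayed home sick today".split()))  # health sense
```

The point of the paragraph stands in miniature: someone had to write (and maintain) this rule, which is exactly the human oversight the algorithm cannot supply for itself.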

Moving Forward with AI

Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are—or soon will be—obvious.

In IDC FutureScapes’ webcast “Worldwide Big Data, Business Analytics, and Cognitive Software 2017 Predictions,” research director David Schubmehl predicted that by 2020 perceived bias and lack of evidentiary transparency in cognitive/AI solutions will create an activist backlash movement, with up to 10% of users backing away from the technology. However, Schubmehl also speculated that consumer and enterprise users of machine learning will be far more likely to trust AI’s recommendations and decisions if they understand how those recommendations and decisions are made. That means knowing what goes into the algorithms, how they arrive at their conclusions, and whether they deliver desired outcomes that are also legally and ethically fair.


Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They also may wish to support academic efforts to research AI and bias, such as the annual Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop, which was held for the third time in November 2016.

O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.

As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good—so that their companies can use AI to do well. D!

Read more thought provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


About the Authors:

Yvonne Baur is Head of Predictive Analytics for SAP SuccessFactors solutions.

Brenda Reid is Vice President of Product Management for SAP SuccessFactors solutions.

Steve Hunt is Senior Vice President of Human Capital Management Research for SAP SuccessFactors solutions.

Fawn Fitter is a freelance writer specializing in business and technology.



Digitalist Magazine