
The Best Healthcare System In The World Is About To Change

Dan McCaffrey has an ambitious goal: solving the world’s looming food shortage.

As vice president of data and analytics at The Climate Corporation (Climate), which is a subsidiary of Monsanto, McCaffrey leads a team of data scientists and engineers who are building an information platform that collects massive amounts of agricultural data and applies machine-learning techniques to discover new patterns. These analyses are then used to help farmers optimize their planting.

“By 2050, the world is going to have too many people at the current rate of growth. And with shrinking amounts of farmland, we must find more efficient ways to feed them. So science is needed to help solve these things,” McCaffrey explains. “That’s what excites me.”

“The deeper we can go into providing recommendations on farming practices, the more value we can offer the farmer,” McCaffrey adds.

But to deliver that insight, Climate needs data—and lots of it. That means using remote sensing and other techniques to map every field in the United States and then combining that information with climate data, soil observations, and weather data. Climate’s analysts can then produce a massive data store that they can query for insights.


Meanwhile, precision tractors stream data into Climate’s digital agriculture platform, which farmers can then access from iPads through easy data flow and visualizations. They gain insights that help them optimize their seeding rates, soil health, and fertility applications. The overall goal is to increase crop yields, which in turn boosts a farmer’s margins.

Climate is at the forefront of a push toward deriving valuable business insight from Big Data that isn’t just big, but vast. Companies of all types—from agriculture through transportation and financial services to retail—are tapping into massive repositories of data known as data lakes. They hope to discover correlations that they can exploit to expand product offerings, enhance efficiency, drive profitability, and discover new business models they never knew existed.

The internet democratized access to data and information for billions of people around the world. Ironically, however, access to data within businesses has traditionally been limited to a chosen few—until now. Today’s advances in memory, storage, and data tools make it possible for companies both large and small to cost effectively gather and retain a huge amount of data, both structured (such as data in fields in a spreadsheet or database) and unstructured (such as e-mails or social media posts). They can then allow anyone in the business to access this massive data lake and rapidly gather insights.

It’s not that companies couldn’t do this before; they just couldn’t do it cost effectively and without a lengthy development effort by the IT department. With today’s massive data stores, line-of-business executives can generate queries themselves and quickly churn out results—and they are increasingly doing so in real time. Data lakes have democratized both the access to data and its role in business strategy.

Indeed, data lakes move data from being a tactical tool for implementing a business strategy to being a foundation for developing that strategy through a scientific-style model of experimental thinking, queries, and correlations. In the past, companies’ curiosity was limited by the expense of storing data for the long term. Now companies can keep data for as long as it’s needed. And that means companies can continue to ask important questions as they arise, enabling them to future-proof their strategies.


Prescriptive Farming

Climate’s McCaffrey has many questions to answer on behalf of farmers. Climate provides several types of analytics to farmers, including descriptive services, which are metrics about the farm and its operations, and predictive services related to weather and soil fertility. But eventually the company hopes to provide prescriptive services, helping farmers address the many decisions they make each year to achieve the best outcome at the end of the season. Data lakes will provide the answers that enable Climate to follow through on its strategy.

Behind the scenes at Climate is a deep-science data lake that provides insights, such as predicting the fertility of a plot of land by combining many data sets to create accurate models. These models allow Climate to give farmers customized recommendations based on how their farm is performing.

“Machine learning really starts to work when you have the breadth of data sets from tillage to soil to weather, planting, harvest, and pesticide spray,” McCaffrey says. “The more data sets we can bring in, the better machine learning works.”
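Mechanically, the “breadth of data sets” idea comes down to joining many per-field records into a single feature row before any model sees them. Here is a minimal sketch of that join; the field names and values are invented for illustration and are not Climate’s actual schema:

```python
# Invented example data: three per-field data sets keyed by a field ID.
soil = {"field_1": {"ph": 6.5, "organic_matter": 3.2},
        "field_2": {"ph": 5.8, "organic_matter": 2.1}}
weather = {"field_1": {"rain_mm": 610}, "field_2": {"rain_mm": 480}}
planting = {"field_1": {"seed_rate": 34000}, "field_2": {"seed_rate": 31000}}

def merge_features(*sources):
    """Join every data set's record for each field into one feature row."""
    fields = set().union(*(s.keys() for s in sources))
    rows = {}
    for field in sorted(fields):
        row = {}
        for source in sources:
            row.update(source.get(field, {}))
        rows[field] = row
    return rows

features = merge_features(soil, weather, planting)
print(features["field_1"]["rain_mm"])  # 610 -- each row now spans all data sets
```

In production this join would run across a data lake holding millions of fields, but the shape of the operation is the same: the wider the row, the more signal a learning algorithm has to work with.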

The deep-science infrastructure already has terabytes of data but is poised for significant growth as it handles a flood of measurements from field-based sensors.

“That’s really scaling up now, and that’s what’s also giving us an advantage in our ability to really personalize our advice to farmers at a deeper level because of the information we’re getting from sensor data,” McCaffrey says. “As we roll that out, our scale is going to increase by several magnitudes.”

Also on the horizon is more real-time data analytics. Currently, Climate receives real-time data from its application that streams data from the tractor’s cab, but most of its analytics applications are run nightly or even seasonally.

In August 2016, Climate expanded its platform to third-party developers so other innovators can also contribute data, such as drone-captured data or imagery, to the deep-science lake.

“That helps us in a lot of ways, in that we can get more data to help the grower,” McCaffrey says. “It’s the machine learning that allows us to find the insights in all of the data. Machine learning allows us to take mathematical shortcuts as long as you’ve got enough data and enough breadth of data.”

Predictive Maintenance

Growth is essential for U.S. railroads, which reinvest a significant portion of their revenues in maintenance and improvements to their track systems, locomotives, rail cars, terminals, and technology. With an eye on growing its business while also keeping its costs down, CSX, a transportation company based in Jacksonville, Florida, is adopting a strategy to make its freight trains more reliable.

In the past, CSX maintained its fleet of locomotives through regularly scheduled maintenance activities, which prevent failures in most locomotives as they transport freight from shipper to receiver. To achieve even higher reliability, CSX is tapping into a data lake to power predictive analytics applications that will improve maintenance activities and prevent more failures from occurring.


Beyond improving customer satisfaction and raising revenue, CSX’s new strategy also has major cost implications. Trains are expensive assets, and it’s critical for railroads to drive up utilization, limit unplanned downtime, and prevent catastrophic failures to keep the costs of those assets down.

That’s why CSX is putting all the data related to the performance and maintenance of its locomotives into a massive data store.

“We are then applying predictive analytics—or, more specifically, machine-learning algorithms—on top of that information that we are collecting to look for failure signatures that can be used to predict failures and prescribe maintenance activities,” says Michael Hendrix, technical director for analytics at CSX. “We’re really looking to better manage our fleet and the maintenance activities that go into that so we can run a more efficient network and utilize our assets more effectively.”

“In the past we would have to buy a special storage device to store large quantities of data, and we’d have to determine cost benefits to see if it was worth it,” says Donna Crutchfield, assistant vice president of information architecture and strategy at CSX. “So we were either letting the data die naturally, or we were only storing the data that was determined to be the most important at the time. But today, with the new technologies like data lakes, we’re able to store and utilize more of this data.”

CSX can now combine many different data types, such as sensor data from across the rail network and other systems that measure movement of its cars, and it can look for correlations across information that wasn’t previously analyzed together.

One of the larger data sets that CSX is capturing comprises the findings of its “wheel health detectors” across the network. These devices capture different signals about the bearings in the wheels, as well as the health of the wheels in terms of impact, sound, and heat.

“That volume of data is pretty significant, and what we would typically do is just look for signals that told us whether the wheel was bad and if we needed to set the car aside for repair. We would only keep the raw data for 10 days because of the volume and then purge everything but the alerts,” Hendrix says.

With its data lake, CSX can keep the wheel data for as long as it likes. “Now we’re starting to capture that data on a daily basis so we can start applying more machine-learning algorithms and predictive models across a larger history,” Hendrix says. “By having the full data set, we can better look for trends and patterns that will tell us if something is going to fail.”
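One simple form of a “failure signature” that only a retained history makes possible is a sustained upward trend that crosses a limit, something a keep-alerts-only, purge-the-rest policy could never detect. The sketch below is illustrative only; the thresholds, readings, and rule are invented, not CSX’s actual models:

```python
# Hypothetical trend-based screen over a retained sensor history.
def failing_trend(history, window=3, limit=0.9):
    """Flag a wheel if its last few impact readings are monotonically
    rising AND their average exceeds the limit -- a crude stand-in for
    a learned failure signature over the full data set."""
    if len(history) < window:
        return False
    recent = history[-window:]
    rising = all(b >= a for a, b in zip(recent, recent[1:]))
    return rising and sum(recent) / window > limit

wheel_a = [0.4, 0.5, 0.8, 0.95, 1.1]   # steadily worsening
wheel_b = [0.4, 1.2, 0.3, 0.5, 0.4]    # one spike, then back to normal
print(failing_trend(wheel_a), failing_trend(wheel_b))  # True False
```

Note that the one-off spike in `wheel_b` would have triggered an alert under a snapshot-only policy, while the slow degradation in `wheel_a` is only visible across the kept history.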


Another key ingredient in CSX’s data set is locomotive oil. By analyzing oil samples, CSX is developing better predictions of locomotive failure. “We’ve been able to determine when a locomotive would fail and predict it far enough in advance so we could send it down for maintenance and prevent it from failing while in use,” Crutchfield says.

“Between the locomotives, the tracks, and the freight cars, we will be looking at various ways to predict those failures and prevent them so we can improve our asset allocation. Then we won’t need as many assets,” she explains. “It’s like an airport. If a plane has a failure and it’s due to connect at another airport, all the passengers have to be reassigned. A failure affects the system like dominoes. It’s a similar case with a railroad. Any failure along the road affects our operations. Fewer failures mean more asset utilization. The more optimized the network is, the better we can service the customer.”

Detecting Fraud Through Correlations

Traditionally, business strategy has been a very conscious practice, presumed to emanate mainly from the minds of experienced executives, daring entrepreneurs, or high-priced consultants. But data lakes take strategy out of that rarefied realm and put it in the environment where just about everything in business seems to be going these days: math—specifically, the correlations that emerge from applying a mathematical algorithm to huge masses of data.

The Financial Industry Regulatory Authority (FINRA), a nonprofit group that regulates broker behavior in the United States, used to rely on the experience of its employees to come up with strategies for combating fraud and insider trading. It still does that, but now FINRA has added a data lake to find patterns that a human might never see.

Overall, FINRA processes over five petabytes of transaction data from multiple sources every day. By switching from traditional database and storage technology to a data lake, FINRA was able to set up a self-service process that allows analysts to query data themselves without involving the IT department; search times dropped from several hours to 90 seconds.
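The self-service pattern can be illustrated with SQLite standing in for a data lake query engine: an analyst writes SQL directly against pooled transaction data, with no IT request in the loop. The table, the data, and the “suspicious concentration” rule are all invented for the example:

```python
import sqlite3

# Toy stand-in for a data lake: pooled trade records in one queryable store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trades (broker TEXT, symbol TEXT, qty INTEGER)")
con.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                [("b1", "XYZ", 500), ("b1", "XYZ", 700), ("b2", "ABC", 50)])

# An analyst's ad hoc query: flag unusually concentrated activity,
# the kind of pattern a human reviewer might miss in billions of rows.
rows = con.execute("""
    SELECT broker, symbol, SUM(qty) AS total
    FROM trades GROUP BY broker, symbol
    HAVING total > 1000
""").fetchall()
print(rows)  # [('b1', 'XYZ', 1200)]
```

At FINRA’s petabyte scale the engine would of course be a distributed one, but the analyst’s experience is the same: write the query, get the answer in seconds.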

While traditional databases were good at defining relationships with data, such as tracking all the transactions from a particular customer, the new data lake configurations help users identify relationships that they didn’t know existed.

Leveraging its data lake, FINRA creates an environment for curiosity, empowering its data experts to search for suspicious patterns suggesting fraud, market manipulation, or compliance violations. As a result, FINRA handed out 373 fines totaling US$134.4 million in 2016, a new record for the agency, according to Law360.

Data Lakes Don’t End Complexity for IT

Though data lakes make access to data and analysis easier for the business, they don’t necessarily make the CIO’s life a bed of roses. Implementations can be complex, and companies rarely want to walk away from investments they’ve already made in data analysis technologies, such as data warehouses.

“There have been so many millions of dollars going to data warehousing over the last two decades. The idea that you’re just going to move it all into a data lake isn’t going to happen,” says Mike Ferguson, managing director of Intelligent Business Strategies, a UK analyst firm. “It’s just not compelling enough of a business case.” But Ferguson does see data lake efficiencies freeing up the capacity of data warehouses to enable more query, reporting, and analysis.

Data lakes also don’t free companies from the need to clean up and manage data as part of the process required to gain these useful insights. “The data comes in very raw, and it needs to be treated,” says James Curtis, senior analyst for data platforms and analytics at 451 Research. “It has to be prepped and cleaned and ready.”

Companies must have strong data governance processes, as well. Customers are increasingly concerned about privacy, and rules for data usage and compliance have become stricter in some areas of the globe, such as the European Union.

Companies must create data usage policies, then, that clearly define who can access, distribute, change, delete, or otherwise manipulate all that data. Companies must also make sure that the data they collect comes from a legitimate source.

Many companies are responding by hiring chief data officers (CDOs) to ensure that as more employees gain access to data, they use it effectively and responsibly. Indeed, research company Gartner predicts that 90% of large companies will have a CDO by 2019.

Data lakes can be configured in a variety of ways: centralized or distributed, with storage on premise or in the cloud or both. Some companies have more than one data lake implementation.

“A lot of my clients try their best to go centralized for obvious reasons. It’s much simpler to manage and to gather your data in one place,” says Ferguson. “But they’re often plagued somewhere down the line with much more added complexity and realize that in many cases the data lake has to be distributed to manage data across multiple data stores.”

Meanwhile, the massive capacities of data lakes mean that data that once flowed through a manageable spigot is now blasting at companies through a fire hose.

“We’re now dealing with data coming out at extreme velocity or in very large volumes,” Ferguson says. “The idea that people can manually keep pace with the number of data sources that are coming into the enterprise—it’s just not realistic any more. We have to find ways to take complexity away, and that tends to mean that we should automate. The expectation is that the information management software, like an information catalog for example, can help a company accelerate the onboarding of data and automatically classify it, profile it, organize it, and make it easy to find.”

Beyond the technical issues, IT and the business must also make important decisions about how data lakes will be managed and who will own the data, among other things (see How to Avoid Drowning in the Lake).


How to Avoid Drowning in the Lake

The benefits of data lakes can be squandered if you don’t manage the implementation and data ownership carefully.

Deploying and managing a massive data store is a big challenge. Here’s how to address some of the most common issues that companies face:

Determine the ROI. Developing a data lake is not a trivial undertaking. You need a good business case, and you need a measurable ROI. Most importantly, you need initial questions that can be answered by the data, which will prove its value.

Find data owners. As devices with sensors proliferate across the organization, the issue of data ownership becomes more important.

Have a plan for data retention. Companies used to have to cull data because it was too expensive to store. Now companies can become data hoarders. How long do you store it? Do you keep it forever?

Manage descriptive data. Software that allows you to tag all the data in one or multiple data lakes and keep it up-to-date is not mature yet. We still need tools to bring the metadata together to support self-service and to automate metadata to speed up the preparation, integration, and analysis of data.

Develop data curation skills. There is a huge skills gap for data repository development. But many people will jump at the chance to learn these new skills if companies are willing to pay for training and certification.

Be agile enough to take advantage of the findings. It used to be that you put in a request to the IT department for data and had to wait six months for an answer. Now, you get the answer immediately. Companies must be agile to take advantage of the insights.

Secure the data. Besides the perennial issues of hacking and breaches, a lot of data lakes software is open source and less secure than typical enterprise-class software.

Measure the quality of data. Different users can work with varying levels of quality in their data. For example, data scientists working with a huge number of data points might not need completely accurate data, because they can use machine learning to cluster data or discard outlying data as needed. However, a financial analyst might need the data to be completely correct.

Avoid creating new silos. Data lakes should work with existing data architectures, such as data warehouses and data marts.

From Data Queries to New Business Models

The ability of data lakes to uncover previously hidden data correlations can massively impact any part of the business. For example, in the past, a large soft drink maker used to stock its vending machines based on local bottlers’ and delivery people’s experience and gut instincts. Today, using vast amounts of data collected from sensors in the vending machines, the company can essentially treat each machine like a retail store, optimizing the drink selection by time of day, location, and other factors. Doing this kind of predictive analysis was possible before data lakes came along, but it wasn’t practical or economical at the individual machine level because the amount of data required for accurate predictions was simply too large.

The next step is for companies to use the insights gathered from their massive data stores not just to become more efficient and profitable in their existing lines of business but also to actually change their business models.

For example, product companies could shield themselves from the harsh light of comparison shopping by offering the use of their products as a service, with sensors on those products sending the company a constant stream of data about when they need to be repaired or replaced. Customers are spared the hassle of dealing with worn-out products, and companies are protected from competition as long as customers receive the features, price, and the level of service they expect. Further, companies can continuously gather and analyze data about customers’ usage patterns and equipment performance to find ways to lower costs and develop new services.

Data for All

Given the tremendous amount of hype that has surrounded Big Data for years now, it’s tempting to dismiss data lakes as a small step forward in an already familiar technology realm. But it’s not the technology that matters as much as what it enables organizations to do. By making data available to anyone who needs it, for as long as they need it, data lakes are a powerful lever for innovation and disruption across industries.

“Companies that do not actively invest in data lakes will truly be left behind,” says Anita Raj, principal growth hacker at DataRPM, which sells predictive maintenance applications to manufacturers that want to take advantage of these massive data stores. “So it’s just the option of disrupt or be disrupted.” D!

Read more thought provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.

About the Authors:

Timo Elliott is Vice President, Global Innovation Evangelist, at SAP.

John Schitka is Senior Director, Solution Marketing, Big Data Analytics, at SAP.

Michael Eacrett is Vice President, Product Management, Big Data, Enterprise Information Management, and SAP Vora, at SAP.

Carolyn Marsan is a freelance writer who focuses on business and technology topics.



Digitalist Magazine

Stop Trying to Win Against Robo-Advisors: Change the Game by Providing Value they Can’t with Today’s “Predictive” CRM


Morgan Stanley recently announced it is putting a machine learning-based system in place in an effort to make its financial advisors (the human ones) more effective. And what does this mean? That Morgan Stanley is ahead of the pack in understanding that robo-advisors are diluting the entire market, and that simply having a “relationship” is not enough to retain clients and keep them loyal. Human advisors today need the ability to offer something more, something robo-advisors cannot: a crystal ball.

But here’s the problem: Human advisors, as far as we know, don’t have one, either.

What Morgan Stanley is doing, though, is helping its advisors become more proactive by adding predictive technology, which provides more value to their existing clients. The theory it is promoting is that a human advisor with an “algorithmic assistant” is preferable to basic software that lumps clients together using extremely limited information and allocates assets wholesale within each category, based on how clients are profiled.

We think Morgan Stanley has got this right, but do the rest of us necessarily need to have highly customized, complex technology (which is not cheap, I might add) to get us closer to that crystal ball? Or, could your firm achieve this goal with—hmmm—say, your CRM system?

Traditional CRM: It did what it needed to do before, but it’s no longer fighting the fight

If you have worked with CRM in a financial services capacity, you already know what it can do. The typical role for CRM has been that of a control tool, primarily taking care of managing relationships, asset aggregation, and reporting. At AKA, we have been implementing Microsoft Dynamics CRM systems for many years, and we could easily argue that, when it comes to integrating data from transfer agents, portfolio management systems, trade settlement systems, and other such programs, we are the go-to experts.

Here’s the problem with traditional CRM systems: they cannot provide enough value to financial advisors when it comes to predictive relationship management. The reason is that traditional CRM systems are housed on premises, which limits them to internal data such as roll-up and account information. These systems have not been able to tap into external sources, and they have lacked the capability to provide predictive analytics and machine learning. They simply were not built that way.

Super-charged CRM: Machine learning is the new crystal ball

But hold on: If you’re thinking you’ve lost this war with the robo-advisors, you haven’t. Right now, advisors have the perfect opportunity to show clients that they are not just portfolio managers. They can help their clients reach their goals and realize their dreams. But to do this, they must get in front of the information so they can start providing such an unbelievable client experience that their client begins to think their advisor does indeed possess the ability to see into the future. Today’s CRM can aid them in doing just that. In fact, what CRM can offer now is pretty amazing.

Taking advantage of cloud capabilities, the CRM functionality in Microsoft Dynamics 365 can instantly integrate with other systems and their information sources. Microsoft also offers Relationship Insights, which super-charges CRM, turning it into a predictive tool that adds value through a proactive approach. Relationship Insights combines Microsoft’s capacity for managing external data with the data you have typically integrated within your CRM system. That takes CRM, and client relationships, to a new and exciting level by leveraging AI and machine learning along with the cloud.

With the parameters and triggers you establish in the system, CRM reaches out to advisors well in advance, allowing them to make smarter, more predictive decisions that benefit their clients. Beyond just monitoring their clients’ portfolios, advisors can now ensure that action is taken on trends in the earliest stages. In addition, by monitoring social media channels, advisors can provide a more personalized type of guidance. For example, if an advisor notices that a client is posting lots of photos of sailboats on Instagram, the advisor can reach out to discuss how the client could work the purchase of a sailboat into their financial plan.
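The parameter-and-trigger pattern described above can be sketched in a few lines. The signal name, threshold, and message below are hypothetical, chosen to mirror the sailboat example rather than any actual Relationship Insights API:

```python
# Hypothetical sketch: a parameterized trigger that surfaces an alert
# to the advisor when a tracked client signal crosses a threshold.
def build_trigger(signal, threshold, message):
    def check(client):
        if client.get(signal, 0) >= threshold:
            return f"{client['name']}: {message}"
        return None
    return check

# Invented signal: count of recent sailboat-related social posts.
sailboat_alert = build_trigger(
    "sailboat_posts", 3, "discuss financing a sailboat purchase")

clients = [{"name": "Avery", "sailboat_posts": 5},
           {"name": "Blake", "sailboat_posts": 0}]
alerts = [a for c in clients if (a := sailboat_alert(c))]
print(alerts)  # ['Avery: discuss financing a sailboat purchase']
```

The value of the pattern is that the advisor defines the parameters once, and the system watches every client continuously, surfacing only the relationships that need attention now.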

With this new layer of technology, a reactive CRM changes to a predictive tool, thereby allowing for greatly improved outreach to your clients and a considerably higher relationship score. The very minute you give a financial advisor the ability to make better decisions in the limited time that they have, you have added more worth to their client relationships. You’ve allowed them to focus more on those clients who are more valuable, increasing the chances for retention.

I’d like to see a robo-advisor do that.

Change the game!

To summarize, the CRM of today is equipped with predictive technology. Programs like Relationship Insights give your advisors the ability to build upon those precious, human relationships they have with their clients, allowing them to focus on helping them achieve their goals and live the dream. And that, my friends, is how you get a client for life.

So, stop playing the robo-advisor game, and learn how to change it. Check out our recorded webcast, Competing with Robo Advisors: How to Carry a Bigger Book of Business While Providing Clients with a High-touch Experience.


CRM Software Blog | Dynamics 365

3 Webcasts for a Smarter Data: Change Data Capture, Customer Records & Data Confidence

Summer is in full swing! Take a break from the heat and catch up on some best practices for getting more out of your Big Data.

We’ve got three recently recorded webcasts featuring updates to our DMX/DMX-h products, approaches to enhancing customer data, and a discussion of how data quality can give you data confidence. So sit back, relax, and get smarter with your data!


Getting Closer to Your Customers with Trillium Precise

See a live demonstration of how Trillium Precise provides the most detailed view of your customers and prospects, increases efficiency in managing customer information, and optimizes the customer experience for better business outcomes.

Watch the Trillium Precise demo now >


Gain Clarity and Confidence In Your Data – Benefit from Better Data Quality Today!

Discover proven solutions for increasing data quality through data lineage in this one hour webinar hosted by ASG Technologies and Trillium. Learn how to improve data quality by leveraging lineage maps, gain insight into where data quality gaps may exist, and understand changes that may impact critical data elements and data quality.

Watch the data quality webinar now >

Bonus: Put This on Your Summer Reading List

Discover how the new data supply chain impacts how data is moved, manipulated, and cleansed – download our new eBook The New Rules for Your Data Landscape today!




Syncsort + Trillium Software Blog

New Application of Change Data Capture Technology Revolutionizes Mainframe Data Access and Integration into the Data Lake

Introducing DMX Change Data Capture (CDC), the only product available with mainframe “capture” AND Big Data “apply.”

Syncsort’s New CDC Product

Yesterday, we announced our new product offering, DMX Change Data Capture. This unique offering, which works with both DMX and DMX-h, reliably captures changes from mainframe sources with low impact (I’ll explain this more below) and automatically applies them to Hadoop data stores. I’d like to use this blog to explain what we announced and why it’s unique in the industry.

The unique, high-performance mainframe access and integration provided by the DMX portfolio of products understands mainframe data, including databases and complex copybooks. I once saw an 86-page mainframe copybook that DMX read in without any problem. Could any other ETL tool do that? I highly doubt it!


DMX Change Data Capture is a unique application of CDC technology that provides unrivaled mainframe data access and integration to continuously, quickly and efficiently populate Hadoop data lakes with changes in mainframe data.

Achieving Low Impact on Resources AND Performance

Another key differentiator, one that saves an incredible amount of time and resources, is that Syncsort’s data integration leverages our UI and dynamic optimization, eliminating the need for coding or tuning.

Anyone associated with a mainframe is always concerned with mainframe MIPS/CPU impact because of the way mainframe usage is metered and charged for by IBM. DMX CDC doesn’t use database triggers, which can negatively impact performance and can also have an impact on MIPS.

Not only does DMX CDC have a small footprint on the mainframe to capture the logs, but the CPU impact – like everything we do – is minimal to keep costs low.


DMX and DMX CDC form a single offering to “capture” the changes on the mainframe (getting the changes from the logs, not triggers). The “apply” goes to Hadoop Hive with created, updated, and deleted records (yes, updates!) in any Hive file data store, including Avro, ORC, and Parquet.

This is also a very reliable transfer of data, even during a loss of connectivity between the mainframe and the Hadoop cluster. DMX and DMX CDC can pick up the transfer and update where the transfer stopped, without restarting the entire process.
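Resumable apply logic of this kind can be modeled simply: each change record carries a log sequence number (LSN), and replaying the stream after an outage skips anything at or below the last committed checkpoint. The record format below is invented for illustration and is not DMX CDC’s actual wire format:

```python
# Hypothetical checkpointed apply: resume after a lost connection
# without replaying work that already reached the target.
def apply_changes(changes, target, checkpoint):
    """Apply insert/update/delete records whose LSN is past the checkpoint."""
    for lsn, op, key, value in changes:
        if lsn <= checkpoint:          # already applied before the outage
            continue
        if op in ("insert", "update"):
            target[key] = value
        elif op == "delete":
            target.pop(key, None)
        checkpoint = lsn               # commit progress as we go
    return checkpoint

target, ckpt = {}, 0
log = [(1, "insert", "k1", "a"), (2, "insert", "k2", "b")]
ckpt = apply_changes(log, target, ckpt)
# Connection drops; later the same log is replayed, plus new changes.
log += [(3, "update", "k1", "a2"), (4, "delete", "k2", None)]
ckpt = apply_changes(log, target, ckpt)
print(target, ckpt)  # {'k1': 'a2'} 4
```

The key property is idempotent resumption: records 1 and 2 are skipped on replay, so the target ends up correct even though the stream was delivered twice.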

Our initial support will be for IBM DB2/z and VSAM data sets. We’ll add more data stores both on and off the mainframe. Mainframe data stores are in and of themselves unique, but when you can liberate that data AND add in the automatic apply to Hive (including updates!), this is truly a one of a kind offering. I’m proud of what the team has been able to accomplish in solving challenges in getting critical mainframe data into the data lake to support real-time business insights!

For more information about DMX Change Data Capture, register now for our June 27 webcast.


Syncsort + Trillium Software Blog

“We Can Change Our Strategies in 2 Days” – African Bank


African Bank, a large retail bank in South Africa, recently went through a large-scale restructuring in order to bring more efficiency, transparency and collaboration to the way it made decisions. Working with FICO, the bank applied a standards-based decision management methodology to fully modernize its decision system. Now that the new solution has been implemented, we spoke with Dawid Van Zyl, Program Executive of Credit Decisioning at African Bank, to learn more.

Q: What challenges was African Bank facing with its existing credit decision process?

Dawid: Our credit decision lifecycle was fragmented over different applications and teams. This was causing inefficiencies and missed opportunities for our executives to react to the market. We knew we needed to do a complete overhaul of the existing process in order to be effective and profitable.

Q: How did you and the FICO team create a solution for this challenge?

Dawid: The first step was an extensive internal review of bank operations. We had the entire bank look at all existing logic going into our decision-making process, which was fragmented over different applications and teams. We challenged and rationalized existing frameworks until we came up with an optimal plan.

Q: How did technology help transform your decision management process?

Dawid: We were already using FICO Blaze Advisor for rules management. Through our review, we found that Blaze Advisor could make strategy changes within two days, but the ecosystem around it took as long as two months. We looked at what was happening and created a proposal to completely overhaul the way we make decisions. We decided to extend Blaze Advisor capabilities at African Bank and use it as our hub for all decision making.

Q: How did FICO help you create a solution?

Dawid: FICO introduced a new standards-based decision-making methodology to us called Decision Implementation Accelerator. We refer to it as DIA3. The FICO consultants worked with us to help us understand how to approach certain decision scenarios, as well as get strategies configured and defined in Blaze Advisor. The methodology was integral to the success of this project. As we became more familiar with the approach, each iteration improved, which was key to reducing the total time taken.

Q: You mention a reduction in time to implementation. What specific benefits have you realized?

Dawid: After this overhaul, we now have a very complex but elegant solution in place that enables collaboration and complete transparency. We’ve been able to develop and implement new strategies 30% faster than expected, and we have reduced costs by 25%. Now we have a decision system that can make changes in two days, not two months. That makes a big difference to our efficiency and profitability.

We’d like to thank Dawid for taking the time to speak with us. If you’d like to learn more about how African Bank is using FICO solutions to manage credit decisions, read the case study.



Coffee Talk: Managing Pace of Change in the Power BI Era


Welcome to an experimental new feature here at PowerPivotPro, where members of the team discuss various topics related to Power BI, Power Pivot, and Analytics/BI in general. These conversations take place during the week on a Slack channel, and are then lightly edited for publishing on Friday.

This Week’s Topic: Managing Pace of Change in the Power BI Era

In this second installment of our weekly Coffee Talk leading up to the Microsoft Data Insights Summit in June, we discuss the current pace of change and what decisions organizations can make to ensure they are balancing their new investments in Power BI with their existing investments in Excel-based BI.

But first, let me introduce the team members we brought in for this week’s Coffee Talk, and their role within the PowerPivotPro team:


Myself, Kellan Danielson – I joined the P3 team in 2015 as a Consultant and am now responsible for ensuring we offer the highest level of service to everyone we interact with.


Austin Senseman – My counterpart, Austin joined the P3 team as a Consultant as well in 2015 and is now responsible for ensuring P3 functions like a well-oiled machine.


Ryan Sullivan – Our newest Principal Consultant, Ryan is an expert in an annoying number of tools and query languages. He is right at home here at P3 and is already making his presence known.


Reid Havens – One of our Principal Consultants, Reid provides consulting and training to some of our largest clients in the Seattle area as well as becoming a common voice on the blog.


David Harshany – One of our Principal Consultants, David provides remote consulting to clients with a myriad of business and technical challenges.


Channel created April 4th. This is the very beginning of #coffeetalk_apr14 channel. Purpose: Weekly roundtable of what went on in the company, on the blog, in the wider community that we think is worth talking about!

Kellan Danielson (2017-04-10 17:25):
Welcome back! Last week was a lot of fun and thank you Austin for co-facilitating! This week let’s bring in several of our Principal Consultants that are in the weeds with clients week in and week out. The topic this week is Microsoft’s BI Platform Architecture, and more specifically, what guidance you would give to folks who are starting their journey into the new Microsoft BI world of Power BI, Flow, PowerApps, and on and on and on.

Kellan Danielson (2017-04-10 17:31):
First Question: How do organizational leaders or “grass roots data junkies” stay up to date with the current pace of change, and what decisions can organizations make to ensure they are balancing their new investments in Power BI with their existing investments in Excel based BI? This is a question I get routinely and one I am interested in hearing some of your perspectives on.

Ryan Sullivan (2017-04-10 18:24):
They have also built forum based community pages on each of those sites that lets us see everything from how the experts are using these new tools to how problems are being fixed.

Ryan Sullivan (2017-04-10 18:26):

To answer your second question: All of these tools are specifically designed to interface smoothly with each other. When I first learned how to use each one, I’d log in and be amazed at the ease with which I hit the ground running with my existing data from other solutions.

That said, DAX is the backbone of Excel and PBI, so for learning DAX and making a couple of clicks/sign-ins, look at all of the awesome functionality we receive!!

Reid Havens (2017-04-10 20:06):

I often get questions or reactions from people who think that Power BI is replacing Excel. I try to describe Excel and Power BI as a Venn diagram: there’s significant overlap of reporting needs that can be met in either universe, but there’s also a very large and distinct section of reporting that can only be done with one of the two tools. Rob actually has a great Venn diagram visual explaining some of this, and I show it to clients occasionally. The entire post is actually great when it comes to discussing “Power BI”

(Image: Rob’s Venn diagram of the tool overlap between Power BI and Excel)

It’s a beautiful symbiosis between the two of them, complementary rather than oppositional.

Reid Havens (2017-04-10 20:09):

I will say that *learning* DAX, Power Query, etc… is best done in Excel first, then knowledge transferred over to the PBI universe.

My typical elevator pitch for the two, at a very high level: “Power BI is best for visualizations and telling a story. Excel is best for tables and detail reports.”

Austin Senseman (2017-04-11 00:43):

To the general public reading this, I need to confess that I _really_ like Venn diagrams

Yes, Excel and Power BI are *slowly* starting to come together better. I remember Kellan and me playing around with the “Analyze in Excel” feature at the Data Insights Summit last year, when it was totally new – pretty relentless pace since then.

Last week we spent a lot of time discussing / looking ahead on new features and yeah the pace is relentless. I will be the first to admit that I’m not totally up to speed with this Power BI / Power Apps / Flow stack. I’ve got the first piece down, sure. When I think about our business over the next year I hope we stay focused on bridging the gap between reporting and the actions people are taking to improve their business. To the extent that this new stack helps achieve that goal then count me in.

As far as keeping up to date – there are two stories here, a surface and a depth. Twitter is my surface story. That’s where news happens. In many ways I know everything that’s going on. As far as depth, I can’t get too far in building a process in Microsoft Flow. It gets too technical too quickly for me. I’m wondering if many people feel that same way about Power BI.

David Harshany (2017-04-11 01:20):
I’m with Austin on “the surface and the depth” story. My main source of information is email lists. Lists such as Microsoft Power BI, TechNet Flashes, Azure Updates and User Groups are my go-to sources for keeping up to date with the ever-changing technologies. If something catches my eye, such as a game-changer like Connect to Dataset in Power BI, I’ll go right to the source and investigate. For the rest, I’ll mark them as follow-ups and try to set aside some time every few weeks to catch up. The changes and updates are coming at such a furious pace that you can get overwhelmed if you try to focus on every single detail. Gain deeper knowledge of the items that will move the needle for you and just be aware of the rest for when they’re useful in the future.

Austin Senseman (2017-04-11 16:23):

@djharshany I’ve found Pocket (https://getpocket.com/) really useful for saving items for later. I’m on a schedule as well – I save a lot of articles and then pore over them when I’m on an airplane or waiting in line somewhere. #productivityhack

I think this furious pace of technological development has made me much more aware 1) of the amount of noise out in the world that I’m safe ignoring and 2) of how we need to stay vigilant in producing content that cuts through the noise.

Ryan Sullivan (2017-04-11 16:59):

Going off Austin’s second point, I think that staying vigilant of what we create to cut through the noise ties into something we talked about last week: the difference between GUI created DAX/visuals and expert guided development.

As the BI field becomes filled with solutions and loads of new people using them, the difference between the two becomes more important than ever. Many Excel people aren’t aware that they already have most of the tools in their back pocket to build amazing reporting that will rise above the rest and that we are here to help them get there!

Reid Havens (2017-04-12 04:11):

So I’m not sure if this is purely coincidence or related. But our wonderful colleague Matt Allington just posted a blog titled “Which To Use: Excel or Power BI”


After giving it a thorough read, I think it does a great job of breaking out the pros/cons of using Excel, Power BI Desktop, or http://PowerBI.com as the reporting tools to use. It even mentions a fourth option, SSAS Tabular, as a data modeling option outside of the above three.

Kellan Danielson (2017-04-13 21:46):
Thanks for sharing, Reid, and thank you everyone for a number of great resources for our readers to take advantage of! There is a treasure trove of great advice in this Coffee Talk, even for me, someone who uses these tools day in and day out. The pace of change is indeed furious, but ultimately, would we want it any other way!? :slightly_smiling_face: The key, which Austin points to, is targeting the content that cuts through the noise and brings the largest value to your specific organizational challenges, and skimming the rest. The tools don’t matter so much to me; it’s the insane capabilities they enable, and every month I see added capabilities that are changing the way I solve huge business problems in a ridiculously short amount of time. Stay tuned for another Coffee Talk series coming soon: To SQL or Not to SQL, the Power Query Story. Thanks again everybody!



Measuring the Pace of Change in the Fourth Industrial Revolution

  1. How Digital Thinking Separates Retail Leaders from Laggards
  2. To Bot, or Not to Bot
  3. Oils, Bots, AI and Clogged Arteries
  4. Artificial Intelligence Out of Doors in the Kingdom of Robots
  5. How Digital Leaders are Different
  6. The Three Tsunamis of Digital Transformation – Be Prepared!
  7. Bots, AI and the Next 40 Months
  8. You Only Have 40 Months to Digitally Transform
  9. Digital Technologies and the Greater Good
  10. Video Report: 40 Months of Hyper-Digital Transformation
  11. Report: 40 Months of Hyper-Digital Transformation
  12. Virtual Moves to Real in with Sensors and Digital Transformation
  13. Technology Must Disappear in 2017
  14. Merging Humans with AI and Machine Learning Systems
  15. In Defense of the Human Experience in a Digital World
  16. Profits that Kill in the Age of Digital Transformation
  17. Competing in Future Time and Digital Transformation
  18. Digital Hope and Redemption in the Digital Age
  19. Digital Transformation and the Role of Faster
  20. Digital Transformation and the Law of Thermodynamics
  21. Jettison the Heavy Baggage and Digitally Transform
  22. Digital Transformation – The Dark Side
  23. Business is Not as Usual in Digital Transformation
  24. 15 Rules for Winning in Digital Transformation
  25. The End Goal of Digital Transformation
  26. Digital Transformation and the Ignorance Penalty
  27. Surviving the Three Ages of Digital Transformation
  28. The Advantages of an Advantage in Digital Transformation
  29. From Digital to Hyper-Transformation
  30. Believers, Non-Believers and Digital Transformation
  31. Forces Driving the Digital Transformation Era
  32. Digital Transformation Requires Agility and Energy Measurement
  33. A Doctrine for Digital Transformation is Required
  34. Digital Transformation and Its Role in Mobility and Competition
  35. Digital Transformation – A Revolution in Precision Through IoT, Analytics and Mobility
  36. Competing in Digital Transformation and Mobility
  37. Ambiguity and Digital Transformation
  38. Digital Transformation and Mobility – Macro-Forces and Timing
  39. Mobile and IoT Technologies are Inside the Curve of Human Time


Kevin Benedict
Senior Analyst, Center for the Future of Work, Cognizant
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin’s YouTube Channel
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Mighty Morphing Pivot Tables or: How I learned to automatically change hierarchy levels on rows


The Pivot Pictured Above Acts as if We’ve Swapped Out Fields on Rows – in Response to a Slicer Click!

First off…my first post! Being one of the newest (and youngest) members of the PowerPivotPro family has been very exciting so far. As a way of introducing myself, I’d like to share a creative solution to a problem I’m sure many of you have encountered when building a report or dashboard.

Typical Client Report Requests:

  • Simple but complex
  • High-level yet detailed
  • Compact yet containing everything…

Quite understandably, clients love to channel their inner Marie Antoinette by basically asking to have their cake and eat it too. I actually relish those scenarios; they allow me to flex my “think outside the box” muscles! And hey, it’s great to be working with a toolset that truly CAN accommodate the special demands posed by real-world situations.

Well, one such scenario had a customer wanting a summary Pivot Table with about five columns (fields) worth of details in it. No problem, done and done! The problem we were encountering, however, was that 90% of the real estate on our reporting page had already been used for other visuals, none of which this client wanted eliminated or reduced to make room. So I was left with the predicament of figuring out how to fit all this data onto this dashboard…Tetris mode engage!

Unfortunately despite my best efforts to rearrange the dashboard (accompanied by my 80’s Rush Mixtape) I simply could not find any way to display a wide Pivot Table on this dashboard. So I circled back to the drawing board and asked myself what variables I could manipulate to achieve the desired outcome.

I realized that I had an assumption that the PivotTable had to be fixed, meaning that it always has to show all levels of the data. However I LOVE to design visuals for clients that are dynamic, only showing the relevant data to them (often based on slicer selections). So I politely asked my previous assumption to leave and invited over my good friend paradigm shift. After some long conversations and extensive Google searches I actually ran across a PowerPivotPro Blog Post written by Rob that inspired my eventual solution.

Discovering this post almost felt like a relay race and I was being passed the baton to cross the finish line. Using the idea from this post that a slicer could change the axis of a chart, I realized the same would work in a PivotTable. All five columns in my table were part of a hierarchy…so why not use this technique to display a single column that would DYNAMICALLY switch between any of the five levels of this hierarchy based on slicer selections. I would now be able to create a table that is both compact and would display all the data the client needed.

Time to break out the cake forks!

Now for the fun part as I share this recipe for success (that was the last cake joke, I promise). The general idea is to create a hierarchy in the data model and then reference its levels in an Excel set to be used in my Pivot Table. I’ll be using tables from the publicly available Northwind DW data set for this example.

Download Completed Example Workbook


Get Your Files

FIRST, create a Customer Geography Hierarchy in the data model on the DimCustomer Table.

Hierarchy in the Data Model table:


SECOND, create a new DAX Measure called “Distinct Count of Country.” This will be used in our set to indicate whether or not a selection was made on our country Slicer.

=DISTINCTCOUNT( DimCustomer[Country] )

Now some of you technically savvy readers may be thinking “why didn’t he use the DAX Function HASONEVALUE?” I’ll explain more on this later when I explain how to write the set using MDX.

THIRD, create a new set for your pivot table referencing our recently created Hierarchy and DAX Measure. Note that the only way to access your sets is through a conditional ribbon that is displayed only when a cell selection is on a Pivot Table.

Opening the Set Manager window:


Creating a new set using MDX:


Writing the MDX Code:


This MDX query works by utilizing an IIF statement, which operates the same way as in DAX or Excel. It checks whether our Distinct Count of Country DAX Measure is greater than 1 (indicating no slicer selection). If TRUE it returns the Country column from our hierarchy; if FALSE (a slicer selection has been made) it returns the City column. It’s important to note that I must reference the columns through the hierarchy; if you were to put just the column names in this query, it would not run. It’s also important that the “Recalculate set with every update” box is checked. This makes sure the MDX statement is recalculated every time someone uses a slicer; otherwise it’ll appear like the set isn’t working.
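In text form, the set follows this pattern. This is a sketch only: I’m assuming the user hierarchy is named Customer Geo Hierarchy on the DimCustomer table, so adjust the dimension, hierarchy and measure names to match your own model.

```mdx
IIF(
    [Measures].[Distinct Count of Country] > 1,
    /* no slicer selection made: show the Country level of the hierarchy */
    [DimCustomer].[Customer Geo Hierarchy].[Country].Members,
    /* a country has been selected: switch rows to the City level */
    [DimCustomer].[Customer Geo Hierarchy].[City].Members
)
```

Note that both branches reference levels of the hierarchy rather than bare column names, which is what allows the set to sit on rows in the Pivot Table.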

Keen observers take note! Here’s where I explain WHY I used a DISTINCTCOUNT rather than HASONEVALUE in my DAX Measure. Let’s say a client would like to multi-select countries in the slicer and still have it display the City column on rows in the Pivot Table. If I were to use HASONEVALUE in my DAX Measure I would only switch to the city column when a single value was selected.

The way I’ve designed it we can change the value in the MDX query from 1 to any number we’d like (E.g. 2, 3, etc.) which gives us the flexibility to allow multiple slicer selections and still have it switch to the City column.

“Clever Girl…”


I’m not actually sure if I’m supposed to be the hunter or dinosaur in this analogy from Jurassic Park…but either way I felt clever for that last step.

FOURTH, we can now use our newly created set in a Pivot Table. You’ll notice that a new folder called Sets is now nested in our DimCustomer Table.

Placing the Customer Geo set on rows in our Pivot Table:


Making a slicer selection to observe the dynamic switch from Country to City. Pretty cool!


My client’s reaction could be summed up in a single word spoken by the immortal Keanu Reeves…


Now some of you may have noticed that I have a Total Sales value at the top of my Pivot Table. This is my chance to point out one unfortunate drawback of using sets on rows: it eliminates the totals row at the bottom of the Pivot Table. All is not lost though, my friends, for every problem a solution can always be found! In this case I created an artificial “Totals row” at the top of the Pivot Table. I did this using the tried and true CUBEVALUE function to call the measure I’m already using in my PivotTable. NOTE that you need to make sure you connect all slicers (via the slicer_name) in the cube string so that they slice the CUBEVALUE as well. Finally, just a dash of formatting and some elbow grease and we have ourselves a Totals row!

CUBEVALUE Formula used in the cell for totals:

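As a sketch of that formula pattern: “ThisWorkbookDataModel” is the default connection name for the Power Pivot data model, while [Total Sales] and Slicer_Country are stand-ins for your actual measure and slicer names, so substitute your own.

```excel
=CUBEVALUE("ThisWorkbookDataModel",
           "[Measures].[Total Sales]",
           Slicer_Country)
```

Each connected slicer is passed by its programmatic name as an extra argument, which is what makes the artificial Totals row respect the same filters as the Pivot Table below it.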

There you have it, your very own Mighty Morphing Pivot Table.



5 Trends Driving Change in Transportation and Logistics in 2017


Posted by Emily Houghton, Emerging Industries Marketing Lead

Globalization and technological advances have brought rapid change to the transportation and logistics sector in recent years, and 2017 promises to be no different. Adapting to and taking advantage of those changes is a top priority for companies in the industry. From changes in the modern consumer’s needs and the skyrocketing growth of ecommerce to digitalization of the supply chain, automation technology, and the overall economic shift to the cloud, we see 2017 as a pivotal year for transportation and logistics companies.

Here are the five most important trends we see impacting the industry.

The Modern Consumer

As is the case with many industries, transportation and logistics will continue to be shaped by rising consumer expectations in 2017. Those who have grown up in the age of Amazon have an inherent desire to receive goods and services instantly—putting increased pressure on transportation and logistics companies to deliver goods exceptionally fast, and at the lowest price. Consumers now demand unprecedented visibility into order status, tracking and delivery, forcing the industry to invest in new technologies and partnerships.

A Rise in Ecommerce

Increasing consumer demands are fueled by the explosive growth of ecommerce. According to a survey by UPS, 51 percent of purchases were made online in 2016. Moreover, the phone is becoming the primary shopping device of consumers, according to PwC, meaning that they can literally shop anytime, anywhere. To compete, retailers must employ an omnichannel logistics strategy to deliver a seamless shopping experience. This inevitably introduces new supply chain, fulfillment and shipping challenges.

Supply Chain Innovation

Omnichannel logistics lends itself to another trend that will be prevalent in 2017: the digital supply chain. Harnessing the power of IoT and data-driven insights at various points along the supply chain offers huge potential to improve customer service and maximize efficiency. Big data and predictive analytics are empowering event-driven logistics that can account for external factors like natural disasters and war hazards, which helps significantly reduce risk along the supply chain.

Automated Delivery of Goods

In addition, the movement toward automation is drastically improving productivity. Amazon has already started experimenting with drones as a new form of express delivery and advancements in sensor technology have made autonomous vehicles a reality for 2017 and beyond. These automated solutions have the potential to increase safety, reduce risk, and significantly increase efficiency.


The Shift to the Cloud

Overarching the broader industry is the movement toward cloud logistics, which enables “logistics-as-a-service” business models. Innovations in the cloud have improved control over supply chain processes with access to real-time information—allowing companies to be more agile in response to volatility or disruptive events. Meanwhile, this same technology facilitates flexible integrations with other key business processes to optimize all operations.

Whether moving product via land, air, sea or a combination thereof, transportation and logistics companies have a lot on their plate. Advancements in technology and changes in the way goods are bought and sold are creating complexity, but also opportunity, for the industry. In order to keep pace, companies operating in this sector will have to learn to be agile, forward-thinking and open to collaboration as they navigate the constantly changing global economy.

Posted on Thu, February 16, 2017 by NetSuite


The NetSuite Blog

How Salesforce AI aims to change everyday business

Artificial intelligence has become a battleground for major technology providers. But some vendors are trying to leapfrog their competition by making AI more practical, bringing intelligence to workers’ daily tasks.

Salesforce is one such vendor intent on improving workers’ day-to-day activities with its flavor of artificial intelligence (AI). During the past year, through acquisitions and internal development, Salesforce AI has arrived in the form of Salesforce Einstein. Salesforce, the cloud-based CRM vendor, will integrate Einstein into various clouds in its Salesforce Customer Success Platform, from the Sales and Marketing clouds to the new customer experience offering Commerce Cloud. By building AI natively into workers’ existing tasks and using Salesforce data, the company believes it has an edge over competitors.

AI can enhance capabilities like predictive lead scoring (software that assigns numerical values to sales leads to identify the most promising), marketing campaign management and customer service, said Salesforce chief scientist Richard Socher in a conversation with SearchSalesforce about Einstein.

“Where can you use AI to enhance someone’s workflow?” Socher said. “How can you continuously collect data so that AI elements get smarter and smarter, and then surface them back to people so they are empowered?”


Socher outlined the kinds of efficiencies workers can derive with Salesforce AI. He explored the example of AI in common sales processes. AI can surface data in email and calendaring to make recommendations to sales reps and help optimize their time.

“We can go through email and help you understand your calendar, your schedule and help make smart recommendations on what to follow up on and whom to talk to,” Socher said. “No one wants to make a sales call that isn’t wanted.”

While Socher acknowledged that technology will have sweeping effects on human work, he remained steadfast that AI will bring new job opportunities. “We aren’t creating self-driving cars. We created new jobs based on new kinds of technology.” Socher said that ultimately, AI’s ability to displace jobs “depends on the use case” and added that it was important not to build biases into the algorithms. Bullish on the future of Salesforce AI, Socher provided his take on Einstein and how AI will take shape in the Salesforce roadmap.

Many major vendors are trying to make their mark in AI. How does Salesforce Einstein differ from IBM’s Watson, Microsoft AI or even Oracle’s Adaptive Intelligent Systems?


Richard Socher: We are focused on AI for CRM: sales, service, marketing, IoT [internet of things] and a little bit of healthcare. We don’t build a general-purpose AI engine that is abstract. We try to focus on use cases. We first make sure you flip a switch and it works. Then, over time, you can customize it and build your own apps. But our core is always CRM.

There are three elements of AI:

  • You have to have access to the data to understand it. We have metadata associated with that data.
  • The second is the algorithms. You need access to top talent to develop new things. We have a research group and access to great talent.
  • And third is workflow integration. It can mean a lot of different things: in CRM apps, it means using AI to enhance someone’s workflow by continuously collecting data so that the algorithms get smarter and smarter, then surfacing the predictions back to people in a way that empowers them and makes them more efficient.

So it’s native to the processes that workers are already using?

Socher: As a sales rep, we can go through your email to help you understand your calendar and whom to follow up with. We can surface and score all the opportunities and leads you have. No one wants to make a sales call that isn’t wanted or doesn’t result in a sale. You can sort all your leads by score and call those that are most likely to result in a sale.
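The lead-sorting workflow Socher describes can be sketched with a toy scorer. The `Lead` fields and the weights below are hypothetical stand-ins for the signals a trained model would learn; they are not Salesforce's actual features.

```python
# Minimal sketch of predictive lead scoring: assign each lead a
# numerical score and sort so reps call the most promising first.
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    emails_opened: int
    site_visits: int
    company_size: int

def score(lead: Lead) -> float:
    """Toy linear score; weights are illustrative, not learned."""
    return (0.5 * lead.emails_opened
            + 0.3 * lead.site_visits
            + 0.001 * lead.company_size)

leads = [
    Lead("Acme", emails_opened=4, site_visits=10, company_size=500),
    Lead("Globex", emails_opened=1, site_visits=2, company_size=50),
]

# Rank leads from most to least promising.
for lead in sorted(leads, key=score, reverse=True):
    print(lead.name, round(score(lead), 2))
```

In practice the weights would come from a model (e.g. logistic regression or gradient boosting) fit on which historical leads actually converted.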

How does Einstein’s predictive scoring or personalization feature extend functionality in Salesforce Sales and Marketing clouds, features that were present prior to Einstein?

Socher: My team is pushing the state of the art in artificial intelligence. That can mean improving existing features, but it also enables us to build new kinds of products. Now we’re working on question answering, where we can take general text, ask generic questions and get good snippets back. Until now, this technology has mostly been in the hands of consumer-oriented companies, and we wanted to bring it to enterprise customers. There’s an ongoing competition on the Stanford Question Answering Dataset (SQuAD); for a while we were No. 1, and now we have some tricks up our sleeves to improve our accuracy. We’re taking research ideas, and we publish our research and exchange ideas with the academic community.
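As a rough intuition for the snippet retrieval Socher mentions, a naive baseline ranks candidate sentences by word overlap with the question. Competitive SQuAD systems use trained neural readers, not this heuristic; the document and question below are made up for illustration.

```python
# Naive extractive-QA baseline: return the sentence sharing the most
# words with the question. Purely a didactic stand-in for a real model.
def best_snippet(question: str, text: str) -> str:
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

doc = ("Einstein adds AI features to Salesforce. "
       "It scores sales leads. It routes service cases.")
print(best_snippet("How does it score leads", doc))
```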

Talk about your strategy for rolling this out to various clouds. What are you starting with?

It’s important to think through how the technology will affect people.

Socher: We’re actually working in parallel. AI is moving into all the clouds, and it’s a parallel process. We have predictive lead scoring for sales, case routing for Service Cloud, and image analysis in Marketing Cloud to help marketers target their audiences. We have various features in IoT. All of them will need AI and are getting it through different efforts.
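The case routing Socher mentions for Service Cloud can be illustrated with a minimal classifier. The keyword rules and queue names below are hypothetical placeholders for the trained text model a production system would use.

```python
# Toy case router: assign an incoming support case to a queue based on
# keywords in its subject line. Queue names and rules are invented.
QUEUE_KEYWORDS = {
    "billing": {"invoice", "charge", "refund"},
    "technical": {"error", "crash", "login"},
}

def route_case(subject: str) -> str:
    words = set(subject.lower().split())
    for queue, keywords in QUEUE_KEYWORDS.items():
        if words & keywords:  # any keyword match wins the queue
            return queue
    return "general"

print(route_case("Refund for duplicate charge"))  # billing
```

A learned classifier would replace the keyword sets, but the routing shape, text in and queue name out, stays the same.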

How might AI affect human work? Could it take human jobs?

Socher: It’s important to think through how the technology will affect people. You need to ensure that there are no inherent biases in the algorithm. The answer is quite complex, but in some ways it might empower people.

But if salespeople aren’t wasting time on the wrong calls and are now making sales that work out, a company won’t let go of half of its sales force. In marketing, 15 years ago, we didn’t have the social media marketing position, but we created new jobs based on new technology. In some areas there may be less need, but AI may also empower new jobs and enable people to be successful. There are AI use cases we don’t work on, such as self-driving cars, and those may have a more immediate impact on jobs. Whatever you are actively working on, you should think those effects through.

There have been some examples of AI — such as a Microsoft chatbot rolled out then removed last year — that haven’t been successful.

Socher: We take it seriously — the trust other companies have in us. We’re cautious about AI. If Salesforce worked on a chatbot — now, this is hypothetical — the user would have an incentive to work together with the AI chatbot. The feature sets we’re working on — we want to make the job easier and better and more efficient with CRM. So there’s less of this kind of attack angle or scenario that you described.

What is the business value of Einstein ultimately? Does that change over time?

Socher: More important than the top business value, it’s important to note that AI will be in all the different aspects of enterprise software. You want to empower service folks to focus on the hard cases and give them tips on how to give the right answers quickly in a live setting. You want to empower salespeople to spend time in the most efficient way. You want to empower marketers to understand the whole product in the landscape. All three are important.


SearchCRM: News on CRM trends and technology