
Modernize Your Business, And Consequently Cover GDPR: Part 1

When outspoken venture capitalist and Netscape co-founder Marc Andreessen wrote in The Wall Street Journal in 2011 that software is eating the world, he was only partly correct. In fact, business services based on software platforms are what’s eating the world.

Companies like Apple, which remade the mobile phone industry by offering app developers easy access to millions of iPhone owners through its iTunes App Store platform, are changing the economy. However, these world-eating companies are not just in the tech world. They are also emerging in industries that you might not expect: retailers, finance companies, transportation firms, and others outside of Silicon Valley are all at the forefront of the platform revolution.

These outsiders are taking platforms to the next level by building them around business services and data, not just apps. Companies are making business services such as logistics, 3D printing, and even roadside assistance for drivers available through a software connection that other companies can plug in to and consume or offer to their own customers.

There are two kinds of players in this business platform revolution: providers and participants. Providers create the platform and create incentives for developers to write apps for it. Developers, meanwhile, are participants; they can extend the reach of their apps by offering them through the platform’s virtual shelves.

Business platforms let companies outside of the technology world become powerful tech players, unleashing a torrent of innovation that they could never produce on their own. Good business platforms create millions in extra revenue for companies by enlisting external developers to innovate for them. It’s as if strangers are handing you entirely new revenue streams and business models on the street.

Powering this movement are application programming interfaces (APIs) and software development kits (SDKs), which enable developers to easily plug their apps into a platform without having to know much about the complex software code that drives it. Developers get more time to focus on what they do best: writing great apps. Platform providers benefit because they can offer many innovative business services to end customers without having to create them themselves.

Any company can leverage APIs and SDKs to create new business models and products, even if those never become its primary method of monetization. These platforms give companies new opportunities and let them outflank smaller, more nimble competitors.

Indeed, the platform economy can generate unbelievable revenue streams for companies. According to Platform Revolution authors Geoffrey G. Parker, Marshall W. Van Alstyne, and Sangeet Paul Choudary, travel site Expedia makes approximately 90% of its revenue by making business services available to other travel companies through its API.

In TechCrunch in May 2016, Matt Murphy and Steve Sloane wrote that “the number of SaaS applications has exploded and there is a rising wave of software innovation in APIs that provide critical connective tissue and increasingly important functionality.” ProgrammableWeb.com, an API resource and directory, offers searchable access to more than 15,000 different APIs.

According to Accenture Technology Vision 2016, 82% of executives believe that platforms will be the “glue that brings organizations together in the digital economy.” The top 15 platforms (which include companies built entirely on this software architecture, such as eBay and Priceline.com) have a combined market capitalization of US$2.6 trillion.

It’s time for all companies to join the revolution. Whether working in alliance with partners or launching entirely in-house, companies need to think about platforms now, because they will have a disruptive impact on every major industry.


To the Barricades

Several factors converged to make monetizing a company’s business services easier. Many stem from the rise of smartphones, and specifically of Bluetooth and 3G (and later 4G and LTE) connections. These connections turned smartphones into consumption hubs that weren’t feasible when high-speed mobile access was spottier.

One good example of this is PayPal’s rise. In the early 2000s, it functioned primarily as a standalone web site, but as mobile purchasing became more widespread, third-party merchants clamored to integrate PayPal’s payment processing service into their own sites and apps.

In Platform Revolution, Parker, Van Alstyne, and Choudary claim that “platforms are eating pipelines,” with pipelines being the old, direct-to-consumer business methods of the past. The first stage of this takeover involved much more efficient digital pipelines (think of Amazon in the retail space and Grubhub for food delivery) challenging their offline counterparts.

What Makes Great Business Platforms Run?


The quality of the ecosystem that powers your platform is as important as the quality of experience you offer to customers. Here’s how to do it right.

Although the platform economy depends on them, application programming interfaces (APIs) and software development kits (SDKs) aren’t magic buttons. They’re tools that organizations can leverage to attract users and developers.

To succeed, organizations must ensure that APIs include extensive documentation and are easy for developers to add into their own products. Another part of platform success is building a general digital enterprise platform that includes both APIs and SDKs.

A good platform balances ease of use, developer support, security, data architecture (that is, will it play nice with a company’s existing systems?), edge processing (whether analytics are processed locally or in the cloud), and infrastructure (whether a platform provider operates its own data centers and cloud infrastructure or uses public cloud services). The exact formula for which elements to embrace, however, will vary according to the use case, the industry, the organization, and its customers.

In all cases, the platform should offer a value proposition that’s a cut above its competitors. That means a platform should offer a compelling business service that is difficult to duplicate.

By creating open standards and easy-to-work-with tools, organizations can greatly improve the platforms they offer. APIs and SDKs may sound complicated, but they’re just tools for talented people to do their jobs with. Enable these talented people, and your platform will take off.

In the second stage, platforms replace pipelines. Platform Revolution’s authors write: “The Internet no longer acts merely as a distribution channel (a pipeline). It also acts as a creation infrastructure and a coordination mechanism. Platforms are leveraging this new capability to create entirely new business models.” Good examples of second-stage companies include Airbnb, DoubleClick, Spotify, and Uber.

Allstate Takes Advantage of Its Hidden Jewels

Many companies taking advantage of platforms were around long before APIs, or even the internet, existed. Allstate, one of the largest insurers in the United States, has traditionally focused on insurance services. But recently, the company expanded into new markets—including the platform economy.

Allstate companies Allstate Roadside Services (ARS) and Arity, a technology company founded by Allstate in late 2016, have provided their parent company with new sources of revenue, thanks to new offerings. ARS launched Good Hands Rescue APIs, which allow third parties to leverage Allstate’s roadside assistance network in their own apps. Meanwhile, Arity offers a portfolio of APIs that let third parties leverage Allstate’s aggregate data on driver behavior and intellectual property related to risk prediction for uses spanning mobility, consumer, and insurance solutions.

For example, Verizon licenses an Allstate Good Hands Rescue API for its own roadside assistance app. And automakers GM and BMW also offer roadside assistance service through Allstate.

Potential customers for Arity’s API include insurance providers, shared mobility companies, automotive parts makers, telecoms, and others.

“Arity is an acknowledgement that we have to be digital first and think about the services we provide to customers and businesses,” says Chetan Phadnis, Arity’s head of product development. “Thinking about our intellectual property system and software products is a key part of our transformation. We think it will create new ways to make money in the vertical transportation ecosystem.”

One of Allstate’s major challenges is a change in auto ownership that threatens the traditional auto insurance model. No-car and one-car households are on the rise, ridesharing services such as Uber and Lyft work on very different insurance models than passenger cars or traditional taxi companies, and autonomous vehicles could disrupt the traditional auto insurance model entirely.

This means that companies like Allstate are smart to look for revenue streams beyond traditional insurance offerings. The intangible assets that Allstate has accumulated over the years—a massive aggregate collection of driver data, an extensive set of risk models and predictive algorithms, and a network of garages and mechanics to help stranded motorists—can also serve as a new revenue stream for the future.

By offering two distinct API services for the platform economy, Allstate is also able to see what customers might want in the future. While the Good Hands Rescue APIs let third-party users integrate a specific service (such as roadside assistance) into their software tools, Arity instead lets third-party developers leverage huge data sets as a piece of other, less narrowly defined projects, such as auto maintenance. As Arity gains insights into how customers use and respond to those offerings, it gets a preview into potential future directions for its own products and services.


Farmers Harvest Cash from a Platform

Another example of innovation fueling the platform economy doesn’t come from a boldfaced tech name. Instead, it comes from a relatively small startup that has nimbly built its business model around data with an interesting twist: it turns its customers into entrepreneurs.

Farmobile is a Kansas City–based agriculture tech company whose smart device, the Passive Uplink Connection (PUC), can be plugged into tractors, combines, sprayers, and other farm equipment.

Farmobile uses the PUC to enable farmers to monetize data from their fields, which is one of the savviest routes to success with platforms—making your platform so irresistible to end consumers that they foment the revolution for you.

Once installed, says CEO Jason Tatge, the PUC streams second-by-second data to farmers’ Farmobile accounts. This gives them finely detailed reports, called Electronic Field Records (EFRs), that they can use to improve their own business, share with trusted advisors, and sell to third parties.

The PUC gives farmers detailed records for tracking analytics on their crops, farms, and equipment and creates a marketplace where farmers can sell their data to third parties. Farmers benefit because they generate extra income; Farmobile benefits because it makes a commission on each purchase and builds a giant store of aggregated farming data.

This last bit is important if Farmobile is to successfully compete with traditional agricultural equipment manufacturers, which also gather data from farmers. Farmobile’s advantage (at least for now) is that the equipment makers limit their data gathering to their existing customer bases and sell it back to them in the form of services designed to improve crop yields and optimize equipment performance.

Farmobile, meanwhile, is trying to appeal to all farmers by sharing the wealth, which could help it leapfrog the giants that already have large customer bases. “The ability to bring data together easily is good for farmers, so we built API integrations to put data in one place,” says Tatge.

Farmers can resell their data on Farmobile’s Data Store to buyers such as reinsurance firm Guy Carpenter. To encourage farmers to opt in, says Tatge, “we told farmers that if they run our device over planting and harvest season, we can guarantee them $2 per acre for their EFRs.”

So far, Farmobile’s customers have sent the Data Store approximately 4,200 completed EFRs for both planting and harvest, which will serve as the backbone of the company’s data monetization efforts. Eventually, Farmobile hopes to expand the offerings on the Data Store to include records from at least 10 times as many different farm fields.


Under Armour Binges on APIs

Another model for the emerging business platform world comes from Under Armour, the sports apparel giant. Alongside its very successful clothing and shoe lines, Under Armour has put its platform at the heart of its business model.

But rather than build a platform itself, Under Armour has used its growing revenues to create an industry-leading ecosystem. Over the past decade, it has purchased companies that already offer APIs, including MapMyFitness, Endomondo, and MyFitnessPal, and then linked them all together into a massive platform that serves 30 million consumers.

This strategy has made Under Armour an indispensable part of the sprawling mobile fitness economy. According to the company’s 2016 annual results, its business platform ecosystem, known as the Connected Fitness division, generated $80 million in revenue that year – a 51% increase over 2015.

By combining existing APIs from its different apps with original tools built in-house, extensive developer support, and a robust SDK, Under Armour gives third-party developers everything they need to build their own fitness app or web site.

Depending on their needs, third-party developers can sign up for several different payment plans with varying access to Under Armour’s APIs and SDKs. Indeed, the company’s tiered developer pricing plan for Connected Fitness, which is separated into Starter, Pro, and Premium levels, makes Under Armour seem more like a tech company than a sports apparel firm.

As a result, Under Armour’s APIs and SDKs are the underpinnings of a vast platform cooperative. Under Armour’s apps seamlessly integrate with popular services like Fitbit and Garmin (even though Under Armour has a fitness tracker of its own) and are licensed by corporations ranging from Microsoft to Coca-Cola to Purina. They’re even used by fitness app competitors like AthletePath and Lose It.

A large part of Under Armour’s success is the sheer amount of data its fitness apps collect and then make available to developers. MyFitnessPal, for instance, is an industry-leading calorie and food tracker used for weight loss, and Endomondo is an extremely popular running and biking record keeper and route-sharing platform.

One way of looking at the Connected Fitness platform is as a combination of traditional consumer purchasing data with insights gleaned from Under Armour’s suite of apps, as well as from the third-party apps that Under Armour’s products use.

Indeed, Under Armour gets a bonus from the platform economy: it helps the company understand its customers better, creating a virtuous cycle. As end users use different apps fueled by Under Armour’s services and data-sharing capabilities, Under Armour can then use that data to fuel customer engagement and attract additional third-party app developers to add new services to the ecosystem.

What Successful Platforms Have in Common

The most successful business platforms have three things in common: They’re easy to work with, they fulfill a market need, and they offer data that’s useful to customers.

For instance, Farmobile’s marketplace fulfills a valuable need in the market: it lets farmers monetize data and develop a new revenue stream that otherwise would not exist. Similarly, Allstate’s Arity experiment turns large volumes of data collected by Allstate over the years into a revenue stream that drives down costs for Arity’s clients by giving them more accurate data to integrate into their apps and software tools.

Meanwhile, Under Armour’s Connected Fitness platform and API suite encourage users to sign up for more apps in the company’s ecosystem. If you track your meals in MyFitnessPal, you’ll want to track your runs in Endomondo or MapMyRun. Similarly, if you’re an app developer in the health and fitness space, Under Armour has a readily available collection of tools that will make it easy for users to switch over to your app and cheaper for you to develop your app.

As the platform economy grows, all three of these approaches—Allstate’s leveraging of its legacy business data, Farmobile’s marketplace for users to become data entrepreneurs, and Under Armour’s one-stop fitness app ecosystem—are extremely useful examples of what happens next.

In the coming months and years, the platform economy will see other big changes. In 2016, for example, Apple, Microsoft, Facebook, and Google all released APIs for their AI-powered voice assistant platforms, the most famous of which is Apple’s Siri.

The introduction of APIs confirms that the AI technology behind these bots has matured significantly and that a new wave of AI-based platform innovation is nigh. (In fact, Digitalist predicted last year that the emergence of an API for these AIs would open them up beyond conventional uses.) New voice-operated technologies such as Google Home and Amazon Alexa offer exciting opportunities for developers to create full-featured, immersive applications on top of existing platforms.

We will also see AI- and machine learning–based APIs emerge that will allow developers to quickly leverage unstructured data (such as social media posts or texts) for new applications and services. For instance, sentiment analysis APIs can help explore and better understand customers’ interests, emotions, and preferences in social media.

As large providers offer APIs and associated services for smaller organizations to leverage AI and machine learning, these companies can in turn create their own platforms for clients to use unstructured data—everything from insights from uploaded photographs to recognizing a user’s emotion based on facial expression or tone of voice—in their own apps and products. Meanwhile, the ever-increasing power of cloud platforms like Amazon Web Services and Microsoft Azure will give these computing-intensive app platforms the juice they need to become deeper and richer.

These business services will depend on easy ways to exchange and implement data. The good news is that sharing data is becoming easier, and the API and SDK offerings that fuel the platform economy will only grow more robust. Thanks to the opportunities these new platforms generate for end users, developers, and platform businesses themselves, everyone stands to win – if they act soon.


About the Authors

Bernd Leukert is a member of the Executive Board, Products and Innovation, for SAP.

Björn Goerke is Chief Technology Officer and President, SAP Cloud Platform, for SAP.

Volker Hildebrand is Global Vice President for SAP Hybris solutions.

Sethu M is President, Mobile Services, for SAP.

Neal Ungerleider is a Los Angeles-based technology journalist and consultant.


Read more thought-provoking articles in the latest issue of the Digitalist Magazine, Executive Quarterly.


Digitalist Magazine

Exploring Field Types – Part II of IV: Calculated Fields


Welcome back to our four-part blog series on Field Types. In Part I, we tackled Rollup Fields – if you missed it, check it out here. In today’s blog, we’ll take it a step further and discuss Calculated Fields, including when and why to use them and how to add them. Let’s get started.

Recall from Part I that we wanted to easily see the number of tasks against each opportunity, as well as how many of those tasks had been completed at any given time. To accomplish this, we created two rollup fields: Tasks and Tasks Completed.

The result was a view that looked like this:

[Screenshot: opportunity view showing the Tasks and Tasks Completed columns]

It allowed us to see that the 4G Enabled Tablets opportunity required 9 tasks, but only 3 of those had been completed. This is useful information, but it would be nice to take it a step further.

Let’s say that for each opportunity or Topic we want to quickly see the percentage of tasks completed and the number of tasks outstanding. Luckily, it’s just a matter of creating a couple new fields.

First, let’s define the math required to achieve our desired information:

% Completed = (Tasks Completed / Tasks) * 100

Tasks Outstanding = Tasks – Tasks Completed

Note that both formulas require that we know Tasks Completed and Tasks – exactly the two fields we created in Part I of this series!
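As a quick sanity check, plug in the Part I numbers for the 4G Enabled Tablets opportunity (9 tasks, 3 completed):

% Completed = (3 / 9) * 100 = 33.33

Tasks Outstanding = 9 - 3 = 6

These are exactly the values we should see once the new fields are in place.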

1. Begin by creating the first of two more new Fields. In our example below, we’ve named it % Completed. Set Data Type to Decimal Number; set Field Type to Calculated as shown:
[Screenshot: new field named % Completed, with Data Type set to Decimal Number and Field Type set to Calculated]

Note in the screenshot above that the schema automatically removes the % from the Name, but don’t worry: the Display Name (which is what we will eventually see in our View) retains the %.

2. Click Edit (circled above) to modify the properties of the Calculated field.

3. You may recall that in Part I of this blog series, when we created the two new fields Tasks and Tasks Completed, we made a special note of the fact that we included the word Tasks in the name of each new field. Well, here is where that comes in handy. On the next screen, click in the action area to the right of the equal sign and start typing Tasks. As you begin typing, all field names that match the text you’ve entered will appear, as shown below. How cool is that?

[Screenshot: matching field names appearing as you type in the action area]

4. At this point, we want to define the actual calculation we want performed in the % Completed field. Earlier, we defined this as % Completed = (Tasks Completed/Tasks) * 100. So, double-click on new_taskscompleted from the available options, and this will add the field name to the action area. Next, type “/” then double-click on new_tasks to add it to the action area. Type *100. Finally, add the necessary brackets. When you’re done, it will look like this:

[Screenshot: the completed formula, (new_taskscompleted / new_tasks) * 100]
Note that adding a CONDITION is optional. In our example, none is needed, since we want to apply the calculation to every Opportunity record without exclusion.

5. In step five, we’ll create the second of the two new Fields; we’ve named it Tasks Outstanding. Set Data Type to Whole Number; set Field Type to Calculated, as shown:

[Screenshot: new field named Tasks Outstanding, with Data Type set to Whole Number and Field Type set to Calculated]

6. Click Edit to modify the properties of the Calculated field. Once again, begin typing Tasks in the action area to reveal the available fields.

At this point, we want to define the actual calculation we want to be performed in the Tasks Outstanding field. Earlier, we defined this as Tasks Outstanding = Tasks – Tasks Completed. Double-click on new_tasks from the available options; this will add the field name to the action area. Next, type “-” and then double-click on new_taskscompleted to add it to the action area. It should look like this:

[Screenshot: the completed formula, new_tasks - new_taskscompleted]
Once again, note that adding a CONDITION is optional. In our example, none is needed, since we want to apply the calculation to every Opportunity record without exclusion.

7. In step seven, we need to add our two newly-created fields to our existing Section in the Form. For more information, see Customizing Entities.

8. After that, we need to add our new fields to a View, which is as simple as selecting the View and adding the columns. For detailed information on how to do this, please see Customizing Views.

9. Publish all changes. Unlike Rollup fields, which take up to 24 hours for the changes to be applied, Calculated fields are updated immediately. This means all your data is populated right away, as shown below:

[Screenshot: view showing the % Completed and Tasks Outstanding columns populated]

Now we can very quickly and easily see that the 4G Enabled Tablets opportunity, for example, is 33.33% complete with 6 outstanding tasks. All the math is done for us! Pretty slick!

Well, that’ll do it for Part II. In our next blog on Field Types, we will tackle Option Sets. Wait until you see what’s in store!

In the meantime, Happy Dynamics 365’ing!


PowerObjects- Bringing Focus to Dynamics CRM


Expert Interview (Part 2): Alation’s CEO Sangani Discusses Big Data Management Trends and Best Practices

In Part 1 of this two-part interview, Satyen Sangani (@satyx), CEO and co-founder of Alation, spoke about data cataloging. In today’s Part 2, he provides his thoughts on trends and best practices in Big Data management.

What are some of the more outdated or inefficient processes involved with accessing relevant data today? What is slowing businesses down?

It can take a long time to extract data from the data lake and get the right data to the right person exactly when they need it.

Businesses are moving to self-service analytics solutions where it isn’t necessary to have the involvement of the IT department to access and work with data. However, self-service tools often fail at helping users understand how to appropriately use the data. Specifically, they don’t always know which data sets to use, which definitions to use or which metrics are correct.


What should companies be doing today to prepare for how they’ll use data in the future? What should their long-term strategies look like?

Ultimately, you want to get data, business context and technical context in front of your employees as quickly as possible. The days when you could take months to prepare a report are over.

Given this, companies need to spend time thinking about (a) how they can get data to their employees as fast as possible, and (b) how to train their workforces to find, understand and use that data to get insights fast.

What’s one piece of advice you find yourself repeating to your clients over and over? Something you wish more companies were doing to get more out of their data?

Data governance has traditionally implied a top-down, command-and-control approach. Such an approach generally works when compliance is the primary goal, but when the goal is to get data consumers to use data more often, it’s important to take an iterative and agile approach to data governance.

It’s less about prescribing rules than reacting to users by gently correcting and improving their behavior.

What trends or innovations in Big Data management are you following today? Why do they excite you?

Self-service is, of course, a big one. We also like distributed computation engines like Presto and Spark. The notion that we can disconnect compute from storage is finally becoming a reality.

AI and Machine Learning need to be embedded into every layer of the stack. There’s too much manual work in data and that manual work comes at the cost of speed.

To learn how to put your legacy data to work for you, and plan and launch successful data lake projects with tips and tricks from industry experts, download the TDWI Checklist Report: Building a Data Lake with Legacy Data



Syncsort + Trillium Software Blog

Expert Interview (Part 1): Satyen Sangani of Alation on Data Cataloging

Satyen Sangani (@satyx) is the CEO and co-founder of Alation. In founding Alation, he aspired to help people become more data literate.

Before Alation, Sangani spent nearly a decade at Oracle, where he ran the Financial Services Warehousing and Performance Management business. Prior to Oracle, he was an associate with the private investment firm Texas Pacific Group and an analyst with Morgan Stanley & Co. He holds a Master’s in Economics from the University of Oxford and a Bachelor’s from Columbia College.

We recently asked Sangani for his insight on data cataloging and how businesses can better manage their data. Here’s what he shared:

Tell us about the mission at Alation. How are you hoping to change the ways businesses approach managing data?

At Alation, we’re looking to fundamentally change the way data consumers, creators and stewards find, understand and trust data. Our product, a data catalog, provides a single source of reference for data in the enterprise.

What is collaborative data cataloging? How does it work?

No matter where your data resides – in a data lake, a data warehouse or a business intelligence system – Alation regularly and automatically indexes your data and gathers knowledge about the data and how it is being used.

Like Google, Alation uses machine learning to continually improve its understanding of that data. Clients use Alation to work better together, to leverage data with greater confidence, to improve productivity and to index all their data knowledge. Anyone who works with data, from IT to a less-than-technical business user, can collaborate and comment on data using annotations and threaded discussions, much as you’d find in a consumer catalog such as Yelp.

What are the benefits to businesses of using a collaborative data catalog? How do they make data management more efficient and effective?

Alation replaces tribal knowledge with a complete repository for all data assets and data knowledge in your organization, including capabilities such as:

  • Business glossary
  • Data dictionary
  • Wiki articles


Alation profiles data and monitors usage to ensure that users have reliable insight into data accuracy. This includes providing insights through:

  • Usage reports
  • Data profiles
  • Interactive lineage

Alation provides deep insight into how users are creating and sharing knowledge from raw data. This includes surfacing details that include:

  • Top users
  • Column-level popularity
  • Shared joins & filters

What are the common challenges you’re observing businesses facing with data management today? What is at the root of these challenges?

The problem isn’t really about the volume of data that businesses must deal with these days. It’s really about how to find, understand and trust that data to fundamentally understand what is going on with your business in terms that everyone in the organization can agree upon. That’s what data cataloging can provide.

Tune in tomorrow for Part 2 of this interview where Satyen Sangani will discuss Big Data management trends and best practices.

Discover the new ways data is being moved, manipulated, and cleansed – download Syncsort’s eBook The New Rules for Your Data Landscape.



Syncsort + Trillium Software Blog

Expert Interview (Part 1): Splunk’s Andi Mann on IT Service Intelligence, ITOA and AIOps

For over 30 years and across five continents, Andi Mann (@AndiMann) has built success with Fortune 500 corporations, vendors, governments, and as a leading research analyst and consultant. He currently serves as Splunk’s Chief Technology Advocate. He is an accomplished digital business executive with extensive global expertise as a strategist, technologist, innovator, and communicator. In the first part of this two-part interview, he shares his thoughts on IT Service Intelligence (ITSI) and its role in IT Operational Analytics (ITOA) and Artificial Intelligence Operations (AIOps).

What is ITSI and how does it fit with ITOA and/or AIOps?

According to Gartner, IT Operational Analytics (ITOA) is a market for solutions that bring advanced analytical techniques to IT operations management use cases and data. ITOA solutions collect, store, analyze, and visualize IT operations data from other applications and IT operations management (ITOM) tools, enabling IT Ops teams to perform faster root cause analysis, triage, and problem resolution.

As it has become more sophisticated, Gartner has redefined ITOA as “AIOps,” initially calling it Algorithmic IT Ops, now morphing into “Artificial Intelligence Ops,” reflecting the increasing use of machine learning, predictive analytics, and artificial intelligence in these solutions.

Splunk IT Service Intelligence (ITSI) is a next-generation monitoring and analytics solution in the ITOA/AIOps space, built on top of Splunk Enterprise or Splunk Cloud. ITSI uses machine learning and event analytics to simplify operations, prioritize problem resolution, and align IT with the business.


Using metrics and performance indicators that are aligned with strategic goals and objectives, ITSI goes beyond reactive and ad hoc troubleshooting to proactively organize and correlate relevant metrics and events according to the business service they support. With ITSI, IT Ops can better understand and even predict KPI trends, identify and triage systemic issues, and speed up investigations and diagnosis.

This allows maturing IT organizations to quickly yet deeply understand the impact that service degradation has not only on the components in their service stack, but also on service levels and business capabilities – think more “web store” than “web server.”

We are seeing a lot of investment by organizations in leveraging the value of Big Data. What do you see as the major drivers for this?

I see three main drivers for this new focus on Big Data.

Firstly, the increasing volume of data is creating a maintenance nightmare, but also an analytics dream. This new data – from online applications, mobile devices, cloud systems, social services, partner integrations, connected devices, and more – is full of insights, but cannot be managed with traditional tools. Big Data is often the only way to understand a modern business service at scale.

Secondly, speed and agility are emerging as market differentiators. Slow, old-school techniques like data warehousing, Extract-Transform-Load (ETL) operations, batch data processing, and scheduled reporting are not fast enough. New-style Big Data tools, by contrast, ingest data in real time, use machine learning and predictive analytics to generate meaning, instantly display sophisticated and customizable visualizations, and produce actionable insights from Big Data as it is produced.

Thirdly, there is an increasing focus on data-driven decisions to drive innovation. From junior IT admins to senior business execs, innovation requires all stakeholders make accurate decisions in real time. Big Data allows everyone to try new ideas, determine what works and what doesn’t, and then iterate quickly to course-correct from failures or double-down on successes, quickly adjusting to new information and meeting the changing demands of the market.

In Part 2, Andi Mann discusses the reasons mainframe and distributed IT are sharing data, and the use cases where organizations are building more effective digital capabilities with mainframe back ends.

Download Syncsort’s latest eBook, Ironstream in the Real-World for ITOA, ITSI, SIEM, to explore real world use cases where new technologies can provide answers to the questions challenging organizations.



Syncsort + Trillium Software Blog

Expert Interview (Part 2): Andi Mann Compares ITSI and Business Service Management (BSM)

In Part 1 of this two-part expert interview with Andi Mann (@AndiMann), Splunk’s Chief Technology Advocate, he talks about IT Service Intelligence (ITSI) and how it fits with ITOA and AIOps, and the main drivers for the big investments organizations are making in Big Data. In today’s Part 2, he compares ITSI and Business Service Management, and discusses the reasons mainframe and distributed IT are sharing data, and the use cases where organizations are building more effective digital capabilities with mainframe back ends.

ITSI sounds a lot like what we have called Business Service Management (BSM) for the last decade. What is different about it?

Today, it is clear that the promise of Business Service Management was never fulfilled. It was too ambitious for its time, too complex in its make-up, and suffered from deficient underlying technologies – not least its database-driven approach.

Business Service Management made it too hard to create and update service definitions, and it was too rigid in how service data was collected and managed. It relied on too few data sources for actionable business insight, and was typically restricted to too few (typically tech-centric) users to have broad business impact.


By contrast, Splunk IT Service Intelligence (ITSI) uses an analytics-driven solution, with machine learning, open integrations, and real-time processes as part of a modern business-centric solution. Unlike legacy BSM tools, ITSI integrates data sources from across the organization (and beyond), providing highly customizable visualizations, even in rapidly changing environments. ITSI is flexible and secure enough to provide real-time insights for any user, on-demand and on-the-fly. BSM tools pale in comparison.

One of the shortcomings of BSM, or whatever we chose to call it, has been the weak integration between IT metrics from distributed platforms and mainframe systems within IT infrastructures – how does ITSI help to address this?

With so much mission-critical data coming from mainframe platforms, it is amazing how many tools and solutions ignore it, or at best publish some kind of loosely-coupled connector and call it a day. That is not nearly enough for such a valuable source of intelligence. Instead you need to treat the mainframe as a first-class citizen in the service environment, alongside cloud, *nix, mobile, and other systems.

ITSI integrates tightly with solutions from trusted mainframe partners like Syncsort to ingest data from mainframe platforms, and combine them with distributed systems, to provide cohesive insights into the activity, status, and performance of cross-enterprise services.

This means more than just scraping application outputs, but also safely and securely integrating data from syslog, RMF, SMF, IMS/CICS, MICS, and other sources on zOS and zLinux partitions. Tightly integrating mainframe data in this way is the only way to provide total visibility of enterprise-wide services.


We are seeing the walls between mainframe and distributed IT coming down as organizations are more open to sharing information – what are some of the primary use cases driving this?

People are starting to understand that, despite some challenges, mainframe applications and data are too critical to leave in their own silo.

As organizations work through new “digital” projects, they eventually realize that the mainframe is the locus of so much mission-critical information. Teams focused on mobile engagement, customer experience, sentiment analysis, web interfaces, application modernization, digital transformation, and even innovation are realizing that you cannot just “ringfence” the mainframe.

You need mainframe data for a truly cohesive application, so they are building more effective experiences by integrating new “digital” capabilities with mainframe back ends.

Download Syncsort’s eBook, IT Service Intelligence: What Professionals Need to Know, to see how an IT Service Intelligence approach extends ITSSM to provide end-to-end visibility and insight into the operational health of critical IT and business services which span distributed systems, mainframe, and even mobile devices.


Syncsort + Trillium Software Blog

Expert Interview (Part 2): Decideo’s Nieuwbourg Talks Trends in Big Data, IoT & Data Visualization

In Part 1 of our two-part conversation, Philippe Nieuwbourg (@nieuwbourg) talked about the founding of Decideo, interesting developments in data, what tools and skills you need, and mistakes to avoid to get the most out of data. In today’s Part 2, he addresses what businesses should be doing with data today and down the road, and trends in Big Data, IoT and data visualization.

In general, what should businesses be doing from a technology standpoint today to prepare for the ways they’ll use data down the road?

First: collect and store data. Without fresh and accurate data in your data lake, you can’t analyze anything. It’s the first step of a data culture. Of course, privacy and regulation are key to collecting the right data – data you will actually be able to analyze.

From a technology standpoint, it means creating what we call a “data lake” – a place where data is stored, ready to be used by the analytics applications we already know or for future needs. And, I repeat, it means verifying and protecting data from its sources to the data lake, to be sure we will be able to use it to generate value.


What are the most promising new IoT applications you’re observing in business today?

IoT is fantastic. Humans can generate data, but they have limits that sensors don’t have. Objects and sensors can generate thousands of data points every minute. It’s an inexhaustible data source.

Predictive maintenance, consumer behavior analytics, video recognition for security applications … there’s no industry that will avoid the IoT wave. If you don’t know yet how IoT will transform your industry, focus on it! It will – you are just late!

Related: The IoT-Big Data Convergence

Why is effective data visualization so critical to an organization’s ability to understand their data? What should organizations look for in high-quality data visualization tools?

Imagine you have analyzed terabytes of data. You found something – a behavior, a trend, a pattern. How will you bring this to your management? Will you, like a research scientist, produce a black-and-white text presentation with complex equations? Certainly not the best way to convince your general manager. Just one image is better than a report or a long text explanation. But how do you choose the right graphical visualization? How do you create the wow effect in your boss’s eyes? That’s where data visualization software comes in.

Did you know that Excel can generate only two types of graphic? There are plenty of highly understandable graphics that your management will love but that you can’t create in Excel. Just remember how Charles Joseph Minard created the famous graphic of Napoleon’s march to Moscow in 1869. The data visualization you choose doesn’t depend on your data; it depends on your message. And you need advanced software to do it.

Related: Visualization for Big Data

How has the way companies are able to visualize their data evolved in recent years? What developments interest you in the world of data visualization right now?

We’ve moved from data visualization to data storytelling.

As in Hans Rosling’s famous TED presentations, you must tell a story. People never remember the statistics you put in your PowerPoint; they will remember the story you told them. And to transform your boring slides into a stunning story, you need to apply storytelling techniques to data analysis. Animated data, storytelling techniques – you have the keys, the same ones used in Hollywood movies or to write the scenario of the next season of House of Cards. By the way, did you know that Netflix is the first data-driven movie/series producer?

What trends or innovations in the field of Big Data, data visualization and IoT are you following right now? Why do they excite you?

I continue to follow the data storytelling field. Today’s software is a first generation, and it is missing a lot of functionality. There is no real data storytelling software on the market yet. I hope to discover one very soon.

And I’m really interested in deep learning, especially applied to IoT video feeds and photo analysis. If we can automatically “understand” a situation, we can automatically suggest an action. It can be a recommendation or a human intervention, but if we help people understand data more quickly, we take a step forward toward the “augmented intelligence” I was talking about before.

What’s one piece of advice you find yourself repeating to organizations over and over related to Big Data? One takeaway you think every organization should hear?

It’s not a technology project! Big Data is not a technology issue; it’s all about business. Stop buying technology before you have understood and measured your data’s value. The result of Big Data analysis can be very small. Imagine a model that every morning gives you the list of the 10 prospects you must call today because they are ready to sign. Perhaps it’s Big Data analysis, but the result is a list of 10 names … small data. Don’t focus on technology; focus on business.

And don’t tell me Big Data is expensive – it’s never too costly, because you will always prototype first, often with open source software or with tools you already have, and anticipate your ROI before investing in high-level infrastructure. Big Data will never be a cost; it will always be a source of revenue, if you accept moving step by step and focusing on business challenges. And I wish the best of luck to all our readers. The data-driven economy is a fantastic opportunity for the current and next generations to generate value and do things better.

To learn how to put your legacy data to work for you, and plan and launch successful data lake projects with tips and tricks from industry experts, download Syncsort’s Building a Data Lake Checklist Report!


Syncsort + Trillium Software Blog

DAX Reanimator Series: Moving Averages Controlled by Slicer (Part 1)


In 2013, Rob showed how to use a disconnected table and slicer to show moving averages with a variable time period. That post built on an earlier post, which steps through the process of creating moving averages.

What-If-Parameters, teased at MDIS in June 2017, have just been released to Power BI. This is a great new way to get disconnected tables into Power BI, and it should help more people than ever discover the joy of disconnected tables. With the latest update of Power BI, you can now CLICK A BUTTON to create a new What-If-Parameter. The button creates a series of numbers in a disconnected table and a harvester measure to get the selected value. As an added bonus, you can even add the slicer to the page at the same time.

So, could we use the what-if-parameters to get the same results? Let’s try it. To begin with, we’ll need a Sales and Calendar table and a DAX measure for Units Sold.

Step 1: Create a new What-if-Parameter

From the Modeling tab, click the New Parameter button to bring up the What-if parameter window. This window creates a table and a DAX harvester measure. You can also add a slicer to the page. Values for Minimum, Maximum, and Increment are required. As a great touch, there’s an optional AlternateResult. This default value is in effect if nothing is selected on the slicer, or if the user selects more than one value.

     = SELECTEDVALUE ( Table[ColumnName], [AlternateResult] )

In a recent post, Matt Allington points out that this function is the same as:

     = IF ( HASONEVALUE ( Table[Column] ), VALUES ( Table[Column] ), [AlternateResult] )
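
Applied to this post’s parameter – assuming the default names Power BI generates, where the table and column both take the parameter’s name, MA Length, and -3 is the AlternateResult used later in Step 3 – the auto-generated harvester measure would look something like this:

     [MA Length Value] = SELECTEDVALUE ( 'MA Length'[MA Length], -3 )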

Step 2: View your new disconnected table and slicer

The new table now displays in the fields area. Before, we had to make these tables in Power Query or with the Enter Data button in Power BI. But no more! Now, the What-if button creates a table for us using DAX. No query or pasting required. By the way, this new tool is only available in Power BI. Neither the What-If-Parameter nor the DAX functions behind it are yet available in Excel.

The What-If-Parameter button created this table:

Note the new GENERATESERIES() function! This is how the table looks in Data view. There’s a table formula at the top and the results on the rows of the table.

     [MA Length] = GENERATESERIES ( -12, 12, 1 )

This formula plugs in the values from the window: a start of -12, an end of 12, and an increment of 1. The increment can be any positive number, like 2, 5, or 0.05.

     GENERATESERIES ( StartValue, EndValue, [IncrementValue] )
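
For example, a percentage-style parameter using the 0.05 increment mentioned above could be generated the same way (a hypothetical illustration, not part of this report):

     [Discount Pct] = GENERATESERIES ( 0, 0.5, 0.05 )

This produces eleven rows: 0.00, 0.05, 0.10, and so on up to 0.50.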

The formulas below are from Rob’s original post. I made a couple of changes. Firstly, I replaced the original harvester measure.

     [Variable Moving Sum] =
     VAR CurrentDate =
          IF (
               [MA Length Value] > 0,
               FIRSTDATE ( 'Calendar'[Date] ),
               LASTDATE ( 'Calendar'[Date] )
          )
     RETURN
          CALCULATE (
               [Units Sold],
               DATESINPERIOD (
                    'Calendar'[Date],
                    CurrentDate,
                    [MA Length Value],
                    MONTH
               )
          )

     [Variable Moving Average] =
     [Variable Moving Sum]
          / CALCULATE (
               DISTINCTCOUNT ( 'Calendar'[Year Month] ),
               DATESINPERIOD (
                    'Calendar'[Date],
                    LASTDATE ( 'Calendar'[Date] ),
                    [MA Length Value],
                    MONTH
               )
          )

Secondly, I set the start-date argument for DATESINPERIOD based on whether the range runs forward or back. The original post advised this step to fix the time range for forward averaging.

Here’s the syntax for DATESINPERIOD():

     DATESINPERIOD ( <Dates>, <StartDate>, <NumberOfIntervals>, <Interval> )

How do we get the right number of months going forward?

DATESINPERIOD casts ahead or back x months, starting from the start-date parameter. Going back, there’s no problem: the period starts with the last date of the current month and therefore includes the full current month. Going forward, the measure should begin with the first date of the current month in order to include the full month. If the last date is used for forward averaging instead, the period includes only one day from the current month, and this makes the count of months one more than it should be! For example, with three months forward, the measure would divide by four instead of three. So, the formulas here correct this issue: since our variable switches between FIRSTDATE() and LASTDATE(), we get the right count of months.
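Here’s a quick walk-through with hypothetical dates to make that concrete, assuming the current cell on the chart is June 2017 and [MA Length Value] is 3 (three months forward):

     -- From LASTDATE  ( Jun 30 ): period runs Jun 30 - Sep 29 -> Jun, Jul, Aug, Sep = 4 distinct months
     -- From FIRSTDATE ( Jun 1  ): period runs Jun 1  - Aug 31 -> Jun, Jul, Aug = 3 distinct months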

After creating these measures, we can put [Variable Moving Average] on the chart and apply color to the sales and moving average lines. We also get new formatting options: markers and line style.

Step 3: Create & format your moving average chartVariable Moving Average Graph 1 DAX Reanimator Series: Moving Averages Controlled by Slicer (Part 1)

The AlternateResult of the harvester measure is -3, so the moving average displays as three months back even without a slicer selection. All great stuff!

Personally, I love these new What-If Parameters, and I especially love GENERATESERIES(). However, with some fine tuning, we can go beyond what’s possible out of the box. So stay tuned later this year for some cool stuff in Part 2!

Thanks to Reid Havens for designing the moving averages dashboard. Download it below!

Download the Power BI Desktop (.pbix) Report Here


PowerPivotPro

Expert Interview (Part 1): Philippe Nieuwbourg of Decideo on the Evolution of Data Management

Philippe Nieuwbourg (@nieuwbourg) is an independent analyst, author, and lecturer whose work focuses on information technology in the data-driven economy. In today’s part 1 of this two-part interview, Nieuwbourg discusses the founding of Decideo and the evolution of data management.

Can you tell us about your professional background? How did you become interested in Data Science?

I have been working in “data” since I created Decideo, more than 25 years ago. After studying accounting, I worked at a couple of companies, first installing accounting software in medium and large organizations, then as editor-in-chief of a professional IT magazine.

After those experiences, I founded Decideo as the first French-speaking professional community about “data.” It has been called reporting, business intelligence, data warehousing, business analytics, big data, artificial intelligence… the words change like marketing waves, but it’s always about data!

What have been the most interesting developments in the field since you started your career? What has made the biggest impact on the way we use data today?

For 60 to 70 years, information technology was only about numbers. The first computers, created during the Second World War, were for decades able to manipulate only numbers and character strings encoded as ASCII. The shift came with mobile phones, social networks, and microprocessor power.

Since then, we can collect, store, manipulate, and analyze what is called “unstructured data” coming from photos, audio, and video files. That’s a huge step. It means that what we say, hear, and view can be “understood” by computers and analyzed, a little like our brains do.

With the progress of artificial intelligence and deep learning, it’s clear that we will soon have augmented intelligence available at our fingertips. Unstructured data analysis will have a huge impact on businesses in the coming years.

What do businesses need to know about selecting the right tools to manage their data? How should they approach finding the tools that will work best for their needs?

Don’t focus on the data visualization part! It’s sexy, amazing, and entertaining software to choose, but it’s the cherry on the cake. Keep it for the end of your project. Focus first on data integration tools; they’re much more important. You could have the best-in-class data visualization tool, but if your data isn’t accurate or comprehensive, you won’t find any value in it.

During the typical day of a data scientist, only a small part of their time goes to the noble tasks: machine learning, data storytelling, graphical analysis. More than 50 percent of their time is spent manipulating datasets and fixing data quality problems. List your data sources (for today and tomorrow), and think about data quality, metadata management, GDPR, regulation, privacy, and security. Once you have fixed all of this, enjoy a little time choosing the best graphical tool. It’s the easy part of the job.

What are the most common mistakes you observe businesses making when searching for and using different data management tools? What should they be doing differently?

The most common mistake is to focus on technology. Believe me, you can do a lot without having a Hadoop cluster in your data center! I’ve met a lot of companies, like a bank I remember in Canada, that bought a Hadoop distribution without knowing why, only because its main competitor had done it first. And three years later… nothing. Why? Because it was a technology purchase made with no connection to business needs.

What type of skills and training would you like to see more businesses focus on when it comes to data management? What training should they invest in to prepare for the future?

I think that “data analysis” skills are no longer optional, especially for business people. Do you really want to rely on an IT department that just focuses on technology and doesn’t really understand your business needs? I don’t. All business people should be trained in basic data management and data analysis skills. If data is the new oil of the economy, all business people should be able to generate value from it.

I’m not trying to say that IT people aren’t doing their job. They have the most important one: focusing on infrastructure, compliance, cybersecurity… But they have to let business people take care of data analysis.

In universities and business schools, all students should be prepared to manipulate datasets.

Tomorrow, in part 2, Philippe talks about what organizations should be doing with data today and down the road, plus trends in big data, IoT, and data visualization.

Download Syncsort’s latest eBook to discover the new rules that are redesigning the relationship between business and IT.


Syncsort + Trillium Software Blog