Expert Interview (Part 2): Elise Roy on Human Centered Design and Overcoming Challenges with Big Data

In case you missed Part 1, read here!

Recently, while Elise was working with NPR, they discussed the fact that episodes of NPR programs posted online did not provide captions. While these shows generally have an associated article or a transcript of the conversation, Elise pointed out that NPR might be filtering out a significant portion of the population: people with hearing loss who can still appreciate an audio-centered show, as well as people who are completely deaf but who like the pacing that captions bring and the less cluttered visual experience they offer.

Because of their conversation, NPR has a better understanding of an entire market they might be missing out on.

Her way of problem-solving is catching on.

“A couple of years ago, when I was telling people about human centered design, they had no idea what I was talking about,” Elise says. “But now they’re starting to recognize the value it provides businesses and starting to see how they can create more targeted, responsive solutions.”

Big Data plays an important role in creating more customer-centric solutions. It allows organizations to better understand how to respond to the human experience, build more personalized and customized experiences, and identify patterns that might otherwise be difficult to see.

Currently, one of the biggest struggles with integrating the perspective of people with disabilities is that there is such a wide variety of disabilities that it can be challenging to design with each one in mind.

Elise says Big Data can help overcome those challenges.

There are already products on the market that benefit individuals with disabilities that use the power of Big Data and the Internet of Things.

For instance, there are companies developing doorbell home security solutions that alert users to motion and allow them to monitor the door remotely, an ideal solution for individuals with mobility problems. Innovations like this, along with others such as the Roomba or self-driving cars, not only make it easier for people with disabilities to live independently but are also products that the general population enjoys.

In order to continue to bring innovations like these to market, it will be essential that Big Data be paired with human centered design methods.

“This is because big data can easily be influenced by bias,” Elise says. “For example, we could only collect certain kinds of data and be missing out on a key thing that would get uncovered through the human centered design process during the observation phase.”

Recently, Microsoft hired several experts in bias reduction in Artificial Intelligence when they recognized their AI applications were biased in the sense that they were designed around the beliefs of those who were designing them rather than the people who were going to experience their applications.

Moving forward, Elise believes there needs to be symbiosis between Big Data and the human aspect of design.

Elise’s consulting business is still in its infancy, but she’s excited about the potential impact that looking at innovation through the lens of disability offers businesses.

“There’s a lot of people who have gotten back to me and said it’s really impacted how they’re thinking about things,” Elise says.

We also have a new eBook focused on Strategies for Improving Big Data Quality available for download. Take a look!

Expert Interview (Part 1): Human Centered Design and Elise Roy on Transforming Disability into Innovation

Elise Roy says that losing her hearing when she was 10 years old has been one of the greatest gifts she’s ever received.

Early on, she viewed her loss as something she had to deal with and overcome. That perspective has shifted though.

“My disability has become an asset,” Elise says. “Rather than something I have to deal with, it’s a tool.”

A tool Elise has leveraged in just about every job she’s taken on – as one of the country’s few deaf lawyers, as an artist and designer, and as a human rights activist.

Most recently, she’s started working as a consultant, using her unique perspective to help organizations take a different approach to their design practices. Her goal is to show the groups she works with that incorporating a deeper understanding of how the disabled navigate the world will lead to extraordinary innovation and results.

“I believe that these unique experiences that people with disabilities have is what’s going to help us make and design a better world … both for people with and without disabilities,” she shared in her TED talk.

She consults through the lens of Human Centered Design, trying to develop the best product by defining problems and understanding constraints, observing people in real-world situations, asking questions, and then using prototyping to test ideas quickly and cheaply, all while keeping the end users – the customers – in focus.

Elise learned first-hand how effective this method of problem-solving is back when she was taking a fabrication class in art school. The tools she was using for woodworking would sometimes kick back at her. Generally, they would emit a sound before doing this, but because of her hearing loss, Elise wasn’t able to hear it. In response, she developed a pair of safety goggles that give a visual warning when the pitch of the machine changes. The product can help protect both those who are hearing impaired and those with no hearing loss.

She points to other widely used inventions that were initially created for people with a disability, too. Email and text messaging, for instance, were designed for deaf users.

The OXO potato peeler was designed to help individuals with arthritis but was adopted by the general population because of how comfortable it is to use. And tech companies developing apps and websites are looking to people with dyslexia and intellectual disabilities for inspiration on simplifying design and offering an easier-to-use interface for everyone.

Check back for Part 2, where Elise goes more in depth on what she is doing with Human Centered Design.

Also, we have a new eBook focused on Strategies for Improving Big Data Quality available for download.

Power BI Introduction: Working with Power BI Desktop — Part 2

The series so far:

  1. Power BI Introduction: Tour of Power BI — Part 1
  2. Power BI Introduction: Working with Power BI Desktop — Part 2

Microsoft’s Power BI is a comprehensive suite of integrated business intelligence (BI) tools for retrieving, consolidating, and presenting data through a variety of rich visualizations. One of the suite’s most important tools is Power BI Desktop, a free, on-premises application for building comprehensive reports that can be published to the Power BI service or saved to Power BI Report Server, where they can be shared with users through their browsers, mobile apps, or custom applications.

This article is the second in a series about Power BI. The first article covered the basic components that make up the Power BI suite. This article focuses specifically on Power BI Desktop because of the important role it plays in creating reports that can then be used by the other components.

With Power BI Desktop, you can access a wide range of data sources, consolidate data from multiple sources, transform and enhance the data, and build reports that utilize the data. Although you can access data sources and create reports directly in the Power BI service, Power BI Desktop provides a far more robust environment for working with data and creating visualizations. You can then easily publish your reports to the Power BI service by providing the necessary credentials.

The features in Power BI Desktop are organized into three views that you access from the navigation pane at the left side of the main window:

  • Report view: A canvas for building and viewing reports based on the datasets defined in Data view.

  • Data view: Defined datasets based on data retrieved from one or more data sources. Data view offers limited transformation features, with many more capabilities available through Query Editor, which opens in a separate window.

  • Relationships view: Identified relationships between the datasets defined in Data view. When possible, Power BI Desktop identifies the relationships automatically, but you can also define them manually.

To access any of the three views, click the applicable button in the left navigation pane. The following figure shows Power BI Desktop with Report view selected, displaying a one-page report with three visualizations.

When working in Power BI Desktop, you will normally perform the following four basic steps, although not necessarily in a single session:

  1. Connect to one or more data sources and retrieve the necessary data.

  2. Transform and enhance the data using Data view, Relationships view, or the Query Editor, as necessary.

  3. Create reports based on the transformed data, using Report view.

  4. Publish the reports to the Power BI service or upload them to Power BI Report Server.

This article will focus primarily on steps 1 and 2 so you can get a better sense of how to retrieve and prepare the data, which is essential to building comprehensive reports. Later in the series, you’ll learn more about building reports with different types of visualizations, but first you need to make sure you get the data right.

Connecting to Data in Power BI Desktop

Power BI Desktop supports connectivity to a wide range of data sources, which are divided into the following categories:

  • All: Every data source type available through Power BI.

  • File: Source files such as Excel, CSV, XML, or JSON.

  • Database: Database systems such as SQL Server, Oracle Database, IBM DB2, MySQL, SAP HANA, and Amazon Redshift.

  • Azure: Azure services such as SQL Database, SQL Data Warehouse, Blob Storage, and Data Lake Store.

  • Online Services: Non-Azure services such as Google Analytics, Salesforce Reports, Facebook, Microsoft Exchange Online, and the Power BI service.

  • Other: Miscellaneous data source types such as Microsoft Exchange, Active Directory, Hadoop File System, ODBC, OLE DB, and OData Feed.

To retrieve data from one of these data sources, click the Get Data button on the Home ribbon in the main Power BI window. This launches the Get Data dialog box, shown in the following figure.

To access a data source, navigate to the applicable category, select the source type, and click Connect. You will then be prompted to provide additional information, depending on the data source. This might include connectivity details, an instance or file name, or other types of information. After you provide the necessary details, Power BI Desktop will launch a preview window that will display a sample of the data along with additional options.

This article uses a CSV file based on data from the Hawks dataset, which is available through a GitHub collection of sample datasets. The Hawks dataset contains data collected over several years from a hawk blind at Lake MacBride near Iowa City, Iowa. Name the file hawks.csv and save it to a local folder (C:\DataFiles).

To import the data from the file into Power BI Desktop, select the Text/CSV data source type in the Get Data dialog box. Click Connect and the Open dialog box will appear. From there, navigate to the C:\DataFiles folder, select the hawks.csv file, and click Open. This launches the preview window shown in the following figure.

In addition to being able to preview a subset of the data, you can configure several options: File Origin (the document’s encoding), Delimiter, and Data Type Detection. In most cases, you’ll be concerned primarily with the Data Type Detection option. By default, this is set to Based on first 200 rows, which means that Power BI Desktop will convert the data to Power BI data types based only on the first 200 rows of data.

This might be okay in some cases, but there could be times when a column includes a value different from the majority of values, but that value is not in the first 200 rows. As a result, you could end up with a suboptimal data type or even an error. To avoid this risk, you can use the Data Type Detection option to instruct Power BI Desktop to base the data type selection on the entire dataset, or you can instead forego any conversion and keep all the data as string values.

You can also edit the dataset before loading it into Power BI Desktop. When you click the Edit button, Power BI Desktop launches Query Editor, which provides a number of tools for transforming data. After you make the changes, you can then save the updated dataset.

In this case, load the data into Power BI Desktop without changing any settings or editing the dataset. You will be transforming the data later as part of working through this article. To import the data as is, you need only click the Load button. You can then view the imported dataset in Data view, as shown in the following figure.

After you import a dataset into Power BI Desktop, you can use the data in your reports as is, or you can edit the data, applying various transformations and in other ways shaping the data. Although you can make some modifications in Data view, such as being able to add or remove columns or sort data, for the majority of transformations, you will need to use Query Editor, which includes a wide range of features for shaping data.

Introducing Query Editor

To launch Query Editor, click the Edit Queries button on the Home ribbon. Query Editor opens as a separate window from the main Power BI Desktop window and displays the contents of the dataset selected in the left pane, as shown in the following figure. In this case, the pane includes only the hawks dataset, which is the one you just imported.

The dataset itself is displayed in the main pane of the Query Editor window. You can scroll up and down or right and left to see all the data. The pane to the right, which is separated into two sections, provides additional information about the dataset. The Properties section displays the dataset’s properties. In this case, the section includes only the Name property. To view all properties associated with a dataset, click the All Properties link within that section.

The Applied Steps section lists the transformations that have been applied to the dataset in the order they were applied. Each step in this section is associated with a script statement written in the Power Query M formula language. The statement is shown in the formula bar directly above the dataset. (If the formula bar is not displayed, select the Formula Bar checkbox on the View ribbon to display it.)

Currently, the Applied Steps section for the hawks dataset includes only two steps: Source and Changed Type. Both steps are generated by default when importing a CSV file. The Source step corresponds to the following M statement (provided here in case you cannot read it in the above figure):
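
A representative sketch of that statement is shown below; the exact column count, encoding, and quote style depend on the source file, so treat those values as placeholders:

    Source = Csv.Document(
        File.Contents("C:\DataFiles\hawks.csv"),
        [Delimiter = ",", Columns = 20, Encoding = 1252, QuoteStyle = QuoteStyle.None]
    )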

The M statement consists primarily of the connection string necessary to import the hawks.csv file. Notice that it includes the delimiter and encoding specified in the preview window, along with other details.

The second step, Changed Type, converts the data in each column to Power BI data types. The step is associated with the following M statement, which shows that the first column is assigned the Int64 type and that all other columns are assigned the text type:
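
A minimal sketch of that statement, using the auto-generated column names that exist before the header row is promoted (only the first few columns are shown here; the generated statement lists every column in the file):

    #"Changed Type" = Table.TransformColumnTypes(Source, {
        {"Column1", Int64.Type},
        {"Column2", type text},
        {"Column3", type text},
        {"Column4", type text}
    })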

If you had opted to forego data type conversions when importing the hawks.csv file, all columns would be assigned the text data type, but you instead stuck with the Based on first 200 rows option, which resulted in one column being converted to the Int64 type. The following figure shows the dataset with the Changed Type step selected and its associated M statement in the formula bar.

This article won’t spend much time on the M language. Just know that the language is the driving force behind the Power BI Desktop transformations in Query Editor. Articles later in this series will dig into the language in more detail so you can work with it directly.

Every transformation you apply to a dataset is listed in the Applied Steps section, and each step is associated with an M statement that builds on the preceding step. You can use the steps and their associated M statements to work with the dataset in various ways. For example, if you select a step in the Applied Steps section, Query Editor will display the dataset as it existed when the step was applied. You can also modify steps, delete steps, inject steps in between others, or move steps up or down. In fact, the Applied Steps section is one of the most powerful tools you have for working with Power BI Desktop datasets. Be aware, however, that it’s very easy to introduce an error into the steps that turns all your work into a big mess.

With that in mind, you’ll next provide column names for the dataset. You might have noticed from the screenshots—or when you imported the data for yourself—that the first row of data contains the original column names. This is why all columns but the first were assigned the text data type, even if most values were numbers. The first column was assigned the Int64 data type because it was the index for the original dataset and did not require a column name.

To convert the first row into column names, click the Use First Row as Headers button on the Home ribbon. Power BI Desktop will promote the row and add a Promoted Headers step to the Applied Steps section, as well as a Changed Type step, which converts the data types of any fields that might be impacted by removing the column names as data values.
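
The Promoted Headers step typically corresponds to an M statement along these lines (a sketch, assuming the default options):

    #"Promoted Headers" = Table.PromoteHeaders(#"Changed Type", [PromoteAllScalars = true])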

The following M statement is associated with the Changed Type step added to the hawks dataset:
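
A representative sketch of that statement, using the promoted column names (only a subset of the columns is shown, and the exact type assignments may differ from what you see):

    #"Changed Type1" = Table.TransformColumnTypes(#"Promoted Headers", {
        {"Column1", Int64.Type},
        {"Month", Int64.Type},
        {"Day", Int64.Type},
        {"Year", Int64.Type},
        {"Wing", type number},
        {"Weight", type text},
        {"Species", type text},
        {"Age", type text}
    })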

Notice that, in addition to the first column, several other columns have been converted to the Int64 data type. However, the Wing column has been converted to the number type because it contains at least one decimal value. Other columns retained the text data type because they contained only character values or contained a mix of types.

It should be noted that the Changed Type step just added to the hawks dataset is actually named Changed Type1. If a step is added that is of the same type as a previous step, Query Editor tags a numerical value onto the new step to differentiate its name from the earlier step.

To complete the process of promoting the first row to headers, you might need to rename one or more columns. For example, because the hawks dataset did not include a name for the first column, change it from the auto-generated Column1 name to TagID by right-clicking the Column1 heading and selecting Rename, as shown in the following figure.
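
The rename corresponds to a Renamed Columns step that looks something like this sketch:

    #"Renamed Columns" = Table.RenameColumns(#"Changed Type1", {{"Column1", "TagID"}})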

As you can see, the Applied Steps section now includes the Renamed Columns step. Also notice that each column name is preceded by a symbol that indicates the column’s data type. If you click the symbol, a context menu will appear and display a list of symbols and their meanings. You can also change the data type from this menu.

Note, however, that the names of the data types listed here are not always the same as the names used in the M statements. For example, the Int64 data type that appears in the M statements is listed as the Whole Number data type in the context menu.

Addressing Errors

When working in Query Editor, you might run into errors when converting data, modifying applied steps, or taking other actions. Often you won’t realize that there is an error until you try to apply any changes you’ve made. (You should be applying and saving your changes regularly when transforming data.)

To apply your changes in Query Editor, click the Close & Apply down arrow on the Home ribbon, and then click Apply. Any changes that you made since the last time you applied changes will be incorporated into the dataset, unless there is an error. For example, when you apply the changes to the hawks dataset after promoting the headers, you will receive the message shown in the following figure.

When you click the View errors link, Query Editor will isolate the row that contains the error, as shown in the following figure. The TagID value for this row is 263. After viewing the error row, you should see that the Wing column contains an NA value, although the column is configured with the number data type, which is why you received the error.

Click hawks on the left to exit out of the Errors screen and get back to the Query Editor.

To address an error in Query Editor, you can replace or remove the error or change the column’s data type:

  • To change the column’s type, click the type icon next to the column name, and then click Text.

  • To replace the error, right-click the column header, click Replace Errors, type a replacement value, and then click OK.

  • To filter out the row with the error, click the down arrow next to the TagID header, search for 263, and then clear the checkbox associated with this ID.

  • To remove all rows that contain errors, click the table icon in the top left corner of the dataset, and then click Remove Errors. This is the approach I took for the hawks dataset. As a result, the Removed Errors step was added to the Applied Steps section, as sketched below.
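
If you take the Remove Errors approach, the generated step is along the lines of the following sketch (the name of the preceding step may differ):

    #"Removed Errors" = Table.RemoveRowsWithErrors(#"Renamed Columns")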

NOTE: One thing interesting about the error is that it occurs as a result of promoting the first row to column headers and the subsequent type conversion that came with it. From what I can tell, Query Editor must use only a sample of the data when selecting the data type during this step. It does not appear to have anything to do with the Data Type Detection setting in the preview window. I tried all possible settings and the results were always the same.

Removing Columns

In some cases, the data you import includes columns that you don’t need for your reports. Query Editor makes removing these columns a simple and quick process:

  • To remove a single column, select the column directly in the displayed dataset, and then click the Remove Columns button on the Home ribbon, or you can instead right-click the column header and then click Remove.

  • To remove multiple columns, select the first column, press and hold the Control key, select each additional column, and then click the Remove Columns button on the Home ribbon.

For the hawks dataset, remove the following columns because they are not relevant to the reports you might create or because they contain numerous NA values:

  • CaptureTime

  • ReleaseTime

  • Sex

  • Culmen

  • Hallux

  • Tail

  • StandardTail

  • Tarsus

  • WingPitFat

  • KeelFat

  • Crop

After removing the columns, Query Editor adds a single Removed Columns step to the Applied Steps section, as shown in the following figure.
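
In M terms, that step is a single Table.RemoveColumns call listing all of the removed columns, something like this sketch:

    #"Removed Columns" = Table.RemoveColumns(#"Removed Errors", {
        "CaptureTime", "ReleaseTime", "Sex", "Culmen", "Hallux", "Tail",
        "StandardTail", "Tarsus", "WingPitFat", "KeelFat", "Crop"
    })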

One of the advantages of having the Removed Columns step listed along with the other steps is that you can easily remove or modify the step if you later decide that one or more of the columns should not have been removed. In this case, you’re relatively safe modifying this step because any steps you might have added after deleting the columns cannot reference those columns; as far as later steps are concerned, they no longer exist. Close the Query Editor to go back to Data view.

Adding a Calculated Column

With Power BI Desktop, you can add calculated columns to a dataset that concatenate data or perform calculations. Power BI Desktop provides two methods for adding a calculated column. The first is to create the column in Data view, using the Data Analysis Expressions (DAX) language to define the column’s logic.

To create a DAX-based column, click the New Column button on the Home ribbon in Data view, and then type the DAX expression in the formula bar at the top of the dataset. The new column should be selected when adding the expression. After you type the expression, click the checkmark to the left of the expression to verify the syntax and populate the new column.

For example, add a column named Date to the hawks dataset to provide a single value that shows the date when a hawk was tagged, using the following DAX expression:
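
A minimal version of the expression, assuming the table is named hawks, looks like this:

    Date = DATE(hawks[Year], hawks[Month], hawks[Day])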

The expression uses the DATE function to create a date value based on the Year, Month, and Day columns. After you add the Date column, you can set the displayed format. Go to the Modeling ribbon in Data view, click the Format drop-down list, point to Date Time, and click 3/14/2001 M/d/yyyy. The following figure shows the new column after I added it to the dataset, with the applied format.

Using DAX to create a column in Data view is quick and easy. However, this approach has some limitations. For example, the column is not available in Query Editor, so it cannot be used as part of another calculated column definition in Query Editor. In addition, if you delete a column referenced by a DAX expression, the values in the DAX-based column will show only errors.

The safest way to get around these limitations is to create your column in Query Editor. Delete the Date column you just added in Data view. Click Edit Queries to reopen the Query Editor. Click the Custom Column button on the Add Column tab. When the Custom Column dialog box appears, type a name for the new column and the column’s formula, which is an M expression that defines the column’s logic. (Query Editor automatically adds the rest of the M syntax to create a complete statement.) For example, to create a column like the DAX-based column above, use the following M expression:
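
The formula you type into the dialog box is just the expression itself, something like:

    #date([Year], [Month], [Day])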

The statement uses the #date method to create a date value based on the Year, Month, and Day columns, similar to the DAX expression. The following figure shows what the M expression looks like when entered into the Custom Column dialog box.

When entering your M expression in the Custom Column dialog box, be sure that the green arrow is showing at the bottom of the dialog box to ensure that there are no syntax errors. Be aware, however, that you can have errors in your M statement without them showing up as syntax errors.

When you create a calculated column, Query Editor adds the Added Custom step to the Applied Steps section. You can then reference the new column in other calculated columns. You can also remove any of the columns referenced by the M expression. In this case, remove the Month and Day columns but keep the Year column in case you want it available for reports.

Splitting a Column

In Query Editor, you can split a column based on a specified value (delimiter) or by a specified number of characters. For example, the hawks dataset includes the BandNumber column, which is made up of two parts, separated by a hyphen. One of those parts might have special meaning, such as indicating the individual who tagged the hawk or the process used to tag the hawk. Splitting the column could make it easier to group the data by a particular entity.

To split a column by a delimiter, right-click the column’s header, point to Split Column, and then click By Delimiter. In the Split Column by Delimiter dialog box, specify the delimiter and then click OK. Query Editor will split the column into two columns (removing the delimiter) and update the data types if necessary. You can then rename the columns if you do not want to use the autogenerated names.

The following figure shows the hawks dataset after splitting the BandNumber column and changing the names of the two new columns to BandPrefix and BandSuffix.

Notice that three steps have been added to the Applied Steps section: Split Column by Delimiter, Changed Type2, and Renamed Columns1.
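
In M terms, those three steps look something like the following sketch (the exact step and column names may vary):

    #"Split Column by Delimiter" = Table.SplitColumn(#"Removed Columns", "BandNumber",
        Splitter.SplitTextByDelimiter("-", QuoteStyle.Csv), {"BandNumber.1", "BandNumber.2"})
    #"Changed Type2" = Table.TransformColumnTypes(#"Split Column by Delimiter",
        {{"BandNumber.1", type number}, {"BandNumber.2", Int64.Type}})
    #"Renamed Columns1" = Table.RenameColumns(#"Changed Type2",
        {{"BandNumber.1", "BandPrefix"}, {"BandNumber.2", "BandSuffix"}})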

Performing Additional Data Transformations

When you’re preparing a dataset for reporting, you should evaluate the data types that have been assigned to each column to ensure they’re what you need. Usually you’ll want to hold off doing this until after you’ve added, removed, or split columns so you’re not introducing unnecessary transformation steps. Evaluating the data types can also point to possible anomalies in the data.

For example, the BandPrefix column was assigned the number data type, when you would expect it to be the Int64 data type. The same goes for the Wing column. In addition, the Weight column was assigned the text data type, and the Date column was assigned the Any data type. For this article, change the data types for the BandPrefix, Wing, and Weight columns to the Int64 data type (whole number), and change the Date column to the date data type.

NOTE: I could not find a clear answer for why Query Editor assigned the number type to the BandPrefix column and the Int64 type to the BandSuffix column. The only numeric values I found in the BandPrefix column were whole numbers. However, when researching this issue, I discovered that both columns contained null values, so I removed those rows. To filter out rows with null values, click the down arrow in the applicable column header, and clear the checkbox associated with the null value.
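
The resulting Filtered Rows step is something like the following sketch (the name of the preceding step will vary):

    #"Filtered Rows" = Table.SelectRows(#"Renamed Columns1", each [BandPrefix] <> null and [BandSuffix] <> null)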

The reason the Wing column was assigned the number data type was because it contained a decimal value. When changing the data type to Int64, Query Editor automatically rounded the value to the nearest whole number, which worked fine for the purposes of this article.

The process was not quite as simple for the Weight column. It was assigned the text data type because the column contained NA values, which are treated as errors when changing the column to the Int64 data type. Remove these rows by using the Remove Errors option.

The next step is to move the Date column left within the dataset so it appears before the Year column. To move a column in Query Editor, simply drag the column header to the new position. Query Editor will add the Reordered Columns step to the Applied Steps section, as shown in the following figure. The figure also shows the other steps that were added when changing data types, filtering rows, and removing errors.

The final step is to replace the values in the Species and Age columns to make them more readable. To replace a value, right-click the column header, and click Replace Values. In the Replace Values dialog box, type the original value and new value in the applicable text boxes. Next, click the Advanced options arrow, and select the Match entire cell contents checkbox. This is important to ensure that you don’t replace part of another value.

The first value to replace in the hawks dataset is the RT value in the Species column. For the new value, use Red-Tailed, as shown in the following figure.
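
The generated Replaced Value step looks something like this sketch; Replacer.ReplaceValue is used because Match entire cell contents is selected:

    #"Replaced Value" = Table.ReplaceValue(#"Reordered Columns", "RT", "Red-Tailed", Replacer.ReplaceValue, {"Species"})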

Also replace the following values:

  • The CH value in the Species column with Cooper’s.

  • The SS value in the Species column with Sharp-Shinned.

  • The A value in the Age column with Adult.

  • The I value in the Age column with Immature.

The following figure shows the updated values in the Species and Age columns, as the dataset appears in the Data view of the main Power BI Desktop window.

Notice that, in Data view, the column order is not impacted by the reordering that was done in Query Editor. The newer columns—Date, BandPrefix, and BandSuffix—are included at the right side of the dataset, after the original columns. Also notice that in this case the Date column is configured in the format mm/dd/yyyy, but you might see the date values spelled out or displayed in another format. You can choose any format from the Format drop-down list on the Modeling ribbon.

Generating Reports

Once you get your datasets into the format you want, you can begin to create reports and their visualizations. Just to show you a sample of what can be done, I created one report with three visualizations based on the hawks dataset, as shown in the following figure.

The figure contains a one-page report as it appears in Report view in Power BI Desktop. The report includes a table, bar chart, and donut chart, all based on the hawks dataset. Although this article doesn’t focus on reports, be aware that Report view includes a variety of options for creating and configuring different types of visualizations. Later in the series, you’ll learn how to create reports and visualizations and publish them to the Power BI service.

Working with Power BI Desktop

There’s much more you can do with Power BI Desktop and Query Editor than what was covered here. In addition to creating various types of visualizations, you can merge and append queries, group and aggregate data, pivot columns, define relationships, run R scripts, and carry out a number of other tasks. The best way to become familiar with these features is to experiment with them against different types of data. The better you understand how to work with Power BI Desktop, the more effectively you can use the rest of the Power BI tools, including the Power BI service, Power BI Report Server and the Power BI mobile apps.

Adobe: In the weeds, in the zone (part two)

OK, this section, part two of my coverage of the Adobe Digital Marketing Summit, has been weighing on my mind for weeks. I’ve been procrastinating in part because Adobe is such a damned complex company. And now that they bought Magento for e-commerce, even though that wasn’t discussed at the Digital Marketing Summit, I have to discuss it here, because it has a potential bearing on one of the things I’m going to bring up, making this even more complex.

Read also: Adobe: Experiencing the experiential (part one)

First, to recap:

In Adobe part one from a couple of weeks ago, I ran you through two different “tracks”: First, how did Adobe do when it came to their actual event? Then, how do you distinguish the kinds of customer experience out there, and which one is Adobe actually focused on? That was important to set the context of Adobe’s current and next phase — where they are, where they are going, and what they need to do to get there.

I hope, after you read part one (and this part), that you’re clear on why I wouldn’t typically support a technology company’s messaging around customer experience. Time and time again, I’ve railed against the whole idea of customer experience technology under the umbrella trope that “you can’t enable how someone feels” via technology, a position which, not so incidentally, I still maintain.

However, that said, I am going to actually do something that I literally told Adobe was a bad idea about 10 years ago and support their messaging around the Adobe Experience Cloud (the experience part, not the cloud part so much. Explanation to follow).

But let’s dig in based on what I saw and learned from both the Digital Marketing Summit 2018 and what has been supported/supplemented by the acquisition of Magento, which I am feeling a bit mixed — though mostly positive — about right now.

Open Ecosystems and Platforms: Wave of the… Present

To explain why the Digital Marketing Summit was of pivotal importance in Adobe’s evolution in the market, I have to reiterate something that I think was the most important pronouncement that Adobe made — and they made it in 2016, too. Adobe CTO, Abhay Parasnis, spoke with analysts about a major turn that Adobe would be taking that involves knitting the Document Cloud, the Digital Marketing Cloud (now the Adobe Experience Cloud), and the Creative Cloud into a single platform, and at the same time, building what he called an “open ecosystem.” That meant systematically defining what Adobe’s customers and (I hope, though he didn’t say this) their potential customers needed, identifying Adobe’s core capabilities to-be as a platform from end to end, what they were willing to build, what they needed to partner on, and, interestingly enough, if there were things that their competitors did better than they did, and thus, possible partnerships with them as a result — all in the interests of the customer-driven “open” ecosystem.

It was a bold move and, honestly, unexpected, since Adobe never particularly seemed to be a candidate for thinking about ecosystems. They seemed content to be a leader in the B2C markets. Comfortable in their domain — and, when it came to it — willing to extend their capabilities within its own confines, but not much beyond that. But announce it they did in 2016 to the analysts, and by 2017, it was a much more well-known strategic imperative at the company — both internally and publicly. It was one of the reasons that Suresh Vittal, an industry star, was elevated to run all of the Experience Cloud’s products. The platform and the ecosystem were front and center at Adobe by 2017, and he was one of the people there best qualified to handle that effort — and the thinking that goes with it.

The implications of this kind of thinking are considerable. First, it takes you to markets that you have never been. Second, it identifies gaps that you didn’t realize you had. Third, it forces you to rethink everything in terms of overall business transformation from the perspective of the domain you are in, without losing the broader focus of the total needs of a customer, including those you can’t possibly meet, which, of course, means that you need to consider fulfilling those needs via partners, which means a partner program that gets beyond just Value Added Resellers (VARs) and marketplaces/exchanges. Partner ecosystems are strategic because they are focused around partners who are part of the core company offering. That means a strategic relationship — a true partnership — that involves co-creation, mutual investment, and go-to market strategies. That means that the sales teams of each company are trained to sell the partner’s particular offering and the sales people of each company are compensated for that.

If you look at Adobe’s behavior the last couple of years since the announcement, one thing is clear — they are sticking to plan and sticking the landing as they proceed to execute.

I’m going to detail that with proofs of concept. But, to set the stage, I need to show you how Adobe is (successfully) repurposing their company and gaining market ground (I don’t mean that as a substitute for market share).

Livin’ the Dream…

If you haven’t guessed it by now, enterprise-focused companies, especially tech companies, when it comes to how to be a competitive success, need to be thinking about platforms and ecosystems. Customers are no longer just demanding products and services, but they are looking to companies to provide them with consumable experiences and they are looking to feel valued as a customer. That means that not only is it important to provide the fundamentals — products and services, but also the kind of personalized knowledge of the customer that makes them feel as if they are important to the company. In order to do that, you have to provide as a company the range of products, services, experiences that the customer expects from a company like yours. Plus, the customer has to be able to perceive your company as a place that he or she cares to return to for whatever it is you provide for them when they are in the market. That means the environment, the vibe, the context, the feeling about the company has to be, to keep it super simple, warm and comfy enough for the customer to feel good about the return visit to you. When they are involved with you, they need to feel as if, and this may be the most important point of all, the interactions they are having are convenient. And I do mean “convenient,” which is the most important “experience” that any company can actually offer a customer.

To provide the appropriate things that the customer is looking for from end to end, the company needs to do what I said above: figure out who is going to provide each piece of what the core company is offering to its customers. Will it be (in this case) Adobe or one of its strategic partners? Will it be Adobe and a strategic or even point-solution partner? Regardless of who the provider is, customers are expecting a range of choices that pretty much need to be available, or those customers will begin to look elsewhere for what they need.

Adobe in 2016 claimed that they were working toward this idea — and given their unique kind of B2C audience — they also wanted to address the needs of a different B2B audience at some point too. But intent and outcomes are two very different things. Once Adobe announced its intent, the commitment to this significant change in model, structure, and outlook was verified by the joint announcement of their partnership with Microsoft. And the intent of the company went from speculative and important to game changer.

Ecosystems: Adobe and Microsoft — and Magento?

In 2017, Adobe and Microsoft announced their relationship. Rumor had it, it was driven by the friendship of Microsoft CEO Satya Nadella and Adobe CEO Shantanu Narayen, who were college mates and have been friends ever since. Whatever the reason, I liked what I saw and how it was presented.

Microsoft Dynamics got something it desperately needed in two areas: an enterprise-grade, highly competitive marketing automation solution that was evolving toward an experiential marketing solution, which played well with (but wasn’t identical to) Microsoft’s messaging around intelligent customer engagement; and an entrée into the B2C market, which was by no means Dynamics’ forte but was still somewhere they needed to go. Adobe was getting the muscle of Microsoft. More importantly, they got a significant addition to the customer-facing operational portion of their ecosystem — sales, service, and the engine behind it — plus an enterprise-grade cloud infrastructure. And some traction in the B2B market. Little did I know that this burgeoning strategic friendship would quickly blossom into something so deep that I now call it the “Get a Room” partnership. I can tell you, after conversations with Dave Welch, the vice president and Solution Leader, Microsoft Solutions, at Adobe, and also due to my own investigations, that Adobe and Microsoft are deeply committed to the partnership, and it is arguably one of the best strategic partnerships I have ever seen. I am not saying that lightly.

Briefly, I was able to see the willingness of each party to actually work with the other to sell into the market. That means committing dollars to co-marketing, training the sales teams to sell each other’s products as part of an overall solution set, and compensating accordingly. It also means several real deals won under their belt so that it’s a partnership both in name and deed.

But there was one other piece that was powerful enough to be viscerally striking — the depth of and collaboration around the technical integration between the Adobe Experience Cloud, and not only Microsoft Dynamics products but Azure as the infrastructure wrapper. The teams at both companies involved in this were working night and day to get the integration of the two solutions and Azure so compact and tight that it was metaphorically down to the object level — and, if I knew enough about the technical side to ask, possibly actually down to that level. This is not an easy task because Microsoft Dynamics was making strong technical improvements on a regular basis (e.g. a common data model), and for Adobe Digital Marketing to work, those improvements had to be taken into account as they occurred.

They were and are to this day.

An interesting new wrinkle was thrown into this particular mix about two weeks ago, when Adobe acquired the SMB e-commerce platform Magento. The reasoning that I think makes the most sense for this is actually posited in the TechCrunch article by the awesome journalist Ron Miller. Between Ron Miller and analyst/influencer Brent Leary (you heard from him here last week), the condensed version of the argument goes like this (all completely paraphrased):

Magento was acquired because it is an e-commerce platform that plays in both B2B and B2C and was a missing piece for Adobe and their experience cloud. They now have an opportunity to close the loop so that what began with Adobe digital marketing can end with the transaction via Magento e-commerce.

E-commerce technology plays the role of being the core transactional piece of a larger customer engagement technology matrix, the same way CRM is the core operational piece. So, there is a larger strategic value, too.

It does raise an interesting question for the Microsoft-Adobe partnership. Microsoft, even more so than Adobe, in order to compete with Salesforce, SAP, and Oracle, had a major deficiency — and that was e-commerce. Even though their e-commerce needs might be better served via partnership with companies in their existing partner ecosystem like Episerver, the intriguing thing is that Magento could potentially work. However, Adobe is going to have to scale Magento to make this a truly worthwhile acquisition, because Magento plays a lot in the upper end of small business and the lower end of the midmarket, not at the enterprise, and Adobe is an enterprise company. But I will leave you, and me, and Adobe and Microsoft with this question: Is this going to also help Microsoft? I don’t know the answer, only the question.

But all in all, this was big evidence of Adobe following through on building their ecosystem. What about the platform? They did promise both, you know.

Platforms: Adobe Banks on the Consumable

I had a conversation with Adobe about seven years ago (if my memory serves me) where I told Adobe that positioning themselves around being a customer experience technology wasn’t the smartest move. That’s the polite version of the story. Needless to say, they didn’t listen to me. You would think I’d still feel that way, since I still firmly believe, when it comes to the greater customer experience outlined in part one of this post a few weeks ago, that you can’t enable how a customer feels. But that goes to the definition of the broader customer experience (i.e. how a customer feels about a company over time). In other words, technology cannot support the manipulation of the customer’s feelings about a company. What drives that is much more complex and has far more to do with the way that the company treats the individual customer than it does what technology it uses.

However, the reason I’m glad that Adobe didn’t listen to me is because they did figure out — recently — what you can do with technology in service of customer experience(s). You can create them, distribute them, and let them be consumed. These are the modular experiences that Joe Pine posited all the way back in 1999, when he showed how mass customization had evolved and what that meant in his seminal work, The Experience Economy. Plus, they fully understand, because of the way they built their technology, that there is a symbiosis between customer experience and customer engagement. That means, simply put, that the more effectively you engage customers in an ongoing way, the better they feel about your company, which then impacts the interactions they have with you and the behaviors they manifest when it comes to your company. So, if you provide them with the consumable kind of experiences that, at the end of the day, make them happy in some way and make them want to engage further, then, over time, they become increasingly positive toward you. That said, if they are not that positive, a bad interaction or a poor immediate experience can lead to customer churn because the overarching experience is already negative.

I’m not going to dwell on this. The point is that Adobe is cognizant of the difference, which makes their approach to customer experience(s) on point.

What this led to originally, when it came to Adobe product strategy, was transforming Adobe Experience Manager (AEM) from what it was — a very good digital asset manager and not much else — into a platform and toolset to both create experiences and manage the assets. The name became a true product description rather than just a name. The asset management became features identified in AEM as AEM Assets, meaning the place to identify, share, and/or use the Creative Cloud assets (e.g. photos, videos, other graphic pieces) for the creation of the content (i.e. the experiences). In fact, with the new release of AEM Assets 6.4 announced at the Digital Marketing Summit, they were able to include 3D models, VR, and panoramic properties. But what made this DAM different than in the past was that, in the past, while it was cool to see, the tools weren’t really there to assemble the consumable experiences except in a painstakingly tedious way. For example, watching an earlier incarnation of AEM at, I think, the first Adobe Digital Marketing Summit I ever attended in 2014, they showed us a split screen of a site at their HQ in Utah and a Times Square jumbo video screen. They then showed someone removing the existing video playing in Times Square and swapping a new one into the same location within AEM, and the video changed in Times Square. While it was a genuine “oooh ahhhh” moment, it was still just swapping out one video for another — meaning it was managing a pair of digital assets — albeit with Adobe’s level of flair. But it was DAM, nothing else.

But what I saw at the 2018 Digital Marketing Summit was not only “oooh ahhh” but “oh yeah,” when I realized that Adobe, within the past year, had built what had been the missing piece: platform proof of life. They showed the approximately 14,000 attendees how to build a specific experiential “campaign” using Creative Cloud, Experience Cloud, and ultimately Document Cloud, with the glue being Adobe Sensei. Here’s a pic that shows you what they were doing.

Adobe demos the platform — the intersection of Campaign, Experience Manager, Sensei. (Source: Adobe)

Here is a link to the video demoing this amazing solution based on the new platform. I highly recommend you watch it.

There are two things of importance related to Adobe Sensei.

First, the strategic implications. Sensei is being treated as both a layer in the platform and — via what will seemingly become a ubiquitous blue button in Sensei-activated applications — something that is easy to use, though I’m not sure how easy it will be to deploy. Setup can’t be as simple as it’s made out to be. But its strategic importance is the first consideration. This is the first evidence I’ve seen that Adobe is realizing their promise that they will be “merging” the clouds into a single platform — just as they said they would. What makes this even more exciting is that it is being realized via actual output — and that the output is based on outcomes and specific use cases that are visible manifestations of the platform in production, not just in action. I like this; I like this a lot.

Ecosystems and platforms, baby. Ecosystems and platforms.

Second, Sensei is among the most mature, most advanced AI platforms I’ve ever seen. While I don’t pretend to be “the” AI expert, I know enough about what the vendors are doing with AI and where they are in the fulfillment of their promise and promises to make that statement. Sensei in action is both elegant and substantial. It is fast, for example, at processing and identifying, via Adobe Smart Tag, the handful of images out of millions that are appropriate to the creation of a specific campaign; it is remarkable in how effectively it learns the environment it needs to operate in and acts accordingly. There is a Sensei-powered feature in AEM called Smart Crop that can take any image, identify the focal point (i.e. point-of-interest) — say, a picture of you in the middle of bungee jumping — and, regardless of screen size, bandwidth, or device type, optimize the size, resolution, and compression of that single image so that it never loses its focal point and sacrifices zero visual acuity, even though there can be up to 70 percent image size reduction (for slow bandwidth, etc.). This is all done at breathtaking speed, too. It seems to be instantaneous, yet if you think of the number of calculations going on via the Sensei algorithms, it is a truly amazing feat.

Sensei’s most interesting application — though they are all fascinating — is not with the Creative Cloud’s images, videos, illustrations, and content, but how it is applied to what Adobe calls “experience intelligence” (in line with their messaging and narrative — see below for more on this). Adobe claims that Sensei is already capable of the following (within varying products, e.g. Target, Campaign, AEM), some of which is table stakes and some of which is leading edge (watch the parentheses here). Note, it is interesting that these are being offered as services that have been created via the Sensei AI platform.

  1. Attribution AI Services: Determining the incremental marketing impact “driven by owned, earned, and paid media” (table stakes); determining best allocation of spend across channels (table stakes); and understanding the demographics of your customers being converted via marketing (table stakes).
  2. Customer AI Services: Customer individual and group churn propensity (table stakes but hard to do well); personalized nurture campaigns for individual prospects “based on behaviors and interests” (claimed by many but mostly on road maps, done by Adobe); and upsell propensity (table stakes).
  3. Journey AI Services: Personalize the timings of emails for both prospects and existing customers (table stakes but done by a very few); knowledge of individual customer behavior for predictive insights i.e. individual likelihood of email open and clickthrough (table stakes); and predictive messaging cadence (leading edge).

Are there missing pieces? Sure. I think the Journey AI services are far too focused on email to be called journey AI services, since there are so many other components to a holistic look at a dynamic customer journey. But what they do offer on the core marketing side — focused behavioral insights based on continual and deep learning, in other words insights that support dynamically engaging the customer as conditions, behaviors, and contexts change — is a helluva start to something that a year ago seemed to be more of a dream than a reality, and it’s now a delivered reality, not just a demo fantasy.

The other interesting aspect of Adobe’s drive to platform is their expanded personalization offerings via Sensei, as manifested in Adobe Target. This is more “pedestrian,” but it’s a really sharp-dressing pedestrian. What Adobe delivers as their personalization offerings are mobile, scalable, and, of course, optimized — a cornucopia of industry buzzwords that, despite their vast overuse, are meaningful. Meaning, if you promise them, you’d better deliver them or you are full of —. Adobe does deliver all three. The most interesting particular piece is their Personalization Insights Report, coming to you thanks to Adobe Target. Here is the Adobe blog’s MarTech-speak, in-the-weeds description: “The algorithm’s analysis of all profile attributes of each individual visitor over time produces exponential conversion lift, along with a deep analysis — equivalent to the results of hundreds of concurrent tests — in real time and self-optimizing over time.” What this ultimately means is that their algorithms are able to isolate the most important aspects of a customer’s behavior and deliver optimized offers — offers tailored to that individual’s most apparent desires and interests, as determined by how the individual was behaving. What makes this particularly useful is that it can be done in real time and scales to thousands and even millions of concurrent users.

A momentary aside…

In the same blog post where I found the prior quote, I also found this one, and it goes to something I think is extremely important and that has nothing to do with Adobe specifically, except as one of the nearly universal set of companies that make the same fundamental mistake. It reflects what I think needs both a clarity of definition and a more active discussion in the industry and beyond. Here’s the quote: “There’s a discussion across the industry that algorithms used by AI are very accurate and valuable, but they are also highly complex and not human readable… To solve for that, we’ve created a patented algorithm that sits on top of our AI-based personalization used in Auto-Target and Automated Personalization. The algorithm, which is cutting edge data science, generates human understandable insights from the output of AI-driven activities.” So close.

What I’m hinting at rather broadly here is that there is a significant difference between personalization and humanization. I’m not saying that the technology companies I deal with daily don’t get it. In fact, I would say the quote above shows that they do sense the distinction. The work going on with chatbots (here’s a brand spanking new white paper I did on chatbots for Pitney Bowes that might be of interest) shows that the difference is understood. But personalization is often mistaken for humanization, and it isn’t the same. To put it simply here (and I will write a post on this very, very soon): Personalization is associated with the best possible promotion/offer to a customer given the digital (and occasionally non-digital) behavior of that customer and of customers like him or her. It allows the customer to make a meaningful individual choice. This is not humanization. Humanization is an interaction between a representative of a company, real or computer-generated, and a customer; one that feels real and warm enough to that individual customer, and has enough of a truly conversational flow, to make it seem as if it is a real human being. The “feeling” of the customer is that this is someone who understands me and gets what I’m looking for. And speaks to me in my metaphor.

A customer’s idea of value is feeling valued. Humanization, because it means that “you” are concentrating on “me,” is part of the path to that customer feeling valued in a world that is ever more mobile and digitally driven, and ever more demanding of the human touch as that touch seems to slip further and further away from us.

That’s kind of the awkward version of the difference. I will refine this a great deal, but I want you to know what I’m thinking now, because I think humanization, and thus all the technologies necessary to “humanize at scale,” is the next big leap, one greatly aided by conversational interfaces, chatbots, AI, machine learning, and NLP-related activities, and, of course, actual people.

It’s also Adobe’s next big step and opportunity. But let’s not go there quite yet.

Narrative: Purpose Built for the Experience Business

OK, Dr. Watson, we see concrete evidence of both ecosystem (the Adobe-Microsoft partnership) and platform (Creative Cloud, Experience Cloud, and Sensei working in unison) as significant progress in the evolution of the company. But this is a company that is both deep in the technical weeds and broad in the creation of brilliant creative art. How do you weave this progress, these rather disparate sets, into a single corporate narrative that reconciles them, increases the trust of the market and of once and future customers in the company, and sets the stage for the business value proposition that Adobe offers? What’s the story, morning glory?

Read also: Adobe adds more AI, customization, transparency in Adobe Target update

Well, they are onto it, but not quite there. It’s a charming but awkward attempt. It is a pirouette but without the elegance of the turn being fully realized.

They call it “The Experience Business,” which is, in reality, the (consumable) experiences business. It’s actually a pretty good name for what they do and kind of a repurposing or, more accurately, a strengthening of the company’s brand and purpose. It is also what distinguishes the company from its competitors. They are arguably the only company who can make this claim genuinely at the enterprise level (or at any level, really). They have changed the name of the Digital Marketing Cloud to the Adobe Experience Cloud, a solution set, so a cloud works; though oddly, they haven’t changed the name of the user conference to The Experience Cloud Summit, which they really should for 2019. They are also creating Experience Data Models (XDM) that are designed to create and support reference-able common semantics across all data, platforms, and channels. In fact, Adobe calls XDM “a formal standard, published in JSON schema, enabling data interoperability in Adobe Cloud Platform.”
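
To make the idea of reference-able common semantics a little more concrete, here is a minimal, purely illustrative Python sketch of what validating an event against a shared JSON schema looks like. The schema and field names below are my own invention for illustration; they are not Adobe’s actual XDM definitions.

```python
# Illustrative only: a toy JSON Schema in the spirit of a shared, XDM-style
# event definition. The schema and field names are invented, not Adobe's XDM.
from jsonschema import ValidationError, validate  # pip install jsonschema

EXPERIENCE_EVENT_SCHEMA = {
    "type": "object",
    "properties": {
        "eventType": {"type": "string"},
        "timestamp": {"type": "string"},
        "channel": {"type": "string", "enum": ["web", "email", "mobile"]},
        "identityMap": {
            "type": "object",
            "properties": {"email": {"type": "string"}},
        },
    },
    "required": ["eventType", "timestamp", "channel"],
}

event = {
    "eventType": "commerce.purchase",
    "timestamp": "2018-05-31T14:22:05Z",
    "channel": "web",
    "identityMap": {"email": "customer@example.com"},
}

try:
    validate(instance=event, schema=EXPERIENCE_EVENT_SCHEMA)
    print("Event conforms to the shared schema")
except ValidationError as err:
    print(f"Event rejected: {err.message}")
```

The point of a shared schema like this is that every system producing or consuming the data agrees on the same field names and types, which is what makes cross-platform, cross-channel interoperability possible.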

So far, so good. They are aligning their mission and vision with their strategy and the evolution of their technology — as Adobe always does down in the weeds. They also have an appropriate — to them and in reference to the markets — message and positioning.

Where I’m still not sold, though Adobe makes a compelling case, is on their idea of an Experience System of Record. Let’s say I’m open to listening, but I’m not convinced.

The components of the Experience System of Record seem to be:

  1. All relevant customer data (as Brad Rencher, EVP of marketing at Adobe, defines it), from behavioral to CRM, with the intent to make it actionable.
  2. A common taxonomy defined for all the data regardless of where it is sourced, with specific treatments defined for personal (individual) data.
  3. Machine learning applied to specific business cases; in Brad Rencher’s words, “attribution analyses, audience segmentation, customer scoring and journey prediction,” what he calls the “fundamentals of the experience business.” (Source: DMN)
  4. From this, a constantly refreshed unified Experience Cloud profile for each individual customer, so that responsiveness can be not only almost instantaneous but also contemporary and relevant.

Here’s why I’m skeptical: none of what he said, except one thing, is all that different from the many systems of record I see in CRM and in related areas. CRM data (the transactional and operational data) is not all that’s stored in CRM-related systems of record. Social data, conversational data, and journey data can all be centralized in what Adobe dismisses as a CRM system of record. If the system of record is NoSQL, the ability to handle high-volume and real-time transactions is there, so this is not a convincing argument. Plus, the application of machine learning doesn’t have to be endemic to the system of record; the data from the system of record has to be made available, and the machine learning can then be applied. I’m not a technical person, so I may not be expressing this in the best possible way, but what I am saying is that this is not the thing that makes me, or should make anyone, go “wow.”

But it’s the “one thing” that makes me willing to listen: Adobe’s creation of a common “experience” taxonomy and the special but related taxonomies for individual data. That is where something different is going on and, to my knowledge, Adobe is the only company doing it. But I need to investigate this further to see if they can actually show me something. Also, if they are doing this, it would be an important step toward high degrees of personalization at scale and, honestly, a first step toward humanization at scale too. Possibly. But more on that humanization thing some other time.

So, I’m listening but not convinced.

Chto Delat: What is to be done?

There are two things that I think that Adobe has to do. One is simple, the other not so much.

Read also: Adobe XD for Windows review: A powerful but usable design tool

The simple one: Change the name of the Digital Marketing Summit to Adobe Experience Cloud Summit or something much more creative than that. It is no longer about marketing for them. I found this really good article by DMN Editor-in-Chief Kim Davis. He makes this point incredibly concisely:

“Adobe offered what was, by common consensus, the first marketing cloud; and what was then regarded by influential analysts as the best, with Salesforce coming up hard on the rails. And it still exists today, in the sense that its main components — Experience Manager, Target, Campaign, etc — still pull together to drive marketing operations. Rencher gives it a brief nod, with his habitual remark that ‘we created the category.’

That’s about all you’ll hear about a marketing cloud at an Adobe event. The customer experience, the thinking goes, exceeds marketing (and advertising). It encompasses sales, service, loyalty, of course, but ultimately, it’s more than that. It’s about brand affinity; potentially lifetime brand affinity. That’s where all the themes of the conference — including data protection and privacy — come together.”

So, new name. If marketing is no longer the focus, then they need to change the name of the summit. Not even an option. A necessity.

The other thing, and I never thought I’d say this when it came to Adobe, is that they need to step up their thought leadership on customer experience(s). They are not in the “how-a-customer-feels-about-a-company-over-time” Experience Business. They are in the “create-distribute-consume” Experience Business. But their thought leadership isn’t geared to this, or to anything about the customer experience. Case in point: go to CMO.com right now and tell me what your view of their landing page suggests. I can tell you what it gives almost no indication of at all: customer experience. A classic CRM thought leadership page could be identical. And, as an aside, given the May 31 featured post, How the Sharing Economy is Transforming Travel, no one calls it the “Sharing Economy” anymore. It’s pretty much the Gig Economy. Because it isn’t a sharing economy. No one is sharing anything. You are paying people who have the time to provide a service via a site that aggregates similar offerings. They drive for Uber or Lyft, and you pay for the ride. No sharing. You rent an Airbnb property. They aren’t sharing with you. They are renting to you.

Aside over.

Regardless of bad feature headlines, Adobe needs to revamp the overall content and the look, feel, and patina of their main thought-leadership distribution site if they are seriously realigning their entire company around customer experience. And they need to be clear about what kind of experience they actually provide; or rather, since they are pretty clear in their own heads, they need to make it clear to the world what kind of experience they provide tools for. They have an opportunity to be the runaway market leaders in this area and even to “make” a market and establish a category in an area that I thought a decade ago was impossible to do. Now, I’m older and wiser, and Adobe’s actually starting to do it. Though they are not by any means there yet.

But all in all, the journey they started two years ago is actually moving quickly toward realization of its earliest stages, and that is a lot to be proud of. I’m not in awe of them, since I’m rarely in awe of companies, but I do applaud them for not just laying down a vision and then stepping all over it, but instead building the highway to the mission and to the execution of that vision. Take care of the things that I lay out at the end here, and this company that is deep in the weeds and at the same time the most creatively bright company I’ve seen in the industry so far might just pull this off and make me say, “I stand corrected.” Might. I’m tough.

Previous and related coverage

Zoho at a crossroads: Stepping up means stepping out

Zoho has been one of the great successes in the world of small business technologies. Few companies have been able to succeed with a similar business model, yet Zoho has been wildly successful. But they are also enshrouded in mystery. Read on to see what’s under their veil and what they have to do next — if they want to.

Infor Innovation Analyst Summit 2018: I totally get it and yet, I don’t see it

Infor is a company on the fast track, though you wouldn’t know that. It is among the most design-focused, progressive companies in the technology world, and it has an offering that can go head-to-head with anyone’s out there. Yet, it is a best-kept secret. I’m now going to show and tell. Read on: Infor is now in the sunlight.

Conversations are precisely what we need to think about

Thought leader Mitch Lieberman takes the conversation about conversations from personalization to precision. What the hell is the benefit for business of that level of deep thinking? Listen — precisely. Personally, you’ll learn something.

How to fix your brand experience from the outside in

Johann Wrede: To bring real consistency to the brand experience, leaders should stop slicing the problem into pieces that they try to solve independently.


ZDNet | crm RSS

Power BI – Part 1: Introduction

The series so far:

  1. Power BI – Part 1: Introduction

Microsoft’s Power BI is not just a cloud service. It’s a suite of integrated business intelligence (BI) tools for accessing and consolidating data and then presenting it as actionable insights. If you haven’t checked out Power BI lately, you might not be aware of how aggressively Microsoft has been expanding its features and extending its reach into additional sources of data.

With Power BI, you can shape and transform data, aggregate and summarize data, apply complex calculations and conditional logic, and produce a wide range of visually rich reports that you can distribute to both internal and external users.

Power BI has, in fact, become a force to be reckoned with in the BI universe and is well worth trying out for yourself. Unfortunately, all the new and improved features, while making Power BI a more robust offering, have also made it more difficult to understand how all the pieces fit together, a process helped little by the sometimes confusing and often obscure marketing hype.

In this article, the first in a series about Power BI, I try to make sense of the main components that are part of the Power BI ecosystem. The article takes a high-level look at these components in order to provide you with a foundation for delving into more specific details later in the series. I wrote several Simple Talk articles about Power BI back in 2015 and 2016, but much has changed since then, and this seems to be a good time to revisit the topic and fill in some of the newly added details.

Introducing Power BI

Microsoft describes Power BI as a “suite of business analytics tools that deliver insights throughout your organization.” With Power BI, you can retrieve data from hundreds of data sources, shape the data to fit your specific requirements, perform ad hoc analytics, and present the results through various types of visualizations. Power BI greatly simplifies the entire BI process, making it possible for business users and data analysts to take control of their own reporting needs, while providing enterprise-grade security and scalability.

Microsoft makes Power BI available as part of the Microsoft Business Application Platform, a somewhat confusing umbrella term that refers to several related technologies, including Power BI, PowerApps, and (according to some documentation) Microsoft Flow. You should already have a sense of what Power BI is about, but you might not be familiar with the other two. PowerApps is a point-and-click application development platform, and Microsoft Flow is a workflow and business process management platform.

For this series, we’re concerned primarily with Power BI, which provides a number of tools for delivering BI insights through browsers or mobile apps as well as embedding them within custom applications. In addition to the online service, Power BI includes Power BI Desktop, the Power BI mobile apps, the Power BI API, and Power BI Report Server. The rest of the article goes into more detail about each component.

Power BI Service

The Power BI service lies at the heart of the Power BI offering, providing a cloud-based platform for connecting to data and building reports. Users can access the service through a web-based portal that provides the tools necessary to retrieve, transform and present business data. For example, the following figure shows the portal with the Human Resources Sample dashboard selected. The dashboard includes several visualizations that are part of the Human Resources Sample report. (Microsoft provides several sample datasets, reports, and dashboards for learning about Power BI.)

[Figure: The Power BI service portal with the Human Resources Sample dashboard selected]

Notice that the My Workspace section in the left navigation pane is expanded, showing links to dashboards, reports, workbooks, and datasets. These four items represent the primary components that go into the Power BI presentation structure (a short API sketch follows the list):

  • Dataset: Collection of related data that you import or connect to. A dataset is similar to a database table and can be used in multiple reports, dashboards, and workspaces. You can retrieve data from files, databases, online services, or Power BI apps published by other people in your organization.
  • Report: One or more pages of visualizations based on a single dataset. A report can be associated with only one workspace, but it can be associated with multiple dashboards within that workspace. You can interact with a report either in Reading view or Editing view, depending on your granted level of permissions.
  • Dashboard: A presentation canvas that contains zero or more tiles or widgets. A dashboard can be associated with only one workspace, but it can display visualizations from multiple datasets or reports. You can pin an individual visualization to a tile or pin an entire report to a dashboard. If you’re a Power BI Pro or Premium subscriber, you can also share dashboards.
  • Workspace: A container for datasets, reports, and dashboards. The Power BI service supports two types of workspaces: My Workspace and app workspaces, which you access through the Workspaces section in the left navigation pane. My Workspace is a personal work area provided automatically when you log into the service. Only you can access this space. An app workspace is used to share and collaborate on content. You can also use an app workspace to create, publish, and manage Power BI apps (collections of dashboards and reports).
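
To make the relationship between these objects more concrete, here is a minimal Python sketch that lists the datasets, reports, and dashboards in the signed-in user’s My Workspace through the Power BI REST API (the API is covered in more detail later in the article). It assumes you have already registered an application and obtained an Azure AD access token for the Power BI service; the token value shown is a placeholder.

```python
# Minimal sketch: enumerate My Workspace content via the Power BI REST API.
# Assumes an Azure AD access token has already been acquired (placeholder below).
import requests

ACCESS_TOKEN = "<your Azure AD access token>"  # placeholder, not a real token
BASE_URL = "https://api.powerbi.com/v1.0/myorg"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

for resource in ("datasets", "reports", "dashboards"):
    resp = requests.get(f"{BASE_URL}/{resource}", headers=headers)
    resp.raise_for_status()
    items = resp.json().get("value", [])
    print(f"{resource}: {len(items)} item(s)")
    for item in items:
        # Datasets and reports expose a 'name'; dashboards expose a 'displayName'.
        print("  -", item.get("name") or item.get("displayName"))
```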

Microsoft offers several Power BI subscription plans. At the entry level is the Power BI Free service. To register, you must use a work email account, not a personal account such as Gmail. If you try, you’ll receive a polite message denying you access. In addition, you’re limited to 10 GB of storage, and you can use only the basic features, although these are actually fairly robust. For example, you can connect to all the supported data sources, clean and prepare the data, and build and publish reports. You can even embed the reports in public websites.

The next level up is the Power BI Pro service, which builds on the Free service but adds such features as sharing, collaboration, auditing, and auto-refresh. The Pro service also lets users create app workspaces. As with the Free service, Pro users are limited to 10 GB of storage; however, they can also create app workspaces that support up to 10 GB of storage each. Microsoft currently offers a 60-day free trial of the Pro service.

The Power BI Premium subscription level builds on the Pro service, but also provides an organization with dedicated resources (capacities) for deploying Power BI at scale, with up to 100 TB of storage per capacity. In addition, an organization can distribute Power BI content to non-licensed users as well as embed content in customized applications. Plus, the Premium service includes Power BI Report Server, an on-premises solution for publishing reports in-house.

Microsoft also offers versions of the Power BI service for US government customers and European Union customers. The services are separate from the regular commercial services. Microsoft does not offer a free version of either one. (Contact Microsoft for more details.)

Power BI Desktop

Power BI Desktop is a downloadable application that Microsoft provides for free. The application is essentially a report-building tool that provides capabilities similar to the Power BI service, but kicks them up a notch. With Power BI Desktop, you can build advanced data queries and models, create sophisticated reports and visualizations, and publish the consolidated report packages to the Power BI service or Power BI Report Server.

Both conceptually and physically, Power BI Desktop can be divided into three categories, or views, for how you interact with data and create reports:

  • Report view: A canvas for building and viewing reports based on the datasets defined in Data view.
  • Data view: Defined datasets based on data retrieved from one or more data sources. Data view offers limited transformation features, with many more capabilities available through the Query Editor, which opens in a separate window.
  • Relationships view: Identified relationships between the datasets defined in Data view. When possible, Power BI Desktop identifies the relationships automatically, but you can also define them manually.

To access any of the three views, click the applicable button in the navigation pane at the left side of the Power BI Desktop interface, shown in the following figure. In this case, Report view is selected, displaying a one-page report that includes two visualizations, one table and one bar chart.

[Figure: Power BI Desktop in Report view, showing a one-page report with a table and a bar chart]

The data for the report comes from the AdventureWorks2017 sample database, running on a local instance of SQL Server 2017. However, you can define datasets based on data from a variety of sources, including files such as Excel, CSV, XML, and JSON; databases such as Oracle, Access, DB2, and MySQL; and online services such as Azure, Salesforce Reports, Google Analytics, and Facebook.

Power BI Desktop also provides generic connectors for accessing data not available through the predefined connectors. For example, you can use an interface type such as ODBC, OLE DB, OData, or REST to connect to a data source, or you can run an R script and create a dataset based on the results.

Where Power BI Desktop really shines, when compared to the Power BI service, is in the features available in the Query Editor to shape and combine data, some of which are shown in the following figure. In this case, the Sales.vSalesPerson dataset is open, which is based on a view of the same name in the AdventureWorks2017 database.

[Figure: The Query Editor in Power BI Desktop with the Sales.vSalesPerson dataset open]

In the Query Editor, you can rename datasets or columns, filter out columns or rows, aggregate or pivot data, and shape data in numerous other ways. You can also combine datasets, even if they come from different sources. In addition, Power BI Desktop provides the Data Analysis Expressions (DAX) language for performing more complex transformations.
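
Power BI shapes data with the Query Editor (and the M and DAX languages), not Python, but as a rough analogy, the kinds of steps described above (renaming columns, filtering rows, combining datasets from different sources, and aggregating) look something like the pandas sketch below. The column names are invented for illustration, loosely inspired by the Sales.vSalesPerson view; treat this as a conceptual parallel rather than anything Power BI executes.

```python
# A rough pandas analogy to the shaping steps described above.
# Power BI itself uses the Query Editor (M) and DAX; the columns here are invented.
import pandas as pd

sales_people = pd.DataFrame({
    "BusinessEntityID": [274, 275, 276],
    "FirstName": ["Stephen", "Michael", "Linda"],
    "TerritoryName": [None, "Northeast", "Southwest"],
    "SalesYTD": [559697.56, 3763178.18, 4251368.55],
})
territories = pd.DataFrame({
    "TerritoryName": ["Northeast", "Southwest"],
    "Group": ["North America", "North America"],
})

shaped = (
    sales_people
    .rename(columns={"SalesYTD": "SalesYearToDate"})    # rename a column
    .dropna(subset=["TerritoryName"])                    # filter out rows
    .merge(territories, on="TerritoryName", how="left")  # combine two datasets
)
summary = shaped.groupby("TerritoryName", as_index=False)["SalesYearToDate"].sum()
print(summary)
```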

After you’ve gotten the data in the format you need, you can use Report view to create multiple types of visualizations, including bar charts, line charts, scatter charts, pie charts, treemaps, tables, matrices, and maps. Report view provides numerous options for configuring and refining the charts so you’re presenting the data as effectively as possible. In addition, you can import and display key performance indicators (KPIs) as well as add dynamic reference lines to visualizations to focus on important insights. Once you have your reports the way you want them, you can publish them to the Power BI service or to Power BI Report Server.

There are plenty of other features available in Power BI Desktop beyond what I’ve covered here, and most of them are easy to access and understand. The UI is powerful and intuitive enough to support a wide range of users, from data stewards to business users to data analysts.

Power BI Mobile Apps

Microsoft offers Power BI mobile apps for iOS, Android, and Windows mobile devices. The apps make it possible to provide specific users with access to Power BI dashboards, reports, and apps, while taking into account the form factor of the smaller devices. For example, the following figure shows the Human Resources Sample dashboard (in landscape mode), as it is rendered by the Power BI app for iPhone.

[Figure: The Human Resources Sample dashboard rendered in the Power BI app for iPhone, in landscape mode]

With a Power BI app, you can connect to either the Power BI service or to a Power BI Report Server instance. Because you’re dealing with an app rather than a website, you can view the Power BI content offline. Once you’re reconnected, Power BI automatically refreshes the data. When you’re connected via a 3G network, the data is refreshed every 24 hours. When you’re connected via Wi-Fi, the updates occur every two hours.

A Power BI app lets you zoom in on individual visualizations, add annotations, and share snapshots of a report or visualization. You can also filter content by owner, search content, or tag content as favorites. With a Power BI Pro or Premium license, you can share a link with colleagues so they can view your dashboards. There are plenty of other features as well, and since the apps are free, there’s no reason not to try one out, as long as you’re signed up for the Power BI service.

When you’re creating Power BI reports, you can optimize them for mobile devices. This causes Power BI to add features to the reports specific to mobile usage, such as allowing users to drill down into visualizations. In addition, you can add slicers to your reports that let users filter the displayed data. Plus, you can create a QR code for a report and distribute it to colleagues, who can then scan the code from within their Power BI app to view the report.

Although the mobile apps are similar from one platform to the next, there are some differences. For example, only the iOS and Android apps let users annotate visualizations, share snapshots, or view a report via a QR code. Despite these differences, the main functionality is basically the same from one platform to the next, allowing users to view a wide range of information, no matter where they’re working or how they’re connected.

Organizations that use Microsoft Intune to manage mobile devices can also use the service for managing the Power BI mobile apps. By configuring the necessary policies, administrators can control how data is handled and when application data should be encrypted.

Power BI API

Microsoft offers development teams a REST API that provides programmatic access to Power BI resources. Developers can use the API with any programming language that supports REST calls.

One of the most important capabilities that the API supports is the ability to embed reports, tiles, and dashboards into customized applications. The reports are fully interactive and are automatically refreshed whenever the data changes. Depending on the organization’s subscription level, developers can embed components into applications for internal users who are licensed for Power BI or for users who do not have Power BI accounts.

Another option for developers using the API is the ability to create customized visualizations that can be used in Power BI reports. Customized visualizations are written in TypeScript, a JavaScript superset that Microsoft developed for large-scale JavaScript applications. The visualizations also incorporate cascading style sheets (CSS) and support such features as variables, nesting, mixins, loops, and conditional logic.

Developers can also use the Power BI API to push data into a dataset. In this way, they can extend their business workflows to the Power BI environment. Any reports or dashboards that incorporate the dataset are automatically updated to reflect the new data.
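
As a rough sketch of that push-data capability, the snippet below posts a couple of rows to a table in an existing push dataset through the REST API. The dataset ID, table name, column names, and token are placeholders I’ve made up for illustration, so treat the details as a sketch rather than a recipe.

```python
# Illustrative sketch: push rows into an existing push dataset via the REST API.
# The dataset ID, table name, columns, and token below are placeholders.
import requests

ACCESS_TOKEN = "<your Azure AD access token>"  # placeholder
DATASET_ID = "<dataset id>"                    # placeholder
TABLE_NAME = "SalesEvents"                     # hypothetical table name

url = (f"https://api.powerbi.com/v1.0/myorg/datasets/"
       f"{DATASET_ID}/tables/{TABLE_NAME}/rows")
payload = {
    "rows": [
        {"OrderID": 1001, "Amount": 250.0, "Region": "Northeast"},
        {"OrderID": 1002, "Amount": 125.5, "Region": "Southwest"},
    ]
}
resp = requests.post(url, json=payload,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
print("Rows pushed; reports and dashboards built on the dataset pick them up automatically.")
```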

Power BI Report Server

One of the newer tools in the Power BI arsenal is Power BI Report Server, an on-premises solution for creating, deploying and managing Power BI reports. The product is included with a Power BI Premium subscription to provide customers with a tool for delivering reports from within their own data centers. Users, in turn, can view the reports via their browsers or Power BI mobile apps or as email attachments.

If you choose to install Power BI Report Server, you must use the Report Server Configuration Manager to specify such settings as the service account, web service URL, SQL Server database, and web portal URL, as shown in the following figure. You need to set up your configurations before you can start working with the actual reports.

[Figure: The Report Server Configuration Manager, showing the settings used to configure Power BI Report Server]

The Report Server Configuration Manager is included in the Power BI Report Server installation, but it is separate from the tools you use to manage the reports. For report management, you must use the Report Server web portal, which is enabled after you configure the necessary settings. Through the web portal, you can access all your reports and KPIs, as well as carry out such tasks as scheduling data updates or subscribing to published reports.

Like the Power BI service, Power BI Report Server works in conjunction with Power BI Desktop. You can create reports and then save them to Power BI Report Server. For example, you can save a report such as the one shown in the following figure to Power BI Report Server.

[Figure: A report created in Power BI Desktop that can be saved to Power BI Report Server]

The report is based on data from the Titanic dataset, available as a CSV file from the site https://vincentarelbundock.github.io/Rdatasets/datasets.html. The report includes one table and one ribbon visualization. To save the report to Power BI Report Server, you must use the Save As command and provide the web portal URL.

When you connect to the Power BI Report Server web portal, you’re taken to your Home page, which lists any reports you’ve added to the server. For example, the following figure shows the two reports I added on my system: AdventureWorksSales and Titanic. I had created both of these reports in Power BI Desktop.

[Figure: The Power BI Report Server web portal Home page, listing the AdventureWorksSales and Titanic reports]

To view the Titanic report, click the applicable report icon. This takes you to the report page shown in the following figure.

[Figure: The Titanic report rendered in the Power BI Report Server web portal]

Notice that the report looks similar to what we saw in Power BI Desktop, although the colors are a bit different from the original. Even so, this should give you an idea of how Power BI Report Server works and how easy it is to copy your reports from Power BI Desktop to Power BI Report Server.

Keep in mind, however, that Power BI Report Server is still a young product and, as such, you might run into some odd behavior along the way. For example, if you install the product on a standalone server that is not part of a domain, you might have trouble viewing reports in the Edge browser. Worse still, the error messages you receive might send you down a rabbit hole that can waste much of your time (as happened to me). However, the Chrome browser appears to work fine for viewing reports in Power BI Report Server, and I’ve also read that running Internet Explorer as an administrator can get the reports to render properly.

Another problem you might run into has to do with the Power BI Desktop version that you’re using. You must use one that’s optimized for Power BI Report Server, which is not always the most current release. This can be problematic if you create reports in a newer version of Power BI Desktop and then discover you have to revert to an older version to save the reports to Report Server. The older version of Power BI Desktop might not be able to properly process the report files. Last I checked, the most recent Power BI Desktop release was April 2018, but Power BI Report Server required a March 2018 release of Power BI Desktop.

I’ve no doubt that, with time, Microsoft will get many of these bugs worked out and will be adding new features along the way. It will be interesting to see what Power BI Report Server looks like a year from now, or even over the next six months.

More to Come

As Power BI continues to evolve and grow, new features will continue to come online, as will services related to the Power BI ecosystem. For example, Microsoft now offers Power BI Embedded, a service for ISVs and developers looking for an easier way to embed Power BI analytics into their applications. Microsoft also now provides features for better integrating Excel and Power BI. In addition, Microsoft will soon be offering insight apps, line-of-business apps that apply advanced intelligence to data for a better understanding of that data. Two insight apps, Power BI for Sales Insights and Power BI for Service Insights, are expected to preview in the very near future. What will come after that is anyone’s guess.


SQL – Simple Talk

Expert Interview (Part 2): Paige Bartley on Data Lineage, Data Quality, and Data Availability in GDPR Compliance

In this expert interview series, Paige Bartley, Senior Analyst for Data and Enterprise Intelligence at Ovum, discusses the state of GDPR readiness, and how data quality, data availability and data lineage play into the GDPR compliance landscape.

Part 2 weighs in on the role of data lineage, data quality, and data availability in GDPR compliance as well as the level of enforcement that we should expect after the deadline.


What role do data lineage, data quality, and data availability play in GDPR compliance?

Data lineage, data quality, and data availability are inherently linked to GDPR via several mechanisms.

When it comes to data lineage, Article 30 of GDPR details the requirements for records of processing activities on personal data. This entails requirements for maintaining records of the purposes of processing, records of data transfers to non-EU locations, and records of who the data was disclosed to, among other requirements. While data lineage is never specifically mandated in the text of the regulation, lineage is critical to understanding how data was handled, who it was handled by, and where it was handled. Data lineage, when tracked at a granular level, can provide the means for automated reporting that can fulfill Article 30’s requirements. Furthermore, it can provide the enterprise with a mechanism for Article 31 requirements for cooperation with supervisory authorities; when the organization understands every action that has been taken on a given piece of personal data, it is much easier to communicate with supervisory authorities and demonstrate that compliance has been maintained throughout the data handling process.
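
As a minimal sketch of what granular lineage records might look like in practice, the Python snippet below models per-action processing records and rolls them up by purpose, the sort of grouping an automated Article 30-style report would need. The structure and field names are my own invention; GDPR mandates the record-keeping, not any particular schema.

```python
# Illustrative sketch: granular processing records that could feed automated
# Article 30-style reporting. The structure and field names are invented.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ProcessingRecord:
    data_subject_id: str
    data_element: str              # e.g. "email_address"
    purpose: str                   # purpose of the processing
    processor: str                 # system or party that handled the data
    location: str                  # where the processing took place
    disclosed_to: List[str] = field(default_factory=list)
    timestamp: datetime = field(default_factory=datetime.utcnow)

log: List[ProcessingRecord] = [
    ProcessingRecord("subj-001", "email_address", "marketing campaign",
                     "campaign-system", "EU-West", disclosed_to=["email-vendor"]),
    ProcessingRecord("subj-001", "postal_address", "order fulfilment",
                     "erp-system", "EU-Central"),
]

# Roll the log up by purpose to approximate a processing-activity summary.
by_purpose = {}
for rec in log:
    by_purpose.setdefault(rec.purpose, []).append(rec)

for purpose, records in by_purpose.items():
    locations = sorted({r.location for r in records})
    recipients = sorted({d for r in records for d in r.disclosed_to})
    print(f"{purpose}: {len(records)} record(s), locations={locations}, disclosed_to={recipients}")
```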

As for data quality, it is neither solely a direct product of GDPR compliance nor solely a direct driver of it. Rather, data quality is the result of a positive feedback loop between compliance efforts and preexisting data management initiatives. Good initial data quality will help in GDPR compliance initiatives because it means that data subjects will have less opportunity to invoke their Article 16 right to rectification – or correction – of data. But GDPR compliance, in turn, also helps increase the quality of new data collected. GDPR compliance and explicit consent practices mean that the data collected under the regulation will largely be voluntary and accurate. GDPR is an opportunity to build trust with consumers, and trusted relationships yield more relevant and accurate data relative to the opt-out consent model, which collects data opaquely. So data quality is both a driver of compliance as well as a product of it.

For data availability, it’s important to understand that GDPR is not a technical regulation by nature. It focuses more on process, and names very few explicit technical requirements. This is by design. If the regulation were built around specific technical capabilities, it would quickly become obsolete, as technology evolves much more quickly than policy. However, data availability, which is relatively unusual among technical capabilities in this respect, is cited directly in GDPR as part of Article 32’s requirement guidelines for the Security of Processing of personal data. High availability of systems, while not absolutely mandated, is highly encouraged for GDPR compliance.

How rigid do you expect GDPR enforcement to be after the deadline?  Do you think regulators will focus only on major violations and big companies, or should everyone be worried about even minor deviations?

Initial enforcement will likely focus on prominent, high-margin organizations that use data monetization as their primary business model. The regulatory bodies have only so many resources for audit and investigation, and they are likely looking to “make an example” of a household-name organization that processes personal data at scale as a fundamental part of their business. My intuition is that the EU will likely seek to pursue initial enforcement against a non-EU business, to underscore the point that the regulation is global in its reach.

This isn’t to say that smaller firms or minor deviations from compliance will be let off the hook. The regulation provides robust mechanisms for data subjects to seek legal remedies against data controllers and processors that have run afoul of the regulation. Article 79, in particular, guarantees the right to an effective judicial remedy against a controller or processor, opening the door to class-action lawsuits.

In this sense, consumers can become the eyes and ears of the regulatory bodies, taking legal action against any firm that they feel has not properly protected their personal data. So while the supervisory authorities may not initially set out to enforce against smaller firms or minor infractions, there is always the possibility that regular EU citizens may lodge a complaint or initiate legal action against a firm that they feel has mishandled their personal data.

Be sure to tune in for the final installment when Bartley speaks about the difference between the technology and process in the GDPR and how it can potentially inspire other regions to create regulations of their own.

If you want to learn more about GDPR compliance and how Syncsort can help, be sure to view our webcast on Data Quality-Driven GDPR: Compliance with Confidence.


Syncsort + Trillium Software Blog

People Are The Engine That Drives Finance Transformation: Part 1

For nerds, the weeks right before finals are a Cinderella moment. Suddenly they’re stars. Pocket protectors are fashionable; people find their jokes a whole lot funnier; Dungeons & Dragons sounds cool.

Many CIOs are enjoying this kind of moment now, as companies everywhere face the business equivalent of a final exam for a vital class they have managed to mostly avoid so far: digital transformation.

But as always, there is a limit to nerdy magic. No matter how helpful CIOs try to be, their classmates still won’t pass if they don’t learn the material. With IT increasingly central to every business—from the customer experience to the offering to the business model itself—we all need to start thinking like CIOs.

Pass the digital transformation exam, and you probably have a bright future ahead. A recent SAP-Oxford Economics study of 3,100 organizations in a variety of industries across 17 countries found that the companies that have taken the lead in digital transformation earn higher profits and revenues and have more competitive differentiation than their peers. They also expect 23% more revenue growth from their digital initiatives over the next two years—an estimate 2.5 to 4 times larger than the average company’s.

But the market is grading on a steep curve: this same SAP-Oxford study found that only 3% have completed some degree of digital transformation across their organization. Other surveys also suggest that most companies won’t be graduating anytime soon: in one recent survey of 450 heads of digital transformation for enterprises in the United States, United Kingdom, France, and Germany by technology company Couchbase, 90% agreed that most digital projects fail to meet expectations and deliver only incremental improvements. Worse: over half (54%) believe that organizations that don’t succeed with their transformation project will fail or be absorbed by a savvier competitor within four years.

Companies that are making the grade understand that unlike earlier technical advances, digital transformation doesn’t just support the business, it’s the future of the business. That’s why 60% of digital leading companies have entrusted the leadership of their transformation to their CIO, and that’s why experts say businesspeople must do more than have a vague understanding of the technology. They must also master a way of thinking and looking at business challenges that is unfamiliar to most people outside the IT department.

In other words, if you don’t think like a CIO yet, now is a very good time to learn.

However, given that you probably don’t have a spare 15 years to learn what your CIO knows, we asked the experts what makes CIO thinking distinctive. Here are the top eight mind hacks.

1. Think in Systems

A lot of businesspeople are used to seeing their organization as a series of loosely joined silos. But in the world of digital business, everything is part of a larger system.

CIOs have known for a long time that smart processes win. Whether they were installing enterprise resource planning systems or working with the business to imagine the customer’s journey, they always had to think in holistic ways that crossed traditional departmental, functional, and operational boundaries.

Unlike other business leaders, CIOs spend their careers looking across systems. Why did our supply chain go down? How can we support this new business initiative beyond a single department or function? Now supported by end-to-end process methodologies such as design thinking, good CIOs have developed a way of looking at the company that can lead to radical simplifications that can reduce cost and improve performance at the same time.

They are also used to thinking beyond temporal boundaries. “This idea that the power of technology doubles every two years means that as you’re planning ahead you can’t think in terms of a linear process, you have to think in terms of huge jumps,” says Jay Ferro, CIO of TransPerfect, a New York–based global translation firm.

No wonder the SAP-Oxford transformation study found that one of the values transformational leaders shared was a tendency to look beyond silos and view the digital transformation as a company-wide initiative.

This will come in handy because in digital transformation, not only do business processes evolve but the company’s entire value proposition changes, says Jeanne Ross, principal research scientist at the Center for Information Systems Research at the Massachusetts Institute of Technology (MIT). “It either already has or it’s going to, because digital technologies make things possible that weren’t possible before,” she explains.

2. Work in Diverse Teams

When it comes to large projects, CIOs have always needed input from a diverse collection of businesspeople to be successful. The best have developed ways to convince and cajole reluctant participants to come to the table. They seek out technology enthusiasts in the business and those who are respected by their peers to help build passion and commitment among the halfhearted.

Digital transformation amps up the urgency for building diverse teams even further. “A small, focused group simply won’t have the same breadth of perspective as a team that includes a salesperson and a service person and a development person, as well as an IT person,” says Ross.

At Lenovo, the global technology giant, many of these cross-functional teams become so used to working together that it’s hard to tell where each member originally belonged: “You can’t tell who is business or IT; you can’t tell who is product, IT, or design,” says the company’s CIO, Arthur Hu.

One interesting corollary of this trend toward broader teamwork is that talent is a priority among digital leaders: they spend more on training their employees and partners than ordinary companies, as well as on hiring the people they need, according to the SAP-Oxford Economics survey. They’re also already being rewarded for their faith in their teams: 71% of leaders say that their successful digital transformation has made it easier for them to attract and retain talent, and 64% say that their employees are now more engaged than they were before the transformation.

3. Become a Consultant

Good CIOs have long needed to be internal consultants to the business. Ever since technology moved out of the glasshouse and onto employees’ desks, CIOs have not only needed a deep understanding of the goals of a given project but also to make sure that the project didn’t stray from those goals, even after the businesspeople who had ordered the project went back to their day jobs. “Businesspeople didn’t really need to get into the details of what IT was really doing,” recalls Ferro. “They just had a set of demands and said, ‘Hey, IT, go do that.’”

But that was then. Now software has become so integral to the business that nobody can afford to walk away. Businesspeople must join the ranks of the IT consultants. “If you’re building a house, you don’t just disappear for six months and come back and go, ‘Oh, it looks pretty good,’” says Ferro. “You’re on that work site constantly and all of a sudden you’re looking at something, going, ‘Well, that looked really good on the blueprint, not sure it makes sense in reality. Let’s move that over six feet.’ Or, ‘I don’t know if I like that anymore.’ It’s really not much different in application development or for IT or technical projects, where on paper it looked really good and three weeks in, in that second sprint, you’re going, ‘Oh, now that I look at it, that’s really stupid.’”

4. Learn Horizontal Leadership

CIOs have always needed the ability to educate and influence other leaders that they don’t directly control. For major IT projects to be successful, they need other leaders to contribute budget, time, and resources from multiple areas of the business.

It’s a kind of horizontal leadership that will become critical for businesspeople to acquire in digital transformation. “The leadership role becomes one much more of coaching others across the organization—encouraging people to be creative, making sure everybody knows how to use data well,” Ross says.

In this team-based environment, having all the answers becomes less important. “It used to be that the best business executives and leaders had the best answers. Today that is no longer the case,” observes Gary Cokins, a technology consultant who focuses on analytics-based performance management. “Increasingly, it’s the executives and leaders who ask the best questions. There is too much volatility and uncertainty for them to rely on their intuition or past experiences.”

Many experts expect this trend to continue as the confluence of automation and data keeps chipping away at the organizational pyramid. “Hierarchical, command-and-control leadership will become obsolete,” says Edward Hess, professor of business administration and Batten executive-in-residence at the Darden School of Business at the University of Virginia. “Flatter, distributive leadership via teams will become the dominant structure.”

5. Understand Process Design

When business processes were simpler, IT could analyze the process and improve it without input from the business. But today many processes are triggered on the fly by the customer, making a seamless customer experience more difficult to build without the benefit of a larger, multifunctional team. In a highly digitalized organization like Amazon, which releases thousands of new software programs each year, IT can no longer do it all.

While businesspeople aren’t expected to start coding, their involvement in process design is crucial. One of the techniques that many organizations have adopted to help IT and businesspeople visualize business processes together is design thinking (for more on design thinking techniques, see “A Cult of Creation“).

Customers aren’t the only ones who benefit from better processes. Among the 100 companies the SAP-Oxford Economics researchers have identified as digital leaders, two-thirds say that they are making their employees’ lives easier by eliminating process roadblocks that interfere with their ability to do their jobs. Ninety percent of leaders surveyed expect to see value from these projects in the next two years alone.

6. Learn to Keep Learning

The ability to learn and keep learning has been a part of IT from the start. Since the first mainframes in the 1950s, technologists have understood that they need to keep reinventing themselves and their skills to adapt to the changes around them.

Now that’s starting to become part of other job descriptions too. Many companies are investing in teaching their employees new digital skills. One South American auto products company, for example, has created a custom-education institute that trained 20,000 employees and partner-employees in 2016. In addition to training current staff, many leading digital companies are also hiring new employees and creating new roles, such as a chief robotics officer, to support their digital transformation efforts.

Nicolas van Zeebroeck, professor of information systems and digital business innovation at the Solvay Brussels School of Economics and Management at the Free University of Brussels, says that he expects the ability to learn quickly will remain crucial. “If I had to think of one critical skill,” he explains, “I would have to say it’s the ability to learn and keep learning—the ability to challenge the status quo and question what you take for granted.”

7. Fail Smarter

Traditionally, CIOs tended to be good at thinking through tests that would allow the company to experiment with new technology without risking the entire network.

This is another unfamiliar skill that smart managers are trying to pick up. “There’s a lot of trial and error in the best companies right now,” notes MIT’s Ross. But there’s a catch, she adds. “Most companies aren’t designed for trial and error—they’re trying to avoid an error,” she says.

To learn how to do it better, take your lead from IT, where many people have already learned to work in small, innovative teams that use agile development principles, advises Ross.

For example, business managers must learn how to think in terms of a minimum viable product: “build a simple version of what you have in mind, test it, and if it works start building. You don’t build the whole thing at once anymore.… It’s really important to build things incrementally,” Ross says.

Flexibility and the ability to capitalize on accidental discoveries during experimentation are more important than having a concrete project plan, says Ross. At Spotify, the music service, and CarMax, the used-car retailer, change is driven not from the center but from small teams that have developed something new. “The thing you have to get comfortable with is not having the formalized plan that we would have traditionally relied on, because as soon as you insist on that, you limit your ability to keep learning,” Ross warns.

8. Understand the True Cost—and Speed—of Data

Gut instincts have never had much to do with being a CIO; now they should have less to do with being an ordinary manager as well, as data becomes more important.

As part of that calculation, businesspeople must have the ability to analyze the value of the data that they seek. “You’ll need to apply a pinch of knowledge salt to your data,” advises Solvay’s van Zeebroeck. “What really matters is the ability not just to tap into data but to see what is behind the data. Is it a fair representation? Is it impartial?”

Increasingly, businesspeople will need to do their analysis in real time, just as CIOs have always had to manage live systems and processes. Moving toward real-time reports and away from paper-based decisions increases accuracy and effectiveness—and leaves less time for long meetings and PowerPoint presentations (let us all rejoice).

Not Every CIO Is Ready

Of course, not all CIOs are ready for these changes. Just as high school has a lot of false positives—genius nerds who turn out to be merely nearsighted—so there are many CIOs who aren’t good role models for transformation.

Success as a CIO these days requires more than delivering near-perfect uptime, says Lenovo’s Hu. You need to be able to understand the business as well. Some CIOs simply don’t have all the business skills that are needed to succeed in the transformation. Others lack the internal clout: a 2016 KPMG study found that only 34% of CIOs report directly to the CEO.

This lack of a strategic perspective is holding back digital transformation at many organizations. They approach digital transformation as a cool, one-off project: we’re going to put this new mobile app in place and we’re done. But that’s not a systematic approach; it’s an island of innovation that doesn’t join up with the other islands of innovation. In the longer term, this kind of development creates more problems than it fixes.

Such organizations are not building in the capacity for change; they’re trying to get away with just doing it once rather than thinking about how they’re going to use digitalization as a means to constantly experiment and become a better company over the long term.

As a result, in some companies, the most interesting tech developments are happening despite IT, not because of it. “There’s an alarming digital divide within many companies. Marketers are developing nimble software to give customers an engaging, personalized experience, while IT departments remain focused on the legacy infrastructure. The front and back ends aren’t working together, resulting in appealing web sites and apps that don’t quite deliver,” writes George Colony, founder, chairman, and CEO of Forrester Research, in the MIT Sloan Management Review.

Thanks to cloud computing and easier development tools, many departments are developing on their own, without IT’s support. These days, anybody with a credit card can do it.

Traditionally, IT departments looked askance at these kinds of do-it-yourself shadow IT programs, but that’s changing. Ferro, for one, says that it’s better to look at those teams not as rogue groups but as people who are trying to help. “It’s less about ‘Hey, something’s escaped,’ and more about ‘No, we just actually grew our capacity and grew our ability to innovate,’” he explains.

“I don’t like the term ‘shadow IT,’” agrees Lenovo’s Hu. “I think it’s an artifact of a very traditional CIO team. If you think of it as shadow IT, you’re out of step with reality,” he says.

The reality today is that a company needs both a strong IT department and strong digital capacities outside its IT department. If the relationship is good, the CIO and IT become valuable allies in helping businesspeople add digital capabilities without disrupting or duplicating existing IT infrastructure.

If a company already has strong digital capacities, it should be able to move forward quickly, according to Ross. But many companies are still playing catch-up and aren’t even ready to begin transforming, as the SAP-Oxford Economics survey shows.

For enterprises where business and IT are unable to get their collective act together, Ross predicts that the next few years will be rough. “I think these companies ought to panic,” she says. D!


About the Authors

Thomas Saueressig is Chief Information Officer at SAP.

Timo Elliott is an Innovation Evangelist at SAP.

Sam Yen is Chief Design Officer at SAP and Managing Director of SAP Labs.

Bennett Voyles is a Berlin-based business writer.

Digitalist Magazine

Expert Interview (Part 1): The GDPR, Data and You with Ovum’s Paige Bartley

In this expert interview series, Paige Bartley, Senior Analyst for Data and Enterprise Intelligence at Ovum, discusses the state of GDPR readiness, and how data quality, data availability and data lineage play into the GDPR compliance landscape.

For part one, Bartley focuses on how prepared organizations are for GDPR as well as some key challenges they may face.

Is the typical organization ready for the big day — May 25 — when the GDPR goes into effect?

In general, a significant portion of organizations will not be fully compliant with GDPR by the time the deadline passes. Of course, there is no such thing as a “typical” organization; GDPR readiness varies greatly across industry verticals, regions, and organization sizes. Those that are most likely to be prepared are large enterprise firms that are in highly-regulated verticals, as they tend to already have the human processes and IT infrastructure in place for managing data at a fine-grained level. EU-based organizations, additionally, will have a head start in compliance efforts, as they have historically had to adjust business practices to accommodate the requirements of GDPR’s predecessor: the 1995 Data Protection Directive (Directive 95/46/EC).

Those that will struggle the most are smaller organizations, often based outside of Europe, that operate in historically unregulated verticals and have a minority of their customers or employees based in the EU.

What are the key challenges standing in the way of GDPR compliance for organizations that are not yet ready for the law to take effect?

There are a number of areas of difficulty being faced by organizations as they travel along the path to compliance. Let’s talk about the biggest ones.

First is the issue of documentation of processes. Even if an organization is unable to meet the May 25th deadline, it is critical that the steps taken towards compliance have been fully documented. Regulators will be more flexible with an organization that has taken good-faith measures to meet the deadline, as opposed to an organization that has failed to act entirely.

Mapping and identification of personal data are important, too. The enterprise cannot control or manage data that they cannot accurately and consistently locate within their IT ecosystem. However, today’s IT environments are increasingly distributed and heterogeneous, with data scattered across repositories in the cloud and in various databases and legacy systems. Therefore, it’s important to map and detect all instances of personal data within these environments. Simply knowing where personal data resides in the IT ecosystem is a major challenge for most organizations.

Then there’s data erasure, data rectification, and data duplicates. The data subject’s right to data erasure and data rectification under the GDPR is complicated by the prevalence of data silos within most organizations. Duplicate data is rife within most IT ecosystems, and just because data has been updated or deleted in one repository doesn’t mean that it has been updated or deleted in another. Furthermore, businesses that are unable to centrally search all of their repositories are at increased risk of being non-compliant.

Finally, there is the challenge of data transfer and data sovereignty. The GDPR has restrictions on where, physically, EU resident data can be processed. EU resident data needs to be either processed on EU servers, or on servers in a country that has an “adequacy decision,” meaning that the regional laws offer protections comparable to the EU directive. In the absence of either of these two conditions, the data of EU residents may be processed on non-EU servers only when certain conditions are met, such as binding corporate rules established within contracts or approved certification mechanisms.

This maze of legal requirements has made it difficult for organizations to determine when and where data may be legally processed: a daunting challenge in the cloud era, where compute location is often algorithmically and automatically determined (and optimized) based on pricing and server availability, directing data processing to servers around the world. GDPR’s broad definition of processing – even including the viewing of data – further compounds this challenge. Organizations need the capability to override automated, managed service decisions regarding where data will be processed, and need to be able to localize their EU data to EU servers or servers within countries that have adequacy decisions.

This isn’t a complete list of the major GDPR challenges I’m seeing. These are just some of the key issues.

Check out part 2 tomorrow, when Bartley discusses the role data lineage, data quality, and data availability play in GDPR compliance.

If you want to learn more about GDPR compliance and how Syncsort can help, be sure to view our webcast on Data Quality-Driven GDPR: Compliance with Confidence.

Syncsort + Trillium Software Blog

Adobe: Experiencing the experiential, part one

I know I’m late on this one by a couple of months. But, hopefully, the wait is worth it.

First, I apologize for taking so long to get this out. My other excuse is that I had six client-related projects to do in the course of the past few weeks. Plus, I’m reviving the Event Scorecard for 2018 with this post, and I had to tweak the weights and the categories. In fact, there is so much to this post, it’s going to be published in two parts.

Read also: How Adobe moves AI, machine learning research to the product pipeline

So, I’m sorry, this is part one, and let’s get to it.

Event Scorecard: A few changes

After all these years, I’ve made some changes, and I’m sure most of you have also forgotten about it completely. So, as I always do for the very first scorecard of the year, I provide an explanation of the scorecard itself. If you know it from my distant past, you’ll possibly note the changes.

Like many analysts, I travel to a lot of conferences over the course of a year. I have either been engaged as a speaker who is also an analyst, or I am there strictly as an analyst to find out what’s going on with the company. Either way, I’m a participant at the event.

The thing is, the event itself is of real importance to the company putting it on because, if it’s a user event, and it usually is, it shapes the perception of the company for a 12-month period until the next event. It’s an overarching, powerful creator of an impression that lingers long after the knowledge of the specific event fades. Something on this order is what the attendee might say about the event 10 months after the fact: “What was it about? Oh, something to do with marketing, I think, but it was a great event, and it’s a great company.” The lingering feeling is more powerful than the content presented. Obviously, I’m exaggerating a little at both ends of what’s remembered and what isn’t, but the event shapes the perception of the company in the months to come. It’s how the users and prospects and analysts and everyone attending gets a feel for not just the products and services but the company and its people and culture.

So, I score the events. However, I am more lenient in my scoring than I am with the CRM Watchlist, because I know how hard it is to produce an event, and then actually do it, with a boatload of moving parts to pay attention to that could veer off in the wrong direction at any given moment.

But score it, I do.

Read also: Adobe patches critical vulnerabilities in Flash, Creative Cloud

Here is how the scorecard works, and how Adobe, as the first of this season, did. The second part of this post will be on the analysis of Adobe’s direction, actions, future, and opportunity. This post is the event scorecard, the Adobe ratings, and setting the stage on customer experience — so that I can show you what I think Adobe is doing in context in part two.

Event Scorecard: Criteria/Categories

  • Crowd Size: This is a new category about the size of the crowd. But the actual grade is based on the size and impact of the company holding the event (revenue, etc.); the size of the crowd relative to events held by similar companies; and the size relative to the previous year. It isn’t just a number that is graded; it’s what that crowd represents to the outside world. There is a lot of subjectivity in this rating, because I have a number that I think represents a minimum threshold, a likely ceiling (though this is not much more than a paper ceiling and can be easily busted), and a minimum percentage of growth relative to the last year.
  • Keynotes (Content): This is the messaging, the focus, the details, the presentation, and in this particular case, the presenters who work for the conference host — all taken into account in the assessment. That means how visionary, how practical, how detailed, and what it said to an audience given its mindset, etc.
  • Conference (Staging): This is the rest of the main stage speakers (e.g., the celebrities, the lighting, the screens on the stage, the video content shown, the music, the quality of the demos given, the feel of the main stage effort, among other things). The reason that the non-keynote speakers (those who don’t work for the host but are on the stage, like at this conference, Steve Young or Michael Keaton, for example) are included is that they are only ancillary to a message — not the message purveyor. How effectively do the ancillary speakers either represent the company’s messaging (e.g., this year, for Adobe, there was Jensen Huang, CEO of Nvidia) or support the brand image and culture and enhance crowd engagement (e.g., SNL’s Leslie Jones, during Sneaks)?
  • Tracks/General Sessions: Did the content of the tracks and general sessions cover what the attendees needed and what the host needed to say? Did the titles meet the expectations of the attendees? For example, at CRM Evolution last year, we had a couple of glitches where the titles of the track didn’t really follow the actual content of the presentation. What was the level of the presentations? Did the content get presented in a way that left something for the attendees to chew on?
  • Analyst/Press Relations: How were the analysts and press treated? Did they have a working environment that allowed them to get out the coverage they needed on the conference? Did they have the materials they needed? Were the AR/PR teams responsive to requests? What was the general environment? Friendly? Standoffish? Neutral? Were the “asks” of the analysts or press met (e.g., one-on-one meetings) as best as possible (Note: It’s not always possible to accommodate everyone for everything they ask, and that is accounted for by me)?
  • Exhibition Hall: The partner pavilions are critical parts of any event. Prospects, customers, analysts, and journalists get a feel for who is willing to throw their lot in with the hosts of the event. Partner ecosystems are a critical part of a tech company’s particular offering. So, who is represented? How wide are the pathways to walk to make it easy for an attendee to talk to the partner? What about the organization of the hall, the breadth and depth of the partners represented, the quality of the booths, etc.?
  • Crowd Engagement: How into the conference is the crowd? This doesn’t mean how loud they are, but how involved they are at the event. There is a palpable energy that is highly noticeable over the course of an event. It has an impact on the residual feelings about the experience. Electricity works.
  • Logistics: This is the conference center environment, the room setup, the hallways, the lighting, the Wi-Fi, the swag that all the attendees are offered, and the concert that may or may not be part of the event. This is the ability to handle crowd movement. This is the security of the conference. All those things that add to a feeling in an environment and to the ease of navigation at a conference.

Read also: Adobe updates Experience Manager for marketers and developers

Event Scorecard: Adobe Experience Cloud Summit 2018

Category Grade notes
Crowd Size B+ The official attendance seemed to increase about 40 percent from the prior event — i.e., from 10,000 attendees to 14,000 attendees, which is a nice leap. But, don’t forget, this is also measured against other conferences held by companies of similar size or influence. So, B+, which is a solid mark, but not mind-blowing. It would have taken closer to 18,000 to get an A in this one, given Adobe’s influence. But the leap is nothing to ignore either. This was a sizeable increase in conference attendance, and Adobe should be applauded for it.
Keynotes (Content) A The keynotes were the best I’ve seen at an Adobe conference. Led by CEO Shantanu Narayen, the speeches were visionary, outcomes driven, and at the same time, as in the weeds as Adobe classically is — but without the annoyance of geek-speak that goes over the heads of me and most of the audience. All in all, if you take the combination of Shantanu, the demos, the interviews with partner leaders like Jensen Huang of Nvidia, and the other leaders who spoke, a consistent, interesting, and balanced narrative kept its thread throughout the entire conference.
Conference Staging A This was literally a perfect numerical score in addition to the highest letter grade you can get. Adobe does brilliant staging. The opening videos were both beautiful and inspiring with ultraHD color and flowing movement — a literally moving sometimes Salvador Dali, sometimes Piet Mondrian, sometimes Andy Warhol-type canvas, and other times, during some of the ancillary presentations, montages of the person and the company he/she represented. Additionally, Adobe had momentary flashes of 3D screens when it was making some business point.
Tracks/General Sessions A- The best I can honestly say about the tracks overall is that Adobe kept its promise. Keep in mind, here I’m operating from a small sample size (probably 30 to 35) and extrapolating. What was good about Adobe’s tracks is that it didn’t skimp on the right-brained side of its equation, so there were plentiful tracks on things like personalization, building customer experiences, etc. Also, they were true to their descriptions. The presenters and their presentations got somewhat mixed reviews, though overall leaned strongly to the positive. But not entirely. Still, keeping the promise with strong positives is enough to get them the A-.
Analyst/Press Relations A- As always, a great job done by the analyst relations team members at Adobe — they’re the top four in the business, as far as I’m concerned. They took care of each of the analysts at the event and made sure that they had a “customized” experience, meaning if one-on-ones were your thing, you had some one-on-ones, for example, though on the lower-volume side. That’s understandable, since they are juggling the needs of dozens of people with a limited group of executives and customers. They had groups of senior executives meeting with groups of analysts — and the executives were candid and encouraged feedback. The only reason they got less than a solid A was the lack of tables and power for the analysts and press during the keynotes in the main ballroom, which is, at this point, pretty much table stakes (no pun intended). That deficiency makes it more difficult for the analysts and press to do their jobs, since they were working off their laps and they had to concern themselves with the preservation of their battery power. Neither of those should be a concern.
Exhibition Hall C- While a marked improvement over some of the past Adobe exhibition halls, this one wasn’t that much better. It was somewhat confusing, with poorly labeled booth numbers/names and numbered “rows” (i.e., aisles) that at times made no sense as to where they were and which booths were there. The community maps there for directions were so general that at times they were useless. So, as in prior years, they were a mishmash. The walkways were good enough to allow traffic to flow both ways without too much interference, but they weren’t particularly spacious areas. The one thing that was at least pretty good was the recharge areas, where you could do anything from getting water to playing some simple games or just relaxing. One thing I would suggest, aside from clarity and much better labeling and improved organization, is that Adobe strategic partners should be featured in the central Adobe exhibit space in the hall. That sends a message to those walking the hall that their strategic partners are part of their ecosystem. It’s a message, given Adobe’s publicly announced direction, that’s a very important one.
Crowd Engagement B This was odd, because it was highly uneven. For example, there was extremely high audience engagement during the Sneaks session. The interactions with the audience and the presenters, including the MC and Leslie Jones, were exceptional and lively. Oddly, as good as the keynotes were, and as memorable as the staging was, the crowd’s energy and level of engagement was never that high. It took some real effort for those on stage to get a reaction of any kind — ranging from the usual, “Good morning! I can’t hear you. You’re not loud enough. You must have been out late, so I’ll say it again, good morning!” to the level of gasps and applause that you would expect given the quality of what’s happening on stage. The comments throughout the conference about what happened on stage were indicative of something being very well received, but that wasn’t evident during the actual presentations.
Logistics B+ These were solid everywhere: The food was out where it needed to be; the quality of the food was decent, not great; the crowds, while on occasion knotting up, were moving to their appointed destinations without a lot of difficulty; and the security and traffic control people and service desk staff were courteous, well informed, and able to keep things moving the way they needed to. There was nothing exceptional about it, but then, it doesn’t have to be exceptional. It just has to work, and it did.
Overall B+ Adobe is very good at putting on events that have a lasting impact, and this one continued that tradition. It had areas that could be better — notably, again, the exhibition hall. But, on the whole, in the places where it counts, the content and the presentation of the content shined — and that is what matters. Ultimately, in eight months, when someone asks an attendee about Adobe, the event has to leave enough of an emotional trace that the attendee, even if they don’t remember the content that well, remembers enough of how they felt to say, “It was great” or “They are a really cool company doing really cool things” or some variation on that theme. Adobe, in my estimation, achieved that — again. Though, probably, given this year, it needs to change the name next year to the “Adobe Experience Cloud Summit” or something a little more creative. Because it now speaks to areas well beyond just marketing.

Scorecard in the books. Now, we move on to what we’re actually talking about when we talk about customer experience — more complicated than you might think. And, in part two, I’ll cover what the conference told me about Adobe’s direction, how far it’s come in achieving its vision, and what you can expect of it. And, finally, right here and right now: Will the Yankees win the Series this year?

Let’s start with the latter. Given current trajectories, yes. That takes care of that. No room for debate on that one, fans of other teams.

Read also: Adobe and Nvidia expand partnership for Sensei AI

Before I get into the Adobe Experience Cloud and Adobe’s direction, I want to clarify the definitions of customer experience. Note the plural, because there are three kinds of experience that need some definition, since Adobe is focused on one of them. The reason I want to be clear is that, if I thought Adobe was promoting the other two, I wouldn’t be so comfortable with what it’s doing. In fact, I’d blast it.

Customer Experience (x3)

I spend an entire chapter of my upcoming book, The Commonwealth of Self-Interest: Customer Engagement, Business Benefit, on the differences in customer experience(s). The reason I gave it that much time, since the book is about customer engagement, is that entire businesses will succeed or fail based on how they approach customer experience and its symbiosis with customer engagement. Thus, definitions have to be clear so there is no miscommunication between buyer and seller, between customer and company, and between industry and thought leaders about what customer experience is, which version you are talking about, and what you can do programmatically and strategically with each kind of customer experience.

Customer Experience

Customer experience, in this incarnation, is the one that technology companies most talk about and have the least impact (a.k.a., none) on: A customer’s feelings. Two definitions will suffice to explain most of this easily:

First, let me present the definition used by the individual I think is the foremost thought leader in the customer experience space — Bruce Temkin, founder and CEO of the Temkin Group, former Forrester analyst, and a long-time influencer. His definition:

“The perception that customers have of their interactions with your organization.”

A small excerpt of a piece he wrote for my upcoming book explains his thinking well:

“The word ‘perception’ is critical, because customer experience is in the eyes of the beholder. It’s not what you do as a company, or how your employees think about what they do. It’s how your customers think and feel about what you do.”

That segues beautifully into the other definition on this kind of customer experience — mine:

“How a customer feels about a company over time.”

As perception is critical in Bruce’s definition, “feels” is critical in mine. When a company is talking about this “big picture” customer experience, it’s the emotional state of the customer at the time that the company either proactively or retroactively becomes aware of it. I’m not going to go into the details of what drives it, how it formulates itself, what changes it, what the value is to a company to know it, and how it impacts the customer’s engagement with the company. The book covers some of that. But what I am going to emphatically state is:

If a technology company is talking about providing its technology to impact this kind of customer experience directly, then it doesn’t know what it is talking about. You can’t enable how a person feels via technology. Period.

The reason I’m so emphatic about this is that it is a strategic flaw I see repeated in the messaging and promises that some tech companies make around customer experience. What technology can do is — via analytics and by understanding customer behavior individually — identify the customer’s feelings at a given moment in a journey, and that may or may not reflect the overall state of the customer’s feelings toward the company. It may be just how they feel at that given moment. Analytics can ascertain how the customer feels at the point they produced the data that the algorithms are using to draw a conclusion. But the technology can’t enable those feelings. The long-term feeling is something that evolves after engagement and interaction and outcomes are aggregated and filtered and assessed at conscious and subliminal levels by a customer or two or a million.

Read also: Adobe adds more AI, customization, transparency in Adobe Target update

If Adobe was claiming that it can enable these long-term feelings, then I’d have a problem, but it is not claiming that. So, since this is a post about Adobe, I’ll stop here. Please feel free to query me in the comments if you want to get further information.

Brand Experience

This is another type of customer experience that is, to use the best term I can think of at the moment (I’m pretty tired), “impressionistic.” It is the conscious effort a brand makes to leave a strong impression about the brand. That means that, when customers think about a brand, they are going to have an impression — “coolest company ever,” “what a cutting-edge company,” etc. That isn’t always easy to articulate, but the feeling associated with “coolest,” for example, is easy to feel.

I know that’s not a great way to describe it, but let me give you an example that I’ve used before in an article on customer experience, customer engagement, and CRM.

Teeling Whiskey, a fast-growing Irish whiskey distiller, opened the first new distillery in Dublin in 125 years about two years ago. Keep in mind, as new as the distillery was, the Teeling family has been distilling whiskey since 1782, so it’s not new to the business. But, unlike Jameson’s, the behemoth of Irish whiskey, it didn’t want to establish its brand identity as a long-standing, historic icon of distilling. A new distillery was built and designed to imply (leave the impression of) “a new generation of Irish whiskey” challenging the long-time, seemingly old-school brands like Jameson’s, Bushmills, etc.

The way that it built this impression was through a vast amount of experimentation with the distillation of the whiskey itself, with unique casks used to produce wildly different flavor profiles, with names that stepped outside the norm (Liberties’ 11 year old, etc.), and, most importantly, with a highly orchestrated, specific experience at the distillery itself that was designed to cement the impression of a new generation.

When you signed up for the tour at the distillery, you noticed first, from the get-go, a hipster sort of vibe in the initial room, where you could get a light lunch of artisan sandwiches and salads. But the key was when you stepped into the tour itself. Unlike Jameson’s, which immediately assigned you a guide who regaled you with the history of the distillery, at Teeling, you were put in a room that had museum-like cases (with a lot of light, though) holding historic whiskey-distilling items. You were left in the room for about 10 to 15 minutes to wander around and look at the history. Then a guide came out, took you to a three-minute video of Teeling family members welcoming you, and off you went to the modern distillery, where the gleaming copper pot stills had glass enclosures so you could see into the process itself as the distillery went about producing the whiskey.

The idea was simple: History, sure, look at what you want of it, but we are the new generation, with new approaches, and we want to be transparent about our distilling process — no secrets here. We are the ones who meet the needs of 21st-century whiskey drinkers — not with our history, but with our exciting whiskey.

For all intents and purposes, it works. That’s the distinct brand impression you have as you emerge from the experience. Teeling is the leading progressive new generation of Irish whiskey distilleries.

That’s brand experience. The experience is designed at multiple levels and orchestrated to create a specific image of a company with its customers. It’s not personalized, per se; it’s just appealing to the bulk of the customers who interact with the company.

Read also: Adobe XD for Windows review: A powerful but usable design tool

Is Adobe doing that? Indirectly, but not directly. If this was solely what Adobe was trying to do, I’d worry about it, though, not be dead set against it. But that’s more the job of the agency to support this, rather than the tech company, per se. To be fair, those lines, between an agency and a tech company, are getting increasingly blurry — and have been for a while. Look at Infor’s Hook and Loop internal agency. However, this is not Adobe’s focus.

Consumable Experiences

This is the sweet spot for Adobe. I’m not going to focus on that (that’s part two, coming either next week or the week after). I am going to explain this a bit though. This is where technology can not just impact experience(s) but actually create them and monetize them, too.

This goes back a long way. For the background, read this interview I did with Adobe’s CMO.com back in 2016 so you get a good sense of where I’m going with this third experience.

Ultimately, this experience is created and consumable. “Created” means a designed commodity that has a purpose to evoke an emotion. This could be seen as a product or service, but it might be a series of one or the other or both. “Consumable” means both available to the customer as an offering and, more likely than not, monetized. Go to a great restaurant and, one way or the other, perhaps built into the cost of the meal, you have paid for something that, when you leave, makes you think or say, “That was amazing.” The food was amazing, as well as the service, the ambiance of the restaurant itself, the individuals who are providing the service, the ancillaries (such as a bartender with a flair who puts on a literal show in his/her drink creation), etc. That is a consumable (no pun intended) experience. The entire restaurant is geared not just to providing you with great food, but, all in all, a great — and memorable — feeling. Go to Restaurant Daniel in New York City. When you leave, you’ll know what I mean. Trust me.

Back to Adobe (and part two, coming soon)

While I’ll dive much more into where I think Adobe is going, where it is, and what I think it probably needs to do — if my opinion matters at all — in part two of this post (either next week or the week after), I am actually more than comfortable with Adobe’s approach and messaging, which, when it comes to customer experience, as a theme and a meme for a technology company, is something that I never thought I’d say. I will continue to maintain that, when it comes to the customer’s overall experience (how a customer feels about a company over time), you cannot enable that with technology, which is why you have no such thing as a system of experience. You can enable engagement, for sure. But what I think you can enable is the creation of consumable experiences, which are a commodity or at least an actual thing. Adobe is doing just that and can thus justify its messaging and positioning and brand promise. How well it is doing that and what it is doing, I’ll leave to the second part.

See you at Restaurant Daniel.

Previous and related coverage

Zoho at a crossroads: Stepping up means stepping out

Zoho has been one of the great successes in the world of small business technologies. Few companies have been able to succeed with a similar business model, yet Zoho has been wildly successful. But they are also enshrouded in mystery. Read on to see what’s under their veil and what they have to do next — if they want to.

Infor Innovation Analyst Summit 2018: I totally get it and yet, I don’t see it

Infor is a company on the fast track, though you wouldn’t know that. It is among the most design-focused, progressive companies in the technology world, and it has an offering that can go to head-to-head with anyone’s out there. Yet, it is a best-kept secret. I’m now going to show and tell. Read on — Infor is now in the sunlight.

Conversations are precisely what we need to think about

Thought leader Mitch Lieberman takes the conversation about conversations from personalization to precision. What the hell is the benefit for business of that level of deep thinking? Listen — precisely. Personally, you’ll learn something.

How to fix your brand experience from the outside in

Johann Wrede: To bring real consistency to the brand experience, leaders should stop slicing the problem into pieces that they try to solve independently.

ZDNet | crm RSS

SQL Server Graph Databases – Part 5: Importing Relational Data into a Graph Database

The series so far:

  1. SQL Server Graph Databases – Part 1: Introduction
  2. SQL Server Graph Databases – Part 2: Querying Data in a Graph Database
  3. SQL Server Graph Databases – Part 3: Modifying Data in a Graph Database
  4. SQL Server Graph Databases – Part 4: Working with hierarchical data in a graph database
  5. SQL Server Graph Databases – Part 5: Importing Relational Data into a Graph Database

With the release of SQL Server 2017, Microsoft introduced graph database features to support data sets that contain complex relationships between entities. The graph capabilities are integrated into the database engine and require no special configurations or installations. You can use these features in addition to or independently of the traditional relational structures. For example, you might implement a graph database for a new master data management solution that could benefit from both graph and relational tables.

When creating a graph database, you might be working with new data, existing data, or a combination of both. In some cases, the data might already exist in relational tables, which do not support the graph features. Only node and edge tables in a graph database allow you to use the new capabilities, in which case, you must either copy the data over to the graph tables or forget about using the graph features altogether.

For those interested in the first option, this article demonstrates how to move from a relational structure to a graph structure, using data from the AdventureWorks2017 sample database. The database might not represent the type of data you had in mind, but it provides a handy way to illustrate how to migrate to a graph structure, using a relational schema already familiar to many of you. Such a recognizable structure also helps demonstrate various ways to query the data once it’s in the graph tables.

Moving from Relational Tables to Graph Tables

The AdventureWorks2017 database includes transactional data related to the fictitious company Adventure Works, which sells bicycles and related equipment to retail outlets and online customers. For this article, we’ll focus on the retail outlets that ordered the products, the sales reps who sold the products, and the vendors who supplied the products, along with such details as the number of items ordered and the amount paid for those items.

To retrieve this type of data from the AdventureWorks2017 database as it exists in its current state, you would be accessing different combinations of the tables shown in the following figure.

[Figure: The AdventureWorks2017 relational tables involved in retrieving the store, sales rep, vendor, and product data]

For those who’ve been around SQL Server documentation for a while, such tables as SalesOrderHeader, SalesOrderDetail, Product, and Person should be quite familiar because they’re included in countless examples that demonstrate various ways to work with relational data. However, suppose that you now want to pull some of this information into a graph database, in which case, the data model might look more like the one shown in the next figure.

[Figure: The proposed graph data model, with four node tables and three edge tables]

The data model includes only four nodes (Stores, SalesReps, Vendors, and Products) and only three edges (Purchases, Sells, and Supplies). Together these nodes and edges define the following relationships:

  • Stores purchase products
  • Sales reps sell products
  • Vendors supply products

You’ll define these relationships within the edge tables by mapping the originating node to the terminating node for each relationship, as you saw in the first article in this series. The implication here is that you should first populate the node tables before the edge tables so you can reference the originating and terminating node IDs when defining your relationships.

Creating and Populating the Node Tables

Before you can create and populate your node tables, you must determine where to put the tables. For the examples in this article, I created the graph schema within the AdventureWorks2017 database, using the following T-SQL code:
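
A minimal sketch of that setup, assuming the schema is simply named graph:

    USE AdventureWorks2017;
    GO

    -- Dedicated schema to hold the graph node and edge tables.
    CREATE SCHEMA graph;
    GO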

You do not need to locate the graph tables in the graph schema or even in the AdventureWorks2017 database. However, if you plan to try out the examples to follow and want to locate the graph tables elsewhere, be sure to update the T-SQL code accordingly.

With the graph schema in place (or wherever you locate the tables), you can then create and populate the Stores node table, which includes two user-defined columns, StoreID and StoreName, as shown in the following CREATE TABLE statement:
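
A sketch of what that statement might look like (the column data types are my assumption):

    CREATE TABLE graph.Stores
    (
      StoreID INT NOT NULL PRIMARY KEY,
      StoreName NVARCHAR(50) NOT NULL
    ) AS NODE;  -- AS NODE turns this into a graph node table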

The example follows the same procedures used in the first article to create and populate node tables, so be sure to refer back to the article if you’re unsure about what we’re doing here. Keep in mind that you must include the AS NODE clause in the CREATE TABLE statement. You can also add whatever other user-defined columns you want to include. SQL Server will automatically generate the table’s $node_id column.

You can then use an INSERT…SELECT statement to populate the Stores table, as you would with any SQL Server table. In this case, you must join the Sales.Customer table to the Sales.Store table to get the store name. In addition, when supplying values for the StoreID column in the Stores table, you should use the CustomerID value in the Customer table, rather than the StoreID value in that table, because the SalesOrderHeader table uses the CustomerID value. This approach helps to keep things simpler when populating the edge tables. SQL Server automatically populates the $node_id column.
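
Assuming the column names above, the INSERT…SELECT might be written like this:

    -- Use CustomerID (not Customer.StoreID) as the store key, because
    -- SalesOrderHeader references stores through CustomerID.
    INSERT INTO graph.Stores (StoreID, StoreName)
    SELECT c.CustomerID, s.Name
    FROM Sales.Customer AS c
      INNER JOIN Sales.Store AS s
        ON c.StoreID = s.BusinessEntityID;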

That’s all there is to setting up the Stores table, and creating and populating the SalesReps table is even easier:
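
A possible version, with RepID, FirstName, and LastName as assumed column names:

    CREATE TABLE graph.SalesReps
    (
      RepID INT NOT NULL PRIMARY KEY,
      FirstName NVARCHAR(50) NOT NULL,
      LastName NVARCHAR(50) NOT NULL
    ) AS NODE;

    -- PersonType = 'SP' limits the rows to salespeople.
    INSERT INTO graph.SalesReps (RepID, FirstName, LastName)
    SELECT BusinessEntityID, FirstName, LastName
    FROM Person.Person
    WHERE PersonType = 'SP';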

For this example, you can pull all the data directly from the Person table, limiting the results to those rows with a PersonType value of SP (for salesperson). If you want to include such information as sales quotas or job titles in the table, you must join the Person table to the SalesPerson or Employee table (or both). For this example, however, the Person table is enough.

The next table to create and populate is Products. For this, you can pull all the data from the Production.Product table:
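
One way to write it, with ProductName as an assumed column name:

    CREATE TABLE graph.Products
    (
      ProductID INT NOT NULL PRIMARY KEY,
      ProductName NVARCHAR(50) NOT NULL
    ) AS NODE;

    -- FinishedGoodsFlag = 1 keeps only salable products.
    INSERT INTO graph.Products (ProductID, ProductName)
    SELECT ProductID, Name
    FROM Production.Product
    WHERE FinishedGoodsFlag = 1;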

For this example, when retrieving data from the Product table, you should include a WHERE clause that filters the data so that only rows with a FinishedGoodsFlag value of 1 are included. This ensures that you include only salable products in the Products table.

The final node table is Vendors, which gets all its data from the Purchasing.Vendor table:
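
A sketch, with VendorID and VendorName as assumed column names:

    CREATE TABLE graph.Vendors
    (
      VendorID INT NOT NULL PRIMARY KEY,
      VendorName NVARCHAR(50) NOT NULL
    ) AS NODE;

    INSERT INTO graph.Vendors (VendorID, VendorName)
    SELECT BusinessEntityID, Name
    FROM Purchasing.Vendor;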

That’s all there is to creating and populating the node tables. Once they’re in place, you can start in on your edge tables.

Creating and Populating the Edge Tables

Creating an edge table is just as simple as a node table, with a few notable differences. For the edge table, the table definition requires an AS EDGE clause, rather than an AS NODE clause, and the user-defined columns are optional. (Node tables require at least one user-defined column.) In addition, SQL Server automatically generates the $edge_id column, rather than the $node_id column.

The first edge table is Orders, which includes three user-defined columns, as shown in the following CREATE TABLE statement:
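
A sketch of the definition; OrderDate, OrderQty, and LineTotal are my guesses at the three user-defined columns:

    CREATE TABLE graph.Orders
    (
      OrderDate DATETIME NOT NULL,
      OrderQty SMALLINT NOT NULL,
      LineTotal MONEY NOT NULL
    ) AS EDGE;  -- AS EDGE turns this into a graph edge table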

After you create the Orders table, you can add the data, which relies on the SalesOrderHeader and SalesOrderDetail tables to supply the values for the user-defined columns and, more importantly, to provide the structure for defining the relationships between the Stores and Products nodes:
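
Assuming the columns above, the INSERT…SELECT might look like this:

    INSERT INTO graph.Orders ($from_id, $to_id, OrderDate, OrderQty, LineTotal)
    SELECT s.node1, p.node2, soh.OrderDate, sod.OrderQty, sod.LineTotal
    FROM Sales.SalesOrderHeader AS soh
      INNER JOIN Sales.SalesOrderDetail AS sod
        ON soh.SalesOrderID = sod.SalesOrderID
      -- The subqueries rename $node_id so it can be referenced in the SELECT list.
      INNER JOIN (SELECT $node_id AS node1, StoreID FROM graph.Stores) AS s
        ON soh.CustomerID = s.StoreID
      INNER JOIN (SELECT $node_id AS node2, ProductID FROM graph.Products) AS p
        ON sod.ProductID = p.ProductID;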

After joining the SalesOrderHeader and SalesOrderDetail tables, the SELECT statement joins the SalesOrderHeader table to the Stores table, based on the CustomerID and StoreID values. The join uses a subquery to retrieve only the $node_id and StoreID columns from the Stores table and to rename the $node_id column to node1. The query will fail if you try to use $node_id in the SELECT list. You can then join the SalesOrderHeader table to the Products table, using the same logic as when joining to the Stores table.

The node1 and node2 columns returned by the SELECT statement provide the values for the $from_id and $to_id columns in the edge table. As you’ll recall from the first article, you must specifically provide these values when inserting data into an edge table. The values are essential to defining the relationships between the originating and terminating nodes. SQL Server automatically populates the $edge_id column.

The next step is to create and populate the Sells edge table, which works much the same way as the Orders table, even when it comes to the user-defined columns. The main difference is that the relationships originate with the SalesReps table, as shown in the following T-SQL code:
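
A sketch that mirrors the Orders load but starts from the SalesReps node (joining on SalesPersonID is my assumption):

    CREATE TABLE graph.Sells
    (
      OrderDate DATETIME NOT NULL,
      OrderQty SMALLINT NOT NULL,
      LineTotal MONEY NOT NULL
    ) AS EDGE;

    INSERT INTO graph.Sells ($from_id, $to_id, OrderDate, OrderQty, LineTotal)
    SELECT r.node1, p.node2, soh.OrderDate, sod.OrderQty, sod.LineTotal
    FROM Sales.SalesOrderHeader AS soh
      INNER JOIN Sales.SalesOrderDetail AS sod
        ON soh.SalesOrderID = sod.SalesOrderID
      INNER JOIN (SELECT $node_id AS node1, RepID FROM graph.SalesReps) AS r
        ON soh.SalesPersonID = r.RepID
      INNER JOIN (SELECT $node_id AS node2, ProductID FROM graph.Products) AS p
        ON sod.ProductID = p.ProductID;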

The fact that the Orders and Sells tables include the same user-defined columns points to the possibility that you could create a fifth node table for sales orders and then include columns such as OrderDate in there. However, this approach could make your schema and queries unnecessarily complicated, while providing little benefit. On the other hand, this approach helps to eliminate duplicate data. As with any database, the exact layout of your graph model will depend on the type of data you’re storing and how you plan to query that data.

The last step is to create and populate the Supplies table. In this case, the structure for the relationships is available through the ProductVendor table:
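
Something along these lines, with StandardPrice as the single user-defined column:

    CREATE TABLE graph.Supplies
    (
      StandardPrice MONEY NOT NULL
    ) AS EDGE;

    INSERT INTO graph.Supplies ($from_id, $to_id, StandardPrice)
    SELECT v.node1, p.node2, pv.StandardPrice
    FROM Purchasing.ProductVendor AS pv
      INNER JOIN (SELECT $node_id AS node1, VendorID FROM graph.Vendors) AS v
        ON pv.BusinessEntityID = v.VendorID
      INNER JOIN (SELECT $node_id AS node2, ProductID FROM graph.Products) AS p
        ON pv.ProductID = p.ProductID;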

The ProductVendor table does all the product-vendor mapping for you and includes the StandardPrice values. You need only join this table to the Vendors and Products tables to get the originating and terminating node IDs.

Retrieving Store Sales Data

With the graph tables now defined and populated, you’re ready to start querying them, just like you saw in the second article in this series. For example, you can use the following SELECT statement to return information about the products that each store has ordered:
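
A sketch of such a query against the tables defined above:

    SELECT st.StoreName, pr.ProductName,
           o.OrderDate, o.OrderQty, o.LineTotal
    FROM graph.Stores AS st,
         graph.Orders AS o,
         graph.Products AS pr
    WHERE MATCH(st-(o)->pr)   -- stores that ordered products
    ORDER BY st.StoreName, pr.ProductName;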

The SELECT statement uses the MATCH function to specify what data to retrieve. As described in the second article, the function lets you define a search pattern based on the relationships between nodes. You can use the function only in the WHERE clause of a query that targets node and edge tables. The following table shows part of the results that the SELECT statement returns. (The statement returns over 60,000 rows.)

[Figure: Partial results of the stores-orders-products query]

In the above example, the MATCH clause specifies the relationship store orders product. If you were to retrieve the same data directly from the relational tables, you could not use the MATCH clause. Instead, your query would look similar to the following:
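
Roughly along these lines, assuming the same output columns as the graph query:

    SELECT s.Name AS StoreName, p.Name AS ProductName,
           soh.OrderDate, sod.OrderQty, sod.LineTotal
    FROM Sales.SalesOrderHeader AS soh
      INNER JOIN Sales.SalesOrderDetail AS sod
        ON soh.SalesOrderID = sod.SalesOrderID
      INNER JOIN Sales.Customer AS c
        ON soh.CustomerID = c.CustomerID
      INNER JOIN Sales.Store AS s
        ON c.StoreID = s.BusinessEntityID
      INNER JOIN Production.Product AS p
        ON sod.ProductID = p.ProductID
    WHERE p.FinishedGoodsFlag = 1
    ORDER BY s.Name, p.Name;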

Although this query is more complex than the previous one, you can use it without having to create and populate graph tables. As with any data, you’ll have to determine on a case-by-case basis when a graph database will be useful to your circumstances and which structure will deliver the best-performing queries.

Returning now to the graph tables, you can modify the preceding example by grouping the data based on the stores and products, as shown in the following example:
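
A sketch of the grouped version, using the column names assumed above:

    SELECT st.StoreName, pr.ProductName,
           SUM(o.OrderQty) AS TotalQty
    FROM graph.Stores AS st,
         graph.Orders AS o,
         graph.Products AS pr
    WHERE MATCH(st-(o)->pr)
    GROUP BY st.StoreName, pr.ProductName
    HAVING SUM(o.OrderQty) > 100   -- keep only totals above 100
    ORDER BY st.StoreName, pr.ProductName;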

As you can see, you can use the MATCH function in conjunction with other clauses, including the HAVING clause, which in this case, limits the results to rows with a total quantity greater than 100. The following figure shows the data now returned by the SELECT statement.

[Figure: Results of the grouped store/product query, limited to total quantities greater than 100]

When implementing a graph database based on existing relational data, you might want to copy only part of the data set into the graph tables, in which case, you’ll likely need to create queries that can retrieve data from both the graph and relational tables. One way to achieve this is to define a common table expression (CTE) that retrieves the graph data and then use the CTE when retrieving the relational data, as shown in the following example:
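
A sketch of that pattern, using a hypothetical StoreOrders CTE:

    WITH StoreOrders AS
    (
      SELECT st.StoreName, pr.ProductID, pr.ProductName,
             SUM(o.OrderQty) AS TotalQty
      FROM graph.Stores AS st,
           graph.Orders AS o,
           graph.Products AS pr
      WHERE MATCH(st-(o)->pr)
      GROUP BY st.StoreName, pr.ProductID, pr.ProductName
    )
    SELECT so.StoreName, so.ProductName,
           pc.Name AS Category, psc.Name AS Subcategory, so.TotalQty
    FROM StoreOrders AS so
      INNER JOIN Production.Product AS p
        ON so.ProductID = p.ProductID
      INNER JOIN Production.ProductSubcategory AS psc
        ON p.ProductSubcategoryID = psc.ProductSubcategoryID
      INNER JOIN Production.ProductCategory AS pc
        ON psc.ProductCategoryID = pc.ProductCategoryID
    ORDER BY so.StoreName, so.ProductName;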

In this case, the outer SELECT statement joins the data from the CTE to the Product, ProductSubcategory, and ProductCategory tables in order to include the product categories and subcategories in the results, as shown in the following figure.

[Figure: Results of joining the graph CTE to the Product, ProductSubcategory, and ProductCategory tables]

Being able to access both graph and relational data makes it possible to implement a graph database for those complex relationships that can justify the additional work, while still retaining the basic relational structure for all other data.

Retrieving Sales Rep and Vendor Data

Of course, once you have your graph tables in place, you can run a query against any of them. For example, the following query returns a list of sales reps and the products they have sold, along with details about the orders:
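
Something like this, using the Sells edge:

    SELECT r.FirstName, r.LastName, pr.ProductName,
           sl.OrderDate, sl.OrderQty, sl.LineTotal
    FROM graph.SalesReps AS r,
         graph.Sells AS sl,
         graph.Products AS pr
    WHERE MATCH(r-(sl)->pr)
    ORDER BY r.LastName, pr.ProductName;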

As you can see, retrieving information about the Sells relationships works just like returning data about the Orders relationships, but now the results are specific to each sales rep, as shown in the following figure.

[Figure: Partial results of the sales rep query]

The results shown here are only a small portion of the returned data. The statement actually returns over 60,000 rows. However, you can aggregate the data just as you saw earlier:
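
One possible aggregation, sketched with an assumed grouping and filter:

    SELECT r.FirstName, r.LastName, pr.ProductName,
           SUM(sl.OrderQty) AS TotalQty
    FROM graph.SalesReps AS r,
         graph.Sells AS sl,
         graph.Products AS pr
    WHERE MATCH(r-(sl)->pr)
    GROUP BY r.FirstName, r.LastName, pr.ProductName
    HAVING SUM(sl.OrderQty) > 100   -- threshold chosen for illustration
    ORDER BY r.LastName, pr.ProductName;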

Now the SELECT statement returns only 58 rows, with the first 10 shown below.

[Figure: First 10 rows of the aggregated sales rep results]

There’s little difference between returning data based on the Orders relationships or the Sells relationships, except that the originating nodes are different. You can also take the same approach to retrieve vendor data. Just be sure to update the table alias references as necessary, as shown in the following example:
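
A sketch of the vendor version, using a hypothetical CTE built on the Supplies edge:

    WITH VendorProducts AS
    (
      SELECT v.VendorName, pr.ProductID, pr.ProductName
      FROM graph.Vendors AS v,
           graph.Supplies AS sp,
           graph.Products AS pr
      WHERE MATCH(v-(sp)->pr)
    )
    SELECT vp.VendorName, vp.ProductName,
           pc.Name AS Category, psc.Name AS Subcategory
    FROM VendorProducts AS vp
      INNER JOIN Production.Product AS p
        ON vp.ProductID = p.ProductID
      INNER JOIN Production.ProductSubcategory AS psc
        ON p.ProductSubcategoryID = psc.ProductSubcategoryID
      INNER JOIN Production.ProductCategory AS pc
        ON psc.ProductCategoryID = pc.ProductCategoryID
    ORDER BY vp.VendorName, vp.ProductName;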

This should all look familiar to you. The SELECT statement uses a CTE to join the graph and relational data together. The following table shows the first 10 rows of the 32 that the statement returns.

[Figure: First 10 rows of the vendor and product results, with subcategories and categories]

As you can see, the results include the vendor and product names, along with the product subcategories and categories.

Digging into the Graph Data

Once you get the basics down of how to query your graph tables, you can come up with other ways to understand the relationships between the nodes. For example, the following SELECT statement attempts to identify sales reps who might be focusing too heavily on certain vendors:
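
A sketch of such a query; the exact threshold calculation is an assumption on my part (here, 50 times the average line total across all Sells edges):

    SELECT r.FirstName, r.LastName, v.VendorName,
           SUM(sl.LineTotal) AS TotalSales
    FROM graph.SalesReps AS r,
         graph.Sells AS sl,
         graph.Products AS pr,
         graph.Supplies AS sp,
         graph.Vendors AS v
    -- Chain the pattern: the rep sells a product, and that product is supplied by a vendor.
    WHERE MATCH(r-(sl)->pr<-(sp)-v)
    GROUP BY r.FirstName, r.LastName, v.VendorName
    HAVING SUM(sl.LineTotal) > 50 * (SELECT AVG(LineTotal) FROM graph.Sells)
    ORDER BY TotalSales DESC;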

The statement groups the data by the name of the sales reps and then by the vendors. The statement also includes a HAVING clause that calculates an amount 50 times the average sales and then compares that to the total sales of each sales rep. Only reps that go over the calculated amount are included in the results, as shown in the following figure.

[Figure: Sales reps whose totals with particular vendors exceed the calculated threshold]

By being able to return this type of information, you can identify patterns that point to anomalies or specific trends in the data set. For instance, suppose you now want to identify the products that stores have bought based on a specified product that they also bought (a scenario sometimes referred to as “customers who bought this also bought that”).

One way to get this information is to use a CTE to retrieve the IDs of the stores that ordered the specified product and then, for each store, return the list of other products that the store ordered. To achieve this, use the CTE to qualify your query so it returns only the other products that the stores bought:
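
A sketch of that approach; any further narrowing of the store list (the results discussed below cover three stores) is not reproduced here:

    WITH HelmetBuyers AS
    (
      -- Stores that ordered the specified product.
      SELECT st.StoreID
      FROM graph.Stores AS st,
           graph.Orders AS o,
           graph.Products AS pr
      WHERE MATCH(st-(o)->pr)
        AND pr.ProductName = 'Sport-100 Helmet, Blue'
    )
    SELECT st.StoreName, pr.ProductName,
           SUM(o.OrderQty) AS TotalQty
    FROM graph.Stores AS st,
         graph.Orders AS o,
         graph.Products AS pr
    WHERE MATCH(st-(o)->pr)
      AND st.StoreID IN (SELECT StoreID FROM HelmetBuyers)
      AND pr.ProductName <> 'Sport-100 Helmet, Blue'   -- exclude the seed product
    GROUP BY st.StoreName, pr.ProductName
    ORDER BY st.StoreName, pr.ProductName;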

The outer SELECT statement returns the list of products that each of the three stores has ordered. The key is to use the IN operator in a WHERE clause condition to compare the StoreId value to a list of store IDs returned by the CTE. You should also include a WHERE clause condition to exclude the product Sport-100 Helmet, Blue. The SELECT statement returns the results shown in the following figure.

[Figure: Other products ordered by the stores that bought the Sport-100 Helmet, Blue]

There are other ways you can get at “customers who bought this also bought that” information, such as using Python or R, but this approach provides a relatively simple way to get the data from a graph database, without having to jump through too many hoops.

Making the Best of Both Worlds

Because the graph database features are integrated with the database engine, there’s no reason you can’t work with graph and relational data side by side, depending on your application requirements and the nature of your data. This integration also gives you the flexibility to incorporate graph tables into an existing relational structure or make them both part of the design when planning for a new application. Keep in mind, however, that the graph features are still new to SQL Server and lack some of the advanced capabilities available in more established graph products. Perhaps after a couple more releases, SQL Server will be a more viable contender in the graph database market, at least when used in conjunction with relational data.

SQL – Simple Talk