
Using Legacy Data Sources in Tabular 1400

The modern Get Data experience in Tabular 1400 brings interesting new data discovery and transformation capabilities to Analysis Services. However, not every BI professional is equally excited. Especially those who prefer to build their models exclusively on top of SQL Server databases or data warehouses and appreciate the steadiness of tried and proven T-SQL queries over fast SQL OLE DB provider connections might not see a need for mashups. If you belong to this group of BI professionals, there is good news: Tabular 1400 fully supports provider data sources and native query partitions. The modern Get Data experience is optional.

Upgrading from 1200 to 1400

Perhaps the easiest way to create a Tabular 1400 model with provider data sources and native query partitions is to upgrade an existing 1200 model to the 1400 compatibility level. If you use Windiff or a similar tool to compare the Model.bim file in your Tabular project before and after the upgrade, you will find that not much has changed. In fact, the only change concerns the compatibilityLevel parameter, which the upgrade logic sets to a value of 1400, as the following screenshot reveals.

At the 1400 compatibility level, regardless of the data sources and table partitions, you can use any advanced modeling feature, such as detail rows expressions and object-level security. There are no dependencies on structured data sources or M partitions using the Mashup engine. Legacy provider data sources and native query partitions work just as well. They bypass the Mashup engine. It’s just two different code paths to get the data.

Provider data sources versus structured data sources

Provider data sources get their name from the fact that they define the parameters for a data provider in the form of a connection string that the Analysis Services engine then uses to connect to the data source. They are sometimes referred to as legacy data sources because they are typically used in 1200 and earlier compatibility levels to define the data source details.

Structured data sources, on the other hand, get their name from the fact that they define the connection details in structured JSON property bags. They are sometimes referred to as modern or Power Query/M-based data sources because they correspond to Power Query/M-based data access functions, as explained in more detail in Supporting Advanced Data Access Scenarios in Tabular 1400 Models.

At first glance, provider data sources have an advantage over structured data sources because they provide full control over the connection string. You can specify any advanced parameter that the provider supports. In contrast, structured data sources only support the address parameters and options that their corresponding data access functions support. This is usually sufficient, however. Note that provider data sources also have disadvantages, as explained in the next section.

A small sample application can help to illustrate the metadata differences between provider data sources and structured data sources. Both can be added to a Tabular 1400 model using Tabular Object Model (TOM) or the Tabular Model Scripting Language (TMSL).
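To get a feel for the metadata differences, the two data source definitions can be sketched as TMSL-style JSON. The following Python mock-up is illustrative only; the property names follow the Tabular 1400 JSON schema, but the server, database, and provider values are hypothetical:

```python
import json

# Hypothetical provider data source: everything lives in one connection string.
provider_data_source = {
    "type": "provider",
    "name": "SqlServer localhost AdventureWorksDW",
    "connectionString": "Provider=SQLNCLI11;Data Source=localhost;"
                        "Initial Catalog=AdventureWorksDW;Integrated Security=SSPI",
    "impersonationMode": "impersonateServiceAccount",
}

# Hypothetical structured data source: the same details as a JSON property bag.
structured_data_source = {
    "type": "structured",
    "name": "SQL/localhost;AdventureWorksDW",
    "connectionDetails": {
        "protocol": "tds",
        "address": {"server": "localhost", "database": "AdventureWorksDW"},
        "authentication": None,
        "query": None,
    },
    "credential": {"AuthenticationKind": "ServiceAccount"},
}

print(json.dumps(provider_data_source, indent=2))
print(json.dumps(structured_data_source, indent=2))
```

Side by side, the difference is apparent: the provider data source is an opaque connection string that the AS engine hands to the data provider, while the structured data source spells out each connection detail as a discrete property.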

Note that Analysis Services always invokes the Mashup engine when using structured data sources to get the data. It might or might not for provider data sources. The choice depends on the table partitions on top of the data source, as the next section explains.

Query partitions versus M partitions

Just as there are multiple types of data source definitions in Tabular 1400, there are also multiple partition source types to import data into a table. Specifically, you can define a partition by using a QueryPartitionSource or an MPartitionSource, as in the following TOM code sample.
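The original TOM sample is not reproduced here, but the resulting partition metadata can be sketched in TMSL terms. In the following Python mock-up, the partition names, query, and M expression are hypothetical; only the "type" values ("query" versus "m") and the overall shape reflect the Tabular 1400 schema:

```python
import json

# Two partitions on one table: a query partition source and an M partition source.
partitions = [
    {
        "name": "FactSales-Query",
        "source": {
            "type": "query",
            "dataSource": "SQL/localhost;AdventureWorksDW",
            "query": "SELECT * FROM dbo.FactInternetSales",
        },
    },
    {
        "name": "FactSales-M",
        "source": {
            "type": "m",
            "expression": 'let Source = #"SQL/localhost;AdventureWorksDW", '
                          'Rows = Source{[Schema="dbo",Item="FactInternetSales"]}[Data] '
                          'in Rows',
        },
    },
]

print(json.dumps(partitions, indent=2))
```

Note that the query partition source references a data source by name, while the M partition source carries the data access logic inside its expression.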

As illustrated, you can mix query partitions with M partitions even on a single table. The only requirement is that all partition sources must return the same set of source columns, mapped to table columns at the Tabular metadata layer. In the example above, both partitions use the same data source and import the same data, so you end up with duplicate rows. This is normally not what you want, but in this concrete example, the duplicated rows help to illustrate that Analysis Services could indeed process both partition sources successfully, as in the following screenshot.

The Model.bim file reveals that the M and query partition sources reference a structured data source, but they could also reference a provider data source, as in the screenshot below the following table, which summarizes the possible combinations. In short, you can mix and match to your heart’s content.

  # | Data Source            | Partition Source       | Comments
  1 | Provider Data Source   | Query Partition Source | The AS engine uses the cartridge-based connectivity stack to access the data source.
  2 | Provider Data Source   | M Partition Source     | The AS engine translates the provider data source into a generic structured data source and then uses the Mashup engine to import the data.
  3 | Structured Data Source | Query Partition Source | The AS engine wraps the native query on the partition source into an M expression and then uses the Mashup engine to import the data.
  4 | Structured Data Source | M Partition Source     | The AS engine uses the Mashup engine to import the data.

Scenarios 1 and 4 are straightforward. Scenario 3 is practically equivalent to scenario 4. Instead of creating a query partition source with a native query and having the AS engine convert it into an M expression, you could define an M partition source in the first place and use the Value.NativeQuery function to specify the native query, as the following screenshot demonstrates. Of course, this only works for connectors that support native source queries and the Value.NativeQuery function.
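Conceptually, the wrapping in scenario 3 amounts to a simple text transformation. The following Python sketch shows the idea; the generated M is illustrative, and the exact expression the AS engine produces may differ:

```python
def wrap_native_query(data_source_name: str, native_query: str) -> str:
    """Sketch of wrapping a native query on a structured data source
    into an M expression via Value.NativeQuery."""
    # Double quotes must be doubled inside an M string literal.
    escaped = native_query.replace('"', '""')
    return (
        f'let\n'
        f'    Source = #"{data_source_name}",\n'
        f'    Result = Value.NativeQuery(Source, "{escaped}")\n'
        f'in\n'
        f'    Result'
    )

m_expr = wrap_native_query("SQL/localhost;AdventureWorksDW",
                           "SELECT TOP 10 * FROM dbo.DimDate")
print(m_expr)
```

The point is that the native query survives intact; the M layer merely passes it through to the source, which is why scenarios 3 and 4 end up on the same code path.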

Scenario 2, “M partition on top of a provider data source” is more complex than the others because it involves converting the provider data source into a generic structured data source. In other words, a provider data source pointing to a SQL Server database is not equivalent to a structured SQL Server data source because the AS engine does not convert this provider data source into a structured SQL Server data source. Instead, it converts it into a generic structured OLE DB, ODBC, or ADO.NET data source depending on the data provider that the provider data source referenced. For SQL Server connections, this is usually an OLE DB data source.

The fact that provider data sources are converted into generic structured data sources has important implications. For starters, M expressions on top of a generic data source differ from M expressions on top of a specific structured data source. For example, as the next screenshot highlights, an M expression over an OLE DB data source requires additional navigation steps to get to the desired table. You cannot simply take an M expression based on a structured SQL Server data source and put it on top of a generic OLE DB provider data source. If you tried, you would most likely get an error that the expression references an unknown variable or function.

Moreover, the Mashup engine cannot apply its query optimizations for SQL Server when using a generic OLE DB data source, so M expressions on top of generic provider data sources cannot be processed as efficiently as M expressions on top of specific structured data sources. For this reason, it is better to add a new structured data source to the model for any new M expression-based table partitions than to use an existing provider data source. Provider data sources and structured data sources can coexist in the same Tabular model.

In Tabular 1400, the main purpose of a provider data source is backward compatibility with Tabular 1200, so that the processing behavior of your models does not change just because you upgraded to 1400, and so that any ETL logic that programmatically generates data sources and table partitions continues to work seamlessly. As mentioned, query partitions on top of a provider data source bypass the Mashup engine, and the Microsoft SQL Server Native Client OLE DB Provider does indeed perform faster than the Mashup engine. In very large Tabular 1400 models connecting to SQL Server databases, it can therefore be advantageous to use a provider data source and query partitions. That said, thanks to a number of engine optimizations, processing performance with a structured data source is not necessarily inferior. This might seem counterintuitive, so it is a good idea to double-check the processing performance in your environment.

Data sources and partitions in SSDT Tabular

With TMSL and TOM, you can create data sources and table partitions in any combination, but this is not the case in SSDT Tabular. By default, SSDT creates structured data sources, and when you right-click a structured data source in Tabular Model Explorer and select Import New Tables, you launch the modern Get Data UI. Among other things, the default behavior helps to provide a consistent user interface and avoids confusion. You don’t need to weigh the pros and cons of provider versus structured data sources, and you don’t need to select a different partition source type and work with a different UI just because you want to write a native query. As explained in the previous section, an M expression using Value.NativeQuery is equivalent to a query partition over a structured data source.

Only if a model already contains provider data sources, say due to an upgrade from 1200, does SSDT display the legacy UI for editing these metadata objects. By the same token, when you right-click a provider data source in Tabular Model Explorer and select Import New Tables, you launch the legacy UI for defining a query partition source. If you don’t add any new data sources, the user interface is still consistent with the 1200 experience. Yet, if you mix provider and structured data sources in a model, the UI switches back and forth depending on what object type you edit. See the following screenshot with the modern experience on the left and the legacy UI on the right; which one you see depends on the data source type you right-clicked.

Fully enabling the legacy UI

BI professionals who prefer to build their Tabular models exclusively on top of SQL Server data warehouses using native T-SQL queries might look unfavorably at SSDT Tabular’s strong bias towards the modern Get Data experience. But the good news is that you can fully enable the legacy UI to create provider data sources in Tabular 1400 models, so you don’t need to resort to using TMSL or TOM for this purpose.

In the current version of SSDT Tabular, you must configure a DWORD parameter called “Enable Legacy Import” in the Windows registry. Setting this parameter to 1 enables the legacy UI. Setting it to zero or removing the parameter disables it again. To enable the legacy UI, you can copy the following lines into a .reg file and import the file into the registry. Do not forget to restart Visual Studio to apply the changes.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\14.0\Microsoft Analysis Services\Settings]
"Enable Legacy Import"=dword:00000001

With the legacy UI fully enabled, you can right-click on Data Sources in Tabular Model Explorer and choose to Import From Data Source (Legacy) or reuse Existing Connections (Legacy), as in the following screenshot. As you would expect, these options create provider data sources in the model and then you can create query partitions on top of these.

Wrapping things up

While AS engine, TMSL, and TOM give you full control over data sources and table partitions, SSDT Tabular attempts to simplify things by favoring M partitions over structured data sources wherever possible. The legacy UI only shows up if you already have provider data sources or query partitions in your model. Should legacy data sources and query partitions be first-class citizens in Tabular 1400? Perhaps SSDT should provide an explicit option in the user interface to enable the legacy UI to eliminate the need for configuring a registry parameter. Let us know if this is something we should do. Also, there is currently no SSDT support for creating M partitions over provider data sources or query partitions over structured data sources because these scenarios seem less important and less desirable. Do you need these features?

Send us your feedback via email to SSASPrev at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. Or simply post a comment to this article. Influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers!


Analysis Services Team Blog

How to Trigger Plugins on Custom Messages using Actions in Dynamics 365

When modifying a Microsoft Dynamics 365 solution, you may require functionality that is portable and can be triggered in multiple circumstances. For example, running a rollup field calculation job, which can be done via SDK messages but otherwise cannot be triggered manually out-of-the-box. In this instance, you may require a plugin or a custom workflow activity that calls the SDK message and runs the job. However, if you want to be able to run this from multiple places, a plugin may be more appropriate.

How does this work?

Custom actions were added in Microsoft Dynamics CRM 2013, and provide a fantastic point against which to register plugins. Once actions have been created, it is possible to register a plugin against their schema name, which acts like a custom SDK message that is fired whenever the action is called.
Custom actions can provide the target for a plugin as well as pass values to plugins. This is done by setting input parameters for the custom action and then accessing these from the plugin execution context.

To register a Plugin against a Custom Action:

  1. Navigate to your Solution > Processes > New
  2. Enter a Process name and select Action as the Category. For the Entity, select the entity for which you want to fire the Plugin. Note that the Process name that you enter will be what you use to refer to the action later when you register the plugin, so pick a name that will make it easy to identify.
  3. Click OK, add any additional steps and functionality you want to the action, then add any input parameters that you need for either your internal steps or for your plugin.
  4. Create your plugin.
  5. Register your plugin using the plugin registration tool bundled with the SDK.
  6. Right click on your plugin in the tool, and select ‘Register New Step’.
  7. In the Message field, select the custom action you previously created.
  8. Finish registering the step as normal.
  9. Add the action wherever you want the plugin to run.


Why is this useful?

Custom actions allow for a useful target for plugins to fire in multiple circumstances. For example, while a workflow activity could perform the same functionality, a custom workflow activity can only be called from a workflow or a process, while a custom action can be called from JavaScript and is generally more flexible, as the action itself can be edited and have functionality added inside CRM.


Magnetism Solutions Dynamics CRM Blog

how can I plot that by using mathematica?


Recent Questions – Mathematica Stack Exchange

Filter data based on a comma separated list using CALCULATE and PATHCONTAINS, even with external models!

Another interesting question I got this week kind of extends on the previous blog posts where I used disconnected slicers to filter data, but this time, instead of having the data already in the model, the request was to allow writing a filter based on a comma separated list. So off we go again into the DAX rabbit hole!

I decided I wanted to extend the previous example with this question. I created a new matrix visual and added the measure that shows the sales based on the data from the slicer selected (as discussed in the previous blog post). Next I added a measure that contains the list they want to filter. So now in this case I want to see sales for Colors Red, Green, Gold as entered in the comma separated list:


Next I need to write a measure that turns that comma separated list into a filter argument. As you might remember, DAX actually contains some functions that allow us to work with delimited lists for parent-child hierarchies, namely PATH and PATHCONTAINS. We can use those to determine whether rows need to be filtered or not. To get the color list and put it into the filter context, I create the following DAX expression:

Measure = var colorslist = TRIM(SUBSTITUTE(SUBSTITUTE([ColorList], ",","|")," ",""))
          var tableval = CALCULATE(SUM(FactOnlineSales[SalesAmount]),PATHCONTAINS(colorslist , DimProduct[Colorname]))
          return tableval

This measure uses variables for readability. The colorslist variable takes the value of [ColorList] and replaces the commas with the | characters needed for the PATH functions, then trims and cleans up the spaces using TRIM and SUBSTITUTE. Next I use our favorite function CALCULATE to determine whether or not each row in DimProduct[Colorname] is part of the color list; the PATHCONTAINS function returns TRUE whenever the value of the color name is part of the list, and thus the SUM is calculated only for those colors.
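The string handling in the measure can be sanity-checked outside DAX. This Python sketch mirrors the SUBSTITUTE/TRIM cleanup and the PATHCONTAINS membership test (a rough analogue, not the DAX engine's implementation):

```python
def to_path(color_list: str) -> str:
    """Mirror the DAX cleanup: commas become pipes, spaces are stripped."""
    return color_list.replace(",", "|").replace(" ", "").strip()

def path_contains(path: str, item: str) -> bool:
    """Rough equivalent of DAX PATHCONTAINS on a pipe-delimited path."""
    return item in path.split("|")

path = to_path("Red, Green, Gold")
print(path)                          # Red|Green|Gold
print(path_contains(path, "Green"))  # True
print(path_contains(path, "Blue"))   # False
```

The key insight is that the | delimiter turns the comma separated list into the same shape as a PATH result, so PATHCONTAINS can test membership row by row.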

This works like a charm:


I made sure the slicer selection is the same as the comma separated list to test the numbers. Now let’s change it to “Red, Gold” to see if it works:


and it does. So voilà, now you can let users type their selection into a measure.

Now this only works when you have access to the model, but you can also do this using Power BI Desktop pointing to an external model (either SSAS or a model already hosted in Power BI) in the same fashion. Now your users can connect to an external model and create their own custom selections!

I have uploaded the workbook here.


Kasper On BI

Using Azure Analysis Services on Top of Azure Data Lake Storage

The latest release of SSDT Tabular adds support for Azure Data Lake Store (ADLS) to the modern Get Data experience (see the following screenshot). Now you can augment your big data analytics workloads in Azure Data Lake with Azure Analysis Services and provide rich interactive analysis for selected data subsets at the speed of thought!

[Screenshot: ADLS connector available in the Get Data experience]

If you are unfamiliar with Azure Data Lake, check out the various articles at the Azure Data Lake product information site. Also read the article “Get started with Azure Data Lake Analytics using Azure portal.”

Following these instructions, I provisioned a Data Lake Analytics account called tpcds for this article and a new Data Lake Store called tpcdsadls. I also added one of my existing Azure Blob Storage accounts, which contains a 1 TB TPC-DS data set that I already created and used in the series “Building an Azure Analysis Services Model on Top of Azure Blob Storage.” The idea is to move this data set into Azure Data Lake as a highly scalable and sophisticated analytics backend, from which to serve a variety of Azure Analysis Services models.

For starters, Azure Data Lake can process raw data and put it into targeted output files so that Azure Analysis Services can import the data with less overhead. For example, you can remove any unnecessary columns at the source, which eliminates about 60 GB of unnecessary data from my 1 TB TPC-DS data set and therefore benefits processing performance, as discussed in “Building an Azure Analysis Services Model on Top of Azure Blob Storage – Part 3”.

Moreover, with relatively little effort and a few small changes to a U-SQL script, you can provide multiple targeted data sets to your users, such as a small data set for modelling purposes plus one or more production data sets with the most relevant data. In this way, a data modeler can work efficiently in SSDT Tabular against the small data set prior to deployment, and after production deployment, business users can get the relevant information they need from your Azure Analysis Services models in Microsoft Power BI, Microsoft Office Excel, and Microsoft SQL Server Reporting Services. And if a data scientist still needs more than what’s readily available in your models, you can use Azure Data Lake Analytics (ADLA) to run further U-SQL batch jobs directly against all the terabytes or petabytes of source data you may have. Of course, you can also take advantage of Azure HDInsight as a highly reliable, distributed and parallel programming framework for analyzing big data. The following diagram illustrates a possible combination of technologies on top of Azure Data Lake Store.

[Diagram: analyzing big data on top of Azure Data Lake Store]

Azure Data Lake Analytics (ADLA) can process massive volumes of data extremely quickly. Take a look at the following screenshot, which shows a Data Lake job processing approximately 2.8 billion rows of TPC-DS store sales data (~500 GB) in under 7 minutes!

[Screenshot: processing store sales data in Azure Data Lake Analytics]

The screen in the background uses source files in Azure Data Lake Storage and the screen in the foreground uses source files in Azure Blob Storage connected to Azure Data Lake. The performance is comparable, so I decided to leave my 1 TB TPC-DS data set in Azure Blob Storage, but if you want to ensure absolute best performance or would like to consolidate your data in one storage location, consider moving all your raw data files into ADLS. It’s straightforward to copy data from Azure Blob Storage to ADLS by using the AdlCopy tool, for example.

With the raw source data in a Data Lake-accessible location, the next step is to define the U-SQL scripts to extract the relevant information and write it along with column names to a series of output files. The following listing shows a general U-SQL pattern that can be used for processing the raw TPC-DS data and putting it into comma-separated values (csv) files with a header row.

@raw_parsed = EXTRACT child_id int,
                      <column list>,
                      empty string
FROM "<source folder>/{*}_{child_id}_100.dat"
USING Extractors.Text(delimiter: '|');

@filtered_results = SELECT <column list>
FROM @raw_parsed
<optional WHERE clause>;

OUTPUT @filtered_results
TO "<output folder>/<table name>.csv"
USING Outputters.Csv(outputHeader:true);

The next listing shows a concrete example based on the small income_band table. Note how the query extracts a portion of the file name into a virtual child_id column in addition to the actual columns from the source files. This child_id column comes in handy later when generating multiple output csv files for the large TPC-DS tables. Also, the WHERE clause is not strictly needed in this example because the income_band table only has 20 rows, but it’s included to illustrate how to restrict the amount of data per table to a maximum of 100 rows to create a small modelling data set.

@raw_parsed = EXTRACT child_id int,
                      b_income_band_sk string,
                      b_lower_bound string,
                      b_upper_bound string,
                      empty string
FROM "wasb://income-band@aasuseast2/{*}_{child_id}_100.dat"
USING Extractors.Text(delimiter: '|');

@filtered_results = SELECT b_income_band_sk,
                           b_lower_bound,
                           b_upper_bound
FROM @raw_parsed
WHERE child_id == 1
ORDER BY child_id ASC
FETCH 100 ROWS;

OUTPUT @filtered_results
TO "/modelling/income_band.csv"
USING Outputters.Csv(outputHeader:true);

You can find complete sets of U-SQL scripts to generate output files for different scenarios (modelling, single csv file per table, multiple csv files for large tables, and large tables filtered by last available year) at the GitHub repository for Analysis Services.

For instance, for generating the modelling data set, there are 25 U-SQL scripts to generate a separate csv file for each TPC-DS table. You can run each U-SQL script manually in the Microsoft Azure portal, yet it is more convenient to use a small Microsoft PowerShell script for this purpose. Of course, you can also use Azure Data Factory, which among other things enables you to run U-SQL scripts on a scheduled basis. For this article, however, the following Microsoft PowerShell script suffices.

$script_folder = "<Path to U-SQL Scripts>"
$adla_account = "<ADLA Account Name>"
Login-AzureRmAccount -SubscriptionName "<Windows Azure Subscription Name>"

Get-ChildItem $script_folder -Filter *.usql |
Foreach-Object {
    $job = Submit-AdlJob -Name $_.Name -AccountName $adla_account -ScriptPath $_.FullName -DegreeOfParallelism 100
    Wait-AdlJob -Account $adla_account -JobId $job.JobId
}

Write-Host "Finished processing U-SQL jobs!"

It does not take long for Azure Data Lake to process the requests. You can use the Data Explorer feature in the Azure Portal to double-check that the desired csv files have been generated successfully, as the following screenshot illustrates.

[Screenshot: output csv files for modelling shown in Data Explorer]

With the modelling data set in place, you can finally switch over to SSDT and create a new Analysis Services Tabular model at the 1400 compatibility level. Make sure you have the latest version of the Microsoft Analysis Services Projects package installed so that you can pick Azure Data Lake Store from the list of available connectors. You will be prompted for the Azure Data Lake Store URL and you must sign in using an organizational account. Currently, the Azure Data Lake Store connector only supports interactive logons, which is an issue for processing the model in an automated way in Azure Analysis Services, as discussed later in this article. For now, let’s focus on the modelling aspects.

The Azure Data Lake Store connector does not automatically establish an association between the folders or files in the store and the tables in the Tabular model. In other words, you must create each table individually and select the corresponding csv file in Query Editor. This is a minor inconvenience. It also implies that each table expression specifies the folder path to the desired csv file individually. If you are using a small data set from a modelling folder to create the Tabular model, you would need to modify every table expression during production deployment to point to the desired production data set in another folder. Fortunately, there is a way to centralize the folder navigation by using a shared expression so that only a single expression requires an update on production deployment. The following diagram depicts this design.

[Diagram: folder navigation by using a shared expression]

To implement this design in a Tabular model, use the following steps:

  1. Start Visual Studio and check under Tools -> Extensions and Updates that you have the latest version of Microsoft Analysis Services Projects installed.
  2. Create a new Tabular project at the 1400 compatibility level.
  3. Open the Model menu and click on Import From Data Source.
  4. Pick the Azure Data Lake Store connector, provide the storage account URL, and sign in by using an Organizational Account. Click Connect and then OK to create the data source object in the Tabular model.
  5. Because you chose Import From Data Source, SSDT displays Query Editor automatically. In the Content column, click on the Table link next to the desired folder name (such as modelling) to navigate to the desired root folder where the csv files reside.
  6. Right-click the Table object in the right Queries pane, and click Create Function. In the No Parameters Found dialog box, click Create.
  7. In the Create Function dialog box, type GetCsvFileList, and then click OK.
  8. Make sure the GetCsvFileList function is selected, and then on the View menu, click Advanced Editor.
  9. In the Edit Function dialog box informing you that updates from the Table object will no longer propagate to the GetCsvFileList function if you continue, click OK.
  10. In Advanced Editor, note how the GetCsvFileList function navigates to the modelling folder, enter a whitespace character at the end of the last line to modify the expression, and then click Done.
  11. In the right Queries pane, select the Table object, and then in the left Applied Steps pane, delete the Navigation step, so that Source is the only remaining step.
  12. Make sure the Formula Bar is displayed (View menu -> Formula Bar), and then redefine the Source step as = GetCsvFileList() and press Enter. Verify that the list of csv files is displayed in Query Editor, as in the following screenshot.
    [Screenshot: invoking GetCsvFileList in Query Editor]
  13. For each table you want to import:
    1. Right-click the existing Table object and click Duplicate.
    2. In the Content column, click on the Binary link next to the desired file name (such as call_center) and verify that Query Editor parses the columns and detects the data types correctly.
    3. Rename the table according to the csv file you selected (such as call_center).
    4. Right-click the renamed table object (such as call_center) in the Queries pane and click Create New Table.
    5. Verify that the renamed table object (such as call_center) is no longer displayed in italic, which indicates that the query will now be imported as a table into the Tabular model.
  14. After you created all desired tables by using the sequence above, delete the original Table object by right-clicking on it and selecting Delete.
  15. In Query Editor, click Import to add the GetCsvFileList expression and the tables to your Tabular model.

During the import, SSDT Tabular pulls in the small modelling data set. Prior to production deployment, it is now a simple matter of updating the shared expression by right-clicking the Expressions node in Tabular Model Explorer, selecting Edit Expressions, and then changing the folder name in Advanced Editor. The screenshot below highlights the folder name in the GetCsvFileList expression. If each table can find its corresponding csv file in the new folder location, deployment and processing can succeed.

[Screenshot: changing the csv folder in the GetCsvFileList expression]

Another option is to deploy the model with the Do Not Process deployment option and use a small TOM application in Azure Functions to process the model on a scheduled basis. Of course, you can also use SSMS to connect to your Azure Analysis Services server and send a processing command, but it might be inconvenient to keep SSDT or SSMS connected for the duration of the processing cycle. Processing against the full 1 TB data set with a single csv file per table took about 15 hours to complete. Processing with four csv files/partitions for the seven large tables and maxActiveConnections on the data source set to 46 concurrent connections took roughly 6 hours. This is remarkably faster than using general Blob storage, as in the Building an Azure Analysis Services Model on Top of Azure Blob Storage article, and suggests that there is potential for performance improvements in the Azure Blob storage connector.

[Screenshot: processing the model]

The processing performance against Azure Data Lake could possibly be increased even further, as the processor utilization on an S9 Azure Analysis Services server suggests (see the following screenshot). For the first 30 minutes, processor utilization is close to the maximum and then it decreases as the AS engine finishes more and more partitions and tables. Perhaps with an even higher degree of parallelism, such as with eight or twelve partitions for each large table, Azure AS could keep processor utilization near the maximum for longer and finish the processing work sooner. But processing optimizations through elaborate table partitioning schemes are beyond the scope of this article. The processing performance achieved with four partitions on each large table suffices to conclude that Azure Data Lake is a very suitable big-data backend for Azure Analysis Services.

[Screenshot: QPU utilization during processing]

There is currently only one important caveat: The Azure Data Lake Store connector only supports interactive logons. When you define the Azure Data Lake Store data source, SSDT prompts you to log on to Azure Data Lake. The connector performs the logon and then stores the obtained authentication token in the model. However, this token only has a limited lifetime. Chances are fair that processing succeeds after the initial deployment, but when you come back the next day and want to process again, you get an error that “The credentials provided for the DataLake source are invalid.” See the screenshot below. To fix it, either deploy the model again from SSDT, or right-click the data source in SSMS and select Refresh Credentials to log on to Data Lake again and submit fresh tokens to the model.

[Screenshot: Refresh Credentials in SSMS]

A subsequent article is going to cover how to handle authentication tokens programmatically, so stay tuned for more on connecting to Azure Data Lake and other big data sources on the Analysis Services team blog. And as always, please deploy the latest monthly release of SSDT Tabular and send us your feedback and suggestions by using SSASPrev at Microsoft.com or any other available communication channels such as UserVoice or MSDN forums.


Analysis Services Team Blog

Stay in Touch using Dynamics 365

Are you marketing to all of your Dynamics Accounts, Contacts and Leads? If you are marketing with Dynamics 365, you know it’s very easy to identify which of your records are part of an active marketing campaign. But what about the new people that are added? How about the people that never responded to an old campaign, and are collecting dust? There has to be a better way to stay in touch. Check out this app for Dynamics 365 called Stay In Touch 365.

With a single click, Stay In Touch 365 will search for any active Accounts, Contacts and Leads who are NOT on an active marketing list. Three new marketing lists are created, giving you an easy way to reengage.

Stay In Touch 365

1. Visit Microsoft AppSource and download Stay In Touch 365 by clicking here.

2. Open the Stay In Touch 365 Managed Solution and click on Configuration.


3. Click on the Yellow Star that reads “Click Here To Begin.”


4. You will be redirected to the Parent Business Unit. Click the box labeled “Click Me to Generate New Marketing Lists!”


5. Your new marketing lists are now generated. Click on one of the links to open a list.


Beringer Technology Group is a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics 365 and CRM for Distribution. We also provide expert Managed IT Services, Backup and Disaster Recovery, Cloud Based Computing, and Unified Communication Systems.


CRM Software Blog | Dynamics 365

Updating a #PowerBI dataset using Microsoft #Flow, #Azure Functions and a #Flic button

August 16, 2017 / Erik Svensen


I am working on a project where a customer would like to update a Power BI dataset when specific events occur – in their case, when an email with a specific subject arrives in a specific mailbox – and we succeeded by using Microsoft Flow, Azure Functions, and the Power BI REST API.

But as I recently bought Flic buttons (https://flic.io/), I wanted to change the setup a bit so the dataset is updated when I click the button – sort of my first IoT project.


The ingredients

  • 1 dataset in your PowerBI tenant that can be scheduled for refresh
  • 1 Power BI Pro license (or premium if you want more than 8 refreshes a day)
  • 1 Registration of an Application for Power BI ( link – https://dev.powerbi.com/apps )
  • 1 Azure subscription
  • 1 PowerShell script
  • 1 Azure Function/Logic App
  • 1 Microsoft Flow license in any tenant
  • 1 Flic button
  • 10-15 minutes

First up – the information about the dataset and the group

We need the dataset ID and group ID where the dataset is placed.

The easiest way to find them is to navigate to Settings, open the Datasets tab, and click the dataset – the group ID is in the blue area and the dataset ID is in the red box.

[Screenshot: dataset settings, with the group ID highlighted in blue and the dataset ID in red]

Notice that the Scheduled refresh is set to Off.
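For reference, these two IDs feed directly into the Power BI REST API routes that the refresh script calls. A small Python sketch of the route construction (the API root and paths are the documented v1.0 endpoints; the helper names are my own):

```python
# Build the documented Power BI REST API v1.0 routes from the two GUIDs
# copied out of the dataset settings page.
API_ROOT = "https://api.powerbi.com/v1.0/myorg"

def datasets_endpoint(group_id: str) -> str:
    """GET this URL to list all datasets in a group (workspace)."""
    return f"{API_ROOT}/groups/{group_id}/datasets"

def refresh_endpoint(group_id: str, dataset_id: str) -> str:
    """POST to this URL (with a bearer token) to trigger a dataset refresh."""
    return f"{API_ROOT}/groups/{group_id}/datasets/{dataset_id}/refreshes"
```

Listing the datasets via `datasets_endpoint` is an alternative way to discover the dataset ID without reading it out of the portal.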

Get a client ID for your Power BI Application

You will need to register an application for the update, and you can do this via https://dev.powerbi.com/apps.

Sign in and fill out the information.

The App Type has to be Native app, and the Redirect URL must be urn:ietf:wg:oauth:2.0:oob.


Select both of the Dataset APIs and, in step 4, click to register the app and save the Client ID.

PowerShell script to update the dataset

You can find the PowerShell script here – https://github.com/Azure-Samples/powerbi-powershell/blob/master/manageRefresh.ps1

Fill in the $clientId, $groupID and $datasetID in the script.

In order to avoid popup windows with authentication in the Azure Function I had to modify the script and hardcode the username and password in my script.

$userName = "username"

$password = "password"

$creds = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential" -ArgumentList $userName, $password

$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList $authority

$authResult = $authContext.AcquireToken($resourceAppIdURI, $clientId, $creds)

The bold lines are added, and the $authResult line is modified as well.
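For illustration, here is a rough Python sketch of the same password-based token request that the modified script performs (the authority and resource URI are the standard Azure AD v1 values for the Power BI service; as above, hardcoding credentials like this is only acceptable for a demo):

```python
import urllib.parse

# Standard Azure AD v1 token endpoint (common tenant) and the Power BI
# service resource URI the token is requested for.
AUTHORITY = "https://login.microsoftonline.com/common/oauth2/token"
RESOURCE = "https://analysis.windows.net/powerbi/api"

def token_request_body(client_id: str, username: str, password: str) -> str:
    """URL-encoded form body for a resource-owner-password token request."""
    return urllib.parse.urlencode({
        "grant_type": "password",
        "resource": RESOURCE,
        "client_id": client_id,
        "username": username,
        "password": password,
    })

# This body would be POSTed to AUTHORITY; the JSON response carries an
# access_token, which goes into the "Authorization: Bearer ..." header
# of the subsequent refresh call.
```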

Azure Function

Log on to the Azure portal – https://portal.azure.com/ and create a Function App


Wait for the deployment to finish – it takes about 1-2 minutes.

Choose to create your own custom function


Select PowerShell as the language


And the HttpTrigger

In my example, I will keep the authorization level at anonymous – but you can add more security if needed.


Now replace the script with the PowerShell script from the previous section.


Now you can save and click run to see if the script works


If the script succeeds – you can check whether the dataset is updated via the Power BI dataset refresh history


By clicking Get Function URL


We will get the POST URL that we will use later in our Flow.
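The Flow HTTP action we will configure later does nothing more than an empty POST to this URL. A minimal Python equivalent, assuming a hypothetical function URL with its code= key (helper names are my own):

```python
from urllib.parse import urlparse, parse_qs
from urllib.request import Request, urlopen

def function_key(function_url: str) -> str:
    """Extract the code= authorization key embedded in an Azure Function URL."""
    query = parse_qs(urlparse(function_url).query)
    return query.get("code", [""])[0]

def trigger(function_url: str) -> int:
    """POST an empty body to the function URL, returning the HTTP status."""
    request = Request(function_url, data=b"", method="POST")
    with urlopen(request) as response:
        return response.status
```

Anything that can issue this POST – Flow, a scheduler, or a script – can trigger the refresh.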


Now – Microsoft Flow

Open your Microsoft Flow in your Office 365 portal and create a new blank flow


Now we can select between a lot of different triggers that can activate our data refresh in Power BI. It could be a certain tweet, a mail arriving in an inbox or in OneDrive, and many more – we currently have 160 triggers to choose from.

In my example, I want to integrate with a flic button


And by connecting to my Flic account, I can choose the button I want to use and which event on the button (it has click, double click, and hold) the flow should be attached to.


To activate the refresh, I add a new step; as the action, I choose HTTP, where I can POST to the URL of the function.


After the HTTP action, I add a new step that sends an email to me – with information about the refresh, which button was pressed, and where it was located when pressed.

Then save the flow and we should be ready to update the dataset via the button

Trying the flow

So by clicking the button we can see the Run History


As you can see, the flow ran at 23:25.


And when checking the refresh history in Power BI, we can see it’s refreshed –


And I get an e-mail as well


OBS – the refresh can “only” be done 8 times a day with a Power BI Pro license, but 48 times if you are lucky enough to have access to a Premium edition.

Wrapping up

It has been great fun to create this, and it was surprisingly easy – with, of course, some challenges along the way.

But the toolset is amazing, and combining Flow and Power BI opens up a lot of possibilities and ideas for activating refreshes and doing stuff with the Power BI REST API, triggered by different events.

I will at least have an extra look at the API and see what can be done – link to documentation

Let me know if you find this interesting.


Dynamic data comparisons using disconnected slicers, TreatAs and inactive relationships

Got a good question this week that had me scratching my head for a bit, but then I remembered a new function that was added to Power BI (and SSAS) recently, called TREATAS. Marco covered it in detail here. What they wanted to do is have a visual where they can view sales and compare them with the sales of different colors. So let’s get to it.

To start out, please make sure you read Marco’s post really well. The trick I am about to show you works really well, but if you can use the alternative (real relationships), it is preferred for performance reasons. Having said that, let’s continue.

I have a very simple model with Sales by product:


Now I visualize it by creating a visual that shows Sales by Manufacturer:


The goal here is to show 4 bars:

  • one with the total sales
  • one with sales only for specific colors selected by a slicer
  • one with sales only for specific colors selected by another slicer
  • one with the remaining sales not part of the selection

To start, I want to populate and create the slicers. I can’t use the values of the table itself, as that would filter all the results, so I have to create two new tables with these values to make “disconnected” slicers. To do this, I create two new calculated tables:

Colors = VALUES(DimProduct[ColorName])


MoreColors = VALUES(Colors[ColorName])

This created two new tables in my model with just the colors:


Observe that I didn’t create any relationships, as I want to control this in the measure itself (more on this later).

Next I add the values as slicers to the report:


So now I want to add the bars for sales of all colors selected in Color 1, and another for all colors in Color 2. To do this, I add a new measure using the new TREATAS function:

ColorsSelected = CALCULATE(SUM(FactOnlineSales[SalesAmount]),TREATAS(VALUES(Colors[ColorName]),DimProduct[ColorName]))

What this measure does is calculate the sum of SalesAmount, filtering the ColorName column of the DimProduct table with the currently selected values of ColorName from the Colors table, as if an actual relationship was used.
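If it helps to see the mechanics outside of DAX, here is a toy Python analogue of that “filter as if a relationship existed” idea (the data and names are made up for illustration):

```python
# Toy illustration of the TREATAS idea: treat the values selected in a
# disconnected "slicer table" as a filter on the fact table, even though
# no relationship connects them.
sales = [
    {"color": "Red", "amount": 100},
    {"color": "Blue", "amount": 250},
    {"color": "Green", "amount": 75},
]

def colors_selected(selected_colors):
    """Sum SalesAmount for rows whose color is in the slicer selection."""
    return sum(row["amount"] for row in sales if row["color"] in selected_colors)
```

Selecting two colors in the slicer corresponds to passing a two-element set, and only those rows contribute to the total.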

Now going back to the visual where I added the measure and selected 2 colors, we only see the sales for those 2 colors selected:


Now adding the same measure for Color 2


MoreColorsSelected = CALCULATE(SUM(FactOnlineSales[SalesAmount]),TREATAS(VALUES(MoreColors[ColorName]),DimProduct[ColorName]))

As the last measure, I am adding the remaining sales:

RemainingColors = SUM(FactOnlineSales[SalesAmount])-[ColorsSelected]-[MoreColorsSelected]

Now that I have that I can also add this using stacked charts to create a single bar that is split up dynamically:


Now this works great, but I hope you read Marco’s blog post and noted his warning: whenever you can, you should always use relationships. While I was working on this blog post, it dawned on me that we can just as well use inactive relationships here. So I went to the diagram view and created them:


Then, instead of using TREATAS, I use the traditional USERELATIONSHIP:

MoreColorsSelected = //CALCULATE(SUM(FactOnlineSales[SalesAmount]),TREATAS(VALUES(MoreColors[ColorName]),DimProduct[ColorName]))
CALCULATE(SUM(FactOnlineSales[SalesAmount]),USERELATIONSHIP(MoreColors[ColorName],DimProduct[ColorName]))

This will, only for this measure, activate the relationship and filter the product table with the selected colors. This gives the same results but with better performance. With a small dataset like this you will never notice any issues, but if you use this with billions of rows and complex calculations, any performance gain will help. It still shows that you can use TREATAS in other, more interesting scenarios. For example, let’s say we want to see the sum of sales amount for both selections at the same time. I could write something like this:

MoreColorsSelected2 = var Selections = UNION(VALUES(Colors[ColorName]),VALUES(MoreColors[ColorName]))
return CALCULATE(SUM(FactOnlineSales[SalesAmount]),TREATAS(Selections ,DimProduct[ColorName]))

This will use the UNION of these two sets of values as the filter for the ColorName column. We can extend this to do all kinds of cool things here, but I will leave that up to your imagination.
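The same idea in toy Python terms: combine the two slicer selections into one set before filtering, which is exactly what UNION over the two VALUES lists does (data and names made up for illustration):

```python
# Toy analogue of UNION(VALUES(...), VALUES(...)) + TREATAS: merge the two
# slicer selections, then filter the fact table by the combined set.
sales = [
    {"color": "Red", "amount": 100},
    {"color": "Blue", "amount": 250},
    {"color": "Green", "amount": 75},
]

def both_selections(selection_a, selection_b):
    """Sum SalesAmount for colors in either slicer selection."""
    combined = set(selection_a) | set(selection_b)  # duplicates collapse
    return sum(row["amount"] for row in sales if row["color"] in combined)
```

Note that a color picked in both slicers counts only once, just as the combined filter in DAX contains each value once.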

You can download the entire file here: https://github.com/Kjonge/DemoWorkbooks/blob/master/selection.pbix


Kasper On BI

Here’s how Microsoft executives are using Power BI

The Customer Data and Analytics (CDnA) team at Microsoft enables strategic data insights that shape the entire organization, from high-level leadership policies to small product decisions. At its core, CDnA creates and monitors indicators, known as “Power Metrics,” for some of the key divisions and businesses in Microsoft. CDnA delivers Power BI dashboards to several teams, including the Windows and Devices Group (WDG), Office, Bing, Cortana and Microsoft CEO Satya Nadella’s Senior Leadership Team (SLT). The SLT leverages the Power BI dashboards to monitor progress on strategic initiatives at the company.

Both CDnA and the Power BI product have mutually benefited from a close relationship since the Power BI public preview in 2015. In fact, the early and ongoing input from this internal Microsoft team and its users has helped make Power BI the “enterprise ready”, robust, and feature-rich platform it is today.

The joint history of Power BI and CDnA

In 2013, the CDnA team started using an on-premises Microsoft reporting solution, leveraging PowerPivot integrated with SharePoint, Power View, and Excel Services. PowerPivot, for data modeling, and Power View, for data visualization, became the foundation used by Power BI. Greg Koehler, Senior Program Manager Lead for CDnA, was the first person to set up that early portal, and he began hearing about Power BI as the next big tool in late 2014.

At the time, Power BI was only in preview and still being developed to accommodate enterprise-level requirements. Koehler and colleagues teamed up with Power BI Product Managers in a strategic partnership to work on prioritizing new features and migrating key reports and dashboards from Power View to Power BI. This partnership resulted in the development of several enterprise features for large teams, such as support for sharing dashboards and reports to security groups, improved interactivity for visuals and slicers, performance optimizations, and user experience improvements for Power BI Mobile apps.

The partnership between the two teams continues today, and has helped encourage Power BI developers to create a truly enterprise-level service. CDnA has grown its BI portal to include additional teams, each of which maintains dashboards for audiences that vary from a dozen extremely busy executives to tens of thousands of curious monthly active users.

[Dashboard screenshot. Measures and values in this example are purely sample data and for illustrative purposes only.]

Combining data from all sources

With a company as large and complex as Microsoft, simply identifying key metrics can be a challenge, let alone wrangling the data from hundreds of sources. Power BI helps by uniting this data into a single report interaction model for end users, whether the data is from a cloud service, streaming data, or on-premises location. For CDnA, a seamless migration from Power View to Power BI was possible because both supported their on-premises SQL Server Analysis Services (SSAS) tabular data. CDnA users interact with hundreds of different data sources via the Power BI cloud, with all the CDnA on-premises data hooked up to a single Power BI Enterprise Gateway (with a few extra gateways for load balancing and redundancy).

CDnA focuses on two types of data: usage and customer satisfaction. Customer satisfaction at Microsoft is often measured by Net Promoter Score (NPS), which looks at the proportion of customers that are most enthusiastic about a given product or service. A sample of users of all Microsoft products, from Cortana to Edge to Windows Calendar to, yes, Power BI, are asked for feedback, providing both numerical and anecdotal ratings. All new consumer desktop users of Windows 10 are sent similar ratings requests two weeks after installing the operating system. The data is stored in Microsoft’s internal data storage system, built on Azure Data Lake, and then aggregated into a more usable dataset that is ultimately processed into an Analysis Services tabular model.

While it might sound complicated, and it would be to most people, Senior Data Scientist Sanjay Ramanujan calls the whole process very simple to use. “We scale down the data by taking the highest top box (survey choice) of 5 and the overall NPS score.” Dashboards are updated daily and display numerical KPIs as well as feedback “verbatims”, or open-form written responses. NPS score breakdowns are also provided by geographic area and product. Sanjay’s CDnA NPS team has numerous dashboards that provide a very good indication of what is, and is not, being well received by customers.
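CDnA works from a 5-point top box, but the conventional NPS arithmetic underneath such dashboards is simple: the percentage of promoters minus the percentage of detractors. On the standard 0–10 survey scale, promoters rate 9–10 and detractors 0–6; a quick sketch:

```python
def nps(ratings):
    """Standard Net Promoter Score on a 0-10 scale:
    % promoters (9-10) minus % detractors (0-6)."""
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)
```

A high score therefore requires not just many enthusiastic customers but few unhappy ones, which is why the dashboards pair the number with the written “verbatims”.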

Market share data is displayed in a dashboard known as OneShare, and supported by CDnA team members such as Karl Kreis, Senior Product Intelligence Manager. OneShare stemmed from the fact that Microsoft obtains many variations of market research data, from sources such as retail sales, competitive research, and distributors.

In the past, much of this data was maintained by separate teams in separate reports, each assigning their own share value – which led to some apparent contradictions. Now, one Power BI dashboard sits on top of all the data, creating a single source of truth and driving greater alignment across every team.

“By putting this dashboard in Power BI,” Kreis says, “we can change the mindset of how meetings are run with business users. Instead of using a static screenshot, they can just open up the dashboard in a meeting, [including from their phone], use some slicers, and look directly at their data.”

[Dashboard screenshot. Measures and values in this example are purely sample data and for illustrative purposes only.]

Guiding executive decisions through data insights

Dashboards designed for executives may not have a huge audience, but their insights are extremely valuable and shape the future of Microsoft. The Windows and Devices Group (WDG) analyst team in CDnA, for example, supports 50 dashboards with thousands of users, but also one dashboard specifically for executives.

The analyst team, including Shweta Mittal, Director Technical Data Analyst, syncs closely with the Power BI development team for their needs. They recently started using a live report tile on their dashboards, making them more interactive and by extension more productive. Shweta says, “[Our executive dashboard] focuses on live discussion of analysis and insights. You don’t have to get back to somebody – the executive can answer a question, then and there in the meeting, and then move on to having a productive discussion.”

Kanchan Sachdeva is a Technical Data Analyst on the CDnA team who focuses on a very specific audience: the Senior Leadership Team, or SLT. This group is limited to about 30 of the most influential executives in Microsoft, and they base their decisions in part on a Power BI dashboard that is refreshed every day. The SLT dashboard focuses on 20-30 key metrics that drive Microsoft’s business, including breakdowns by region. This reporting solution is developed on top of a 2016 SSAS tabular model, and is redesigned every fiscal year to align with the most current company goals. The CDnA team keeps this dashboard very simple, without many details, and limits the use of “fancy” colors. For the upcoming fiscal year, they are using a Power BI custom visual, called Power KPI, to combine multiple visualization types into a single report tile.

Kanchan notes, “Executive expectations for dashboards are very different, compared to the average business user. They don’t have a lot of time, so they need most of their information in a single glance.”

The CDnA team’s collective work made up 25% of all Power BI usage within Microsoft in early 2017. John Kahan, General Manager of the CDnA, praises the amazing relationship they’ve formed over time with the Power BI development team. “We meet with the Power BI team weekly to share our most recent requirements, and they’re super responsive to our needs as customers. I haven’t seen a development team operate like that before!” Kahan praises Power BI’s visualization layer as the key for organizations that want to quickly understand trends and answer questions on the spot.

James Phillips, CVP of the Microsoft Business Applications Group and the executive in charge of the Power BI product, agrees that the close relationship between the two teams benefitted not only Microsoft but the whole Power BI community. “The CDnA team led the charge on the use of Power BI for data-driven decision making, within Microsoft. Their use of Power BI as an early adopter in large scale, highly visible, mission critical scenarios helped make Power BI an enterprise grade solution for all our customers.”

Power BI is the tool of choice for Microsoft’s usage data, satisfaction metrics, and executive KPIs. If Power BI can help run one of the world’s largest and fastest moving companies, imagine what it can do for your business!



Microsoft Power BI Blog | Microsoft Power BI

What Are Containers and Why You Should Consider Using Them


In the world of technology and integration, a new deployment option is taking the industry by storm: containers. To think about how containers work for software, we have to think about physical shipping containers first. Containers are crucial to the shipping industry because their standard sizes keep the industry running efficiently. Before the use of shipping containers, shipping anything worldwide was laborious; standard sizes have made the process a lot easier.

The same idea is true with software containers. A container is an OS-level virtualization method for deploying and running applications without launching an entire VM for each application. Another way to think about containers is as isolated systems that run on a single control host. Code and its dependencies are packed into a container and can run anywhere. Because they are small, you can pack a lot of containers onto a single computer.

According to Gartner, by 2018, more than 50 percent of new workloads will be deployed using containers in at least one stage of the application lifecycle. Today, only about 22 percent of workloads use containers while 31 percent are evaluating switching to containers. Of the 31 percent, 64 percent are hoping to implement them within the next year.

Pros and cons of containers

Containers have four major advantages:

  • Faster deployment
  • Less overhead
  • Easier migration
  • Faster restart

These advantageous attributes are what made cloud applications possible. Containers also enable the development and editing of microservices: because containers work independently of each other, a service can be changed without making major changes to how the applications interact. This allows for faster and less error-prone software development.

One of the major disadvantages of containers is the lack of isolation from the host OS. Because of this, security threats have easier access to the entire system. Another major disadvantage is that each container must use the same OS as the base OS, whereas on a VM, unique OSs can be run.

The future of containers

It’s interesting to note that containers are being implemented by new companies with newer software, whereas older and bigger companies are still utilizing VMs. If a company did want to move away from using VMs, they could transfer those applications to a cloud-based service.

Containers can help your business quickly adapt to changes in the marketplace, both application-based and in your own infrastructure. Containers can help you to simplify integration of backend APIs and systems, embracing native features of your container management platforms to speed up your time to market.

With containers, you get to enjoy the flexibility to move between cloud environments without rewriting your applications. For more information on containers and how to utilize them for your business, check out BusinessWorks Container Edition.


The TIBCO Blog