Category Archives: Self-Service BI

Power BI Embedded capacity-based SKUs coming to Azure

Today, Microsoft announced new Power BI Embedded SKUs will be available in Azure beginning Oct. 2.

Power BI Embedded is intended to simplify how ISVs and developers use Power BI capabilities, helping them quickly add stunning visuals, reports and dashboards into their apps – in the same way apps built on Azure leverage services like Machine Learning and IoT. By enabling easy-to-navigate data exploration in their apps, ISVs allow their customers to make quick, informed decisions in context.

In May we announced the convergence of the Power BI and Power BI Embedded services. The convergence delivered one API surface, a consistent set of capabilities and access to the latest features across both services. Additionally, we introduced a capacity-based pricing model, simplifying how Power BI is consumed.

With Power BI Embedded, ISVs and developers will now also have added flexibility in how they embed intelligence in their apps using the Power BI APIs. ISVs and developers can take advantage of minimized development efforts to achieve faster time to market and differentiate themselves by infusing Microsoft’s world-class analytics engine in their app. Equally, developers can spend time focusing on their solution to meet customer demands, instead of developing visual analytics features. Additionally, Power BI Embedded enables you to work within the familiar development environments – Visual Studio and Azure – you already use.

Have an existing app with embedded Power BI content using Power BI Premium? If you are an ISV or developer delivering apps or an organization using them, no action is needed – you (and your customers) can continue using these apps without interruption. If you have an existing app built on Power BI Workspace Collections and are interested in taking advantage of the converged API surface and the new, capacity-based Azure SKUs, visit documentation for migration guidance.

New to embedding visuals, reports, and dashboards in your apps? ISVs and developers should look to Power BI Embedded – follow these steps to get started:

1. Set up the Power BI Embedded environment for development

Set up your environment for testing. Make sure you have an Azure Active Directory (Azure AD) tenant; you can use an existing tenant or create a new one. Then, create an Azure AD user and sign that user up for the Power BI service with a Power BI Pro license. Register your app in Azure AD and then create an App Workspace in Power BI so you can publish reports.
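The app registration gives you a client ID that your app uses to acquire Azure AD access tokens for the Power BI REST APIs. As a rough C# sketch (inside an async method), the "master user" token acquisition used by the app-owns-data samples looks roughly like the following; the authority, client ID, and credentials are placeholders, and ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) is just one common way to do it:

using Microsoft.IdentityModel.Clients.ActiveDirectory;

var authorityUrl = "https://login.microsoftonline.com/common/oauth2/authorize";
var resourceUrl  = "https://analysis.windows.net/powerbi/api";   // Power BI API resource
var clientId     = "<your Azure AD app client ID>";              // placeholder

var authContext = new AuthenticationContext(authorityUrl);

// The dedicated Power BI Pro "master user" that the app signs in with.
var credential  = new UserPasswordCredential("masteruser@contoso.com", "<password>");

AuthenticationResult authResult =
    await authContext.AcquireTokenAsync(resourceUrl, clientId, credential);

string accessToken = authResult.AccessToken;   // send as a Bearer token to the REST APIs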

Learn more

2. Embed content

Integrate your Power BI content into your application using the Power BI and JavaScript APIs. Authenticate with your Power BI account to embed your dashboards, reports and tiles. 

Get hands-on experience with the Power BI – Report Embed Sample

3. Publish your solution to production

Once you’re ready, register your application in your production environment.

Learn more

Power BI Embedded uses an Azure consumption-based, hourly pricing model, along with the ability to pause and resume the service and to scale up/down or out/in as necessary without commitments. Watch the Azure website in the coming days for more information on SKU details, pricing and availability.


Microsoft Power BI Blog | Microsoft Power BI

Embed your Power BI report with predefined filters

I have recently gotten this question a few times, so it's time to dust off my coding skills and get to work… The key question in both cases is: how can I make sure I filter my report before any queries are sent to the underlying data source?

When using Power BI Embedded you can use JavaScript to pass in a filter instead of using the Power BI UI. To test how this works I installed the sample app from GitHub: https://github.com/Microsoft/PowerBI-Developer-Samples and then uploaded a report to my Power BI tenant that uses an on-premises AS model. I use that so I can profile the queries easily and show you how it works. The same would apply to any data source.

Here is the embedded report that currently shows all colors:

The DAX query sent to my SSAS instance is the following:

EVALUATE
TOPN(
     1002,
     SUMMARIZECOLUMNS(
                      'DimProduct'[ColorName],
                       "Sum_of_SalesAmount", 'FactOnlineSales'[Sum of SalesAmount]
                    ),
    [Sum_of_SalesAmount],
    0,
    'DimProduct'[ColorName],
    1
)

ORDER BY
[Sum_of_SalesAmount] DESC, 'DimProduct'[ColorName]

Now let's filter it on load. Let's look at filter method 1 (the one you see most often): https://github.com/Microsoft/PowerBI-JavaScript/wiki/Filters.

This JavaScript approach uses the report's 'loaded' event to detect when the report has loaded and then pushes the filter into the report:

const filter = {
        $schema: "http://powerbi.com/product/schema#basic",
        target: {
            table: "DimProduct",
            column: "ColorName"
        },
        operator: "In",
        values: ["Silver"]
    };

var config = {
              type: 'report',
              tokenType: models.TokenType.Embed,
              accessToken: accessToken,
              embedUrl: embedUrl,
              id: embedReportId,
              permissions: models.Permissions.All,
              settings: {
                            filterPaneEnabled: true,
                            navContentPaneEnabled: true
              }
};

// Embed the report and display it within the div container.
var report = powerbi.embed(reportContainer, config);

report.on('loaded', event => {
        report.getFilters()
            .then(filters => {
                filters.push(filter);
                return report.setFilters(filters);
            });
});

When we run this, we do see the report getting filtered:

Then I see two queries being sent:

EVALUATE
TOPN(
     1002,
     SUMMARIZECOLUMNS(
                      'DimProduct'[ColorName],
                      "Sum_of_SalesAmount", 'FactOnlineSales'[Sum of SalesAmount]
                    ),
                    [Sum_of_SalesAmount],
                    0,
                    'DimProduct'[ColorName],
                    1
)

ORDER BY
[Sum_of_SalesAmount] DESC, 'DimProduct'[ColorName]

And

DEFINE VAR __DS0FilterTable =
FILTER(KEEPFILTERS(VALUES('DimProduct'[ColorName])), 'DimProduct'[ColorName] = "Silver")

EVALUATE
TOPN(
1002,
SUMMARIZECOLUMNS(
                 'DimProduct'[ColorName],
                  __DS0FilterTable,
                  "Sum_of_SalesAmount", 'FactOnlineSales'[Sum of SalesAmount]
               ),
               [Sum_of_SalesAmount],
               0,
               'DimProduct'[ColorName],
               1
)

ORDER BY
[Sum_of_SalesAmount] DESC, 'DimProduct'[ColorName]

That is not what we want; we want to send only the last one. But this is to be expected: if we look at the JavaScript again, we see that we only push the new filter AFTER the report's 'loaded' event has been triggered. This is great when you want to write code that pushes filters in dynamically from another app, but not if you want the page to load already filtered.

Now there is another option, one that allows you to set a particular configuration on report load (and not AFTER). So I change my embed config to include filters as described here: https://github.com/Microsoft/PowerBI-JavaScript/wiki/Embed-Configuration-Details

const filter = {
        $schema: "http://powerbi.com/product/schema#basic",
        target: {
            table: "DimProduct",
            column: "ColorName"
        },
        operator: "In",
        values: ["Silver"]
    };

var config = {
              type: 'report',
              tokenType: models.TokenType.Embed,
              accessToken: accessToken,
              embedUrl: embedUrl,
              id: embedReportId,
              permissions: models.Permissions.All,
              filters: [filter],
              settings: {
                            filterPaneEnabled: true,
                            navContentPaneEnabled: true
              }
};

// Embed the report and display it within the div container.
var report = powerbi.embed(reportContainer, config);

Observe that the filters parameter is an array, so you can pass in multiple filters if you want.

This now shows us the right report:


And sends only the one query:

DEFINE VAR __DS0FilterTable =
FILTER(KEEPFILTERS(VALUES('DimProduct'[ColorName])), 'DimProduct'[ColorName] = "Silver")

EVALUATE
TOPN(
     1002,
     SUMMARIZECOLUMNS(
                      'DimProduct'[ColorName],
                      __DS0FilterTable,
                      "Sum_of_SalesAmount", 'FactOnlineSales'[Sum of SalesAmount]
     ),
     [Sum_of_SalesAmount],
     0,
     'DimProduct'[ColorName],
1
)

ORDER BY
[Sum_of_SalesAmount] DESC, 'DimProduct'[ColorName]

That shows the two ways of setting filters in Power BI Embedded, one for each scenario. The one thing I want to mention here is that the filters are set in JavaScript on the client and thus CANNOT be used as a security feature, as anyone can change them.


Kasper On BI

Using CUSTOMDATA and SSAS with Power BI Embedded

Another Power BI Embedded blog post while I have a playground up and running. Often when you want to use SSAS with Power BI Embedded you run into an "authentication issue". Power BI and SSAS both leverage AD, which means that any user you pass from Power BI to SSAS needs to be known in AD. That is fine in enterprise scenarios, where this usually is the case (and is even required), but for ISVs and any custom application it is a bigger problem. When embedding Power BI in a custom app you generally want to use the custom app's authentication, not AD. Traditionally SSAS lets you solve this using CUSTOMDATA and ROLES on the connection string: https://docs.microsoft.com/en-us/sql/analysis-services/instances/connection-string-properties-analysis-services. With Power BI you cannot access the connection string, so it is a bit harder. But there is a way. Let's solve this issue with Power BI.

First, of course, I have an SSAS model; for convenience's sake I added a simple measure, CustomData = CUSTOMDATA(), that will allow us to see what has been passed in.

I start with a Power BI Desktop file that points to the SSAS model and create a very simple report. You can see the CustomData measure doesn't return anything, as it hasn't been set on the connection string:


Next I publish the report to Power BI and set up a data gateway. Here I am using an AD user that is an admin on SSAS as the username and password (usually a service account):


In the Power BI service the report looks identical:


Now, by default, Power BI sets the EffectiveUserName property on the connection string, as we can see in the SQL Server Profiler output below on my Discover events (pro tip: always have Profiler up and running when testing like this, it is a real treasure trove):

[Profiler Discover PropertyList for the SalesModel database, showing the application name PowerBI and EffectiveUserName set to DOMAIN\USERNAME]

Now here comes the interesting part. In the gateway configuration we can actually configure Power BI to send the CustomData property as well. This is hidden away in the 'Map user names' pane:


As soon as you check this you will see two things. First, the report now shows my email address as the result of the CUSTOMDATA function:


Second, in the SQL Server Profiler trace you will see CustomData showing up:

[Profiler Discover PropertyList now also containing CustomData set to username@domain.com alongside EffectiveUserName=DOMAIN\USERNAME]

Part 1 completed. Now let’s start the Power BI embedding part.

Again I used the PowerBIEmbedded_AppOwnsData sample to get started and up and running. By default, when you run it, it acts the same way as Power BI: the service account credentials are the ones passed to SSAS. Now, as explained in this Power BI help document, you can change the sample to pass in any username you want. Here is the key: because we configured the gateway to use CustomData instead of EffectiveUserName, it will set the CUSTOMDATA value to whatever you specify. So in my sample project (HomeController.cs) I replaced:

var generateTokenRequestParameters = new GenerateTokenRequest(accessLevel: "view");
var tokenResponse = await client.Reports.GenerateTokenInGroupAsync(GroupId, report.Id, generateTokenRequestParameters);

with

var generateTokenRequestParameters = new GenerateTokenRequest("View", null, identities: new List<EffectiveIdentity> { new EffectiveIdentity(username: "username", roles: new List<string> { "roleA" }, datasets: new List<string> { report.DatasetId.ToString() }) });
var tokenResponse = await client.Reports.GenerateTokenInGroupAsync(GroupId, report.Id, generateTokenRequestParameters);

Now things get interesting on the SSAS side. We see something we might not expect, since we are connecting to SSAS as an administrator:

Either the user, ‘DOMAIN\USERNAME’, does not have access to the 'Sales' database, or the database does not exist.

The reason we are getting this is that we also specified a role. Going back to the connection string documentation on MSDN, we can read the following about Roles:

Specify a comma-delimited list of predefined roles to connect to a server or database using permissions conveyed by that role. If this property is omitted, all roles are used, and the effective permissions are the combination of all roles. Setting the property to an empty value (for example, Roles=' ') the client connection has no role membership. An administrator using this property connects using the permissions conveyed by the role. Some commands might fail if the role does not provide sufficient permission.

So this means that on this connection the user is no longer an admin and needs a role to connect with. Let's add one: I go into SSMS and add a role named "roleA" (as defined in the code above) with Read rights:

Next, you have to make sure you add the service account as a member, and lastly you can add an RLS expression.


In my case I added something hard coded:

=DimProduct[ColorName] = IF(CUSTOMDATA() = "username", "Silver", BLANK())

but of course you can follow the complete RLS pattern here and just replace USERNAME() with CUSTOMDATA().

Running the report shows what we wanted to see, it only shows Silver colors and returns “username” out of the CUSTOMDATA function:


Looking at the profiler trace we can again see it passes in the right CUSTOMDATA field:

[Profiler Discover PropertyList for the Sales Model database, showing Roles set to roleA and CustomData set to username]

That’s it, folks! This opens up a lot of interesting scenarios for connecting to AS from Power BI. Unfortunately, this doesn’t work for Azure AS, as you cannot use the gateway to connect to it.


Kasper On BI

The Great Football Project Rides Again! (In Power BI)

Oh My Goodness Yes.  A Thousand Times Yes.
(Scroll Down for the Interactive Versions)

In a former life (circa 2006), I got to spend several hundred thousand dollars of Microsoft’s money on a football analytics project. It used SSIS, old-school SSAS, and Excel Services. We hired a consultant for two months to do the SSIS and SSAS work. I had to teach him about the game of football. It took forever to make small amounts of progress. We were using traditional methodologies. And we were getting traditional results.

As if that weren’t enough cost, we also paid six figures for access to the NFL statistics – the source data.  (Which was subsequently fed through SSIS and then SSAS by the consultant).

When we ran out of money, the consultant quit helping me and I was stuck with something that felt only half-baked.  Does this sound familiar?  It should if you’ve ever been part of a traditional BI project.

Then, in early 2010, I re-built that same project in Power Pivot. This time, *I* did all of the SSAS work, because Power Pivot is/was essentially “SSAS made accessible to the VLOOKUP and Pivot crowd.” The contrast offered by the new, agile DAX methodology (domain expert directly executing rather than “radioing” garbled instructions to someone else) knocked my socks off. That “lightning bolt moment” of observation – one in which I realized the world was about to change DRAMATICALLY – launched this, my second career, and also launched this entire company we call PowerPivotPro.

But I cheated a bit, back in 2010, by re-using the results of the SSIS work done by the consultant and by using the old data. My price tag of zero was incredible, and the results were better than anything I’d had using the expensive traditional methodologies, but it all still depended on six figures of spending done in the past.


“Thank You Sir, May I Have Another?”
(Yes. Yes in fact you may.)

Well, today we have Power Query, AND we have a more vibrant Internet providing us with much more freely-available information.  Put the two together, front them with some nifty Power BI visuals, and POW… a solution that’s more legit than anything I’ve ever done.  Up to date, current data, incredibly compelling and flexible visual canvas.

Check out the Power BI charts in action:

Power BI Football Passing Chart
Power BI Football Teaser

We’re not REMOTELY done with this, as we’re really just getting started.  Stay tuned for LOTS of improvements, as well as “how to” posts about how we did some of this.

Also, our stats are just a tad off for the moment – we’re not yet handling negative and zero-gain plays properly (back to the M lab!), but all of that will be getting sorted out quickly.

And oh yeah, we’re gonna load historical stats too.

Stay tuned.


PowerPivotPro

Power BI Developer community September update

This blog post covers the latest updates for the Power BI Developer community. Don’t forget to check out the August blog post, if you haven’t done so already.

Here’s the complete list of September updates for the Power BI Embedded APIs:

  • Clone tile & dashboard
  • RLS for AS on-prem
  • RLS additional properties to dataset
  • Export/Clone PBIX (model and reports)
  • Language configuration

Clone Dashboard and Dashboard Tiles

For ISVs, we recommend supporting multiple workspaces in your application’s embedded analytics topology. You can accomplish this by creating a main workspace which contains the ‘golden’ reports and dashboards for your application. When onboarding a new customer to your application, you then create a new workspace dedicated to that customer and use the clone APIs to copy the content from the main workspace to the customer’s workspace. To do that, the ISV needs automation capabilities for cloning Power BI reporting artifacts. We previously released support for the ‘Clone report’ operation; now we are adding support for cloning dashboards and dashboard tiles.

Dashboard cloning is based on two steps:

1. Create a new Dashboard – this will be the target dashboard.

2. Clone the Dashboard Tiles from the original dashboard to the target dashboard.

Dashboard tiles have multiple uses: some tiles are bound to reports, some only to datasets (a streaming data tile, for example), and some are not bound at all (an image, video, or web-content tile). It’s important to note that when cloning a dashboard tile between dashboards in the same workspace, the tile will by default remain bound to its source report or dataset unless a new target source is defined. However, when cloning dashboard tiles between workspaces, you must first make sure the target workspace already contains the target objects to bind to (report or dataset).

Using this method for cloning dashboards gives full control and granularity for both full dashboard cloning and specific dashboard tile cloning.
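To make the two steps concrete, here is a rough C# sketch against the REST endpoints (inside an async method). The workspace, dashboard, report, dataset, and tile IDs are placeholders, and in practice you would read them from the GET dashboards/tiles APIs rather than hard-coding them:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access token>");

// Step 1: create the target dashboard in the customer's workspace.
var createBody = new StringContent("{\"name\": \"Customer dashboard\"}",
    Encoding.UTF8, "application/json");
var createResponse = await client.PostAsync(
    "https://api.powerbi.com/v1.0/myorg/groups/<target workspace id>/dashboards",
    createBody);
// The response JSON contains the id of the new dashboard; parse it with your JSON library of choice.

// Step 2: clone a tile from the 'golden' dashboard into the target dashboard.
// When cloning across workspaces, the target report/dataset must already exist there.
var cloneBody = new StringContent(
    "{\"targetDashboardId\": \"<target dashboard id>\", " +
    "\"targetWorkspaceId\": \"<target workspace id>\", " +
    "\"targetReportId\": \"<target report id>\", " +
    "\"targetModelId\": \"<target dataset id>\"}",
    Encoding.UTF8, "application/json");
await client.PostAsync(
    "https://api.powerbi.com/v1.0/myorg/groups/<main workspace id>/dashboards/<source dashboard id>/tiles/<tile id>/Clone",
    cloneBody);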

For more information about the APIs, see Add dashboard and Clone tile.

RLS improvements

In August, we released support for RLS. Now we are releasing additional improvements to extend RLS capabilities and data source support.

Support for Analysis Services live connections

RLS can now be used with an AS on-premises data source. The implementation and usage are mostly like cloud-based RLS, with one important note – the ‘master user’ used to authenticate your application and call the APIs must also be an admin of the On-premises data gateway being used for the Analysis Services data source. The reason is that setting the effective identity is allowed only for users who can see all of the data; for AS on-premises, that means the user must be a gateway admin. For more information see Working with Analysis Services live connections.

Additional properties to dataset

As you can see in the RLS documentation, the GenerateToken API should receive additional context – username, roles, and datasets. Each of these parameters needs to be populated according to the scenario and the data source type. To remove some of the uncertainty and automate the use of RLS, we added additional properties to the JSON object of the dataset (a short sketch of how an app might use them follows the list):

  • isEffectiveIdentityRequired – If the dataset requires an effective identity, this property value will be ‘true’, indicating that you must send an effective identity in the GenerateToken API.
  • isEffectiveIdentityRolesRequired – When RLS is defined inside the PBIX file, this property value will be ‘true’, indicating that you must specify a role.
  • isOnPremGatewayRequired – When the property value is ‘true’, it indicates that you must use a gateway for this on-premises data source.
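Here is a hypothetical C# sketch of how an app could use these properties with the Power BI .NET SDK before generating an embed token. The IDs, user name, and role are placeholders, and the property and method names follow the SDK as we understand it, so double-check them against your SDK version:

// 'client' is an authenticated PowerBIClient; 'report' was retrieved from the workspace.
var dataset = await client.Datasets.GetDatasetByIdInGroupAsync(groupId, report.DatasetId);

GenerateTokenRequest tokenRequest;
if (dataset.IsEffectiveIdentityRequired == true)
{
    var identity = new EffectiveIdentity(
        username: "app-user@contoso.com",                      // your app's user
        datasets: new List<string> { report.DatasetId },
        roles: dataset.IsEffectiveIdentityRolesRequired == true
            ? new List<string> { "roleA" }                     // role defined in the model
            : null);

    tokenRequest = new GenerateTokenRequest("View", null,
        identities: new List<EffectiveIdentity> { identity });
}
else
{
    tokenRequest = new GenerateTokenRequest(accessLevel: "View");
}

var embedToken = await client.Reports.GenerateTokenInGroupAsync(groupId, report.Id, tokenRequest);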

Export/Clone PBIX (model and reports)

Use the Export PBIX API to retrieve the PBIX file by a report identifier. The response will contain a PBIX file object. Once retrieved, you can do one of two things with it: save the PBIX file locally for offline exploration using Power BI Desktop, or use the saved PBIX file with the Import PBIX operation to clone the reports and their respective datasets. Here is a code sample showing how to retrieve the PBIX and save it – try it out!

var exportRequestUri = String.Format("https://api.powerbi.com/v1.0/myorg/reports/{0}/Export", "Enter the report ID");

// Create the HTTP transport objects
HttpWebRequest request = System.Net.WebRequest.Create(exportRequestUri) as System.Net.HttpWebRequest;
request.Method = "GET";
request.Headers.Add("Authorization", String.Format("Bearer {0}", "Enter your Access token"));

// Get the HttpWebResponse from the GET request
WebResponse response = request.GetResponse();

using (Stream exportResponse = response.GetResponseStream())
{
    // Save the PBIX stream to disk
    CopyStream(exportResponse, "Enter your destination path");
}

public void CopyStream(Stream stream, string destPath)
{
    using (var fileStream = new FileStream(destPath, FileMode.Create, FileAccess.Write))
    {
        stream.CopyTo(fileStream);
    }
}

In our next SDK update we will add support for this API call. For more information, see Export report.

Language configuration

You can define the language and text formatting of your embedded content. Changing this setting will mostly impact the number and date formatting, or the Bing maps view in your embedded content. See the full list of supported languages.

The settings can be configured through the ‘embed configuration’. Read more about the embed configurations (Search for ‘Locale Settings’).

What’s still puzzling

Q: I want to test my content through the sample tool, but how do I get the Embed Token to use it?

We get a lot of questions around using our Sample tool. It’s a great tool to explore our JS API, understand how you can embed content easily and leverage user interactions to enrich your native app experience. In this great video by Adam Saxton (Guy in a Cube), you can learn how to get the Embed Token and other properties to use the sample tool with your own content.

Funnel plot

On occasion, we find patterns in statistical noise that lead us to incorrect conclusions about the underlying data.

This month we are very excited to announce a new R-powered visual type: the funnel plot!

The funnel plot helps you compare samples, and find true outliers among the measurements with varying precision. It’s widely used for comparing institutional performance and medical data analysis.


The funnel plot is easy to consume and interpret. The “funnel” is formed by confidence limits and shows the amount of expected variation. The dots outside the funnel are outliers.

You can check the visual out in the Office store.

Tutorial on R-powered visualization in Power BI

R-based visualizations in Power BI have many faces. We support R-visuals and R-powered Custom Visuals. The latest can be one of two types: PNG-based and HTML-based.

What are the pros and cons of each type? How do you convert one type to another? How do you create a custom visual from scratch? How do you change an existing custom visual to suit your needs? How do you debug an R-powered custom visual?

All these and many other questions are answered in our comprehensive step-by-step tutorial on R-powered visualization. You are invited to follow the steps from a simple R script to a high-quality HTML-based custom visual in the store; the source code for every step is included. The changes from step to step are documented and explained in detail. The tutorial also contains bonus examples, useful links, and Tips and Tricks sections.

That’s all for this post. We hope you found it useful. Please continue sending us your feedback; it’s very important to us. Have an amazing feature in mind? Please share it and vote in our Power BI Embedded Ideas forum or our Custom Visuals Ideas forum.


Microsoft Power BI Blog | Microsoft Power BI

Using Legacy Data Sources in Tabular 1400

The modern Get Data experience in Tabular 1400 brings interesting new data discovery and transformation capabilities to Analysis Services. However, not every BI professional is equally excited. Especially those who prefer to build their models exclusively on top of SQL Server databases or data warehouses and appreciate the steadiness of tried and proven T-SQL queries over fast SQL OLE DB provider connections might not see a need for mashups. If you belong to this group of BI professionals, there is good news: Tabular 1400 fully supports provider data sources and native query partitions. The modern Get Data experience is optional.

Upgrading from 1200 to 1400

Perhaps the easiest way to create a Tabular 1400 model with provider data sources and native query partitions is to upgrade an existing 1200 model to the 1400 compatibility level. If you used Windiff or a similar tool to compare the Model.bim file in your Tabular project before and after the upgrade, you will find that not much was changed. In fact, the only change concerns the compatibilityLevel parameter, which the upgrade logic sets to a value of 1400, as the following screenshot reveals.

At the 1400 compatibility level, regardless of the data sources and table partitions, you can use any advanced modeling feature, such as detail rows expressions and object-level security. There are no dependencies on structured data sources or M partitions using the Mashup engine. Legacy provider data sources and native query partitions work just as well. They bypass the Mashup engine. It’s just two different code paths to get the data.

Provider data sources versus structured data sources

Provider data sources get their name from the fact that they define the parameters for a data provider in the form of a connection string that the Analysis Services engine then uses to connect to the data source. They are sometimes referred to as legacy data sources because they are typically used in 1200 and earlier compatibility levels to define the data source details.

Structured data sources, on the other hand, get their name from the fact that they define the connection details in structured JSON property bags. They are sometimes referred to as modern or Power Query/M-based data sources because they correspond to Power Query/M-based data access functions, as explained in more detail in Supporting Advanced Data Access Scenarios in Tabular 1400 Models.

At a first glance, provider data sources have an advantage over structured data sources because they provide full control over the connection string. You can specify any advanced parameter that the provider supports. In contrast, structured data sources only support the address parameters and options that their corresponding data access functions support. This is usually sufficient, however. Note that provider data sources also have disadvantages, as explained in the next section.

A small sample application can help to illustrate the metadata differences between provider data sources and structured data sources. Both can be added to a Tabular 1400 model using Tabular Object Model (TOM) or the Tabular Model Scripting Language (TMSL).
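For instance, a minimal TOM sketch (C#, Microsoft.AnalysisServices.Tabular) that adds a provider data source to a 1400 model might look like the following; the server, database, and connection string are placeholders:

using Microsoft.AnalysisServices.Tabular;

var server = new Server();
server.Connect("localhost\\tabular");                              // assumption: local AS instance
Model model = server.Databases.GetByName("AdventureWorks").Model;  // assumption: existing 1400 model

// Legacy-style provider data source: the connection string is passed through as-is.
model.DataSources.Add(new ProviderDataSource
{
    Name = "SqlProviderDataSource",
    ConnectionString = "Provider=SQLNCLI11;Data Source=localhost;" +
                       "Initial Catalog=AdventureWorksDW;Integrated Security=SSPI",
    ImpersonationMode = ImpersonationMode.ImpersonateServiceAccount
});

// A structured data source would instead be a StructuredDataSource object whose
// connection and credential details live in JSON property bags (ConnectionDetails, Credential).

model.SaveChanges();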

Note that Analysis Services always invokes the Mashup engine when using structured data sources to get the data. It might or might not for provider data sources. The choice depends on the table partitions on top of the data source, as the next section explains.

Query partitions versus M partitions

Just as there are multiple types of data source definitions in Tabular 1400, there are also multiple partition source types to import data into a table. Specifically, you can define a partition by using a QueryPartitionSource or an MPartitionSource, as in the following TOM code sample.
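For example (a sketch only, reusing the provider data source added earlier plus a hypothetical structured data source named SQL/localhost;AdventureWorksDW), the two partition source types can be created in TOM like this:

// 'model' is the Model object connected in the earlier sketch.
Table table = model.Tables.Find("FactOnlineSales");

// Query partition source: a native T-SQL query against the (legacy) provider data source.
table.Partitions.Add(new Partition
{
    Name = "FactOnlineSales - Query",
    Source = new QueryPartitionSource
    {
        DataSource = model.DataSources.Find("SqlProviderDataSource"),
        Query = "SELECT * FROM dbo.FactOnlineSales"
    }
});

// M partition source: an M expression over a structured data source.
table.Partitions.Add(new Partition
{
    Name = "FactOnlineSales - M",
    Source = new MPartitionSource
    {
        Expression =
            "let\n" +
            "    Source = #\"SQL/localhost;AdventureWorksDW\",\n" +
            "    Data = Source{[Schema=\"dbo\",Item=\"FactOnlineSales\"]}[Data]\n" +
            "in\n" +
            "    Data"
    }
});

model.SaveChanges();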

As illustrated, you can mix query partitions with M partitions even on a single table. The only requirement is that all partition sources must return the same set of source columns, mapped to table columns at the Tabular metadata layer. In the example above, both partitions use the same data source and import the same data, so you end up with duplicate rows. This is normally not what you want, but in this concrete example, the duplicated rows help to illustrate that Analysis Services could indeed process both partition sources successfully, as in the following screenshot.

The Model.bim file reveals that the M and query partition sources reference a structured data source, but they could also reference a provider data source, as in the screenshot below the following table, which summarizes the possible combinations. In short, you can mix and match to your heart’s content.

1. Provider data source + query partition source – The AS engine uses the cartridge-based connectivity stack to access the data source.
2. Provider data source + M partition source – The AS engine translates the provider data source into a generic structured data source and then uses the Mashup engine to import the data.
3. Structured data source + query partition source – The AS engine wraps the native query on the partition source into an M expression and then uses the Mashup engine to import the data.
4. Structured data source + M partition source – The AS engine uses the Mashup engine to import the data.

Scenarios 1 and 4 are straightforward. Scenario 3 is practically equivalent to scenario 4: instead of creating a query partition source with a native query and having the AS engine convert this into an M expression, you could define an M partition source in the first place and use the Value.NativeQuery function to specify the native query, as the following screenshot demonstrates. Of course, this only works for connectors that support native source queries and the Value.NativeQuery function.
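Continuing the TOM sketch from above (the data source name and query are again placeholders), a Value.NativeQuery-based partition source might look like this:

// 'table' and the structured data source are the ones from the earlier sketch.
table.Partitions.Add(new Partition
{
    Name = "FactOnlineSales - NativeQuery",
    Source = new MPartitionSource
    {
        // Value.NativeQuery sends the native T-SQL statement through the Mashup engine.
        Expression =
            "let\n" +
            "    Source = #\"SQL/localhost;AdventureWorksDW\",\n" +
            "    Data = Value.NativeQuery(Source, \"SELECT * FROM dbo.FactOnlineSales\")\n" +
            "in\n" +
            "    Data"
    }
});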

Scenario 2, “M partition on top of a provider data source” is more complex than the others because it involves converting the provider data source into a generic structured data source. In other words, a provider data source pointing to a SQL Server database is not equivalent to a structured SQL Server data source because the AS engine does not convert this provider data source into a structured SQL Server data source. Instead, it converts it into a generic structured OLE DB, ODBC, or ADO.NET data source depending on the data provider that the provider data source referenced. For SQL Server connections, this is usually an OLE DB data source.

The fact that provider data sources are converted into generic structured data sources has important implications. For starters, M expressions on top of a generic data source differ from M expressions on top of a specific structured data source. For example, as the next screenshot highlights, an M expression over an OLE DB data source requires additional navigation steps to get to the desired table. You cannot simply take an M expression based on a structured SQL Server data source and put it on top of a generic OLE DB provider data source. If you tried, you would most likely get an error that the expression references an unknown variable or function.

Moreover, the Mashup engine cannot apply its query optimizations for SQL Server when using a generic OLE DB data source, so M expressions on top of generic provider data sources cannot be processed as efficiently as M expressions on top of specific structured data sources. For this reason, it is better to add a new structured data source to the model for any new M expression-based table partitions than to use an existing provider data source. Provider data sources and structured data sources can coexist in the same Tabular model.

In Tabular 1400, the main purpose of a provider data source is backward compatibility with Tabular 1200 so that the processing behavior of your models does not change just because you upgraded to 1400 and so that any ETL logic programmatically generating data sources and table partitions continues to work seamlessly. As mentioned, query partitions on top of a provider data source bypass the Mashup engine. However, the processing performance is not necessarily inferior with a structured data source thanks to a number of engine optimizations. This might seem counterintuitive, but it is a good idea to double-check the processing performance in your environment. The Microsoft SQL Server Native Client OLE DB Provider is indeed performing faster than the Mashup engine. In very large Tabular 1400 models connecting to SQL Server databases, it can therefore be advantageous to use a provider data source and query partitions.

Data sources and partitions in SSDT Tabular

With TMSL and TOM, you can create data sources and table partitions in any combination, but this is not the case in SSDT Tabular. By default, SSDT creates structured data sources, and when you right-click a structured data source in Tabular Model Explorer and select Import New Tables, you launch the modern Get Data UI. Among other things, the default behavior helps to provide a consistent user interface and avoids confusion. You don’t need to weigh the pros and cons of provider versus structured and you don’t need to select a different partition source type and work with a different UI just because you wanted to write a native query. As explained in the previous section, an M expression using Value.NativeQuery is equivalent to a query partition over a structured data source.

Only if a model already contains provider data sources, say due to an upgrade from 1200, does SSDT display the legacy UI for editing these metadata objects. By the same token, when you right-click a provider data source in Tabular Model Explorer and select Import New Tables, you launch the legacy UI for defining a query partition source. If you don’t add any new data sources, the user interface is still consistent with the 1200 experience. Yet, if you mix provider and structured data sources in a model, the UI switches back and forth depending on what object type you edit. See the following screenshot with the modern experience on the left and the legacy UI on the right – which one you see depends on the data source type you right-clicked.

Fully enabling the legacy UI

BI professionals who prefer to build their Tabular models exclusively on top of SQL Server data warehouses using native T-SQL queries might look unfavorably at SSDT Tabular’s strong bias towards the modern Get Data experience. But the good news is that you can fully enable the legacy UI to create provider data sources in Tabular 1400 models, so you don’t need to resort to using TMSL or TOM for this purpose.

In the current version of SSDT Tabular, you must configure a DWORD parameter called “Enable Legacy Import” in the Windows registry. Setting this parameter to 1 enables the legacy UI. Setting it to zero or removing the parameter disables it again. To enable the legacy UI, you can copy the following lines into a .reg file and import the file into the registry. Do not forget to restart Visual Studio to apply the changes.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\14.0\Microsoft Analysis Services\Settings]
"Enable Legacy Import"=dword:00000001

With the legacy UI fully enabled, you can right-click on Data Sources in Tabular Model Explorer and choose to Import From Data Source (Legacy) or reuse Existing Connections (Legacy), as in the following screenshot. As you would expect, these options create provider data sources in the model and then you can create query partitions on top of these.

Wrapping things up

While the AS engine, TMSL, and TOM give you full control over data sources and table partitions, SSDT Tabular attempts to simplify things by favoring structured data sources and M partitions wherever possible. The legacy UI only shows up if you already have provider data sources or query partitions in your model. Should legacy data sources and query partitions be first-class citizens in Tabular 1400? Perhaps SSDT should provide an explicit option in the user interface to enable the legacy UI, eliminating the need to configure a registry parameter. Let us know if this is something we should do. Also, there is currently no SSDT support for creating M partitions over provider data sources or query partitions over structured data sources, because these scenarios seem less important and less desirable. Do you need these features?

Send us your feedback via email to SSASPrev at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. Or simply post a comment to this article. Influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers!


Analysis Services Team Blog

Getting Started with Azure Analysis Services Webinar Sept 21

The Azure Analysis Services team would like to help you get started with this exciting new offering that makes scaling Business Intelligence solutions easier than ever.

In this webinar you will meet members of the Analysis Services team (such as Josh Caplan and Kay Unkroth), who will show you the easiest way to get started with Analysis Services. If you aren’t familiar with it, Azure Analysis Services provides enterprise-grade data modeling in the cloud. It enables you to mash up and combine data from multiple sources, define metrics, and secure your data in a single, trusted semantic data model. The data model provides an easier and faster way for your users to browse massive amounts of data with client applications like Power BI, Excel, Reporting Services, third-party tools, and custom apps.

When:

September 21st 10AM PST

Subscribe to watch:

https://www.youtube.com/watch?v=MmCaggAnvhM



Microsoft Power BI Blog | Microsoft Power BI

Don't miss the Timeline Storyteller custom visual contest!

Don’t forget: there are only a few days left to enter the Timeline Storyteller custom visual contest! You could win a Power BI Super Swag Prize Pack.

To enter, publish a report in the Data Stories Gallery using the new Timeline Storyteller visual. Make sure you use the category “Timeline Contest”, and share your entry by 11:59 p.m. PT on September 18.


Our Data Journalism and Microsoft Research teams will select a winner and two runners-up from the submitted reports, based on storytelling, data modeling, and good use of the Timeline Storyteller visual. Please note that all submissions will be used by Microsoft for research and to improve our products and services. We may also include them in a future research publication.


Timeline Storyteller is an expressive visual storytelling environment for presenting timelines. You can use Timeline Storyteller to present different aspects of your data using a palette of timeline representations, scales, and layouts, as well as controls for filtering, highlighting, and annotation.


Microsoft Power BI Blog | Microsoft Power BI

Power BI Feature Spotlight: Report Drill Through


Every month Power BI gets an update that adds a nice handful of new features. Here at P3 we ALL get excited for this! We often stand around the water cooler talking about all the new features. When a REALLY exciting feature comes out we get as gleeful as school children! Well, starting this month we want to share our excitement with you by showcasing our favorite features (typically one or two), with the aim of giving you a few more ways to use them.

Our aim will be to keep these posts bite sized and easily digestible. We’ll also aim to get these posts out at the beginning of the week, shortly after the monthly Power BI update release. With that being said I want to get into this month’s awesome feature…Report Drill Through.

Report Drill Through enables users to create a report page filtered to a single entity (e.g., Customer, Employee, Store, Product). Reports often have a summary landing page where there might be a lot of information, but not much detail about a specific item. Traditionally in Power BI we’d create a separate reporting page for further detail breakouts on an entity. However, that would require its own set of SLICERS that you would have to re-select if you wanted it to mirror the reporting page you were coming from… that’s too many clicks!

With Drill Through, selecting an item in a table on one reporting page will take you to another page, FILTERED to the entity you selected! This feature essentially lets us create detail sub-pages that are linked to whatever primary reporting page the report uses. Features like this have been available for YEARS in Excel using linked cells, so I’m super happy we finally have this as a feature in Power BI Desktop. It only takes a few steps to set up, but I’ll leave the instructions for that over at the Power BI September release page. With that said, let me quickly run you through how this looks in a sample report.


Animated GIF showing Drill Through functionality from the Company Page TO the Employee Page


 

One HANDY feature included in Drill Through is the addition of a back button on the page you drilled into. The back button allows you to quickly and easily return to the page you came from, especially useful if you’re working in a report with MANY report pages! You can see where I’ve placed the back button next to the (A) in the image below. You’ll also notice a new filter field on the right side of the page (B). This release added a Drill Through filter here, and this is where you put the field you want to drill through on for this page. In our example that field is the Sales Person column.

[Screenshot: Employee report page showing the back button (A), the Drill Through filter field (B), and the dynamic report title (C)]

I also took it one step further, and created a handy DAX Measure to DYNAMICALLY display the text of the Drill Through filter. If you look at the top of the report in the image above (C) you’ll see that my report title shows the name of the Sales Person I filtered to. I came up with this solution because I really wanted an obvious callout to the filter that was being supplied from the Drill Through feature. All we need is a DAX measure and a card visual!

Employee Name =
CONCATENATE ( SELECTEDVALUE ( 'DIM Sales Person'[Sales Person], "Employee" ), " Report" )

This DAX measure will return the selected Sales Person’s name followed by “ Report” when the page is filtered to a single person, and “Employee Report” when it is unfiltered. Placing it on a card visual and removing the category label makes it look just like a regular textbox too! Now we essentially have a dynamic report title on our page, calling out the exact filter being applied from our Company Page. Whenever I implement the Drill Through feature I will use a DAX measure like this as a best practice, mainly because it will help avoid any confusion for the client about what was (or wasn’t) filtered.

Just because of how much I like their new Ribbon chart, I wanted to include that in this post as well. They also added this chart type in the September release, and it’s super powerful for showing the performance of a category over time. In the image above you can see how easy it is to trace performance trends for Product Category over time. Using DAX it is possible to retrieve a Top Performing “thing”; Rob even wrote a post about that years back. I got excited when I realized I could now observe the performance of all PRODUCTS over TIME! Again, you can read more about this feature on the Power BI September release post. That’s it for today P3 Nation! I’ve included the embedded report below as well as a download link to the .pbix file. Until next time!

Power BI Embedded Report

Download the Power BI Desktop (.pbix) Report Here



PowerPivotPro

On-premises data gateway September update

We’re very glad to announce that the September update for the On-premises data gateway is now available for download. This update includes the following enhancements:

  • Personal Mode support for national clouds.
  • Improved Gateway Configuration Experience for Personal Mode.
  • Updated version of the Mashup Engine.

You can download this new version and continue reading below for more details about each enhancement.


Personal Mode Support for national clouds

Starting with this month’s release, customers can use the Personal Mode of the On-premises data gateway when using Power BI in one of the supported national clouds. Please check the following documentation article for more information regarding national clouds support in Power BI: https://powerbi.microsoft.com/en-us/clouds/ 

Improved Gateway Configuration Experience for Personal Mode

We have improved the Configuration experience for the On-premises data gateway when running in Personal mode. This experience matches the existing one for Enterprise mode, including Status, Service Settings, Diagnostics and Network configuration options.


Updated version of the Mashup Engine

Last but not least, this Gateway update includes the same version of the Mashup Engine as the Power BI Desktop update released last week. This will ensure that the reports that you publish to the Power BI Service and refresh via the Gateway will go through the same query execution logic/runtime as in the latest Power BI Desktop version. Note that there are some Beta connectors (such as Spark) still not supported outside of Power BI Desktop. Please refer to specific connector documentation for more details or contact us if you have any questions.

That’s all for this month’s Gateway update. We hope that you find these enhancements useful and continue sending us feedback for what new capabilities you’d like to see in the future.




Microsoft Power BI Blog | Microsoft Power BI