
Tag Archives: Analysis

How Skullcandy Uses Predictive and Sentiment Analysis to Understand Customers

September 19, 2019   Sisense

Mark Hopkins is the Chief Information Officer at Park City, Utah-based Skullcandy, leading the global IT, Digital, and Customer Service teams. During his tenure, Mark has helmed Skullcandy’s digital transformation, and his team has successfully increased online revenue and presence in digital ecosystems, evolved into a world-class customer service organization, and enabled growth with innovative systems solutions. Mark’s team is constantly adapting to and meeting the challenges of a rapidly evolving business using cloud technologies, real-time analytics, data warehousing, and virtualization.


Skullcandy’s journey with advanced analytics started with our product development team daring to ask three big questions:

  1. What if we could predict return rates on new products before they were introduced?
  2. What if we could use insights around reviews and warranty claims to understand positive drivers, negative drivers, and sentiment to inform new product design decisions?
  3. What if we could use this data to focus our resources and deliver better products? 

I often compare those early days to a kid standing at the edge of a bowl at the skate park — getting ready to drop in for the first time. He knows it isn’t going to be pretty, but he has to start somewhere. We knew our journey with predictive analytics and sentiment analysis was going to be a gradual progression that would eventually help us understand and better serve our customers. We knew we might end up with some bumps and bruises, but to gain the advantage we had to take that first leap of faith.


As a long-time Sisense customer, we knew that we had an open analytics platform on our hands that would be the backbone of this next analytics frontier for Skullcandy. Here’s how we dropped in to answer the questions our Skullcandy product team dared to ask.

Predicting Return Rates On New Products 

The first piece of the puzzle was trying to see into the future — so we pulled BigSquid into the mix with Sisense to help with the heavy lifting. We were most interested in exploring if it was possible to predict the return rate on a new product based on historical return rates of products with similar features.

If you lack a data science team, integrating BigSquid with your open-platform BI tool is a powerful way to achieve the horsepower of data science while maintaining the ease of use that the average business user requires. We fed Kraken (BigSquid’s predictive analytics engine) information about historical warranty costs, claims, forecasts, historical product attributes, and attributes of the new products on the roadmap. Then we ran Kraken’s machine learning and predictive modeling engine to get the results.
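As a rough illustration of the kind of model this involves, here is a minimal Python sketch of a return-rate regression over historical product attributes. The files and column names are invented for this example, and this is not Big Squid’s proprietary engine:

# Hypothetical sketch of a return-rate model over product attributes.
# File and column names are invented; Kraken's actual engine is proprietary.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

history = pd.read_csv("historical_products.csv")  # past products with known return rates
features = ["price", "driver_size_mm", "is_wireless", "claims_per_unit"]

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["return_rate"], test_size=0.2, random_state=42
)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"holdout R^2: {model.score(X_test, y_test):.2f}")

# Score unreleased roadmap products by their design attributes
roadmap = pd.read_csv("roadmap_products.csv")
roadmap["predicted_return_rate"] = model.predict(roadmap[features])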

Following a few false starts and some great iterative learning with the BigSquid team, we came away with a solid predictive data model of the warranty costs for future periods. That predictive output was then fed into Sisense so we could drill-down, explore, and use these predictions to make data-driven decisions, ask new questions, and understand our cost drivers. For our product development team, these kinds of insights are a goldmine for exploring opportunities for impacting warranty costs on new products before they’re even released.

Using Sentiment Analytics to Inform New Product Design Decisions

With our predictive data models telling us what might happen in the future with our products, our next step was to use sentiment analysis models to tell us what customers are saying and feeling right now. Again, with our BI housed within Sisense, we could integrate our text and sentiment data using a few different techniques. Our BigSquid partners suggested using Python and its natural language processing libraries to understand what customers are talking about, and Sisense helped us use AWS Comprehend to understand how our customers feel about our products.

With this integrated data tech stack, we could feed in text from customer reviews and warranty claims to be processed by a Python NLP Engine in order to pull out key themes. We could also feed the same data into Amazon Comprehend to measure the sentiment behind the themes, and which products were most associated with certain sentiments. Finally, we could load this robust sentiment analysis back into Sisense — turning what was previously a “needle in a haystack” exercise into a richly engaging data experience yielding very targeted analysis for our business users.
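Here is a minimal Python sketch of what those two steps could look like, assuming the review text has already been extracted. Comprehend’s detect_key_phrases is used here as a stand-in for the custom Python NLP theme extraction described above; both calls are standard boto3 Comprehend operations:

# Minimal sketch: themes and sentiment for a batch of review texts.
# detect_key_phrases stands in for a custom NLP theme extractor.
import boto3

comprehend = boto3.client("comprehend", region_name="us-west-2")

reviews = [
    "Left side of the headset stopped working after a week.",
    "Bought these for my son and he loves the bass.",
]

for text in reviews:
    themes = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
    sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    label = sentiment["Sentiment"]                    # e.g. NEGATIVE or POSITIVE
    score = sentiment["SentimentScore"][label.capitalize()]
    print([p["Text"] for p in themes["KeyPhrases"]], label, round(score, 2))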

Where in the past we had to wade through disparate, siloed data, now we can correlate a stream of negative sentiment to reviews that mention a defect on the left side of a headset. And we can take it a step further and put a dollar value on that negative sentiment by connecting those negative reviews on a product to the same product’s warranty claims. Incoming products with similar design and engineering can be given extra attention by our product designers and engineers before going to market. And we can easily visualize how a fix could impact our warranty claim forecast. Full circle data experience: achieved.

Lessons Learned

If you’re considering a similar investment in your company’s data strategy, there are a few lessons we learned along the way:

  1. Be open to false starts with data modeling. Like any other data project, it won’t be instantly clear what your data model should look like. And bringing the predictive models into the mix means you’re connecting data points from the past, the present, and the “future” to build a model that provides actionable insights. It will be iterative. Be patient!
  2. Make the data easy to add to and modify. For our data team at Skullcandy, that meant using an easy-to-manipulate SQL view as our main data source. This allowed our team to easily swap out code and change the view as we continued to build and iterate (a minimal sketch follows this list).
  3. You may not like what you see. It’s fun to explore the positive drivers and get lost in the positive reviews. One of the surprises that our NLP Engine identified in one product’s reviews was a predominance of the phrase “for my son.” It was fun to think about why that might be. 

    But we also came face-to-face with some harsh reviews. Not as fun to read — but when you invest in advanced analytics, sometimes that’s exactly what you came for. Your goal is to identify what isn’t working for your customers so that future products can deliver what they want and need.
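As a minimal sketch of that second lesson, the snippet below maintains the model’s source as a swappable SQL view. All server, database, and object names are hypothetical:

# Hypothetical sketch of lesson 2: keep the model's source behind a SQL view
# so the underlying query can be swapped without touching the BI model.
# Server, database, and object names are invented for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=analytics-db;DATABASE=warranty;Trusted_Connection=yes;"
)
conn.execute("""
CREATE OR ALTER VIEW dbo.vw_sentiment_source AS
SELECT r.product_id, r.review_text, w.claim_cost
FROM dbo.reviews AS r
LEFT JOIN dbo.warranty_claims AS w ON w.product_id = r.product_id;
""")
conn.commit()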

Here at Skullcandy, we’re happy to report that “dropping in” to the predictive and sentiment analytics game was worth the initial uncertainty. We answered some of our most pressing questions, came up with some new insights we hadn’t originally considered, and this project has helped concretely demonstrate to the company what’s possible with advanced analytics.  As a result, we are further along on our journey as a data-driven company.

Tags: predictive analytics | SQL


Blog – Sisense


What’s new for SQL Server 2019 Analysis Services RC1

August 22, 2019   Self-Service BI

We find great pleasure in announcing RC1 of SQL Server 2019 Analysis Services (SSAS 2019). The SSAS 2019 release is now feature complete! We may still make performance improvements for the RTM release.

RC1 introduces the following features:

  • Custom ordering of calculation items in calculation groups
  • Query interleaving with short query bias for high-concurrency workloads
  • Online attach for optimized synchronization of read-only replicas
  • Improved performance of Power BI reports over SSAS multidimensional
  • Governance setting to control Power BI cache refreshes

SSAS 2019 features announced in previous CTPs are recapped here:

  • Calculation groups for calculation reusability in complex models (CTP 2.3)
  • Governance settings to protect server memory from runaway queries (CTP 2.4)
  • Many-to-many relationships can help avoid unnecessary “snowflake” models (CTP 2.4)
  • Dynamic measure formatting with calculation groups (CTP 3.0)

We think you’ll agree the pace of delivery from the Analysis Services engine team has been phenomenal lately. The SSAS 2019 release demonstrates Microsoft’s continued commitment to all our enterprise BI customers, whether on premises, in the cloud, or hybrid.

Calculation groups

Calculation groups address the issue of proliferation of measures in complex BI models often caused by common calculations like time-intelligence. SSAS models are reused throughout large organizations, so they tend to grow in scale and complexity.

The public preview of calculation groups was announced for SSAS 2019 in the CTP 2.3 blog post, and Azure Analysis Services on the Azure Updates blog. Calculation groups will soon be launched and supported in Power BI Premium initially through the XMLA endpoint.

We are grateful for the tremendous enthusiasm from the community for this landmark feature for Analysis Services tabular models. Specifically, we’d like to thank Marco Russo and Alberto Ferrari for their excellent series of articles, and of course Daniel Otykier for ensuring that Tabular Editor supported calculation groups since the very first preview (CTP 2.3).

Here’s a link to the official documentation page for calculation groups: https://aka.ms/CalculationGroups. It contains detailed examples for time-intelligence, currency conversion, dynamic measure formats, and how to set the precedence property for multiple calculation groups in a single model. We plan to keep this article up to date as we make enhancements to calculation groups and the scenarios covered.

Custom ordering calculation items in calculation groups (Ordinal property)

RC1 introduces the Ordinal property for custom ordering of calculation items in calculation groups. As shown in the following image, this property ensures calculation items are shown to end-users in a more intuitive way:

(Image: custom ordering of calculation items in a calculation group)

SQL Server Data Tools (SSDT) support for calculation groups is being worked on and is planned to arrive by SQL Server 2019 general availability. In the meantime, in addition to Tabular Editor, you can use SSAS programming and scripting interfaces such as TOM and TMSL. The following snippet of JSON metadata from a model.bim file shows the required properties to set up an Ordinal column for sorting purposes:

{
    "tables": [
        {
            "name": "Time Intelligence",
            "description": "Utility table for time-Intelligence calculations.",
            "columns": [
                {
                    "name": "Time Calc",
                    "dataType": "string",
                    "sourceColumn": "Name",
                    "sortByColumn": "Ordinal Col"
                },
                {
                    "name": "Ordinal Col",
                    "dataType": "int64",
                    "sourceColumn": "Ordinal",
                    "isHidden": true
                }
            ],
            "partitions": [
                {
                    "name": "Time Intelligence",
                    "source": {
                        "type": "calculationGroup"
                    }
                }
            ],
            "calculationGroup": {
                "calculationItems": [
                    {
                        "name": "YTD",
                        "description": "Generic year-to-date calculation.",
                        "calculationExpression": "CALCULATE(SELECTEDMEASURE(), DATESYTD(DimDate[Date]))",
                        "ordinal": 1
                    },
                    {
                        "name": "MTD",
                        "description": "Generic month-to-date calculation.",
                        "calculationExpression": "CALCULATE(SELECTEDMEASURE(), DATESMTD(DimDate[Date]))",
                        "ordinal": 2
                    }
                ]
            }
        }
    ]
}

Query interleaving with short query bias

SSAS models are reused throughout large organizations, so they often require high user concurrency. Query interleaving in RC1 allows system configuration for improved user experiences in high-concurrency scenarios.

By default, the Analysis Services tabular engine works in a first-in, first-out (FIFO) fashion with regard to CPU. This means, for example, that if one expensive, slow storage-engine query is received and followed by two otherwise fast queries, the fast queries can potentially get blocked waiting for the expensive query to complete. This is represented by the following diagram, which shows Q1, Q2, and Q3 as the respective queries, their duration, and CPU time.

(Image: FIFO query processing)

Query interleaving with short query bias allows concurrent queries to share CPU resources, so fast queries are not blocked behind slow ones. Short-query bias means fast queries (defined by how much CPU each query has already consumed at a given point in time) can be allocated a higher proportion of resources than long-running queries. In the following illustration, the Q2 and Q3 queries are deemed “fast” queries and therefore allocated more CPU than Q1.

(Image: query interleaving with short query bias)

Query interleaving is intended to have little or no performance impact on queries that run in isolation; a single query can still consume as much CPU as it does with the FIFO model.

For details on how to set up query interleaving, please see the official documentation page: https://aka.ms/QueryInterleaving
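To build intuition for the difference, here is a toy Python simulation (not the actual engine scheduler) contrasting FIFO execution with a scheduler that biases CPU shares toward queries that have consumed less CPU so far. The costs and tick model are arbitrary illustrations:

# Toy simulation (not the real scheduler): FIFO vs. short-query-bias sharing.
def fifo(costs):
    finish, clock = [], 0
    for cost in costs:
        clock += cost
        finish.append(clock)
    return finish

def short_query_bias(costs):
    remaining = list(costs)
    consumed = [0.0] * len(costs)
    finish = [None] * len(costs)
    clock = 0.0
    while None in finish:
        active = [i for i in range(len(costs)) if finish[i] is None]
        # Queries that have consumed less CPU so far get a larger share.
        weights = [1.0 / (1.0 + consumed[i]) for i in active]
        total = sum(weights)
        clock += 1.0  # one CPU tick shared among the active queries
        for i, w in zip(active, weights):
            share = w / total
            consumed[i] += share
            remaining[i] -= share
            if remaining[i] <= 1e-9:
                finish[i] = clock
    return finish

costs = [30, 2, 2]  # Q1 is expensive; Q2 and Q3 are fast
print("FIFO finish times:       ", fifo(costs))  # fast queries wait behind Q1
print("Interleaved finish times:", [round(f) for f in short_query_bias(costs)])

Under FIFO the fast queries finish at 32 and 34 ticks; with interleaving they finish after about 6 ticks, while Q1 still completes in roughly the same overall time.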

Online attach

Online attach can be used for synchronization of read-only replicas in on-premises query scale-out environments.

To perform an online-attach operation, use the AllowOverwrite option of the Attach XMLA command. This operation may require double the model memory to keep the old version online while loading the new version.

<Attach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Folder>C:\Program Files\Microsoft SQL Server\MSAS15\OLAP\Data\AdventureWorks.0.db\</Folder>
  <AllowOverwrite>True</AllowOverwrite>
</Attach>

A typical usage pattern could be as follows:

  1. DB1 (version 1) is already attached on read-only server B.
  2. DB1 (version 2) is processed on the write server A.
  3. DB1 (version 2) is detached and placed on a location accessible to server B (either via a shared location, or using robocopy, etc.).
  4. The <Attach> command with AllowOverwrite=True is executed on server B with the new location of DB1 (version 2).
    • Without this new feature, the administrator is first required to detach the database and then attach the new version of the database. This leads to downtime when the database is unavailable to users, and queries against it will fail.
    • When this new flag is specified, version 1 of the database is deleted atomically within the same transaction with no downtime. However, it comes at the cost of having both databases loaded simultaneously into memory.

Improved performance of Power BI reports over SSAS multidimensional

RC1 introduces optimized DAX query processing for commonly used DAX functions including SUMMARIZECOLUMNS and TREATAS. This can provide considerable performance benefits for Power BI reports over SSAS multidimensional. This enhancement has been referred to as “Super DAX MD”.

To make end-to-end use of this feature, you will need a forthcoming version of Power BI Desktop. Once the Power BI release is shipped and validated, we intend to enable it by default in an SSAS 2019 CU.

Governance setting for Power BI cache refreshes

The ClientCacheRefreshPolicy governance setting to control cache refreshes was first announced for Azure Analysis Services on the Azure blog in April. This property is now also available in SSAS 2019 RC1.

The Power BI service caches dashboard tile data and report data for the initial load of Live Connect reports. This can cause an excessive number of cache queries to be submitted to SSAS, and in extreme cases can overload the server.

1500 compatibility level

The SSAS 2019 modeling features work with the new 1500 compatibility level. 1500 models cannot be deployed to SQL Server 2017 or earlier or downgraded to lower compatibility levels.

Download Now

To get started with SQL Server 2019 RC1, find download instructions on the SQL Server 2019 web page. Enjoy!


Microsoft Power BI Blog | Microsoft Power BI


Connecting Azure Analysis Services to Azure Data Lake Storage Gen2

August 10, 2019   Self-Service BI

With the public preview of “Multi-Protocol Access” on Azure Data Lake Storage Gen2 now available, AAS can use the Blob API to access files in ADLS Gen2. This unlocks the entire ecosystem of tools, applications, and services, as well as all Blob storage features, for accounts that have a hierarchical namespace.

To illustrate this, we have created a guide for an example of how to use this new capability. You can find the guide in the Power BI community blog.

Please let us know what you think and enjoy!


Microsoft Power BI Blog | Microsoft Power BI


What’s new for SQL Server 2019 Analysis Services CTP 3.0

May 23, 2019   Self-Service BI

This is the first blog post dedicated to Analysis Services on the Power BI blog. As discussed in this post on the old Analysis Services team blog, Microsoft is moving off MSDN and TechNet blogs. We have made it clear that Power BI will be a one-stop shop for both enterprise and self-service BI on a single, all-inclusive platform. Power BI will be a superset of Analysis Services, so there is no better place to announce future Analysis Services features than here on the Power BI blog. The Analysis Services engine is at the foundation of Power BI and powers its datasets, so this just feels right.

The calculation groups feature was announced for SQL Server Analysis Services 2019 in the CTP 2.3 blog post. CTP 3.0 provides enhancements on top of calculation groups. Calculation groups will soon be officially launched and supported in Azure Analysis Services and Power BI Premium through the XMLA endpoint.

CTP 3.0

We are excited to announce the public CTP 3.0 of SQL Server 2019 Analysis Services. This public preview includes the following enhancements for Analysis Services tabular models.

  • Dynamic format strings with calculation groups
  • Calculation group support for MDX queries

Dynamic format strings

Until now, tabular models in Analysis Services and Power BI have supported dynamic formatting of measures using the FORMAT function. The FORMAT function has the disadvantage of returning a string, forcing measures that would otherwise be numeric to be returned as strings. This causes limitations, such as incompatibility with most Power BI visuals that depend on numeric values, like charts.

Dynamic format strings with calculation groups allow conditional application of format strings to measures without forcing them to return strings.

Dynamic format string example for time intelligence

Please refer to the time-intelligence example provided in the CTP 2.3 blog post. All the calculation items except YOY% should use the format of the current measure in context: Sales YTD should be currency, and Orders YTD should be a whole number. YOY%, however, should be a percentage regardless of the format of the base measure.

For YOY%, we can override the format string by setting the format string expression property to “0.00%;-0.00%;0.00%”. For more information on the sections of format strings in Analysis Services, please see this document.

The following matrix visual in Power BI shows that Sales Current/YOY and Orders Current/YOY retain their respective base measure format strings. Sales YOY% and Orders YOY% however override the format string to use percentage format.

(Image: Power BI matrix showing dynamic format strings with time intelligence)

Dynamic format string example for currency conversion

Currency conversion takes dynamic format strings to the next level. Consider the following Adventure Works data model. It is modeled for “one-to-many” currency conversion as defined by this document.

(Image: Adventure Works model for one-to-many currency conversion)

The FormatString column is added to the DimCurrency table and populated with format strings for the respective currencies.

(Image: FormatString column in the DimCurrency table)

This download file contains sample format strings.

The following calculation group is defined.

Table: Currency Conversion
Column: Conversion Calculation
Precedence: 5

Calculation Item: "No Conversion"
Expression:
SELECTEDMEASURE()

Calculation Item: "Converted Currency"
Expression:
IF(
    //Check one currency in context & not US Dollar, which is the pivot currency:
    SELECTEDVALUE( DimCurrency[CurrencyName], "US Dollar" ) = "US Dollar",
    SELECTEDMEASURE(),
    SUMX(
        VALUES(DimDate[Date]),
        CALCULATE( DIVIDE( SELECTEDMEASURE(), MAX(FactCurrencyRate[EndOfDayRate]) ) )
    )
)
Format String Expression:
SELECTEDVALUE(
    DimCurrency[FormatString],
    SELECTEDMEASUREFORMATSTRING()
)

The format string expression must return a scalar string. It uses the new SELECTEDMEASUREFORMATSTRING() function to revert to the base measure format string if there are multiple currencies in filter context.

The following visual shows the dynamic format of the Sales measure.

(Image: dynamic format of the Sales measure under currency conversion)

Previously, using the FORMAT function, it was not possible to switch visual type to a chart and display the values. Using dynamic format strings with calculation groups, we can.

MDX query support for calculation groups

In previous CTPs, calculation groups were not enabled for MDX queries. Now they are, so clients such as Excel can benefit.

Tooling

Calculation groups are currently engine-only features. SSDT support will come before SQL Server 2019 general availability. In the meantime, you can use the fantastic open-source community tool Tabular Editor to create calculation groups. Alternatively, you can use SSAS programming and scripting interfaces such as TOM and TMSL.

New 1470 compatibility level

To use calculation groups, existing models must be upgraded to the 1470 compatibility level. 1470 models cannot be deployed to SQL Server 2017 or earlier or downgraded to lower compatibility levels.

Download now

To get started with SQL Server 2019 CTP 3.0, find download instructions on the SQL Server 2019 web page. Enjoy!


Microsoft Power BI Blog | Microsoft Power BI


Future Analysis Services posts on Power BI blog

April 27, 2019   Self-Service BI

The Analysis Services Team Blog has served as the medium for many historic announcements and updates in Microsoft BI over the past 10 years. Well, Microsoft is moving off MSDN and TechNet blogs, so this will be the last post on this blog site. The site, including all historical posts, will remain online as a read-only archive.

‘So where will new Analysis Services blog posts be published?’ I hear you ask. We have made it clear that Power BI will be a one-stop shop for both enterprise and self-service BI on a single, all-inclusive platform. Power BI will be a superset of Analysis Services, so there is no better place to announce future Analysis Services features than, you guessed it, on the Power BI blog. The Analysis Services engine is at the foundation of Power BI and powers its datasets, so this just feels right.

Please stay tuned to the new Analysis Services category on the Power BI blog for future Azure Analysis Services and SQL Server Analysis Services announcements:

https://powerbi.microsoft.com/blog/category/analysis-services/


Analysis Services Team Blog


Move your pbix datamodel to Azure Analysis services – #powerbi

April 19, 2019   Self-Service BI

After the Azure Analysis Services web designer was discontinued per March 1, 2019 – link – there is no official tool to move a PBIX data model to Azure Analysis Services. But by using a few different tools, we do have ways of doing it anyway.

Step 1 – DAX Studio

Open your PBIX file in Power BI Desktop, then open DAX Studio (link to download) and connect DAX Studio to the Power BI model.


In the status bar, you will see the local port of the Analysis Services model running on your machine.


Step 2 – SQL Server Management Studio

Connect to Analysis Services using the server address from Step 1


This will give you a connection to the model


Now right-click the database and choose to script the database to a new Query Editor window (or the clipboard).


Step 3 – Connect to Azure Analysis Services

Use SQL Server Management Studio to connect to Azure Analysis Services.


Select to run a New XMLA Query on the server


Paste the script created in Step 2 into the new query window – you can specify a model name in the highlighted area.


Run the query, and after a few seconds you should see that it has completed successfully.


After refreshing the server object, you should see the newly scripted data model.


Step 4 – Finish the move

Now, depending on the data sources in your model, you need to set up the necessary connections and gateway.

Then use your favourite Analysis Services tool to modify the model – for instance, Tabular Editor (link).


You will have to go through all your data sources and tables to make sure the connections and M scripts are functioning.

Tip

Before you script your PBIX data model, turn off the time-intelligence option in your current file – otherwise you could get a lot of extra date/time tables in your model.



Erik Svensen – Blog about Power BI, Power Apps, Power Query


Prominent AI researchers call on Amazon to stop selling Rekognition facial analysis to law enforcement

April 4, 2019   Big Data

In a letter published today, a cohort of about two dozen AI researchers working in tech and academia are calling on Amazon’s AWS to stop selling facial recognition software Rekognition to law enforcement agencies.

Among those who object to Rekognition being used by law enforcement are deep learning luminary and recent Turing Award winner Yoshua Bengio, Caltech professor and former Amazon principal scientist Anima Anandkumar, and researchers in the fields of computer vision and machine learning at Google AI, Microsoft Research, and Facebook AI Research.

Rekognition has been used by police departments in Florida and Washington, and has reportedly been offered to the Department of Homeland Security to identify immigrants.

“We call on Amazon to stop selling Rekognition to law enforcement as legislation and safeguards to prevent misuse are not in place,” reads the letter. “There are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties.”

The researchers cite the work of privacy advocates who are concerned that law enforcement agencies with little understanding of the technical aspects of computer vision systems could make serious errors, like committing an innocent person to jail, or could place too much trust in autonomous systems.

“Decisions from such automated tools may also seem more correct than they actually are, a phenomenon known as ‘automation bias’, or may prematurely limit human-driven critical analyses,” the letter reads.

The letter also criticizes Rekognition for its binary classification of gender as male or female, an approach that can lead to misclassifications, and cites the work of researchers like Os Keyes, whose analysis of gender recognition research found few examples of work that incorporates transgender people.

The letter takes issue with arguments made by Amazon’s deep learning and AI general manager Mathew Wood and global head of public policy Michael Punke, who reject the results of a recent audit that found Rekognition misidentifies women with dark skin tones as men 31% of the time.

The analysis, which examined the performance of commercially available facial analysis tools like Rekognition, was published in January at the AAAI/ACM conference on Artificial Intelligence Ethics and Society by Inioluwa Deborah Raji and Joy Buolamwini.

The report follows the release a year ago of Gender Shades, an analysis that found facial recognition software from companies like Face++ and Microsoft had limited ability to recognize people with dark skin tones, especially women of color.

Timnit Gebru, a Google researcher who coauthored Gender Shades, also signed the letter published today.

A study the American Civil Liberties Union (ACLU) released last summer found that Rekognition inaccurately labeled members of the 115th U.S. Congress as criminals, a label Rekognition was twice as likely to bestow on members of Congress who are people of color than their white counterparts.

Following the release of the paper and an accompanying New York Times article, Wood claimed the research “draws misleading and false conclusions.”

In response, the letter published today says that in multiple blog posts Punke and Wood “misrepresented the technical details for the work and the state-of-the-art in facial analysis and face recognition.” The letter also refutes specific claims made by Wood and Punke, like the assertion that facial recognition and facial analysis have completely different underlying technology.

Instead, the letter asserts that many machine learning researchers view the two as closely related and that facial recognition data sets can be used to train models for facial analysis.

“So in contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications.”

The letter opposing law enforcement use of Rekognition comes weeks after members of the U.S. Senate proposed legislation to regulate the use of facial recognition software.

For its part, Amazon said it welcomes some form of regulation or “legislative framework,” while Microsoft urged the federal government to regulate facial recognition software before law enforcement agencies abuse it.


Big Data – VentureBeat


Azure Analysis Services now supported and what’s coming next with Paginated Reports in Power BI

April 4, 2019   Self-Service BI

Power BI is a platform that lets you meet the diverse needs of your enterprise user base, from self-service analytics to centralized report distribution.  A critical part of this flexibility is the enterprise semantic model, which provides a secure, governed, and descriptive representation of your business metrics, available as a single source of truth across your enterprise.  This semantic model can be reused in reports built by your IT team, in those created by business analysts, and even in external BI tools like Excel or even third-party tools as recently announced with support for the XMLA endpoint.

Paginated Reports using the enterprise semantic model

Over the next month, we will also allow Paginated reports to be built on this same enterprise semantic model, reducing the complexity of reporting deployments that mix interactive and paginated reporting.  Starting this week, you can build Paginated reports on semantic models from Azure Analysis Services.  Later in April, you will also be able to build Paginated reports on semantic models deployed as Power BI datasets.  As always, when you consume the semantic model, the row-level security (RLS) defined in the model is honored for each user regardless of tool, and this is the same for Paginated reports.

How to get started with Azure Analysis Services and Paginated Reports

To connect to an Azure Analysis Services data source, use the existing option to connect to SQL Server Analysis Services in Report Builder and build your connection string accordingly.  From there, you can simply build out your report as you would with any of the supported data sources and upload it through the Power BI portal.

“New Feature Friday” for Paginated Reports

Today’s announcement is the first of several you’ll see for new features in Power BI.  In fact, we’ll be announcing one every Friday for the next several weeks in what we’re calling “New Feature Friday”. These will include several highly anticipated features, including support for Power BI datasets, sharing via apps, and e-mail subscriptions.

FAQ:

When is General Availability for the feature planned?

General availability for Paginated Reports is coming in a few months’ time.  We have spent the last few months working hard to add features and make sure the capabilities are GA-ready, so you should feel comfortable moving to production at any time before then if you’d like.

Will the feature be coming to smaller SKU sizes of Premium?

Yes, we are planning to support the A3 and EM3 SKU sizes around the GA timeframe, in addition to the currently supported Premium SKUs.

Will the feature be available for Pro users vs. being a Premium only feature?

We know this is a popular request among our customers and users and we are continuing to evaluate options.  Please vote on this idea to get updates as we evaluate future capabilities —  Paginated Reports in Pro and Premium


Microsoft Power BI Blog | Microsoft Power BI


What’s new for SQL Server 2019 Analysis Services CTP 2.4

March 28, 2019   Self-Service BI

We are excited to announce the public CTP 2.4 of SQL Server 2019 Analysis Services. This public preview includes the following enhancements for Analysis Services tabular models.

  • Many-to-many relationships
  • Memory settings for resource governance

Many-to-many relationships

Many-to-many (M2M) relationships in CTP 2.4 are based on M2M relationships in Power BI described here. M2M relationships in CTP 2.4 do not work with composite models. They allow relationships between tables where both columns are non-unique. A relationship can be defined between a dimension and fact table at a granularity higher than the key column of the dimension. This avoids having to normalize dimension tables and can improve the user experience because the resulting model has a smaller number of tables with logically grouped columns. For example, if Budget is defined at the Product Category level, it is not necessary to normalize the Product dimension into separate tables; one at the granularity of Product and the other at the granularity of Product Category.

Tooling

Many-to-many relationships are currently engine-only features. SSDT support will come before SQL Server 2019 general availability. In the meantime, you can use the fantastic open-source community tool Tabular Editor to create many-to-many relationships. Alternatively, you can use SSAS programming and scripting interfaces such as TOM and TMSL.

New 1470 Compatibility Level

To use many-to-many relationships, existing models must be upgraded to the 1470 compatibility level. 1470 models cannot be deployed to SQL Server 2017 or earlier or downgraded to lower compatibility levels.

Memory settings for resource governance

The memory settings described here are already available in Azure Analysis Services. With CTP 2.4, they are now also supported by SQL Server 2019 Analysis Services.

QueryMemoryLimit

The Memory\QueryMemoryLimit property can be used to limit memory spools built by DAX queries submitted to the model.

Changing this property can be useful in controlling expensive queries that result in significant materialization. If the spooled memory for a query hits the limit, the query is cancelled and an error is returned, reducing the impact on other concurrent users of the system. Currently, MDX queries are not affected. It does not account for other general memory allocations used by the query.

The value can be set from 1 to 100, which is interpreted as a percentage of server memory; values above 100 are interpreted as bytes. The default value of 0 means not specified, and no limit is applied.
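As a worked example of these semantics, the hypothetical Python helper below converts a QueryMemoryLimit setting into an effective per-query limit in bytes; the function is purely illustrative and not part of any product API:

# Worked example of the QueryMemoryLimit semantics described above:
# 0 = no limit, 1-100 = percent of server memory, >100 = absolute bytes.
# This helper is purely illustrative, not a product API.
from typing import Optional

def effective_query_memory_limit(setting: int, server_memory_bytes: int) -> Optional[int]:
    if setting == 0:
        return None                                   # default: no limit applied
    if 1 <= setting <= 100:
        return server_memory_bytes * setting // 100   # interpreted as a percentage
    return setting                                    # interpreted as bytes

server_mem = 64 * 1024**3                             # 64 GB server
print(effective_query_memory_limit(10, server_mem))   # 10% of 64 GB
print(effective_query_memory_limit(2_000_000_000, server_mem))  # 2 GB absolute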

You can set this property by using the latest version of SQL Server Management Studio (SSMS), in the Server Properties dialog box. See the Server Memory Properties article for more information.

DbpropMsmdRequestMemoryLimit

The DbpropMsmdRequestMemoryLimit XMLA property can be used to override the Memory\QueryMemoryLimit server property value for a connection.  The unit of measure is kilobytes. See the Connection String Properties article for more information.

RowsetSerializationLimit

The OLAP\Query\RowsetSerializationLimit server property limits the number of rows returned in a rowset to clients.

This property, set in the Server Properties dialog box in the latest version of SSMS, applies to both DAX and MDX. It can be used to protect server resources from extensive data export usage. Queries submitted to the server that would exceed the limit are cancelled and an error is returned. The default value is -1, meaning no limit is applied.

Download Now

To get started with SQL Server 2019 CTP 2.4, find download instructions on the SQL Server 2019 web page. Enjoy!


Analysis Services Team Blog


What’s new for SQL Server 2019 Analysis Services CTP 2.3

March 1, 2019   Self-Service BI

We find great pleasure in announcing the public CTP 2.3 of SQL Server 2019 Analysis Services. New features detailed here are planned to ship later in Power BI Premium and Azure Analysis Services.

Calculation groups

Here is a question for seasoned BI professionals: what is the most powerful feature of SSAS multidimensional? Many would say the ability to define calculated members, typically using scoped cell assignments. Calculated members in multidimensional enable complex calculations by reusing calculation logic. Unfortunately, Analysis Services tabular doesn’t have equivalent functionality. Correction: it does now!!!

Calculation groups address the issue of proliferation of measures in complex BI models often caused by common calculations like time-intelligence. Enterprise models are reused throughout large organizations, so they grow in scale and complexity. It is not uncommon for Analysis Services models to have hundreds of base measures. Each base measure often requires the same time-intelligence analysis. For example, Sales and Order Count may require:

  • Sales MTD, Sales QTD, Sales YTD, Sales PY, Sales YOY%, …
  • Orders MTD, Orders QTD, Orders YTD, Orders PY, Orders YOY%, …

As you can see, this can easily explode the number of measures. If a model has 100 base measures and each requires 10 time-intelligence representations, the model ends up with 1,000 measures in total (100*10). This creates the following problems.

  • The user experience is overwhelming because users must sift through so many measures
  • DAX is difficult to maintain
  • Model metadata is bloated

Calculation groups address these issues. They are presented to end-users as a table with a single column. Each value in the column represents a reusable calculation that can be applied to any of the measures where it makes sense. The reusable calculations are called calculation items.

By reducing the number of measures, calculation groups present an uncluttered user interface to end users. They are an elegant way to manage DAX business logic. Users simply select calculation groups in the field list to view the calculations in Power BI visuals. There is no need for the end user or modeler to create separate measures.

(Image: calculation group shown in the Power BI field list)

Time-intelligence example

Consider the following calculation group example.

Table: Time Intelligence
Column: Time Calculation
Precedence: 20

Calculation Item: "Current"
SELECTEDMEASURE()

Calculation Item: "MTD"
CALCULATE(SELECTEDMEASURE(), DATESMTD(DimDate[Date]))

Calculation Item: "QTD"
CALCULATE(SELECTEDMEASURE(), DATESQTD(DimDate[Date]))

Calculation Item: "YTD"
CALCULATE(SELECTEDMEASURE(), DATESYTD(DimDate[Date]))

Calculation Item: "PY"
CALCULATE(SELECTEDMEASURE(), SAMEPERIODLASTYEAR(DimDate[Date]))

Calculation Item: "PY MTD"
CALCULATE(
    SELECTEDMEASURE(),
    SAMEPERIODLASTYEAR(DimDate[Date]),
    'Time Intelligence'[Time Calculation] = "MTD"
)

Calculation Item: "PY QTD"
CALCULATE(
    SELECTEDMEASURE(),
    SAMEPERIODLASTYEAR(DimDate[Date]),
    'Time Intelligence'[Time Calculation] = "QTD"
)

Calculation Item: "PY YTD"
CALCULATE(
    SELECTEDMEASURE(),
    SAMEPERIODLASTYEAR(DimDate[Date]),
    'Time Intelligence'[Time Calculation] = "YTD"
)

Calculation Item: "YOY"
SELECTEDMEASURE() -
CALCULATE(
    SELECTEDMEASURE(),
    'Time Intelligence'[Time Calculation] = "PY"
)

Calculation Item: "YOY%"
DIVIDE(
    CALCULATE(
        SELECTEDMEASURE(),
        'Time Intelligence'[Time Calculation]="YOY"
    ),
    CALCULATE(
        SELECTEDMEASURE(),
        'Time Intelligence'[Time Calculation]="PY"
    )
)

Here is a DAX query and output. The output shows the calculations applied. For example, QTD for March 2012 is the sum of January, February and March 2012.

EVALUATE
CALCULATETABLE (
    SUMMARIZECOLUMNS (
        DimDate[CalendarYear],
        DimDate[EnglishMonthName],
        "Current", CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "Current" ),
        "QTD",     CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "QTD" ),
        "YTD",     CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "YTD" ),
        "PY",      CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "PY" ),
        "PY QTD",  CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "PY QTD" ),
        "PY YTD",  CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "PY YTD" )
    ),
    DimDate[CalendarYear] IN { 2012, 2013 }
)

(Image: time-intelligence query output)

Sideways recursion

Some of the calculation items refer to other ones in the same calculation group. This is called “sideways recursion”. For example, YOY% (shown below for easy reference) refers to 2 other calculation items, but they are evaluated separately using different calculate statements. Other types of recursion are not supported (see below).

DIVIDE(
    CALCULATE(
        SELECTEDMEASURE(),
        'Time Intelligence'[Time Calculation]="YOY"
    ),
    CALCULATE(
        SELECTEDMEASURE(),
        'Time Intelligence'[Time Calculation]="PY"
    )
)

Single calculation item in filter context

Here is the definition of PY YTD:

CALCULATE(
    SELECTEDMEASURE(),
    SAMEPERIODLASTYEAR(DimDate[Date]),
    'Time Intelligence'[Time Calculation] = "YTD"
)

The YTD argument to the CALCULATE() function overrides the filter context to reuse the logic already defined in the YTD calculation item. It is not possible to apply both PY and YTD in a single evaluation. Calculation groups are only applied if a single calculation item from the calculation group is in filter context.

This is illustrated by the following query and output.

EVALUATE
CALCULATETABLE (
    SUMMARIZECOLUMNS (
        DimDate[CalendarYear],
        DimDate[EnglishMonthName],

        //No time intelligence applied: all calc items in filter context:
        "InternetTotalSales", [InternetTotalSales],

        //No time intelligence applied: 2 calc items in filter context:
        "PY || YTD", CALCULATE ( [InternetTotalSales],
            'Time Intelligence'[Time Calculation] = "PY" || 'Time Intelligence'[Time Calculation] = "YTD"
        ),

        //YTD applied: exactly 1 calc item in filter context:
        "YTD", CALCULATE ( [InternetTotalSales], 'Time Intelligence'[Time Calculation] = "YTD" )
    ),
    DimDate[CalendarYear] = 2012
)

(Image: query output for a single calculation item in filter context)

A calculation group should be designed so that each calculation item within it presented to the end user only makes sense to be applied one at a time. If there is a business requirement to allow the end user to apply more than one calculation item at a time, multiple calculation groups should be used with different precedence.

Precedence

In the same model as the time-intelligence example above, the following calculation group also exists. It contains average calculations that are independent of traditional time intelligence in that they don’t change the date filter context; they just apply average calculations within it.

In this example, a daily average calculation is defined. It is common in oil-and-gas applications to use calculations such as “barrels of oil per day”. Other common business examples include “store sales average” in the retail industry.

Whilst such calculations are calculated independently of time-intelligence calculations, there may well be a requirement to combine them. For example, the end-user might want to see “YTD barrels of oil per day” to view the daily-oil rate from the beginning of the year to the current date. In this scenario, precedence should be set for calculation items.

Table: Averages
Column: Average Calculation
Precedence: 10

Calculation Item: "No Average"
SELECTEDMEASURE()

Calculation Item: "Daily Average"
DIVIDE(SELECTEDMEASURE(), COUNTROWS(DimDate))

Here is a DAX query and output.

EVALUATE
    CALCULATETABLE (
        SUMMARIZECOLUMNS (
        DimDate[CalendarYear],
        DimDate[EnglishMonthName],
        "InternetTotalSales", CALCULATE (
            [InternetTotalSales],
            'Time Intelligence'[Time Calculation] = "Current",
            'Averages'[Average Calculation] = "No Average"
        ),
        "YTD", CALCULATE (
            [InternetTotalSales],
            'Time Intelligence'[Time Calculation] = "YTD",
            'Averages'[Average Calculation] = "No Average"
        ),
        "Daily Average", CALCULATE (
            [InternetTotalSales],
            'Time Intelligence'[Time Calculation] = "Current",
            'Averages'[Average Calculation] = "Daily Average"
        ),
        "YTD Daily Average", CALCULATE (
            [InternetTotalSales],
            'Time Intelligence'[Time Calculation] = "YTD",
            'Averages'[Average Calculation] = "Daily Average"
        )
    ),
    DimDate[CalendarYear] = 2012
)

(Image: YTD Daily Average query output)

The following table shows how the March 2012 values are calculated.

YTD: sum of InternetTotalSales for Jan, Feb, Mar 2012
= 495,364 + 506,994 + 373,483

Daily Average: InternetTotalSales for Mar 2012 divided by the number of days in March
= 373,483 / 31

YTD Daily Average: YTD for Mar 2012 divided by the number of days in Jan, Feb, and Mar
= 1,375,841 / (31 + 29 + 31)

For easy reference, here is the definition of the YTD calculation item. It is applied with Precedence of 20.

CALCULATE(SELECTEDMEASURE(), DATESYTD(DimDate[Date]))

Here is Daily Average. It is applied with Precedence of 10.

DIVIDE(SELECTEDMEASURE(), COUNTROWS(DimDate))

Since the precedence of the Time Intelligence calculation group is higher than the Averages one, it is applied as broadly as possible. The YTD Daily Average calculation applies YTD to both the numerator and the denominator (count of days) of the daily average calculation.

This is equivalent to this calculation:

CALCULATE(DIVIDE(SELECTEDMEASURE(), COUNTROWS(DimDate)), DATESYTD(DimDate[Date]))

Not this one:

DIVIDE(CALCULATE(SELECTEDMEASURE(), DATESYTD(DimDate[Date])), COUNTROWS(DimDate))

New DAX functions

The following new DAX functions have been introduced to work with calculation groups.

  • SELECTEDMEASURE(): returns a reference to the measure currently in context.
  • SELECTEDMEASURENAME(): returns a string containing the name of the measure currently in context.
  • ISSELECTEDMEASURE( M1, M2, … ): returns a Boolean indicating whether the measure currently in context is one of those specified as an argument.

SELECTEDMEASURENAME() or ISSELECTEDMEASURE() can be used to conditionally apply calculation items depending on the measure in context. For example, it probably doesn’t make sense to calculate the daily average of a ratio measure.

With ISSELECTEDMEASURE():

IF (
    ISSELECTEDMEASURE ( [Expense Ratio 1], [Expense Ratio 2] ),
    SELECTEDMEASURE (),
    DIVIDE ( SELECTEDMEASURE (), COUNTROWS ( DimDate ) )
)

ISSELECTEDMEASURE() has the advantage of working with formula fix up, so measure-name changes are reflected automatically.

Power BI implicit measures

Calculation groups work with query scope measures, but not inline DAX calculations. This is shown by the following query.

DEFINE
MEASURE FactInternetSales[QueryScope] = SUM ( FactInternetSales[SalesAmount] )
EVALUATE
CALCULATETABLE (
    SUMMARIZECOLUMNS (
        DimDate[CalendarYear],
        DimDate[EnglishMonthName],

        //YTD applied successfully to model measure:
        "Model Measure", CALCULATE (
            [InternetTotalSales],
            'Time Intelligence'[Time Calculation] = "YTD"
        ),

        //YTD applied successfully to query scope measure:
        "Query Scope", CALCULATE (
            [QueryScope],
            'Time Intelligence'[Time Calculation] = "YTD"
        ),

        //YTD not applied to inline calculation:
        "Inline", CALCULATE (
            SUM ( FactInternetSales[SalesAmount] ),
            'Time Intelligence'[Time Calculation] = "YTD"
        )
    ),
    DimDate[CalendarYear] = 2012
)

Power BI implicit measures are created when the end user drags columns onto visuals to view aggregated values without creating an explicit measure. At time of writing, Power BI generates DAX for implicit measures written as inline DAX calculations. This means implicit measures don’t work with calculation groups. To reserve the right to introduce this at a later date, a new model property visible in TOM has been introduced called DiscourageImplicitMeasures. In the current version, it must be set to true to create calculation groups. When set to true, Power BI Desktop in Live Connect mode disables creation of implicit measures.
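Since model.bim is plain JSON metadata (like the Ordinal example shown earlier on this page), one low-tech way to set this property is to edit the file directly. The sketch below is an illustration under assumptions: it assumes the usual model.bim layout with a top-level "model" object; TOM or Tabular Editor can set the same property through their own interfaces.

# Hypothetical helper: set DiscourageImplicitMeasures directly in a model.bim
# file (the JSON form of the tabular model). Assumes the usual layout with a
# top-level "model" object.
import json

with open("model.bim", encoding="utf-8") as f:
    bim = json.load(f)

bim["model"]["discourageImplicitMeasures"] = True  # required to create calculation groups

with open("model.bim", "w", encoding="utf-8") as f:
    json.dump(bim, f, indent=2)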

DMV support

The following Dynamic Management Views (DMV) have been introduced for calculation groups.

  • TMSCHEMA_CALCULATION_GROUPS
  • TMSCHEMA_CALCULATION_ITEMS

OLS

Object-level security (OLS) defined on calculation group tables is not supported in the current release. OLS can be defined on other tables in the same model. If a calculation item refers to an OLS-secured object, it will return a generic error on evaluation. This is the planned behavior for SSAS 2019.

Planned for a forthcoming CTP

We plan to introduce the following items in a forthcoming SQL Server 2019 CTP.

  • MDX query support with calculation groups.
  • RLS support. RLS is not supported in CTP 2.3. The planned behavior for SSAS 2019 is that you will be able to define RLS on tables in the same model, but not on calculation groups themselves (directly or indirectly).
  • Dynamic format strings. Calculation groups increase the need for dynamic format strings. For example, the YOY% calculation item needs to be displayed as a percentage, while the others should probably inherit the data type of the measure currently in context. We plan to introduce dynamic format strings in an upcoming SQL Server 2019 CTP.
  • ALLSELECTED DAX function support with calculation groups.
  • Detail rows support with calculation groups.

Limitations of CTP 2.3

CTP 2.3 of SSAS is still an early build of SSAS 2019. It is being released for testing and feedback purposes only and should not be used by customers in production environments. This applies to models with or without calculation groups.

New 1470 Compatibility Level

To use the new features, existing models must be upgraded to the 1470 compatibility level. 1470 models cannot be deployed to SQL Server 2017 or earlier or downgraded to lower compatibility levels.

Differences between calculation groups in tabular and calculated members in multidimensional

Calculated members in multidimensional are a little more flexible and enable a few scenarios beyond calculation groups, but they come at the cost of added complexity. We feel calculation groups in tabular provide a great deal of the benefits, with significantly less complexity.

Single calculation-item column

Calculation groups can only have a single calculation-item column, whereas multidimensional allows multiple hierarchies with calculated members in a single utility dimension.

A DAX filter on a column value implicitly filters the other columns in the same table to the values of that row. Without introducing new semantics and complexity, multiple calculation-item columns in a single table would filter each other implicitly, so are disallowed. If you have a requirement to apply multiple calculation items at a time, use separate calculation groups and the Precedence property shown above.

Recursion safeguards not required

MDX supports recursion although there are known performance limitations. Quite often the same query results can be achieved using MDX set-based calculations instead of recursion.

The right-hand side of MDX-script cell assignments to calculated members created by the Business Intelligence Wizard for multidimensional include a reference to the real member from the attribute hierarchy. This is required to safeguard against recursion.

Since DAX doesn’t support recursion, we don’t need to worry about this for calculation groups. The complexity bar is kept lower. If we ever decide to support recursive DAX in the future, we could perhaps introduce an advanced property to indicate that a DAX object is enabled for recursion, and only then require such safeguards to be in place.

Calculation items cannot be created on other column types

Multidimensional allows creation of calculated members on attribute hierarchies that are not part of utility dimensions. For example, a Northwest Region member can be added to the State hierarchy to aggregate Washington, Oregon and Idaho. This is useful for custom-grouping scenarios but can increase the likelihood of solve-order issues.

Calculation items cannot be added to other column types. This keeps semantic definitions simpler. As we enhance calculation groups in the future – for example, if we introduce query-scoped calculation groups – we will take care to learn from the solve-order lessons of the past and strive for consistent behaviors.

Tooling

Calculation groups and many-to-many relationships are currently engine-only features. SSDT support will come before SQL Server 2019 general availability. In the meantime, you can use the fantastic open-source community tool Tabular Editor to author calculation groups. Alternatively, you can use SSAS programming and scripting interfaces such as TOM and TMSL.

(Image: authoring a calculation group in Tabular Editor)

Pace of delivery

We think you will agree the AS engine team has been on a tear lately. This is the same team that recently delivered, or is currently working on, the following breakthrough features for Power BI.

  • Arguably the biggest scalability feature in the history of the AS engine: aggregations
  • Policy-based incremental refresh
  • Opening the XMLA endpoint to bring AS to Power BI

Calculation groups are yet another monumental feature delivered in a relatively short period of time. This demonstrates Microsoft’s continued commitment to enterprise BI customers.

Download Now

To try SQL Server 2019 CTP 2.3, find download instructions on the SQL Server 2019 web page. Enjoy!


Analysis Services Team Blog
