47 ERP/CRM Blog Members Exhibiting at User Group Summit 2017 Events in Nashville

There will be so many people to meet at the User Group Summit events in Nashville!

One of the best parts about attending a User Group Summit event is exploring the Expo Hall and meeting the exhibitors. This year the GPUG, D365/AXUG, NAVUG and D365/CRMUG events are all taking place together in Nashville from October 10-13, 2017.

There will be literally hundreds of exhibitors, but we hope you pay attention to these 47 companies, which are active members of the CRM Software Blog and ERP Software Blog. Their blog posts help educate you all year, and you can meet them in person in October.

CRMUG:

Socius (NAVUG) (AXUG) (GPUG) (CRMUG)

Broadpoint Technologies (GPUG, CRMUG)

ClickDimensions (CRMUG)

JourneyTEAM (CRMUG)

HITACHI Asia Pacific (AXUG) (CRMUG)

LedgeView Partners (CRMUG)

Logan Consulting (GPUG) (AXUG) (CRMUG)

GPUG/NAVUG/AXUG:

Ariett (GPUG)

AvidXchange (GPUG)

Binary Stream (GPUG)

Clients First Business Solutions LLC (AXUG, NAVUG)

Columbus Global (AXUG) (NAVUG)

Concerto Cloud Services (GPUG)

Crowe Horwath (CRMUG)

Data Masons Software (GPUG) (AXUG) (NAVUG)

Data Resolution (GPUG)

deFacto Global (AXUG) (GPUG)

enVista (AXUG)

Fastpath (GPUG) (AXUG) (NAVUG)

i95Dev (GPUG)

Implementation Specialists (GPUG)

Integrity Data (GPUG)

Interdyn BMI (NAVUG) (AXUG) (GPUG)

Journyx (GPUG)

k-eCommerce (GPUG) (NAVUG) (AXUG)

KTL Solutions (GPUG)

Metafile (GPUG) (AXUG)

MineralTree (GPUG)

Njevity, Inc.  (PowerGP Online) (GPUG)

Panatrack (GPUG)

PaperSave (GPUG)

Rockton Software (GPUG)

RoseASP  (GPUG)

RSM  (AXUG) (CRMUG)

Sana Commerce (GPUG) (NAVUG) (AXUG)

SBS Group (GPUG) (AXUG)

Sierra Workforce Solutions (GPUG)

Solver, Inc (GPUG) (AXUG) (NAVUG)

SPS Commerce (NAVUG) (AXUG) (GPUG)

Stoneridge Software (NAVUG) (AXUG)

Sunrise Technologies (AXUG)

T3 Information Systems (Full Circle Budget) (NAVUG)

Tridea Partners (GPUG) (AXUG)

V-Technologies, LLC (StarShip) (GPUG)

WatServ (GPUG) (AXUG) (NAVUG)

Western Computer (GPUG) (NAVUG) (AXUG)

WithoutWire (GPUG) (NAVUG) (AXUG)

You can see the full list of exhibitors for each event here.

We hope to see you in Nashville in the Expo Hall.

By CRM Software Blog Writer, www.crmsoftwareblog.com

CRM Software Blog | Dynamics 365

Martin Lawrence Is Hip Hop | Hip Hop Honors: The 90’s Game Changers

Martin Lawrence was honored at the VH1 Hip Hop Honors: The Game Changers. Check out the above video of Martin accepting the Award.

Below, Tiny, Salt-N-Pepa, Damon Dash, Remy Ma and more discuss how Martin Lawrence brought hip hop to the masses. Watch the video below.

The Humor Mill

How to Automatically Send a Webform Submission to a Nurture Campaign

Do you want to know how to begin nurturing leads as soon as they submit a webform? PowerWebForm and PowerNurture can help!

First, import both PowerWebForm and PowerNurture using the PowerPack Import Guide and register for your 30-day trial.

Then, create your PowerWebForm and your PowerNurture campaign. Be sure to set up your PowerWebForm so that, upon submission, it creates a lead record with an easily identifiable Lead Source such as “PowerWebForm Submission.”

Next, go to Settings >> Processes.

Click New Process.

Create a New Workflow on the Lead Entity to be run across the organization when the record is created.

Start with a Check Condition that states: if Lead Source equals [PowerWebForm Submission], then create a record, and select Nurture Automation. The nurture automation will send the lead to the PowerNurture campaign automatically every time a lead is created with the designated lead source. Be sure to select the correct PowerNurture campaign and the correct step. A helpful tip: when you are creating your nurture campaign, use descriptive titles such as “Welcome Email” so you can tell which step a brand-new lead should be sent to.

Finally, don’t forget to activate your workflow.
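If you want to verify the end-to-end flow without waiting for a real webform submission, one option is to create a test lead through the Dynamics 365 SDK with the same lead source value and confirm that the workflow fires and the lead lands in your PowerNurture campaign. The sketch below is illustrative only: the new_leadsource text field is a hypothetical attribute standing in for wherever your PowerWebForm maps the lead source, so substitute the field your form actually populates.

// Hedged sketch: create a test lead that should trigger the workflow above.
// Requires the Microsoft.Xrm.Sdk assemblies and an IOrganizationService connection.
using System;
using Microsoft.Xrm.Sdk;

public static class NurtureWorkflowSmokeTest
{
    public static Guid CreateTestLead(IOrganizationService service)
    {
        var lead = new Entity("lead");
        lead["subject"] = "Workflow smoke test";
        lead["lastname"] = "Test";
        lead["companyname"] = "Contoso";

        // Hypothetical text field holding the lead source written by PowerWebForm;
        // replace with the attribute your form actually populates.
        lead["new_leadsource"] = "PowerWebForm Submission";

        // Creating the record should trigger the organization-scoped workflow,
        // which in turn adds the lead to the PowerNurture campaign.
        return service.Create(lead);
    }
}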

And that’s it! You’ve successfully linked your PowerWebForm to your PowerNurture Campaign.

Keep in mind that if you want to get the most out of PowerNurture, you’ll want to check out all our PowerPacks and implement the ones that suit your organization’s needs.

Happy PowerNurturing and Happy Dynamics 365’ing!

PowerObjects- Bringing Focus to Dynamics CRM

Power BI Developer community September update

This blog post covers the latest updates for the Power BI Developer community. Don’t forget to check out the August blog post, if you haven’t done so already.

Here’s the complete list of September updates for the Power BI Embedded APIs:

  • Clone tile & dashboard
  • RLS for AS on-prem
  • RLS additional properties to dataset
  • Export/Clone PBIX (model and reports)
  • Language configuration

Clone Dashboard and Dashboard Tiles

For ISVs, we recommend supporting multiple workspaces for your application’s embedded analytics topology. You can accomplish this by creating a main workspace that contains the ‘golden’ reports and dashboards for your application. When onboarding a new customer to your application, you then create a new workspace dedicated to that customer and use the clone APIs to copy the content from the main workspace to the customer’s workspace. To do that, the ISV needs automation capabilities for cloning Power BI reporting artifacts. We previously released support for the ‘Clone report’ operation; now we are adding support for cloning dashboards and dashboard tiles.

Dashboard cloning involves two steps:

1. Create a new Dashboard – this will be the target dashboard.

2. Clone the Dashboard Tiles from the original dashboard to the target dashboard.

Dashboard tiles come in multiple flavors: some are bound to reports, some only to datasets (a streaming data tile, for example), and some are not bound at all (an image, video, or web-content tile). It’s important to note that when cloning a dashboard tile between dashboards in the same workspace, the tile is bound by default to its source report or dataset unless a new target source is defined. However, when cloning dashboard tiles between workspaces, you must first make sure the target workspace already contains the target objects to bind to (report or dataset).

Using this method for cloning dashboards gives full control and granularity for both full dashboard cloning and specific dashboard tile cloning.

For more information about the APIs, see Add dashboard and Clone tile.
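To make the two-step flow concrete, here is a hedged sketch of the REST calls in the same raw HttpWebRequest style as the export sample later in this post. The workspace, dashboard, and tile identifiers are placeholders, and the request-body property names (name, targetDashboardId, targetWorkspaceId) follow the public Add Dashboard and Clone Tile reference; double-check them against the documentation before relying on this.

// Hedged sketch: create a target dashboard, then clone a tile into it.
// Placeholders in curly braces must be replaced with real IDs before running.
using System;
using System.IO;
using System.Net;
using System.Text;

public static class DashboardCloneSample
{
    static string PostJson(string uri, string accessToken, string jsonBody)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/json";
        request.Headers.Add("Authorization", "Bearer " + accessToken);

        byte[] payload = Encoding.UTF8.GetBytes(jsonBody);
        using (Stream body = request.GetRequestStream())
        {
            body.Write(payload, 0, payload.Length);
        }

        using (WebResponse response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The response JSON contains the id of the newly created object.
            return reader.ReadToEnd();
        }
    }

    public static void CloneIntoCustomerWorkspace(string token)
    {
        // Step 1: create the target dashboard in the customer's workspace.
        string newDashboard = PostJson(
            "https://api.powerbi.com/v1.0/myorg/groups/{customerWorkspaceId}/dashboards",
            token,
            "{ \"name\": \"Sales Dashboard\" }");
        Console.WriteLine(newDashboard);

        // Step 2: clone a tile from the 'golden' dashboard into the new dashboard.
        string clonedTile = PostJson(
            "https://api.powerbi.com/v1.0/myorg/dashboards/{sourceDashboardId}/tiles/{tileId}/Clone",
            token,
            "{ \"targetDashboardId\": \"{newDashboardId}\", \"targetWorkspaceId\": \"{customerWorkspaceId}\" }");
        Console.WriteLine(clonedTile);
    }
}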

RLS improvements

In August, we released support for RLS. Now we are releasing additional improvements to extend RLS capabilities and data source support.

Support for Analysis Services live connections

RLS can now be used with an AS on-prem data source. The implementation and usage are mostly the same as for cloud-based RLS, with one important note: the ‘master user’ used to authenticate your application and call the APIs must also be an admin of the On-Premises Data Gateway being used for the Analysis Services data source. The reason is that setting the effective identity is allowed only for users who can see all of the data; for AS on-prem, that means the gateway admin. For more information, see Working with Analysis Services live connections.

Additional properties to dataset

As you can see in the RLS documentation, the GenerateToken API should receive additional context: username, roles, and datasets. Each of these parameters needs to be populated differently depending on the scenario and the data source type. To remove some of the uncertainty and automate the use of RLS, we added the following properties to the JSON object of the dataset (a short sketch of how an application can consume them follows the list):

  • isEffectiveIdentityRequired – If the dataset requires an effective identity, this property value will be ‘true’, indicating that you must send an effective identity in the GenerateToken API.
  • isEffectiveIdentityRolesRequired – When RLS is defined inside the PBIX file, this property value will be ‘true’, indicating that you must specify a role.
  • isOnPremGatewayRequired – When the property value is ‘true’, it indicates that you must use a gateway for this on-prem data source.
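As a hedged illustration of how an application might consume these flags, the sketch below builds a GenerateToken request body conditionally. The DatasetInfo class and the body shape (accessLevel, identities, username, roles, datasets) mirror the public REST documentation, but treat them as assumptions and verify against the GenerateToken reference.

// Hedged sketch: turn the dataset flags into the right GenerateToken request body.
public class DatasetInfo
{
    public string Id { get; set; }
    public bool IsEffectiveIdentityRequired { get; set; }
    public bool IsEffectiveIdentityRolesRequired { get; set; }
    public bool IsOnPremGatewayRequired { get; set; }
}

public static class EmbedTokenHelper
{
    public static string BuildGenerateTokenBody(DatasetInfo dataset, string username, string role)
    {
        if (!dataset.IsEffectiveIdentityRequired)
        {
            // No effective identity needed: a plain view token is sufficient.
            return "{ \"accessLevel\": \"View\" }";
        }

        // Include roles only when RLS roles are defined inside the PBIX file.
        string roles = dataset.IsEffectiveIdentityRolesRequired
            ? ", \"roles\": [ \"" + role + "\" ]"
            : string.Empty;

        return "{ \"accessLevel\": \"View\", \"identities\": [ { \"username\": \"" + username + "\"" +
               roles + ", \"datasets\": [ \"" + dataset.Id + "\" ] } ] }";
    }
}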

Export/Clone PBIX (model and reports)

Use the Export PBIX API to retrieve the PBIX file by its report identifier. The response contains a PBIX file object. Once retrieved, you can do one of two things with it: save the PBIX file locally for offline exploration using Power BI Desktop, or use the saved PBIX file and leverage the Import PBIX operation to clone the reports and their respective datasets. Here is a code sample showing how to retrieve the PBIX and save it; try it out!

// Requires: using System; using System.IO; using System.Net;
var exportRequestUri = String.Format("https://api.powerbi.com/v1.0/myorg/reports/{0}/Export", "Enter the report ID");

// Create HTTP transport objects
HttpWebRequest request = System.Net.WebRequest.Create(exportRequestUri) as System.Net.HttpWebRequest;
request.Method = "GET";
request.Headers.Add("Authorization", String.Format("Bearer {0}", "Enter your Access token"));

// Get HttpWebResponse from GET request
WebResponse response = request.GetResponse();

using (Stream exportResponse = response.GetResponseStream())
{
    // Save the stream to disk
    CopyStream(exportResponse, "Enter your destination path");
}

public void CopyStream(Stream stream, string destPath)
{
    using (var fileStream = new FileStream(destPath, FileMode.Create, FileAccess.Write))
    {
        stream.CopyTo(fileStream);
    }
}

In our next SDK update we will add support for this API call. For more information, see Export report.

Language configuration

You can define the language and text formatting of your embedded content. Changing this setting will mostly impact the number and date formatting, or the Bing maps view in your embedded content. See the full list of supported languages.

The settings can be configured through the ‘embed configuration’. Read more about the embed configurations (Search for ‘Locale Settings’).

What’s still puzzling

Q: I want to test my content through the sample tool, but how do I get the Embed Token to use it?

We get a lot of questions around using our Sample tool. It’s a great tool to explore our JS API, understand how you can embed content easily and leverage user interactions to enrich your native app experience. In this great video by Adam Saxton (Guy in a Cube), you can learn how to get the Embed Token and other properties to use the sample tool with your own content.

Funnel plot

On occasion, we find patterns in statistical noise that lead us to incorrect conclusions about the underlying data.

This month we are very excited to announce a new R-powered visual type: the funnel plot!

The funnel plot helps you compare samples, and find true outliers among the measurements with varying precision. It’s widely used for comparing institutional performance and medical data analysis.

The funnel plot is easy to consume and interpret. The “funnel” is formed by confidence limits and shows the amount of expected variation. The dots outside the funnel are outliers.

You can check the visual out in the Office store.

Tutorial on R-powered visualization in Power BI

R-based visualizations in Power BI have many faces. We support R-visuals and R-powered Custom Visuals. The latest can be one of two types: PNG-based and HTML-based.

What are the pros and cons of each type? How do you convert one type to another? How do you create a custom visual from scratch, or change an existing custom visual to suit your needs? How do you debug an R-powered custom visual?

All these and many other questions are answered in our comprehensive step-by-step tutorial on R-powered visualization. You are invited to follow the steps from a simple R script to a high-quality HTML-based custom visual in the store; the source code for every step is included, and the changes from step to step are documented and explained in detail. The tutorial also contains bonus examples, useful links, and Tips and Tricks sections.

That’s all for this post. We hope you found it useful. Please continue sending us your feedback; it’s very important to us. Have an amazing feature in mind? Please share it and vote in our Power BI Embedded Ideas forum or our Custom Visuals Ideas forum.

Microsoft Power BI Blog | Microsoft Power BI

Next Week’s Strata Data Conference: What’s in a Name?

Next week, thousands of Big Data practitioners, experts and influencers will gather at New York’s Javits Center to attend the newly-branded Strata Data Conference. According to event organizers, the conference, which debuted in 2012 as Strata + Hadoop World, has been rebranded to more accurately reflect the scope of the conference beyond Hadoop.

The simplicity of the name belies the increasingly diverse and complex ecosystem of Big Data tools and technology that will be covered during three days packed with tutorials, keynotes and track sessions. It can be quite overwhelming – but here’s a sampling of what’s going on to help you plan your week.

Strata Data Conference Keynotes

Keynotes are always a great way to get energized for the day ahead, and this year looks to be no different, with titles including “Wild, Wild Data,” “Weapons of Math Destruction,” the cautionary “Your Data is Being Manipulated,” and the upbeat “Music, the window into your soul.”

We’re also looking forward to the presentation by Cloudera co-founder Mike Olson and Cesar Delgado, the current Siri platform architect at Apple.

Expanded Session Topics Connect Technology to the Business

While the main driver for dropping “Hadoop” from the conference title was to be more inclusive of the breadth of technology discussed, the new name appears to coincide with an expansion of topics that connect the technology to the business. In addition to Findata Day – a separate event curated for finance executives held on Tuesday – there is a “Strata Business Summit” track within the main conference, tailored for executives, business leaders and strategists.

Looking for more sessions that marry technology and business? You can filter on topics for Business Case Studies, Data-driven Business Management, Enterprise Adoption, and Law, Ethics & Governance.

Speaking of Governance … if you want to make sure the Big Data in your organization is actually trusted by the people who need and use it, be sure to attend “A Governance Checklist for Making Your Big Data into Trusted Data,” presented by our VP of product management, Keith Kohl, on Thursday at 2:05 pm.

Strata Data Conference Events You Won’t Want to Miss

Last, but not least, what’s a great conference without some great events? Here are a few favorites:

  • Ignite: Presenters get 5 minutes to present on an interesting topic – from technology to philosophy – that touches on the wonder and mysteries of Big Data and pervasive computing. Always a favorite, this event is free and open to the public, so stop by even if you don’t have a conference pass!
  • Strata Data Gives Back: Join Cloudera Cares and O’Reilly Media in assembling care kits for New York’s homeless and at-risk youth, in partnership with the Covenant House NY. Visit the Cloudera stand in the Expo Hall to get involved.
  • Booth Crawl and Data After Dark: Unwind after a day of sessions with fellow attendees, speakers, and authors while you enjoy a vendor-hosted cocktail hour in the Expo Hall. Be sure to stop by Syncsort Booth #715, where you can enjoy a Mexican Fiesta and get our latest t-shirt! Ask our data experts how you can unlock valuable – and trusted – insights from your mainframe and other legacy platforms using our innovative Big Data Integration and Data Quality solutions! Then head to 230 Fifth, New York’s largest outdoor rooftop garden, for Data After Dark: City View.

Haven’t registered for Strata Data Conference yet? Get a 20% discount on us!

Syncsort + Trillium Software Blog

Spotfire Tips & Tricks: Hierarchical Cluster Analysis

Hierarchical cluster analysis, or HCA, is a widely used method of data analysis that seeks to identify clusters, often without prior information about data structure or the number of clusters. Strategies for hierarchical clustering generally fall into two types: agglomerative and divisive. Agglomerative is a bottom-up approach in which each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. Divisive is a top-down approach in which all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

Hierarchical cluster analysis in Spotfire

The algorithm used for hierarchical clustering in TIBCO Spotfire is a hierarchical agglomerative method. For row clustering, the cluster analysis begins with each row placed in a separate cluster. Then the distance between all possible combinations of two rows is calculated using a selected distance measure. The two most similar clusters are then grouped together and form a new cluster. In subsequent steps, the distance between the new cluster and all remaining clusters is recalculated using a selected clustering method. The number of clusters is thereby reduced by one in each iteration step. Eventually, all rows are grouped into one large cluster. The order of the rows in a dendrogram is defined by the selected ordering weight. The cluster analysis works the same way for column clustering.
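To make the iteration concrete, here is a minimal sketch of an agglomerative pass (not Spotfire’s implementation) using Euclidean distance and single linkage on a few made-up rows; Spotfire lets you swap in any of the distance measures and clustering methods listed below.

// Hedged sketch: minimal agglomerative clustering with Euclidean distance and single linkage.
using System;
using System.Collections.Generic;
using System.Linq;

public static class AgglomerativeDemo
{
    static double Euclidean(double[] a, double[] b) =>
        Math.Sqrt(a.Zip(b, (x, y) => (x - y) * (x - y)).Sum());

    // Single linkage: distance between clusters = smallest pairwise row distance.
    static double ClusterDistance(List<double[]> c1, List<double[]> c2) =>
        c1.SelectMany(r1 => c2.Select(r2 => Euclidean(r1, r2))).Min();

    public static void Main()
    {
        // Each row starts in its own cluster.
        var clusters = new List<List<double[]>>
        {
            new List<double[]> { new[] { 5.1, 3.5 } },
            new List<double[]> { new[] { 4.9, 3.0 } },
            new List<double[]> { new[] { 6.7, 3.1 } },
            new List<double[]> { new[] { 6.3, 2.9 } },
        };

        // Merge the two closest clusters until one large cluster remains.
        while (clusters.Count > 1)
        {
            int bestI = 0, bestJ = 1;
            double bestDist = double.MaxValue;
            for (int i = 0; i < clusters.Count; i++)
                for (int j = i + 1; j < clusters.Count; j++)
                {
                    double d = ClusterDistance(clusters[i], clusters[j]);
                    if (d < bestDist) { bestDist = d; bestI = i; bestJ = j; }
                }

            Console.WriteLine($"Merging clusters {bestI} and {bestJ} at distance {bestDist:F3}");
            clusters[bestI].AddRange(clusters[bestJ]);
            clusters.RemoveAt(bestJ);
        }
    }
}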

Distance measures: The following measures can be used to calculate the distance or similarity between rows or columns:

  • Correlation
  • Cosine correlation
  • Tanimoto coefficient
  • Euclidean distance
  • City block distance
  • Square Euclidean distance
  • Half square Euclidean distance

Clustering methods: The following clustering methods are available in Spotfire:

  • UPGMA
  • WPGMA
  • Single linkage
  • Complete linkage
  • Ward’s method

Spotfire also provides options to normalize data and perform empty value replacement before performing clustering.

To demonstrate clustering with the hierarchical clustering tool, the Iris data set was used.

Select Tools > Hierarchical Clustering

Select the Data Table, and then click Select Columns.

The Sepal Length, Sepal Width, Petal Length, and Petal Width columns were selected.

Next, to get row dendrograms, the Cluster Rows check box was selected.

Click the Settings button to open the Edit Clustering Settings dialog and select a clustering method and distance measure. In this case, the default options were selected.

The hierarchical clustering calculation is performed, and a heat map visualization with the specified dendrograms is created in just a few clicks. A cluster column is also added to the data table and made available in the filters panel. The bar chart uses the cluster ID column to display species. The pruning line was set to 3 clusters. Setosa was identified correctly as a single cluster, but some Virginica and Versicolor rows did not fall into the right cluster, which is a known overlap between those two species in the Iris data set.

Try this for yourself with TIBCO Spotfire! Test out a Spotfire free trial today. Check out other Tips and Tricks posts to help you #DoMoreWithSpotfire!

The TIBCO Blog

Six Steps to a Successful Lead-Management Strategy

3. Create buyer personas for leads

Demographics tell only part of the story. Look beyond the flat statistics and build a better understanding of your prospects – and your own value proposition – by designing buyer personas and mapping existing leads to them. The persona is a detailed sketch of the characteristics, triggers, motivations, desires, needs, and preferences of a customer.

An effective persona is one which has the customer’s goals at its core. Rather than focusing on data and probabilities, the individual represented by a persona should have clearly defined needs, wants, and aims.

Identifying how your products or services satisfy a persona’s objectives and soothe their pain points can tell you far more about how to communicate with your leads than even the most detailed market research.

Personas are not static. Always keep in mind that these representative customers will be investigating other ways to solve their problems. This will help you understand how to spread out (or compress) your communications with a lead, and it will also assist you in refining the persona as the marketplace changes around you.

In the B2B and considered-purchase world, develop personas for champions, buyers, decision-makers, and users. Each has their own requirements and agendas.

A lead may not fit neatly into a single persona. That’s okay. Ranking the persona fit by score will give you more flexibility and a wider range of tools to attract and retain their interest.

Act-On Blog

PH Businesses Increasingly Adopting Cloud-based ERP to Grow Globally, Innovate

Posted by Jan Pabellon, Director of Product Management, Oracle NetSuite

Theo and Philo’s unique bean-to-bar, single-origin Philippine artisanal chocolate hit a sweet spot in the market shortly after its launch in 2010, with demand growing at a rate of 700 percent to reach 14,000 bars a month. But the company knew that trying to scale to meet that demand on its existing systems would threaten to sour customer relationships. Its inventory management and accounting, previously running on spreadsheets and a standalone QuickBooks system respectively, didn’t give the business the robust functionality it needed to efficiently manage operations, nor the scale to accommodate growth.

In 2015, it implemented a cloud-based platform, NetSuite OneWorld, for end-to-end business management – including inventory, orders, financials and purchasing – empowering operational efficiencies, streamlining inventory management and lending the business multi-currency and tax functionality that enabled it to expand sales globally into Germany and Japan.

“NetSuite is an integral part of our day-to-day operations and a scalable platform for our continued growth,” Theo and Philo Founder Philo Chua said. “We’re a lot more efficient and, as we grow, the automated process flows and checks and balances that we need are already in place within NetSuite.”

Manila-based Theo and Philo is one of a growing number of Philippine businesses leveraging the power of cloud-based ERP to innovate and grow. The Philippines is also quickly becoming a leader in the Asia Pacific region in terms of cloud adoption, overtaking Thailand and climbing a spot to land in 9th place in the Asia Cloud Computing Association’s “Cloud Readiness Index.” The country scored highly in terms of freedom of information and protection of privacy, which are both considered critical factors in cloud adoption.

Cloud computing is at the center of a confluence of trends that include mobility, social, and analytics and big data, and is the next logical step in how applications should be developed and delivered, making full use of the transformative power of the Internet. That’s why companies are increasingly leaving their aged, legacy systems behind in favor of cloud-based alternatives. They know that systems designed before the advent of the Internet and mobile won’t be able to keep up with the realities of doing business today, requiring constant change, adaptation, agility and innovation.

As such, cloud computing is extending across industries – including manufacturing, wholesale distribution, retail and services – and helping to blur the lines between them as business models transform and adapt to the changes brought about by digital technologies. For example, we are seeing manufacturing companies starting to distribute products themselves in a bid to improve margins by cutting out middlemen, and distributors augmenting revenues by selling direct to consumers through online channels. We are also seeing service companies reconfiguring their processes to offer more “productized”, repeatable, turnkey versions of their services, and we see the opposite as well: product companies differentiating their offerings via value-added, digital or premium services.

Those businesses that empower innovation with the cloud are uniquely poised to capitalize on this trend – and transform their businesses to meet the demands of today’s markets and consumers.

Take Dowi Hosiery Mills, a nearly 40-year-old manufacturer of socks and other hosiery products, selling brands such as Darlington and Exped through nearly 100 retailers. It implemented NetSuite in September 2014 to replace a multitude of legacy, homegrown, custom software systems, manual processes and multiple databases, which it felt harmed the company’s ability to strengthen its retail relationships and made it difficult to accurately match manufacturing output with demand. Dowi Hosiery Mills now uses NetSuite for financials, order management, invoicing, billing, purchasing, receiving, shipping, RMA management, multi-location inventory, reporting and analytics. With NetSuite cloud ERP, there is no longer a need to endure version lock, painful upgrades and the burden of maintaining on-premise systems. Instead, Dowi now gets automatic product upgrades twice a year and a platform that allows for easy customizations and integrations. Dowi Hosiery saved the eight full-time employees it otherwise would have needed to add just to keep up with the order processing and inventory tracking demands of the old systems, and it accelerated the path to market for new lines of socks with high-tech fibers. The company is also able to deliver much better service to its retail partners.

The Philippines needs more of these types of businesses: world-class, globally competitive disruptors and innovators that can transform their industries, and pioneers that can provide newer and better services for consumers. By innovating in the cloud, Philippine businesses can continue to lead and adapt at the speed demanded by today’s global economy.

Posted on Mon, September 18, 2017
by NetSuite

The NetSuite Blog

Using Legacy Data Sources in Tabular 1400

The modern Get Data experience in Tabular 1400 brings interesting new data discovery and transformation capabilities to Analysis Services. However, not every BI professional is equally excited. Especially those who prefer to build their models exclusively on top of SQL Server databases or data warehouses and appreciate the steadiness of tried and proven T-SQL queries over fast SQL OLE DB provider connections might not see a need for mashups. If you belong to this group of BI professionals, there is good news: Tabular 1400 fully supports provider data sources and native query partitions. The modern Get Data experience is optional.

Upgrading from 1200 to 1400

Perhaps the easiest way to create a Tabular 1400 model with provider data sources and native query partitions is to upgrade an existing 1200 model to the 1400 compatibility level. If you used Windiff or a similar tool to compare the Model.bim file in your Tabular project before and after the upgrade, you would find that not much has changed. In fact, the only change concerns the compatibilityLevel parameter, which the upgrade logic sets to a value of 1400, as the following screenshot reveals.

At the 1400 compatibility level, regardless of the data sources and table partitions, you can use any advanced modeling feature, such as detail rows expressions and object-level security. There are no dependencies on structured data sources or M partitions using the Mashup engine. Legacy provider data sources and native query partitions work just as well. They bypass the Mashup engine. It’s just two different code paths to get the data.

Provider data sources versus structured data sources

Provider data sources get their name from the fact that they define the parameters for a data provider in the form of a connection string that the Analysis Services engine then uses to connect to the data source. They are sometimes referred to as legacy data sources because they are typically used in 1200 and earlier compatibility levels to define the data source details.

Structured data sources, on the other hand, get their name from the fact that they define the connection details in structured JSON property bags. They are sometimes referred to as modern or Power Query/M-based data sources because they correspond to Power Query/M-based data access functions, as explained in more detail in Supporting Advanced Data Access Scenarios in Tabular 1400 Models.

At first glance, provider data sources have an advantage over structured data sources because they provide full control over the connection string. You can specify any advanced parameter that the provider supports. In contrast, structured data sources only support the address parameters and options that their corresponding data access functions support. This is usually sufficient, however. Note that provider data sources also have disadvantages, as explained in the next section.

A small sample application can help to illustrate the metadata differences between provider data sources and structured data sources. Both can be added to a Tabular 1400 model using Tabular Object Model (TOM) or the Tabular Model Scripting Language (TMSL).
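For example, a provider data source can be added through TOM with just a few lines of C#. The sketch below is illustrative only: the server, database, and data source names are placeholders, and the structured counterpart is typically easier to create through SSDT’s modern Get Data experience, which writes the JSON property bag for you.

// Hedged sketch: add a legacy provider data source to a Tabular 1400 model with TOM.
// Requires the Microsoft.AnalysisServices.Tabular (AMO/TOM) client libraries.
using Microsoft.AnalysisServices.Tabular;

public static class DataSourceSample
{
    public static void AddProviderDataSource(Model model)
    {
        // The connection string parameters are fully under your control here.
        model.DataSources.Add(new ProviderDataSource
        {
            Name = "SqlServer localhost AdventureWorksDW",
            Provider = "System.Data.SqlClient",
            ConnectionString = "Data Source=localhost;Initial Catalog=AdventureWorksDW;Integrated Security=SSPI",
            ImpersonationMode = ImpersonationMode.ImpersonateServiceAccount
        });

        model.SaveChanges();
    }
}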

Note that Analysis Services always invokes the Mashup engine when using structured data sources to get the data. It might or might not do so for provider data sources. The choice depends on the table partitions on top of the data source, as the next section explains.

Query partitions versus M partitions

Just as there are multiple types of data source definitions in Tabular 1400, there are also multiple partition source types to import data into a table. Specifically, you can define a partition by using a QueryPartitionSource or an MPartitionSource, as in the following TOM code sample.
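The original post showed the sample as a screenshot; a hedged reconstruction of such a TOM snippet might look like the following, where both partition sources point at the same structured data source (named "SQL/localhost;AdventureWorksDW" here) and the table and column names are illustrative.

// Hedged sketch: one table with a query partition and an M partition side by side.
// dataSource is assumed to be an existing (structured) data source in the model.
using Microsoft.AnalysisServices.Tabular;

public static class PartitionSample
{
    public static void AddPartitions(Table table, DataSource dataSource)
    {
        // Query partition: a native T-SQL statement against the data source.
        table.Partitions.Add(new Partition
        {
            Name = "FactInternetSales - Query",
            Source = new QueryPartitionSource
            {
                DataSource = dataSource,
                Query = "SELECT * FROM [dbo].[FactInternetSales]"
            }
        });

        // M partition: a Power Query expression that references the data source by name.
        table.Partitions.Add(new Partition
        {
            Name = "FactInternetSales - M",
            Source = new MPartitionSource
            {
                Expression =
                    "let\n" +
                    "    Source = #\"SQL/localhost;AdventureWorksDW\",\n" +
                    "    Result = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data]\n" +
                    "in\n" +
                    "    Result"
            }
        });
    }
}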

As illustrated, you can mix query partitions with M partitions even on a single table. The only requirement is that all partition sources must return the same set of source columns, mapped to table columns at the Tabular metadata layer. In the example above, both partitions use the same data source and import the same data, so you end up with duplicate rows. This is normally not what you want, but in this concrete example the duplicated rows help to illustrate that Analysis Services could indeed process both partition sources successfully, as in the following screenshot.

The Model.bim file reveals that the M and query partition sources reference a structured data source, but they could also reference a provider data source, as in the screenshot below the following table, which summarizes the possible combinations. In short, you can mix and match to your heart’s content.

1. Provider data source + query partition source: The AS engine uses the cartridge-based connectivity stack to access the data source.
2. Provider data source + M partition source: The AS engine translates the provider data source into a generic structured data source and then uses the Mashup engine to import the data.
3. Structured data source + query partition source: The AS engine wraps the native query on the partition source into an M expression and then uses the Mashup engine to import the data.
4. Structured data source + M partition source: The AS engine uses the Mashup engine to import the data.

Scenarios 1 and 4 are straightforward. Scenario 3 is practically equivalent to scenario 4. Instead of creating a query partition source with a native query and having the AS engine convert this into an M expression, you could define an M partition source in the first place and use the Value.NativeQuery function to specify the native query, as illustrated below. Of course, this only works for connectors that support native source queries and the Value.NativeQuery function.
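In code form, the equivalent of scenario 3 is an M partition whose expression hands the native query to Value.NativeQuery. A hedged sketch follows; the data source name and the query are placeholders.

// Hedged sketch: an M partition source that pushes a native T-SQL query through Value.NativeQuery.
using Microsoft.AnalysisServices.Tabular;

public static class NativeQuerySample
{
    public static MPartitionSource Create()
    {
        return new MPartitionSource
        {
            Expression =
                "let\n" +
                "    Source = #\"SQL/localhost;AdventureWorksDW\",\n" +
                "    Result = Value.NativeQuery(Source, \"SELECT * FROM [dbo].[FactInternetSales]\")\n" +
                "in\n" +
                "    Result"
        };
    }
}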

Scenario 2, “M partition on top of a provider data source” is more complex than the others because it involves converting the provider data source into a generic structured data source. In other words, a provider data source pointing to a SQL Server database is not equivalent to a structured SQL Server data source because the AS engine does not convert this provider data source into a structured SQL Server data source. Instead, it converts it into a generic structured OLE DB, ODBC, or ADO.NET data source depending on the data provider that the provider data source referenced. For SQL Server connections, this is usually an OLE DB data source.

The fact that provider data sources are converted into generic structured data sources has important implications. For starters, M expressions on top of a generic data source differ from M expressions on top of a specific structured data source. For example, as the next screenshot highlights, an M expression over an OLE DB data source requires additional navigation steps to get to the desired table. You cannot simply take an M expression based on a structured SQL Server data source and put it on top of a generic OLE DB provider data source. If you tried, you would most likely get an error that the expression references an unknown variable or function.

Moreover, the Mashup engine cannot apply its query optimizations for SQL Server when using a generic OLE DB data source, so M expressions on top of generic provider data sources cannot be processed as efficiently as M expressions on top of specific structured data sources. For this reason, it is better to add a new structured data source to the model for any new M expression-based table partitions than to use an existing provider data source. Provider data sources and structured data sources can coexist in the same Tabular model.

In Tabular 1400, the main purpose of a provider data source is backward compatibility with Tabular 1200 so that the processing behavior of your models does not change just because you upgraded to 1400 and so that any ETL logic programmatically generating data sources and table partitions continues to work seamlessly. As mentioned, query partitions on top of a provider data source bypass the Mashup engine. However, the processing performance is not necessarily inferior with a structured data source thanks to a number of engine optimizations. This might seem counterintuitive, but it is a good idea to double-check the processing performance in your environment. The Microsoft SQL Server Native Client OLE DB Provider is indeed performing faster than the Mashup engine. In very large Tabular 1400 models connecting to SQL Server databases, it can therefore be advantageous to use a provider data source and query partitions.

Data sources and partitions in SSDT Tabular

With TMSL and TOM, you can create data sources and table partitions in any combination, but this is not the case in SSDT Tabular. By default, SSDT creates structured data sources, and when you right-click a structured data source in Tabular Model Explorer and select Import New Tables, you launch the modern Get Data UI. Among other things, the default behavior helps to provide a consistent user interface and avoids confusion. You don’t need to weigh the pros and cons of provider versus structured and you don’t need to select a different partition source type and work with a different UI just because you wanted to write a native query. As explained in the previous section, an M expression using Value.NativeQuery is equivalent to a query partition over a structured data source.

Only if a model already contains provider data sources, say due to an upgrade from 1200, does SSDT display the legacy UI for editing these metadata objects. By the same token, when you right-click a provider data source in Tabular Model Explorer and select Import New Tables, you launch the legacy UI for defining a query partition source. If you don’t add any new data sources, the user interface is still consistent with the 1200 experience. Yet if you mix provider and structured data sources in a model, the UI switches back and forth depending on which object type you edit. See the following screenshot with the modern experience on the left and the legacy UI on the right – which one you see depends on the data source type you right-clicked.

Fully enabling the legacy UI

BI professionals who prefer to build their Tabular models exclusively on top of SQL Server data warehouses using native T-SQL queries might look unfavorably at SSDT Tabular’s strong bias towards the modern Get Data experience. But the good news is that you can fully enable the legacy UI to create provider data sources in Tabular 1400 models, so you don’t need to resort to using TMSL or TOM for this purpose.

In the current version of SSDT Tabular, you must configure a DWORD parameter called “Enable Legacy Import” in the Windows registry. Setting this parameter to 1 enables the legacy UI. Setting it to zero or removing the parameter disables it again. To enable the legacy UI, you can copy the following lines into a .reg file and import the file into the registry. Do not forget to restart Visual Studio to apply the changes.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\14.0\Microsoft Analysis Services\Settings]
"Enable Legacy Import"=dword:00000001

With the legacy UI fully enabled, you can right-click on Data Sources in Tabular Model Explorer and choose to Import From Data Source (Legacy) or reuse Existing Connections (Legacy), as in the following screenshot. As you would expect, these options create provider data sources in the model and then you can create query partitions on top of these.

Wrapping things up

While the AS engine, TMSL, and TOM give you full control over data sources and table partitions, SSDT Tabular attempts to simplify things by favoring structured data sources and M partitions wherever possible. The legacy UI only shows up if you already have provider data sources or query partitions in your model. Should legacy data sources and query partitions be first-class citizens in Tabular 1400? Perhaps SSDT should provide an explicit option in the user interface to enable the legacy UI, eliminating the need to configure a registry parameter. Let us know if this is something we should do. Also, there is currently no SSDT support for creating M partitions over provider data sources or query partitions over structured data sources, because these scenarios seem less important and less desirable. Do you need these features?

Send us your feedback via email to SSASPrev at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. Or simply post a comment to this article. Influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers!

Analysis Services Team Blog

Copy Cats

About Krisgo

I’m a mom who has worn many different hats in this life: from scout leader, camp craft teacher, parents group president, colorguard coach, and member of the community band, to stay-at-home mom and full-time worker, I’ve done it all, almost! I still love learning new things, especially creating and cooking. Most of all I love to laugh! Thanks for visiting – come back soon.

Deep Fried Bits