Category Archives: Business Intelligence

Mobileye, Intel’s $15.3 billion computer vision acquisition, partners with Nissan to crowdsource maps for autonomous cars


Mobileye, the Israeli computer vision firm that’s currently in the process of being acquired by chip giant Intel for $15.3 billion, has announced a new tie-up with automotive giant Nissan to generate “anonymized, crowdsourced data” for precision mapping in autonomous cars.

Founded in 1999, Mobileye builds the visual smarts behind cars’ driver assistance systems that include adaptive cruise control and lane departure warnings. Its technology is already used by the likes of BMW, Volvo, Buick, and Cadillac, and last summer Mobileye announced a partnership with BMW and Intel to put self-driving cars into full production by 2021. The trio later committed to putting 40 autonomous test cars on public roads in the second half of 2017, before Intel went all-in and decided to buy Mobileye outright.

For self-driving cars to become a reality, carmakers and the technology companies they work with need access to accurate maps of roads and the environment around which autonomous cars will operate — these high-definition maps complement on-board sensors and add an additional level of safety. Mobileye’s existing Road Experience Management (REM) engine is essentially a mapping and localization toolset that can use any camera-equipped vehicle to capture and process data around geometry and physical landmarks, send it to the cloud, and then feed this back into autonomous car systems using minimal bandwidth. It’s basically a crowdsourced data collection effort using millions of cars already on the roads.

Mobileye already powers Nissan’s recently announced ProPilot, a system similar to Tesla’s Autopilot that can automate some functions on the road, including steering and acceleration. And with Nissan recently kicking off its first self-driving car trials in London, now seems the time for Mobileye to work with Nissan to boost its crowdsourced mapping efforts.

This adds another major automaker to Mobileye’s existing roster of REM partners, which includes the previously announced General Motors and Volkswagen, while the likes of Audi, BMW, and Daimler are on board via their ownership of the HERE mapping platform, which partnered with Mobileye last year.

The more carmakers sign up to integrate Mobileye’s REM, the more data can be combined to scale the system to cover every locale where humans drive.


Big Data – VentureBeat

Digital Transformation And The Successful Management Of Innovation


Achieving quantum leaps through disruption and using data in new contexts, in ways designed for more than just Generation Y — indeed, the digital transformation affects us all. It’s time for a detailed look at its key aspects.

Data finding its way into new settings

Archiving all of a company’s internal information until the end of time is generally a good idea, as it gives the boss the security that nothing will be lost. Meanwhile, enabling him or her to create bar graphs and pie charts based on sales trends – preferably in real time, of course – is even better.

But the best scenario of all is when the boss can incorporate data from external sources. All of a sudden, information on factors as seemingly mundane as the weather starts helping to improve interpretations of fluctuations in sales and to make precise modifications to the company’s offerings. When the gusts of autumn begin to blow, for example, energy providers scale back solar production and crank up their windmills. Here, external data provides a foundation for processes and decisions that were previously unattainable.

Quantum leaps possible through disruption

While these advancements involve changes in existing workflows, there are also much more radical approaches that eschew conventional structures entirely.

“The aggressive use of data is transforming business models, facilitating new products and services, creating new processes, generating greater utility, and ushering in a new culture of management,” states Professor Walter Brenner of the University of St. Gallen in Switzerland, regarding the effects of digitalization.

Harnessing these benefits requires the application of innovative information and communication technology, especially the kind termed “disruptive.” A complete departure from existing structures may not necessarily be the actual goal, but it can occur as a consequence of this process.

Having had to contend with “only” one new technology at a time in the past, be it PCs, SAP software, SQL databases, or the Internet itself, companies are now facing an array of concurrent topics, such as the Internet of Things, social media, third-generation e-business, and tablets and smartphones. Professor Brenner thus believes that every good — and perhaps disruptive — idea can result in a “quantum leap in terms of data.”

Products and services shaped by customers

It has already been nearly seven years since the release of an app that enables customers to order and pay for taxis. Initially introduced in Berlin, Germany, mytaxi makes it possible to avoid waiting on hold for the next phone representative and pay by credit card while giving drivers greater independence from taxi dispatch centers. In addition, analyses of user data can lead to the creation of new services, such as for people who consistently order taxis at around the same time of day.

“Successful models focus on providing utility to the customer,” Professor Brenner explains. “In the beginning, at least, everything else is secondary.”

In this regard, the private taxi agency Uber is a fair bit more radical. It bypasses the entire taxi industry and hires private individuals interested in making themselves and their vehicles available for rides on the Uber platform. Similarly, Airbnb runs a platform travelers can use to book private accommodations instead of hotel rooms.

Long-established companies are also undergoing profound changes. The German publishing house Axel Springer SE, for instance, has acquired a number of startups, launched an online dating platform, and released an app with which users can collect points at retail. Chairman and CEO Matthias Döpfner also has an interest in getting the company’s newspapers and other periodicals back into the black based on payment models, of course, but these endeavors are somewhat at odds with the traditional notion of publishing houses being involved solely in publishing.

The impact of digitalization transcends Generation Y

Digitalization is effecting changes in nearly every industry. Retailers will likely have no choice but to integrate their sales channels into an omnichannel approach. Seeking to make their data services as attractive as possible, BMW, Mercedes, and Audi have joined forces to purchase the digital map service HERE. Mechanical engineering companies are outfitting their equipment with sensors to reduce downtime and achieve further product improvements.

“The specific potential and risks at hand determine how and by what means each individual company approaches the subject of digitalization,” Professor Brenner reveals. The resulting services will ultimately benefit every customer – not just those belonging to Generation Y, who have a certain basic affinity for digital methods.

“Think of cars that notify the service center when their brakes or drive belts need to be replaced, offer parking assistance, or even handle parking for you,” Brenner offers. “This can be a big help to elderly people in particular.”

Chief digital officers: team members, not miracle workers

Making the transition to the digital future involves not only the CEO or the heads of marketing and IT, but the entire company. Though these individuals play an important role as proponents of digital models, it takes more than a chief digital officer alone.

For Professor Brenner, appointing a single person to the board of a DAX company to oversee digitalization is basically absurd. “Unless you’re talking about Da Vinci or Leibniz born again, nobody could handle such a task,” he states.

In Brenner’s view, this is a topic for each and every department, and responsibilities should be assigned much like on a soccer field: “You’ve got a coach and the players – and the fans, as well, who are more or less what it’s all about.”

Here, the CIO neither competes with the CDO nor assumes an elevated position in the process of digital transformation. Implementing new databases like SAP HANA or Hadoop, and leveraging sensor data in technically and commercially viable ways: these are the tasks CIOs will face going forward.

“There are some fantastic jobs out there,” Brenner affirms.

Want more insight on managing digital transformation? See Three Keys To Winning In A World Of Disruption.

Image via Shutterstock



Digitalist Magazine

Deploying Solutions from the Cortana Intelligence Gallery


The Gallery is a community site. Many of the contributions come directly from Microsoft, but individual community members can contribute to the Gallery as well.

The “Solutions” are particularly interesting. Let’s say you’ve searched and found Data Warehousing and Modern BI on Azure:

Deploying a Solution from the Gallery

What makes these solutions pretty appealing is the “Deploy” button. They’re packaged up to deploy all (or most) of the components into your Azure environment. I admit I’d like to see some fine-tuning of this deployment process as it progresses through the public preview. Here’s a quick rundown of what to expect.

1|Create new deployment:


The most important thing in step 1 above is that your deployment name ends up being your resource group. The resource group is created as soon as you click the Create button (so if you change your mind on naming, you’ll have to go manually delete the RG). Also note that you’re only allowed 9 characters, which makes it hard to implement a good naming convention. (Have I ever mentioned how fond I am of naming conventions?!?)

Resource groups are an incredibly important concept in Azure. They are a way to logically organize related resources which (usually) have the same lifecycle and are managed together. All items within a single resource group are included in an ARM template. Resource groups can serve as a boundary for security/permissions at the RG level, and can be used to track the cost of a solution. So, it’s extremely important to plan out resource group structure in your real environment. In our situation here, having all of these related resources for testing/learning purposes is perfect.

2|Provide configuration parameters:


In step 2 above, the only thing we need to specify is a user and password. This will be the server admin for both Azure SQL Database and Azure SQL Data Warehouse which are provisioned. It will use SQL authentication.

As soon as you hit the Next button, provisioning begins.

3|Resource provisioning (automated):


In step 3 above we see the progress. Depending on the type of resource, it may take a little while.



When provisioning is complete, as shown in step 4 above (partial screenshot), you get a list of what was created and instructions for follow-up steps. For instance, in this solution our next steps are to go and create an Azure Service Principal and then create the Azure Analysis Services model (via PowerShell script saved in an Azure runbook provided by the solution).

They also send an e-mail to confirm the deployment.


If we pop over to the Azure portal and review what was provisioned so far, we find the full set of resources in the new resource group.


We had no options along the way for selecting names for resources, so we have a lot of auto-generated suffixes for our resource names. This is ok for purely learning scenarios, but not my preference if we’re starting a true project with a pre-configured solution. Following an existing naming convention is impossible with solutions (at this point anyway). A wish list item I have is for the solution deployment UI to display the proposed names for each resource and let us alter if desired before the provisioning begins.

The deployment also doesn’t prompt for which subscription to deploy to (if you have multiple subscriptions like I do). The deployment did go to the subscription I wanted; however, it would be really nice to have that as a selection to make sure it’s not just luck.

We aren’t prompted to select scale levels during deployment. From what I can tell, it chooses the lowest possible scale (I noted that the SQL Data Warehouse was provisioned with 100 DWUs, and the SQLDB had 5 DTUs).
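If the default scale levels are too small, they can be changed after the fact. As a rough sketch (the database names below are hypothetical stand-ins for whatever auto-generated names the deployment produced), you can adjust the service objectives with T-SQL against the logical server:

```sql
-- Hypothetical database names: adjust the scale of the deployed resources.
-- Scale the SQL Data Warehouse up from DW100 to DW200:
ALTER DATABASE MySolutionDW MODIFY (SERVICE_OBJECTIVE = 'DW200');

-- Move the Azure SQL Database from Basic (5 DTUs) to Standard S0:
ALTER DATABASE MySolutionDB MODIFY (SERVICE_OBJECTIVE = 'S0');
```

Scaling back down before you walk away from the environment keeps the cost in check between sessions.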

To minimize cost, don’t forget to pause what you can (such as the SQL Data Warehouse) when you’re not using it. The HDInsight piece of this will be the most expensive, and it cannot be paused, so you might want to learn & experiment with that first then de-provision HDInsight in order to save on cost. If you’re done with the whole solution, you can just delete the resource group (in which case all resources within it will be deleted permanently).

Referring to Documentation for Deployed Solutions

You can find each of your deployed solutions here:

From this view, you can refer back to the documentation for a solution deployment (which is the same info presented in Step 4 when it was finished provisioning).

You can also ‘Clean up Deployments’, which is a nice feature. The clean-up operation first deletes each individual resource, then deletes the resource group.



Blog – SQL Chick

VentureBeat is hiring an AI reporter


We’re looking for an experienced reporter to help lead our coverage of artificial intelligence.

As startups and big corporations invest money and talent into AI, VentureBeat aims to cover both the broad ways AI will change life as we know it and the technical infrastructure underpinning it.

As VentureBeat’s AI reporter, you’ll help define our daily coverage of AI and cloud technology — from incremental developments to breakthroughs that may one day pass the Turing test, cross the uncanny valley, and make self-driving cars possible. We also appreciate an appropriately jaundiced view of the many consumer apps already powered by AI. You’ll be responsible for covering breaking news on this topic in a fast-paced newsroom, developing and maintaining key industry contacts, and turning those connections into scoops.

Please be available to work from our San Francisco headquarters. This is a full-time, salaried position with health benefits and a flexible time-off policy. Candidates should have at least two years’ experience writing on deadline in a fast-paced online newsroom.

Finally, it would be great if you love to read VentureBeat. Seriously, though, you should already read VentureBeat!

Please send a resume (or LinkedIn page) and a cover letter containing three links to your best stories. Questions? Please get in touch (with “AI reporter” in the subject line).


Big Data – VentureBeat

Generating HTML from SQL Server Queries

You can produce HTML from SQL because SQL Server has built-in support for outputting XML, and HTML is best understood as a slightly odd dialect of XML that imparts meaning to predefined tags. There are plenty of edge cases where an HTML structure is the most obvious way of communicating tables, lists and directories. Where data is hierarchical, it can make even more sense. William Brewer gives a simple introduction to a few HTML-output techniques.

Can you produce HTML from SQL? Yes, very easily. Would you ever want to? I certainly have had to. The principle is very simple. HTML is really just a slightly odd dialect of XML that imparts meaning to predefined tags. SQL Server has built-in ways of outputting a wide variety of XML. Although in the past I’ve had to output entire websites from SQL, the most natural thing is to produce HTML structures such as tables, lists and directories.

HTML5 can generally be worked on in SQL as if it were an XML fragment. XML, of course, has no predefined tags and is extensible, whereas HTML is designed to facilitate the rendering and display of data. By custom, HTML has become more forgiving than XML, but well-formed HTML5 can still be treated as XML.

Generating Tables from SQL expressions.

In HTML5, tables are best done simply, but using the child elements and structures so that the web designer has full control over the appearance of the table. CSS3 allows you to specify sets of cells within a list of child elements. Individual TD tags, for example, within a table row (TR) can delimit table cells that can have individual styling, but the rendering of the table structure is quite separate from the data itself.

The table starts with an optional caption element, followed by zero or more colgroup elements, followed optionally by a thead element. This is then followed by either zero or more tbody elements or one or more tr elements, and optionally by a tfoot element; there can be only one tfoot element in a table.

The HTML5 ‘template’ for tables

In SQL Server, one can create the XML for a table like this with a template query over dummy data, which produces (after formatting it nicely) the corresponding HTML table markup.
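The original template query isn’t reproduced here, but a minimal sketch of this kind of template, using FOR XML PATH over dummy data, might look like the following. (The unnamed empty-string columns keep SQL Server from concatenating adjacent same-named columns into a single element.)

```sql
-- A template query for an HTML5 table, using dummy data.
SELECT
  (SELECT th = 'City', '', th = 'Sales'
     FOR XML PATH('tr'), TYPE) AS thead,
  (SELECT td = City, '', td = Sales
     FROM (VALUES ('Berlin', 120), ('London', 95)) AS v (City, Sales)
     FOR XML PATH('tr'), TYPE) AS tbody
FOR XML PATH('table');

-- Producing, once formatted, markup along these lines:
-- <table>
--   <thead><tr><th>City</th><th>Sales</th></tr></thead>
--   <tbody><tr><td>Berlin</td><td>120</td></tr>
--          <tr><td>London</td><td>95</td></tr></tbody>
-- </table>
```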

So, going to AdventureWorks, we can now produce a table that reports on the number of sales for each city, for the top thirty cities.
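The article’s query isn’t shown here; a hedged reconstruction against the AdventureWorks sales schema might look something like this:

```sql
-- Top thirty cities by number of sales orders, as an HTML table.
SELECT
  (SELECT th = 'City', '', th = 'Sales'
     FOR XML PATH('tr'), TYPE) AS thead,
  (SELECT TOP 30 td = a.City, '', td = COUNT(*)
     FROM Sales.SalesOrderHeader AS soh
       JOIN Person.Address AS a ON a.AddressID = soh.ShipToAddressID
     GROUP BY a.City
     ORDER BY COUNT(*) DESC
     FOR XML PATH('tr'), TYPE) AS tbody
FOR XML PATH('table');
```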

I’ve left out the tfoot row because I didn’t need it, and likewise colgroup. I use tfoot mostly for aggregate lines, but you are limited to only one, at the end, so it is not ideal for anything other than a simple ‘bottom line’.

When this is placed within an HTML file, with suitable CSS, the result is a neatly styled report table.


Generating directory lists from SQL expressions.

The HTML dl element is for rendering name-value groups such as dictionaries, indexes, definitions, questions and answers, and lexicons. A name-value group consists of one or more names (dt elements) followed by one or more values (dd elements). Within a single dl element, there should not be more than one dt element for each name.

We’ll take as an example an excerpt from the excellent SQL Server glossary
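The glossary query isn’t reproduced here; a minimal sketch of the technique, with a couple of made-up entries standing in for the real glossary excerpt, might be:

```sql
-- Name-value pairs rendered as an HTML dl/dt/dd directory list.
SELECT
  (SELECT dt = Term, dd = Definition
     FROM (VALUES
        ('ACID', 'Atomicity, Consistency, Isolation, Durability: the properties of a reliable transaction.'),
        ('B-tree', 'The balanced tree structure that SQL Server uses for its indexes.')
     ) AS g (Term, Definition)
     FOR XML PATH(''), TYPE)
FOR XML PATH('dl');
```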

This produces a directory list which can be rendered as you wish


Generating hierarchical lists from SQL expressions.

HTML lists represent probably the most useful way of passing simple hierarchical data to an application. You can also use directory lists (dl elements) for name-value pairs, and even tables for more complex data. Here is a simple example of a hierarchical list, generated from AdventureWorks. You’d want to use a recursive CTE for anything more complicated.
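A sketch of such a nested list, using AdventureWorks product categories (the shared ‘li’ path prefix makes SQL Server merge each category name and its inner ul into a single li element):

```sql
-- Product categories with their subcategories as a nested ul/li list.
SELECT
  (SELECT c.Name AS li,
          (SELECT s.Name AS li
             FROM Production.ProductSubcategory AS s
             WHERE s.ProductCategoryID = c.ProductCategoryID
             FOR XML PATH(''), TYPE) AS [li/ul]
     FROM Production.ProductCategory AS c
     FOR XML PATH(''), TYPE)
FOR XML PATH('ul');
```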




There are quite a few structures now in HTML5. Even the body tag has subordinate header, nav, section, article, aside, footer, details and summary tags. If you read the W3C Recommendation, it bristles with good suggestions for using markup to create structures. The pre tag can use child code, samp and kbd tags to create intelligent formatting. Data in SQL Server can easily generate this sort of structured HTML5 markup. This has obvious uses in indexes, chaptering, and glossaries, as well as the obvious generation of table-based reports. There is quite a lot of mileage in creating HTML from SQL Server queries.


SQL – Simple Talk

Introducing the Quick Measures Gallery

In the April update of Power BI Desktop, we released a powerful feature called ‘quick measures’ that helps create DAX measures based on templates for common patterns and expressions. Thanks to everyone who tried the feature out, gave us feedback, and even created ideas on our forum with new suggestions! We’re continuing to build more of these templates to address different scenarios, but we’re also looking for help creating more measures to share with everyone.


You can submit your common calculations that would be helpful for the rest of the community on our new Quick Measures Gallery. We’ll evaluate the DAX statement and may use it as the basis of a future quick measure – you can even get your name immortalized in Power BI Desktop!

You can see instructions for how to write the best submissions, including a template to use, on the Community forum.


Microsoft Power BI Blog | Microsoft Power BI

How Investing in Data Quality Saves You Money

You know that data quality helps your data analysts perform their jobs better and can smooth the transfer of data across your organization. But data quality is crucial for another reason: it translates directly to cold, hard cash. Here’s how to increase your data quality return on investment.


When it comes to big data, you may think of expensive storage infrastructure and sophisticated platforms like Hadoop as the most significant places to invest your money. But while it’s true that you should invest in data storage and analytics technology, investing in data quality is equally crucial.

The reason is that poor-quality data can undercut your business operations in a variety of ways. No matter how much you spend on analytics – or on marketing, recruitment, planning and other endeavors based on those analytics – you’re shooting yourself in the foot if the data you’re working with is subject to inconsistencies, inaccuracies, and other quality issues.

Consider the following ways in which investing in data quality can save you – or earn you – big money.


Making the most of marketing

Marketing is key to attracting and keeping customers. If your marketing team’s efforts are based on low-quality data, they will chronically come up short.

Think about it. If the email addresses you collect for prospects are not accurate, your marketing campaigns will end up in black holes. If the data you collect about customer preferences turns out to be inconsistent, your marketing team will make plans based on information that doesn’t reflect reality.

The list of marketing problems that can result from low-quality data could go on. The point is that your return on the investment you make in marketing efforts is only as great as the quality of the data at the foundation of your marketing campaigns.

Keeping customers happy

In addition to using marketing to attract new customers, you also want to keep the customers you have. Quality data is key here, too.

Why? Because your ability to meet and exceed the expectations of your customers is largely based on the accuracy of the data you collect about their preferences and behavior. If time zone information in your database of customer transactions is incorrect, you might end up inaccurately predicting when customer demand for your services is highest. As a result, you’ll fall short of being able to guarantee service availability when your customers want it most.

As another example, consider the importance of making sure you maintain accurate data about your customers in order to deliver excellent customer service. When a customer calls you with a complaint or question, you don’t want to misidentify him or her because of inaccurate information linking a phone number to a name. Nor do you want to route customer calls to the wrong call center because the data you have about a customer’s geographic location is wrong.

Staying compliant

Compliance is money – there’s no denying that – and quality data can do much to help ensure that you meet regulatory compliance requirements.


Data Quality Return on Investment: Compliance

Without quality data, you may end up failing to secure sensitive customer information as required by compliance policies because you have no way of separating data that needs to be protected from the rest of the information you store. Or, you may run afoul of regulatory authorities because you can’t rely on low-quality data to detect and prevent fraudulent activity, another area where data quality is key.

Keeping employees happy

Good employees are hard to find, and they can be even harder to keep. That’s especially true if poor data gets in the way of their ability to do their jobs.

Whether they’re in marketing, HR, legal, development or any other area of your organization, most employees depend on data to accomplish what you expect of them. If you are unable to deliver the quality data they require, they’ll become frustrated and less productive. They may ultimately choose to look for work elsewhere.

Low employee productivity and high turnover rates translate to higher staffing costs.

Keeping operations efficient

Just as your employees can’t do their jobs without quality data, your business can’t operate efficiently without good data.


Data Quality Return on Investment: Efficiency

In a world where data is at the root of almost everything a company does, inaccurate or inconsistent data slows down processes, creates unnecessary delays, introduces problems that teams have to scramble to fix, and so on.

Quality data helps you avoid these mistakes and keep your business lean and mean – which translates to greater cost-efficiency.

Achieving data quality with Syncsort

With Syncsort, you can ensure data quality and streamline your data integration workflows at the same time. That’s because Syncsort has added Trillium’s data quality software to its suite of Big Data integration solutions.

To learn more about how Syncsort helps you maximize your data quality return on investment by ensuring the quality of even the hardest-to-manage data – legacy data – check out the latest eBook, “Bringing Big Data to Life.”



Syncsort blog

Set size for multiple visualizations in #PowerBI at the same time

April 23, 2017 / Erik Svensen


When designing reports in Power BI Desktop, you probably spend a lot of time making sure your visualizations are aligned and, at least for some of them, making sure they have the same size.

So far, we only have the align feature in the Power BI Desktop


To change the size of the visualizations we must use the General properties under Format to resize the elements


But what if you want to resize more than one element at a time? If you select more than one, the General tab shows the size of the first selection only.


Now here is the trick: modify the Width and Height values by 1 each


And then back again


And your visualizations have the same size.

Note – this only works when you select visualizations of the same type; if you select different types, you won’t be able to see General under Format.

Hope this can help you too –


Blog Update: How Are Companies Using Hadoop?

It’s no secret that Hadoop is popular, and our readers have shown that they’re interested to hear the latest on all things Hadoop. On the heels of our annual Hadoop Survey, we recently updated a blog post originally published in 2015, titled “Who Is Using Hadoop? And What Are They Using It For?” The revised post provides a more accurate look at companies using Hadoop today and what business challenges they are tackling with this Big Data tool.

What Are Companies Using Hadoop For? Which Organizations are Using It?


Whether you’re new to Hadoop or an experienced early adopter, it’s always useful to have the inside track on what’s happening in the fast-paced world of technology. Who wouldn’t want to know whether their competitors are investing in the Big Data solution or whether their industry is finding success with the platform?

Read the updated post, “Who Is Using Hadoop? And What Are They Using It For?”, to discover the answers to key questions around which organizations are using Hadoop, such as:

  • WHO? Who is using Hadoop? What does the adoption rate currently look like?
  • WHAT? What does the return on this investment look like? How is it providing business value?
  • HOW? How are companies using Hadoop? Which industries are finding the most value and success?

The updated post also includes statistics and infographics from the Hadoop Perspectives for 2017 eBook around the value of access and integration of legacy and/or mainframe data into the Hadoop platform.

Other Related Blog Posts on this Topic:


Syncsort blog

New Get Data Capabilities in the GA Release of SSDT Tabular 17.0 (April 2017)

With the General Availability (GA) release of SSDT 17.0, the modern Get Data experience in Analysis Services Tabular projects comes with several exciting improvements, including DirectQuery support (see the blog article “Introducing DirectQuery Support for Tabular 1400”), additional data sources (particularly file-based), and support for data access options that control how the mashup engine handles privacy levels, redirects, and null values. Moreover, the GA release coincides with the CTP 2.0 release of SQL Server 2017, so the modern Get Data experience benefits from significant performance improvements when importing data. Thanks to the tireless effort of the Mashup engine team, data import performance over structured data sources is now on par with legacy provider data sources. Internal testing shows that importing data from a SQL Server database through the Mashup engine is in fact faster than importing the same data by using SQL Server Native Client directly!

Last month, the blog article “What makes a Data Source a Data Source?” previewed context expressions for structured data sources. The file-based data sources that SSDT Tabular 17.0 GA adds to the portfolio make use of context expressions to define a generic file-based source as an Access database, an Excel workbook, or a CSV, XML, or JSON file. When you import an XML file, for example, SSDT Tabular creates a structured data source with a context expression identifying the source as XML.


Note that file-based data sources are still a work in progress. Specifically, the Navigator window that Power BI Desktop shows for importing multiple tables from a source is not yet enabled so you end up immediately in the Query Editor in SSDT. This is not ideal because it makes it hard to import multiple tables. A forthcoming SSDT release is going to address this issue. Also, when trying to import from an Access database, note that SSDT Tabular in Integrated Workspace mode would require both the 32-bit and 64-bit ACE provider, but both cannot be installed on the same computer. This issue requires you to use a remote workspace server running SQL Server 2017 CTP 2.0, so that you can install the 32-bit driver on the SSDT workstation and the 64-bit driver on the server running Analysis Services CTP 2.0.

Keep in mind that SSDT Tabular 17.0 GA uses the Analysis Services CTP 2.0 database schema for Tabular 1400 models. This schema is incompatible with CTPs of SQL vNext Analysis Services. You cannot open Tabular 1400 models with previous schemas and you cannot deploy Tabular 1400 models with a CTP 2.0 database schema to a server running a previous CTP version.

Another great data source that you can find for the first time in SSDT Tabular is Azure Blob Storage, which will be particularly interesting when Azure Analysis Services provides support for the 1400 compatibility level. When connecting to Azure Blob Storage, make sure you provide the account name or URL without any containers in the data source definition. If you appended a container name to the URL, SSDT Tabular would fail to generate the full set of data source settings. Instead, select the desired container in the Navigator window, as illustrated in the following screenshot.
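As an illustrative sketch in the same structured data source format (property names approximated, account name hypothetical), the blob source definition should reference only the storage account, never a container:

```json
{
  "type": "structured",
  "name": "AzureBlobs/myaccount",
  "connectionDetails": {
    "protocol": "azure-blobs",
    "address": {
      "account": "myaccount"
    }
  },
  "credential": {
    "AuthenticationKind": "Key",
    "kind": "AzureBlobs"
  }
}
```

Keeping the container out of the address is what allows SSDT Tabular to generate a complete data source definition; the container is then chosen interactively in the Navigator window.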

[Screenshot: selecting a container from an Azure Blob Storage account in the Navigator window]

As mentioned above, SSDT Tabular 17.0 GA uses the Analysis Services CTP 2.0 database schema for Tabular 1400 models. This database schema is more complete than any previous schema version. Specifically, you can find additional Data Access Options in the Properties window when selecting the Model.bim file in Solution Explorer (see the following screenshot). These data access options correspond to those options in Power BI Desktop that are applicable to Tabular 1400 models hosted on an Analysis Services server, including:

  • Enable Fast Combine (default is false): When set to true, the mashup engine ignores data source privacy levels when combining data.
  • Enable Legacy Redirects (default is false): When set to true, the mashup engine follows HTTP redirects that are potentially insecure (for example, a redirect from an HTTPS to an HTTP URI).
  • Return Error Values as Null (default is false): When set to true, cell-level errors are returned as null. When false, an exception is raised if a cell contains an error.
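In the Model.bim JSON, these options surface as properties on the model object. The following fragment is a sketch of the corresponding `dataAccessOptions` block (property names based on the Tabular 1400 metadata schema; verify against your deployed CTP 2.0 schema before relying on them):

```json
{
  "model": {
    "culture": "en-US",
    "dataAccessOptions": {
      "fastCombine": true,
      "legacyRedirects": true,
      "returnErrorValuesAsNull": true
    }
  }
}
```

Because these options live in the model metadata, they travel with the database at deployment time rather than being a per-workstation setting, which matches the Power BI Desktop options they mirror.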

[Screenshot: Data Access Options in the Properties window for the Model.bim file]

With Enable Fast Combine set to true in particular, you can now begin to refer to multiple data sources in a single source query.

Yet another great feature that is now available to you in SSDT Tabular is the Add Column from Example capability introduced with the April 2017 Update of Power BI Desktop. For details, refer to the article “Add a column from an example in Power BI Desktop.” The steps are practically identical. Add Column from Example is a great illustration of how the close collaboration and teamwork between the AS engine, Mashup engine, Power BI Desktop, and SSDT Tabular teams is compounding the value delivered to our customers.

Looking ahead, apart from tying up loose ends such as the Navigator dialog for file-based sources, there is still a sizeable list of data sources we are going to add in further SSDT releases. Named expressions, discussed in a blog article a while ago, also still need to find their way into SSDT Tabular, as does support for the full set of impersonation options that Analysis Services provides for data sources that can use Windows authentication. Currently, only the service account and explicit Windows credentials can be used; forthcoming impersonation options include the current user and unattended accounts.

In short, the work to enable the modern Get Data experience in SSDT Tabular is not yet finished. Even though SSDT Tabular 17.0 GA is fully supported in production environments, Tabular 1400 is still evolving. The database schema is considered complete with CTP 2.0, but minor changes might still be coming. So go ahead and deploy SSDT Tabular 17.0 GA, use it to work with your Tabular 1200 models, and take Tabular 1400 for a thorough test drive. And as always, please send us your feedback and suggestions through the ProBIToolsFeedback or SSASPrev aliases, or use any other available communication channels such as UserVoice or MSDN forums. Influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers!


Analysis Services Team Blog