Tag Archives: Explorer

Introducing Tabular Model Explorer for SQL Server Data Tools for Analysis Services Tabular Projects (SSDT Tabular)

If you download and install the August 2016 release of SQL Server Data Tools (SSDT) for Visual Studio 2015, you will find a new feature in SSAS Tabular projects called Tabular Model Explorer, which lets you conveniently navigate the various metadata objects in a model, such as data sources, tables, measures, and relationships. It is implemented as a separate tool window that you can display by opening the View menu in Visual Studio, pointing to Other Windows, and then clicking Tabular Model Explorer. Tabular Model Explorer appears by default in the Solution Explorer area on a separate tab, as illustrated in the following screenshot.

[Screenshot: Tabular Model Explorer displayed on a separate tab next to Solution Explorer]

As you will no doubt notice, Tabular Model Explorer organizes the metadata objects in a tree structure that closely resembles the schema of a tabular 1200 model. Data Sources, Perspectives, Relationships, Roles, Tables, and Translations correspond to top-level schema objects. There are also exceptions, specifically KPIs and Measures, which technically aren’t top-level objects but child objects of the various tables in the model. However, having consolidated top-level containers for all KPIs and Measures makes it easier to work with these objects, especially if your model includes a very large number of tables. Of course, the measures are also listed under their corresponding parent tables, so that you have a clear view of the actual parent-child relationships. And if you select a measure in the top-level Measures container, the same measure is also selected in the child collection under its table—and vice versa. Boldface font calls out the selected object, as the following side-by-side screenshots illustrate for selecting a measure at the top level (left) versus the table level (right).

[Screenshots: the same measure selected in the top-level Measures container (left) and under its parent table (right)]

As you would expect, the various nodes in Tabular Model Explorer are linked to the appropriate menu options that until now were hiding under the Model, Table, and Column menus in Visual Studio. It is certainly easier to edit a data source by right-clicking its object in Tabular Model Explorer and clicking Edit Data Source than by opening the Model menu, clicking Existing Connections, selecting the desired connection in the Existing Connections dialog box, and then clicking Edit. This is great, even though not all tree-view nodes have a context menu yet. For example, the top-level KPIs and Measures containers don’t have a menu yet, and the Perspectives container does, but its child objects do not. We will add further options in subsequent releases, including completely new commands that now make perfect sense in the context of an individual metadata object.

The same can be said for the Properties window. If you select a table, column, or measure in Tabular Model Explorer, SSDT populates the Properties window accordingly, but if you select a data source, relationship, or partition, it does not and leaves the Properties window empty, as shown in the next screenshot comparison. This is simply because SSDT never had to populate the Properties window for the latter types of metadata objects before. Subsequent SSDT releases will provide more consistency and enable even more convenient editing scenarios through the Properties window. We simply did not want to delay the initial Tabular Model Explorer release by another month or two.

[Screenshots: Properties window comparison for different metadata object types]

The initial version already goes beyond what was previously possible in SSDT Tabular. For example, assume you have a very large number of measures in a model. Navigating through these measures in the Measure Grid can be tedious, but Tabular Model Explorer offers a convenient search feature. Just type a portion of the name into the Search box and Tabular Model Explorer narrows the tree view down to the matches. Then select the measure object, and SSDT also selects the measure in the Measure Grid for editing. It’s a start toward saying goodbye to Measure Grid frustration!

[Screenshot: Tabular Model Explorer search results]

But wait, there is more! The Tables node and the Columns and Measures nodes under each table support sorting. The default is Alpha Sort, which lists the objects alphabetically for easy navigation, but if you’d rather list the objects based on their actual order in the data model, just right-click the parent node and select Model Sort. In most cases, Alpha Sort is going to be more useful, but if you need Model Sort on other parent nodes as well, such as Hierarchies and Partitions, let us know and we’ll add it in a subsequent release.

Note also that Tabular Model Explorer is only available at the tabular 1200 compatibility level or later. Models at compatibility level 1100 or 1103 are not supported because Tabular Model Explorer is based on the new Tabular Object Model (TOM).
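
If you want to walk the same metadata programmatically, TOM exposes it through the Microsoft.AnalysisServices.Tabular client library. The sketch below is only a rough illustration, driven from Python via the pythonnet package rather than the usual C#; the instance name is a placeholder and it assumes the TOM assembly is installed on the machine.

    # Rough sketch: walk a 1200-level model with TOM from Python (pythonnet).
    # Assumes the Microsoft.AnalysisServices.Tabular assembly is installed;
    # "localhost\tabular" is a placeholder instance name.
    import clr
    clr.AddReference("Microsoft.AnalysisServices.Tabular")
    from Microsoft.AnalysisServices.Tabular import Server

    server = Server()
    server.Connect(r"localhost\tabular")
    model = server.Databases[0].Model

    # The top-level collections mirror the Tabular Model Explorer tree.
    for table in model.Tables:
        for measure in table.Measures:            # measures are children of tables
            print(table.Name, "->", measure.Name)
    for relationship in model.Relationships:      # relationships are top-level objects
        print(relationship.Name)
    server.Disconnect()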

And that’s about it for a whirlwind introduction of Tabular Model Explorer in SSDT Tabular. We hope you find this new feature useful, especially if your models are complex and contain a very large number of tables, columns, partitions, measures, and other metadata objects. Give it a try, send us your feedback through Microsoft Connect, community forums, or as comments to this blog post, and let us know what other capabilities you would like us to add. Import/export of selected objects? Drag and drop support? And stay tuned for even more capabilities coming to an SSDT Tabular workstation near you in the next monthly releases!

Analysis Services and PowerPivot Team Blog

Internet Explorer and Microsoft Dynamics CRM 4.0 and Support Lifecycle Announcement

Beginning January 12, 2016, only the most current version of Internet Explorer available for a supported operating system will receive technical support and security updates. See the Lifecycle FAQ here for more details.

So what does this mean for CRM 4 clients?

Microsoft Dynamics CRM 4.0 relies on Internet Explorer 8 and Internet Explorer 9 when running on Windows 7. If we take a look at the Microsoft Support Lifecycle as it relates to Internet Explorer and desktop operating systems, we see that IE 8 & 9 are no longer supported for Windows 7. Therefore, Internet Explorer 8 & 9 on Windows 7 will no longer receive technical support or security updates and could make your CRM system vulnerable.

[Image: supported Internet Explorer versions by Windows operating system]

If we look at Internet Explorer compatibility with Microsoft Dynamics CRM 4.0, we see that IE 9 is compatible with CRM 4; however, per the table above, IE 9 on Windows 7 or later will no longer be supported going forward.

[Image: Internet Explorer compatibility with Microsoft Dynamics CRM 4.0]

Call to Action for all CRM 4 Customers!
In order to remain supported and utilize the most current software, security, and support Microsoft has to offer, we encourage all CRM 4 customers to upgrade to the latest CRM version. Not only will this eliminate the issue above, it will also deliver the benefits of the latest and greatest that Microsoft Dynamics CRM has to offer.

Beringer Associates is a leading Microsoft Gold Certified Partner specializing in Microsoft Dynamics CRM and CRM for Distribution. We also provide expert Managed IT Services, Backup and Disaster Recovery, Cloud Based Computing and Unified Communication Systems.

by Beringer Associates

CRM Software Blog

Keen IO open-sources its Data Explorer tool for making quick queries

Keen IO, a startup with a cloud-based data analytics tool, is announcing today that it’s releasing one of its tools for customers, the Data Explorer, under an open-source license.

The Data Explorer first became available to customers earlier this year, as a web-based graphical user interface for making queries and getting charts with simple drop-down menus. A predecessor, called Workbench, has been around since the startup’s earliest days (it was founded in 2011).

“By open-sourcing it and letting people modify and embed it into their own apps, we kind of get back to our identity as a back-end company,” cofounder and chief executive Kyle Wild told VentureBeat in an interview.


Previous Keen IO open-source contributions include dashboard templates and the Pushpop plugin to send messages into Slack.

Customers have made many feature requests for the Data Explorer. It’s the sort of thing that both developers and non-technical people could use.

Sure, it can help developers get a sense of what they can do with the data that’s being tracked, whether it be from an application or a piece of hardware. But the startup wants to spend more of its attention on its application programming interface (API) for running queries, which programmers can use in their apps.
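
As a rough sketch of what that query API looks like from code, here is a minimal example using Keen IO’s Python client (the keen package); the project ID, keys, collection name, and event fields are placeholders, not real values.

    # Minimal sketch using the keen Python client; IDs, keys, and the
    # "purchases" collection are placeholders, not real credentials.
    import keen

    keen.project_id = "YOUR_PROJECT_ID"
    keen.write_key = "YOUR_WRITE_KEY"
    keen.read_key = "YOUR_READ_KEY"

    # Record an event from an app or a piece of hardware...
    keen.add_event("purchases", {"item": "widget", "price": 49.95})

    # ...then run the same kind of count query the Data Explorer builds
    # through its drop-down menus.
    print(keen.count("purchases", timeframe="this_14_days"))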

“There’s a limited amount of effort we can expend,” Wild said. “We want to do it in a way that’s most useful.”

And by keeping its focus narrow, Keen IO can maintain an identity distinct from other tools, such as Google Analytics, Kissmetrics, and Mixpanel.

San Francisco-based Keen IO announced an $11.3 million funding round last summer. Customers include EMC, John Deere, Chartboost, and Quartz.

Find the Data Explorer here.

VentureBeat » Big Data News | VentureBeat

Interview with Simon Elliston Ball, head of big data at Red Gate Ventures, on the Making of HDFS Explorer

Simon Elliston Ball, head of Big Data at Red Gate Ventures

When we corresponded, you communicated a lively understanding of ELT vs. ETL. Can you tell us about a project or two where the distinction became clear – even before the trade press worked on teasing out a difference?

In my previous life, I was working with companies in finance, e-commerce and ERP. Data integration was always a massive pain; even the “standard” formats never seemed to work, so we had all those classic master data, coding and encoding problems every data professional is all too familiar with. We would spin up big transformation projects, importing, cleaning, normalizing, then sit back with a satisfied sigh and look at a schema in SQL. At last, we could actually see if there was anything useful in the data, and whether all that work had really been worth it.

One of the nice things about a schema-less or semi-structured system is that you can delay decisions and all the data mapping and translation work until it has some value. Plus, you don’t have to engineer around all the edge cases to crowbar in rows that you’re not particularly interested in anyway.

Red Gate just announced its free HDFS Explorer. Can you tell us how this offering came about and how you think the tool should fit into the big data toolbox?

We have been working with Hadoop for a while and became increasingly frustrated with how convoluted it was just to get data into the platform. Trying to move reference or sample data files from our Windows desktops was a pain: you’d have to scp the data and then ssh into the cluster to use the hadoop fs commands to get it into HDFS, or maintain local versions of the clients on Windows. So we developed HDFS Explorer, a simple little utility which looks and behaves a lot like Windows Explorer but lets you connect to HDFS on any distro and interact with the cluster file system just as if it were a local disk.
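
HDFS Explorer itself is a point-and-click tool, but for comparison the same round trip can be scripted; the following is a minimal sketch using the Python hdfs package, which talks to the cluster over WebHDFS. The NameNode URL, user, and paths are assumptions for illustration only.

    # Sketch only: copy a local reference file into HDFS over WebHDFS
    # using the Python "hdfs" package; URL, user, and paths are placeholders.
    from hdfs import InsecureClient

    client = InsecureClient("http://namenode:50070", user="hadoop")
    client.makedirs("/data/reference")
    client.upload("/data/reference/products.csv", "products.csv")

    # Browse the cluster file system much like a local directory listing.
    print(client.list("/data/reference"))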

The next step for us is to make it just as easy to get your relational database sources into Hadoop without having to drop to the sqoop command line and learn all the options; you can just use our Hadoop Import Export tool to get the job done and get on with querying your data. After all, not everyone wants to learn sqoop’s performance options for a simple job, and if they’re trying to do something more complex, or they want serious performance, there are always tools like yours at SyncSort.
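
For context, the kind of sqoop command line being avoided here looks roughly like the following sketch (wrapped in Python for consistency with the other examples); the JDBC URL, credentials, table, and directories are placeholders, and this is not Red Gate’s tool.

    # Illustrative only: a typical sqoop import invoked from Python.
    # JDBC URL, credentials, table, and target directory are placeholders.
    import subprocess

    subprocess.run([
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost/sales",
        "--username", "etl_user",
        "--password-file", "/user/etl/.sqoop_pw",
        "--table", "orders",                  # relational source table
        "--target-dir", "/data/raw/orders",   # destination in HDFS
        "--num-mappers", "4",                 # one of the many options to learn
    ], check=True)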

For many, big data is all about an Apache suite with Hadoop at the core. An acceptance of this de facto arrangement tends to overlook a mature set of pros and cons associated with open source software. Have we passed some sort of inflection point, or has the conversation simply drifted away from this understanding?

Enterprises don’t seem to have such an issue with open source any more; in fact, in some cases it’s a benefit, especially with licenses like Apache, which makes it easier to get Legal to agree to trials. With so much innovation happening in open source, and the ability to spread the cost of development of infrastructure pieces that just aren’t a differentiator, open source has become a bit of a no-brainer.

As for big data, there’s a basic assumption, and it’s wrong, that big data = Hadoop. Sure, Hadoop is a huge part, and it was a genius move to decouple the MapReduce parts from HDFS, opening the way for all the SQL-on-HDFS engines. YARN is also taking Hadoop to the next level. Lots of people talk about Hadoop as a data operating system. I’m not sure we’re quite there yet, but it’s coming together. I think in the next two to three years we’ll be in a different world in terms of the tooling and programming paradigms, especially since people are embracing Spark so fast.

I’d say it’s not a done deal yet. Hadoop may well stay at the center of big data, but it will be things like Spark on YARN, or whatever streaming technology runs on YARN, that keep it there.

For the time being, Red Gate has a somewhat lower profile than Cloudera and Hortonworks. What advice do you have for other up-and-coming outfits working in the same playgrounds as these two? Is the resulting plan a business model or just an acknowledgement of VC capital flows in big data?

Well, there are certainly a few gorillas in the room, but it’s like that in every part of tech, and has been for a long time. In RDBMS it was Oracle, IBM and Microsoft; now it’s Cloudera and Hortonworks, not to mention Pivotal, IBM and MapR. Of course it’s always tough to compete with a platform owner that has close to a billion dollars of funding, but even with the distro vendors trying to produce as full a story as they can, there are always niches they can’t fill, and the speed at which they’re moving leaves a lot of money on the table for a third-party tooling ecosystem. We’ve seen it before at Red Gate with the SQL Server and Oracle markets. The hard part when you’re small is keeping up with the tech and making sure your crystal ball doesn’t collide with theirs. Of course, all the big guys have fantastic partner programs, which can be very helpful to smaller players.

You mentioned that you’re currently working on lineage and metadata management. Since the V=Veracity tends to get less attention in big data, you got my attention. Some readers will have heard about Apache Falcon. Is yours a collateral product or a better idea altogether?

Falcon is a great step for Hadoop. If you buy into putting all your workloads through Falcon, you can use its lineage features. However, if you’ve got data applications, or ad hoc uses which are not configured through Falcon jobs, you don’t get the info. Our product has the lowest footprint possible in terms of development effort; it will work with whatever tools or platform you want to work with, rather than forcing a choice. Falcon is really a tool for simplifying workflow and lifecycle jobs, and in some ways it is a great dose of syntactic sugar for configuring interdependent Oozie jobs. Where our product differs is that it specializes in lineage, making a tool that can be more easily exposed to analysts as well as data ops and sysops professionals. Falcon is also currently Hortonworks-specific; it will be interesting to see if it makes it into Cloudera, given the clash with their Navigator product. Our solution is designed to work with any platform you choose.

A typical SQL Server DBA has her hands full with the day-to-day operations of a “traditional” RDBMS. Through what path – straight or circuitous – do you see a SQL Server DBA coming to rest in a big data project? Will it be through a Microsoft-adopted platform like HDInsight or a hybrid project that straddles the two worlds?

We’re seeing a number of traditional SQL Server DBAs and Microsoft stack developers starting to experiment with Hadoop in a variety of flavors. HDInsight is a strong default option for Microsoft shops, and we talk to a lot of people who are getting started using the HDInsight one-box; there’s still a strong tradition of local-first development in the Microsoft stack. The natural next step is obviously HDInsight in Azure. That said, I’ve seen plenty of Microsoft developers looking at Hortonworks and Cloudera as well. HDInsight is not the only Microsoft path to Hadoop, either. I expect a number of SQL Server DBAs will first meet Hadoop in the form of a PDW appliance, albeit in an unusual form.

SQL DBAs tend to be incredibly busy people, working with what has become a complex engine, with so many levers to tweak, not to mention the critical routine tasks of backup, HA, maintenance plans and all those pieces. Making the move to big data platforms, they will still have a lot of these issues, but with new workflows and new levers to learn. Part of what Red Gate is doing is trying to provide some familiar stepping stones.

What will the skill set for a big data DBA need to include? Is the change incremental or a new paradigm – perhaps driven by complex event processing, or the need for real-time Internet of Things data streams?

Big data platforms certainly have some new challenges for the DBA, particularly around the details of tuning, but also at the mindset level. In some ways, this comes back to the distinction between the ETL and ELT approach. New data models, and particularly the tendency and need to denormalize, are certainly compatible with the BI end of the traditional DBA experience, but when you’ve spent so many years preaching a minimum of third-normal form, the lack of things like referential constraints can be hard to reconcile.

Of course there are skills that stay with you when you make the jump, and a lot of the SQL on Hadoop options are about that. At the moment, it can be a little frustrating to lose elements of the SQL language you’ve gotten used to. Porting a few ANSI SQL query sets over to Hive 0.12 the other day was a very frustrating experience, though most of the issues I cursed about are solved in Hive 0.13 – what a difference a minor version makes when the world is moving as fast as big data!

What is rare in the big data world is people who have a really good understanding of both the new development techniques, like real-time stream processing (Kafka, Storm, Spark Streaming and friends) or MapReduce, and the traditional world of batch and interactive SQL. DBAs who can understand both sides of that equation are going to do very well for themselves.
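
As a small taste of the streaming side of that equation, here is a hedged sketch of consuming a real-time event stream with the kafka-python client; the topic name, broker address, and event fields are assumptions.

    # Sketch: consume a real-time event stream with kafka-python.
    # Topic, broker address, and field names are placeholders.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "clickstream",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:          # records arrive as they are produced,
        event = message.value         # not in a nightly batch
        print(event.get("user"), event.get("action"))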

Are your agnostic-leaning customers who use both Oracle and SQL Server early adopters of HDFS, or are other factors driving them instead?

Well, Hadoop is still pretty much in the early-adopter phase, especially for our customer base, and there is a slight bias towards the more polyglot approach among those early adopters in general. I don’t think it’s anything particularly systematic or unique to Hadoop, though. We’ve also heard from a number of pure SQL Server shops that have moved to Hadoop based on their ETL processing needs, both speed and volume, or their unstructured data needs, so a predisposition to different platforms certainly isn’t a prerequisite.

Your bio speaks of your interest in visualization. Do you think widely adopted products from the BI world like Tableau are meeting the needs of big data visualization, or are new techniques needed?

A lot of what you hear from the visualization vendors at the moment is about how they can connect to the new world, too, so you don’t have to change your tools. Some of them, Tableau in particular, and to an extent Microsoft with Power View and the rest of the Power BI suite, have started to innovate in some interesting ways to deal with big data problems. I’m really talking about things like their geographic visualizations combined with things like heatmaps and treemaps to show big aggregates of data. After all, it’s tough to visualize every data point on a Hadoop cluster. That said, I’m not sure that the traditional tools, or even an out-of-the-box set of charts, are the right way to deal with the variety of data sources you get with big data. The best work I’ve seen in this area is coming out of people using libraries to craft a visualization unique to the data set, using things like d3 or Processing.

For me, one of the biggest problems facing big data adoption in business is the understanding of uncertainty and probabilistic answers, so I’m drawn to the idea of using visualization to show the bounds, confidence intervals and distributions.

You’re working with Node.js. Can you move beyond Wikipedia’s two use cases (multiple concurrent fast file uploads, chat server) and suggest where you think it’s going to be useful?

We use Node quite a bit these days. Most of it is for simple APIs; it’s a great data shovel, because you can serve a lot of clients on a single process, especially if you’re not doing a lot of computation, just shoveling JSON. We’ve also used it a fair amount for creating workers running off the back of service bus queues. Part of that is about simplicity when we’ve got Node elsewhere in the stack, but it’s great again for those lightweight pieces. I use it for websockets as well, again for serving a lot of connections from a small server. I guess that’s kind of the same thing as the chat server technically, but using it to pass front-end application events between multiple sockets gives you a lot more potential than just a chat room. A while ago, Red Gate was working on a NodeJS plugin for the Visual Studio IDE, which has now been rolled into Microsoft’s Node Tools for Visual Studio, so we’ve seen a few projects through that. I have seen people using Node to try and get the holy grail of server and client side rendering of the same templates – for dynamically updating sections after a full page load.

Your practice hosts Simple Talk, where Tony Davis wrote a post titled “We have our standards, and we need them.” What standards – de facto or otherwise – are you watching most closely?

Standards? In big data? For me, half the point of un- and semi-structured data and tools like Hadoop is to deal with the fact that there are so many standards to choose from, and so many of them have been chosen. More seriously, early markets always find it hard to define standards. The most interesting emerging standards for us are probably around the Internet of Things, with organizations like the OIC. Of course, in the pure big data space there is the race towards full ANSI SQL, which is interesting to keep an eye on. For something a bit more exotic, I think it’s worth keeping an eye on RDF and the emerging triple stores. The whole area of linked data and graphs is getting really interesting and seems to be doing more for data integration than most standards, especially with open government data.

Last but not least, you mention having edited novels and written a screenplay or two. Do you have a C.P. Snow-inspired Two Cultures process isolation for these endeavors?

Personally I’ve never subscribed to a dichotomy between arts and sciences. As a reformed historian turned computer scientist, I found it was still all about collecting information and telling stories. That’s exactly what data science is. You need the logical “scientific” mind to build the algorithm and ensure proper rigor in your experiments on data, but there is something of the “artistic” side to shaping results, telling a story and communicating it. Perhaps “Information Scientist” is a better term, since information, not data, is the end product, but it takes both cultures. Data scientists are closer to renaissance humanists than cloistered specialists.

Learn how Syncsort DMX-h Hadoop ETL Edition is turning Hadoop into a Smarter ETL tool.

Syncsort blog