Tag Archives: Server

Cumulative Update #10 for SQL Server 2016 SP1

The 10th cumulative update release for SQL Server 2016 SP1 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:


SQL Server Release Services

Introduction to Cluster Sets in Windows Server 2019

This blog discusses a new feature in the upcoming release of Windows Server 2019.  Windows Insiders currently receive preview builds of Windows Server 2019.  We urge you to become an Insider and play a part in making Windows Server 2019 the best that it can be.  To do so, go to this link and sign up.

Cluster Sets is a new feature in Windows Server 2019 that was first introduced at Ignite 2017.  Cluster Sets is the new cloud scale-out technology in this preview release that increases cluster node count in a single Software Defined Data Center (SDDC) cloud by orders of magnitude.  A Cluster Set is a loosely-coupled grouping of multiple Failover Clusters: compute, storage, or hyper-converged.  The Cluster Sets technology enables virtual machine fluidity across member clusters within a cluster set, along with a unified storage namespace across the set that supports that fluidity.  This gives you the benefit of hyperscale while maintaining great resiliency.  In plainer words, you are pseudo-clustering clusters together rather than putting all your eggs in one basket: you can now have multiple baskets for greater flexibility.

While preserving existing Failover Cluster management experiences on member clusters, a Cluster Set instance additionally offers key use cases, such as lifecycle management. The Windows Server Preview Scenario Evaluation Guide for Cluster Sets provides you the necessary background information along with step-by-step instructions to evaluate cluster sets technology using PowerShell.

Here is a video providing a brief overview of what Cluster Sets is and can do.

The evaluation guide, which covers Cluster Sets in more detail along with how to set it up, is available on the Microsoft Docs page where this and numerous other Microsoft products are covered.  The quick link to the Cluster Sets page is https://aka.ms/Cluster_Sets.

Finally, there is a GitHub lab scenario with additional instructions, so you can set this up on your own and try it out.

We hope that you try it out and provide feedback.  Feedback can be provided in two ways:

  1. The Feedback Hub on Windows 10
  2. Email Cluster Sets Feedback.  This alias has been set up to receive feedback only.

Thanks,
John Marlin
Senior Program Manager
High Availability and Storage


Clustering and High-Availability

Scale-Out File Server Improvements in Windows Server 2019

This blog discusses a new feature in the upcoming release of Windows Server 2019.  Windows Insiders currently receive preview builds of Windows Server 2019.  We urge you to become an Insider and play a part in making Windows Server 2019 the best that it can be.  To do so, go to this link and sign up.

Failover Clustering Scale-Out File Server was first introduced in Windows Server 2012 to take advantage of Cluster Shared Volumes (CSV).  SOFS works in conjunction with Server Message Block (SMB), and as SMB has been updated through the newer versions, so has Scale-Out File Server.  There are several enhancements that I wanted to bring to light in this post.

SMB Connections move on connect

Scale-Out File Server (SOFS) relies on DNS round robin for inbound connections sent to cluster nodes.  When using Storage Spaces on Windows Server 2016 and older, this behavior can be inefficient: if the connection is routed to a cluster node that is not the owner of the Cluster Shared Volume (aka the coordinator node), all data redirects over the network to another node before returning to the client. The SMB Witness service detects this lack of direct I/O and moves the connection to a coordinator.  This can lead to delays.

In Windows Server 2019, we are much more efficient.  The SMB Server service determines if direct I/O on the volume is possible.  If direct I/O is possible, it passes the connection on.  If it is redirected I/O, it will move the connection to the coordinator before I/O starts.  Synchronous client redirection required changes in the SMB client, so only Windows Server 2019 and Windows 10 Fall 2017 clients can use this new functionality when talking to a Windows 2019 Failover Cluster.  SMB clients from older OS versions will continue relying upon the SMB Witness to move to a more optimal server.

SMB Bypass of the CSV File System

In a Windows Server 2016 SOFS using Storage Spaces, a client connects to the SMB Server, which talks to the CSV File System, and the CSV File System talks to NTFS.  All I/Os from the remote SMB client go through SMB Server, CSVFS, NTFS, and the rest of the storage stack.  Since direct I/O on ReFS is not possible, the CSV File System only helps with hiding storage failures.  The same applies to SMB Continuous Availability.  We made a change in Windows Server 2019 where we can still keep one layer that hides storage failures, but also bypass the CSV File System.  To do that, SMB Server queries from the CSVFS path to ReFS and opens files directly on ReFS.  All I/Os from these opens bypass CSVFS and go from SMB Server directly to ReFS.

Infrastructure Scale-Out File Server

There is a new Scale-Out File Server role in Windows Server 2019 called Infrastructure File Server.  When you create an Infrastructure File Server, it will automatically create a single namespace share for the CSV drive (i.e. \\InfraSOFSName\Volume1, etc.).  In hyper-converged configurations, an Infrastructure SOFS allows an SMB client (Hyper-V host) to communicate with guaranteed Continuous Availability (CA) to the Infrastructure SOFS SMB server.  There can be at most one Infrastructure SOFS cluster role on a Failover Cluster.

To create the Infrastructure SOFS, you would need to use PowerShell.  For example:

Add-ClusterScaleOutFileServerRole -Cluster MyCluster -Infrastructure -Name InfraSOFSName

SMB Loopback

An enhancement was made to Server Message Block (SMB) so that SMB loopback to the local machine, which was previously not supported, now works properly.  This hyper-converged SMB loopback CA is achieved via Virtual Machines accessing their virtual disk (VHDX) files, where the owning VM identity is forwarded between the client and server.

This is a role that Cluster Sets takes advantage of, where the path to the VHD/VHDX is placed as \\InfraSOFSName\Volume1.  This \\InfraSOFSName\Volume1 path can then be utilized by the virtual machine whether it is local or remote.

Identity Tunneling

In Windows Server 2016, if Hyper-V virtual machines are hosted on a SOFS share, you must grant the machine accounts of the Hyper-V compute nodes permission to access the VHD/VHDX files.  If the virtual machines and VHD/VHDX are running on the same cluster, the user must have rights as well.  This can make management difficult, as two sets of permissions are needed.

In Windows Server 2019 when using SOFS, we now have “identity tunneling” on Infrastructure shares.  When you access an Infrastructure share from the same cluster or Cluster Set, the application token is serialized and tunneled to the server, and VM disk access is done using that token.  This works even if your identity is Local System, a service, or a virtual machine account.

Thanks,
John Marlin
Senior Program Manager
High Availability and Storage


Clustering and High-Availability

Cumulative Update #8 for SQL Server 2017 RTM

The 8th cumulative update release for SQL Server 2017 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:


SQL Server Release Services

Cumulative Update #12 for SQL Server 2014 SP2

The 12th cumulative update release for SQL Server 2014 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:


SQL Server Release Services

Cumulative Update #9 for SQL Server 2016 SP1

The 9th cumulative update release for SQL Server 2016 SP1 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:


SQL Server Release Services

Drill Through and Report Server configuration with MDM/EMM tools

Recently, the Power BI Mobile & Devices team released some cool new features. These features address some of the most requested items we have received from you – we hear you!

Drill Through

A lot of you asked and voted for adding drill through to mobile. Now you got it!

While drill down lets you navigate one level down in the hierarchy on a selected data point, drill through allows you to navigate to another report page that potentially has more focused data on your selected data point.

In this new release of Power BI Mobile apps, we support drill through for all platforms!

When a drill through is defined in your report (visit here to learn how to add drill through to your report), tapping on a data point will show the drill through option in the tooltip overlay.

You might have multiple drill through options, each taking you to a different page. In that case, you will need to choose which one you want to drill through to.

[Screenshot: the drill through options in the tooltip overlay]

We also changed our Back-button behavior, so you can return to the original report page you navigated from, just by using the back button at the top of your screen.

[Screenshot: the back button at the top of the report]

(If you want to navigate up your page navigation tree and exit the report back to the app/workspace, you can still use the breadcrumbs.)

Report Server remote configuration

With this new capability, IT administrators can remotely configure employees’ Power BI Mobile apps. IT admins can configure the Report Server details the app will connect to, saving the end user from the need to know and enter the server details.

The configuration is done using the organizational MDM/EMM tool (for example: Intune). Once this is done, all the user has to do is accept the configuration and provide a password to complete the connection to the server.

IT administrators can create an “app configuration policy” as described in this article, and choose the set of users/devices the policy will apply to. Once the configuration is published, Power BI Mobile app users will get the following message when launching the app:

[Screenshot: the configuration prompt shown when launching the app]

Now, the sign-in process will only require the user to provide a password. All other information will be supplied by the configured policy.

Note: this feature is currently released for iOS devices only.

Phone reports canvas length

As you know, you can create a phone-optimized report in Power BI Desktop. When you publish that report to the service and access it from your mobile app, you get a tailored portrait view, optimized for use on mobile devices.

The feedback that we got from you is that this is a great feature, but that the length of the report canvas is not enough. So, we doubled it, and now phone reports can host more visuals on each page.

The increased phone report canvas will be available in Power BI Desktop (June release).

Next steps


Microsoft Power BI Blog | Microsoft Power BI

Released: Public Preview for SQL Server Management Packs Update (7.0.5.0) and SSRS Management Pack Update (7.0.6.0)

We are getting ready to update the SQL Server and SQL Server Reporting Services Management Packs. Please install and use this public preview and send us your feedback (sqlmpsfeedback@microsoft.com)! We appreciate the time and effort you spend on these previews which make the final product so much better.

Please download at:

Microsoft System Center Management Packs (Community Technology Preview) for SQL Server

Microsoft System Center Management Pack (Community Technology Preview) for SQL Server 2017+

Microsoft System Center Management Packs (Community Technology Preview) for SQL Server 2008-2016 Reporting Services (Native Mode)

New SQL Server 2008-2012 MP Features and Fixes

  • Updated the “Max worker thread count” data source of the corresponding monitor and performance rule
  • Fixed issue: the “Transaction Log Free Space (%)” monitor does not work
  • Fixed issue: in some environments, DB Space workflows fail when a secondary database is non-readable

New SQL Server 2014-2016 MP Features and Fixes

  • Updated alert severity in some monitors
  • Updated the display strings
  • Updated the “Max worker thread count” data source of the corresponding monitor and performance rule
  • Fixed issue: the “Transaction Log Free Space (%)” monitor does not work

New SQL Server 2017+ MP Features and Fixes

  • Implemented the ability to monitor SQL Server Cluster instances locally; formerly, this was possible only in Agentless and Mixed modes
  • Added SSIS monitoring
  • Added the “Exclude List” property to DB Engine Discovery in order to filter out instances that are not subject to monitoring
  • Added the “Exclude List” property to DB Discovery in order to filter out databases that are not subject to monitoring
  • Implemented a feature: both “Exclude List” properties support the asterisk character to make the filter more flexible; e.g. “*temp” excludes instances/databases whose names end with “temp”, whatever comes before
  • Added the “Computers” view
  • Added the “ClusterName” property to the AG class and updated AG alerts to display the property
  • Updated the “SP Compliance” monitor to support the Modern Servicing Model for SQL Server: the monitor now checks the build number instead of the Service Pack number
  • Updated the “SPN Status” monitor so that it requires only a single SPN record when only TCP/IP is enabled and the instance is the default one
  • Updated the “Database Backup Status” monitor: it is now disabled by default
  • Updated the DB Space monitors so that their alert descriptions include the actual value of space available
  • Updated the “Configuration Security” section in the guide
  • Fixed issue: the “Database Health Policy” monitor ignores the “Critical” state (on Windows only)
  • Fixed issue: the “Alert severity” property of the “DB File Free Space Left” monitor has an incorrect default value
  • Fixed issue: the “DB Filegroup Fx Container” rollup monitor has an alert parameter with a wrong value
  • Fixed issue: the “Resource Pool Memory consumption” monitor may not change its state to “Critical” for the “default” resource pool
  • Fixed issue: the “Number of Samples” parameter of the “Resource Pool Memory consumption” alert displays incorrect data
  • Fixed issue: missing image resources in the SQL Server 2017+ Core Library

New SSRS 2008-2016 MP Features and Fixes

  • Added support for cases when the connection string of the SSRS instance to the SSRS Database is not in the “MachineName\InstanceName” format, e.g. “<IP Address>,<Port Number>” or “(local)”, etc. Such connection strings are fully supported for default SQL Server instances hosting the SSRS Database. If the instance is named, workflows targeted at the SSRS Instance object work properly, but those targeted at the Deployment object cannot work, as there is no way to learn the FQDN of the server.
  • Updated the Deployment Seed discovery so that it does not check if the SQL Server instance hosting the SSRS Database is running

For more details, please refer to the user guides that can be downloaded along with the corresponding Management Packs.
We are looking forward to hearing your feedback at sqlmpsfeedback@microsoft.com.


SQL Server Release Services

Cumulative Update #1 for SQL Server 2016 SP2

The 1st cumulative update release for SQL Server 2016 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:


SQL Server Release Services

Importing JSON Collections into SQL Server

It is fairly easy to import JSON collections of documents into SQL Server if there is an underlying ‘explicit’ table schema available to them. If each of the documents has a different schema, then you have little chance. Fortunately, schema-less data collections are rare.

In this article we’ll start simply and work through a couple of examples before ending by creating a SQL Server database schema with ten tables, constraints and keys. Once those are in place, we’ll import a single JSON document, filling the ten tables with the data of 70,000 fake records from it.

Let’s start this gently, putting simple collections into strings which we will insert into a table. We’ll then try slightly trickier JSON documents with embedded arrays and so on. We’ll start by using the example of sheep-counting words, collected from many different parts of Great Britain and Brittany. The simple aim is to put them into a table. I don’t use Sheep-counting words because they are of general importance but because they can be used to represent whatever data you are trying to import.

You will need access to SQL Server 2016 or later, or Azure SQL Database or Azure SQL Data Warehouse, to play along, and you can download the data and code from GitHub.

Converting Simple JSON Arrays of Objects to Table-sources

We will start off by creating a simple table that we want to import into.
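
A minimal table for our purposes might look like this (the table and column names here are just a sketch; any similar design will do):

CREATE TABLE dbo.SheepCountingWords
  (
  Number INT NOT NULL,          -- the number being counted
  Word NVARCHAR(30) NOT NULL,   -- the dialect word for that number
  Region NVARCHAR(40) NOT NULL, -- where the word was collected
  CONSTRAINT PK_SheepCountingWords PRIMARY KEY (Number, Region)
  );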

We then choose a simple JSON format.
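
An array of objects, one object per word, is the obvious choice; the sample values below are purely illustrative:

[
  { "Number": 1, "Word": "Yan", "Region": "Swaledale" },
  { "Number": 2, "Word": "Tan", "Region": "Swaledale" },
  { "Number": 3, "Word": "Tether", "Region": "Swaledale" }
]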

We can very easily use OpenJSON to create a table-source that reflects the contents.
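
For instance, assuming the simple format above:

DECLARE @OurJSON NVARCHAR(MAX) = N'[
  { "Number": 1, "Word": "Yan", "Region": "Swaledale" },
  { "Number": 2, "Word": "Tan", "Region": "Swaledale" }
]';

SELECT Number, Word, Region
FROM OpenJSON(@OurJSON)
  WITH (Number INT, Word NVARCHAR(30), Region NVARCHAR(40));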

Once you have a table source, the quickest way to insert JSON into a table will always be the straight insert, even after an existence check. It is a good practice to make the process idempotent by only inserting the records that don’t already exist. I’ll use the MERGE statement just to keep things simple, though the left outer join with a null check is faster. The MERGE is often more convenient because it will accept a table-source such as a result from the OpenJSON function. We’ll create a temporary procedure to insert the JSON data into the table.
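
Here is a sketch of such a procedure, using the table and format we have assumed so far:

CREATE PROCEDURE #InsertJsonCountingWords @TheJSON NVARCHAR(MAX)
AS
MERGE dbo.SheepCountingWords AS target
USING
  (SELECT Number, Word, Region
   FROM OpenJSON(@TheJSON)
     WITH (Number INT, Word NVARCHAR(30), Region NVARCHAR(40))
  ) AS source
ON target.Number = source.Number AND target.Region = source.Region
WHEN NOT MATCHED BY TARGET THEN
  INSERT (Number, Word, Region)
  VALUES (source.Number, source.Word, source.Region);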

Now we try it out. Let’s assemble a couple of simple JSON strings from a table-source.
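
One way is to use FOR JSON against a VALUES table-source; the words below are illustrative:

DECLARE @Swaledale NVARCHAR(MAX) =
  (SELECT Number, Word, Region
   FROM (VALUES (1, 'Yan', 'Swaledale'), (2, 'Tan', 'Swaledale'),
                (3, 'Tether', 'Swaledale'), (4, 'Mether', 'Swaledale')
        ) AS CountingWords (Number, Word, Region)
   FOR JSON PATH);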

Now we can EXECUTE the procedure to store the Sheep-Counting Words in the table:
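
EXECUTE #InsertJsonCountingWords @TheJSON = @Swaledale;
-- this assumes the variable and temporary procedure from the previous steps are still in scope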

Check to see that they were imported correctly by running this query:
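
SELECT Region, Number, Word
FROM dbo.SheepCountingWords
ORDER BY Region, Number;  -- lists what we have imported, region by region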


Converting to Table-source JSON Arrays of Objects that have Embedded Arrays

What if you want to import the sheep-counting words from several regions? So far, what we’ve been doing is fine for a collection that models a single table. However, real life isn’t like that. Not even Sheep-Counting Words are like that. A little internalized Chris Date will be whispering in your ear that there are two relations here, a region and the name for a number.

Your JSON for a database of sheep-counting words will more likely look like this (I’ve just reduced it to two numbers in the sequence array rather than the original twenty). Each JSON document in our collection has an embedded array.
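
Something along these lines (again, the values are illustrative):

[
  {
    "Region": "Swaledale",
    "Sequence": [ { "Number": 1, "Word": "Yan" }, { "Number": 2, "Word": "Tan" } ]
  },
  {
    "Region": "Teesdale",
    "Sequence": [ { "Number": 1, "Word": "Yan" }, { "Number": 2, "Word": "Tean" } ]
  }
]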

After a bit of thought, we remember that the OpenJSON function actually allows you to put a JSON value in a column of the result. This means that you just need to CROSS APPLY each embedded array, passing to the ‘cross-applied’ OpenJSON function the JSON fragment representing the array, which it will then parse for you.
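
Assuming the format above, with the embedded array in a "Sequence" key, the query might look like this:

SELECT Regions.Region, Words.Number, Words.Word
FROM OpenJSON(@TheJSON)
  WITH (Region NVARCHAR(40), Sequence NVARCHAR(MAX) AS JSON) AS Regions
CROSS APPLY OpenJSON(Regions.Sequence)
  WITH (Number INT, Word NVARCHAR(30)) AS Words;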

I haven’t found the fact documented anywhere, but you can leave out the path elements from the column declaration of the WITH statement if the columns are exactly the same as the JSON keys, with matching case.

The ability to drill into sub-arrays by cross-applying OpenJSON function calls allows us to easily insert a large collection with a number of documents that have embedded arrays. This is looking a lot more like something that could, for example, tackle the import of a MongoDB collection as long as it was exported as a document array with commas between documents. I’ll include, with the download on GitHub, the JSON file that contains all the sheep-counting words that have been collected. Here is a sketch of the updated stored procedure:
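
DROP PROCEDURE IF EXISTS #InsertJsonCountingWords;  -- replace the earlier version
GO
CREATE PROCEDURE #InsertJsonCountingWords @TheJSON NVARCHAR(MAX)
AS
MERGE dbo.SheepCountingWords AS target
USING
  (SELECT Regions.Region, Words.Number, Words.Word
   FROM OpenJSON(@TheJSON)
     WITH (Region NVARCHAR(40), Sequence NVARCHAR(MAX) AS JSON) AS Regions
   CROSS APPLY OpenJSON(Regions.Sequence)
     WITH (Number INT, Word NVARCHAR(30)) AS Words
  ) AS source
ON target.Number = source.Number AND target.Region = source.Region
WHEN NOT MATCHED BY TARGET THEN
  INSERT (Number, Word, Region)
  VALUES (source.Number, source.Word, source.Region);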

We can now very quickly ingest the whole collection into our table, pulling the data in from a file. We include this file with the download on GitHub, so you can try it out. There are thirty-three different regions in the JSON file.
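
Assuming the file has been saved as UTF-16 (the path here is illustrative), OPENROWSET will read it in:

DECLARE @JSON NVARCHAR(MAX) =
  (SELECT BulkColumn
   FROM OPENROWSET (BULK 'C:\data\SheepCountingWords.json', SINGLE_NCLOB) AS JsonFile);

EXECUTE #InsertJsonCountingWords @TheJSON = @JSON;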

We can now check that it is all in and correct:
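
SELECT Region, COUNT(*) AS [Words Imported]
FROM dbo.SheepCountingWords
GROUP BY Region
ORDER BY Region;  -- expect a row for each of the thirty-three regions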

Giving …

[Figure: query results listing the counting words imported for each of the thirty-three regions]

Just as a side-note, this data was collected for this article in various places on the internet but mainly from Yan Tan Tethera. Each table was pasted into Excel and tidied up. The JSON code was created by using three simple functions, one for the cell-level value, one for the row value and a final summation. This allowed simple adding, editing and deleting of data items. The technique is only suitable where columns are of fixed length.

Importing a More Complex JSON Data Collection into a SQL Server Database

We have successfully imported the very simplest JSON files into SQL Server. Now we need to consider those cases where the JSON document or collection represents more than one table.

In any relational database, we can take either of two approaches to JSON data: we can accommodate it, meaning that we treat it as an ‘atomic’ unit and store the JSON unprocessed, or we can assimilate it, meaning that we turn the data into a relational format that can be easily indexed and accessed.

  • To accommodate JSON, we store it as a CLOB, usually NVARCHAR(MAX), with extra columns containing the extracted values for the data fields with which you would want to index the data. This is fine where all the database has to do is to store an application object without understanding it.
  • To assimilate JSON, we need to extract all the JSON data and store it in a relational form.

Our example represents a very simple customer database with ten linked tables. We will first accommodate the JSON document by creating a table (dbo.JSONDocuments) that merely stores, in each row, the reference to the customer, along with all the information about that customer, each aspect (addresses, phones, email addresses and so on) in separate columns as CLOB JSON strings.

We then use this table to successively assimilate each JSON column into the relational database.

This means that we need to parse the full document only once.

To be clear about the contents of the JSON file, we will be cheating by using spoof data. We would never have unencrypted personal information in a database or a JSON file. Credit Card information would never be unencrypted. This data is generated entirely by SQL Data Generator, and the JSON collection contains 70,000 documents. The method of doing it is described here.

We’ll make other compromises. We’ll have no personal identifiers either. We will simply use the document order. In reality, the JSON would store the surrogate key of person_id.

The individual documents will look something like this:
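
(The key names and values below are illustrative; the real collection’s documents follow the same general shape, with an array for each aspect of the customer.)

{
  "FullName": "Mary Smith",
  "Name": { "Title": "Mrs", "FirstName": "Mary", "LastName": "Smith" },
  "Addresses": [
    { "TypeOfAddress": "Home", "FullAddress": "34 Froggatt Street, Birmingham",
      "MovedIn": "2015-03-01", "MovedOut": null }
  ],
  "Cards": [
    { "CardNumber": "4444111122223333", "ValidFrom": "2017-01-01", "ValidTo": "2020-01-01" }
  ],
  "EmailAddresses": [ { "EmailAddress": "mary.smith@example.com" } ],
  "Notes": [ { "Text": "Chased overdue invoice", "Date": "2018-02-11" } ],
  "Phones": [
    { "TypeOfPhone": "Mobile", "DiallingNumber": "07700 900123",
      "Dates": [ { "From": "2016-05-01", "To": null } ] }
  ]
}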

We will import this into a SQL Server database designed like this:

[Figure: database diagram of the ten linked tables in the customer database]

The build script is included with the download on GitHub.

So, all we need now is the batch to import the JSON file that contains the collection and populate the table with the data. We will now describe individual parts of the batch.

We start out by reading the customersUTF16.json file into a variable.
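
As before, OPENROWSET does the work (the path is illustrative; the file must be UTF-16 for SINGLE_NCLOB):

DECLARE @JSON NVARCHAR(MAX) =
  (SELECT BulkColumn
   FROM OPENROWSET (BULK 'C:\data\customersUTF16.json', SINGLE_NCLOB) AS JsonFile);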

The next step is to create a table at the document level, with the main arrays within each document represented by columns. (In some cases, there are sub-arrays; the phone numbers, for example, have an array of dates.) This means that this initial slicing of the JSON collection needs to be done only once; a sketch of such a table follows the list below. In our case, there are:

  • The details of the Name,
  • Addresses,
  • Credit Cards,
  • Email Addresses,
  • Notes,
  • Phone numbers
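
A sketch of such a table, using the key names assumed in the sample document above (any matching design will do):

CREATE TABLE dbo.JSONDocuments
  (
  Document_id INT NOT NULL PRIMARY KEY,  -- the document order, standing in for person_id
  FullName NVARCHAR(100) NOT NULL,
  Name NVARCHAR(MAX) NULL,               -- JSON: the details of the name
  Addresses NVARCHAR(MAX) NULL,          -- JSON: the array of addresses
  Cards NVARCHAR(MAX) NULL,              -- JSON: the array of credit cards
  EmailAddresses NVARCHAR(MAX) NULL,     -- JSON: the array of email addresses
  Notes NVARCHAR(MAX) NULL,              -- JSON: the array of notes
  Phones NVARCHAR(MAX) NULL              -- JSON: the array of phone numbers
  );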

We fill this table via a call to OpenJSON. By doing this, we have the main details of each customer available to us when slicing up embedded arrays. The batch is designed so it can be rerun and should be idempotent. This means that there is less of a requirement to run the process in a single transaction.

Now we fill this table with a row for each document, each representing the entire data for a customer. Each item of root data, such as the id and the customer’s full name, is held as a column. All other columns hold JSON. This table is an ‘accommodation’ of the JSON data, in that each row represents a customer, but each JSON document in the collection is shredded to provide a JSON string that represents the attributes and relations of that customer. We can now assimilate this data step-by-step.
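
Filling the table might look like this, using OpenJSON’s default key/value schema to preserve document order (the JSON key names are those assumed above):

INSERT INTO dbo.JSONDocuments
  (Document_id, FullName, Name, Addresses, Cards, EmailAddresses, Notes, Phones)
SELECT CONVERT(INT, Documents.[key]) + 1,  -- document order stands in for person_id
       JSON_VALUE(Documents.value, '$.FullName'),
       JSON_QUERY(Documents.value, '$.Name'),
       JSON_QUERY(Documents.value, '$.Addresses'),
       JSON_QUERY(Documents.value, '$.Cards'),
       JSON_QUERY(Documents.value, '$.EmailAddresses'),
       JSON_QUERY(Documents.value, '$.Notes'),
       JSON_QUERY(Documents.value, '$.Phones')
FROM OpenJSON(@JSON) AS Documents
WHERE NOT EXISTS  -- keep the batch idempotent
  (SELECT 1 FROM dbo.JSONDocuments
   WHERE Document_id = CONVERT(INT, Documents.[key]) + 1);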

First we need to create an entry in the person table if it doesn’t already exist, as that has the person_id. We need to do this first because otherwise the foreign key constraints will protest.
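
A sketch, assuming a dbo.Person table keyed on person_id:

INSERT INTO dbo.Person (person_id, FullName)
SELECT Docs.Document_id, Docs.FullName
FROM dbo.JSONDocuments AS Docs
WHERE NOT EXISTS
  (SELECT 1 FROM dbo.Person WHERE person_id = Docs.Document_id);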

Now we do the notes. We’ll do these first because they are a bit awkward: there is a many-to-many relationship between the notes and the people, because the same standard note can be associated with many customers, such as one for an overdue invoice payment. We’ll use a table variable to allow us to guard against inserting duplicate records.
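
A sketch of the approach, assuming dbo.Note and dbo.NotePerson tables for the many-to-many relationship:

DECLARE @Notes TABLE (person_id INT, Note NVARCHAR(250), InsertionDate DATETIME2);

INSERT INTO @Notes (person_id, Note, InsertionDate)
SELECT Docs.Document_id, TheNotes.[Text], TheNotes.[Date]
FROM dbo.JSONDocuments AS Docs
CROSS APPLY OpenJSON(Docs.Notes)
  WITH ([Text] NVARCHAR(250), [Date] DATETIME2) AS TheNotes;

-- add any note text we have not seen before
INSERT INTO dbo.Note (Note)
SELECT DISTINCT Note FROM @Notes AS NewNotes
WHERE NOT EXISTS (SELECT 1 FROM dbo.Note WHERE Note = NewNotes.Note);

-- link each customer to their notes, guarding against duplicates
INSERT INTO dbo.NotePerson (person_id, note_id, InsertionDate)
SELECT NewNotes.person_id, dbo.Note.note_id, NewNotes.InsertionDate
FROM @Notes AS NewNotes
INNER JOIN dbo.Note ON dbo.Note.Note = NewNotes.Note
WHERE NOT EXISTS
  (SELECT 1 FROM dbo.NotePerson
   WHERE person_id = NewNotes.person_id AND note_id = dbo.Note.note_id);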

Addresses are complicated because they involve three tables. There is the address, which is the physical place; the abode, which records when and why the person was associated with the place; and a third table which constrains the type of abode. We create a table variable, in the same way as for the notes, to support the various queries without any extra shredding.

Credit cards are much easier since they are a simple sub-array.
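
A sketch, assuming a dbo.CreditCard table and the card attributes from the sample document:

INSERT INTO dbo.CreditCard (person_id, CardNumber, ValidFrom, ValidTo)
SELECT Docs.Document_id, Cards.CardNumber, Cards.ValidFrom, Cards.ValidTo
FROM dbo.JSONDocuments AS Docs
CROSS APPLY OpenJSON(Docs.Cards)
  WITH (CardNumber NVARCHAR(20), ValidFrom DATE, ValidTo DATE) AS Cards
WHERE NOT EXISTS
  (SELECT 1 FROM dbo.CreditCard
   WHERE person_id = Docs.Document_id AND CardNumber = Cards.CardNumber);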

Email Addresses are also simple. We’re on the downhill slopes now.

Now we add these customers’ phones. The various dates for the start and end of the use of a phone number are held in a subarray within the individual phone objects. That makes things slightly more awkward.
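
A sketch, cross-applying twice to reach the dates subarray (the table and key names are as assumed earlier):

INSERT INTO dbo.Phone (person_id, TypeOfPhone, DiallingNumber, [From], [To])
SELECT Docs.Document_id, Phones.TypeOfPhone, Phones.DiallingNumber,
       PhoneDates.[From], PhoneDates.[To]
FROM dbo.JSONDocuments AS Docs
CROSS APPLY OpenJSON(Docs.Phones)
  WITH (TypeOfPhone NVARCHAR(20), DiallingNumber NVARCHAR(20),
        Dates NVARCHAR(MAX) AS JSON) AS Phones
CROSS APPLY OpenJSON(Phones.Dates)
  WITH ([From] DATE, [To] DATE) AS PhoneDates
WHERE NOT EXISTS
  (SELECT 1 FROM dbo.Phone
   WHERE person_id = Docs.Document_id
     AND DiallingNumber = Phones.DiallingNumber);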

Conclusion

JSON support in SQL Server has been the result of a long wait, but now that we have it, it opens up several possibilities.

No SQL Server Developer or admin needs to rule out using JSON for ETL (Extract, Transform, Load) processes to pass data between JSON-based document databases and SQL Server. The features that SQL Server has are sufficient, and far easier to use than the SQL Server XML support.

A typical SQL Server database is far more complex than the simple example used in this article, but it is certainly not an outrageous idea that a database could have its essential static data drawn from JSON documents: these are more versatile than VALUES statements and more efficient than individual INSERT statements.

I’m inclined to smile on the idea of transferring data between the application and database as JSON. It is usually easier for front-end application programmers, and we database folks can, at last, do all the checks and transformations to accommodate data within the arcane relational world, rather than insist on the application programmer doing it. It will also decouple the application and database to the extent that the two would no longer need to shadow each other in terms of revisions.

JSON collections of documents represent an industry-standard way of transferring data. JSON is today’s CSV, and it is good to know that SQL Server supports it.


SQL – Simple Talk