Tag Archives: Memory

New memory options for Analysis Services

We’re happy to introduce some new memory settings for Azure Analysis Services and SQL Server Analysis Services tabular models. These new settings are primarily for resource governance, and in some cases can speed up data refresh.

IsAvailableInMdx

The IsAvailableInMdx column property is available in Azure Analysis Services and SQL Server Analysis Services 2017 CU7.

Setting it to false prevents building an attribute hierarchy for the column, which reduces memory consumption. The trade-off is that the column is not available for group-by queries from MDX clients such as Excel. Fact (transactional) table columns often don’t need to be grouped, and they are often the ones that use the most memory.

This property can also improve the performance of refresh operations, particularly for tables with many partitions, because building attribute hierarchies can be very expensive: they are built over all the data in the table.

Currently, this property is not yet exposed in SSDT. It must be set in the JSON-based metadata by using Tabular Model Scripting Language (TMSL) or the Tabular Object Model (TOM). This property is specified as a Boolean.

The following snippet of JSON-based metadata from the Model.bim file disables attribute hierarchies for the Sales Amount column:

  {
    "name": "Sales Amount",
    "dataType": "decimal",
    "sourceColumn": "SalesAmount",
    "isAvailableInMdx": false
  }

QueryMemoryLimit

The Memory\QueryMemoryLimit property can be used to limit memory spools built by DAX queries submitted to the model. Currently, this property is only available in Azure Analysis Services.

Changing this property can be useful in controlling expensive queries that result in significant materialization. If the spooled memory for a query hits the limit, the query is cancelled and an error is returned, reducing the impact on other concurrent users of the system. Currently, MDX queries are not affected. It does not account for other general memory allocations used by the query.

A value from 1 to 100 is interpreted as a percentage; a value greater than 100 is interpreted as a number of bytes. For example, a setting of 50 means 50 percent, while 1073741824 means 1 GB. The default value of 0 means no limit is specified or applied.

You can set this property by using the latest version of SQL Server Management Studio (SSMS), in the Server Properties dialog box. See the Server Memory Properties article for more information.


DbpropMsmdRequestMemoryLimit

The DbpropMsmdRequestMemoryLimit XMLA property can be used to override the Memory\QueryMemoryLimit server property value for a connection. Currently, this property is only available in Azure Analysis Services.  The unit of measure is in kilobytes. See the Connection String Properties article for more information.
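
For illustration only, here is a hypothetical connection string that overrides the server-level limit to 1 GB (1,048,576 KB) for a single connection; the region, server, and database names are placeholders:

  Data Source=asazure://<region>.asazure.windows.net/<servername>;Initial Catalog=<databasename>;DbpropMsmdRequestMemoryLimit=1048576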

RowsetSerializationLimit

The OLAP\Query\RowsetSerializationLimit server property limits the number of rows returned in a rowset to clients. Currently, this property is only available in Azure Analysis Services.

This property, set in the Server Properties dialog box in the latest version of SSMS, applies to both DAX and MDX. It can be used to protect server resources from extensive data export usage. Queries submitted to the server that would exceed the limit are cancelled and an error is returned. The default value is -1, meaning no limit is applied.


Analysis Services Team Blog

You may see an “out of user memory quota” message in the errorlog when you use the In-Memory OLTP feature

Recently we got an inquiry from a customer who received the following message in the errorlog and wanted to know why.

[INFO] HkDatabaseTryAcquireUserMemory(): Database ID: [7]. Out of user memory quota: requested = 131200; available = 74641; quota = 34359738368; operation = 1.

This was my first time seeing this error. As usual, I relied on the source code to find answers. The message is the result of enforcing the memory quota for In-Memory OLTP usage. As documented in “In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1”, SQL Server 2016 SP1 started allowing In-Memory OLTP in all editions, but it enforces memory quotas for editions other than Enterprise. The above message is simply telling you that you have reached the quota and that whatever operation you attempted was denied.

In addition to the message in the errorlog, the end user (application) should have received the following error, per the blog post “In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1”:

Msg 41823, Level 16, State 171, Line 6 Could not perform the operation because the database has reached its quota for in-memory tables. See ‘http://go.microsoft.com/fwlink/?LinkID=623028’ for more information.

Let’s take a closer look at the message printed in errorlog:

[INFO] HkDatabaseTryAcquireUserMemory(): Database ID: [7]. Out of user memory quota: requested = 131200; available = 74641; quota = 34359738368; operation = 1.

  1. 131200 is the number of bytes requested.
  2. 74641 is the number of bytes available to use.
  3. 34359738368 is the quota in bytes (32 GB, the quota for Standard edition).
  4. operation = 1 means the memory was requested to create a memory optimized table variable.

To help you identify the kind of operation involved, here is the complete list of ‘operation’ values:

number  operation
0       create memory optimized table
1       create memory optimized table variable
2       insert
3       update

Knowing what each number means will give you more information to investigate.   

Additionally, if you repeatedly use memory optimized tables as table valued parameters, you may encounter this error even if you use them in separate batches and the memory should have been released. We are looking to address this issue in future cumulative updates. For now, you either need to use Enterprise edition, periodically restart SQL Server, or take the database offline and back online.
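
To see how much memory the memory optimized tables in a database are currently consuming against the quota, one option is to query the sys.dm_db_xtp_table_memory_stats DMV in the affected database. A minimal sketch (all values are reported in KB; memory optimized table variables and table valued parameters are not broken out per object here):

-- memory allocated and used by memory optimized tables and their indexes
-- in the current database (all values are in KB)
SELECT
    OBJECT_NAME(object_id)            AS table_name,
    memory_allocated_for_table_kb,
    memory_used_by_table_kb,
    memory_allocated_for_indexes_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_allocated_for_table_kb DESC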

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus


CSS SQL Server Engineers

Memory optimized table variable and cardinality estimate

In a previous blog, I talked about how a memory optimized table variable consumes memory until the end of the batch. In this blog, I want to make you aware of the cardinality estimate for memory optimized table variables, as we have had customers call in for clarification. By default, a memory optimized table variable behaves the same way as a disk based table variable: the optimizer estimates that it contains 1 row. For a disk based table variable, you can control the estimate by using option (recompile) at the statement level (see this blog) or by using trace flag 2453.

You can use the same two approaches on a memory optimized table variable if you use it in an ad hoc query or inside a regular (interpreted) TSQL stored procedure; the behavior will be the same. This little repro shows that the estimate is correct with option (recompile):

create database IMOLTP
go
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA 
go
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\sqldata\imoltp_mod2') TO FILEGROUP imoltp_mod
go

use IMOLTP
go

CREATE TYPE dbo.test_memory AS TABLE
(c1 INT NOT NULL INDEX ix_c1,
c2 CHAR(10))
WITH (MEMORY_OPTIMIZED=ON);


go

DECLARE @tv dbo.test_memory
set nocount on

declare @i int
set @i = 1
while @i < 10000
begin
    insert into @tv values (@i, 'n')
    set @i = @i + 1
end
set statistics xml on
--this will work and the estimate will be correct
select * from @tv t1 join @tv t2 on t1.c1=t2.c1 option (recompile, querytraceon 2453)
set statistics xml off
go


But the problem occurs when you use it inside a natively compiled stored procedure. In this case, the optimizer will always estimate 1 row, and you can’t change that, because natively compiled procedures don’t allow statement level recompilation. If you try to use these hints in a natively compiled procedure, you will get errors and the procedure won’t be created:


create procedure test
with native_compilation, schemabinding 
as  
begin atomic with 
(transaction isolation level = snapshot, 
language = N'English')

DECLARE @tv dbo.test_memory
declare @i int
set @i = 1
while @i < 10000
begin
    insert into @tv values (@i, 'n')
    set @i = @i + 1
end
--you can't add TF 2453 or recompile
select t1.c1, t2.c1 from @tv t1 join @tv t2 on t1.c1=t2.c1 option (recompile, querytraceon 2453)
end 
go


Msg 10794, Level 16, State 45, Procedure test, Line 17 [Batch Start Line 39]
The query hint ‘RECOMPILE’ is not supported with natively compiled modules.
Msg 10794, Level 16, State 45, Procedure test, Line 17 [Batch Start Line 39]
The query hint ‘QUERYTRACEON’ is not supported with natively compiled modules.

So what is the solution? For natively compiled procedures, here is some advice:

1. Limit the number of rows inserted into the memory optimized table variable.

2. If you are joining a memory optimized table variable that has lots of rows, consider using a SCHEMA_ONLY memory optimized table instead (see the sketch after this list).

3. If you know your data, you can potentially re-arrange the join and use option (force order) to work around it.
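
As an illustration of advice 2, here is a minimal, hypothetical sketch (not from the original post) that uses a SCHEMA_ONLY memory optimized table in place of a large table variable. The column layout reuses the dbo.test_memory shape from the repro above, and dbo.some_source_table is a placeholder. Because it is a real table it carries statistics, so joins against it can be estimated from actual row counts; natively compiled procedures compile their plans at creation time, so load representative data before creating the procedure.

-- hypothetical SCHEMA_ONLY staging table: the schema is durable, the data is not,
-- but unlike a table variable it has statistics
CREATE TABLE dbo.test_memory_staging
(c1 INT NOT NULL INDEX ix_c1,
c2 CHAR(10))
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
go

-- load the rows you would otherwise have put in the table variable,
-- then refresh statistics so the row count is reflected in estimates
INSERT INTO dbo.test_memory_staging (c1, c2)
SELECT c1, c2 FROM dbo.some_source_table;  -- placeholder source

UPDATE STATISTICS dbo.test_memory_staging;
go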

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus


CSS SQL Server Engineers

Unable to issue ALTER TABLE for a memory optimized table

Recently I assisted on a customer issue where the customer wasn’t able to alter a memory optimized table; the following error was raised:

Msg 41317, Level 16, State 3, Procedure ddl, Line 4 [Batch Start Line 35]
A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.

If you access a memory optimized table, your transaction can’t span databases or access model or msdb. But the customer’s ALTER statement didn’t involve any other database, so why the error?

To cut to the chase, we captured a SQL Server user dump when the error was raised. From analyzing the dump, we discovered that a DDL trigger was defined for ALTER TABLE, and that trigger inserts event data into msdb.

The solution is to disable the DDL trigger and then issue the ALTER statement.

Demo

CREATE DATABASE imoltp
CONTAINMENT = NONE
ON  PRIMARY
( NAME = N'imoltp', FILENAME = N'C:\SQLData\SQL16A\imoltp.mdf' ),
FILEGROUP [InMemory] CONTAINS MEMORY_OPTIMIZED_DATA  DEFAULT
( NAME = N'imoltpfg', FILENAME = N'C:\SQLData\SQL16A\imoltpfg' , MAXSIZE = UNLIMITED)
LOG ON
( NAME = N'imoltp_log', FILENAME = N'C:\SQLData\SQL16A\imoltp_log.ldf' , SIZE = 25600KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

go
use IMOLTP
go

CREATE TRIGGER ddl_trigger ON DATABASE
FOR ALTER_TABLE
AS
  set nocount on
  begin
    -- logs the database name and the ALTER TABLE statement into a tracking table in msdb (the table must already exist)
    insert into msdb.dbo.tblTrack values (DB_NAME(), EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'varchar(max)'))
  end
go


CREATE TABLE [dbo].t
(
    [id] [bigint] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_id]  PRIMARY KEY NONCLUSTERED HASH
(
    [id]
)WITH ( BUCKET_COUNT = 16777216)

)WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )

go

--the following alter will fail with this error:
--Msg 41317, Level 16, State 3, Procedure ddl, Line 4 [Batch Start Line 35]
--A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.

alter table t add c2 int not null DEFAULT 1 WITH VALUES

go


disable trigger ddl_trigger on database


--this will succeed as the ddl trigger is disabled
alter table t add c2 int not null DEFAULT 1 WITH VALUES

--reenable
enable trigger ddl_trigger on database

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus


CSS SQL Server Engineers

Micron on the Role of Memory in Digital Transformation, AI, VR, Mobility and IoT


Have any of you spent time considering how digital transformation, artificial intelligence, IoT, mobility, and virtual and augmented reality are impacted by computer memory? Me neither, until this week at GSMA’s Mobile World Congress 2017 in Barcelona, where I had the privilege of interviewing Micron Technologies’ Gino Skulick, II and getting educated on it. Very cool information!


  1. Artificial Intelligence Out of Doors in the Kingdom of Robots
  2. How Digital Leaders are Different
  3. The Three Tsunamis of Digital Transformation – Be Prepared!
  4. Bots, AI and the Next 40 Months
  5. You Only Have 40 Months to Digitally Transform
  6. Digital Technologies and the Greater Good
  7. Video Report: 40 Months of Hyper-Digital Transformation
  8. Report: 40 Months of Hyper-Digital Transformation
  9. Virtual Moves to Real in with Sensors and Digital Transformation
  10. Technology Must Disappear in 2017
  11. Merging Humans with AI and Machine Learning Systems
  12. In Defense of the Human Experience in a Digital World
  13. Profits that Kill in the Age of Digital Transformation
  14. Competing in Future Time and Digital Transformation
  15. Digital Hope and Redemption in the Digital Age
  16. Digital Transformation and the Role of Faster
  17. Digital Transformation and the Law of Thermodynamics
  18. Jettison the Heavy Baggage and Digitally Transform
  19. Digital Transformation – The Dark Side
  20. Business is Not as Usual in Digital Transformation
  21. 15 Rules for Winning in Digital Transformation
  22. The End Goal of Digital Transformation
  23. Digital Transformation and the Ignorance Penalty
  24. Surviving the Three Ages of Digital Transformation
  25. The Advantages of an Advantage in Digital Transformation
  26. From Digital to Hyper-Transformation
  27. Believers, Non-Believers and Digital Transformation
  28. Forces Driving the Digital Transformation Era
  29. Digital Transformation Requires Agility and Energy Measurement
  30. A Doctrine for Digital Transformation is Required
  31. Digital Transformation and Its Role in Mobility and Competition
  32. Digital Transformation – A Revolution in Precision Through IoT, Analytics and Mobility
  33. Competing in Digital Transformation and Mobility
  34. Ambiguity and Digital Transformation
  35. Digital Transformation and Mobility – Macro-Forces and Timing
  36. Mobile and IoT Technologies are Inside the Curve of Human Time



************************************************************************

Kevin Benedict
Senior Analyst, Center for the Future of Work, Cognizant
View my profile on LinkedIn
Follow me on Twitter @krbenedict
Subscribe to Kevin’s YouTube Channel
Join the Linkedin Group Strategic Enterprise Mobility
Join the Google+ Community Mobile Enterprise Strategies


***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

Be aware of the 701 error if you use a memory optimized table variable in a loop

In the blog post “Importance of choosing correct bucket count of hash indexes on a memory optimized table”, I talked about encountering performance issues with an incorrectly sized bucket count. I was actually investigating an out of memory issue with the following error:

Msg 701, Level 17, State 103, Line 11
There is insufficient system memory in resource pool ‘default’ to run this query.

I simplified the scenario, but the customer’s code is very similar to the loop below. Basically, this customer tried to insert 1 million rows into a memory optimized table variable and process them. Then he deleted the rows from the table variable and inserted another 1 million. His goal was to process 1 billion rows in total, but before he got there he would run out of memory (the 701 error above).

DECLARE @t2 AS [SalesOrderDetailType_inmem]
insert into @t2 select * from t2

while 1 = 1  --note that this customer didn't use 1=1; I just simplified it to reproduce the issue
begin
       delete @t2
       insert into @t2 select * from t2
end

This customer was puzzled: because he deleted the existing rows, at any given time there should not have been more than 1 million rows, so SQL Server should not have run out of memory.

This is actually by-design behavior, documented in “Memory-Optimized Table Variables”. Here is what it states: “Unlike memory-optimized tables, the memory consumed (including deleted rows) by table variables is freed when the table variable goes out of scope.” With a loop like the one above, all deleted rows are kept and consume memory until the batch ends and the table variable goes out of scope (a possible workaround is sketched after the repro below).

Complete Repro
=============

Step 1: Create a disk based table and populate it with 1 million rows

CREATE table t2(
  [OrderQty] [smallint] NOT NULL,
  [ProductID] [int] NOT NULL,
  [SpecialOfferID] [int] NOT NULL,
  [LocalID] [int] NOT NULL
  )


Step 2: Create a memory optimized table type

CREATE TYPE [SalesOrderDetailType_inmem] AS TABLE(
  [OrderQty] [smallint] NOT NULL,
  [ProductID] [int] NOT NULL,
  [SpecialOfferID] [int] NOT NULL,
  [LocalID] [int] NOT NULL,

  INDEX [IX_ProductID] HASH ([ProductID]) WITH ( BUCKET_COUNT = 1000000),
  INDEX [IX_SpecialOfferID] NONCLUSTERED (LocalID)
)
WITH ( MEMORY_OPTIMIZED = ON )


Step 3: Run the following batch; eventually you will run out of memory.
DECLARE @t2 AS [SalesOrderDetailType_inmem]
insert into @t2 select * from t2

while 1 = 1
begin
       delete @t2
       insert into @t2 select * from t2
end
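
Since the memory is only released when the table variable goes out of scope, one possible workaround (a sketch, not part of the original post) is to scope the table variable inside a stored procedure and call that procedure from the loop; each call releases the table variable's memory, including the deleted rows, when it returns:

-- hypothetical wrapper procedure: @t2 goes out of scope at the end of each call,
-- so its memory (including deleted rows) is freed per iteration
CREATE PROCEDURE dbo.ProcessOneBatch
AS
BEGIN
    DECLARE @t2 AS [SalesOrderDetailType_inmem]
    INSERT INTO @t2 SELECT * FROM t2
    -- ... process the 1 million rows here ...
END
go

WHILE 1 = 1
BEGIN
    EXEC dbo.ProcessOneBatch
END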

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus


CSS SQL Server Engineers

How it is built: the SSAS memory report

So recently I created a new SSAS memory report that you can use to analyze the memory usage of your SSAS servers; read more on the actual report here. In this blog post we’ll look at how this report was built.

First, let’s look at what we need in order to get this information for tabular models with compatibility level 1200 or higher. The most detailed information about a tabular model comes from querying the DMV DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS. This gives us plenty of data to work with, like:

  • Databasename
  • Servername
  • Table (actually dimension name)
  • Column
  • Partition
  • Recordcount
  • Size
  • Compression type

And more, but with the above we should be able to make a great report. Unfortunately there is one issue: the query above can only be sent per database, not for the entire server. This used to be a problem with the previous memory reports, but with Power Query we can solve it. What we will do is run a second discover, “DBSCHEMA_CATALOGS”, that gives us all the databases on the server. Here we will also filter out any database that doesn’t have compatibility level 1200 or higher. We will then use the results of that query to send the DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS query per database.

So to build this, the first thing I did was create a query in Power BI Desktop that returns a list of all the databases on my server using “DBSCHEMA_CATALOGS”. Unfortunately you have to specify a database on the server for each connection to SSAS, while this should really be an optional argument of AnalysisServices.Database. I have asked the M folks to put it on the backlog, but I can live with this for now; this scenario is not that common.

This gives me the list of databases:


Next, I want the user to be able to type in the server name and database through a prompt instead of changing the M script, so I created 2 parameters:


Next I use these 2 parameters in my M expression:

let
Source = AnalysisServices.Database(Server, Database, [Query="select * from $  system.DBSCHEMA_CATALOGS#(lf)where compatibility_level >= '1200'"]),
#"Renamed Columns" = Table.RenameColumns(Source,{{"CATALOG_NAME", "DatabaseName"}}),
#"Removed Columns" = Table.RemoveColumns(#"Renamed Columns",{"DATE_QUERIED", "CURRENTLY_USED", "POPULARITY"}),
#"Added Custom" = Table.AddColumn(#"Removed Columns", "Server", each Server)
in
#"Added Custom"

Done.

Now that was the easy part. Next, I need to use the list of databases to get the information per database. I first do one manually, so I again import directly against my SSAS server using “DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS”.

This gives me the information I care about for a single database on the server (I renamed and removed some columns):


This is the query I used, again leveraging the parameters I defined earlier:

Source = (Database as text, Server as text) => let
Source = AnalysisServices.Database(Server, Database, [Query="select * from $  system.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS"]),
#"Renamed Columns" = Table.RenameColumns(Source,{{"DATABASE_NAME", "Cube"}, {"MEASURE_GROUP_NAME", "Table"}, {"PARTITION_NAME", "Partition"}}),
#"Removed Columns" = Table.RemoveColumns(#"Renamed Columns",{"DIMENSION_NAME"}),
#"Renamed Columns1" = Table.RenameColumns(#"Removed Columns",{{"TABLE_ID", "TableId"}, {"COLUMN_ID", "ColumnId"}, {"SEGMENT_NUMBER", "SegmentNr"}, {"TABLE_PARTITION_NUMBER", "TablePartitionNr"}, {"RECORDS_COUNT", "RecordCount"}, {"ALLOCATED_SIZE", "AllocatedSize"}, {"USED_SIZE", "UsedSize"}, {"COMPRESSION_TYPE", "CompressionType"}, {"BITS_COUNT", "BitsCount"}, {"BOOKMARK_BITS_COUNT", "BookmarkBits"}, {"VERTIPAQ_STATE", "VertiPaqState"}})
in
#"Renamed Columns1"

Now for the interesting part: we need to call the above code, which gets the information per database, for each database on the server. To do this I create a function out of the M script above:

let
Source = (Database as text, Server as text) => let
Source = AnalysisServices.Database(Server, Database, [Query="select * from $  system.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS"]),
#"Renamed Columns" = Table.RenameColumns(Source,{{"DATABASE_NAME", "Cube"}, {"MEASURE_GROUP_NAME", "Table"}, {"PARTITION_NAME", "Partition"}}),
#"Removed Columns" = Table.RemoveColumns(#"Renamed Columns",{"DIMENSION_NAME"}),
#"Renamed Columns1" = Table.RenameColumns(#"Removed Columns",{{"TABLE_ID", "TableId"}, {"COLUMN_ID", "ColumnId"}, {"SEGMENT_NUMBER", "SegmentNr"}, {"TABLE_PARTITION_NUMBER", "TablePartitionNr"}, {"RECORDS_COUNT", "RecordCount"}, {"ALLOCATED_SIZE", "AllocatedSize"}, {"USED_SIZE", "UsedSize"}, {"COMPRESSION_TYPE", "CompressionType"}, {"BITS_COUNT", "BitsCount"}, {"BOOKMARK_BITS_COUNT", "BookmarkBits"}, {"VERTIPAQ_STATE", "VertiPaqState"}})
in
#"Renamed Columns1"
in
Source

This allows me to pass in Database and Server and run the code. The function is called “fnDatabaseSize”:


Now we need to pass in that information automatically from the data. I do this using the following M script:

let
Source = Databases,
#"Removed Columns" = Table.RemoveColumns(Source,{"DESCRIPTION", "ROLES", "DATE_MODIFIED", "COMPATIBILITY_LEVEL", "TYPE", "VERSION", "DATABASE_ID"}),
#"Invoked Custom Function" = Table.AddColumn(#"Removed Columns", "fnDatabaseSize", each fnDatabaseSize([DatabaseName], [Server])),
#"Expanded fnDatabaseSize" = Table.ExpandTableColumn(#"Invoked Custom Function", "fnDatabaseSize", {"AllocatedSize", "BitsCount", "BookmarkBits", "ColumnId", "CompressionType", "Cube", "CUBE_NAME", "Partition", "RecordCount", "SegmentNr", "Table", "TableId", "TablePartitionNr", "UsedSize", "VertiPaqState"}, {"AllocatedSize", "BitsCount", "BookmarkBits", "ColumnId", "CompressionType", "Cube", "CUBE_NAME", "Partition", "RecordCount", "SegmentNr", "Table", "TableId", "TablePartitionNr", "UsedSize", "VertiPaqState"}),
#"Changed Type" = Table.TransformColumnTypes(#"Expanded fnDatabaseSize",{{"AllocatedSize", Int64.Type}, {"BitsCount", Int64.Type}, {"BookmarkBits", Int64.Type}, {"RecordCount", Int64.Type}, {"SegmentNr", Int64.Type}, {"TablePartitionNr", Int64.Type}, {"UsedSize", Int64.Type}})
in
#"Changed Type"

I get to these steps by doing the following in the UI. First I base my query on the information retrieved from the Databases query, and then I remove all the columns except name and server:


And then for the magic part, in the next step I invoke a custom function:


Fortunately, Power BI Desktop automatically recognizes the parameters in my function and asks me which column values it should use for them.

This results in a new table being added for each database:


I can now expand and clean this up to get the full results.


This gives me all the information I need for all the databases on my server. Unfortunately, you will probably have gotten an error message about data privacy or a firewall; what happens is that Power Query is trying to prevent a form of data injection. Based on what we are doing here, I could also have taken information from your SSAS server and passed it to some web service to steal data, so Power Query prevents that. To solve this you can go to Options, Privacy, “Always ignore Privacy Level settings”. This will allow these queries to call each other.

I ended up with 6 queries and 2 parameters. The DBSize function is the base for my function; for this one I disabled loading data, as I am not planning to use it directly.


Now I can start creating the reports as described in my other blog post here. As a last step, I saved the report as a template so other users can start clean without my data; you can do this through File, Export, Power BI template.

Hope this gave you some interesting new ideas on what can be done with Power BI Desktop.


Kasper On BI


New SSAS memory usage report using Power BI

I have created several versions of the SSAS memory usage report here and here. But with the new 1200 compatibility level, those memory reports stopped working because the DMVs they relied on no longer work for these databases. So I decided it was time to create a new one, this time of course using Power BI. Here I can also use Power Query to do some additional things I couldn’t do before.

So let’s look at the reports and the data we have available; in a follow-up post we’ll look at how I built it.

To start you can download the Power BI Template here on GitHub (more on GitHub later).

Double clicking the file gives you a prompt where you can enter your server name and a database name (pick any; unfortunately Power Query requires us to connect to AS with a database even though it is not needed here).


This will start loading the report with the data from your server (you do have to be admin on the server to run this report).

In the report we will have two tables:

  1. “Databases”. This table contains all the databases on the server that are at compatibility level 1200 or higher (to future-proof it).
  2. “DatabaseSize”. This table contains the low-level detail: database name, partition, column, row count, size in memory, and more interesting information.

Using that I created a few reports:

  1. Databases on server by Record Count, which shows all the databases on the server by record count.
  2. A similar one that shows databases by size in MB.
  3. One that shows tables and partitions, with the ability to select a database so you can filter the results down to a single database. The treemap shows the tables and, within them, the partitions, so you can quickly identify either one. When you hover over a partition you get more information, like its size, record count and the size a single row takes.
  4. The next report lets you look at the top 10 partitions, either for the entire server or per database.
  5. The last report lets you look at the most “expensive” columns, based on the memory they use per row of data.

That’s it. Lots of interesting information already, but there is more hidden in the model that I haven’t explored yet.

As mentioned before, I put this Power BI Desktop template on GitHub. Anyone can download it there or, even better, make updates to it with nifty new things they discover!

Have fun!


Kasper de Jonge Microsoft BI Blog

Decision Fundamentals: Building Institutional Memory


This is my fourth of five blogs expanding on the key points of my FICO World 2016 address. Through these blogs I’m exploring the fundamental ways organizations need to improve their operational decision-making. Organizations can reduce complexity and create competitive advantage through:

1. Capturing subject matter expertise
2. Intelligent solution creation
3. Faster insight to execution
4. Building institutional memory
5. Greater analytic accessibility

Let’s talk about institutional memory.

How Elephants Survive

In my FICO World presentation I included a slide with a photo of elephants—not the usual stuff of a corporate deck. But a tale about elephant survival in Tanzania’s Tarangire Park provides a valuable allegory for business.

The Wildlife Conservation Society examined patterns of elephant calf mortality during the park’s drought of 1993. The drought was the region’s most severe in over 40 years. Sadly, during a nine-month period of that year, 16 out of 81 elephant calves died. This mortality rate of 20% was 10 times higher than during non-drought years.

Researchers noted correlations between calf survivorship, the movements of the groups, and the ages of the female members of the groups. The two groups that left the park during the drought suffered lower mortality rates than the one group that remained.

The groups that left the park had the oldest matriarchs, elephants that may have drawn upon their memories of a Tarangire drought that lasted from 1958 to 1961. The group staying behind had no elephants old enough to remember that event.

This story about drought survival in Africa is highly applicable in a business context…institutional memory brings tremendous benefits.

Building Institutional Memory

Most professionals have been in a situation where they had to examine decisions made by a group of individuals no longer with the company, or who had moved to other departments. There was no supporting data to be had, nor any documentation of the decision process. Just as in Africa, without any institutional memory, there can be dire consequences.

Institutional memory is critical in operationalizing data-driven decision-making. Unless you intend to rely on “matriarchs” who have been around for 30+ years, you’re going to need a system for this, and the system requires five things:

1. Inventory of all decision assets: The most important thing, when it comes to capturing and learning from institutional knowledge, is the creation of a single repository of decision assets that can be mined and searched.

This inventory of decision assets needs to hold information at all levels of granularity, and regardless of whether that asset was created in your shop, or bought or borrowed from outside. All of the following elements must be centrally managed:

  • The variables used to create a score characteristic or a business rule
  • The knowledge models that use those variables, such as a predictive model or a strategy tree
  • The decision models that define how that knowledge is applied to data to make decisions.

2. Decisions correlated with data: We need to correlate all those decision assets with decision data (the data from the decisions made with those assets). This can provide a full picture of what decisions were made, how they were made, and how they have impacted key performance metrics in the past.

3. Record of decision workflow: It’s necessary to know, at any given time, who did what, with what, to what and whom, with whose approval, and why it was done. This record of decision workflow puts you in a position to build and better leverage your institutional memory. It’s a good business practice that can unlock value, and is also increasingly mandated in regulated industries.

4. Transparency of governance process: Decision assets need to be managed across their lifecycle, with a transparent governance process establishing how each asset can be used, updated or replaced. This is especially valuable in highly regulated industries.

5. Validation and analysis: We must understand an asset’s role and constantly validate whether its performance has deteriorated over time. In addition, we must proactively measure that performance to be able to anticipate issues (and opportunities).

Let’s see how all of these steps come together in the real world.

ANZ Builds Institutional Memory with FICO Decision Central Solution

Models are a challenging area for banks to build institutional memory. ANZ is an Australian bank with a complex inventory of 170 models across many portfolios and several countries. ANZ had many different individuals manually creating, modifying, monitoring the performance of, and demonstrating regulatory compliance for those models. This created significant challenges, particularly in transferring knowledge of the models when an individual changed roles or left the bank. ANZ recognized that it needed a new approach to help reduce the amount of time spent monitoring, documenting and identifying deteriorating models.

The bank implemented FICO® Decision Central™ to transform these models into a powerful institutional asset via automated reports, streamlined model reviews and simplified documentation, sustaining better model health and smoother regulatory compliance. It created a “single source of truth” for multiple teams, which improves collaboration, productivity and overall business results. Consistent analytic assets are now produced automatically across all portfolios, creating a long-lived institutional asset that lets individuals focus on innovation and insight generation.

In my next blog I’ll wrap up my discussion of decision fundamentals with a post about greater analytic accessibility.


FICO