Tag Archives: reduce

4 Ways to Reduce the Cost of Custom Reports in Microsoft Dynamics 365

September 4, 2020   CRM News and Info

When estimating the cost of Microsoft Dynamics 365 projects, reporting can sometimes represent a significant portion of the quote. Many customers who want the Dynamics 365 system to fit within their budget ask if there are ways to reduce the project cost. If a large amount of effort is dedicated to building reports, it is an area worth revisiting. Often, the customer can take on the task of building some reports on their own, thereby reducing the overall project cost.

When you look at the cost of Dynamics 365 reporting you might ask:

  • To what degree can a Dynamics 365 user develop, create, and run their own reports?
  • Is SQL or SSRS (SQL Services Reporting Server) experience required?
  • What is the difference between “reports” that users can create themselves and a report that has to be developed by an experienced Dynamics 365 Partner?
  • What kind of data could be viewed in dashboards instead of custom reports?

The answer is that there are a number of options for creating reports in Microsoft Dynamics 365.

  1. Simple on-demand reports, i.e. push a button, get a report, can be easily created using the Report Wizard. If you understand the data and where it lives, you don’t need SSRS skills to create insightful and actionable reports.
  2. More complex Microsoft Dynamics 365 reports require SSRS skills; however, we have many customers who have come up to speed on SSRS and turn out some fairly complex reports. Having internal resources that are well-versed in creating reports is gold. It’s an area data-driven companies will want to invest in.
  3. Another simple way to accomplish reporting is through dynamic filtered views. Views allow you to pull out groups of Contacts or Opportunities that have things in common, like location or sales stage. You can then tie views to charts and create some pretty compelling dashboards. Dashboards may also allow you to combine multiple reports into one view. You can also export views to Excel for great on-the-fly reporting.
  4. If the reports are well-defined in terms of layout, criteria, and parameters, and you need SSRS reports, you could look to outsource the report development. If you work with a reputable, reliable resource well versed in SSRS, you could get your report development done at a fraction of the cost.

When you work with a Microsoft Dynamics 365 partner that has your best interests in mind, they will help you evaluate realistic options to lower the cost of custom reporting and still get the end results you need. The Crowe CRM team can help.

If you are interested in evaluating Microsoft Dynamics 365, contact us today.

By Ryan Plourde, Crowe, a Microsoft Dynamics 365 Gold Partner www.CroweCRM.com

Follow us on Twitter: @CroweCRM

CRM Software Blog | Dynamics 365

Machine learning groups form Consortium for Python Data API Standards to reduce fragmentation

August 18, 2020   Big Data

Deep learning framework Apache MXNet and Open Neural Network Exchange (ONNX) today launched the Consortium for Python Data API Standards, a group that wants to make it easier for machine learning practitioners and data scientists to work with data no matter which framework, library, or tool from the Python ecosystem it comes from. ONNX is a group initially formed by Facebook and Microsoft in 2017 to power interoperability between frameworks and tools. Today the group includes nearly 40 organizations influential in AI and data science, like AWS, Baidu, and IBM, as well as hardware makers like Arm, Intel, and Qualcomm.

The group, which will develop standards for dataframes and arrays or tensors, said the consortium is necessary due to the fragmentation of the data ecosystem across many kinds of frameworks in recent years.

Other major frameworks include TensorFlow, PyTorch, and NumPy; the Python ecosystem also includes dataframe libraries like Pandas, PySpark, and Apache Arrow. PyTorch, one of the most popular machine learning frameworks in use today, is not part of the consortium, a Facebook company spokesperson told VentureBeat in an interview.

“Currently, array and dataframe libraries all have similar APIs, but with enough differences that using them interchangeably isn’t really possible,” group members said in a blog post today. “We aim to grow this Consortium into an organization where cross-project and cross-ecosystem alignment on APIs, data exchange mechanisms and other such topics happens. These topics require coordination and communication to a much larger extent than they require technical innovation. We aim to facilitate the former, while leaving the innovating to current and future individual libraries.”

Initial efforts will start with a working group, which will then request feedback from array and dataframe library maintainers and iterate before the first version of the standard is made available for use. The first feedback session begins next month. As part of the launch, the group is releasing tools for comparing array or tensor libraries and for tracking some of the primary functions of a dataframe library.

While AI research dates back to the 1950s, the practical need to create standards and build an infrastructure for benchmark testing, interoperability, and other developer needs led to the formation of groups like ONNX. Beyond machine learning, other examples of tech groups formed to create standards include the C++ standards committee and the Open Geospatial Consortium.

Big Data – VentureBeat

Reduce CPU of Large Analytic Queries Without Changing Code

March 27, 2020   BI News and Info

When Microsoft came out with columnstore in SQL Server 2012, they introduced a new way to process data called batch mode. Batch mode processes a group of rows together as a batch, instead of processing the data row by row. By processing data in batches, SQL Server uses less CPU than with row-by-row processing. To take advantage of batch mode, a query had to reference a table that contained a columnstore index. If your query only involved tables that contain data in rowstores, then your query would not use batch mode. That has now changed. With version 15.x of SQL Server, aka SQL Server 2019, Microsoft introduced a new feature called Batch Mode on Rowstore.

Batch Mode on Rowstore is one of many new features that were introduced in Azure SQL Database and SQL Server 2019 to help speed up rowstore queries that don’t involve a columnstore. The new Batch Mode on Rowstore feature can improve the performance of large analytic queries that scan many rows, where these queries aggregate, sort, or group selected rows. Microsoft includes this new batch mode feature in Intelligent Query Processing (IQP). See Figure 1 for a diagram from Microsoft’s documentation that shows all the IQP features introduced in Azure SQL Database and SQL Server 2019. It also shows the features that were originally part of Adaptive Query Processing, included in the older generation of Azure SQL Database and SQL Server 2017.

Figure 1: Intelligent Query Processing

Batch Mode on Rowstore can help speed up your big analytic queries but might not kick in for smaller OLTP queries (more on this later). Batch mode has been around for a while for columnstore operators, but it wasn’t until SQL Server version 15.x that batch mode worked on rowstores without a hack. Before seeing the new Batch Mode on Rowstore feature in action, let me first explain how batch mode processing works.

How Batch Mode Processing Works

When the database engine processes a Transact-SQL statement, the underlying data is processed by one or more operators. These operators can process the data using two different modes: row or batch. At a high level, row mode can be thought of as processing rows of data one row at a time, whereas batch mode processes multiple rows of data together in a batch. Processing batches of rows at a time, rather than row by row, can reduce CPU usage.

When batch mode is used for rowstore data, the rows of data are scanned and loaded into a vector storage structure known as a batch. Each batch is a 64KB internal storage structure that can contain between 64 and 900 rows of data, depending on the number of columns involved in the query. Each column used by the query is stored in a contiguous column vector of fixed-size elements, and a qualifying rows vector indicates which rows are still logically part of the batch (see Figure 2, which comes from a Microsoft Research paper).

Rows of data can be processed very efficiently when an operation uses batch mode, compared to row mode processing. For instance, when a batch mode filter operation needs to qualify rows that meet a given column filter criterion, all that is needed is to scan the vector that contains the filtered column and mark each row appropriately in the qualifying rows vector, based on whether the column value meets the filter criterion.

Figure 2: A row batch is stored column-wise and contains one vector for each column plus a bit vector indicating qualifying rows

SQL Server executes fewer instructions per row when using batch mode than when using row mode. By reducing the number of instructions, batch mode queries typically use less CPU than row mode queries. Therefore, if a system is CPU bound, batch mode might help reduce the environment’s CPU footprint.

In a given execution plan, SQL Server might use both batch and row mode operators, because not all operators can process data in batch mode. When mixed-mode operations are needed, SQL Server needs to transition between batch mode and row mode processing. This transition comes at a cost. Therefore, SQL Server tries to minimize the number of transitions to help optimize the processing of mixed-mode execution plans.

For the engine to consider batch mode for a rowstore, the database compatibility level must be set to 150. With the compatibility level set to 150, the database engine performs a few heuristic checks to make sure the query qualifies to use batch mode. One of the checks is to make sure the rowstore contains a significant number of rows; currently, that magic number appears to be 131,072. Dmitry Pilugin wrote an excellent post on this magic number, and I verified that it is still the magic number for the RTM release of SQL Server 2019. That means batch mode doesn’t kick in for smaller tables (fewer than 131,072 rows), even if the database is set to compatibility level 150. Another heuristic check verifies that the rowstore is using either a b-tree or a heap for its storage structure; batch mode doesn’t kick in if the table is an in-memory table. The cost of the plan is also considered: if the optimizer finds a cheaper plan that doesn’t use Batch Mode on Rowstore, then the cheaper plan is used.
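If you want to verify the prerequisites you control directly, the short sketch below shows one way to do it. This is not part of the original test script; it is a minimal pre-flight check that assumes the WideWorldImportersDW sample database used later in the article and relies only on the standard sys.databases and sys.partitions catalog views.

USE WideWorldImportersDW;
GO
-- Batch Mode on Rowstore requires database compatibility level 150
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'WideWorldImportersDW';

ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 150;
GO
-- The rowstore should hold at least 131,072 rows (the observed heuristic threshold)
-- and be stored as a heap or b-tree; Fact.[Order] is used here purely as an example
SELECT SUM(p.rows) AS row_count
FROM sys.partitions AS p
WHERE p.[object_id] = OBJECT_ID(N'Fact.[Order]')
  AND p.index_id IN (0, 1);   -- 0 = heap, 1 = clustered index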

To see how this new batch mode feature works on a rowstore, I set up a test that ran a couple of different aggregate queries against the WideWorldImportersDW database.

Batch Mode on Rowstore In Action

This section demonstrates running a simple test aggregate query to summarize a couple of columns of a table that uses heap storage. The example runs the test aggregate query twice. The first execution uses compatibility level 140, so the query must use row mode operators to process the test query. The second execution runs under compatibility level 150 to demonstrate how batch mode improves the processing of the same test query.

After running the test query, I’ll explain how the graphical execution plans show the different operators used between the two test query executions. I’ll also compare the CPU and Elapsed time used between the two queries to identify the performance improvement using batch mode processing versus row mode processing. Before showing my testing results, I’ll first explain how I set up my testing environment.

Setting up Testing Environment

I used the WideWorldImportersDW database as a starting point for my test data. To follow along, you can download the database backup for this DB here. I restored the database to an instance of SQL Server 2019 RTM running on my laptop. Since the Fact.[Order] table in this database isn’t that big, I ran the code in Listing 1 to create a bigger fact table named Fact.OrderBig. The test query aggregates data using this newly created fact table.

Listing 1: Code to create the test table Fact.OrderBig

USE WideWorldImportersDW;
GO
CREATE TABLE Fact.[OrderBig](
    [Order Key] [bigint],
    [City Key] [int] NOT NULL,
    [Customer Key] [int] NOT NULL,
    [Stock Item Key] [int] NOT NULL,
    [Order Date Key] [date] NOT NULL,
    [Picked Date Key] [date] NULL,
    [Salesperson Key] [int] NOT NULL,
    [Picker Key] [int] NULL,
    [WWI Order ID] [int] NOT NULL,
    [WWI Backorder ID] [int] NULL,
    [Description] [nvarchar](100) NOT NULL,
    [Package] [nvarchar](50) NOT NULL,
    [Quantity] [int] NOT NULL,
    [Unit Price] [decimal](18, 2) NOT NULL,
    [Tax Rate] [decimal](18, 3) NOT NULL,
    [Total Excluding Tax] [decimal](18, 2) NOT NULL,
    [Tax Amount] [decimal](18, 2) NOT NULL,
    [Total Including Tax] [decimal](18, 2) NOT NULL,
    [Lineage Key] [int] NOT NULL);
GO
INSERT INTO Fact.OrderBig
   SELECT * FROM Fact.[Order];
GO 100

The code in Listing 1 creates the Fact.OrderBig table, which, at 23,141,200 rows, is 100 times the size of the original Fact.[Order] table.
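As a quick sanity check (not part of the original article), you can confirm the final row count once Listing 1 finishes; the expected total is simply the source table’s 231,412 rows multiplied by the 100 inserts triggered by GO 100.

-- Expected result: 23,141,200 (231,412 source rows x 100 inserts)
SELECT COUNT_BIG(*) AS row_count
FROM Fact.OrderBig;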

Comparison Test Script

To do a comparison test between batch mode and row mode, I ran two different test queries found in Listing 2.

Listing 2: Test script

USE WideWorldImportersDW;
GO
-- Turn on time statistics
SET STATISTICS IO, TIME ON;
-- Clean buffers so cold start performed
DBCC DROPCLEANBUFFERS
GO
-- Prepare Database Compatibility level for Test #1
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 140;
GO
-- Test #1
SELECT [Customer Key],
       SUM(Quantity) AS TotalQty,
       AVG(Quantity) AS AvgQty,
       AVG([Unit Price]) AS AvgUnitPrice
FROM Fact.[OrderBig]
WHERE [Customer Key] > 10 and [Customer Key] < 100
GROUP BY [Customer Key]
ORDER BY [Customer Key];
GO
-- Clean buffers so cold start performed
DBCC DROPCLEANBUFFERS
GO
-- Prepare Database Compatibility level for Test #2
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 150;
GO
-- Test #2
SELECT [Customer Key],
       SUM(Quantity) AS TotalQty,
       AVG(Quantity) AS AvgQty,
       AVG([Unit Price]) AS AvgUnitPrice
FROM Fact.[OrderBig]
WHERE [Customer Key] > 10 and [Customer Key] < 100
GROUP BY [Customer Key]
ORDER BY [Customer Key];
GO

The code in Listing 2 executes two different tests, collects some performance statistics, and cleans the data buffer cache between each test. Both tests run the same simple aggregate query against the Fact.OrderBig table. Test #1 runs the aggregate SELECT statement using compatibility level 140, whereas Test #2 runs the same aggregate SELECT statement using compatibility level 150. By setting the compatibility level to 140, Test #1 uses row mode processing, while Test #2, running under compatibility level 150, allows batch mode to be considered for the test query. Additionally, I turned on the TIME statistics so I could measure performance (CPU and Elapsed time) for each test. By doing this, I can validate the performance note in Figure 3, which comes from this Microsoft documentation.

Figure 3: Documentation Note on Performance

When I ran my test script in Listing 2, I executed it from a SQL Server Management Studio (SSMS) query window. In that query window, I enabled the Include Actual Execution Plan option so that I could compare the execution plans created for both of my tests. Let me review the execution artifacts created when I ran my test script in Listing 2.

Review Execution Artifacts

When I ran my test script, I collected CPU and Elapsed Time statistics as well as the actual execution plans for each execution of my test aggregate query. In this section, I’ll review the different execution artifacts to compare the differences between row mode and batch mode processing.

The CPU and Elapsed time statistics, as well as the actual execution plan, for my first test query (run under compatibility level 140) can be found in Figure 4 and Figure 5, respectively.

Figure 4: CPU and Elapsed Time Statistics for Test #1

Figure 5: Actual Execution Plan under Compatibility Level 140 for Query 1

Figures 6 and 7 below show the time statistics and the actual execution plan when I ran my test query under compatibility level 150.

Figure 6: Execution Statistics for Test #2

Figure 7: Execution Plan for Test #2

The first thing to note is that the plan that ran under compatibility level 150 (Figure 7) is more streamlined than the one that ran under compatibility level 140 (Figure 5). From just looking at the execution plan for the second test query, I can’t tell whether the query (which ran under compatibility level 150) used batch mode. To find out, you must right-click on the SELECT icon in the execution plan for the Test #2 query (Figure 7) and then select the Properties item from the context menu. Figure 8 shows the properties of this query.

Figure 8: Properties for Compatibility Level 150 Query (Test #2)

Notice that the property BatchModeOnRowstoreUsed is True. This property is a new showplan attribute that Microsoft added in SSMS version 18. When this property is true, it means that some of the operators used in processing Test #2 did use a batch mode operation on the Rowstore Fact.OrderBig table.
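If you would rather find such queries without opening each plan in SSMS, the hedged sketch below searches the plan cache for cached statements whose showplan XML carries BatchModeOnRowstoreUsed="true". It uses only the standard plan-cache DMVs; the attribute name is assumed to match what SSMS 18 displays, so treat this as a starting point and verify it against your build.

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts,
       st.[text]     AS query_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)  AS st
-- keep only plans whose showplan XML reports batch mode on a rowstore
WHERE qp.query_plan.exist('//StmtSimple[@BatchModeOnRowstoreUsed="true"]') = 1;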

To review which operators used Batch Mode on Rowstore, you must review the properties of each operator. Figure 9 has some added annotations to the execution plan that shows which operators used batch mode processing and which ones used row mode processing.

Figure 9: Execution Plan for Batch Mode query with Operator property annotations

If you look at the Table Scan (Heap) operator, you can see that the Fact.OrderBig table is a RowStore by reviewing the Storage Property. You can also see that this operation used batch mode by looking at the Actual Execution Mode property. All the other operators ran in batch mode, except the Parallelism operator, which used row mode.

The test table (Fact.OrderBig) contains 23,141,200 rows, and the test query referenced 3 different columns. The query didn’t need all those rows because it was filtered to include only the rows where [Customer Key] was greater than 10 and less than 100. To determine the number of batches the query created, look at the properties of the table scan operator in the execution plan, which is shown in Figure 10.

Figure 10: Number of batches used for Test #2.

The Actual Number of Batches property in Figure 10 shows that the table scan operator of the Test #2 query created 3,587 batches. To determine the average number of rows in each batch, divide the Actual Number of Rows by the Actual Number of Batches. Using this formula, I got, on average, 899.02 rows per batch.
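To make the arithmetic explicit, here is the back-calculation as a trivial query (note that the total row count below is derived from the two figures the plan reports, not stated directly in the text):

-- 3,587 batches at ~899.02 rows per batch implies roughly 3,224,785 qualifying rows
SELECT 3224785.0 / 3587 AS avg_rows_per_batch;   -- returns approximately 899.02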

The cost estimate for each of the queries is the same, 50%. Therefore, to measure performance between batch mode and row mode, I’ll have to look at the TIME statistics.

Comparing Performance of Batch Mode and Row Mode

To compare performance between running batch mode and row mode queries, I ran my test script in Listing 2 ten different times. I then averaged the CPU and Elapsed times between my two different tests and then graphed the results in the chart found in Figure 11.
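If you prefer not to average the SET STATISTICS TIME output by hand, a hedged alternative is to read averages from sys.dm_exec_query_stats after the runs, as sketched below. This is not how the article gathered its numbers, and keep in mind that changing the database compatibility level between tests can flush that database’s cached plans, so capture each test’s numbers before switching levels.

-- Average CPU and elapsed time per cached statement touching the test table
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time  / 1000.0 / qs.execution_count AS avg_cpu_ms,
       qs.total_elapsed_time / 1000.0 / qs.execution_count AS avg_elapsed_ms,
       SUBSTRING(st.[text], 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.[text] LIKE N'%OrderBig%'
ORDER BY qs.total_worker_time DESC;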

Figure 11: CPU and Elapsed time Comparison between Row Mode and Batch Mode

The chart in Figure 11 shows that the row mode test query used a little more than 30% more CPU than the batch mode test query. Both the batch and row mode queries ran in about the same elapsed time. Just like the note (Figure 3) above suggested, this first test showed that considerable CPU improvement can be gained when a simple aggregate query uses batch mode processing. But not all queries are created equal when it comes to performance improvements using batch mode versus row mode.

Not All Queries are Created Equal When It Comes to Performance

The previous test showed a 30% improvement in CPU but little improvement in Elapsed Time. The resource (CPU and Elapsed Time) improvements of batch mode operations versus row mode depend on the query. Here is another contrived test that shows some drastic improvements in Elapsed Time using the new Batch Mode on Rowstore feature. The test script I used for my second performance test can be found in Listing 3.

Listing 3: Stock Item Key Query Test Script

-- Turn on time statistics
SET STATISTICS IO, TIME ON;
-- Clean buffers so cold start performed
DBCC DROPCLEANBUFFERS
GO
-- Prepare Database Compatibility level for Test #1
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 140;
GO
SELECT [Stock Item Key],[City Key],[Order Date Key],[Salesperson Key],
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key]) AS StockAvgQty,
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key],[City Key])
        AS StockCityAvgQty,
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key],[City Key],
        [Order Date Key]) AS StockCityDateAvgQty,
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key],[City Key],
        [Order Date Key],[Salesperson Key])
        AS StockCityDateSalespersonAvgQty
FROM Fact.OrderBig
WHERE [Customer Key] > 10 and [Customer Key] < 100
-- Clean buffers so cold start performed
DBCC DROPCLEANBUFFERS
GO
-- Prepare Database Compatibility level for Test #2
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 150;
GO
SELECT [Stock Item Key],[City Key],[Order Date Key],[Salesperson Key],
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key]) AS StockAvgQty,
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key],[City Key])
        AS StockCityAvgQty,
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key],[City Key],
        [Order Date Key]) AS StockCityDateAvgQty,
    AVG(Quantity) OVER(PARTITION BY [Stock Item Key],[City Key],
        [Order Date Key],[Salesperson Key])
        AS StockCityDateSalespersonAvgQty
FROM Fact.OrderBig
WHERE [Customer Key] > 10 and [Customer Key] < 100

In Listing 3, I used the OVER clause to create four different aggregations, where each aggregation has a different PARTITION specification. To gather the performance statistics for the Listing 3 queries, I ran this script ten different times. Figure 12 shows the CPU and Elapsed Time numbers graphically.

Figure 12: CPU and Elapsed Time comparison for Window Function Query test

As you can see, by creating the different aggregations in Listing 3, I once again saw a big performance improvement in CPU (around 72%). This time, I also got a big improvement in Elapsed Time (a little more than 45%) when batch mode was used. My testing showed that not all queries are created equal when it comes to performance. For this reason, I recommend you test all the queries in your environment to determine how each query performs using this new Batch Mode on Rowstore feature. If you happen to find some queries that perform worse using batch mode, then you can either rewrite them to perform better or consider disabling batch mode for those problem queries.

Disabling Batch Mode on Row Store

If you find you have a few queries that don’t benefit from using batch mode, and you don’t want to rewrite them, then you might consider turning off the Batch Mode on Rowstore feature with a query hint.

By using the DISALLOW_BATCH_MODE hint, you can disable the Batch Mode on Rowstore feature for a given query. The code in Listing 4 shows how I disabled batch mode for the first test query I used in this article.

Listing 4: Using the “DISALLOW_BATCH_MODE” hint to disable batch mode for a single query

SELECT [Customer Key],
       SUM(Quantity) AS TotalQty,
       AVG(Quantity) AS AvgQty,
       AVG([Unit Price]) AS AvgUnitPrice
FROM Fact.[OrderBig]
WHERE [Customer Key] > 10 and [Customer Key] < 100
GROUP BY [Customer Key]
ORDER BY [Customer Key]
OPTION(USE HINT('DISALLOW_BATCH_MODE'));

When I ran the query in Listing 4 against the WideWorldImportersDW database running in compatibility mode 150, the query didn’t invoke any batch mode operations. I verified this by reviewing the properties of each operator; they all processed using a row mode operation. The value of the DISALLOW_BATCH_MODE hint is that I can disable the batch mode feature for a single query. This means it’s possible to be selective about which queries will not consider batch mode when your database is running under compatibility level 150.

Alternatively, you could disable the Batch Mode on Rowstore feature at the database level, as shown in Listing 5.

Listing 5: Disabling Batch Mode at the database level

-- Disable batch mode on rowstore
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

Disabling the batch mode feature at the database level still allows other queries to take advantage of the other new 15.x features. This might be an excellent option if you want to move to version 15.x of SQL Server while you complete testing of all of your large aggregation queries to see how they are impacted by the batch mode feature. Once testing is complete, re-enable batch mode by running the code in Listing 6.

Listing 6: Enabling Batch Mode at the database level

-- Enable batch mode on rowstore
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;

By using the hint or the database-scoped configuration method to disable batch mode, I have control over how this new feature affects the performance of my row mode query operations. It is great that the team at Microsoft provides these different methods to disable and enable the Batch Mode on Rowstore feature, because they give me more flexibility in how I roll out batch mode across a database.

Which Editions Support Batch Mode?

Before you get too excited about how this feature might help the performance of your large analytic queries, I have to tell you the bad news. Batch Mode on Rowstore is not available in all editions of SQL Server. Like many cool new features that have come out in the past, it is first introduced in Enterprise edition only, and then over time it might become available in other editions. Batch Mode on Rowstore is no exception. As of the RTM release of SQL Server 2019, the Batch Mode on Rowstore feature is only available in Enterprise edition, as documented here; it also works on Azure SQL Database. Note that Developer edition supports Batch Mode on Rowstore too, but of course it cannot be used for production work. Be careful when doing performance testing of this new feature on the Developer edition of SQL Server 2019 if you plan to roll out your code into any production environment other than Enterprise. If you want to reduce your CPU footprint using this new feature, then you had better get out your checkbook and upgrade to Enterprise edition, or just wait until Microsoft rolls this feature out to other editions of SQL Server.

Reduce CPU of Large Analytic Queries Without Changing Code

If you have large analytic queries that perform aggregations, you might find that the new Batch Mode on Rowstore feature improves CPU and Elapsed time without changing any code, provided your query environment meets a few requirements. The first requirement is that your query needs to be running on SQL Server version 15.x (SQL Server 2019) or later. The second requirement is that you need to be running an edition of SQL Server that supports the Batch Mode on Rowstore feature. Additionally, the table being queried needs to have at least 131,072 rows and be stored in a b-tree or heap before batch mode is considered for the table.

I am impressed by how much less CPU and Elapsed time was used for my test aggregation queries. If you have a system that runs lots of aggregate queries, then migrating to SQL Server 2019 might be able to eliminate your CPU bottlenecks and get some of your queries to run faster at the same time.

SQL – Simple Talk

Google researchers use multiple cameras to reduce error rates for robot insertion and stacking tasks

February 25, 2020   Big Data

Object-manipulating robots rely on cameras to make sense of the world around them, but these cameras often require careful installation and ongoing calibration and maintenance. A new study published by researchers at Google’s Robotics division and Columbia University proposes a solution, which involves a technique that learns to accomplish tasks using multiple color cameras without an explicit 3D representation. They say that it achieves superior task performance on difficult stacking and insertion tasks compared with baselines.

This latest work builds on Google’s vast body of robotics research. Last October, scientists at the company published a paper detailing a machine learning system dubbed Form2Fit, which aims to teach a picker robot with a suction arm the concept of assembling objects into kits. Google Brain researchers are pursuing a novel robot task planning technique involving deep dynamics models, or DDM, that they claim enables mechanical hands to manipulate multiple objects. And more recently, a Google team took the wraps off of ClearGrasp, an AI model that helps robots better recognize transparent objects.

As the researchers point out, until recently, most automated solutions were designed for rigid settings where scripted robot actions are repeated to move through a predefined set of positions. This approach calls for a highly calibrated setup that can be expensive and time-consuming, and one that lacks the robustness needed to handle changes in the environment. Advancements in computer vision have led to better performance in grasping, but tasks like stacking, insertion, and precision kitting remain challenging. That’s because they require accurate 3D geometric knowledge of the task environment including object shape and pose, relative distances and orientation between locations, and other factors.

By contrast, the team’s method leverages a multi-camera view and a reinforcement learning framework that takes in images from different viewpoints and produces robot actions in a closed-loop fashion. By combining and learning directly from the camera views without an intermediary reconstruction step, they say it’s able to improve state estimation while at the same time increasing the robustness of the system’s actions.

In experiments, the researchers deployed their setup in a simulated environment containing a Kuka arm equipped with a gripper, two bins placed in front of the robot, and three cameras mounted to overlook those bins. The arm was first tasked with stacking in one bin, starting with a single block, either blue or orange in color, placed at a random position. In other tasks, it had to insert a block firmly into a middle fixture and to stack blocks one on top of the other.

The researchers ran 180 data collection jobs across 10 graphics cards to train their reinforcement learning model, with each producing roughly 5,000 episodes per hour for the insertion tasks. They report it achieved success, with “large reductions” to error rates on precision-based tasks — specifically 49.18% on the first stacking task, 56.84% on the second stacking task, and 64.1% on the insertion task. “The effective use of multiple views enables a richer observation of the underlying state relevant to the task,” wrote the paper’s coauthors. “Our multi-view approach enables 3D tasks from RGB cameras without the need for explicit 3D representations and without camera-camera and camera-robot calibration. In the future, similar multi-view benefits can be achieved with a single mobile camera by learning a camera placement policy in addition to the task policy.”

Big Data – VentureBeat

5 Ways Cloud ERP Helps Solar Installers Reduce Soft Costs

January 28, 2020   NetSuite

Posted by John Goode, Senior Director, Channel Marketing

The amount of solar energy connected to the national electric grid has increased more than 20-fold since 2008, as millions of Americans choose clean technology to power their lives. As solar use has grown, technology development, commercialization and the scaling of manufacturing processes have all helped drive down solar hardware costs, according to the Department of Energy (DOE).

The “soft costs” of going solar have not declined nearly as rapidly. Defined as the non-hardware costs associated with moving to solar, soft costs include permitting, financing and installing solar panels, as well as all the expenses solar companies incur to acquire new customers, pay suppliers, train staff and cover their bottom lines.

These soft costs comprise as much as 64% of the total cost of a new solar system, according to the DOE, and make up the largest component of any installation. When a solar installer runs its business using incompatible technology applications that don’t “talk” to each other—and when each area of that business has its own individual applications and data sets—the soft costs can add up pretty quickly.

Blu Banyan, a Berkeley, Calif.-based Solution Provider and SuiteApp developer with a dedicated NetSuite practice for the solar industry, recognizes the challenges of these soft costs and developed SolarSuccess, a NetSuite application specifically optimized for residential and commercial solar installers of all sizes. The NetSuite/SolarSuccess solution helps installers leverage the rapid decline in hardware costs over the past decade, while making the reductions in soft-costs necessary to improve organizational efficiency, profitability and competitive position.

5 Ways ERP Reduces Solar Soft Costs

Here are five ways that a unified, cloud ERP built for the solar industry helps installers effectively reduce their projects’ soft costs and improve their profitability on every project.

1. Replaces widely-used, off-the-shelf software. As companies grow, incompatible “point” applications that don’t scale effectively or profitably in combination—including widely-used programs like QuickBooks and Salesforce—all eventually have to be replaced, at ever-greater financial and disruption costs. In their place, companies are turning to integrated management solutions that support all the core company functions, including accounting, finance, project management, inventory/procurement, sales/marketing and human resources.

2. Provides reliable, real-time data. Without good data providing visibility across all company operations, scaling productively is next to impossible for solar installers. “When you’re working with incompatible, incomplete, out-of-date data, it can turn into a nightmare pretty quickly,” said Blu Banyan’s Chief Executive Officer Jan Rippingale. “To improve efficiency and profitability, solar installers must have real-time visibility into their entire end-to-end businesses.”

3. Breaks down information silos. Many installers continue to use programs like QuickBooks to run their businesses even as they scale up to meet the growing demand for solar power. Because QuickBooks doesn’t “talk” to Salesforce.com—and because Salesforce doesn’t integrate with job-costing tools—an installer’s procurement, accounting and sales teams are working from different playbooks. This siloed approach simply doesn’t cut it in today’s flat, collaborative world, where both internal and external partners need common technology platforms and data compatibility.

4. Integrates all aspects of the installation business. SolarSuccess brings sales pipeline management, CRM, accounting, purchasing, installation project management (including per project costing and profitability), inventory management, customer invoicing, universal financier connectivity and business intelligence all onto a single platform. The solution provides end-to-end visibility on cash flow, profitability, acquisition costs, project tracking and alerts, sub-contractor monitoring and other functions that are keys to a solar installer’s success. “When you’re aggregating a group of point applications—and regardless of how good each of those programs is individually—you really need them to be able to talk to one another,” said Rippingale. “The only way to make that happen, and to effectively reduce the soft costs identified by the DOE, is with an integrated application suite that provides reliable data and a common interface across all functions.”

5. Supports scale and growth. Using an end-to-end, fully-integrated software suite that was specifically optimized for the solar industry, installers can effectively meet the demands of their growing customer bases while also keeping soft costs to a minimum, ensuring profitable growth for the installer. “A lot of companies in this space are at a point where the acceleration in demand literally puts them in the position where they can’t scale or grow further,” said Rippingale. “In fact, many of them won’t even be able to sustain their current profitability levels and competitive position without switching to a system that allows them to use the same data across all of their departments.”

Blu Banyan’s premier end-to-end solar solution has been built on the NetSuite platform to give installers a software stack that’s specifically tailored for their needs. Optimized for residential and commercial solar installers of any size, SolarSuccess helps installers successfully leverage the rapid decline in hardware costs over the last decade with soft-cost reductions that improve margins even as overall installation costs decline. Access this whitepaper to learn more about Blu Banyan’s SolarSuccess product and how it helps drive down the soft costs in a solar installer’s business.

The NetSuite Blog

What can I do to Reduce my Dynamics 365 Storage Costs?

December 13, 2019   Microsoft Dynamics CRM

Sooner or later, all organizations reach the free storage limit of Dynamics 365. In this post, we will examine possible ways of dealing with the issue of running out of storage in Dynamics 365 in the cloud. We will take into account the cost and how easily we can implement it.

A common issue with D365 storage space

Here is one of the requests Dynamics 365 managers often get:

One of our clients is having a problem with increasing database size very quickly. The huge database size is also becoming a performance issue. Besides, the client is looking for decreasing the storage costs. Current database size is 530GB. We have checked the free add-on from Microsoft, but we are reluctant about bringing it in our org full scale. Besides, looking for more functionality like extracting old and new documents.

Here’s a kicker: we have sensitive documents so we would prefer them not to run through any external service.

We also cannot employ anything outside the constraints of what we currently have in our Azure tenancy, that being: Dynamics 365 with the option of uploading custom plugins and/or custom workflow tasks or currently paid-for PaaS facilities, notably Flow. This eliminates the option of having a console-based application, or even a web app.

This request from a D365 admin reflects a typical constraint of the cloud CRM: the 10GB storage provided as part of a first subscription (Base license) of Customer Engagement or Finance, Supply Chain Management, Retail and Talent applications runs out quickly. When you are out of storage, you have only two options: pay for extra storage or look for ways to free up space.

Try reducing the storage needed in D365

When monitoring D365 health to ensure the system’s optimal performance, you should also keep an eye on whether or not there is enough space for growth.

When you reach 80% of D365 total storage capacity, the system should notify you so that you can take action.

Here is what you can do to reduce used up storage space:

  • Delete old records, such as cases and opportunities
  • Remove unnecessary attachments (from emails and notes) through advanced find and bulk deletion
  • Delete audit logs
  • Remove suspended workflows you won’t use again
  • Delete bulk import instances
  • Remove bulk duplicate detection jobs and bulk deletion job instances (strangely enough, those take up space)

Are you still lacking space after doing all this? The problem most likely lies in attachments and documents. According to several surveys, documents and attachments take up 70% of storage space in Dynamics 365 on average. If your organization tracks emails in the CRM system as most companies do, free storage shrinks quickly – and it is not reversible with traditional measures.

Leveraging document management systems

Dynamics 365, just like other cloud CRM systems, was primarily designed to manage customer relations, not to store documents. For that reason, the best-proven practice for avoiding extra costs with document storage is moving documents to other systems, which offer cheaper storage and sometimes extra features.

Among the most popular are Document Management Systems (DMS) like SharePoint, and storage solutions like Azure Blob and Azure File Storage. These have a much lower price per GB than Dynamics 365 storage.

SharePoint, in particular, besides offering cheaper storage, offers organizations immense collaborating opportunities.

How to synchronize Dynamics 365 with Azure Blob or SharePoint

Before a D365 user can send attachments and other documents to another system, the two systems need to be synchronized. Of course, manually extracting the documents is also a possibility in theory, but it is not feasible in practice because it takes too much time.

Currently, there are a couple of solutions on the market for extracting attachments from Dynamics 365. Let’s have a look.

The first one comes from Microsoft Labs. Attachment Management is a free add-on feature to Dynamics 365 Customer Engagement. It works fully online, and it creates an attachment in Azure when you add a note or email attachment to Dynamics 365 (then deleting the file in Dynamics once it is successfully in Azure Blob storage).

Nonetheless, D365 experts advise that you use this free add-on with caution.

Firstly, you have to consider that tech support for free apps is usually limited. In case of any trouble, you might be on your own.

Secondly, current customers seem not to be so happy. Some can’t get it to work: “Could not get it working, Plugins said it succeeded but nothing in Azure after waiting 20 mins! Documentation was not too detailed either”. Others feel the automation is limited: “in order to migrate all of our 61,000+ attachments, we have to hit a “move to blob” button in 160+ attachment increments. You basically have to micro manage the migration by repetitiously clicking a button that could easily be automated. The monotony is horrendous. The concept is great, the execution is terrible.” or “Report is slow to load (several minutes) and you have to manually execute 100 at a time. Would have taken several days of pushing the button.” (reviews from AppSource).

Thirdly, this add-on only works on what you do from the installation onward, which is not good if you were already reaching the limit by then.

Finally, this solution works only for Azure Blob, so if you prefer SharePoint and the document collaboration advantages it offers, you need to look for other options.

CB Dynamics 365 Seamless Attachment Extractor

Connecting Software has been working on synchronization solutions since 2009, and we have noticed that the limited storage space is a chronic problem in many cloud CRM systems. In 2019, we launched CB Dynamics 365 Seamless Attachment Extractor.

The solution solves the problem of Dynamics 365’s expensive storage space. It transfers seamlessly (thus, the name) any attachment files from CRM to other configured storage. For the end-user, it still looks like the attachment files are in Dynamics.

The user can still work with the attachments the same way as if they were stored in Dynamics. Yet, the add-on has actually offloaded them to another file storage, namely SharePoint, Azure File Storage, or Azure Blob Storage. This is entirely transparent to the user, who goes on working the exact same way they did before.

Each document will not take up storage space in D365 while, at the same time, it will remain reachable to users without any change to their workflows in D365. Any additional modification of those files will be transferred to the configured external file storage automatically.

It is important to note that, with the CB Dynamics 365 Seamless Attachment Extractor add-on, your documents are not leaving the Dynamics and attachment storage systems. There is no external service in between. When security is a concern, when you have sensitive data or want to ensure regulatory compliance (GDPR comes to mind…), then this is a crucial point, as you can see in the video (click on the image below).

Another convenient feature, unique in the market, is that it can compress/decompress files on the fly. It can also encrypt/decrypt them with AES256 encryption, which was adopted by the U.S. government and is now used worldwide.

On top of that, if you have reached 80% of the storage space limit, this add-on will automatically go through the attachments that existed before you installed the add-on. You can move those attachments to SharePoint, Azure Blob, or Azure Storage. It works exactly the same way as it does for the ones that come up after the install. Isn’t that great news?

Want to know more on how to reduce Dynamics 365 storage costs?

If you want to know more on how CB Dynamics 365 Seamless Attachment Extractor can help your organization save on storage costs and more, our experts will be glad to talk to you and walk you through a free demo.

By Ana Neto,

Software engineer since 1997, she is now a technical advisor for Connecting Software. 

Connecting Software is a global producer of integration and synchronization software solutions since 2004. 

CRM Software Blog | Dynamics 365

Amazon researchers reduce data required for AI transfer learning

October 28, 2019   Big Data

Cross-lingual learning is an AI technique that involves training a natural language processing model in one language and retraining it in another. It’s been demonstrated that retrained models can outperform those trained from scratch in the second language, which is likely why researchers at Amazon’s Alexa division are investing considerable time in investigating them.

In a paper scheduled to be presented at this year’s Conference on Empirical Methods in Natural Language Processing, two scientists at the Alexa AI natural understanding group — Quynh Do and Judith Gaspers — and colleagues propose a data selection technique that halves the amount of required training data. They claim that it surprisingly improves rather than compromises the model’s overall performance in the target language.

“Sometimes the data in the source language is so abundant that using all of it to train a transfer model would be impractically time consuming,” wrote Do and Gaspers in a blog post. “Moreover, linguistic differences between source and target languages mean that pruning the training data in the source language, so that its statistical patterns better match those of the target language, can actually improve the performance of the transferred model.”

In the course of experiments, Do, Gaspers, and team employed two methods to cut the source-language data set in half: the aforementioned data selection technique and random sampling. They pretrained separate models on the two halved data sets and on the full data set, after which they fine-tuned the models on a small data set in the target language.

Do and Gaspers note that all of the models were trained simultaneously to recognize intents (requested actions) and fill slots (variables on which the intent acts), and that they took as inputs multilingual embeddings (a word or sequences of words from different languages mapped to a single point in a multidimensional space) to bolster model accuracy. The team combined the multilingual embedding of each input word with a character-level embedding that encoded information about words’ prefixes, suffixes, and roots, and they tapped language models trained on large text corpora to select the source-language data that’d be fed to the transfer model.

Within the system the researchers engineered, a bilingual dictionary translated each utterance in the source data set into a string of words in the target language. Four language models were applied to the resulting strings, while a trigram model handled character embeddings. For each utterance, a score was derived from the sum of the probabilities computed by the four language models, and only the utterances that yielded the highest normalized scores were selected.

To evaluate their approach, the team first transferred a model from English to German with different amounts of training data in the target language (10,000 and 20,000 utterances, respectively, versus millions of utterances in the full source-language data set). Then, they trained the transfer model on three different languages — English, German, and Spanish — before transferring it to French (with 10,000 and 20,000 utterances in the target language). They claim that the transfer models outperformed a baseline model trained only on data in the target language, and that relative to the model trained on the target language alone, the model trained using the novel data selection technique showed improvements of 3% to 5% on the slot-filling task and about 1% to 2% on intent classification.

Big Data – VentureBeat

Leveraging Your Restaurant’s PMIX to Reduce Food Cost

September 27, 2019   NetSuite

Posted by Brady Thomason, NetSuite Solution Manager, Restaurant & Hospitality 

You may have heard it referred to as a product mix, sales mix, or a menu item sales report—a PMIX has many names, but one major purpose: to provide insight to effectively manage food cost. That insight changes when using the PMIX daily versus weekly and monthly. Many successful operators use it frequently, but perhaps not as comprehensively as they should. This deeper dive will help you leverage this valuable report to ensure your restaurant is achieving its best food cost possible.

Daily

The daily PMIX provides quick insight for managers, shining light on crucial metrics including daily prep usage and menu item performance by day of the week.

Daily Prep Usage 

One leading practice to identify and stop waste is to check the variance between actual and theoretical prep usage. This exercise should be performed daily by someone who has an intimate working knowledge of ingredients, recipes and station prep schematics, like a kitchen supervisor or manager.

Start by checking how many of one item you sold on the PMIX, then compare that to actual prep usage in that specific station on the line.

Example: If you sold 12 orders of mahi mahi tacos yesterday, there should be an equal depletion of the prep for mahi mahi tacos on the line. So, if a full pan of cabbage mix yields 24 orders of tacos and the pan was full yesterday, there should be approximately a half pan left. If there is less on hand, you just uncovered a problem you need to research. Are the cooks adding too much cabbage to the tacos? Was the cabbage thrown away due to over-prepping? Use the daily PMIX to pinpoint waste.

Menu Item Performance by Day of Week

Another leading practice is to keep your historical PMIX reports in a binder tabbed by day of week (i.e., Mon, Tues, Wed, and so on). When filling out your daily prep list, you can make informed decisions about how much of an item to prep based on trends.

Example: Looking at the PMIX for the past four Sundays, you notice that you sell 50% fewer buffalo wings than on Saturdays. Since you spotted this trend, you’re able to flex your par for buffalo wing prep between Saturday and Sunday, consequently reducing waste and improving freshness.

Weekly

A weekly PMIX will provide insight into activities performed less frequently, like ordering or product shelf life analysis.

Ordering

Viewing your rolled-up menu item sales quantities for a full week will provide helpful insights into setting order pars. It’s a good idea to start with analyzing your most expensive food items and adjust your order pars accordingly.

Example: if you know you sell an average of 100 orders of mahi mahi tacos per week and there is 4 oz of fish per order, you know you’ll need about 25 pounds of fish on hand per week, assuming a 100% yield.

Product Shelf Life Analysis

The magic balance in a restaurant is to produce fresh food without excessive waste and labor, and shelf lives help maintain that balance. The goal for operators is to prep enough of something to last its full shelf life. Analyzing the weekly product mix to make sure you’re prepping to hit the shelf life “sweet spot” will help you manage food quality, reduce waste AND save labor.

Example: If there are 20 ingredients in your ranch dressing and it has a 4-day shelf life, you can see how prepping ranch every day would be a waste of valuable time. Conversely, if you’re prepping too much and it doesn’t taste as good after four days, you’ll risk wasting it or serving an inferior product to your guests.

Monthly

Running a monthly PMIX is a great way to analyze the performance of each of your menu items.

Menu Item Performance

The science of menu engineering is complex. Basically, it all starts by categorizing menu item performance based on popularity and profitability. Knowing which category each of your menu items falls into will help you make informed decisions about what action to take to improve your menu’s performance. Here are the four groups along with examples of possible actions to take:

  • Star: high popularity, high profit—this is a winner! Keep it.
  • Plow horse: high popularity, low profit—think about reformulating the item to improve margin.
  • Puzzle: low popularity, high profit—highlight or reposition on the menu, or run a promotion.
  • Dog: low popularity, low profit—replace with a different item on your next menu rollout.
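
As referenced above, here is a minimal classification sketch. The sample items, quantities, and contribution margins are invented for illustration, and it uses the common convention of comparing each item against the menu averages:

from statistics import mean

items = {                         # name: (units sold this month, contribution margin per unit in $)
    "Mahi mahi tacos": (400, 6.50),
    "Buffalo wings": (520, 3.25),
    "Ribeye": (110, 9.00),
    "Veggie wrap": (90, 2.75),
}

avg_sold = mean(sold for sold, _ in items.values())
avg_margin = mean(margin for _, margin in items.values())

def classify(sold, margin):
    popular, profitable = sold >= avg_sold, margin >= avg_margin
    if popular and profitable:
        return "Star"
    if popular:
        return "Plow horse"
    if profitable:
        return "Puzzle"
    return "Dog"

for name, (sold, margin) in items.items():
    print(name, "->", classify(sold, margin))
# Mahi mahi tacos -> Star, Buffalo wings -> Plow horse, Ribeye -> Puzzle, Veggie wrap -> Dog
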
Posted on Thu, September 26, 2019 by NetSuite

The NetSuite Blog

Upstream Oil And Gas CFOs: Increase Oilfield Margins And Reduce Operating Costs

September 13, 2019   SAP

As an upstream oil and gas CFO, you are well aware that your business has a lot of boom-and-bust cycles.

For example, four years ago, the price of oil dropped to approximately US$30 a barrel – and wells became hugely unprofitable. Three years later, it went up to $70 per barrel, and there was a lot of irrational exuberance. Geopolitical risk and supply/demand factors added to the instability.

But you need to moderate those boom-and-bust, bust-and-boom cycles – and it’s not easy when everyone around you is having a hair-on-fire moment.

After all, your wells are your bread and butter. If they’re not producing, you’re not surviving.

Profound change

Upstream oil and gas CFOs are seeing profound changes in the financial landscape. They’re seeing oil price swings, new technology, more restrictive regulations, and oilfield production challenges. They’re seeing drilling, operating, and maintenance costs increasing while upstream production margins are dropping. In addition, they’re seeing declining well production, increasing water disposal costs, procurement/supply chain inefficiency, and Health and Safety Executive (HSE) reporting and compliance issues.

Digitizing operations improves well profitability

Did you know that you could get a 10% to 20% reduction in financial operating costs by using intelligent finance solutions?

Here are some operational benefits:

  • A/R finance: 5%-10% reduction in A/R write-offs
  • Financial close: 5%-10% reduction in business and operations analysis and reporting analysis
  • Finance operations: 10%-40% improvement in invoice-processing productivity

Your financial planning and analysis processes could increase visibility into overall spend and expenses by 25%-50% – as well as reduce financial planning cycle time and budgeting and forecasting costs. And you could reduce silos across business units by 25%-50%. These improvements could go a long way toward increasing value for your shareholders.

Source: SAP Performance Benchmarking

Finance focused on well profitability

Now finance can be the intelligence center of both corporate and business units. You can take advantage of a fully integrated oil and gas financial management system, including finance, joint venture accounting (JVA), production revenue accounting (PRA), and upstream operations management. The aim is to optimize well profitability and reduce operating costs.

  • Well profitability: Gain a real-time view of well profitability and operating costs by completion interval and detailed drill-down on all payables and receivables with financial and operational KPIs that drive the business.
  • Operating cost reduction: Instantly access accurate production and oilfield services information to assess vendor performance, actual versus budgeted costs, and business-unit margins. Drill down into detailed operating costs and production margins.
  • Simplified reporting and analysis from the oil and gas business unit to the boardroom: You can see a single version of the truth for all financial and operational data. You’ll also see real-time business and operational performance metrics.

Customer success story

SAP recently worked with a global company in the emerging oil and gas development market. It provides a full range of upstream products and services including oil/gas field development, oilfield supply chain, equipment maintenance, and technical services.

The key challenges were material and equipment cost inefficiencies and a lack of timely information about operations status and financial flows. In addition, the finance team was relying on manual procurement processes and Excel spreadsheets.

The company implemented a hybrid environment with end-to-end finance and procurement business processes and fully integrated capabilities for marketing, operations, R&D, and knowledge management. Newly created finance and procurement templates are adaptable to future business requirements.

The result was $4.6 million in savings on material and equipment costs per year, a $630,000 reduction in reporting costs per year, and $2.3 million in savings on external procurement costs per year.

A finance capability roadmap

Your roadmap to the future depends on five fundamentals:

  1. Centralize financial reporting with a focus on well profitability: Centralize financial reporting from all oilfield systems, including integration of production, procurement, supply chain, and oilfield services costs. Enable real-time operational visibility into well performance and field profitability.
  2. Digitize core finance processes: Drive updates to a single journal, general ledger, cost accounting, and JVA/PRA. Enable increased hydrocarbon margin transparency, intercompany transactions and reconciliations, and granular visibility into all accounting transactions.
  3. AP and AR automation: Reduce accounts payable (AP) invoice-processing costs related to integrated invoice routing, exception handling, and invoice management. Use credit management, disputes and collections management, and self-billing. Integrate commodity transactions to reduce the need for daily bulk-data transfers to third-party solutions, provide tighter management and enforcement of credit limits, and automate cash application.
  4. Compliance and governance controls: Use governance, risk, and compliance (GRC) capabilities, segregation of duties (SOD), and access controls. Provide continuous monitoring of SOD conflicts to eliminate manual audit work. Offer cybersecurity to prevent threats to the core transactional system by bad actors.
  5. Treasury and risk management: Manage every activity associated with cash, payments, liquidity, risk, and compliance. Help finance gain more control over payment batches, foreign exchange, and commodity exposures.

Your morning coffee

In your next daily or weekly meeting, ask your CEO, COO, and VP of field operations these questions:

  • What are our actual fully burdened production costs?
  • What are our detailed operating costs?
  • What wells should we produce today that might not be profitable at $30 a barrel or $40 a barrel?
  • What are the best projects we can work on today?

It’s sure to spark a lively conversation.

Join our webinar series on the SAP S/4HANA Movement program and learn from the program’s manager Bjoern Braemer how it enables your organization to manage a seamless transition to SAP S/4HANA.

Follow SAP Finance online: @SAPFinance (Twitter) | LinkedIn | Facebook | YouTube

Digitalist Magazine

Should Reduce give all cases when $\sqrt{x y} = \sqrt x \sqrt y$?

July 20, 2019   BI News and Info

My understanding is that Reduce gives all conditions (combined with Or) under which the input is true.

Now, $\sqrt{xy} = \sqrt{x}\,\sqrt{y}$, where $x, y$ are real, holds under the following three conditions/cases:

$$
\begin{align*}
x \geq 0,\ y \geq 0\\
x \geq 0,\ y \leq 0\\
x \leq 0,\ y \geq 0
\end{align*}
$$

but not when $x < 0,\ y < 0$.

This is verified by doing

ClearAll[x, y]
Assuming[Element[{x, y}, Reals] && x >= 0 && y >= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
Assuming[Element[{x, y}, Reals] && x >= 0 && y <= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
Assuming[Element[{x, y}, Reals] && x <= 0 && y >= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
Assuming[Element[{x, y}, Reals] && x <= 0 && y <= 0, Simplify[Sqrt[x*y] - Sqrt[x]*Sqrt[y]]]
(* the first three simplify to 0; the last does not *)

Then why does

 Reduce[Sqrt[x*y] - Sqrt[x]*Sqrt[y] == 0, {x, y}, Reals]

give only one of the three cases above?

Is my understanding of Reduce wrong or should Reduce have given the other two cases?

Version 12 on Windows.

1 Answer

As Coolwater says in his comment, using the domain specification Reals means that all function values are constrained to be real. Clearly Sqrt[x] is not real when $x < 0$. Instead, constrain x and y to be real using Element:

Reduce[Sqrt[x y] - Sqrt[x] Sqrt[y] == 0 && (x|y) ∈ Reals, {x,y}] //Simplify

(y ∈ Reals && x > 0) || x == 0 || (x <= 0 && y >= 0)

Recent Questions – Mathematica Stack Exchange
