Business Intelligence Info

Tag Archives: Faster

Making a faster alternative for {PatternSequence[1, PatternSequence[2, 3 ..] ..] ..}

September 23, 2020   BI News and Info
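The question body isn’t preserved in this excerpt, but as a rough illustration (mine, not the original poster’s), the pattern in the title matches lists built from repeated blocks of a 1 followed by one or more runs of a 2 followed by one or more 3s:

pat = {PatternSequence[1, PatternSequence[2, 3 ..] ..] ..};

MatchQ[{1, 2, 3}, pat]                    (* True *)
MatchQ[{1, 2, 3, 3, 2, 3, 1, 2, 3}, pat]  (* True *)
MatchQ[{1, 2, 2, 3}, pat]                 (* False: the first 2 is not followed by a 3 *)

The question, as the title suggests, asks for a faster equivalent of this kind of nested repeated pattern.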

Recent Questions – Mathematica Stack Exchange

AI analysis finds that in-app ad issues are fixed faster on Android than iOS

August 28, 2020   Big Data


A new study finds evidence that in-app ad issues with popular Android apps on Google Play are addressed more quickly than their iOS counterparts on the App Store. In what the authors claim is a first-of-its-kind survey, a team investigated the ads in 32 cross-platform apps that rank in the App Store’s and Google Play’s respective top 100 lists. They say the results imply developers should pay attention to platform differences during ad design and consider ways to automatically customize and test apps to improve ad experiences.

The study is noteworthy for its use of supervised multi-label classification, an AI technique that predicts the labels of unseen instances (in this case ads) by analyzing labeled training data. The researchers say it enabled them to canvass and categorize far more information than in previous studies, laying the groundwork for automated analysis tools. Large-scale perceptual studies on mobile ads could help developers prioritize their work, for example by choosing to spend more time fixing problems on iOS than Android.

In-app ads are massive revenue drivers on mobile. In the first quarter of 2016, for instance, mobile ad revenue accounted for 76% of Facebook’s total sales. Many free apps, which make up more than 68% of the over two million apps on Google Play, leverage some form of in-app advertising for monetization. But previous research suggests that users find these ads intrusive: Growth Tower reports that almost 50% of users said they’d uninstall apps just because of mobile ads.

In selecting which apps to analyze, the researchers, who hail from the Harbin Institute of Technology (Shenzhen, China), The Chinese University of Hong Kong, Singapore Management University, and Melbourne’s Monash University, looked at apps across 15 categories with over 100,000 reviews on both app stores. They built a simple web crawler to automatically scrape user reviews, downloading 1,840,349 reviews from the App Store and 3,243,450 from Google Play published between September 2014 and March 2019. Using a filter and several post-processing steps, they isolated reviews containing keywords related to ads (e.g., “ad,” “ads,” “advert”), extracting 18,302 ad-related reviews in total.

To determine how quickly (or slowly) developers addressed in-app ad complaints, the researchers recorded the number of versions of apps released between the time issues were reported and the time they were fixed. The coauthors report it took an average of 1.23 updates per app before problems were addressed on Google Play, while it took nearly two updates (1.78) per app on the App Store. But certain issues were fixed faster on iOS compared with Android. For instance, iOS developers were quick to address orientation, auto-play, and notification complaints. Android developers responded more quickly to orientation, volume, and non-skippable ad issues.

The researchers categorized each review into one or more ad issue types, using a combination of keyword matching and AI classifier models. They found that:

  • 8.81% (1,613) of the reviews mentioned ad content as an issue.
  • 25.02% (4,580) of the reviews mentioned ad frequency, or how often the ads appeared, as an issue.
  • 13.52% (2,475) of the reviews took issue with the way the ads suddenly “popped up.”
  • 45.51% (8,329) of the reviews mentioned there were too many ads.
  • 3.84% (703) of the reviews complained about non-skippable ads.
  • 12.11% (2,216) of the reviews said ads were too lengthy.
  • 2.10% (385) of the reviews said the ads were too large.
  • 6.47% (1,233) of the reviews complained about ad placement and position.
  • 1.96% (359) of the reviews complained about auto-playing ads.
  • 0.87% (159) of the reviews complained about ad volume.

Interestingly, complaints weren’t the same across Google Play and the App Store. Security (i.e., unauthorized data collection or permission usage), orientation (the orientation of app screens impacted by ads), timing, and auto-play complaints were more common among iOS users, while Android users reported obtrusive notifications in the status bar, volume, and app slowdowns as top sources of consternation.

The study coauthors propose that developers prioritize ad issues differently per platform and optimize ad display settings like the number of ads, display frequency, and display style. They also suggest designing strategies to manage ads with long display periods. “Inappropriate ad design could adversely impact app reliability and ad revenue,” the coauthors wrote. “Understanding common in-app advertising issues can provide developers practical guidance on ad incorporation.”

Big Data – VentureBeat


The Need for Speed: Faster Data Access as Competitive Edge

June 5, 2020   Sisense

The cloud isn’t the future; it’s right now. In the Clouds is where we explore the ways cloud-native architecture, cloud data storage, and cloud analytics are changing key industries and business practices, with anecdotes from experts, how-to’s, and more to help your company excel in the cloud era.

The world of data is constantly changing and speeding up every day. Companies are storing more types of data from applications as well as the Internet of Things. This data flows into cloud-native warehouses where data teams manipulate it, allowing analysts to derive vital insights from it, and product teams embed those insights into products. Data is the bedrock on which the future of business is being built.

As the data that these businesses need to thrive continues to grow and its pace of change accelerates, it’s never been more important for employees at all levels of an organization to have fast access to actionable data in order to make strategic, operational, and tactical decisions. It’s both basic table stakes for success in the “new normal,” as well as a defining edge that companies can use to stay ahead of the competition.


Saving lives in real-time

Easier access to fast-updating datasets isn’t just about making better decisions or powering the next killer app. It can also save lives and change the way the world works.

“There are a wide range of scenarios where having super-fast access to real-time data can make a huge difference,” said Christelle Scharff, a professor and computer scientist based at Pace University in New York. “Fast access to data captured by video surveillance systems, for example, can improve security… It’s also the driving force behind autonomous cars. Our biggest industrial firms can use it for preventative maintenance — saving potentially millions of dollars. And almost all organizations can use it to avoid potential threats from security breaches and malware attacks.”


During our current pandemic, access to real-time data can also save lives. “Health officials are investigating how contact-tracing apps can help manage the ‘reopening’ after we begin to reopen the country after the COVID-19 lockdown,” said George Thiruvathukal, professor of computer science at Loyola University in Chicago. “The success of this will depend on fast access to multiple data sources.”

There’s an impact on customer expectations too. According to recent research from IDC, consumers are embracing personalized real-time engagements and resetting their expectations for data delivery. As their digital world overlaps with their physical realities, they expect to access products and services wherever they are, over whatever connection they have, and on any device. They want data in the moment, on the go, and personalized. As a result, IDC predicts that nearly 30% of the global datasphere will be real-time by 2025.

Pressure on infrastructure builds

As enterprises demand data infrastructures that can meet this growth in real-time data — and ultimately assist with their product differentiation strategy — the pressure put on product teams is huge.


“Product teams are already having to manage the growing complexities that come with modern data environments,” said Chandana Gopal, research director for business analytics at IDC. “Not only do they have to deal with data that is distributed across on-premises, hybrid, and multi-cloud environments, but they have to contend with structured, semi-structured, and unstructured data types. Multiple technologies to manage data at rest and in motion have compounded the challenge of managing data and making it accessible to decision-makers in the right time, in the right format, and in the right context.”

Managing cloud data is a key challenge for data and product teams who are tasked with connecting to a wide array of datasets stored in cloud-native warehouses and other locations. In a large-enough company, there can even be multiple clouds being operated by different divisions and teams. BI and analytics providers have had to design their platforms to serve up fast insights no matter where the data being analyzed resides, even partnering with third-party companies to make sure that their platforms can handle data from oft-used services like AWS, Google Cloud, and others.

Which makes sense, as customers searching for an analytics solution are often also grappling with recently purchased or prospective cloud options:

“When customers come to us for their BI and analytics needs, in the same sentence they’re often telling us that they’re considering their cloud options,” said Erin Winkler-McCue, Lead for Strategic Partnerships & Special Projects at Sisense. “These conversations are no longer siloed. Customers want to know that our platform will work seamlessly with their chosen cloud vendor, even if that just means something as basic-sounding as making sure queries between the vendor and Sisense are optimized.”

The challenges these teams face become even more daunting when one looks towards the future, as new technologies like the internet of things, machine learning, 5G, and augmented reality will add a new level of demand. Forbes Insights data shows that in order to benefit from emerging technologies like these, 92% of CIOs and CTOs say their business will require faster download and response time in the near future. What’s concerning is that, despite recognizing this, just 1% of data center engineers believe their data centers are updated ahead of current needs.


Competing priorities within companies

All of this results in a lot of friction within data-driven organizations. “Multiple technologies are required for managing, integrating, and controlling the flow and consumption of data from the edge to the cloud and all points in between. That’s without mentioning outdated metadata—the data about data that provides data intelligence,” said Gopal.

An upcoming skills gap might compound the problem: According to the Forbes Insights research, 37% of engineers say they will likely retire in the next 10 years.

Adding to this hurdle is the fact that some firms are led by executives that don’t understand or champion the importance of having contextual and timely data embedded into applications. Recent research by Exasol has found that less than half of decision-makers believe that those working in senior management (40%) or mid-management roles (32%) are very effectively informed of their organization’s data strategy.

Creating a path to success

Gopal believes that future success requires that data teams take a structured approach focused on people, processes, and technology in order to make data available to all.

“Data teams should identify short- and long-term data and analytics use cases that will demonstrate business value with input from stakeholders at all levels—both business and IT,” she said. “They should also identify data-related assets that will be required for the project and be realistic about time constraints. They should then look to deliver measurable value with short term projects to build business cases for more expensive or longer projects.”


From a technology perspective, the introduction of new technologies, such as 5G-enabled edge computing, will have an impact on IT staffing. According to the Forbes Insight report, almost three-quarters (74%) of C-suite executives believe staffing will be reduced or handled by external cloud or edge service providers. The ability to implement new technologies like these in the data center will be a competitive differentiator, as will better security (according to 43% of respondents), and bandwidth (according to 27%).

Ultimately, it’s clear that organizations need to act quickly if they want to succeed. “Continuous efforts to update the data center will be integral to business success,” states the Forbes Insight report. “Partnering with external third parties is a central part of the data center journey in the age of hyper-connectivity.”

These collaborations are happening between companies and their cloud providers and between platforms like Sisense and companies like Amazon and Google:

“Partnerships with companies like AWS, Snowflake, Microsoft, and Google are only becoming more important as the modern data landscape evolves,” said Erin Winkler-McCue, Lead for Strategic Partnerships & Special Projects at Sisense. “We feel like every customer is either already in the cloud or it’s only a matter of time until they contemplate setting up a hybrid- or full-cloud model.”

Gopal agrees that adopting new technology through new partnerships is key: “A new class of intelligent data operations platforms are emerging that can reduce friction, improve efficiencies with automation, provide flexibility and openness with policy and metadata-driven processes that can accommodate the diversity and distribution of data in modern environments,” she said. Equipped with these, product teams will be much better prepared for a new and exciting future.


Lindsay James is a journalist and copywriter with over 20 years’ experience writing for enterprise business audiences. She has had the privilege of creating all sorts of copy for some of the world’s biggest companies and is a regular contributor to The Record, Compass, and IT Pro.

Blog – Sisense

How to use Sow and Reap instead of AppendTo for faster computations?

May 7, 2020   BI News and Info

In this example I used AppendTo.

f[x_] := x + 1;
data = {};
(* AppendTo copies the entire list on every call, which is what makes this slow at scale *)
For[x = 1, x <= 10, x++,
  spec = AppendTo[data, {x, f[x]}]] // Timing // First

Out[37]= 0.000054

ListPlot[spec]

[ListPlot output]
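A minimal sketch (mine, not from the original post) of the Sow/Reap approach the title asks about: Sow emits each point inside the loop, and Reap gathers them all at the end, so the list is built once rather than copied on every AppendTo call.

f[x_] := x + 1;
(* Reap returns {loop result, {collected values}}; [[2, 1]] extracts the collected list *)
data = Reap[
    For[x = 1, x <= 10, x++, Sow[{x, f[x]}]]
   ][[2, 1]];
ListPlot[data]

At 10 iterations either version is instant; the difference only shows up at much larger sizes, and for a computation this simple Table[{x, f[x]}, {x, 10}] sidesteps the issue entirely.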

Recent Questions – Mathematica Stack Exchange

Best Practices to Get Paid Faster – Building a Successful Recurring/Subscription Business​

April 29, 2020   Microsoft Dynamics CRM

As Simon Sinek says, “Start with Why.”

Volumes play an important role in a subscription business. This revenue is billed more frequently, in fact monthly: for one customer you are generating 12 invoices a year! The invoices are typically smaller than one-off sales. If you are reselling products (which is true for most Microsoft CSP Partners), the margins are tight. If customers don’t pay or delay payments, there is an impact on cash flow.

Related post – The Need for Collecting Cash Fast

How to Collect Cash Faster?

Step 0: Sell to those who will pay: To minimize the risk of non-payment, you need to establish credit limits, do background checks, and ensure that you won’t be left holding the proverbial bag; this is typically part of your Sales Process.

Step 1: Invoice Accurately. An accurate invoice is correct on all counts.

  • Who has the check or credit card? It could be the finance department, the procurement department, or the IT manager or relevant stakeholder who would need to approve your invoice.
  • What exactly are they being billed for? Are all the services and subscriptions correct, and are prices, changes in quantities, and refunds clearly shown in the invoice so that there isn’t any confusion?
  • When do they need to pay and, most importantly, receive the invoice? Is the invoice being sent to the customer on the correct date and with the correct due dates? Invoice delivery needs to be predictable. We see so many cases where customers have cash flow issues but are not able to generate an accurate invoice on the correct date based on the billing frequency.

Step 2: Invoice Delivery: Persistent, Actionable, Convenient: It’s imperative that your invoice is delivered to the right stakeholders in a medium that is acceptable to them (email, or mailed physical signed copies). Is the invoice persistent? Can it be easily accessed by the intended audience? A self-service portal that shows their orders and invoices would allow them to access your invoice when they need it. Is the invoice actionable? Does it clearly state how to pay you, and how convenient is it to pay you?

“The customer experience is the sum total of all customer moments. The invoice is a defining customer moment.”

What Systems are Involved in Generating an Invoice and Collecting Cash?

[Image: Not-so-Ideal Landscape]

The problem with the above situation is that while you’ve got all the pieces together:

  • There are multiple logins/identities.
  • Your brand is diluted and inconsistent because of the different looks and feels.
  • Data is the new oil, and it’s messy to have oil all over the place.
  • Access to these systems is limited to a few people. The lack of access to this data creates problems like selling to customers you don’t want to and the inability to create an accurate invoice.

What is Required for a Successful Billing Automation Solution to Generate Accurate Invoices and Collect Cash?

  • Customer data in one system
  • Single Identity
  • Single Portal for everything: what they’ve bought from you, when they bought it from you, agreements, invoices

[Image: Billing Automation]

Cash Collection is a business function:

  • Credit Limit should be associated with Aging Data and affect new Sales
  • The customer Payment information should be in the customer system
  • Automate Collections using Payment Gateways
  • Don’t reward bad behavior: use credit holds so that provisioning is stopped

Work 365 offers a Billing and Subscription application that helps cloud companies achieve the necessary requirements and processes to grow recurring revenue. Learn more here.

CRM Software Blog | Dynamics 365

Get Your Scalar UDFs to Run Faster Without Code Changes

February 13, 2020   BI News and Info

Over the years, you have probably experienced or heard that user-defined functions (UDFs) do not scale well as the number of rows processed gets larger and larger. That is too bad, because we have all heard that encapsulating your code into modules promotes code reuse and is a good programming practice. Now the Microsoft SQL Server team has added a new feature to the database engine in Azure SQL Database and SQL Server 2019 that allows UDF performance to scale when processing large recordsets. This new feature is known as T-SQL Scalar UDF Inlining.

T-SQL Scalar UDF Inlining is one of many new performance features introduced in Azure SQL Database and SQL Server 2019. It is part of the Intelligent Query Processing (IQP) feature set. Figure 1, from Intelligent Query Processing in SQL Databases, shows all the IQP features introduced in Azure SQL Database and SQL Server 2019, as well as features that were originally part of the Adaptive Query Processing feature set included in the older generation of Azure SQL Database and SQL Server 2017.


Figure 1: Intelligent Query Processing

The T-SQL Scalar UDF Inlining feature will automatically scale UDF code without having to make any coding changes. All that is needed is for your UDF to be running against a database in Azure SQL Database or SQL Server 2019, where the database has the compatibility level set to 150. Let me dig into the details of the new inlining feature a little more.

T-SQL Scalar UDF Inlining

The new T-SQL Scalar UDF Inlining feature will automatically change the way the database engine interprets, costs, and executes T-SQL queries when a scalar UDF is involved. Microsoft incorporated the FROID framework into the database engine to improve the way scalar UDFs are processed. This new framework refactors the imperative scalar UDF code into relational algebraic expressions and incorporates these expressions into the calling query automatically.

By refactoring the scalar UDF code, the database engine can improve the cost-based optimization of the query as well as perform set-based optimization that allows the UDF code to go parallel if needed. Refactoring of scalar UDFs is done automatically when a database is running under compatibility level 150. Before I dig into the new scalar UDF inlining feature, let me review why scalar UDFs are inherently slow and discuss the differences between imperative and relational equivalent code.

Why are Scalar UDFs inherently slow?

When you run a scalar UDF on a database with a compatibility level set to less than 150, it just doesn’t scale well. By scale, I mean it works fine for a few rows but runs slower and slower as the number of rows processed gets larger. Here are some of the reasons why scalar UDFs don’t work well with large recordsets.

  • When a T-SQL statement uses a scalar function, the database engine optimizer doesn’t look at the code inside the function to determine its cost. This is because scalar operators are not costed, whereas relational operators are. The optimizer treats scalar functions as black boxes that use minimal resources. Because scalar operations are not costed appropriately, the optimizer is notorious for creating very bad plans when scalar functions perform expensive operations.
  • A scalar function is evaluated as a batch of statements, where each statement is run sequentially, one after another. Because of this, each statement has its own execution plan and is run in isolation from the other statements in the UDF, and therefore can’t take advantage of cross-statement optimization.
  • The optimizer will not allow queries that use a scalar function to go parallel. Keep in mind that parallelism may not improve all queries, but when a scalar UDF is used in a query, that query’s execution plan will not go parallel.

Imperative and Relational Equivalent Code

Scalar UDFs are a great way to modularize your code and promote reuse, but all too often they contain procedural code: imperative constructs such as variable declarations, IF/ELSE structures, and WHILE loops. Imperative code is easy to write and read, which is why it is so widely used when developing applications.

The problem with imperative code is that it is hard to optimize, and therefore query performance suffers when imperative code is executed. The performance of imperative code is fine when a small number of rows is involved, but as the row count grows, the performance starts to suffer. Because of this, you should not use scalar UDFs for larger record sets if they are executed on a database running with a compatibility level less than 150. With the introduction of version 15.x of SQL Server, the scaling problem associated with UDFs has been solved by the refactoring of imperative code using a new optimization technique known as the FROID framework.

The FROID framework refactors imperative code into a single relational equivalent query. It does this by analyzing the scalar UDF imperative code and then converts blocks of imperative code into relational equivalent algebraic expressions. These relational expressions are then combined into a single T-SQL statement using APPLY operators. Additionally, the FROID framework looks for redundant or unused code and removes it from the final execution plan of the query. By converting the imperative code in a scalar UDF into re-factored relational expressions, the query optimizer can perform set-based operations and use parallelism to improve the scalar UDF performance. To further understand the difference between imperative code and relational equivalent code, let me show you an example.

Listing 1 contains some imperative code. By reviewing this listing, you can see it includes a couple of DECLARE statements and some IF/ELSE logic.

Listing 1: Imperative Code Example

DECLARE @Sex varchar(10) = 'Female';
DECLARE @SexCode int;
IF @Sex = 'Female'
   SET @SexCode = 0
ELSE
   IF @Sex = 'Male'
      SET @SexCode = 1;
   ELSE
      SET @SexCode = 2;
SELECT @SexCode AS SexCode;

I have then re-factored the code in Listing 1 into a relational equivalent single SELECT statement in Listing 2, much like the FROID framework might do when compiling a scalar UDF.

Listing 2: Relational Code Example

SELECT B.SexCode
FROM (SELECT 'Female' AS Sex) A
OUTER APPLY
  (SELECT CASE WHEN A.Sex = 'Female' THEN 0
               WHEN A.Sex = 'Male' THEN 1
               ELSE 2
          END AS SexCode) AS B;

By looking at these two examples, you can see how easy it is to read the imperative code in Listing 1 and see what is going on, whereas the relational equivalent code in Listing 2 requires a little more analysis to determine exactly what is happening.

Currently, the FROID framework is able to rewrite the following scalar UDF coding constructs into relational algebraic expressions:

  • Variable declarations and assignments using DECLARE or SET statements
  • Multiple variable assignments in a SELECT statement
  • Conditional testing using IF/ELSE logic
  • Single or multiple RETURN statements
  • Nested/recursive function calls in a UDF
  • Relational operations such as EXISTS and ISNULL

The two listings found in this section only logically demonstrate how the FROID framework might convert imperative UDF code into relational equivalent code. For more detailed information on the FROID framework, I suggest you read this technical paper.

In order to see FROID optimization in action, let me show you an example that compares the performance of a scalar UDF running with and without FROID optimization.

Comparing Performance of Scalar UDF with and Without FROID Optimization

To test how a scalar UDF performs with and without FROID optimization, I will run a test using the sample WideWorldImportersDW database (download here). In that database, I’ll create a scalar UDF called GetRating. The code for this UDF can be found in Listing 3.

Listing 3: Scalar UDF that contains imperative code

CREATE OR ALTER FUNCTION dbo.GetRating(@CityKey int)
RETURNS VARCHAR(13)
AS
BEGIN
   DECLARE @AvgQty DECIMAL(5,2);
   DECLARE @Rating VARCHAR(13);
   SELECT @AvgQty = AVG(CAST(Quantity AS DECIMAL(5,2)))
   FROM Fact.[Order]
   WHERE [City Key] = @CityKey;
   IF @AvgQty / 40 >= 1
      SET @Rating = 'Above Average';
   ELSE
      SET @Rating = 'Below Average';
   RETURN @Rating
END

By reviewing the code in Listing 3, you can see the scalar UDF that I will be using for testing. This function calculates a rating for a [City Key] value. The rating returned is either “Above Average” or “Below Average”, based on whether the city’s average order quantity is at least 40. Note that this UDF contains imperative code.

To test how scalar inlining can improve performance, I will run the code in Listing 4.

Listing 4: Code to test performance of scalar UDF

-- Turn on Time Statistics
SET STATISTICS TIME ON;
GO
USE WideWorldImportersDW;
GO
-- Set Compatibility level to 140
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 140;
GO
-- Test 1
SELECT DISTINCT ([City Key]), dbo.GetRating([City Key]) AS CityRating
FROM Dimension.[City]
-- Set Compatibility level to 150
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 150;
GO
-- Test 2
SELECT DISTINCT ([City Key]), dbo.GetRating([City Key]) AS CityRating
FROM Dimension.[City]
GO

The code in Listing 4 runs two tests. The first test (Test 1) calls the scalar UDF dbo.GetRating using compatibility level 140 (SQL Server 2017). For the second test (Test 2), I only changed the compatibility level to 150 (SQL Server 2019) and ran the same query as Test 1, without making any coding changes to the UDF.

When I run Test 1 in Listing 4, I get the execution statistics shown in Figure 2 and the execution plan shown in Figure 3.


Figure 2: Execution Statistics for Test 1
Figure 3: Execution plan when using compatibility level 140 for Test 1

Prior to reviewing the time statistics and execution plan for Test 1, let me run Test 2. The time statistics and execution plan for Test 2 can be found in Figure 4 and Figure 5, respectively.


Figure 4: Execution Statistics for Test 2

Figure 5: Execution plan when using compatibility level 150 for Test 2

Performance Comparison between Test 1 and Test 2

The only change I made between Test 1 and Test 2 was to change the compatibility level from 140 to 150. Let me review how the FROID optimization changed the execution plan and improved the performance when I executed my test using compatibility level 150.

Before running the two different tests, I turned on statistics time. Figure 6 compares the time statistics between the two different tests.

Figure 6: CPU and Elapsed Time Comparison Between Test 1 and Test 2

As you can see, when I executed the Test 1 SELECT statement in Listing 4 using compatibility level 140, the CPU and elapsed time each came to a little over 30 seconds. When I changed the compatibility level to 150 and ran the Test 2 SELECT statement, the CPU and elapsed time each came to just over 1 second. Test 2, which used compatibility level 150 and the FROID framework, ran orders of magnitude faster than Test 1, which ran under compatibility level 140 without FROID optimization. I achieved this improvement without changing a single line of code in my test scalar UDF. To better understand why the timings were so drastically different between these two executions of the same SELECT statement, let me review the execution plans produced by each of these test queries.

If you look at Figure 3, you will see the simple execution plan produced when the SELECT statement was run under compatibility level 140. This execution plan didn’t go parallel and includes only two operators. All the work related to calculating the city rating in the UDF using the data in the Fact.[Order] table is not included in this execution plan. To get the rating for each city, my scalar function had to run multiple times, once for every [City Key] value found in the Dimension.[City] table. You can’t see this in the execution plan, but you can verify it by monitoring the query with an extended event. Each time the database engine invokes my UDF in Test 1, a context switch has to occur. The cost of invoking my UDF row by row, over and over again, causes the query in Test 1 to run slowly.

If you look at the execution plan in Figure 5, which is for Test 2, you see a very different plan compared to Test 1. When the SELECT statement in Test 2 was run, it ran under compatibility level 150, which allowed the scalar function to be inlined. By inlining the scalar function, FROID optimization converted my scalar UDF into a relational operation, which allowed my UDF logic to be included in the execution plan of the calling SELECT statement. The database engine was therefore able to calculate the rating value for each [City Key] using a set-based operation, and then join the rating values to the cities in the Dimension.[City] table using a nested loop inner join. Thanks to this set-based operation, the Test 2 query runs considerably faster and uses fewer resources than the row-by-row approach of my Test 1 query.

Not all Scalar Functions Can be Inlined

Not all scalar functions can be inlined. If a scalar function contains coding practices that cannot be converted to relational algebraic expressions by the FROID framework, then the UDF will not be inlined. For instance, if a scalar UDF contains a WHILE loop, it will not be inlined. To demonstrate this, I’m going to modify my original UDF code so it contains a dummy WHILE loop. My new UDF is called dbo.GetRating_Loop and can be found in Listing 5.

Listing 5: Scalar UDF containing a WHILE loop

CREATE OR ALTER FUNCTION dbo.GetRating_Loop(@CityKey int)
RETURNS VARCHAR(13)
AS
BEGIN
   DECLARE @AvgQty DECIMAL(5,2);
   DECLARE @Rating VARCHAR(13);
   -- Dummy code to support WHILE loop
   DECLARE @I INT = 0;
   WHILE @I < 1
   BEGIN
      SET @I = @I + 1;
   END
   SELECT @AvgQty = AVG(CAST(Quantity AS DECIMAL(5,2)))
   FROM Fact.[Order]
   WHERE [City Key] = @CityKey;
   IF @AvgQty / 40 >= 1
      SET @Rating = 'Above Average';
   ELSE
      SET @Rating = 'Below Average';
   RETURN @Rating
END

By reviewing the code in Listing 5, you can see I added a dummy WHILE loop at the top of my original UDF. When I run this code using the code in Listing 6, I get the execution plan in Figure 7.

Listing 6: Code to run dbo.GetRating_Loop

USE WideWorldImportersDW;
GO
-- Set Compatibility level to 150
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 150;
GO
-- Test UDF With WHILE Loop
SELECT DISTINCT ([City Key]),
    dbo.GetRating_Loop([City Key]) AS CityRating
FROM Dimension.[City]
GO


Figure 7: Execution plan created while executing Listing 6.

By looking at the execution plan in Figure 7, you can see that my new UDF didn’t get inlined. The execution plan for this test looks very similar to the one I got when I ran my original UDF from Listing 3 under database compatibility level 140. This example shows that not all scalar UDFs will be inlined; only those that use functionality supported by the FROID framework will be.
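One way to check ahead of time whether the engine considers a UDF inlineable is the is_inlineable column that SQL Server 2019 added to sys.sql_modules. A quick check of the two functions above (my query, not from the original article):

-- is_inlineable = 1 means the UDF qualifies for inlining; 0 means it does not
SELECT OBJECT_NAME(object_id) AS udf_name, is_inlineable
FROM sys.sql_modules
WHERE object_id IN (OBJECT_ID('dbo.GetRating'),
                    OBJECT_ID('dbo.GetRating_Loop'));

Note that is_inlineable = 1 only means the UDF is eligible; the optimizer still decides per query whether inlining actually happens.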

Disabling Scalar UDF Inlining

With this new version of SQL Server, the design team wanted to make sure you could disable any new feature. Scalar UDF inlining can be turned off at the database level, for an individual function, or for a single statement (shown after Listing 8). Listings 7 and 8 show the first two options; Listing 7 shows how to disable scalar UDF inlining at the database level.

Listing 7: Disabling inlining at the database level

ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;

Listing 8 shows how to disable scalar inlining when the scalar UDF is created.

Listing 8: Disabling when defining UDF

CREATE FUNCTION dbo.MyScalarUDF (@Parm int)

RETURNS INT

WITH INLINE=OFF

...
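Inlining can also be disabled for a single statement with the documented DISABLE_TSQL_SCALAR_UDF_INLINING query hint; a sketch using the earlier test query:

-- Disable inlining for just this statement
SELECT DISTINCT ([City Key]), dbo.GetRating([City Key]) AS CityRating
FROM Dimension.[City]
OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));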

Make Your Scalar UDFs Run Faster by Using SQL Server Version 15.x

If you want to make your scalar UDFs run faster without making any coding changes, then SQL Server 2019 is for you. With this new version of SQL Server, the FROID framework was added. This framework refactors a scalar UDF into relational equivalent code that can be placed directly into the calling statement’s execution plan. By doing this, a scalar UDF is turned into a set-based operation instead of being called for every candidate row. All it takes to have a scalar UDF refactored is to set your database to compatibility level 150.

SQL – Simple Talk

IBM’s biology-inspired AI generates hash codes faster than classical approaches

January 22, 2020   Big Data

Ever heard of FlyHash? It’s an algorithm inspired by fruit flies’ olfactory circuits that’s been shown to generate hash codes — numeric representations of objects — with performance superior to classical algorithms. Unfortunately, because FlyHash uses random projections, it can’t learn from data. To overcome this limitation, researchers at Princeton, the University of San Diego, IBM Research, and the MIT-IBM Watson AI Lab developed BioHash, which applies “local” and “biologically plausible” synaptic plasticity rules to produce hash codes. They say that it outperforms previously published benchmarks for various hashing methods and that it could yield binary representations of things useful for similarity searches.

As the researchers explain in a preprint paper detailing their work, the phenomenon known as expansive representation is nearly ubiquitous in neurobiology. “Expansion” in this context refers to the mapping of high-dimensional input data to an even higher-dimensional secondary representation. For instance, in the abovementioned fruit fly olfactory system, approximately 50 neurons send their activities to about 2,500 cells called Kenyon cells, achieving an approximately 50 times expansion.

From a computational perspective, expansion can, among other things, increase the memory storage capacity of an AI model. It’s with this motivation that the team designed BioHash, a hashing algorithm that can be used in similarity search.

In similarity search, given a query, a similarity measure, and a database containing any number of items, the objective is to retrieve a ranked list of items from the database most similar to the query. When the data is high-dimensional (e.g., images or documents) and the databases are large (in the millions or billions of items), it’s a computationally challenging problem. However, approximate solutions are generally acceptable, including a hashing scheme called locality-sensitive hashing (LSH), in which each database entry is encoded with a binary representation and closely related entries are retrieved.

FlyHash leverages LSH, as does BioHash. But importantly, BioHash is faster and much more scalable.

The researchers trained and tested BioHash on MNIST, a data set of 70,000 grayscale images of handwritten digits with 10 classes of digits ranging from “0” to “9”, and CIFAR-10, a corpus comprising 60,000 images from 10 classes (e.g., “car,” “bird”). They say that BioHash demonstrated the best retrieval performance in terms of speed, substantially outperforming other methods, and that a refined version of BioHash, BioConvHash, performed even better thanks to the incorporation of purpose-built filters.

The team asserts that this provides evidence that expansive representations are common in living things because they perform LSH. In other words, they cluster similar stimuli together and push distinct stimuli far apart. “[Our] work provides evidence toward the proposal that LSH might be a fundamental computational principle utilized by the sparse expansive circuits … [BioHash] produces sparse high dimensional hash codes in a data-driven manner and with learning of synapses in a neurobiologically plausible way.”

As it turns out, the fields of neurobiology and machine learning go hand in hand. Google parent company Alphabet’s DeepMind earlier this month published a paper investigating whether the brain represents possible future rewards not as a single average but as a probability distribution, a mathematical function that provides the probabilities of occurrence of different outcomes. And scientists at Google and the Max Planck Institute of Neurobiology recently demonstrated a recurrent neural network — a type of machine learning algorithm that’s often used in handwriting and speech recognition — that maps the brain’s neurons.

Big Data – VentureBeat

5 Ways to Close Deals Faster Using Contract Management Software Integrated with Microsoft Dynamics CRM

October 18, 2019   CRM News and Info


According to research by Forrester and Aberdeen, it takes an average of 3.4 weeks to get a contract approved, but using contract management software can reduce contract approval time by an average of 82%.

Deals don’t always close as fast as we’d like. The contract review, approval, and negotiation phase is all too often a sticking point on the journey from quote to cash.

We have shared a lot of information about Configure-Price-Quote and the benefits of integrating a quote generation tool with Microsoft Dynamics CRM for Sales. But, another part of the sales process that creates delays, and can kill deals, is the contract phase.

Manual contract generation and approval is a time-consuming and error-prone process. Contracts can languish for days or weeks while Finance, Operations, and Legal review and approve contracts. What’s needed is a level of control over managing contracts, while ensuring contracts move forward quickly.

5 Ways Contract Management Software Helps Close Deals Faster

Contract management software, integrated with Microsoft Dynamics CRM, helps close deals faster in these ways:

  1. Organization: Automated workflows prompt internal parties such as sales, finance, and legal to perform their review, redlining, and approval, then move the contract forward to keep the approval process on track.
  2. Simplicity: Using pre-built templates and workflows, sales representatives can quickly create and send contracts without difficulty.
  3. Productivity: Approvals move quickly since all parties in the process have a structured process to follow, step-by-step.
  4. Efficiency: Conversations, redlines, and documents are all located in one digital collaborative environment, eliminating the back and forth of emails and hastening the negotiation process.
  5. Storage: Contracts, and all versions of each document, are stored and synced in one place. Version control is a non-issue, and contracts can be retrieved from a central repository with ease.

A standard automated workflow for each buyer includes review by business teams, redlining, e-Signature, storage of agreements, and submission to buyers. The same stages and steps are followed for each customer interaction, ensuring efficiency and accuracy.

Why Choose Contract Management Software?

Adding new sales software is an investment, but one that can increase your sales team’s success is certainly worth making. An Aberdeen study shows that about twice as many Best-in-Class organizations automate their contract management processes as other organizations. Automating the contract management process enables Sales, Operations, Finance, and Legal to spend less effort on the contract process and generate greater revenue.

When selecting contract management software, you should select a solution that is packed with features to hasten the sales process, including:

  • Tracking and monitoring
  • Customizable templates
  • Contract change notifications
  • Contract redlining capability
  • Ability to digitally collaborate, edit, and share documents internally and externally
  • Easy set up of legal requirements and internal business rules

DealHub’s Best-in-Class Contract Management Software

DealHub Contract Management Software for Microsoft Dynamics CRM has several features that stand out, including predictive sales playbooks, automated workflows, and deal tracking. This unique solution provides buyer insights that allow sales reps to track contract progress and pinpoint bottlenecks.

Other benefits of DealHub Contract Management Software include:

  • Real-time data and analytics
  • Contract redlining
  • e-Signature
  • Unified platform – from quote to close
  • User-friendly
  • Approval workflows
  • Seamless Microsoft Dynamics CRM integration

Learn more about DealHub’s Contract Management Software.

DealHub is Easy to Use

DealHub Sales Proposal Software, which includes contract management, was designed to be easy to set up and modify for business leaders and sales representatives. Your company can be set up quickly, and since it’s easy to use, your sales team will be up and running in no time.

Check out DealHub’s reviews on G2 to learn why we are rated the #1 Easiest to Use Contract Management Software.

If you’re looking for a centralized, simplified contract management process that helps your sales team close deals faster, it’s time to consider implementing contract management software in your business.

DealHub Contract Management Solution includes Contract Redlining, which enhances the buyer’s experience by replacing a cumbersome negotiation process with an efficient digital environment. Watch the webinar to learn more.


CRM Software Blog | Dynamics 365

BeaconUnited Delivers 10x Faster Insights with TIBCO Spotfire

March 26, 2019   TIBCO Spotfire

BeaconUnited creates brand positioning solutions for its clients’ consumer packaged goods (CPG): toothpaste, peanut butter, paper towels, and other everyday products. It works with retailers to tailor their promotional programs to achieve the highest level of visibility for their products. If you’ve ever seen an experimental new product introduced in your local grocery store, such as squeezable applesauce packets, chances are BeaconUnited was involved.

Riteway Foods, a division of BeaconUnited, is specifically tasked with helping its sales teams understand trends and opportunities impacting various grocery retailers. The company’s goal is to find opportunities that will have the greatest impact on sales for its clients. As the number of BeaconUnited’s clients expanded, the growing amount of data created a real challenge, and the company outgrew its existing tool’s capabilities. The company also receives data from Nielsen and point-of-sale data from retailers, which is unusable in its raw state. BeaconUnited needed a solution to wrangle the data, as well as a more efficient way to derive actionable insights instead of merely providing static sales reporting.

The company required a tool to connect to its data, visualize data in multiple ways, use geolocation data to map to store level, and incorporate predictive analytics and statistical modeling. BeaconUnited also needed a tool that allowed for interactive visualizations that enabled its sales team to work independently to find insightful information. After evaluating more than 50 data visualization tools, including Tableau, MicroStrategy, Birst, Domo, and Qlik, BeaconUnited chose TIBCO Spotfire®.

Spotfire® gave BeaconUnited the flexibility and scalability it needed, especially with the tool being hosted in the cloud through Amazon Web Services. Users can access the data anywhere, allowing them to better interact with their clients. As a result, BeaconUnited is delivering 10x faster insights and achieved a 100 percent adoption rate across the company within the first year of use.

Additional benefits include:

  • Differentiation that opens doors through software that aids both individual brands and end retailers by using the data to answer questions on the fly
  • Business growth and customer success, playing a key role in meeting new clients and helping their growth and distribution
  • Adoption, collaboration, and relationship building through rapidly refreshed data and analytics across 80 categories, giving the sales team the analytics to answer meaningful questions

To learn more about how Spotfire® helped BeaconUnited gain flexibility for greater insight, read the full case study.

The TIBCO Blog