Tag Archives: Analysis

7/12 Webinar: Text Analysis in Power BI with Cognitive Services with Leila Etaati

 

This week we will be delivering a sneak peek of some of the Business Applications Summit content:

Text Analysis in Power BI with Cognitive Services with Leila Etaati

The data we collect is not always numeric and structured. In any organization, there is a need to analyze text data: customer comments, extracting the primary purpose of a call from its transcript, detecting the language of customer feedback and translating it, and so forth. To address this need, Microsoft Cognitive Services provides a set of APIs, SDKs, and services that let developers perform text analysis without writing R or Python code. In this session, I will explain what text analysis is, including sentiment analysis, key phrase extraction, and language detection. Next, the process of text analysis in Power BI using Cognitive Services will be demonstrated.

When: July 12, 2018 10:00 AM PST

Where: https://www.youtube.com/watch?v=WWod8ETS7J8

About Leila Etaati


Leila is a Data Scientist, PhD, MVP, BI consultant, and speaker. She has over 10 years’ experience working with databases and software systems and has been involved in many large-scale projects for large companies. Leila holds a PhD in Information Systems from the University of Auckland, along with an MS and BS in computer science.

She has worked in industries including banking and finance, power and utilities, and manufacturing, among others. She is a lecturer and trainer in business intelligence and database design courses at the University of Auckland.

Leila speaks at international SQL Server and BI conferences, such as Microsoft Ignite, PASS Summit, PASS Business Analytics, PASS Rally, and many SQL Saturdays in the USA, Europe, Australia, and New Zealand, on machine learning and analytics topics.

Ask Leila about these technologies and areas:

  • Machine Learning
  • Advanced Analytics
  • Predictive, Prescriptive, and Descriptive Analytics
  • Azure Machine Learning
  • R
  • Python
  • On-Prem Machine Learning with SQL Server, R and Python
  • Power BI and R
  • Power BI
  • Stunning Visualization with R
  • Microsoft BI Suite in SQL Server
  • Azure Stream Analytics
  • IoT
  • Real-time Streaming

When Leila is not doing data-related work, her hobbies include:

  • Playing board games!
  • Watching all types of movies!
  • Swimming, if it’s not too cold!


Microsoft Power BI Blog | Microsoft Power BI

IBM's Watson Enlisted for B2B Sentiment Analysis

B2B customer support firm TeamSupport on Tuesday announced the addition of sentiment analysis technology, powered by IBM’s Watson, to its customer relationship platform.

The new tech provides automatic text analysis of email or chat responses to assess how a customer feels, so customer support teams can prioritize and personalize their outreach efforts. It includes predefined categories such as “satisfied” and “frustrated.”

The sentiment analysis capability is an addition to TeamSupport’s Customer Distress Index, which analyzes ticket and response data to categorize levels of customer satisfaction at a company level.

IBM Watson data will be used to score response data at both the ticket and overall customer level to help inform customer strategy and provide a framework for customer success, TeamSupport said.

The company also plans to use data from IBM Watson to expand other areas of its service, noted TeamSupport CEO Robert Johnson.

Customer Support Table Stakes

“Sentiment analysis for chat and text has been around for a long time,” observed Rebecca Wettemann, VP of research at Nucleus Research.

“RightNow had it before the Oracle acquisition. IBM had it long before Watson was its marketing albatross,” she told CRM Buyer.

Sentiment analysis “has evolved to be more sophisticated and cost-effective, and there are varying levels of effectiveness of such solutions,” Wettemann pointed out. “It’s really table stakes by now.”

The technology’s long history in customer support has revealed many uses, noted Ray Wang, principal analyst at Constellation Research.

“Many pioneers of this technology have shown how it could — in customer support — change prioritization of issues [and] response time to cases,” he told CRM Buyer.

More importantly, it enables reps to know the customer interaction history before responding, Wang said.

Outdoing IBM

What is interesting about TeamSupport’s approach is that it “has made the investment to operationalize Watson in a way IBM has — so far — been unable to do,” Nucleus’ Wettemann noted. “This may make it accessible to smaller teams without the resources to build out their own sentiment model.”

However, “there are plenty of in-app options available at multiple price points today,” she remarked.

Other companies that offer sentiment analysis to help customer support efforts include Medallia and Clarabridge, Wang said, “but they have traditionally been more focused on B2C than B2B.”

Help Desk on Steroids

TeamSupport’s tools were designed from the ground up by B2B support professionals for organizations providing external customer support. The company compares its offerings to help desk software from Zendesk, Freshdesk, Service Cloud, Parature, Intercom, Kayako and HappyFox.

Customers include the American Lung Association, the United States National Basketball Association, Agilent Technologies and Comcast.

“If the training programs for TeamSupport improve the accuracy of existing capabilities in the market, they have a good shot at gaining market share,” Wang observed.

Resolving the Growing B2B Buyer-Seller Divide

There’s a growing divide between B2B buyers and sellers, according to the Miller Heiman Group, with buyers seeing sales teams as product representatives and engaging them later in the purchasing process.

Sentiment analysis may help bridge that divide.

For example, Insights, a feedback analysis solution Yotpo announced at Shoptalk in March, uses natural language processing (NLP), artificial intelligence and machine learning technology to analyze reviews and their related opinions.

Retail brands can use Insights to analyze customer feedback at scale to identify trends or untapped markets, receive alerts about new issues, and gain instant feedback on new product releases.

Sentiment analysis “uses NLP to identify, prioritize and suggest responses to de-escalate customer issues,” said Constellation’s Wang, “and, more importantly, proactively resolve issues before they become a problem.”


Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including Information Week and Computerworld. He is the author of two books on client/server technology.
Email Richard.


CRM Buyer

New memory options for Analysis Services

We’re happy to introduce some new memory settings for Azure Analysis Services and SQL Server Analysis Services tabular models. These new settings are primarily for resource governance, and in some cases can speed up data refresh.

IsAvailableInMdx

The IsAvailableInMdx column property is available in Azure Analysis Services and SQL Server Analysis Services 2017 CU7.

It prevents building attribute hierarchies, reducing memory consumption. This means the column is not available for group-by queries to MDX clients like Excel. Fact (transactional) table columns often don’t need to be grouped and are often the ones that use up the most memory.

This property can also improve performance of refresh operations; particularly for tables with lots of partitions. Building attribute hierarchies can be very expensive because they’re built over all the data in the table.

This property is not yet exposed in SSDT. It must be set in the JSON-based metadata by using the Tabular Model Scripting Language (TMSL) or the Tabular Object Model (TOM). The property is specified as a Boolean.

The following snippet of JSON-based metadata from the Model.bim file disables attribute hierarchies for the Sales Amount column:

  {
    "name": "Sales Amount",
    "dataType": "decimal",
    "sourceColumn": "SalesAmount",
    "isAvailableInMdx": false
  }

QueryMemoryLimit

The Memory\QueryMemoryLimit property can be used to limit memory spools built by DAX queries submitted to the model. Currently, this property is only available in Azure Analysis Services.

Changing this property can be useful in controlling expensive queries that result in significant materialization. If the spooled memory for a query hits the limit, the query is cancelled and an error is returned, reducing the impact on other concurrent users of the system. Currently, MDX queries are not affected. It does not account for other general memory allocations used by the query.

A value from 1 to 100 is interpreted as a percentage; a value above 100 is interpreted as a number of bytes. The default value of 0 means no limit is applied.
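To make the unit handling concrete, here is a tiny illustrative helper. The function name and return strings are ours, purely for demonstration; they are not part of any Analysis Services API.

```python
def interpret_query_memory_limit(value):
    """Interpret a Memory\\QueryMemoryLimit setting.

    0       -> no limit (the default)
    1-100   -> a percentage of memory
    > 100   -> an absolute limit in bytes
    """
    if value == 0:
        return "no limit"
    if 1 <= value <= 100:
        return f"{value}% of memory"
    return f"{value} bytes"
```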

You can set this property by using the latest version of SQL Server Management Studio (SSMS), in the Server Properties dialog box. See the Server Memory Properties article for more information.


DbpropMsmdRequestMemoryLimit

The DbpropMsmdRequestMemoryLimit XMLA property can be used to override the Memory\QueryMemoryLimit server property value for a connection. Currently, this property is only available in Azure Analysis Services.  The unit of measure is in kilobytes. See the Connection String Properties article for more information.
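For example, a connection string along the following lines would cap a single connection at 10,240 KB (10 MB) of query memory. This is a sketch; the server and catalog names are illustrative.

```
Provider=MSOLAP;Data Source=asazure://westus.asazure.windows.net/myserver;Initial Catalog=AdventureWorks;DbpropMsmdRequestMemoryLimit=10240
```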

RowsetSerializationLimit

The OLAP\Query\RowsetSerializationLimit server property limits the number of rows returned in a rowset to clients. Currently, this property is only available in Azure Analysis Services.

This property, set in the Server Properties dialog box in the latest version of SSMS, applies to both DAX and MDX. It can be used to protect server resources from extensive data export usage. Queries submitted to the server that would exceed the limit are cancelled and an error is returned. The default value is -1, meaning no limit is applied.


Analysis Services Team Blog

In-Depth Cash Flow Analysis and Forecasting Webinar


Having enough cash flow to keep your business going is a primary focus for a business owner who wants a successful business. That’s why we developed a series that will provide you with all the accounting tools you’ll need to run a successful business. In this webinar, the following topics will be covered.

  • In-Depth Cash Flow Analysis
  • Cash-Flow Forecasting

Time: 12:00PM EST – 12:30PM EST
Date: Thursday May 24, 2018


CRM Software Blog | Dynamics 365

How to Perform a Competitive Analysis of Your Brand


Let’s find out how to perform a competitive analysis – and how you can use that information to your advantage.

Why do you need a competitive analysis? Building a successful marketing plan requires knowing your customers and their pain points intimately. That is step one. And step two is knowing how your competitors are addressing their needs (if they are) and where opportunities may exist for your company.

Marketing ethics 101: Don’t be shady!

Before I go any further, I want to state something up front. This post is meant to teach you how and why to do a competitive analysis. It’s not meant to encourage or even hint at something more malicious. Stealing – whether from your competitors or your neighbors – is never cool. Please don’t do it!

What is a competitive analysis?

Now that we have that out of the way, let’s get into the topic at hand: competitive analysis. What is a competitive analysis?

A competitive analysis is a formal process of evaluating what your competitors are up to. And in this case, we’re talking about a qualitative analysis, not a quantitative or cost analysis. You’d look at things like their product, their marketing approach, their customers, their strengths and weaknesses – and how those may become threats. This may sound similar to a SWOT exercise – and that is for a reason. SWOT (which stands for Strengths, Weaknesses, Opportunities, and Threats) and Competitive Analyses are related concepts. They’re marketing cousins, so to speak.

Why: the benefits of doing a competitive analysis

With a successful competitive analysis, you look at what the other guys are doing, and, more importantly, what you can do in response. A competitive analysis pulls you out of your myopic bubble to find what else is going on in the world … to take a step back and then analyze your findings.

Here are some other benefits of doing a competitive analysis:

  • Get a view of the market landscape. See what else is out there – and where customers may go if they don’t choose you. You or your predecessor probably did this when your company or product was initially concepted. This isn’t a one-and-done exercise, though. You need to constantly keep an eye on the competitive landscape.
  • Meet a need. Ultimately your goal as a marketer (or R&D team member) is to find a niche (or chasm) to fill a need that isn’t already being met. To create a product or service that customers want or need – and expertly sell it in a way that garners the most traffic. Essentials of Marketing, a textbook on all things marketing that sits on my bookshelf, words it this way: “The search for a breakthrough opportunity – or some sort of competitive advantage – requires an understanding not only of customers but also of competitors.”
  • Find ways to get new customers – or hang on to the ones you have. During a competitive analysis, you get time to think about acquisition and retention. For example, how are your competitors winning new customers or retaining them? Look at their loyalty programs and win-back strategies. Also consider what you could do to win customers from your competitors and take over more market share.

When to perform a competitive analysis

There are a variety of times to perform a competitive analysis. Maybe you’re launching a new product and need to know how to approach it. Maybe your brand is feeling stale and you need ideas. Or, maybe it’s simply a downtime in your marketing schedule and you want to do extra credit work. Whatever the case, I encourage you to perform competitive analyses at least once or twice per year.

How to perform a competitive analysis

Now let’s get into the how-tos.

  1. First, scope out the competition and name them. In order to perform a competitive analysis, you have to know who your competitors are. If you have a list, great. Go grab it. If not – or if you’re launching something new and don’t yet have your competitors ID’d – let’s walk through a few questions to help identify them: Who else is doing what you’re doing, or striving to do? Who is the closest competitor? Also consider less obvious competitors, such as brands or products that may not replicate your product, but may feel similar in tone. And also think about those whose work is similar enough – or those who have a similar ethos. Consider that they could pose a future threat if they develop something new.
  2. Next, ID their strengths and weaknesses. This is where SWOT comes in. Literally write down each competitor’s name and product, and then run a mini SWOT on them.
  3. Pay attention to the details. What kind of marketing campaigns are your competitors running – and in what channels? What keywords are they using? What obvious keywords are they avoiding? Take note of your observations. You may make some very interesting discoveries here, so take detailed notes.
  4. Get a little help from your friends – and the Internet. You don’t have to do all of this research on your own. Turn to your colleagues, for example, the folks on your R&D team who came up with the product you’re in charge of marketing. It’s possible that they’ve done similar competitive research while concepting and creating the product. Get your hands on that material. Also, if you’re willing to put in the work, you can access a wealth of consumer-authored and brand-authored information online. As we know by now, today’s consumers do a ton of pre-work before they make a decision. In fact, I just did this exact thing: I’m in the market for a new blender and spent a good chunk of time doing research online before making my decision. I read the product descriptions, reviews, and user comments. I paid attention to all the details – the good, the bad, and the ugly.

As a marketer, you can use that same data and strategy to gain understanding of what customers want and how your competitors are responding. For example, scour competitors’ product reviews and their social media feeds. Pay attention both to what consumers are saying and how brands respond. These primary sources may be time-consuming to shuffle through, but yield invaluable results.

  5. Turn your research inward and make recommendations. The ultimate goal of a competitive analysis is to better position your own product against the competitors. Once you’ve found your competitors and what they’re doing, turn your focus to your own brand or product. How do you – or could you – do things differently? What are your strengths above theirs – and/or differentiators? For ideas, turn back to your SWOT here and look at your “S” and “O” categories specifically. What have you learned, and what could you try? Remember to use your knowledge for good. It’s worth repeating: Once you’ve done your work, you will likely have a great playbook of what the competition does and how they do it. Now’s the time to put your white hat back on and remind yourself and your colleagues of good business practices. Ideate ‒ don’t imitate.
  6. Formalize your findings. Take copious notes, and pull them into a format that you and your colleagues can use. It might look like this:
    1. Devote one page (or slide) per competitor/product.
    2. On each page note the competitor’s name and the specific product name. Include a quick SWOT grid. Note three top-level findings.
    3. In your conclusion, summarize your findings and recommendations.

Don’t forget to add the date of your research; a lot can change in a few months, so it’s helpful to notate when research was performed.

Practice makes perfect

The first time you perform a competitive analysis can be shaky. Personally, I ended up with a laundry list of brands, products, and notes. Once I went through the exercise a few more times, I got more of a rhythm with what I was trying to do – and what kind of information I was looking for. As with anything, practice makes perfect (or at least makes you more comfortable).

For bonus points, look outside your own rabbit hole. Try performing a competitive analysis for a totally unrelated company or industry. For example, if you’re a marketer for software, pretend instead that you’re marketing a specialty clothing line. Or do a competitive analysis for a retail coffee chain, or an e-learning platform, or … the possibilities are endless. I recommend doing these exercises to get practice with the craft of competitive analysis, but also to gain perspective and inspiration. You may even want to do a competitive analysis of an industry or company that you really want to work in. This way you have it in your back pocket when you secure an interview!


Act-On Blog

Automation of Analysis Services with NuGet packages

We are pleased to announce that the Analysis Services Management Objects (AMO) and ADOMD client libraries are now available from NuGet.org! This simplifies the development and management of automation tasks for Azure Analysis Services and SQL Server Analysis Services.

NuGet provides benefits including the following.

  • Azure Functions require less manual intervention to work with Azure Analysis Services.
  • ISVs and developers of community tools for Analysis Services such as DAX Studio, Tabular Editor, BISM Normalizer and others will benefit from simplified deployment and reliable (non-GAC) references.

Visit this site for information on what NuGet is for and how to use it.

AMO

https://www.nuget.org/packages/Microsoft.AnalysisServices.retail.amd64/

AMO contains the object libraries to create and manage both Analysis Services multidimensional and tabular models. The object library for tabular is called the Tabular Object Model (TOM). See here for more information.

ADOMD

https://www.nuget.org/packages/Microsoft.AnalysisServices.AdomdClient.retail.amd64/

ADOMD is used primarily for development of client tools that submit MDX or DAX queries to Analysis Services for user analysis. See here for more information.
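For example, a .NET project could reference both packages with PackageReference entries like the following. The version numbers are illustrative; pick the latest versions from NuGet.org.

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AnalysisServices.retail.amd64" Version="15.0.2" />
  <PackageReference Include="Microsoft.AnalysisServices.AdomdClient.retail.amd64" Version="15.0.2" />
</ItemGroup>
```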

MSI installer

We recommend that all developers who use these libraries migrate to NuGet references instead of using the MSI installer. The MSI installers for AMO and ADOMD are still available here. From MAJOR version 15 onward, and for the foreseeable future, we plan to release the client libraries as both NuGet packages and MSI installers. In the long term, we want to retire the MSI installer.

NuGet versioning

The AssemblyVersion of the NuGet package assemblies will follow semantic versioning: MAJOR.MINOR.PATCH. This ensures that NuGet references load the expected version even if a different version is in the GAC (resulting from an MSI install). We will increment at least the PATCH number for each public release. The AMO and ADOMD versions will be kept in sync.

MSI versioning

The MSI version will continue to use a versioning scheme for AssemblyVersion like 15.0.0.0 (MAJOR version only). The MSI installs assemblies to the GAC and overrides previous MSI installations for the same MAJOR version. This versioning scheme ensures new releases do not affect NuGet references. It also ensures a single entry in Add/Remove Programs for each installation of a MAJOR version.

Windows Explorer file properties

The AssemblyFileVersion (visible in Windows Explorer as the File Version) will be the full version for both MSI and NuGet (e.g. 15.0.1.333).

The AssemblyInformationalVersion (visible in Windows Explorer as the Product Version) will be the semantic version for both MSI and NuGet (e.g. 15.0.1.0).

For anyone who has done significant development with Analysis Services, we hope you enjoy the NuGet experience for Analysis Services!


Analysis Services Team Blog

Asynchronous Refresh with the REST API for Azure Analysis Services

We are pleased to introduce the REST API for Azure Analysis Services. Using any programming language that supports REST calls, you can now perform asynchronous data-refresh operations. This includes synchronization of read-only replicas for query scale out.

Data-refresh operations can take some time depending on various factors including data volume and level of optimization using partitions, etc. These operations have traditionally been invoked with existing methods such as using TOM (Tabular Object Model), PowerShell cmdlets for Analysis Services, or TMSL (Tabular Model Scripting Language). The traditional methods may require long-running HTTP connections. A lot of work has been done to ensure the stability of these methods, but given the nature of HTTP, it may be more reliable to avoid long-running HTTP connections from client applications.

The REST API for Azure Analysis Services enables data-refresh operations to be carried out asynchronously. It therefore does not require long-running HTTP connections from client applications. Additionally, there are other built-in features for reliability such as auto retries and batched commits.

Base URL

The base URL follows this format:

https://<region>.asazure.windows.net/servers/<serverName>/models/<modelName>/

For example, consider a model named AdventureWorks, on a server named myserver, located in the West US Azure region. The server name is:

asazure://westus.asazure.windows.net/myserver

The base URL is:

https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/
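The pattern can be captured in a small helper. This is our own illustrative sketch, not part of the API or the official code sample.

```python
def base_url(region, server, model):
    # Build the management REST endpoint for an Azure Analysis Services model.
    return f"https://{region}.asazure.windows.net/servers/{server}/models/{model}/"

print(base_url("westus", "myserver", "AdventureWorks"))
# https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/
```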

Using the base URL, resources and operations can be appended based on the following diagram:

[Diagram: resources and operations that can be appended to the base URL]

  • Anything that ends in “s” is a collection.
  • Anything that ends with “()” is a function.
  • Anything else is a resource/object.

For example, you can use the POST verb on the Refreshes collection to perform a refresh operation:

https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/refreshes

Authentication

All calls must be authenticated with a valid Azure Active Directory (OAuth 2) token in the Authorization header and must meet the following requirements:

  • The token must be either a user token or an application service principal.
  • The user or application must have sufficient permissions on the server or model to make the requested call. The permission level is determined by roles within the model or the admin group on the server.
  • The token must have the correct audience set to: “https://*.asazure.windows.net”.

POST /refreshes

To perform a refresh operation, use the POST verb on the /refreshes collection to add a new refresh item to the collection. The Location header in the response includes the refresh ID. The client application can disconnect and check the status later if required because it is asynchronous.

Only one refresh operation is accepted at a time for a model. If there is a current running refresh operation and another is submitted, the 409 Conflict HTTP status code will be returned.

The body may, for example, resemble the following:

{
    "Type": "Full",
    "CommitMode": "transactional",
    "MaxParallelism": 2,
    "RetryCount": 2,
    "Objects": [
        {
            "table": "DimCustomer",
            "partition": "DimCustomer"
        },
        {
            "table": "DimDate"
        }
    ]
}

Here’s a list of parameters:

  • Type (Enum, optional; default: automatic): The type of processing to perform. The types are aligned with the TMSL refresh command types: full, clearValues, calculate, dataOnly, automatic, add, and defragment.
  • CommitMode (Enum, optional; default: transactional): Determines whether objects are committed in batches or only when complete. Modes include: default, transactional, partialBatch.
  • MaxParallelism (int, optional; default: 10): The maximum number of threads on which to run processing commands in parallel. This aligns with the MaxParallelism property that can be set in the TMSL Sequence command or by using other methods.
  • RetryCount (int, optional; default: 0): The number of times the operation will retry before failing.
  • Objects (Array, optional; default: process the entire model): An array of objects to be processed. Each object includes “table” when processing the entire table, or “table” and “partition” when processing a partition. If no objects are specified, the whole model is refreshed.

CommitMode equal to partialBatch can be used when doing an initial load of a large dataset that may take hours. At time of writing, the batch size is the MaxParallelism value, but this may change. If the refresh operation fails after successfully committing one or more batches, the successfully committed batches will remain committed (it will not roll back successfully committed batches).
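As an illustrative sketch (ours, not from the official C# sample), the request can be assembled in Python with only the standard library. Actually sending it, with urllib or requests, is left to the caller.

```python
import json

def build_refresh_request(base_url, token, body):
    """Assemble the pieces of a POST /refreshes call.

    The Location header of the eventual response carries the refresh ID,
    which can be polled later because the operation is asynchronous.
    """
    url = base_url.rstrip("/") + "/refreshes"
    headers = {
        "Authorization": "Bearer " + token,  # Azure AD (OAuth 2) token
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_refresh_request(
    "https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/",
    "<token>",  # placeholder; obtain a real token from Azure AD
    {"Type": "Full", "CommitMode": "transactional", "MaxParallelism": 2, "RetryCount": 2},
)
```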

GET /refreshes/<refreshId>

To check the status of a refresh operation, use the GET verb on the refresh ID. Here’s an example of the response body. The status field returns “inProgress” if the operation is in progress.

{
    "startTime": "2017-12-07T02:06:57.1838734Z",
    "endTime": "2017-12-07T02:07:00.4929675Z",
    "type": "full",
    "status": "succeeded",
    "currentRefreshType": "full",
    "objects": [
        {
            "table": "DimCustomer",
            "partition": "DimCustomer",
            "status": "succeeded"
        },
        {
            "table": "DimDate",
            "partition": "DimDate",
            "status": "succeeded"
        }
    ]
}
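A client polling this endpoint only needs to inspect the status field. Here is a minimal sketch; the set of terminal states below is inferred from the examples in this post, not an exhaustive list from official documentation.

```python
import json

# Assumed terminal states for a refresh operation.
DONE_STATES = {"succeeded", "failed", "cancelled"}

def is_refresh_done(response_body):
    """Return True when the refresh status is no longer 'inProgress'."""
    status = json.loads(response_body)["status"]
    return status in DONE_STATES
```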

GET /refreshes

To retrieve a list of historical refresh operations for a model, use the GET verb on the /refreshes collection. Here is an example of the response body. At time of writing, the last 30 days of refresh operations are stored and returned, but this is subject to change.

[
    {
        "refreshId": "1344a272-7893-4afa-a4b3-3fb87222fdac",
        "startTime": "2017-12-09T01:58:04.76",
        "endTime": "2017-12-09T01:58:12.607",
        "status": "succeeded"
    },
    {
        "refreshId": "474fc5a0-3d69-4c5d-adb4-8a846fa5580b",
        "startTime": "2017-12-07T02:05:48.32",
        "endTime": "2017-12-07T02:05:54.913",
        "status": "succeeded"
    }
]

DELETE /refreshes/<refreshId>

To cancel an in-progress refresh operation, use the DELETE verb on the refresh ID.

POST /sync

Having performed refresh operations, it may be necessary to synchronize the new data with replicas for query scale out. To perform a synchronize operation for a model, use the POST verb on the /sync function. The Location header in the response includes the sync operation ID.

GET /sync?operationId=<operationId>

To check the status of a sync operation, use the GET verb passing the operation ID as a parameter. Here’s an example of the response body:

{
    "operationId": "cd5e16c6-6d4e-4347-86a0-762bdf5b4875",
    "database": "AdventureWorks2",
    "UpdatedAt": "2017-12-09T02:44:26.18",
    "StartedAt": "2017-12-09T02:44:20.743",
    "syncstate": 2,
    "details": null
}

Possible values for syncstate include the following:

  • 0: Replicating. Database files are being replicated to a target folder.
  • 1: Rehydrating. The database is being rehydrated on read-only server instance(s).
  • 2: Completed. The sync operation completed successfully.
  • 3: Failed. The sync operation failed.
  • 4: Finalizing. The sync operation has completed but is performing clean up steps.
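As a trivial illustration, the codes above can be mapped to readable labels; the helper name is ours, not part of the API.

```python
# Map the documented syncstate codes to readable labels.
SYNC_STATES = {
    0: "Replicating",
    1: "Rehydrating",
    2: "Completed",
    3: "Failed",
    4: "Finalizing",
}

def describe_sync_state(code):
    """Translate a syncstate code from GET /sync into a label."""
    return SYNC_STATES.get(code, "Unknown")
```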

Code sample

Here’s a C# code sample to get you started:

https://github.com/Microsoft/Analysis-Services/tree/master/RestApiSample

To use the code sample, first do the following:

  1. Clone or download the repo. Open the RestApiSample solution.
  2. Find the line “client.BaseAddress = …” and provide your base URL (see above).

The code sample can use the following forms of authentication:

  • Interactive login or username/password
  • Service principal

Interactive login or username/password

This form of authentication requires an Azure application be set up with the necessary API permissions assigned. This section describes how to set up the application using the Azure portal.

  1. Select the Azure Active Directory section, click App registrations, and then New application registration.


  2. In the Create blade, enter a meaningful name, select Native application type, and then enter “urn:ietf:wg:oauth:2.0:oob” for the Redirect URI. Then click the Create button.


  3. Select your app from the list and take note of the Application ID.


  4. In the Settings section for your app, click Required permissions, and then click Add.


  5. In Select an API, type “SQL Server Analysis Services” into the search box. Then select “Azure Analysis Services (SQL Server Analysis Services Azure)”.


  6. Select Read and Write all Models and then click the Select button. Then click Done to add the permissions. It may take a few minutes to propagate.

  7. In the code sample, find the method UpdateToken(). Observe the contents of this method.
  8. Find the line “string clientID = …” and enter the application ID you previously recorded.
  9. Run the sample.

Service principal

Please see the Automation of Azure Analysis Services with Service Principals and PowerShell blog post for how to set up a service principal and assign the necessary permissions in Azure Analysis Services. Having done this, the following additional steps are required:

  1. Find the line “string authority = …” and enter your organization’s tenant ID in place of “common”. See code comments for further info.
  2. Comment/uncomment the code so the ClientCredential class is used to instantiate the cred object. Ensure the application ID and key values are accessed in a secure way, or use certificate-based authentication for service principals.
  3. Run the sample.
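The two changes above can be sketched as follows (in Python rather than the sample's C#; the environment-variable names are hypothetical, chosen only to show the secrets staying out of source code):

```python
import os

def authority_url(tenant_id="common"):
    # For service principals, replace "common" with your tenant ID,
    # as described in step 1 above.
    return f"https://login.microsoftonline.com/{tenant_id}"

def service_principal_credential():
    # Hypothetical helper: pull the app ID and secret from environment
    # variables so they never appear in source control.
    return os.environ["APP_ID"], os.environ["APP_SECRET"]
```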

Analysis Services Team Blog

Exploratory and Confirmatory Analysis: What’s the Difference?

How does a detective solve a case? She pulls together all the evidence she has, all the data that’s available to her, and she looks for clues and patterns.

At the same time, she takes a good hard look at individual pieces of evidence. What supports her hypothesis? What bucks the trend? Which factors work against her narrative? What questions does she still need to answer… and what does she need to do next in order to answer them?

Then, adding to the mix her wealth of experience and ingrained intuition, she builds a picture of what really took place – and perhaps even predicts what might happen next.

But that’s not the end of the story. We don’t simply take the detective’s word for it that she’s solved the crime. We take her findings to a court and make her prove it.

In a nutshell, that’s the difference between Exploratory and Confirmatory Analysis.

Data analysis is a broad church, and managing this process successfully involves several rounds of testing, experimenting, hypothesizing, checking, and interrogating both your data and approach.

Putting your case together, and then ripping apart what you think you’re certain about to challenge your own assumptions, are both crucial to Business Intelligence.

Before you can do either of these things, however, you have to be sure that you can tell them apart.

What is Exploratory Data Analysis?

Exploratory data analysis (EDA) is the first part of your data analysis process. There are several important things to do at this stage, but it boils down to this: figuring out what to make of the data, establishing the questions you want to ask and how you’re going to frame them, and coming up with the best way to present and manipulate the data you have to draw out those important insights.

That’s what it is, but how does it work?

As the name suggests, you’re exploring – looking for clues. You’re teasing out trends and patterns, as well as deviations from the model, outliers, and unexpected results, using quantitative and visual methods. What you find out now will help you decide the questions to ask, the research areas to explore and, generally, the next steps to take.

Exploratory Data Analysis involves things like:

  • establishing the data’s underlying structure
  • identifying mistakes and missing data
  • establishing the key variables
  • spotting anomalies
  • checking assumptions and testing hypotheses in relation to a specific model
  • estimating parameters, establishing confidence intervals and margins of error
  • figuring out a “parsimonious model” – i.e. one that you can use to explain the data with the fewest possible predictor variables
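One of these tasks, spotting anomalies, can be sketched generically in a few lines (a toy illustration, not tied to any particular BI tool or statistics package):

```python
from statistics import mean, stdev

def flag_outliers(values, z=2.0):
    # Flag points more than z standard deviations from the mean --
    # one simple, first-pass EDA check for unexpected results.
    m, s = mean(values), stdev(values)
    return [x for x in values if abs(x - m) > z * s]
```

Real EDA would pair a check like this with visual methods (histograms, scatter plots) rather than rely on a single rule.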

In this way, your Exploratory Data Analysis is your detective work. To make it stick, though, you need Confirmatory Data Analysis.

What is Confirmatory Data Analysis?

Confirmatory Data Analysis is the part where you evaluate your evidence using traditional statistical tools such as significance, inference, and confidence.

At this point, you’re really challenging your assumptions. A big part of confirmatory data analysis is quantifying things like the extent to which any deviation from the model you’ve built could have happened by chance, and at what point you need to start questioning your model.

Confirmatory Data Analysis involves things like: testing hypotheses, producing estimates with a specified level of precision, regression analysis, and variance analysis.

In this way, your confirmatory data analysis is where you put your findings and arguments on trial.

Uses of Confirmatory and Exploratory Data Analysis

In reality, exploratory and confirmatory data analysis aren’t performed one after another, but continually intertwine to help you create the best possible model for analysis.

Let’s take an example of how this might look in practice.

Imagine that in recent months, you’d seen a surge in the number of users canceling their product subscription. You want to find out why this is, so that you can tackle the underlying cause and reverse the trend.

This would begin as exploratory data analysis. You’d take all of the data you have on the defectors, as well as on happy customers of your product, and start to sift through looking for clues. After plenty of time spent manipulating the data and looking at it from different angles, you notice that the vast majority of people that defected had signed up during the same month.

On closer investigation, you find out that during the month in question, your marketing team was shifting to a new customer management system and, as a result, the introductory documentation that you usually send to new customers wasn’t always going out. That documentation would have helped new users troubleshoot many of the teething problems they face.

Now you have a hypothesis: people are defecting because they didn’t get the welcome pack (and the easy solution is to make sure they always get a welcome pack!).

But first, you need to be sure that you were right about this cause. Based on your Exploratory Data Analysis, you now build a new predictive model that allows you to compare defection rates between those that received the welcome pack and those that did not. This is rooted in Confirmatory Data Analysis.

The results show a broad correlation between the two. Bingo! You have your answer.

Exploratory Data Analysis and Big Data

Getting a feel for the data is one thing, but what about when you’re dealing with enormous data pools?

After all, there are already so many different ways to approach Exploratory Data Analysis: transforming the data through nonlinear operators, projecting it into a different subspace and examining the resulting distribution, or slicing and dicing it along different combinations of dimensions. Add sprawling amounts of data into the mix and suddenly the whole “playing detective” element feels a lot more daunting.

The important thing is to ensure that you have the right tech stack in place to cope with this, and to make sure you have access to the data you need in real time.

Two of the best statistical programming packages available for conducting Exploratory Data Analysis are R and S-Plus; R is particularly powerful and easily integrated with many BI platforms. That’s the first thing to consider.

The next step is ensuring that your BI platform has a comprehensive set of data connectors that – crucially – allow data to flow in both directions. This means that you can keep importing Exploratory Data Analysis results and models from, for example, R, to visualize and interrogate them – and also send data back from your BI solution to automatically update your model and results as new information flows into R.

In this way, you not only strengthen your Exploratory Data Analysis, you incorporate Confirmatory Data Analysis, too – covering all your bases of collecting, presenting and testing your evidence to help reach a genuinely insightful conclusion.

Your honor, we rest our case.

Ready to learn how to incorporate R for deeper statistical learning? You can watch our webinar with renowned R expert Jared Lander to learn how R can be used to solve real-life business problems.

Blog – Sisense

Qubole raises $25 million for data analysis service

Qubole, which provides software to automate and simplify data analytics, announced today that it has raised $25 million in a round co-led by Singtel Innov8 and Harmony Partners. Existing investors Charles River Ventures (CRV), Lightspeed Venture Partners, Norwest Venture Partners, and Institutional Venture Partners (IVP) also joined.

Founded in 2011, the Santa Clara, California-based startup provides the infrastructure to process and analyze data more easily.

It’s possible for companies to store large amounts of information in public clouds without building their own datacenters. But they still need to process and analyze the data, which is where Qubole comes in.

“Many companies struggle with creating data lakes,” CEO Ashish Thusoo noted. His solution provides a cloud-based infrastructure to break the raw data down without having to split it into silos.

The chief executive is well-versed in the matter, as he led a team of engineers at Facebook that focused on data infrastructure. “It gave us a front row seat of how modern enterprises should be using data,” he said.

Qubole claims to be processing nearly an exabyte of data in the cloud per month for more than 200 enterprises, which include Autodesk, Lyft, Samsung, and Under Armour. In the case of Lyft, the ride-sharing company uses Qubole to process and analyze its data for route optimization, matching drivers with customers faster.

Qubole offers a platform as a service (PaaS) that currently runs on Amazon Web Services (AWS), Microsoft Azure, and Oracle Cloud. “Google is something we’re looking at,” said Thusoo.

He said the biggest competitors in the sector include AWS, Cloudera, and Databricks, which recently closed a $140 million round of funding.

To date, Qubole has raised a total of $75 million. It plans on using the new money to further develop its product, increase sales and marketing efforts, and expand in the Asia Pacific (APAC) region.

“There is a significant opportunity for big data in the Asia Pacific region,” said Punit Chiniwalla, senior director at Singtel Innov8, in a statement.

Qubole currently employs 240 people across its offices in California, India, and Singapore.

Big Data – VentureBeat

Improve Sales, Marketing, SEO and more with Sales Call Analysis

This transcript has been edited for length. To get the full measure, listen to the podcast.

Michelle Huff: What are people trying to learn from analyzing all these sales conversations?

Amit Bendov: Sales is a pretty complex craft. There’s a lot of things that you need to do right. Anything from being a good listener, to asking the right questions at the right time, to making sure that you uncover customer problems, that you create a differentiated value proposition in their minds, to make sure you have concrete action items. So, there’s quite a lot to learn. And then there’s also product and industry specific knowledge. For example, if you hire a new salesperson at Act-On, first they need to be good salespeople; second, they need to understand marketing automation, they need to understand the industry, they need to understand competitive differentiation.

So, all those things are little skills. Gong identifies how you are doing and how you compare against some of the best reps in your company. And then it starts coaching, either by providing feedback to you or to your manager, and improving your reps so you can become better at all these little things: becoming a better listener, asking better questions, and becoming a better closer.

Although we sell primarily to the sales team, the product marketing and marketing teams are also big fans, because you can see if there are new messages that are being rolled out, and are customers responding to the new messages, are we telling the story right. What are customers asking about, both from a marketing and sales perspective, but also from a product, which features they like, which features they don’t like, which features they like about our competitors. So, it’s a great insight tool for marketeers and product guys.

Michelle: What do you think we could learn from unsuccessful sales calls?

Amit: I’m a believer we could learn more from successful calls. There are fewer ways to succeed than ways to fail. So, learning from what works is usually more powerful. But we all have our failures. And nobody’s perfect. Even some of the better calls have lots of areas to improve. One of the first things that people notice is the listen to talk ratio. And one of the things Gong measures is how much time you’re speaking on a call versus how much time the prospect is speaking. The optimal ratio, if you’re curious, is 46 percent. So, the salesperson is speaking for 46 percent of the time of the call and the prospect’s filling in the rest.

A lot of the sales reps – especially, the new hires – tend to speak as much as 80 percent of the time. Maybe it’s because they’re insecure, maybe they feel they need to push more. And that’s almost never a good idea.

Michelle: That’s such a good feedback loop. Forty-six percent, that’s a good ratio, right? It’s not saying nothing at all, but letting them do the majority of the talking. That’s interesting.

Amit: It is like a Fitbit for sales calls. Once people see that feedback, it’s pretty easy to cure: ‘Oh my God, I spoke for 85 percent.’ And then they start setting personal goals to bring it down. And because they get feedback on every call, every time, it’s pretty easy to fix this problem.
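The listen-to-talk metric discussed above is simple to compute once per-speaker talk times are known; a minimal sketch (the function and its inputs are hypothetical illustrations, not Gong's actual implementation):

```python
def talk_ratio(rep_seconds, prospect_seconds):
    # Fraction of the call the sales rep spent talking. Per the
    # interview above, the optimum is around 0.46.
    total = rep_seconds + prospect_seconds
    return rep_seconds / total if total else 0.0
```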

Michelle: What are certain keywords when it comes to customer timelines that you train people to look for?

Amit: One of the things we’ve analyzed, and we ran this on a very large number of calls, is what makes a reliable response to the question about the timing of a project. If a salesperson asks a customer, “When would you like to be live?” there’s a range of possible answers. We found the word “probably,” as in “probably mid-February,” is a pretty good indicator that they’re serious about it. We don’t know exactly why, but we can only guess that maybe they’ve considered their response more seriously. Versus “we need this yesterday,” or “we have to have it tomorrow,” which are not really thoughtful answers. Or, obviously, “well, maybe sometime next year,” which is very loose. So, the word “probably” is actually a pretty good indicator: it doesn’t mean the deal will close, but if it does, it will probably happen on that timeframe.

Michelle: That is super insightful. Because it’s almost counterintuitive. You’d think that if you hear the word probably, it wouldn’t, they’re not very firm.

Amit: Here’s another interesting one. A lot of sales managers and coaches are obsessed with filler words, things like “you know,” “like,” “basically.” And sometimes they would drive the salespeople nuts trying to bring down their filler-word counts. We analyzed a large number of calls and tried to see if there’s an impact on close rates between calls with a lot of filler words and calls without. And we found absolutely zero correlation between those words and success. So, my advice to our listeners: just don’t worry about it. Just say what you like. It doesn’t make a big difference. Or at least there is no proof that it makes a difference.

And my theory is that it’s more annoying when you listen to it in a recording than in a live conversation. Because in a live conversation, both you and the customer are focused on the conversation and trying to understand what’s going on, and you don’t pay attention to those filler words. But when you listen to a recording, they’re much more prominent.

Michelle: I think at the end of the day, if there’s a connection, people are buying from people. I feel like those are all things where it’s good feedback, where you can just improve on how you up your game across the board on all your conversations.

Amit: Absolutely. You shine a light on this huge void that is the sales conversation, on what’s actually happening in it, so people can see the data clearly, versus relying on opinions, self-perception, or subjective impressions of what is actually happening.

Michelle: We talked a lot about the sales use case. What are some of the other areas we can use Gong? How about customer success?

Amit: Almost all of our customers now use it for customer success as well. Again, here’s where you want to know what the customers are thinking. Are we taking good care of them? Are we saying the right things? Which customers are unhappy with our service, and what do we need to improve? So that again shines a light on how customers feel. Because without that, all you have is really some usage metrics and KPIs and surveys, which are important but don’t tell the complete story. What people actually tell you does: you could have customers that use the product a lot and are not very thrilled with it, or customers that don’t use it as much as you think they should, but are very excited. So, definitely, Gong is used for customer success by almost all of our customers.

Here is another interesting application. I know a lot of our audience are marketers. A lot of what we do in marketing has to do with messages. First is how we describe the product. You can learn a lot about that from what your customers say, not what your salespeople are saying. If you listen to real customer calls, I mean existing users, and you hear how they describe the product, that is probably very good language for you to use. If you listen to enough calls, you can get a theme that will help you explain your product better.

You might have heard that I’ve used ‘shine a light on your sales conversation.’ I didn’t make this up. We’ve interviewed 20 VPs of sales that use the product. And we looked at what are the common themes they use when they describe the product. And this came from them. So, it’s a great messaging research tool.

The other application you could use Gong for is SEO and SEM. Usually one of the first things people will say when they join an introductory call is, we’re looking for “X.” That “X” might not be what’s on your website, or how you describe your product. If you listen to a lot of calls and use their own words, those are the words you want to bid on or optimize for. Because that will drive a lot of traffic. And this is what people really search for and not the words you would normally use in your website. We do that, too.

Michelle: In our conversation, you’re bringing up your research and the insights you’re able to gain. Having these research reports seems to be a key part of your marketing and brand awareness. How do you leverage these research reports? What are you doing? And how is it helping?

Amit: We identified that as the key strategic marketing capability we’re going to be counting on. Something that people have not seen before, like the first pictures from the Hubble telescope: OK, so here’s what the universe looks like. Or the first pictures of the Titanic: this is what it really looks like, and that’s where it lies. We’re doing the same thing for sales conversations. It’s something that people are very passionate about. There are some 10,000 books on Amazon on how to sell, but nobody really has the facts. So, we identified this as an opportunity.

And we’re trying to do things that are interesting and useful. We try to keep it short. We take a small chunk, we investigate it, we publish the results, and we try to make sure that, whatever we do, people have at least some takeaway. So, it’s both interesting and immediately applicable to what they do.

That does generate activity on social media. We get hundreds of likes per post. We get press from it; some of it got published on Business Insider and Forbes, and it drives a lot of traffic. Plus, it’s in line with our message of shining a light on your sales conversation. We get a lot of people coming into sales calls saying, ‘Hey, I read this blog on LinkedIn, this is very fascinating, about how much I should be talking, what kind of questions to ask. I want to apply this to my own team.’ Which is what we sell.

Michelle: I enjoyed the conversation. Thank you so much for taking the time to speak with us.

Amit: My pleasure, Michelle. I had a lot of fun. Thank you.

Act-On Blog