• Home
  • About Us
  • Contact Us
  • Privacy Policy
  • Special Offers
Business Intelligence Info
  • Business Intelligence
    • BI News and Info
    • Big Data
    • Mobile and Cloud
    • Self-Service BI
  • CRM
    • CRM News and Info
    • InfusionSoft
    • Microsoft Dynamics CRM
    • NetSuite
    • OnContact
    • Salesforce
    • Workbooks
  • Data Mining
    • Pentaho
    • Sisense
    • Tableau
    • TIBCO Spotfire
  • Data Warehousing
    • DWH News and Info
    • IBM DB2
    • Microsoft SQL Server
    • Oracle
    • Teradata
  • Predictive Analytics
    • FICO
    • KNIME
    • Mathematica
    • Matlab
    • Minitab
    • RapidMiner
    • Revolution
    • SAP
    • SAS/SPSS
  • Humor

Tag Archives: Platform

Conversational Platform Trends for 2021

January 21, 2021   CRM News and Info

Live chat and conversational platform technologies have made significant advancements in the past few years. Thanks to AI and machine learning, these implementations have gone beyond just being a customer support tool, to a crucial component of an e-commerce website’s revenue engine.

With the pandemic ongoing, e-commerce is booming right now at the expense of brick-and-mortar. But with Amazon still accounting for 44 percent of all online transactions, e-commerce business owners must constantly search for ways to keep customers on their site and returning for future business.

Live chat is one of the most important tools that retailers can have on their website so that when a customer has any questions or concerns about a product or service, they can get instant customer support. This is particularly important for serving consumers with a high level of intent to purchase, so they don’t bounce to a competitor to get their questions answered and make the purchase.

However, a live chat tool is only as good as the team and technology behind it — and live chat experiences can vary dramatically from retailer to retailer. There are essential factors that every online seller who is serious about conversions needs to consider.

Here’s what retailers need to know to get maximum conversions and satisfied customers from live chat tools in 2021 and beyond.

So Many Channels, So Little Time

When today’s consumers have a question for a retailer, there are a growing number of communication channels they might send a message through: the website, email, Facebook, Twitter, Instagram, etc. The longer a customer is left waiting for a response, the more likely a serious shopper is to search elsewhere for what they were looking for.

But now, live chat technology can integrate with all of these communication channels. Customer messages that arrive through any channel can be routed to an app on the smartphones of the retailer’s support staff, so they can respond instantly at any time of day.

AI, Machine Learning

Very few technologies haven’t benefited from AI and machine learning — and live chat is no exception. There is a fine balance between technology and human experience when it comes to customer satisfaction. At the end of the day, the advantage of automation is that it saves businesses time and money. The downside is that sometimes your customers just want to speak with a human and get frustrated dealing with bots.

Hiring humans to answer the same questions over and over makes little sense. This is why many retailers offer a chatbot to respond to common questions, but these can often be a lackluster or frustrating experience from the customer’s perspective.

A live chat system with good AI will monitor hundreds of thousands or millions of live chat conversations between customers and sales associates to identify common customer problems and determine which questions can be automated versus those requiring human interaction.

AI can now analyze all your customers’ live chat sessions to understand the most common questions, integrate these answers into a bot, and redirect customers who want to speak to a human instantly with the right person.

AI and Customer Intent

One of the hottest topics recently for retailers has been customer intent. Only 17 percent of customers who visit a website have a serious intention to buy. In turn, this means that 83 percent of people visiting a website have no intent of actually buying anything.

Thanks to AI, state-of-the-art live chat systems can help identify which shoppers on a website have a serious intent to purchase. This gives the retailer a chance to open up a live chat portal and offer these customers the chance to speak to a sales associate. Very much like the in-store experience, these sales associates can help answer any questions, maximize the chance of converting these customers, upsell more products, and offer the best customer experience.

Integrating Freelance Product Experts

One of the primary reasons that retailers don’t have a 24/7 team of people on standby to live chat with their online customers is expense.

However, another key trend that has unfolded recently is integrating independent product experts and sales associates who work on a freelance model (similar to Uber) into the e-commerce live chat system.

Instead of paying full-time staff to respond to customer queries, retailers can now tap into a pool of freelance product experts who work from home and will help customers on behalf of the retailer.

For example, Lowes uses a team of freelance home improvement experts to respond to its customers. These gig-economy experts are pre-vetted on their expertise — in this case, home improvement. Many other retailers utilize freelance experts for chat to help with beauty products, photography, consumer electronics, etc.

These expert advisors work from home, choose their own hours, and jump onto the conversational platform whenever they want to live chat with customers of a particular retailer, or even multiple retailers’ customers, to answer product questions or upsell. The freelance advisor gets paid per chat session they participate in and earns a commission on the sales they help to make.

From a retailer’s perspective, they have best-of-breed product experts helping convert and upsell customers who are identified as having serious intent on their website, without the expense of paying full-time staff.

Conclusion

E-commerce continues to grow exponentially, but with Amazon always just one click away, retailers can outdo the competition by providing a better customer experience and adding the human element to their website. Live chat is the main portal for connecting online customers to people from your company, or someone who works on your behalf.

A live chat or conversational platform is only as good as the technology and the team behind it. Retailers who see it as a crucial part of their e-commerce experience and use the technology to the best of its ability will experience greater conversions and customer satisfaction scores compared to those who do not.


Terrence Fox is head of innovation and strategy at iAdvize.


CRM Buyer


Kenshoo acquires Signals Analytics to supplement its AI-powered marketing platform

December 23, 2020   Big Data

Kenshoo, a provider of a platform for managing marketing campaigns, yesterday announced its intent to acquire Signals Analytics, which provides a service for collecting unstructured consumer data that allows marketers to model campaigns using AI technologies. Signals Analytics counts organizations including Procter & Gamble, Nestle, Johnson & Johnson, Bayer, Roche, and Mars as its customers. Terms of the acquisition were not disclosed.

The Signals Analytics service optimizes marketing campaign designs using consumer data gathered from external sources such as social media feeds, which it normalizes for customers. That capability allows marketers to employ machine learning algorithms to reach specific types of consumers without necessarily having to rely on a digital marketing agency, said Signals Analytics co-founder and CEO Gil Sadeh.

All the data loading and preparation activities are automated using AI tools the company provides for that purpose, he noted. He said that there’s no need for organizations to construct a massive data lake of their own to analyze consumer data.

Going forward, the combined entity will further enable automated marketing campaign execution based on the models customers create using the Signals Analytics service. “Our outputs will become their inputs,” said Sadeh. “We’re going to create the first full lifecycle knowledge graph built for marketers.”

Other providers of market and customer intelligence platforms include Stravito and Precima. Marketing departments also employ an array of analytics applications that have been customized to analyze data stored in various types of data lakes that IT teams typically construct in the cloud on their behalf.

As organizations look to engage consumers online in the wake of the COVID-19 pandemic, they’re typically faced with a stark choice: either invest dollars in building their own marketing platforms on top of a data lake, or opt to employ a service that continuously collects and normalizes consumer data on their behalf. It would take most IT organizations several years to acquire, build, and deploy a platform that collects consumer data from multiple sources at the level of scale Signals Analytics has already achieved.

Once built, that data lake also needs to be maintained, otherwise it will potentially turn into a data swamp. Signals Analytics not only collects and normalizes massive amounts of data, it also classifies that data in a way that makes it possible to search and query it using natural language processing (NLP) tools.

What enterprise IT organizations are investing in is customer data platforms (CDPs) that enable them to better engage their existing customers. SAP, for example, acquired Emarsys last fall to provide such a platform. At the beginning of the year, Salesforce acquired Evergage. Other CDP providers include Oracle, Adobe, Arm Treasure Data, and QuickPivot.

The level of investment being made in these classes of platforms tends to vary widely depending on the impact the pandemic has had on a vertical industry. However, with many organizations prioritizing digital customer engagements, the need for platforms that employ AI to sort and analyze massive amounts of data has become more pronounced.

Of course, collecting customer data is only the first step. Organizations will also need to invest time and resources integrating analytics tools with the marketing automation platforms they have deployed. The combined entity that Kenshoo will create once the deal is completed will provide organizations with a pre-integrated platform.

It’s not clear to what degree the convergence of marketing intelligence and automation platforms is imminent. However, given the urgency to launch marketing campaigns faster at a lower total cost to the organization, chances are good that more mergers among these platform providers are on the way.


Big Data – VentureBeat


Intel Geospatial is a cloud platform for AI-powered imagery analytics

October 28, 2020   Big Data


Intel today quietly launched Intel Geospatial, a cloud platform that features data engineering solutions, 3D visualizations, and basic analytics tools for geovisual workloads. Intel says it’s designed to provide access to 2D and 3D geospatial data and apps through an ecosystem of partners, addressing use cases like vegetation management, fire risk assessment and inspection, and more.

The geospatial analytics market is large and growing, with a recent Markets and Markets report estimating it will be worth $96.34 billion by 2025. Geospatial imagery can help companies manage assets, for example, network assets prone to damage during powerful storms. Moreover, satellite imagery and the AI algorithms trained to analyze it have applications in weather prediction, defense, transportation, insurance, and even health care, namely because of their ability to capture and model environments over extended periods of time.

Using Intel Geospatial, which is powered by Intel datacenters, customers can ingest and manage geovisual data from a mobile- and desktop-accessible web portal. They’re able to view slope, elevation, and other data layers in a 3D environment with zoom, pan, and tilt controls and auto-updated time and date stamps. Moreover, they can analyze the state of various target assets as well as run analytics to extract insights that can then be passed to existing enterprise systems.


Intel Geospatial offers data from satellites, manned aircraft, and unmanned aerial vehicles (UAVs) like drones, with data from Mobileye — Intel’s autonomous vehicle subsidiary — available upon request. The platform’s user interface auto-populates with area-specific datasets and allows for search based on street addresses or GPS coordinates, which are standardized for analytics.

Intel Geospatial offers out-of-the-box algorithms for risk classification, object counting, distance measuring, and public and private record reconciliation. Intel says it’s leveraging startup Enview’s AI to power 3D geospatial classification for faster lidar analytics turnaround. Meanwhile, LiveEO is delivering algorithmic monitoring for railway, electricity, and pipelines.


Intel’s new service joins the list of geospatial products already offered by companies including Google, Microsoft, and Amazon. Google’s BigQuery GIS lets Google Cloud Platform customers analyze and visualize geospatial data in BigQuery. Microsoft offers Azure Maps, a set of geospatial APIs to add spatial analytics and mobility solutions to apps. Amazon provides a registry of open geospatial datasets on Amazon Web Services. And Here Technologies, the company behind a popular location and navigation platform, has a service called XYZ that enables anyone to upload their geospatial data — such as points, lines, polygons, and related metadata — and create apps equipped with real-time maps.





Big Data – VentureBeat


D-Wave’s 5,000-qubit quantum computing platform handles 1 million variables

September 29, 2020   Big Data


D-Wave today launched its next-generation quantum computing platform available via its Leap quantum cloud service. The company calls Advantage “the first quantum computer built for business.” In that vein, D-Wave today also debuted Launch, a jump-start program for businesses that want to begin building hybrid quantum applications.

“The Advantage quantum computer is the first quantum computer designed and developed from the ground up to support business applications,” D-Wave CEO Alan Baratz told VentureBeat. “We engineered it to be able to deal with large, complex commercial applications and to be able to support the running of those applications in production environments. There is no other quantum computer anywhere in the world that can solve problems at the scale and complexity that this quantum computer can solve problems. It really is the only one that you can run real business applications on. The other quantum computers are primarily prototypes. You can do experimentation, run small proofs of concept, but none of them can support applications at the scale that we can.”

Quantum computing leverages qubits (unlike bits that can only be in a state of 0 or 1, qubits can also be in a superposition of the two) to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. But D-Wave doesn’t sell quantum computers anymore. Advantage and its over 5,000 qubits (up from 2,000 in the company’s 2000Q system) are only available via the cloud. (That means through Leap or a partner like Amazon Braket.)

5,000+ qubits, 15-way qubit connectivity

If you’re confused by the “over 5,000 qubits” part, you’re not alone. More qubits typically means more potential for building commercial quantum applications. But D-Wave isn’t giving a specific qubit count for Advantage because the exact number varies between systems.

“Essentially, D-Wave is guaranteeing the availability of 5,000 qubits to Leap users using Advantage,” a D-Wave spokesperson told VentureBeat. “The actual specific number of qubits varies from chip to chip in each Advantage system. Some of the chips have significantly more than 5,000 qubits, and others are a bit closer to 5,000. But bottom line — anyone using Leap will have full access to at least 5,000 qubits.”

Advantage also promises 15-way qubit connectivity, thanks to a new chip topology, Pegasus, which D-Wave detailed back in February 2019. (Pegasus’ predecessor, Chimera, offered six connected qubits.) Having each qubit connected to 15 other qubits instead of six translates to 2.5 times more connectivity, which in turn enables the embedding of larger and more complex problems with fewer physical qubits.

“The combination of the number of qubits and the connectivity between those qubits determines how large a problem you can solve natively on the quantum computer,” Baratz said. “With the 2,000-qubit processor, we could natively solve problems within 100- to 200-variable range. With the Advantage quantum computer, having twice as many qubits and twice as much connectivity, we can solve problems more in the 600- to 800-variable range. As we’ve looked at different types of problems, and done some rough calculations, it comes out to generally we can solve problems about 2.6 times as large on the Advantage system as what we could have solved on the 2000-qubit processor. But that should not be mistaken with the size problem you can solve using the hybrid solver backed up by the Advantage quantum computer.”

1 million variables, same problem types

D-Wave today also announced its expanded hybrid solver service will be able to handle problems with up to 1 million variables (up from 10,000 variables). It will be generally available in Leap on October 8. The discrete quadratic model (DQM) solver is supposed to let businesses and developers apply hybrid quantum computing to new problem classes. Instead of accepting problems with only binary variables (0 or 1), the DQM solver uses other variable sets (integers from 1 to 500, colors, etc.), expanding the types of problems that can run on Advantage. D-Wave asserts that Advantage and DQM together will let businesses “run performant, real-time, hybrid quantum applications for the first time.”

Put another way, 1 million variables means tackling large-scale, business-critical problems. “Now, with the Advantage system and the enhancements to the hybrid solver service, we’ll be able to solve problems with up to 1 million variables,” Baratz said. “That means truly able to solve production-scale commercial applications.”


Depending on the technology they are built on, different quantum computers tend to be better at solving different problems. D-Wave has long said its quantum computers are good at solving optimization problems, “and most business problems are optimization problems,” Baratz argues.

Advantage isn’t going to be able to solve different types of problems, compared to its 2000Q predecessor. But coupled with DQM and the sheer number of variables, it may still be significantly more useful to businesses.

“The architecture is the same,” Baratz confirmed. “Both of these quantum computers are annealing quantum computers. And so the class of problems, the types of problems they can solve, are the same. It’s just at a different scale and complexity. The 2000-qubit processor just couldn’t solve these problems at the scale that our customers need to solve them in order for them to impact their business operations.”

D-Wave Launch

In March, D-Wave made its quantum computers available for free to coronavirus researchers and developers. “Through that process what we learned was that while we have really good software, really good tools, really good training, developers and businesses still need help,” Baratz told VentureBeat. “Help understanding what are the best problems that they can benefit from the quantum computer and how to best formulate those problems to get the most out of the quantum computer.”

D-Wave Launch will thus make the company’s application experts and a set of handpicked partner companies available to its customers. Launch aims to help anyone understand how to best leverage D-Wave’s quantum systems to support their business. Fill out a form on D-Wave’s website and you will be triaged to determine who might be best able to offer guidance.

“In order to actually do anything with the quantum processor, you do need to become a Leap customer,” Baratz said. “But you don’t have to first become a Leap customer. We’re perfectly happy to engage with you to help you understand the benefits of the quantum computer and how to use it.”

D-Wave will make available “about 10” of its own employees as part of Launch, plus partners.


Big Data – VentureBeat


Monitoring the Power Platform: Azure DevOps – Orchestrating Deployments and Automating Release Notes

August 31, 2020   Microsoft Dynamics CRM

Summary

 

DevOps has become more and more ingrained into our Power Platform project lifecycle: work item tracking and feedback tools for teamwork, continuous integration and delivery for code changes and solution deployments, and automated testing for assurance, compliance and governance considerations. Microsoft’s tool, Azure DevOps, provides native capabilities to plan, work, collaborate and deliver. Each step along our Power Platform DevOps journey can be tracked and monitored, which is the primary objective of this article.

In this article, we will focus on integrating Azure DevOps with Microsoft Teams to help coordinate and collaborate during a deployment. We will explore the various bots and how to set them up. From there we will walk through a sample scenario involving multiple teams working together. Finally, we will look at automating release notes using web hooks and an Azure Function.

Sources

 

Sources of Azure DevOps events that impact our delivery can come from virtually any area of the platform, including work items, pipelines, source control, testing and artifact delivery. For each of these events, such as completed work items, we can set up visualizations such as charts based on defined queries. Service hooks and notification subscriptions can be configured to allow real-time reporting of events to external parties and systems, allowing us to stay in a state of continuous communication and collaboration.

AUTHOR NOTE: Click on each image to enlarge for detail.

[Image]

Microsoft Teams, Continuous Collaboration and Integration

 

The Azure DevOps bots for Microsoft Teams have quickly grown into one of my favorite features. For instance, Azure DevOps dashboards and kanban boards can be added to channels for visualizations of progress, as shown below.

[Image]

Multiple Azure DevOps bots can be configured to deliver messages to and from Microsoft Teams to allow for continuous collaboration across multiple teams and channels. These bots can work with Azure Pipelines, work items and code pull requests.

[Image]

[Images: the Work Items, Code and Pipelines bot cards]

For monitoring and orchestrating deployments across our various teams, the Azure Pipelines bot is essential. Let’s begin by setting up subscriptions to monitor a release pipeline.

NOTE: The rest of this document will be using a release pipeline as an example, but this will also work with multi-stage build pipelines that utilize environments.

Configuring the Azure Pipelines Bot in Microsoft Teams

 

Use the “subscriptions” keyword with the Azure Pipelines bot to review and modify existing subscriptions and add new ones.
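For reference, these commands are issued by at-mentioning the bot in a channel. A minimal, hedged example follows; check the bot’s built-in help for the current syntax and the exact pipeline URL to paste:

@azure pipelines subscribe [release pipeline URL copied from the browser]
@azure pipelines subscriptions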

[Image]

In the above example, we are subscribing to any changes in stages or approvals for a specific release pipeline. It’s recommended to filter to a specific pipeline to reduce clutter in our Teams messaging. The Azure Pipelines bot, using actions described in the article “Azure DevOps – Notifications and Service Hooks“, can be further filtered by build statuses. This is helpful to isolate the messages delivered to a specific Teams channel.

[Image]

Once configured, as soon as our pipeline begins to run, Microsoft Teams will begin to receive messages. Below is an example showing the deployment of a specific release including stages and approval requests. What I find nice about this is that Microsoft Teams works on both my mobile devices and even Linux based operating systems, allowing any team on any workload to utilize this approach.

[Image]

I also want to point out that Azure DevOps can natively integrate with third-party tools such as Slack (similar to the Teams bots), ServiceNow and Jenkins.

Release Pipelines

 

Quality Deployments

 

Deployments within a release pipeline allow for numerous ways to integrate monitoring into Azure DevOps processes. Each deployment includes pre- and post-conditions, which can be leveraged to send events and metrics. For instance, the Azure Function gate can be used to invoke a microservice that writes to Azure Application Insights, creates ServiceNow tickets or even publishes Kafka events. The possibilities are endless; imagine sending messages back to the Power Platform for each stage of a deployment!
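To make that concrete, here is a minimal, hypothetical sketch of an HTTP-triggered Azure Function that a pre- or post-deployment condition could invoke: it records a custom event in Application Insights and returns a small JSON payload that the gate’s success criteria can evaluate. The function name, header names and event names are placeholders, and the telemetry configuration is assumed to come from app settings.

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ReleaseStageGate
{
    // Assumes the Application Insights instrumentation key/connection string is available to the default configuration.
    private static readonly TelemetryClient Telemetry = new TelemetryClient(TelemetryConfiguration.CreateDefault());

    [FunctionName("ReleaseStageGate")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // The pipeline can pass release details as headers (header names here are placeholders).
        string release = req.Headers["x-release-name"];
        string stage = req.Headers["x-stage-name"];

        Telemetry.TrackEvent($"ReleaseStage:{stage}:{release}");
        log.LogInformation("Gate invoked for release {Release}, stage {Stage}", release, stage);

        // The gate's success criteria can evaluate this payload, e.g. eq(root['status'], 'ok').
        return new OkObjectResult(new { status = "ok" });
    }
}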

Approvals

 

Pre and Post approvals can be added to each job in the release pipeline. Adding these can assist during a complex deployment requiring coordination between multiple teams dispersed geographically. Shown below is a hypothetical setup of multiple teams each with specific deliverables as part of a release.

[Image]

In this scenario, a core solution needs to be deployed and installed before dependent features can begin. When any step in the delivery process begins, the originating team needs to be notified in case any issues come up.

Using approvals allows the lead of the specific feature team to align the resources and communicate to the broader team that the process can move forward. The full example can be found below.

[Image]

Here is an example of an approval within Microsoft Teams, notifying the lead of the core solution team that the import process is ready. The approval request shows the build artifacts (e.g. solutions, code files, etc), the branch and pipeline information.

[Image]

Deployment Gates

 

At the heart of a gated deployment approach is the ability to search for inconsistencies or negative signals to minimize unwanted impact further in the process. These gates, which can be set to run before or after a deployment job, allow us to query for potential issues and alerts. They also could be used to notify or perform an operation on an external system.

[Image]

Queries and Alerts

 

Deployment gates provide the ability to run queries on work items within your Azure DevOps project. For instance, this allows release coordinators and deployment managers to check for bugs reported from automated testing using RSAT for Dynamics 365 F&O or EasyRepro for Dynamics 365 CE. These queries are created within the Work Items area of Azure DevOps. From there, they are referenced within the pipeline, and upper and lower thresholds can be set based on the data returned. If these thresholds are crossed, the gate condition is not successful and the process will halt until corrections are made.

External Integrations

 

As mentioned above, Azure Functions are natively integrated within deployment gates for release pipelines. They can be used as both pre-conditions and post-conditions to report to or integrate with external systems.

Deployment gates can also invoke REST API endpoints. This could be used within the Power Platform to query the CDS API or run Power Automate flows. An example could be to query the Common Data Service for running asynchronous jobs, to create activities within a Dynamics 365 environment, or to perform admin actions such as enabling Admin mode. Another could be to use the robust approval process built into Power Automate for pre and post approvals outside of the Azure DevOps licensed user base.
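As a rough sketch of the first example, the hypothetical C# helper below queries the Common Data Service (Dataverse) Web API for asynchronous jobs that are still in progress, the kind of check a gate-invoked function or REST call could perform before letting a deployment proceed. The entity set and status code shown are assumptions to verify against your environment, and acquiring the bearer token is out of scope here.

// Hypothetical helper: lists in-progress asyncoperation records so a deployment gate
// can decide whether it is safe to proceed. Assumes a valid access token is supplied.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class AsyncJobCheck
{
    public static async Task<string> GetInProgressAsyncJobsAsync(string orgUrl, string accessToken)
    {
        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Add("OData-MaxVersion", "4.0");
        client.DefaultRequestHeaders.Add("OData-Version", "4.0");

        // statuscode 20 = "In Progress" for asyncoperation records (verify against your environment's metadata).
        var response = await client.GetAsync(
            "/api/data/v9.1/asyncoperations?$select=name,statuscode&$filter=statuscode eq 20&$count=true");
        response.EnsureSuccessStatusCode();

        // The @odata.count value in the returned JSON can feed the gate's success criteria.
        return await response.Content.ReadAsStringAsync();
    }
}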

Using Build Pipelines or Release Pipelines

 

In the previous section, I described how to introduce quality gates to a release, securing each stage of the pipeline. Release pipelines are useful to help control and coordinate deployments. That said, environments and build pipelines allow for the use of YAML templates, which are flexible across both Azure DevOps and GitHub and allow teams to treat pipelines like other source code.

Environments

 

Environments in Azure DevOps allow for targeted deployment of artifacts to a collection of resources. In the case of the Power Platform, this can be thought of as a release to a Power Platform environment. The use of pipeline environments is optional, unless you begin working with release pipelines, which do require environments. Two of the main advantages of environments are deployment history and security and permissions.

Environment Security Checks

 

Environment security checks, as mentioned above, can provide quality gates similar to the current capabilities of Release Pipelines. Below is an example of the current options compared to Release Pre and Post Deployment Quality Gates.

[Image]

Here is an example of linking to a template in GitHub.

[Image]

Compare this to the Release Pipeline Pre or Post Deployment Quality Gates.

[Image]

Scenario: Orchestrating a Release

 

[Image: Orchestrate Release with Teams (full)]

In the above example, we have a multi-stage release pipeline that encompasses multiple teams from development to support to testing. The pipeline relies on multiple artifacts and code branches for importing and testing.

In this example, we have a core solution containing Dynamics 365 entity changes that are needed by integrations. The core team will need to lead the deployment and testing, then notify the subsequent teams that everything has passed and they can move on.

Below is an example of coordination between the deployment team and the Core team lead.

[Image: Orchestrate Release with Teams]

Below is an image showing the entire release deployment with stages completed.

[Image]

Automating Release Notes

 

Azure Application Insights Release Annotations

 

The Azure Application Insights Release Annotations task is a marketplace extension from Microsoft that allows a release pipeline to signal an event, such as the start of the pipeline, the end, or any other event we are interested in. From there we can use native functionality of Azure Application Insights to stream metrics and logs.

Using an Azure Function with Web Hooks

 

Service hooks are a great way of staying informed of events happening within Azure DevOps, freeing you up to focus on other things. Examples include pushing notifications to your teams’ mobile devices, notifying team members on Microsoft Teams or even invoking Microsoft Power Automate flows.

[Image]

The sample code for generating Azure DevOps release notes using an Azure Function can be found here.
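For orientation, the rough shape of such a function looks like the hypothetical sketch below: an HTTP-triggered Azure Function that a service hook calls when a deployment completes. The top-level eventType and resource properties are standard for service hook payloads, but the deeper JSON paths are illustrative and should be verified against your own payloads; a real implementation would go on to query linked work items and publish the notes.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class ReleaseNotesWebHook
{
    [FunctionName("ReleaseNotesWebHook")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        // Parse the service hook payload posted by Azure DevOps.
        var payload = JObject.Parse(await new StreamReader(req.Body).ReadToEndAsync());

        var eventType = (string)payload["eventType"];
        var releaseName = (string)payload.SelectToken("resource.release.name");     // illustrative path
        var environment = (string)payload.SelectToken("resource.environment.name"); // illustrative path

        // A real implementation would call the Azure DevOps REST API for linked work items
        // and write the generated notes to a wiki page, storage account or work item.
        log.LogInformation("Event {EventType}: release {Release} deployed to {Environment}",
            eventType, releaseName, environment);

        return new OkResult();
    }
}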

Next Steps

 

In this article we have worked with Azure DevOps and Microsoft Teams to show a scenario for collaborating on a deployment. Using the SDK or REST API, Azure DevOps can be explored in detail, allowing us to reimagine how we consume and work with the service. This will help with automating release notes and inviting feedback from stakeholders.

Previously we looked at setting up notifications and web hooks to popular services. We then reviewed the Azure DevOps REST API to better understand build pipelines and environments.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index


Dynamics 365 Customer Engagement in the Field


The Platform of No

August 24, 2020   Humor

The Republican Party has announced that it will have no platform. The announcement looks like something Donald Trump would write. It is one page of bullet points (but without bullets). It tries to blame everything on the media and “the failed policies of the Obama-Biden administration” (so much for being a “positive” convention).

Bottom line: the RNC’s official platform is to “strongly” and “enthusiastically” support Donald Trump and everything he does.

The GOP is no longer a political party. Any pretense of actually governing is gone. They have no ideals or beliefs. They are now merely a cult of personality for Trump.

 If you liked this, you might also like these related posts:
  1. Lurking
  2. What does the GOP stand for?
  3. No Gift
  4. The Cult of Trump
  5. Rumor Control


Political Irony


Monitoring the Power Platform: Model Driven Apps – Monitor Tool Part 1: Messages and Scenarios

August 16, 2020   Microsoft Dynamics CRM

Summary

 

Monitoring Dynamics 365 or Model Driven Applications is not a new concept. Understanding where services are failing, where users are running into errors, and where forms and business processes could be tuned for performance are key drivers for most, if not all, businesses, from small companies to enterprises. Luckily, the Dynamics 365 platform provides many tools to help audit and monitor business and operational events.

This article will cover user events and where they are sourced from. From there, we will dive into the Monitor tool and look at individual messages within. We will work with a few sample scenarios and see what we can gain from markers and messages within the Monitor tool.

Collecting User Events

 

Before we discuss techniques for capturing events in Dynamics 365, let’s examine some meaningful events. From the client perspective this may include performance counters and metrics, user click events and navigation. Other data points include geolocations and preferences of users. Luckily, client events are easier to capture, and many tools, from browser-based developer tools to applications such as Fiddler, are readily available. Some features of the platform allow for collecting markers, while other events of interest we will have to supplement with custom delivery mechanisms.

On the server side, external integrations and execution context contain identifiers and response codes that may require additional validation. For sandboxed plug-ins and custom workflow activities, we are somewhat limited in what tools we can leverage.

Upcoming articles will detail how to collect and push events of interest to a central area for analytics.

NOTE: The rest of this article will cover collecting and analyzing messages focused on the client. That said, server side events play a major role and can impact the client experience. I’ll address server side events in another article pertaining to Azure Application Insights and Model Driven Apps. In the meantime, check out this GitHub repo that includes experimental Plug-In code.

AUTHOR NOTE: Click each image to enlarge for detail

[Image]

The Monitor Tool

 

The Monitor tool can be launched from the Power Apps Maker Portal. Once launched, the Play Model Driven App button can be pressed to begin a session attached to the tool.

[Image]

The Monitor tool can also be started by adding “&monitor=true” to the URL of your Model Driven Application.

After consenting to start a session, the Monitor tool will rapidly light up with various messages. Similar to the article “Canvas Driven Apps – The Monitoring Tool“, each row can be further drilled into for investigation.

[Image]

Jesse Parsons’ article on the Monitor tool, titled Monitor now supports model-driven apps, provides a thorough deep dive including sample scenarios.

I highly suggest reviewing and keeping it close by for reference.

Key Performance Indicators

 

Key Performance Indicators represent major lifecycle events within a particular user action, such as loading a form. Consider the image below.

[Image]

By sorting the records on the “KPI” category, these events begin to emerge. The image below shows the major lifecycle events or KPIs for a standard form load within Dynamics 365. Beginning with PageNavigationStart and ending with RenderedEditReady, these events represent the completion of a form load.

[Image]

Scenario: Determining a Form’s Load Impact

 

Consider the scenario of a user logging into the system and opening a lead form for the first time. When performing this action, the form and data have not had a chance to be cached or stored locally, which results in all items needing to be downloaded. This is sometimes referred to as a cold load. Reviewing the timeline event “FullLoad”, we can determine what type of load the form rendered as.

[Image]

Now, once captured, the user opens the fly out window to choose another lead record but using the same form. Again using the “FullLoad” KPI timeline event we can see the LoadType is now Two.

[Image]

Finally, imagine the user needs to navigate back to the original lead record opened on the same form. We can see the LoadType is now Three. Comparing this to the LoadType Zero image above, the entityId is the same.

Here is a sample scenario in full showing the differences in loading new and existing records and how changing a form can impact network requests to the server.

[Animation: FullLoad LoadType and metadata network request example]

Attribution

 

On certain Key Performance Indicators a property is included called “Attribution”, which represents specific events within a user action. This includes the commands or actions executed within a KPI. For example, during a form load lifecycle, controls are rendered, ribbon rules are evaluated, onload event handlers are executed, etc. The Attribution property will specify and group, in chronological order, which events happened. An added bonus is that the duration is also shown for each event, along with whether any synchronous calls were made. Consider the image below.

Scenario: Locating dynamically added form changes and events

 

[Image]

The image above shows the Attribution data for the FullLoad KPI. In this image we see three main groups of events: CustomControl (form controls), RuleEvaluation (ribbon rules) and onload (form event handlers). What’s interesting here is that each of these is grouped, but the solution they are part of and even the solution layering are also shown. The RuleEvaluation and onload groups above both show an unmanaged or “Active” layer that contained customizations.

Compare that image with the one below.

[Image]

A scenario came up during an investigation into an increased duration of a form save. To begin, we went through the user events as normal with the Monitor tool running.

[Image]

Upon review, you can see additional events occurred: tabstatechanged and onsave. The onsave was expected due to the registered event handler on the form. However, the tabstatechanged was not; this was found to be due to a recent code addition that triggered the setDisplayState of the tab, shown below.

// Recently added logic: expanding a named tab during save, which fires the tabstatechanged event
if (control.getName() == tabName) {
	control.setDisplayState("expanded");
}

By reviewing the Attribution property we were able to identify what caused the increase of 5 seconds.

[Image]

Zone Activity and Other Data Points

 

Within Key Performance Indicators are other data points that prove useful when debugging performance-related issues. Latency and Throughput are shown, as well as timings for network-related calls and custom script executions. Within the ZoneActivity property we see events grouped by web service calls, browser resource timings and performance observer events. The CustomScriptTime shows the duration of all of the registered event handlers that fired during this particular Key Performance Indicator.

[Image]

Performance Messages

 

Performance categorized messages detail potential performance issues that a user may run into. At the time of this writing, I’ve uncovered only synchronous XHR calls being sent but I anticipate growth here.

Scenario: Locating and Evaluating Synchronous Resource Timings

 

Requests from a Model Driven Application represent outgoing calls from the client to another source. These calls can occur either synchronously, meaning the thread executing the call waits for the response, or asynchronously, meaning the thread continues and listens for the response. It’s preferred to eliminate all synchronous calls to reduce any potential disruption to a user’s experience within the application.

The Monitor tool helps by identifying these requests and calling them out. These call outs can be found in the “Performance” category as shown below.

[Image]

Examining the performance entry, we can see the “dataSource” property shows the XHR URL. However, it doesn’t show the source of the call which is needed to better understand how and why the request was made. For that, we need to find and examine KPIs such as FullLoad or SaveForm.

[Image]

Here is a gif showing how to use the Monitor tool, coupled with the Browser Developer Tools to locate the line of code that needs to be updated.

[Animation: FullLoad performance synchronous XHR example]

Using these messages, along with outputs from Power Apps Checker, we can begin to uncover gaps in code analysis. In the next article, I’ll cover in depth an approach to help identify and remediate these gaps.

Next Steps

 

This article describes where user and platform events may originate from and how they can be monitored. Gaining insights and understanding into the SaaS version of Dynamics 365 allows us to uncover the black box and find answers to some of our questions. Think about how the Monitor tool can be used to find out where API calls may have started and, coupled with other articles in this series, how we can correlate data to provide a true end-to-end monitoring solution. The next article in this series will cover how we can extract and analyze the sessions further.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index


Dynamics 365 Customer Engagement in the Field


Monitoring the Power Platform: Model Driven Apps – Monitor Tool Part 2: Session Consumption and Analytics

August 15, 2020   Microsoft Dynamics CRM

Summary

 

Monitoring Dynamics 365 or Model Driven Applications is not a new concept. Understanding where services are failing, where users are running into errors, and where forms and business processes could be tuned for performance are key drivers for most, if not all, businesses, from small companies to enterprises. Luckily, the Dynamics 365 platform provides many tools to help audit and monitor business and operational events.

This article will cover collecting, querying and analyzing user interface events, specifically from the recently announced Monitor Tool for Model Driven Apps. The previous article covered message data points and how to interpret them. In this go-round, we will have a little fun exploring ways to utilize the output sessions. We will discuss how to build robust work items in Azure DevOps with Monitor output. We’ll look at consuming and storing outputs for visualizations and analytics with Kusto queries. Finally, samples will be provided to parse and load session messages into Azure Application Insights and Azure Log Analytics.

The Monitor Tool

 

The Monitor Tool allows users and team members to collect messages and work together in debugging sessions. To begin, the Monitor Tool can be launched from the Power Apps Maker Portal. Once launched, the Play Model Driven App button can be pressed to begin a session attached to the tool.

AUTHOR’S NOTE: Click on each image to enlarge for more detail

[Image]

The Monitor Tool can also be started by adding “&monitor=true” to the URL of your Model Driven Application.

After consenting or allowing to start a session, the Monitor Tool will light up rapidly with various messages. Similar to the “Canvas Driven Apps – The Monitoring Tool” article, each row can be further drilled into for investigation.

[Image]

Jesse Parsons’ article on the Monitor Tool, titled ‘Monitor now supports model-driven apps‘, provides a thorough deep dive including sample scenarios. I highly suggest reviewing it and keeping it close by for reference.

Thoughts on Canvas Apps

 

The Monitor tool works with Power Apps Canvas Driven Apps as shown in the article “Canvas Driven Apps – The Monitoring Tool“. While this article is focused on Model Driven Apps, remember these techniques can also be utilized to serve Canvas Apps as well.

Consuming Monitor Sessions

 

Each time the Monitor tool is opened, a new session is created. Within each session are events that describe actions taken within the session, as well as other helpful messages. Storing these sessions allows support teams to better understand errors and issues that arise during testing and production workloads. The previous article, “Monitor Tool Part 1: Messages and Scenarios“, covers scenarios that support users can use to better understand the data points delivered in the Monitor Tool.

The Monitor tool can also help analysts who want to learn more about the platform, for instance, user tendencies such as how long they spent on a page and which controls they interacted with in Canvas Driven Apps. For testing, the tool can help with non-functional test strategies like A/B testing. Analyzing performance messages can point to potential code coverage gaps or advise on user experience impact. Network calls can be scrutinized to determine if queries can be optimized or web resources minified. The Monitor tool, in my opinion, really can open up a new view on how the platform is consumed and how users interact with it.

Attaching to Azure DevOps Work Items

 

The Monitor Tool download artifacts work nicely with Azure DevOps Work Items. They can be attached to Bugs, Tasks and even Test Cases when performing exploratory or other types of tests.

[Image]

Working with test cases within Azure DevOps, Analysts craft work items with use cases and expected outcomes to deliver to Makers and Developers. Specifically with Test Cases, Quality Analysts can leverage the Monitor Tool in conjunction with the Test and Feedback Browser Extension. This allows for robust test cases complete with steps, screenshots, client information and the Monitor Tool output attached. Consider the gif below showing an example of using both the browser extension and the Monitor tool output.

[Animation: attaching Monitor tool output to an Azure DevOps bug]

In that example, we see an analyst has found a performance issue with a Dynamics 365 form. The analyst logged a new bug, included annotations and screenshots and the Monitor tool output. A developer can be assigned, pick this up and begin working on the bug. By having the Monitor tool output the developer can now see each call made and review the Attributions within the respective KPI. For more information, refer to the Attribution section within Part 1.

[Image]

Storing Monitoring Sessions

 

The Monitor tool output comes in two flavors: CSV and JSON. Both make for lightweight storage and are fairly easy to parse, as shown later. These files can be attached to emails or stored in a shared location like a network drive.

Power BI and CSV Files

 

The csv files downloaded from the Monitor tool can be added to Azure Blob Storage or stored locally and displayed in a Power BI dashboard. This allows analysts and support teams to drill down into sessions to gain further insights. The csv files work both locally with Power BI Desktop and online. The below image shows a sample taken from a Canvas App Monitor session. Additional information and samples can be found in the “Canvas Driven Apps – The Monitoring Tool” article.

[Image]

Experimenting with Extraction for Analytics

 

Storing outputs in Azure Blob Storage

 

During this writing and the writing of the Monitor Tool for Canvas Apps article, I began to collect outputs from both and store them within Azure Blob Storage containers. There are multiple reasons why I chose to utilize Azure Blob Storage, mainly cost but also interoperability with other services such as Power BI, Azure Data Lake and Azure Event Grid.

Azure Blob Storage also integrates very well with Azure Logic Apps, Azure Functions and Power Automate flows. Each of these includes a triggering mechanism for a new blob added to a container, working as a sort of drop folder. This made the choice to use Azure Blob Storage easy for me, but I will also point out that Power Automate flows can also be triggered from OneDrive or SharePoint. This allows Makers to stay within the Microsoft 365 ecosphere and avoid spinning up multiple Azure services if desired.

Extracting to Log Stores

 

Extracting the messages within the Monitor tool to a log store allows analysts and support teams to parse and query the sessions. How we want to store these messages will determine which services we leverage.

Choosing Azure Application Insights

 

If we want distributed transaction tracing, I’d suggest Azure Application Insights. Application Insights allows for pushing messages to specialized tables that feed dashboards and features native to the service, such as end-to-end transactions and exception parsing.

Network messages can be stored in the requests or dependencies tables, which are designed, along with page views, to visualize a typical web application’s interactions. Fetch network messages, representing calls to an API, fit nicely into the requests table, as shown below:

[Image]

Dependencies on the other hand can represent dependent web resources.

[Image]

Using Azure Function to serve data

 

Azure Application Insights works well with microservices built using Azure Functions. A benefit of Azure Functions is the ability to have a function fire on creation of a blob within a container. For more information and a helpful quick start to working with blob-triggered Azure Functions, check out this reference. Below is the method signature of a sample Azure Function:

// Fires whenever a new Monitor tool output file lands in the "powerapps-monitortool-outputs" container
[FunctionName("BlobTriggeredMonitorToolToApplicationInsights")]
public void Run([BlobTrigger("powerapps-monitortool-outputs/{name}", Connection = "")]Stream myBlob, string name, ILogger log)

In the sample provided you’ll see that the function takes the JSON payload, parses it and determines how to add it to Azure Application Insights. Depending on the messageCategory property, it will funnel messages to custom events, requests or dependencies.
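For readers who want to see the shape of such a function, here is a minimal sketch of one possible implementation. It is not the published sample linked in the Sample Code section below; it assumes Newtonsoft.Json for parsing, constructor injection of TelemetryClient, and a JSON export that is an array of messages. The “Performance” value appears in the Monitor payload, while the event names and fallback handling are illustrative choices.

using System.Collections.Generic;
using System.IO;
using Microsoft.ApplicationInsights;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public class MonitorToolExtractor
{
    private readonly TelemetryClient _telemetryClient;

    // TelemetryClient can be constructor-injected when Application Insights is configured for the Function App.
    public MonitorToolExtractor(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    [FunctionName("BlobTriggeredMonitorToolToApplicationInsights")]
    public void Run([BlobTrigger("powerapps-monitortool-outputs/{name}", Connection = "")] Stream myBlob, string name, ILogger log)
    {
        log.LogInformation($"Processing Monitor tool output file: {name}");

        using var reader = new StreamReader(myBlob);
        var messages = JArray.Parse(reader.ReadToEnd()); // assumes the JSON export is an array of messages

        foreach (var message in messages)
        {
            var category = (string)message["messageCategory"];
            var properties = new Dictionary<string, string>
            {
                ["messageCategory"] = category,
                ["fileName"] = name
            };

            switch (category)
            {
                case "Performance":
                    _telemetryClient.TrackEvent("MonitorTool.Performance", properties);
                    break;
                default:
                    // Network messages could instead be mapped to TrackRequest/TrackDependency, as
                    // shown earlier, once duration and result codes are pulled from the payload.
                    _telemetryClient.TrackEvent("MonitorTool.Message", properties);
                    break;
            }
        }
    }
}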

As always, review the Telemetry Client for ideas and techniques to enrich messages sent to Azure Application Insights. Also, if desired, review how to sample messages to reduce noise and keep costs down.

Choosing Azure Log Analytics

 

Azure Log Analytics allows for custom tables, which provide the greatest flexibility. The Data Collector API has a native connector for Power Automate that allows Makers to quickly deliver Monitor messages with a no-code or low-code solution. Power Automate and Azure Logic Apps both offer triggers that fire when a blob is created, providing flexibility in which service to choose.
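For teams that prefer code over the connector, the Data Collector API can also be called directly. Below is a minimal C# sketch of that call, following the signature scheme described in the Data Collector API documentation; the workspace ID, shared key and custom log name are placeholders you would supply.

using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

public static class LogAnalyticsClient
{
    // Placeholders: substitute your workspace ID, primary key and custom log name.
    private const string WorkspaceId = "<workspace-id>";
    private const string SharedKey = "<primary-key>";
    private const string LogName = "MonitorToolSessions"; // surfaces as MonitorToolSessions_CL

    private static readonly HttpClient Http = new HttpClient();

    public static async Task SendAsync(string jsonPayload)
    {
        var date = DateTime.UtcNow.ToString("r"); // RFC1123 date required by the API
        var contentBytes = Encoding.UTF8.GetBytes(jsonPayload);

        // Build the HMAC-SHA256 signature described in the Data Collector API documentation.
        var stringToSign = $"POST\n{contentBytes.Length}\napplication/json\nx-ms-date:{date}\n/api/logs";
        using var hmac = new HMACSHA256(Convert.FromBase64String(SharedKey));
        var hash = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        var signature = $"SharedKey {WorkspaceId}:{hash}";

        var request = new HttpRequestMessage(HttpMethod.Post,
            $"https://{WorkspaceId}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01");
        request.Headers.TryAddWithoutValidation("Authorization", signature);
        request.Headers.Add("Log-Type", LogName);
        request.Headers.Add("x-ms-date", date);
        request.Content = new ByteArrayContent(contentBytes);
        request.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/json");

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}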

Using Power Automate to serve data

 

To work with Power Automate, begin by creating a new flow. Set the trigger to the Azure Blob Storage trigger action “When a blob is added or modified“. If a connection hasn’t been established, create one. Once the connection is created, locate the blob container to monitor. This container will be our drop folder.

The trigger is only designed to tell us a new blob is available, so the next step is to get the blob content. Using the FileLocator property, we can get the serialized messages and session information and deliver them to Log Analytics.

Within Power Automate, search for the “Send Data” action. Set the JSON Request body field to be the File Content value from the “Get blob content” action.

[Image: the “Send Data” action with the JSON Request body set to the blob File Content]

The advantage here is that with three actions and no code written, I am able to listen to a drop folder for output files and send them to Azure Log Analytics. The native JSON serialization option from the Monitor tool really serves us well here, allowing a seamless insertion into our custom table.

[Image: Monitor tool session data inserted into the custom Log Analytics table]

Ideally we would expand the Power Automate flow to parse the JSON and iterate through messages to allow for individual entries into the table.

[Image: the flow expanded to parse the JSON and loop through individual messages]

Just remember that the content may be encoded as “octet-stream” and will need to be converted.

Sample Kusto Queries

 

Below are sample Kusto queries designed for Azure Application Insights.

General Messages

//Review Performance Messages
customEvents 
| extend cd=parse_json(customDimensions)
| where cd.messageCategory == "Performance"
| project session_Id, name, cd.dataSource

Browser Requests

//Request Method, ResultCode, Duration and Sync
requests 
| extend cd=parse_json(customDimensions)
| extend data=parse_json(tostring(cd.data))
| project session_Id, name, data.method, resultCode, data.name, data.duration, data.sync, cd.fileName

//Request Method, ResultCode, Duration and Resource Timings
requests 
| extend cd=parse_json(customDimensions)
| extend data=parse_json(tostring(cd.data))
| project session_Id, name, data.method, resultCode, data.name, data.duration,
data.startTime, 
data.fetchStart,
data.domainLookupStart,
data.connectStart,
data.requestStart,
data.responseStart,
data.responseEnd

Review the documentation on Resource Timings located here to better understand what these markers are derived from.

[Image: resource timing markers from the browser performance timeline]

Key Performance Indicators

pageViews 
| extend cd=parse_json(customDimensions)
| extend cm=parse_json(customMeasurements)
| extend data=parse_json(tostring(cd.data))
| extend attribution=parse_json(tostring(data.Attribution))
| where name=="FullLoad"
| order by tostring(data.FirstInteractionTime), toint(cm.duration)
| project session_Id, name, data.FirstInteractionTime,cm.duration, attribution

Sample Code

 

Azure Function and Azure Application Insights – Monitor Tool Extractor

Power Automate and Azure Log Analytics

Optional Azure Application Insights Custom Connector

Next Steps

 

In this article we have covered how to work with the Monitor tool output files. Viewing them in Power BI dashboards, attaching them to DevOps work items and storing them in Azure-backed log stores are all possibilities. Sample code and Kusto queries have also been included to help you get started.

This article showcases use cases and strategies for working with the Monitor tool but really only represents the tip of the iceberg. Continue collecting, examining and churning the output for deep insight into user and platform trends.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index


Dynamics 365 Customer Engagement in the Field


Quantexa raises $64.7 million for AI platform that extracts insights from big data

July 23, 2020   Big Data


Big data analytics startup Quantexa today closed a $64.7 million financing round, which a spokesperson told VentureBeat will be put toward accelerating the company’s product roadmap and expanding in Europe, North America, and the Asia Pacific region. It comes after a year in which Quantexa landed customers like HSBC, Standard Chartered Bank, and OFX and expanded the availability of its platform to more than 70 countries.

Enterprises have multiple data buckets to wrangle (upwards of 93% say they’re storing data in more than one place), and some of those buckets inevitably become underused, partially used, or forgotten. A Forrester survey found that between 60% and 73% of all data within corporations is never analyzed for insights or larger trends, while a separate Veritas report found that 52% of information stored by organizations is of unknown value. The opportunity cost of this unused data is substantial, with the Veritas report pegging it at a cumulative $3.3 trillion by the year 2020 if the current trend holds.

Quantexa uses AI tools to uncover risk and opportunities by providing a view of data in a single place, solving challenges across financial crime, customer intelligence, credit risk, and fraud. The company’s contextual decision intelligence platform connects tens of billions of internal and external data points to create a single overview, enriched with information about relationships between people, organizations, and places.

Quantexa’s entity resolution technology connects internal and external data sources without unique match keys, in real time or batch, and generates a single enterprise-wide portal of profiles even with poor data quality. Network generation then links resolved entities into a network of relevant, real-world connections, producing different networks for different use cases that reveal the context of how people, organizations, places, and transactions relate to each other.

On the analytics side, Quantexa leverages resolved entities and network context to build features, scenarios, and models, increasing accuracy. It helpfully manages dependencies between scores, alerts, and data and develops analytics that can be run dynamically, with interfaces supporting a range of languages and libraries. The platform also delivers visualizations that enable thousands of users to search, graph, and explore content while investigating and thematically analyzing it in a single dashboard and reviewing flags within a context, highlighting points of interest.

For one insurance customer, Quantexa says, it helped operationalize data to uncover the connections between events, such as claims and policy renewals, as well as people, places, and organizations. It’s scaling to large data records and using context to provide actionable information for investigators and underwriters, providing teams with a view of all customer data while offering risk assessment services and self-service options.

Quantexa says the funding round announced this week, a series C that brings its total raised to date to $90 million, was led by Evolution Equity Partners with “major participation” from investors Dawn Capital, AlbionVC, HSBC, and ABN AMRO Ventures. As a part of it, Evolution Equity Partners founding and managing partner Richard Seewald joined the board of directors.

Founded in 2016, Quantexa now has over 250 employees across offices in New York, Boston, Belgium, Toronto, Singapore, Melbourne, and Sydney. It’s headquartered in London.



Big Data – VentureBeat


Marketing Automation Platform Implementation Guide for Microsoft Dynamics

July 22, 2020   CRM News and Info

There are lots of questions to answer as you consider what sort of marketing automation platform to connect to your CRM system, but one question that’s often overlooked is how you will implement the software. Will the process be painful or easy? And what do you need to know about that process before you sign a contract?

As you move into the final buying stage, here are a few questions that can ultimately guide your decision-making process and ensure you buy a marketing automation platform that can achieve your goals:

How Does the Marketing Automation Platform Connect to Microsoft Dynamics?

Some marketing automation platforms rely entirely on Microsoft Dynamics to function, meaning that your marketing users will likely need to log into Microsoft Dynamics in order to use their marketing automation platform. From a security standpoint, IT teams tend to like this approach because they can easily control who does and doesn’t have access to the marketing automation platform. But it also puts your marketing team at risk of downtime (from Dynamics), can significantly increase your Dynamics license costs, and means that your marketing team could have a difficult time separating Marketing Qualified Leads from Sales Qualified Leads in Dynamics, which could lead to conflict with the sales team.

On the other side of that coin, some marketing automation platforms offer just a surface-level integration that connects only standard fields (and not any custom Dynamics fields that may be important to your business). For integrations like this, you may have to factor in the cost of a third-party tool that sends data back and forth, or of custom work on the Dynamics side, to make the integration work at the level you’re looking for.

There are marketing automation platforms that sit somewhere in between as well. The emfluence Marketing Platform, for example, can be accessed via a browser or from within Microsoft Dynamics and allows for custom field integrations (as well as standard fields). You would also want to know whether it’s possible to integrate with Custom Entities, if you have them; not many standalone platforms can do this.

With any of these scenarios, another important question to ask is how the sales team will get Leads and/or Contacts into the right lists. Every platform will handle this differently, so be sure to document how the one you pick will do this so that you can ensure your sales and marketing teams are in alignment.

Who Needs to be Involved in the Marketing Automation Implementation?

It’s easy for workplaces to get stuck in silos, but integrating your marketing automation platform with your CRM system requires getting sales, IT, and marketing on the same page. Depending on your use cases, you may also want to include your HR and/or recruitment teams and your customer success teams. You’re looking to answer:

  • What skills are required to implement this software tool? Who has those skills internally, and who do we need to contract externally?
  • Who needs to be a user of the marketing automation platform? What level of access does each user need? How is access/permissions controlled in the marketing automation platform?
  • What use cases does the marketing team have mapped out? What data fields will they need to make these use cases happen? Do they need workflows or Flows or other development to facilitate automation?
  • Who’s in charge of training? The marketing automation software company? An IT provider, like a Microsoft Partner? Self-guided?
  • What’s in your existing marketing automation platform that needs to be migrated? Who will be in charge of migration?

Ongoing Maintenance

Once you have your marketing automation platform integrated to Dynamics, ask the following questions about how the software will be continuously maintained:

  • Who will be in charge of continuous updates?
  • Who will handle support requests?
  • How are feature requests handled?
  • Who will be alerted about product updates?
  • What if you need additional training or marketing help?
  • Who owns the relationship with the marketing automation provider?

The Full Migration Guide for Software to Software

After you document and outline the logistics of selecting the right marketing automation platform for Microsoft Dynamics for your company, be sure to check out our full migration guide for implementing a new marketing automation platform here.


CRM Software Blog | Dynamics 365
