Tag Archives: Application

Dynamics 365 CE On Premises: Application Pool Recycling

January 6, 2021   Microsoft Dynamics CRM

Dynamics 365 CE – IIS Application Pool Recycling

If you haven't changed the IIS / CRM application pool configuration, it is very likely set to recycle at a regular interval of 1,740 minutes (29 hours), the IIS application pool default. Suppose the last recycle ran at 5 AM, which seems fine because it is outside business hours. With the default interval, however, the next recycle will occur at 10 AM the following day, during business hours, and users may experience errors because their session state will be lost.

In this case it is better to schedule recycling at a specific time (or times) outside business hours, according to your organization's rules. This is especially important if your applications need to store session state. And if you are thinking about disabling IIS App Pool recycling altogether, don't: worker process isolation mode offers process recycling, in which IIS automatically refreshes web applications by restarting their worker processes. Process recycling keeps problematic applications running smoothly, and is an especially effective solution in cases where it is not possible to modify the application code.

This applies to any application pool, not only those related to Dynamics 365.
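As a minimal sketch, assuming a pool named CRMAppPool (substitute the application pool that actually serves your CRM web site), the built-in appcmd utility can clear the 29-hour interval and add a fixed recycle time outside business hours:

REM Clear the default 1,740-minute (29-hour) periodic restart interval
%windir%\system32\inetsrv\appcmd set apppool "CRMAppPool" /recycling.periodicRestart.time:00:00:00

REM Recycle at a fixed time (2 AM) instead
%windir%\system32\inetsrv\appcmd set apppool "CRMAppPool" /+recycling.periodicRestart.schedule.[value='02:00:00']

The same settings are also available in IIS Manager under the application pool's Recycling dialog.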

Considerations When Recycling Applications

When applications are recycled, it is possible for session state to be lost. During an overlapped recycle, the occurrence of multi-instancing is also a possibility.

Loss of session state: Many IIS applications depend on the ability to store state. IIS can cause state to be lost if it automatically shuts down a worker process that has timed out due to idle processing, or if it restarts a worker process during recycling.

Occurrence of multi-instancing: In multi-instancing, two or more instances of a process run simultaneously. Depending on how the application pool is configured, it is possible for multiple instances of a worker process to run, each possibly loading and running the same application code. The occurrence of an overlapped recycle is an example of multi-instancing, as is a Web garden in which two or more processes serve the application pool regardless of the recycling settings.

If your application cannot run in a multi-instance environment, you must configure only one worker process for an application pool (which is the default value), and disable the overlapped recycling feature if application pool recycling is being used.
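As with the schedule above, both of these settings can be applied from the command line; a sketch using the same CRMAppPool placeholder:

REM Ensure a single worker process (no Web garden)
%windir%\system32\inetsrv\appcmd set apppool "CRMAppPool" /processModel.maxProcesses:1

REM Disable overlapped recycling so two instances never run at once
%windir%\system32\inetsrv\appcmd set apppool "CRMAppPool" /recycling.disallowOverlappingRotation:true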

More information about IIS Process Recycling can be found here.

Walter Carlin – MBA, MCSE, MCSA, MCT, MCTS, MCPS, MBSS, MCITP, MS

Senior Customer Engineer – Dynamics 365 – Microsoft – Brazil

Dynamics 365 Customer Engagement in the Field

Microsoft Launches Lists, A New Smart Tracking Application — Formerly SharePoint Lists

November 23, 2020   Microsoft Dynamics CRM

Your CRM system gives you a better way to manage your external business interactions and relationships. Everything you need to know about your customer interactions is readily available at any time. While your external processes may be organized, what about your internal processes? If you’ve found that your internal business processes are unorganized and chaotic, Microsoft Lists, the latest information tracking tool from Microsoft, is the solution.

Lists is a new tool that helps you keep track of the information that's most valuable to your team with smart, flexible features. We'll review the tools available within Lists and how they can help you better organize your business processes.

What is Lists?

Lists is a tool that allows you to better track, organize, and manage the large amounts of data coming into your office each day. Using pre-made templates and customizable views you can track information all the way up the organization ladder, helping you to keep tasks and projects more organized. Additionally, Lists allows you to easily manage issues, routines, contacts, and inventory and notify staff members of any changing information or updates using alerts and rules.

Unlike Microsoft To Do, Lists allows users to do more with information and features smarter tools. Additionally, while Microsoft To Do is available to everyone, Lists is only accessible with a paid subscription to Microsoft 365.

What You Can Do Within Lists

Within Lists, you can do the following:

  • Use pre-existing templates to create lists on both mobile and web platforms.
  • Create unique views and rules to help teams stay connected.
  • Use native integration that can be embedded in a Microsoft Teams channel.
  • Create smart rules, filters, views, and choices.
  • Create and manage share links.
  • Integrate with Power Platform as well as Power Applications.
  • Access favorite tasks from anywhere.
  • Share personal lists throughout your organization.
  • Create alerts and reminders to keep teams on schedules.

While Lists is still relatively new, the application is powerful, smart, and flexible, helping you to create business solutions in a faster, more efficient way without a lot of coding.

Views Available Within Lists

Presently, Lists features four default views: calendar, gallery, grid, and list.

  • Calendar: For projects or tasks that have a tight schedule or deadline, you’ll want to use the calendar view. Everything from the project’s start date to its finish is displayed on a visual calendar, helping team members see when work is due and plan accordingly. This ensures that all work is moving quickly down the workflow and finished on time.
  • Gallery: If you're working on a list or project that features a lot of images, the gallery view helps you arrange information as cards in organized rows.
  • Grid: This is the primary view within Lists. Rows and columns can easily be changed or reconfigured depending on the specific task you’re working on. Grid view is best used if you need to quickly edit or update information.
  • List: The list view has a similar format to retired SharePoint lists; however, this view doesn’t feature point and click capabilities as of today.

Lists is designed to track information and tasks based on the unique needs of your team. To help you do so, each view can be further modified for better, more efficient tracking. Color coding is just one example. Say you have a specific approval process for tasks. You could use various colors to show where each task is in the approval process. Items that are currently being reviewed by stakeholders could be colored red and approved items could be colored green.

Color coded comments can also help keep your team organized. For example, each staff member could be assigned a specific color. Should a question or concern arise that they need to review, you can color code it their color so they’ll know to take a look at it. You could also use color coding to alert your team of changes to a task or deadline. This helps to keep your entire team on the same page and ensures work is finished on time.

Ready to Get Started with Lists?

Setting up Lists is very straightforward: just access it from the left-hand menu of Microsoft 365, where it appears in the list of additional Microsoft applications.

As soon as you’ve opened Lists, you can start creating customized lists with ease. However, should you need assistance with some of its features or further customize views or features to meet the needs of your team, reach out to a JourneyTEAM representative. We’re a Microsoft Gold Partner with extensive knowledge and experience with Microsoft products. We’ll show you how to make this new tool work for you and provide the exact amount of support you need. Contact us today!

Contact JourneyTEAM
We are consultants that can help make Microsoft Lists a reality! Visit our website to find out more, or call us now at 800.439.6456.


Article by: Dave Bollard – Head of Marketing | 801-436-6636

JourneyTEAM is an award-winning consulting firm with proven technology and measurable results. They take Microsoft products; Dynamics 365, SharePoint intranet, Office 365, Azure, CRM, GP, NAV, SL, AX, and modify them to work for you. The team has expert level, Microsoft Gold certified consultants that dive deep into the dynamics of your organization and solve complex issues. They have solutions for sales, marketing, productivity, collaboration, analytics, accounting, security and more. www.journeyteam.com

CRM Software Blog | Dynamics 365

Google proposes applying AI to patent application generation and categorization

November 22, 2020   Big Data

Google asserts that the patent industry stands to benefit from AI and machine learning models like BERT, a natural language processing algorithm that attained state-of-the-art results when it was released in 2018. In a whitepaper published today, the tech giant outlines a methodology to train a BERT model on over 100 million patent publications from the U.S. and other countries using open-source tooling, which can then be used to determine the novelty of patents and generate classifications to assist with categorization.

The global patent corpus is large, with millions of new patents issued every year. It’s complex as well. Patent applications average around 10,000 words and are meticulously wordsmithed by inventors, lawyers, and patent examiners. Patent filings are also written with language that can be unintelligible to lay readers and highly context-dependent; many terms are used to mean completely different things in different patents.

For all these reasons, Google believes that the patents domain is ripe for the application of algorithms like BERT. Patents, the company notes, represent tremendous business value to a number of organizations, with corporations spending tens of billions of dollars a year developing patentable technology and transacting the rights to use the resulting technology.

“We hope that our [proposal] will help the broader patent community in its application of machine learning, including corporate patent departments looking to improve their internal models and tooling with more advanced machine learning techniques, patent offices interested in leveraging state-of-the-art machine learning approaches to assist with patent examination and prior art searching, machine learning and natural language processing researchers and academics who might not have considered using the patents corpus to test and develop novel natural language processing algorithms,” Google data scientists Rob Srebrovic and Jay Yonamine wrote in a blog post. “Patent researchers and academics who might not have considered applying the BERT algorithm or other transformer based approaches to their study of patents and innovation.”

As VentureBeat recently reported, businesses aren’t the only ones that stand to benefit from AI with regard to patent processing. The U.S. Patent and Trademark Office (USPTO) built AI models for different categories of patents and then trained the models on text from patent abstracts. Separately, the USPTO’s staff is using AI to more efficiently process patent applications. According to a spokesperson, the agency is now using a “leading RPA provider” to centralize its bot efforts and ensure a proper process and governance model that includes use cases, development, testing, and security before bots are deployed.

“We are working on adding AI tools to help route applications to examiners more quickly and to help examiners search for prior art,” Andrei Iancu, U.S. Under Secretary of Commerce for intellectual property, said in an emailed response to VentureBeat in October. “We’ve also been active on the Trademarks side, exploring the use of AI to help find prior similar images and to identify what we call fraudulent specimens. We are exploring using AI to improve the accuracy and integrity of the trademark register.”

Big Data – VentureBeat

Monitoring the Power Platform: Power Apps Portal – Implementing Application Insights

July 19, 2020   Microsoft Dynamics CRM

Summary

Power Apps Portals represent a unique offering in the Power Platform, one that allows the platform to reach virtually any user your enterprise wants to connect with. A portal is the external face of your enterprise, allowing users to interact with one another as well as with internal representatives. Users can now provide updates and artifacts to the Common Data Service without needing to contact your enterprise reps, freeing those reps to focus on providing the best customer experience available. Power Apps Portals allow users to interact anonymously or to log in using their preferred identity, opening up the Common Data Service like never before.

This article will focus on how Power Apps Portal administrators can implement a monitoring strategy to better understand their user base. Insights into user traffic and interactions with the Power Apps Portal can all be tracked. Using this data, your organization can focus on how to better serve your customer base and provide an optimal solution and experience.

In this article we focus on adding Azure Application Insights to a Power Apps Portal. We explore how to configure it and how to provide context that yields rich and meaningful telemetry.

What is Azure Application Insights

As part of the Azure Monitor suite, Azure Application Insights is an extensible Application Performance Management (APM) service that can be used to monitor applications, tests, etc. Azure Application Insights can be used with any application hosted in any environment. Depending on what’s being monitored there are SDKs available. For other applications connections and message delivery can be programmed using the REST APIs available.

For Power Platform components, Application Insights is recommended due to its direct integration with Power Apps features and tools and its capabilities to deliver to the API.

Once we begin sending telemetry to Application Insights we can review, in real time, availability tests, user actions, deployment metrics, and other feedback from our applications. Connecting our messages with correlation identifiers gives us a holistic view of how our apps depend on each other. This provides the transparency desired, and honestly needed, with modern-era technology.

Adding a Power Apps Portal to a Power Platform Environment

To begin working with Power Apps Portals, navigate to the Maker Portal and add a new application. This is similar to adding a new Model Driven or Canvas application within the Maker Portal.

Go through the provisioning wizard to define the basic characteristics of your Power Apps Portal including a name, the URL and what language or region to use. Here is a reference to a step by step guide to provisioning the portal that includes important considerations.

When initially configured and provisioned, a new Model Driven Power Application titled ‘Portal Management‘ will appear. This application will serve as the primary customization point for makers and portal developers. This will also be where Azure Application Insights will be configured to work within Power Apps Portals.

NOTE: If your Power Platform environment has migrated from Dynamics 365 and included Dynamics 365 Portals you may see the Model Driven Application called ‘Dynamics 365 Portals‘.

Adding Azure Application Insights to Power Apps Portal

Similar to any web-based application encompassing HTML, CSS, and JavaScript, Power Apps Portal pages can be injected with the Azure Application Insights JavaScript SDK. Oleksandr Olashyn's article "PowerApps Portals tracking using Azure Application Insights" does a great job of detailing the provisioning of Azure Application Insights and the addition of the JavaScript SDK that I describe in this section.

Open the Portal Management Power App

When a Power Apps Portal Application is added to a Power Platform environment, a Model Driven Application titled ‘Portal Management‘ will also be added. To add Azure Application Insights, begin by playing the ‘Portal Management‘ app.

Once the application has loaded, locate the ‘Enable Traffic Analytics‘ sub area within the Administration group on the site map and open.

Add the Azure Application Insights SDK

The Portal Analytics page will prompt makers to choose a portal (if more than one exists) and an area to include the Azure Application Insights snippet.

To locate the Azure Application Insights JavaScript SDK snippet, two primary options exist: reference the official Microsoft Docs site or go to the GitHub repository. As referenced in the Power Apps Component Framework article, I tend to go with the GitHub repository but either will work.

NOTE: The version of the SDK may change; as of this writing the current version is 2.5.6. For the most current release, refer to this reference.

When adding the Azure Application Insights SDK snippet, the application updates an adx_contentsnippet entity record titled 'Tracking Code'.

Configure the Azure Application Insights SDK

Once the snippet for the JavaScript SDK has been added, it needs to be configured to point to the organization's Azure Application Insights resource. The instrumentation key is a 32-digit GUID located on the Overview page of the Azure Application Insights resource.

The instrumentation key will need to be added to the telemetry client’s configuration property ‘instrumentationKey‘ as shown below.

cfg: { // Application Insights Configuration
    instrumentationKey: "YOUR_INSTRUMENTATION_KEY_GOES_HERE"
    /* ...Other Configuration Options... */
}});

Once the instrumentation key has been added and the changes have been applied to the Power Apps Portal, telemetry will begin to flow to Azure Application Insights. With that in mind, as discussed in previous articles, enriching the messages sent is key to optimizing the various features of Azure Application Insights. Continuing that train of thought, let's explore some enrichment options Power Apps Portals allow.

Enriching the Power Apps Portal Messages

Establishing a strong session and operational context is key not only to Power Apps Portal telemetry but to practically any service or application. The information found in the Power Apps Component Framework article, while written for TypeScript, is equally applicable here when working with JavaScript.

Working with Liquid Templating Objects

Liquid Templating allows makers and developers to add contextual information dynamically to Power Apps Portal web pages. Logged-in user information, site settings, Common Data Service data, and more can all be referenced. For example, to work with user contextual information, refer to this article, which describes user as an entity object (contact).

Below is an example of referencing a logged in user’s information to set the authenticated user context within the Azure Application Insights telemetry client.

Author’s Note: Big thanks to Nikita Polyakov for his assistance with creating and identifying enhancements to this script!
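The original screenshots are not reproduced here; the following is a minimal sketch of the idea. It assumes only the documented Liquid user object (the logged-in contact) and the setAuthenticatedUserContext method of the Azure Application Insights JavaScript SDK; anything beyond those two pieces is illustrative:

<script type="text/javascript">
{% if user %}
  // Liquid resolves these expressions server-side before the page is rendered.
  // Tie all subsequent telemetry from this page to the portal contact's GUID.
  appInsights.setAuthenticatedUserContext("{{ user.id }}");
{% endif %}
</script>

Because the Liquid block only renders for authenticated visitors, anonymous users simply fall back to the SDK's default anonymous user id.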

Working with Portal Web Page Variables

The window object on each Power Apps Portal web page contains a Microsoft object which includes information about the portal and user. I haven’t seen this documented so I would not rely on it being supported.

Using this object we can set the “ai.cloud.role” or other contextual attributes to set the type of Portal:
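The screenshot is not reproduced here; below is a minimal sketch using the SDK's documented addTelemetryInitializer hook. The shape of the undocumented window object is an assumption, so the code guards against it being absent:

appInsights.addTelemetryInitializer(function (envelope) {
    // ASSUMPTION: the undocumented portal object; inspect window in your own
    // portal's developer tools to confirm the shape before relying on it.
    var portal = window.Microsoft && window.Microsoft.Dynamic365 &&
        window.Microsoft.Dynamic365.Portal;
    envelope.tags["ai.cloud.role"] = portal ? "PowerAppsPortal" : "UnknownWebApp";
});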

Reviewing Initial Messages in Azure Application Insights

Now that the Azure Application Insights snippet has been added, the configuration established, and the context enriched, the final step to getting started is to review messages. Navigating to the Azure portal and opening the Application Insights Logs blade, we can write basic Kusto queries to see how our users interact with the portal.
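For example, a simple starting query (standard Application Insights schema; adjust the time window as needed) that shows which portal pages users visit most:

pageViews
| where timestamp > ago(24h)
| summarize views = sum(itemCount) by name
| order by views desc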

Portal, being a Web App by nature, works really well with various features of Azure Application Insights. Some highlights include User Session Timelines and User Flows. These features tend to provide a good visualization and help answer questions like “where do my users typically go from the home page?” or “how long were they on a page before navigating away?“

As discussed above, enriching messages is key to understanding and gaining insight into potential problems. In one example, I created a scenario where an AJAX call caused a performance concern, which surfaced in the Application Map and dependency views.

Sample Code

The sample code used in this article can be found in the MonitoringPowerPlatform GitHub repo located here.

Next Steps

In this article we have discussed how to set up Azure Application Insights with Power Apps Portals. We covered using the content snippet to add the Azure Application Insights JavaScript SDK. We discussed how to extend the SDK to include values from Liquid templating and window objects. Finally, we reviewed page views and how they can be represented in Application Insights.

For next steps, continue exploring Liquid templating and how to continuously enrich the messages sent to Azure Application Insights. Consider the custom property and metrics bag and how these can be supplemented with Common Data Service content or browser resource timings. Also consider ways to add instrumentation to other aspects of Power Apps Portals beyond collecting page views, including page exceptions, dependent API calls, JavaScript processing, etc.

Continuing this series we will cover how to implement and use Azure Blob Storage for diagnostic logging.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

Monitoring the Power Platform: Introduction and Index

Dynamics 365 Customer Engagement in the Field

Monitoring the Power Platform: Custom Connectors – Building an Application Insights Connector

June 28, 2020   Microsoft Dynamics CRM

Summary

Connectors are used throughout Power Platform pillars such as Microsoft Power Automate and Microsoft Power Apps. They are also used in Azure services such as Azure Logic Apps. Connectors are a wrapper around first and third party APIs to provide a way for services to talk to each other. They represent the glue between services, allowing users to set up connections that tie various accounts together. These connectors encompass a wide range of SaaS providers including Dynamics 365, Office 365, Dropbox, Salesforce and more.

This article will demonstrate how to build a custom connector for use with Power Automate and Power Apps Canvas Apps. This custom connector will attempt to build a connection to Azure Application Insights to assist Makers with sending messages during run time. We will discuss building and deploying an Azure Function and how to construct a Custom Connector. Finally we will discuss testing and supplying run time data from Power Automate.

Overview of Azure Function

Azure Functions provide a great way to build microservices, including ones that help surface run time data from Power Automate flows or Model Driven Application plug-in tracing. Azure Functions can be written using .NET Core and include native integration with Azure Application Insights. Alternatively, we can import the Azure Application Insights SDK for a streamlined approach to delivering messages. This article will focus on using the HTTP entry point and the Azure Application Insights SDK to deliver messages to Application Insights.

Overview of Custom Connectors

Custom Connectors allow developers to supply custom actions and triggers that can be used by Microsoft Power Automate and Microsoft Power Apps. These connectors provide a reusable, no or low code approach to integrating with an Application Programming Interface, otherwise known as an API. The complexity of implementing a connection and calling the API is hidden from makers, allowing them to focus on finding a solution to whatever business objective is at hand.

The custom connector can be thought of as part of a "no cliffs" approach to empowering makers. If a connector doesn't exist for your particular need, for instance an in-house API, a custom connector can be used to bridge the gap. No longer do we have to hand-build HTTP requests or manage connection flows; the connector fills the gap.

For additional information, including considerations for Solution Aware Custom Connectors allowing for migration between environments, refer to the article Monitoring the Power Platform: Connectors, Connections and Data Loss Prevention Policies.

Overview of the Open API Specification

Open API is a specification born of the need to standardize how we describe API endpoints. Built from Swagger, the Open API specification dictates how an API should be used. Everything from security to required fields is detailed, allowing integrators to focus on developing rather than chasing down API specs. One great feature is the ability to design a specification first, without relying on code being written. This allows makers to define what they are looking for with custom connectors.

This specification is used by Power Platform Custom Connectors to build out the various triggers and actions that will be made available. This will be covered in more detail in the Building a Custom Connector section.

For more information regarding Open API please refer to this reference from Swagger.

Building the Azure Function

The section below documents the steps I took to create the Azure Application Insights Azure Function. There are several other ways to build this, frameworks to use, and additional requirements to adhere to that are not represented here. That said, these steps should allow developers to create a proof of concept they can learn from and build on.

Creating the Visual Studio Project and Gathering Dependencies

To build the Azure Function, I started with Visual Studio 2019 and created a new Azure Function project.

I chose a name and left everything else as the default values. From there I chose the HTTP trigger and .NET Core Framework v2 version for my Azure Function.

Once the project loaded, I added the latest NuGet package for Azure Application Insights.

The goal with this Azure Function project is to avoid any additional dependencies and simply deliver messages to Azure Application Insights, returning an object that can help me track messages. A sketch of the idea appears below. For the full Azure Function code, please refer to the Samples folder within the MonitoringPowerPlatform GitHub repo that includes all samples from the Monitoring the Power Platform series. For this sample, a direct link can be found here.
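This is not the author's .NET Core implementation (that code lives in the repo linked above); as a rough sketch of the same idea, here is the equivalent shape as a JavaScript HTTP-triggered Azure Function using the Node.js Application Insights SDK:

// index.js - HTTP-triggered Azure Function (a sketch, not the original C# code)
const appInsights = require("applicationinsights");
// setup() with no arguments reads APPINSIGHTS_INSTRUMENTATIONKEY from app settings
appInsights.setup().start();
const client = appInsights.defaultClient;

module.exports = async function (context, req) {
    const { correlationid, name, properties } = req.body || {};
    // Forward the payload to Application Insights as a custom event
    client.trackEvent({
        name: name || "PowerPlatformEvent",
        properties: Object.assign({ correlationid: correlationid }, properties)
    });
    // Echo the correlation id back so the caller can track the message
    context.res = { status: 200, body: { correlationid: correlationid } };
};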

In the sample, I've embedded the custom connector definition described below within the solution file.

Testing and Deploying to Azure

For testing, I typically use Postman to build a collection of test requests. It's a free application that is an industry standard for testing APIs. As noted, it's also referenced in the documentation for crafting a specification for a Power Platform custom connector.

Once you've tested and are ready to deploy, right-click on the Azure Function project and choose Publish. For my example I published using the Zip Deploy mechanism.

Building the Custom Connector

The Custom Connector can be built by hand or using an Open API definition. For my connector, I defined and tested my Azure Function and deployed it using Visual Studio. Once running, I was able to use Azure API Management to assist with defining the specification for use with custom connectors. The documentation points to using Postman as a primary tool for the specification; however, I wanted to mention other techniques that achieve the same goal. To follow a step by step guide using Azure API Management, refer to the article Create an OpenAPI definition for a serverless API using Azure API Management.

Icon, Description, Host and Base URL

The first section of the wizard will expect the endpoint from your Azure Function App. The specific operations will be defined later; for now, insert the host and "/api" if it is part of your Azure Function URL. In my Azure Function this looked like "<functionapp>.azurewebsites.net".

Next, the icon background color and image will need to be updated. I’ve noticed that the image will look skewed when used as an action but as a connection listed in a Power Automate Flow or Canvas App it looked ok.

Security

The custom connector will need security defined, which is how we establish our connection, similar to other connectors. Many options exist, including OAuth 2.0 and Basic authentication, but for Azure Functions an API key works well. Name the parameter "code" and set its location to "Query".

The code will need to be standardized across functions or you may have to create multiple connections. A quick way to do this is to create a key for the host.

NOTE: Whatever this string is, it is what the custom connector we define below will need, so I suggest generating it yourself.

The Definition

The custom connector definition is where we begin to realize how our connector will be used within Power Automate flows. Each action will need to be defined within the "paths" section of the specification. It's important to point out why the choice of an Azure Function helps here: by defining individual functions we can create actions that align directly with Azure Application Insights tables, or with domain or workload specific actions. The example I'm using only shows that direct alignment, but depending on the need, a single action could have the Azure Function send multiple messages to various tables or even to other log stores (e.g. Azure Log Analytics).

Each action requires an operationId and a response. Optional elements include a summary, a description, and parameters, which define the fields presented by the custom connector action. The original screenshots showed the Track Event action alongside its definition in the specification.
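As a sketch only (the path, operationId, and schema below are assumptions modeled on the request body shown later in this article), such a Track Event entry in the "paths" section of the swagger document might look like:

"paths": {
  "/TrackEvent": {
    "post": {
      "operationId": "TrackEvent",
      "summary": "Track Event",
      "description": "Sends a custom event to Azure Application Insights",
      "parameters": [
        {
          "name": "body",
          "in": "body",
          "required": true,
          "schema": {
            "type": "object",
            "properties": {
              "correlationid": { "type": "string" },
              "name": { "type": "string" },
              "properties": { "type": "object" }
            }
          }
        }
      ],
      "responses": { "200": { "description": "OK" } }
    }
  }
}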

Property Considerations

One item I ran into fairly early on was how to distinguish and work with JSON objects nested within other objects. Consider the scenario below:

{
    "correlationid":"testcorrelation",
    "name": "test app insight",
    "properties":
        {
            "user":"John",
            "userTwo":"Jane"
        }
}

My plan for this object was to take the properties shown above and add this to the customDimensions field within the customEvents table in Azure Application Insights. The issue I encountered was the data type of the field, thinking that I could use a serialized object as a string data type.

However, the custom connector encoded this string, resulting in a mismatch between what I was expecting in my Azure Function and what was actually delivered.

The fix was to reference the Open API (Swagger) documentation and use the object or array data type instead.

Testing the Custom Connector

The custom connector wizard includes a window for testing each operation. This is useful for seeing how the request properties are sent to the Azure Function from the custom connector and how the response looks. Begin by creating and testing a connection.

Next, choose an operation and fill out the properties defined in the Open API specification. When ready, click the Test operation button. A nice additional feature here is that it generates a cURL command that can be used locally.
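As a sketch, assuming the Track Event operation and the host format used earlier in this article, the generated command looks something like:

curl -X POST "https://<functionapp>.azurewebsites.net/api/TrackEvent?code=<host-key>" \
  -H "Content-Type: application/json" \
  -d '{"correlationid":"testcorrelation","name":"test app insight","properties":{"user":"John"}}'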

The request and response will be shown from the operation. Here is where we can continually refine the custom connector specification to provide the correct data types to the Azure Function.

The sample for this custom connector is located here.

Using the Custom Connector

Once the Custom Connector has been created it can be used within Power Automate Flows or Power Apps Canvas Apps. I would assume this also applies to Azure Logic Apps. Below is an example using Track Event. In this example I’m including the correlationId that I passed from my originating request as well as the name property. I’m also using workflow, trigger and action objects detailed in the article Monitoring the Power Platform: Power Automate – Run Time Part 1: Triggers, Workflows and Actions.

To wrap up, I ran the Application Insights Tester Power Automate Flow end to end. The full sample can be downloaded here.

Next Steps

In this article we have discussed how to build and deploy an Azure Function to help deliver messages to Azure Application Insights. We then created a custom connector to allow Makers the ability to interact with our Azure Function like any other connector.

Continuing down this path, we can use this approach for extending other logging APIs such as Azure Log Analytics. We can even extend the Common Data Service connector if needed. Be mindful, however, of the nuances between the Open API specification and what the custom connector requires.

In previous articles, we discussed how to evaluate workflows, triggers and run functions to help deliver insights. We have also discussed how to implement exception handling within Power Automate flows. Using the connector above, we can now send specific results from our scoped actions to Azure Application Insights allowing for proactive monitoring and action.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

Monitoring the Power Platform: Introduction and Index

Dynamics 365 Customer Engagement in the Field

Monitoring the Power Platform: Canvas Driven Apps – Getting Started with Application Insights

May 15, 2020   Microsoft Dynamics CRM

Summary

Power Apps Canvas Apps represent a no or low code approach to building and delivering modern applications. The requirement of knowing programming languages such as C# has been removed, allowing makers of virtually any background to build apps. These apps can be used with hundreds of connectors, allowing for a flexible user interface layered on top of data sources. Apps can also be generated from data sources automatically, allowing you to quickly create and deploy an application to your team or customers.

This article is designed to introduce makers to incorporating Azure Application Insights into Power Apps Canvas Apps. We will cover adding the necessary identifiers, reviewing the tables events are sent to, and examining some helpful Kusto queries.

Application Insights

Azure Application Insights is an extensible Application Performance Management (APM) service that can be used to monitor applications, tests, etc. Azure Application Insights can be used with any application hosted in any environment. Depending on what’s being monitored there are SDKs available. For other applications connections and message delivery can be programmed using the REST APIs available.

For Power Platform components, Application Insights is recommended due to its direct integration with Power Apps features and tools and its capabilities to deliver to the API.

Once we begin sending telemetry to Application Insights we can review, in real time, availability tests, user actions, deployment metrics, and other feedback from our applications. Connecting our messages with correlation identifiers gives us a holistic view of how our apps depend on each other. This provides the transparency desired, and honestly needed, with modern-era technology.

Adding Application Insights to Canvas Apps

Adding Azure Application Insights message delivery is a native feature of Power Apps Canvas Apps. Once added, it will begin to send messages both from the preview player and, once deployed, from your application in Power Apps.

To add the Instrumentation Key to your Canvas App, open the Power Apps Studio and locate the ‘App’ in the Tree view.

Next, in the App Object window to the right, add the Azure Application Insights Instrumentation Key.

Adding and Locating Identifiers

Identifiers in Canvas Apps come in various formats, including the user playing the app, the session id, the player version, the app version, and so on. These data elements are a must when troubleshooting and monitoring how your users interact with your app. Some of the data points I find valuable are the app and player build numbers, which are key to understanding whether users are on out-of-date player versions. The other major data point is the session id. To obtain these values as an app user, navigate to the Settings window and click 'Session details'.

If a user reports an issue, having the session id and Power Apps player version can help with troubleshooting. That said, currently I don’t see a way to grab the session id natively using Canvas App functions. However, using the Power Apps connector with Power Automate, the session Id can be obtained and added to a trace entry.

AUTHOR’S NOTE: This article from Aengus Heaney titled “Log telemetry for your Apps using Azure Application Insights” details that this feature is coming. I would suggest, unless needed immediately, to avoid the Power Automate customization. This feature will eliminate the need for using the Power Apps connector in this fashion, I’m very excited to see its coming! I’ll update this article once this is available.

Adding Traces

The Trace function is used to send messages to the traces table in Application Insights. This gives makers the ability to add telemetry and instrumentation from practically any control on any screen. It also opens us up to generating identifiers for specific actions, which can be used for troubleshooting in a live environment. For example, informational traces can capture the timings around the invocation of a Power Automate flow: one trace for the button click, and further entries instrumenting the flow call itself, as sketched below.
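The original screenshots are not reproduced here; as a minimal sketch, the OnSelect formula of such a button might look like the following (RunMyFlow is a placeholder for whatever Power Automate flow the app calls, and the property names are illustrative):

// OnSelect of a button that invokes a Power Automate flow
Set(varCorrelationId, GUID());
Trace("InvokeFlow: button clicked", TraceSeverity.Information, {correlationId: varCorrelationId});
Set(varResult, RunMyFlow.Run());
Trace("InvokeFlow: flow returned", TraceSeverity.Information, {correlationId: varCorrelationId})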

The result of these Trace calls is a set of entries in Application Insights, each showing the message and a timestamp.

Traces can be one of three types: Information (for operational activities), Warning (a non-breaking issue), or Error (a breaking issue).

Based on the severity level, the Application Insights record will change.

Exploring what's delivered

At the time of this writing, the tables that data is natively delivered to are customEvents, pageViews, and browserTimings. Each table contains generic Azure Application Insights properties as well as specific relevant properties.

customEvents – The customEvents table shows when the published app was started.

pageViews – The pageViews table is included when Azure Application Insights is added to Canvas Apps. The message received by Application Insights contains the URL, the name of the screen within your app and the duration as well as a handy performance bucket. Using the duration along with contextual information from the session and user we can begin to identify performance concerns.

NOTE: I have seen pageViews duplicate the duration across all screens. Consider adding trace events as described in the Maker Defined Events section, or a technique in Kusto to find the difference between pageView entries.

browserTimings – This represents browser interactions, including the send and receive durations, the client processing time, and the all-up total duration. Similar to the pageViews table, a performance bucket is present, allowing for a general visualization.

Maker Defined Events

The traces table contains information sent from the app both by the platform and by the maker using the Trace() method. For the platform trace events, the user agent string and client information are captured. For the Trace method, as shown above, an enumeration is used to set the severity from the Canvas App; in Application Insights this translates to a number (the standard severityLevel values: 1 for Information, 2 for Warning, 3 for Error).

Useful Kusto Queries

The Kusto Query Language allows analysts to write queries for Azure Application Insights. Below I’ve included some queries to help you get started with each table events are currently delivered to.

Pulling ms-app identifiers from custom dimensions:

//This query shows how to parse customDimensions for the app identifiers
traces
| union pageViews, customEvents, browserTimings
| extend cd = parse_json(customDimensions)
| project timestamp,
    itemId,                            // changes for each call
    itemType,
    operation_Id, operation_ParentId,  // does not change for each call
    operation_Name, session_Id, user_Id,
    message, cd['ms-appSessionId'], cd['ms-appName'], cd['ms-appId']

Pulling Page Views within the same session:

//This query shows how to use a session_id to follow a user's path in the canvas app
pageViews 
// | where session_Id == "f8Pae" //Windows 10
// | where session_Id == "YhUhd" //iOS
| where (timestamp >= datetime(2020-05-13T10:02:52.137Z) and timestamp <= datetime(2020-05-14T12:04:52.137Z)) 

Slow Performing Pages or screens:

// Slowest pages 
// What are the 3 slowest pages, and how slow are they? 
pageViews
| where notempty(duration) and client_Type == 'Browser'
| extend total_duration=duration*itemCount
| summarize avg_duration=(sum(total_duration)/sum(itemCount)) by operation_Name
| top 3 by avg_duration desc
| render piechart 

Connecting the Dots

The Power Apps Canvas App platform provides app contextual information for each event passed to Application Insights. These include operation name, operation and parent operation identifiers as well as user and session data. Most messages also include custom properties titled ms-appId, ms-appName and ms-appSessionId.

The following Kusto query is an example showing how to isolate for specific operations by a user in a player session. Using the session_id field, we can filter the specific action, which may have generated multiple events, and group them together.

union (traces), (requests), (pageViews), (dependencies), (customEvents), (availabilityResults), (exceptions)
| extend itemType = iif(itemType == 'availabilityResult',itemType,iif(itemType == 'customEvent',itemType,iif(itemType == 'dependency',itemType,iif(itemType == 'pageView',itemType,iif(itemType == 'request',itemType,iif(itemType == 'trace',itemType,iif(itemType == 'exception',itemType,"")))))))
| where 
(
    (itemType == 'request' or (itemType == 'trace' or (itemType == 'exception' or (itemType == 'dependency' or (itemType == 'availabilityResult' or (itemType == 'pageView' or itemType == 'customEvent')))))) 
    and 
    ((timestamp >= datetime(2020-04-26T05:17:59.459Z) and timestamp <= datetime(2020-04-27T05:17:59.459Z)) 
    and 
    session_Id == 'tmcZK'))
| top 101 by timestamp desc

Application Insights contains a User Session feature that can help visualize and provide data points for the specific session. The image below combines custom events and page views.

Next Steps

The native Azure Application Insights traceability is a relatively new feature for Canvas Apps. I would expect additional messages to be delivered to the missing tables noted above, such as exceptions and custom metrics. In the meantime, consider using a Power Automate flow to send events, a custom connector, or the Log Analytics Data Collector connector. This last connector requires a P1 license but does allow you to send data to a Log Analytics workspace, which can be queried and monitored by Azure Monitor and other platforms.

Utilizing the Azure Application Insights API Key, Microsoft Power BI reports can be created based off the data collected from Power Apps Canvas Apps. Consider using the M Query export or building custom queries using the Application Insights API.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

Monitoring the Power Platform: Introduction and Index

Dynamics 365 Customer Engagement in the Field

Startups: Focus your Energy on your Application, not your Infrastructure

February 7, 2020   TIBCO Spotfire

While smart infrastructure decisions are vital to application software companies, the most important aspect of intellectual property (IP) sits in the product capabilities expressed at the application feature and function level. The application layer is most likely where your company will realize its greatest IP value. 

Newly formed application software companies typically have one very important job: validate the market with a 'minimum viable product' (MVP). Behind the scenes, the process to build the MVP is usually driven by a set of market requirements that are almost always closely tied to substantive feedback from different types of potential users. In simpler terms, the product development team must have a good grasp of what the customer wants and align that to the features within the software solution. As a consequence of this approach, one can imagine that nearly 90% of important conversations for application product development at an early stage will center around application capability as opposed to application infrastructure.

An outsized focus on the application functional layer is generally a good thing, as your company's priority should be to get the application features to align with early customer needs and drive adoption. That said, after going to market and maturing beyond the MVP stage, most software teams will need to start paying attention to infrastructure capability, which generally sits behind the scenes and touches the areas of security, scalability, microservices, and integration, to name a few.

Most startups face the difficult decision of how much and where to invest in infrastructure capability. They must choose between building it in-house (organic) and using an OEM solution as a deeply embedded option. Let's take a moment to look at the pros and cons and the market-value impact of these options at a high level.

Building your infrastructure in-house vs OEM

Organic (In-House)
  • Pros: Very specific to the app use cases; full IP ownership.
  • Cons: Expensive to hire; evolving requirements will require further investment ($$$); longer to go to market.
  • Market Value Conclusions: Medium. IP is owned by the firm but not viewed as essential by the customer because it sits behind the scenes, so bang for the buck is moderate at best and execution risk is high given low in-house expertise.

OEM (Deeply Embedded)
  • Pros: Fastest to market; provides road-tested capability to reduce risk; quicker to adapt to customer technology shifts; less expensive to maintain than hiring, building, and growing an in-house infrastructure team.
  • Cons: Generally a broad capability (a hammer for a thumbtack); IP is owned by the OEM company; need to invest in learning and embedding the OEM through a series of POCs.
  • Market Value Conclusions: High. Market value is driven by much faster time to market and less technology risk and debt that might accrue if the product landscape shifts.

As you can see from the above comparison, embedding an OEM solution will generally give your application a much higher market value because you will get it to market faster. You will also take on less technology risk should the product landscape you are working in shift.

Learn how TIBCO’s deeply embedded OEM solutions can help you get your tech start-up’s infrastructure right and allow you to focus your energy on your applications. 

The TIBCO Blog

Rise Of Low-Code/No-Code Application Development Platforms

January 29, 2020   BI News and Info

At the heart of many digital transformation efforts is often the organization’s desire to be more agile or responsive to change. This requires looking for ways to dramatically reduce the time needed to develop and deploy software and to simplify and optimize the processes around its maintenance for quicker, more efficient deployment. Another key outcome that is part of many digital transformation efforts is enabling the organization to be more innovative. That might encompass finding ways to transform how the organization operates and realizes dramatic improvements in efficiency or effectiveness or creating new value by either delivering new products and services or creating new business models.

For organizations using conventional approaches to developing software, this can be a tall order. Developing new applications can take too long or require very specialized and expensive skills that are in short supply or hard to retain. Maintaining existing programs can be daunting, as well, as they struggle with increasing complexity and the weight of mounting technical debt.

Enter “low-code” or “no-code” application development platforms. This emerging category of software provides organizations with an easier to understand – often visual – declarative style of software development augmented by a simpler maintenance and deployment model.

Essentially, these tools allow developers, or even non-developers, to build applications quickly and easily on an ongoing basis. Unlike the rapid application development (RAD) tools of the past, they are often offered as a service and accessed via the cloud, with ready integrations to various data sources and other applications (often via RESTful APIs) available out of the box. They also come with integrated tools for application lifecycle management such as versioning, testing, and deployment.

With these new platforms, organizations can realize three things:

Faster time to value

The more intuitive nature of these platforms allows organizations to quickly get started and create functional prototypes without having to code from scratch. Prebuilt and reusable templates of common application patterns are often provided, allowing developers to create new applications in hours or days rather than weeks or months. When coupled with agile development approaches, these platforms allow developers to move through the process of ideating, prototyping, testing, releasing, and refining more quickly than they would otherwise do with conventional application development approaches.

Greater efficiency at scale

Low-code/no-code application development platforms allow developers to focus on building the unique or differentiating functionality of their applications and not worry about basic underlying services/functionality – authentication, user management, data retrieval and manipulation, integration, reporting, device-specific optimization, and others.

These platforms also provide tools for developers to easily manage the user interface, data model, business rules, and definitions for simpler, more straightforward ongoing management. So easy, in fact, that even less-experienced developers can do it themselves, lessening the need for costly or hard-to-find expert developers. These tools also insulate the developer and operations folks from the need to keep updating the frameworks, infrastructure, and other underlying technology behind the application because the platform provider manages them.

Innovative thinking

Software development is a highly creative and iterative process. Using low-code or no-code development platforms in combination with user-centric approaches such as design thinking, organizations can rapidly bring an idea to pilot. This way, they can get early user feedback or market validation without spending too much time and effort – a so-called minimum viable product (MVP).

And because these platforms make it easy to get started, even non-professional developers or “citizen developers,” who are more likely to have a deeper or more intimate understanding of the business and end-user or customer needs, can develop the MVP themselves. This allows the organization to translate ideas to action much faster and innovate on a wider scale.

While they offer a lot of benefits, low-code/no-code application development platforms are certainly not a wholesale replacement for conventional application development methods (at least not yet). There are still situations where full control of the technology stack benefits the organization, especially if that stack is the anchor or foundation of the business, the source of differentiation, or a source of competitive advantage. In most cases, however, organizations will benefit from having these platforms in their toolbox, especially as they embark on a digital transformation journey.

This article originally appeared on DXC.technology and is republished by permission.

Do you know What Is The API Economy (And Why It Matters)?

Digitalist Magazine

TLC vs. QLC NAND: Pick the best memory technology for your storage application

December 7, 2019   Big Data
This article is part of the Technology Insight series, made possible with funding from Intel.

In case you hadn’t noticed, solid-state drives keep getting bigger and faster. Back in 2008, a state-of-the-art enterprise SSD offered 32GB of capacity and moved files at up to 250 MB/s. Today, a 32TB version can read data sequentially at 3,200 MB/s. That’s a 1000x size increase and more than 10x speed-up.

Those incredible gains are made possible by storing more bits of data in every memory cell, and then fitting more memory cells into each NAND flash chip. For example, Intel's 2008-era X25-E used single-level cell (SLC) flash that held one bit in each cell; the new SSD D5-P4326 packs four bits into the same space.

The industry is moving toward higher-capacity SSDs in its effort to keep data close to processing resources. But simply buying the largest SSD out there isn’t the best way for IT decision-makers to construct complex storage systems. Before picking the drives for your next application, make sure you understand how NAND flash affects performance, endurance, and density.

Are you ready for the zettabyte age?

  • ~32ZB of data were created in 2018, according to IDC
  • Current forecasts suggest ~103ZB will be created in 2023
  • Scaling solid-state storage to help satisfy this demand requires denser NAND (x/y axis), more layers of NAND per die (z axis), and more bits per memory cell
  • Quad-level cell (QLC) NAND offers a 33% scaling advantage compared to existing triple-level cell (TLC) memory, but presents write performance and endurance challenges
  • As a result, TLC remains an important memory technology in write-intensive workloads. Expect the two technologies to complement each other.

3D QLC NAND: Where we’re going

The NAND flash in Intel's SSD D5-P4326 is referred to as 3D QLC. When we talk about QLC, or quad-level cell technology, we're referring to each memory cell's ability to store four bits of data using 16 distinct charge states (distinguished by 15 threshold voltages). 3D is a reference to the way memory cells are built.

It used to be that those cells were arranged side by side on a silicon substrate. Their density increased as new lithography processes made it possible to fit more of them on a planar surface. But as it became increasingly difficult to scale along the x- and y-axes, manufacturers started stacking cells vertically, three-dimensionally, along the z-axis.

The benefits of 3D NAND over 2D planar NAND naturally include much higher density. 3D NAND can also be written to and erased more times than planar NAND thanks to its larger memory cells. The technology offers lower power consumption, better performance, and less cost per bit of storage.

In a flash device built 64 layers tall, 3D NAND enables 64 times the cell density of single-layer planar memory. From there, cramming more data into every cell serves as a multiplier, so QLC technology takes that 64x and turns it into 256x relative to a planar single-bit baseline. Specific to Intel's 64-layer 3D NAND, which it uses in the SSD D5-P4326, the company can fit 1Tb of density per die. And more flash memory per die translates to higher-capacity SSDs in the same familiar form factors.
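
To make those multipliers concrete, here is a minimal sketch of the arithmetic, assuming the planar, one-bit-per-cell baseline the comparison above implies:

```python
import math

def bits_per_cell(charge_states: int) -> int:
    """Bits a cell can store given the number of distinguishable charge states."""
    return int(math.log2(charge_states))

# QLC: 16 charge states -> 4 bits; TLC: 8 charge states -> 3 bits.
assert bits_per_cell(16) == 4
assert bits_per_cell(8) == 3

# Density multipliers vs. a planar, 1-bit-per-cell (SLC) baseline.
layers = 64
print(layers * bits_per_cell(16))  # 256 (the QLC figure above)
print(layers * bits_per_cell(8))   # 192 (the TLC equivalent)

# QLC's bits-per-cell advantage over TLC: 4/3 - 1, the ~33% cited earlier.
print(f"{4 / 3 - 1:.0%}")          # 33%
```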

3D TLC NAND: Still cutting-edge memory technology

Whereas QLC NAND stores four bits per cell by sensing one of 16 possible charge states, triple-level cell (TLC) NAND only tracks eight. Of course, that's still a formidable task. But because each TLC cell stores fewer bits, there is more margin between its charge states, so TLC can withstand a higher number of program/erase cycles before its cells start wearing out.


Above: Storing four bits in a QLC memory cell requires differentiating between 16 different charge states. Three bits in TLC NAND can be achieved with eight charge states. Both are far more complex than older MLC or SLC technologies.

Image Credit: IBM Research

TLC flash is faster than QLC, too. It turns out that differentiating between twice as many charge states makes QLC more prone to mistakes than TLC flash. And although both technologies employ error-correcting code algorithms to maintain the integrity of your data, this process consumes a greater number of processing cycles on QLC-based drives, hitting write performance especially hard.

Picking the right performance profile for your application

In a presentation at the 2019 Flash Memory Summit, Micron's Kent Smith made it clear that the latest QLC-based SSDs are designed to augment existing TLC SSDs, not replace them. He pointed out that QLC pricing puts the technology within striking distance of the 55 million 7,200 RPM (or faster) hard drives expected to ship in 2019.

Knowing that 3D TLC and 3D QLC sit side by side on the shelf, how (and, perhaps more important, why) do you choose between them? It's all about understanding your storage application.

Because QLC NAND can be read sequentially just as fast as TLC NAND, it's great for read-heavy workloads. Conversely, TLC NAND has the upper hand in write performance. When you apply those strengths to the spectrum of read/write ratios, it's easy to visualize where each technology fits best. Smith went a step further, adding block sizes to his breakdown, and showed QLC SSDs handling mixed workloads with large blocks of data.


Above: TLC and QLC SSDs complement each other. The former excels in write-heavy workloads, while the latter offers excellent read performance at a lower cost-per-bit than TLC flash.

Image Credit: Micron

Better still, Smith’s presentation offered up a number of performance-sensitive workloads historically run on hard drives that read data at least 90% of the time, or rely heavily on random reads and sequential writes. AI data lakes, edge analytics (including 5G), big data (Hadoop), object stores, SQL databases, content delivery networks, cloud services, vSAN capacity tiers, and financial regulatory and compliance storage are all prime candidates to make the move to QLC-based SSDs.

Whereas a traditional datacenter I/O pattern might involve four reads for every write, the deep learning algorithms that feed AI are estimated at 5,000 reads for every write, according to data presented by Micron. A larger, cheaper QLC-based SSD is ideal in an application like that.
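
As a rough illustration of that guidance, the sketch below turns a workload's read/write mix into a technology suggestion. The 90% read threshold comes from the presentation cited above; the function itself is a simplified heuristic, not a vendor sizing tool:

```python
def suggest_nand(reads: int, writes: int, read_heavy_threshold: float = 0.90) -> str:
    """Suggest a NAND technology from a workload's read/write mix.

    Simplified heuristic: a read share of ~90% or more favors QLC (cheaper
    per bit, comparable sequential read speed); otherwise TLC's write
    performance and endurance win out.
    """
    read_share = reads / (reads + writes)
    return "QLC" if read_share >= read_heavy_threshold else "TLC"

# Traditional datacenter pattern: ~4 reads per write -> 80% reads -> TLC.
print(suggest_nand(4, 1))     # TLC
# Deep learning data feed: ~5,000 reads per write -> 99.98% reads -> QLC.
print(suggest_nand(5000, 1))  # QLC
```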

“Netflix is another good example of where QLC NAND works well,” said Michael Scriber, senior director of server solution management at Supermicro. “They’re going to write a movie to their system once. Then, customers are going to read that movie out a zillion times at the same performance and lower cost compared to TLC.”

Endurance matters, too

Beyond performance, your application's ratio of reads to writes also affects endurance. Since QLC NAND is rated for fewer program/erase cycles than TLC, write-heavy workloads wear its memory cells out faster. Those workloads seem to be the exception, though. According to Micron, four out of five enterprise SSDs shipped in 2018 were rated for less than one drive write per day (DWPD). That metric tells you how many times the SSD's full capacity can be written each day over its warranty period.


Above: According to Micron, 4/5 of all enterprise drives shipped in 2018 were rated for less than 1 DWPD, illustrating a decreasing need for high-endurance SSDs.

Image Credit: Micron

Back in the day of Intel's X25-E, one drive write per day (a mere 32GB) would have been grossly insufficient for enterprise workloads. But when you factor in the capacity of today's SSDs, a lower endurance rating is easier to stomach.

“If I have an 8TB (TLC) drive good for 1 DWPD, I can write 8TB every day for five years and my warranty is still good,” said Supermicro’s Scriber. “On the other hand, if I have a 16TB (QLC) drive that’s only good for 0.5 DWPD, I can still write 8TB per day for the next five years and it’ll still be fine.”

When you think about 32 SSD D5-P4326s across the front of a 1U server, and the roughly 500TB of pooled capacity they represent, ask whether your application will really write 250TB or 300TB on a daily basis before sounding an alarm over endurance.
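
Here is a minimal sketch of that budgeting exercise, using the capacities and ratings quoted above (the 0.5 DWPD figure comes from Scriber's QLC example; check your own drive's datasheet):

```python
def daily_write_budget_tb(drives: int, capacity_tb: float, dwpd: float) -> float:
    """Total data (TB/day) an SSD pool can absorb within its warranty rating."""
    return drives * capacity_tb * dwpd

# 32x 15.36TB QLC drives (SSD D5-P4326-class) rated at 0.5 DWPD:
budget = daily_write_budget_tb(drives=32, capacity_tb=15.36, dwpd=0.5)
print(f"{budget:.0f} TB/day")  # ~246 TB/day of sustained writes

workload_tb_per_day = 40  # hypothetical write rate for your application
print(workload_tb_per_day <= budget)  # True: endurance is not the bottleneck
```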

TLC and QLC NAND complement each other

By 2025, Western Digital predicts that 50% of the NAND flash bits shipped will be 3D QLC, with 3D TLC making up most of what’s left. QLC NAND will slowly displace some TLC NAND volume between now and then. However, both technologies remain important moving forward.


Above: By 2025, half of the bits shipped will be based on 3D QLC NAND. The other half will be 3D TLC. Planar NAND will have all but disappeared.

Image Credit: Western Digital

As a case in point, Intel's SSD DC P4510 lives alongside its SSD D5-P4326. Both are available in E1.L form factors at capacities as large as 15.36TB, and both are covered by five-year warranties. But the SSD DC P4510 is composed of 3D TLC NAND stacked 64 layers high and is capable of 3.1 GB/s sequential reads and writes. The SSD D5-P4326 employs 64-layer 3D QLC NAND that pushes sequential reads up to 3.2 GB/s but drops to 1.6 GB/s on writes. Although some of their specifications overlap, these drives are designed for different applications.

Bottom line

Decision-makers have more flexibility than ever to tap the best storage option for their workloads, balancing performance, endurance, density, and cost. QLC NAND’s strengths finally make a case for replacing mechanical disks with much faster and more reliable solid-state drives. Meanwhile, TLC-based SSDs remain the better choice in write-heavy applications. Address each of your workloads with the right storage technology and you’ll hammer out bottlenecks without overspending.

Big Data – VentureBeat

Application Proliferation: Imperatives For The CIO

November 28, 2019   SAP

Enterprise applications are integral to running any business. They are among a business's most effective tools, speeding up and simplifying nearly any task or process. Applications enable businesses to serve a wide range of customers across geographies, create versatile products and solutions customized to diverse customer situations and needs, sell under difficult and uncertain conditions, unlock the value of (or verify) specific opportunities, and even uncover root causes.

But can you have too much of a good thing? And how do you ensure that the good things you have stay good?

Most companies deploy 10-12 large application platforms to manage business functions such as finance, human resources, sales and marketing, customer support, supply chain, manufacturing, procurement, and retail store operations, along with the key cross-cutting needs of data management and analytics. Platforms are reliable instruments that execute recurring business processes impeccably and act as custodians of compliance.

However, they can also fall short of the agility that a rapidly changing business environment demands. Business leaders often need niche applications to gain a competitive edge. In an aggressive and ambitious marketplace where time is the most valuable commodity, they frequently decide in favor of specialized SaaS applications to fill this gap and supplement the platform's capabilities.

This creates a sprawling fringe of SaaS applications attached to the core platforms. Enterprise architecture is the lever that helps CIOs manage this continuously growing application portfolio, which, if left unmanaged, breeds complexity, creates confusion, wastes staff time, and can even jeopardize business relationships.

Business processes and data flow maps

In the age of innovation, some applications fundamentally alter or even eliminate inefficient business processes, replacing them with new ways of doing business. It therefore becomes imperative to continuously update business process and data flow maps: dated, inaccurate, or inconsistent details can derail the internal operations of any corporation.

Application architecture

Application architecture articulates the purpose of each application: business (HR, finance), functional (analytics), and technical (database, middleware). This makes it easy to maintain an accurate inventory of applications, expenses, ownership, and contract renewal dates, and it enables business leaders to make informed calls on the acquisition, deployment, and use of niche applications.

Data security and compliance

In most organizations, it is the responsibility of CIOs and CISOs to understand the regulations and industry standards that govern routine operations. Every application must pass the test of each applicable compliance policy, so it is essential to categorize applications by the regulations and standards they must comply with. A minimal sketch of such an inventory follows.
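
The record structure below is one hypothetical way to capture the application inventory and compliance categorization described above; the field names and tags are illustrative, not a standard:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Tuple

class Layer(Enum):
    BUSINESS = "business"      # e.g., HR, finance
    FUNCTIONAL = "functional"  # e.g., analytics
    TECHNICAL = "technical"    # e.g., database, middleware

@dataclass
class Application:
    name: str
    layer: Layer
    owner: str                     # accountable business or IT owner
    annual_cost_usd: float
    contract_renewal: date
    compliance_tags: Tuple[str, ...] = ()  # e.g., ("GDPR", "SOX")

portfolio = [
    Application("PayrollPro", Layer.BUSINESS, "HR Ops", 120_000,
                date(2021, 6, 30), ("SOX",)),
]

# Surface upcoming renewals so contracts never lapse unnoticed.
print([a.name for a in portfolio if a.contract_renewal < date(2021, 9, 1)])

# Group applications by the regulations they must satisfy.
print([a.name for a in portfolio if "SOX" in a.compliance_tags])
```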

Do you want to understand the business value of the integration between SAP SuccessFactors and Qualtrics Solutions? Save the date for the webinar on December 12th: https://url.sap/ey68kp

Digitalist Magazine
