
Tag Archives: model

Google trained a trillion-parameter AI language model

January 12, 2021   Big Data

Parameters are the key to machine learning algorithms. They’re the part of the model that’s learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. For example, OpenAI’s GPT-3 — one of the largest language models ever trained, at 175 billion parameters — can make primitive analogies, generate recipes, and even complete basic code.
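
To make “parameters” concrete, the toy calculation below counts the weights in a single dense layer; the layer sizes are arbitrary and chosen only to show how quickly counts grow with width and depth.

# Toy illustration: counting the parameters of one dense (fully connected) layer.
# A layer mapping 512 inputs to 512 outputs has a 512 x 512 weight matrix plus
# 512 biases; large language models stack many thousands of such blocks.
d_in, d_out = 512, 512
params_per_layer = d_in * d_out + d_out
print(params_per_layer)                      # 262,656 parameters in this one layer
print(175_000_000_000 // params_per_layer)   # roughly 666,000 such layers to reach GPT-3 scale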

In what might be one of the most comprehensive tests of this correlation to date, Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters. They say their 1.6-trillion-parameter model, which appears to be the largest of its kind to date, achieved up to a 4x speedup over the previously largest Google-developed language model (T5-XXL).

As the researchers note in a paper detailing their work, large-scale training is an effective path toward powerful models. Simple architectures, backed by large datasets and parameter counts, surpass far more complicated algorithms. But effective, large-scale training is extremely computationally intensive. That’s why the researchers pursued what they call the Switch Transformer, a “sparsely activated” technique that uses only a subset of a model’s weights, or the parameters that transform input data within the model.

The Switch Transformer builds on mixture of experts, an AI model paradigm first proposed in the early ’90s. The rough concept is to keep multiple experts, or models specialized in different tasks, inside a larger model and have a “gating network” choose which experts to consult for any given data.
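
As a rough illustration of the routing idea (a generic top-1 mixture-of-experts step in NumPy, not Google’s implementation), the sketch below scores each token with a gating network and sends it to a single expert:

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 tokens, hidden size 8, 3 experts.
tokens = rng.normal(size=(4, 8))
gate_weights = rng.normal(size=(8, 3))                   # the "gating network"
experts = [rng.normal(size=(8, 8)) for _ in range(3)]    # one weight matrix per expert

# Score every token against every expert, then route each token to its top-1
# expert (Switch-style routing uses one expert per token; the classic
# mixture-of-experts formulation may blend several).
scores = tokens @ gate_weights
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
chosen = probs.argmax(axis=1)

outputs = np.stack([
    probs[i, chosen[i]] * (tokens[i] @ experts[chosen[i]])
    for i in range(tokens.shape[0])
])
print(chosen)          # which expert handled each token
print(outputs.shape)   # (4, 8): same shape as the input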

The novelty of the Switch Transformer is that it efficiently leverages hardware designed for dense matrix multiplications — mathematical operations widely used in language models — such as GPUs and Google’s tensor processing units (TPUs). In the researchers’ distributed training setup, the models split unique weights across different devices, so the total number of weights grew with the number of devices while the memory and computational footprint on each device remained manageable.

In an experiment, the researchers pretrained several different Switch Transformer models using 32 TPU cores on the Colossal Clean Crawled Corpus, a 750GB dataset of text scraped from Reddit, Wikipedia, and other web sources. They tasked the models with predicting missing words in passages where 15% of the words had been masked out, as well as with other challenges, like retrieving text to answer a list of increasingly difficult questions.
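
For context, the masking objective works roughly as follows; this is a generic 15% token-masking sketch, not the exact preprocessing used for the corpus:

import random

# Generic masked-word objective: hide roughly 15% of the tokens and ask the
# model to predict them. Illustrative only.
def mask_tokens(tokens, mask_rate=0.15, mask_token="<mask>"):
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            targets[i] = tok          # the model must recover this token
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets

sentence = "the researchers pretrained several switch transformer models".split()
print(mask_tokens(sentence))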

The researchers claim their 1.6-trillion-parameter model with 2,048 experts (Switch-C) exhibited “no training instability at all,” in contrast to a smaller model (Switch-XXL) containing 395 billion parameters and 64 experts. However, on one benchmark — the Stanford Question Answering Dataset (SQuAD) — Switch-C scored lower (87.7) than Switch-XXL (89.6), which the researchers attribute to the opaque relationship between fine-tuning quality, computational requirements, and the number of parameters.

Even so, the Switch Transformer led to gains in a number of downstream tasks. For example, it enabled a more than 7x pretraining speedup while using the same amount of computational resources, according to the researchers, who also showed that the large sparse models could be used to create smaller, dense models fine-tuned on tasks while retaining 30% of the larger model’s quality gains. In one test where a Switch Transformer model was trained to translate between more than 100 different languages, the researchers observed “a universal improvement” across 101 languages, with 91% of the languages benefiting from a more than 4x speedup compared with a baseline model.

“Though this work has focused on extremely large models, we also find that models with as few as two experts improve performance while easily fitting within memory constraints of commonly available GPUs or TPUs,” the researchers wrote in the paper. “We cannot fully preserve the model quality, but compression rates of 10 to 100 times are achievable by distilling our sparse models into dense models while achieving ~30% of the quality gain of the expert model.”
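
For background, distilling a large sparse model into a small dense one typically means training the dense “student” to match the sparse “teacher’s” softened output distribution. The sketch below shows the standard distillation loss; it is a generic recipe, not the paper’s exact setup.

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as is conventional.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T * T) * kl.mean()

teacher = np.random.randn(4, 10)   # toy vocabulary of 10
student = np.random.randn(4, 10)
print(distillation_loss(student, teacher))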

In future work, the researchers plan to apply the Switch Transformer to “new and across different modalities,” including image and text. They believe that model sparsity can confer advantages in a range of different media, as well as multimodal models.

Unfortunately, the researchers’ work didn’t take into account the impact of these large language models in the real world. Models often amplify the biases encoded in their public training data; a portion of that data is not uncommonly sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.” Other studies, like one published in April by researchers from Intel, MIT, and Canadian AI initiative CIFAR, have found high levels of stereotypical bias in some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

It’s unclear whether Google’s policies on published machine learning research might have played a role in this. Reuters reported late last year that researchers at the company are now required to consult with legal, policy, and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender, or political affiliation. And in early December, Google fired AI ethicist Timnit Gebru, reportedly in part over a research paper on large language models that discussed risks, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.

Big Data – VentureBeat


Uber researchers propose AI language model that emphasizes positive and polite responses

January 5, 2021   Big Data

AI-powered assistants like Siri, Cortana, Alexa, and Google Assistant are pervasive. But for these assistants to engage users and help them to achieve their goals, they need to exhibit appropriate social behavior and provide informative replies. Studies show that users respond better to social language in the sense that they’re more responsive and likelier to complete tasks. Inspired by this, researchers affiliated with Uber and Carnegie Mellon developed a machine learning model that injects social language into an assistant’s responses while preserving their integrity.

The researchers focused on the customer service domain, specifically a use case where customer service personnel help drivers sign up with a ride-sharing provider like Uber or Lyft. They first conducted a study to suss out the relationship between customer service representatives’ use of friendly language and drivers’ responsiveness and completion of their first ride-sharing trip. Then they developed a machine learning model for an assistant that includes a social language understanding component and a language generation component.

In their study, the researchers found that the “politeness level” of customer service representatives’ messages correlated with driver responsiveness and completion of their first trip. Building on this, they trained their model on a dataset of over 233,000 messages from drivers and corresponding responses from customer service representatives. The responses had labels indicating how generally polite and positive they were, chiefly as judged by human evaluators.
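
One common way to give a generator control over style is to prepend a control label to the input during training; the sketch below illustrates that general idea with invented example strings, and is not necessarily the conditioning scheme the Uber and Carnegie Mellon researchers used.

# Format a (driver message, agent reply) pair as a training example whose
# input carries the human-judged politeness and positivity labels as control
# tokens. The field names and strings here are illustrative.
def make_training_example(driver_message, agent_reply, politeness, positivity):
    control = f"<polite={politeness}> <positive={positivity}>"
    return {"input": f"{control} driver: {driver_message}", "target": agent_reply}

example = make_training_example(
    "My documents were rejected, what do I do?",
    "Thanks for reaching out! Could you re-upload a clearer photo of your license?",
    politeness="high",
    positivity="high",
)
print(example["input"])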

Post-training, the researchers used automated and human-driven techniques to evaluate the politeness and positivity of their model’s messages. They found it could vary the politeness of its responses while preserving the meaning of its messages, but that it was less successful in maintaining overall positivity. They attribute this to a potential mismatch between what they thought they were measuring and manipulating and what they actually measured and manipulated.

“A common explanation for the negative association of positivity with driver responsiveness in … and the lack of an effect of positivity enhancement on generated agent responses … might be a discrepancy between the concept of language positivity and its operationalization as positive sentiment,” the researchers wrote in a paper detailing their work. “[Despite this, we believe] the customer support services can be improved by utilizing the model to provide suggested replies to customer service representatives so that they can (1) respond quicker and (2) adhere to the best practices (e.g. using more polite and positive language) while still achieving the goal that the drivers and the ride-sharing providers share, i.e., getting drivers on the road.”

The work comes as Gartner predicts that by 2020, only 10% of customer-company interactions will be conducted via voice. According to the 2016 Aspect Consumer Experience Index, 71% of consumers want the ability to solve most customer service issues on their own, up 7 points from the 2015 index. And according to the same Aspect report, 44% said they would prefer to use a chatbot, rather than a human, for all customer service interactions.

Big Data – VentureBeat


#PowerBI – Change the data source in your composite model with direct query to AS/ Power BI Dataset

December 29, 2020   Self-Service BI

I have been playing around with the awesome new (preview) feature in the December Power BI Desktop release where we can use DirectQuery for Power BI datasets and Azure Analysis Services (link to blog post).

In my case I combined data from a Power BI dataset, Azure Analysis Services, and a local Excel sheet. The DirectQuery sources were in a test environment.

I then wanted to try this on the actual production datasets and needed to change the data sources. I was a bit lost on how to do that, but luckily I found a way that I want to share with you.

Change the source

First you click on Data source settings under Transform data

This will open the dialog for Data source settings and show you the list of Data sources in the current file.

Now you can either right click the data source you want to change

Or click the button “Change Source…”

Depending on your data source different dialogs will appear

This one for my Azure Analysis Services Connection

[Screenshot: change source dialog for Azure Analysis Services]

And this one for Power BI Dataset

[Screenshot: change source dialog for a Power BI dataset]

And this one for the Local Excel workbook

[Screenshot: change source dialog for a local Excel workbook]

Hope this can help you too.

Happy new year to you all.

Erik Svensen – Blog about Power BI, Power Apps, Power Query


How to fit 3 data sets to a model of 3 differential equations?

October 15, 2020   BI News and Info
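
The body of this question did not survive extraction; only the title remains. As a general illustration of the task in the title, here is a minimal sketch of fitting three coupled differential equations to three observed series using Python and SciPy (the original question targets Mathematica, and the model, parameters, and data below are invented for illustration):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Hypothetical 3-equation model: dx/dt = -a*x, dy/dt = a*x - b*y, dz/dt = b*y
def rhs(t, state, a, b):
    x, y, z = state
    return [-a * x, a * x - b * y, b * y]

def residuals(params, t_obs, data):
    a, b, x0, y0, z0 = params
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [x0, y0, z0],
                    t_eval=t_obs, args=(a, b))
    # Stack the residuals of all three observed series into one vector
    return (sol.y - data).ravel()

# Synthetic "observations" standing in for the three data sets
t_obs = np.linspace(0, 10, 25)
true = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0], t_eval=t_obs, args=(0.8, 0.3)).y
data = true + 0.02 * np.random.randn(*true.shape)

fit = least_squares(residuals, x0=[0.5, 0.5, 1.0, 0.0, 0.0], args=(t_obs, data))
print("estimated a, b:", fit.x[:2])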


Recent Questions – Mathematica Stack Exchange


Connect your #PowerBI desktop model to #Tableau Desktop via External Tools in PowerBI

August 22, 2020   Self-Service BI

I recently created an external tool for Power BI Desktop that connects your Power BI desktop model to Excel (https://eriksvensen.wordpress.com/2020/07/27/powerbi-external-tool-to-connect-excel-to-the-current-pbix-file/), and then I thought: could we also use an external tool that opens the desktop model in Tableau Desktop?

So I downloaded a trial version of Tableau Desktop to see what was possible.

And sure enough, Tableau can connect to Microsoft Analysis Services and therefore also to the localhost port that Power BI Desktop uses.

We can also save a data source as a local data source file in Tableau

Which gives us a file with a tds extension (Tableau Data Source)

When opening the file in Notepad we can see the connection string and some extra data about metadata-records.

It turns out that the tds file does not need all the metadata-record information, so I cleaned the tds file down to a minimal version (the same XML that the PowerShell script below writes out).

Opening this file from the explorer will open a new Tableau Desktop file with the connection to the specified model/database/server.

The external tool

Knowing this I could create an external tool the same way as my Excel connector.

First, create a PowerShell script.

Note: in order to run a PowerShell script on your PC you need to set the execution policy – https://go.microsoft.com/fwlink/?linkid=135170

The PowerShell script

Function ET-TableauDesktopODCConnection
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $false)]
        [string]
        $port,
        [Parameter(Mandatory = $false)]
        [string]
        $database,
        [Parameter(Mandatory = $false)]
        [string]
        $path
    )

    $tdsXml = "<?xml version='1.0' encoding='utf-8' ?>
<datasource formatted-name='LocalPowerBIDesktopFile' inline='true' source-platform='win' version='18.1' xmlns:user='http://www.tableausoftware.com/xml/user'>
  <document-format-change-manifest>
    <_.fcp.SchemaViewerObjectModel.true...SchemaViewerObjectModel />
  </document-format-change-manifest>
  <connection authentication='sspi' class='msolap' convert-to-extract-prompted='no' dbname='$database' filename='' server='$port' tablename='Model'>
</connection>
</datasource>"

    # The location of the tds file to be created and opened
    $tdsFile = "$path\tableauconnector.tds"

    $tdsXml | Out-File $tdsFile -Force

    Invoke-Item $tdsFile
}

ET-TableauDesktopODCConnection -port $args[0] -database $args[1] -path "C:\temp"

The script simply creates a tableauconnector.tds file and stores it in C:\temp – the XML content in the file is filled dynamically from $args[0] and $args[1] when the external tool is called from Power BI Desktop.

Save the script in C:\temp and call it ConnectToTableau.ps1.

The OpenInTableau.pbitool.json file

Next step was to create a pbitool.json file and store it in C:\Program Files (x86)\Common Files\Microsoft Shared\Power BI Desktop\External Tools

{
  "version": "1.0",
  "name": "Open In Tableau",
  "description": "Open connection to desktop model in Tableau ",
  "path": "C:/Windows/System32/WindowsPowerShell/v1.0/powershell.exe",
  "arguments": "C:/temp/ConnectToTableau.ps1 \"%server%\" \"%database%\"",
  "iconData": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAJAAAACQCAYAAADnRuK4AAAABmJLR0QA/wD/AP+gvaeTAAADRklEQVR4nO3dv27TUBiH4WPEitSRS+iCurO0GzdRiS5sXRhAXZhYEAxd2LoUiZtgaxb2iqWXwFiJCzgsqPRPrMb5Jc1x/TxbqgSi5O2xE3+uSwGAUeo2/QRac3R8cla6bvfqB7XOPr19s7e5Z9S2J5t+AoybgIgIiIiAiAiIiICICIiIgIgIiIiAiEziUMbR8cnZovetXbfTlbJ1dbuUy67W80UfP7XDHk83/QQexPVjW/fd9e7trSGPnxqbMCICItLEJqyeljrv593BivbRap0tfNdwH2hVDj58mfuanH5819R+axMBrduQHdvb80BdredT2zEewiaMiICICIiIgIgIiIiAiAiIiICICIiIgIhM4lDGEA5bDGMFIiIgIgIiIiAiAiISTbf1TRK2ZmWTjQvomyRszaomG61ARAREREBEBEREQESaOMdo7eeFjdBYzguzAhEREBHjHP/8fv/i3i8An3/+1dTmowVWICICIiIgIgIiIiAiAiIiICICIiIgIgIiIiAiSx8Lc3Xjcdk/nJ2VWv+/X103+/51dy/9d61ARAREpIlxjilPHvZpbfKwjxWIiICICIiIgIgIiEgTn8KGWmQAfiz/79gH9a1ARG7UP5arG29qBVqHZAXaP5ydDbj7Tqn16v0qXXdZSln4/eo77HFzE+bqxuNy/djW8MdulVLi98smjIiAiNzchI3w6saT1nULv18l3AfqfQrLPnCT80B2ooczD0STRvlF4jp+a/11juVYgYgIiIiAiAiIiICINPEp7Of29txPQC8vLib7qefZq29zX5M/P1439ZpYgYgIiMjSmzCnMY/LKg5bzGMFIiIgIgIiIiAiAiIiICICIiIgIgIiIiAiAiLSxDhHCwzML8cKRERARKJlu2+SsDUPOdnYN0nYmlVNNlqBiAiIiICICIiIgIg08eWZ88Lucl4YkyAgIgIiIiAiAiJinOOWdf0108fKCkREQEQERERARAREREBEBEREQEQERERARCZxKGPw1Y1v3R7y+Kkd9mgioLVPHjZwdeOhWps87GMTRkRARJrYhK1dA1c3fqxGsZ19SOaBhrEJIyIgIgIiIiAiAiIiICICIiIgIgIiIiAAAAAYjb8VJdQbiRXyOAAAAABJRU5ErkJggg=="
}

Test it

Now restart Power BI Desktop and the external tool should be visible in the ribbon.

Then open a pbix file with a model and hit the button.

A PowerShell window will briefly appear, then Tableau opens the tds file, and we have a new Tableau workbook with a connection to the active Power BI Desktop data model.

And we can start to build visualizations that are not yet supported in Power BI.

How can you try it

You can download the files needed from my github repository – link

Feedback

Let me know what you think and if possible share some of the viz that you make.

Erik Svensen – Blog about Power BI, Power Apps, Power Query


Monitoring the Power Platform: Model Driven Apps – Monitor Tool Part 1: Messages and Scenarios

August 16, 2020   Microsoft Dynamics CRM

Summary

 

Monitoring Dynamics 365 or Model Driven Applications is not a new concept. Understanding where services are failing, where users are running into errors, and where forms and business processes could be tuned for performance is a key driver for most, if not all, businesses, from small companies to enterprises. Luckily, the Dynamics 365 platform provides many tools to help audit and monitor business and operational events.

This article will cover user events and where they are sourced from. From there, we will dive into the Monitor tool and look at individual messages within. We will work with a few sample scenarios and see what we can gain from markers and messages within the Monitor tool.

Collecting User Events

 

Before we discuss techniques for capturing events in Dynamics 365, let’s examine some meaningful events. From the client perspective this may include performance counters and metrics, user click events, and navigation. Other data points include geolocations and user preferences. Luckily, client events are easier to capture, and we have many tools readily available, ranging from browser-based (Developer Tools) to standalone applications (Fiddler). Some features of the platform allow for collecting markers, while other events of interest will have to be supplemented with custom delivery mechanisms.

On the server side, external integrations and the execution context contain identifiers and response codes that may require additional validation. For sandboxed plug-ins and custom workflow activities, we are somewhat limited in what tools we can leverage.

Upcoming articles will detail how to collect and push events of interest to a central area for analytics.

NOTE: The rest of this article will cover collecting and analyzing messages focused on the client. That said, server side events play a major role and can impact the client experience. I’ll address server side events in another article pertaining to Azure Application Insights and Model Driven Apps. In the meantime, check out this GitHub repo that includes experimental Plug-In code.

The Monitor Tool

 

The Monitor tool can be launched from the Power Apps Maker Portal. Once launched, the Play Model Driven App button can be pressed to begin a session attached to the tool.

The Monitor tool can also be started by adding “&monitor=true” to the URL of your Model Driven Application.

After consenting to start a session, the Monitor tool will rapidly light up with various messages. Similar to the article “Canvas Driven Apps – The Monitoring Tool”, each row can be drilled into further for investigation.

Jesse Parsons’ article on the Monitor tool, titled “Monitor now supports model-driven apps”, provides a thorough deep dive including sample scenarios.

I highly suggest reviewing and keeping it close by for reference.

Key Performance Indicators

 

Key Performance Indicators represent major lifecycle events within a particular user action, such as loading a form. Consider the image below.

[Screenshot: KPI messages in the Monitor tool for a form load]

By sorting the records on the “KPI” category, these events begin to emerge. The image below shows the major lifecycle events or KPIs for a standard form load within Dynamics 365. Beginning with PageNavigationStart and ending with RenderedEditReady, these events represent the completion of a form load.

[Screenshot: KPI events for a standard form load, from PageNavigationStart to RenderedEditReady]

Scenario: Determining a Form’s Load Impact

 

Consider the scenario of a user logging into the system and opening a lead form for the first time. When performing this action, the form and data have not had a chance to be cached or stored locally, which results in all items needing to be downloaded. This is sometimes referred to as a cold load. Reviewing the timeline event “FullLoad”, we can determine what type of load the form rendered as.

[Screenshot: FullLoad KPI for the initial cold load]

Now, once that is captured, the user opens the fly-out window to choose another lead record using the same form. Again using the “FullLoad” KPI timeline event, we can see the LoadType is now Two.

[Screenshot: FullLoad KPI showing LoadType Two]

Finally, imagine the user needs to navigate back to the original lead record opened on the same form. We can see the LoadType is now Three. Comparing this to the LoadType Zero image above, the entityId is the same.

Here is a sample scenario in full showing the differences in loading new and existing records and how changing a form can impact network requests to the server.

[Animation: full scenario showing the different LoadTypes and the resulting metadata network requests]

Attribution

 

On certain Key Performance Indicators a property called “Attribution” is included, which represents specific events within a user action. This includes the commands or actions executed within a KPI. For example, during a form load lifecycle, controls are rendered, ribbon rules are evaluated, onload event handlers are executed, and so on. The Attribution property will specify and group, in chronological order, which events happened. An added bonus is that the duration is also shown for each event, along with whether any synchronous calls were made. Consider the image below.

Scenario: Locating dynamically added form changes and events

 

[Screenshot: Attribution property for the FullLoad KPI]

The image above shows the Attribution property for the FullLoad KPI. In this image we see three main groups of events: CustomControl (form controls), RuleEvaluation (ribbon rules), and onload (form event handlers). What’s interesting here is that each of these is grouped, and the solution each is part of and even the solution layering are shown. The RuleEvaluation and onload groups above both show an unmanaged or “Active” layer that contained customizations.

Compare that image with the one below.

[Screenshot: Attribution for a second form load, for comparison]

A scenario came up during an investigation into an increased duration of a form save. To begin, we went through the user events as normal with the Monitor tool running.

[Screenshot: Monitor messages captured during the form save]

Upon review, you can see additional events occurred: tabstatechanged and onsave. The onsave was expected due to the registered event handler on the form. However, the tabstatechanged was not; it was found to be due to a recent code addition that triggered the setDisplayState of the tab.

// Expand the tab once the loop reaches the one we are looking for
if (control.getName() == tabName) {
	control.setDisplayState("expanded");
}

By reviewing the Attribution property we were able to identify what caused the increase of 5 seconds.

Zone Activity and Other Data Points

 

Within Key Performance Indicators are other data points that prove useful when debugging performance-related issues. Latency and Throughput are shown, as well as timings for network-related calls and custom script executions. Within the ZoneActivity property we see events grouped by web service calls, browser resource timings, and performance observer events. The CustomScriptTime shows the duration of all of the registered event handlers that fired during this particular Key Performance Indicator.

Performance Messages

 

Performance categorized messages detail potential performance issues that a user may run into. At the time of this writing, I’ve only uncovered synchronous XHR calls being flagged, but I anticipate growth here.

Scenario: Locating and Evaluating Synchronous Resource Timings

 

Requests from a Model Driven Application represent outgoing calls from the client to another source. These calls can occur either synchronously, meaning the thread executing the call waits for the response, or asynchronously, meaning the thread continues and listens for the response. It’s preferable to eliminate all synchronous calls to reduce any potential disruption to a user’s experience within the application.

The Monitor tool helps by identifying these requests and calling them out. These call outs can be found in the “Performance” category as shown below.

[Screenshot: synchronous XHR call-outs in the Performance category]

Examining the performance entry, we can see the “dataSource” property shows the XHR URL. However, it doesn’t show the source of the call which is needed to better understand how and why the request was made. For that, we need to find and examine KPIs such as FullLoad or SaveForm.

Here is a gif showing how to use the Monitor tool, coupled with the Browser Developer Tools to locate the line of code that needs to be updated.

[Animation: using the Monitor tool with the browser developer tools to locate the offending line of code]

Using these messages, along with outputs from Power Apps Checker, we can begin to uncover gaps in code analysis. In the next article, I’ll cover in depth an approach to help identify and remediate these gaps.

Next Steps

 

This article describes where user and platform events may originate from and how they can be monitored. Gaining insights into the SaaS version of Dynamics 365 allows us to open up the black box and find answers to some of our questions. Think about how the Monitor tool can be used to find out where API calls may have started and, coupled with other articles in this series, how we can correlate events to provide a true end-to-end monitoring solution. The next article in this series will cover how we can extract and analyze the sessions further.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index

Dynamics 365 Customer Engagement in the Field


Monitoring the Power Platform: Model Driven Apps – Monitor Tool Part 2: Session Consumption and Analytics

August 15, 2020   Microsoft Dynamics CRM

Summary

 

Monitoring Dynamics 365 or Model Driven Applications is not a new concept. Understanding where services are failing, where users are running into errors, and where forms and business processes could be tuned for performance is a key driver for most, if not all, businesses, from small companies to enterprises. Luckily, the Dynamics 365 platform provides many tools to help audit and monitor business and operational events.

This article will cover collecting, querying, and analyzing user interface events, specifically from the recently announced Monitor tool for Model Driven Apps. The previous article covered message data points and how to interpret them. This time around, we will have a little fun exploring ways to utilize the output sessions. We will discuss how to build robust work items in Azure DevOps with Monitor output. We’ll look at consuming and storing outputs for visualizations and analytics with Kusto queries. Finally, samples will be provided to parse and load session messages into Azure Application Insights and Azure Log Analytics.

The Monitor Tool

 

The Monitor Tool allows users and team members to collect messages and work together in debugging sessions. To begin, the Monitor Tool can be launched from the Power Apps Maker Portal. Once launched, the Play Model Driven App button can be pressed to begin a session attached to the tool.

The Monitor Tool can also be started by adding “&monitor=true” to the URL of your Model Driven Application.

After consenting to start a session, the Monitor tool will rapidly light up with various messages. Similar to the “Canvas Driven Apps – The Monitoring Tool” article, each row can be drilled into further for investigation.

Jesse Parsons’ article on the Monitor tool, titled “Monitor now supports model-driven apps”, provides a thorough deep dive including sample scenarios. I highly suggest reviewing it and keeping it close by for reference.

Thoughts on Canvas Apps

 

The Monitor tool works with Power Apps Canvas Driven Apps as shown in the article “Canvas Driven Apps – The Monitoring Tool“. While this article is focused on Model Driven Apps, remember these techniques can also be utilized to serve Canvas Apps as well.

Consuming Monitor Sessions

 

Each time the Monitor tool is opened, a new session is created. Within each session are events that describe actions taken within the session, as well as other helpful messages. Storing these sessions allows support teams to better understand errors and issues that arise during testing and production workloads. The previous article, “Monitor Tool Part 1: Messages and Scenarios”, covers scenarios that support users can use to better understand the data points delivered in the Monitor tool.

The Monitor tool can also help analysts who want to learn more about the platform, for instance user tendencies such as how long users spent on a page and which controls they interacted with in Canvas Driven Apps. For testing, the tool can help with non-functional test strategies like A/B testing. Analyzing performance messages can point to potential code coverage gaps or advise on user experience impact. Network calls can be scrutinized to determine if queries can be optimized or web resources minified. The Monitor tool, in my opinion, really can open up a new view on how the platform is consumed and how users interact with it.

Attaching to Azure DevOps Work Items

 

The Monitor Tool download artifacts work nicely with Azure DevOps Work Items. They can be attached to Bugs, Tasks and even Test Cases when performing exploratory or other types of tests.

When working with test cases within Azure DevOps, analysts craft work items with use cases and expected outcomes to deliver to makers and developers. Specifically with Test Cases, quality analysts can leverage the Monitor tool in conjunction with the Test and Feedback browser extension. This allows for robust test cases complete with steps, screenshots, client information, and the Monitor tool output attached. Consider the gif below showing an example of using both the browser extension and the Monitor tool output.

[Animation: attaching Monitor tool output to an Azure DevOps bug]

In that example, we see an analyst has found a performance issue with a Dynamics 365 form. The analyst logged a new bug, included annotations and screenshots and the Monitor tool output. A developer can be assigned, pick this up and begin working on the bug. By having the Monitor tool output the developer can now see each call made and review the Attributions within the respective KPI. For more information, refer to the Attribution section within Part 1.

Storing Monitoring Sessions

 

The Monitor tool output comes in two flavors: CSV and JSON. Both make for lightweight storage and are fairly easy to parse, as shown later. These files can be attached to emails or stored in a shared location like a network drive.

Power BI and CSV Files

 

The csv files downloaded from the Monitor tool can be added to Azure Blob Storage or stored locally and displayed in a Power BI dashboard. This allows analysts and support teams to drill down into sessions to gain further insights. The csv files work both locally with Power BI Desktop and online. The image below shows a sample taken from a Canvas App Monitor session. Additional information and samples can be found in the “Canvas Driven Apps – The Monitoring Tool” article.

[Screenshot: Power BI report built from a Canvas App Monitor session]

Experimenting with Extraction for Analytics

 

Storing outputs in Azure Blob Storage

 

While writing this article and the one on the Monitor tool for Canvas Apps, I began to collect outputs from both and store them within Azure Blob Storage containers. There are multiple reasons why I chose to utilize Azure Blob Storage, mainly cost but also interoperability with other services such as Power BI, Azure Data Lake, and Azure Event Grid.

Azure Blob Storage also integrates very well with Azure Logic Apps, Azure Functions, and Power Automate flows. Each of these includes a triggering mechanism on a new blob added to a container, working as a sort of drop folder. This made the choice to use Azure Blob Storage easy for me, but I will also point out that Power Automate flows can also be triggered from OneDrive or SharePoint. This allows makers to stay within the Microsoft 365 ecosphere and avoid spinning up multiple Azure services if desired.

Extracting to Log Stores

 

Extracting the messages within the Monitor tool to a log store allows analysts and support teams to parse and query the sessions. How we want to store these messages will determine which services we leverage.

Choosing Azure Application Insights

 

If we want distributed transaction tracing I’d suggest Azure Application Insights. Application Insights will allow for pushing messages to specialized tables that feed dashboards and features native to the service such as End to End Transactions and Exception parsing.

Network messages can be stored in the requests or dependencies tables, which are designed, along with page views, to visualize a typical web application’s interactions. Fetch network messages, representing calls to an API, fit nicely into the requests table as shown below:

[Screenshot: fetch messages stored in the requests table]

Dependencies, on the other hand, can represent dependent web resources.

[Screenshot: web resource calls stored in the dependencies table]

Using Azure Function to serve data

 

Azure Application Insights works well with microservices built using Azure Functions. A benefit of Azure Functions is the ability to have a function fire on create of a blob within a container. For more information and a helpful quick start to working with Blob triggered Azure Functions, check out this reference. Below is the method signature of a sample Azure Function:

[FunctionName("BlobTriggeredMonitorToolToApplicationInsights")]
public void Run([BlobTrigger("powerapps-monitortool-outputs/{name}", Connection = "")]Stream myBlob, string name, ILogger log)

In the sample provided you’ll see that the function takes the JSON payload, parses it and determines how to add to Azure Application Insights. Depending on the messageCategory property, it will funnel messages to custom events, requests or dependencies.
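
For readers who prefer a quick look at that funnelling logic outside of C#, here is a minimal Python sketch that groups Monitor messages by their messageCategory property; the overall shape of the exported JSON is an assumption here, and only the messageCategory field is taken from the description above.

import json
from collections import defaultdict

def bucket_messages(blob_text):
    # Assumed export shape: a JSON document containing a list of messages,
    # each carrying a "messageCategory" property.
    messages = json.loads(blob_text).get("messages", [])
    buckets = defaultdict(list)
    for msg in messages:
        buckets[msg.get("messageCategory", "Unknown")].append(msg)
    # A real extractor would then route network buckets to the requests and
    # dependencies tables and everything else to customEvents.
    return buckets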

As always, review the Telemetry Client for ideas and techniques to enrich messages sent to Azure Application Insights. Also, if desired, review how to Sample messages to reduce noise and keep cost down.

Choosing Azure Log Analytics

 

Azure Log Analytics allows for custom tables that provide the greatest flexibility. The Data Collector API has a native connector for Power Automate that allows makers to quickly deliver Monitor messages with a no- or low-code solution. Power Automate and Azure Logic Apps both offer triggers on creation of a blob, providing flexibility in which service to choose.

Using Power Automate to serve data

 

To work with Power Automate, begin by creating a new Power Automate flow. Set the trigger type to use the Azure Blob Storage trigger action “When a blob is added or modified”. If a connection hasn’t been established, create a connection. Once created, locate the blob container to monitor. This container will be our drop folder.

The trigger is only designed to tell us a new blob is available, so the next step is to get the blob content. Using the FileLocator property we can now get the serialized messages and session information and deliver to Log Analytics.

Within Power Automate, search for the “Send Data” action. Set the JSON Request body field to be the File Content value from the “Get blob content” action.

The advantage here is that with three actions and no code written, I am able to listen to a drop folder for output files and send them to Azure Log Analytics. The native JSON serialization option from the Monitor tool really serves us well here, allowing a seamless insertion into our custom table.

Ideally we would expand the Power Automate flow to parse the JSON and iterate through messages to allow for individual entries into the table.

Just remember the content may be encoded as “octet-stream” and will need to be converted.

Sample Kusto Queries

 

Below are sample Kusto queries designed for Azure Application Insights.

General Messages

//Review Performance Messages
customEvents 
| extend cd=parse_json(customDimensions)
| where cd.messageCategory == "Performance"
| project session_Id, name, cd.dataSource

Browser Requests

//Request Method, ResultCode, Duration and Sync
requests 
| extend cd=parse_json(customDimensions)
| extend data=parse_json(tostring(cd.data))
| project session_Id, name, data.method, resultCode, data.name, data.duration, data.sync, cd.fileName

//Request Method, ResultCode, Duration and Resource Timings
requests 
| extend cd=parse_json(customDimensions)
| extend data=parse_json(tostring(cd.data))
| project session_Id, name, data.method, resultCode, data.name, data.duration,
data.startTime, 
data.fetchStart,
data.domainLookupStart,
data.connectStart,
data.requestStart,
data.responseStart,
data.responseEnd

Review the documentation on Resource Timings located here to better understand what these markers are derived from.
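
As a quick reference, the markers projected above combine into durations using standard Resource Timing arithmetic; the sketch below shows the common deltas, with made-up millisecond values and only the markers available in the query.

# Approximate durations from the Resource Timing markers projected above
# (values in milliseconds; the sample numbers are made up).
timing = {
    "startTime": 0.0, "fetchStart": 1.2, "domainLookupStart": 1.5,
    "connectStart": 3.0, "requestStart": 7.5, "responseStart": 42.0,
    "responseEnd": 55.3,
}

dns_ms = timing["connectStart"] - timing["domainLookupStart"]   # DNS lookup (approx.)
connect_ms = timing["requestStart"] - timing["connectStart"]    # TCP + TLS (approx.)
ttfb_ms = timing["responseStart"] - timing["requestStart"]      # time to first byte
download_ms = timing["responseEnd"] - timing["responseStart"]   # response download
total_ms = timing["responseEnd"] - timing["startTime"]          # end-to-end
print(dns_ms, connect_ms, ttfb_ms, download_ms, total_ms)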

Key Performance Indicators

pageViews 
| extend cd=parse_json(customDimensions)
| extend cm=parse_json(customMeasurements)
| extend data=parse_json(tostring(cd.data))
| extend attribution=parse_json(tostring(data.Attribution))
| where name=="FullLoad"
| order by tostring(data.FirstInteractionTime), toint(cm.duration)
| project session_Id, name, data.FirstInteractionTime,cm.duration, attribution

Sample Code

 

Azure Function and Azure Application Insights – Monitor Tool Extractor

Power Automate and Azure Log Analytics

Optional Azure Application Insights Custom Connector

Next Steps

 

In this article we have covered how to work with the Monitor tool output files. Viewing them in Power BI dashboards, attaching them to DevOps work items, and storing them in Azure-backed log stores are all possibilities. Sample code and Kusto queries have also been provided to help you get started.

This article showcases use cases and strategies for working with the Monitor tool but really only represents the tip of the iceberg. Continue collecting, examining and churning the output for deep insight into user and platform trends.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index

Dynamics 365 Customer Engagement in the Field


Google releases Model Card Toolkit to promote AI model transparency

July 30, 2020   Big Data

Google today released the Model Card Toolkit, a toolset designed to facilitate AI model transparency reporting for developers, regulators, and downstream users. It’s based on Google’s Model Cards framework for reporting on model provenance, usage, and “ethics-informed” evaluation, which aims to provide an overview of a model’s suggested uses and limitations.

Google launched Model Cards publicly over the past year; the framework sprang from a Google AI whitepaper published in October 2018. Model Cards specify model architectures and provide insight into factors that help ensure optimal performance for given use cases. To date, Google has released Model Cards for open source models built on its MediaPipe platform as well as its commercial Cloud Vision API Face Detection and Object Detection services.

The Model Card Toolkit aims to make it easier for third parties to create Model Cards by compiling the necessary information and aiding in the creation of interfaces for different audiences. A JSON schema specifies the fields to include in a Model Card; using the model provenance data stored with ML Metadata (MLMD), the Model Card Toolkit automatically fills the JSON with information including data class distributions and performance statistics. It also provides a ModelCard data API to represent an instance of the JSON schema and visualize it as a Model Card.
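
To give a feel for the kind of structure such a schema captures, the sketch below assembles a model-card-style document by hand; the field names are simplified stand-ins rather than the toolkit’s actual schema, and the metrics are placeholders.

import json

# Hand-rolled, illustrative model card; the field names are simplified
# stand-ins for the toolkit's JSON schema and the numbers are placeholders.
model_card = {
    "model_details": {
        "name": "toy-face-detector",
        "version": "0.1.0",
        "owners": ["example-team"],
    },
    "considerations": {
        "intended_uses": ["Demonstrating model card structure"],
        "limitations": ["Underperforms on low-light images"],
        "ethical_considerations": ["Review per-slice performance before deployment"],
    },
    "quantitative_analysis": {
        "performance_metrics": [
            {"type": "accuracy", "value": 0.91, "slice": "overall"},
            {"type": "accuracy", "value": 0.84, "slice": "low_light"},
        ],
    },
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)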

[Image: An example of a Model Card. Image credit: Google]

Model Card creators can choose which metrics and graphs to display in the final Model Card, including stats that highlight areas where the model’s performance could deviate from its overall performance. Once the Model Card Toolkit has populated the Model Card with key metrics and graphs, developers can supplement this with information regarding the model’s limitations, intended usage, trade-offs, and ethical considerations otherwise unknown to model users. If a model underperforms for certain slices of data, the Model Cards’ limitations section offers a place to acknowledge that along with mitigation strategies to help address the issues.

“This type of information is critical in helping developers decide whether or not a model is suitable for their use case, and helps Model Card creators provide context so that their models are used appropriately,” wrote Google Research software engineers Huanming Fang and Hui Miao in a blog post. “Right now, we’re providing one UI template to visualize the Model Card, but you can create different templates in HTML should you want to visualize the information in other formats.”

The idea of Model Cards emerged following Microsoft’s work on “datasheets for datasets,” or datasheets intended to foster trust and accountability by documenting data sets’ creation, composition, intended uses, maintenance, and other properties. Two years ago, IBM proposed its own form of model documentation in voluntary factsheets called “Supplier’s Declaration of Conformity” (DoC), to be completed and published by companies developing and providing AI. Other attempts at an industry standard for documentation include Responsible AI Licenses (RAIL), a set of end-user and source code license agreements with clauses restricting the use, reproduction, and distribution of potentially harmful AI technology, and a framework called SECure that attempts to quantify the environmental and social impact of AI.

“Fairness, safety, reliability, explainability, robustness, accountability — we all agree that they are critical,” Aleksandra Mojsilovic, head of AI foundations at IBM Research and codirector of the AI Science for Social Good program, wrote in a 2018 blog post. “Yet, to achieve trust in AI, making progress on these issues will not be enough; it must be accompanied with the ability to measure and communicate the performance levels of a system on each of these dimensions.”

Big Data – VentureBeat


Researchers seek to advance predictive AI for engineers with CAD model data set

July 21, 2020   Big Data

Artificial intelligence appears poised to augment or replace human artists in some cases. Carnegie Mellon University researchers are training a robot to pick up painting techniques by watching humans, and last month MIT researchers introduced a generative model that predicts how humans paint landscape art by training AI with YouTube videos of people painting. Now a team from Princeton hopes to make industrial design more automated as well.

In recent days, researchers from Princeton University’s Intelligent Systems Lab and Columbia University introduced SketchGraphs, a data set of 15 million 2D computer-aided design (CAD) sketches and an open source data processing pipeline. AI trained using the data set could eventually assist humans in sketching CAD models.

CAD models can be anything from a single machine component to an entire building. They’re used by architects, engineers, and others creating prototypes in software like Autodesk’s AutoCAD or Dassault’s SolidWorks. The SketchGraphs data set was obtained from the public API of CAD software provider Onshape and includes sketches collected over the past 15 years.

Creators of the data set say it can enable the creation of AI models that give engineers more efficient design workflows or point out real-world constraints or structural issues in a design. Each sketch in the data set comes with a geometric constraint graph and knowledge of the line and shape sequence in which a sketch was made, enabling predictions of what an engineer might draw next. The researchers evaluated the SketchGraphs data set using construction CAD designs to create a generative model and predict constraints when shown certain lines and shapes.
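
To make the geometric constraint graph idea concrete, here is a small, purely illustrative representation of a two-line sketch as primitives plus constraint edges; the names below are not the SketchGraphs schema.

# Nodes are sketch primitives; edges are constraints between them.
primitives = {
    0: {"type": "line", "start": (0.0, 0.0), "end": (1.0, 0.0)},
    1: {"type": "line", "start": (1.0, 0.0), "end": (1.0, 1.0)},
}

constraints = [
    {"type": "horizontal", "between": (0,)},
    {"type": "coincident", "between": (0, 1)},     # shared endpoint
    {"type": "perpendicular", "between": (0, 1)},
]

# The construction sequence is what a next-step prediction model would learn
# to continue: given the operations so far, suggest the next one.
construction_sequence = [
    ("add_primitive", 0),
    ("add_constraint", "horizontal"),
    ("add_primitive", 1),
    ("add_constraint", "coincident"),
    ("add_constraint", "perpendicular"),
]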

“By learning to predict sequences of sketch construction operations, for example, models may be employed for conditional completion, interactively suggesting next steps to a CAD user. In addition, explicit generative models, estimating probabilities (or densities) of examples, may be used to assess the overall plausibility of a sketch via its graph or construction sequence, offering corrections of dubious operations (similar to ‘autocorrect’),” a paper on the study reads. “SketchGraphs is aimed toward questions not just concerning the what but in particular the how of CAD design; that is, not simply what geometry is present but how it was constructed. To this end, we leverage a data source that provides some insight into the actual operations and commands selected by the designer at each stage of construction.”

Other AI data sets for CAD models Princeton researchers have introduced in the past include ModelNet and ShapeNet. But the SketchGraphs researchers say existing CAD data sets focus on 3D shape modeling, while their data set is focused on the relational structure of parametric CAD sketches.

SketchGraphs was introduced last week at the International Conference on Machine Learning (ICML), one of the largest annual AI research conferences in the world. Among other notable papers from ICML 2020:

  • A team of security, law, and machine learning experts warned that an anti-hacking law ruling by the U.S. Supreme Court in the future and a lack of consensus among circuit courts today could have a chilling effect on the field of adversarial machine learning and cybersecurity, an industry that increasingly relies on AI.
  • Analysis by MIT researchers found systematic faults in the ImageNet data set annotation pipeline, which researchers argue may be common in other large-scale computer vision models that followed the same approach.
  • An OpenAI research paper received an honorable mention from conference organizers for using GPT-2 to classify and generate images using ImageNet.

Big Data – VentureBeat


What Manufacturers Can Learn From Formula One’s Industrial Optimization Model

July 6, 2020   TIBCO Spotfire

When it comes to optimizing manufacturing processes, no one is better at it than the Mercedes-AMG Petronas Formula One team. From the way the team collects data, to how it analyzes that data and optimizes its systems and processes, it serves as a model for manufacturers. Let’s take a look at what manufacturers can learn from Mercedes-AMG Petronas F1’s industrial optimization model in order to improve their own factories.

Data Collection

When it comes to data, like other manufacturers, the team produces a plethora of data that needs to be collected and analyzed. This translates to 45 terabytes of data produced during the course of a race week, comprising 50,000 data points from over 300 sensors. Similarly, in a factory, production machines generate large volumes of data that need to be analyzed quickly. For example, a CPG company can generate 5,000 data samples every 33 milliseconds. Manufacturers can learn from F1’s amazing ability to collect, analyze, and act on that tremendous amount of data in near real time.

Data Analysis

For the Mercedes-AMG Petronas F1 team, one of the ways data is collected is from a digital twin simulator, which tests overall car performance. There are billions of combinations of car set-ups that are possible, so the team needs to use analysis and experience to figure out the best ones to test. 

Like F1, in a factory, Industrial Internet of Things (IIoT) data must be analyzed in real time to understand how a process is performing and to detect anomalies. Digital twins are also used in factories to reduce waste and improve product quality; a faulty product can lead to increased costs, rework, and unhappy customers, in addition to hefty fines and business closures. Digital twins achieve this by mimicking real-world processes, utilizing sensor data in real time to home in on and predict the key elements and attributes needed to optimize production or prevent unnecessary failures.

Optimization 

When everything is properly optimized, the Mercedes-AMG Petronas F1 team sees the most benefit at the track. After careful analysis of the data, the team is able to find the optimum car setup in rapidly changing circumstances, leading to significant gains in performance. Other examples include a reduction in anomalies in gearbox changes, resulting in great track performance improvements and helping ensure the best race and qualifying lap times.

Imagine what that kind of time saving and optimization could do for your company.

In fact, without proper manufacturing optimization, manufacturers face unplanned outages, which translate into a lower Overall Equipment Effectiveness (OEE). However, when optimized, manufacturers see increased performance and higher quality products.

Looking Ahead

In the coming decade, many manufacturers are going to be switching their smart factory strategy from one focused on technology implementation to one focused on process-change management. This will result in manufacturers treating their own IIoT assets like internal customers, reducing downtime and equipment failures, and diagnosing and resolving issues faster. Manufacturers will increasingly leverage digital twins driven by IIoT and machine learning in order to save operational expenses and optimize supply chains.

While Mercedes-AMG Petronas F1 pioneered the modern industrial optimization model, manufacturers are starting to implement these best practices into their own factories. From data collection, data analysis, and optimization, manufacturers have an opportunity for greater industrial optimization going forward. When utilized properly and with the right technology, factories can increase performance, reduce costs, and produce higher quality products.

Download this infographic to see in greater detail what manufacturers can learn from Mercedes-AMG Petronas F1’s industrial optimization model. And, to learn more about how TIBCO gives the team a competitive advantage, visit our partnership page. 

The TIBCO Blog
