
Tag Archives: Interacting

Google teaches robots how to recognize objects by interacting with their environment

December 12, 2018   Big Data

Google is teaching AI systems to think more like children — at least when it comes to object recognition and perception. In a paper (“Grasp2Vec: Learning Object Representations from Self-Supervised Grasping”) and an accompanying blog post, Eric Jang, a software engineer at Google’s robotics division, and Coline Devin, a Ph.D. student at Berkeley and former research intern, describe an algorithm — Grasp2Vec — that “learns” the characteristics of objects by observing and manipulating them.

Their work comes a few months after MIT researchers demonstrated a computer vision system — dubbed Dense Object Nets, or DON for short — that allows robots to inspect, visually understand, and manipulate objects they’ve never seen before. Grasp2Vec itself is grounded in cognitive developmental research on self-supervision, the Google researchers explained.

Studies of object permanence have long shown that people derive knowledge about the world by interacting with their environment, and over time learn from the outcomes of the actions they take. Even grasping an object provides a lot of information about it — for example, the fact that it had to be within reach in the moments leading up to the grasp.

“In robotics, this type of … learning is actively researched because it enables robotic systems to learn without the need for large amounts of training data or manual supervision,” Jang and Devin wrote. “By using this form of self-supervision, [machines like] robots can learn to recognize … object[s] by … visual change[s] in the scene.”

The team collaborated with X Robotics to “teach” a robotic arm that could grasp objects “unintentionally,” and in the course of training learn representations of various objects. Those representations eventually led to “intentional grasping” of tools and toys chosen by the researchers.


The team leveraged reinforcement learning — an AI training technique that uses a system of rewards to drive agents toward specific goals — to encourage the arm to grasp objects, inspect them with its camera, and answer basic object recognition questions (“Do these objects match?”). And they implemented a perception system that could extract meaningful information about the items by analyzing a series of three images: an image before grasping, an image after grasping, and an isolated view of the grasped object.
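The core trick behind that three-image analysis can be sketched in a few lines. Below is a minimal, toy illustration (made-up 4-dimensional embeddings standing in for a learned network's features — not Google's actual model): the feature difference between the pre-grasp and post-grasp scene should resemble the embedding of whichever object left the scene, so answering "what did I grasp?" reduces to a similarity lookup.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_grasped(pre_scene, post_scene, candidates):
    """Grasp2Vec-style matching: the embedding difference
    (pre-grasp scene minus post-grasp scene) should match the
    embedding of the object that was removed by the grasp."""
    diff = pre_scene - post_scene
    sims = {name: cosine(diff, emb) for name, emb in candidates.items()}
    return max(sims, key=sims.get)

# Toy embeddings for three objects; scene embeddings are treated
# as (approximately) additive over the objects they contain.
objects = {
    "mug":   np.array([1.0, 0.1, 0.0, 0.2]),
    "brush": np.array([0.0, 1.0, 0.3, 0.0]),
    "toy":   np.array([0.1, 0.0, 1.0, 0.5]),
}
pre  = objects["mug"] + objects["brush"] + objects["toy"]
post = objects["brush"] + objects["toy"]      # the mug was grasped
print(identify_grasped(pre, post, objects))   # -> mug
```

In the real system the embeddings come from a convolutional network trained so that this additivity property holds; the toy vectors here satisfy it by construction.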

In tests, Grasp2Vec and the researchers’ novel policies achieved a success rate of 80 percent, and worked even in cases where multiple objects matched the target and where the target consisted of multiple objects.

“We show how robotic grasping skills can generate the data used for learning object-centric representations,” they wrote. “We then can use representation learning to ‘bootstrap’ more complex skills like instance grasping, all while retaining the self-supervised learning properties of our autonomous grasping system. Going forward, we are excited not only for what machine learning can bring to robotics by way of better perception and control, but also what robotics can bring to machine learning in new paradigms of self-supervision.”


Big Data – VentureBeat


Visualizing and interacting with your Azure Machine Learning Studio experiments

October 13, 2017   Self-Service BI

Microsoft Senior Program Manager Christian Berg is back with another entry in his series on becoming your organization’s strategic advisor with Machine Learning and Power BI. If you need to catch up, you can find the five previous chapters summarized here on the blog.

—————————————————————-

Visualizing and exploring the results of your Azure Machine Learning Studio (ML Studio) experiments is useful both when developing and evaluating a model and, most importantly, when deploying the model and presenting its results. In Power BI Desktop this can be done in two ways: either by writing the results from the model to a database that you then connect Power BI to, or by using an R script visual (Rviz). An Rviz has the advantage that you can dynamically select the subset of the Power BI model that you want to score. This enables powerful what-if analysis scenarios where you can use R and ML Studio to assess the distribution of likely outcomes under different counterfactuals.

In this example we will look at connecting to an Azure ML Studio experiment with an Rviz and then building on that to create a dynamic report to explore cross price elasticities. We will also look at a simpler example where we instead use DAX to explore the impact of different discount percentages, based on an assumption about our elasticity.

For the example report we have an ensemble regression model that was built in ML Studio to predict revenue by store, product and month using historic data, marketing information and future prices. The time series component is pre-calculated, as described in part two of this series.

Using the example report

For this report to work in Power BI Desktop you need the AzureML and ggplot2 packages installed. If you have not previously used these packages on your computer, install them by going to your R console in RStudio or another R GUI and running install.packages("AzureML"), then repeating that for "ggplot2". If you are new to R, please see my previous post for installation instructions. Please note that the AzureML package uses networking services, which are currently blocked in the Power BI service.

When you open the example report and go to page 1 (Simple what-if analysis) you will see a line chart showing the historic revenue in the data model and the hypothetical revenue amount, based on the average discount and price elasticity parameters. Below the line chart you will find the breakdown of both amounts in a drillable matrix visualization.


In the top right-hand corner are the two parameters, Average discount (applied across all products) and Price elasticity. When you change these two parameters you will see the other visualizations’ measures beginning with “new” update. While these are straightforward relationships, this can be a powerful way to let the report’s users develop an intuitive understanding of the underlying sensitivities, especially when there is little prior knowledge about the variation in inputs but the relationships between the inputs and the relevant KPIs are well understood and modeled.

On the second page, Azure ML Studio real-time scoring, we are not inputting any assumptions but instead using slicers and Power BI’s filter functions to dynamically select a subset of the data model to be sent to our Predictive ML Studio experiment.


The three slicers are followed by the Rviz which plots the actual revenue and the model’s predicted value. To the right of it is a normal line chart showing the historical values only. Unlike the previous page where we had to estimate elasticity this model was trained on all available data and uses the price points of the six products (P1-P6) to predict what the revenue would be. However, we are limited to only using real data for the scoring.

On the third page (What-If Analysis with Azure ML Studio) we have combined the previous two pages using both input parameters and real-time scoring. This allows us to test different price points per product with the Azure ML experiment. Just as on the previous page there is no need to estimate elasticity since the model has learnt this relationship through the historic data and by adding parameters for marketing campaigns etc. we can now assess the probable range of any scenario of interest.

To create a scenario, start by selecting the regions, stores and products that you are interested in using the corresponding slicers. Then select the time interval that you are interested in (this could have been exclusively or partially future dates). By changing the planned price of the different products, you can then examine which combination has the highest expected revenue or profit. This model takes cross-price elasticities into account, i.e. if one product is a complement or substitute for another product, a change in its price will also affect units sold of the other product. In summary, this is a powerful tool to quickly compare the average Azure ML Studio prediction of your scenario’s impact against the base case.

Recreating the report

PAGE 1

To build what-if style dynamic reports in Power BI, start by creating the parameters that the user should be able to input by clicking Modelling -> New Parameter in the Power BI Desktop top menu.


This prompts a form that asks for the new parameter’s name (e.g. “Average discount”), data type, min and max values, the granularity of the steps that the slider should use, and an optional default value. When you click OK, a new table is created with a calculated column and a corresponding measure. The column contains the DAX formula that generates the series, and you can double-click it at any time to change the settings for the parameter. Repeat this for every parameter that you want to let the user alter to define the scenario. For categorical values, e.g. “Social campaign”, “Display ads”, “Email”, you can instead use Enter data, or reference a new or existing table in the Query Editor, and then display it with a slicer. The what-if parameters should typically not be connected to any other tables, so you need DAX to make use of them. In this example I did this with the measure:

New Units_sold = SUM(Sales[Units_sold])*(1+[Average discount Value]*[Price elasticity Value])

which changes the baseline quantity of products sold (Sales[Units_sold]) by the discount parameter, weighted by the elasticity. The measure:

New price = DIVIDE(SUM(Sales[Revenue]),SUM(Sales[Units_sold]))*(1-[Average discount Value])

calculates the current average price and then applies the scenario’s average discount to it. We can then use these two calculations to determine the new revenue:

New revenue = [New price]*[New Units_sold]

and by adding these measures to our visualizations the report is complete, allowing the user to immediately see the results of their inputs. For stochastic modelling you can use a randomization function like RAND() or POISSON.DIST() to determine, for instance, the probability of achieving a target based on the parameter input.
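The three measures chain together with simple arithmetic. A quick mirror of that logic in Python (toy numbers, not the report's data) makes the chain easy to sanity-check:

```python
def what_if_revenue(units_sold, revenue, discount, elasticity):
    """Mirror of the three DAX measures above:
    - New Units_sold: baseline units scaled by (1 + discount * elasticity)
    - New price:      current average price scaled by (1 - discount)
    - New revenue:    product of the two."""
    new_units = units_sold * (1 + discount * elasticity)
    new_price = (revenue / units_sold) * (1 - discount)
    return new_price * new_units

# 1,000 units and 50,000 in revenue (average price 50); a 10% discount
# with elasticity 2 gives 20% more units at a 10% lower price.
print(what_if_revenue(1000, 50_000, 0.10, 2.0))  # -> 54000.0
```

Note the asymmetry this exposes: revenue rises only when the extra volume outweighs the price cut, which is exactly the sensitivity the report lets users explore.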


PAGE 2

To create the R-based visuals on pages 2 and 3, you will need a deployed ML Studio experiment and the AzureML and ggplot2 R packages installed. Please note that since the AzureML R package uses networking capabilities it is not yet available in the Power BI service, i.e. these visuals only work in Power BI Desktop.


These two visualizations are Power BI R script visuals that use the R packages AzureML and ggplot2*. The former connects to an ML Studio experiment. For it to do that, the scoring experiment needs to be exhaustively defined. In this example we will use the 1) Workspace ID and 2) Experiment name. In addition, the package needs 3) the authentication information to be allowed to interact with the web service.

You can find the Workspace ID by signing in to studio.azureml.net and selecting the correct workspace from the top menu:

You then select the SETTINGS tab from the left-hand side menu. The WORKSPACE ID is the fourth property from the top:


Copy this information, e.g. to Notepad.

To get the AUTHORIZATION TOKENS click on the second tab in settings:


And copy one of the tokens to a new row in your Notepad.

The last thing that you need is the exact name of the predictive experiment. Once you have published it, you can get this by clicking on WEB SERVICES in the left-hand side menu and then on the name of the experiment in the list and you will see something like:


Copy the entire name, e.g. “pricingdevpoc [predictive exp.]” to your Notepad.

Now that you have the required connection information, it is time to add the visuals to Power BI. Open a pbix file with the relevant data (typically a subset of the data that you trained your model on). On the page where you want to add the interactive scoring, insert a simple standard visual, such as a table, and add the columns that you want to send to ML Studio. Please note that ML Studio requires an exact schema: the column names, sequence and types have to be identical to what the web service input expects. I would also recommend clicking the arrow next to each column name in the Values section for the visual and selecting Don’t summarize, to ensure that the data looks correct, without any empty rows or text values mixed with numbers.
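Because the web service rejects requests whose schema deviates at all, it can save debugging time to check the column list before sending anything. A hypothetical pre-flight check, sketched in Python (the column names below are illustrative, not the actual experiment's schema):

```python
def validate_schema(columns, expected):
    """ML Studio web-service inputs require exact column names in
    exact order; fail fast with a readable message before calling
    the service instead of deciphering a remote scoring error."""
    if list(columns) != list(expected):
        missing = [c for c in expected if c not in columns]
        extra = [c for c in columns if c not in expected]
        raise ValueError(
            f"schema mismatch: missing={missing}, extra={extra}, "
            f"got order={list(columns)}"
        )

# Hypothetical input schema for a pricing experiment.
expected = ["Month_ID", "Store", "Product", "P1", "P2", "P3", "P4", "P5", "P6"]
validate_schema(expected, expected)  # matching schema: passes silently
```

The same idea can be expressed in a couple of lines of R inside the Rviz (comparing names(dataset) against the expected vector) if you prefer to keep everything in the visual.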

Once this has been validated, convert the visual to an R visual and add the script below, updated with your experiment’s connection information:


The R script editor where you paste the script is visible when you select the converted Rviz on the canvas:


Update the script below* with your connection details and paste it into the R script editor:

## Set the workspace id
wsid = "paste your workspace id here within the quotation marks"

## Set the workspace authentication key
auth = "paste your authentication token here within the quotation marks"

## Input the name of the experiment
serviceName = "paste your service's name here within the quotation marks"

## Load the AzureML library (if not previously installed: install.packages("AzureML"))
library("AzureML")

## Create a reference to the workspace
ws <- workspace(wsid, auth)

## Create a reference to the experiment
s <- services(ws, name = serviceName)

## Send the dataset to the Azure ML web service for scoring and store the result in ds
ds <- consume(s, dataset)

## Aggregate the scores to a single value by month
scores <- data.frame(Prediction = tapply(ds$Scored.Labels, ds$Month_ID, sum))

## Aggregate the revenue to a single value by month (for comparison)
revenue <- data.frame(Actuals = tapply(ds$Revenue, ds$Month_ID, sum))

## Combine the two resulting vectors in the new data.frame timePlot
timePlot <- cbind(scores, revenue)

## Load the ggplot2 library (if not previously installed: install.packages("ggplot2"))
require(ggplot2)

## Specify the data to plot and set the x-axis
ggplot(data = timePlot, aes(x = 1:nrow(timePlot))) +
  ## Plot the two lines
  geom_line(aes(y = Prediction, colour = 'Prediction')) +
  geom_line(aes(y = Actuals, colour = 'Actuals')) +
  ## Rename the x and y axes
  xlab("Time") +
  ylab("Result $") +
  ## Name the legend
  labs(colour = "Legend") +
  ## Change the colors of the lines
  scale_color_manual(values = c("green", "red"))


After pasting it in the editor, click the Run script play button. If you see an error message, click See details. If the message is “there is no package called …”, install the required package, e.g. install.packages("AzureML"), from your IDE for R (RGui, RStudio, etc.).

Once you have this visual working, you can move on to creating a similar visualization that takes user-defined parameters as input. Please see the beginning of this section for information on creating what-if parameters.

PAGE 3

Duplicate the page that you just created and substitute the corresponding parameters for the columns from the data model. If the parameters have the same names as the input schema, it should work. In my example on page three I’ve given the parameters slightly different names from what the schema expects. This can be fixed by renaming them at runtime in the R script*. The script below shows two ways to do this, by name and by position:

## Rename the columns according to the import schema
## By name
names(dataset)[names(dataset) == "Product1 Price Value"] <- "P1"
## By position
names(dataset)[15] <- "P2"
names(dataset)[16] <- "P3"
names(dataset)[17] <- "P4"
names(dataset)[18] <- "P5"

## Set the workspace id
wsid = "paste your workspace id here within the quotation marks"

## Set the workspace authentication key
auth = "paste your authentication token here within the quotation marks"

## Input the name of the experiment
serviceName = "paste your service's name here within the quotation marks"

## Load the AzureML library (if not previously installed: install.packages("AzureML"))
library("AzureML")

## Create a reference to the workspace
ws <- workspace(wsid, auth)

## Create a reference to the experiment
s <- services(ws, name = serviceName)

## Send the dataset to the Azure ML web service for scoring and store the result in ds
ds <- consume(s, dataset)

## Aggregate the scores to a single value by month
scores <- data.frame(Prediction = tapply(ds$Scored.Labels, ds$Month_ID, sum))

## Aggregate the revenue to a single value by month (for comparison)
revenue <- data.frame(Actuals = tapply(ds$Revenue, ds$Month_ID, sum))

## Combine the two resulting vectors in the new data.frame timePlot
timePlot <- cbind(scores, revenue)

## Load the ggplot2 library (if not previously installed: install.packages("ggplot2"))
require(ggplot2)

## Vertical offset for the data labels above the prediction line
labelNudge <- (max(scores) - min(scores)) / 15

## Specify the data to plot and set the x-axis
ggplot(data = timePlot, aes(x = 1:nrow(timePlot))) +
  ## Plot the two lines
  geom_line(aes(y = Prediction, colour = 'Prediction')) +
  geom_line(aes(y = Actuals, colour = 'Base case')) +
  ## Rename the x and y axes
  xlab("Time") +
  ylab("Result $") +
  ## Name the legend
  labs(colour = "Legend") +
  ## Change the colors of the lines
  scale_color_manual(values = c("blue", "red")) +
  ## Add data labels above the prediction line
  geom_text(aes(label = format(round(Prediction, digits = 0), big.mark = ",", big.interval = 3L),
                y = Prediction), nudge_y = labelNudge)

The only other differences from the first R script are small layout changes.

Links and downloads

studio.azureml.net

https://1drv.ms/u/s!Aq0_YqoiHrN3lv45rM5Zq5nZmG1qMA

* Third-party programs. This software enables you to obtain software applications from other sources. Those applications are offered and distributed by third parties under their own license terms. Microsoft is not developing, distributing or licensing those applications to you, but instead, as a convenience, enables you to use this software to obtain those applications directly from the application providers. By using the software, you acknowledge and agree that you are obtaining the applications directly from the third-party providers and under separate license terms, and that it is your responsibility to locate, understand and comply with those license terms.

Microsoft grants you no license rights for third-party software or applications that is obtained using this software.


Microsoft Power BI Blog | Microsoft Power BI


The CRM Minute: PowerTrivia – Making Interacting with Your Customers Fun and Engaging [VIDEO]

May 18, 2016   Microsoft Dynamics CRM

Want to engage your clientele through SMS messages in a fun and interactive manner? With PowerTrivia, users can leverage their PowerSurveyPlus, PowerSMS, and PowerWebForm subscriptions to create and track trivia games within CRM! Users can create custom messages for events like a perfect score, incorrect questions, or even if the game has…




PowerObjects- Bringing Focus to Dynamics CRM

© 2021 Business Intelligence Info