Tag Archives: tool

Amazon launches ML-powered maintenance tool Lookout for Equipment in general availability

April 9, 2021   Big Data

Amazon today announced the general availability of Lookout for Equipment, a service that uses machine learning to help customers perform maintenance on equipment in their facilities. Launched in preview last year during Amazon Web Services (AWS) re:Invent 2020, Lookout for Equipment ingests sensor data from a customer’s industrial equipment and then trains a model to predict early warning signs of machine failure or suboptimal performance.

Predictive maintenance technologies have been used for decades in jet engines and gas turbines, and companies like GE Digital (with Predix) and Petasense offer Wi-Fi-enabled, cloud- and AI-driven sensors. According to a recent report by analysts at Markets and Markets, predictive factory maintenance could be worth $12.3 billion by 2025. Beyond Amazon, startups like Augury are vying for a slice of the segment.

With Lookout for Equipment, industrial customers can build a predictive maintenance solution for a single facility or multiple facilities. To get started, companies upload their sensor data — like pressure, flow rate, RPMs, temperature, and power — to Amazon Simple Storage Service (S3) and provide the relevant S3 bucket location to Lookout for Equipment. The service will automatically sift through the data, look for patterns, and build a model that’s tailored to the customer’s operating environment. Lookout for Equipment will then use the model to analyze incoming sensor data and identify early warning signs of machine failure or malfunction.
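
To make the first step concrete, here is a minimal PowerShell sketch of staging a sensor export in S3 using the AWS Tools for PowerShell. The bucket, key, and file names are hypothetical, and creating the Lookout for Equipment dataset that points at the bucket is a separate step in the AWS console or API.

# Minimal sketch (hypothetical bucket, key, and file names) of uploading sensor data
# to S3 so Lookout for Equipment can be pointed at the bucket location.
# Requires the AWS.Tools.S3 module and configured AWS credentials.
Import-Module AWS.Tools.S3

$bucket    = "my-plant-sensor-data"              # hypothetical bucket name
$localFile = "C:\exports\pump-17-2021-04.csv"    # hypothetical sensor export

# Upload the CSV under a per-asset prefix; this S3 location is what you would
# provide to Lookout for Equipment as the data source.
Write-S3Object -BucketName $bucket -File $localFile -Key "pump-17/pump-17-2021-04.csv"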

For each alert, Lookout for Equipment will specify which sensors are indicating an issue and measure the magnitude of its impact on the detected event. For example, if Lookout for Equipment spotted a problem on a pump with 50 sensors, the service could show which five sensors indicate an issue on a specific motor and relate that issue to the motor power current and temperature.

“Many industrial and manufacturing companies have heavily invested in physical sensors and other technology with the aim of improving the maintenance of their equipment. But even with this gear in place, companies are not in a position to deploy machine learning models on top of the reams of data due to a lack of resources and the scarcity of data scientists,” VP of machine learning at AWS Swami Sivasubramanian said in a press release. “Today, we’re excited to announce the general availability of Amazon Lookout for Equipment, a new service that enables customers to benefit from custom machine learning models that are built for their specific environment to quickly and easily identify abnormal machine behavior — so that they can take action to avoid the impact and expense of equipment downtime.”

Lookout for Equipment is available via the AWS console as well as through supporting partners in the AWS Partner Network. It launches today in the US East (N. Virginia), EU (Ireland), and Asia Pacific (Seoul) server regions, with availability in additional regions in the coming months.

The launch of Lookout for Equipment follows the general availability of Lookout for Metrics, a fully managed service that uses machine learning to monitor key factors impacting the health of enterprises. Both products are complemented by Amazon Monitron, an end-to-end equipment monitoring system to enable predictive maintenance with sensors, a gateway, an AWS cloud instance, and a mobile app.


Source: Big Data – VentureBeat

Nvidia and Harvard develop AI tool that speeds up genome analysis

March 8, 2021   Big Data

Researchers affiliated with Nvidia and Harvard today detailed AtacWorks, a machine learning toolkit designed to bring down the cost and time needed for rare and single-cell experiments. In a study published in the journal Nature Communications, the coauthors showed that AtacWorks can run analyses on a whole genome in just half an hour compared with the multiple hours traditional methods take.

Most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus. But an individual cell pulls out only the subsection of genetic components that it needs to function, with cell types like liver, blood, or skin cells using different genes. The regions of DNA that determine a cell’s function are easily accessible, more or less, while the rest are wound around proteins.

AtacWorks, which is available from Nvidia’s NGC hub of GPU-optimized software, works with ATAC-seq, a method for finding open areas in the genome of cells that was pioneered by Harvard professor Jason Buenrostro, one of the paper’s coauthors. ATAC-seq measures the intensity of a signal at every spot on the genome; peaks in the signal correspond to regions of accessible DNA, and the fewer cells available, the noisier the data appears, making it difficult to identify which areas of the DNA are accessible.

ATAC-seq typically requires tens of thousands of cells to get a clean signal. Applying AtacWorks produces the same quality of results with just tens of cells, according to the coauthors.

AtacWorks was trained on labeled pairs of matching ATAC-seq datasets, one high-quality and one noisy. Given a downsampled copy of the data, the model learned to predict an accurate high-quality version and identify peaks in the signal. Using AtacWorks, the researchers found that they could spot accessible chromatin, a complex of DNA and protein whose primary function is packaging long molecules into more compact structures, in a noisy sequence of 1 million reads nearly as well as traditional methods did with a clean dataset of 50 million reads.

AtacWorks could allow scientists to conduct research with a smaller number of cells, reducing the cost of sample collection and sequencing. Analysis, too, could become faster and cheaper. Running on Nvidia Tensor Core GPUs, AtacWorks took under 30 minutes for inference on a genome, a process that would take 15 hours on a system with 32 CPU cores.

In the Nature Communications paper, the Harvard researchers applied AtacWorks to a dataset of stem cells that produce red and white blood cells — rare subtypes that couldn’t be studied with traditional methods. With a sample set of only 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells.

“With very rare cell types, it’s not possible to study differences in their DNA using existing methods,” Nvidia researcher Avantika Lal, first author on the paper, said. “AtacWorks can help not only drive down the cost of gathering chromatin accessibility data, but also open up new possibilities in drug discovery and diagnostics.”


Source: Big Data – VentureBeat

Teradata Launches New ‘DataDNA’ Data Forensics Tool

November 20, 2020   BI News and Info

New as-a-service offering automates discovery of cross-platform data lineage so customers can better understand data flow and utilization across their entire data analytics ecosystem

Enables simplified and accelerated migration to the cloud

Teradata (NYSE: TDC), the cloud data analytics platform company, today announced the availability of Teradata DataDNA – an automated service that produces data lineage and usage analytics. Using the power of Vantage, the company’s flagship hybrid multi-cloud data analytics software platform, DataDNA delivers transparency into an organization’s data assets and their utilization across the ecosystem, regardless of platform or technology, to ensure maximum analytic value is being derived throughout the enterprise. By giving businesses full insight into their data – including whether data is used, how it is used, and by whom – DataDNA enables customers to use data as their greatest asset, eliminating data redundancy, reducing cost, accelerating data integration, assisting in regulatory compliance, and increasing the return on investment.

Delivered by Teradata or one of its strategic integration and consulting partners, DataDNA also becomes an indispensable tool as companies migrate to Vantage in the cloud – helping them understand the interdependencies of their systems, data usage, and data flow, so they can make informed decisions on which applications to move, consolidate and simplify for their new cloud ecosystem. 

“At Teradata, we have a deep understanding of analytic ecosystems and how data flows through an organization. That’s why we’re leveraging our expertise to help our customers better understand and manage their data assets across any platform,” said Niels Brandt, Vice President, Customer Success & Consulting at Teradata. “By automating data management, our customers will reduce their reliance on IT specialists for repetitive and low impact data management tasks; thereby releasing their productive time for increased collaboration, training and high-value services. And as more of our customers move to Vantage in the cloud, DataDNA provides insight to support ecosystem simplification and helps to identify data dependencies for accelerated migration plans and activities.”

DataDNA is an as-a-service offering that is customized for individual Teradata customers. By delivering this automated view into data assets — including their usage and cross-platform data lineage — DataDNA generates rapid new insights that improve new and existing business use cases by:

  • Simplifying IT ecosystems and reducing associated costs;
  • Eliminating data duplication;
  • Providing self-service business insights;
  • Ensuring efficient and fact-based data governance;
  • Guaranteeing data quality;
  • Reconciling data and processes across platforms; and
  • Generating automated and accurate change impact analysis.

The demand for automated data management has increased dramatically in recent years as data proliferation has accelerated, creating a need for services and solutions that help companies understand their vast data ecosystems. With DataDNA, the insights into what systems do with data, and who is using that data, are derived from metadata. This makes the service much less invasive and the footprint much lighter, so that no intensive system processing is required.

According to Gartner’s Top 10 Trends in Data and Analytics, May 11, 2020, “By 2023, organizations utilizing active metadata, machine learning and data fabrics to dynamically connect, optimize and automate data management processes will reduce time to integrated data delivery by 30%.” This enables companies to leverage more of their data, faster, to gain rapid analytic insights.

Gartner also asserts in the Magic Quadrant for Metadata Management Solutions, October 16, 2019, by analysts Guido De Simoni, Mark Beyer, and Ankush Jain, that “Metadata supports understanding of an organization’s data assets, how those data assets are used, and their business value. Metadata management initiatives deliver business benefits such as improved compliance and corporate governance, better risk management, better shareability and reuse, and better assessments of the impact of change within an enterprise, while creating opportunities and guarding against threats.”

A complete list of DataDNA’s features includes:

  • Automated Data Lineage: Understand how data moves across the enterprise, at a column level, based on facts.
  • Automated Data Usage Analysis: Understand who uses what data, when, and how. This can assist with clean up, decommissioning, PII data analysis, and regulatory compliance.
  • Data Asset Catalog: Ability to identify the data that is an asset to an organization along with who uses the data – for data monetization purposes, data as a service, etc.
  • Business Glossary Management: Helps companies build or manage business glossaries by linking the business glossary terms to the physical lineage from which the data arrives.
  • Subject Area Fingerprinting: Understand the subject areas that are being used in IT environments and support Cloud migration use cases, duplication analysis, and much more.
  • PII identification: Identify where PII data is stored and how it moves across an environment, using metadata. This light touch approach significantly reduces the system and human resources required to identify where PII data is held and who accesses it.
  • Impact Assessment: With the touch of a button, run an impact assessment report to determine what impact a change will have across an entire connected lineage.

Availability

Teradata’s DataDNA is available globally, today.

Source: Teradata United States

Why TIBCO Spotfire is the Ideal Tool for Network Analytics

November 15, 2020   TIBCO Spotfire


Any type of network downtime has a significant impact on today’s businesses. According to a recent study by Statista, the average cost of IT downtime is between $350,000 and $400,000 per hour.

For this reason, IT teams rely on network analytics tools to identify key trends and patterns occurring in a network to ensure their networks perform at an optimal level. 

Today, with a steep increase in network complexity and the added pressure of demand for improved reliability, there’s a need for an evolution in approach to network monitoring. IT teams need richer analytics along with monitoring to surface issues before they become problems. TIBCO Spotfire is a proven solution. Using search and recommendations powered by a built-in artificial intelligence (AI) engine, Spotfire enables you to more easily visualize new discoveries about your network. 

Let’s have a look at some of the capabilities of TIBCO Spotfire that make it an ideal tool for Network Analytics.

1. Smart, Immersive Visual Analytics

  • Spotfire offers a user-friendly interface providing visual tools for easy data exploration so you can quickly get insights from your data. The rich and interactive dashboards, brushlinking, and point-and-click data exploration provide powerful analytics capabilities. 
  • Powered by AI, Spotfire automatically recommends various visualizations for different relationships in your data, making it far faster and easier to get insights. AI recommendations help in loading, linking, categorizing, and navigating data for an overall faster analysis.  
  • Spotfire makes it easy for analytics and business leaders to build analytical applications through a guided workflow. 

2. Location Analytics

  • Location Analytics in Spotfire provides spatial analytics for everyone. Its depth and analytical capabilities make it easier to understand predictions and optimization through locations. Spotfire automatically adds location context to your analysis that would not be possible using traditional charts and tables.  
  • Multi-layer maps allow you to drill down, within, and between layers to add more context to your maps for location-based insights. This can be further enriched by including other data sources and adding multiple layers to the data. 

3. Real-Time Analytics 

  • Understanding what is happening in real time is imperative for Network Analytics. TIBCO Spotfire natively supports real-time streaming data to provide real-time views into the network, enabling real-time analysis. 
  • Spotfire data streams are designed for demanding network enterprise environments with tens of millions of events a day per network, and thousands of continuous, streaming queries. 

Why Your Investment in Network Analytics Demands Spotfire’s Capabilities 

According to an IDC FutureScape Report, there will be an acceleration to cloud-centric technologies and edge deployments will be a top priority. These are the top two trends we could see in the near future; what weaves them together is the network health of your business. To make your business operations resilient and drive competitive advantage, you need network analytics that bring the most valuable insights to you. 

The bottom line: You have to know why your network behaves the way it does. You need to be able to compare historic data with real-time data in a single environment for the most current, comprehensive view of IT networks, and for the insights into behavior that matter most. Network Analytics empowers your IT teams with information about network health, helps your leaders determine performance trends, and provides historical network data that can provide benefits for the entire enterprise. 

From traffic to business initialization with application sharing, IT teams need insights to solve business problems. Network analytics pinpoints specific issues, determining what happened, when, and how. Predictive analytics can help prevent future network performance issues when the data suggests a repeating event. And Spotfire shines in immersive, smart, real-time analytics, enabling predictive and even prescriptive analytics.


Forward-thinking IT organizations are increasingly adopting solutions like TIBCO Spotfire because of the value provided through instant AI solutions that solve real-time requirements with intelligent and automated network insights. To learn more about how to implement Network Analytics for your business, please contact us.

Source: The TIBCO Blog

Figure the Total Cost of Microsoft Dynamics 365/CRM with Our Quick Quote Tool

October 30, 2020   CRM News and Info

What a year this has been. Organizations are figuring out how to keep going and thrive despite difficult circumstances. Many businesses are using this time to map their path for the future. Now is a good time to position your organization for the days and years ahead.

Organizations of all types and sizes realize how valuable a comprehensive CRM (Customer Relationship Management) solution is to the growth and success of their business.

If you are increasing your client base, opening new offices, or expanding into new markets nearby or around the globe, you need a powerful, integrated system such as Microsoft Dynamics 365/CRM.

Undoubtedly, you have read numerous articles or talked with consultants about the many features of Dynamics 365 and what it can do for your business. But one piece of information you’d like to have before you commit to buying is the price.

There is a difference between the sticker price of the software and the actual cost of the software plus installation plus ongoing costs. When budgeting for a new or upgraded solution, you need to know the total cost of ownership. That’s not always easy to determine.

The CRM Software Blog’s Quick Quote Wizard is here to help. This tool will provide a working estimate of the total costs involved in implementing and operating Microsoft Dynamics 365. Here’s how the Quick Quote Wizard works:

Look for the orange ‘Request Instant Quote Dynamics 365/CRM’ bar at the top right of each page of the CRM Software Blog. Click the bar, and you’ll be taken to a screen with a short form to fill out. You’ll be asked basic questions about your type of business, the number of employees who will be using the system, the level of support you’ll require (don’t worry, the questionnaire will help you answer these questions) and any concerns you have unique to your business. Include your contact information, and you’re done.

The Quick Quote Wizard will use your information to customize a quote which you’ll receive instantly as a PDF sent to your email. After you’ve received your non-binding quote, your contact information will be forwarded to just one of our expert CRM Software Blog members in your area who will be happy to answer any questions you may have. You can choose to work with that partner or not; it’s up to you. The estimate and the referral are a free service we provide for our readers.

The CRM Software Blog Quick Quote Tool can get you started in determining just how much your Microsoft Dynamics 365 system will cost you from the first licensing to long-term support. Simply fill out the quote request form and be well on your way to a deeper understanding of the total cost of owning a Dynamics 365 solution from Microsoft.

By CRM Software Blog Writer. www.crmsoftwareblog.com

Find a Microsoft Partner in your area.

Source: CRM Software Blog | Dynamics 365

U.S. NHTSA’s autonomous vehicle test tracking tool is light on data

September 3, 2020   Big Data


In June, the U.S. National Highway Traffic Safety Administration (NHTSA) detailed the Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST), a program that purports to provide a robust source of information about autonomous vehicle testing. Today marked the official launch of AV TEST after several months of ramp-up, beginning with a tool for tracking driverless pilots in 17 cities across nine states. But despite lofty promises from the NHTSA of the transparency AV TEST will herald, its current incarnation — at least from first impressions — is bare-bones at best.

Companies including Beep, Cruise, Fiat Chrysler, Nuro, Toyota, Uber, Local Motors, Navya, and Waymo have agreed to participate in AV TEST so far, along with those that have previously submitted testing information to NHTSA, including Aurora, Easymile, Kodiak Robotics, Lyft, TuSimple, Nvidia, and Zoox. The tracking tool reveals on-road testing locations and activity data like vehicle types, uses, dates, frequency, vehicle counts, and routes. And it shows information about state vehicle operation regulations, emergency response plans, and legislation, as well as links to the voluntary safety reports some vehicle operators publish.

The ostensible goal is to shed light on the breadth of vehicle testing taking place across the country. Owing to a lack of public data around autonomous vehicles, Partners for Automated Vehicle Education (PAVE) reports a majority of Americans don’t think the technology is “ready for primetime.” The federal government maintains no database of autonomous vehicle reliability records, and while states like California mandate that companies testing driverless cars disclose how often humans are forced to take control of the vehicles, critics assert those are imperfect measures of safety.

“The more information the public has about the on-road testing of automated driving systems, the more they will understand the development of this promising technology,” NHTSA deputy administrator James Owens said during a web briefing. “Automated driving systems are not yet available for sale to the public, and the AV TEST Initiative will help improve public understanding of the technology’s potential and limitations as it continues to develop.”

[Image: a screenshot of the AV TEST tracking tool]

Some of the AV TEST tool’s stats are admittedly eye-catching, like the fact that program participants are reportedly conducting 34 shuttle, 24 autonomous car, and 7 delivery robot trials in the U.S. But they aren’t especially informative. Several pilots don’t list the road type (e.g., “street,” “parking lot,” “freeway,”) where testing is taking place, and the entries for locations tend to be light on the details. Waymo reports it is conducting “Rain Testing” in Florida, for instance, but hasn’t specified the number and models of vehicles involved. Cruise is more forthcoming about its tests in San Francisco, but only approximates the number of vehicles in its fleets.

The incompleteness of the data aside, participation in AV TEST appears to be uneven at launch. Major stakeholders like Pony.ai, Baidu, Tesla, Argo.AI, Amazon, Postmates, and Motion seemingly declined to provide data for the purposes of the tracking tool or have yet to make a decision either way. The net effect is a database that’s less useful than it might otherwise be; while the AV TEST tool reports that there are 61 pilots currently active, the actual number is likely to be far greater.

It also remains to be seen how diligently NHTSA will maintain this database. Absent a vetting process, companies have wiggle room to underreport or misrepresent tests taking place. And because the AV TEST program is voluntary, there’s nothing to prevent participating states and companies from demurring as testing resumes during and after the pandemic.

“Unfortunately, NHTSA’s reliance on voluntary industry actions to accomplish this is a recipe for disaster,” Advocates for Highway and Auto Safety president Cathy Chase said of the AV TEST program earlier this summer. “It has been reported that at least 80 companies are testing autonomous vehicles. Yet, only 20 have submitted safety assessments to the U.S. DOT under the current voluntary guidelines, iterations of which have been in place for nearly four years … Additionally, over that time, the National Transportation Safety Board has investigated six crashes involving vehicles with autonomous capabilities uncovering serious problems, including inadequate countermeasures to ensure driver engagement, reliance on voluntary reporting, lack of standards, poor corporate safety culture, and a misguided oversight approach by NHTSA.”

Tellingly, the AV TEST program’s launch comes as federal efforts to regulate autonomous vehicles remain stalled. The DOT’s recently announced Automated Vehicles 4.0 (AV 4.0) guidelines request — but don’t mandate — regular assessments of self-driving vehicle safety, and they permit those assessments to be completed by automakers themselves rather than by standards bodies. (Advocates for Highway and Auto Safety also criticized AV 4.0 for its vagueness.) And while the House of Representatives unanimously passed a bill that would create a regulatory framework for autonomous vehicles, dubbed the SELF DRIVE Act, it has yet to be taken up by the Senate. In fact, the Senate two years ago tabled a separate bill (the AV START Act) that made its way through committee in November 2017.

All of this raises the question of whether the AV TEST program will be able to move the needle on autonomous car safety. Time will tell, but today’s rollout doesn’t instill confidence.

Source: Big Data – VentureBeat

Monitoring the Power Platform: Model Driven Apps – Monitor Tool Part 1: Messages and Scenarios

August 16, 2020   Microsoft Dynamics CRM

Summary

 

Monitoring Dynamics 365 or Model Driven Applications is not a new concept. Understanding where services are failing, where users are running into errors, and where forms and business processes could be tuned for performance is a key driver for most, if not all, businesses, from small companies to enterprises. Luckily, the Dynamics 365 platform provides many tools to help audit and monitor business and operational events.

This article will cover user events and where they are sourced from. From there, we will dive into the Monitor tool and look at individual messages within. We will work with a few sample scenarios and see what we can gain from markers and messages within the Monitor tool.

Collecting User Events

 

Before we discuss techniques on how to capture events in Dynamics 365, let’s examine some meaningful events. From the client perspective this may include performance counters and metrics, user click events, and navigation. Other data points include geolocations and preferences of users. Luckily, client events are easier to capture, and many tools are readily available, from browser-based developer tools to applications such as Fiddler. Some features of the platform allow for collecting markers, while other events of interest we will have to supplement with custom delivery mechanisms.

For the server side, external integrations and execution context contain identifiers for response codes that may require additional validation. For sandboxed plug-ins and custom workflow activities, we are somewhat limited in what tools we can leverage.

Upcoming articles will detail how to collect and push events of interest to a central area for analytics.

NOTE: The rest of this article will cover collecting and analyzing messages focused on the client. That said, server-side events play a major role and can impact the client experience. I’ll address server-side events in another article pertaining to Azure Application Insights and Model Driven Apps. In the meantime, check out this GitHub repo that includes experimental Plug-In code.



The Monitor Tool

 

The Monitor tool can be launched from the Power Apps Maker Portal. Once launched, the Play Model Driven App button can be pressed to begin a session attached to the tool.


The Monitor tool can also be started by adding “&monitor=true” to the URL of your Model Driven Application.
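
For example, with a placeholder organization and app id, the resulting URL would look something like this:

https://yourorg.crm.dynamics.com/main.aspx?appid=00000000-0000-0000-0000-000000000000&monitor=true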

After consenting to start a session, the Monitor tool will light up rapidly with various messages. Similar to the article “Canvas Driven Apps – The Monitoring Tool”, each row can be further drilled into for investigation.


Jesse Parsons’ article on the Monitor tool, titled “Monitor now supports model-driven apps”, provides a thorough deep dive including sample scenarios.

I highly suggest reviewing and keeping it close by for reference.

Key Performance Indicators

 

Key Performance Indicators represent major lifecycle events within a particular user action, such as loading a form. Consider the image below.

[Image: Monitor tool messages sorted by the KPI category]

By sorting the records on the “KPI” category, these events begin to emerge. The image below shows the major lifecycle events or KPIs for a standard form load within Dynamics 365. Beginning with PageNavigationStart and ending with RenderedEditReady, these events represent the completion of a form load.

[Image: KPI lifecycle events for a form load, from PageNavigationStart to RenderedEditReady]

Scenario: Determining a Form’s Load Impact

 

Consider the scenario of a user logging into the system and opening a lead form for the first time. When performing this action, the form and data have not had a chance to be cached or stored locally, which results in all items needing to be downloaded. This is sometimes referred to as a cold load. Reviewing the timeline event “FullLoad”, we can determine what type of load the form rendered as.

[Image: FullLoad KPI showing LoadType Zero for a cold form load]

Now, once captured, the user opens the fly-out window to choose another lead record using the same form. Again using the “FullLoad” KPI timeline event, we can see the LoadType is now Two.

[Image: FullLoad KPI showing LoadType Two]

Finally, imagine the user needs to navigate back to the original lead record opened on the same form. We can see the LoadType is now Three. Comparing this to the LoadType Zero image above, the entityId is the same.

Here is a sample scenario in full showing the differences in loading new and existing records and how changing a form can impact network requests to the server.

[Animation: FullLoad LoadType and metadata network request example]

Attribution

 

On certain Key Performance Indicators, a property called “Attribution” is included, which represents specific events within a user action. This includes the commands or actions executed within a KPI. For example, during a form load lifecycle, controls are rendered, ribbon rules are evaluated, onload event handlers are executed, etc. The Attribution property will specify and group, in chronological order, which events happened. An added bonus is that the duration is also shown for each event, along with whether any synchronous calls were made. Consider the image below.

Scenario: Locating dynamically added form changes and events

 

[Image: Attribution property for the FullLoad KPI]

The image above shows the Attribution property for the FullLoad KPI. In this image we see three main groups of events: CustomControl (form controls), RuleEvaluation (ribbon rules), and onload (form event handlers). What’s interesting here is that not only is each of these grouped, but the solution they are part of and even solution layering is shown. The RuleEvaluation and onload groups above both show an unmanaged or “Active” layer that contained customizations.

Compare that image with the one below.

[Image: Attribution property for a second form load, for comparison]

A scenario came up during an investigation into an increased duration of a form save. To begin, we went through the user events as normal with the Monitor tool running.

[Image: captured form save events showing tabstatechanged and onsave]

Upon review, you can see additional events occurred: tabstatechanged and onsave. The onsave was expected due to the registered event handler on the form. However, the tabstatechanged was not; it was found to be due to a recent code addition that triggered the setDisplayState of the tab.

// Expand the tab when its name matches; calling setDisplayState fires the tabstatechanged event
if (control.getName() == tabName) {
	control.setDisplayState("expanded");
}

By reviewing the Attribution property we were able to identify what caused the increase of 5 seconds.

[Image: Attribution property identifying the source of the 5-second increase]

Zone Activity and Other Data Points

 

Within Key Performance Indicators are other data points that prove useful when debugging performance-related issues. Latency and Throughput are shown, as well as timings for network-related calls and custom script executions. Within the ZoneActivity property, we see events grouped by web service calls, browser resource timings, and performance observer events. The CustomScriptTime shows the duration of all of the registered event handlers that fired during this particular Key Performance Indicator.


Performance Messages

 

Performance-categorized messages detail potential performance issues that a user may run into. At the time of this writing, I’ve uncovered only synchronous XHR calls being called out, but I anticipate growth here.

Scenario: Locating and Evaluating Synchronous Resource Timings

 

Requests from a Model Driven Application represent outgoing calls from the client to another source. These calls can occur either synchronously, meaning the thread executing the call waits for the response, or asynchronously, meaning the thread continues and listens for the response. It’s preferred to eliminate all synchronous calls to reduce any potential disruption to a user’s experience within the application.

The Monitor tool helps by identifying these requests and calling them out. These call outs can be found in the “Performance” category as shown below.

[Image: Performance message calling out a synchronous XHR request]

Examining the performance entry, we can see the “dataSource” property shows the XHR URL. However, it doesn’t show the source of the call, which is needed to better understand how and why the request was made. For that, we need to find and examine KPIs such as FullLoad or SaveForm.


Here is a gif showing how to use the Monitor tool, coupled with the Browser Developer Tools to locate the line of code that needs to be updated.

[Animation: using the Monitor tool and browser developer tools to locate a synchronous XHR call]

Using these messages, along with outputs from Power Apps Checker, we can begin to uncover gaps in code analysis. In the next article, I’ll cover in depth an approach to help identify and remediate these gaps.

Next Steps

 

This article describes where user and platform events may originate from and how they can be monitored. Gaining insights and understanding into the SaaS version of Dynamics 365 allows us to open up the black box and find answers to some of our questions. Think about how the Monitor tool can be used to find out where API calls may have started and, coupled with other articles in this series, how we can correlate events to provide a true end-to-end monitoring solution. The next article in this series will cover how we can extract and analyze the sessions further.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index

Source: Dynamics 365 Customer Engagement in the Field

Monitoring the Power Platform: Model Driven Apps – Monitor Tool Part 2: Session Consumption and Analytics

August 15, 2020   Microsoft Dynamics CRM

Summary

 

Monitoring Dynamics 365 or Model Driven Applications is not a new concept. Understanding where services are failing, where users are running into errors, and where forms and business processes could be tuned for performance is a key driver for most, if not all, businesses, from small companies to enterprises. Luckily, the Dynamics 365 platform provides many tools to help audit and monitor business and operational events.

This article will cover collecting, querying and analyzing user interface events, specifically from the recently announced Monitor Tool for Model Driven Apps. The previous article covered message data points and how to perceive them. In this go round, we will have a little fun exploring ways to utilize the output sessions. We will discuss how to build robust work items in Azure DevOps with Monitor output. We’ll look at consuming and storing outputs for visualizations and analytics with Kusto queries. Finally, samples will be provided to parse and load session messages into Azure Application Insights and Azure Log Analytics.

The Monitor Tool

 

The Monitor Tool allows users and team members to collect messages and work together in debugging sessions. To begin, the Monitor Tool can be launched from the Power Apps Maker Portal. Once launched, the Play Model Driven App button can be pressed to begin a session attached to the tool.



The Monitor Tool can also be started by adding “&monitor=true” to the URL of your Model Driven Application.

After consenting to start a session, the Monitor Tool will light up rapidly with various messages. Similar to the “Canvas Driven Apps – The Monitoring Tool” article, each row can be further drilled into for investigation.


Jesse Parsons’ article on the Monitor Tool, titled “Monitor now supports model-driven apps”, provides a thorough deep dive including sample scenarios. I highly suggest reviewing it and keeping it close by for reference.

Thoughts on Canvas Apps

 

The Monitor tool works with Power Apps Canvas Driven Apps, as shown in the article “Canvas Driven Apps – The Monitoring Tool”. While this article is focused on Model Driven Apps, remember that these techniques can be utilized for Canvas Apps as well.

Consuming Monitor Sessions

 

Each time the Monitor tool is opened, a new session is created. Within each session are events that describe actions taken within the session as well as other helpful messages. Storing these sessions allows support teams to better understand errors and issues that arise during testing and production workloads. The previous article, “Monitor Tool Part 1: Messages and Scenarios”, covers scenarios that support users can use to better understand the data points delivered in the Monitor Tool.

The Monitor tool can also help analysts who want to learn more about the platform: for instance, user tendencies such as how long users spent on a page and which controls they interacted with in Canvas Driven Apps. For testing, the tool can help with non-functional test strategies like A/B testing. Analyzing performance messages can point to potential code coverage gaps or advise on user experience impact. Network calls can be scrutinized to determine if queries can be optimized or web resources minified. The Monitor tool, in my opinion, really can open up a new view on how the platform is consumed and how users interact with it.

Attaching to Azure DevOps Work Items

 

The Monitor Tool download artifacts work nicely with Azure DevOps Work Items. They can be attached to Bugs, Tasks and even Test Cases when performing exploratory or other types of tests.


Working with test cases within Azure DevOps, Analysts craft work items with use cases and expected outcomes to deliver to Makers and Developers. Specifically with Test Cases, Quality Analysts can leverage the Monitor Tool in conjunction with the Test and Feedback Browser Extension. This allows for robust test cases complete with steps, screenshots, client information and the Monitor Tool output attached. Consider the gif below showing an example of using both the browser extension and the Monitor tool output.

[Animation: attaching Monitor tool output to an Azure DevOps bug]

In that example, we see an analyst has found a performance issue with a Dynamics 365 form. The analyst logged a new bug and included annotations, screenshots, and the Monitor tool output. A developer can be assigned, pick this up, and begin working on the bug. With the Monitor tool output, the developer can now see each call made and review the Attributions within the respective KPI. For more information, refer to the Attribution section within Part 1.


Storing Monitoring Sessions

 

The Monitor tool output comes in two flavors: CSV and JSON. Both make for lightweight storage and are fairly easy to parse, as shown below and later in this article. These files can be attached to emails or stored in a shared location like a network drive.
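
As a quick illustration of how approachable the JSON flavor is, here is a minimal PowerShell sketch that summarizes a downloaded session by message category. The file name is hypothetical, and it assumes the export deserializes to a collection of message objects exposing a messageCategory property (the property used later in this article); the exact shape of the export may vary between Monitor versions.

# Minimal sketch: summarize a downloaded Monitor session by message category.
# Assumes C:\Temp\MonitorSession.json (hypothetical file name) deserializes to a
# collection of message objects that expose a messageCategory property.
$messages = Get-Content -Path "C:\Temp\MonitorSession.json" -Raw | ConvertFrom-Json

# Count messages per category (for example KPI, Performance, Network) to get a
# quick feel for what the session captured.
$messages | Group-Object -Property messageCategory |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Name, Count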

Power BI and CSV Files

 

The csv files downloaded from the Monitor tool can be added to Azure Blob Storage or stored locally and displayed in a Power BI dashboard. This allows analysts and support teams to drill down into sessions to gain further insights. The csv files work both locally with Power BI Desktop and online. The below image shows a sample taken from a Canvas App Monitor session. Additional information and samples can be found in the “Canvas Driven Apps – The Monitoring Tool” article.

[Image: Power BI dashboard built from a Canvas App Monitor session]

Experimenting with Extraction for Analytics

 

Storing outputs in Azure Blob Storage

 

During this writing and the writing of the Monitor Tool for Canvas Apps article, I began to collect outputs from both and store them within Azure Blob Storage containers. There are multiple reasons why I chose to utilize Azure Blob Storage, mainly cost but also interoperability with other services such as Power BI, Azure Data Lake, and Azure Event Grid.

Azure Blob Storage also integrates very well with Azure Logic Apps, Azure Functions, and Power Automate flows. Each of these includes a trigger that fires when a new blob is added to a container, working as a sort of drop folder. This made the choice to use Azure Blob Storage easy for me, but I will also point out that Power Automate flows can also be triggered from OneDrive or SharePoint. This allows Makers to stay within the Microsoft 365 ecosphere and avoid spinning up multiple Azure services if desired.
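
For anyone scripting the drop folder rather than uploading by hand, the sketch below pushes a downloaded session file into a blob container with the Az.Storage module. The storage account name and file name are hypothetical placeholders; the container name matches the one used by the Azure Function trigger shown later in this article.

# Minimal sketch (hypothetical storage account and file names) of dropping a Monitor
# output file into the container watched by the Logic App, Function, or Power Automate
# flow. Requires the Az.Storage module and an authenticated session (Connect-AzAccount).
Import-Module Az.Storage

$ctx = New-AzStorageContext -StorageAccountName "monitortoolstore" -UseConnectedAccount

Set-AzStorageBlobContent -File "C:\Temp\MonitorSession.json" `
    -Container "powerapps-monitortool-outputs" `
    -Blob "MonitorSession.json" `
    -Context $ctx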

Extracting to Log Stores

 

Extracting the messages within the Monitor tool to a log store allows analysts and support teams to parse and query the sessions. How we want to store these messages will determine which services we leverage.

Choosing Azure Application Insights

 

If we want distributed transaction tracing, I’d suggest Azure Application Insights. Application Insights will allow for pushing messages to specialized tables that feed dashboards and features native to the service, such as End to End Transactions and Exception parsing.

Network messages can be stored in the requests or dependencies tables, which are designed, along with page views, to visualize a typical web application’s interactions. Fetch network messages, representing calls to an API, fit nicely into the requests table as shown below:

[Image: fetch network messages in the Application Insights requests table]

Dependencies on the other hand can represent dependent web resources.


Using Azure Function to serve data

 

Azure Application Insights works well with microservices built using Azure Functions. A benefit of Azure Functions is the ability to have a function fire on creation of a blob within a container. For more information and a helpful quick start for working with blob-triggered Azure Functions, check out this reference. Below is the method signature of a sample Azure Function:

[FunctionName("BlobTriggeredMonitorToolToApplicationInsights")]
public void Run([BlobTrigger("powerapps-monitortool-outputs/{name}", Connection = "")]Stream myBlob, string name, ILogger log)

In the sample provided, you’ll see that the function takes the JSON payload, parses it, and determines how to add it to Azure Application Insights. Depending on the messageCategory property, it will funnel messages to custom events, requests, or dependencies.

As always, review the Telemetry Client for ideas and techniques to enrich messages sent to Azure Application Insights. Also, if desired, review how to sample messages to reduce noise and keep costs down.

Choosing Azure Log Analytics

 

Azure Log Analytics allows for custom tables, which provide the greatest flexibility. The Data Collector API has a native connector to Power Automate that allows Makers to quickly deliver Monitor messages with a no-code or low-code solution. Power Automate and Azure Logic Apps both offer triggers on creation of a blob, providing flexibility on which service to choose.

Using Power Automate to serve data

 

To work with Power Automate, begin by creating a new Power Automate flow. Set the trigger type to use the Azure Blob Storage trigger action “When a blob is added or modified”. If a connection hasn’t been established, create one. Once created, locate the blob container to monitor. This container will be our drop folder.

The trigger is only designed to tell us a new blob is available, so the next step is to get the blob content. Using the FileLocator property, we can now get the serialized messages and session information and deliver them to Log Analytics.

Within Power Automate, search for the “Send Data” action. Set the JSON Request body field to be the File Content value from the “Get blob content” action.


The advantage here is that with three actions and no code written, I am able to listen to a drop folder for output files and send them to Azure Log Analytics. The native JSON serialization option from the Monitor tool really serves us well here, allowing a seamless insertion into our custom table.


Ideally we would expand the Power Automate flow to parse the JSON and iterate through messages to allow for individual entries into the table.


Just remember the content may be encoded as “octet-stream” and will need to be converted.

Sample Kusto Queries

 

Below are sample Kusto queries designed for Azure Application Insights.

General Messages

//Review Performance Messages
customEvents 
| extend cd=parse_json(customDimensions)
| where cd.messageCategory == "Performance"
| project session_Id, name, cd.dataSource

Browser Requests

//Request Method, ResultCode, Duration and Sync
requests 
| extend cd=parse_json(customDimensions)
| extend data=parse_json(tostring(cd.data))
| project session_Id, name, data.method, resultCode, data.name, data.duration, data.sync, cd.fileName

//Request Method, ResultCode, Duration and Resource Timings
requests 
| extend cd=parse_json(customDimensions)
| extend data=parse_json(tostring(cd.data))
| project session_Id, name, data.method, resultCode, data.name, data.duration,
data.startTime, 
data.fetchStart,
data.domainLookupStart,
data.connectStart,
data.requestStart,
data.responseStart,
data.responseEnd

Review the documentation on Resource Timings located here to better understand what these markers are derived from.


Key Performance Indicators

pageViews 
| extend cd=parse_json(customDimensions)
| extend cm=parse_json(customMeasurements)
| extend data=parse_json(tostring(cd.data))
| extend attribution=parse_json(tostring(data.Attribution))
| where name=="FullLoad"
| order by tostring(data.FirstInteractionTime), toint(cm.duration)
| project session_Id, name, data.FirstInteractionTime,cm.duration, attribution

Sample Code

 

Azure Function and Azure Application Insights – Monitor Tool Extractor

Power Automate and Azure Log Analytics

Optional Azure Application Insights Custom Connector

Next Steps

 

In this article we have covered how to work with the Monitor tool output files. Viewing within Power BI dashboards, attaching to DevOps work items, and storing in Azure-backed log stores are all possibilities. Sample code and Kusto queries have also been provided to help you get started.

This article showcases use cases and strategies for working with the Monitor tool but really only represents the tip of the iceberg. Continue collecting, examining and churning the output for deep insight into user and platform trends.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

 

Monitoring the Power Platform: Introduction and Index

Source: Dynamics 365 Customer Engagement in the Field

#PowerBI – External tool to connect Excel to the current PBIX file

July 28, 2020   Self-Service BI

In the July update of Power BI Desktop, we can now add external tools to the ribbon.

If you install the latest versions of Tabular Editor, DAX Studio, and the ALM Toolkit, these will be added as tools in the ribbon.

But you can also build and add your own tools.

David Eldersveld (link) has written an excellent series of blogposts about using Python as an external tool – link to part one – and this inspired me to give it a go as well.

The official documentation can be found here.

Short description of what an external tool really is

An external tool will point to an exe file and you can supply the call to the exe file with arguments including a reference to the %server% and %database%.

The information about the external tool needs to be stored in

C:\Program Files (x86)\Common Files\Microsoft Shared\Power BI Desktop\External Tools

and the file must be named “<tool name>.pbitool.json”.


This will give me these buttons in my Power BI Desktop

[Image: external tool buttons in the Power BI Desktop ribbon]

My idea to an external tool

When I build models, I use Excel pivot tables to test and validate my measures, and typically I would use DAX Studio to find the localhost port to set up a connection to the currently open PBIX file.

So, I thought it would be nice to just click a button in Power BI Desktop to open a new Excel workbook with a connection to the current model. That would save me a couple of clicks.

If I could create an ODC file when clicking the button in Power BI and then open that ODC file (Excel is the default application for opening these), my idea would work.

I have previously used Rui Romano’s (link) excellent Power BI PowerShell tools – link to GitHub and link to his blog post about Analyze in Excel – so why not use PowerShell to do this?

Here is a guide to building your own version.

Step 1 – Create a PowerShell script

I created a PowerShell file called ConnectToExcel.ps1 and saved it in the local folder C:\Temp – you can save it wherever you want it stored. (A link to the sample files is at the end of this post.)

The script is a modified version of Rui’s function Export-PBIDesktopODCConnection – thank you so much for these.

Function ET-PBIDesktopODCConnection
{
    # Modified from https://github.com/DevScope/powerbi-powershell-modules/blob/master/Modules/PowerBIPS.Tools/PowerBIPS.Tools.psm1
    # (the function Export-PBIDesktopODCConnection)
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $false)]
        [string]
        $port,

        [Parameter(Mandatory = $false)]
        [string]
        $path
    )

    # Build the ODC (Office Data Connection) document pointing at the local PBIX instance
    $odcXml = "<html xmlns:o=""urn:schemas-microsoft-com:office:office"" xmlns=""http://www.w3.org/TR/REC-html40""><head><meta http-equiv=Content-Type content=""text/x-ms-odc; charset=utf-8""><meta name=ProgId content=ODC.Cube><meta name=SourceType content=OLEDB><meta name=Catalog content=164af183-2454-4f45-964a-c200f51bcd59><meta name=Table content=Model><title>PBIDesktop Model</title><xml id=docprops><o:DocumentProperties xmlns:o=""urn:schemas-microsoft-com:office:office"" xmlns=""http://www.w3.org/TR/REC-html40""> <o:Name>PBIDesktop Model</o:Name> </o:DocumentProperties></xml><xml id=msodc><odc:OfficeDataConnection xmlns:odc=""urn:schemas-microsoft-com:office:odc"" xmlns=""http://www.w3.org/TR/REC-html40""> <odc:Connection odc:Type=""OLEDB""><odc:ConnectionString>Provider=MSOLAP;Integrated Security=ClaimsToken;Data Source=$port;MDX Compatibility= 1; MDX Missing Member Mode= Error; Safety Options= 2; Update Isolation Level= 2; Locale Identifier= 1033</odc:ConnectionString><odc:CommandType>Cube</odc:CommandType> <odc:CommandText>Model</odc:CommandText> </odc:Connection> </odc:OfficeDataConnection></xml></head></html>"

    # The location of the ODC file to be opened
    $odcFile = "$path\excelconnector.odc"
    $odcXml | Out-File $odcFile -Force

    # Create an Excel.Application object via the COM interface
    $objExcel = New-Object -ComObject Excel.Application

    # Make Excel visible
    $objExcel.Visible = $true

    # Open the ODC file, which gives a workbook connected to the model
    $WorkBook = $objExcel.Workbooks.Open($odcFile)
}

# Echo the server argument passed in from Power BI Desktop, then run the function
Write-Output $args[0]

ET-PBIDesktopODCConnection -port $args[0] -path "C:\Temp"

The script contains a function that creates an ODC file whose data source and location are determined by two arguments to the function – port and path. The script also starts Excel and opens the ODC file.

The script also references

$args[0]

This will end up being the value localhost:xxxxx that is provided when we click the external tool button in Power BI Desktop – it will make more sense after step 2.

Notice that I have hardcoded the path where the ODC file will be stored to C:\Temp.
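Before wiring the script up in Power BI Desktop, you can test it manually from a PowerShell prompt. This is a sketch rather than part of the original post, and the port number below is only illustrative – use the one DAX Studio reports for your currently open PBIX file:

# Manual test: pass the local Analysis Services instance as the first argument,
# exactly as Power BI Desktop will later do via the %server% reference
powershell.exe -ExecutionPolicy Bypass -File C:\Temp\ConnectToExcel.ps1 "localhost:52510"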

Step 2 – Create a .pbitool.json file

The pbitool.json file is relatively simple.

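Here is a minimal sketch of what the file can contain, expressed as a small PowerShell snippet that writes it to the External Tools folder. The display name and the (truncated) icon string are my own illustrative choices, not values from the original screenshot:

# Build the tool definition and write it as OpenInExcel.pbitool.json
# (run from an elevated prompt, since the target folder normally requires admin rights)
$tool = [ordered]@{
    name        = "Open in Excel"                                                  # text shown in the ribbon
    description = "Open a new Excel workbook connected to the current PBIX model"  # tooltip text
    path        = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"      # exe to launch
    arguments   = "C:\Temp\ConnectToExcel.ps1 %server% %database%"                 # arguments passed to the exe
    iconData    = "data:image/png;base64,iVBORw0..."                               # truncated Base64 icon string
}

$toolsFolder = "C:\Program Files (x86)\Common Files\Microsoft Shared\Power BI Desktop\External Tools"
$tool | ConvertTo-Json | Out-File (Join-Path $toolsFolder "OpenInExcel.pbitool.json") -Encoding utf8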

Name is the text that will appear in the ribbon.

Description is the tooltip that appears in Power BI Desktop according to the documentation – but it doesn’t work at the moment.

Path is the reference to the exe file you want to activate – and only the exe file.

Arguments are the arguments that you want to pass to the exe file – and here we have the two built-in references %server% and %database%. Arguments are optional, so we could just start Excel or any other program if we wanted.

IconData is the icon that you want to appear in the ribbon – I found an icon via Google and then used https://www.base64-image.de/ to convert it to a Base64 string.

In this tool we use powershell.exe, which we call with arguments: we specify the script file that we want executed and pass the extra server and database arguments as well. In my script I only use the %server% reference, which gives me the server name and port number of the local instance.

This means that when the button is clicked in Power BI Desktop, it will execute

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe C:\Temp\ConnectToExcel.ps1 localhost:xxxxx databasename

The localhost:xxxxx value is the first argument provided, and it can then be referred to in the script by using $args[0].

The file must then be stored in C:\Program Files (x86)\Common Files\Microsoft Shared\Power BI Desktop\External Tools and in my case I called it OpenInExcel.pbitool.json.

Depending on your privileges on your computer you might be warned that you need administrative rights to save files in that location.

And if you save the script file elsewhere, you need to modify the pbitool.json file accordingly.

Step 3 – Test it

Now we are ready to restart Power BI Desktop – and the new button does appear in the ribbon.

Next, open a PBIX file and click the button. This will open a Windows PowerShell window and write the server information.

In the background it also opens Excel and the ODC file, which results in a PivotTable connected to the local instance – in this case localhost:52510.

The files

You can download the files needed from here – https://github.com/donsvensen/erikspbiexcelconnector

Feedback

I think the use of PowerShell opens up a lot of interesting scenarios for external tools, and I look forward to seeing what other external tools appear in the community.

Please let me know what you think and if you find it useful.


Erik Svensen – Blog about Power BI, Power Apps, Power Query


RetrieveGAN AI tool combines scene fragments to create new images

July 22, 2020   Big Data


Researchers at Google, the University of California, Merced, and Yonsei University developed an AI system — RetrieveGAN — that takes scene descriptions and learns to select compatible patches from other images to create entirely new images. They claim it could be beneficial for certain kinds of media and image editing, particularly in domains where artists combine two or more images to capture the most appealing elements of each.

AI and machine learning hold incredible promise for image editing, if emerging research is any indication. Engineers at Nvidia recently demoed a system — GauGAN — that creates convincingly lifelike landscape photos from whole cloth. Microsoft scientists proposed a framework capable of producing images and storyboards from natural language captions. And last June, the MIT-IBM Watson AI Lab launched a tool — GAN Paint Studio — that lets users upload images and edit the appearance of pictured buildings, flora, and fixtures.

By contrast, RetrieveGAN captures the relationships among objects in existing images and leverages this to create synthetic (but convincing) scenescapes. Given a scene graph description — a description of objects in a scene and their relationships — it encodes the graph in a computationally-friendly way, looks for aesthetically similar patches from other images, and grafts one or more of the patches onto the original image.


The researchers trained and evaluated RetrieveGAN on images from the open source COCO-Stuff and Visual Genome data sets. In experiments, they found that it was “significantly” better at isolating and extracting objects from scenes on at least one benchmark compared with several baseline systems. In a subsequent user study where volunteers were given two sets of patches selected by RetrieveGAN and other models and asked the question “Which set of patches are more mutually compatible and more likely to coexist in the same image?,” the researchers report that RetrieveGAN’s patches came out on top the majority of the time.

“In this work, we present a differentiable retrieval module to aid the image synthesis from the scene description. Through the iterative process, the retrieval module selects mutually compatible patches as reference for the generation. Moreover, the differentiable property enables the module to learn a better embedding function jointly with the image generation process,” the researchers wrote. “The proposed approach points out a new research direction in the content creation field. As the retrieval module is differentiable, it can be trained with the generation or manipulation models to learn to select real reference patches that improves the quality.”

Although the researchers don’t mention it, there’s a real possibility their tool could be used to create deepfakes, or synthetic media in which a person in an existing image is replaced with someone else’s likeness. Fortunately, a number of companies have published corpora in the hopes the research community will pioneer detection methods. Facebook — along with Amazon Web Services (AWS), the Partnership on AI, and academics from a number of universities — is spearheading the Deepfake Detection Challenge. In September 2019, Google released a collection of visual deepfakes as part of the FaceForensics benchmark, which was co-created by the Technical University of Munich and the University Federico II of Naples. More recently, researchers from SenseTime partnered with Nanyang Technological University in Singapore to design DeeperForensics-1.0, a data set for face forgery detection that they claim is the largest of its kind.


Big Data – VentureBeat
