Tag Archives: analytics

Kleiner Perkins leads $15 million investment in Incorta’s data analytics software


Incorta, which provides software to analyze data in real time, today announced that it has secured an additional $15 million in funding. Kleiner Perkins Caufield & Byers led the round, with participation from existing investors GV and Ron Wohl, Oracle’s former executive vice president. This new round of funding comes on the heels of a $10 million investment announced last March.

The San Mateo, California-based startup provides an analytics platform that enables real-time aggregation of large, complex sets of business data, such as for enterprise resource planning (ERP), eliminating the need to painstakingly prepare the data for analysis.

“With Incorta, there is no additional need to put data into a traditional data warehouse ahead of time,” wrote Incorta cofounder and CEO Osama Elkady, in an email to VentureBeat. “This reduces the time to build analytic applications from months to days.”

The startup claims that over the past year it has increased revenue more than 300 times and signed new customers, including Shutterfly.

“Incorta customers who used Exadata or Netezza appliances, or data engines like Redshift or Vertica, enjoyed performance gains of 50 to 100 times when they switched to Incorta, even on the most complex requests,” wrote Elkady.

The enterprise offering is licensed as an annual subscription on a per user basis and can be deployed on Google Cloud, Microsoft Azure, and Amazon Web Services (AWS). The startup is in talks with other cloud providers, according to Elkady.

Today’s fresh injection of capital will be used to further product development and increase sales and marketing. “It will also enable us to more quickly realize our vision for a third-party marketplace where vendors and content providers can build and distribute applications powered by Incorta’s platform,” wrote Elkady.

Founded in 2014, Incorta has raised a total of $27.5 million to date and currently has 70 employees.



Big Data – VentureBeat

TIBCO Named a Leader in The Forrester Wave™: Streaming Analytics, Q3 2017


Forrester has named TIBCO a Leader in The Forrester Wave™: Streaming Analytics, Q3 2017, among the thirteen vendors evaluated. In the Strategy category, we received a 4.9 out of a possible 5 points.

TIBCO StreamBase is also recognized for “[unifying] real-time analytics” with a “full-featured streaming analytics solution that integrates with applications to automate actions and also offers Live DataMart to create a real-time visual command center.”

Today’s organizations don’t just want streaming analytics or analytics at rest—they want to operationalize analytic insights. That means capturing streams (both the raw input and the resulting predictions), analyzing them to generate new insights, and then operationalizing those insights in turn. Streaming analytics customers will be more successful—and more satisfied in the long term—completing this full analytics round trip, and TIBCO has the tools to do it.

Learn more about TIBCO StreamBase here.

Download a complimentary copy of the report here.

TIBCO is focused on insights. Not the garden variety insights that lay dormant and unactionable on someone’s desk. Rather, TIBCO focuses on perishable insights that companies must act upon immediately to retain customers, remove friction from business processes, and prevent logistics chains from stopping cold. —Excerpt from The Forrester Wave: Streaming Analytics, Q3 2017


The TIBCO Blog

Leveraging Open Source Technologies in Analytics Deployments


Many organizations are eagerly hiring new data scientists fresh out of college. Many of those millennial data scientists have been educated in software development techniques that move away from reliance on traditional and commercial development platforms, toward adoption of open source technologies. Typically, these individuals arrive with skills in R, Python, or other open-source technologies.

Employers, as well as enterprise software vendors like Statistica, are choosing to support the use of these technologies rather than forcing new data scientists (who are scarce and highly valued) to adopt commercial tools. People with R, Python, C#, or other language skills can integrate their scripts into the Statistica workspace.

This type of framework allows a simple, one-click deployment. Deploying an R script by itself can be complex and difficult, although there are new, high-level technologies that simplify the process. Statistica has chosen to allow integration of the script directly into a workflow. The developer can then deploy that script into the Statistica enterprise platform and leverage the version control, auditing, digital signatures, and all the other tools needed to meet a company’s regulatory requirements.

That’s a key advantage: the ability to incorporate an open source script into a regulated environment with security and version control, without jeopardizing regulatory compliance. This capability is not entirely unique—some other, relatively new platforms provide it to a degree. But it has been feasible in the Statistica platform for a number of years and is extensively proven in production deployments.

The capability came out of Statistica’s experience in the pharmaceutical industry, one of the most regulated of all commercial environments. Governments require extensive documentation and validation of virtually every business process involved in producing drugs for human consumption. This includes every aspect of data collection and reporting.

We have taken what we learned in this rigorously constrained context and applied it to analytics assets in general. That body of experience is a differentiator among analytics platforms, as is the way scripts are incorporated into the Statistica workspace.

Within a workspace, we can create a script, and pull in the packages and components from the open source community. This enables Statistica adopters to leverage the collective intelligence of data scientists throughout the world, and contribute to the development of these open source technologies. This is in character with the open source community, in which developers not only contribute new code but inspect, test, criticize, and fine tune each other’s work. Our users are extending the capabilities of Statistica through these collectively developed packages.

The user creates a node in the workspace that can be parameterized. The developer can create an R script, and we can create a set of parameters attached to that node and then deploy that node as a reusable template. That template can be used like any other node within the workspace by a non-developer business user—what we also call a “citizen data scientist.”

We can import the data, merge, translate, create variables, and so on. If we want to create a subset of data, we can deploy an R model developed specifically for this purpose by a seasoned data scientist who has, in effect, added it to a library of gadgets. A business user can drop one of these into the workspace, change the parameters, and get the standard output, as well as any downstream data the script may produce.
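The parameterized-node pattern described above can be sketched in plain Python. This is a minimal illustration of the idea (wrapping a script with declared parameters so a non-developer can reuse it as a template); the function and field names are hypothetical, not Statistica’s actual API:

```python
# Minimal sketch of a parameterized, reusable "node": a script wrapped with
# declared parameters so a non-developer can rerun it with new settings.
# All names here are illustrative, not Statistica's actual API.

def make_subset_node(column, threshold):
    """Return a reusable node that keeps rows where `column` >= `threshold`."""
    def node(rows):
        # Each row is a dict; downstream nodes receive the filtered rows.
        return [r for r in rows if r[column] >= threshold]
    return node

# A business user drops the template into a workspace and changes only parameters:
high_value = make_subset_node(column="revenue", threshold=1000)

data = [{"customer": "A", "revenue": 500},
        {"customer": "B", "revenue": 2500}]
print(high_value(data))  # [{'customer': 'B', 'revenue': 2500}]
```

The same closure could wrap a call out to an R script; the point is that the parameters, not the script body, are what the "citizen data scientist" touches.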

From a business perspective, committing to open source adoption is attractive:

  • Because it’s free, the adopter is spared an additional license cost; and
  • Because it opens up a world of new capabilities. There are new open-source packages being developed every day, and some will have quite compelling functionality.

There are, of course, uncertainties in adopting new code from an unregulated developer community. Because Statistica sells into highly regulated markets, we are audited annually to ensure that our code meets the requirements for regulatory compliance. Open-source code does not undergo that kind of audit, and that can introduce certain risks. But the platform enables deployment of the code into a rigorously controlled operational environment, mitigating this risk.

At least as important as the risk management element, the ability to promote the adoption of open-source scripting provides an attractive work environment for the current generation of data scientists. Given the intense competition for recent graduates, this can be a powerful incentive in itself for employers.

Find out more about TIBCO Statistica’s capabilities.


The TIBCO Blog

Looking for HR – Or Any – Job Security? Try Analytics

Last August, a woman arrived at a Reno, Nevada, hospital and told the attending doctors that she had recently returned from an extended trip to India, where she had broken her right thighbone two years earlier. The woman, who was in her 70s, had subsequently developed an infection in her thigh and hip for which she was hospitalized in India several times. The Reno doctors recognized that the infection was serious—and the visit to India, where antibiotic-resistant bacteria run rampant, raised red flags.

When none of the 14 antibiotics the physicians used to treat the woman worked, they sent a sample of the bacterium to the U.S. Centers for Disease Control (CDC) for testing. The CDC confirmed the doctors’ worst fears: the woman had a class of microbe called carbapenem-resistant Enterobacteriaceae (CRE). Carbapenems are a powerful class of antibiotics used as last-resort treatment for multidrug-resistant infections. The CDC further found that, in this patient’s case, the pathogen was impervious to all 26 antibiotics approved by the U.S. Food and Drug Administration (FDA).

In other words, there was no cure.

This is just the latest alarming development signaling the end of the road for antibiotics as we know them. In September, the woman died from septic shock, in which an infection takes over and shuts down the body’s systems, according to the CDC’s Morbidity and Mortality Weekly Report.

Other antibiotic options, had they been available, might have saved the Nevada woman. But the solution to the larger problem won’t be a new drug. It will have to be an entirely new approach to the diagnosis of infectious disease, to the use of antibiotics, and to the monitoring of antimicrobial resistance (AMR)—all enabled by new technology.

But that new technology is not being implemented fast enough to prevent what former CDC director Tom Frieden has nicknamed “nightmare bacteria.” And the nightmare is becoming scarier by the year. A 2014 British study calculated that 700,000 people die globally each year because of AMR. By 2050, the global toll of antibiotic resistance could grow to 10 million deaths and US$100 trillion a year, according to a 2014 estimate. And the rate of AMR is growing exponentially, thanks to the speed with which humans serving as hosts for these nasty bugs can move among healthcare facilities—or countries. In the United States, for example, CRE had been seen only in North Carolina in 2000; today it’s nationwide.

Abuse and overuse of antibiotics in healthcare and livestock production have enabled bacteria to both mutate and acquire resistant genes from other organisms, resulting in truly pan-drug resistant organisms. As ever-more powerful superbugs continue to proliferate, we are potentially facing the deadliest and most costly human-made catastrophe in modern times.

“Without urgent, coordinated action by many stakeholders, the world is headed for a post-antibiotic era, in which common infections and minor injuries which have been treatable for decades can once again kill,” said Dr. Keiji Fukuda, assistant director-general for health security for the World Health Organization (WHO).

Even if new antibiotics could solve the problem, there are obstacles to their development. For one thing, antibiotics have complex molecular structures, which slows the discovery process. Further, they aren’t terribly lucrative for pharmaceutical manufacturers: public health concerns call for new antimicrobials to be financially accessible to patients and used conservatively precisely because of the AMR issue, which reduces the financial incentives to create new compounds. The last entirely new class of antibiotics was introduced 30 years ago. Finally, bacteria will develop resistance to new antibiotics as well if we don’t adopt new approaches to using them.

Technology can play the lead role in heading off this disaster. Vast amounts of data from multiple sources are required for better decision making at all points in the process, from tracking or predicting antibiotic-resistant disease outbreaks to speeding the potential discovery of new antibiotic compounds. However, microbes will quickly adapt and resist new medications, too, if we don’t also employ systems that help doctors diagnose and treat infection in a more targeted and judicious way.

Indeed, digital tools can help in all four actions that the CDC recommends for combating AMR: preventing infections and their spread, tracking resistance patterns, improving antibiotic use, and developing new diagnostics and treatment.

Meanwhile, individuals who understand both the complexities of AMR and the value of technologies like machine learning, human-computer interaction (HCI), and mobile applications are working to develop and advocate for solutions that could save millions of lives.


Keeping an Eye Out for Outbreaks

Like others who are leading the fight against AMR, Dr. Steven Solomon has no illusions about the difficulty of the challenge. “It is the single most complex problem in all of medicine and public health—far outpacing the complexity and the difficulty of any other problem that we face,” says Solomon, who is a global health consultant and former director of the CDC’s Office of Antimicrobial Resistance.

Solomon wants to take the battle against AMR beyond the laboratory. In his view, surveillance—tracking and analyzing various data on AMR—is critical, particularly given how quickly and widely it spreads. But surveillance efforts are currently fraught with shortcomings. The available data is fragmented and often not comparable. Hospitals fail to collect the representative samples necessary for surveillance analytics, collecting data only on those patients who experience resistance and not on those who get better. Laboratories use a wide variety of testing methods, and reporting is not always consistent or complete.

Surveillance can serve as an early warning system. But weaknesses in these systems have caused public health officials to consistently underestimate the impact of AMR in loss of lives and financial costs. That’s why improving surveillance must be a top priority, says Solomon, who previously served as chair of the U.S. Federal Interagency Task Force on AMR and has been tracking the advance of AMR since he joined the U.S. Public Health Service in 1981.

A Collaborative Diagnosis

Ineffective surveillance has also contributed to huge growth in the use of antibiotics when they aren’t warranted. Strong patient demand and financial incentives for prescribing physicians are blamed for antibiotics abuse in China. India has become the largest consumer of antibiotics on the planet, in part because they are prescribed or sold for diarrheal diseases and upper respiratory infections for which they have limited value. And many countries allow individuals to purchase antibiotics over the counter, exacerbating misuse and overuse.

In the United States, antibiotics are improperly prescribed 50% of the time, according to CDC estimates. One study of adult patients visiting U.S. doctors to treat respiratory problems found that more than two-thirds of antibiotics were prescribed for conditions that were not infections at all or for infections caused by viruses—for which an antibiotic would do nothing. That’s 27 million courses of antibiotics wasted a year—just for respiratory problems—in the United States alone.

And even in countries where there are national guidelines for prescribing antibiotics, those guidelines aren’t always followed. A study published in medical journal Family Practice showed that Swedish doctors, both those trained in Sweden and those trained abroad, inconsistently followed rules for prescribing antibiotics.

Solomon strongly believes that, worldwide, doctors need to expand their use of technology in their offices or at the bedside to guide them through a more rational approach to antibiotic use. Doctors have traditionally been reluctant to adopt digital technologies, but Solomon thinks that the AMR crisis could change that. New digital tools could help doctors and hospitals integrate guidelines for optimal antibiotic prescribing into their everyday treatment routines.

“Human-computer interactions are critical, as the amount of information available on antibiotic resistance far exceeds the ability of humans to process it,” says Solomon. “It offers the possibility of greatly enhancing the utility of computer-assisted physician order entry (CPOE), combined with clinical decision support.” Healthcare facilities could embed relevant information and protocols at the point of care, guiding the physician through diagnosis and prescription and, as a byproduct, facilitating the collection and reporting of antibiotic use.


Cincinnati Children’s Hospital’s antibiotic stewardship division has deployed a software program that gathers information from electronic medical records, order entries, computerized laboratory and pathology reports, and more. The system measures baseline antimicrobial use, dosing, duration, costs, and use patterns. It also analyzes bacteria and trends in their susceptibilities and helps with clinical decision making and prescription choices. The goal, says Dr. David Haslam, who heads the program, is to decrease the use of “big gun” super antibiotics in favor of more targeted treatment.

While this approach is not yet widespread, there is consensus that incorporating such clinical-decision support into electronic health records will help improve quality of care, contain costs, and reduce overtreatment in healthcare overall—not just in AMR. A 2013 randomized clinical trial found that doctors who used decision-support tools were significantly less likely to order antibiotics than those in the control group and prescribed 50% fewer broad-spectrum antibiotics.

Putting mobile devices into doctors’ hands could also help them accept decision support, believes Solomon. Last summer, Scotland’s National Health Service developed an antimicrobial companion app to give practitioners nationwide mobile access to clinical guidance, as well as an audit tool to support boards in gathering data for local and national use.

“The immediacy and the consistency of the input to physicians at the time of ordering antibiotics may significantly help address the problem of overprescribing in ways that less-immediate interventions have failed to do,” Solomon says. In addition, handheld devices with so-called lab-on-a-chip technology could be used to test clinical specimens at the bedside and transmit the data across cellular or satellite networks in areas where infrastructure is more limited.

Artificial intelligence (AI) and machine learning can also become invaluable technology collaborators to help doctors more precisely diagnose and treat infection. In such a system, “the physician and the AI program are really ‘co-prescribing,’” says Solomon. “The AI can handle so much more information than the physician and make recommendations that can incorporate more input on the type of infection, the patient’s physiologic status and history, and resistance patterns of recent isolates in that ward, in that hospital, and in the community.”

Speed Is Everything

Growing bacteria in a dish has never appealed to Dr. James Davis, a computational biologist with joint appointments at Argonne National Laboratory and the University of Chicago Computation Institute. Part of a growing breed of computational biologists, Davis chose a PhD advisor in 2004 who was steeped in bioinformatics technology “because you could see that things were starting to change,” he says. He was one of the first in his microbiology department to submit a completely “dry” dissertation—that is, one that was all digital, with nothing grown in a lab.

Upon graduation, Davis wanted to see if it was possible to predict whether an organism would be susceptible or resistant to a given antibiotic, leading him to explore the potential of machine learning to predict AMR.


As the availability of cheap computing power has gone up and the cost of genome sequencing has come down, it has become possible to sequence a pathogen sample in order to detect its resistance mechanisms. This could allow doctors to identify the nature of an infection in minutes instead of hours or days, says Davis.

Davis is part of a team creating a giant database of bacterial genomes with AMR metadata for the Pathosystems Resource Integration Center (PATRIC), funded by the U.S. National Institute of Allergy and Infectious Diseases to collect data on priority pathogens, such as tuberculosis and gonorrhea.

Because the current inability to identify microbes quickly is one of the biggest roadblocks to making an accurate diagnosis, the team’s work is critically important. The standard method for identifying drug resistance is to take a sample from a wound, blood, or urine and expose the resident bacteria to various antibiotics. If the bacterial colony continues to divide and thrive despite the presence of a normally effective drug, it indicates resistance. The process typically takes between 16 and 20 hours, itself an inordinate amount of time in matters of life and death. For certain strains of antibiotic-resistant tuberculosis, though, such testing can take a week. While physicians are waiting for test results, they often prescribe broad-spectrum antibiotics or make a best guess about what drug will work based on their knowledge of what’s happening in their hospital, “and in the meantime, you either get better,” says Davis, “or you don’t.”

At PATRIC, researchers are using machine-learning classifiers to identify regions of the genome involved in antibiotic resistance that could form the foundation for a “laboratory free” process for predicting resistance. Being able to identify the genetic mechanisms of AMR and predict the behavior of bacterial pathogens without petri dishes could inform clinical decision making and improve reaction time. Thus far, the researchers have developed machine-learning classifiers for identifying antibiotic resistance in Acinetobacter baumannii (a big player in hospital-acquired infection), methicillin-resistant Staphylococcus aureus (a.k.a. MRSA, a worldwide problem), and Streptococcus pneumoniae (a leading cause of bacterial meningitis), with accuracies ranging from 88% to 99%.
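The laboratory-free prediction described above can be sketched with scikit-learn: represent each genome by its k-mer counts and train a classifier on isolates labeled resistant or susceptible. The sequences, labels, and model choice below are toy illustrations of the technique, not PATRIC’s actual pipeline:

```python
# Toy sketch: genome sequence -> resistance prediction via k-mer features.
# Real pipelines use full genomes, much larger k, and carefully tuned models;
# these four short sequences and labels are invented for illustration only.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

def kmer_counts(seq, k=3):
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Hypothetical training set: isolates labeled resistant (1) / susceptible (0).
genomes = ["ATCGGATTACAGG", "ATCGGATTACAGC", "GGCCTTAAGGCCA", "GGCCTTAAGGCCT"]
labels = [1, 1, 0, 0]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(kmer_counts(g) for g in genomes)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# A new isolate is scored in milliseconds, with no petri dish required.
new = vec.transform([kmer_counts("ATCGGATTACAGT")])
print(clf.predict(new))
```

Because the classifier only needs sequence data, a prediction like this can in principle come back as soon as sequencing finishes, rather than after 16 to 20 hours of culturing.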

Houston Methodist Hospital, which uses the PATRIC database, is researching multidrug-resistant bacteria, specifically MRSA. Not only does resistance increase the cost of care, but people with MRSA are 64% more likely to die than people with a nonresistant form of the infection, according to WHO. Houston Methodist is investigating the molecular genetic causes of drug resistance in MRSA in order to identify new treatment approaches and help develop novel antimicrobial agents.


The Hunt for a New Class of Antibiotics

There are antibiotic-resistant bacteria, and then there’s Clostridium difficile—a.k.a. C. difficile—a bacterium that attacks the intestines even in young and healthy patients in hospitals after the use of antibiotics.

It is because of C. difficile that Dr. L. Clifford McDonald jumped into the AMR fight. The epidemiologist was finishing his work analyzing the spread of SARS in Toronto hospitals in 2004 when he turned his attention to C. difficile, convinced that the bacteria would become more common and more deadly. He was right, and today he’s at the forefront of treating the infection and preventing the spread of AMR as senior advisor for science and integrity in the CDC’s Division of Healthcare Quality Promotion. “[AMR] is an area that we’re funding heavily…insofar as the CDC budget can fund anything heavily,” says McDonald, whose group has awarded $14 million in contracts for innovative anti-AMR approaches.

Developing new antibiotics is a major part of the AMR battle. The majority of new antibiotics developed in recent years have been variations of existing drug classes. It’s been three decades since the last new class of antibiotics was introduced. Less than 5% of venture capital in pharmaceutical R&D is focused on antimicrobial development. A 2008 study found that less than 10% of the 167 antibiotics in development at the time had a new “mechanism of action” to deal with multidrug resistance. “The low-hanging fruit [of antibiotic development] has been picked,” noted a WHO report.

Researchers will have to dig much deeper to develop novel medicines. Machine learning could help drug developers sort through much larger data sets and go about the capital-intensive drug development process in a more prescriptive fashion, synthesizing those molecules most likely to have an impact.

McDonald believes that it will become easier to find new antibiotics if we gain a better understanding of the communities of bacteria living in each of us—as many as 1,000 different types of microbes live in our intestines, for example. Disruption to those microbial communities—our “microbiome”—can herald AMR. McDonald says that Big Data and machine learning will be needed to unlock our microbiomes, and that’s where much of the medical community’s investment is going.

He predicts that within five years, hospitals will take fecal samples or skin swabs and sequence the microorganisms in them as a kind of pulse check on antibiotic resistance. “Just doing the bioinformatics to sort out what’s there and the types of antibiotic resistance that might be in that microbiome is a Big Data challenge,” McDonald says. “The only way to make sense of it, going forward, will be advanced analytic techniques, which will no doubt include machine learning.”

Reducing Resistance on the Farm

Bringing information closer to where it’s needed could also help reduce agriculture’s contribution to the antibiotic resistance problem. Antibiotics are widely given to livestock to promote growth or prevent disease. In the United States, more kilograms of antibiotics are administered to animals than to people, according to data from the FDA.

One company has developed a rapid, on-farm diagnostics tool to provide livestock producers with more accurate disease detection to make more informed management and treatment decisions, which it says has demonstrated a 47% to 59% reduction in antibiotic usage. Such systems, combined with pressure or regulations to reduce antibiotic use in meat production, could also help turn the AMR tide.


Breaking Down Data Silos Is the First Step

Adding to the complexity of the fight against AMR is the structure and culture of the global healthcare system itself. Historically, healthcare has been a siloed industry, notorious for its scattered approach focused on transactions rather than healthy outcomes or the true value of treatment. There’s no definitive data on the impact of AMR worldwide; the best we can do is infer estimates from the information that does exist.

The biggest issue is the availability of good data to share through mobile solutions, to drive HCI clinical-decision support tools, and to feed supercomputers and machine-learning platforms. “We have a fragmented healthcare delivery system and therefore we have fragmented information. Getting these sources of data all into one place and then enabling them all to talk to each other has been problematic,” McDonald says.

Collecting, integrating, and sharing AMR-related data on a national and ultimately global scale will be necessary to better understand the issue. HCI and mobile tools can help doctors, hospitals, and public health authorities collect more information while advanced analytics, machine learning, and in-memory computing can enable them to analyze that data in close to real time. As a result, we’ll better understand patterns of resistance from the bedside to the community and up to national and international levels, says Solomon. The good news is that new technology capabilities like AI and new potential streams of data are coming online as an era of data sharing in healthcare is beginning to dawn, adds McDonald.

The ideal goal is a digitally enabled virtuous cycle of information and treatment that could save millions of dollars and lives, and perhaps even civilization, if we can get there.


About the Authors:

Dr. David Delaney is Chief Medical Officer for SAP.

Joseph Miles is Global Vice President, Life Sciences, for SAP.

Walt Ellenberger is Senior Director Business Development, Healthcare Transformation and Innovation, for SAP.

Saravana Chandran is Senior Director, Advanced Analytics, for SAP.

Stephanie Overby is an independent writer and editor focused on the intersection of business and technology.



Digitalist Magazine

TIBCO Spotfire Today: Explore the Best in Analytics


The brightest minds in analytics use TIBCO Spotfire to turn data into a strategic competitive advantage. From connecting to hundreds of data sources to anticipating what’s next, explore why TIBCO Spotfire provides unrivaled, simple, yet sophisticated visual analytics.

In this webinar, you’ll learn and see live demos of what makes TIBCO Spotfire truly exceptional.

  • Get an overview of data source additions including TIBCO Spotfire Smart Data Catalog, a new data connectivity and management solution
  • See easy inline data preparation for shaping, enriching, and transforming your data
  • Discover smarter visual analytics using location and an AI-driven recommendation engine
  • Delve deeper with TERR/R, TIBCO Statistica, H2O.ai, ML/MLlib, SAS, and MATLAB
  • Accelerate insights with real-time, streaming, and automated actions using StreamBase and TIBCO Live Datamart
  • Put it all together rapidly with Solution Templates and Accelerators

Don’t miss this opportunity to investigate TIBCO Spotfire from an industry expert perspective. Register today.

Presented By:

Bipin Singh, Senior Product Marketing Manager, TIBCO Analytics
Jen Underwood, Founder, Impact Analytix, LLC


The TIBCO Blog

Bringing more text analytics to the Bing News Solution Template

We are excited to announce support for ‘bring your own entities’ in the Bing News solution template! The Bing News template allows brand managers to find the articles most relevant to them by filtering on sentiment and trending topics, as well as entities like locations, organizations, and people (learn more about the Bing News template).

One of the most common feature requests we received was the ability to define your own list of words and use it to slice your data. A product manager may want to, for example, create a list of products or company names and use it to discover and read relevant articles. Read on to find out how you can incorporate your own entities into the news template today!


Setting up the Solution

Navigate to our installer to get started with setting up the Bing News template. Upon reaching the entities page you are now given the option of bringing your own entities. Selecting ‘Yes’ will take you to the entities definition page.

You can define entities in two ways – either by uploading your entities or inputting entities directly inside the experience. In this example, I am setting up a solution that will help me find articles related to the Microsoft brand. I have defined a list of Microsoft products I want to follow. I choose to upload entities and point the installer to a CSV on my computer.


This auto populates my ‘Product entity’ which I can now customize by tweaking the list as well as changing the icon and color associated with it.


I can now continue adding as many entities as I would like. In this example, I have added another entity for other tech companies I may want to track in conjunction with Microsoft:


Once I am happy with all the new entities I have created I can click ‘Next’ and carry on setting up the solution.

The end result

Once the template is set up, you can start exploring your reports! You will notice that the Overview page now has both the out-of-the-box entities (like Location) and your newly added entities (in my case, Products and Tech Companies). Selecting something like Google from the companies list will show me all the articles that mention Microsoft and Google together, for whatever date range, news category, or sentiment type I am interested in.


Customizing your entities

Once your template is set up you can always go back and add new entities or modify the current ones. Currently this can be done by directly modifying the “userdefinedentitydefinitions” table inside your Azure SQL database.
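For illustration, here is a minimal Python sketch of what such a modification might look like. The table name comes from the template, but the column names are assumptions — check the deployed schema before running anything like this against your database.

```python
# Hypothetical sketch: build a parameterized INSERT for a new entity row.
# The column names below (entityname, entitytype, icon, color) are
# assumptions; verify them against the actual
# "userdefinedentitydefinitions" schema in your Azure SQL database.

def build_entity_insert(entity_name, entity_type, icon, color):
    """Return a parameterized INSERT statement and its values,
    suitable for executing with a driver such as pyodbc."""
    sql = ("INSERT INTO userdefinedentitydefinitions "
           "(entityname, entitytype, icon, color) VALUES (?, ?, ?, ?)")
    return sql, (entity_name, entity_type, icon, color)

sql, params = build_entity_insert("Surface Pro", "Product", "laptop", "#0078D4")
print(sql)
print(params)
```

Using parameter placeholders rather than string concatenation keeps the statement safe regardless of what characters appear in your entity names.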


Try it out & let us know

Go ahead and check out the Bing News solution template. You can try out an interactive sample report, watch a demo video or just go ahead and set things up! The team is always interested in any thoughts or feedback – you can reach us through our alias (pbisolntemplates@microsoft.com) or by leaving a comment on the Power BI Solution Template Community page.


Microsoft Power BI Blog | Microsoft Power BI

Announcing the Power BI Solution Template for Azure Activity Log Analytics

Today we are excited to announce the release of the Power BI solution template for Azure Activity Logs. This template provides analytics on top of your Activity Log in the Azure Portal. Gain insight into the activities performed by various resources and people in your subscription. Get started with historical analysis on your last 90 days of Activity Log data, and let an Azure SQL database accumulate these historical events in addition to incoming events.

Getting Started

The first part of our experience is an installer that guides you through setting up a solution template. We prompt you for Azure credentials and ask if you would like to use a new or existing Azure SQL database to store your Activity Log data. All that is left to do now is hit Run, wait a few minutes for the pipeline to get set up, and download a Power BI Desktop file that comes with pre-defined reports.

Behind the scenes, the solution template spins up a total of three Azure Services:

  1. An ‘Event Hub’ that streams the Activity Log events in near real-time to Stream Analytics
  2. A ‘Stream Analytics’ job that processes the Activity Log events in near real-time and inserts the data into SQL
  3. An Azure SQL database which comes with a pre-defined schema for your Activity Log data

As soon as the solution template is spun up, the streaming pipeline awaits new activities, and you should almost immediately see data appear inside your Power BI reports.
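Conceptually, the Stream Analytics job in the middle of this pipeline projects each incoming Activity Log event into a flat row for the SQL table. The sketch below mimics that step in Python; the field names follow the general shape of Azure Activity Log events, but the template’s actual schema may differ.

```python
# Sketch of the projection step: flatten one Activity Log event into a
# row shaped for a SQL table. Field names are illustrative, not the
# template's exact schema.

def project_event(event):
    return {
        "caller": event.get("caller"),
        "operation": event.get("operationName"),
        "status": event.get("status"),
        "resourceGroup": event.get("resourceGroupName"),
        "timestamp": event.get("eventTimestamp"),
    }

events = [
    {"caller": "alice@contoso.com", "operationName": "Update SQL server",
     "status": "Failed", "resourceGroupName": "prod-rg",
     "eventTimestamp": "2017-08-01T12:00:00Z"},
]
rows = [project_event(e) for e in events]
print(rows[0]["status"])  # Failed
```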

Architecture diagram:

Azure Activity Log → Event Hub → Stream Analytics → Azure SQL → Power BI


Administrative Page

The “Administrative” page provides a high-level overview of your Activity Log data. View trends in events over time, or leverage cross-filtering and drill-down capabilities to get specific with your data. Key Performance Indicators (KPIs) here compare the percentage of successful calls and the number of failures over the last 7 days versus the 7 days before that. An example use case on this overview page: find which user on your subscription is causing the most Stream Analytics events over the past 2 weeks.
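The week-over-week KPI logic is simple enough to sketch. Assuming each event reduces to a (date, succeeded) pair — a stand-in for the real schema — the comparison looks like this:

```python
from datetime import date, timedelta

# Compare the percentage of successful calls in the last 7 days against
# the 7 days before that, as the KPIs on the Administrative page do.

def success_rate(events, start, end):
    """Success percentage over [start, end), or None if no events."""
    window = [ok for day, ok in events if start <= day < end]
    return 100.0 * sum(window) / len(window) if window else None

today = date(2017, 8, 15)
events = [
    (today - timedelta(days=2), True),
    (today - timedelta(days=3), False),
    (today - timedelta(days=9), True),
    (today - timedelta(days=10), True),
]
recent = success_rate(events, today - timedelta(days=7), today)
prior = success_rate(events, today - timedelta(days=14), today - timedelta(days=7))
print(recent, prior)  # 50.0 100.0
```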


Failures Page

The Failures page offers deeper insight into failures on your subscription. Observe week-over-week changes in the aggregate number of failures and the percentage of failures. Choose a specific operation, such as Update SQL server, from the table on the left, and drill down to learn exactly what failed.


User History Page

The User History page allows for deeper analysis on a per-individual basis, where you can observe which resource groups an individual is most active in, or compare successful versus failed operations over time. An example use case here is to view a caller’s activities and impacted resources from the past week, or to find which resource group is responsible for the spike in a caller’s failures yesterday afternoon.


Service Health Page

The Service Health page showcases any Azure Service Health announcements that may be relevant to you. Here, you can filter service health events by region, impacted service, type, and status. Peek at the summaries of your filtered service health events in the Snippets Browser custom visual, or click into the tiles for more details. This page can assist in diagnosing an issue – for example, if you notice a spike in failures on the Failures page, head on over to the Service Health page and check that the uptick is not related to an Azure service outage in your region.



Stream Analytics also supports a real-time flow to a Power BI streaming dataset. Unfortunately we could not automate this step for you due to authentication restrictions, but we have provided clear documentation on how you can set up a real-time streaming dataset.


Give it a try and please let us know what you think. You can reach the team by e-mailing us at PBISolnTemplates@microsoft.com or leaving a comment on the Power BI Community page for solution templates.


Microsoft Power BI Blog | Microsoft Power BI

The Difference Between Forecasting & Predictive Analytics


Forecasts Anticipate Trends within Large Populations, On Timelines Typically Measured in Months or Years.
Predictive Analytics Anticipate the Behavior of a Single Individual, Often on a “Right Now” Timeline.

It may sound like silly semantics at first, making a distinction between two synonyms like “forecast” and “predict.”  And silly semantic differentiations ARE one of my personal pet peeves – Chris Rock’s famous distinction between “rich” and “wealthy” is one example that did NOT bother me, because he used it as a device to CREATE an important distinction.  But we’ve all been lectured at one time or another by someone taking two previously-interchangeable words and pretending that there has ALWAYS been a sharp distinction between them… in an effort to make themselves look much smarter than they are (or more importantly, smarter than their audience).

So I’m NOT here to rewrite the English language – “predict” and “forecast” ARE synonyms in the dictionary sense, and will remain so.  In fact their interchangeability in normal parlance is the REASON why companies end up frequently buying the wrong analytical product or service.

But in the analytics world, there ARE two very different kinds of activities hiding behind these synonyms, and it’s important not to buy one when you actually need the other.  You can end up five-to-six figures deep in the wrong thing before you discover the mistake.  I want to help you avoid that, hence this article.

Once you’ve digested the illustration at the top of this article, yeah, you’ve kind of already got it.

  • Forecasting is when we anticipate the behavior of “Lots” of people (customers, typically) on “Long” timelines.
  • Predictive Analytics anticipate the behavior of One person (again, typically a customer) on a “Short” timeline.

So…  Macro versus Micro.

But let’s delve just a little bit deeper, in order to “cement” the concepts.  Examples will help, so let’s…

Like many modern organizations, Facebook benefits from a mixture of both Macro and Micro.  They certainly have need to Forecast the trends in their overall business, as well as the need to Predict the behavior of individual users.  Let’s illustrate with some examples.

  • “How many active users will we have at the end of this year?”
  • “Three years from now, what % of our revenue will come from mobile devices as opposed to PCs?”
  • “How much are we going to spend on server hardware next quarter?”

These are all quite valid and important questions to answer.  They impact strategic planning and guidance provided to investors – about as crucial as you can get, really.

They also all tend to blend human inputs with computational power to produce their answers.  We use data and software as extensions of our brains – very often in a collaborative effort across multiple people (sometimes a handful, sometimes hundreds or more).  In other words, these aren’t the sort of things we hand to machines to handle in a fully-automated manner (at least, not today).

The simplest form of forecasting is the “lowly” line chart:


Line Charts are an EXCELLENT Forecasting Tool – As Long as the Trend is Smooth and Obvious

(Unless Some Big Structural Change is Coming, It’s Not Hard to Guess Where THIS Trend is Headed)

Of course, trends aren’t always so smooth and obvious.  Try this chart instead:


“Choppy” Trend:  Not NEARLY as Obvious What the Next Few Months Will Look Like, So Now What?

When faced with a non-obvious trend (and most trends are NOT as obvious as the first example), here are some progressively-more-sophisticated approaches:

1) Add an automatic trendline to the chart.  This SOUNDS sophisticated, but the algorithms behind automatic trendlines are, by necessity, quite primitive in most cases.  EX: They can’t tell the difference between a one-time exception and a significant new development.  For instance, is August 2016 (the big spike in the red chart above) due to a one-time influx of revenue from a source that won’t surface ever again, or representative of a breakthrough in the core business?  Trendline algorithms aren’t typically sophisticated enough to tell the difference – and that’s a GOOD thing, because if they started making assumptions about such things, they’d be WILDLY wrong at times – without warning.

Here’s another way to say it:  when a trend is obvious, WE can draw the trendline with our eyes.  But traditional computer-generated trendlines aren’t much better.  They do basically the same thing that our eyeballs do – they “split the difference” mathematically amongst the known historical points in order to draw a line (or curve).
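That “split the difference” behavior is just ordinary least squares. A minimal sketch, with invented numbers:

```python
# Fit y = slope*x + intercept over x = 0..n-1 by ordinary least squares --
# the same math behind a chart's automatic trendline.

def fit_trendline(ys):
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# On a smooth trend, the fit (like our eyeballs) extrapolates easily...
slope, intercept = fit_trendline([10, 12, 14, 16, 18])
print(slope, intercept)  # 2.0 10.0
# ...but on choppy data it still just averages through the spikes, with
# no idea whether a big month was a one-off or a breakthrough.
```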

2) Build a smarter trendline with a combination of biz knowledge and formulas.  Ah, now we’re getting somewhere!  And we do this sort of thing ALL THE TIME with our clients, since DAX (the Excel-style formula language behind Power BI and Power Pivot) is an absolute godsend here.  Let’s factor in seasonality, for instance: take how strong our biz has been this year versus the prior year, multiply that by the seasonal “weight” of January (calculated from our past years’ historical data), and we’ll arrive at a much more intelligent guess for January than our auto-trendline would have generated.

And, as humans, we may know quite a bit about that August 2016 spike.  Let’s say a bunch of that revenue came from a one-time licensing deal – something that we cannot responsibly expect to happen again.  Well, a quick usage of the CALCULATE function may be all that’s required – we’ll just remove licensing revenue from consideration while building our smarter trendline formula.
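In Python rather than DAX (and with invented revenue figures), the two ideas above – a growth-and-seasonality-weighted estimate, and a CALCULATE-style filter that removes the one-time licensing revenue – might be sketched as:

```python
# Hypothetical revenue records; categories and amounts are invented.
records = [
    {"month": "2016-01", "category": "core",      "revenue": 100.0},
    {"month": "2016-08", "category": "core",      "revenue": 120.0},
    {"month": "2016-08", "category": "licensing", "revenue": 500.0},  # one-time deal
]

def revenue_excluding(records, month, excluded="licensing"):
    """A month's revenue with the one-time category filtered out,
    analogous to a CALCULATE filter in DAX."""
    return sum(r["revenue"] for r in records
               if r["month"] == month and r["category"] != excluded)

growth_factor = 1.10     # this year vs. prior year, from the wider model
seasonal_weight = 1.0    # January's weight, from past years' history
forecast_jan = growth_factor * seasonal_weight * revenue_excluding(records, "2016-01")
print(round(forecast_jan, 2))  # 110.0
```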

3) Decompose the overall trend into sub-trends.  Just like in my article on The Ten Things Power BI Can Do For You, (and the live presentation version on YouTube), DECOMPOSITION is an incredibly powerful weapon, and arguably a necessary one.  If you break your overall trend out by Region, for instance, it may be easier and more accurate to forecast each Region individually, and THEN re-combine all of those “sub-trends” to come up with a macro trend forecast.  Again, this is something we have done a lot of with our clients, and the Power BI / Power Pivot toolset is fantastic in this capacity.
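The decomposition approach can be sketched in a few lines: forecast each region with its own (here deliberately trivial) model, then recombine the sub-forecasts into the macro number. The regions and growth rates below are placeholders.

```python
# Per-region sub-forecasts, summed into the macro forecast. In practice
# each region's model could be arbitrarily different.
regions = {
    "NA":   {"last_quarter": 400.0, "growth": 1.05},
    "EMEA": {"last_quarter": 250.0, "growth": 1.12},
    "APAC": {"last_quarter": 150.0, "growth": 1.20},
}

sub_forecasts = {name: r["last_quarter"] * r["growth"]
                 for name, r in regions.items()}
macro_forecast = sum(sub_forecasts.values())
print(round(macro_forecast, 2))  # 880.0
```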

4) Incorporate external variables.  We often get so caught up in our businesses as Our Everything that it’s good to take a step back and remember that the outside world has at LEAST as much influence on “what’s going to happen” as our internally-focused trends and behavior.  In our Facebook example, for instance, knowing whether China is likely to suddenly allow Facebook to operate there would be a big deal.  But in a more mathematical sense, even knowing how many children will be “coming of Internet age” in the next year is an important external variable, since they represent potential new Facebook users.  Similarly, the number of people in developing countries “coming online” as cellular and broadband infrastructure reaches them is also relevant.  You can take important factors like those (as well as expected conversion rates – what % of those new Internet users will become Facebook users?) and mathematically incorporate them into your forecasting models, especially a “sub-trend” model, to arrive at an ever-more-refined estimation of the future.

5) Involve the Humans!  Given the intricacies of the sub-trends approach, it makes sense to “subcontract” each sub-forecast to a specific person or team who knows it best, have them mathematically blend internal trends with external variables, and submit their forecasts back up to you.  This allows the formulas used to be quite different in each sub-segment, to reflect the very-different realities of each segment.

This may sound chaotic, and in larger orgs, it definitely CAN be.  But our friends at Power Planner have an AMAZING system that works in tandem with Power BI when it comes to collaborative forecasting and budgeting.  We’re using it right now to help several of the world’s largest corporations develop an ever-more-accurate picture of the future.

I think that’s enough about Forecasting for such a high level article.  I want to move on to Predictive, and then a parting observation.

So let us know if you’d like to know more about Forecasting in Power BI – whether you want to perform that in a top-down fashion or bottom-up, we’ve got the experience.


Forecasting = “How Heavy Will The Traffic Be Next Month?”  Predictive = “Will the Yellow Car Get Off at the Next Exit?”

Here are some Predictive Analytics examples that are critical at Facebook:

  • Of our millions of advertisements, which one should we show to user X right now?
  • When User X is logged on via a Mobile Device vs. a PC, should we change up our ad mix, or is their behavior consistent across platforms?
  • What’s the optimal ratio for “real posts” to ads in User X’s news feed (optimal in terms of ad clickthrough rates)?

The difference in these examples leaps off the page – we’ve “zoomed in” on a single user/customer, and we’re making a decision/adjustment in real-time (or near-real-time).

But before we dive a bit deeper into HOW you do this stuff, let’s address a common question we get at this point…

It’s a fair question that smart people ask:  if you get really good at micro-level Predictive, can’t you dispense with macro-level Forecasting altogether?  Well, no.  Let’s drill down on that first example – the “which ad should we show User X right now?” example, to illustrate.

(As we answer this question, we actually start to answer the “How Does One DO Predictive?” question as a bonus!)

In order to predict which ad is most-likely to “succeed” with User X, your Predictive systems need to know some things:

1) Detailed attributes of User X are a Must-Have Input for Predictive.  The obvious stuff like age, gender, and location for starters.  But also…  their history of what they have Liked.  Articles they have clicked through to read more.  Ads they have clicked.  Their Friends Lists, and the Like/Ad behavior of those friends.

2) Historical behavior of similar users is a tremendous help for filling in gaps.  Let’s say User X has only been on Facebook for a week, and we don’t really have sufficient ad-click history on them yet.  But in that week, we’ve already seen quite a bit of their Like behavior and Article Clickthrough behavior.  Combine that one-week snapshot with their basic demographics, and we can quickly guess that they are quite similar to “Audience XYZ123” – a collection of users with similar Like and Clickthrough behavior – and now we can lean on THEIR longer-term histories of ad click behavior to get a good guess of what sorts of ads User X will click.
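One simple way to operationalize “similar users” is to represent each user’s early behavior as a feature vector and assign them to the nearest known audience. The audiences and vectors below are invented for illustration; real systems use vastly richer features.

```python
import math

# Assign a new user to the closest audience by cosine similarity of
# their (invented) Like / clickthrough behavior vector.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

audiences = {
    "XYZ123": [0.9, 0.1, 0.8],   # tech Likes, sports Likes, article clickthroughs
    "ABC456": [0.1, 0.9, 0.2],
}

user_x = [0.8, 0.2, 0.7]         # one week of observed behavior
best = max(audiences, key=lambda name: cosine(audiences[name], user_x))
print(best)  # XYZ123
```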

3) Oh yeah, we also need the detailed attributes of the Ad in question!  Is it video or static?  Who is the advertiser TRYING to reach?  What are the associated keywords?  In the time we’ve been running this Ad, who has been clicking on it and who has not?

4) Predictive Analytics systems NEED to make mistakes.  Predictive systems are constantly refining their guesses based on failure.  Let’s say we think our “one week old” User X is gonna love a particular Ad based on their striking similarity to Audience XYZ123, so we serve that ad up, and not only does User X not click the ad, they actually click the little X that says “never show me ads like this again.”  Does our Predictive System give up and stop making guesses for User X?  Heck no!  Instead it’s in some sense thrilled to get this feedback, and uses it to start developing a more-accurate picture of User X.  Over time our system may even “discover” that it should “fork” Audience XYZ123 into two splinter factions – one like User X, and one that more closely resembles the original picture of Audience XYZ123.
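That “thrilled to get this feedback” behavior can be sketched with the simplest possible learner – a running click estimate that updates on every success and failure. Real systems are vastly more elaborate, but the feedback loop is the point:

```python
# Keep a running estimate of how likely User X is to respond to a
# category of ads, updated on every impression. The 1-click / 2-shows
# prior is an arbitrary optimistic starting point.

class ClickEstimate:
    def __init__(self):
        self.clicks, self.shows = 1, 2

    def probability(self):
        return self.clicks / self.shows

    def record(self, clicked):
        self.shows += 1
        if clicked:
            self.clicks += 1

est = ClickEstimate()
before = est.probability()   # 0.5 under the prior
est.record(False)            # "never show me ads like this again"
after = est.probability()
print(before, after)         # 0.5 0.3333333333333333
```

The estimate drops after the rejection, and every subsequent impression keeps refining it – no human consulted for any individual prediction.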

In short, it’s worth summarizing and emphasizing…

  1. In order to make micro-level Predictions, you require a MASSIVE amount of detailed information, AND…
  2. You need to be able to make, and learn from, mistakes.

Predictive systems operate without direct human intervention.  This is in direct contrast to Forecasting, which, while highly mathematical and assisted by software, benefits GREATLY from direct human insight and input.  Sure, in a Predictive system, there might be some high-level “knobs” that you can adjust to make the system more or less aggressive for instance, but each individual prediction happens without a human being consulted.

This is where Machine Learning, AI, and R shine.  Whereas DAX is fantastic at Forecasting, it really has no role to play in Predictive (except as an adviser in turning those high-level knobs mentioned in the prior paragraph).

This is why Microsoft has rapidly expanded its analytics offerings.  Rewind 24 months and Power BI is basically all you were hearing from Mount Redmond.  But in very short order, we now have HDInsight and Azure Machine Learning making some serious waves in the marketplace.

Let’s start here:  how many chances do you get to be wrong in a Macro-Level Forecast?  One.  You get One.  “Q4 2017” only happens once, full stop.  Machine Learning systems need to experiment, fail, and improve.  But when it comes to Forecasts, YOU don’t get a chance to learn from mistakes – fail badly enough and you’re out of a job.  Assume that Q4 2017 is gonna look just like all of the Q4’s past at your own peril – but even if we DID do that, we wouldn’t need a sophisticated Predictive system to do so, would we?

Next, do you have all of the micro-level detail required?  Nope, not even close.  For starters, you’re gonna have new customers next quarter.  And you know NOTHING about them yet, not even the basics of their individual demographics.  And remember, I’m not talking about broad trends here, I’m talking about hyper-detailed information on each individual customer – THAT’S what you need for a Predictive system to function.  Oh, and perhaps even MORE importantly, you don’t know what Ads are going to be submitted by your advertisers next quarter (sticking to the Facebook example for a moment).

So, you can’t Predict your way to a Forecast, but also:  Forecasts need to be Explainable.  Imagine telling Wall Street that we expect 15% revenue growth next quarter.  They ask why.  And we say “um, our Machine Learning system said so, but we really don’t understand why.”  That’s not gonna fly – but it’s precisely the best you can do.  Predictive systems are far superior to human beings at their task precisely because they are INhuman.  If they could explain their predictions to us and have us understand, we wouldn’t have needed them in the first place.

So here’s an, ahem, prediction from me:  Forecasting will remain a “humans assisted by DAX-style software” activity until we eventually develop General AI.  As in, the kind that eliminates ALL need for humans to work.  The kind we’re scared might just flat-out replace and discard us, or might usher in an era of unimaginable human thriving.  Either way, when we get to that point, our problems of today are no longer relevant.

In the meantime, we can still do AMAZING focused things with Predictive Analytics.  Not every organization has a need for it yet (or is ready for it yet, more accurately), but chances are good that your org DOES.  If you want help getting started with HDInsight and/or Azure Machine Learning and related technologies, let us know and we’ll be glad to help.



How to Set Up a Killer Google Analytics Dashboard


Information overload: The condition when a marketer has so much information they can no longer manage or interpret it easily, causing them to basically go numb to their analytics reports. Secondary conditions include ignoring marketing data entirely, making poor decisions, and getting generally disappointing results.

You might not know it to look at them, but many CMOs and Marketing Managers have a terrible condition.

They’re suffering from severe information overload.

And I’m just talking about analytics. Many of us are overwhelmed with analytics reports before we ever look at email or social media.

To wit:

  • 53% of marketing executives feel “overwhelmed” by the amount of data produced by their marketing technologies.
  • 67% of those execs say they have to look at too many different dashboards and reports to get insight.

Unfortunately, having too much information and too many reports is almost inevitable for marketers.

Here’s why: Almost all marketing programs start with a basic patchwork of technology solutions. An email service provider, a pay-per-click account or two. A social media tool. A landing page tool. A link-building program.

As the company grows, it tends to add more martech. Our “marketing stack” gets taller and taller, adding more reports all the time.

This patchwork martech can seem like the least expensive way to go for a while … until it starts incurring other costs.

Even if you’re early on in the process and still trying to be a martech minimalist, different people are always going to want different information.

Exhibit A of this is when a marketing person puts a report in front of a CEO and starts touting all the great things they’ve done. The CEO appreciates the work, of course. But they really only want to know one thing: Is there a positive ROI, or not?

It’s beyond the scope of one blog post to help you with every possible marketing report available, but if we had to zero in on one tool, Google Analytics would be a good start. And so this post aims to simplify your Analytics reporting, and give you an easy, free way to let every member of your team get exactly the data they want ― and nothing more.

As you probably know, Google Analytics is very widely used. In fact, 54.3% of all websites have it installed. And, among sites that use any analytics package, 83.4% use Analytics.

How to create a custom Google Analytics Dashboard

Creating a Google Analytics Dashboard is actually very easy. You can do it in less than a dozen clicks. So let’s walk through creating a brand new dashboard, including how to email it to a co-worker and how to schedule recurring emails so they go out whenever you want.

  1. Log into your Google Analytics account.
  2. Go to the site profile that you want to create the dashboard for.
  3. Click on the “Customization” navigation link in the upper left-hand column. This will expand the navigation to show four new options.

These are all the analytics dashboards that people have created and then decided to share with the world. If you’ve come across some of the “10 Best Analytics Dashboard” type articles, this is often where you’ll find them.

As you’ll notice, you can sort and filter the options by several parameters. They’re also rated, which is helpful.

  4. To add one of these to your account, click the “Import” button just below the dashboard you want to use.

You’ll see something like this:

  5. See where the red arrow is pointing? Use that pull-down menu to select the site you want to add this dashboard to.

Once you click the “CREATE” button, you’ll see the new dashboard, active in your account, with your site’s Analytics data filling out all widgets.

Like this:

Cool, right? And not terribly hard, either. It’s taken us about ten clicks to get this far.

Word to the wise: If you’re an SEO agency or a consultant, creating a few dashboards for your clients or as promotional tools for your business might be a good idea. They could be the data wonk’s version of guest blog posts ― useful content available to anyone, offered as a way to demonstrate your expertise.

But you don’t have to just use other people’s dashboards. You can create your own for your company, or for specific people, departments, or teams in your organization.

To create a custom dashboard from scratch, let’s:

  • go back to the main view of your analytics account;
  • find the site you want to create the dashboard for; and
  • go to the new dashboard page.

But this time, instead of selecting “Import from Gallery,” you’re going to name your new dashboard and then click “Create Dashboard.” (Notice also how we’re starting with a “Blank Canvas” instead of a “Starter Dashboard.”)

Here’s what you’ll see next:

You’re immediately being asked to add a widget to this new dashboard. “Widgets” are what Google calls the individual boxes of data that make up dashboards.

For our example, I’m going to start with a time-on-page widget for this dashboard about content engagement. To do that I just:

  • named the widget;
  • chose which data format I wanted the widget to use;
  • picked which time frame I wanted to pull from;
  • picked the metric itself from Google’s rather long list of options; and
  • clicked “Save.”

And voila ― it’s a Google Analytics dashboard!

Okay … maybe not a fully fleshed-out dashboard. So let’s spiff this up a bit.

If you click “Customize Dashboard,” you’ll be able to structure the columns of your dashboard almost any way you want.

Some people recommend using each column for a different type of metric, or a different phase of your sales funnel. So, for example, Column A on the left would be acquisition information, Column B would be engagement information, Column C would be conversion information, and Column D would be retention information.

Here’s what it looks like if I choose a two-column view, and add another widget:

How to automatically send Google Analytics Dashboard reports to your co-workers

Now, if I want to send a report like this to anyone via email, I can do that by clicking the “Email” link in the navigation links row just below the title of the dashboard. That will create a pop-up that looks like this:

The example above is all filled out, complete with a subject line, a message, and a recipient. It’s scheduled to go out every Wednesday.

And you’re done. No more creating reports over and over. No more testing people’s patience by making them wade through Analytics data they don’t need.


The Google Analytics dashboard is a fantastic tool. Used well, it can save you a lot of time and move you closer to becoming a truly data-driven marketer.

But it does have limitations.

The primary limitation has to be that it’s not possible to track people individually, particularly if you want to observe how they behave over time, like through their buyer’s journey. For that, you’ll need a CRM or marketing automation platform (or both).

But if you’re okay with tracking how groups of people behave, and looking for trends in how they respond to specific digital assets and actions, Google Analytics is great.

And now that you know how to slice and dice the reports so everyone can see the information they want, Analytics should be even more useful to you.

Back to you

Are you using Google Analytics dashboards? Do you have different ones for different user profiles at your company? Which dashboard is your favorite? Leave a comment to tell us what you think.


Act-On Blog

Explore your Partner Center Analytics Data in Power BI

We’re pleased to announce the public preview of the new Partner Center Analytics App for Power BI for direct partners.

Get a visual representation of your Partner Center Analytics data with the Partner Center Analytics app for Power BI, including:

  • Growth of your customer base, subscriptions, and licenses.
  • Usage of Office 365, Microsoft Dynamics, and Microsoft Azure products.
  • Daily consumption units for each metered resource in each Azure subscription for the last 60 days.
  • Estimated cost based on the latest rate card.
  • Ability to export datasets and create custom reports, including per customer.

Get started with the Partner Center Analytics preview now by following the installation steps here.

You can also watch the following video (starting at minute 39) for a demo of the Power BI app that was presented at Inspire 2017, in the “CSP02 – CSP Platform Updates and Roadmap” breakout session on Day 3.


Here is a sample of the dashboard that will be created for you when you use the app, along with samples of the report pages.

We’re always interested in hearing your feedback – please contact us at http://support.powerbi.com to let the team know how your experience was and if there’s anything we can do better. We look forward to your feedback!


Microsoft Power BI Blog | Microsoft Power BI