Business Intelligence Info

Tag Archives: used

Frequency of letters used in English language

February 19, 2021   Humor

Posted by Krisgo


About Krisgo

I’m a mom who has worn many different hats in this life: scout leader, camp craft teacher, parents group president, colorguard coach, community band member, stay-at-home mom, and full-time worker. I’ve done it all (almost!). I still love learning new things, especially creating and cooking. Most of all I love to laugh! Thanks for visiting; come back soon!



Deep Fried Bits


“Function cannot be used as a variable”

December 15, 2020   BI News and Info



Recent Questions – Mathematica Stack Exchange


GOOD THING TIPPING IS NOT USED IN JAPAN

November 24, 2020   Humor

Or this guy would be screwed.

He claims to not have spent money except for rent and utilities.

He gets everything via coupons.

We all love coupons and vouchers, but can you imagine living on them almost exclusively for almost four decades? A Japanese man claims to have been doing it for the last 36 years, adding that he hasn’t spent a yen of his own money during that time.

71-year-old Hiroto Kiritani is a minor celebrity in his home country of Japan. His ability to live comfortably on coupons without spending any money unless he really has to is legendary, and he has been invited on numerous television shows and events over the years. Kiritani says that he gets by without spending real money except for utilities and rent. But he’s not as frugal as you might think. He just manages to live comfortably on the coupons he receives from companies he invested in over the years.

Kiritani, who used to be a professional shogi (Japanese chess) player, got into stock investment when he was 35. He was invited to teach the staff of an investment company called Tokyo Securities Kyowakai about shogi, and was fascinated by the idea of owning parts of various companies. He bought his first stock in 1984 and quickly developed a taste for investing, encouraged by the stock bubble of the 1980s.

Unfortunately, in December of 1989 the Nikkei Stock Average crashed and he lost 100 million yen. It was a terrible blow, but it also helped him discover the worth of investor benefits, an alternative to dividends. Basically, as long as the profitability of a company remains above a certain threshold, shareholders qualify for certain benefits offered in the form of coupons and vouchers.

During the troubled time of the Japanese stock exchange crash of 1989, these investor benefits helped Kiritani get by, allowing him to buy food and clothing without spending any real money. The same happened in 2011, after the Great East Japan Earthquake, when the stock market crashed once again. The coupons he earned were more than enough for him to get by, and as word got out about his ability to live almost exclusively on them, he became famous in Japan.

According to Hiroto Kiritani, when a business’s performance deteriorates its dividends are reduced, so relying on dividends mainly suits large investors. Minor shareholders are much better off with the investor benefits that more than 40 percent of large Japanese companies offer, as profitability need only remain over a certain threshold.

Moreover, dividends depend on the number of shares a person owns in a company, whereas investor benefits are often the same regardless of the number of shares, so even owning a single share can qualify an investor for various benefits.

Kiritani claims that he gets access to everything he needs with coupons alone. One coupon allows him to go to the cinema for free 300 times a year, another offers free gym membership. He can even buy vegetables with coupons. For example one coupon he gets from the ORIX Corporation allows him to choose a variety of food products from a very generous catalog, for free.

Even though he can get all sorts of groceries with his coupons, Hiroto Kiritani says that he prefers to eat out, which, of course, he can do with coupons. He owns stock in over 1,000 Japanese companies and corporations (of which about 900 are preferential stocks), so he basically has all kinds of coupons to use for everything he needs. He has become so good at living on these pieces of paper that he has been invited on several TV shows and has given interviews for magazines about it.

“I only use cash when paying rent or cover costs that are not 100% covered by my coupons. I don’t spend much cash and live on a special treatment, so in the end, I’m saving more and more money,” Kiritani said.


ANTZ-IN-PANTZ ……


Error message: “-1.99992 cannot be used as a variable”

November 10, 2020   BI News and Info



Recent Questions – Mathematica Stack Exchange


Researchers find flaws in algorithm used to identify atypical medication orders

November 5, 2020   Big Data


Can algorithms identify unusual medication orders or profiles more accurately than humans? Not necessarily. A study coauthored by researchers at the Université Laval and CHU Sainte-Justine in Montreal found that one model used by physicians to screen patients performed poorly on some orders. It’s a cautionary tale of the use of AI and machine learning in medicine, where unvetted technology has the potential to negatively impact outcomes.

Pharmacists review lists of active medications — i.e., pharmacological profiles — for inpatients under their care. This process aims to identify medications that could be abused, but most medication orders don’t show drug-related problems. Publications from over a decade ago illustrate the potential of technology to help pharmacists streamline workflows like order review, but while more recent research has investigated AI’s potential in pharmacology, few studies have demonstrated efficacy.

The coauthors of this latest work looked at a model deployed in a tertiary-care mother-and-child academic hospital between April 2020 and August 2020. The model was trained on a dataset of 2,846,502 medication orders from 2005 to 2018, extracted from a pharmacy database and preprocessed into 1,063,173 profiles. Prior to data collection, the model was retrained every month on the ten most recent years of data from the database in order to minimize drift, which occurs when a model loses its predictive power over time.

Pharmacists at the academic hospital rated medication orders in the database as “typical” or “atypical” before observing the predictions; patients were evaluated only once to minimize the risk of including profiles that the pharmacists had previously evaluated. Atypical prescriptions were defined as those that didn’t correspond to usual prescribing patterns, according to the pharmacist’s expertise, while profiles were considered atypical if at least one medication order within them was labeled as atypical.

The model’s profile predictions were provided to the pharmacists and they indicated whether they agreed or disagreed with each prediction. In all, 12,471 medication orders and 1,356 profiles were shown to 25 pharmacists from seven of the academic hospital’s departments, mostly from obstetrics-gynecology.

The researchers report that the model exhibited poor performance with respect to medication orders, attaining an F1-score of 0.30 (lower is worse). On the other hand, the model’s profile predictions achieved “satisfactory” performance with an F1-score of 0.59.
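
For readers unfamiliar with the metric, the F1-score is the harmonic mean of precision and recall. Here is a minimal Python sketch of the calculation; the counts are purely illustrative and are not figures from the study:

def f1_score(tp, fp, fn):
    # Precision: of the orders flagged atypical, how many really were.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: of the truly atypical orders, how many were flagged.
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: many false alarms plus many misses give a low score.
print(round(f1_score(tp=30, fp=90, fn=50), 2))  # prints 0.3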

One reason might be a lack of representative data; research has shown that biased diagnostic algorithms may perpetuate inequalities. A team of scientists recently found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI come from California, New York, and Massachusetts.

Cognizant of this, the coauthors of this study say they don’t believe the model could be used as a standalone decision support tool. However, they believe it could be combined with rules-based approaches to identify medication order issues independent of common practice. “Conceptually, presenting pharmacists with a prediction for each order should be better because it identifies clearly which prescription is atypical, unlike profile predictions which only inform the pharmacist that something is atypical within the profile,” they wrote. “Although [our] focus groups indicated a lack of trust in order predictions by pharmacists, they were satisfied to use them as a safeguard to ensure that they did not miss unusual orders. This leads us to believe that even moderately improving the quality of these predictions in future work could be beneficial.”




Big Data – VentureBeat


Researchers quantify bias in Reddit content sometimes used to train AI

August 9, 2020   Big Data


In a paper published on the preprint server Arxiv.org, scientists at the King’s College London Department of Informatics used natural language processing to show evidence of pervasive gender and religious bias in Reddit communities. This alone isn’t surprising, but the problem is that data from these communities are often used to train large language models like OpenAI’s GPT-3. That in turn is important because, as OpenAI itself notes, this sort of bias leads to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.”

The scientists’ approach uses representations of words called embeddings to discover and categorize language biases, which could enable data scientists to trace the severity of bias in different communities and take steps to counteract it. To spotlight examples of potentially offensive content in Reddit subcommunities, the method takes a language model and two sets of words representing concepts to compare, and identifies the words in a given community that are most biased toward each concept. It also ranks the words from least to most biased using an equation, providing an ordered list and an overall view of the bias distribution in that community.
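
As a rough illustration of the general idea (not the paper’s exact method), bias toward one concept set versus another can be scored by comparing a word’s embedding to the centroid of each concept set and ranking words by the difference in cosine similarity. The words and vectors below are made-up stand-ins:

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(word_vec, set_a, set_b):
    # Positive values lean toward concept A, negative values toward concept B.
    return cosine(word_vec, np.mean(set_a, axis=0)) - cosine(word_vec, np.mean(set_b, axis=0))

# Toy 300-dimensional vectors standing in for embeddings trained on one community's comments.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=300) for w in ["she", "woman", "he", "man", "naughty", "strong", "boss"]}

female_set = [emb["she"], emb["woman"]]
male_set = [emb["he"], emb["man"]]

# Rank candidate words from least to most biased toward the female concept set.
candidates = ["naughty", "strong", "boss"]
for word in sorted(candidates, key=lambda w: bias_score(emb[w], female_set, male_set)):
    print(word, round(bias_score(emb[word], female_set, male_set), 3))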

Reddit has long been a popular source for machine learning model training data, but it’s an open secret that some groups within the network are unfixably toxic. In June, Reddit banned roughly 2,000 communities for consistently breaking its rules by allowing users to harass others with hate speech. But in accordance with the site’s policies on free speech, Reddit’s admins maintain they don’t ban communities solely for featuring controversial content, such as those advocating white supremacy, mocking perceived liberal bias, and promoting demeaning views on transgender women, sex workers, and feminists.

To further specify the biases they encountered, the researchers took the negativity and positivity (also called “sentiment polarity”) of biased words into account. And to facilitate analyses of biases, they combined semantically related terms under broad rubrics like “Relationship: Intimate/sexual” and “Power, organizing” that they modeled on the UCREL Semantic Analysis System (USAS) framework for automatic semantic and text tagging. (USAS has a multi-tier structure, with 21 major discourse fields subdivided into fine-grained categories like “People,” “Relationships,” or “Power.”)

One of the communities the researchers examined — /r/TheRedPill, ostensibly a forum for the “discussion of sexual strategy in a culture increasingly lacking a positive identity for men” — had 45 clusters of biased words. (/r/TheRedPill is currently “quarantined” by Reddit’s admins, meaning users have to bypass a warning prompt to visit or join.) Sentiment scores indicated that the clusters most biased toward women (“Anatomy and Physiology,” “Intimate sexual relationships,” and “Judgement of appearance”) carried negative sentiments, whereas most of the clusters related to men contained neutral or positively connotated words. Perhaps unsurprisingly, labels such as “Egoism” and “Toughness; strong/weak” weren’t even present in female-biased labels.

Another community — /r/Dating_Advice — exhibited negative bias toward men, according to the researchers. Biased clusters included the words “poor,” “irresponsible,” “erratic,” “unreliable,” “impulsive,” “pathetic,” and “stupid,” with words like “abusive” and “egotistical” among the most negative in terms of sentiment. Moreover, the category “Judgment of appearance” was more frequently biased toward men than women, and physical stereotyping of women was “significantly” less prevalent than in /r/TheRedPill.

The researchers chose the community /r/Atheism, which calls itself “the web’s largest atheism forum,” to evaluate religious biases. They note that all the mentioned biased labels toward Islam had an average negative polarity except for geographical names. Categories such as “Crime, law and order,” “Judgement of appearance,” and “Warfare, defense, and the army” aggregated words with evidently negative connotations like “uncivilized,” “misogynistic,” “terroristic,” “antisemitic,” “oppressive,” “offensive,” and “totalitarian.” By contrast, none of the labels were relevant in Christianity-biased clusters, and most of the words in Christianity-biased clusters (e.g., “Unitarian,” “Presbyterian,” “Episcopalian,” “unbaptized,” “eternal”) didn’t carry negative connotations.

The coauthors assert their approach could be applied by legislators, moderators, and data scientists to trace the severity of bias in different communities and to take steps to actively counteract this bias. “We view the main contribution of our work as introducing a modular, extensible approach for exploring language biases through the lens of word embeddings,” they wrote. “Being able to do so without having to construct a-priori definitions of these biases renders this process more applicable to the dynamic and unpredictable discourses that are proliferating online.”

There’s a real and present need for tools like these in AI research. Emily Bender, a professor at the University of Washington’s NLP group, recently told VentureBeat that even carefully crafted language data sets can carry forms of bias. A study published last August by researchers at the University of Washington found evidence of racial bias in hate speech detection algorithms developed by Google parent company Alphabet’s Jigsaw. And Facebook AI head Jerome Pesenti found a rash of negative statements from AI created to generate humanlike tweets that targeted Black people, Jewish people, and women.

“Algorithms are like convex mirrors that refract human biases, but do it in a pretty blunt way. They don’t permit polite fictions like those that we often sustain our society with,” Kathryn Hume, Borealis AI’s director of product, said at the Movethedial Global Summit in November. “These systems don’t permit polite fictions. … They’re actually a mirror that can enable us to directly observe what might be wrong in society so that we can fix it. But we need to be careful, because if we don’t design these systems well, all that they’re going to do is encode what’s in the data and potentially amplify the prejudices that exist in society today.”


Big Data – VentureBeat


Researchers detail texture-swapping AI that could be used to create deepfakes

July 8, 2020   Big Data

In a preprint paper published on Arxiv.org, researchers at the University of California, Berkeley and Adobe Research describe the Swapping Autoencoder, a machine learning model designed specifically for image manipulation. They claim it can modify any image in a variety ways, including texture swapping, while remaining “substantially” more efficient compared with previous generative models.

The researchers acknowledge that their work could be used to create deepfakes, or synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. In a human perceptual study, subjects were fooled 31% of the time by images created using the Swapping Autoencoder. But they also say that proposed detectors can successfully spot images manipulated by the tool at least 73.9% of the time, suggesting the Swapping Autoencoder is no more harmful than other AI-powered image manipulation tools.

“We show that our method based on an auto-encoder model has a number of advantages over prior work, in that it can accurately embed high-resolution images in real-time, into an embedding space that disentangles texture from structure, and generates realistic output images … Each code in the representation can be independently modified such that the resulting image both looks realistic and reflects the unmodified codes,” the coauthors of the study wrote.

The researchers’ approach isn’t novel in the sense that many AI models can edit portions of images to create new images. For example, the MIT-IBM Watson AI Lab released a tool that lets users upload photographs and customize the appearance of pictured buildings, flora, and fixtures, and Nvidia’s GauGAN can create lifelike landscape images that never existed. But these models tend to be challenging to design and computationally intensive to run.



By contrast, the Swapping Autoencoder is lightweight, using image swapping as a “pretext” task for learning an embedding space useful for image manipulation. It encodes a given image into two separate latent codes: a “structure” code and a “texture” code. During training, the structure code learns to correspond to the layout or structure of a scene, while the texture code captures properties of the scene’s overall appearance.
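
The swap itself can be sketched with a toy linear model. This is nothing like the paper’s convolutional architecture, and all of the weights below are random placeholders, but it shows the mechanic of keeping one image’s structure code while borrowing another’s texture code:

import numpy as np

rng = np.random.default_rng(0)

# Toy "images" as flat 64-dimensional vectors; the real model works on pixels with conv encoders.
img_a = rng.normal(size=64)
img_b = rng.normal(size=64)

# Placeholder linear "encoders" and "decoder" (learned networks in the actual model).
W_structure = rng.normal(size=(16, 64)) * 0.1   # projects an image to its structure code
W_texture = rng.normal(size=(8, 64)) * 0.1      # projects an image to its texture code
W_decode = rng.normal(size=(64, 24)) * 0.1      # maps [structure; texture] back to image space

def encode(img):
    return W_structure @ img, W_texture @ img

def decode(structure, texture):
    return W_decode @ np.concatenate([structure, texture])

# Swap: keep image A's layout (structure) but render it with image B's appearance (texture).
structure_a, _ = encode(img_a)
_, texture_b = encode(img_b)
hybrid = decode(structure_a, texture_b)
print(hybrid.shape)  # (64,)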

In an experiment, the researchers trained Swapping Autoencoder on a data set containing images of churches, animal faces, bedrooms, people, mountain ranges, and waterfalls and built a web app that offers fine-grained control over uploaded photos. The app supports global style editing and region editing as well as cloning, with a brush tool that replaces the structure code from another part of the image.

“Tools for creative expression are an important part of human culture … Learning-based content creation tools such as our method can be used to democratize content creation, allowing novice users to synthesize compelling images,” the coauthors wrote.


Big Data – VentureBeat


How Stitch Fix used AI to personalize its online shopping experience

July 6, 2020   Big Data

Online retailers have long lured customers with the ability to browse vast selections of merchandise from home, quickly compare prices and offers, and have goods conveniently delivered to their doorstep. But much of the in-person shopping experience has been lost, not the least of which is trying on clothes to see how they fit, how the colors work with your complexion, and so on.

Companies like Stitch Fix, Wantable, and Trunk Club have attempted to address this problem by hiring professionals to choose clothes based on your custom parameters and ship them out to you. You can try things on, keep what you like, and send back what you don’t. Stitch Fix’s version of this service is called Fixes. Each Fix includes a personalized Style Card, an algorithmically driven outfit inspiration that helps human style experts match garments with a particular shopper and shows clothing options to complete outfits based on the items in the customer’s Fix. Due to popular demand, last year the company began testing a way for shoppers to buy those related items directly from Stitch Fix through a program called Shop Your Looks.

AI is a natural fit for such services, and Stitch Fix has embraced the technology to accelerate and improve Shop Your Looks. On the tech front, this puts the company in direct competition with behemoths Facebook, Amazon, and Google, all of which are aggressively building out AI-powered clothes shopping experiences.

Stitch Fix told VentureBeat that during the Shop Your Looks beta period, “more than one-third of clients who purchased through Shop Your Looks engaged with the feature multiple times, and approximately 60% of clients who purchased through the offering bought two items or more.” It’s been successful enough that the company recently expanded to include an entire shoppable collection using the same underlying technology to personalize outfit and item recommendations as you shop.


Stitch Fix data scientists Hilary Parker and Natalia Gardiol explained to VentureBeat in an email interview what drove the company to develop Shop Your Looks; how the team used AI to build it out; and the methods they used, like factorization machines.

In this case study:

  • Problem: How to expand the scope of its service that matches outfits to online customers using a mix of algorithms and human expertise.
  • The result is “Shop Your Looks.”
  • It grew out of an experiment by a small team of Stitch Fix data scientists, then expanded across other units within the company.
  • The biggest challenge was how to determine what is a “good” outfit, when taste is so subjective and context matters.
  • Stitch Fix used a combination of human-crafted rules to store, sort, and manipulate data, along with AI models called factorization machines.

This interview has been edited for clarity and brevity.

VentureBeat: Did Stitch Fix kind of fall in love with an AI tool or technique, using that as inspiration to make a product using that tool or technique? Or did the company start with a problem or challenge and eventually settle on an AI-powered solution?

Stitch Fix: To create Shop Your Looks, we had to evolve our algorithm capabilities from matching a client with an individual item in a Fix to now matching an entire outfit based on a client’s past purchases and preferences. This is an incredibly complex challenge because it means not only understanding which items go together but also which of these outfits an individual client will actually like. For example, one person may like bold patterns mixed together and another person may prefer a bold top with a more muted bottom.

To help us solve this problem, we took advantage of our existing framework that provides Stylists with item recommendations for a Fix and determined what new information we needed to feed into that framework, and how we could collect it.

First, it’s important to understand how clients currently share information with us:

  • Style Profile: When a client signs up for Stitch Fix, we receive 90 different data points — from style to price point to size.
  • Feedback at checkout: 85% of our clients tell us why they are keeping or returning an item. This is incredibly rich data, including details on fit and style — no other retailer gets this level of feedback.
  • Style Shuffle: an interactive feature within our app and on our website where clients can “thumbs up” or “thumbs down” an image of an item or an outfit. They can do this at any time — so not just when they receive a Fix. So far, we’ve received an incredible 4 billion item ratings from clients.
  • Personalized request notes to Stylists: Clients give their Stylists specific requests, such as if they are looking for an outfit for an event, or if they’ve seen an item that they really like.

For Shop Your Looks, we supplement this with information about what items go together. The outfits in Style Cards, outfits our Creative Styling Team builds, and outfits we serve to clients in Style Shuffle give us valuable additional insight into a client’s outfit style preferences.

VB: How did you go about starting this project? Did you need to hire new talent?

SF: Data science is core to what we do. We have more than 125 data scientists who work across our business, including in recommendation systems, human computation, resource management, inventory management, and apparel design.

Data-driven experimentation is an important part of the team’s culture, so like many initiatives at Stitch Fix, Shop Your Looks was born out of an experiment from a small team of data scientists. As the project grew beyond the initial data collecting phase and into beta testing, the data science team worked with other groups across the business. For example, our Creative Styling Team is tuned in to customer needs and able to recommend looks that are approachable, aspirational, and inspirational.

VB: What was the biggest or most interesting challenge you had to overcome in the process of creating Shop Your Looks?

SF: Creating outfits for clients is a really complex problem because what makes a good outfit is so subjective to each individual. What one person believes is a great outfit, another might not. The toughest part of solving this problem is that an outfit is not a fixed entity — it’s fundamentally contextual. Tackling this problem required gathering new insights, not just about specific items that clients like, but also about how clients reacted to items grouped together.

And because style is so subjective, we had to rethink how we qualified a “good” outfit for our algorithms, since there’s not simply one perfect outfit that exists. Clients have different style preferences, so we believe a “good” outfit is one that a certain set of our clients like, but not necessarily all.

We learn a lot about how clients react to items grouped together when we share outfits with clients and ask them to rate them via Style Shuffle.

VB: What AI tools and techniques does Stitch Fix employ — generally, and for Shop Your Looks?

SF: Shop Your Looks combines AI models and human-crafted rules to store, sort, and manipulate data.

The system is roughly based on a class of AI models called factorization machines and has a few distinct steps. Because generating outfits is complicated, we can’t just create an outfit and call it good. In the first step, we create a pairing model, which is able to predict pairs of items that go well together, such as a pair of shoes and a skirt or a pair of pants and a T-shirt.

We then move on to the next stage — outfit assembly. Here we select a set of items that all come together to form a cohesive outfit (based on the predictions from the pairing model). In this system, we use “outfit templates,” which provide a guideline of what an outfit consists of. For example, one template is tops, pants, shoes, and a bag, and another is a dress, necklace, and shoes.

In the final phase of recommending outfits for Shop Your Looks, there are several factors that come into play. We set an anchor item, which is an item the client kept from a past Fix, which we’d like to build outfits around. The algorithm also has to factor in what inventory is available at any given time. Once that is done, the algorithm develops personalized recommendations tailored to each client’s preferences. Clients can then browse and shop these looks directly from the Shop tab on mobile or desktop. The outfit recommendations refresh throughout the day, so clients can regularly check back for new outfit inspiration.
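
To make the pipeline concrete, here is a minimal Python sketch of the two stages described above: a factorization-machine-style pairing score, followed by a greedy, template-driven outfit assembly around an anchor item. Every name, number, and function in it (the items, the latent factors, pair_score, the greedy fill) is invented for illustration and is not Stitch Fix’s actual system.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inventory grouped by slot; item names are invented for illustration.
inventory = {
    "tops": ["silk blouse", "graphic tee"],
    "pants": ["wide-leg trousers", "skinny jeans"],
    "shoes": ["ankle boots", "white sneakers"],
    "bag": ["crossbody bag", "tote"],
}
template = ["tops", "pants", "shoes", "bag"]  # one "outfit template"
anchor = "denim jacket"  # the item the client kept from a past Fix

# In a factorization machine, each item gets a bias and a k-dimensional latent factor
# learned from feedback; random placeholders stand in for the learned values here.
k = 8
all_items = [anchor] + [item for items in inventory.values() for item in items]
factors = {item: rng.normal(scale=0.3, size=k) for item in all_items}
bias = {item: float(rng.normal(scale=0.1)) for item in all_items}

def pair_score(a, b):
    # Pairing model: predicted "goes well together" score for two items (higher is better).
    return bias[a] + bias[b] + float(factors[a] @ factors[b])

# Outfit assembly: greedily fill each slot in the template with the item that best
# matches everything chosen so far, starting from the anchor item.
outfit = [anchor]
for slot in template:
    best = max(inventory[slot],
               key=lambda item: sum(pair_score(item, chosen) for chosen in outfit))
    outfit.append(best)

print(outfit)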

VB: What did you learn that’s applicable to future AI projects?

SF: We introduced Shop Your Looks to a small number of our clients in the U.S. last year, and throughout this initial beta period we learned a lot about how they interact with the product and how our algorithms performed.

A key tenet of our personalization model is that the more information clients share, the better we are able to personalize their recommendations. We are usually able to adapt the model based on feedback from our clients; however, rules-based systems aren’t generally adaptive. We need the system to learn from client feedback on the outfits it recommends. We’re receiving immensely helpful feedback, from how clients engage with the outfit recommendations and also from a custom-built internal QA system. The model is in its early days, and we are continually adding more information to show clients more highly personalized outfits. For example, while seasonal trends are important overall, recommendations should be customized to a client’s local climate so that clients who experience summer weather earlier than others will start to receive summer items before those in cooler climates.

As we serve more clients, we are receiving an additional data set that strengthens the feedback loop and continues to make our personalization capabilities stronger.

VB: What’s the next AI-related project for Stitch Fix (that you can talk about)?

SF: One of the most interesting aspects of data science at Stitch Fix is the unusual degree to which the algorithms team is engaged with virtually every aspect of the business — from marketing to managing inventory and operations, and of course in helping our Stylists choose items our clients will love.

We believe that when we look to the future, the data science team will still be focused on improving personalization. This could include anything from sizing to predicting your styling needs before you even know you need something.


Big Data – VentureBeat


Get the SQL Query used in DirectQuery mode

February 15, 2020   Self-Service BI

When you are optimizing your DirectQuery model and you have done all the optimizations on the model already, you might want to run the queries generated by Power BI by your DBA. He then might be able to do some index tuning or even suggest some model changes. But how do you capture them? There are a few simple ways that I will describe here.

1 Use DAX studio and Power BI desktop

We will start by opening the report and enabling the performance analyzer. This allows us to get each individual query based on duration, so we can optimize them one by one. Click Start recording and refresh the visuals to load the entire page and capture all the queries that are sent.

Once it has run you can copy the query. This is a DAX query so now we need to get the SQL query.

We can do this by running the query in DAX studio which will give us the SQL Query:

That’s option 1. This is a one by one approach.

1B Use DAX studio and Power BI desktop

I call this option 1B as you can change the flow a bit and just capture all the queries at once in DAX Studio too. Just turn on All Queries in DAX Studio. This will show all the queries used for the page. For each query you can double-click, execute it, and get the SQL query. It saves you copying and pasting, but the result is the same.

2 Use SQL Profiler

Another option is to use good old SQL Profiler. You can connect this to Power BI Desktop as well and get all the SQL queries in one swoop. Just connect to the diagnostic port of Power BI Desktop and capture the “DirectQuery End” (or Begin) event class.

This option is probably best if you want to get all the queries and give them to the DBA all at once.

3 Use log analytics

Now this one only works for Azure Analysis Services at the moment, but you can also capture the same trace events from Profiler with Log Analytics, as I described here. Just make sure you query the same “DirectQuery End” event.

That’s it, several ways to get all the SQL queries generated by Power BI.



Kasper On BI


Figure out which columns and measures are used in your SSAS Tabular model

October 22, 2019   Self-Service BI

I got an interesting question about being able to figure out which measures and columns are being used for all tabular models in a server, either by known reports or by self service users using Power BI. To solve this I decided to use Power BI :).

To get the required information I decided to capture the queries being sent to AAS over a period of time and parse the results in Power BI Desktop. In this post I describe how to do it.

Start capturing data

First I started capturing the queries in AAS using Azure Log Analytics, as I described here. This can also be done using Profiler or XEvents, but I decided to be modern :).

In Log Analytics I can see all the queries that get sent:

For my analysis I don’t need all the columns and events, so let’s filter that down with log analytics to something like this:

AzureDiagnostics
| where OperationName == "QueryBegin"
| project TimeGenerated, TextData_s, DatabaseName_s
| where TimeGenerated > ago(30d)

You can change the time range to anything you want, up to whatever maximum range you have data for. You need to keep the TimeGenerated column, as we want to filter by it.

This will get us the data we can use in Power BI:

To get this data into Power BI you can click Export, “Export to Power BI” in the query log screen.

Bringing it into Power BI

This gets you a txt file with the M expression that is downloaded to your PC. You can then paste the M query into a blank Power BI query. Running it gets me the data I want:

From this I need to parse out the columns and measures that are used over the time period in which I captured the trace. This requires some tricks in Power Query (and it might not work for all scenarios). These are the steps I took (you can download the full M query here):

  • I got rid of all diagnostic queries by filtering out Select.
  • The time column is not needed for the analysis, so I remove it; you only need it in Log Analytics to filter on time.
  • Now here comes the main trick: I need to extract column, measure, and table names from the query text. I do this by replacing special characters and keywords like ( , [ ] EVALUATE with ~ and then splitting the columns (a Python sketch of the same idea follows this list). Thanks to Imke for the idea :). That starts to get us somewhere.
  • Next I unpivot and clean up the data a bit.
  • I filter to only show rows that start with ‘ or [, which keeps all column, table, and measure references.
  • To add more information I add a column that shows what type of field it is: measure, table, or column.
  • Finally I group the table and add a count.
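
For illustration, the same end result (a count of table, column, and measure references per captured query) can be sketched in Python with regular expressions instead of the Power Query replace-and-split trick; the query strings below are made up:

import re
from collections import Counter

# Made-up captured DAX queries, standing in for rows of TextData_s from the trace.
queries = [
    "EVALUATE SUMMARIZECOLUMNS('Date'[Year], 'Product'[Category], \"Sales\", [Total Sales])",
    "EVALUATE TOPN(10, 'Product', [Total Sales])",
]

fields = Counter()
for text in queries:
    fields.update(re.findall(r"'[^']+'", text))     # table references such as 'Product'
    fields.update(re.findall(r"\[[^\]]+\]", text))  # column/measure references such as [Total Sales]

# How often each field is referenced across the captured queries.
for field, count in fields.most_common():
    print(field, count)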

This gets me the data I want:

With the data I can start doing some reporting:

Now you have an overview of which fields are used and how often they are referenced. You can extend this solution much further, of course: for example, you could add the username to see who accessed what and how often, or you could compare it with the schema that you can get from other DISCOVER requests to see which fields are never used, so you can clean them up.

You can download the full solution here as PBIT. When you open it up for the first time it asks for the URL, you can get it from the M query that you download from Log Analytics.



Kasper On BI
