
Tag Archives: Getting

Not Getting the Most From Your Model Ops? Why Businesses Struggle With Data Science and Machine Learning

March 18, 2021   TIBCO Spotfire

Reading Time: 2 minutes

Companies have begun to recognize the value of integrating data science (DS) and machine learning (ML) across their organization to reap the benefits of the advanced analytics they can provide. As such, DS/ML has seen a surge in popularity and usage as businesses have invested heavily in this technology. 

However, there’s a distinct difference between investing in DS/ML and managing to successfully gain tangible business value from that investment, and that’s where organizations are running into problems. 

The Results Are in: Businesses Struggle With DS/ML Deployment Across the Board

We recently conducted a global survey spanning 18 countries and 22 industries, reaching over a hundred business leaders and executives, more than half of whom were in the C-suite.

Of those respondents, just 14 percent reported that they are currently operationalizing DS/ML. Within that 14 percent, 24 percent can only use it in a single functional area, far short of the technology’s innovative potential.

Why are so few organizations able to follow through with model ops adoption? What are the barriers keeping businesses from operationalizing data science and machine learning?       

The Devil’s in the Data

According to the survey results, while a lack of talented data scientists to build the models made the list of top ten obstacles to DS/ML adoption, it was cited by only about 16 percent of respondents. On the other hand, seven of those ten obstacles, including the top four, were data-related: issues with data security, data privacy, data prep, and data access in particular were each cited by 27 to 38 percent of respondents.

While there are many other issues to contend with, including a lack of management and financial support and the absence of a clear integration strategy, security compliance and data privacy concerns are clearly a significant barrier to operationalizing DS/ML.

Why Overcoming These Problems Is Critical for Innovation

Data scientists can develop as many models as they want for a business, but if those models don’t get deployed, they aren’t providing any value. For the modern digital business to have any hope of keeping up with the competition, model ops is a vital capability: it allows organizations to effectively operationalize DS/ML models, putting them into production and applying them to streaming, real-time data, edge applications, and more.

For a more in-depth breakdown of our survey results, you can check out our full ebook now. And if you’re ready to move past insights and into action, you can download our four-step guide to finding out what it takes to operationalize data science within your organization and get a leg up on the competition.


The TIBCO Blog


Database version control: Getting started with Flyway

January 16, 2021   BI News and Info

“Database migrations made easy” and “Version control for your database” are a couple of headlines you will find on Flyway’s official website. And let me tell you this, those statements are absolutely correct. Flyway is a multi-platform, cross-database version control tool with over 20 supported databases.

In all my years of experience working as an architect on monolithic and cloud-native apps, Flyway is by far the easiest and best tool on the market for managing database migrations.

Whether you are an experienced data professional or starting to get involved in the world of data, this article is the foundation of a series that will get you through this fantastic journey of database migrations with Flyway.

Background history

Flyway was created by Axel Fontaine in early 2010 at Google Code under the Apache 2.0 license. According to Axel, it all started when he searched for a tool that would let him integrate application and database changes easily and simply, using plain SQL. To his surprise, no such tool existed, which makes total sense to me because there were not many options back then.

To put that in context: everything we know as DevOps today was conceived around 2009, and David Farley and Jez Humble released their widely recognized book “Continuous Delivery” in 2010. Axel was, without question, a pioneer in deciding to write his own tool to solve this widespread software development problem: making database changes part of the software deployment process.

Flyway was very well received by the developer community, leading to high-speed growth and evolution. For example, the list of supported databases grew, support for multiple operating systems was added, and many more features were included from version to version.

The next step in Flyway’s evolution was the launch of the Pro and Enterprise editions back in December 2017, a smart decision to secure the project’s progression and viability. Without question, Flyway was already the industry-leading standard for database migrations at that time.

Around mid-2019, Redgate Software acquired Flyway from Axel Fontaine. Redgate’s expertise in the database tooling space opens the door for Flyway to new opportunities in expansion, adoption, and, once more, evolution!

Database migrations

You are probably already familiar with the term database migration, which can mean several different things within the context of enterprise applications. It could mean moving a database from one platform to another, or moving a database from a previous version of the DBMS engine to the most recent one. Another common scenario these days is moving a database from an on-premises environment to a cloud IaaS or PaaS solution.

This article is not related to any of the practices mentioned above. It will get you started with database migrations in the context of schema migrations. Yes, this is another kind of database migration: the practice of evolving a database schema with incremental, reversible, and consistent changes through a simple approach. This approach enables integrating database changes with version control and application deployment processes.

Before digging deeper into this topic, I would like to address the basic requirements of database migrations. Trust me, this topic is fascinating and full of great information that will help you adopt this practice. Whether you are a software developer, database administrator, or solutions architect, understanding database development practices like this is essential to becoming a better professional.

Evolutionary Database Design is the title of an article published on Martin Fowler’s website in May 2006. It is an extract of the book Refactoring Databases by Scott Ambler and Pramod Sadalage, also released in 2006. The book goes above and beyond in explaining the evolution of database development practices through the years, providing techniques and best practices for embracing database changes in software development projects, especially when adopting agile methodologies.

The approach described in this book sets the stage for a collection of best practices that should be followed to be successful.

DBA and developer collaboration

Software development practices like DevOps demand that people with different skills and backgrounds collaborate closely, knocking down the silos and bottlenecks between teams, like the usual separation between development and operations.

In a database development effort, collaboration is crucial to the success of the project. Developers and DBAs should work in harmony, assessing the impact of proposed database changes before implementing them. Anybody can take the initiative to start a conversation about whether the database code is optimal, secure, and scalable, or simply to make sure it follows best practices.

Version control

Without question, everybody benefits from using version control. All the artifacts that are part of a software project should be included so that each contributor’s changes are tracked: the application code, unit and functional tests, database scripts, and even other code such as the build scripts used to create an environment from scratch, known today as Infrastructure as Code.

All database changes are migrations

All database changes created during the earlier stages of the development phase should be captured, no exceptions. This approach encourages treating database change files like any other artifact of the application, making sure to save and commit these change files to the same version control repository as the application code so they are versioned together.

Migration scripts should include, but are not limited to, any modification made to your database schema, such as DDL (Data Definition Language) and DML (Data Manipulation Language) changes, or data correction changes implemented to solve a production data problem.
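
To make that concrete, here is a minimal sketch of what two such scripts could look like, following Flyway’s V<version>__<description>.sql naming convention; the table and column names are hypothetical:

-- V1__create_customer.sql: a DDL change creating the initial schema
CREATE TABLE customer (
    id         INT PRIMARY KEY,
    full_name  VARCHAR(100) NOT NULL
);

-- V2__trim_customer_names.sql: a DML data correction
UPDATE customer
SET full_name = TRIM(full_name);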

Everybody gets their own instance

It is very common for organizations to have shared database environments. This is usually a bad idea because of the risk of project delays caused by unexpected resource contention problems, or, in other cases, by interruptions from the development team itself, for example, when a person working on some database objects finds that those objects were modified as part of a last-minute database schema refactoring.

Everyone learns by experimenting with new things. Having a personal workspace where one can explore creative ways to solve a problem is excellent! More importantly, being able to work free of interruptions increases productivity.

Leveraging technologies like Docker containers to create an isolated and personal database development environment/workspace seems like a good way to resolve this issue. Other solutions like Windows Subsystem for Linux (WSL) take this approach to a whole new level, providing an additional operating system on top of the Windows workstation.

Leverage continuous integration

Continuous Integration —CI, for short— is a software development practice that consists of merging all changes from a developer’s workspace copy to a specific software branch.

Best practices recommend that each developer should integrate all changes from their workspace into the version control repository at least once a day.

There is a plethora of tools available to set up a continuous integration process like the one recommended above. Which one to choose depends on the size and budget of the organization. Among the most popular are Jenkins, CircleCI, Travis CI, and GitLab.

According to the theory behind this practice, there are a few key characteristics a database migration tool should meet:

  • All migrations must have a unique identifier
  • All migrations must be recorded in a migration history table
  • All migrations should be repeatable and reversible

All these practices and characteristics sound attractive for speeding up a database development effort. However, the question is: how, and with what, can we approach database migrations easily? Worry no more, Flyway to the rescue!


What is Flyway?

Flyway’s official documentation describes the tool as an open-source database migration tool that strongly favors simplicity and convention over configuration, designed to facilitate continuous integration processes for any database on any platform.

Migrations can be written in plain SQL, of course, as explained at the beginning of this article. This type of migration must follow the specific syntax rules of each database engine, such as PL/pgSQL for PostgreSQL, T-SQL for SQL Server, and PL/SQL for Oracle.

Flyway migrations can also be executed manually through its command-line client, or programmatically using the Java API, Docker containers, or the Maven and Gradle plugins.

It supports more than twenty database engines by default. Whether the database is hosted on-premises or in a cloud environment, Flyway has no problem connecting, leveraging the JDBC driver library shipped with the tool.

Flyway folder architecture

At the time of this writing (December 2020), Flyway’s latest version is 7.3.2, which has the following directory structure:

[Screenshot: Flyway’s directory structure, taken from the official Flyway documentation]

As you can see, the folder structure is very straightforward; the documentation is so good that it even includes a brief description of some of the folders. Let’s take an in-depth look and define each one of them.

The conf folder is the default location where Flyway looks for the database connectivity configuration. Flyway uses a simple key-value pair approach to set and load specific configurations via the flyway.conf file. I will address the configuration file in detail in future articles; for now, I will stick to this simple definition.
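
As a minimal sketch, a flyway.conf for a local PostgreSQL database could look like the following; the connection values are placeholders for your own environment:

# flyway.conf: key-value pairs loaded by Flyway at startup
flyway.url=jdbc:postgresql://localhost:5432/shinydb
flyway.user=shiny_admin
flyway.password=secret
# Optional: where to look for SQL migrations (defaults to the sql folder)
flyway.locations=filesystem:sql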

Flyway was written in Java, hence the existence of JRE and lib folders. I strongly recommend leaving those folders alone; any modification to the files within these folders will compromise Flyway’s functionality.

The licenses folder contains the teams, community, and third-party license information in the form of text files; these three files are available if you want to take a look and read all the details about each type of license.

The drivers folder is where all the JDBC drivers mentioned before can be found, in the form of jar files. This folder is worth exploring in detail to see what ships with the tool in terms of database connectivity through JDBC.

I will use my existing Flyway 7.3.2 environment for macOS. I’ll start by verifying my current Flyway version using the flyway -v command:

[Screenshot: flyway -v output reporting version 7.3.2]

Good, as you can see, I’m on version 7.3.2. This is the same version shown in the official documentation screenshot that describes the folder structure. Now, I will find the actual folder where Flyway is installed using the which flyway Linux command:

[Screenshot: which flyway output showing the installation path]

Using the command tree -d, I can list all folders inside the Flyway installation path:

[Screenshot: tree -d output listing the folders inside the Flyway installation path]

Then I simply navigate to the drivers folder and list all the files inside this path using the ls -ll Linux command:

[Screenshot: ls -ll output listing the JDBC driver jar files]

Look at that long list of JDBC drivers in the form of jar files; right out of the box, you can connect to the most popular database engines, like PostgreSQL, Microsoft SQL Server, SQLite, Snowflake, MySQL, Oracle, and more.

Following the folder structure, there are the jars and sql folders, where you store your Java- or SQL-based migrations. Flyway looks at these folders by default to automatically discover filesystem (SQL scripts) or classpath (Java) migrations. Of course, these default locations can be overridden at execution time via a config file or environment variables.
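
For instance, the migration locations can be overridden for a single run through a command-line argument, or for the whole session through an environment variable; the path below is hypothetical:

# Override the default locations for a single run
flyway -locations=filesystem:/home/dev/shinysoft/migrations migrate

# Or set it for the session via an environment variable
export FLYWAY_LOCATIONS=filesystem:/home/dev/shinysoft/migrations
flyway migrate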

Finally, there are the executable files. As you can see, there are two: one for macOS/Linux systems (flyway) and one for Windows systems (flyway.cmd).

How it works

Take a look at the following visual example, where there is an application called Shiny Soft and an empty shell database called Shiny DB. Flyway is installed on the developer’s workstation, where a couple of migrations were created to deploy some database changes.

[Diagram: the Shiny Soft application and the empty Shiny DB database, with Flyway on the developer’s workstation]

The first thing Flyway will do when starting this project is check whether the migration history table exists. This example begins the development effort with an empty shell database, so Flyway will proceed to create the flyway_schema_history table on the target database, Shiny DB.

[Diagram: Flyway creates the flyway_schema_history table on Shiny DB]

Right after creating the migration history table, Flyway will scan its default locations (jars / sql) and apply all available migrations:

[Diagram: Flyway applies Migration 1 and Migration 2 to Shiny DB]

Simultaneously, the flyway_schema_history table is updated with two new records, one for each of the migrations applied (Migration 1 and Migration 2).

This table contains a level of detail that will help you better understand how the database schema is evolving. Take a look at the following example:

[Screenshot: the flyway_schema_history table with two migration records]

As you can see, there are two entries. Each has a version, description, type of migration, the script used, and more audit information.

This metadata is valuable and crucial to Flyway’s functionality because it helps Flyway keep track of the current and future versions of your database. And yes, Flyway is also capable of identifying the migrations that are pending to be applied.
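
You can also query the history table directly; here is a minimal sketch, assuming the default table name and its standard columns:

-- Inspect the migration history recorded by Flyway
SELECT version, description, type, script, installed_by, installed_on, success
FROM flyway_schema_history
ORDER BY installed_rank;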

Imagine a scenario where Migration 2 needs to be refactored, creating just one table instead of two. What you want to do is create a new file called Migration 2.1. This migration will include the DDL instructions to drop the two existing tables and create a new one instead.
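
A sketch of what that refactoring script could contain, with hypothetical table names:

-- V2.1__merge_address_tables.sql: drop the two existing tables, create one instead
DROP TABLE customer_billing_address;
DROP TABLE customer_shipping_address;

CREATE TABLE customer_address (
    id           INT PRIMARY KEY,
    customer_id  INT NOT NULL,
    kind         VARCHAR(10) NOT NULL, -- 'billing' or 'shipping'
    street       VARCHAR(200) NOT NULL
);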

Flyway will automatically flag this new migration as pending in the flyway_schema_history table; however, it will not apply the migration until you decide to do so.

[Diagram: Migration 2.1 flagged as pending in flyway_schema_history]

Once Migration 2.1 is applied, Flyway will update the flyway_schema_history table with a new record for the latest migration applied:

[Screenshot: flyway_schema_history showing a third record for version 2.1]

Notice that the third record, corresponding to database version 2.1, is not a SQL script; hence, the type column shows JDBC. This was a Java API migration, successfully applied to perform a database refactoring change.


Advantages

At this point, you should be a little more familiar with Flyway; I have briefly described what it is and how it works. Now, stop to think about the advantages you would gain by making Flyway the central component of your database deployment management.

In software development, as with everything you do in life, the longer you take to close the feedback loop, the worse the results are. Evolving a monolithic legacy database, where every database change is performed following the state-based deployment approach, can be challenging. However, choosing the right tool for the job should make your transition to migration-based deployments easier and less painful.

Embracing database migrations with Flyway could not be easier. Whether you choose to start with SQL script-based migrations or Java classes, the learning curve is relatively small. You can always rely on Flyway’s documentation to check, learn, and get guidance on every single command and piece of functionality shipped with the tool out of the box.

For starters, you don’t have to worry about keeping detailed control of all the changes applied to your database. Information about past and future migrations is held in great detail in Flyway’s schema history table. This is not just a simple control table: what I like about it is the level of detail recorded for every single migration applied to the database. You will be able to identify the type of migration (SQL, Java), who applied it, when, and exactly what was changed in your database.

Another major pain point solved by Flyway is database schema mismatch, a widespread and painful problem encountered when working with different environments like development, test, QA, and production. Recreating a database from scratch while specifying the exact schema version you want to deploy is a powerful thing. A database migration tool like Flyway will ensure that all the changes belonging to a specific version of your application are applied: database changes should be deployed together with application changes.

Conclusion

This article provides a foundation and a detailed explanation of the evolutionary database design techniques and practices required to approach database migrations with tools like Flyway.

I also included a summary of Flyway as a database migration tool, starting from its early days and explaining why and how the tool was born. The article then explored its folder structure and components and provided a visual, descriptive example of how this tool approaches database migrations with ease.

Please join me in the next articles in this series, which will focus on how to install Flyway’s command-line tool for Linux/macOS and Windows, and will explore all the details of its configuration through config files and environment variables.


SQL – Simple Talk


Getting Excited for the Microsoft Cloud for Healthcare!

October 30, 2020   Microsoft Dynamics CRM

The Microsoft Cloud for Healthcare is launching this week and HCL-PowerObjects is proud to be one of only eight Global Systems Integrators (GSIs) selected by Microsoft to offer this transformational new product. With Microsoft’s trusted and fully integrated cloud capabilities as its foundation, the Microsoft Cloud for Healthcare is designed to empower hospitals and clinics to improve health care…



PowerObjects- Bringing Focus to Dynamics CRM


THINKING ABOUT GETTING SOME IN THE FALL

June 5, 2020   Humor

Ice Cream tulips:

I don’t normally think of food when looking at flowers, but these lovely ‘Ice Cream Tulips’ really get me thinking about a nice cold treat to cool me off on a hot summer day.

If you’re a flower enthusiast, you probably already know about the ice cream tulip variety, but for most people they are still somewhat of a novelty, especially just before their petals open, when they truly look like an ice-cream cone good enough to eat, or even as a whipped cream-topped treat. They are a relatively new tulip variety, and even though bulbs seem to be widely available for purchase online, they are rather expensive, so you probably won’t see them sold at most flower markets too often. Still, if you’re trying to make your garden stand out, or just make your neighbors constantly crave ice cream, they are worth the investment.

Tulipa Ice Cream bulbs are rather large compared to most other tulip bulbs, measuring up to 4″ in diameter, and the flowers themselves grow to 25 cm tall, on average. The flowers appear pink at first, but then the white center bursts open and the flower takes on its iconic ice cream shape.

Ice cream flowers are double-petaled, with at least 12 petals as opposed to the 6 of regular tulips. While both the pink and white petals can open, more often than not the white middle doesn’t open entirely, maintaining that coveted ice cream look.


ANTZ-IN-PANTZ ……


AI Weekly: Getting back to work under the watchful eye of technology

May 23, 2020   Big Data

This week, we published our latest special issue, “AI and surveillance in the age of coronavirus,” in which we examined how to balance freedom and safety as governments and companies use specific technologies to track and trace the spread of the coronavirus. Now, all the parts and pieces of those issues are filtering down to the next challenge: How do we get back to work?

Everyone is considering how and when to emerge from quarantines and reboot normal life in some way. When do we send our kids back to childcare? When do we finally get those haircuts? When can we safely enjoy a night out at a restaurant with our partners? But most of those are decisions that we can each control for ourselves. We can do those things whenever we feel comfortable doing them.

Not so when it comes to the workplace. Even as working from home has become a new normal for millions, there are millions of other people who do not have that option.

Yet we’re still in the midst of the pandemic, and every point of contact with another human is a potential risk. Entering a building with other employees, and staying in some kind of contact throughout the day with coworkers or customers, increases that risk. There must be measures in place to protect these workers. But that requires some mix of screening, testing, tracking, and surveillance, bumping up against ethics and workers’ rights: all the same delicate problems we tackled in our special issue, but filtered down into the workplace.


Companies that bring employees back to their in-person jobs have to be cognizant of their liability if any workers, clients, or customers get sick. In addition to requiring masks, implementing things like touchless kiosks, and using thermal scanning to check workers’ temperatures at the door, that could mean frequently testing employees for COVID-19 — and contact tracing them within the building and beyond, perhaps using an app that they’re required to install on their phones. It may also mean using computer vision to ensure that warehouse workers maintain safe social distance and wear protective gear. And so on.

Were this any other time, unions or privacy advocates could step in and push back on onerous workplace surveillance. But it’s hard to make the argument that such measures are anything but completely necessary. People technically have a choice about whether or not to show up to their workplace — but effectively, they don’t. Some people don’t get a paycheck unless they show up and punch in; regardless, amid historic unemployment and income loss, the vast majority of employees will do anything to hang onto their jobs. A “choice” between feeding your family or potentially contracting a fatal illness and passing it to your loved ones is no choice at all.

Ironically, that lack of choice shifts the balance of safety and freedom, because if you require a person to be at work, you absolutely have to protect them by enforcing safety measures — which may require a deeper level of surveillance and workers’ loss of control over personal data. It’s a complicated problem that we’ll continue to explore, unpack, and investigate.


Big Data – VentureBeat


Monitoring the Power Platform: Canvas Driven Apps – Getting Started with Application Insights

May 15, 2020   Microsoft Dynamics CRM

Summary

Power Apps Canvas Apps represent a no- or low-code approach to building and delivering modern applications. The requirement of knowing programming languages such as C# has been removed, allowing makers of virtually any background to build apps. These apps can be used with hundreds of connectors, allowing for a flexible user interface layered on top of data sources. Apps can also be generated from data sources automatically, letting you quickly create and deploy an application to your team or customers.

This article is designed to introduce makers to incorporating Azure Application Insights into Power Apps Canvas Apps. We will cover adding the necessary identifiers, reviewing the tables events are sent to, and examining some helpful Kusto queries.

Application Insights


Azure Application Insights is an extensible Application Performance Management (APM) service that can be used to monitor applications, tests, and more, and it works with any application hosted in any environment. Depending on what’s being monitored, there are SDKs available; for other applications, connections and message delivery can be programmed using the available REST APIs.

For Power Platform components, Application Insights is recommended due to its direct integration with Power Apps features and tools and its capabilities to deliver to the API.

Once we begin sending telemetry to Application Insights, we can review availability tests, user actions, deployment metrics, and other feedback from our applications in real time. Connecting our messages with correlation identifiers gives us a holistic view into how our apps are interdependent. This provides the transparency desired, and honestly needed, with modern-era technology.

Adding Application Insights to Canvas Apps

Adding Azure Application Insights message delivery is a native feature of Power Apps Canvas Apps. Once added, it will begin to send messages from both the preview player and, once deployed, your application in Power Apps.

To add the Instrumentation Key to your Canvas App, open the Power Apps Studio and locate the ‘App’ in the Tree view.


Next, in the App Object window to the right, add the Azure Application Insights Instrumentation Key.


Adding and Locating Identifiers

Identifiers in Canvas Apps come in various formats, including the user playing the app, the session ID, the player version, the app version, and so on. These data elements are a must when troubleshooting and monitoring how your users interact with your app. Some of the data points I find most valuable are the app and player build numbers, which are key to understanding whether users are on out-of-date player versions. The other major data point is the session ID. To obtain these values as an app user, navigate to the Settings window and click ‘Session details’.


Session Window:

[Screenshot: the Session details window showing the session ID and Power Apps player version]

If a user reports an issue, having the session ID and Power Apps player version can help with troubleshooting. That said, I currently don’t see a way to grab the session ID natively using Canvas App functions. However, using the Power Apps connector with Power Automate, the session ID can be obtained and added to a trace entry.

AUTHOR’S NOTE: This article from Aengus Heaney, titled “Log telemetry for your Apps using Azure Application Insights”, details that this feature is coming. Unless you need it immediately, I would suggest avoiding the Power Automate customization: the new feature will eliminate the need for using the Power Apps connector in this fashion, and I’m very excited to see it’s coming! I’ll update this article once it is available.

[Diagram: correlating the session ID between the Canvas App and Application Insights]

Adding Traces

The Trace function is used to send messages to the traces table in Application Insights. This gives makers the ability to add telemetry and instrumentation from practically any control on any screen. It also opens up generating identifiers for specific actions, which can be used for troubleshooting in a live environment. The image below shows informational traces capturing the timings around the invocation of a Power Automate flow: a trace for the button click, plus entries instrumenting the flow itself.

[Screenshot: Trace calls capturing a button click and the timings around a Power Automate flow]

The image below is the result of the Trace methods showing the message and the time stamp for each entry.

[Screenshot: the resulting trace messages and time stamps in Application Insights]

Traces can be one of three types: Information (for operational activities), Warning (a non-breaking issue), or Error (a breaking issue).


Based on the severity level, the Application Insights record will change.


Exploring what’s delivered

At the time of this writing, the tables that data is natively delivered to include the customEvents, pageViews, and browserTimings tables. Each table contains generic Azure Application Insights properties as well as specific relevant properties.


The customEvents table shows when the published app was started.


pageViews – The pageViews table is included when Azure Application Insights is added to Canvas Apps. The message received by Application Insights contains the URL, the name of the screen within your app, and the duration, as well as a handy performance bucket. Using the duration along with contextual information from the session and user, we can begin to identify performance concerns.


NOTE: I have seen the pageViews table duplicate the duration across all screens. Consider adding trace events as described in the Maker Defined Events section, or using a Kusto technique to find the difference between pageView entries.

browserTimings – This table represents browser interactions, including the send and receive durations, the client processing time, and the all-up total duration. Similar to the pageViews table, a performance bucket is present, allowing for a general visualization.


Maker Defined Events

The traces table contains information sent from the app both by the platform and by the maker using the Trace() function. For platform trace events, the user agent string and client information are captured. For the Trace function, as shown above, an enumeration is used to set the severity from the Canvas App; in Application Insights, this translates to a number.
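
For example, to surface only the more severe maker-defined traces, you can filter on that numeric column; a minimal sketch, assuming the standard Application Insights mapping (1 = Information, 2 = Warning, 3 = Error):

// Maker-defined traces at Warning severity or above
traces
| where severityLevel >= 2
| project timestamp, message, severityLevel, session_Id, user_Id
| order by timestamp desc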


Useful Kusto Queries

The Kusto Query Language allows analysts to write queries against Azure Application Insights. Below, I’ve included some queries to help you get started with each table events are currently delivered to.

Pulling ms-app identifiers from custom dimensions:

//This query shows how to parse customDimensions for the app identifiers
traces
| union pageViews, customEvents, browserTimings
| extend cd=parse_json(customDimensions)
| project timestamp,
itemId, //changes for each call
itemType,
operation_Id, operation_ParentId, //do not change for each call
operation_Name, session_Id, user_Id,
message, cd['ms-appSessionId'], cd['ms-appName'], cd['ms-appId']

Pulling Page Views within the same session:

//This query shows how to use a session_id to follow a user's path in the canvas app
pageViews 
// | where session_Id == "f8Pae" //Windows 10
// | where session_Id == "YhUhd" //iOS
| where (timestamp >= datetime(2020-05-13T10:02:52.137Z) and timestamp <= datetime(2020-05-14T12:04:52.137Z)) 

Slow Performing Pages or screens:

// Slowest pages 
// What are the 3 slowest pages, and how slow are they? 
pageViews
| where notempty(duration) and client_Type == 'Browser'
| extend total_duration=duration*itemCount
| summarize avg_duration=(sum(total_duration)/sum(itemCount)) by operation_Name
| top 3 by avg_duration desc
| render piechart 

Connecting the Dots

The Power Apps Canvas App platform provides app-contextual information for each event passed to Application Insights, including the operation name, the operation and parent operation identifiers, and user and session data. Most messages also include custom properties titled ms-appId, ms-appName, and ms-appSessionId.

The following Kusto query is an example showing how to isolate specific operations by a user in a player session. Using the session_Id field, we can filter to the specific action, which may have generated multiple events, and group them together.

union (traces), (requests), (pageViews), (dependencies), (customEvents), (availabilityResults), (exceptions)
| extend itemType = iif(itemType == 'availabilityResult',itemType,iif(itemType == 'customEvent',itemType,iif(itemType == 'dependency',itemType,iif(itemType == 'pageView',itemType,iif(itemType == 'request',itemType,iif(itemType == 'trace',itemType,iif(itemType == 'exception',itemType,"")))))))
| where 
(
    (itemType == 'request' or (itemType == 'trace' or (itemType == 'exception' or (itemType == 'dependency' or (itemType == 'availabilityResult' or (itemType == 'pageView' or itemType == 'customEvent')))))) 
    and 
    ((timestamp >= datetime(2020-04-26T05:17:59.459Z) and timestamp <= datetime(2020-04-27T05:17:59.459Z)) 
    and 
    session_Id == 'tmcZK'))
| top 101 by timestamp desc

Application Insights contains a User Session feature that can help visualize and provide data points for the specific session. The image below combines custom events and page views.

[Screenshot: the User Session timeline combining custom events and page views]

Next Steps

The native Azure Application Insights traceability functionality is a relatively new feature for Canvas Apps. I would also expect to see additional messages delivered to the missing tables mentioned above, such as exceptions and custom metrics. In the meantime, consider using a Power Automate flow to send events, a custom connector, or the Log Analytics Data Collector connector. That connector requires a P1 license but does allow sending data to a Log Analytics workspace, which can be queried and monitored by Azure Monitor and other platforms.

Utilizing the Azure Application Insights API key, Microsoft Power BI reports can be created based on the data collected from Power Apps Canvas Apps. Consider using the M query export or building custom queries using the Application Insights API.

If you are interested in learning more about specialized guidance and training for monitoring or other areas of the Power Platform, which includes a monitoring workshop, please contact your Technical Account Manager or Microsoft representative for further details.

Your feedback is extremely valuable so please leave a comment below and I’ll be happy to help where I can! Also, if you find any inconsistencies, omissions or have suggestions, please go here to submit a new issue.

Index

Monitoring the Power Platform: Introduction and Index


Dynamics 365 Customer Engagement in the Field


Weathering A Pandemic And Preparing For The Next Outbreak: Getting UK Hospitals Ready

April 1, 2020   BI News and Info

The coronavirus outbreak, declared a pandemic by the World Health Organization (WHO), has put a huge strain on certain industries. Obviously, healthcare is at the forefront, along with pharma and research. They are reacting to the crisis by accessing information, creating understanding and insights, and trying to predict the ways in which it will affect patients, vulnerable groups, and the healthcare infrastructure itself.

The National Health Service (NHS) is gearing up to tackle the growing threat of a COVID-19 outbreak at a time when our health service is already strained. To respond to the current situation in the most appropriate way, NHS announced that hospitals should cancel all non-urgent surgeries for at least three months starting 15 April. The operational aim is to expand critical care capacity to the maximum and free up around 30,000 of England’s 100,000 general and acute beds. Such actions are unprecedented in the history of NHS.

Independent healthcare providers in the UK, like Spire Healthcare, have postponed some services based on new guidance from the Department of Health. They may see a softening of demand for elective surgeries from self-pay and insured patients, as patients may want to avoid non-urgent contact at this time. At the same time, the independent sector has been called in to help the NHS cope with this crisis. There are reports that the government will pay £2.4 million a day for the use of 8,000 private hospital beds to relieve pressure on the NHS as the coronavirus outbreak intensifies. Therefore, the independent sector is also likely to play a crucial role in managing the situation.

In this blog, I’ll highlight what actions hospitals and providers can take immediately and in the coming weeks and months to manage the current situation and be prepared for the next, inevitable outbreak.

Understand, protect, and enable employees

Communicating with employees at the right level of specificity and frequency is important for understanding the pulse of the workforce in a crisis. Workplace communication improves employee morale, productivity, and commitment, and software can be used to communicate effectively with employees for different purposes; for example, mass emails can be sent to convey important messages. Beginning immediately, some employee experience management (XM) tools are free and publicly available to all organizations. These tools help organizations understand how their employees are doing and what support they need as they adapt to new work environments, helping to close experience gaps and maintain continuity. In just four days, thousands of organizations signed up for one free XM solution.

To manage the patient care demand, staff may need refresher/customized training to provide all the extra care that may be required due to COVID-19. NHS Trusts and independent hospitals can deliver training to all clinical and patient-facing staff using mobile/handheld devices anytime and anywhere. Once the staff is adequately trained, they can take care of COVID-19 patients; keep themselves safe by using proper protection; support patients who need respiratory support; and begin setting up makeshift intensive care wards. To help support organizations, a completely free remote readiness and productivity academy provides training content for anyone, anytime, anywhere. The ready-to-watch video-based courses are designed to help mental wellness for workers, maintain the highest levels of hygiene, and develop leadership during times of change and challenge.

Technology can also help in identifying business-critical positions, making replacement plans, and managing the increased need for care staff. Hospitals can optimize their use of external workers and contractors by hiring, onboarding, and training large numbers of people quickly, so they are up to speed to manage a surge in demand at short notice.

Each hospital must develop a plan not only for finding more beds but also for finding the staff to fill them. Some reports suggest that recently retired medics or those on a career break may be asked to return to the NHS to handle the current situation. This could work out, provided onboarding and safety procedures are handled effectively. To manage the coordination of these efforts, free-of-charge collaboration software can be used for project and task management to help teams spend less time on admin and more on execution.

However, in the long term and with proper planning, such actions shouldn’t be needed if recruitment and retention of NHS clinicians are priorities. There are various solutions available to do strategic workforce planning and plan employee succession, development and performance, to build a strong workforce.

Patient management to experience management

“Ready to discharge” patients occupied an average of 3,450 beds a day in acute hospitals in January 2020, which means a total of 160,637 bed days were lost to people who did not need to be there. When a hospital is flooded with more critically ill patients than it can handle, more patients die.

By planning and coordinating with community health providers, acute providers can urgently discharge patients who do not need to be in the hospital. Hospital trusts manage as many as 8 million outpatient appointments every month. The current situation necessitates a surge in telemedicine, remote screening, and remote patient management to free up doctors’ time. Health engagement tools allow patients to closely interact with their caregivers without seeing the caregiver in the physician’s office. Patients can take an active, involved role in their journey, and physicians can access real-time insights that can be used to intervene when needed to improve patient outcomes.

Stabilize the supply chain

The healthcare supply chain is increasingly globally integrated, and COVID-19 has affected supply chain dynamics across China and other parts of the world. Pharma manufacturing is a case in point. A parliamentary report on the impact of Brexit on the pharmaceutical sector, published in May 2018, highlighted that 80 to 90% of generic medicines used in the NHS are imported, with China and India in the top five providers of UK medicines outside of the EU.

People in intensive care units need all kinds of specialized equipment, such as IV pumps, ventilators, and different kinds of monitors. Healthcare workers are going to need personal protective equipment (PPE) – FFP3 masks, goggles, gowns – because if they can’t protect themselves, more and more of them will start to fall sick. Doctors in Italy are already making equipment choices (e.g., ventilator versus bag valve mask) based on a patient’s age and likelihood of survival. The government has released its national stockpile of protective gear held back for pandemics and urged UK manufacturers to regear factories to build ventilators for the NHS.

Swift changes in demand across multiple geographies require agile actions: inventory optimization, route optimization, transportation optimization, and demand prediction and sensing are more crucial than ever. For the next 90 days, a procurement discovery tool is available free of charge, so any buyer can post their immediate sourcing needs, and any supplier can respond to show they can deliver. Buyers and suppliers can connect quickly and effectively and minimize the disruption caused by shipment delays, capacity issues, and increased consumer demand in times of crisis. Such tools can help make the connections that keep the supply chain intact, which ultimately has an impact on the everyday lives of consumers.

For the longer term, NHS could look at alternative sourcing methods and suppliers to diversify the supply base and reduce extensive expediting costs.

Travel and spend management

The virus is demonstrating the need for travel to be managed holistically for employees. NHS and independent sector providers have a duty of care and therefore need to know where their people are to support and help them.

Healthcare workers must leave their homes and may need the option of staying in NHS-reimbursed hotel accommodations, away from their families, while they continue to work. To ease the burden, the pro version of TripIt, a travel management service, is available to individual users free of charge for six months, whether they are new to the service and sign up by April 14 or are existing basic users. This offer aims to make things a little easier for care workers.

Mobilize hospital control centers

Better management of contagions requires planning, collaboration, and a comprehensive, systemic approach within each hospital. A hospital “control center” can operate like an air traffic control center in an airport, using advanced technology and artificial intelligence (AI) to efficiently move patients coming into and going out of the hospital. It can also provide more accurate predictions about demand; enable real-time information about staffing constraints and bed, operation theatre/equipment availability; manage discharge planning; and improve the flow of information. With this advanced functionality, hospitals should be able to treat more patients, cut waiting time, improve the patient experience, and reduce pressure on staff.

Establishing such a command center depends on the availability of clean data in digital format, but for the digitally mature NHS trusts, this should be achievable in the medium term.

Remain true to purpose

Finally, in response to the COVID-19 outbreak, NHS has been true to its purpose of serving patients and advancing the health of the nation. As we brace for months of heightened risk from the disease, this outbreak may change the healthcare sector and the world economy for the years ahead. Together, we can make this change a positive one.

Learn more about how organizations can help people respond better to the COVID-19 crisis in Dealing With Disruption: A Digital Nudge.

This article originally appeared on SAP Community.


Digitalist Magazine


Getting A Stuck Ring Off A Finger With A String

November 15, 2019   Humor

Steady.


https://i.imgur.com/WzDyDl3.mp4

“How to remove a stuck ring.”
Image courtesy of https://imgur.com/gallery/WzDyDl3.


Quipster


Test Automation and EasyRepro: 01 – Overview and Getting Started

October 30, 2019   Microsoft Dynamics CRM

EasyRepro is a framework that allows automated UI tests to be performed against a specific Dynamics 365 organization. You can use it to automate testing such as smoke, regression, and load tests. The framework is built on the open-source Selenium web drivers used by the industry across a wide range of projects and applications. The entire EasyRepro framework is open source and available on GitHub. The purpose of this article is to walk through the setup of the EasyRepro framework; it assumes you are familiar with concepts such as working with unit tests in Visual Studio, downloading NuGet packages, and cloning repositories from GitHub.

Getting Started

Now that you have a basic understanding of what EasyRepro is useful for, you probably would like to start working with it. Getting EasyRepro up and running is very simple, as the framework is designed with flexibility and agility in mind. However, like any other utility, there is some initial learning and there are a few hurdles to get over to begin working with EasyRepro. Let’s start with dependencies!

Dependencies

The first dependency involves the EasyRepro assemblies and the Selenium framework. The second involves .NET, specifically the .NET Framework (.NET Core can be used and is included as a feature branch!). Finally, depending on how you are working with the framework, you will want to include a testing framework to design, build, and run your unit tests.

Choosing How to Consume the EasyRepro Framework

There are two ways of consuming the EasyRepro framework: using the NuGet packages directly, or cloning or downloading the GitHub repository. The decision to use one over the other primarily depends on your need to explore or extend the framework. Working directly with the source code allows exploration of how EasyRepro interacts with Dynamics 365, while building on top of the NuGet packages allows for increased flexibility when extending the framework.

Downloading using NuGet Package Manager

The quickest way to get started with the EasyRepro framework is to simply add a NuGet package reference to your unit test project.

Create your unit test project and navigate to the NuGet Package Manager CLI. Use the Install-Package command to get the PowerApps.UIAutomation.Api package, as shown in the command below (v9.0.2 is the latest as of this writing; please refer to this link for any updates):

Install-Package PowerApps.UIAutomation.Api -Version 9.0.2

This will get you the references needed to begin working with the framework immediately. Once installed, you should see the following packages begin to download into your unit test project:

[Screenshot: the NuGet packages downloading into the unit test project]

When complete, the required assemblies are available, and you can begin working with the EasyRepro framework. Some settings are needed for the framework to connect to your Dynamics 365 organization, which may be unknown if you’re new to the framework. If so, I would suggest reviewing the next section, which starts with a clone of the EasyRepro repository; it happens to include a robust set of sample unit tests that show how to interact with the framework.

Cloning from GitHub

If you’re new to the framework, this is, in my opinion, the best way to begin familiarizing yourself with how it works and how to build a wide range of unit tests. This is also the way to go if you want to understand how EasyRepro is built upon the Selenium framework and how to extend it.

To begin, go to the official EasyRepro project located at https://github.com/Microsoft/EasyRepro. Once you’re there, take a moment to review the available branches. The branches are structured in a GitFlow approach, so if you want to work with the latest in-market release of Dynamics 365, review the releases/* branches. For the latest ongoing development, I would suggest the develop branch.

Start by cloning the project locally to review the contents and see how the interaction between the frameworks occurs.

The gif below shows cloning to Azure DevOps, but cloning locally directly from GitHub is also supported.

[Animation: cloning the GitHub repository into Azure DevOps]

Cloning locally from Azure DevOps

Another alternative, which I highly recommend, is to clone to an Azure DevOps project, which can then be cloned locally. This will allow us to automate with CI/CD, which we will cover in another article. If you decided to clone to Azure DevOps from GitHub, the next step is to clone locally.

The gif below shows cloning locally from an Azure DevOps repository.

[Animation: cloning the repository locally from Azure DevOps]

Reviewing the EasyRepro Source Code Projects

The EasyRepro source code includes a Visual Studio solution with three class library projects and one for sample unit tests.


The projects used by the Unified Interface are Microsoft.Dynamics365.UIAutomation.Api.UCI and Microsoft.Dynamics365.UIAutomation.Api.Browser. Most of the interaction between your unit tests and EasyRepro will happen through objects and commands within the Microsoft.Dynamics365.UIAutomation.Api.UCI project, which contains objects to interact with Dynamics Unified Interface modules and forms. The Microsoft.Dynamics365.UIAutomation.Api.Browser project is limited to interactions with the browser driver and other under-the-hood components.

Reviewing Sample Unit Tests

Looking into the Open Account Sample Unit Test

The unit test project Microsoft.Dynamics365.UIAutomation.Sample contains hundreds of unit tests, which can serve as a great learning tool to better understand how to work with the EasyRepro framework. I highly suggest exploring these tests when you begin to utilize the framework within your test strategy. Many general and specific tasks are essentially laid out and can be adapted to your needs. Examples include opening forms (OpenRecord), navigating (OpenSubArea), searching for records (Search), and creating and updating records (Save).

For this exercise we will open the UCITestOpenActiveAccount unit test, which you can find using Find within Visual Studio (Ctrl+F). Once found, you should see something like the following:

[Screenshot: the UCITestOpenActiveAccount unit test]

Following the steps within the unit test, you can see it's designed to perform the basic user actions needed to read an account. We start by logging into an organization (Login), then open the UCI application titled "Sales" (OpenApp). Once in the application, we open the Accounts sub area (OpenSubArea) and search for "Adventure" in the Quick Find view (Search). Finally, we open the first record (OpenRecord(0)) in the search results.
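In case the screenshot above is hard to read, here is a simplified sketch of that test based on the sample project. The configuration fields are read from app.config following the sample's conventions; treat this as illustrative rather than a verbatim copy:

using System;
using System.Configuration;
using System.Security;
using Microsoft.Dynamics365.UIAutomation.Api.UCI;
using Microsoft.Dynamics365.UIAutomation.Browser;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OpenAccountUci
{
    // Connection details read from app.config (see the settings section below)
    private readonly Uri _xrmUri = new Uri(ConfigurationManager.AppSettings["OnlineCrmUrl"]);
    private readonly SecureString _username = ConfigurationManager.AppSettings["OnlineUsername"].ToSecureString();
    private readonly SecureString _password = ConfigurationManager.AppSettings["OnlinePassword"].ToSecureString();

    [TestMethod]
    public void UCITestOpenActiveAccount()
    {
        var client = new WebClient(TestSettings.Options);
        using (var xrmApp = new XrmApp(client))
        {
            // Log into the organization (Login)
            xrmApp.OnlineLogin.Login(_xrmUri, _username, _password);

            // Open the UCI application titled "Sales" (OpenApp)
            xrmApp.Navigation.OpenApp(UCIAppName.Sales);

            // Open the Accounts sub area (OpenSubArea)
            xrmApp.Navigation.OpenSubArea("Sales", "Accounts");

            // Search for "Adventure" in the Quick Find view (Search)
            xrmApp.Grid.Search("Adventure");

            // Open the first record in the results (OpenRecord(0))
            xrmApp.Grid.OpenRecord(0);
        }
    }
}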

Exploring Test Settings

In the current sample unit test project, the test settings live in two places: the app.config file located in the root of the project, and the TestSettings.cs file, a class used across all of the tests.

Application Configuration File

The app.config file includes string settings that tell the tests which organization to log in to, who to log in as, and other under-the-hood options such as which browser to use and how to run the tests.

Application Configuration File Settings

OnlineUsername: String. The test user's username.
OnlinePassword: String. The test user's password.
OnlineCrmUrl: String. The organization URL (e.g., https://<your org>.crm.dynamics.com/main.aspx).
AzureKey: String. A GUID representing the Azure Application Insights instrumentation key.
BrowserType: String. An enum flag for Microsoft.Dynamics365.UIAutomation.Browser.BrowserType.
RemoteBrowserType: String. An enum flag for Microsoft.Dynamics365.UIAutomation.Browser.BrowserType; only used when BrowserType is Remote.
RemoteHubServer: String. The Selenium Server remote hub URL; only used when BrowserType is Remote.

For this article we will focus on simply running locally with the Google Chrome browser by setting BrowserType to "Chrome". We also need to modify three settings inside the app.config file: OnlineUsername, OnlinePassword, and OnlineCrmUrl. In my case I am using a trial organization; as you can see below, I am using a "user@tenant.onmicrosoft.com" username and a "https://<orgname>.crm.dynamics.com/main.aspx" URL.

Before:

[Screenshot: app.config before edits]

After:

[Screenshot: app.config after updating OnlineUsername, OnlinePassword, and OnlineCrmUrl]
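In text form, the edited appSettings entries end up looking roughly like the following. All values below are placeholders, not real credentials:

<appSettings>
  <!-- Placeholder credentials: replace with your own test user -->
  <add key="OnlineUsername" value="user@tenant.onmicrosoft.com" />
  <add key="OnlinePassword" value="your-password-here" />
  <!-- Replace orgname with your organization's name -->
  <add key="OnlineCrmUrl" value="https://orgname.crm.dynamics.com/main.aspx" />
  <!-- Run locally against Google Chrome -->
  <add key="BrowserType" value="Chrome" />
</appSettings>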

Test Settings and the BrowserOptions object

Another key object is the TestSettings class and the various properties inside it. This class tells the unit tests how to render the browser, where the browser driver is located, and other behaviors. The TestSettings class needs to be included in the unit test project and instantiates the BrowserOptions object, as shown below:

[Screenshot: the TestSettings class instantiating BrowserOptions]
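As a rough sketch, the class looks something like the following. The exact set of BrowserOptions properties can vary by EasyRepro version, so treat the options beyond BrowserType as illustrative:

public static class TestSettings
{
    // Read the browser type from app.config so it can change without recompiling
    private static readonly string Type = ConfigurationManager.AppSettings["BrowserType"];

    public static BrowserOptions Options = new BrowserOptions
    {
        BrowserType = (BrowserType)Enum.Parse(typeof(BrowserType), Type),
        PrivateMode = true,   // launch the browser in private/incognito mode
        FireEvents = false,   // whether to fire change events when setting values
        Headless = false,     // set to true to run without rendering a browser window
        UserAgent = false     // whether to override the browser's user agent string
    };
}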

In the next post we will explore how these settings can change your experience working with unit tests and what options are available.

Conclusion

From this article you should be able to begin using EasyRepro with your Dynamics 365 organization immediately. Future articles will cover designing and debugging unit tests, extending the EasyRepro code, implementing with Azure DevOps, and other topics. Let me know in the comments how your journey with EasyRepro is going or if you have any questions. Thanks!

Dynamics 365 Customer Engagement in the Field

Getting Ahead Of A Crisis: How To Improve Organizational Continuity, Resiliency, And Well-Being

October 15, 2019   SAP

Elias Moreira and Adam Brito

No matter how well prepared your business is, life is unpredictable – and every now and then, things will inevitably go wrong. It could be something as commonplace as a power outage or as major as a fire. Whatever the case, the important thing is being able to respond quickly, in the right way, in order to safeguard your business, your employees, and in some cases, your customers.

Unplanned disruptions can impact productivity and workflow. A power outage in a warehouse can cause problems all along the supply chain, while employees arriving at the office to find there's no Internet access will waste valuable time. In fact, 34% of organizations report costs of at least €1 million each year due to supply chain disruptions.

In more serious cases, these incidents can also threaten the morale and even personal safety of your staff and others. This might be harder to quantify, but the single most important aspect of protecting your organization is safeguarding the people within it.

Communicating clearly

So how do you prepare for the unpredictable? The most important thing is ensuring you have a clear line of communication.

Word can travel fast in the connected world, but it doesn't necessarily spread evenly, and what gets out isn't always accurate. 55% of organizations use three or more means to communicate during a crisis – typically including legacy methods such as call trees, which are manual, uncoordinated, and put the burden on employees.

A single centralised system allows you to sidestep these issues and communicate the right message to all individuals or groups at once. This means instead of having to rely on hearsay, they’ve got accurate information coming directly from their employer – a situation that’s better for both sides.

A digital solution for a digital world

There are two parts to a good crisis management tool. The first is a reliable source of information on potential risks affecting any location relevant to your business. This might mean a planned protest or rally that blocks the route to your office, or an incoming weather event that could put employees at risk.

These locations aren't necessarily limited to your own real estate. You should also take into account the home addresses of employees and their current locations, especially when they are traveling. These can all be mapped out to show which risks will affect whom – and this is where the second half comes in: communication.

You need a single platform that can communicate with any affected employees. If there’s an outage, you could send out a message to everyone based in the relevant office advising them to work from home that day. If it’s an emergency, you can check who’s in the area, and then reach out to make sure they’re okay or ask if they need any assistance.

When things go wrong, people need real-time, two-way channels – ultimately the same expectations they'd have in any other aspect of modern life. The only difference is that the stakes are higher, both for them getting help and for your business minimizing the financial, reputational, and personal losses.

To ensure you’re doing the right thing when disaster strikes, read “Resilience is Your Competitive Advantage” and explore advanced features of SAP People Connect 365. Join the SAP Digital Interconnect Community and learn how to better control, measure, and manage your response to disruptions.

This article originally appeared on The BCI.

Digitalist Magazine