Category Archives: Sisense

10 Techniques to Boost Your Data Modeling

With new possibilities for enterprises to easily access and analyze their data to improve performance, data modeling is morphing too. More than arbitrarily organizing data structures and relationships, data modeling must connect with end-user requirements and questions, as well as offer guidance to help ensure the right data is being used in the right way for the right results. The ten techniques described below will help you enhance your data modeling and its value to your business.

1. Understand the Business Requirements and Results Needed

The goal of data modeling is to help an organization function better. As a data modeler, collecting, organizing, and storing data for analysis, you can only achieve this goal by knowing what your enterprise needs. Correctly capturing those business requirements to know which data to prioritize, collect, store, transform, and make available to users is often the biggest data modeling challenge. So, we can’t say it enough: get a clear understanding of the requirements by asking people about the results they need from the data. Then start organizing your data with those ends in mind.

2. Visualize the Data to Be Modeled

Staring at countless rows and columns of alphanumeric entries is unlikely to bring enlightenment. Most people are far more comfortable looking at graphical representations of data that make it quick to see any anomalies or using intuitive drag-and-drop screen interfaces to rapidly inspect and join data tables. Data visualization approaches like these help you clean your data to make it complete, consistent, and free from error and redundancy. They also help you spot different data record types that correspond to the same real-life entity (“Customer ID” and “Client Ref.” for example), to then transform them to use common fields and formats, making it easier to combine different data sources.
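
As a rough sketch of how this kind of harmonization can look in practice, the query below maps two differently labeled identifier columns onto one common field so the sources can be combined; all table and column names (crm_customers, billing_clients, CustomerID, ClientRef) are hypothetical, and the syntax assumes SQL Server.

SELECT CustomerID AS CustomerKey, CustomerName, 'CRM' AS SourceSystem
FROM crm_customers
UNION ALL
SELECT ClientRef AS CustomerKey, ClientName AS CustomerName, 'Billing' AS SourceSystem
FROM billing_clients
-- Both sources now share CustomerKey, CustomerName, and SourceSystem,
-- so they can be stacked or joined against other datasets consistently.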

3. Start with Simple Data Modeling and Extend Afterwards

Data can become complex rapidly, due to factors like size, type, structure, growth rate, and query language. Keeping data models small and simple at the start makes it easier to correct any problems or wrong turns. When you are sure your initial models are accurate and meaningful you can bring in more datasets, eliminating any inconsistencies as you go. You should look for a tool that makes it easy to begin, yet can support very large data models afterward, also letting you quickly “mash-up” multiple data sources from different physical locations.

4. Break Business Enquiries Down into Facts, Dimensions, Filters, and Order

Understanding how business questions can be defined by these four elements will help you organize data in ways that make it easier to provide answers. For example, suppose your enterprise is a retail company with stores in different locations, and you want to know which stores have sold the most of a specific product over the last year. In this case, the facts would be the overall historical sales data (all sales of all products from all stores for each day over the past “N” years), the dimensions being considered are “product” and “store location”, the filter is “previous 12 months”, and order might be “top five stores in decreasing order of sales of the given product”. By organizing your data using individual tables for facts and for dimensions, you facilitate the analysis for finding the top sales performers per sales period, and for answering other business intelligence questions as well.
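
To make the mapping concrete, here is a sketch of how such a question might translate into a query over a simple fact-and-dimension layout; every table and column name (sales_fact, dim_product, dim_store, and so on) is hypothetical, and the syntax assumes SQL Server.

SELECT TOP 5
    st.StoreLocation,                              -- dimension: store location
    SUM(f.SalesAmount) AS TotalSales               -- fact: aggregated sales
FROM sales_fact f
INNER JOIN dim_product p ON f.ProductKey = p.ProductKey
INNER JOIN dim_store st ON f.StoreKey = st.StoreKey
WHERE p.ProductName = 'Example Product'            -- dimension value for the given product
  AND f.SaleDate >= DATEADD(month, -12, GETDATE()) -- filter: previous 12 months
GROUP BY st.StoreLocation
ORDER BY TotalSales DESC                           -- order: top stores by sales, descending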

5. Use Just the Data You Need, Rather Than All the Data Available

Computers working with huge datasets can soon run into problems of computer memory and input-output speed. However, in many cases, only small portions of the data are needed to answer business questions. Ideally, you should be able to simply check boxes on-screen to indicate which parts of datasets are to be used, letting you avoid data modeling waste and performance issues.

6. Make Calculations in Advance to Prevent End User Disagreements

A key goal of data modeling is to establish one version of the truth, against which users can ask their business questions. While people may have different opinions on how an answer should be used, there should be no disagreement on the underlying data or the calculation used to get to the answer. For example, a calculation might be required to aggregate daily sales data to derive monthly figures, which can then be compared to show best or worst months. Instead of leaving everyone to reach for their calculators or their spreadsheet applications (both common causes of user error), you can avoid problems by setting up this calculation in advance as part of your data modeling and making it available in the dashboard for end users.
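
One way to bake such a calculation into the model, rather than leaving it to each user, is to define it once as a view (or an equivalent model-level formula). The example below is only a sketch against the hypothetical sales_fact table used above, in SQL Server syntax.

CREATE VIEW MonthlySales AS
SELECT
    YEAR(SaleDate) AS SalesYear,
    MONTH(SaleDate) AS SalesMonth,
    SUM(SalesAmount) AS MonthlySalesTotal          -- one agreed-upon monthly figure
FROM sales_fact
GROUP BY YEAR(SaleDate), MONTH(SaleDate)
-- Every dashboard that reads from MonthlySales shows the same monthly numbers,
-- so best/worst month comparisons cannot drift between users.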

7. Verify Each Stage of Your Data Modeling Before Continuing

Each action should be checked before moving to the next step, starting with the data modeling priorities from the business requirements. For example, an attribute called the primary key must be chosen for a dataset, so that each record in the dataset can be identified uniquely by the value of the primary key in that record. Suppose you chose “ProductID” as the primary key for the historical sales dataset above. You can verify that this is satisfactory by comparing a total row count for “ProductID” in the dataset with a total distinct (no duplicates) row count. If the two counts match, “ProductID” can be used to uniquely identify each record; if not, look for another primary key. The same technique can be applied to a join of two datasets to check that the relationship between them is either one-to-one or one-to-many and to avoid many-to-many relationships that lead to overly complex or unmanageable data models.
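
A minimal sketch of that uniqueness check, again against the hypothetical historical sales table: if the two counts below match, the candidate key is unique.

SELECT
    COUNT(ProductID) AS TotalRows,                 -- total (non-null) row count for ProductID
    COUNT(DISTINCT ProductID) AS DistinctRows      -- row count with duplicates removed
FROM sales_fact
-- If TotalRows equals DistinctRows, ProductID uniquely identifies each record;
-- if not, look for another candidate primary key.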

8. Look for Causation, Not Just Correlation

Data modeling includes guidance in the way the modeled data is used. While empowering end users to access business intelligence for themselves is a big step forwards, it is also important that they avoid jumping to wrong conclusions. For example, perhaps they see that sales of two different products appear to rise and fall together. Are sales of one product driving sales of the other one (a cause and effect relationship), or do they just happen to rise and fall together (simple correlation) because of another factor such as the economy or the weather? Confusing causation and correlation here could lead to targeting wrong or non-existent opportunities, and thus wasting business resources.

9. Use Smart Tools to Do the Heavy Lifting

More complex data modeling may require coding or other actions to process data before analysis begins. However, if such “heavy lifting” can be done for you by a software application, this frees you from the need to learn about different programming languages and lets you spend time on other activities of value to your enterprise. A suitable software product can facilitate or automate all the different stages of data ETL (extracting, transforming, and loading). Data can be accessed visually without any coding required, different data sources can be brought together using a simple drag-and-drop interface, and data modeling can even be done automatically based on the query type.

10. Make Your Data Models Evolve

Data models in business are never carved in stone because data sources and business priorities change continually. Therefore, you must plan on updating or changing them over time. For this, store your data models in a repository that makes them easy to access for expansion and modification, and use a data dictionary or “ready reference” with clear, up-to-date information about the purpose and format of each type of data.

Better Data Modeling Leads to Greater Business Benefit

Business performance in terms of profitability, productivity, efficiency, customer satisfaction, and more can benefit from data modeling that helps users quickly and easily get answers to their business questions. Key success factors for this include linking to organizational needs and objectives, using tools to speed up the steps in readying data for answers to all queries, and making priorities of simplicity and common sense. Once these conditions are met, you and your business, whether small, medium, or big, can expect your data modeling to bring you significant business value.

Ways to Introduce Data Visualization to Your Whole Company

What occupational fields do you think of when you hear the words “data visualization” and “analytics”? Finance? Marketing? IT? Sales?

What about Distribution? HR? Operations? Fundraising? Education?

One spring when I was in elementary school, my teacher passed out sheets of colored construction paper and instructed us to cut them into strips. We then stapled the strips of paper together to make several chains that were each five links long, one chain that was only four links long, and two chains with three links. She then hung them in a row along the back wall corkboard and explained that each link represented a school day and each of the chains represented a school week. The four-link chain was for the week we had Monday off for Memorial Day and the two three-link chains represented our short weeks for teacher in-service and make up snow days. Each school day, one of us would get to rip off a link and watch as, one by one, the days and weeks tore away to summer vacation.

That, my friends, is data visualization: expressing information through imagery. My teacher took a predetermined set of data (days and weeks left in the school year) and presented it in a visual way that helped us better comprehend the amount of time until summer break. Sure, we could count the links if we wanted to know exactly how many days were left in the school year, but one glance at the corkboard gave us enough information to get the big picture.

Finance, IT, and marketing have done a great job at utilizing data visualization, but they don’t own the technique. If a teacher can use data visualization with a group of elementary kids, so can you. Let’s look at some less-than-conventional ways you can use data visualization throughout your company.

Production and Distribution

You’re making and moving products, what do you need pictures for, right? Well, for one, mapping out production and distribution processes helps identify pain points. Are you behind on orders? Don’t know where the holdup is? Physically mapping out your process can reveal possible bottlenecks and opportunities for improved efficiency. Take a good look at:

  • Workload: Bottlenecks can signify uneven workloads. Does anyone have too much on their plate? Are valuable resources being underutilized?
  • Timing: Timing is everything, or so the saying goes, and time is one of our most valuable resources. Are you using your time wisely? Are all of the steps in sync throughout the process? Is there a way to capitalize on unavoidable lag time?
  • Physical layout of machinery and supplies: It’s about more than feng shui. Visualizing movement within the factory or warehouse highlights the most frequently used routes and resources. Are there physical obstacles hindering process flow or highly trafficked areas? Can the most frequently used and requested items be made more accessible?
  • Distribution routes: Wandering and overlapping routes waste time and fuel. Map out and assign deliveries by location to provide drivers with the most efficient routes. Update your maps regularly to help your drivers steer clear of road closures and construction zones.

Human Resources

Project management and recruiting tools allow HR to collect so much information it can be easy to lose the forest for the trees. Don’t get lost in the minutiae. Human resources data visualization can help you to step back to see the bigger picture. Identify trends in employee satisfaction, engagement, and career progression. New employee drop off or confusion may signal a snag in the onboarding process. Training and certification data can uncover potential issues for succession planning or coverage. Organizational charts can expose over-burdened branches that might benefit from pruning or reorganization. Watch industry trends to ensure your company attracts and keeps quality employees with competitive salaries and benefits.

Sales and Marketing

Okay, okay, I know I said marketing is already accustomed to data visualization, but I also said we were going to look at unconventional methods. Sales and marketing teams have been using charts and graphs to measure progress and success basically since the dawn of sales and marketing. Pie charts show who’s got the biggest market share, line graphs follow sales trends, bar graphs help us compare year over year, and many sales and marketing tools like CRMs and marketing automation software provide simple data visualizations.

What if, instead of using data visualization to analyze what has been done, we used it to flesh out what can be done? I’m not just talking about forecasting trends. Scatter maps are great for seeing geographical location and impact, or the lack thereof. Everyone wants to focus on analyzing the data represented. Don’t forget to examine the white space. Are there missed sales opportunities outside of or between spheres? Can you use this information to literally add in new sales while moving between markets? Scatter graph spheres also make excellent Venn diagrams, showing overlapping areas where marketers may be overspending or inundating their customers with messaging. The list goes on.

The truth is, data is information, and every field works with some sort of information. You don’t have to be an accountant or scientist to work with data and you certainly don’t have to be a pivot-table whiz to benefit from data utilization. It’s all around. That thermometer you colored in as your club raised money for the big trip? Data visualization. The fuel gauge on your car’s dashboard that’s always a little too close to “E”? Data visualization. Pro-Con list? March Madness Bracket? Paper chain-link countdown? I think you can see where I’m going, but the real question is: can you see where you’re going?

About the Author

Melissa Reinke is a writer for TechnologyAdvice.com. She is a storyteller, editor, writer, and all-around word nerd extraordinaire. She spends her days managing web content and her nights unwinding in myriad creative ways, including writing for herself and others. From personal memoirs to professional solutions, when writing and editing for others Melissa’s singular goal is to sculpt each piece into its best, most successful form while maintaining the integrity of the original voice and vision. Based in Music City, USA, Melissa can often be found enjoying great live tunes with even better friends. Then again, she’s just as likely to be found curled up with a good book and a tasty beverage.

Data Management Rules for Analytics

With analytics taking a central role in most companies’ daily operations, managing the massive data streams organizations create is more important than ever. Effective business intelligence is the product of data that is scrubbed, properly stored, and easy to find. When your organization uses raw data without proper management procedures, your results suffer.

The first step towards creating better data for analytics starts with managing data the right way. Establishing clear protocols and following them can help streamline the analytics process, offer better insights, and simplify the process of handling data. You can start by implementing these five rules to manage your data more efficiently.

1. Establish Clear Analytics Goals Before Getting Started

As the amount of data produced by organizations daily grows exponentially, sorting through terabytes of information can become problematic and reduce the efficiency of analytics. Such large data sets require significantly longer times to scrub and properly organize. For companies that deal with multiple streams that exhibit heavy bandwidth, having a clear line of sight towards business and analytics goals can help reduce inflows and prioritize relevant data.

It’s important to establish clear objectives for data and create parameters that filter out data points that are irrelevant or unclear. This facilitates pre-screening datasets and makes scrubbing and sorting easier by reducing white noise. Additionally, you can focus even more on measuring specific KPIs to further filter out the right data from the stream.

2. Simplify and Centralize Your Data Streams

Another problem analytics suites face is reconciling disparate data from multiple streams. Organizations have internal, third-party, customer, and other data that must be considered as part of a larger whole instead of viewed in isolation. Leaving data as-is can be damaging to insights, as different sources may use unique formats or different styles.

Before allowing multiple streams to connect to your data analytics software, your first step should be establishing a process to collect data more centrally and unify it. This centralization not only makes it easier to feed data seamlessly into analytics tools, but also simplifies how users find and manipulate data. Consider how best to set up your data streams to reduce the number of sources and eventually produce more unified sets.

3. Scrub Your Data Before Warehousing

The endless stream of data raises questions about quality and quantity. While having more information is preferable, data loses its usefulness when it’s surrounded by noise and irrelevant points. Unscrubbed data sets make it harder to uncover insights, properly manage databases, and access information later.

Before worrying about data warehousing and access, consider the processes in place to scrub data to produce clean sets. Create phases that ensure data relevance is considered while effectively filtering out data that is not pertinent. Additionally, make sure the process is as automated as possible to reduce wasted resources. Implementing functions such as data classification and pre-sorting can help expedite the cleaning process.
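
As a minimal illustration of one such scrubbing step, the query below drops rows with missing keys and keeps only the most recent copy of each record before the data is loaded into the warehouse; the table and column names (raw_events, EventKey, LoadedAt) are hypothetical.

WITH ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY EventKey ORDER BY LoadedAt DESC) AS rn
    FROM raw_events
    WHERE EventKey IS NOT NULL                     -- filter out records with missing keys
)
SELECT *
FROM ranked
WHERE rn = 1                                       -- deduplicate: keep the latest row per key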

4. Establish Clear Data Governance Protocols

One of the biggest emerging issues facing data management is data governance. Because of the sensitive nature of many sources—consumer information, sensitive financial details, and so on—concerns about who has access to information are becoming a central topic in data management. Moreover, allowing free access to datasets and storage can lead to manipulation, mistakes, and deletions that could prove damaging.

It’s vital to establish clear and explicit rules about who can access data, when, and how. Creating tiered permission systems (read, read/write, admin) can help limit the exposure to mistakes and danger. Additionally, sorting data in ways that facilitate access to different groups can help manage data access better without the need to give free rein to all team members.
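
As a rough SQL Server sketch of such a tiered scheme (the schema and role names here are hypothetical), read, read/write, and admin access might be granted like this:

GRANT SELECT ON SCHEMA::Sales TO analytics_readers;             -- read-only tier
GRANT SELECT, INSERT, UPDATE ON SCHEMA::Sales TO data_stewards; -- read/write tier
GRANT CONTROL ON SCHEMA::Sales TO data_admins;                  -- admin tier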

5. Create Dynamic Data Structures

Many times, storing data is reduced to a single database that limits how you can manipulate it. Static data structures are effective for holding data, but they are restrictive when it comes to analyzing and processing it. Instead, data managers should place a greater emphasis on creating structures that encourage deeper analysis.

Dynamic data structures present a way to store real-time data that allows users to connect points better. Using three-dimensional databases, finding methods to reshape data rapidly, and creating more inter-connected data silos can help contribute to more agile business intelligence. Generate databases and structures that simplify accessing and interacting with data rather than isolating it.

The fields of data management and analytics are constantly evolving. For analytics teams, it’s vital to create infrastructures that are future-proofed and offer the best possible insights for users. By establishing best practices and following them as closely as possible, organizations can significantly enhance the quality of the insights their data produces.

Why You Should Already Have a Data Governance Strategy

Garbage in, garbage out. This motto has been true ever since punched cards and teletype terminals. Today’s sophisticated IT systems depend just as much on good quality data to bring value to their users, whether in accounting, production, or business intelligence. However, data doesn’t automatically format itself properly, any more than it proactively tells you where it’s hiding or how it should be used. No, data just is. If you want your business data to satisfy criteria of availability, usability, integrity, and security, you need a data governance strategy.

Data governance in general is an overarching strategy for organizations to ensure the data they use is clean, accurate, usable, and secure. Data stakeholders from business units, the compliance department, and IT are best positioned to lead data governance, although the matter is important enough to warrant CEO attention too. Some organizations go as far as appointing a Data Governance Officer to take overall charge. The high-level goal is to have consistent, reliable data sets to evaluate enterprise performance and make management decisions.

Ad-hoc approaches are likely to come back to haunt you. Data governance has to become systematic, as big data multiplies in type and volume, and users seek to answer more complex business questions. Typically, that means setting up standards and processes for acquiring and handling data, as well as procedures to make sure those processes are being followed. If you’re wondering whether it’s all worth it, the following five reasons may convince you.

Reason 1: Ensure data availability

Even business intelligence (BI) systems won’t look very smart if users cannot find the data needed to power them. In particular, self-service BI means that the data must be easy enough to locate and to use. After years of hearing about the sinfulness of organizational silos, it should be clear that even if individual departments “own” data, the governance of that data must be done in the same way across the organization. Authorization to use the data may be restricted, as in the case of sensitive customer data, but users should not be left unaware of its existence when it could help them in their work.

Availability is also a matter of having appropriate data that is easy enough to use. With a trend nowadays to store unstructured data from different sources in non-relational databases or data lakes, it can be difficult to know what kind of data is being acquired and how to process it. Data governance is therefore a matter of first setting up data capture to acquire what your enterprise and its different departments need, rather than everything under the sun. Governance then also ensures that data schemas are applied to organize data when it is stored, or that tools are available for users to process data, for example to run business analytics from non-relational (NoSQL) databases.

Reason 2: Ensure users are working with consistent data

When the CFO and the COO work from different sets of data and reach different conclusions about the same subjects, things are going to be difficult. The same is true at all other levels in an enterprise. Users must have access to consistent, reliable data, so that comparisons make sense and conclusions can be checked. This is already a good reason for making sure that data governance is driven across the organization, by a team of executives, managers, and data stewards with the knowledge and authority to make sure the same rules are followed by all.

Global data governance initiatives may also grow out of attempts to improve data quality at departmental levels, where individual systems and databases were not planned for information sharing. The data governance team must deal with such situations, for instance, by harmonizing departmental information resources. Increased consistency in data means fewer arguments at executive level, less doubt about the validity of data being analyzed, and higher confidence in decision making.

Reason 3: Determining which data to keep and which to delete

The risks of data hoarding are the same as those of physical hoarding. IT servers and storage units full of useless junk make it hard to locate any data of value or to do anything useful with it afterwards. Users use stale or irrelevant data as the basis for important business decisions, IT department expenses mushroom, and vulnerability to data breaches increases. The problem is unfortunately common. 33% of the data stored by organizations is simply ROT (redundant, obsolete, or trivial), according to the Veritas Data Genomics Index 2017 survey.

Yet things don’t have to be that way. Most data does not have to be kept for decades, “just in case.” As an example, retailing leader Walmart uses only the last four weeks’ transactional data for its daily merchandising analytics. It is part of good data governance strategy to carefully consider which data is important to the organization and which should be destroyed. Data governance also includes procedures for employees to make sure data is not unnecessarily duplicated, as well as policies for systematic data retirement (for instance, for archiving or destruction) according to age or other pertinent criteria.

Reason 4: Resolve analysis and reporting issues

An important dimension in data governance is the consistency across an organization of its metrics, as well as the data driving them. Without clearly recorded standards for metrics, people may use the same word, yet mean different things. Business analytics are a case in point, when analytics tools vary from one department to another. Self-service analytics or business intelligence can be a boon to an enterprise, but only if people interpret metrics and reports in a consistent way.

When reports lack clarification, the temptation is often to blame technology. The root cause, however, is often the mis-configuration of the tools and systems involved. It may even be in their faulty application, as in the case of reporting tools being wrongly applied to production databases, triggering problems in performance that mean that neither transactions nor analytics are satisfactorily accomplished. Ripping out and replacing fundamentally sound systems is not the solution. Instead, improved data governance brings more benefit, faster, and for far less cost.

Reason 5: Security and compliance with laws concerning data governance

Consequences for non-compliance with data regulations can be enormous, especially where private individuals’ information is concerned. A case in point: the European General Data Protection Regulation (GDPR), in force from May 2018, sets non-compliance fines of up to some $22 million or four percent of the offender’s worldwide turnover, whichever is higher, for data misuse or breach affecting European citizens.

Effective data governance helps an organization avoid such issues by defining how its data is to be acquired, stored, backed up, and secured against accidents, theft, or misuse. These definitions also include provision for audits and controls to ensure that the procedures are followed. Realistically, organizations will also conduct suitable awareness campaigns to make sure that all employees working with confidential company, customer, or partner data understand the importance of data governance and its rules. Education and awareness campaigns will become increasingly important as user access to self-service solutions increases, as will the levels of data security already inherent in those solutions.

Conclusion

If you think about data as a strategic asset, the idea of governance becomes natural. Company finances must be kept in order with the necessary oversight and audits, workplace safety must be guaranteed and respect the relevant regulations, so why should data – often a key differentiator and a confidential commodity – be any different? As IT self-service and end-user empowerment grow, the importance of good data governance increases too. Business user autonomy in spotting trends and taking decisions can help an enterprise become more responsive and competitive, but not if it is founded on data anarchy.

Effective data governance is also a continuing process. Policy definition, review, adaptation, and audit, together with compliance reviews and quality control, are all regularly effected or repeated as a data governance life cycle. As such, data governance is never finished, because new sources, uses, and regulations about data are never finished either. For contexts such as business intelligence, especially in a self-service environment, good data governance helps users to use the right data in the right way, to generate business insights correctly and take sound business decisions.

[Infographic] Everything You Ever Wanted to Know About Donut Charts

The donut chart. Often seen as just another version of a pie chart, the donut chart doesn’t often get much praise and recognition. And that’s fair. There are only a few very specific scenarios in which using a donut chart is the best way to visualize data. And, by the way, I recognize many people would argue there’s never a good time to use a donut chart.

However, today is National Donut Day, and if there was ever a day to let the donut chart take center stage this has got to be it.

So, Happy National Donut Day! Here’s everything you ever wanted to know about donut charts.

And for the record, I’d take a donut over a pie any day of the week.

[Infographic: Everything You Ever Wanted to Know About Donut Charts]

How Healthcare Data Visualizations Can Transform Your Entire Practice

Medical practices are made up of more than just doctors. Operating a practice requires a variety of skills and knowledge. When a patient needs to see a doctor, they don’t simply show up at their door. Instead, their visit goes through an initial scheduling, a call, a visit, and a post-visit record. This process isn’t always effective, but it can be.

The question is how to find areas for improvement in your medical practice. One easy solution is to start utilizing health data visualization. These visual analytics can help you understand how each area of your practice can be improved, how it performs over time, and what areas are causing your efficiency to suffer. More importantly, you can implement data visualizations across your entire practice to gain a better grasp of every step of your patients’ visit.

Before the Visit – Scheduling

A patient’s visit begins well before they set foot in a medical practice. When there are multiple doctors and staff working, one of the biggest challenges involves scheduling. For medical professionals, scheduling takes on an added dimension because of specialization and specific patients’ needs. This can create bottlenecks when not enough doctors or other personnel are available at specific times, or the wrong doctors are slated to work.

Including staff management analytics in your healthcare data visualizations can deliver a variety of improvements. For one, it shows which doctors can see more patients and which may require more time per patient. Staff management visualizations can also help improve scheduling by showing which doctors are best suited to which patients. Finally, analysis can help reduce bottlenecks by demonstrating the best combinations and practices for both scheduling patients and setting doctors’ time slots.


Collecting Patient Data – The Appointment

Once their appointment has been scheduled, patients finally arrive at the practice and immediately start providing data. This includes demographic information such as age, gender, weight, as well as basic medical details. When they finally see a doctor, the data continues to flow at an even greater rate. Handwritten notes and Excel are useful for recording information, but often can’t provide the specific insights more advanced healthcare analytics can deliver.

Visualizing patient data can give you a clearer picture of their health profile and show you better courses of action. Implementing a risk dashboard can advance patient management by plotting out improvements or changes in health and how risk factors evolve over time. Additionally, using a baseline visualization can help you compare patients by specific factors—age, weight, pre-existing conditions, and more—to their ideal health profile.

Finishing the Visit – Payment

Paying for medical treatments or even visits can be a complex process. Dealing with insurers, completing claims, and working directly with patients involves several layers of decisions and intensive back-and-forth conversations. For many medical practices, payments can become problematic when there is no straightforward way to track them. Moreover, medical practices need a better way to inform their patients of their expected costs and processes.

Using insurance claims visualizations at this stage offers two major benefits. The first is helping medical practices be better informed about the rate of claims, which patients may have problems making payments, and which are likely to be delayed. The second is closely tied to the first and helps medical practices work more closely with their patients to find treatment options and plans that echo their circumstances.


Post-Visit – Referrals and Follow-Up

A patient’s healthcare doesn’t end when they walk out of a medical practice’s door. A treatment cycle usually involves some follow-up—how treatments are working, how a patient is feeling—and in some cases referral for further treatment. Follow-ups can easily slip through, and patients can be left on an island. One of the best dashboard examples is the use of referral dashboards to track patient follow-ups and how they were referred.

In an industry where referrals are increasingly important for bottom lines and growth, being able to visualize how effective your practice has been is crucial for success. Applying visualizations can help you better track KPIs such as the number of referrals per MD, the number of referrals over time, and even the number of referrals per procedure category. Combining this with follow-ups can help practices find novel ways to refer patients for better outcomes.

Visualizations don’t have to focus on a whole practice at once. By creating dashboards and visualizations for each stage of a patient’s medical visit, medical professionals can improve their services, deliver improved patient outcomes, and remain profitable. Using visualizations can also produce better insights about each patient and help create a patient-centric approach that deprioritizes profits without hurting your bottom line.

SQL Cheat Sheet: Retrieving Column Description in SQL Server

In SQL Server, details regarding a specific table column – e.g. column name, column id, column data type, column constraints – can be retrieved by joining system tables such as sys.tables, sys.columns, sys.types.

Query 1: Fetching tables and object_id

About sys.tables

sys.tables is a system table and is used for maintaining information on tables in a database. For every table added to the database, a record is created in the sys.tables table. There is only one record for each table and it contains information such as table name, object id of table, created date, modified date, etc. Object ID is unique and we will use it to join this table with other system tables (sys.columns) in order to fetch column details.

The following query can be used to fetch the object_id field of all the tables in a database:

Select name AS TableName, object_id AS ObjectID
From sys.tables
-- where name = '<TABLENAME>'
-- Uncomment above line and add <Table Name> to fetch details for particular table

The result will look something like this:

TableName                  ObjectID
InstrumentAudit            375724441
InstrumentMerge            379864420
InstrumentMerge_MinDates   395864477

Query 2: Fetching Columns

The next step is to fetch columns for these tables. This can be done by joining sys.tables with sys.columns based on object_id for the database tables.

About sys.columns

sys.columns is a system table and is used for maintaining information on columns in a database. For every column added in database, a record is created in the sys.columns table. There is only one record for each column and it contains the following information:

  • Name: The name of the column. This is unique within the table object.
  • Object_id: The unique identifier of the table in which the column exists. We will use this column to join sys.columns with sys.tables in order to fetch the columns of the different tables.
  • Column_id: ID of the column. This is unique within the table object.
  • user_type_id: System code for the column’s data type.
  • max_length: Maximum length (in bytes) of the column.
  • is_nullable: 1 if the column is nullable.

There are additional columns in this table but for our purposes, the above stated columns will suffice.

The following query can be used to fetch column details (for their corresponding tables) and their data type id by joining sys.columns and sys.tables according to the object_id column:

SELECT TAB.name AS TableName, TAB.object_id AS ObjectID, COL.name AS ColumnName, COL.user_type_id AS DataTypeID
From sys.columns COL
INNER JOIN sys.tables TAB
On COL.object_id = TAB.object_id
-- where TAB.name = '<TABLENAME>'
-- Uncomment above line and add <Table Name> to fetch details for particular table
-- where COL.name = '<COLUMNNAME>'
-- Uncomment above line and add <Column Name> to fetch details for particular column names

The result should look like this:

TableName                  ObjectID    ColumnName        DataTypeID
InstrumentAudit            375724441   AuditDate         61
InstrumentAudit            375724441   AuditID           56
InstrumentAudit            375724441   ValidFrom         61
InstrumentAudit            375724441   ValidTo           61
InstrumentAudit            375724441   ID                56
InstrumentAudit            375724441   Name              167
InstrumentAudit            375724441   Type              56
InstrumentAudit            375724441   Description       167
InstrumentAudit            375724441   CreateDate        61
InstrumentAudit            375724441   InstrumentOrigin  56
InstrumentAudit            375724441   AuditType         175
InstrumentMerge            379864420   ReutersID         56
InstrumentMerge            379864420   PairBBID          56
InstrumentMerge_MinDates   395864477   BBID              56
InstrumentMerge_MinDates   395864477   MinBBDate         61
InstrumentMerge_MinDates   395864477   ReutersID         56

Query 3: Add Data Type Name

The next step is to replace Data Type ID with Data Type Name. This can be done by joining the above query with the sys.types table.

About sys.types

sys.types is a system table and is used for maintaining information on column data types in a database. This table contains one row for each data type and includes the following information:

  • Name: The name of the data type.
  • user_type_id: System code for the data type. This is unique within this table and is used for joining with the sys.columns table.
  • max_length: Maximum length (in bytes) of the data type.

As before, there are more columns in this table but for our purposes the above stated columns will suffice.

The following query can be used to replace Data Type ID with Data Type Name by joining sys.types with sys.columns on user_type_id:

SELECT TAB.name AS TableName, TAB.object_id AS ObjectID, COL.name AS ColumnName, TYP.name AS DataTypeName, TYP.max_length AS MaxLength
From sys.columns COL
INNER JOIN sys.tables TAB
On COL.object_id = TAB.object_id
INNER JOIN sys.types TYP
ON TYP.user_type_id = COL.user_type_id
-- where TAB.name = '<TABLENAME>'
-- Uncomment above line and add <Table Name> to fetch details for particular table
-- where COL.name = '<COLUMNNAME>'
-- Uncomment above line and add <Column Name> to fetch details for particular column names
-- where TYP.name = '<DATATYPENAME>'
-- Uncomment above line and add <Data Type Name> to fetch details for particular Data Type

The result will be the following table, containing the column descriptions we were looking for:

TableName                  ObjectID    ColumnName        DataTypeName   MaxLength
InstrumentAudit            375724441   AuditDate         datetime       8
InstrumentAudit            375724441   AuditID           Int            4
InstrumentAudit            375724441   ValidFrom         datetime       8
InstrumentAudit            375724441   ValidTo           datetime       8
InstrumentAudit            375724441   ID                Int            4
InstrumentAudit            375724441   Name              varchar        8000
InstrumentAudit            375724441   Type              Int            4
InstrumentAudit            375724441   Description       varchar        8000
InstrumentAudit            375724441   CreateDate        datetime       8
InstrumentAudit            375724441   InstrumentOrigin  int            4
InstrumentAudit            375724441   AuditType         char           8000
InstrumentMerge            379864420   ReutersID         int            4
InstrumentMerge            379864420   PairBBID          int            4
InstrumentMerge_MinDates   395864477   BBID              int            4
InstrumentMerge_MinDates   395864477   MinBBDate         datetime       8
InstrumentMerge_MinDates   395864477   ReutersID         int            4

Gain Deeper Insight Into SQL Server Data

Looking for an SQL reporting tool that enables you to easily perform cross-database joins using data from SQL Server, unstructured data sources, flat files and more? Start your free trial, connect to your SQL server database in just a few clicks, and see powerful business intelligence software in action.

Building a Dashboard? 4 Keys to Find The Most Telling Metrics

Since 1770, when Britain’s James Hargreaves patented his spinning jenny that allowed a single spinner to run eight spindles and produce eight times as much raw thread and yarn as before – cutting both time to market and labor expense involved with producing textiles in Blackburn, Lancashire – doing more with less has been the driving force behind growing a business.

This productivity remains an elemental economic force – with a decisive effect on profit.

In our modern economy, software applications measure linear-feet equivalents of today’s “thread and yarn.” Such raw, furnished data, unlike cotton or wool fibers, begs translation, comparison, and analysis. Consequently, every team lead needs an agent by which to see, interpret and act on that data.

The Dashboard – 3 Types for Business

And that’s what dashboards – imperative to business intelligence software – do. Of course, slicing and dicing that data creates a need for tailored dashboards, of which three types are recognized:

  • Strategic – aggregates critical, overarching metrics, presenting a 10,000-foot view of a business.
  • Analytic – gathers and compares particular metrics across time and many variables, drilling down to actionable data per team.
  • Operational – monitors data in real time, alerting a team to any issues that need to be addressed.

Regardless of a dashboard’s purpose, it should reflect a company’s particular needs and culture, displaying Key Performance Indicators (KPIs) based on a firm’s high-level (and/or low-level) objectives. These KPIs will stand as quantifiable measurements of each goal; metrics, by any other definition.

That’s important, because according to Sruthi Varanasi of ReportGarden, “A metric is a quantifiable measure that is used to track and assess the status of a specific business process.”

Metrics – Lifeblood for a business; Heart of a Dashboard

Suffice to say, metrics are the truest barometer of how your online business is doing.

Consider this: a 15 percent increase in conversions is just that, a successful trend. Metrics thus serve as buoys that can keep your business sailing in deep water or warn you when shoals are near. A 21 percent dip in visibility over a month is just that, a falling trend, indicating you might need to revisit strategy and adjust – on the fly.

It stands to reason, then, that constructing a clean, uncluttered, incisive dashboard that represents key business intelligence metrics is equal parts science and art.

You want a dashboard whose widgets illustrate – at a glance – essential data from which sound business decisions can be made – whether those decisions concern the content of a webpage or the features of an actual product.

Dashboard enABLEd! How to Determine Which Metrics to Track

So, how do you decide which specific metrics should populate that dashboard from which you will extract actionable data? How do you identify those KPIs for each business goal? Following these four steps will enABLE you (apology for the acronym within the acronym) to populate your dashboard with meaningful data:

  1. Apply S.M.A.R.T. methodology.
  2. Bring the selection to the team.
  3. Limit KPI assignment to three primary, overriding goals.
  4. Eliminate the urge to add more metrics to the dashboard.

1. Apply S.M.A.R.T. to each KPI

For a basic example, if an overriding goal is to increase monthly recurring revenue (MRR), the questions to ask – and answer (more than yes/no) – to assess the validity of a KPI begin with:

  • Is a metric Specific to a goal? What needs to be accomplished and why?
    We want to increase MRR to increase margins and subsidize a new product launch next year.
  • Is it Measurable? What kind of historical change has been evident? How will we know the goal was reached?
    According to historical analytics, we can feel confident that an MRR increase of between 3 and 5 percent would be achievable.
  • Is it Attainable? Are the resources readily available to achieve success? Is the goal reasonable? Is it likely to bring success?
    We can ramp up social media promotion, launch a campaign, or otherwise put effort behind ramping up sales to drive revenue.
  • Is it Relevant? How meaningful and worthwhile is the goal? In the current situation can we commit to its achievement?
    Our competition has lost revenue, so more of the market is available to us. The more revenue generated, the more reward for us.
  • Is it Timely? Is the goal ahead of the curve, or behind? What’s the deadline for achieving it? What’s the overall timeline set for adopting the goal?
    After strategic planning, we can achieve a substantive bump in MRR over the subsequent quarter.

So, your team devises this KPI: Increase MRR by 3% during Q2. What metric goes on the Dashboard? A monthly monitor of incoming revenue.
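
Under the hood, that widget could be driven by a monthly revenue aggregation along the lines of the sketch below; the subscription_charges table and its columns are hypothetical.

SELECT
    YEAR(ChargeDate) AS RevenueYear,
    MONTH(ChargeDate) AS RevenueMonth,
    SUM(MonthlyAmount) AS MRR                      -- monthly recurring revenue per calendar month
FROM subscription_charges
GROUP BY YEAR(ChargeDate), MONTH(ChargeDate)
ORDER BY RevenueYear, RevenueMonth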

2. Bring KPI selection process to the team

Gain consensus on those metrics paramount to the team’s and the company’s success. Asking for a collective viewpoint not only helps distill the essence of paramount KPIs but also builds morale. Each team member gets some skin in the game.

3. Limit KPI assignment to no more than three primary goals

Segment’s Analytics Academy declares that each solitary metric populating your dashboard should focus attention on a specific business process (goal!) that needs to be optimized. Using the sample KPI above, it could be one of three under an overarching goal to drive an increase in MRR.

4. Eliminate unnecessary metrics

The rule of thumb is to have no more than seven metrics displayed on any single dashboard because, after all, it functions as a quick-glance representation of a goal’s status. Thus, its design should advance easy comprehension, simple updating, and clean navigation without secondary data distractions.

Your team should make hard decisions on which metrics to include. Consider: secondary data gets in the way, complicating interpretation and overwhelming the reviewer. Fewer metrics are better metrics.

Each time you visit the dashboard, you should remember that KPIs keep your business strategy agile, fleet, responsive. Positive data dictates stability and steadiness. Negative data compels your team to adjust, adapt and provide alternatives.

An effective dashboard illustrates this crucial data and discloses a course of action to take.

Metrics on Dashboard: What Are My Choices?

Once you’ve followed the ABLE steps to determine your KPIs, you’re ready to populate your dashboard. At this point, you may ask, “What are metrics that achieve near-universal adoption by businesses?”

That depends on the purpose behind your team, the audience (your team? An executive?) that will be reviewing the dashboard, the “actionability” of the selected KPIs, and the type of visuals preferred.

Metrics for a marketing team might include tracking web traffic sources, incremental sales, social sentiment, conversion rate, and SEO keyword ranking. A sales team might want to monitor sales growth, product performance, average purchase value, and average profit margin.

A financial team can follow working capital, debt-to-equity ratio, and current ratio. An e-commerce team might monitor customer lifetime value (CLV), customer retention rate, churn rate, and monthly recurring revenue.

Other salient KPIs can address net profit, revenue growth rate, project schedule variance (PSV), and average revenue per customer. Because your KPI choices are ultimately subjective, the A.B.L.E. methodology can help your team judiciously arrive at which data would be most constructive to track and display.

Vital Metrics on (Dash)Board: The Skinny

As long as any KPI on your dashboard is based on company goals, is relevant to the team behind achieving that goal, is attainable, measurable and remains timely, the dashboard itself should render keen data from which you can take incisive action to engineer successes — as well as avert disasters.

Taking the time to apply the SMART methodology, bring in the team, limit primary goals and the number of KPIs assigned to each, and eliminate the urge to overpopulate a dashboard with secondary data will help you get the most meaningful metrics for your business onto your dashboard.

Perform these steps. Pick your metrics. Build your dashboard. Mine your data.

Grow your business.

About the Author

Keith Craig is Content Marketing Manager for Betterbuys. He has more than a decade of experience using, researching and writing about business software and hardware. He can be found on Twitter and LinkedIn.

Dashboard Design Best Practices – 4 Key Principles

Building an effective dashboard according to best practices for dashboard design is the culmination of a comprehensive BI process that would usually include gathering requirements, defining KPIs and creating a data model. However, the importance of proper dashboard design should not be understated – poorly designed dashboards could fail to convey useful information and insights and even make the data less comprehensible than it was originally.

A good BI dashboard design is one that –

  • Makes the complex simple: we have lots of information, lots of data that changes all the time and different analytical needs and questions. We want to take all this complexity and make it simple.
  • Tells a clear story: we want to be able to connect data to its context in the business and to answer the viewer’s questions. This is where the visual layout of a dashboard plays a crucial role.
  • Expresses the meaning of the data: the chosen data visualizations need to correctly represent the data, and the information you want to extract from it.
  • Reveals details as needed: we want each viewer to have access to the data they need – no less but also no more. Some users might need to be able to see a more granular view of the data – others could suffice with an overview.

While each data dashboard has its own requirements, limitations, and goals, there are certain guidelines that are almost always relevant for dashboard creation. We will proceed to present four of these principles, and how you can start applying them to your dashboards right now.

First, let’s examine what a poorly designed dashboard might look like:

[Image: an example of a poorly designed dashboard]

Which poor design choices are immediately noticeable?

  • Too many widgets (about 30 of them), visualizations, and indicators, creating visual clutter
  • Basic questions such as “what is the total amount of sales” take much more than 5 seconds to answer
  • No organizing principle behind the visual layout – widgets seem to be strewn randomly
  • Tables at the bottom add very little in the way of insights

By applying the following good dashboard design principles, this dashboard could have been improved dramatically.

1. The 5 Second Rule

Your dashboard should provide the relevant information in about 5 seconds.

Your dashboard should be able to answer your most frequently asked business questions at a glance. This means that if you’re scanning for the information for minutes, this could indicate a problem with your dashboard’s visual layout.

When designing a dashboard, try to follow the five-second rule – this is the amount of time you or the relevant stakeholder should need to find the information you’re looking for upon examining the dashboard. Of course, ad-hoc investigation will obviously take longer; but the most important metrics, the ones that are most frequently needed for the dashboard user during her workday, should immediately ‘pop’ from the screen.

2. Logical Layout: The Inverted Pyramid

Display the most significant insights on the top part of the dashboard, trends in the middle, and granular details in the bottom.

When designing a dashboard it’s important to follow some kind of organizing principle. One of the most useful ones is the inverted pyramid (see image). This concept originated from the world of journalism, and basically divides the contents of a news report into three, in order of diminishing significance: the most important and substantial information is at the top, followed by the significant details that help you understand the overview above them; and at the bottom you have general and background information, which will contain much more detail and allow the reader or viewer to dive deeper (think of the headline, subheading and body of a news story).

How does a journalistic technique relate to dashboard design? Well, business intelligence dashboards, like news items, are all about telling a story. The story your dashboard tells should follow the same internal logic: keep the most significant and high-level insights at the top, the trends, which give context to these insights, underneath them, and the higher-granularity details that you can then drill into and explore further – at the bottom.

[Image: the inverted pyramid layout applied to a dashboard]

3. Minimalism: Less is More

Each dashboard should contain no more than 5-9 visualizations.

Some dashboard designers feel the need to cram as many details as possible into their dashboard in an effort to provide a fuller picture. While this might sound good in theory, cognitive psychology tells us that the human brain can only comprehend around 7±2 items at one time – and this is the number of items you want in your dashboard. More than that just translates into clutter and visual noise that distracts and detracts from the dashboard’s intended purpose.

You can avoid visual clutter by layering the data by using filters and hierarchies (e.g. instead of having one indicator for amount of sales in North America and one for South America, give the user the option to apply a filter which changes the same indicator between one and the other) – or simply by breaking your dashboard into two or more separate dashboards.

4. Choosing the right data visualization

Select the appropriate type of data visualization according to its purpose.

We’ve written before about ways to visualize data so won’t go into too much detail here – suffice to say that data visualizations are intended to be more than mere eye candy – they should serve a specific purpose and convey specific facts in a more effective way than the basic tabular format.

Before choosing a visualization, consider which type of information you are trying to relay:

  • Relationship – connection between two or more variables.
  • Comparison – compare two or more variables side by side.
  • Composition – breaking data into separate components.
  • Distribution – range and grouping of values within data.

If you’re stuck, you can always use our interactive wizard to help you choose the right data visualization.

[Image: example of a good dashboard design]

Dashboard Design: What Else to Consider

Choosing the right visualization is key to making sure your end users understand what they’re looking at, but that’s not all you should consider. When thinking about how to design a dashboard you need to also take into account who will be the end user of the dashboard in the first place.

For example, when designing a dashboard for an end user focused on ad platform optimization, you probably want to focus your widgets on metrics that will increase conversion rates. Because your end user is in the thick of what goes on with every ad on a day-to-day level, looking at nitty-gritty measures such as CPM (cost per mille) makes a lot of sense. However, a VP of Marketing probably just wants to see, at first glance, the broader strokes of how ad performance changes the leads brought in.

To that end, as we mentioned earlier, before diving head first into dashboard design, sit with your end users in order to gather requirements and define KPIs. Without doing that, you can design the most beautiful dashboard in the world but it won’t change the way users make decisions in the long run.

4 Revenue Benefits of Embedded Analytics for Application Vendors

While offering clear advantages to end users, embedded analytics can also deliver scalability and revenue to the companies that provide analytics to their customers.

Often overlooked, these benefits can make a big impact on your bottom line, positively influencing customer decision-making, encouraging new business, and even taking advantage of previously untapped monetization opportunities.

Let’s examine 4 main advantages of embedding an analytics solution in your B2B application.

1. Increased Win Rate

Analytics has become a compulsory functionality in today’s B2B market. This means that adding an analytics solution to a product or service that lacks one can immediately increase customer satisfaction, market positioning, and adoption.

In addition, upgrading an application’s analytics solution also presents revenue opportunities. It’s an ideal way to keep current customers’ attention by offering new capabilities from an existing offering. It can also pique the interest of net-new customers, potentially securing more business overall.

2. Decreased Churn Rate

Customers will switch solutions when they aren’t getting the functionality they need. Because data analytics is associated with competitive advantage, many decisions to switch solutions today are driven by the need for more information and better analytics. By adding or upgrading your offering’s analytics and BI, you give existing customers new capabilities that keep them from seeking a different solution.

On top of this, showing your clients that you’re constantly working to improve your product makes an impression. You can ensure long-term loyalty from your current customer base by offering them not just new features, but functional ones that will make their lives easier.

3. Expanded Product Licensing

Embedding analytics doesn’t have to be limited to a single use case or product. Similar to the above point, adding or upgrading analytics functionality can grow your application, product or service’s potential user base. The more departments, teams, or business units that can utilize and realize value from your application, the bigger increase you’ll see in user or product licenses from new and existing customers.

4. Feature Monetization

Giving users the ability to customize your application with additional “pay to play” modules (outside of the main offering) can be an excellent way to maximize the flexibility and value of your application, product, or service. Offering an analytics module can be a lucrative addition to your customization portfolio due to the sharp market demand for analytics tools. The additional information supplied by the added analytics can be a line item for additional revenue.

Ready to See The Benefits of Embedding Analytics Into Your Offering?

The revenue opportunities analytics and BI present are real. But what does this mean for your company? How can you be sure that you’re choosing the analytics solution that can expand and grow as you do? Choosing the right analytics solution that dovetails seamlessly with your application, product or service is, of course, an important and strategic decision.
