Tag Archives: Best

Keep your Microsoft Dynamics 365 CRM Data Secure – Azure AD Best Practices After MFA – Part 1

April 4, 2021   Microsoft Dynamics CRM

Your Microsoft Dynamics 365 CRM is a database packed with invaluable information about your business, and it deserves the best protection you can provide. From prospects, customers, and service issues to purchasing and much more, your business can’t function without this information. A security breach that puts this data in the wrong hands would be nothing short of devastating.

While your everyday CRM users are focused on serving or selling, they’re probably not thinking about how their every action relates to security. Setting up Multi-Factor Authentication (MFA) with Microsoft Azure AD is a great first step, but there are other best practices that can maximize your organization’s data security in the cloud while still providing an end-user experience that promotes productivity.


In a presentation hosted by buckleyPLANET for the Utah SharePoint User Group (UTSPUG) and Microsoft User Group (MUGUT), Eric Raff, Cloud Practice Director at JourneyTEAM, shared the top 10 security tips and considerations to apply after you’ve rolled out MFA in your Microsoft Dynamics 365 tenant. Raff has more than 25 years of expertise in Identity and Access Management across Microsoft 365 and Azure.

This is a two-part blog. In Part 1 we cover the first five of the top 10 security tips for Microsoft Dynamics 365, Azure, and Microsoft Cloud Services once you’ve deployed MFA. Many of the steps involved in these best practices require Azure AD P2 or the Microsoft Enterprise Mobility + Security (EM+S) mobility management and security platform.

1. Azure Portal Settings: Two Suggestions

Log into portal.azure.com and check on these two settings:  

  • Go to “User Settings.” You should restrict access to the Azure AD administration portal; to do this, make sure “Restrict access to Azure AD administration portal” is set to “Yes.”
  • Also, check out the name of your tenant. This will show up whenever there is a Microsoft OneDrive sync integration. Make sure it is relevant!  


2. Set Restrictions on Guest User Access

Do you know how many guest accounts you have in your tenant? If you don’t know the exact number, you should at least be aware of where you can find out. The default external sharing setting is “Allow guests to share items they don’t own,” meaning content can be shared with anyone, even anonymously, and your guests can also invite other guests. This is an ideal area to create some restrictions. Thankfully, “Restrict access to the Azure AD Administration portal” is “no” by default!

The Identity Governance solution in Azure AD P2 can set restrictions on guest accounts with Access Packages and Access Reviews.   

Access Package 

  • In the Azure AD Portal, click “Identity Governance” > “Settings.” 
  • Under “Manage the lifecycle of external users,” select what happens when an external user who was added to your directory through an Access Package request loses their last assignment.
  • This allows you to block external users from signing into the directory and will remove an external user after a set number of days. This only works if the guest account came in via an Access Package. 

Access Review Policy 

  • From the Azure AD Portal, click “Identity Governance” > “Access Review.” 
  • To create a new Access Review: 
  • Select what to review by “Teams + Groups,” or “Applications.” 
  • Select a specific group, e.g., “All Guests.”  
  • Select a review scope: “Guest Users Only.” 
  • Adjust the settings to your preference. You can set up a policy that ensures guest access only to those that truly need it. Your policy could put the responsibility on the users to review their access. If they don’t respond to a review request in a specific amount of time, they may be blocked from signing in for 30 days, then later removed from the tenant.  
  • At myaccount.microsoft.com you can self-manage your guest account in other directories as well as completely delete guest accounts you don’t use. Go to “Organizations” and click “Leave Organization.”  

3. Manage Consent and Permissions for Enterprise Apps

Cyber criminals now use fake enterprise apps to gain access by tricking you into granting consent. New functionality in the Azure Active Directory / Microsoft 365 environment allows for greater consent governance.

  • Head to “Enterprise Apps” > “Consent and Permissions.” 
  • Here you can manage user consent and permissions from verified publishers. 
  • Once an app comes from a verified publisher and you have set up the permissions, users will only be able to consent to those actions.

Next, check the user settings under “Admin consent requests (Preview).” 

  • Change “Users can request admin consent to apps they are unable to consent to,” to “Yes.” 
  • Click “Select users to review admin consent requests” and select an appropriate Admin (should be Global, Application or Cloud Application Administrator) who will be notified and make the decision to allow or reject consent.  

One last note: if you, as a Global Administrator or Enterprise Application Administrator, ever see a “permissions requested” box with the option to consent on behalf of your organization, proceed with caution. You will be consenting for everyone in the tenant and should be sure about this decision.

4. Block Legacy Protocols 

Hundreds of password spray attacks can happen every hour, targeting legacy protocols such as SMTP, IMAP, POP, ActiveSync, and Outlook Anywhere (RPC over HTTP), as well as older Office clients such as Office 2010 and 2013.

Identify who is using legacy protocols in the environment: 

  • Log into the Azure AD portal (portal.azure.com). 
  • Go to “Monitoring” > “Sign-ins.”
  • Make sure you have the new experience turned on. 
  • Click “Add Filter” > “Client App” > “Apply.” 
  • You can then review the client apps, see which sign-ins used legacy authentication, and review the successful and failed attempts.

Now you have what you need to build a Conditional Access (CA) policy to block access.  

  • Navigate to “Security” > “Conditional Access” > “Policies.”
  • Here you can create a new policy that blocks legacy protocols. Be sure it targets all users (except your “break glass” account). 
  • Go to “Conditions” > “Client Apps” and select “Legacy Authentication Clients.”
  • Set access controls to “Block Access.”

5. Check Your Security Defaults

Before following this tip, be aware that using Security Defaults is only suggested if:  

  • No Conditional Access policies are enabled in your environment. 
  • You don’t need fine-grained control over access and authentication. 
  • Your organization is relatively small. 

Microsoft’s Security Defaults are basic, recommended identity security mechanisms that provide a great baseline of features, but you should still consult an expert to confirm whether Security Defaults are the right choice for your organization.

To turn on defaults: 

  • From the Azure AD Portal, go to “Properties.” 
  • Make sure that Security Defaults is set to “Yes.” 

What Security Defaults activate or enforce:  

  • All users are required to register for Azure MFA.
  • Administrators must perform MFA.
  • Legacy authentication protocols are blocked.
  • Users must perform MFA when risky activity is detected.
  • Access to the Azure portal and other “privileged” activities is protected.

Be sure that “Users can use the combined security information registration experience” is turned on. 

Read the full article to continue on to tips 6 – 10!

NEXT STEPS:

  1. Join a free consultation and ask all the questions you wish.
  2. Plan your Deep Dive meeting – Get your organization’s Customized Solutions presentation.

Article by: Jenn Alba – Marketing Manager – 801.938.7816

JourneyTEAM is an award-winning consulting firm with proven technology and measurable results. They take Microsoft products such as Dynamics 365, SharePoint intranet, Office 365, Azure, CRM, GP, NAV, SL, and AX, and modify them to work for you. The team has expert-level, Microsoft Gold certified consultants who dive deep into the dynamics of your organization and solve complex issues. They have solutions for sales, marketing, productivity, collaboration, analytics, accounting, security, and more. www.journeyteam.com

CRM Software Blog | Dynamics 365

Inogic Preferred apps – 15 smart solutions to get the best of Dynamics 365 CRM / Power Apps

December 25, 2020   CRM News and Info


It’s finally here: the end of the year 2020. The onset of an unforeseen lockdown made us all stop, question, think, and reflect on our goals, our priorities, and the pressing question of what’s next. Despite the barriers and challenges this new reality brought, businesses all around the globe showed the spirit of adaptation and emerged stronger, savvier, and even more profitable in many ways.

Just like the champion spirit our Dynamics 365 CRM / Power Apps community exhibited, we buckled down and committed to keep moving forward to solidify and enhance our Inogic productivity suite. So before 2020 fades away and we see the dawn of 2021, we would like to share a roadmap of everything you can supercharge with Inogic’s suite of preferred productivity apps.

Integrate Seamlessly (Preferred Apps on Microsoft AppSource)

Maplytics™ – What if you could have all your CRM data conveniently and accurately plotted on a map right within Dynamics 365 CRM / Power Apps? How about using these powerful visualizations with features that allow you to increase your efficiency, optimize your productivity, and extract geographical insights? Say hello to Maplytics™!

Maplytics is a geo-mapping and geo-analysis app that seamlessly integrates Bing Maps with Dynamics 365 CRM / Power Apps and enables you to make the most of your CRM data. Maplytics comes packed with features like Territory Management, Radius Search, Appointment Planner, Optimized Routing, PCF Controls, Census Data, Heat Maps, Truck Routing, and much more! All in all, there is no going back once you’ve got the Maplytics pack.

InoLink – If juggling accounting and sales data between QuickBooks and Dynamics 365 CRM proves to be a hurdle for you, it’s time to step over the entire hassle. With our cloud-based productivity app InoLink, you can integrate Intuit QuickBooks with Dynamics 365 CRM. You’ll realize the real value of InoLink when you have a complete aerial view of your entire customer accounting information right within Dynamics 365 CRM!

Enhance productivity with a single click (Preferred Apps on Microsoft AppSource)

There are some basic operations users need to perform frequently, but the absence of a way to streamline that monotony is painfully inefficient. That’s why we created our line of 1 Click apps that minimize inefficiency and maximize results.

Click2Clone – Clone or copy any of your Dynamics 365 CRM entity records. Whether you have simple parent-child records or 100+ line-item records, Click2Clone is up for cloning any record you throw at it.

Click2Export – Exporting your data can be as simple as a click. With one click, you can export and email any Dynamics 365 CRM document, such as reports, CRM views, or document templates (Word, Excel). Moreover, the format is your choice: Excel, Word, PDF, CSV, or TIFF for your exported files.

Click2Undo – Undo your mistakes instantly and with ease. Whether you made a simple error or deleted some very important records in your CRM, Click2Undo can restore anything that slips through the cracks.

Get Hassle Free Attachments (Preferred Apps on Microsoft AppSource)

Attach2Dynamics – Ever wondered if there was a comprehensive attachment management solution that could integrate with multiple storages like SharePoint, Dropbox, or Azure Blob Storage? Imagine never running out of storage space, having a hard time managing documents in the cloud, or being stuck with a single integration! Attach2Dynamics achieves all of this by allowing you to store and manage your documents and attachments in a cloud storage of your choice from within Dynamics 365 CRM.

SharePoint Security Sync – By syncing Dynamics 365 CRM security privileges with SharePoint, you can ensure secure and reliable access to confidential documents stored in SharePoint. It has similar attributes to Attach2Dynamics and allows attachment management in SharePoint from within Dynamics 365 CRM.

Visualize to Optimize (Preferred Apps on Microsoft AppSource)

Kanban Board – Say goodbye to the old grid view and say hello to the new Kanban view. With Kanban Board you can visualize Dynamics 365 CRM data by categorizing entity records into lanes and rows as per their status, priority, or Business Process Flow stages. This is a proven and effective way to keep track of ongoing projects and assignments.

Map My Relationships – Visualization makes everything easier, even when it comes to complex relationships between entities. With Map My Relationships you can visualize the relationships between entities or related records in a mind-map view, giving you an overview of all the related records. This is truly information in an instant.

Use Independent Stand-Alone Apps (Preferred Apps on Microsoft AppSource)

Alerts4Dynamics – Notify your team about an upcoming sales meeting or a monthly sales target, or even send everyone celebratory greetings. You can do all this and much more with Alerts4Dynamics. Set alerts for your team members through pop-ups, email, or form notifications anytime, anywhere, all within Dynamics 365 CRM.

User Adoption Monitor – Monitor and keep track of all user actions and performance in Dynamics 365 CRM! Track common actions and special messages for all users in your organization on a daily, weekly, or monthly basis. Then view the dashboards and leaderboards to analyze the performance of each user, so you can implement measures to improve the adoption of Dynamics 365 CRM.

Lead Assignment and Distribution Automation – Assign your pending workload or incoming leads to your team members easily and automatically. Automation also means fair and balanced assignments. With Lead Assignment and Distribution Automation, you can be assured that there will be no mismanagement, no cherry picking, and no biased working styles when you are creating a plan for your future business.

Accurate and Easy Billing Management (Preferred Apps on Microsoft AppSource)

Recurring Billing Manager – Automate your billing processes with Recurring Billing Manager. Generate invoices, send payment reminders, and calculate charges on delayed payments, all with the click of a few buttons. Human inaccuracy and slow speeds are no longer an obstacle for Recurring Billing Manager users.

Subscription Management – For software publishers and Value Added Resellers (VARs), our Subscription Management app can create flexible billing schedules, payment reminders, and penalty calculations. You can also create invoices for an entire Accounts Payable department or for a single contact, all with assured accuracy and reliable calculations.

Auto Tax Calculator – Automate your tax calculations with Auto Tax Calculator. You can leave manual calculations behind by handing the number crunching over to the software. This increases accuracy, saves time for the entire team, and improves productivity!

Quite the list, isn’t it?

Did you find something that caught your interest? If you did, you’re in luck: you can try any of the above apps for free with our 15-day free trial! Just go to our website or Microsoft AppSource and get started in 15 minutes.

If you need a walkthrough of any of the apps, we are just an email away. Shoot us a quick inquiry at crm@inogic.com and one of our productivity apps experts will get in touch with you.

So until next time, we wish you a safe, innovative, and productive new year!

CRM Software Blog | Dynamics 365

3 Best Practices for Analytics Professionals to Bring Wind Power to Fruition

November 24, 2020   TIBCO Spotfire


The world is racing towards global electrification, poised to provide electricity to the 20 percent of the global population currently without this utility. Leading the race is wind energy—specifically offshore. The transformation of the energy industry from “black gold,” oil, to “green gold,” renewable energy sources, will have varied far-reaching impacts. Fortunately, TIBCO has developed a comprehensive solution that will take current and future wind energy investors and operators from the land to the sea—moving operations offshore. 

In fact, over $35 billion was invested in offshore wind projects during the second quarter of 2020. The longevity and significance of these projects make this a hot market with plenty of room for expansion. To tackle the complex variables that impact the outcomes of wind projects, TIBCO Data Scientist Catalina Herrera and her team have worked tirelessly to produce the most comprehensive solution available to date.

Investors and developers can utilize this solution to not only forecast potential for wind farms, but to predict the production power of wind projects and determine the feasibility and potential limitations of a location. Users can also site and plan for the great new frontier—offshore wind farms. Established wind farms will also benefit from operations optimization such as the ability to incorporate real-time data from sensors to prevent downtime and eliminate overproduction concerns.

Streaming data from offshore wind farms in Spotfire

From planning to operations, siting for wind farms to placing steel in the ground, TIBCO’s Wind Forecasting Analysis is opening doors for energy players. In this blog, you’ll learn the role of analytics and data science in bringing the promise of one form of renewable energy, wind, to fruition.

3 Best Practices for Analytics Professionals to Bring Wind Power to Fruition

Source Meaningful Data

The variables that need to be considered for accurate wind forecasting are vast. In this solution, wind characteristics such as wind speed, wind temperature, and wind pressure comprise just one component of the analysis. Wind characteristics are proven variables that are required when predicting power outcomes. Additional variables such as weather seasonality, the production power of various wind turbines, and geographical relation to power stations (and more!) also play a role in developing this comprehensive analysis.

Analysis of characteristics such as wind speed, wind temperature, and wind pressure

Due in part to the multiple players involved with collecting these data sets, many data silos exist. Pulling in this data from weather stations, equipment manufacturers, and current wind farms produces large quantities of data, but not necessarily quality data. Therefore, data cleansing is imperative to take this raw data and produce quality models for prediction. TIBCO solutions make the data wrangling process quick and seamless.

Create Location-Specific Time Series Models

Using the cleansed data, this Wind Forecasting Analysis Solution creates time series models for stakeholders in the wind industry. 

  • Wind farm developers can use the models for siting optimal locations. 
  • Energy suppliers can avoid overproduction by coordinating the collaborative production of traditional power plants with weather-dependent energy sources. 

To obtain more accurate predictions, extrapolation techniques are required. Obtaining weather data or wind farm data isn’t enough, as this analysis requires weather data from exact longitudes and latitudes. This extrapolation can be done using Voronoi Polygons, a function of TIBCO Spotfire. To create accurate models, weather conditions data are indexed in time order to demonstrate simple models such as wind speed variations during a single day. These smaller models are captured in an ARIMA (Auto-Regressive Integrated Moving Average) model to predict wind power generation. ARIMA is a class of predictive model that captures a suite of different standard temporal structures in time series data.
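For readers who want the underlying math, the general ARIMA(p, d, q) formulation (the standard textbook form, not anything specific to the TIBCO solution) models the series after d rounds of differencing as a combination of its own past values and past forecast errors:

y'_t = c + \sum_{i=1}^{p} \phi_i \, y'_{t-i} + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j} + \varepsilon_t

Here y'_t is the d-times-differenced series (for example, hourly wind power), \phi_i are the autoregressive coefficients, \theta_j are the moving-average coefficients, and \varepsilon_t is the error term.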

Explore weather variables across wind stations or projects

The user of this Wind Forecasting Analysis benefits from the clean visualizations that TIBCO Spotfire provides, but behind the dashboard is a complex data pipeline that uses large quantities of data from many services, processed by advanced statistical methods. TIBCO Data Science is responsible for processing all of the data transformation from the multiple data sources, imputing missing values, and aggregating and joining data tables—all in preparation for the ARIMA model to run.

Looking ahead, the solution can incorporate streaming data, effectively offering the analysis in real time. This hyperconverged analytics approach enables users to not only visualize and perform expansive analysis into wind data but also to generate actionable insights, rapidly.

Make Informed Decisions: Onshore and Offshore 

As wind energy plays an increasingly important role in the supply of energy globally, implementing a solution for predicting the output of wind farms becomes more crucial. Not just for planning new wind projects, but also for optimizing existing operations, and creating energy ecosystems that are collaborative and efficient. 

Here’s what we know about offshore wind: 

  • 80 percent of offshore wind resources are in waters greater than 60 meters (197 feet). 
  • Floating wind turbines enable sites further from shore, where they are out of sight, but there’s also better wind!
  • Floating wind technology is expected to be deployed at utility-scale by 2024.


The analytics needed behind the scenes will continue to increase in complexity as the number of variables to consider increases with the transition to offshore wind projects. With the push for offshore wind projects to provide a greater amount of energy to the grid within this decade, the capabilities that this analysis provides are all the more relevant. We are proud to be spearheading a data science solution that will empower the renewable energy industry. We are actively working to employ this same science in other types of renewable energy, such as solar. You can learn more about our Wind Forecasting Analytics solution in this webinar.

The TIBCO Blog

Datathon Winner: Best Analytics Visualization Using TIBCO Spotfire

November 11, 2020   TIBCO Spotfire


Oil and gas and energy professionals came together this summer for an amazing, data-driven event. Untapped Energy and the Society of Petroleum Engineers (SPE) organized and hosted an incredible event, the DUC Datathon a.k.a. Drilled but Uncompleted Wells Datathon. Bringing together hundreds of students and energy professionals, participants gained access to virtual sessions, data bootcamps, and lively competitions. 

TIBCO was excited to sponsor the event and provided free access to TIBCO Spotfire®, an advanced analytics tool that is very popular in the global energy sector. Our very own Michael O’Connell, TIBCO Chief Analytics Officer, kicked off the competition with a fantastic keynote.  

We even had a competition of our own! Worthy competitors went head to head to create the “Best Visualization Using TIBCO Spotfire.” There was no shortage of creativity and talent, but in the end only one champion could be crowned for their outstanding efforts: Dean Udsen. Udsen is an accomplished data and analytics consultant in the energy space. Over the last decade, he has been widely recognized for his efforts with TIBCO customers.

Udsen decided to join the Datathon to stay busy, motivated, and continue his learning this summer. And his enthusiasm paid off, winning him a pair of Bose 700 headphones. There’s a lot we can learn from Udsen’s success. Below we’ll take a look at a few examples of how he helps TIBCO Spotfire customers in Western Canada create real business value.

Providing Value to Customers 

Udsen’s success is based on his core best practices: It’s essential to have a deep understanding of your clients’ goals and ensure you’re working towards a common objective. It’s also important to work with the client team at their comfort level. Some organizations are experienced and ready to move quickly. Others take time to ramp up their capabilities and develop trust in the analytics.

Udsen believes that integrating data is the “secret sauce” that lets any organization identify ways to improve its efficiency and effectiveness. By providing advice and experience on how to pull the data together, he helps clients quickly build dashboards that provide the business with real value.

Today, there is often an impulse to either build everything all at once or to build all the backend components to support all potential processes before an end-user sees anything. But, according to Udsen, it’s better to start with one or two business processes and build them from beginning to end. This gives end-users something to start working with, shows them the potential, and allows the team to learn as they go. Plus, you gain immediate value as clients use the new process quickly to help make business decisions.

It’s important to not employ technology for the sake of technology. In a very price sensitive environment, you need to implement tactical projects that provide an immediate benefit. First, understand what is the most important issue for the client, and then look for the quickest way to improve.

Now, let’s look at a few customer examples from Dean Udsen’s past projects:

  • Radical Improvements in Time to Acquisition Analysis: One client wanted to improve their acquisition process and significantly decrease the amount of time needed to perform analysis. How accurate were their forecasts for price, production volumes, revenue, etc. in comparison to the reality of what happened in the ensuing years? They built the dashboard below using Spotfire to combine data from public data sources, internal accounting, and forecasting software. It allows the team to quickly and easily compare the forecasted vs. actual data of any acquisition.
  • An Evolution from Weekly Production Reporting to Immersive, Real-Time Reporting: In the past, another client of Udsen’s met each week to compare weekly production using printed hard-copy reports for each well. Now, with a new dashboard built using Spotfire (pictured below), the team has an automated way to review the data. The team can select the report week, group wells by their pad location, and investigate individual wells in real time. They can easily compare wells and pads to each other, and the amount of time required to prepare for the meeting is now zero. The data is available on demand without having to wait for the reports.
  • Faster Well Counting, from Weeks to Hours: Another client previously used a quarterly report process to review industry activity and predict how many wells would be drilled on their royalty lands. This process took weeks to complete each quarter. The dashboard below, again built using Spotfire, speeds up this process greatly, allowing the team to get updates every day and see up-to-the-minute activity and expected well counts as they occur. It combines data from a number of sources, including public data, land data, and GIS data, to automatically determine which wells are “on the lands” and whether the expected drilling matches the company’s mineral rights.
  • Deeper Insights into Well Operations: Engineers at one company needed a quick way to combine data between their production, accounting, and drilling applications. A quick look at each well was needed to ensure optimal performance and that costs were being contained. Working with the engineers, a number of charts were developed using Spotfire to meet this need, looking at the physical operations of the wells, along with the production and sales revenues. See example below:

Get started with TIBCO Spotfire 

These are just a few examples of the successful use cases saving Spotfire users time and money. In fact, according to one customer:

“Spotfire enables us to bring together several disparate data sets, both internal and external to the company.  We can then visualize relationships and develop insights about our business, without spending most of our time collecting, curating, modifying, and combining data.  The results are easily shared, and often generate new ideas that lead to even further leverage of our data, and impact the strategic activities within the business.” 


Interested in implementing any of the use cases mentioned above or others? Implement TIBCO Spotfire and enable everyone to visualize new discoveries in your data, quickly and easily. 

The TIBCO Blog

Performance of pure function – Best way to define a function?

October 28, 2020   BI News and Info


Recent Questions – Mathematica Stack Exchange

Building Bridges — Best Practices for BI Teams Working with Data Teams

September 17, 2020   Sisense

Implementing analytics at your company is a multi-team job. In Building Bridges, we focus on helping end-users, app builders, and data experts select and roll out analytics platforms easily and efficiently.

Selecting and implementing a new BI and analytics platform is a big decision and can be a vital part of an organization’s digital transformation. Rolling out a new platform involves everyone who’ll implement, maintain, and most heavily use such a platform.

Advanced analytics and BI democratize access to data, empowering more business users to develop insights, with less reliance on data professionals who have previously been gatekeepers of this information. As business teams become more involved with data in their day-to-day work, it’s natural that they should play a role in choosing the right platform and determining how it will benefit their organization.

Making these decisions — which platform to choose and how to put it into operation — requires buy-in from both the analytics and BI team (the probable end-users/frontline users) and the data team (who will prepare the data, build the models, and connect datasets).

Importantly, to make the best decision for your organization, each team must understand, acknowledge, and address the needs and concerns of the other.


Working together: Understanding priorities

Mutual understanding can only come about via dialogue between teams, so that they can understand the priorities and needs that their BI and analytics platform should meet. It’s important that the analytics and BI team clearly indicate their needs and that the data team understand what the BI platform will be used for and how they can build the right data model(s) to suit the analytics and BI team’s requirements.

To help achieve this, let’s look at some considerations that data teams and analytics & BI teams should discuss in the vital conversation about selecting and implementing a new platform, so that they both can get the most out of the process.

The five big questions

First, when considering a BI and analytics platform for your organization, there are five big questions that everyone should ask, irrespective of their function. A simple yes or no answer in each case will help you determine some fundamental requirements that a platform should fulfil. These five big questions are:

The conversation

Armed with the answers to the five big questions, you’re then in a position to drill down into some details about what a BI and analytics platform should do for you. This is where the really interesting conversation starts, as you establish where your requirements, questions, and concerns converge and diverge. We envisage that the conversation could go something like this:


Usability

BI & Analytics Team: Can we find a BI and analytics platform that’s really user-friendly? Can you involve us in choosing something that offers self-service analytics, so we don’t have to hassle you all the time to help us crunch complex data? We’d really like to be part of the selection process.

Data Team: Sure. Let’s work together to find a platform that meets your needs and ours. From our perspective, we need to be confident that a platform is robust and that it can handle on-premises and cloud data from any source.

While the data team is concerned with storing, connecting, and preparing data for analysis, the BI and analytics team is concerned with examining the data and creating relationships and comparisons between datasets in order to surface insights and visualize the data. The former are data experts; the latter is more focused on business needs, and usability is a priority for them.

“When we were using a different BI platform, I wouldn’t let frontline business users touch it,” says Jennah Crotts, data analytics manager at Jukin Media. “They’d click one thing, everything would break, and I’d have to rebuild it from scratch. Now, with Sisense, if somebody is up to speed on their data and has gone through some basic training, I can copy a dashboard and give them ownership.”

Governance

BI & Analytics Team: It’s really important that we make our data as accessible as possible to as many users as we can, without unnecessary impediments. Can we make this happen?

Data Team: Certainly, but let’s not forget governance too. It’s also important for us to be able to control access to our data and ensure that proper policies are in place.

Data management, including security, is a priority for the data team. Accessibility and the democratization of data are key focuses for the BI and analytics team. To maximize the utility of an analytics platform and the value of data to an organization, data for decision-making should be as accessible and as easy to understand for as many end-users as possible.

Trust and accuracy

BI & Analytics Team: For data, and ultimately insights, to be as accessible as possible, we need analysis and visualizations to be as simple as possible to find and use. Can we ensure that this is possible?

Data Team: We want to make this happen for you, while at the same time ensuring that data isn’t inadvertently mishandled and that problems don’t arise when setting up new KPIs or dashboards. We need to protect the data, the methods of accessing it, and the ways it can be analyzed and visualized, while making it as accessible as possible.

Both teams are concerned about the quality of the data they’re handling. The BI and analytics team, and end users, want to be sure that they’re receiving and analyzing the most accurate data possible. However, it’s the data team’s job to manage and prepare the data and optimize its accuracy by ensuring it’s “clean”: free of errors and duplications, and not available to unauthorized users. The ability to set access programmatically by group, team, department, or individual is an important feature that data teams will look for in a BI platform.

“The benefit of using a BI system,” says Jennah, “even if you have all of your data mapped beautifully, is: Can you make sense of it? Can you notice trends? Can you identify what’s going on? Having the right BI tool and visualizations can allow you to do a lot of things.”

Scalability, agility, and capacity

BI & Analytics Team: It’s really important that we get the most value we can from a BI and analytics platform. So it shouldn’t just be suitable for now. It must be able to scale with the growth of our business. And scaling up shouldn’t be costly. Positive ROI is essential.

Data Team: Agreed. The platform we choose must be scalable, and agile, to respond to the growth of our business, the increase in the volume of data we’re handling, and changes in our market that require us to pivot. It needs to have the capacity and capability to grow with us.

In the interests of the business, the BI and analytics team seeks to maximize ROI, which is why they want a BI and analytics platform that is future-proofed so that it can scale up with the growth of the organization and the data it generates. The data team is solely focused on the capability of the platform to scale up and its capacity to handle increased volumes of data.

“To test the agility of the platform, I was able to connect the technical person at Sisense with our guy in engineering,” Jennah says. “This was helpful because if you try to reword what your tech guy is saying, it just won’t make sense. So when the right technical people can speak directly to each other, it helps the flow of the setup enormously.”


Cost-effectiveness

BI & Analytics Team: On the subject of ROI, a BI and analytics platform has to be cost-effective. It needs to save users time and resources while improving performance and decision-making. We need to be sure we can achieve these aims, so we should conduct a proof of concept before making a final decision.

Data Team: Indeed. We appreciate that we can’t implement a complex platform just for complexity’s sake. It has to serve our business, our needs, and our objectives. And it has to address the particular challenges that any business has. A POC is essential.

When it comes to cost-effectiveness, the teams align. Both need to ensure that their choice of BI and analytics platform can deliver what it promises, what each of them requires and the maximum benefits to their organization, with no hidden or unexpected costs.

Versatility

BI & Analytics team: Please keep people — the users — in mind. We really need to put end-users first rather than focusing mainly on IT needs. Any platform that we choose must be able to solve a range of different data-related issues, each of which affects different people in different functions within the business.

Data team: We actually think that our objectives meet here. Our platform of choice must be able to handle the widest variety of data from the widest variety of sources, precisely to maximize accessibility and usability. More and more data is unstructured (text, video, audio, graphical data and the like) and comes from non-technical sources. It requires technology like NLP to deliver usable findings for all. We want to reduce the time it takes to clean, integrate, and maintain data.

So, for instance, the C-Level wants to see high-level performance reports across departments. Mid-level executives want to make better decisions in their line of business.  All data, and all types of data, must be accessible to all, subject to governance and policies. This necessitates that a BI and analytics platform be as versatile as possible.

“We’re starting to get more and more positive feedback from our C-level executives,” Jennah reports. “They’re saying things like, ‘This is a lot cleaner looking.’ ‘You were able to make those changes a lot quicker than I’m used to.’ ‘You’re able to get this to me faster.’ and stuff like that.”

Speed

BI & Analytics team: We’re missing opportunities when we can’t get insights quickly. We need data and visualizations in real time so we can make decisions and pivot fast.

Data team: We get it. Speed and ease of use are critical, but so is thoroughness. We must be sure that we choose a platform that handles and manages all our data properly so you can be confident that you’re getting the best and most accurate insights.

In a world where data is generated by the second, quickly getting your BI and analytics up and running could give you a competitive advantage. More importantly, making data-driven real-time decisions can mean the difference between success and failure, so from a business perspective, speed and ease of use are imperative. They should not be impeded by the need for accurate data management and preparation, which advanced BI and analytics should be capable of achieving equally quickly.

“The head of our content acquisition team, a major data consumer, was unsure about our switch. She was worried about transition costs and maintaining access to her particular data requirements,” Jennah says. “But when I showed her the first Sisense dashboard on a remote screen-share and was able to edit the dashboard to her specifications in real time, she lost her mind over the speed and ease of use of the new platform.”

Implementation

BI & Analytics Team: How are we going to implement a new platform? We want results quickly so we can make decisions fast and get changes under way.

Data Team: Let’s not rush this. We’ll do our due diligence with a POC and ensure the right platform meets our needs, then consider how best to implement a new platform within the organization. If we get it right the first time, it’ll be easier, quicker, and more cost-effective in the long term. Tell us what you need and that will dictate timing.

Implementation can happen incrementally by department, company-wide in one sweep, or limited to certain levels within an organization, and it’s important to decide whether to include embedded analytics and white-labeling for new revenue opportunities. All of these considerations will influence how quickly a new platform can be implemented.

“There’s been solid social interaction with the right kind of people who understand what they’re doing. Our sales guy did his homework and came back and hit every single request with full beautiful explanations,” says Jennah. “That allowed me to go to leadership and say they’re definitely getting a POC because they hit all the boxes. Because when the VP of engineering gets on the phone and starts asking the questions, they need to be able to answer them.”

Cloud, on-premises, or hybrid?

BI & Analytics Team: We need to have the fastest, most flexible, and most scalable access to data of any kind and in any form, to benefit as much as possible from the information we generate and handle. The Cloud has the capacity and flexibility that legacy on-premises solutions lack. Plus, cloud solutions include support, maintenance, and upgrades, which is cost-effective.

Data Team: Sure. We just need to consider whether we put all of our BI and analytics in the cloud, run it on-premises, or use a hybrid of the two. And how do we handle legacy systems on-premises? Is it more cost-effective and efficient to migrate them to the cloud, or to maintain them on-premises alongside cloud analytics for newer data?

This is a key consideration when seeking to maximize the capabilities of BI and analytics and future-proof them. The future of data and analytics is in the cloud. Its capacity is almost unlimited. You only pay for the capacity and processing power you need. There’s no need to invest in your own IT servers or additional data center facilities, or to hire a team to manage and maintain the application. The cloud services vendor will do all of that for you. And implementation is immediate, with no lead time needed for ordering, installation, or deployment.

“I was really interested in a cloud-based interface,” Jennah reports. “I didn’t want a situation I had with another platform where I was constantly downloading and uploading data to communicate with internal business users. Other platforms that are so reliant on Internet speed won’t cut it in our current situation. It’s faster for me to build reports from scratch in Sisense than it would be to edit them in another platform.”

The first step to data-driven success

Choosing and implementing the right BI and analytics platform is a major project that can be hugely valuable for your organization. It will enable you to maximize the value you get from your data and will optimize the insights you can take from it. Plus, done right, it will empower many more of your colleagues to engage with data and make vital decisions. Asking these questions and having this conversation can be the first, essential step in establishing a successful, data-driven future that can supercharge your organization.


Adam Murray began his career in corporate communications and PR in London and New York before moving to Tel Aviv. He’s spent the last ten years working with tech companies like Amdocs, Gilat Satellite Systems, and Allot Communications. He holds a Ph.D. in English Literature. When he’s not spending time with his wife and son, he’s preoccupied with his beloved football team, Tottenham Hotspur.

Blog – Sisense

When Is Microsoft Dynamics 365 The Best Fit For Your Company?

September 12, 2020   Microsoft Dynamics CRM

Microsoft Dynamics 365 is a powerful and versatile Cloud ERP solution that is helping businesses around the world automate processes, organize data, and take advantage of some of the best Business Intelligence tools on the market.

So, do we recommend Microsoft Dynamics 365 for all our clients? In many instances, we do.

If your company is primarily marketing and sales-focused, you really should look into Microsoft Dynamics 365’s powerful CRM (Customer Relationship Management) features.

Some businesses are more focused on order management, event management, project management, service tickets, etc. Usually, we call this the “x” factor in XRM (Anything Relationship Management).

With Dynamics 365, most of what you need is built-in, out-of-the-box, and the rest is something we can configure for your business requirements.

Why salespeople like Microsoft Dynamics 365

  • Outlook integration

Salespeople live in Microsoft Outlook. Dynamics 365 CRM allows you to work in Outlook, tracking contacts, emails, and appointments without switching back and forth between Outlook and CRM.

  • Custom Views

Salespeople like to slice and dice their data. You could decide you need a list of jobs pending or companies in a specific area.  With Microsoft Dynamics 365 CRM, it takes just minutes to create a query and see results. You can also create custom dashboards, pie charts, and graphs to help you visualize your information.

  • Mobile Access

While working remotely, you can still have complete access to your customer information, contacts, opportunities, and everything else right from your iPhone or Android device. 

3 Companies That Are a Fit for Microsoft Dynamics 365

We work with a company that does clinical trials. Their sales process is very workflow-oriented, and they want their data pushed through to their project team. Microsoft Dynamics 365 is an excellent fit for them.

A company that sells measuring instruments is also a client of ours.  They have sales reps in territories nationwide. The sales team needs to track all their sales so that next year, they can try to upsell a maintenance plan or an upgrade. Microsoft Dynamics 365 CRM handles commission tracking and territory boundaries. It can be connected to Microsoft Power BI to do customer mapping and route optimization. And the sales reps can use it remotely via their phones.

The third company has a single Microsoft Dynamics 365 user. She can slice and dice data any way she likes to maintain up-to-the-minute insight. She uses portal features so that her subcontractors can securely access the data they require. For a very low monthly fee, she can have all the functionality she needs.

Each of these companies needs technology to drive and manage their sales processes. The sales teams are the primary users, so Microsoft Dynamics 365 is an excellent solution for these companies.

But if your users focus less on sales and more on everything else, we generally recommend P2xRM, a custom XRM system available at an affordable cost.

P2 Automation can help you determine if your company is a fit for Microsoft Dynamics 365; let’s start the conversation. Contact P2 Automation.

By P2 Automation, www.p2automation.com

CRM Software Blog | Dynamics 365

September Webinar – 10 Dynamics 365 Best Practices

September 10, 2020   CRM News and Info

There’s no doubt that Dynamics 365 can improve your day-to-day productivity. And we all know that the more quickly and efficiently you are able to work, the more you accomplish. These 10 Dynamics 365 tricks will enable you to kick your performance up a notch. Our September webinar will give you a plethora of great tips you can implement today.

Join Brian Begley from enCloud9 as he demonstrates some tips, tricks, hacks, and best practices for Dynamics 365.


How can we help?

If you are not already using Dynamics 365 and are interested in getting started, contact the professionals at enCloud9 today.  We can get you started quickly with our Accelerator packages. Our Accelerators are prepackaged Dynamics 365 implementations designed to get your business up and running on Dynamics 365 in approximately seven days.

If you are currently using Dynamics 365 and your system just needs an update or a reboot, we can help there. Our rescue and repair solution will ensure that you are utilizing your Dynamics 365 to its full potential.

enCloud9 has one of the most experienced Microsoft Dynamics 365 CRM teams in the US. From pre-sales to project management and user support, we will respond quickly with our expertise to answer your questions. Our history dates back to 2009, but our experience goes back even longer. Our consultants have been advising companies for over thirty years, giving them the tools to achieve their goals. Our experience leads to your success. We use our unique approach to help small and medium-sized businesses lower their costs and boost productivity through Microsoft’s powerful range of cloud-based software.

CRM Software Blog | Dynamics 365

Register for our exciting July Webinars – Dynamics 365 CRM Productivity at its best!

July 11, 2020   CRM News and Info


July is another exciting month: we have lined up not one, not two, but three exciting webinars on our Preferred Solutions on AppSource, the Microsoft Business Applications marketplace, viz. Maplytics, Attach2Dynamics, SharePoint Security Sync, and Click2Undo.

Let’s shine a spotlight on how these apps stay at the forefront of Microsoft’s technology:

Click2Undo

Businesses that offer flexibility to their users are the most popular and sought after in the market. One such requirement is the ability to undo changes that have been made in the system. Click2Undo offers just that: it enables users to undo changes made to Dynamics 365 CRM records at the click of a button. Users can undo the last change made to a record or collectively undo changes across multiple records. This lets users enter data more freely, knowing that past changes to records can be undone.

Register for Click2Undo Webinar on Jul 15, 2020 10 AM EST

Maplytics

Maplytics is our flagship product and a market-leading geo-analytical app that is Certified for Microsoft Dynamics (CfMD). It integrates maps with CRM and offers a plethora of functionalities that have placed it in the top echelon of Microsoft Dynamics 365 CRM solutions. Maplytics is quite popular for features such as Territory Management, Radius Search, Appointment Planning, Optimized Routing, Area of Service, Heat Maps, Map Controls & Dashboards, and Shapefile Integration. It has revamped the way maps are used in Dynamics 365 and brought a new wave of map usability to CRM.

Register for our Maplytics Webinar on July 23, 5:15 PM Brisbane time – GMT +10
Presented by MVP Roohi Shaikh (CEO, Inogic)

Attach2Dynamics + SharePoint Security Sync

This webinar comes with the goodness of two in one. Yes! Get to know about Dynamics 365 attachment management on SharePoint, Dropbox, and Azure Blob Storage, and about replicating the security model of CRM in SharePoint, all at once. This webinar covers our attachment management solutions Attach2Dynamics and SharePoint Security Sync, with which you can perform actions on attachments like drag and drop, upload, download, delete, deep search, creating anonymous shareable links, and more on cloud storages. SharePoint Security Sync is an advanced version of Attach2Dynamics: in addition to attachment management, it provides the same level of data security in SharePoint that you enjoy in Dynamics 365 CRM by synchronizing the security model between the two systems.

Register for Attach2Dynamics and SharePoint Security Sync Webinar on Jul 23, 2020 11 AM EST

So hurry up and register for these informative webinars right now and get the best benefits of CRM in one go. What are you waiting for? Register now!

See you there – please register now and Stay Safe!

CRM Software Blog | Dynamics 365

Hands-On with Columnstore Indexes: Part 2 Best Practices and Guidelines

July 2, 2020   BI News and Info

The series so far:

  1. Hands-On with Columnstore Indexes: Part 1 Architecture
  2. Hands-On with Columnstore Indexes: Part 2 Best Practices and Guidelines

A discussion of how columnstore indexes work is important for making the best use of them, but a practical, hands-on discussion of reality and how they are used in production environments is key to making the most of them. There are many ways that data load processes can be tweaked to dramatically improve query performance and increase scalability.

The following is a list of what I consider to be the most significant tips, tricks, and best practices for designing, loading data into, and querying columnstore indexes. As always, test all changes thoroughly before implementing them.

Columnstore indexes are generally used in conjunction with big data, and having to restructure it after-the-fact can be painfully slow. Careful design can allow a table with a columnstore index to stand on its own for a long time without the need for significant architectural changes.

Column Order is #1

Rowgroup elimination is the most significant optimization provided by a columnstore index after you account for compression. It allows large swaths of a table to be skipped when reading data, which ultimately facilitates a columnstore index growing to a massive size without the latency that eventually burdens a classic B-tree index.

Each rowgroup contains a segment for each column in the table. Metadata is stored for the segment, of which the most significant values are the row count, minimum column value, and maximum column value. For simplicity, this is akin to having MIN(), MAX(), and COUNT(*) available automatically for all segments in all rowgroups in the table.

Unlike a classic clustered B-tree index, a columnstore index has no natural concept of order. Rows are added to the index in the order in which you insert them. If rows from ten years ago are inserted, they will be added to the most recently available rowgroups; if rows from today are inserted next, they will simply be appended after them. It is up to you, as the architect of the table, to determine the most important column to order by and to design the schema around that column.

For most OLAP tables, the time dimension will be the one that is filtered, ordered, and aggregated by. As a result, optimal rowgroup elimination requires ordering data insertion by the time dimension and maintaining that convention for the life of the columnstore index.

Basic segment metadata for the date column of our columnstore index can be viewed as follows:

SELECT
	tables.name AS table_name,
	indexes.name AS index_name,
	columns.name AS column_name,
	partitions.partition_number,
	column_store_segments.segment_id,
	column_store_segments.min_data_id,
	column_store_segments.max_data_id,
	column_store_segments.row_count
FROM sys.column_store_segments
INNER JOIN sys.partitions
ON column_store_segments.hobt_id = partitions.hobt_id
INNER JOIN sys.indexes
ON indexes.index_id = partitions.index_id
AND indexes.object_id = partitions.object_id
INNER JOIN sys.tables
ON tables.object_id = indexes.object_id
INNER JOIN sys.columns
ON tables.object_id = columns.object_id
AND column_store_segments.column_id = columns.column_id
WHERE tables.name = 'fact_order_BIG_CCI'
AND columns.name = 'Order Date Key'
ORDER BY tables.name, columns.name, column_store_segments.segment_id;

The results provide segment metadata for the fact_order_BIG_CCI table and the [Order Date Key] column:

(Screenshot: segment metadata for the [Order Date Key] column of fact_order_BIG_CCI, showing min_data_id, max_data_id, and row_count per segment)

Note the columns min_data_id and max_data_id. These ID values link to dictionaries within SQL Server that store the actual minimum and maximum values. When queried, the filter values are converted to IDs and compared to the minimum and maximum values shown here. If a segment contains no values needed to satisfy a query, it is skipped. If a segment contains at least one value, then it will be included in the execution plan.

The image above highlights a big problem: the minimum and maximum data ID values are the same for all but the last segment. This indicates that when the columnstore index was created, the data was not ordered by the date key. As a result, every segment will need to be read for any query against the columnstore index that filters on the date.

This is a common oversight, but one that is easy to correct. Note that a clustered columnstore index does not have any options that allow for order to be specified. It is up to the user to make this determination and implement it by following a process similar to this:

  1. Create a new table.
  2. Create a clustered index on the column that the table should be ordered by.
  3. Insert data in the order of the most significant dimension (typically date/time).
  4. Create the clustered columnstore index and drop the clustered B-Tree as part of its creation.
  5. When executing data loads, continue to insert data in the same order.

This process will create a columnstore index that is ordered solely by its most critical column and continue to maintain that order indefinitely. Consider this order to be analogous to the key columns of a classic clustered index. This may seem to be a very roundabout process, but it works effectively. Once created, the columnstore index can be inserted into using whatever key order was originally defined.

The lack of order in fact_order_BIG_CCI can be illustrated with a simple query:

SET STATISTICS IO ON;
GO

SELECT
	SUM([Quantity])
FROM dbo.fact_order_BIG_CCI
WHERE [Order Date Key] >= '2016/01/01'
AND [Order Date Key] < '2016/02/01';

The results return relatively quickly, but the IO details tell us something is not quite right here:

(Screenshot: STATISTICS IO output showing 22 segments read and 1 segment skipped)

Note that 22 segments were read, and one was skipped, despite the query only looking for a single month of data. Realistically, with many years of data in this table, no more than a handful of segments should need to be read in order to satisfy such a narrow query. As long as the date values searched for appear in a limited set of rowgroups, then the rest can be automatically ignored.

With this mistake identified, let’s drop fact_order_BIG_CCI and recreate it by following this set of steps instead:

DROP TABLE dbo.fact_order_BIG_CCI;

CREATE TABLE dbo.fact_order_BIG_CCI (
	[Order Key] [bigint] NOT NULL,
	[City Key] [int] NOT NULL,
	[Customer Key] [int] NOT NULL,
	[Stock Item Key] [int] NOT NULL,
	[Order Date Key] [date] NOT NULL,
	[Picked Date Key] [date] NULL,
	[Salesperson Key] [int] NOT NULL,
	[Picker Key] [int] NULL,
	[WWI Order ID] [int] NOT NULL,
	[WWI Backorder ID] [int] NULL,
	[Description] [nvarchar](100) NOT NULL,
	[Package] [nvarchar](50) NOT NULL,
	[Quantity] [int] NOT NULL,
	[Unit Price] [decimal](18, 2) NOT NULL,
	[Tax Rate] [decimal](18, 3) NOT NULL,
	[Total Excluding Tax] [decimal](18, 2) NOT NULL,
	[Tax Amount] [decimal](18, 2) NOT NULL,
	[Total Including Tax] [decimal](18, 2) NOT NULL,
	[Lineage Key] [int] NOT NULL);

CREATE CLUSTERED INDEX CCI_fact_order_BIG_CCI
ON dbo.fact_order_BIG_CCI ([Order Date Key]);

INSERT INTO dbo.fact_order_BIG_CCI
SELECT
     [Order Key] + (250000 * ([Day Number] +
         ([Calendar Month Number] * 31))) AS [Order Key]
    ,[City Key]
    ,[Customer Key]
    ,[Stock Item Key]
    ,[Order Date Key]
    ,[Picked Date Key]
    ,[Salesperson Key]
    ,[Picker Key]
    ,[WWI Order ID]
    ,[WWI Backorder ID]
    ,[Description]
    ,[Package]
    ,[Quantity]
    ,[Unit Price]
    ,[Tax Rate]
    ,[Total Excluding Tax]
    ,[Tax Amount]
    ,[Total Including Tax]
    ,[Lineage Key]
FROM Fact.[Order]
CROSS JOIN Dimension.Date
WHERE Date.Date <= '2013-04-10'
ORDER BY [Order].[Order Date Key];

CREATE CLUSTERED COLUMNSTORE INDEX CCI_fact_order_BIG_CCI
ON dbo.fact_order_BIG_CCI WITH (MAXDOP = 1, DROP_EXISTING = ON);

Note that only three changes have been made to this code:

  1. A clustered B-tree index is created prior to any data being written to it.
  2. The INSERT query includes an ORDER BY so that data is ordered by [Order Date Key] as it is added to the columnstore index.
  3. The clustered B-tree index is swapped for the columnstore index at the end of the process.

When complete, the resulting table will contain the same data as it did at the start of this article, but physically ordered to match what makes sense for the underlying data set. This can be verified by rerunning the following query:

SELECT
	SUM([Quantity])
FROM dbo.fact_order_BIG_CCI
WHERE [Order Date Key] >= '2016-01-01'
AND [Order Date Key] < '2016-02-01';

The results show significantly improved performance:

(Screenshot: STATISTICS IO output after ordering, showing 1 segment read and 22 segments skipped)

This time, only one segment was read, and 22 were skipped. Reads are a fraction of what they were earlier. This is a significant improvement and allows us to make the most out of a columnstore index.

The takeaway of this experiment is that order matters in a columnstore index. When building one, ensure that order is created and maintained for whatever column will be the most common filter:

  1. Order the data in the initial data load. This can be accomplished by either:
    1. Creating a clustered B-tree index on the ordering column, populating all initial data, and then swapping it for a columnstore index, or
    2. Creating the columnstore index first and then inserting data in the correct order of the ordering column.
  2. Insert new data into the columnstore index in the same order every time.

Typically, the correct data order will be ascending, but do consider this detail when creating a columnstore index. If for any reason descending would make sense, be sure to design index creation and data insertion to match that order. The goal is to ensure that as few rowgroups need to be scanned as possible when executing an analytic query. When data is inserted out-of-order, the result will be that more rowgroups need to be scanned in order to fulfill that query. This may be viewed as a form of fragmentation, even though it does not fit the standard definition of index fragmentation.
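
One way to gauge this kind of fragmentation is to look at the rowgroup-level metadata and check whether rowgroups are undersized, trimmed, or carrying deleted rows. The following is a minimal sketch against the demo table, assuming SQL Server 2016 or later where this DMV is available:

-- Inspect rowgroup health for the columnstore index.
SELECT
	row_group_id,
	state_desc,
	total_rows,
	deleted_rows,
	trim_reason_desc
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('dbo.fact_order_BIG_CCI')
ORDER BY row_group_id;

Rowgroups that sit far below the 1,048,576-row maximum, or that report many deleted rows, are a hint that load order or load size should be revisited.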

Partitioning & Clustered Columnstore Indexes

Table partitioning is a natural fit for a large columnstore index. For a table that can contain row counts in the billions, it may become cumbersome to maintain all of the data in a single structure, especially if reporting needs rarely access older data.

A classic OLAP table will have both newer and older data. If common reporting queries only access a recent day, month, quarter, or year, then maintaining the older data in the same place may be unnecessary. Equally important is the fact that in an OLAP data store, older data typically does not change. If it does change, it is usually the result of software releases or other planned, one-off operations.

Table partitioning places data into multiple filegroups within a database. The filegroups can then be stored in different data files in whatever storage locations are convenient. This paradigm provides several benefits:

  • Partition Elimination: Similar to rowgroup elimination, partition elimination allows partitions with unneeded data to be skipped. This can further improve performance on a large columnstore index.
  • Faster Migrations: If there is a need to migrate a database to a new server or SQL Server version, then older partitions can be backed up and copied to the new data source ahead of the migration. This reduces the downtime incurred by the migration as only active data needs to be migrated during the maintenance/outage window. Similarly, partition switching can allow data to be moved between tables exceptionally quickly.

  • Partitioned Database Maintenance: Common tasks such as backups and index maintenance can be targeted at specific partitions that contain active data. Older partitions that are static and no longer updated may be skipped.
  • No Code Changes: Music to the ears of any developer: Table partitioning is a database feature that is invisible to the consumers of a table’s data. Therefore, the code needed to retrieve data before and after partitioning is added will be the same.
  • Partition Column = Columnstore Order Column: The column that is used to organize the columnstore index will be the same column used in the partition function, making for an easy and consistent solution.

The fundamental steps to create a table with partitioning are as follows:

  1. Create filegroups for each partition based on the columnstore index ordering column.
  2. Create database files within each filegroup that will contain the data for each partition within the table.
  3. Create a partition function that determines how the data will be split based on the ordering/key column.
  4. Create a partition schema that binds the partition function to a set of filegroups.
  5. Create the table on the partition scheme defined above.
  6. Proceed with table population and usage as usual.

The example provided in this article can be recreated using table partitioning, though it is important to note that this is only one approach. There are many ways to implement partitioning; this is not intended to be an article about partitioning, but rather to introduce the idea that columnstore indexes and partitioning can be used together to further improve OLAP query performance.

Create New Filegroups and Files

Partitioned data can be segregated into different filegroups and files. If desired, a script similar to this will take care of the task:

ALTER DATABASE WideWorldImportersDW ADD FILEGROUP WideWorldImportersDW_2013_fg;
ALTER DATABASE WideWorldImportersDW ADD FILEGROUP WideWorldImportersDW_2014_fg;
ALTER DATABASE WideWorldImportersDW ADD FILEGROUP WideWorldImportersDW_2015_fg;
ALTER DATABASE WideWorldImportersDW ADD FILEGROUP WideWorldImportersDW_2016_fg;
ALTER DATABASE WideWorldImportersDW ADD FILEGROUP WideWorldImportersDW_2017_fg;

ALTER DATABASE WideWorldImportersDW ADD FILE
(NAME = WideWorldImportersDW_2013_data,
 FILENAME = 'C:\SQLData\WideWorldImportersDW_2013_data.ndf',
 SIZE = 200MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1GB)
TO FILEGROUP WideWorldImportersDW_2013_fg;

ALTER DATABASE WideWorldImportersDW ADD FILE
(NAME = WideWorldImportersDW_2014_data,
 FILENAME = 'C:\SQLData\WideWorldImportersDW_2014_data.ndf',
 SIZE = 200MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1GB)
TO FILEGROUP WideWorldImportersDW_2014_fg;

ALTER DATABASE WideWorldImportersDW ADD FILE
(NAME = WideWorldImportersDW_2015_data,
 FILENAME = 'C:\SQLData\WideWorldImportersDW_2015_data.ndf',
 SIZE = 200MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1GB)
TO FILEGROUP WideWorldImportersDW_2015_fg;

ALTER DATABASE WideWorldImportersDW ADD FILE
(NAME = WideWorldImportersDW_2016_data,
 FILENAME = 'C:\SQLData\WideWorldImportersDW_2016_data.ndf',
 SIZE = 200MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1GB)
TO FILEGROUP WideWorldImportersDW_2016_fg;

ALTER DATABASE WideWorldImportersDW ADD FILE
(NAME = WideWorldImportersDW_2017_data,
 FILENAME = 'C:\SQLData\WideWorldImportersDW_2017_data.ndf',
 SIZE = 200MB, MAXSIZE = UNLIMITED, FILEGROWTH = 1GB)
TO FILEGROUP WideWorldImportersDW_2017_fg;

The file and filegroup names are indicative of the date of the data being inserted into them. Files can be placed on different types of storage or in different locations, which can assist in growing a database over time. It can also allow for faster storage to be used for more critical data, whereas slower/cheaper storage can be used for older/less-used data.

Create a Partition Function

The partition function tells SQL Server the boundaries on which to split data. For the example presented in this article, [Order Date Key], a DATE column, will be used for this task:

CREATE PARTITION FUNCTION fact_order_BIG_CCI_years_function (DATE)
AS RANGE RIGHT FOR VALUES
('2014-01-01', '2015-01-01', '2016-01-01', '2017-01-01');

The result of this function will be to split data into 5 ranges:

  • Date < 2014-01-01
  • Date >= 2014-01-01 & Date < 2015-01-01
  • Date >= 2015-01-01 & Date < 2016-01-01
  • Date >= 2016-01-01 & Date < 2017-01-01
  • Date >= 2017-01-01
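
If it is helpful to confirm which partition a particular date will land in, the $PARTITION function can be pointed at the partition function defined above. This is a small sketch with arbitrary sample dates:

-- Returns the partition number (1 through 5) that each sample date maps to.
SELECT
	$PARTITION.fact_order_BIG_CCI_years_function('2013-06-15') AS partition_for_2013,
	$PARTITION.fact_order_BIG_CCI_years_function('2016-01-15') AS partition_for_2016,
	$PARTITION.fact_order_BIG_CCI_years_function('2018-03-01') AS partition_for_2018;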

Create a Partition Scheme

The partition scheme tells SQL Server where data should be physically stored, based on the function defined above. For this demo, a partition scheme such as this will give us the desired results:

CREATE PARTITION SCHEME fact_order_BIG_CCI_years_scheme
AS PARTITION fact_order_BIG_CCI_years_function
TO (WideWorldImportersDW_2013_fg, WideWorldImportersDW_2014_fg,
    WideWorldImportersDW_2015_fg, WideWorldImportersDW_2016_fg,
    WideWorldImportersDW_2017_fg);

Each date range defined above will be assigned to a filegroup, and therefore a database file.

Create the Table

All steps performed previously to create and populate a large table with a columnstore index are identical, except for a single line within the table creation:

CREATE TABLE dbo.fact_order_BIG_CCI (
	[Order Key] [bigint] NOT NULL,
	[City Key] [int] NOT NULL,
	[Customer Key] [int] NOT NULL,
	[Stock Item Key] [int] NOT NULL,
	[Order Date Key] [date] NOT NULL,
	[Picked Date Key] [date] NULL,
	[Salesperson Key] [int] NOT NULL,
	[Picker Key] [int] NULL,
	[WWI Order ID] [int] NOT NULL,
	[WWI Backorder ID] [int] NULL,
	[Description] [nvarchar](100) NOT NULL,
	[Package] [nvarchar](50) NOT NULL,
	[Quantity] [int] NOT NULL,
	[Unit Price] [decimal](18, 2) NOT NULL,
	[Tax Rate] [decimal](18, 3) NOT NULL,
	[Total Excluding Tax] [decimal](18, 2) NOT NULL,
	[Tax Amount] [decimal](18, 2) NOT NULL,
	[Total Including Tax] [decimal](18, 2) NOT NULL,
	[Lineage Key] [int] NOT NULL)
ON fact_order_BIG_CCI_years_scheme([Order Date Key]);

Note the final line of the query that assigns the partition scheme created above to this table. When data is written to the table, it will be written to the appropriate data file, depending on the date provided by [Order Date Key].
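
Once data is loaded, a quick check of the partition metadata confirms how rows are distributed across the partitions. This is a small sketch; it assumes the clustered columnstore index has index_id = 1, as any clustered index does:

-- Row counts per partition for the partitioned columnstore table.
SELECT
	partitions.partition_number,
	partitions.rows
FROM sys.partitions
WHERE partitions.object_id = OBJECT_ID('dbo.fact_order_BIG_CCI')
AND partitions.index_id = 1
ORDER BY partitions.partition_number;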

Testing Partitioning

The same query used to test a narrow date range can illustrate the effect that table partitioning can have on performance:

SELECT
	SUM([Quantity])
FROM dbo.fact_order_BIG_CCI
WHERE [Order Date Key] >= '2016-01-01'
AND [Order Date Key] < '2016-02-01';

The following is the IO for this query:

(Screenshot: STATISTICS IO output for the partitioned table, showing 1 segment read and 2 segments skipped)

Instead of reading one segment and skipping 22 segments, SQL Server read one segment and skipped two. The remaining segments reside in other partitions and are automatically eliminated before reading from the table. This allows a columnstore index to have its growth split up into more manageable portions based on a time dimension. Other dimensions can be used for partitioning, though time is typically the most natural fit.

Final Notes on Partitioning

Partitioning is an optional step when implementing a columnstore index but may provide better performance and increased flexibility with regard to maintenance, software releases, and migrations.

Even if partitioning is not implemented initially, a partitioned table can be created after the fact and data migrated into it from the original table. Data movement such as this could be challenging in an OLTP environment, but in an OLAP database where writes are isolated, it is possible to use a period of no change to create, populate, and swap to a new table with no outage to the reporting applications that use it.
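
As a rough sketch of that approach, assuming data loads are paused while it runs and using a hypothetical staging name of fact_order_BIG_CCI_partitioned:

-- 1. Create the partitioned table (same columns as above) with a clustered B-tree
--    index named CCI_fact_order_BIG_CCI_partitioned on [Order Date Key], following
--    the ordering steps from earlier in this article.

-- 2. Copy the data across in date order.
INSERT INTO dbo.fact_order_BIG_CCI_partitioned
SELECT *
FROM dbo.fact_order_BIG_CCI
ORDER BY [Order Date Key];

-- 3. Swap the B-tree for the clustered columnstore index.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_fact_order_BIG_CCI_partitioned
ON dbo.fact_order_BIG_CCI_partitioned
WITH (MAXDOP = 1, DROP_EXISTING = ON);

-- 4. Rename the tables so reporting queries pick up the new one.
EXEC sp_rename 'dbo.fact_order_BIG_CCI', 'fact_order_BIG_CCI_old';
EXEC sp_rename 'dbo.fact_order_BIG_CCI_partitioned', 'fact_order_BIG_CCI';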

Avoid Updates

This is worth a second mention: avoid updates at all costs! Columnstore indexes do not handle updates efficiently. Sometimes they will perform well, especially against smaller tables, but against a large columnstore index, updates can be extremely expensive.

If data must be updated, structure it as a single delete operation followed by a single insert operation. This will take far less time to execute, cause less contention, and consume far fewer system resources.
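
As a loose illustration of this pattern, correcting one day of orders could be expressed as a single delete followed by a single ordered insert from a staging table; dbo.fact_order_corrections is a hypothetical staging table with the same columns as the fact table:

-- Remove the affected rows in one statement...
DELETE FROM dbo.fact_order_BIG_CCI
WHERE [Order Date Key] = '2016-01-15';

-- ...then re-insert the corrected rows in a single, ordered insert.
INSERT INTO dbo.fact_order_BIG_CCI
SELECT *
FROM dbo.fact_order_corrections
WHERE [Order Date Key] = '2016-01-15'
ORDER BY [Order Date Key];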

The fact that updates can perform poorly is not well documented, so keep it firmly in mind when researching the use of columnstore indexes. If a table is being converted from a classic rowstore to a columnstore index, ensure that there are no auxiliary processes that update rows outside of the standard data load process.

Query Fewer Columns

Because data is split into segments for each column in a rowgroup, querying fewer columns means that less data needs to be retrieved in order to satisfy the query.

If a table contains 20 columns and a query performs analytics on 2 of them, then the result will be that 90% of the segments (for other columns) can be disregarded.

While a columnstore index can service SELECT * queries somewhat efficiently due to its high compression ratio, this is not what it is optimized to do. As with standard clustered indexes, if a report or application does not require a column, leave it out of the query. This saves memory, speeds up reports, and makes the most of columnstore indexes, which are optimized for queries against large row counts rather than large column counts.
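
A quick way to see this in practice is to compare the IO of the narrow aggregate used earlier against a SELECT * over the same date range; a minimal sketch:

SET STATISTICS IO ON;

-- Narrow query: only the segments for [Quantity] and [Order Date Key] are needed.
SELECT SUM([Quantity])
FROM dbo.fact_order_BIG_CCI
WHERE [Order Date Key] >= '2016-01-01'
AND [Order Date Key] < '2016-02-01';

-- Wide query: segments for every column in the matching rowgroups must be read.
SELECT *
FROM dbo.fact_order_BIG_CCI
WHERE [Order Date Key] >= '2016-01-01'
AND [Order Date Key] < '2016-02-01';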

Columnstore Compression vs. Columnstore Archive Compression

SQL Server provides an additional level of compression for columnstore indexes called Archive Compression. This shrinks the data footprint of a columnstore index further but incurs an additional CPU/duration cost to read the data.

Archive compression is meant solely for older data that is accessed infrequently and where the storage footprint is a concern. This is an important aspect of archive compression: Only use it if storage is limited, and reducing the data footprint is exceptionally beneficial. Typically, standard columnstore index compression will shrink data enough that additional savings may not be necessary.

Note that if a table is partitioned, compression can be split up such that older partitions are assigned archive compression, whereas those partitions with more frequently accessed data are assigned standard columnstore compression.
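
As a sketch of what that could look like against the partitioned version of the demo table, assuming partitions 1 and 2 hold the oldest years, compression can be assigned per partition in a single rebuild:

-- Archive-compress the two oldest partitions; keep standard columnstore
-- compression on the more active partitions.
ALTER TABLE dbo.fact_order_BIG_CCI REBUILD PARTITION = ALL WITH
(DATA_COMPRESSION = COLUMNSTORE_ARCHIVE ON PARTITIONS (1 TO 2),
 DATA_COMPRESSION = COLUMNSTORE ON PARTITIONS (3 TO 5));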

For example, the following illustrates the storage footprint of the table used in this article:

(Screenshot: storage footprint of fact_order_BIG_CCI with standard columnstore compression)

23.1 million rows are squeezed into 108MB. This is exceptional compression compared to the OLTP variant:

(Screenshot: storage footprint of the equivalent rowstore (OLTP) table)

That is a huge difference! The columnstore index reduced the storage footprint from 5GB to 100MB. In a table where columns have frequently repeated values, expect to see exceptional compression ratios such as this. The less fragmented the columnstore index, the smaller the footprint becomes, as well. This columnstore index has been targeted with quite a bit of optimization throughout this article, so its fragmentation at this point in time is negligible.

For demonstration purposes, archive compression will be applied to the entire columnstore index using the following index rebuild statement:

ALTER INDEX CCI_fact_order_BIG_CCI ON dbo.fact_order_BIG_CCI
REBUILD PARTITION = ALL WITH
(DATA_COMPRESSION = COLUMNSTORE_ARCHIVE, ONLINE = ON);

Note that the only difference is that the data compression type has been changed from columnstore to columnstore_archive. The following are the storage metrics for the table after the rebuild completes:

(Screenshot: storage footprint of fact_order_BIG_CCI after applying archive compression)

The data size has been reduced by another 25%, which is very impressive!

Archive compression is an excellent way to reduce the storage footprint of data that either:

  • Is accessed infrequently, or
  • Can tolerate potentially slower execution times.

Only implement it, though, if storage is a concern and reducing data storage size is important. If using archive compression, consider combining it with table partitioning to allow for compression to be customized based on the data contained within each partition. Newer partitions can be targeted with standard columnstore compression, whereas older partitions can be targeted with archive compression.

Conclusion

The organization of data as it is loaded into a columnstore index is critical for optimizing speed. Data that is completely ordered by a common search column (typically a date or datetime) will allow for rowgroup elimination to occur naturally as the data is read. Similarly, querying fewer columns can ensure that segments are eliminated when querying across rowgroups. Lastly, implementing partitioning allows for partition elimination to occur, on top of rowgroup and segment elimination.

Combining these three features will significantly improve OLAP query performance against a columnstore index. Scalability improves as well, because the volume of data read to service a query only becomes massive when there is a genuine need to pull massive amounts of data. Otherwise, standard reporting workloads that cover daily, weekly, monthly, quarterly, or annual analytics will not need to read any more data than is required to return their results.


SQL – Simple Talk
