Category Archives: Sisense

How to Calculate Total Cost of Ownership for Business Intelligence

Imagine you’re comparing gym memberships to figure out which one offers the best value. Sure, you could simply look at the monthly fee and go for the cheapest, but that wouldn’t tell you everything you need to know about the total cost of ownership.

For starters, you’d want to know what the cost includes. Does it offer all the machines and classes? Do you have to rent/buy extra equipment? Then there are the less obvious considerations. Do you need to pay a trainer to get true value? What’s the price of travel? Is there enough capacity to cope with the crowds, even during peak hours?

Loosely speaking, the way a savvy buyer approaches a new gym membership is the same way most savvy businesses should approach price comparisons when weighing up different tech solutions – especially with a solution as powerful and intricate as Business Intelligence.

Business Intelligence Pricing – There’s a Catch

There are many things to consider when pricing out the total cost of ownership of BI. To really get a feel for the cost of implementing a BI solution, start by making sure that the platform in question does everything you need and has enough capacity for all of your data – or, if not, how much you’ll need to spend on additional technical infrastructure, tools, or the consulting and IT manpower required to tailor a version of the solution that does work for you.

Try to estimate how much you’ll need to commit in terms of internal budget and resources, whether you’ll need to pay to take on new staff, and the opportunity costs of taking existing personnel off revenue-generating projects to ensure smooth deployment and daily use.

Then, once you’ve tallied up all the hidden costs of rolling out and operating a workable solution, choose the option that offers the best value for the price tag.

Sounds sensible, right? Well, yes – in 99% of cases, this formula works just fine.

But BI is different. To work out the real cost of using your BI platform, you have to take a final, vital step: calculate the value that a BI solution gives you – its cost of new analytics.

Considering the Cost of New Analytics

Let’s look at the gym membership example again. Imagine that you spot in the small print that one of the gyms is only open on weekends, whereas the other one is open every day.

Until this point, you’d thought Gym A offered the better deal. You’d calculated its total cost of ownership at $820 per year, while Gym B worked out at $1,200 per year.

But if you can only visit Gym A a maximum of twice a week, even if you take every available opportunity to go, you’re still paying a significant amount of money per session. The gym is only open 104 days of the year, so the absolute minimum you pay per workout will be:

$820 / 104 ≈ $7.88

Gym B, on the other hand, might be more expensive, but it’s open seven days a week. In fact, it’s only closed on two days out of the whole year. If you took advantage of this and went there on every possible day, the minimum you’d pay per workout would be:

$1,200 / 363 ≈ $3.31

Suddenly, Gym B looks like a much better option, right?

This is precisely how you need to approach your value assessment of a BI platform, too.

That’s because BI platforms vary wildly in the time it takes you to submit a new data query, generate results and present them in a format that makes sense – for example, an easy-to-process dashboard showing progress on your KPIs.

On first look, it might seem that the annual total cost of ownership of one product is much higher than another. Once you factor in the turnaround time for a data analysis project, though, and divide your number by the maximum number of data projects you can process in a year, the picture can quickly start to look very different indeed.

That’s because BI tools aren’t best measured by total cost of ownership per annum, but by the cost of running each individual analysis.

How to Calculate the Cost of New Analytics

In short, calculating the cost of new analytics means putting a concrete number on the actual value you and your team are going to get from a BI solution.

Since we’ve already established that upfront cost is just one aspect of a bigger equation, businesses are now using a newer, more accurate way of measuring the total cost of ownership of a BI solution – one that incorporates the full value potential of BI, i.e., how much you and your team will actually benefit from it: calculating the cost of new analytics.

Ask yourself: what is the cost of a new analytics report for my team? The cost of new analytics captures how quickly your team can churn out (and benefit from) new analytics and reports – in other words, how much value you’re getting for every dollar you invest in your BI tool.

A Formula for Calculating BI’s Total Cost of Ownership

By incorporating speed into the calculation, you quantify how agile a BI tool really is: the annual cost of ownership divided by the number of analyses that cost actually buys you.
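
To make that concrete, here is a minimal sketch in Python of the calculation; the platform figures are hypothetical, chosen to mirror the gym example above.

    # Hypothetical figures for two BI platforms, mirroring the gym example.
    # annual_tco: full yearly cost of ownership (license, infrastructure, manpower).
    # analyses_per_year: how many new analytics projects the team can realistically
    # complete on that platform in a year, given its turnaround time per project.

    def cost_of_new_analytics(annual_tco: float, analyses_per_year: int) -> float:
        """Cost of each individual analysis: annual TCO divided by yearly output."""
        return annual_tco / analyses_per_year

    platform_a = cost_of_new_analytics(annual_tco=82_000, analyses_per_year=104)
    platform_b = cost_of_new_analytics(annual_tco=120_000, analyses_per_year=363)

    print(f"Platform A: ${platform_a:,.2f} per analysis")  # ~$788.46
    print(f"Platform B: ${platform_b:,.2f} per analysis")  # ~$330.58

The cheaper-looking platform can easily turn out to be the more expensive one per analysis – exactly as with the gyms.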

Get our guide on calculating the total cost of ownership of a BI tool for an exact formula for quantifying the cost of new analytics, taking all costs – from technical infrastructure to manpower – into account before you buy a business intelligence solution.


Postgres vs. MongoDB for Storing JSON Data – Which Should You Choose?

In the fast-moving world of unstructured data, does it make more sense to use a database management system (DBMS) built from the start to handle the widely accepted JSON data format? Or can an SQL database that now includes JSON functionality be a better choice? Postgres, with its SQL roots, started offering NoSQL capabilities early on with its key-value store, hstore, introduced in 2006. JSON document storage and management for Postgres arrived somewhat later, after MongoDB began life in 2009 as a native JSON document DBMS. Since then, MongoDB and Postgres have both been enhancing their JSON storage capabilities.

What is MongoDB? What is PostgreSQL?

The question of MongoDB vs PostgreSQL is not a new one. Let’s take a look at the most basic differences between the two commonly used databases.

MongoDB is an open source database designed for scalability and agility. It uses dynamic schemas, so you can create records without defining the structure first, and it stores data as hierarchical documents.

On the other hand, PostgreSQL is an open source relational database with a focus on standards compliance and extensibility. PostgreSQL uses both dynamic and static schemas and, unlike MongoDB, supports relational data and normalized form storage.

The Rise and Rise of JSON and JSONB

To better understand the similarities and differences between the two database systems, let’s quickly recap JavaScript Object Notation, or JSON for short. Unstructured and human-readable, the JSON data format is something of a milestone on the road to user-friendly computing. It offers the ability to dump data into a database as it comes. Fields in a data record can be nested, and different fields can be added to individual data records as required. Preferred now by many over XML, the flexible JSON data format is used by a number of NoSQL data stores. Because basic JSON lacks indexing, the JSONB data format was created. It stores data in a binary format, instead of a simple JSON blob. JSONB data input is a little slower, but processing is then significantly faster because the data does not need to be reparsed.
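
To make the JSON/JSONB distinction concrete, here is a minimal sketch in Python using psycopg2; the connection settings and table are placeholders rather than a recommended setup. It stores a schemaless document in a Postgres JSONB column and adds a GIN index, so containment queries can run without reparsing each document.

    import psycopg2
    from psycopg2.extras import Json  # adapts Python dicts to Postgres JSON literals

    conn = psycopg2.connect("dbname=demo user=demo")  # placeholder DSN
    cur = conn.cursor()

    # JSONB stores documents pre-parsed in binary form: input is slightly slower,
    # but later processing skips reparsing, and the column can be GIN-indexed.
    cur.execute("CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, doc jsonb)")
    cur.execute("CREATE INDEX IF NOT EXISTS events_doc_idx ON events USING gin (doc)")

    # Dump data in as it comes: nested fields, no schema declared up front.
    cur.execute("INSERT INTO events (doc) VALUES (%s)",
                [Json({"user": "ada", "cart": {"items": 3, "total": 42.5}})])

    # The GIN index accelerates containment queries such as this one.
    cur.execute("SELECT doc FROM events WHERE doc @> %s", [Json({"user": "ada"})])
    print(cur.fetchone())
    conn.commit()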

Deliberate Constraints and Collateral Limitations

Both Postgres and MongoDB offer JSON and JSONB (MongoDB calls its binary JSON format “BSON”) data storage functionality. There are, however, differences:

  • The BSON format used by MongoDB is limited to a maximum of 64 bits for representing an integer or floating point number, whereas the JSONB format used by Postgres does not have this limit.
  • Postgres provides data constraint and validation functions to help ensure that JSON documents are more meaningful: for example, preventing attempts to store alphabetical characters where numerical values are expected (see the sketch after this list).
  • MongoDB offers automatic database sharding for easy horizontal scaling of JSON data storage. Scaling of Postgres installations has often been vertical. Horizontal scaling of Postgres is also possible, but tends to be more involved or use an additional third party solution.
  • MongoDB also offers the possibility of increasing write throughput by deferring writing to disk. The tradeoff is potential loss of data, but this may suit users who have less need to persist their data.
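
As a concrete illustration of the constraint point above, here is a minimal sketch (Python with psycopg2; the table and field names are hypothetical) of a Postgres CHECK constraint that rejects JSON documents whose price field is not numeric:

    import psycopg2
    from psycopg2.extras import Json
    from psycopg2.errors import CheckViolation  # available in psycopg2 2.8+

    conn = psycopg2.connect("dbname=demo user=demo")  # placeholder DSN
    conn.autocommit = True
    cur = conn.cursor()

    # The constraint makes JSON documents more meaningful: every document
    # must carry a numeric "price" field, or the insert is rejected.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS products (
            doc jsonb,
            CONSTRAINT price_is_numeric
                CHECK (jsonb_typeof(doc -> 'price') = 'number')
        )
    """)

    cur.execute("INSERT INTO products (doc) VALUES (%s)",
                [Json({"name": "widget", "price": 9.99})])         # accepted

    try:
        cur.execute("INSERT INTO products (doc) VALUES (%s)",
                    [Json({"name": "widget", "price": "cheap"})])  # rejected
    except CheckViolation:
        print("rejected: alphabetical characters where a number was expected")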

In offering both SQL and JSON storage, Postgres lets users keep their options open. Data can be routed to a JSON column for possible data modeling afterwards, or to a table using an SQL schema, all within the same Postgres database.

Native JSON Data Stores Do Not Always Have the Best Performance

One of the advantages frequently cited for NoSQL database management systems is their performance. Operating with simpler data structures than those of SQL databases, NoSQL database systems have often shown faster speeds of storage and retrieval. While they may lack the ACID (atomicity, consistency, isolation and durability) properties needed for financial transactions, for example, they may offer advantages in handling larger volumes of unstructured data more rapidly.

However, NoSQL fans got a shock when performance ratings from EnterpriseDB (enterprisedb.com) in 2014 showed Postgres performance to be significantly better than that of MongoDB. The tests were based on selecting, loading, and inserting complex document data to the tune of 50 million records. Postgres was about twice as fast in data ingestion, two and a half times as fast in data selection, and three times as fast in data inserts. Postgres also consumed 25% less disk space.

Still, performance ratings are made to be beaten. With the introduction of its WiredTiger database engine, MongoDB 3.0 offered improvements in write speeds (between 7 and 10 times as fast), together with data compression of 50% to cut disk space.

Use Cases and Factors Affecting the Choice of Postgres or MongoDB

The question is – where does this leave us in terms of choosing either Postgres or MongoDB for JSON data storage? The answer is that any choice will depend on your goals and your circumstances.

  • Focus on the application. MongoDB minimizes the number of database management commands needed in application development. This can fit well with rapid prototyping, as well as queries and commands built on demand by the application. On the other hand, the application itself must insert meaningful data. Software maintenance may require more effort afterwards as well.
  • Structure needed later. Postgres offers similar broad powers for unstructured data, but also lets developers migrate to a mixture of unstructured and structured data later. If ACID compliance is likely to be a future requirement as data collected or generated becomes more valuable to its owners, Postgres may be a more suitable choice from the beginning for JSON data storage.
  • Static JSON data. For relatively static JSON data and active data naturally structured for SQL storage, Postgres offers the advantage of efficient JSONB representation and indexing capabilities (although ODBC and BI integration enable running SQL queries in MongoDB reporting as well).
  • JSON data modification. On the other hand, for JSON data that will be modified within the data store, MongoDB, engineered from the start around JSON documents, offers possibilities for updating individual fields that Postgres does not. While Postgres is efficient in the storage and retrieval of JSON documents, modifying a JSON field in Postgres requires extracting the entire document concerned, modifying the field, and rewriting the document back into the data store (see the sketch after this list).
  • Dynamic queries. Typical uses of MongoDB focus on frequently changing data of different types, without any complex transactions between objects. It is suited to dynamic queries of frequently written or read data, offering good performance for the storage of JSON documents with a large number of fields with ad hoc queries on a small subset of those fields.
  • Automatic sharding. The automatic sharding functionality of MongoDB may fit well with IT environments using multiple instances of standardized, commodity hardware (converged architectures).
  • Costs and resources. The availability and costs of hosting platforms for Postgres and MongoDB may be part of the decision criteria, as well as the ease or expense of hiring developers with the corresponding skills. Resources of Postgres knowledge and talent have been built up over time, encouraged among other things by the inclusion of Postgres at no extra cost in many Linux operating systems. On the other hand, since its introduction, MongoDB has already achieved the status of fifth most popular database technology out of all the technologies available (and not just NoSQL), suggesting that it too benefits from a reasonable pool of talent.
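
To illustrate the “JSON data modification” point above, the sketch below contrasts the two patterns in Python (connection details, collection, and table are hypothetical; newer Postgres releases do add a jsonb_set() helper, though the stored document is still rewritten as a whole at the storage level):

    from pymongo import MongoClient
    import psycopg2
    from psycopg2.extras import Json

    # MongoDB: update a single nested field in place with the $set operator.
    orders = MongoClient("mongodb://localhost:27017")["demo"]["orders"]
    orders.update_one({"_id": 1}, {"$set": {"cart.items": 4}})

    # Postgres, following the pattern described above: read the whole document,
    # modify the field in application code, and write the document back.
    conn = psycopg2.connect("dbname=demo user=demo")  # placeholder DSN
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("SELECT doc FROM orders WHERE id = %s", [1])
    doc = cur.fetchone()[0]  # psycopg2 returns jsonb as a Python dict
    doc["cart"]["items"] = 4
    cur.execute("UPDATE orders SET doc = %s WHERE id = %s", [Json(doc), 1])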

Conclusion

Emotions sometimes run high, even when it comes to purely technical choices. Data-driven decisions are not always easy to make when new releases and new performance ratings continually upset previous evaluations. In addition, the use cases above show that there is no automatic winner. If you have already made a choice between Postgres and MongoDB, sunk effort and acquired expertise may make a change undesirable. However, the experiences some business users have shared online show that such choices are sometimes reversed even after a significant period of deployment and operation.

In the future, a choice between Postgres and MongoDB for JSON storage may depend on yet other factors. When commenting on the release of JSONB functionality for Postgres, Robert Haas, the Chief Architect at EnterpriseDB, said, “The implication for NoSQL solutions is that innovation around the format in which you store your data is not enough; you’ve got to come up with truly novel capabilities for working with the data, which is much harder.”


How to Streamline Query Times to Handle Billions of Records

Here at Sisense, we love a challenge, so when a client comes to us and tells us they need to find a way to run queries on billions of records without this slowing them down, our ears perk up and we leap at the chance to find a solution.

In fact, that’s how we recently found ourselves testing a billion transactional records and three million dimensional records – totaling a whopping 500GB of data – with 100 concurrent users and up to 38 concurrent queries, with a total setup time of just two hours… and an average query time of 0.1 seconds!

But wait! I’m getting ahead of myself. Let’s start by talking through some of the issues that affect how fast you can query data.

How Are You Storing Your Data?

Let’s start with the obvious: data warehousing.

Typically, working with masses of data means you also need extensive data warehousing in place to handle it, alongside Extract-Transform-Load (ETL) tools that pull data from the original source on a regular basis (Extract), adjust formats and resolve conflicts to make the datasets compatible (Transform), and then deliver all of this data into the analytical repository, where it’s ready for you to run queries, calculations, and trend analysis (Load).
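
As a minimal illustration of that Extract-Transform-Load cycle, here is a sketch in Python; the file, table, and column names are hypothetical:

    import csv
    import psycopg2
    from datetime import datetime

    # Extract: pull rows from the original source (a CSV export, in this sketch).
    with open("sales_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: adjust formats and resolve conflicts so the datasets are
    # compatible - here, unifying two date formats and normalizing a currency field.
    def parse_date(value):
        for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
            try:
                return datetime.strptime(value, fmt).date()
            except ValueError:
                pass
        raise ValueError(f"unrecognized date: {value}")

    cleaned = [(r["order_id"], parse_date(r["date"]), float(r["amount"].strip("$")))
               for r in rows]

    # Load: deliver the data into the analytical repository, ready for queries.
    conn = psycopg2.connect("dbname=warehouse user=demo")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO sales (order_id, order_date, amount) VALUES (%s, %s, %s)",
            cleaned)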

This creates a single version of the truth – a source of data that brings together all your disparate pieces into one place. While this is great, there are also some drawbacks to data warehousing.

First of all, data warehouses are highly structured, and the row-and-column schema can be overly restrictive for some forms of data. Also, the sheer volume of data quickly overloads most systems, which grind to a halt if you run queries that attempt to tap into the entire data pool.

Then, there are data marts.

To help tackle the issues that come with working with huge data sets, many IT teams deploy data marts alongside their databases. These essentially siphon off access to a smaller chunk of the data – and then you select which data marts each department or user has access to. The outcome of this is that you put less pressure on your hardware, as your computer is tapping into smaller pools of data, but the flipside is that you have vastly reduced access to the organization’s total data assets in the first place.

At the other end of the scale, you have data lakes.

These are used to store massive amounts of unstructured data, helping to bypass some of the issues that come with using conventional data warehouses. They also make sandboxing easier, allowing you to try out different data models and transformations before you settle on a final schema for your data warehouse – to avoid getting trapped into something that doesn’t work for you.

The trouble with data lakes is that, while they offer formidable capacity for storing data, you do need to have all kinds of tools in place to interface between the data lake and your data warehouse – or with your end data analytics tool, if you want to skip the need for warehousing on top. Systems like this that use data lakes aren’t exactly agile, so your IT team will need to be pretty heavily involved in order to extract the insights you want.

Alternatively, you might deal with unstructured data using an unconventional data storage option.

For example, you might use a NoSQL database like MongoDB.

This gives you tons of freedom in terms of the kind of data you add and store, and the way that you choose to store it. MongoDB also makes use of sharding techniques to avoid piling the pressure on your IT infrastructure, allowing for (pretty much) infinite scaling.

The downside, of course, is that the thing that makes this so great – the unstructured, NoSQL architecture – also makes it tricky to feed this data straight into a reporting tool or analytics platform. You need a way to clean up and reconcile the data first.

What About Tools Used for Analysis?

Dynamic DBMS tools like PostgreSQL can open doors.

PostgreSQL is an open source relational database, widely used for analytics and reporting, that allows you to work with an enormous variety of data types – including native data types that give you much more freedom as you come to build and manipulate a BI solution, and “array” types, which help you to aggregate query results rapidly on an ad hoc basis.
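
For example, here is a minimal sketch (hypothetical table and columns) of the kind of ad hoc aggregation those array types enable, using Postgres’s array_agg to collapse each customer’s order amounts into a single array per row:

    import psycopg2

    conn = psycopg2.connect("dbname=demo user=demo")  # placeholder DSN
    cur = conn.cursor()

    # array_agg collapses each customer's order amounts into one array -
    # an ad hoc aggregation with no intermediate table or schema change.
    cur.execute("""
        SELECT customer_id, array_agg(amount ORDER BY order_date) AS amounts
        FROM sales
        GROUP BY customer_id
    """)
    for customer_id, amounts in cur.fetchall():
        print(customer_id, amounts)  # Postgres arrays come back as Python lists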

Introducing PostgreSQL into the mix can be massively helpful in bringing together your disparate strands – but again, it can’t do everything. It can’t help much with qualitative data, and as a traditional relational database that wasn’t built to handle Big Data, it will buckle under huge volumes of information.

You can also use R for high-end predictive analytics.

Lastly, once you have a solid BI system in place, you can add another layer of awesomeness by using R to build working models for statistical analysis, quickly and easily. R is incredibly versatile, and allows you to move away from static reporting by programming a system for analysis that you can adapt and improve as you go.

The thing is, though, this is an add-on: it doesn’t replace your current BI or data analytics system. R is an excellent programming language that can help you generate predictive analytics fast, but you need to have a rock-solid system in place for handling and preparing data in the first place.

How to Streamline Everything

I know what you’re thinking: I said I was going to explain how to streamline your data queries to help you generate results faster, but so far, all I’ve done is dangle some potential solutions and then show you how they fall short!

That’s because I haven’t revealed the secret sauce that binds all these pieces together in perfect harmony.

As you can see, each of the tools we’ve discussed fixes one problem in the storage, flow, and use of data within your organization, but none of them helps with the big picture.

That’s where Sisense’s Elasticube comes in.

The Elasticube allows you to store data or drag it in directly from your existing stores at lightning speed, giving users unfettered access to their entire pool of data, whatever format it’s kept in (unless you choose to stagger permissions). Thanks to clever use of In-Chip Processing and a Columnar Database structure, you tap into only the data you need for the query, without restricting yourself permanently, as you would with a data mart.

You can then reconcile and harmonize this data with minimal hassle to treat all these strands as a single data source for the purpose of analysis and reporting.

Still within the Elasticube, you can map and manipulate these data sources to build your own dashboards and run your own queries at incredible speed.

Plus, using our range of custom-built connectors, you can link your Sisense Elasticube directly to MongoDB, PostgreSQL and other DBMS tools, and you can integrate Sisense with R for even more in-depth predictive analytics.

Where the Magic Happens

So that’s the big secret. Using the Sisense Elasticube, I was able to set up a system in 120 minutes that could run concurrent queries on data representing one billion online purchases, from three million origins/destinations, with an average query time of 0.1 seconds and a maximum query time of just 3 seconds.

Pretty impressive, huh? Here’s what it looked like:

[Image: Sisense performance test setup and results]

And here’s an example dashboard that we used to display the results in real time:

[Image: example dashboard displaying the results in real time]

How’s that for streamlined?

Want to see exactly how I built this super-streamlined query model for yourself? Click here for the detailed tutorial.


Infographic: Why You Need Embedded Analytics

These days, customers demand a lot from their service providers. And why shouldn’t they?

But in a world with so many requests coming from your customers, business intelligence shouldn’t fall by the wayside. Embedding analytics not only expands your offering and creates happy users, it also offers a wide variety of benefits internally.

Let’s have a look at how embedded analytics can benefit the C-Suite, Product Development, and R&D teams.

[Infographic: Why You Need Embedded Analytics]

Want to learn even more about how embedded analytics can benefit you? Check out our whitepapers, tailor-made for the C-Suite, Head of Product, and R&D Team.


The Secret Sauce of Customer Success

Here at Sisense we’re obsessed with the success of our customers – it’s in our DNA. So when we received a “Perfect Recommend” in Dresner’s 2017 Wisdom of Crowds BI Market Study for the second year in a row, I was ecstatic. The Wisdom of Crowds BI Market Study is an objective source of industry research that surveys actual BI consumers, which means that our perfect recommend score comes from the people who know us best – our customers.

In a climate where people expect more than just a great product, it is our job as Customer Success Managers to help our customers achieve – and maximize – a strong, ongoing ROI. What’s our “secret sauce” for such happy and satisfied customers? I’m glad you asked.

  • A great product bolstered by internal expertise

    The first and most important ingredient for creating happy customers is to understand their desired outcomes and help them achieve them quickly. Through our Proof of Concept we make sure to start our relationship with customers by understanding their real technical and business needs, connecting to their actual data sources, and showing insights almost immediately.

    We think of ourselves as an extension of – and partner to – each and every customer, assigning each customer a personal BI Consultant and Customer Success Manager. This means we stop and learn about our customers’ day-to-day needs and long-term vision and use our internal expertise to make sure they get to where they need to go. And because we are lucky enough to work with small, medium, and large organizations across industries from healthcare and financial services to manufacturing and energy, we are able to bring our experience and learnings from one client to the next.

  • A company-wide commitment to our Net Promoter Score (NPS)

    It’s no surprise that customer success teams live and die by their NPS – a feedback score based on one simple question: “Would you recommend our organization to a friend or colleague?” But it might come as a surprise that every department across Sisense is dedicated to doing its part to make sure all of our customers are happy ones. Customer success isn’t just a team within our organization, it’s a pillar of our company culture. Our Customer Success team works together with every corner of our organization, from Marketing to Development, so that our customer journey is a smooth and rewarding ride all the way through.

  • A highly engaged client community

    Achieving ongoing ROI doesn’t just mean meeting your service level agreement and answering the phone when your customer calls. Creating a community where customers can interact with other customers is paramount to success. For example, the Sisense online community is home to a broad knowledge base where customers can interact with one another by asking questions, sharing tips and tricks, and exploring tutorials and best practices.

    Your customer success strategy can (and should) even extend beyond the screen. By hosting intimate customer meetups and events, like Sisense Connect, customers can swap stories face to face and hear firsthand experiences of their peers’ successes and challenges, best practices and how-to’s. At Sisense, we engage our customers for feedback on our roadmap because, at the end of the day, if our product doesn’t work to meet their needs we’re doing something wrong.

Dresner’s acknowledgment of our market leadership comes on the heels of multiple other industry recognitions — including moving from a Niche Player to a Visionary in Gartner’s Magic Quadrant for BI and Analytics Platforms and CRN naming us to its 2017 Big Data 100: 40 Coolest Business Analytics Vendors — all of which we recognize wouldn’t be possible without the success of our customers.

And while we’re flattered to receive such recognition we’re not settling. We continue to strive to improve the value our Customer Success team delivers and are always reaching to do even more. So, thank you – and here’s to continuing our partnerships and joint innovation.

To read Dresner’s full 2017 Wisdom of Crowds BI Market Study, click here.


Psst! We Don’t Want to Brag, But…

Do you hear the pitter patter of champagne corks popping? That’s because we just got another piece of incredible news. The G2 Crowd Business Intelligence Platforms Product Comparison is out… and (to put it mildly), we smashed it.

It’s been an all-round spectacular 12 months for us. Thanks to our amazing team, we’ve grown at breakneck speed over the past year, had the pleasure of working with many fantastic clients and, to top it off, were featured as a “visionary” on Gartner’s Magic Quadrant for Business Intelligence.

If you’re unfamiliar with it, the Gartner Magic Quadrant is kind of like our version of the Oscars. It’s hugely prestigious, it’s decided by the great and powerful of the industry, and getting a mention can determine your future success.

Only a handful of BI companies in the world make it onto the quadrant, so having Gartner name us as a visionary was kind of a big deal.

Since then, we’ve generated more and more buzz in the sector, partnered with bigger and bigger clients, and worked hard on the R&D front to make sure Sisense stays on the cutting edge for years to come.

But if Gartner is like the Oscar committee, the G2 Crowd Competitor report is more like the X Factor. That’s because, rather than a mysterious cabal of industry insiders deliberating quietly behind closed doors, G2 Crowd gets its feedback from YOU – the clients and customers that use the product.

This means that, when we’re voted best in a category in the G2 Crowd Competitor Report, we know that it’s genuinely our customers who are vouching for us. That the people who know our product best and rely on it most feel that it tops the charts for them.

… And that makes us feel pretty warm and fuzzy inside, to be honest.

But enough with the acceptance-speech making already. Let’s take a moment to break down what the G2 Crowd report actually says about us.

“Sisense allows data to be accessible in a way that everyone can see it, understand it, and filter it independently. That was the probably the biggest win for us.” -Brent Allen, Skullcandy

We came out top in 26 of 28 categories, covering everything from the quality of our analysis, to the design of our dashboards, to what we’re like to work with.

Those categories are: ease of doing business with us, quality of support, how well we meet requirements, ease of admin, whether the product is headed in the right direction, ease of setup, graphs and charts, dashboards, scorecards, steps to answer, reports interface, collaboration/workflow, data discovery, search, automodeling, data column filtering, calculated fields, data visualization, big data services, integration APIs, WYSIWYG report design, data transformation, data modeling, customization, internationalization, and user, role and access management.

We’re especially excited about the 95% satisfaction score for our quality of support. As a company that wakes up in the morning and goes to bed at night thinking of how we can better support our customers, we’re ecstatic that our customers feel how much we care.

“I love that Sisense has a very close relationship with its customers–something that is not easy to find.” -Doron Gill, Fairfly

We were totally blown away by this show of support, and all we can say is: thank you, thank you, THANK YOU.

We’re so happy and so humbled that you guys think we’re doing such a great job – and rest assured, we won’t get complacent now. We plan to keep developing and innovating to hold our place at the head of the pack, and to better serve you, our customers, all the time.

… I think we can all raise our glasses to that.

Click here to read the full G2 Crowd Competitor Report.


Infographic: Top 5 BI Trends You Need to Know Right Now

Trends move fast these days. It seems like just as soon as we take the last sip of our unicorn frappuccinos the next big thing will be here. And then the next. And then the next. How are we meant to keep up? It’s all too much! Business Intelligence is moving at a similar pace, and although we can’t help you translate the latest lingo the kids are using, when it comes to trends in BI you need to know, we’ve got your back.

For example, did you know that two-thirds of BI users check the status of their KPIs daily or hourly, that 55% of them prefer visual alerts, and that 57% would like to interact with bots or voice-activated data assistants in the future? Or how about that only a small 5% of BI users are IT-savvy analysts?

From self-service BI for all types of users in your organization to BI that literally speaks for itself via AI and machine learning, below are the top five trends in BI we think you need to know right this second.

[Infographic: Top 5 BI Trends You Need to Know Right Now]

Want to hear more about the biggest trends in BI for 2017? Watch our webinar with Howard Dresner, President, Founder, and Chief Research Officer at Dresner Advisory Services; Noel Poler, CTO at EREA Consulting; and Ani Manian, Head of Product Strategy at Sisense.


Sisense Leaps From Cool Vendor To Visionary, And 5 Key Takeaways From the 2017 Magic Quadrant

After several weeks of tense anticipation, the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms has officially been released. This is probably the most influential report in the BI space – describing the current state of the industry, and influencing its future by impacting buyer behavior, vendor strategy and market awareness.

After last year’s Magic Quadrant redefined business intelligence, this year’s report gives us a fascinating snapshot of an industry changing into something completely different, driven by technological breakthroughs and a crowded marketplace.

I strongly recommend that everyone with even a fleeting interest in BI read the MQ. These are my own 5 key takeaways (and my own opinion, unless stated otherwise):

1. Sisense Leads the Way in Innovation, Customer Success

I can’t help but start by mentioning how immensely proud I am of our amazing global team, and how grateful I am to them. Together, we have managed to achieve the largest organic shift in this year’s Magic Quadrant: moving from Cool Vendor to Niche Player to Visionary, all in the span of just three years.

[Image: Gartner 2017 Magic Quadrant for Business Intelligence and Analytics Platforms]

This is, above all, a testament to our core values: we innovate, care and deliver solutions with a single-minded focus on customer success. We are leading the way in innovation – reinventing core BI technologies, providing the most flexible embedded analytics platform in the market, and transforming the way business users receive insights. But this is never innovation for its own sake: everything is done with a strong, constant focus on business value and empowering our customers.

From my experience at the helm of several software companies, this spirit is absolutely vital for long-term success – and Sisense has an abundance of it. Based on information and feedback from our clients, Gartner rated Sisense in the top quartile for user enablement and achievement of business benefits.

2. Simplifying Complexity

The BI industry is maturing from departmental projects into enterprise endeavors, creating demand for more robust functionality, more data sources and more analyses. As Gartner notes:

“…buyers want to expand modern BI usage, including for self-service to everyone in the enterprise and beyond. They want users to analyze a more diverse range and more complex combinations of data sources (beyond the data warehouse or data lake) than ever before — without distinct data preparation tools.”

Indeed: we’ve talked before about the rise of complex data. Modern enterprises are dealing with troves of data generated from more sources than they can wrap their head around, and business departments are demanding more than visualization. Today’s data-driven professional needs the ability to navigate a wide variety of disparate data sources in a self-service environment, and derive insights before making a decision. Enterprise data tools should empower business units to be data-driven in this sense, rather than retroactively justifying decisions with canned reports.

It all boils down to simplifying, removing barriers, and giving more power to everyday users while enabling “data heroes” to truly unleash their analytical prowess.

3. A Move to Consolidate

More than ever we see the market gravitating towards full-stack and single-stack solutions, replacing the infamous “assembly line” of database, ETL, querying and visualization tools. This is reflected in Pentaho and Alteryx – two good companies with a very loyal customer base – moving backwards in this Magic Quadrant due to focusing mainly on back-end, data preparation features.

I believe this trend will continue and expand. It is simply unsustainable for organizations to maintain three, four or five different and expensive systems for analytics. Instead, modern business intelligence should be a seamless value chain that is purchased, owned, and operated mainly by the individual business units. If Marketing or Sales have dashboards, but any new data source or query has to go through centralized IT systems, the bottleneck has merely moved elsewhere. BI vendors that can deliver on this premise will lead the industry in the near future.

4. The Future is Hands Free and Machine Guided

Another clear trend is the emergence of next-generation technologies – machine learning, natural language processing, and artificial intelligence – as core components of modern BI solutions. Gartner predicts that:

“By 2020, natural-language generation and artificial intelligence will be a standard feature of 90% of modern BI platforms.

By 2020, 50% of analytic queries will be generated using search, natural-language processing or voice, or will be autogenerated.”

It would seem that BI is set to converge with AI. Machine learning is already being used to serve analytical insights to end users with close to zero human intervention. Couple this with the amazing advancement of voice and natural language processing, and it’s safe to assume that the way businesses interact with data is going to be very different, very soon.

I am once again extremely thrilled to see our own product vision aligning with the marketplace – with Sisense leading the way in the use of machine learning for automated anomaly detection, alerting and performance optimization, as well as integrating natural language into analytical workflows through chatbots and smart devices.

5. An Overcrowded Market

Finally, if there is one conclusion that is undisputedly evident just from a cursory glance at this year’s MQ, it’s this: there are a LOT of vendors out there. Gartner’s report is merely the “cream of the crop” with dozens of other tools available, offering everything from verticalized dashboards to advanced statistical analysis.

So how can you make an informed buying decision in such an overcrowded market?

I would start by asking three questions:

  1. Will the vendor give me the tools I need to succeed as a customer?
  2. Do I believe in the vendor’s vision and roadmap?
  3. Will the vendor bring me to where I want to be with my data today? Tomorrow? Three years from now?

In other words, focus on the future as well as the present. Business intelligence is evolving, perhaps more so than any other enterprise technology. Data is changing and growing in complexity; technology is moving towards consolidation, automation, and smarter workflows; and business itself is changing and becoming more and more dependent on data for its operations. In this time of sea change, future-proofing is the way to go.

These are definitely exciting times, the best and most dynamic I have experienced in my own career (and possibly yours). I for one can’t wait to see how the next iterations of data and analytics will transform the enterprise.

You can get your copy of the full Magic Quadrant for Business Intelligence and Analytics Platforms right here.

Copyright notice:
Gartner, Magic Quadrant for Business Intelligence and Analytics Platforms, by Rita L. Sallam, Cindi Howson, Carlie J. Idoine, Thomas W. Oestreich, James Laurence Richardson and Joao Tapadinhas, published 16 February 2017
This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Sisense.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


The Ultimate Guide to Comparing Embedded Analytics Solutions

The Inevitable Challenge: What to Do with All This Data?

Businesses across all applications and in every industry are faced with mountains of data. Finding a meaningful way to manage this data has become a necessity, especially when it’s data that can help your customers or partners succeed. The only question is: will data be your company’s weak point or competitive differentiator?

In today’s ambitious business environment, customers want access to an application’s data, with the ability to interact with that data in a way that allows them to derive business value. After all, customers rely on your application to help them understand the data it holds, especially in our increasingly data-savvy world.

Embedded Analytics Is the Most Popular Answer

Embedded analytics means building analytical capabilities – such as data management, reporting and visualization – into other business applications and solutions. Service providers, whether on-premise, cloud-based or hybrid, can then offer customer-facing dashboards, reports, and services such as software infrastructure, platforms, and processes.

Many companies today are embedding analytics into their applications so users can access insights from their data in easy-to-understand reports and dashboards. Gartner reported that today 25% of analytics capabilities are embedded in business applications, while other industry research firms report that as many as 40% of organizations are embedding analytics. Both numbers show the incredible growth of embedded analytics and point to a high-value return on investment.

The Hidden and Not-So-Hidden Benefits

The value is actually two-fold: service providers who are using embedded analytics to help their customers be more successful are simultaneously creating a powerful competitive differentiator. The outcome? Happier, more loyal customers, and a strong competitive advantage for the company.

Aberdeen Group discovered that 53% of service providers are embedding analytics to drive competitive advantage, and the top service providers who did saw a 31% year over year increase in customer base. Other top benefits service providers are experiencing include improved user experience, new revenue streams, and increased average customer value:

[Chart: top benefits service providers gain from embedded analytics]

Becoming an OEM Partner for BI & Analytics

Today, the most common way to embed a business intelligence solution is to work with a BI partner and embed their product through an original equipment manufacturer (OEM) agreement. Becoming an OEM has proven to be the easiest and most effective way to offer business intelligence, because it allows you a fast time to market using an established BI solution and technology.

Many of the resource allocation and budget issues dissipate when you embed a BI solution, especially one built on cost-effective infrastructure that can easily scale to your current and future data needs. In fact, top BI analyst and researcher Wayne Eckerson described the movement towards embedding analytics as:

The best way to simplify and operationalise BI is to embed it directly into operational applications and processes that drive the business. This is the definition of embedded analytics, and it’s the next wave in BI.

Eckerson went on to say that the future of BI – its continued success in terms of user adoption and extensive deployments – is dependent on embedded analytics.

How to Make Analytics Your Competitive Differentiator

To capitalize on the value of their information, many companies today are taking an embedded approach to analytics and delivering insights into the everyday workflow of their users through embedded analytics and business intelligence (BI). However, in order to successfully expose analytics to customers and partners, companies are faced with three main challenges:

  • Manage complex data (big and disparate datasets) quickly
  • Securely share data and insights
  • Ensure the solution is built on scalable, cost effective infrastructure

Learn how to overcome those challenges with a series of matrices comparing popular embedded analytics technologies and vendors in our new eBook, The Ultimate Guide to Comparing Embedded Analytics. You’ll get a general overview of embedded analytics, an in-depth analysis of the different approaches to embedding BI and analytics, and the benefits and challenges of the most common BI solution technologies that offer OEM partnerships. By the end, you’ll have a strong sense of the embedded analytics marketplace and understand the most strategic way your company can benefit from embedded analytics.

The Ultimate Guide to Comparing Embedded Analytics Solutions

Is it better to build or buy? What are the different cost factors? What differentiates the many OEM vendors? Download our new eBook The Ultimate Guide to Comparing Embedded Analytics, and learn:

  • Pros and cons to building vs. buying embedded analytics
  • See how Sisense stacks up against other vendors
  • Get real-life use cases of embedded analytics

Download Now


Beyond Dashboards: Introducing BI Virtually Everywhere

Today we announce an exciting new initiative and another step forward in our quest to simplify the way business users consume, interact with, and engage with business data. Sisense BI Virtually Everywhere takes data out of the 2D screens in which it “lives” today and gives it a new, physical presence, to inspire immediate data-driven action in response to changes as they happen.

The private beta launched with two Sisense-enabled devices – a smart IoT lightbulb that integrates with Sisense to show how your department or business is performing against a certain KPI (for example, changing to green once the sales reps hit their daily targets), and an Amazon Echo device that enables you to ask questions about your data and receive answers, all in natural language. Here’s what some of the first users are saying:

Needless to say, we will still be providing business intelligence software – we’re not moving away from data models and dashboards quite yet! But we are very excited about BI Virtually Everywhere, because this initiative fits like a glove with three of Sisense’s core mission statements: simplifying complex data, building new and innovative technology, and delivering unparalleled user experience to our customers.

We’ve got a lot more planned for the future – the possibilities that come from combining data analytics with the Internet of Things and new innovations in VR and AR tech are mind-boggling. And as usual at Sisense, we don’t develop anything for the sake of novelty – but in order to deliver better, smoother and more effective products to end users. At this point BI Virtually Everywhere is a private beta, and we’re already getting more requests to join than we can handle. However, for now registration is still open – so go ahead and apply!

Learn more
