Category Archives: Pentaho

A new collaboration space


With the move to Hitachi Vantara we’re not letting the community go away – exactly on the contrary. And one of the first things is trying to give the community a new home, in here: http://community.pentaho.com

We’re trying to gather people from the forums, user groups and everywhere else, and give them a better and more modern collaboration space. This space will stay open, also because the content here is extremely valuable, so the ultimate decision is yours.

Your mission, should you choose to accept it, is to register and try this new home. We’re counting on your help to make it a better space!

See you in http://community.pentaho.com

Cheers!

-pedro


Pedro Alves on Business Intelligence

Pentaho Business Analytics Blog

Today, our parent company Hitachi, a global leader across industries, infrastructure and technology, announced the formation of Hitachi Vantara , a company whose aim is to help organizations thrive in today’s uncertain and turbulent times and prepare for the future. This new company unifies the mission and operations of Pentaho,…


Pentaho Community Meeting 2017: exciting use cases & final Call for Papers

Enjoyed your vacations? Good – now let’s get back to business!

The Pentaho Community Meeting 2017 in Mainz, taking place from November 10-12, is approaching and more than 140 participants interested in BI and Big Data are already on board.

Many great speakers from all over the world will present their Pentaho use cases, including data management and analysis at CERN, evaluation of environmental data at the Technical University of Liberec and administration of health information in Mozambique. And of course Matt Casters, Pedro Alves and Jens Bleuel will introduce the latest features in Pentaho.

The 10th jubilee edition features many highlights:

·      Hackathon and technical presentations on FRI, Nov 10 
·      Conference day on SAT, Nov 11                    
·      Dinner on SAT, Nov 11                          
·      Get-together and drinks on SAT, Nov 11  
·      Social event on SUN, Nov 12

See the complete agenda here, with all presentations of the business and technical tracks on the conference day. Food and drinks will be provided. A highlight is the CERN use case (you can read a blog post on it here).

And don’t forget: you can participate in the Call for Papers till September 30th! Send your Pentaho project to Jens Bleuel via the contact form.

 Some of the speakers: 

·      Pedro Alves – Aka… me! All about Pentaho 8.0, which is a different way to say “hum, just put some random title, I’ll figure out something later”
·      Dan Keeley – Data Pipelines – Running PDI on AWS Lambda
·      Francesco Corti – Pentaho 8 Reporting for Java Developers
·      Pedro Vale – Machine Learning in PDI – What’s new in the Marketplace?
·      Caio Moreno de Souza – Working with Automated Machine Learning (AutoML) and Pentaho
·      Nelson Sousa – 10 WTF moments in Pentaho Data Integration
If you haven’t done so, Register Here

We are looking forward to seeing you in Mainz, which can be reached in only 20 minutes by train from Frankfurt airport or main train station!
In the meantime, follow all updates on Twitter.


-pedro, with all the content from this post shamelessly stolen from Ruth and Carolin, the spectacular organizers from IT-Novum


Hello Hitachi Vantara!


Ok, I admit it – I am one of those people that actually likes change and views it as an opportunity. Four years ago, I announced here that Webdetails joined Pentaho. For those who don’t know, Webdetails was the Portuguese-based consulting company that then turned into Pentaho Portugal (and expanded from 20 people at the time to 60+), completely integrated into the Pentaho structure.

Two years ago, we announced that Pentaho was acquired by HDS, becoming a Hitachi Group Company.

We have a new change today – and since I’m lazy (and in Vegas, for the Hitachi Next event, and would rather be at our party at the Mandalay Bay Beach than in my room writing this blog post!), I’ll simply steal the same structure I used two years ago (when Pentaho was acquired) and get straight to the point! :p

Big news

An extremely big transformation has been taking place, and it materialized today, September 19, 2017. A new company is born. Meet: Hitachi Vantara

You may be asking yourselves: Can it possibly be a coincidence that the new company is launched on the exact same day I turn 40? Well, actually yes, a complete coincidence… :/

This new company unifies the mission and operations of Pentaho, Hitachi Data Systems and Hitachi Insight Group into a single business. More info in the Pentaho blog: Hitachi Vantara – Here’s what it means

What does this mean?

It has always been our goal to provide an offering that allows customers to build high-value, data-driven solutions. We were, I think, successful at doing that! And now we (Hitachi Vantara) want to take it to the next level, which is why this transformation is needed: we’re aiming higher. We want not only to be the best at (big) data orchestration and analytics, but to do so in this new IoT / social innovation ecosystem, aiming to be the biggest player in the market.

And this transformation will allow us to do that!

What will change?

So that it’s clear, Pentaho, as a product will continue to exist. Pentaho, as a company, is now Hitachi Vantara.

And for Pentaho as a product, this gives us conditions we’ve never had to improve the product, focusing on what we need to do best (big data orchestration and analytics) and leveraging other groups in the company in areas that, even though they weren’t our core focus, people expect us to cover.
We’ll also improve the overall portfolio interoperability. While so far we’ve always tried to be completely agnostic, we’ll keep saying that but add a small detail: we have to work better with our own stuff – because we can make it happen!

Community implications

This one is very easy!!! I’ll just copy paste my previous answer – because it didn’t change:

Throughout all the talks, our relationship and involvement with the community has always been one of the strong points of Pentaho, and seen with much interest.
The relationship between the community and a commercial company exists because it’s mutually beneficial. In Pentaho’s case, the community gets access to software it otherwise couldn’t, and Pentaho gets access to an insane amount of resources that contribute to the project. Don’t believe me? Check the Pentaho Marketplace for the large number of submissions, Jira for all the bug reports and improvement suggestions we get out of all the real-world tests, and the discussions on the forums or on the several available mailing lists.
Is anyone, in his or her right mind, willing to let all this go? Nah.
Plus, not having a community would render my job obsolete, and no one wants that, right? (don’t answer, please!)

The difference? We wanna do this bigger, better and faster!


And things are already moving in that direction. We are moving the Pentaho Community page to the Hitachi Vantara community site, with some really cool interactive and social features. You can visit our new home here: https://community.hitachivantara.com/community/products-and-solutions/pentaho. I look forward to engaging with all of you on this new site.

Will Hitachi Vantara shut down its Pentaho CE edition / its open source model?

I will, once again, repeat the previous answer:

Just in case the previous answer wasn’t clear enough, lemme spell it out with all the words: there are no plans to change our open source strategy or to stop providing a CE edition to our community!
Can that change in the future? Oh, absolutely yes! Just like it could have changed in the past. And when could it change? When it stops making sense; when it stops being mutually beneficial. And on that day, I’ll be the first one to suggest a change to our model.

And speaking of which – don’t forget to register to PCM17! It’s going to be the best ever!
Cheers!
-pedro 


Pentaho Maven repository changed to nexus.pentaho.org

A recent (at the time of writing, obviously!) issue in the Mondrian project made us notice that we had failed to announce an important change:


This morning the pentaho maven repository seems to be down.

Each download request during maven build fails with 503 error:
[WARNING] Could not transfer metadata XXX/maven-metadata.xml from/to pentaho-releases (http://repository.pentaho.org/artifactory/repo/): Failed to transfer file: http://repository.pentaho.org/artifactory/repo/XXX/maven-metadata.xml. Return code is: 503 , ReasonPhrase:Service Temporarily Unavailable.



The reason for this is that the Maven URL is now nexus.pentaho.org/content/groups/omni.

Here’s a link to a complete ~/.m2/settings.xml config file: https://github.com/pentaho/maven-parent-poms/blob/master/maven-support-files/settings.xml
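If you’d rather patch an existing ~/.m2/settings.xml than replace it wholesale, the essential part is redirecting the `pentaho-releases` repository id (the one in the error message above) to the new Nexus URL. A minimal sketch, with an illustrative mirror id – the linked file remains the authoritative version:

```xml
<!-- Minimal sketch for ~/.m2/settings.xml; see the full settings.xml linked above -->
<settings>
  <mirrors>
    <mirror>
      <id>pentaho-nexus</id>
      <!-- redirect requests for the old pentaho-releases repository to Nexus -->
      <mirrorOf>pentaho-releases</mirrorOf>
      <url>http://nexus.pentaho.org/content/groups/omni</url>
    </mirror>
  </mirrors>
</settings>
```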

-pedro


PCM17 – Pentaho Community Meeting: November 10-12, Mainz

PCM17 – 10th Edition



One of my favourite blog posts of the year – Announcing PCM17. And this year, for the 10th edition, we’re going back to the beginning – Mainz in Germany.


Location

Location address: Kupferbergterrasse, Kupferbergterrasse 17-19, 55116 Mainz. Close to Frankfurt, Germany


Event

We’re keeping the schedule of previous years: a meet-up on Friday for drinks, preceded by a hackathon; a meet-up on Saturday for drinks, preceded by a bunch of presentations of really cool stuff; a meet-up on Sunday for drinks, preceded by city sightseeing! You get the idea.

All the information….

Here: https://it-novum.com/en/pcm17/! IT-Novum is doing a spectacular job organizing this event, and you’ll find all the information you need there, from instructions on how to get to the venue to suggestions for hotels to stay at.

Registration and Call for Presentations

Please go to the #PCM17 website to register and also to send us a presentation proposal!


Cheers!



-pedro


A consulting POV: Stop thinking about Data Warehouses!

What I am writing here is the materialization of a line of thought that started bothering me a couple of years ago. While I implemented project after project, built ETLs, optimized reports and designed dashboards, I couldn’t help thinking that something didn’t quite make sense, but I couldn’t quite see what. When I tried to explain it to someone, I just got blank stares…
Eventually things started to make more sense to me (which is far from saying they actually make sense, as I’m fully aware my brain is, hum, let’s just say a little bit messed up!) and I ended up realizing that I had been looking at the challenges from the wrong perspective. And while this may seem a very small change in mindset (especially if I fail at passing the message, which may very well happen), the implications are huge: not only did it change the methodology our services teams use to implement projects, it’s also guiding Pentaho’s product development and vision.

A few years ago, in a blog post far, far away…

A couple of years ago I wrote a blog post called ”Kimball is getting old”. It focused on one fundamental point: technology was evolving to a point where just looking at the concept of an enterprise data warehouse (EDW) seemed restrictive. After all, end users care only about information; they couldn’t care less about what gets the numbers in front of them. So I proposed that we should apply a very critical eye to our problem, and that maybe, sometimes, Kimball’s DW, with its star schemas, snowflakes and all that jazz, wasn’t the best option and we should choose something else…

But I wasn’t completely right…

I’m still (more than ever?) a huge proponent of the top-down approach: focus on usability, focus on the needs of the users, provide them a great experience. All the rest follows. All of that is still spot on.
But I made 2 big mistakes:
1.    I confused data modelling with data warehouse
2.    I kept seeing data sources conceptually as the unified, monolithic source of every insight

Data Modelling – the semantics behind the data

Kimball was a bloody genius! Actually, my mistake here was due to the fact that he is way smarter than everyone else. Why do I say this? Because he came up not with one, but with two groundbreaking ideas…
First, he realized that the value of data, business-wise, comes when we stop considering it as just zeros and ones and start treating it as business concepts. That’s what data modelling does: by adding semantics to raw data, it immediately gives it a meaning that makes sense to a wide audience of people. And this is the part that I erroneously dismissed. This is still spot on! All his concepts of dimensions, hierarchies, levels and attributes are relevant first and foremost because that’s how people think.
And then he immediately went prescriptive and told us how we could map those concepts to database tables and answer the business questions with relational database technology, with concepts like star schemas, snowflakes, the different types of slowly changing dimensions, aggregation techniques, etc.
He did such a good job that he basically shaped how we work. How many of us were involved in projects where we were asked to build data warehouses that could give all possible answers when we didn’t even know the questions? I’m betting a lot – I certainly did that. We were taught to provide answers without focusing on understanding the questions.
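To make that prescriptive layer concrete: a Type 2 slowly changing dimension, one of Kimball’s best-known techniques, keeps history by expiring the old dimension row instead of overwriting it. A minimal sketch in Python – the row layout and function name are mine, purely for illustration:

```python
from datetime import date

def scd2_update(dim_rows, key, new_attrs, today=None):
    """Apply a Kimball Type 2 slowly-changing-dimension change:
    close the current row for `key` and append a new current row."""
    today = today or date.today()
    out = []
    for row in dim_rows:
        if row["key"] == key and row["current"]:
            # expire the existing version instead of overwriting it
            row = {**row, "current": False, "valid_to": today}
        out.append(row)
    out.append({"key": key, **new_attrs, "current": True,
                "valid_from": today, "valid_to": None})
    return out

# A customer moves city: history is preserved, so "as of" queries still work.
dim = [{"key": "C1", "city": "Lisbon", "current": True,
        "valid_from": date(2015, 1, 1), "valid_to": None}]
dim = scd2_update(dim, "C1", {"city": "Porto"}, today=date(2017, 9, 1))
```

Facts can then join against the dimension row that was valid at transaction time, which is exactly the history a plain overwrite (Type 1) throws away.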

Project’s complexity is growing exponentially

Classically, a project implementation simply revolved around reporting on the past. We can’t do that anymore; if we want our project to succeed, it can’t just report on the past: it also has to describe the present and predict the future.
There’s also the explosion in the amount of data available.
IoT brought us an entire new set of devices that are generating data we can collect.
Social media and behavior analysis brought us closer to our users and customers.
In order to be impactful (regardless of how “impact” is defined), a BI project has to trigger operational actions: schedule maintenance, trigger alerts, prevent failures. So bring on all those data scientists with their predictive and machine learning algorithms…
On top of that, in the past we might have been successful at convincing our users that it was perfectly reasonable to wait a couple of hours for that monthly sales report that processed a couple of gigabytes of data. We all know that’s changed; if they can search the entire internet in less than a second, why would they wait minutes for a “small” report?? And let’s face it, they’re right…
The consequence? It’s getting much more complex to define, architect, implement, manage and support a project that needs more data, more people, more tools.
Am I making all of this sound like a bad thing? On the contrary! This is a great problem to have! In the past, BI systems were confined to delivering analytics. We’re now given the chance to have a much bigger impact in the world! Figuring this out is actually the only way forward for companies like Pentaho: We either succeed and grow, or we become irrelevant. And I certainly don’t want to become irrelevant!

IT’s version of Heisenberg’s Uncertainty Principle: improving both speed and scalability??

So how do we do this?
My degree is actually in Physics (don’t pity me; it took me a while, but I eventually moved away from that), and even though I’m a really crappy physicist, I do know some of the basics…
One of the most well-known principles in physics is Heisenberg’s uncertainty principle: you cannot accurately know both the speed and the location of a (sub-)atomic particle with full precision. But you can have precise knowledge of one at the expense of the other.
I’m very aware this analogy is a little bit silly (to say the least), but it’s at least vivid enough in my mind to make me realize that we can’t expect IT to solve both the speed and the scalability issues – at least not to a point where we have a one-size-fits-all approach.
There have been spectacular improvements in distributed computing technologies – but all of them have their pros and cons; the days when one database was good for all use cases are long gone.
So what do we do for a project where we effectively need to process a bunch of data and at the same time it has to be blazing fast? What technology do we choose?

Thinking “data sources” slightly differently

When we think about data sources, there are two traps most of us fall into:
1.    We think of them as monolithic entities (e.g. Sales, Human Resources, etc.) that hold all the information relevant to a topic
2.    We think of them from a technology perspective
Let me try to explain this through an example. Imagine the following customer requirement, here in the format of a dashboard, though it could very well be any other delivery format (yeah, because a dashboard, a report, a chart, whatever, is just the way we choose to deliver the information):
(Figure: the example sales dashboard)

Pretty common, hum?

The classical approach

When thinking about this (common) scenario from the classical implementation perspective, the first instinct would be to start designing a data warehouse (it doesn’t even need to be an EDW per se; it could be Hadoop, a NoSQL source, etc.). We would build our ETL process (with PDI or whatever) from the source systems, and there would always be a modelling stage, so we could get to a Sales data source that could answer all kinds of questions.
After that is done, we’d be able to write the necessary queries to generate the numbers our fictitious customer wants.
And after a while, we would end up with a solution architecture similar to this diagram, which I’m sure looks very much like everything we’ve all been doing in consulting:
(Figure: the classical solution architecture)

Our customer gets the numbers he wants; he’s happy and successful. So successful that he expands, makes a bunch of acquisitions, and gets so much data that our system starts to become slow. The sales “table” never stops growing. It’s a pain to do anything with it… Part of our dashboard takes a while to render… we’re able to optimize parts of it, but other areas become slow.
In order to optimize performance and allow the system to scale, we consider changing the technology: from relational databases to vertical column-store databases to NoSQL data stores, all the way to Hadoop, in a permanent effort to keep things scaling and fast…

The business’ approach

Let’s take a step back. Looking at our requirements, the main KPI the customer wants to know is:
How much did I sell yesterday and how is that compared to budget?
It’s one number he’s interested in.
Look at the other elements: he wants the top reps for the month. He wants a chart of the MTD sales. How many data points is that? 30, tops? I’m being simplistic on purpose, but the thing is that it’s extremely stupid to force ourselves to always go through all the data when the vast majority of the questions aren’t a big data challenge in the first place. They may need big data processing and orchestration, but certainly not at runtime.
So here’s how I’d address this challenge:
(Figure: the business data sources architecture)

I would focus on the business question. I would not build a single Sales datasource. Instead, I’d define the following Business Data Sources (sorry, I’m not very good at naming stuff…), and I’d force myself to define them in a way where each of them contains (or outputs) a small set of data (up to a few million rows at most):
·      ActualVsBudgetThisMonth
·      CustomerSatByDayAndStore
·      SalesByStore
·      SalesRepsPerformance
Then I’d implement these however I needed! Materialized, unmaterialized, database or Hadoop, whatever worked. But through this exercise we define a clear separation between where all the data lives and the most common questions we need to answer in a very fast way.
Does something like this give us the liberty to answer all questions? Absolutely not! But at least for me it doesn’t make a lot of sense to optimize a solution to give answers when I don’t even know what the questions are. And the big data store is still there somewhere for the data scientists to play with.
Like I said, while the differences may seem very subtle at first, here are some advantages I found in thinking through solution architecture this way:
·      Faster to implement – since our business datasources’ signatures are much smaller and well identified, it’s much easier to fill in the blanks
·      Easier to validate – since the datasources are smaller, they are easier to validate with the business stakeholders as we lock them down and move on to other business data sources
·      Technology agnostic – note that at no point did I mention technology choices. Think of these datasources as an API
·      Easier to optimize – since we split one big data source into multiple smaller ones, they become easier to maintain, support and optimize
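The “datasources as an API” point can be sketched in a few lines of Python. Everything here – the class name, the datasource name, the stub data – is made up for illustration; the point is only that the dashboard talks to a small, named signature, never to the raw sales table:

```python
# Hypothetical sketch: each business data source is a named, small-output
# query with a fixed signature, hiding whatever technology materializes it.

class BusinessDataSource:
    def __init__(self, name, fetch):
        self.name = name
        self._fetch = fetch  # could hit a database, Hadoop, a cache...

    def rows(self, **params):
        return self._fetch(**params)

# The backing implementation is an implementation detail; here, an in-memory stub.
actual_vs_budget = BusinessDataSource(
    "ActualVsBudgetThisMonth",
    lambda month: [{"month": month, "actual": 1200, "budget": 1000}],
)

# The dashboard only ever sees a handful of rows, never the raw sales table.
kpi = actual_vs_budget.rows(month="2017-09")[0]
ratio = kpi["actual"] / kpi["budget"]
```

Swapping the lambda for a real query against whatever engine turns out to be fast enough changes nothing on the dashboard side, which is exactly the separation argued for above.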

Concluding thoughts

Give it a try – this will seem odd at first, but it forces us to think differently. We spend so much time worrying about the technology that, more often than not, we forget what we’re here to do in the first place…


-pedro



Pentaho 7.1 is available!

Pentaho 7.1 is out



Remember when I said, at the time of the previous release, that Pentaho 7.0 was the best release ever? Well, that was true till today! But not any more, as 7.1 is even better!  :p

Why do I say that? It’s a big step forward in the direction we’ve been aiming for – consolidating and simplifying our stack, without passing the complexity on to the end user.

These are the main features in the release:

  • Visual Data Experience
    • Data Exploration (PDI)
      • Drill Down
      • New Viz’s: Geo map, sunburst, Heat Grid
      • Tab Persistency
      • Several other improvements including performance
    • Viz API 3.0 (Beta)
      • Viz API 3.0, with documentation
      • Rollout of consistent visualizations between Analyzer, PDI and Ctools
  • Enterprise Platform
    • VCS-friendly features
      • File / repository abstraction
      • PDI files properly indented
      • Repository performance improvements
    • Reintroducing Ops Mart
    • New default theme on User Console
    • Pentaho Mobile deprecation
  • Big Data Innovation
    • AEL – Adaptive Execution Layer (via Spark)
    • Hadoop Security
      • Kerberos Impersonation (for Hortonworks)
      • Ranger support
    • Microsoft Azure HD Insights shim

I’m getting tired just from listing all this stuff… Now into a bit more detail, and I’ll jump back and forth between these different topics, ordering by the ones that… well, that I like the most :p

Adaptive Execution with Spark

This is huge; we’ve decoupled the execution engine from PDI so we can plug in other engines. Now we have two:
  • Pentaho – the classic pentaho engine
  • Spark – you’ve guessed it…
What’s the goal of this? Making sure we treat our ETL development with a pay as you go approach; First, we worry about the logic, then we select the engine that makes most sense.

(Figure: AEL execution of Spark)
One of the things people need to do in other tools (and even in our own tools – that’s why I don’t like our own approach to Pentaho MapReduce) is to think, from the start, about the engine and technology they’re going to use. But this makes little sense.

Scale as you go

Pentaho’s message is one of future-proofing the IT architecture, leveraging the best of what the different technologies have to offer without imposing a certain configuration or persona as the starting point. The market is moving towards a demand for BA/DI to come together in a single platform. Pentaho has an advantage here, as we have seen with our customers that BI and DI are better together, and that is what sets us apart from the competition. Gartner predicts that BI and discovery tool vendors will partner to accomplish this. Larger, proprietary vendors will attempt to build these platforms themselves. Given this approach from the competition, Pentaho has a unique and early lead in delivering this platform.

A good example is the story we can tell about governed blending. We don’t need to impose any pre-determined configuration on customers; we can start with the simple use of data services and unmaterialized data sets. If it’s fast enough, we’re done. If not, we can materialize the data into a database or even an enterprise data warehouse. If it’s fast enough, we’re done. If not, we can resort to other technologies – NoSQL, Lucene-based engines, etc. If it’s fast enough, we’re done. If everything else fails, we can set up an SDR blueprint, which is the ultimate scalability solution. And throughout this entire journey we never let go of the governed blending message.

This is an insanely powerful and differentiated message; we allow our customers to start simple, and only go down the more complex routes when needed. When going down such a path, a user knows, accepts and sees the value in the extra complexity needed to address scalability.

Adaptive Execution Layer

The strategy described for the “Logical Data Warehouse” is exactly the one we need for the execution environment. A lot of times customers get hung up on a certain technology without even understanding whether they actually need it. Countless times we’ve seen customers asking for Spark without a use case that justifies it. We have to challenge that.

We need to move towards a scenario where the customer doesn’t have to think about technology first. We’ll offer one single approach and ways to scale as needed. If a data integration job works on a single Pentaho Server, why bother with other stacks? If it’s not enough, then making the jump to something like MapReduce or Spark has to be a linear move.

The following diagram shows the Adaptive Execution Layer approach just described

(Figure: AEL conceptual diagram)

Implementation in 7.1 – Spark

For 7.1 we chose Spark as the first engine to implement for AEL. It has seen a lot of adoption, and the fact that it’s not restricted to the MapReduce paradigm makes it a good candidate for separating business logic from execution.

How does it work? This high-definition conceptual diagram should help me explain:

(Figure: an architectural diagram so beautiful it should almost be roughly correct)

We start by generating a PDI driver for Spark from our own PDI instance. This is a very important starting point, because this methodology ensures that any plugins we may have developed or installed will work when we run the transformation – we couldn’t let go of Pentaho’s extensibility capabilities.

That driver will be installed on an edge node of the cluster, and that’s what will be responsible for executing the transformation. Note that by using Spark we’re leveraging all its characteristics: namely, we don’t even need a cluster, as we can choose between Spark standalone and YARN mode – even though I suspect the majority of users will be on YARN mode, leveraging the clustering capabilities.

Runtime flow

One of the main capabilities of AEL is that we don’t need to think about adapting the business logic to the engine; we develop the transformation first and then select where we want to execute it. This is how it works from within Spoon:

(Figure: creating and selecting a Spark run configuration)

We created the concept of a Run Configuration. Once we select a run configuration set up to use Spark as the engine, PDI sends the transformation to the edge node, and the driver then executes it.

All transformation steps in PDI will run in AEL-Spark! This was the intent from the start. And to understand how this works, there are two fundamental concepts:

  • Some steps are safe to run in parallel, while others are not parallelizable or not recommended in clustered engines such as Spark. All the steps that take one row as input and one row as output (calculator, filter, select values, etc.) are parallelizable; steps that require access to other rows or depend on the position and order of rows in the row set still run on Spark, but have to run on the edge node, which implies a collect of the RDDs (Spark’s datasets) from the nodes. It is what it is. And how do we know which is which? We simply tell PDI which steps are safe to run in parallel, and which are not
  • Some steps can leverage Spark’s native APIs for performance and optimization. When that’s the case, we can pass PDI a native implementation of the step, greatly increasing scalability at possible bottleneck points. Examples of these steps are the Hadoop file inputs, HBase lookups, and many more
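The first of those two rules can be modelled in a few lines. This is not PDI’s actual API – the step names and labels below are illustrative – it only shows the placement logic described above: row-local steps stay on the distributed dataset, order-dependent ones force a collect back to the driver:

```python
# Illustrative model of the AEL placement rule: row-local steps run
# distributed; order-dependent steps force a collect to the driver/edge node.

ROW_LOCAL = {"calculator", "filter", "select_values"}  # one row in, one row out

def plan(steps):
    """Return (step, placement) pairs for a transformation's step list."""
    placement = []
    for step in steps:
        if step in ROW_LOCAL:
            placement.append((step, "distributed"))
        else:
            # needs the whole row set, e.g. depends on row order or position
            placement.append((step, "driver-after-collect"))
    return placement

print(plan(["filter", "calculator", "row_number"]))
```

In the real engine this classification is metadata PDI carries per step; the sketch just makes the consequence visible: one order-dependent step in the middle of a pipeline is enough to pull the whole row set out of the cluster at that point.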



Feedback please!

Even though running on secured clusters (and leveraging impersonation) is an EE-only capability, AEL is also available in CE. The reason is that we want to get help from the community in testing, hardening, nativizing more steps and even writing more engines for AEL. So go and kick the tires of this thing! (And I’ll surely do a blog post on this alone.)


Visual Data Experience (PDI) Improvements

This is one of my favorite projects. You may be wondering what the real value of this improved data experience in PDI is, and why it’s all that exciting… Let me tell you why: this is the first materialization of something that we hope becomes the way to handle data in Pentaho, regardless of where we are. So this thing that we’re building in PDI will eventually make its way to the server… I’d like to throw away all the technicalities that we expose in our server (Analyzer for OLAP, PIR for metadata, PRD for dashboards…) in favor of a single content-driven approach and usability experience. This is surely starting to sound confusing, so I’d better stop here :p

In the 7.1 release, Pentaho provides new Data Explorer capabilities to further support the following key use cases more completely:

  • Data Inspection: During the process of cleansing, preparing, and onboarding data, organizations often need to validate the quality and consistency of data across sources. Data Explorer enables easier identification of these issues, informing how PDI transformations can be adjusted to deliver clean data. 
  • BI Prototyping: As customers deliver analytics-ready data to business analysts, Data Explorer reduces the iterations between business and IT. Specifically, it enables the validation of the metadata models that are required for using Pentaho BA. Models can be created in PDI and tested in Data Explorer, ensuring data sources are analytics-ready when published to BA.

And how? By adding these improvements:

New visualization: Heatgrid

This chart can display two measures (metrics) and two attributes (categories) at once. Attributes are displayed on the axes, and measures are represented by the size and color of the points on the grid. It is most useful for comparing metrics at the ‘intersection’ of two dimensions, as seen in the comparison of quantity and price across combinations of different territories and years below (did I just define what a heat grid is?! No wonder it’s taking me hours to write this post!):

[Image: heatgrid visualization]
Look at all those squares!



New visualization: Sunburst

A pie chart on steroids that can show hierarchies. Less useless than a normal pie chart!

[Image: sunburst visualization]
Circles are also pretty!

New visualization: Geo Maps

The geo map uses the same auto-geocoding as Analyzer, with out-of-the-box ability to plot latitude/longitude pairs, all countries, all country subdivisions (state/province), major cities in select countries, as well as United States counties and postal codes.

[Image: Geo Map visualization]

Drill down capabilities

When using dimensions in Data Explorer charts or pivot tables, users can now expand hierarchies to see the next level of data. This is done by double-clicking a level in the visualization (for instance, double-click a ‘country’ bar in a bar chart to drill down to ‘city’ data).


[Image: drill down from a visualization]
Drill down in the visualizations…

This can be done through the visualizations or through the labels/axes. Once again, look at this as the beginning of a coherent way to handle data exploration!

[Image: drill down from the axis labels]
… or from where it makes more sense

And this is only the first of a new set of actions we’ll introduce here…

Analysis persistency

In 7.0 these capabilities were a one-time inspection only. Now we’ve taken it a step further: they get persisted with the transformations. You can now use them to validate the data, get insights right on the spot, and make sure everything is lined up to show to the business users.

[Image: analysis persistency indicator]

Viz API 3.0

Every old timer knows how much disparity we’ve had throughout the stack in terms of visualization consistency. This is not an easy challenge to solve: different parts of our stack were created at completely different times and places, so a lot of different technologies were used. An immediate consequence is that we can’t just add a new viz and expect it to be available in several places of the stack.

We’ve been working on a visualization layer, codenamed VizAPI (for a while, actually, but we’ve now reached a point where we can make it available in beta form), that brings this much-needed consistency and consolidation.


[Image: Viz API compatible containers]

In order to make this effort worthwhile, we needed to tackle things in the following order:

  1. Define the VizAPI structure
  2. Implement the VizAPI in several parts of the product
  3. Document and allow users to extend it

And… we did it. We re-implemented all the visualizations in this new VizAPI structure and adapted 3 containers: Analyzer, CTools and DET (Data Exploration) in PDI. As a consequence, the look and feel of the visualizations is now the same across all of them.




[Image: Analyzer visualizations]
Analyzer visualizations are now much better looking _and_ usable

One important note though: upgrading users will still default to the “old” VizAPI (yeah, we called it the same as well, isn’t that smart :/ ) so as not to risk interfering with existing installations. To test an existing project with the new visualizations, you need to change the VizAPI version number in analyzer.properties. New installs will default to the new one.

In order to allow people to include their own visualizations and promote more contributions to Pentaho (I’d love to start seeing more contributions to the marketplace with new and shiny vizzes), we need to make it really easy for people to know how to create them.

And I think we did that! Even though this deserves its own blog post, just take a look at the documentation the team prepared for this:

[Image: instructions for how to add new visualizations]

You’ll see this documentation has “beta” written on it. The reason is simple: we decided to put it out there, collect feedback from the community and implement any changes / fine-tuning before the 8.0 timeframe, when we’ll lock this down, guaranteeing long-term support for new visualizations.


MS HDInsight

HDInsight (HDI) is a hosted Hadoop cluster that is part of Microsoft’s Azure cloud offering. HDI is based on the Hortonworks Data Platform (HDP). One of the major differences between the standard HDP release and HDI’s offering is the storage layer: HDI connects to local cluster storage via HDFS or to Azure Blob Storage (ABS) via the WASB protocol.


We now have a shim that allows us to leverage this cloud offering, something we’ve been seeing get more and more interest in the market.
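For reference, WASB paths follow a fixed general shape; the container and storage-account names below are made-up examples, not anything shipped with the shim:

```
wasb://mycontainer@myaccount.blob.core.windows.net/data/input/sales.csv
```

Anywhere PDI accepts an HDFS-style URL against an HDI cluster, a path of this form points the step at blob storage instead of local cluster HDFS.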


Hortonworks security support

This is a continuation of work from the previous release, available in the Enterprise Edition (EE).

[Image: added support for Hadoop user impersonation]
Earlier releases of PDI introduced enterprise security for Cloudera, specifically, Kerberos Impersonation for authentication and integration with Apache Sentry for authorization. 

This release of PDI extends these enterprise-level security features to Hortonworks’ Hadoop distribution as well. Kerberos impersonation is now supported on Hortonworks’ HDP. For authorization, PDI integrates with Apache Ranger, an alternative OSS component included in the HDP security platform.

Data Processing: Enhanced Spark Submit and Spark SQL JDBC

Earlier PDI and BA/Reporting releases broadened access to Spark for querying and preparing data through a dedicated Spark Submit transformation step and Spark SQL JDBC.

This release extends these existing features to additional vendors so they can be used more widely. Apart from the additional vendors, these features have now been certified with the more up-to-date Spark 2.0.

Additional big data infrastructure vendors supported for these functionalities, apart from Cloudera and Hortonworks:
  1. Amazon EMR
  2. MapR
  3. Azure HDInsight

VCS Improvements

Repository agnostic transformations and jobs

Until now, some step interfaces (the sub-transformation one being the most impactful) forced the ETL developer to choose, upfront, whether the referenced artifact lives on the file system or in the repository. This prevented us from abstracting the environment we’re working in, so checking things out from git/svn and just importing them was a no-go.

Here’s an example of a step that used this:

[Image: the classic way to reference dependent objects]

In general, we need to abstract the linkage to other artifacts (sub-jobs and sub-transformations), independent of the repository or file system used.

The linkage needs to work in all environments, whether it’s a repository (Pentaho, Database, File) or a file-based system (.kjb and .ktr files).

The linkage needs to work independently of the execution system: on the Pentaho Server, on a Carte server (with a repository or a file-based system), in MapReduce, and in future execution systems as part of the Adaptive Execution System (AES).

So we turned this into something much simpler:

[Image: the current approach to define dependencies]

We just define where the transformation lives. This may seem like a “what, just this??” moment, but now we can work locally or remotely, check things into a repository, and even automate promotion and control the lifecycle between different installation environments. I’m absolutely sure that existing users will value this a lot (as we can deprecate the stupid file-based repository).

KTR / KJB XML format

We did something very simple (in concept), but very useful. While we absolutely don’t recommend playing around with the job and transformation files (they are plain old XML files), we now guarantee that they are properly indented. Why? Because when you use a version control system (git/svn, don’t care which as long as you USE one!), you can easily identify what changed from version to version.
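To see why the indentation matters, here’s a minimal sketch: the .ktr content below is a simplified stand-in, not a real transformation file, and the paths are throwaway examples.

```shell
# Create a throwaway repo with a tiny indented "transformation" file.
mkdir -p /tmp/ktr-diff-demo && cd /tmp/ktr-diff-demo
git init -q .
git config user.email "demo@example.com" && git config user.name "demo"
cat > demo.ktr <<'EOF'
<transformation>
  <info>
    <name>demo</name>
  </info>
</transformation>
EOF
git add demo.ktr && git commit -q -m "initial version"

# Change one element; with indented XML the diff is one readable line,
# instead of a single giant line where everything looks changed.
cat > demo.ktr <<'EOF'
<transformation>
  <info>
    <name>demo_v2</name>
  </info>
</transformation>
EOF
git diff --unified=0 demo.ktr
```

The diff pinpoints exactly the renamed element, which is the whole point of keeping the files consistently indented.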

Repository performance improvements

We want you to use the Pentaho Repository. And until now, performance while browsing that repository from Spoon was crap (there’s no other way to say it!). We addressed that: it’s now about 100x faster to browse and open files from the repository.

Operations Mart Updates

Also known as the ops mart, available in EE. It used to work. Then it stopped working. Now it’s working again. Yay :/

I’ll skip this one. I hate it. We’re working on a different way to handle monitoring in our product, and at scale.

Other Data Integration Improvements

Apart from all the big new features above, some smaller data integration enhancements have been added to the product to make building data pipelines with Pentaho easier.

Metadata Injection Enhancement

Metadata injection enables creating generalized ETL transformations whose behavior can be changed at run-time, significantly improving data integration developer agility and productivity.
In this release, a new option to inject constants has been added, which helps make steps more dynamic with the metadata injection feature.
This functionality has also been extended to the Analytic Query and Dimension Lookup/Update steps, making them highly dynamic. This improves the Data Warehouse and Customer 360 blueprints and similar analytic data pipelines.

Lineage Collection Enhancement

Customers can now configure the location of the lineage output, including the ability to write to a VFS location. This helps customers maintain lineage in clustered / transient-node environments, such as Pentaho MapReduce. Lineage information helps with customers’ data compliance and security needs.

XML Input Step Enhancement

The XML Input Stream (StAX) step has been updated to receive XML from a previous step. This makes it easier to develop XML processing in a data pipeline when you are working with XML data.

New Mobile approach (and the deprecation of Pentaho Mobile)

We used to have a mobile-specific plugin, introduced in a previous Pentaho release, that enabled touch gestures to work with Analyzer.

But while it sounded good, in fact it didn’t work out as we’d expected. Having to develop and maintain a completely separate access path to information caused that mobile plugin to become very outdated.

On top of that, the maturity of browsers on mobile devices and the increased power of tablets make it possible for Pentaho reports and analytic views to be accessed directly, without any specialized mobile interface. Thus, we are deprecating the Pentaho mobile plugin and investing in the responsive capabilities of the interface.

Sounds bad? Actually it’s not: just use your tablet to access your EE Pentaho, it looks great :)

Pentaho User Console Updates

[Image: Sapphire theme in PUC]

Starting in Pentaho 7.1, Onyx is deprecated and removed from the list of available themes in PUC. The “Sapphire” theme, introduced in 7.0, is now PUC’s default theme, with Crystal as the available alternative.

Moreover, a refreshed login screen has been implemented in Pentaho 7.1, based on the Sapphire theme introduced in 7.0. This was already in 7.0 CE and is now the default for EE as well.


———————–


As usual, you can get EE from here and CE from here

This is a spectacular release! I should be celebrating! But instead, it’s 8pm, I’m stuck in the office writing this blog post, and already very, very stressed because all my 8.0 work is piling up in my inbox… :(


I’m out, have fun!


-pedro




Pedro Alves on Business Intelligence

PentahoDay 2017 – Brazil, Curitiba, May 11 and 12



[Image: PentahoDay 2017 header]



After a pause to rest in 2016, the biggest Pentaho event organized by the community is back: 2 days, May 11 and 12, with dozens of presentations, use cases, and even hands-on mini-labs in Curitiba, Brazil.

[Image: Pentaho Day speakers]



400 or more attendees are expected at this huge event. It’s really amazing, so if you’re anywhere near South America, be there!

[Image: event location]

Register here


Pedro Alves on Business Intelligence

Building Pentaho Platform from source and debugging it

After all, if it’s open source, that means we can compile it, right?

[Image: I love this hammer]
I’m sure you’ve guessed by now this is not an original image from me even though I’ve been told I’m very good at drawing stuff – and I always believe my daughter!

Sure, but sometimes it’s not as easy as it seems. However, we’re doing a huge consolidation effort to streamline all our build processes. Historically, each project, especially the older ones (Kettle, Mondrian, PRD, CTools), used its own build method, depending on the authors’ personal stances (and boy, there are some heavy opinions in here…).

Personally, I come from the CCLJTMHIWAPMIS school of thought (for those not familiar with it, the acronym means Couldn’t Care Less Just Tell Me How It Works And Please Make It Simple, very popular especially among lazy Portuguese people).

And we’re now doing this, slowly and surely, to all projects, as you can see from browsing through Pentaho’s Github.

So let’s take a look at an example: building the Pentaho Platform from source. Please note that we’ll try to make sure the project’s README.md contains the correct instructions. Also, this won’t work for all versions, as we don’t backport these changes; in the case of Pentaho Platform, this works for master and will appear in 7.1. Other projects will have their own timelines.

Compiling Pentaho Platform

1. Clone it from source

Ok, so step one, clone it from source:

$ git clone https://github.com/pentaho/pentaho-platform.git



(or use git:// if you already have a user)

2. Set up your m2 config right

Before compiling, you need to set some stuff in your maven settings file. In your home directory, under the .m2 folder, place this settings file. If you already have an m2 settings file, you’re probably familiar with maven in the first place and will know how to merge the two. Don’t ask me, I have no clue.

If you’re wondering why we need a specific settings file… I wonder too, but since my laziness is bigger than my curiosity (CCLJTMHIWAPMIS, remember?) I think I zoned out when they were explaining it to me and now I forgot.

3. Build it

This one is easy :)

$ mvn clean install

or the equivalent without the tests:

$ mvn clean package  -Dmaven.test.skip=true

If all goes well, you should see something like this:


[INFO]
[INFO] — maven-site-plugin:3.4:attach-descriptor (attach-site-descriptor) @ pentaho-server-ce —
[INFO]
[INFO] — maven-assembly-plugin:3.0.0:single (assembly_package) @ pentaho-server-ce —
[INFO] Building zip: /Users/pedro/tex/pentaho/pentaho-platform-master/assemblies/pentaho-server/target/pentaho-server-ce-7.1-SNAPSHOT.zip
[INFO] ————————————————————————
[INFO] Reactor Summary:
[INFO]
[INFO] Pentaho BI Platform Community Edition ………….. SUCCESS [  4.461 s]
[INFO] pentaho-platform-api …………………………. SUCCESS [ 10.149 s]
[INFO] pentaho-platform-core ………………………… SUCCESS [ 19.819 s]
[INFO] pentaho-platform-repository …………………… SUCCESS [  2.210 s]
[INFO] pentaho-platform-scheduler ……………………. SUCCESS [  0.172 s]
[INFO] pentaho-platform-build-utils ………………….. SUCCESS [  1.695 s]
[INFO] pentaho-platform-extensions …………………… SUCCESS [01:22 min]
[INFO] pentaho-user-console …………………………. SUCCESS [ 19.596 s]
[INFO] Platform assemblies ………………………….. SUCCESS [  0.059 s]
[INFO] pentaho-user-console-package ………………….. SUCCESS [ 16.399 s]
[INFO] pentaho-samples ……………………………… SUCCESS [  1.159 s]
[INFO] pentaho-plugin-samples ……………………….. SUCCESS [ 11.129 s]
[INFO] pentaho-war …………………………………. SUCCESS [ 45.434 s]
[INFO] pentaho-style ……………………………….. SUCCESS [  0.742 s]
[INFO] pentaho-data ………………………………… SUCCESS [  0.211 s]
[INFO] pentaho-solutions ……………………………. SUCCESS [31:31 min]
[INFO] pentaho-server-manual-ce ……………………… SUCCESS [01:15 min]
[INFO] pentaho-server-ce ……………………………. SUCCESS [01:51 min]
[INFO] ————————————————————————
[INFO] BUILD SUCCESS
[INFO] ————————————————————————
[INFO] Total time: 38:36 min
[INFO] Finished at: 2017-03-31T15:36:43+01:00
[INFO] Final Memory: 102M/1084M
[INFO] ————————————————————————



There you go! In the end you should see a dist file like assemblies/pentaho-server/target/pentaho-server-ce-7.1-SNAPSHOT.zip. Unzip it, run it, done.

Debugging / inspecting the code

The next thing you’d probably want is to be able to inspect and debug the code. This is actually pretty simple and common to all Java projects. It goes something like this:

1. Open the project in a Java IDE

Since we use maven, it’s pretty straightforward to do this – simply navigate to the folder and open the project as a maven project.

In theory, any java IDE would do, but I had some issues with Netbeans given it uses an outdated version of maven and ended up switching to IntelliJ IDEA.
[Image: IntelliJ IDEA]
I actually took this screenshot of IntelliJ myself, so no need to give credits to anyone

2. Define a remote run configuration

Now you need to define a remote debug configuration. It works pretty much the same in all IDEs. Make sure you point it to the Java Debug Wire Protocol (JDWP) port you’ll be using in the application you’re attaching to.
[Image: setting up a debug configuration]

3. Make sure you start your application with JDWP enabled

This sounds complex, but really isn’t. Just make sure your java command includes the following options:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8044

For the Pentaho platform it’s even easier, as you can simply run start-pentaho-debug.sh.

4. Once the server / application is running, simply attach to it

And from this point on, any breakpoints should be intercepted:

[Image: inspecting and debugging the code]

Submitting your fixes

Now that you know how to compile and debug the code, you’re a contributor in the works! Let’s imagine you add some new functionality or fix a bug, and you want to send it back to us (you do, right???). Here are the steps you need; they may seem extensive, but it’s really pretty much the normal stuff:
  1. Create a jira
  2. Clone the repository
  3. Implement the improvement / fixes in your repository
  4. Make sure to include a unit test for it
  5. Separate formatting-only commits from actual commits. So if your commit reformats a java class, you need a separate commit with [CHECKSTYLE] as the commit comment. Your main changes, including your test case, should be in a single commit.
  6. Get the code formatting style template for your IDE and update the year in the copyright header
  7. Issue a pull request against the project with [JIRA-ID] as the start of the commit comment
  8. For visibility, add that PR to the jira you created, email me, tweet, whatever it takes. Won’t promise it will be fast, but I promise we’ll look :)
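The git mechanics behind those steps look roughly like this; the JIRA id, branch name and file names are placeholders, and the clone is simulated with a local repo so the sketch is self-contained:

```shell
# Simulate a cloned repository locally (a real contribution would clone your fork).
mkdir -p /tmp/contrib-demo && cd /tmp/contrib-demo
git init -q . && git config user.email "demo@example.com" && git config user.name "demo"
git commit -q --allow-empty -m "baseline"

# Topic branch named after the jira you created.
git checkout -q -b BISERVER-12345-fix-npe

# Implement the fix plus its unit test, then commit with the jira id prefix.
echo "// fix + unit test" > RepositoryFix.java
git add RepositoryFix.java
git commit -q -m "[BISERVER-12345] Fix NPE when browsing the repository"

# Any pure reformatting would go in its own [CHECKSTYLE] commit.
git log --oneline -1
```

From here you would push the branch to your fork and open the pull request against the Pentaho project on GitHub.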



Hope this is useful!

-pedro


Pedro Alves on Business Intelligence