Category Archives: Data Mining

Why TIBCO Cloud Live Apps is a Good Fit for Blockchain Integration


Today more than ever, digital businesses need to quickly find and capitalize on new opportunities. TIBCO Cloud Live Apps is a low-code application platform that empowers citizen developers to build fully functional smart applications in mere minutes. These apps seamlessly integrate and extend existing systems, so creating and changing enterprise apps is now easy and fast.

One of the built-in capabilities of TIBCO Live Apps is an audit trail that keeps track of transaction history with date/time, participant name, and activity processing. However, because this audit trail data is stored in relational databases, it could potentially be exposed to tampering if unauthorized users gain access.

A blockchain can solve this problem by storing an immutable audit trail of all transactions, accessible only to those who need that information. With a blockchain, every transaction is recorded, and each block of transactions is hashed together with the previous block’s hash and appended to the end of the chain. Data that should stay secure and audit trail information that should never change are therefore safe from corruption by external parties.
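To make the hash-chaining idea concrete, here is a minimal sketch in Java; it illustrates the general technique only and is not TIBCO’s or any particular blockchain’s implementation. Each block’s hash covers its payload plus the previous block’s hash, so altering any earlier audit record invalidates every later hash and the tampering becomes evident.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.HexFormat;
    import java.util.List;

    // Minimal hash-chained audit trail: each block's hash covers its payload plus
    // the previous block's hash, so tampering with any record breaks the chain.
    public class AuditChain {
        record Block(String previousHash, String payload, String hash) {}

        private final List<Block> chain = new ArrayList<>();

        public void append(String auditRecord) throws Exception {
            String prev = chain.isEmpty() ? "GENESIS" : chain.get(chain.size() - 1).hash();
            chain.add(new Block(prev, auditRecord, sha256(prev + "|" + auditRecord)));
        }

        // Recomputes every hash; returns false if any block was altered.
        public boolean verify() throws Exception {
            String prev = "GENESIS";
            for (Block b : chain) {
                if (!b.previousHash().equals(prev) || !b.hash().equals(sha256(prev + "|" + b.payload()))) {
                    return false;
                }
                prev = b.hash();
            }
            return true;
        }

        private static String sha256(String s) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        }

        public static void main(String[] args) throws Exception {
            AuditChain audit = new AuditChain();
            audit.append("2017-12-01T10:15Z | alice | case #42 approved"); // hypothetical audit records
            audit.append("2017-12-01T10:20Z | bob | case #42 closed");
            System.out.println("chain valid? " + audit.verify());
        }
    }

In a real blockchain the blocks also carry consensus metadata and are replicated across participants, which is what makes the recorded trail practically immutable.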

Why I’m excited about TIBCO Live Apps

Both TIBCO BPM and blockchain provide process-based solutions. Live Apps provides a different kind of user experience, one geared towards citizen developers. As of yet, blockchain does not provide much graphical design-time tooling, but as the technology matures that will improve. Until then, Live Apps and blockchain provide a powerful combination for creating immutable process-based solutions.

Live Apps provides a fast and easy iterative approach for creating process-based applications. When the technical team creates new blockchain solutions and exposes the blockchain application services to business users, Live Apps citizen developers can very quickly expose the blockchain process to other users within the organization. This allows for a very quick turnaround time for creating and exposing blockchain applications in a production environment.

Empower your organization with low-code apps to streamline processes.


The Future of Casinos


Traditionally, casinos have been laggards in the technology landscape, slow to implement new, cutting-edge technologies. They believe in sticking with what is tried and true, but are slowly beginning to use new technologies to help increase revenue by tracking patrons’ behaviors to create more customized marketing programs.

Right now, many casinos are using basic analytic functions for revenue management, including managing casino sales, hotel rooms, and popular dates and times. They display this collected information in a reporting dashboard that the marketing and gaming departments use to create customized offers based on their patrons’ gaming behaviors.

Casinos will begin to implement advanced analytics, conducting statistical analysis to help determine the best rates for hotel rooms and to segment and profile patrons to target the right customers. Once customers are in the door, customized offers are put in place to get them to stay. The goal will be to keep patrons engaged and spending money outside gambling, in areas such as dining, events, and tourism, to build return patronage.

Artificial intelligence will allow casinos to get a real-time view of their floor to see where their patrons are and how they are moving about the casino. Casinos are able to collect demographic information about users in addition to when they card in and out and the number of pulls. Individual machines’ profit performance can be tracked to see where in the casino machines perform best and how much each machine is bringing in.

With all of this new information, casinos will need a place to store it all. Many casinos have already implemented a basic technology stack, equipped with the technology to connect their systems together. Casinos will begin to move their data out of data centers and into the cloud with the help of containers. They will look for flexible deployment options and use containers based on which casino applications will support them.

Casinos have the potential to use emerging technology to their advantage to better their operations and relationships with their patrons. More and more casinos are slowly implementing sophisticated technology to provide a better and more personalized customer experience while also increasing their revenue. The possibilities are endless; it’s up to casinos to see how much they are willing to gamble.

Contact us today so we can help your organization implement emerging technologies.


Decentralized Computing: The Building Blocks for a Digital Enterprise


The enterprise will forever be changed as blockchain technologies become more pervasive in everyday applications. Not only can decentralized technologies help reduce IT costs, they will also open the door to new applications that were not possible with legacy architectures and business models. The concept of pay-per-compute plays a big part here—it’s the idea that you only pay for the computing power that you require to run your applications and nothing more. The idea is nothing new (think Amazon AWS), but pay-per-compute resources will help shape the implementation of blockchain technologies across the digital enterprise, enabling business leaders to better serve their customers and employees and to deliver critical technologies.

In the near future, entire businesses will run on decentralized pay-per-compute platforms. Decentralized pay-per-compute platforms utilize blockchain technologies to execute computer code on a distributed network of machines. This model will disrupt the way enterprises interact with core business applications and the shift will enable a major reallocation of a company’s computing resources away from maintenance and upkeep toward innovation and business expansion. Decentralized technology will pave the way for a completely new way to deploy and license enterprise software.

We are already capable of developing applications and performing serverless pay-per-compute (PPC) application execution in the cloud. The current PPC offerings enable a new segment of business possibilities and cost-saving opportunities. With this architecture, businesses can greatly reduce IT costs and scale with agility and ease. Larger development budgets lead to business expansion in search of new revenue. However, there are some shortcomings: these technologies still require large server networks, somewhere, to store and execute apps; the server centers are a central point of failure or attack; servers may suffer outages and natural disasters; there are only a select number of vendors controlling costs; scaling can be a costly issue. Enter decentralized cloud networks.

This is where blockchain technology fits in and disrupts the current model. The decentralized pay-per-compute (dPPC) model may be one of the first viable blockchain use cases in the digital enterprise. Many companies are attempting to solve problems by injecting blockchain technologies into all aspects of the business. However, most of these solutions are akin to trying to fit a square peg in a round hole—they are being forced into incorrect applications. Let’s take a second to think about what blockchain technology really enables. Through the use of a decentralized ledger of consensus, blockchains allow us to both verify and access information stored on the chain… forever. Therefore, if an application is uploaded onto a blockchain, I can simply call that application at any time to run on a dPPC computer network.

Blockchain technologies can make use of extra storage and compute resources located all over the world on a vast network of underutilized computers. Just think about the amount of compute resources out there that are currently allocated but unused! Using blockchain technology, the dPPC model doesn’t rely on servers hosted by one company. Instead, code is broken down into small encrypted bits and pieces, stored on a network of machines all over the world, able to be executed at any time. Enterprises will pay reduced fees to store applications and tiny fees to execute them.
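As a purely illustrative sketch (assuming nothing about any specific dPPC platform’s protocol), the snippet below splits an application’s bytes into fixed-size chunks, encrypts each chunk, and indexes the ciphertext by its hash, a content address that a distributed network of nodes could later use to locate and reassemble the pieces.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;
    import java.util.HexFormat;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative only: chunk, encrypt, and content-address an application's bytes.
    public class ChunkAndAddress {
        public static void main(String[] args) throws Exception {
            byte[] appBytes = "pretend this is compiled application code".getBytes(StandardCharsets.UTF_8);

            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            MessageDigest sha = MessageDigest.getInstance("SHA-256");

            Map<String, byte[]> contentStore = new LinkedHashMap<>(); // address -> encrypted chunk
            int chunkSize = 16;
            for (int off = 0; off < appBytes.length; off += chunkSize) {
                byte[] chunk = Arrays.copyOfRange(appBytes, off, Math.min(off + chunkSize, appBytes.length));
                byte[] encrypted = cipher.doFinal(chunk);
                String address = HexFormat.of().formatHex(sha.digest(encrypted));
                // In a real network each chunk would be dispatched to different nodes.
                contentStore.put(address, encrypted);
            }

            contentStore.keySet().forEach(addr -> System.out.println("stored chunk at " + addr));
        }
    }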

The serverless style of computing is going to change the way applications are created. No longer will developers think about where a new application will reside—they will simply create one and allow someone else to worry about storing, securing, and scaling it (the three S’s). As more enterprise applications switch to run on shared compute platforms—PPC and eventually dPPC networks—they are going to become even more location agnostic. Today, many APIs are hosted in hybrid cloud environments and are managed using cloud-based management platforms. With a shift towards API-first architecture styles, cloud services will become more standardized. More of these services will transition to decentralized data centers, managed by someone else and utterly forgotten by their users. This allows developers to focus on creativity and innovation, while delegating the three S’s (storing, securing, scaling) to some other platform.

In the future it is entirely possible that there will not be large enterprise implementations of software, separately owned by big companies. Decentralized enterprises may consume services from networks of distributed nodes that will offer better uptime, faster response, reduced storage and execution costs, along with the most powerful toolkit the enterprise has ever had to innovate and grow. The future trend starts with serverless pay-per-compute business models that will morph into decentralized solutions as blockchain technologies gain adoption.

Find out how to reinvent your business using blockchain technology. I welcome your feedback and opinion on how you see blockchain expanding beyond its cryptocurrency roots.


Insights from the Hackathon at TIBCO Energy Forum 2017

Today’s booming competitive landscape demands that, to create a value-added analytics solution, a company must invest on multiple levels:

  • Find a BI tool that is fit for the problem and tech ecosystem
  • Train a team to work within the paradigm of the tool
  • Define their domain specific problems as analytic questions
  • Apply the team’s skills and tool capabilities to answer the analytic question

Often, companies will achieve 2-3 out of the above 4 and fail to reap the full benefit of their investments. At the TIBCO Energy Forum 2017 Hackathon, we set out to address this by keeping the format flexible.

The 2-hour guided hackathon was designed to address common analytic use cases in the energy sector and how TIBCO Spotfire could be leveraged to gain insight. We saw registrations from more than 130 Spotfire enthusiasts of all skill levels. They could choose to compete in the guided hackathon or follow along and explore the energy sector problems they found most relevant in the areas of machine learning, geoanalytics, data exploration, production data analysis, and integrating social media insights into their applications.


Participant entry and comments for “Points in Polygons” problem, which asked analysts to programmatically match the Wells to their regional properties.

The TIBCO tech team was excited to evaluate the participant entries and discover how they had transformed data and leveraged data functions to customize dashboards, identify unique patterns through visualizations, and bridge the gap between knowing the product and using it effectively to solve the business problem.

We at TIBCO recognize that building an ecosystem where consumers are able to develop on and contribute to the platform benefits the entire community. To this effect, we encourage you to try the Spotfire hackathon exercises and send in your entries, questions, and feedback to drspotfire@tibco.com.

Resource Links

  • TIBCO Energy Solutions contains datasheets, case studies, solutions and other resources specific to the Energy Sector.
  • TIBCO Community Exchange contains, under the “Analytics” category, reusable components and data functions that can be downloaded and added to your dxp file.
  • TIBCO Answers page is a question and answer forum for TIBCO Products. Spotfire questions can be asked under Analytics or Spotfire tags.
  • Dr Spotfire will feature biweekly online training and Q&A sessions for both new and existing Spotfire users.


Integrating SugarCRM Just Got Easier


CRMs are key to all businesses, both big and small. Continuing its long tradition of integrating business applications, TIBCO now supports seamless integration with SugarCRM. As with other CRMs, TIBCO is familiar with the strong SugarCRM community of developers and users and listens to their needs and wants.

SugarCRM is popular because of the many benefits that companies get. Sugar is well-known for its ease of deployment and flexibility. Unlike most other CRMs, Sugar can be deployed on-premises, as a SaaS app, in your private cloud, or in any type of deployment that works for your company. Developers get complete access to the code when they download Sugar, unlike with most other CRMs. Getting complete access to the code means you get ultimate control over its configuration. In addition, Sugar often costs less than half of the other guys. Of course, no CRM is perfect, but Sugar definitely has its benefits.

How does TIBCO help you get the most out of your SugarCRM investment? Well, our integration capabilities enable you to amplify the power of Sugar up to 11. You can connect Sugar using many different tools and platforms, but none gives you the completeness of data that TIBCO does. Using TIBCO, you get the ability to create a 360-degree view of your customers. With TIBCO, you get the most out of your investment in SugarCRM.

The TIBCO Cloud Integration platform won’t blow your TCO out of the water. We give you a fast, easy, flexible way to connect Sugar to all the systems inside and outside of your network and beyond. You can literally connect Sugar to any system or device, anywhere—whether it’s on-premises, in the cloud, or a hybrid of both, all while making sure you have money left to spend on other systems.

Users can also try out SugarCRM’s free version before purchasing the pro version. The same goes for TIBCO’s integration solution. You can try it free for 30 days. To learn more, visit our SugarCRM integration page.


Top 5 Moments from Mercedes-AMG Petronas Motorsport’s 2017 Season

The 2017 Formula One season has come to a close. This season has been quite exciting for Mercedes-AMG Petronas Motorsport fans, with each race bringing its own set of ups and downs for the team and drivers. Here are just a few highlights from the 2017 season:

Lewis Hamilton won the Drivers’ Championship

With two races remaining in the season, Lewis Hamilton took home his fourth Drivers’ Championship title on October 29, 2017 at the Mexico Grand Prix. Hamilton clinched the title despite placing ninth in the race, after suffering damage to his diffuser and underfloor in a first-lap collision. Hamilton finished the 2017 F1 season with a total of 363 points and nine wins. This is his fourth championship, having previously taken home the title in 2008, 2014, and 2015.

Mercedes-AMG Petronas Motorsport won the Constructors’ Championship

On October 22, 2017, Mercedes won its fourth consecutive Constructors’ Championship at the United States Grand Prix. The standings for the Constructors’ Championship are calculated by adding together the points of the team’s drivers. The Silver Arrows finished the 2017 season with a total of 668 points, a 146-point lead over Ferrari. This is the team’s fourth consecutive championship, having previously taken home the title in 2014, 2015, and 2016.

Hamilton set a new record every time he got on the track

Some of this year’s highlights: in the second race of the season in China, Hamilton equalled Jim Clark’s career record of 11 “hat-tricks”—races won from pole while setting the fastest lap—placing him equal second on the all-time list. At the Italian Grand Prix, Hamilton secured his 69th pole position, surpassing Michael Schumacher for the most pole positions of all time. Despite rain affecting the Singapore night Grand Prix for the first time ever, Hamilton came out on top to secure an unlikely win after starting the race in 5th place. During qualifying at the Japanese Grand Prix, Hamilton broke the track record held by Michael Schumacher by over 1.6 seconds. Later in the season at the United States Grand Prix in Austin, Hamilton took his 117th front row start, breaking Schumacher’s previous record of 116.

Valtteri Bottas joined the Mercedes-AMG Petronas Motorsport team

Prior to the start of the 2017 season, Mercedes added Finnish driver Valtteri Bottas to the team. Bottas replaced Nico Rosberg, who retired at the end of the 2016 season. During the 2017 season, Bottas won three races and took four pole positions. He claimed his first Grand Prix win at the Russian Grand Prix, finishing ahead of the Ferraris and becoming the fifth Finn to win a Grand Prix. Bottas finished the season third in the Drivers’ Championship with 305 points.

Mercedes-AMG Petronas Motorsport and TIBCO formed a global partnership

On April 13, 2017, TIBCO announced a global partnership with Mercedes-AMG Petronas Motorsport. As an Official Team Partner, we are providing the team with expertise in the area of advanced analytics through TIBCO’s System of Insights, while our logo is prominently displayed on the helmets of both Hamilton and Bottas.

Mercedes-AMG Petronas Motorsport finished the season with impressive stats: 12 wins in 20 races; 15 pole positions; four 1-2 finishes; 26 podiums; nine fastest laps; and an average winning margin over the nearest non-Mercedes driver of 13.1 seconds.

We wish the best of luck to Mercedes-AMG Petronas Motorsport in 2018 and look forward to kicking off the 2018 season with the Australian Grand Prix in Melbourne in March.

See how TIBCO is giving Mercedes-AMG Petronas Motorsport a competitive advantage with our system of insights.


Shiplap and Artificial Intelligence: Why AI is Important for Everyone, Not Just Those in Silicon Valley


I want to discuss the urgent need to advance AI for all American business, not just Silicon Valley insiders. But first I have a confession to make. Like millions of others, I’m a fan of the HGTV show “Fixer Upper.” Don’t judge! It’s a nice way to unwind after work. The show is now entering its fifth and final season, and fans know that most episodes follow a simple format: Agreeable hosts Chip and Joanna Gaines take potential homebuyers to three ugly houses around Waco, Texas, and explain how the wrecks could be renovated into modern dream homes within the parameters of the buyer’s budget. Tours of dilapidated farmhouses and garish bachelor pads ensue. A fixer-upper is selected, redesign details are conveyed, crummy cabinetry and inconvenient walls are demolished, setbacks and obstacles are dispatched, and the episode closes with a tour of the now-stunning homestead transformed into a showpiece by JoJo’s creative vision and Chip’s physical labor. Catharsis achieved.

While this home improvement pageant may seem far removed from the technology space, I propose that the show’s formula mirrors our digital evolution. And the present burgeoning of AI capabilities and solutions form a key component in that journey from awful to awesome.

For example, the ugly homes selected for renovation on “Fixer Upper,” with their Congoleum floors and Formica countertops, were once paragons of modern convenience. Times change, as do lifestyles, and what worked architecturally in the Reagan era seems absurdly awkward now. Put in technology terms, there is very little similarity between the way you currently use your smartphone and the way your parents used their wall-mounted landline phone back in the analog day. And just try to imagine using all your now-essential mobile apps on the once vaunted BlackBerry 7230.

Conversely, old houses sometimes contain desirable and time-tested features worth saving or repurposing: apron sinks, oak floors, shiplap. The same is true in technology. Witness the recent surge in artificial intelligence development: Anyone of a certain age will recall that the kinds of artificial neural networks currently powering Siri and Alexa were also hot way back in the 1980s. They’re important again today because we’ve lately been unburdened by the kind of resource constraints that prevented AI fruition in the 20th century.

Back then, the AI dream was stymied by expensive and restricted access to compute capacity coupled with a paucity of data. That blight was largely solved by the rise of the cloud, which allowed vast numbers of researchers and innovators almost unimaginable storage for data and power for computation from pretty much anywhere.

For example, Libratus, the poker AI that recently beat some of the world’s best Texas Hold ’Em players, was built with more than 15 million core hours of computation. While that project was powered by the Pittsburgh Supercomputing Center, something like AWS gives anyone the ability to cheaply spin up the equivalent of 100 high-power machines for about 150 bucks. Or consider the pace and volume of trades on Wall Street today: such commonplace and widespread algorithmic trading would have been unimaginable even with state-of-the-art technology just a few short years ago.

An exponential leap in resource availability, coupled with open source and APIs, also enabled the mobile revolution, which now lets us interact on the go with our little hand-held extensions of more powerful systems on the cloud. Add to all that the emergence of IoT and you start to see that pretty much every “thing” you can think of can now become a point of computation.

Which is all well and good, but has resulted in new challenges to comfortable human habitation—the unbridled proliferation of applications.

Time was, the average working person would deal with a few primary software packages and become expert in those applications. Nowadays, nearly everything you do requires use of a distinct application, while the concept of mastering any one of them grows more elusive by the minute. The way we schedule, organize, travel, pay our bills, create products, purchase or provide goods and services, communicate personally or professionally, consume news or acquire new skills—everything, everywhere is prefaced by some interface to technology that requires mental investment on your part to maneuver. Everything is an app and the cognitive load is crushing us as a people.

In our technology and in our homes, what we need today is different from what we built yesterday. Our old architecture no longer suits our way of life.

Hence, the hype around AI. The investment and development and deployment of new solutions utilizing machine learning and advanced analytics and autonomous everything is driven by the need to simplify and simultaneously expand our technologically dense existence, to reduce that cognitive load, to alleviate the friction between us and our machines. In the context of “Fixer Upper,” we are at the stage where we are standing in a smelly, cramped, derelict kitchen while JoJo maps out the possible vista of bright stainless steel, subway tile, and stupendous stone-topped islands opening on a vast and welcoming living space. I can almost touch the shiplap accent wall…

The point is that we are just starting to visualize what can be done to reimagine our way of life with the expanded resources we have at our disposal, and this is an important phase in making our technology meet our new needs. We can now use natural language processing engines and machine vision and edge intelligence to ease the burden and unlock new potential. This visualization is necessary to spur us toward obtaining that “dream kitchen,” so I won’t criticize all the AI hype. It’s a necessary motivator. We have yet to deal with demolition (legacy systems). We’re still going to face foundation issues (security). We’ll still have to solve our plumbing problems (integration). But the vision of that open, frictionless existence makes all the impending labor worthwhile no matter where you live.

There will undoubtedly be things that don’t work well, which we won’t discover until after we’ve moved into our technological fixer upper and started living in it. But if it’s anything like the vision, it will be a lot better than the ramshackle wreck we’re starting with.

I also believe the Discovery Channel’s “Fast N’ Loud” served up great lessons about enterprise integration, but that’s another story.


‘Tis the Season to be Traveling


The holiday season is in full swing, which means more people are traveling at this time of the year than usual to visit family and loved ones. In fact, travel during the holidays is up 23 percent compared to the average for the rest of the year; that is 23 percent more passengers taking to the skies, roads, and tracks over Christmas and New Year.

With this significant increase comes an increased amount of stress as passengers face larger crowds, delayed or cancelled flights, and traffic. However, the rise of mobile technology has given rise to the connected traveler. Travelers are now able to complete all steps of the traveler’s journey from a mobile device, including booking airline tickets, lodging, and entertainment.

During the holidays especially, communication between vendor and passenger is crucial. Passengers want to know as early as possible if their flight is delayed so they can plan how to fill their wait time. Fortunately, many airlines have enabled text and email communications to provide real-time updates on flight delays, gate changes, and cancellations to keep their passengers informed. It’s one less thing for travelers to worry about, removing stress and ambiguity.

Another stressor during the holidays is wait times. Whether it’s getting to your gate earlier than your departure, or being caught in a delay, sometimes there is a lot of time to kill. So, what do you do with all that time? With communication already happening between airlines and passengers, it can go one step further. Passengers can receive personalized offers from retailers in the airport for ways to pass the time. This can be a coupon for a coffee, or a discount on a massage, or even a restaurant suggestion. All these recommendations can help make travelers’ experience better and pass the time much faster.

Along with wait times, the holiday season brings a large number of travelers to travel hubs such as train stations and airports. It’s crucial to keep the crowd flow under control, planning for fluctuations and adapting to travel needs. Through sensors and IoT, airports are able to sense, stream, and map crowd flow. They are able to evaluate areas such as security, terminals, transportation, and retail shops to know and anticipate areas of high traffic. This real-time information helps direct the flow of crowds to avoid long lines and help everyone maneuver smoothly.

While passengers are traveling both domestically and internationally this holiday season, some places are always popular holiday destinations. Identifying these patterns is crucial for airlines, hotels, and trains to prepare for the demand accordingly. Understanding this information helps staff react faster to improve operational efficiency, identify revenue opportunities, and engage with customers. As a result, there is an increased number of flights to these destinations and an increased number of hotel rooms available, all offered at the best price possible.

The journey to see loved ones may be one that seems daunting, but mobile technology has made it much more stress-free than in the past. Where in the past it was just the traveler, today it’s the connected traveler. Today’s connected traveler wants to be kept up-to-date with the latest information and be given personalized offers to excite and delight their travel experience. And with the stress of holiday travel, a positive travel experience can set the tone for the whole season.

From all of us at TIBCO, we’d like to wish you a happy holiday and a seamless traveling experience if you’re traveling!

Check out how our technology can help travel service providers with the delivery of a connected journey.


‘Twas the Season for Integration


‘Twas the season for connecting, and as TIBCO is known to do, our products integrate, and are here to help you.

This pervasive integration ebook published by O’Reilly with care, in the hopes that digital transformation soon would be there.

A TIBCO Cloud™ Live Apps webcast was presented with ease, to show that creating low-code apps is a 5-minute breeze.

And integration for SaaS apps as taught by this webinar, can help you to learn what iPaaS is and how to set the bar.

While in this whitepaper there arose such a challenge, for IT architects and developers to digitally transform APIs they manage.

Away from on-prem, cloud grew in a flash, so we created a webinar series to help sketch your own path.

But that’s not all, we have loads more in store. Contact us so we can help you find the solution that you’re looking for.

Happy integration to all, and to all a great 2018!


Announcing Pentaho 8.0 – Coming in November to a theater near you!

Pentaho 8!


The first of a new Era

Wow – time flies… Another Pentaho World this week, and another blog post announcing another release. This time… the best release ever! ;)
This is our first Pentaho product announcement since we became Hitachi Vantara – and you’ll see that some synergies are already appearing. And as I said before, again and again… the Community Edition is still around! We’re not kidding – we’re here to rule the world, and we know it’s through an open source core strategy that we’ll get there :)

Pentaho 8.0 In a nutshell

Ok, let’s get on with this ’cause there are a lot of people at the bar calling me to have a drink. And I know my priorities!
  • Platform and Scalability
    • Worker Nodes
    • New theme
  • Data Integration
    • Streaming support!
    • Run configurations for Jobs
    • Filters in Data Explorer
    • New Open / Save experience
  • Big Data
    • Improvements on AEL
    • Big Data File Formats – Avro and Parquet
    • Big Data Security – Support for Knox
    • VFS improvements for Hadoop Clusters
  • Others
    • Ops Mart for Oracle, MySQL, SQL Server
    • Platform password security improvements
    • PDI mavenization
    • Documentation changes on help.pentaho.com
    • Feature Removals:
      • Analyzer on MongoDB
      • Mobile Plug-in (Deprecated in 7.1)
Is it done? Can I go now? No?…. damn, ok, now on to further details…

Platform and Scalability

Worker Nodes (EE)

This is big. I never liked the way we handled scalability in PDI. Having the ETL designer responsible for manually defining the slave server in advance, having to control the flow of each execution, praying for things not to go down… nah! Also, why ETL only? What about all the other components of the stack?
So a couple of years ago, after getting info from a bunch of people I submitted a design document with a proposal for this:
(screenshot: the original worker nodes design document)
This was way before I knew the term “worker nodes” was actually not original… but hey, they’re nodes, they do work, and I’m bad with names, so there’s that… :p
It took time to get to this point, not because we didn’t think this was important, but because of the underlying order of execution; we couldn’t do this without merging the servers, without changing the way we handle the repository, without having AEL (the Adaptive Execution Layer). Now we’ve gotten to it!
Fortunately, we have an engineering team that can execute things properly! They took my original design, took a look at it, laughed at me, threw me out of the room and came up with the proper way of doing things. Here’s the high-level description:
(diagram: high-level worker nodes architecture)
This is where I mentioned that we are already leveraging Hitachi Vantara resources. We are using Lumada Foundry for worker nodes. Foundry is a platform for rapid development of service-based applications, delivering the management of containers, communications, security, and monitoring toward creating enterprise products and applications, leveraging technology like Docker, Mesos, and Marathon. More on this later, as it’s something we’ll be talking a lot more about…
Here are some of the features:
  • Deploy consistently in physical, virtual, and cloud environments
  • Scale and load-balance services, helping to deal with peaks and limited time windows by allocating the resources that are needed
  • Hybrid deployments can be used to distribute load; when on-premises resources are not sufficient, scaling out into the cloud can provide more
So, how does this work in practice? Once you have a Pentaho Server installed, you can configure it to connect to the cluster of Pentaho Worker nodes. From that point on – things will work! No need to configure access to repositories, accesses, funky stuff. You only need to say “Execute at scale” and if the worker nodes are there, that’s where things will be executed. Obviously, “things will work” will have to obey the normal rules of clustered execution; for instance, don’t expect a random node on the cluster to magically find your file:///c:/my computer/personal files/my mom’s excel file.xls…. :/
So what scenarios will this benefit the most? A lot! Now your server will not be bogged down executing a bunch of jobs and transformations, as they will be handed out for execution on one of the nodes.
This does require some degree of control, because there may be cases where you don’t want remote execution (for instance, a transformation that feeds a dashboard). This is where Run Configurations come into play. It’s also important to note that even though the biggest benefit will be for ETL work, this concept applies to any kind of execution.
This is a major part of the work we’re doing with the Hitachi Vantara team; by leveraging Foundry we’ll be able to make huge improvements in areas we’ve wanted to tackle for a while but were never able to properly address on our own: better monitoring, improved lifecycle management, and active-active HA, among others. In 8.0 we leapfrogged ahead on this worker nodes story, and we expect much more going forward!

New Theme – Ruby (EE/CE)

One of the things you’ll notice is that we have a new theme that reflects the Hitachi Vantara colors. The new theme is the default on new installations (not for upgrades), and the others are still available.

Data Integration

Streaming Support: Kafka (EE/CE)

In Pentaho 8.0 we’re introducing proper streaming support in PDI! In case you’re thinking “hum… but don’t we already have a bunch of steps for streaming datasources? JMS, MQTT, etc?” you’re not wrong. But the problem is that PDI is a micro-batching engine, and these streaming protocols introduce issues that can’t be solved with the current approach. Just think about it – a streaming datasource requires an always-running transformation, and in PDI execution all steps run in different threads while the data pipeline is being processed; there are cases, when something goes wrong, where we don’t have the ability to do proper error processing. It’s not as simple as a database query or any other call where we get a finite and well-known amount of data.
So we took a different approach – somewhat similar to sub-transformations but not quite… First of all, you’ll see a new section in PDI:
(screenshot: the new Streaming section in PDI)
Kafka is the one that was prioritized as being the most important for now, but this will be extended to other streaming sources.
The secret here is in the Kafka Consumer step:

(screenshot: the Kafka Consumer step dialog)
The highlighted tabs should be generic for pretty much all of these steps, and the Batch tab is what controls the flow. So instead of having an always-running transformation at the top level, we break the input data into chunks – either by number of records or by duration – and a second transformation takes that input, with its field structure, and does a normal execution. The Abort step was also improved here to give you more control over the flow of this execution. This has been a long-standing request from the community – we can now specify whether we want to abort with or without an error, giving us an extra ability to control the flow of our ETL.
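To make the micro-batching idea concrete, here is a small standalone sketch that uses the plain Kafka Java client rather than the PDI step itself: records are accumulated until either a record-count or a duration threshold is reached, and the batch is then handed off for processing, mirroring the “number of records or duration” control described above. The broker address, topic, and consumer group are made up for illustration.

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;

    // Illustrative micro-batching loop: flush a batch either when it reaches
    // maxRecords or when maxDuration has elapsed, whichever comes first.
    public class MicroBatchConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // assumed broker
            props.put("group.id", "microbatch-demo");            // made-up group id
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            int maxRecords = 1000;
            Duration maxDuration = Duration.ofSeconds(5);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("sensor-readings"));  // made-up topic
                List<String> batch = new ArrayList<>();
                long batchStart = System.currentTimeMillis();

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
                    for (ConsumerRecord<String, String> r : records) {
                        batch.add(r.value());
                    }
                    boolean full = batch.size() >= maxRecords;
                    boolean expired = System.currentTimeMillis() - batchStart >= maxDuration.toMillis();
                    if (!batch.isEmpty() && (full || expired)) {
                        processBatch(batch);   // in PDI terms: run the child transformation on this chunk
                        consumer.commitSync(); // only commit once the chunk has been processed
                        batch.clear();
                        batchStart = System.currentTimeMillis();
                    }
                }
            }
        }

        private static void processBatch(List<String> rows) {
            System.out.println("processing batch of " + rows.size() + " rows");
        }
    }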
Here’s an example of this thing put together:
(diagram: the streaming micro-batch setup put together)
Even more interesting than that, this also works in AEL (our Adaptive Execution Layer, introduced in Pentaho 7.1), so when you run this on a cluster you’ll get native Spark Kafka support executed at scale, which is really nice…
Like I mentioned before, moving forward you’ll see more developments here, namely:
  • More streaming steps, and currently MQTT seems the best candidate for the short term
  • (and my favorite) Developer’s documentation with a concrete example, so that it’s easy for anyone in the community to develop (and hopefully submit) their own implementations without having to worry about the 90% of the stuff that’s common to all of them

New Open / Save experience (EE/CE)

In Pentaho 7.0 we merged the servers (no more of that nonsense of having a distinct “BA Server” and a “DI Server”) and introduced the unified Pentaho Server with a new and great-looking experience to connect to it:
(screenshot: the new Pentaho Server connection experience)
but then I clicked on Open file from repository and felt sick… That thing was absolutely horrible and painfully slow. We were finally able to do something about that! Now the experience is … well… slightly better (as in, I don’t feel like throwing up anymore!):
(screenshot: the new Open/Save dialog in PDI)
A bit better, no? :) It also has search capabilities and all the kinds of things you’ve been expecting from a dialog like this for the past 10 years! Same for the save experience.
This is another small but, IMO, always important step in unifying the user experience and working towards a product that gets progressively more pleasant to use. It’s a never-ending journey, but that’s not an excuse not to take it.

Filters in Data Explorer (EE)

Now that I was able to open my transformation, I can show some of the improvements that we did on our Data Explorer experience in PDI. We now support the first set of filters and actions! This one is easy to show but extremely powerful to use.
Here are the filters – depending on the data type you’ll have a few options, like excluding nulls, equals, greater/less than, and a few others. As mentioned, more will come with time.
Also, while the previous version only allowed for drill-down, we can now do more operations on the visualizations.

Run configurations: Leveraging worker nodes and executing on the server (EE/CE)

Now that we are connected to the repository, have opened our transformation with a really nice experience, and have taken advantage of these data exploration improvements to make sure our logic is spot on, we are ready to execute it on the server.
This is where the run configuration part comes in. I have my transformation: I defined it, played with it, and verified that it really works as expected on my box. Now I want to make sure it also runs well on the server. What was before a very convoluted process is now much simpler.
What I do is define a new Run Configuration, as described in 7.1 for AEL, but with a little twist: I don’t want it to use the Spark engine; I want it to use the Pentaho engine, but on the server, not the one local to Spoon:
(screenshot: the Pentaho Server run configuration)
Now, what happens when I execute this selecting the Pentaho Server run configuration?
(screenshot: the run dialog with the Pentaho Server run configuration selected)
Yep, that!! \o/
(screenshot: PDI and the Pentaho Server console during remote execution)
This screenshot shows PDI triggering the execution and my Pentaho Server console logging its execution.
And if I had worker nodes configured, what I would see would be my Pentaho Server automatically dispatching the execution of my transformation to an available worker node! 
This doesn’t apply only to immediate execution; we can now specify the run configuration on the job entry as well, allowing full control of the flow of our more complex ETL.

Big Data

Improvements on AEL (EE/CE apart from the security bits)

As expected, a lot of work was done on AEL. The biggest improvements:
  • Communicates with Pentaho client tools over WebSocket; does NOT require Zookeeper
  • Uses distro-specific Spark library
  • Enhanced Kerberos impersonation on client-side
This brings a bunch of benefits:
  • Reduced number of steps to set up
  • Enable fail-over, load-balancing
  • Robust error and status reporting 
  • Customization of Spark jobs (e.g., memory settings)
  • Client to AEL connection can be secured
  • Kerberos impersonation from client tool 
Not to mention performance improvements… One benchmark I saw that I found particularly impressive: AEL is practically on par with native Spark execution! Kudos to the team, just spectacular work!

Big Data File Formats – Avro and Parquet (EE/CE)

Big data platforms introduced various data formats to improve performance, compression, and interoperability, and we added full support for two very popular big data formats: Avro and Parquet. ORC will come next.
When you run in AEL, these will also be natively interpreted by the engine, which adds a lot to the value of this.
The old steps will still be available on the marketplace but we don’t recommend using them.

Big Data Security – Support for Knox

Knox provides perimeter security so that the enterprise can confidently extend Hadoop access to more new users while maintaining compliance with enterprise security policies; it is used in some Hortonworks deployments. It is now supported in the Hadoop cluster definitions if you enable the property KETTLE_HADOOP_CLUSTER_GATEWAY_CONNECTION in the kettle.properties file.
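As a rough, hedged example, enabling that option might look like the line below in a kettle.properties file; the property name is the one mentioned above, but the value shown is an assumption, so check the Pentaho 8.0 documentation for the exact format your cluster definition expects.

    # kettle.properties
    # Enable the Knox gateway connection option for Hadoop cluster definitions.
    # The value below is an assumption; consult the Pentaho 8.0 docs for the exact format.
    KETTLE_HADOOP_CLUSTER_GATEWAY_CONNECTION=true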

VFS improvements for Hadoop Clusters (EE/CE)

In order to simplify the overall lifecycle of jobs and transformations, we made Hadoop clusters available through VFS, using the format hc://hadoop_cluster/.
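For example, a job or transformation step that reads from a named cluster could point at a VFS URL like the one below; the cluster name and path are made up for illustration.

    hc://my_named_cluster/user/pentaho/input/sales_2017.csv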

Others

There are some other generic improvements worth noting:

Ops Marts extended support (EE)

Ops Mart now supports Oracle, MySQL, and SQL Server. I can’t really believe I’m still writing about this thing :(

PDI Mavenization (CE)

Now, this is actually nice! PDI is now fully mavenized. Go to https://github.com/pentaho/pentaho-kettle, do a mvn package and you’re done!!!
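Assuming Git and Maven are installed, that boils down to something like:

    git clone https://github.com/pentaho/pentaho-kettle.git
    cd pentaho-kettle
    mvn package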

———–

Pentaho 8 will be available to download mid-November.

Learn more about Pentaho 8.0 and find a webinar here: http://www.pentaho.com/product/version-8-0
Also, you can get a glimpse of PentahoWorld this week by watching it live at: http://siliconangle.tv/pentaho-world-2017/

Last but not least: see you in a few weeks at the Pentaho Community Meeting in Mainz! https://it-novum.com/en/pcm17/

That’s it – I’m going to the bar!
-pedro
