Tag Archives: CloudNative

Why You Need To Create A Cloud-native API Developer Portal

November 22, 2019   TIBCO Spotfire

In early 2019, TIBCO released the fully cloud-native version of TIBCO Cloud Mashery, which provides first-class support for cloud-native deployment in public clouds, private clouds, on-premises environments, and at the edge. This unlocked the opportunity for API-led digital businesses to evolve to cloud-native architectural platforms, such as Kubernetes. More recently, we’ve announced that the TIBCO Cloud Mashery Developer Portal is also available as a fully cloud-native deployment—in private clouds and on-premises. 

An API developer portal is like a virtual storefront for API products developed by digital businesses. It allows developers to access a business’s API products, and get any necessary authorization, in order to begin evaluating documentation and building APIs into applications. In today’s globalized business world, this is a great way for API developers and API consumers to stay connected; however, for many industries, cloud-based APIs pose possible security concerns. By providing a means to deploy an API developer portal on-premises or in a secure, private-cloud infrastructure, financial services, telecom, airline, and other highly-regulated industries are able to effectively execute their digital strategies. 

For example, banks looking to develop an Open Banking platform for PSD2, while also complying with stringent security regulations, can manage their APIs and their API developer portal locally while providing global access to their ecosystems. Every organization that does business within the EU must also comply with strict regulations, regardless of industry. GDPR requires businesses to follow stringent rules when handling customer data, especially when it is accessed by APIs. This makes a cloud-native, private-cloud approach to API developer portals vital.

Cloud-native deployment continues to become more popular because it allows for optimal scalability, security, and development agility of applications. The ability to use API portals to engage internal, external, or partner developers allows you to create new business models to reach customers all over the world, while knowing your data is secure wherever it may be deployed. The TIBCO Cloud Mashery Local Developer Portal—with API-driven content management for easy integration into your CI/CD pipeline —is now available for full cloud-native deployment, either in private clouds or on-premises. 

To learn more about the benefits of a cloud-native API developer portal and how you can maintain data locally while providing access globally, download a free 30-day trial today.

The TIBCO Blog

How Sisense Engineered Its Cloud-Native Linux Deployment From the Ground Up

July 19, 2019   Sisense

A couple of weeks ago, we officially launched the Cloud-Native Sisense on Linux deployment after a successful beta release cycle that kicked off in Spring 2019.

As of 2017, Linux was running 90% of the public cloud workload. It is increasingly the OS of choice for enterprises and the cloud due to its many advantages: lower TCO, higher security, improved stability and reliability, flexibility, and more. Given this importance, we made it an organizational priority to invest in a Sisense on Linux deployment in late 2017.

When we sat down to plan this execution strategy, we realized there were several different ways we could approach it. For us, it was critical to not only do it right so we didn’t waste time and resources but also to deliver a product that would lead our customers into the future and support their needs in the ever-growing cloud environment.

Here’s how we re-architected Sisense with the right technologies and frameworks for the task at hand without simply porting the code over.

The First Few Months

When I was tasked with the responsibility of building a Sisense Linux deployment in late 2017, a few small steps had already been taken. Two developers had started a Linux project that initially consisted of simply porting code from one OS to the other.

They started with C/C++ code, which usually takes the longest to migrate from Windows to Linux. By the end of 2017, the team was able to show their first demo, which ran queries over the ElastiCube Manager, our high-performance database, using C/C++.

Taking Stock and Restarting the Project

Although some more progress was made, in January 2018 we decided to take a step back and rethink our approach to this project. It is often less scary to take what you have and what you know and continue without questioning your approach. However, that is not always the smartest or best course of action.

Before jumping headlong into merely porting code from one OS to the other, it was necessary to consider whether it made sense to migrate all components as-is or, instead, to determine what language or architecture would work best for the task at hand.

We decided that where it was required and where it made sense, we would not simply port over code but rebuild the component from scratch using the most relevant stack and technology for what that component was meant to do. 

There were three “buckets” in this decision-making process: 

  • Components that would be migrated. 
  • Components that would be rebuilt from scratch using the right framework while maintaining institutional knowledge and providing a similar user experience. For example, we concluded that several components had to be rewritten in Java. To enable this, we dedicated more than a month to training the entire engineering team in Java. We also recruited Java experts to help guide and govern the design.
  • Components that would stay as-is (for example, JavaScript) with very minimal changes such as updating file names and paths.

In hindsight, this was a critical decision that paved the way for a modern, enterprise-grade, full-stack analytic application that is highly-performant, reliable, and scalable. The best part is that we were able to build it in a little over a year. 

The Right Technology for the Job


Let’s break this down some more. The Sisense application has a few key tasks handled by different components:

1. Sisense ElastiCube or Data Engine

The Sisense ElastiCube crunches hundreds of millions of records and needs to be highly optimized. It has to sit close to the OS for tighter control over what is being done, with less overhead. Most of this code was in C and C++ and was left that way.

Takeaway: C & C++ are good to use when building highly optimized processes that are close to the OS, such as building a database engine.

2. ElastiCube Management Service and Query Service

The ElastiCube Management Service and Query Service were moved away from C# and C++ and rebuilt in Java. Java is a highly portable and mature language well suited to building mission-critical, high-performance, CPU-intensive applications. These components demand such agility and complexity that we needed the many frameworks that come with Java, letting us focus on our application logic without compromising on performance.

We already had (and continue to have) a big footprint in Node.js, and it would have been easier to use Node.js everywhere. However, we resisted that urge and chose the best language and framework for each job.

Node.js is great for responsive operations with a low memory footprint, and it is faster to write, debug, and develop in. However, Java has much better performance, more caching, and long-lived state capabilities. Java also offers compilation and type checking, which is important, especially when merging releases and branches over the years; such merges can introduce many defects if they are not caught as compilation errors.

For example, the Management service needs to be aware of all system statuses and of Kubernetes, with a lot of control over the system. It made sense to build it in Java, as the service needs to be efficient, highly available, and multi-threaded.

On the other hand, application parts that are more tightly integrated with the UI are easier to build in Node.js. For example, the original pivot was implemented in C# as an IIS application. The pivot is a full-stack component, so it made sense to rewrite it in Node.js, which allows a full-stack developer to work on both the front end and the back end in the same technology.

For web services, C++ is not recommended because the development time is too expensive. For those reasons, we eventually decided to go with Java and, in particular, the Spring Boot framework. We also considered a few other options, such as Guice and EJB (which we immediately disqualified).

Takeaway: Java is useful when building mission-critical, high-performance applications that are CPU-intensive with the need for more caching, long state capabilities, and a robust set of available frameworks. Node.js, on the other hand, is useful for responsive operations with a low memory footprint and when a developer wants to work on both the front-end and back-end in the same technology (which is the genesis of Node.js).
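
To make the Spring Boot choice concrete, here is a minimal sketch of a Java web service of the kind described above. This is an illustrative example only, not code from the Sisense platform; the class name and endpoint are invented for the sketch.

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    // Hypothetical management-style service; names are illustrative.
    @SpringBootApplication
    @RestController
    public class StatusService {

        public static void main(String[] args) {
            SpringApplication.run(StatusService.class, args);
        }

        // Trivial status endpoint; a real management service would report
        // the health of system components here.
        @GetMapping("/status")
        public String status() {
            return "OK";
        }
    }

Spring Boot's framework support (dependency injection, an embedded server, type-checked configuration) is exactly the "focus only on application logic" benefit described above.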

3. Data connectors

The .NET connector framework was replaced with a new framework based on Java, because .NET support on Linux comes via .NET Core, which was introduced in 2016 and does not contain all the functionality of the .NET Framework for Windows. The connector framework acts as a pipe for transferring data. On top of this, the actual drivers for accessing most database providers are written in Java, so it was only natural to code the framework in Java too. The actual data crunching is done inside the ElastiCube, which is coded in C/C++.

Takeaway: Java is a natural choice for building data connectors due to its large ecosystem including database drivers and rich frameworks.
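
As a rough illustration of that takeaway, the sketch below uses plain JDBC, the standard Java database API for which most vendors ship drivers. The connection URL, credentials, and query are placeholders, and this is not the Sisense connector framework itself.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ConnectorSketch {
        public static void main(String[] args) throws Exception {
            // Any JDBC-capable source is reached the same way; only the URL
            // and the driver jar on the classpath change per database.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/sales", "user", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, amount FROM orders")) {
                while (rs.next()) {
                    // A real connector would stream rows onward to the data
                    // engine; printing stands in for that here.
                    System.out.println(rs.getLong("id") + " " + rs.getDouble("amount"));
                }
            }
        }
    }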

In summary, there are certain languages most appropriate for certain operations, and choosing the correct language for the operation at hand is key.

Guidelines on Choosing the Right Technology for the Job

Containerizing Microservices

Another critical change in the Linux deployment was related to the architecture itself. While many components in the Windows deployment are microservice based, given the opportunity to re-architect Sisense, we decided to build a containerized microservices application using Docker for containerization and Kubernetes for orchestration.

We initially debated between Docker Swarm and Kubernetes for orchestration. While our teams were comfortable with Docker Swarm, which is often considered the more DevOps-oriented option, Kubernetes better handled other developer requirements like versioning, upgrades, releases, and rollbacks, and it was becoming the de facto standard for container orchestration. With the future developer user in mind, we decided to go with Kubernetes.

An interesting debate that comes with building a microservices architecture is the number of microservices you’ll break your application into.

Two years ago, we had a fairly monolithic application with four or five services. That is not the case anymore: we have around 20 services today. As a rule of thumb, we try not to create too many microservices, especially ones that lengthen the call chain. It is okay to add services that are not on the call chain, but a given operation shouldn't involve all the microservices in the call chain (for example, 4-5 services is okay, but not all available services). It is important to remember that while microservices are a great way of rapidly building scalable and resilient applications, they also add complexity, especially in inter-service communication and, eventually, debugging. You need to find a balance between the number of microservices you create, supportability, and maintainability.

A New Way of Doing Things with Shared Storage, Updated Monitoring & Logging

Re-architecting the platform also gave us the opportunity to replace old ways of doing things with better, highly performant new ones. For example, the Windows way of creating highly available data is to store copies of the data on multiple servers. With this re-architecting, we were able to do away with that and rebuild the experience on highly distributed, highly available shared storage technologies such as GlusterFS, Amazon EFS, Azure file share, Google Filestore, other cloud storage providers, and many more.

Another example is logging. One of the challenges of a microservices-based architecture is debugging, because of the number of components involved and all the different places logs can be stored. One of the first steps we took to alleviate this was to build a combined log using FluentD, which collects all the data in a centralized place. In addition, we added Grafana and Prometheus, which track counters of what's going on in the system and provide a detailed view of system metrics.
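
As a small illustration of the metrics side, the sketch below exposes a counter from a Java service using the Prometheus simpleclient library (assuming the simpleclient and simpleclient_httpserver artifacts are on the classpath); the metric name and port are invented for the example, and this is not Sisense's actual instrumentation.

    import io.prometheus.client.Counter;
    import io.prometheus.client.exporter.HTTPServer;

    public class MetricsSketch {
        // Monotonic counter that Prometheus scrapes via the /metrics endpoint.
        static final Counter QUERIES = Counter.build()
                .name("queries_total")
                .help("Total queries handled.")
                .register();

        public static void main(String[] args) throws Exception {
            HTTPServer server = new HTTPServer(9091); // serves /metrics on port 9091
            while (true) {
                QUERIES.inc();     // simulate handling one query per second
                Thread.sleep(1000);
            }
        }
    }

Grafana would then chart counters like this one from Prometheus to give the detailed system view described above.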

Learnings Along the Way

While we have come out on the other side of a successful project, the journey was not without difficulties. Some of these were challenges that we learned from and others were limitations that we have had to work with in order to provide the best experience for the end user.

1. Embracing open source technologies

We learned that embracing well-tested and mature open source technologies is a game changer in how quickly and efficiently we can build a large-scale, enterprise-grade application. This tech is not something to be afraid of. Better yet, some of these technologies gave us a completely different way of thinking about the problem (like the shared storage solution).

2. Wiping code and rebuilding where needed

We learned not to be afraid to wipe out code and rebuild. Today, we look back at a small portion of a component that we left in C++ and realize that it was a mistake; we could have saved time and done a better job by simply rewriting it. Keeping C components that were not originally written for multi-threaded operation, instead of rewriting them to be multi-threaded, ended up being more expensive.

3. Keeping customer and end-user experience in mind

When we embarked on the Sisense on Linux deployment, it was very clear to us that we wanted to provide the same user experience in both the Windows and the Linux deployments. 

A big reason for this was to ensure that we could use the carefully curated automated testing assets across both deployments. These assets (various databases, different schemas, dashboards, validated results) had been collected and built over the last couple of years, and keeping them was a top priority. The ability to test both deployments with the exact same assets was an important tool for ensuring we retained data integrity between the two systems. This meant that, in certain areas, we chose not to change something on the front end that we could have changed, in order to ensure the end-user experience was not affected.

We also wanted to make the transition process from Windows to Linux (when requested) quick and painless. To address this, we built a migration tool that allows our customers to move all their work assets from Windows to Linux seamlessly, so that they do not have to worry about rework.

4. Organization-wide focus and cross-company collaboration

A critical component of our success lay in cross-collaboration across R&D teams, and later with non-R&D teams, across the company. The Linux deployment is a completely new platform that touches every aspect of our organization and, at any given point, we had a significant number of Sisense developers contributing to it. 

Additionally, this required changes outside of R&D.

  • Technical support teams needed to know how to debug issues and support customers using a completely new OS and new technology. 
  • Pre-sales engineers needed to know how to successfully install and demo the new deployment to customers and needed to learn the details of the new architecture. To facilitate the training of the tech teams, the teams not only subscribed to external courses, but we also flew internal R&D trainers around the globe to educate the teams at various Sisense locations.
  • Sales and marketing teams also needed to become familiar with cloud technology and the benefits of the Cloud-Native Sisense on Linux deployment in order to convey these benefits to customers and prospects.

It was essential to garner buy-in across the organization with the full-support and prioritization coming from senior leadership. Without a vision and cross-organization goals, no project like this could come to fruition.

Summary

The Cloud-Native Sisense on Linux deployment marked a milestone in our journey: we became the only data and analytics platform with an advanced containerized microservices architecture, purpose-built from the ground up with best-of-breed technologies like Docker containers and Kubernetes orchestration, that can be deployed in the cloud or on-premises. It provides the full Sisense platform, including the Elastic Data Hub, which offers both live in-database connectivity to all major cloud databases and Sisense's proprietary In-Chip™ Performance Accelerator. The deployment fits seamlessly into DevOps processes and enables faster delivery, resiliency, and scalability.

We started this journey with a vision of building a true next-gen analytics platform that will lead the way in how organizations build large-scale analytic applications. We are proud of the platform being deployed with our customers today. 

We successfully made this transition in a little over a year. While we had some setbacks and difficulties (as with any project), the decisions around how to approach it, like not shying away from rewriting components where needed, not only sped up the process but also allowed us to build a platform that provides the most value to our customers in the cloud- and web-based world we work in.

As we continue rolling out this new, full-stack Cloud-Native Sisense deployment, we are carefully working with teams across Sisense to make this a great experience for our customers and enable them to go from data to insights even faster in a highly-scalable and resilient environment.

Tags: Cloud | linux

Blog – Sisense

TIBCO Talks About Going Cloud-native with The New Stack at Cloud Foundry Summit 2019

June 22, 2019   TIBCO Spotfire

This year's Cloud Foundry Summit in April was an exciting one for us here at TIBCO. We announced that TIBCO Cloud™ Mashery® is now cloud-native. This cloud-native deployment supports Cloud Foundry customers via deployment on PKS (Pivotal Container Service). It's a 'pivotal' move for us, further empowering customers with cloud-native API management using TIBCO Cloud Mashery. Mashery can be deployed anywhere, and it supports next-generation cloud-native architectures to deliver more speed, scale, and agility for market-leading enterprises.

Watch the interview with The New Stack that took place at the Summit on April 22nd, in which TIBCO Senior Product Manager Beerinder Rodey, interviewed by Joab Jackson, managing editor of The New Stack, discusses the cloud-native launch of TIBCO Cloud Mashery and what it means for users. According to Beerinder, in today's world we see both cloud-native and traditional enterprises, and we hear a lot of customers ask, “How can I leverage and augment the systems and applications I already have on-premises?”

That's the great thing about cloud-native Mashery. You can go fully on-premises, use SaaS, or run containers in your private cloud, and manage API endpoints for services deployed anywhere. Cloud-native Mashery lets you bring cloud-native deployments on-premises, and it is a good starting point for customers whose architecture doesn't align with the public cloud but who want all the benefits of cloud.

APIs have also evolved to become fully integrated with DevOps and with front- and back-end development. Well-developed APIs serve an integral role, allowing organizations to realize their business goals more efficiently and rapidly.

In the video, Beerinder also discusses:

  • The current state of API management
  • How organizations are using API management for both external and increasingly internal use cases
  • The evolution to cloud-native API management
  • How Mashery now makes Kubernetes deployments easy and integration with DevOps tooling even easier
  • The certification of TIBCO BusinessWorks™ Container Edition on Cloud Foundry

To learn more about the TIBCO Cloud Mashery API management platform, please visit our API management landing page or contact us.

The TIBCO Blog

Deploy Anywhere, Manage Everywhere with Cloud-Native Mashery

February 20, 2019   TIBCO Spotfire

TIBCO is proud to announce that its API Management solution, TIBCO Cloud Mashery®, is now fully cloud-native. TIBCO Cloud Mashery is the ultimate API Management solution for enterprises moving toward cloud-native architectures in order to develop and deploy faster, align seamlessly with their DevOps tooling and processes, and achieve the speed, agility, and innovation that digital businesses require.

Cloud-Native Scaling

Scale dynamically across the cloud, wherever your API management solution or APIs are deployed, using the API platform that runs more cloud API traffic than any other provider: TIBCO Cloud Mashery. With first-class support for Kubernetes, Mashery lets you manage your entire API platform, including the APIs of connected apps and microservices, within the same containerized environment.

Deploy Anywhere

You can now utilize the capabilities of the market's leading platform to align with any DevOps or enterprise cloud strategy in any environment, including any public cloud, private cloud, or fully containerized on-premises system. Cloud-native Mashery is a seamless fit with DevOps app development and deployment processes because it operates within the same containerized environments and integrates seamlessly with native DevOps tooling.

Manage your APIs from Everywhere

In the cloud-native world, APIs can truly live anywhere, changing how they are managed and secured. With TIBCO Cloud Mashery, you can control your entire API program, regardless of where your APIs run — in the cloud, on-premises, or at the edge. And while your services and microservices may reside anywhere, you can manage them all using one platform (including microgateways), through a single pane of glass.

These are just a few of the great things that the all-new TIBCO Cloud Mashery can do for your digital business. To find out more, download a free trial today.

The TIBCO Blog

You’re Going Cloud Native—Don’t Forget Cloud-Native API Management

February 13, 2019   TIBCO Spotfire

This is part two in a three-part series on the evolution to cloud-native application development. Read last week’s introduction to cloud-native app development, and check back in soon for a special cloud-native announcement.

Last week, we introduced you to cloud-native application development and deployment; today we're going to talk about cloud-native API management and its vital role in your digital business strategy. The goal of cloud-native is faster development, faster deployment, more control, and a more agile business.

API management includes the creation, productization, security, and analytics of APIs. A cloud-native API management platform is designed to operate natively within your broader cloud-native stack of tooling and processes. It provides a lightweight, easy-to-deploy solution that manages all of your APIs seamlessly, regardless of where those services run.

Here are a few of the top benefits that will make you want to invest in a cloud-native API management platform:

Containers and Beyond

Is container management tool Kubernetes part of your cloud-native strategy? The ability to deploy apps or microservices within containers, and orchestrate those containerized services, is a noted benefit. You can deploy and isolate microservices and complete applications that are able to scale and run independently. However, not all apps are developed with such capabilities. With a cloud-native API management platform, you can deploy the platform in a cloud-native manner, while still managing API-led services that reside anywhere in your enterprise ecosystem, cloud or not.

API-Led Design

Without APIs, there is no integration, and API-led integration is the key to a seamless and efficient digital business. This is why cloud-native apps are always developed under the premise that they will connect to and work together with a variety of other apps. Cloud-native begins and ends with utilizing and connecting to a variety of systems and microservices via APIs to encourage interoperability and reuse, key tenets of efficiency.

Cloud Agnostic

Taking advantage of the cloud doesn't mean relinquishing control of your assets. With cloud-native API management, you maintain a fully containerized, portable, and scalable platform, deployed however you choose. Cloud-native means you can deploy all services on-premises, in a private cloud, or on public cloud container services like Azure Kubernetes Service (AKS) or AWS Elastic Container Service (ECS). You can continue to take advantage of your own data center investments, or those of a public or private cloud, while still benefiting from the elastic and agile nature of the cloud.

DevOps Alignment to Drive Agility

DevOps is a strategic IT framework implemented during your cloud-native evolution that drives more efficiency and agility for your digital business. Each microservice developed within a cloud-native app goes through an independent life cycle, managed via an agile DevOps process. For the app to function properly, multiple continuous integration/continuous delivery (CI/CD) routes work together to deploy and manage the application. Cloud-native API management is a seamless fit with DevOps app development and deployment models, because the platform operates within the same containerized environments as the apps.

The Bottom Line: Deploy Anywhere, Manage Everywhere

Evolving to cloud-native architecture is top of mind for all digital businesses trying to stay ahead of the curve. That's not always easy, given the amount of legacy on-premises tooling and services that are too expensive and time-consuming to replace in the immediate future. A cloud-native API management solution gives you the ability to create, manage, and analyze your APIs seamlessly with cloud-native tooling, aligning with your company's overall cloud-native evolution. Further, a single view of all on-premises, cloud, and edge APIs is key to enterprise efficiency and transformation. The flexibility to begin developing greenfield cloud-native services, while also maintaining operations on hybrid architectures, is what really makes cloud-native API management valuable.

Enterprises continue to trend toward cloud-native solutions, and bringing that approach to your API management platform is crucial for a complete digital transformation. To learn more about how TIBCO® can help with cloud-native technologies, check out TIBCO.com, and stay tuned for our special announcement on February 19.

The TIBCO Blog

An Introduction to Cloud-Native Applications

February 6, 2019   TIBCO Spotfire

This is part one in a three-part series on the evolution to cloud-native application development. Next week’s part two will cover cloud-native API management, and tune in February 19 for a special cloud-native announcement.

Applications drive successful digital platforms, and their development and management continue to evolve. While many businesses still operate substantial legacy tech on-premises, many are looking to the cloud for development. But what does it mean to develop a cloud-native app, and why should you care?

Rather than implying where an app resides, cloud-native refers to how it was created and deployed. Cloud-native apps are developed with tools that allow them to take full advantage of cloud benefits, meaning they can be built and changed more quickly, are more agile and scalable, and can be connected with other existing apps more easily.

Now that you understand cloud-native as a concept, let’s explore some of the best practices developers use when creating cloud-native apps, as well as why those elements are vital in driving innovation.

Microservices

Cloud-native apps are developed as loosely-coupled microservices, allowing each microservice to run only as needed and to be updated or altered independently, rather than requiring the full application to run as a whole.

Scalability

Cloud-native apps are deployed on self-service, shared infrastructure, and are built to be elastic and scalable. They can scale up to utilize more cloud resources when required while consuming as few resources as possible otherwise. This is beneficial because apps often share tenancy with others, and the ebb and flow of resources can be shared as necessary.

Containerization

Cloud-native apps are built as a collection of independent, autonomous services packaged in lightweight containers. These containers are built to be scalable, as well as fully independent from one another. Because the unit of scaling shifts to the container, resource utilization is optimized.

Tooling

Best-of-breed languages and frameworks are used in the development of cloud-native apps. In order to create the best app possible, developers will pick and choose which tools to implement into each microservice, which allows the completed application to have the most efficient functionality possible.

DevOps

Each microservice of a cloud-native app goes through an independent life cycle, managed via an agile DevOps process. For the app to function properly, multiple continuous integration/continuous delivery (CI/CD) routes will work together to deploy and manage the application.

There are myriad other features of cloud-native apps that allow them to be faster, more flexible, and more efficient than their legacy counterparts. To learn more about how TIBCO® can help with cloud-native technologies, check out TIBCO.com.

The TIBCO Blog

Understanding The Distinction Between Cloud-Based And Cloud-Native Application Development

September 27, 2018   SAP

Contemporary discussions about software development are rife with the mention of “cloud-native development,” even though the term rarely receives the elaboration and specification worthy of such a new concept. For example, cloud-native development is often confused with cloud-based development, which takes place by means of a browser or online interface.

While cloud-based and cloud-native development share many characteristics, cloud-native development differs from browser-based development in important ways.

For starters, cloud-native development refers to application development that is container-based, dynamically orchestrated, and leverages microservices architectures as per the CNCF’s definition of cloud-native development. Because cloud-native applications run in containers and are dynamically orchestrated, they exhibit many of the attributes of applications deployed in cloud-based infrastructures, such as elastic scalability and high availability.

In the case of cloud-native applications, container orchestration frameworks take responsibility for attributes such as automated scalability and high availability that are typically associated with cloud computing. Moreover, the microservices-based quality of cloud-native applications translates into modular applications that accelerate application design, development, and lifecycle management.

Understanding containers and orchestration frameworks

Cloud-native applications are fundamentally container-native applications and require developers to achieve familiarity with containers and associated orchestration frameworks such as Kubernetes. The need to demonstrate proficiency with Kubernetes requires developers to obtain expertise with developer tools that provide insight into relationships between discrete containers. Furthermore, developers need proficiency with the design of microservices-based application architectures that are executed in Kubernetes.

What is most notable about cloud-native applications is the conjunction of a container-based deployment infrastructure, marked by the scalability and high-availability characteristic of the cloud, with microservices-based architectures that promote enhanced development agility and velocity. The microservices quality of cloud-native applications, for example, accelerates the delivery of application enhancements, updates, and debugging. This approach thereby creates a strong foundation for the implementation of continuous integration and delivery processes and the integration of DevOps into the development lifecycle.

Examples of cloud-native applications include container-native, microservices-based applications and container-based, functions-as-a-service applications. The fact that cloud-native applications encompass both container-native and functions-as-a-service applications illustrates how the cloud-native development paradigm stands at the forefront of innovation at the intersection of infrastructure and cloud-based development.

By eschewing monolithic applications that are deployed on-premises, cloud-native development inaugurates a new modality of application development. This modality is marked by the automation of scaling and high availability at the level of containers, in conjunction with microservices architectures that facilitate expedient debugging and issue resolution.

Challenges specific to cloud-native development involve application lifecycle management and, specifically, debugging multi-container applications to perform multifactor root cause analysis. Other challenges include creating an alerting and monitoring infrastructure for Kubernetes-based applications that delivers actionable business intelligence for application performance management purposes. In addition, cloud-native developers need to learn how to effectively leverage developer tools to design and develop container-native applications that exemplify loosely coupled, microservices-based architectures.

Mastering the basics

Whereas cloud-based development refers to application development executed by means of a browser that points to a cloud-based infrastructure, cloud-native development refers more specifically to application development grounded in containers, microservices, and dynamic orchestration. Developers would do well to master the basics of container-native development that leverage cloud-based IDEs and development frameworks, since container-native development is likely to become increasingly important in the future. Because developer tools that specialize in container-native development are rapidly maturing, developers should pay close attention to how their IDEs and developer tools are adding features and functionality to facilitate the management of loosely coupled systems.

Preparing for the future of application development

Monolithic applications are rapidly becoming relics: legacy applications that are challenging for developers to update or modernize for a variety of deployment infrastructures. Cloud-native development, however, embodies the future of application development. It undergirds the development of modern applications marked by enhanced portability across a multitude of infrastructures, because they are implemented on container-based infrastructures. Developers should expect rapid innovation in container-native developer tools and strive to update their skills accordingly by understanding the intersection of cloud-based and cloud-native development.

Read more on how to evaluate and what to expect from DevTools in this IDC Vendor Spotlight paper sponsored by SAP, Addressing Digital Transformation Challenges with Application Development Tools, #EMEA44204018, August 2018. Authored by Arnal Dayaratna and Larry Carvalho.

Digitalist Magazine

TIBCO Boosts Cloud-Native Offerings with New Support for Cloud Foundry Platform

April 21, 2018   TIBCO Spotfire

TIBCO BusinessWorks™ Container Edition now supports Cloud Foundry’s Container Runtime and Pivotal Cloud Foundry 2.0 and higher.

With BusinessWorks Container Edition now supporting Cloud Foundry’s Container Runtime and Pivotal Cloud Foundry 2.0 and higher, TIBCO is proud to announce its strengthened partnership and continued support for Cloud Foundry.

“TIBCO and the Cloud Foundry Foundation have partnered since the initial launch of BusinessWorks Container Edition, providing highly scalable and enterprise-ready applications necessary for digital transformation,” said Abby Kearns, executive director, Cloud Foundry Foundation. “We’re excited to see our collaboration grow as the market continues to evolve.”

Cloud Foundry Container Runtime gives large enterprises the opportunity to run containers at scale and in production.

Cloud Foundry realized that Kubernetes was winning the container management race, so they worked hard to create Cloud Foundry Container Runtime, a solution that incorporates Kubernetes, to help enterprises manage their containers more efficiently.

On the heels of this announcement, TIBCO has also extended support for using Project Flogo with Project Riff, an open-source project that packages functions as containers, connects them with event brokers, and allows functions to scale with events. This allows developers to take flows built in Project Flogo and deploy them as functions in Project Riff.

Both of these announcements allow customers to leverage the best combination of cloud tools available on the market today to reach their business goals in a more efficient, effective manner.

To learn more, please read the press release.

The TIBCO Blog

Build Cloud-native Applications Using TIBCO BusinessWorks Container Edition on AWS Marketplace

October 28, 2017   TIBCO Spotfire

With Amazon Web Services (AWS), you can provision compute power, storage, and other resources, gaining access to a suite of elastic IT infrastructure services as your business demands them. Among many other benefits, a major reason this model is appealing is the ability to control costs elastically while providing complete flexibility and agility to use infrastructure on demand.

To further maximize the advantages of the AWS cloud computing delivery model, developers are turning to a cloud-native approach to building and running applications.

A cloud-native application is a program designed specifically for a cloud computing architecture. Such applications are designed to take advantage of cloud computing frameworks, which are composed of loosely-coupled cloud services. This means that developers must break down tasks into separate services that can run on several servers in different locations.

This notion of granularity requires solution architects to think in terms of composite rather than monolithic design. In composite design, you bring together a collection of services to create a business application.

When it comes to implementing these composite applications, the need for “integration logic” is critical for success. For example, you may need to:

  • Route and orchestrate incoming API calls to coordinate backend workflows that may require interaction between multiple backends
  • Simplify the protocol and format mapping issues encountered when interconnecting multiple services
  • Automatically compose outgoing messages by transforming and aggregating data coming from different backends
  • Use packaged adapters for third-party systems or services (such as mainframes) or packaged applications

While the developer can handle this integration logic manually, TIBCO BusinessWorks™ Container Edition was designed to hide the complexity of this integration from developers, allowing them to focus on the business logic of the application, and not the interconnecting of services.

TIBCO BusinessWorks Container Edition, available on the AWS Marketplace

TIBCO BusinessWorks Container Edition and plug-ins for AWS allow you to quickly and easily build cloud-native applications by connecting APIs, microservices and backend systems. With its drag-and-drop graphical development environment, graphical data mapper, and vast library of connectors, you can create cloud-native integration applications and deploy them on AWS, leveraging native features of AWS Elastic Container Service or your choice of Docker-based PaaS built on AWS for container management.

For developers working within the AWS ecosystem, the fact that BusinessWorks Container Edition is available on the AWS Marketplace provides:

  • Quick and easy access to an industry-leading integration solution, designed specifically for building cloud-native applications
  • A consumption-based pricing model, where you pay only for the number of containers running per hour
  • The flexibility to scale on demand and manage software cost as you go

Deep integration with AWS Ecosystem

To simplify deploying BusinessWorks Container Edition on AWS, TIBCO leverages AWS CloudFormation to set up all the necessary resources, collectively known as a CloudFormation stack. This model also allows BusinessWorks Container Edition to integrate seamlessly with a variety of AWS services, such as EC2 Container Service (ECS), EC2 Container Registry (ECR), Application Load Balancer (ALB), CloudWatch, etc., to leverage their capabilities for container management, logging, auto-scaling, load balancing, service discovery, and much more. This also removes opportunities for manual error, increases efficiency, and ensures consistent configurations over time.

The capabilities provided as part of this integration include:

  • CloudFormation template to set up a highly available ECS cluster in an auto-scaling group. CloudFormation automates the creation of all the resources required for this task, such as the VPC, public and private subnets across 2 AZs, Internet Gateway, NAT Gateway, EC2 instances, etc.
  • Ability to create BusinessWorks Container Edition-based Docker image and push it to ECR
  • CloudFormation template to extend and customize BusinessWorks Container Edition Docker image
  • Ability to download CloudFormation templates and tailor them to suit your needs
  • AMI to create EC2 instances and set up your own Container Management platform using tools like Kubernetes and Docker Swarm

TIBCO BusinessWorks™ Container Edition and plug-ins for AWS hide the complexity of integrating APIs, microservices, and backend systems from developers, allowing them to focus on the business logic of their applications. You can now access BusinessWorks™ Container Edition on the AWS Marketplace with a consumption-based pricing model: you pay only for the software and AWS resources you use, on an hourly basis, billed by AWS after usage.

You can learn more about BusinessWorks Container Edition and Plug-ins for AWS on the AWS Marketplace or by visiting our website at: https://www.tibco.com/products/tibco-businessworks

The TIBCO Blog

Cloud-Native Integration Microservices with Netflix’ Hystrix Circuit Breaker and TIBCO BWCE

January 21, 2017   TIBCO Spotfire

Cloud-native microservices offer many benefits. You can develop, test, deploy, and maintain independent lightweight services. You can easily combine various technologies, including programming languages such as Java or Go, and tools like integration middleware. However, as you no longer build monoliths, “that complexity has moved and […] increased [to] the outer architecture”, as Gartner states. For these reasons, new design patterns (have to) emerge to solve the challenges of independent, distributed microservices.

Circuit Breaker design pattern for resilient microservice architectures

The Circuit Breaker is one of these design patterns. It enables:

  • Failing fast and recovering rapidly
  • Preventing cascading failures
  • Latency tolerance logic
  • Fault tolerance logic
  • Fallback options

This is realized by rejecting service requests if a service is not available for whatever reason; in a microservice architecture there can be many reasons or issues. The rejection is configured by various parameters such as request volume threshold or error threshold percentage.

Martin Fowler has a great explanation of the Circuit Breaker design pattern. Therefore, I will just explain it briefly using one of his graphics:

[Figure: Martin Fowler's Circuit Breaker state diagram]

The circuit is closed in the beginning, and all service requests get a successful response from the service. If a threshold of 5 failures is reached, the circuit is opened and new service requests are rejected. After a timeout of 1 minute, a new service request tries to determine whether the service is available again; the circuit is therefore half open in this state. Depending on the success or failure of that request, the circuit is then closed or opened again.

This relatively simple pattern can be very powerful (depending on the configuration options) and allows you to build resilient microservice architectures with reduced latency and lower resource consumption.
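
To make the state machine concrete, here is a minimal hand-rolled sketch in Java using the thresholds from Fowler's example (5 failures, 1-minute timeout). This illustrates the pattern itself; it is not the Hystrix or BWCE implementation, and all names are invented for the sketch.

    import java.util.function.Supplier;

    public class CircuitBreaker {
        enum State { CLOSED, OPEN, HALF_OPEN }

        private State state = State.CLOSED;
        private int failures = 0;
        private long openedAt = 0;

        private static final int FAILURE_THRESHOLD = 5;      // failures that trip the circuit
        private static final long RESET_TIMEOUT_MS = 60_000; // 1 minute

        public synchronized <T> T call(Supplier<T> service, T fallback) {
            if (state == State.OPEN) {
                if (System.currentTimeMillis() - openedAt >= RESET_TIMEOUT_MS) {
                    state = State.HALF_OPEN;  // let one trial request through
                } else {
                    return fallback;          // reject fast while open
                }
            }
            try {
                T result = service.get();
                failures = 0;
                state = State.CLOSED;         // success (re)closes the circuit
                return result;
            } catch (RuntimeException e) {
                failures++;
                if (state == State.HALF_OPEN || failures >= FAILURE_THRESHOLD) {
                    state = State.OPEN;       // open (or re-open) the circuit
                    openedAt = System.currentTimeMillis();
                }
                return fallback;
            }
        }
    }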

Netflix’ open source implementation ‘Hystrix’

Hystrix was open-sourced by Netflix a few years ago and is by far the most widely used framework implementing the Circuit Breaker pattern in microservice architectures.
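
For illustration, wrapping a service call in a Hystrix command might look like the sketch below. The group name, backend call, and fallback value are placeholders; the two circuit-breaker properties shown correspond to the request volume threshold and error threshold percentage mentioned earlier (the values used here happen to match Hystrix's defaults).

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;
    import com.netflix.hystrix.HystrixCommandProperties;

    public class BackendCommand extends HystrixCommand<String> {

        public BackendCommand() {
            super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("Backend"))
                    .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                            .withCircuitBreakerRequestVolumeThreshold(20)      // min requests per window
                            .withCircuitBreakerErrorThresholdPercentage(50))); // error % that trips it
        }

        @Override
        protected String run() {
            // Call the protected backend here; a thrown exception counts as a failure.
            return "real response";
        }

        @Override
        protected String getFallback() {
            // Served while the circuit is open or when run() fails.
            return "fallback response";
        }

        public static void main(String[] args) {
            System.out.println(new BackendCommand().execute()); // synchronous execution
        }
    }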

Microservice architectures and cloud-native platforms such as Cloud Foundry or Kubernetes can leverage Hystrix to build resilient microservice deployments. In addition, you can leverage the Hystrix Dashboard for near-real-time visualization; the video Hystrix Dashboard—Tech Talk and Demo is a great introduction. Here is a screenshot explaining the key aspects of the dashboard:

[Screenshot: Hystrix Dashboard with its key aspects annotated]

The dashboard does its job. But in more sophisticated microservice architectures, you might prefer the benefits of a real-time streaming analytics visualization tool like TIBCO Live Datamart. This not only allows monitoring of live streaming data, but also applies rules and predictive analytics for automated or human-driven decision-making.

TIBCO BWCE + Netflix’ Hystrix = resilient integration microservices

A resilient architecture is even more important for integration services because they interconnect everything. If the integration service is not resilient, fails all the time, or becomes unresponsive, the complete enterprise gets into trouble. Therefore, circuit breakers can help a lot to make integration services more resilient. The following demo setup includes several cloud-native components:

[Figure: demo setup]

With a focus on Circuit Breakers, we use TIBCO BusinessWorks Container Edition (BWCE), Docker, and Netflix' Hystrix. The same could be achieved on other cloud-native platforms like Kubernetes or Cloud Foundry. BWCE offers out-of-the-box support for circuit breakers; you just enable it and configure the required parameters:

[Screenshot: BWCE circuit breaker configuration parameters]

Details about the configuration and options can be found in the BWCE documentation.

Live video: Development, deployment, and monitoring with TIBCO BWCE and Hystrix Dashboard

The following video demonstrates how to use BWCE with Netflix' Hystrix, the open source implementation of the Circuit Breaker design pattern, to develop, deploy, and monitor cloud-native middleware microservices.


Find more information about cloud native middleware in the TIBCO Community: Microservices, Containers, and Cloud Native Architectures. Please use the BusinessWorks Community Q&A to ask questions and discuss concepts or use cases for BWCE and design patterns like Circuit Breaker.

The TIBCO Blog