OpenStack’s next challenge: interoperability

Since its inception, OpenStack – the open source platform for cloud computing – has been out to prove that it is capable of hosting mission-critical enterprise workloads.

Earlier this year at the OpenStack Summit in Austin, it received just such validation, with analysts and customers agreeing that the platform is ready for prime time, as more than half of the OpenStack projects moved from proof of concept to production.

So what’s the next major goal for OpenStack?

The answer came at the OpenStack Summit in Barcelona, during the Wednesday keynote by Jonathan Bryce, executive director of the OpenStack Foundation.

According to Bryce, the next hurdle is application portability, not just between various distributions of OpenStack – there are easily three dozen of those already – but also, somewhat surprisingly, between OpenStack and third-party services like AWS, Azure, SoftLayer and Google Cloud.

“To me, it’s very clear that the future is multi-cloud,” Bryce said.


Source: Max Smolaks, DCD

Interop Challenge at OpenStack Summit 2016, Barcelona

A rising tide

Six months ago in Austin, Don Rippert, IBM’s general manager for Cloud Strategy, Business Development and Technology, issued an interesting challenge – he asked community members to try and run the same applications and automation tools across as many different OpenStack distributions as possible.

The challenge involved deploying the LAMP stack – the traditional foundation of countless online services – using Ansible and Shade, as well as running a Docker Swarm cluster.
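The portability the challenge tested rests on tooling like Shade, a Python library that hides API differences between OpenStack clouds, so the same deployment code can run unchanged against any distribution. A minimal sketch of booting the web tier of such a deployment might look like the following – the cloud, image and flavor names are hypothetical placeholders, not details from the challenge itself:

```python
# Sketch only: Shade reads credentials for a named cloud from the
# operator's clouds.yaml, so switching distributions means changing
# configuration, not code. Names below are illustrative.

def lamp_server_spec(name="interop-demo"):
    """Server parameters for the web tier of a LAMP deployment."""
    return {
        "name": name,
        "image": "ubuntu-16.04",  # assumed image name in the target cloud
        "flavor": "m1.small",     # assumed flavor name
        "wait": True,             # block until the server is active
    }

def deploy(cloud_name="my-openstack"):
    import shade  # third-party library: pip install shade
    # The identical call works against any OpenStack distribution
    # that exposes the standard compute API.
    cloud = shade.openstack_cloud(cloud=cloud_name)
    return cloud.create_server(**lamp_server_spec())
```

In the actual challenge, Ansible playbooks drove this kind of provisioning across each participant's cloud, which is what made a like-for-like comparison possible.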

In Barcelona, the challenge culminated in a live demo in front of 5,000 summit attendees. It featured representatives of 16 organizations, each running their own flavor of OpenStack or their own type of cloud service, and yet achieving the same result. This marked the first time interoperability across different OpenStack environments had been demonstrated at such a scale.

Participants included AT&T, Canonical, Cisco, DreamHost, Deutsche Telekom, Fujitsu, HPE, Huawei, IBM, Intel, Linaro, Mirantis, OSIC, OVH, Rackspace, Red Hat, SUSE and VMware. And the instance deployed by Linaro was based on a 64-bit ARM server.

“When I talk to customers that are looking at OpenStack, or using OpenStack, or are really using any major open source project, they want three things that they think distinguish the open source approach from a proprietary approach,” Rippert said.

“They seek additional innovation based on the community, and looking at the problem from a different direction. They expect better integration, since we each know what the code does and we can handle the code itself. And they expect interoperability – they expect that three vendors, five vendors, 16 vendors are all offering an OpenStack platform, and their workloads will run across all of those platforms.

“The one area of doubt, for our customers and some industry analysts, is interoperability. Will the vendors allow interoperability? Or is it in their best interests to make anyone on their distribution stay on that distribution?

“It is really not in our best interest. Because the way we gather momentum around cloud in general and OpenStack in particular, is by being interoperable. Yes, we compete with each other, all 16 of those companies up there, and we will continue to compete with each other. But you know, a rising tide gives us more surface area to compete in. We have all the reasons at IBM to think we can win more than we are going to lose, and I’m sure everyone on that stage thought the same way.”

In Barcelona, the OpenStack Foundation also announced continued progress of the ‘OpenStack Powered’ program, which requires relevant products and services to run interoperability tests in order to carry the OpenStack brand. There are now 46 products and services meeting such standards.

But the real interoperability work is yet to begin – the long-term objective for the community is to achieve a situation where customers can easily move their OpenStack workloads into proprietary clouds and back again. The key here is standard APIs. Of course, the OpenStack Foundation can’t force a company like Amazon to adopt its APIs, but where there’s a will, there’s a way – like the exotic solution recently developed by Platform9 and described as “stretching imagination” by Bryce.

“Multi-cloud is not just about OpenStack clouds – it’s really about all of these cloud technologies providing the resources and capacity that users need to get what they want done,” he said.

“So I want you to think about APIs, think about multi-cloud, think about AWS and the different ways to tie those together. That could be a very interesting thought exercise, right?”

