What is Cloud Native and why should I care?
Alexis Richardson, founder and CEO of Weaveworks, did a recent talk at Software Circus about “What Cloud Native is and why should I care.”
Cloud Native is open source Cloud computing for applications, a trusted tool kit for modern architectures.
In an effort to describe Cloud Native in the video below, we talk through the lessons we learned in building our product Weave Cloud using Cloud Native:
- Web scale needs to be an available thing. We need good tools that are open-source and that can run anywhere.
- Use good patterns. Containers are cattle, not pets. It’s about establishing the patterns that you need to be highly available and automated. It’s all about Mean Time to Recovery.
- Customers like what they call “invisible infrastructure”: the idea that they can focus on their application, not on tools. Infrastructure has to be boring if you want to focus on your app.
Today’s topic is, what is Cloud Native and why should I care? Why should you care? Why should anyone care? Let’s just answer that now.
It is open source Cloud computing for applications. The CNCF (where I’m chairman of the Technical Oversight Committee, as well as being Weaveworks CEO and co-founder) exists to provide a complete and trusted toolkit for modern architectures. That’s it. I’ve answered the question. We can all get the beers now.
No, there’s more.
I mentioned Weaveworks. I’ve done a few other things before, mostly to do with open source product companies, including running the Spring team for a while. I was responsible for what became Spring Boot, and previously did things at VMware involving Rabbit, Cloud Foundry and OpenStack. Before that, I was the CEO and founder of RabbitMQ. I’ve done some other things as well. Some of these companies are even still going.
We had this customer that wanted to move its application to the Cloud. To do that they had to create a duplicate of the application running on Amazon as a DR (disaster recovery) site. This is one of the first, if not the first, really, really big financial services companies moving a critical application to the Cloud. They did so using containers for portability. They’ve got the same app running in containers on metal in their data center, and in containers in the Cloud on Amazon.
This is a production use case of a large scale for business critical application for a major company, enabled by containers and Cloud Native technology, which is why this is so important and why we’ll be talking about it today.
The thing that they liked is what they call “invisible infrastructure.” The idea that they can focus on their application, not on tools. That’s a really important lesson and we’ll have a few more of these lesson slides.
Customers want to migrate applications to the cloud, keep some behind the firewall. They want to change cloud providers. They don’t want to be locked into Amazon. They want to move back into their datacenter, maybe move on to Google. They want everything to just work so they can focus on their app. The question is, how can we enable that?
It’s not a one company thing. This is an ecosystem-wide, community-wide thing, that everybody is involved with. Ultimately, lots and lots and lots of applications that people in this room and your friends will build, will have these kinds of requirements in the future.
What we’re doing is recapping techniques pioneered by big web companies like Netflix. I personally think it’s amazing how Netflix can work in a hotel room with dodgy broadband nowadays. They’ve really got it right, in terms of their delivery.
This slide is from a few years ago, from an AWS re:Invent talk; I highly recommend looking it up. It really sets out what Netflix sees as the key criteria of Cloud Native, which for them is consumer-facing. Real people are using this, so it has to be available. People get really annoyed if things break.
It’s global. There are customers all over the world. It’s web scale. For Netflix this really does mean that they can keep on adding capacity whenever they like, anywhere in the system. Which led them down a certain path.
The payoff, from a business point of view, is increasing availability while, at the same time, increasing their ability to change their applications. That is one of the characteristics of this new breed of modern applications, architectures and practices: teams, particularly in Silicon Valley but elsewhere as well, are able to respond to customer requests more or less as they come in.
Airbnb can get an email from a disgruntled customer saying “I went to this webpage and the picture was wonky and the feedback form didn’t work.” They can change that, and if the change doesn’t work, they can roll it back instantly, all while keeping the system up for everybody else. This gives them a huge advantage over people like Marriott and InterContinental Hotels, which is why those companies are so frightened of things like Airbnb.
This is why people like Marc Andreessen say software is eating the world. It truly is the case that software, which used to be treated as a cost center by large companies, is now seen as a source of intellectual property, business value and profit, because you can make your customers happy with software if you adopt the practices that companies like Netflix call Cloud Native. Businesses want to do this, and that is why anyone can raise money now to go off and be a Cloud Native company in Silicon Valley. Jokes aside, if you look at the numbers, you can see some pretty startling differences.
This is from Puppet Labs’ State of DevOps, an annual report they do, quite a high-quality survey, from 2015, comparing what they think of as the leaders and the laggards in their distribution, in terms of things like deployment frequency and Mean Time to Recovery.
As you can see, the Mean Time to Recovery gap has gone, in a year, from a 48x difference between somebody who’s good at this stuff and somebody who’s not, to a 168x difference, and that’s truly astonishing. In practice, that means if your site goes down, you can fix it in five minutes versus your customers being offline for hours. The deployment-frequency difference is 200x: this is the gap between companies that struggle to make small changes to their website and people who change things continuously. This is something you can quantify, and that’s why people get so excited about unicorns, but in reality the rest of the world is still trying to catch up.
Cloud Native is something that everybody thinks is important. Everyone is aware of it, but how are we going to democratize it? How are we going to make this technology available to everybody, not just an elite of nerds in Silicon Valley?
I’m going to do a little bit of a case study around our own experience trying to become a Cloud Native company at Weaveworks. We have an application called Weave Cloud, which we run on EC2 and want to run on other clouds; its purpose is to give you monitoring and management for Cloud Native applications, to make them easy. We decided it didn’t make sense for us to have customers trying to adopt Cloud Native without having adopted the practices ourselves, so that’s what we did. We re-architected our app to be Cloud Native, and this is what it currently looks like.
There are some microservices up here for managing the core website. There’s a whole observability stack down here, at the core, which is Prometheus and Weave Scope. There is a multi-tenant monitoring service based on Prometheus and a new thing we’ve written called Frankenstein. And there’s a whole set of data services up here.

One thing that’s very important about this, for us, is that at no point in this diagram do we mention Amazon. Well, actually, we mention it here. We’re going to get rid of that soon, but we want to be able to say that. We do mention Kubernetes, which we happen to be running on, because this is about focusing on our application. It also has other elegant properties: all the different parts, in a microservices style, are independent of one another, so if we want to take a piece down and repair it while the other pieces stay up, we can do that. We can scale these pieces independently, so if we see growth here but not there, we can adapt to that. All of this basically means we can manage cost down and keep profit up.
So key points:
This is a highly available, Netflix-style app, running 24/7; it’s secure, it’s automated, and we focus on the application, not on the infrastructure itself, so we can run almost any kind of container. Well, not quite, because there are a few products and services we’re still tied to. All the pieces work together: the Prometheus piece talks to Docker, talks to Kubernetes, and everything is pluggable. This is actually quite a good example of a set of requirements that other people might be looking at. Other people, too, want to align revenue with cost. Other people, too, want the ability to move some or all of their application to another cloud or back behind the firewall. They don’t want to be tied to one cloud or one orchestrator, and they want to choose tools that just work together. They don’t want to have to do the integration themselves.
This is another piece of our app, showing what we call the ABCDE of deployment. We are able to do automated deployment in a way that allows us to swap out any particular piece shown here. Right now we’re using Quay, from CoreOS, as our registry, CircleCI for our builds, and we’re deploying to Kubernetes, but we can swap these in and out as needed. Perhaps this is taking portability too far, but it does give us tremendous comfort: we can use this architecture going forward indefinitely, because we can swap things in and out as we like, with a full life cycle and a scalable application that we can move around.
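The “swap any piece out” idea can be sketched as a pipeline whose stages are interchangeable functions. All of the stage names and image strings below are hypothetical stand-ins for illustration, not Weave Cloud’s actual tooling:

```python
from typing import Callable

# Each stage of build -> push -> deploy is just a function from one
# artifact reference to the next, so any stage can be swapped
# independently (a different CI system, registry, or orchestrator).
Stage = Callable[[str], str]

def make_pipeline(build: Stage, push: Stage, deploy: Stage) -> Stage:
    """Compose the three stages into one deployable pipeline."""
    def run(commit: str) -> str:
        return deploy(push(build(commit)))
    return run

# Hypothetical stand-ins for a CI system, a registry, and an orchestrator:
pipeline = make_pipeline(
    build=lambda commit: f"image:{commit}",          # CI builds an image
    push=lambda image: f"quay.io/example/{image}",   # registry stores it
    deploy=lambda ref: f"deployed {ref}",            # orchestrator runs it
)

print(pipeline("abc123"))  # deployed quay.io/example/image:abc123
```

Swapping the registry just means passing a different `push` function; the rest of the pipeline never notices.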
What do we learn from this example?
To do Cloud Native, web scale needs to be an available thing. We need good tools. We want them to be open source, because we don’t want to be tied to one provider, and we want to run them anywhere: on Amazon, on Google Cloud, on SoftLayer, behind the firewall on VMware. The particular pieces of software we choose need to be ones we can trust. We need to know the teams working on them, like the Prometheus team, can look after the software for a long period of time, potentially years and years. We also need the whole thing to be monitorable and controllable, and this needs to be thought about at the beginning of the process of constructing a component, not at the end. And all the tools need to work together. We were able to make these decisions for ourselves, but what I believe is that the further away you get from people right at the cutting edge, the harder this becomes.
Cloud Native and the Cloud Native Computing Foundation, which I’ll talk about a bit later, are all about giving people guidance, clarity, and a common set of tools they can use to solve this problem as we did, but without having to go through the struggle we went through to get there.
Another lesson, which is very important and which we’ll come back to a little later, is that the infrastructure has to be boring if you want to focus on your app. Boring is good here. Now, there are lots of different people saying “We have the greatest platform that you can run everything on”: platforms as a service, containers as a service, different container-based things, Mesosphere. These are all good platforms. Use the one you like, but it’s got to be boring. It’s got to be in the background, because you want to focus on your app. The one thing to watch out for with these boring platforms is what I call the 1% failure problem.
Earlier on, we had a conversation about Docker in a panel, and one of the things Docker is being accused of at the moment, perhaps a little unfairly, because some of the cases are a little bit unusual, is instability. They’re pushing really hard to add new features quickly to make customers happy, and as a result they may not be paying as much attention to what’s going on in the back office: keeping the software stable, safe and secure for everybody who’s already running it, avoiding breaking changes to APIs, that sort of thing. That’s okay if you’re small-scale. But if you have a piece of software that behaves weirdly 1% of the time, because the network, for example, has some funny ideas about cleaning up IP addresses, then when people start using this stuff seriously and launching potentially thousands of containers a day, that turns into hundreds of incidents a year where the developers running the system have a problem and have no idea what is causing it.
It’s the same thing with Netflix. When I showed you that chart, pushing to the right in rate of change and pushing up in terms of availability, why go from 99% to 99.9% or 99.99%? Because Netflix has so many users that even 1% failures lead to very, very large numbers of incidents that they just can’t cope with. Be aware that when I say platforms need to be boring, I mean they need to be really boring. They can’t be 99% boring.
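As a back-of-envelope sketch of why 99% boring is not boring enough (the traffic numbers here are made up purely for illustration):

```python
def expected_failures(events_per_day: int, failure_rate: float, days: int = 365) -> int:
    """Expected number of failing events over a period, assuming
    independent failures at a fixed rate."""
    return round(events_per_day * days * failure_rate)

# A modest 1,000 container starts a day with a 1% flake rate already
# means thousands of confusing incidents a year, roughly ten per day.
print(expected_failures(1_000, 0.01))  # 3650
```

The failures land in the “boring” layer nobody is watching, which is exactly why they are so hard to diagnose.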
The last lesson we learned, because we made this stuff up as we went along, is that we need good patterns. Microservices is one pattern. We probably overdid microservices a bit in our application; we find that sometimes a monolith is good, or, as I like to call it, a microlith: just a small piece of software. Cattle, not pets is a pattern. There are a whole load of observability patterns, and then of course there are the classic traffic patterns, like blue-green deployment and the canary. A canary is when you deploy a patch to your system and divert a proportion of your traffic to it to validate that patch; you then move all of your traffic over if you’re happy with the patch, or roll back the patch and divert all the traffic back if you’re not.
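A minimal sketch of the canary pattern just described: route a fixed percentage of traffic to the patched version, keyed on a stable hash so each user consistently sees one version. The function and weights are illustrative, not any particular router’s API:

```python
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Send roughly canary_percent of users to the canary version.

    Hashing the user id (rather than picking randomly per request)
    pins each user to one version while the patch is being validated.
    """
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Rolling forward is raising canary_percent toward 100; rolling back
# is setting it to 0, instantly diverting all traffic back to stable.
print(route("user-42", 0))    # stable
print(route("user-42", 100))  # canary
```

Real systems do this in the load balancer or service mesh, but the core idea is just this weighted, sticky split.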
All of these are patterns that many of the big web companies have learned about in the last 10-15 years. There are great books about this, like Release It! by Michael Nygard, which is nearly 10 years old, actually. If you follow Adrian Cockcroft, who I think spoke here last year, a lot of what he talks about relates to these patterns. When I talk to tech teams and businesses that are adopting Cloud Native microservices, I find what they really want to do is get their heads around this, because this is what’s creating value for them: more uptime, less downtime, quicker Mean Time to Recovery.
If nothing else, Cloud Native, for me, is patterns. It’s about establishing the patterns that you need to be highly available, automated, etc. and there’s the obligatory picture of Adrian. Just recognize that he talks about this enough for us to be aware of it and is generally a great guy.
That’s all very well, but why should I care?
Just to repeat: open source cloud computing for applications, not infrastructure like OpenStack. That’s supposed to be boring; I hear it’s not boring enough, maybe a little too interesting for some people. But this is about applications, not infrastructure. A complete and trusted tool kit for modern application architectures, based around patterns.
For availability, automation, acceleration, and running things anywhere, which translates into various technology concepts. CI/CD: anyone who’s doing this stuff without doing CI/CD has a home-grown automation system rather than something off the shelf, which is probably slightly crazy. Run things anywhere: one thing that makes containers exciting is that they’re portable. No longer do we have the “Will it run on Linux, Windows, Amazon or VMware?” question that we had with VMs. This is a huge step forward. What are the patterns to turn that into applications?
Then you need software.
Here’s an example. I’ll talk more about this in detail later on, but this is the kind of software we’re talking about here, monitoring software, tracing software, logging software, and the key one, of course, orchestration software, service discovery, configuration management. All of these things are things that you need when you are running at scale in the way that I have described. There’s actually quite a lot of software that you need to put together to give people a complete tool set for Cloud Native.
Let’s say we have the software. What about this foundation stuff? Why do we need a foundation? Do I care about foundations? What even is a foundation? Does anyone know? Does anyone care? Is a foundation some sort of federation of collaborating powers? The answer is yeah, it is, because the foundation lets people like eBay and Airbnb and Netflix work with Google and Amazon and IBM, and that’s actually very hard to do. They don’t trust each other, so they need a mechanism to work together sometimes, if they want to share code that’s meant to be boring.
Is it some hippie thing? Kind of. I just went to the Linux conference last week. It was definitely a hippie thing. Some of the hippies were a little old, though. They had that really funky bald ponytail look. The Linux Foundation is a cool thing, though. Twenty five years old last week. Three cheers for Linux. There was a time when people thought Linux wouldn’t last, and Microsoft would say “Linux? You shouldn’t use that one.”
The Linux foundation has, in my view, three purposes:
It safeguards Linux (and a bunch of other projects) for the long term
It provides that crucial nexus for collaboration and trust, that hippie thing that makes it possible for big companies that may not like each other to work together on boring things
It is kind of cool: a ubiquitous open-source brand. You don’t have to explain it to many people anymore, and this is a good thing for customers and the community, because it means we can all benefit: people know this software really has a blessing of quality, mostly.
The Cloud Native Computing Foundation is a subset of the Linux Foundation. The way the Linux Foundation is organized is as a kind of umbrella: it has a single infrastructure shared across the different sub-foundations, which cover different elements of the big picture. There’s one for blockchain now. There’s one for containers. There’s one for PaaS. There’s a bunch of other ones I could list for you. So we’re not talking just about open source. This is a key distinction: we are talking about common open source, not proprietary open source.
There was a time when RabbitMQ, which I mentioned I founded, was a proprietary open source company. The company retained all the copyright to its software and was the sole developer and supporter committed to the software. That was by design, and it meant we could support and build a business around the idea that we had sole control of the project. That’s a really good business model; it’s used by, for example, Mongo, and it’s respectable. The thing is, the more big companies like Airbnb and eBay and Google and IBM a) want to get into the game, b) understand how to do it, and c) go ahead and actually get on with it, the more the center of gravity shifts away from single creators of software to a commons, and really, that is the future.
The commons can take shape in two ways. It can happen through foundations, where there might be an element of regulation, or it can happen out in places like GitHub in an unregulated manner. Over time, projects sometimes decide “Well, being unregulated was kind of fun for a few years, but now everyone hates each other, so let’s try something else.” That’s what happened with one project: they forked. They loved each other so much, they split into two and pointlessly roamed the wilderness for two years; then they got back together, there was a huge hug, and they said “We are one again, but we should write down what we care about.”
I mentioned software is eating the world. Open source is eating software. Open source is a good thing. The problem is, Cloud is also eating open source.
Amazon: 10 billion in revenue, just an amazing company; I’m talking about the Cloud business here, by the way. But they do things like Amazon app reviews, which is not called what it actually is, and they do it with other things, and there’s an element of concern there. In the future, there may be three or four Cloud providers, all of whom have proprietary, large-scale services, and that will lead to an alternative form of lock-in. Somebody who writes software that works on Amazon will start using those services, but they won’t be able to port it to Google. So, without a common set of tools that solve the problems of Cloud Native, we, the customers, the end users, risk being locked in.
Now, I actually like Amazon services. I mentioned earlier, we actually still use some of them in our app. It’s a good thing, but this issue of lock-in is a real issue because nobody wants to find that they can’t move away from a cloud if something does go wrong. It is a worrying situation.
For whom is this a concern, apart from me?
A few years ago, the people who moaned about lock-in were other software vendors, and no one cares what they think. But then big companies like eBay and Airbnb started to find that they were using Amazon a lot, and sometimes decided, “Oh, we’re going to get off Amazon now, because it’s too damn expensive. We’ve actually got so much going on that we’d rather move everything back in house. It’s cheaper to have our own data center.” You’ve seen that a few times.
Netflix, a few years ago, launched a brand called Netflix OSS to bring all of their different open-source projects under one common umbrella. Sounds a bit like a commons. They did this because they wanted other people to contribute to their software and build up, wait for it, a common set of tools that everyone could use and trust for building Netflix-style architectures. Netflix wants to know they could move to Google Cloud or IBM Cloud or Azure, perhaps, if they had to. They like Amazon a lot, but they want to be able to say, “Listen, guys, we love you, but we can move, so don’t jack up your prices too much, please.” That was a very important change. And, forgive me if anyone here works for such a company, but traditional companies, for example European banks, have reached the same conclusion. I know there are some people here from European banks; I was speaking to one last week from Norway. They said “Our strategy is Amazon plus one, meaning we actually want to use the Cloud now, but we want to have the ability to go to one other thing. It could be our own data center or it could be another Cloud.”
They’ve woken up to this, too. They get the purpose of the Cloud, the flexibility of having the option to do more and to manage costs through that, but now they also want the flexibility to leave. That means pretty much everybody now wants this kind of boring stuff to be held in common, in the open, so they can use it rather than getting locked in. That creates tremendous momentum behind the need for a tool set, as I’ve mentioned before.
What is Cloud Native? Open-source Cloud computing for applications, a common trusted tool kit for modern architectures. For end users, it’s easy, it’s fast, no confusion about what to use, no lock-in, guidance and clarity on what Cloud Native means, what the patterns are, a badge of trust around the project and all shared through a foundation.
Right now the CNCF, which did not exist a year ago, really only got going about nine months ago; it was announced at KubeCon in London and funded its first project in March. We’ve had Kubernetes and then Prometheus join, and we’re looking at a bunch of other projects at the moment.
We’re early stage, but the point is, each one of these tools should be something you can trust. And, like a tool kit, sometimes there are two of them in the box, two screwdrivers with different shaped heads: one is good for one thing, another for another. That doesn’t mean we’re saying “Prometheus is going to be the only monitoring system that matters in the future,” because we all know that’s not true, and, more importantly perhaps, Kubernetes is not necessarily going to be the one orchestration system. I think it’s doing really well, but some people want to use Mesos and DC/OS, not Kubernetes. Some people want to use them together, and what about Docker Swarm? What about something less well-known like Cloudsoft’s NP platform? There’s lots and lots of cool stuff out there.
Gradually, we’re starting to understand what this world might look like. And I mentioned all the different types of software.
We’ve tried to organize it into a kind of stack. We are worried about the orange layers at the top; they’re not colored orange because we’re in Amsterdam. Broadly speaking, containers, orchestration, management and microservices patterns are at the top. When it says networking is out of scope here, by the way, to preempt the question that will come at the end: this is referring to provider networking, the Amazon network or the Google network, not software networks.
Just to run through this stack a little for you, to give you some sense of the importance and depth here: at the patterns level, there’s a whole lot of important tools you need to know, composition tools, application deployment tools, CI/CD, image registries, which are a big thing right now. Then you have management. This is probably the biggest area; there’s so much going on here. I mentioned this earlier, but just take something like traffic management and service management. How do you do smart routing between different nodes of a service in order to deliver deployments effectively? There are a lot of different solutions for this. Which ones are the good ones? How do you choose one? Should you take Consul and build a hand-crafted solution yourself, or should you use a special-purpose tool, or just use Kubernetes and rely on its built-in routing?
The runtime is probably the most contested area, because he or she who controls this layer may control the whole thing, hence the concerns about lock-in I mentioned earlier: scheduling, laying out containers, joining them up into networks, giving them the ability to write data to disk in a useful manner that’s retained and copied, and so on, and the power to govern all of that sometimes goes with it. Go a bit further down the stack and, leaving the scope of the CNCF, there are some more things, like security for image management on machines, and provisioning, and then the bit that we definitely don’t do, which is the Cloud stuff: Amazon, VMware, OpenStack, and so on.
A common set of tools, but not standards. We’re not a standards organization. There are good standards organizations that know how to do standards, like ISO.
What is a standard? In my opinion, having been involved in a few standards, a standard is an algorithm for identifying areas of disagreement between different parties and then amplifying them, because all they want to do is standardize the one thing they really, really, really care the most about, so everyone says,
“Well, we agree about all of this. It’s this bit. You guys just don’t get it, do you?”
“Well, no. You’re wrong.”
I saw this a few years ago with something called AMQP, which RabbitMQ implemented. Red Hat wanted to do it one way and everybody else wanted to do it another way. We spent two years arguing about it, and then somebody re-wrote the standard and said, “Here’s a new standard that does both.” Actually, it did nothing. “Let’s bless this.” It was the kind of preposterous waste of time that makes everyone happy.
Standards are slow. They lead to arguments. They need patience and they emerge slowly. We don’t need that. What we need is interoperability. We need companies like Weaveworks to go “Oh, I’m going to use Prometheus; they’ll use Kubernetes, and then we’ll just work together,” or “They’ll work together with a little bit of effort.”
They work together because there are some conventions that the community is circling around. One example, which we mentioned earlier, is CNI: a simple convention for talking about networks in the setting of a container. People who want to turn these into standards can go ahead, take a specification document, try to take it to a standards body like the IETF, and spend five years battling everybody who wants to disagree with them, because as soon as you say something is a candidate standard, you can guarantee that at least three people will come along and argue with you until you’ve given up, because that’s what they like to do.
Really, what we’re trying to do is get going with a few projects, and, as we take on more projects, start to consolidate the brand around interoperability and make it something customers can go “Ah, Cloud Native! I want that! I don’t want that other thing.” Then, maybe, some standards will emerge from that over time.
Okay. How are we doing? All right.
Here’s Docker next. You should know the answer by now, of course. I thought I’d say a few things about Docker, because they’ve been in the news recently. Docker is part of the CNCF, and I, personally, am happy with their role in the CNCF, just to be completely clear. I really mean that. Bob Wise is a really cool guy who runs the Samsung Cloud team, the Samsung technology services team, out of Seattle. He posted a blog last week, which I’ve linked to here, called “An Ode to Boring,” basically lamenting the fact that it didn’t seem possible for Docker to play nice in certain areas with other people in the community, including vendors and, apparently, some end users. I’m not quite sure about that. One of the issues that came up a lot was stability: the idea that, in an attempt to build out features quickly and claim areas of the platform higher up the reference stack I showed you, Docker was breaking features lower down that still matter, which ultimately leads to application breakages.

Remember the 1% problem I mentioned? If 1% of the time Docker lets you down, and I’m not saying it does, but if it did, you’d have a lot of outages if you were using Docker hundreds of thousands of times a year, and they would probably be quite confusing, because they’re down in the boring bit no one’s supposed to care about. And here’s Bob saying “I call on the CNCF.” It’s very exciting. “We need a transparent, community-driven implementation.” He’s saying we need a common open-source model instead of a single-vendor open-source model. I was really, really happy to see this being called for.
What was being called for, apparently? Well, three things:
The standards piece, which is handled by the OCI, a sister foundation to the CNCF. That is the definition of a core standard container. What is disputed there is the scope of that. There are also accusations about some members stuffing it with features while other members basically block any new features being added, naming no names.
Stability, I think, is a new concern that’s come out since Docker 1.12 was released.
They made quite a few changes. I personally have no view on this, but I’m sure that users do.
Finally, and this one has been running for a while, there’s the call for Docker to be an open platform, which basically means that it’s not solely under the control of Docker: that it’s actually easy to plug in all kinds of third-party software, and in particular, and this is the big one, that orchestration providers can plug in their own orchestration systems instead of Docker’s Swarm, get it working, and sell the whole thing to customers without anyone getting unhappy. That third thing is somewhat theoretical right now, because what Docker have done is make Swarm a built-in option in Docker itself, and that and other design issues have made it harder to plug things in.
I’m being quite careful about what I say, because I don’t want to be in the position of representing Docker’s opinion, because I’m sure they have a very strong opinion about all these things, which I’m sure makes a lot of sense, nor would I want to represent the opinion of some of the people who have complained recently. I’m just highlighting it as an issue. Here’s what the world would look like if Docker were the only run-time platform. This is why people are saying the CNCF can play a role in helping here, because what we care about is a common set of tools for building Cloud applications that are pluggable and interoperable. If that were true, you could imagine plugging things in around Docker, and Docker would happen to be building their own platform out of those components. But at the moment, what is disputed is control over the whole stack. That is an area of tension.
This is my last slide. What’s at stake is: do you want to be locked into a new container layer? We’ve got this amazing thing where Docker, awesomely, is portable across Linux, Windows, Amazon and VMware. People have been looking for this for years and years. It’s very exciting. Now we can have a common Cloud across everything; we just deploy stuff to it. But do you want to be controlled by a single vendor or not? Is it going to be one platform, one vendor, a monopoly world, or is it going to be competition based on a common platform? Would that be better?
The CNCF very, very much hopes that we can continue to work with Docker to achieve something that’s fair to everybody, introduces enough competition in the plumbing that people have choice, keeps everything working together in this efficient, boring way, and lets platform vendors like Docker still make money.
There you go. That’s what Cloud Native is, open source Cloud computing for applications, a trusted tool kit for modern architectures.
Thank you very much.