Enterprise-wide Kubernetes, Episode 3: Multi-Cloud Persistent Storage


Read the Transcript

Anubhav Sharma: “Hello, everyone; welcome to the third episode of Enterprise-Wide Kubernetes. A couple of weeks ago we covered security, and before that we covered hybrid cloud; this is the third episode in the series. Joining me today is Michael Ferranti from Portworx. Michael, do you want to introduce yourself?”

Michael Ferranti: “Yes, hello, everyone. I’m really excited to be here and I’m going to be sharing some information about persistent storage and data management for Kubernetes, specifically within the context of hybrid and multicloud, so I’m really looking forward to presenting today and sharing some insights with you.”

Anubhav: “Excellent; thank you, Michael. What we’re going to cover today is the considerations when you’re thinking about building your own Kubernetes stack. What are your data management considerations around that? For enterprises, data is everything, so how do you manage data persistence with containers, which by nature are ephemeral?

We’re going to talk a little bit about managing applications and data in the multicloud environment, and then we’re going to spend some time demonstrating easy ways in which you can move your applications from one environment to another, across different clouds and across any infrastructure, because both Nirmata and Portworx are multicloud solutions for application and data management.

So Kubernetes has become the de facto standard that enterprises are adopting for managing containers, right? It’s become very clear that containers are the best way to package applications and Kubernetes is the best way to orchestrate and manage containers, right?

It has some of the widest support in terms of the kinds of infrastructure it can run on, and it has a very large ecosystem around it. If you go to the CNCF and see the number of projects in progress around Kubernetes, it’s just scaling up. We feel that the inflection point has been reached in terms of enterprise adoption, and it’s only accelerating as we go on.

When you think about building your Kubernetes stack within an enterprise, it’s not just the Kubernetes orchestration; you have to think about integrations with your infrastructure and your network, and about what you want to do around logging and monitoring, not just of the infrastructure but also of your applications.

How do you manage load balancing for your applications? How do you control the images? How do you manage versions? How do you integrate your build tools for your applications? And then of course you have to think about layering security across infrastructure for users, for your applications, etc, right? What we’re going to focus on today is storage and data management and how do you manage applications and data in a multicloud environment?

So containers by nature are designed to be stateless, right? They’re built and created to perform specific functions, and once that function is over, the container and all the context associated with it dies, right? And that includes the data that was used within that container.

And when you think about adopting containers, where there’s tremendous value, how do you make sure that, number one, your enterprise data gets leveraged, and that it persists? Even as containers use that data, add to it, and create some of it, that data has to stay outside of the container. That’s where Portworx’s data management solution really helps.

It’s not just about creating and managing data within the enterprise; you’ve got to think about bringing the data closer to where the infrastructure is, where your compute resources are, and how you tie them together based on the specific workloads that you want to run.

Another key piece around persistent storage is policies, and storage solutions vary across different cloud providers; enterprises at any point in time are using tens of different storage solutions based on the kinds of applications they plan to run.

When you think about running [unintelligible 00:05:58] applications in a multicloud environment, you really need single-pane-of-glass management for your applications across that multicloud environment, and for managing data in it, and that’s where Portworx and Nirmata come together and deliver that single-pane management for you.

One of the key things I want to highlight about the Portworx and Nirmata integration is that while Nirmata takes care of your cluster and application management, Nirmata and Portworx integrate very easily to deliver application and data management for the multicloud, right?

With Portworx you can easily back up and recover the volumes that get created for containers, and with Portworx and Nirmata together you can deliver lifecycle management of both your applications and your data. With that, I’d like Michael to talk a little bit about Portworx and its solutions; then we’ll dive into the demo, talk a little bit about Nirmata and its architecture, and see how it delivers application lifecycle management. Over to you, Michael.”

Michael: “Great; thanks a lot. I’ll skip this part; hopefully my background will become a little bit more apparent as we dive in. I’m really excited to be here. I think this is such a critical topic. We really see it in the majority of our customers, this desire, and in a lot of cases need, to run in a multicloud or a hybrid cloud environment.

If we start, just by way of level setting, by looking backwards and go back ten years, it was really the heyday of virtualization, and then almost out of nowhere Amazon launched this new concept called public cloud. They dominated that market and continue to, just as VMware dominated the virtualization market.

But something interesting happened as competition came to public cloud in the form of Azure and in the form of Google Cloud Platform, while at the same time enterprises continued to run their own data centers, oftentimes virtualized but not always. So what we had was this plethora of operating-environment options that were better or worse for different use cases.

And enterprises began to realize that, you know, I can do cloud-first, cloud native architecture and operations without simply being in a public cloud. I can run cloud in my own data center. I can run some applications in Azure and some in Amazon. And they started to see the benefits of being able to do this, whether that was just being closer to their users, or whether it gave them negotiation leverage with their cloud provider. There are many reasons why enterprises today are adopting these hybrid and multicloud operating models.

As we look toward tomorrow we really see that Kubernetes is driving this hybrid and multicloud world as a multicloud application platform that’s independent from the underlying infrastructure. But a key part of Kubernetes is not simply that you run the same application the same way in multiple environments; that’s a huge benefit but it’s not the only benefit.

Another big, big benefit, and really a driver of the adoption of Kubernetes, is this idea that I can automate everything; that I can have machines respond to very hard-to-understand and hard-to-diagnose operational failures in a much more efficient and fast way than my very highly skilled but hard-to-hire ops people can. Kubernetes can do it better than a team can do it manually, and so that’s why we see enterprises adopting Kubernetes both to enable hybrid and multicloud and to automate everything in their environment.

If we flip that on its head, we can say that what’s motivating this move towards hybrid and multicloud and automation is the question: how can my organization be more like a company like Netflix? We see industry after industry asking this question. Capital One is a great example of a traditional financial services organization asking this question and doing really well by answering it with: I’m going to run my applications on Kubernetes.

We saw in a previous slide a different view of this stack, and we can slice it a bunch of different ways, but the key point is that a cloud native stack has emerged for modern applications: to be resilient in the face of failure, to reduce time to market from idea to application in a customer’s hands, and to automate operations from both a deployment and a failure-response perspective, such that I get better reliability from my applications. That’s going to include monitoring, it’s going to include logging and metrics and security, but it also includes a place for storage and data management.

Just to underscore the importance of data management as it relates to multicloud – and then I’m going to share some more specifics about how Portworx itself solves that problem – we did a recent survey. This is actually so hot off the press it hasn’t been released yet; we’re going to be releasing it in a couple of weeks, but this is one snapshot that I wanted to share on this webinar because it’s so relevant.

We asked people what challenges they have when it comes to deploying containers in production environments. Security topped that list – not surprising – but numbers two and three were data management and multicloud and cross-data-center operations. For many organizations these are unsolved problems, and those two specific problems are exactly what Portworx addresses. So combined with a platform like Nirmata, you can confidently run mission-critical applications on Kubernetes across environments and know that your data is secure, available, and protected.

So going back to that cloud native stack, it turns out that stateful services are at the heart of both cloud native applications and what we might call legacy or traditional applications. There is not an enterprise application that does not require data. The difference between now and previously is not that apps were stateless in the past and now they’re stateful, it’s that people want to run the stateful components on the same platforms as they run the stateless components.

The reason for that is that Kubernetes makes your apps so agile; if you have your databases outside of that Kubernetes platform, they’re just going to be slow. They’re going to be hard to update, manually scaled, and manually brought back online when there is an issue, and that’s going to create problems for your application teams as they try to move faster and faster. It’s going to be like a ball and chain, slowing you down as you adopt Kubernetes.

So people want to run those services on the same platform where they run all the other parts of their application, and yet data has particular problems. It needs to be secured in a particular way. It needs to be protected and backed up in a particular way, and because it has gravity, because it has weight, it’s hard to move between environments. That is exactly what Portworx enables for Kubernetes applications.

You can think about Portworx as the cloud native storage and data management platform for production, and I put emphasis on for production because, you know, we’re talking about real user data here. These are not playthings and we take that very, very seriously. And so, if you were to just kind of Google something like Cassandra on Kubernetes you’re going to find dozens and dozens of articles that talk about how to deploy Cassandra on Kubernetes but very, very few articles that tell you, okay, how do I deal with a network partition that makes some of my pods unavailable?

What happens when my Cassandra pod crashes, or Docker running on that host crashes? How do I make sure that my data is still available? How do I back it up to another data center? All of those types of issues, what we call day-two operations, are where Portworx focuses. We enable the easy deployment and configuration of stateful services; that’s a necessary condition, but it’s not sufficient for production. We also do all of the other hard things around managing day-two operations to make sure that your data is safe, secure, and available all the time.

Just a snapshot of some of Portworx’s customers. The reason I show you this is to say that whatever perspective you’re coming at Kubernetes from, whether that’s from the perspective of building a platform as a service or of running a SaaS application that you’re getting requests to deploy into a customer’s data center, right, where you need a consistent way to do that, Portworx has solved those problems for other customers in the Portworx family. And so chances are, if you’ve got a problem with stateful containers in Kubernetes, we’ve solved it, and we can bring that expertise to bear specifically on your problem.

So we think, to put it bluntly, that we are the best storage and data management option for Kubernetes. There are three big reasons for that. One is, we are ourselves 100 percent cloud native, right? Kubernetes is about building and running cloud native applications, and Portworx was built from the ground up as, itself, a cloud native solution.

What does that mean? Well, it means that Portworx itself runs as a container. It can be managed by modern container platforms like Kubernetes and Nirmata. Everything is container-granular. Everything is configurable via API and CLI. We also believe that we have the most production experience of any cloud native storage and data management solution on the market.

I just showed you some of the customers, some of the largest and most sophisticated organizations in the world. Every single one of those customers is running Portworx for container workloads. There are lots of storage options available that are optimized for VMs, but 100 percent of Portworx’s customers are using it for containers, and that makes a big difference in terms of reliability in the much more highly dynamic environment that you have with something like Kubernetes, as opposed to a VM-based workload.

And the final reason is simply our very deep integration into Kubernetes itself. Everything that you can do with Portworx, you can do via kubectl, and that includes things like telling Kubernetes how to make better scheduling decisions. For instance, when you deploy a pod, say Postgres, you want to make sure that that pod is deployed on a host that has a copy of its data. Portworx can actually tell Kubernetes where the data for a particular application is located, so that Kubernetes can place the pods on those same hosts. This includes across availability zones and across data centers.
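
As a concrete illustration, this scheduling hint is normally wired up through STORK, Portworx’s open-source scheduler extender for Kubernetes. A minimal sketch, assuming STORK is installed and a Portworx-backed claim named postgres-data already exists (both names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: postgres
    spec:
      schedulerName: stork              # delegate placement to STORK, which knows where the volume's replicas live
      containers:
      - name: postgres
        image: postgres:11
        env:
        - name: POSTGRES_PASSWORD
          value: example                # demo only; use a Secret in practice
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-data      # a claim backed by a Portworx storage class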

Okay, so let’s look very quickly at what Portworx is. Portworx, in a nutshell, is a distributed block storage solution that deeply integrates with any scheduler, including Kubernetes, to run and manage any stateful service. Here I have four listed, but we could add Cassandra to this. We could add Postgres and MySQL; we could add any data service that runs on Linux.

We allow you to run it in any cloud or on-premises data center, or importantly, across all of those environments, in clusters of up to 1,000 nodes each. So Portworx itself scales to the same level that Kubernetes scales to.

Okay, so a few more slides before I turn it back over for the demo. Let’s look quickly at a use case of running a highly performant and resilient Cassandra cluster on Kubernetes. Cassandra is one of the top five data services that we see customers who are building platforms use. So it’s really important that I can run it across environments and that I can run it in a resilient and secure manner, so how would we do that?

The first thing that we need to do is install Kubernetes – excuse me, install Portworx – because, as I mentioned, Portworx itself runs as a container. We can use Kubernetes to install Portworx, and it’s literally as easy as kubectl apply -f with a YAML file that we help you configure via a web interface. You basically select from options: the IP addresses of your hosts, what environment you’re in, a couple of configuration variables. Then you have this YAML file, and it’s a single command and now Portworx is installed in your cluster.
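
For reference, the install step he describes comes down to something like the following; the spec file itself is produced by Portworx’s web-based generator, so the file name and label here are illustrative:

    # Apply the installation spec produced by the Portworx spec generator
    kubectl apply -f px-spec.yaml

    # Watch the Portworx pods come up on each node (the label may differ by version)
    kubectl get pods -n kube-system -l name=portworx -w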

Now I actually want to configure Cassandra to use Portworx when it’s deployed. How am I going to do that? Well, I need to update a couple of files; again, Kubernetes uses YAML, simple text files that are very easy to work with, as a declarative method for deploying my application. So here I’m going to first define a storage class. A storage class is simply a text file that describes the type of storage resources that I want made available to that application when it’s deployed.

Here I’m highlighting that I’m setting a replication factor. This states how many copies of my data I want Portworx to maintain at all times. That’s a configurable value; I’ve got it listed as two copies here. I can also set things like IO priority, which would map, say in an AWS environment, to an EBS volume with dedicated IOPS or simply an SSD on the local EC2 instance. You can set different IO priorities for faster or slower storage, so when you deploy your application you can make sure that, for instance, production apps go on all SSDs, while my test and dev environments might go on less expensive spinning disks.

I can also set up snapshot policies and an encryption policy here, so as a platform architect I don’t have to rely on my individual developers applying those best practices; I can make sure that it happens programmatically for them, all through my storage class.
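
Putting those pieces together, a storage class along these lines captures the replication, IO priority, snapshot, and encryption settings he describes. The parameter names follow the documented conventions of the in-tree Portworx provisioner, but treat the values as illustrative:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: portworx-db-sc
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "2"             # keep two copies of the data at all times
      io_priority: "high"   # place volumes on the faster storage pool (e.g. SSD-backed)
      snap_interval: "60"   # snapshot every 60 minutes (0 disables snapshots)
      secure: "true"        # encrypt volumes with a customer-supplied key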

After I’ve created a storage class, I’m going to create what’s called a PVC, or persistent volume claim, that references that storage class, and then I can deploy Cassandra using Portworx. What I’ve done here is deploy a multi-node Cassandra cluster where those pods and those volumes are automatically distributed across availability zones, across failure zones within your data center, such that I get that first level of high availability simply by taking advantage of my network architecture, again, importantly, in an automated manner.
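
A claim referencing that class might look like this (names and sizes are illustrative); the Cassandra stateful set then mounts the claim like any other volume:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cassandra-data
    spec:
      storageClassName: portworx-db-sc   # the storage class defined above
      accessModes:
      - ReadWriteOnce                    # block volumes are single-writer
      resources:
        requests:
          storage: 10Gi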

I don’t have to specifically tell Kubernetes and Portworx where to place pods and those volumes. Portworx is able to instruct Kubernetes itself where to place them based on the network topology. Again, automation is key.

From there, Portworx, based on that replication factor that we saw earlier, is going to create topology-aware replicas so that if I lose a node in my cluster, instead of having Cassandra rebuild that worker, which imposes a significant performance hit on my application IO during the rebuild operation, we can simply have Kubernetes reschedule that pod to another host in the cluster that, importantly, already has a local copy of the data.

So this is called hyperconvergence, and it’s extremely important, so I just want to pause here for a second and say: Kubernetes doesn’t make that scheduling decision in isolation. One of the powerful things about Portworx is that we can actually tell Kubernetes where the replicas for that particular Cassandra pod are located.

Here is a simple example of a three-node cluster, but this might be a ten-node cluster or a 20-node cluster, and that data volume could be anywhere. But Portworx can actually tell Kubernetes where it’s located so that it makes a better scheduling decision, so that you maintain the fastest reads and writes possible for your database, because the pod is located on the same host as the volume.

Very important from a multicloud and hybrid cloud perspective is this idea of offsite backups. So here, imagine that I have my Cassandra cluster running on premises and I want to snapshot it, move it to an object store, any S3-compatible object store, and then move it into another cloud. I can do that very, very easily using a feature within Portworx that we call Cloudsnap. I can actually go point to point as well. The key implication of all of this is that I can architect my application once and run it anywhere. It doesn’t matter if it’s my data center, a cloud provider, or multiple cloud providers.
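
Before Cloudsnap can write to an object store, Portworx needs credentials for it. A sketch of that one-time setup with the pxctl CLI, assuming an S3 bucket; exact flags vary by Portworx version, so check pxctl credentials create --help:

    # Register an S3-compatible object store with Portworx (one-time setup)
    pxctl credentials create --provider s3 \
      --s3-access-key <ACCESS_KEY> \
      --s3-secret-key <SECRET_KEY> \
      --s3-region us-east-1 \
      --s3-endpoint s3.amazonaws.com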

And last but not least, data security is just so critically important these days. We support all of the leading encryption key management systems, including HashiCorp Vault, Amazon KMS, and Docker Secrets, so that only you are able to decrypt your data; even Portworx is not able to decrypt your data. That’s why we call it bring-your-own-key encryption; very, very important for our customers that are running mission-critical applications. And with that I’d like to turn it back over for a live demo.”

Anubhav: “Excellent; thank you, Michael. I think that was a fantastic overview and it really lays the foundation for us to talk a little bit about how Nirmata helps bring your Cassandra application to life with Portworx. In our demo we’re going to use a simple MySQL application, but you can take any application and perform similar operations.

So I’m going to go and share my screen and then we will dive into the demo. All right; hopefully everybody can see it; it looks like it. So here is the Nirmata console, and what I’m going to do is walk you through a basic setup of what it takes to deploy a cluster. We’re not going to set that up here; we already have a cluster that has been set up for us, but I just want to show you how we integrate – we’re going to show how we [unintelligible 00:26:37] to integrate Portworx into Nirmata as well, and then we will take a demo application and move it from one environment to another.

So here in Nirmata you have a dashboard that gives you a full view of everything that is happening across different clusters and different clouds; you get an aggregate view of everything that you’re managing here, including audit trails, utilization, the top containers, and what the usage looks like, which makes it a great tool for IT ops to manage a multicluster, multicloud environment, right?

You can set up any of your clouds, including bare metal, which can be directly connected, or any of the major cloud providers out there. We integrate with most of the ones that you see here, including private clouds like VMware vSphere, Open Cloud, etc, right. And if you have another cloud provider that you want us to add, the platform is quite composable, so it’s very easy for us to do, right; so I’m going to stop this here.

Then you create host groups; you can create your host groups leveraging the cloud provider credentials that you put in, and Nirmata will actually create those hosts for you, upon which you can deploy your Kubernetes clusters.

So here is what I have – I have an AWS cluster and I have a Portworx POC cluster, which is a six-node cluster with three master and three worker nodes, and what we’re going to do is deploy an application on top of one environment and then move it to another. Now, these environments can sit across any cloud, two different environments in two diverse locations, and you can easily move applications and data with Nirmata and Portworx.

So here is the view of what the cluster looks like; here are the details of the nodes. We have got three master nodes and three worker nodes. One of the neat things about Nirmata is that you can kubectl console onto the cluster right from here, so I can just go and do kubectl get nodes and get that information. And if I want to deploy anything directly with kubectl, we allow direct access into the cluster.

Now, coming back to what you can see in the cluster: here are the storage classes. Michael showed you how you can easily deploy Portworx into your environment through a single kubectl command. Now what I’m going to show you is how you can very easily add storage classes here through Nirmata, for Portworx or any other environment, right?

So here is an example where we have added a storage class for Portworx. This is the YAML file that you see. If you want to edit that storage class, here is an edit button through which you can do that, and you can edit any of the parameters here, for example a replication factor of three; any of the parameters that Michael was talking about.

Now, going back to the cluster, you can see what’s deployed and the number of namespaces. You can see the number of pods that are running in this environment and the health of the different Kubernetes components within that cluster.

Now, how do we deploy across different cloud providers? What you can do with Nirmata is set up policies. We have prebuilt policies for different cloud providers; once you choose a cloud provider we build the policies, and what the policy does is ensure that the integrations required for the compute environment, networking, security, and storage, all of that undifferentiated heavy lifting, are addressed by Nirmata right here.

It is all configurable. You can change it. For example, I deployed a cluster on AWS using Kubernetes version 1.10.4, and I can change that: I can upgrade Kubernetes clusters by changing the version, or I can go back if I feel that I need to go backwards to an older [unintelligible 00:31:31].

Now, out here you can provide the necessary secrets and policies right here, patch policies that you need specifically when you are moving from one environment to another, and your configuration maps. For example, for the MySQL app that we’re going to try out here, there is a config map that was required for it that you can add out here. You can also call out whether that applies to all applications or to specific applications.

Now, another neat abstraction that is available in Nirmata is environments: you can create environments so your developers can just focus on their applications and the environments that they’re deploying into, right? The environments are tied to different infrastructure that’s [unintelligible 00:32:23] developer. They don’t need to worry about what the underlying infrastructure is or which cloud it is sitting on. All they see is an environment. They see their applications in the catalogue. They go and deploy that application and, boom, voila, the application runs for them.

So here you can create different environments; I can create a new environment, tie it to a specific cluster, which is tied to a specific cloud, and then define isolation levels, which could be a shared namespace or a namespace per application, ensuring that each of your applications runs independently of the others.

And here is the application catalogue; now, this catalogue sits outside of your clusters, so you can pick your application from the catalogue and deploy it in any environment of your choice, and this environment could be tied to any cloud that you are working with. The platform has integrated alarms that can be set up, and these are set across clusters; here is an audit trail, and I covered the dashboard for you before.

So what we’re going to do for our application is use a simple MySQL stateful set, and in this application a lot of the elements are configurable. You can actually edit your YAML or the basic application right here; you can model it, run it, deploy it, and manage it. So I’m going to go into this MySQL stateful set, and what you see here are all the components that are required for the pod or stateful set. Do we need Init containers? Do we need to add a volume?

Michael called out that in [unintelligible 00:34:27] enterprise you want to provide infrastructure to developers and let them create their storage requirements dynamically. You can do all that using a storage class: you can define that storage class within your application, and it is leveraged to create the persistent volume claims for the persistent volumes that you need. A persistent volume is the construct within Kubernetes that ensures that a volume stays around even after a container has done its work and dies away.

So here you have a simple volume claim template. What we have done here is pull in the storage class that we had configured within the cluster. We call out how much storage we want, which is 3Gi here, and then we save it. All I’ve got to do is take this application and run it in the environment of my choice, and I have a staging environment. By the way, when you take an application you have to give it a run name, because that’s the name of the deployment that you’re running within Kubernetes, so I’m going to just give it a simple mysql-staging name and run this application.
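
The relevant part of such a stateful set looks roughly like the following; the image, claim name, and 3Gi request mirror the demo, while everything else is an illustrative minimum:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      serviceName: mysql
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.7
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: example              # demo only; use a Secret in practice
            volumeMounts:
            - name: mysql-pvc-claim
              mountPath: /var/lib/mysql
      volumeClaimTemplates:
      - metadata:
          name: mysql-pvc-claim           # Kubernetes generates PVCs named mysql-pvc-claim-mysql-0, -1, ...
        spec:
          storageClassName: portworx-db-sc
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 3Gi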

As the application is running, I can go into events and tasks and see how it is deploying that application for us, right? You can see the different stages as they get completed. Another thing to look at, in the storage configuration, is whether we created a PVC; here I see the PVC has been created. I see the PVC is actually bound, and now I see my application is actually running. Now, what we want to do is take this application and the persistent volume that we created here, back it up, and use it in another environment, and that’s where Portworx does its magic.

So here I am in this cluster. I am going to just do certain things with kubectl to see where the persistent volumes are – give it a second here for it to show up; it’s logged out of here, guys. Give me a second. All right, there you go; I’m back in. All right; now we see this particular PVC, right, the PVC that we just created. It was created 84 seconds ago – MySQL PVC claim zero.

Now, this is the volume that we’re going to back up, and we’re going to restore it in another environment for it to be used there. Now, here I have access to a Kubernetes worker node, right, and I have Portworx deployed here. I can basically see what my volume list looks like. Oh; it’s logged out of here as well. There you go; I’m in that node.

You see the same volume shows up here in the volume list. Now what we’re going to leverage is the Cloudsnap capability that Michael highlighted to back up this volume, and then we’re going to restore it in another environment. I’ll just take this PVC name and back it up; it says the backup has started. I can run Cloudsnap status to see where things are. It looks like the backup is active; you can see where it stands right now, and then see that the backup is done, okay?

Now, what we want to do is restore this particular backup for it to be leveraged in another environment, so we want to just see the Cloudsnap list. There it is. This is the environment, and this is the Cloudsnap ID, and you run a simple command to restore it, right? I’m just going to call the volume that I’m restoring PVC clone. You can name it anything; I’m going to name it PVC clone.

All right; it says the restore has started for this particular volume, so I can go ahead and run Cloudsnap status again to see how it is progressing. Voila, the restore is done, right? So what we have done so far is taken a volume, backed it up, and restored that volume.
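
The command sequence in this part of the demo looks roughly like the following, run on a node where Portworx is installed; the exact restore flags differ across pxctl versions, so treat this as a sketch:

    pxctl volume list                      # find the Portworx volume backing the PVC
    pxctl cloudsnap backup <volume-name>   # start the offsite backup
    pxctl cloudsnap status                 # poll until the backup is done
    pxctl cloudsnap list                   # note the cloudsnap ID of the finished backup

    # Restore the backup as a new volume (named pvc-clone in the demo; the flag for
    # naming the restored volume varies by version, so check pxctl cloudsnap restore --help)
    pxctl cloudsnap restore --snap <cloudsnap-id>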

Now, what you want to do is make that volume available as a persistent volume within your cluster, right? So what I’m going to do here is run a simple command to go ahead and configure it as a persistent volume within Kubernetes. I already created that YAML; it’s a simple YAML file, and I can walk you through what it looks like. Here it is; it says persistent volume created. And if you want to look at that YAML file, it’s a very simple file that creates a Portworx-type volume for PVC clone.
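
That YAML is essentially a Kubernetes PersistentVolume of the portworxVolume type pointing at the restored volume; a minimal version, with the size matching the demo:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pvc-clone
    spec:
      capacity:
        storage: 3Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      portworxVolume:
        volumeID: pvc-clone    # the volume restored from the Cloudsnap in the previous step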

Now, this volume is available within the cluster, right? So if I say get PV, right, I see that PVC clone is here and it is available. So now I want to run another application – I’ve got the same application and now I want to run it in another environment, which is sitting in another cloud, so I have a cloud environment here that is tied to another cloud, right?

So all I have to do is – what I’ve done is I’ve taken the same instance – in this particular case what you want to do is configure it statically to use the same PVC instead of dynamically provisioning another claim, right? So when you go into the application, this is what it looks like; I’ve just cloned my previous application, which was running the MySQL stateful set. I’ve cloned it here, and I’ll show you simply what I’ve done: in the claim template, instead of configuring the storage class and creating a new one, I have just said look for the PVC clone volume, and if it is there, bind to it, right?
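
One common way to express that static binding is a claim that names the pre-created persistent volume directly and opts out of dynamic provisioning; a sketch with illustrative names:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-data
    spec:
      volumeName: pvc-clone    # bind directly to the pre-created PV
      storageClassName: ""     # an empty string disables dynamic provisioning for this claim
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi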

So I save it here, and now I can take this application and run it as another instance, so I call it mysql1 and I run it in prod, which is another environment. All right, the application is getting executed. Let’s again see how the components are getting deployed – all right. Now, in the storage configuration you see that the claim is bound and set up; it found PVC clone and actually configured it, so here it is, and if you click on that it shows you the PVC clone volume, and now your application is running.

So now what you’ve done is taken a simple application and its data and moved it from one environment, sitting in one cloud, to another environment, sitting in a different cloud. In this POC I’m using the same cloud, but you can do this across any environments.

One of the key considerations is that when you create your Portworx object store, you want to make sure that the object store is accessible to all the clusters that you have, because that is what allows it to back up and restore across them. One of the things that we are working on together with Portworx is automating some of these steps, so you don’t have to do some of the manual configuration that you saw us do; all of those pieces will be automated.

In this scenario I duplicated an application and changed the PVC, but in the future what you’re going to see is an option here to clone an application. You can take an application from an environment and clone it; the platform will automatically use the new volumes that you call out and use those names, and those will be backed up with the same names, so you can go ahead and access them easily. It’s truly going to be single-click access and movement from one cloud to another. Right now it requires a couple of steps, but it’s fairly intuitive.

So you can actually go back here and get the PV. You see that the persistent volume is bound, and when you say kubectl get namespaces you see this mysql1 prod namespace; so with kubectl get pvc -n for that namespace, you see that the claim we set in that application is now bound to the volume PVC clone, and it’s essentially using the same data.
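
Spelled out, those verification steps are (the namespace name here follows the run name from the demo):

    kubectl get pv                   # pvc-clone should show a STATUS of Bound
    kubectl get namespaces           # find the namespace Nirmata created, e.g. mysql1-prod
    kubectl get pvc -n mysql1-prod   # the claim should be bound to the volume pvc-clone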

So that covers the demo; what you saw is the ability to take any dynamically created volumes, right, from applications that are running in one cluster, and with Portworx easily back them up into an object store and then restore them in any other cluster or environment in any cloud, and leverage them with applications.

And using Nirmata, as you are managing multiple clusters across multiple clouds through the same interface, you can manage both your applications and your data through a single pane of glass. All right; so that takes care of the demo.

So just to summarize: what do Nirmata and Portworx do? It’s a simple set of tools to migrate your applications from one environment to another, right? As Michael called out very eloquently, it is not just persistent storage; it’s an enterprise-grade, production-level data management and application management solution that you get with Portworx and Nirmata.

With the abstraction that is available through Nirmata, with its role-based access control, developers just get to see their application catalogue and their environments, and all the underlying plumbing is abstracted from them. And IT ops have visibility across clusters; they have a full view into who is doing what in their clusters and what the performance of the clusters is. It’s a very intuitive tool for them to manage performance, operations, and troubleshooting as part of their key responsibilities.

A little bit about Nirmata – again, as you saw in the demo, Nirmata is truly a cluster and application lifecycle management platform for the multicloud. We automate all the pieces that you typically need in terms of host setup, storage integration, and network integration. We deploy and operate those clusters and the applications on top of them, and then we provide ongoing optimization as you run those applications and clusters.

So all the pieces around lifecycle management are covered: you want to upgrade your cluster, upgrade your applications, or manage your integration with build tools like Jenkins. Nirmata is a very composable platform; it’s got 50+ integrations for different kinds of platforms, and anything new can be very easily integrated with our platform as well.

This is what the simple architecture of the platform looks like. Ours is an agent-based architecture, so our core platform is available as an [unintelligible 00:50:21] or [unintelligible]. When you are deploying Kubernetes on a set of hosts, all you have to do is deploy the agent. The agent talks to the server, and through that it deploys and manages Kubernetes clusters.

And what Nirmata provides on top are all the capabilities that you need to manage the entire Kubernetes stack: what you need to do around security, alarms, and metrics. Nirmata is a multitenant platform, and this is a requirement that we have seen many large enterprises ask for and request. The idea there is that any enterprise out there, even when starting within a single group, needs multiple environments, and some of these environments have to be separated out.

And when different groups want to have their own access through Nirmata, they can actually create multiple tenants with complete security and isolation of different users and environments, which goes beyond just creating multiple structures within an account. You can actually have multiple accounts and create clusters within them. When you think about central IT ops managing clusters enterprise-wide, Nirmata is the kind of platform that fits that need.

I’m just going to take a minute to summarize this, right? The platform is very composable. We manage Kubernetes through APIs, which basically means that it’s 100 percent upstream, open source. There is no curation; you don’t get our distribution. What we actually do is manage any distribution. We deploy 100 percent upstream Kubernetes ourselves, but if you want us to manage a managed service like EKS or AKS or GKE, we can discover those clusters and manage them through our platform as well.

We can do the same thing with OpenShift and the Oracle distribution; there are many distributions we have tested, so we can manage those, and if you have something else, do let us know and we can see if we can cover it.

It is cloud flexible, so any cloud, any OS, any infrastructure is our mantra. And you saw how this is truly a platform to deliver enterprise-grade adoption of Kubernetes, right? One of the things we have seen with customers is that a single line of business can maybe build a few clusters and work with that, but when you’re thinking about adopting Kubernetes across the enterprise, and you want to make it seamless and easy, then you think of a platform like Nirmata. It’s available as a service, and of course we provide the necessary white-glove services to help customers easily integrate and adopt the platform and essentially get going on their cloud native journey.

I’ll leave some time for Q&A, so if there are any questions we’re happy to take them. So one of the questions – and by the way, thank you for that – is which applications are really easy to try when you’re thinking about multicloud with storage, running as a stateful set? I would say MySQL and WordPress. MySQL is a very easy, simple application that you can use to try it out, right?

There are sample YAMLs available for different application types that you can look at, but the one that we use all the time is a simple MySQL stateful set, which gives you very good bang for the buck when you’re trying to test things out.”

Michael: “Another one that we see a lot of our customers doing is Jenkins, so they will run their CI/CD pipeline itself in containers on Kubernetes as a stepping stone towards actually running their production apps. You learn a lot by just running Jenkins on Kubernetes; the Jenkins [unintelligible 00:55:42] master is a stateful service, so a lot of the features that we talked about today are required there as well.”

Anubhav: “Yes; fantastic. In fact, Nirmata has an easy plugin for Jenkins, so you can very easily integrate Nirmata with the Jenkins instance that you deploy with Nirmata and Portworx.

So there is another question about block storage versus file system storage. When you’re thinking about backing up these solutions and restoring them, most of the solutions that we have come across have been on the block storage side. Michael, do you have any comment on that?”

Michael: “Yeah; so typically databases that require high IO and high throughput like block storage. To draw the connection without even talking about containers and Kubernetes: if you’re running a database on Amazon, what’s the first thing you do? You attach an EBS volume to your [unintelligible 00:56:58] instance, and EBS stands for Elastic Block Store, so it’s a simple way to make the connection. Databases like block storage: single-writer volumes, high IO, high throughput.

Multi-writer, though, is also important in cloud native applications; think, for instance, of a multi-writer volume used to share configuration between workers in a cluster. WordPress is another great example, and it actually uses both file and block storage: MySQL, which is the database for WordPress, uses block storage, but WordPress itself is stateful too, and you need a multi-writer volume, sometimes called a shared volume, to share content between WordPress workers.

So both are important, and that’s why – I talked a lot about the block storage capabilities of Portworx, but Portworx is actually multi-protocol, meaning that you can do single-writer block volumes as well as multi-writer file volumes, and there’s an embedded object store; we do that as well.
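
A sketch of how a multi-writer Portworx volume is requested in practice, for example for shared WordPress content; the shared parameter follows Portworx’s documented storage class options, and the names are illustrative:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: portworx-shared-sc
    provisioner: kubernetes.io/portworx-volume
    parameters:
      repl: "2"
      shared: "true"          # multi-writer (shared) volume
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: wordpress-content
    spec:
      storageClassName: portworx-shared-sc
      accessModes:
      - ReadWriteMany         # many WordPress pods can mount and write concurrently
      resources:
        requests:
          storage: 5Gi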

I would say that what type of storage you need, whether block, file, or object, depends on your application, but almost all applications are going to use some combination of the three; that’s why picking a platform that enables you to be flexible is an important first step.”

Anubhav: “Excellent; thank you, Michael. All right; I think we are coming to the end of our session. We’ve got a little over a minute, so we can take one last question if anybody has one. All right; folks, thanks again. Thanks for joining this session. We hope you found it useful, and if you do have any questions about the solution or need any help getting going with us, please feel free to reach out.

You can reach out to the folks at Nirmata at info@nirmata.com, and you get free one-month access to our SaaS platform to get going. We offer these integrations, and Portworx offers a one-month trial as well, so if you want to try our joint solution, we will be posting an integration document on our website that will be available by the end of the week; please look for that, and if you have any questions feel free to reach out.”

Michael: “Thank you, everyone.”

Anubhav: “Thank you.”