
Hybrid Cloud Kubernetes with Diamanti and Nirmata



Interviewer: Hi, everyone. This is Ritesh Patel, from Nirmata. Welcome to the webinar on Hybrid Cloud Kubernetes. Today I have Sean Roth, from Diamanti, with me, and we will be talking about how enterprises are building Kubernetes clusters across public and private cloud. I'll start with an introduction on why enterprises are using Kubernetes, then we'll dive into how Diamanti provides a bare-metal Kubernetes container platform, followed by how Nirmata helps enterprises manage applications across public and private clouds, and we'll finish with a short demo of our solution. So let's jump right into it. Today all enterprises are looking to gain business agility and software development agility by becoming cloud native, using cloud-native technologies like containers.

And when they start using containers, the first thing that they need is orchestration. This is where Kubernetes comes in. Kubernetes has become the de facto standard in orchestration. Over the last year, Kubernetes has seen significant momentum in mid-size to large enterprises, and it is also extremely stable and being used in production. One of the big advantages of using Kubernetes is that all major cloud providers at this point have added support for it. That means if an enterprise adopts Kubernetes, they can run [unintelligible 00:02:06] applications on any cloud, essentially because the Kubernetes API is now standardized across these cloud providers. Kubernetes is also part of the CNCF, the Cloud Native Computing Foundation.

And as a result there's a huge ecosystem around Kubernetes. Everything from infrastructure services, security services, monitoring, and management – all of the services or solutions that you require around Kubernetes are provided by this ecosystem. This gives enterprises a lot of choice and flexibility in terms of how to build their Kubernetes solutions. There are definitely a lot of advantages, and we're seeing Kubernetes become the multi-cloud operating system for all enterprises. But just like with any other sophisticated solution, there is significant complexity when enterprises start adopting Kubernetes. There's a huge learning curve because there are a lot of new abstractions that developers and operators have to learn.

Kubernetes uses a declarative style of configuration using YAML. That's something that end-users have to learn, manage and even create. Along with that, when you're using Kubernetes there are a lot of other tools that need to be considered and integrated, and that can create challenges in terms of compatibility, which again adds to the complexity. Then there are other kinds of challenges when you start talking about multiple clusters or running on different clouds. How do you ensure consistency? How do you ensure they run the same version? Things like that. And then, like with any other complex software, there's a whole lifecycle of management for the Kubernetes clusters themselves – installation, upgrades, troubleshooting, and so on. All of those are challenges that enterprises face, but there are solutions for them, and we'll discuss both a little bit later.
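As an illustration of that declarative style (this example is added for clarity and was not part of the webinar), a minimal Kubernetes Deployment manifest looks like:

```yaml
# Minimal Kubernetes Deployment: declare the desired state (three replicas
# of an nginx container) and the cluster works to converge on it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to Kubernetes, which then creates and maintains the pods.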

So if you take a look at what enterprises need in order to build a full container runtime stack, it starts with physical infrastructure. This could be public cloud, private cloud or bare-metal infrastructure. Then there's the orchestrator, [unintelligible 00:05:06] Kubernetes. Then there's infrastructure for the containers, which is, again, common across environments: monitoring, logging, security, an ingress controller and such. And then finally the end-user applications: databases, messaging, all of the services that the developers want to run. Right? So this is what the stack looks like. And to build and operate this stack, every layer needs some level of management. Starting from the bottom, there's the physical infrastructure, which is typically managed by an infrastructure team. If it's cloud, the cloud provider's tools can be leveraged to manage the infrastructure.

Then you need to manage the scheduler, in this case Kubernetes. And then managing the lifecycle of the container infrastructure, as well as the applications – getting visibility, monitoring health, governance, scaling – all of these tasks become necessary, and you need a solution to be able to do that. So this is where Nirmata and Diamanti come in. Diamanti provides turnkey, bare-metal container infrastructure with Kubernetes built in, so it [unintelligible 00:06:41] a lot of the pain around the full lifecycle, installation and management of the cluster itself. And at Nirmata we provide a software-based solution that can manage the lifecycle of the container infrastructure components, as well as the applications running on top, and this can be done across clusters.

So we’ll look at that a little bit later, but first I would like to hand over to Sean to talk about Diamanti and how Diamanti simplifies Kubernetes infrastructure.

Respondent: All right. Thanks so much, Ritesh. All right. My section of the program really focuses on the context around container infrastructure. It’s instructive at this point, you know, to look at some of the major challenges that, you know, most of the customers that we speak with, you know, see – there are three major milestones sort of in the adoption of containers within the enterprise. And we solve the first two. The third one, this is where we team up with Nirmata as a joint solution and a technology integration to be able to solve that one. At the outset – I would imagine, you know, those of you that have started to adopt containers and build Kubernetes environments either within your enterprise or within the cloud – one of the things that you have to contend with is where are you going to run this?

And if you decide to build an [on-premise] Kubernetes environment, then there’s a tremendous amount of heavy-lifting involved typically if you go the route of do it yourself. And essentially what that means is that you pull from, you know, the whole variety of available servers, networking gear, storage arrays that are out there that are ultimately designed for legacy VM environments. And doing that presents a tremendous amount of challenges. We’ll get into some of the details around that later, but there’s a lot of heavy-lifting involved there to actually build your container environment, put the stacks together, configure it with all of the necessary storage and networking, and then on top of that deploy Docker, Kubernetes and getting your containerized applications running.

So that's the first challenge, which is quite difficult. And then, once you've gotten that far, you've got to figure out how to manage and support this container stack on an ongoing basis. Things like being able to guarantee real-time service level agreements. What about high availability? And then secure access and management. And who do you really go to for support if something breaks? If you take the DIY approach, you've built an infrastructure that has many different layers, many different technologies, and therefore many vendors involved. So what do you do to support that on an ongoing basis? And then finally, this being the toughest challenge, what do you do in a situation where you've got an on-premise Kubernetes environment where you're running your own applications, but you've also got deployments in the public cloud? Is there a simple way to manage all of this in a coordinated fashion?

How do you leverage the multi-cloud? Again, I'll point out that's where Diamanti and Nirmata have teamed up. So these are the three milestones that you'd be looking at along the container adoption curve. And as I mentioned, there are a lot of steps in the do-it-yourself approach to container infrastructure. This somewhat simplifies it, but the flow pretty much mirrors what a lot of our prospects and customers have experienced – those who have gone out to spec out their own hardware for servers, networking and storage. Figuring out how to get all that in-house – sometimes you've got existing infrastructure, and now you have to configure it to run containers. So there's a tremendous amount of work involved. Kubernetes obviously is a very powerful orchestrator, and the dominant orchestrator thus far, but it's not necessarily an easy platform to learn.

And so you look at this whole flow and there’s a lot of complexity involved, especially around configuring networking for your Kubernetes environment and then persistent storage for [stateful] applications. These are the kinds of things that we’ve seen customers spend months on. And it is often the case that with this particular approach, you know, customers will just sort of give up and say, “You know, we’ve done this for long enough and it’s had a tremendous cost not only in terms of dollars and cents, but in terms of the resources that we’ve had to assign to make this project happen.” So, you know, even if you do have the right kind of expertise in-house and you get through, there’s obviously a lot of tribal knowledge involved because this is a, you know, a home-built system here. So, you know, not a very easy process if you do it yourself.

And really, the underlying thesis here is that legacy infrastructure is just not architected for Kubernetes, for containers. And so this is what introduces the lion's share of the complexities that we're showing here. By stark contrast, Diamanti has taken a completely different approach. This is a purpose-built, bare-metal container platform. What we've done is create a platform that ships with unmodified versions of Docker and Kubernetes, so it is completely open source in that regard. This is a container platform delivered as a single appliance. Within that appliance we offer low-latency NVMe flash storage and a completely plug-and-play approach to networking – I'll get into some of the details around that. As I mentioned, you've got the container runtime and orchestration already built in, and we support the entire stack, 24/7, top to bottom.

So you've basically got one source of support to contend with in this approach. Having a look under the hood – again, the beauty of this solution really lies in the custom approach that we take to network virtualization and persistent storage. It is based on the Intel x86 architecture, but the added value that we provide in giving you a plug-and-play network experience has to do with our ability to take a physical network function and virtualize it such that every container running on the Diamanti platform has its own dedicated IP address. There's no need to go out and build all these overlay networks and deal with [NATs]. This is a dramatically simpler approach – we do all of that heavy lifting under the hood, so to speak, and we use SR-IOV for that. As for the storage that we provide –

So for stateful applications we make it very easy to carve out storage for those applications using NVMe, and we give you very granular control over IOPS and performance, so you can set different service levels for your applications. You have a tremendous amount of control, and all of this is really done for you and put at your fingertips when you deploy Diamanti. The last bullet – this is not any kind of graphical error or typo – we don't have a hypervisor. You can see here it says "hypervisor" with a strikethrough. This is a bare-metal platform. Again, it's a much simpler approach than having containers deployed on top of VMs, which is essentially two layers of virtualization. There's no hypervisor involved in any of this.

So, again, in that same frame, doing a compare and contrast of our bare-metal approach to container infrastructure versus the more typical VM-based DIY approach that we see a lot, you can see here all of the steps that you would have to do. Everything from getting all of the infrastructure gear in-house and then configuring it. So, painstakingly, on each node you would install VMware ESX, you'd have to download Docker and Kubernetes and install and configure all of that. Again, a ton of steps in the second part of the process. And then, finally, you've got to get all of your support together. We've seen with some customers that that kind of workflow will take about six to nine months. Whereas with Diamanti – you know, [racking] the Diamanti D10 appliance, we recommend a three-node minimum cluster.

So once you've done that and you've connected it to your layer-2 switch, you would just run four commands on the command line: one to create your cluster, one to create your network, one to carve out your storage volumes, and then finally deploy. It is really that simple, and that's a process that will take you less than a day. So there's a tremendous amount of savings in terms of resources, and you're able to forego all of the complexity and painstaking configuration around building your own on-premise container environment. Now, with the do-it-yourself approach, I believe there are a fair number of organizations that are running containers on VMs. And in a lot of ways that makes perfect sense because you've likely got all of that infrastructure running in-house. So why not start with equipment and a configuration that you already have?
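As a sketch of those four steps (the exact `dctl` command names and flags below are assumptions for illustration, not verbatim Diamanti documentation; consult the vendor docs for the real syntax):

```shell
# Hypothetical four-command bring-up on a racked, cabled Diamanti cluster.
dctl cluster create demo node1,node2,node3    # 1. form the three-node cluster
dctl network create default -s 10.0.0.0/24    # 2. define the container network
dctl volume create data --size 100G           # 3. carve out persistent storage
kubectl apply -f app.yaml                     # 4. deploy the application
```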

But there are some significant drawbacks to running containers on VMs instead of on purpose-built bare-metal infrastructure. The first being that on bare metal there are fewer layers to manage, and therefore simpler troubleshooting. If you look at a VM-based stack running Kubernetes, you've effectively got two layers of virtualization. With the hypervisor and the VM you're abstracting the system from the underlying hardware, and then in your container environment you're abstracting the containers – the software – from the underlying OS. So if you have a problem, it's likely going to be very difficult to figure out where that issue might be occurring. With a bare-metal container stack, you've also got much higher efficiency: the resources that each container will use are far less than the CPU and memory footprint taken up by a VM.

So you can ultimately run more containers per server on bare-metal than you can on a VM stack. And we see approaches where there’s one container running per VM, and a lot of that has to do with the fact that, you know, dealing with noisy neighbor effects and so forth, you know, sort of limit how many containers you might run on a VM just for the sake of not having to deal with those types of issues. You get better, more predictable performance sort of on the order of 30 percent better and then lower total costs. There’s a much smaller footprint with dedicated bare-metal infrastructure than with VMs. So these are some important things to consider when you’re looking at different approaches, of where – I should say how you’re going to build your container stack. So I’ll move along to this slide. You know, this gets into the synergy that Diamanti has with Nirmata. I think this will lay it out graphically.

This is representative of an approach that a mutual customer of ours took as they were building several different types of container environments on Kubernetes across their CI/CD flow. And then, of course, they were running containers in the public cloud, and you can see that here. So for all these different environments – let's say you've got your design, dev and test all running on Diamanti's bare-metal platform – all of these Kubernetes clusters can be managed through a single [unintelligible 00:21:42] glass. Ritesh, I don't want to steal all your thunder here, but I did want to show this as a transition to talking about how we integrate and work together. So with that I will hand it back to you.

Interviewer: Thanks, Sean. That was awesome. Yes, so just picking up where Sean left off, he showed how Nirmata can span multiple Kubernetes clusters and provide a unified view, a single [unintelligible 00:22:20], if you will, for managing not only the lifecycle of the cluster itself, but the applications running on the cluster. So essentially, once the clusters are up and running, Nirmata can help operationalize these Kubernetes clusters in the enterprise. We'll look at that in a little bit – first, a quick overview of what Nirmata is and what our solution is. Essentially, Nirmata is a Kubernetes-native application platform. We are 100-percent Kubernetes certified, [unintelligible 00:22:56]. We validate against the upstream Kubernetes software and are certified with the CNCF. Nirmata is designed to itself be cloud-native and scalable.

And we not only deliver Nirmata as a cloud-based service for some customers; for large customers it's deployed on premises. So we have different deployment models. We already talked about supporting multiple clouds, as well as multiple clusters, which is a reality in the enterprise. Most enterprises want to deploy multiple clusters for reasons [unintelligible 00:23:41] like high availability, compliance and so on. And everybody wants the flexibility to deploy clusters on premises as well as in the cloud, because different workloads have different requirements. With containers, that's one of the value propositions – portability is a key benefit that you get out of containers. When we built Nirmata, we worked closely with enterprise dev teams and designed it in a way that it can be leveraged across the enterprise.

If you look at how enterprises are structured versus maybe a small dev team, there are a lot of different folks and a lot of different teams responsible for different areas, and all of them need different views and different information. So we built Nirmata from the ground up with that in mind, so that operators can use Nirmata to ensure the clusters stay up and running and to deploy and manage the cluster infrastructure, while developers can focus on their applications and ensure those applications are up and running. Essentially, Nirmata is a turnkey solution – pretty much out of the box you can onboard a Kubernetes cluster and be up and running within a few minutes. So here you're seeing a high-level architecture diagram of how Nirmata works. Starting from the bottom, there's the infrastructure. Whether it's private cloud infrastructure, bare-metal Diamanti container infrastructure or public cloud, Nirmata has agents that kind of [unintelligible 00:25:38] on the infrastructure.

And then the Kubernetes control plane is where the Kubernetes master components are running. In order for a cluster to be managed by Nirmata, we need to add a Nirmata controller to the cluster, and that controller is responsible for all communication between the Nirmata management plane – which you see in the blue box at the top – and the cluster itself. That's how the management plane communicates with the Kubernetes master: it sends down any configuration or commands, deploys anything defined in a Kubernetes manifest, and monitors the cluster and the applications as well. Here we're just highlighting how Nirmata is different and what some of the benefits are. We've always believed that as applications evolve and become more distributed, following cloud-native architectures, there is a need for strong application management.

And with Nirmata we provide that. Our focus is application management, and we integrate and work with best-of-breed infrastructure such as Diamanti's bare-metal infrastructure, as well as cloud providers and so on, to give customers that choice. We built on [unintelligible 00:27:24] Kubernetes, so that way we're not in the way of customers being able to leverage open-source innovation. We are agnostic to cloud providers and operating systems, essentially giving customers flexibility – in case they have infrastructure investments they want to leverage, they are able to do that. Also, since we're compatible with the Kubernetes API, we have the ability to integrate with managed Kubernetes services like [EKS] from Amazon, [unintelligible 00:28:00], and that's something we do natively to enable hybrid cloud deployment.

So just like we onboard Diamanti clusters into Nirmata, we can also onboard clusters built from [any OB] services, and then provide a [unintelligible 00:28:19] view on top. Nirmata is built as a multitenant platform, which is a huge benefit for enterprises that want to support multiple teams in their container deployment. That gives them isolation, in addition to the unified view of all of their applications. So these are some of the benefits that Nirmata provides above and beyond Kubernetes as far as deploying and managing cloud-native applications. Next we'll jump into a case study about a customer who has deployed Diamanti along with Nirmata to enable these hybrid cloud Kubernetes clusters. I'll let Sean quickly walk through the customer case study, and we'll also quickly jump into some of the lessons that we've learned as a team, which I will share with the audience today. So, Sean, please take over.

Respondent: Sure. Thanks, Ritesh. So as we alluded to at several points during the presentation, Diamanti and Nirmata came together as a result of an engagement with a mutual customer. And so this customer is a Fortune 50 energy company. As you can imagine, they have lots of pressure to innovate and do things very differently. Containers present an incredible opportunity to do that. And so as they began to deploy containers and build environments internally – you know, they obviously had some running in the cloud, as well, you know, they met several different challenges, you know, the ones that I talked about initially. Building the infrastructure and managing it, but also now that they – you know, we’re at a point where they had multiple Kubernetes environments running, you know, how can they make all that manageable as a multi-cloud container deployment. So that’s sort of the 30,000-foot view of what their challenges were.

They had taken a couple of DIY approaches to building their own on-premise container infrastructure. They looked at different alternatives – they had looked at OpenStack and OpenShift, as well as a variety of other platforms. And so you can imagine some of the complexities that they encountered during that process. They also had some specialized requirements – GridOS being something particular to their industry, and they wanted to run a containerized version of that. So they looked at all those requirements and figured they needed an alternative to the infrastructure that they were trying to build. So they deployed Diamanti. And as opposed to a project that ran many months, they were able to deploy the Diamanti bare-metal container platform in three days and simultaneously reduce their overall infrastructure footprint by a factor of 10.

There were some performance advantages that they saw initially. These are things that we’re still monitoring with them, but overall they project also a big savings. Again, much less footprint, you know, lower power consumption, all these ancillary benefits, as well. But the fact of the matter is that they have a stable Kubernetes environment that they’ve been able to deploy and also, you know, build Kubernetes clusters for each of the stages in their development pipeline. And so, you know, again, having multiple clusters to manage – the challenge that they were looking at down the road was, you know, how do we manage all these different environments? How do we do it in a way that makes sense and is efficient? So I’ll pass it over to Ritesh to talk to multi-cloud management. That’s really where Nirmata came in and provided a lot of value.

Interviewer: Thanks, Sean. So yes, in addition to having these clusters deployed, what they are also looking at is having their operations team support these clusters for their developers. And that's where Nirmata comes in. Through Nirmata they were able to onboard these clusters, add or deploy the infrastructure services like we discussed earlier – whether it's security, monitoring, logging and so on – and then make those clusters readily available for their developers. Now, being a large organization, it's not just one team that needs to be supported; they want to support multiple teams. So they needed a way to isolate these teams. Even though they're sharing the same cluster, every team has different applications they want to deploy and different requirements in terms of resources and so on.

So Nirmata simplifies that for these operations teams. Now they can start creating environments and enable [unintelligible 00:34:16] teams in Nirmata so that they can map environments to teams, and then the developers can start deploying applications to their environments. That's something [unintelligible 00:34:28] in a demo, but really Nirmata helped the customer operationalize Kubernetes so that developers could start being productive using these clusters. Another benefit was easing the learning curve. Before this project, most of the developers were not even familiar with Kubernetes, and through using our product, Nirmata helped them familiarize themselves with Kubernetes. Frankly, they didn't have to be experts in order to start being productive. So there were other benefits as well, essentially shortening the learning curve.

And so, quickly touching on some of the lessons we learned from this customer – going into this engagement we weren't sure whether customers would want to deploy multiple clusters or just one large cluster. There are always these kinds of questions, but clearly what we've seen with this customer and other customers is that multi-cluster is a reality. These clusters can all be on-prem, or they can be in public or private cloud. There are various reasons customers want to do this – high availability, compliance, the reasons vary – but it's a reality, and it's important to plan for it and to understand how these clusters will be managed, how they'll be supported and so on. And then, when you start dealing with multiple clusters, it's important to make sure that they're consistent, because with these clusters –

Every cluster is different. One is running a different version of Kubernetes, one is configured differently, with different networking. That creates a lot of challenges when it comes to troubleshooting. So standardizing on the infrastructure is important. That's not always possible if you're using both public cloud and private cloud, but wherever possible it should be done. And then, finally, there are still several challenges depending on how enterprises are set up or on how the existing [unintelligible 00:36:58]. We always run into those challenges. Again, it's not that these challenges can't be solved; it's more about planning for them and making sure you're aware that these could be issues. The customer we're discussing ran into some of these, and we were able to resolve them and move past them, but because containers provide a completely different way of deploying applications, some of these challenges will emerge and they just need to be addressed.

So, to quickly summarize before we jump into a demo: with Nirmata along with Diamanti, our value proposition is that we provide a single management pane for multi-cluster and multi-cloud, where Diamanti provides the infrastructure for the private cloud and Nirmata layers on top. Nirmata has full, comprehensive application management built in, which can essentially help operationalize the infrastructure. With the [unintelligible 00:38:23] solution you end up getting full visibility all the way from the applications down to the cluster, and across clouds. And from a consistency standpoint, customers can set up policies to ensure that the clusters are consistent, and that the applications being deployed on these clusters are being [governed] and not violating any policy.

So with the combined solution we enable these use cases and provide these benefits for our customers. All right, next I'll jump into a short demo of how Nirmata works with Diamanti. Just give me a second to quickly share the screen. I'm assuming you can see my screen – I'm sharing the Nirmata dashboard. This is what it looks like. At this point I have a few clusters. I have a cluster on AKS or [unintelligible 00:39:54], which is the [unintelligible] container service. It's a [unintelligible] cluster. This is my public cloud cluster, if you will. What I'm going to do is actually onboard a Diamanti cluster. I have a Diamanti cluster already set up. It's a three-node cluster running a recent version of Kubernetes. Okay, let's see – it shows you all the cluster components are up and running. To onboard the cluster in Nirmata, you just go through a simple wizard.

In this case we have two options here: you can onboard an existing cluster or you can create a new one. To onboard an existing Diamanti cluster, select the provider as Diamanti. Then you'll be presented with a YAML file – let me download this. This is the YAML file for the Nirmata controller, so it needs to be applied to the Diamanti cluster. What we'll do is just add the file here. This applies a bunch of resources [unintelligible 00:41:42], creates the roles, and then deploys the Nirmata controller. Once you do that, you can click this button, the controller connects back to Nirmata, and the cluster is onboarded. Now you can see the Diamanti cluster being discovered – the three-node cluster, all three nodes. It already has a bunch of [unintelligible 00:42:03] running, the storage classes, the volumes and so on. So that's pretty much what it takes to onboard a Diamanti cluster into Nirmata; it can be done within about five minutes.
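The apply step in the wizard boils down to standard kubectl operations; the manifest filename and the `nirmata` namespace below are assumptions for illustration, not taken from the demo:

```shell
# Apply the controller manifest downloaded from the Nirmata wizard.
kubectl apply -f nirmata-kube-controller.yaml

# Watch the controller pod come up before completing the wizard.
kubectl get pods -n nirmata --watch
```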

The next step is to use this cluster. In Nirmata there's a concept of an environment. So you can create an environment – we'll call this, let's say, "My Bare-metal Environment." I'll pick the Diamanti cluster here, and that's basically all you do to create your environment. This is where your applications will be running. Before I deploy an application, I'll just quickly show you [unintelligible 00:42:44] where my applications are [unintelligible]. In Nirmata, you can create applications from scratch, and you can also import applications or Kubernetes manifests into Nirmata. I have a simple demo application here – it's a guestbook application with a PHP frontend, and that is [unintelligible 00:43:11] set up. These are defined as deployments.

There are other configurations – the service configuration, the ingress, storage, et cetera – for this application. In Nirmata you can actually expand this out and see what the Kubernetes manifest looks like. But in Nirmata, again, it's all graphically presented and very easy to configure and set up. So let's go ahead and deploy this into our cluster. I'm just going to call this "Book demo." My application is guestbook, and then I just click on run. In a few seconds you'll see that Nirmata starts deploying this application to the cluster. What happens at this point is you're actually sending down all of the different resources to the cluster. You can see a status of what's happened, and then the cluster will go ahead and create these pods. In case anything fails, it'll retry the configuration. So that's what happens in case a pod fails the first time – Kubernetes keeps trying, and at some point it gets started.
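That retry behavior is the standard Kubernetes reconciliation pattern: keep comparing desired state with actual state and re-apply until they converge. Here is a minimal sketch of the idea in Python – the `flaky_apply` function simulates a pod that fails on its first scheduling attempt; none of this is Nirmata's or Kubernetes' actual code:

```python
import time

def reconcile(desired, actual, apply, max_attempts=5, delay=0.0):
    """Keep re-applying the desired state until the actual state matches.

    `apply` is a hypothetical function that tries to move `actual` toward
    `desired` and may fail transiently, as a pod might on first scheduling.
    """
    for attempt in range(1, max_attempts + 1):
        if actual == desired:
            return attempt  # converged on this attempt
        try:
            actual = apply(desired)
        except RuntimeError:
            time.sleep(delay)  # transient failure: back off and retry
    raise TimeoutError("did not converge within max_attempts")

# Simulate a pod that fails to start on the first attempt, then succeeds.
calls = {"n": 0}
def flaky_apply(desired):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("pod failed to start")
    return desired

attempts = reconcile({"replicas": 3}, {"replicas": 0}, flaky_apply)
```

Real Kubernetes controllers run this loop continuously rather than with a bounded attempt count, which is why a pod that fails once can still come up a moment later.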

Just to make sure we actually did this, I’m going to go back to my cluster and see how [unintelligible 00:44:52] are deployed. First I look at all my namespaces. See, there’s a namespace called “Demo Bare-metal,” so I’m going to get the pods for that namespace. You can see all my pods are up and running; it took less than a minute or so for these to be deployed. Once the application is deployed, you can do management beyond that: scaling the application up or down, upgrading it, all of those things can be done pretty easily in Nirmata, and you can get events, analytics, and all the information needed for managing the application. So that was a quick demo. I’m just going to show the hybrid cloud scenario.
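The verification step above amounts to listing pods and confirming they all report Running. A small sketch of that check follows; with a live cluster you would pipe the output of something like `kubectl get pods -n <namespace>` into it, but here the function just inspects whatever pod table it is given (the namespace name and column layout are assumptions based on kubectl's default output):

```shell
# Sketch of the demo's verification step: confirm every pod in the
# table reports Running. Column 3 of `kubectl get pods` default
# output is STATUS; the first line is the header and is skipped.
check_all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}
```

In the demo this is simply eyeballed in the terminal; a helper like this is one way to script the same check in a CI pipeline.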

I already have the same application deployed on [unintelligible 00:46:01]; I deployed this yesterday. It’s exactly the same application, so there’s no difference, and Nirmata also allows you to clone an existing application and deploy it to a different cloud. All of these things are possible, and that’s how we enable hybrid cloud deployments. So that’s pretty much it for the quick demo. If you have questions, please enter them now; we’ll start taking questions at this point.

So Sean, there are a couple of questions already from the audience. This one is for Diamanti: are you running [CentOS] unmodified, or has the version you use been stripped down to essential services?

Respondent: Yeah, I see that. Good question. I believe this is the standard, unmodified version of CentOS, and we’ve got everything else running on top of that. All of the functions and features you have at your disposal with the Diamanti platform run on top of that; that’s what we call DiamantiOS. So that’s everything from how you configure and deploy your clusters to setting service levels, security features, all that kind of stuff. And then there’s a second question, I believe, related to our customer case. Ritesh, you might know a little bit about this as well. The question is: what package or functionality replaced IBM WebSphere in the infrastructure?

I’m not 100 percent sure. I haven’t seen the details around that, but I think we can find out. I don’t know if you know.

Interviewer: Yeah, I do, actually. What the customer is looking to do in this case is, instead of using WebSphere, move to Apache Tomcat, which is an open source application server, and then leverage all of the capabilities around container management to replace the functionality that WebSphere provided as an application server for managing Java applications. Right? So it’s [unintelligible] container management and an open source application server to replace [unintelligible 00:49:01]. All of the things WebSphere provides around clustering, around recovery, things like that – all of those capabilities are actually part of Kubernetes as an orchestration [engine].

So that kind of makes those capabilities available, and then Tomcat provides the rest of the application [unintelligible] capability. All right, if there are any other questions – a couple have come up, so I’m going to just throw them out there. One question that always comes up around hybrid cloud is: is it possible to move applications from public to private cloud, and move them back and forth? Sean, maybe you want to quickly provide your take on that, and maybe I can add to it.

Respondent: Yeah, it definitely is possible. That was a key consideration in developing the joint integration between Diamanti and Nirmata: it’s quite often the case that customers are running some of their containers and applications in the public cloud and want to be able to move them back and forth, you know, for a variety of different reasons. That’s the kind of flexibility we wanted to enable with this solution. And similarly, I believe you guys support VM environments as well, so there’s the option of moving an application that’s running in a virtualized environment to another cluster.

Interviewer: Yeah. So with moving applications – containers definitely enable that, and we absolutely support it. The only challenge tends to be with the data. That’s where we work with folks like Diamanti and other vendors on the storage side, who help with the data-movement part – basically snapshot your data and move it to the cloud, or in some cases keep a copy of the data in the cloud and just move the application components. There are various ways of doing it, and it all depends on the technology available for the cluster, but obviously that’s one of the key benefits of hybrid cloud containers: you can move your applications back and forth. Containers make that possible.

All right, I think we’re almost at the end of our session. Are there any other questions from the audience before we sign off? It looks like we’re good. Sean, I’d like to thank you very much for presenting and sharing more about Diamanti. In case anybody wants to reach out or needs more information, our contact information is on the slide right now, so please feel free to reach out. Sean, anything else you’d like to share?

Respondent: Sure, yeah. Thanks so much. You know, it’s been great teaming up on this program. I’d encourage everyone out there to contact us with questions, as you mentioned, Ritesh, and also to have a look at the joint solution brief we’ve developed. You’ll be able to find that on each of our websites.

Interviewer: Okay. And it looks like there’s one last question: is this session available at the current URL? The answer is yes, this session will be available, and the audience members will be sent a URL in an e-mail, so you should have access to the recording. And yes, feel free to download the information in the attachments; the links are in the [attachment] section. That should give you more information about each of our products, as well as the joint solution. With that, thank you very much.