
Enterprise-wide Kubernetes – Episode 1



Jim Bugwadia: Hi, everybody. This is Jim Bugwadia from Nirmata. Welcome to the first episode of Enterprise-Wide Kubernetes, a webinar series we're sponsoring on BrightTALK. So thanks, everyone, for joining. In terms of today's agenda, we will first introduce the series itself and what we're going to cover here.

Then we'll also talk a little bit about what an enterprise-wide Kubernetes stack looks like, the main components and decisions involved in the stack. Then we'll talk about the Azure Kubernetes Service with our guest Paulo Renato, who I will introduce shortly. After that, we'll look at the Nirmata solution in action with AKS, and the integrations we have there.

All right. So to dive into it: if you're building and operating Kubernetes today, of course Kubernetes has become much more than just a container orchestrator. In many ways, it has become a way to model and package enterprise applications in a cloud-native manner so that they work the same regardless of whether you're using public cloud or private cloud: you have a common set of tools, a common set of constructs that you can use to build and package your applications.

So if you're looking at bringing Kubernetes into your enterprise, there are several factors involved in building out a Kubernetes stack. The first is that it all starts with infrastructure, because the applications you build on Kubernetes have to run somewhere. So you need to decide what sort of compute, what sort of network, what sort of storage you're going to use.

If you're using a public cloud, all of this may be predetermined, but if you're running on private clouds, each one of these choices has to be vetted out. The next thing you would need to do on top of Kubernetes itself is make sure that the changes coming from your image registries and from your version control tools are visible in your Kubernetes clusters as well.

And to do that, you need integration from Kubernetes into your image registry and into version control, which could be Git repositories or any other version control that you're using to manage the manifests for your Kubernetes applications. And you need integration into your build orchestration tools like Jenkins and others.

With Kubernetes, you also need Ingress, which controls the incoming traffic flows into your cluster. I'm going to put this separately from networking, because there are layer-2 and layer-3 networking concerns, and then there are the Ingress, layer-7 types of routing and load balancing concerns that you have to address.

With Kubernetes as well, you would also want to start looking at other aspects of your stack, including end-to-end logging, monitoring, and security, as well as the application management which needs to be done on top of Kubernetes as you're deploying and managing your enterprise applications. So if you think about it end-to-end, one aspect is that there are several components you would want to integrate and compose as part of your Kubernetes stack. And at the same time, there are interesting trade-offs and choices that you can make as you're putting the stack together.

So in this webinar series, what we're going to do is look at each one of these components, at some of the main choices that enterprises need to make as they're building and managing their Kubernetes stacks across one or more clouds, and at the questions to think about as you're looking at tools from vendors, from cloud providers, and from the open source community itself.

So with that, let me go ahead and introduce our guest today, Paulo Renato, who I've had the pleasure of knowing for quite a few years. Paulo is a principal cloud architect at Microsoft. He's also one of the founders of the Azure Open-Source Meetup Group in the Bay Area, as well as a board member of a Portuguese organization in the Bay Area community. So Paulo, welcome to the webinar.

Paulo Renato: Hi James. Thanks for having me. It's a pleasure to be here. I'm very excited and I'm looking forward to sharing some of my thoughts with the audience.

Jim Bugwadia: Absolutely. So in your role, Paulo, you work as a cloud architect with several different enterprises who are either migrating to the cloud or trying to take their applications and make them cloud-native. So one question: what are some of the key trends that you're seeing with Kubernetes adoption in enterprises?

Paulo Renato: Sure. So essentially I work primarily with large enterprise customers in the Bay Area. One common trend that we see in the Bay Area, which actually differentiates it from other locations where our folks from Microsoft are working with customers, is that a lot of local companies like to use open source technologies, right. In my space, our customers adopted containers a year or so back, and they are now on the journey of moving whatever they deployed before onto Kubernetes. This is one trend.

I think we will cover this as part of the webinar, but it's common knowledge that there's great momentum behind Kubernetes in general, right. The other trend that I see is that we have customers who are starting their journey with containers as well, and they are going straight to Kubernetes and trying out some of the managed offerings from the different cloud providers.

Jim Bugwadia: Okay. Yeah, that's interesting. So the early adopters are kind of retooling on Kubernetes, while the newer ones have the luxury of leapfrogging directly into Kubernetes.

Paulo Renato: That’s right. Yes.

Jim Bugwadia: Okay. So my next question, and it's an interesting thing to think about, right. Kubernetes in many ways abstracts infrastructure. The whole purpose is to create this common tool set, these common constructs, that you can use on any cloud. So does that mean infrastructure no longer matters? What do you think?

Paulo Renato: That's a really tricky question and very debatable as well, right. So I think we can't deny that infrastructure is a precious commodity, right, and it matters, let's say, to the operations team for the sake of low-level access to how things get deployed, or even more important things related to hardening, right. That is one aspect. And the other aspect is that for some other roles within the enterprise, such as developers, the only thing they care about is consuming the service.

From that perspective, what we actually want is to abstract all that complexity from the developers, right. I also just wanted to mention, with respect to running those services, let's say in cloud ABC, that could be with a managed service, or even, let's say, those services running on top of infrastructure as a service, right.

A lot of the conversations that we have are essentially related to data gravity. Meaning, even with the infrastructure abstracted, there are different facets that we can touch upon later in terms of whether it is important or not. But things like data gravity are important for one to decide where to run the service, right.

Jim Bugwadia: Okay. Yeah, so that’s a pretty interesting point, right, because in some ways Kubernetes may be different things to different roles within an enterprise.

Paulo Renato: That’s right.

Jim Bugwadia: And for developers, it creates this common layer of abstractions, these common concepts in the toolbox. But for a storage architect or a networking team, there are still many interesting challenges which may need to be solved with infrastructure solutions.

Paulo Renato: That’s true. Yeah.

Jim Bugwadia: Okay. Interesting. Another question: I want to briefly talk about distributions, right, because that was one of the topics we wanted to cover in this conversation. There are some interesting debates out there in blog posts, etcetera. Different vendors are packaging Kubernetes in different ways. At one extreme, you have vendors which completely abstract Kubernetes. They have their own tools, their own [unintelligible 0:09:51] and their own curated version of Kubernetes. And then you have varying shades, or varying flavors if you will, of how close to upstream Kubernetes you can be.

Also, of course, every cloud provider, now including Microsoft with Azure, is providing a managed Kubernetes service. So there are some interesting tradeoffs in how close to the edge, or how close to upstream, you need to be. And that's one thing enterprises need to think about. But I'm also interested in hearing from you: in making the choice between a managed service versus just taking upstream and installing it themselves, what are some of the tradeoffs, and what should enterprise architects consider?

Paulo Renato: Sure. Definitely. Yeah. So let's start with the first part of the question, in terms of the different distributions from different providers, right. I think what I mentioned before in terms of data gravity is one important aspect for enterprises to make the decision on where to run the service itself, right. And usually we also talk about thinking beyond Kubernetes itself, because containers by themselves won't tell you the whole story.

There are other things alongside the containers that compose your applications, right. So we talk about things like how containers connect to other services, for example, or to other managed services, as in, let's say, database as a service and so on. And how those solutions bring you additional capabilities with respect to [unintelligible 0:11:41] policies or security policies, right, that we can easily run in some of those distributions.

Now, with respect to managed and non-managed offerings, right, it's not like one size fits all. There are good reasons for sure why you would choose, let's say, a managed offering, and why you would run your own Kubernetes deployment as well. So typically, large enterprises choose both approaches, if you will.

It's very common that enterprises, for the sake of having low-level access to the infrastructure, for custom network configuration, or for operating system hardening, choose to deploy their own cluster on top of IaaS, right. So in that area specifically, you can definitely deploy Kubernetes on top of IaaS. You can also use what we call ACS Engine; if you search for acs-engine, it's a collection of scripts that streamline the deployment of a very custom Kubernetes cluster on top of VMs. So it makes it a little bit easier than scripting it yourself.
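(For reference, a minimal sketch of the ACS Engine workflow Paulo describes, assuming the acs-engine binary and the Azure CLI are installed; the cluster definition file and all resource names are illustrative.)

```bash
# Generate ARM templates from a cluster definition (API model) file;
# kubernetes.json describes the masters, agent pools, and network options.
acs-engine generate kubernetes.json

# Deploy the generated templates into an Azure resource group.
az group create --name my-custom-k8s --location westus2
az group deployment create \
  --resource-group my-custom-k8s \
  --template-file _output/<dnsPrefix>/azuredeploy.json \
  --parameters _output/<dnsPrefix>/azuredeploy.parameters.json
```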

And on the other hand, we have other scenarios where it makes a lot of sense for customers to choose a managed service, especially if you are starting to play with containers for your dev and test environments, right. But we also have a lot of happy customers, if you will, running their production workloads on managed services, on managed Kubernetes.

Jim Bugwadia: Okay. So it’s not really one or the other. It can be like with —

Paulo Renato: It could be both.

Jim Bugwadia: It could typically be both. Interesting.

Paulo Renato: That’s right.

Jim Bugwadia: And you mentioned also that when they're choosing the custom route, security is one reason. There could be other customizations, like with networking, or maybe admission controllers, things that they want to customize in the Kubernetes components, which lead them to the custom route. Very interesting. Cool. That's great data. So now I'm convinced that in my enterprise I'm also going to use managed services. They're easy to get started with. How do I decide, then? There's EKS, there's GKE, and there's AKS. How do I decide on one versus the other?

Paulo Renato: Sure. Definitely. So let's do this. I think that's a good segue for a demo. Let me share my desktop here. What I will show you is essentially how you can consume a managed service, how easily you can consume a managed service on Azure. And then I will give you some of the data points that you can consider when you make the decision.

Jim Bugwadia: Okay. And while you're pulling that up, just a quick note to our audience: feel free to type in questions as they come up, and we will try to answer as many as we can live. If we cannot, we will also answer questions offline and send that out. So please do enter your questions as they come to mind. Go ahead, Paulo.

Paulo Renato: Okay. Nice. So I trust you guys can see my desktop. What you are seeing here is essentially the Azure portal, portal.azure.com. This is where you get to, right. Essentially, here on my left I have all the services that I pinned to my left bar. And one of the things that I want to show you is Kubernetes services, which is where I have my managed Kubernetes. We call it Azure Kubernetes Service, or AKS. So if you search for AKS, you will find exactly what I'm showing you here.

So let me just filter to a couple of accounts, or how we call them on Azure, subscriptions. I want you guys to see only these two clusters here. So there are two ways you can consume this service: either over the UI, or, how I like to do it and a lot of my peers like to do it, over the CLI. So for the sake of this demo, what I will do is consume one of those two clusters in order to deploy a very simple workload and explain some of the concepts there. And then I will also deploy one cluster using the CLI.

So one good thing about Azure is that it comes with Cloud Shell, which is essentially a bash or a PowerShell, right, where you can run your commands; all the tool sets are installed as part of the shell. And as such, it's much easier for you to start consuming the services right away. So let me run a couple of commands here. Give me just a second.

And by the way, what you are seeing here on Cloud Shell runs using our container technology as well. So essentially what I want to do is show you both of the clusters that you saw in the portal. You can see here that I have two contexts, right. So if you are familiar with a kubeconfig file, this is where I have all the credentials, or SSH keys, or any information related to the clusters that I have deployed. I could run this straight from my laptop or anywhere else where I have access to this browser.
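(A minimal sketch of what's happening here: az aks get-credentials merges a cluster's credentials into the kubeconfig file, and kubectl can then switch between the resulting contexts. Resource and cluster names are illustrative.)

```bash
# Merge the AKS cluster's credentials and context into ~/.kube/config.
az aks get-credentials --resource-group my-rg --name my-aks-cluster

# List all contexts in the kubeconfig (one per cluster) and switch between them.
kubectl config get-contexts
kubectl config use-context my-aks-cluster
```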

Now, with this cluster called "my Kubernetes cluster three," I will just show you that I have a single node, right, and the information that I'm seeing here I could potentially see from the portal as well. One thing that I wanted to show over the UI as we interact here: you see options like upgrade. You can see that the versions are matching; this is my current version, and I could easily upgrade to a more recent version. I could also easily scale the number of nodes, and for this configuration in particular, I could go to as many as 100 nodes.
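(The upgrade and scale operations shown in the portal map to CLI commands as well; a hedged sketch with illustrative names and an illustrative version number:)

```bash
# Show which Kubernetes versions this cluster can upgrade to.
az aks get-upgrades --resource-group my-rg --name my-aks-cluster

# Upgrade the cluster to a newer version.
az aks upgrade --resource-group my-rg --name my-aks-cluster --kubernetes-version 1.11.3

# Scale the number of agent nodes.
az aks scale --resource-group my-rg --name my-aks-cluster --node-count 3
```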

We do have customers with different deployments going on, on both of these, right. And there are many other things that you could do from the UI, and also from the CLI. So what I want to show you here as part of this CLI demo is that we don't have any pods currently running. I will create a couple of pods here and explain as I go.

So what I'm doing here, I'm just creating an NGINX pod, right, which will serve as an Ingress controller. So the pod is already running here. What you do now is expose this pod. Give me just a second. I want to tell you about what happens.

So one of the things that I explain to my customers: as you interact with our deployments of Kubernetes, it's 100 percent upstream Kubernetes. We actually don't modify the Kubernetes API. What happens, though, is Microsoft works a lot with the open source community in order to make the integration between the Kubernetes API and our infrastructure in the cloud.

So essentially what I will show you here is a very simple way to expose the service using the Kubernetes API. And if you look here, we are giving this service a public IP address that will pop up in a couple of seconds as part of my external IP [unintelligible 0:20:48], and you guys can try it yourself by accessing this IP address.

But what is happening here is that, as part of the cloud infrastructure, we are orchestrating, or making, all the configurations within the cloud itself in order to assign a public IP address to this Ingress controller, alongside a load balancer running as part of the cloud infrastructure, right, without you even knowing that this whole thing is happening.

So the system does the majority of the work, work that Microsoft, from an engineering perspective, does alongside the open source community in order to make this type of integration happen. So if you try out this IP address, I'm pretty sure you are going to get NGINX. So this is a very simple example of how you could do that.
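(A minimal reconstruction of the demo flow just described, assuming the pre-1.18 kubectl behavior where "kubectl run" creates a Deployment; names are illustrative.)

```bash
# Create an NGINX deployment and expose it with a cloud load balancer.
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Watch the service until EXTERNAL-IP changes from <pending> to a public address;
# behind the scenes, Azure provisions the load balancer and public IP.
kubectl get service nginx --watch
```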

So now let's quickly kick off the creation of a cluster, and let me share additional information with you guys. Give me just a second. We will cheat here and just copy and paste. So, essentially, for everything that we create on Azure, we have the concept of resource groups, so I created a group in one specific location called westus2, which is one of our cloud locations.

And now I'm using the AKS command, and I will create a cluster on top of this group, right, asking it also to generate my SSH keys, which will later be added to the kubeconfig file so I can easily access this cluster using my kubectl client, if you will. So now let's switch back to the slides. Give me just a second there. Okay. Can you guys see the slides?
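(The two commands being run here look roughly like the following; the group and cluster names are illustrative.)

```bash
# Create a resource group in the westus2 location.
az group create --name my-demo-rg --location westus2

# Create a managed AKS cluster in that group, generating SSH keys for node access.
az aks create \
  --resource-group my-demo-rg \
  --name my-demo-aks \
  --node-count 1 \
  --generate-ssh-keys
```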

Jim Bugwadia: Yes, we can.

Paulo Renato: Okay. Great. So while we create this cluster, I just wanted to mention a few important data points. So why would you run, let's say, a Kubernetes cluster on Azure, right? The first thing: Microsoft is deeply involved with the open source community, as you can see on this slide here, including leading some important projects in the context of Kubernetes. And as you can see there, we even have some of our employees as members of the Kubernetes steering committee. We are also, as Microsoft, a member of the Cloud Native Computing Foundation as well as the Linux Foundation, with several other contributions to the communities.

Those are some of the things for you to think about as you engage with Microsoft on these types of open source projects. Along the same lines, we also have other interesting data points: the fact that we are the number two contributor to Kubernetes and number four to Docker, and we also have 70-plus employees contributing to the Kubernetes open source project as well.

Those, from my perspective, are impressive data points for you to consider. And if you think about one of the things that I mentioned before, in terms of containers and how they need an ecosystem around them in order, for example, to orchestrate some of those aspects: we also have several contributions to other open source projects that are very important in the context of Kubernetes.

You probably heard before about Helm and Draft, especially Helm, which is like a package manager for Kubernetes, right. And another impressive data point as well: our customers are welcome to run their workloads using open source tooling. And we can see here a handful of them across the different aspects of this matrix, from development all the way to monitoring and so on.

Obviously, we also have first-party offerings from that perspective, but we are deeply engaged with the community and also partnering with all those different logos that you can see there, right. That's another important aspect.

My final thought as we wait for this cluster creation: I just wanted to give you an idea of what we are doing here when we talk about AKS. You probably saw this sort of high-level architecture before, right, where you have your masters, and you have your nodes as well, where you run your pods. So AKS is simply a way for us to streamline the creation of the nodes where you are going to run your pods, right. And we don't charge for the master nodes. We actually abstract all of that from you.

You have access to the master node pool through the kubectl command, and from there you orchestrate, on your Azure account, or how we call it, Azure subscription, the creation of all the nodes where you run your pods. So this is a very nice way to abstract all the complexity that goes on behind the creation of the infrastructure components that will be part of your cluster.

So with that, I will hand it over to Jim. And just so you know, the cluster creation takes roughly 10 to 15 minutes to finish.

Jim Bugwadia: Very cool. Yeah, so it definitely makes a lot of sense: that composable nature where you can bring in best-of-breed tools and integrate them. It's 100 percent upstream, and you have the option of either a complete managed service or just using infrastructure and building your own custom stack on top.

Paulo Renato: That’s right.

Jim Bugwadia: Thank you. All right. So in this next segment, I'm going to quickly introduce Nirmata, and then we'll look at how Nirmata can work with AKS. Like Paulo mentioned, one of the strengths of managed service offerings, and of what Azure is providing, is the integration capabilities. We'll take a quick look at that as well.

So first off, just introducing Nirmata briefly. What we do at Nirmata is provide a 100 percent Kubernetes-certified solution. We're not a Kubernetes distribution, but you can think of us as an upper, or higher-level, management plane that can work with any Kubernetes distribution or any version of Kubernetes, and provide application management and workload management capabilities.

We also have the ability to install and operate Kubernetes on any infrastructure, private or public, and to set up policies so you have enterprise-wide common governance, common visibility, and a single management plane for all of your Kubernetes clusters, whether they're public, private, managed, or custom. It doesn't really matter; you get one viewpoint for all of them.

So quickly stepping through the architecture, I'll start from the bottom of the stack and move up towards the top. If you're running Nirmata and you choose to install Kubernetes using Nirmata, you would put our agent on a virtual or physical server; we can also support bare-metal servers if those are of interest. And we also integrate with the leading hyperconverged providers in the cloud-native space.

So you can do containers on bare metal just by dropping in our agent, and that agent connects back to the Nirmata management plane. From there, it gets its instructions. Now, this management plane can be run as a SaaS, so we have a SaaS-based version, or we have an on-premises private edition which you can run in your data center or in your cloud. It's a fully multi-tenant, horizontally scalable service which you can run internally as well. So once that agent connects back, its only responsibility is to bring up the Kubernetes control plane components, as well as the worker components, based on the role of the node, VM, or server that it's running on.

Once Kubernetes is up and running, we have controllers that sit inside of the Kubernetes cluster and provide all of the management capabilities that you need for your workloads inside the Kubernetes cluster. So think of it as if we're creating a sandwich, if you will. We can provide value underneath Kubernetes, for the nodes and the container engine, with visibility into their health.

Or, in addition to that, we can run inside of Kubernetes and provide workload management and integrations into all of your CI/CD tools, into your security tools, and into things like ADFS for single sign-on. So you get this common management plane for all of your Kubernetes clusters.

So let me share my screen, and what I'm going to do is quickly show you what this looks like in action. So hang on a second while I'm pulling that up. It looks like — Paulo, are you still screen sharing, or is that — there we go. Now it's preparing. Okay. Let me try that one more time. There we go.

So you should be able to see my screen now. To start with, I'm showing the CLI. I actually have an AKS cluster that I just created using the same three commands that Paulo showed. And if I take a look at my cluster information, I see that it's an Azure AKS cluster. Here are some of the services, including the dashboard, kube-dns, etcetera, that I can see. But what I want to do now is manage this cluster inside of Nirmata, right. So I'm going to go into Nirmata, where I'm already logged in. Here I'm using the Nirmata dashboard; this is our main dashboard.
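(A sketch of the verification steps Jim describes, with illustrative names:)

```bash
# Fetch the cluster credentials and confirm the API server is the Azure-hosted endpoint.
az aks get-credentials --resource-group demo-rg --name demo-aks
kubectl cluster-info

# List the system services, including the dashboard and kube-dns.
kubectl get services --all-namespaces
```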

You see I have shared accounts. I can see [Ratasha]'s been doing some work in this account, as well as a few others from the team. I can now take a look at all of my clusters and all of the applications, which I'll show a little bit later, that we have in this account. So you see here I already have several clusters, including some bare-metal clusters, a GKE cluster, and other AKS clusters, and I'm going to now add the new AKS cluster which we just created.

So I'll call this demo-AKS. For the cloud provider, I'm going to choose Azure Kubernetes Service. Like Paulo mentioned, this does a few important things. It lets Nirmata know that Azure is the cloud provider, so as we're looking at infrastructure components, we now have the ability to interact with the cloud provider through native constructs.

When I clicked next, what I did here was download a YAML file that will run the controller we were talking about in the Kubernetes cluster. So all I need to do is use kubectl, and of course you could do this also through the Azure portal. But just through the command line, I can run this Nirmata controller in my cluster itself. And as you see, the output shows it created some resources, some roles, etcetera, that now pertain to my cluster.
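(The install step described here amounts to applying the downloaded manifest; the file and namespace names below are hypothetical, since the exact manifest comes from the Nirmata UI.)

```bash
# Apply the controller manifest downloaded from the Nirmata UI;
# this creates the namespace, roles, and the controller deployment.
kubectl apply -f nirmata-kube-controller.yaml

# Verify the controller pod is running (namespace name is hypothetical).
kubectl get pods -n nirmata
```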

So if we go back into the Nirmata UI, you should see the new one, demo-AKS, which we just created. It actually just lit up, so that means our controller is running now. It connected back. I can already start browsing and see what's in my cluster. Right now there are really no workloads running, but that's really all it took to get visibility and start managing this particular cluster.

But of course the goal here is not just to have a cluster, but to actually start managing applications on the cluster. To do that in Nirmata, we have one other abstraction which creates a nice level of isolation, if you will, for sharing clusters across teams. The concept is called an environment. Here I'm going to create a single environment on this new cluster that we just imported from Azure, and I'm going to set my isolation level to namespace every application.

So each application will run in its own little space. This way I can even run multiple copies of the application if I want to. I'm just going to create this environment on top of that cluster.

Now, I could have as many environments as I want. I can control policies for my environment, including access controls, things like that, in a very easy manner through Nirmata. But what we want to do here, just to complete this demo, is show what it takes to run an application. Before I run this application, though, I want to tell you what we already have in our catalog.

And by the way, Paulo mentioned Helm, which is one of the important contributions that Microsoft has made to the open source community. Nirmata fully supports Helm, and you can import any Helm chart into Nirmata and start modeling your application, or you can build your own applications around those. So for example, we have Ghost, which is simple; it's just one deployment, and I can see all the details in here for that application.

If I wish, I can also export this as YAML, or I can integrate this into my version control. So there are lots of different options for how you manage the manifests. The nice thing here is I can also build out my own custom applications from scratch, without having to know all of the details and all of the gory workings of the YAML; Nirmata will automatically validate it, upgrade it based on versions, and manage that low-level complexity for you.

So these are some of the applications that we already have. For the purposes of this demo, let's say I want to run a very simple hello-world application, and I'm going to go ahead and run this on the cluster that we just created. I'll just name it 'Tunnel 1' and select demo-AKS as the cluster. If I click run, what's going to happen is Nirmata will start talking to the controller, and in a few seconds we should see — we already are executing the action. I can drill down. By the way, we have full visibility into every API call that's made, which is extremely useful for troubleshooting, for debugging, and for seeing where things fail in terms of validation, etcetera.

Then I can also go and manage my application; it's pulling the image right now, and we'll see in a few seconds that it's up and running, and we'll start monitoring SLAs, etcetera, for the application itself. If I switch back to my CLI, I should also be able to see, because we chose namespace-per-application isolation, that if I do a get namespaces, there's a namespace created for it. I can even do '-n' with that namespace, and we should be able to see the pods running inside the namespace for that particular controller. So it's still creating. It may take a few seconds to pull that image for the first time on these nodes. And once that is up and running, we should start seeing the application being managed through Nirmata itself.
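(The CLI checks described here, with a hypothetical generated namespace name:)

```bash
# Each application gets its own namespace under the environment's isolation setting.
kubectl get namespaces

# Show the pods inside the application's namespace (name is hypothetical).
kubectl get pods -n tunnel-1-demo-aks
```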

Let me go back to the UI, and there it is. Yeah, I see it up and running. Now, one of the interesting things is that once this is running, if I want to do things like scale up, change the number of instances, or upgrade, all of this can be fully managed through the UI, and most of our customers will completely automate this in their CI/CD pipelines. But there are times when you want to troubleshoot, when you want to look at things in more detail. So you have all of that visibility, all of that control. You can also tweak the application as needed and then save it back as a blueprint when you're happy with it.

So there's lots of flexibility in how the applications and their state can be managed. That's what I wanted to show for a quick demo, and I'm going to go back to the slides. I know we had one question that was posted, so we can answer that and see if there are any other questions we want to cover.

So the question that was posted was: "How are backup and restore of containers, and applications running in containers, supported by Nirmata? Is there any data protection solution?"

The answer is yes. There is support for that, and what you can do is use snapshots. In Kubernetes, of course, you can use persistent volumes and persistent volume claims. If you're using a cloud provider like Azure, Nirmata, based on that cloud provider integration, will automatically select Azure as the default storage class. You can also have your own custom classes, and with the storage vendors that Azure supports, you can add external storage if you wish.

So snapshotting is one of the newer features in Kubernetes, supported through CRDs, and Nirmata supports that. It will allow you to back up things like volumes, then even restore them onto other pods if needed, and manage them externally as needed as well.
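(A hedged sketch of a volume snapshot request through the then-alpha snapshot CRDs; the API version, snapshot class, and claim names all depend on the cluster's storage setup and are illustrative.)

```bash
kubectl apply -f - <<EOF
# Request a snapshot of an existing PersistentVolumeClaim.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: my-data-snapshot
spec:
  snapshotClassName: default-snapshot-class   # hypothetical snapshot class
  source:
    kind: PersistentVolumeClaim
    name: my-data-pvc                         # the claim to snapshot
EOF
```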

Paulo, anything else you want to add on storage, in terms of data recovery that you might support from Azure and AKS?

Paulo Renato: No, I think you covered it well, Jim. There is certainly the work [unintelligible 0:41:23] about persistent volumes, right. And obviously for persistent volumes you can also take, let's say, Azure storage [unintelligible 0:41:36] as we like.

Jim Bugwadia: Right. Okay. So yeah, that's possible then too. You could potentially even automate that, take periodic snapshots, and use them offline. Okay. Another question which was posted was: is there any integration into [Lambda] with Nirmata? The answer to that is no. We currently do not have any integration with Lambda. Of course you can use EKS with Nirmata, and you can use Lambda directly if needed. We are looking at one item which is on our roadmap: we're very keenly looking at things like Knative, and also OpenFaaS and other solutions for serverless and functions-as-a-service in the Kubernetes ecosystem. We will be supporting those as our enterprise customers look at integration between containerized applications and functions, as well as the broader use cases that are emerging.

One interesting thing that we're working on with customers is, as you see different triggers within your applications, whether it's alerts or other triggers based on events that are happening, how would you launch a function through a Kubernetes-native function-as-a-service type of solution?

Paulo, anything you want to add? I know Azure has Azure Functions as well, similar to Lambda, right?

Paulo Renato: Similar to AWS Lambda, that's right. Yeah, we call it Azure Functions. And we also have, let's say, several other capabilities as well. The equivalent of Lambda would be Azure Functions.

Jim Bugwadia: Okay. All right. So, any other questions from the audience? Now is the right time to put them in; we're happy to answer them live. We do have a few more minutes left, so we can take some questions now.

Paulo Renato: We are also happy to take questions over LinkedIn or Twitter as well if you guys like.

Jim Bugwadia: That’s terrific.

Paulo Renato: Yeah.

Jim Bugwadia: Yeah, both Paulo and I are fairly accessible and certainly any one of our other team members from our respective companies. Feel free to reach out as you think about any other questions, any other feedback or thoughts that you had.

One thing I should mention as we're wrapping up: we are planning the next session of the series. It's going to be focused on security. Going back to the stack that we had in one of the previous slides, what we wanted to do was have a webinar session on each one of these components. So we'll cover security next. That's coming up in roughly a month, in October.

Then in November, we will most likely do a session on storage. And after that, we'll plan subsequent sessions on networking, as well as some of the application management concerns, including image registries and integration with Git. There are a lot of interesting things happening around GitOps. We will also cover Helm and Helm 3, which is the upcoming new version of Helm.

So there are lots of interesting things to cover, so definitely stay tuned to this channel for those upcoming webinars. One more question that came in is about deployment strategies: whether there's extra support for strategies not directly supported by Kubernetes, for example blue-green deployments, etcetera.

So Kubernetes itself, of course, supports a few different deployment strategies, like rolling updates, and it has various controls, including health checks, so you can have probes for liveness and readiness. And then you have ways of managing disruption budgets: how many pods can you afford to have down at any given time for your service? So there are lots of controls within Kubernetes itself, and those are fully supported.
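(A sketch of the native controls just mentioned, a rolling-update strategy with probes plus a pod disruption budget; all names and thresholds are illustrative.)

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during an update
      maxSurge: 1
  selector:
    matchLabels: {app: my-service}
  template:
    metadata:
      labels: {app: my-service}
    spec:
      containers:
      - name: app
        image: nginx
        readinessProbe:
          httpGet: {path: /, port: 80}
        livenessProbe:
          httpGet: {path: /, port: 80}
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-service-pdb
spec:
  minAvailable: 3         # the disruption "budget" for voluntary evictions
  selector:
    matchLabels: {app: my-service}
EOF
```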

We are also looking at Istio, because in terms of more advanced traffic management, routing, and deployment strategies, we feel the right solution is using a service mesh like Istio, which allows traffic splitting. It allows percentage-based routing by tag to different versions of services. So for example, running v1 and v2, and then being able to route internal traffic to, let's say, the new version and existing customers' production traffic to the prior version, and then slowly dialing that up and completing the rollout.
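(A hedged sketch of the Istio traffic splitting described, assuming a DestinationRule already defines the v1 and v2 subsets; names and weights are illustrative.)

```bash
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1
      weight: 90          # keep most production traffic on the prior version
    - destination:
        host: my-service
        subset: v2
      weight: 10          # dial this up as the rollout progresses
EOF
```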

So that's what we're working on with Istio. Paulo, anything you want to add in terms of deployment strategies, and what's available today on Azure?

Paulo Renato: Yeah, a few thoughts on this one. As I mentioned before, AKS is 100 percent upstream APIs, right, which means a lot of what you just mentioned is also applicable here. We do have some of our teams and teammates, and also folks from the open source community, doing an ongoing investigation of Istio on top of AKS, right.

For the most part, as far as I know, it works without issues, right. But the path that we are going down is essentially to support, let's say, Istio and the open source tooling fully, in order to support these types of deployments.

Jim Bugwadia: Okay. Alright. So I think those were all the questions. And definitely, like Paulo also mentioned, feel free to post any time with any other thoughts; we'd be happy to answer more. Also, please post your feedback on this; there is an option for audience feedback, and let us know what else you would like to see us cover. If there are topics that you have in mind, or questions that come up in your enterprise Kubernetes deployment, we'd be happy to get those addressed in a future session.

Thanks everybody and a special thanks to Paulo. Thank you for being our guest today and sharing your —

Paulo Renato: Definitely.

Jim Bugwadia: — keen insights on what you’re seeing with customers.

Paulo Renato: Yeah, thanks for having me and yeah, appreciate the audience as well joining us today.

Jim Bugwadia: Alright. Thanks everybody. Bye bye.

Paulo Renato: Bye bye. Thank you.