Deploying application containers using Azure Resource Manager and Nirmata
A core philosophy at Nirmata has been to integrate with cloud providers without hiding the value provided by the infrastructure-as-a-service layer. For example, on AWS we integrate with constructs like auto scaling groups, spot fleet requests, and launch configurations. This approach provides significant flexibility in configuring the underlying resources used to deploy applications. When exploring the integration of Nirmata with Microsoft Azure, we identified Azure Resource Manager (ARM) as a powerful construct for grouping various resources into a logical unit, which can be used not only to manage infrastructure resources but also to deploy applications. In this post, I will discuss how Nirmata integrates with Azure Resource Manager to provide maximum flexibility when deploying containerized applications.
Azure Resource Manager
Azure Resource Manager enables you to group your resources so that you can deploy, update, or delete all the resources in a single, coordinated operation. You can use templates for deployment and the template can work for different environments such as testing, staging, and production. Resource Manager also provides security, auditing, and tagging features to help manage resources after deployment.
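As a sketch of template-driven deployment, the same ARM template can be deployed to different environments by swapping parameter values from the Azure CLI. The template file name, parameter name, and resource group below are hypothetical placeholders, and the command assumes the resource group already exists:

```shell
# Deploy an ARM template into an existing resource group.
# "azuredeploy.json" and "environmentName" stand in for your own
# template and parameters; reuse the template for testing, staging,
# and production by changing the parameter value.
az deployment group create \
  --resource-group my-nirmata-rg \
  --template-file azuredeploy.json \
  --parameters environmentName=staging
```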
Creating Azure Resource Group
To create a new resource group, log in to the Azure console, select Resource Groups, and add a new resource group.
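The same step can be done from the Azure CLI; the group name and region here are examples:

```shell
# Create a new resource group (name and location are placeholders)
az group create --name my-nirmata-rg --location eastus
```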
Once the resource group is created, you can create:
- Storage accounts: A storage account provides a unique namespace to store and access your Azure Storage data objects. There are two types of storage accounts. A general-purpose storage account gives you access to Azure Storage services such as Tables, Queues, Files, Blobs and Azure virtual machine disks under a single account. A Blob storage account is a specialized storage account for storing your unstructured data as blobs (objects) in Azure Storage. More details on storage accounts can be found here.
- Virtual networks: A virtual network (VNet) is a representation of your own isolated network in the cloud. You can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. More details on virtual networks can be found here.
- Security groups: A network security group (NSG) contains a list of access control (ACL) rules that allow or deny network traffic to your VM instances in a virtual network. You can find more details on security groups here.
Once you have set up your resource group, you are ready to use it in Nirmata.
Creating Cloud Provider & Host Group
Before using Azure as a cloud provider in Nirmata, you need to set up Nirmata as an application in Azure Active Directory. Detailed steps for this can be found here.
Once you have added Nirmata to Azure AD, you should have a Client ID, Client Secret, Tenant ID, and Subscription ID to use when creating a Cloud Provider in Nirmata.
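One way to obtain these values is to register a service principal from the Azure CLI; the principal name below is an example:

```shell
# Creates an Azure AD service principal and prints the Client ID (appId),
# Client Secret (password), and Tenant ID (tenant) in its output.
az ad sp create-for-rbac --name nirmata-integration

# The Subscription ID completes the four values Nirmata asks for.
az account show --query id --output tsv
```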
Before using a resource group in Nirmata, you need to grant the Nirmata user access to your resource group using the Access Control (IAM) settings.
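Scoping that access from the CLI might look like the following sketch; the placeholders in angle brackets are values from your own subscription and the service principal created for Nirmata:

```shell
# Grant the Nirmata service principal Contributor access,
# scoped to a single resource group rather than the whole subscription.
az role assignment create \
  --assignee <appId> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/my-nirmata-rg
```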
Now you can create a Host Group in Nirmata. When creating a Host Group, you need to select the Resource Group you would like to use for it. Once you select the resource group, you can select the image, instance type, virtual network, security group, and storage account. You can also specify whether to attach a public IP address to instances in this host group, and set up a default username and password. Once the host group is created, you are ready to deploy your applications.
Deploying an application using Nirmata is extremely easy. After creating the host group, you need to create a Resource Selection Policy to map an Environment Type to the Host Group and then create an Environment of that type for your Application.
Here is a short video showing how to deploy containerized applications on Azure using Nirmata:
As you can see, with Nirmata it is extremely easy to deploy and manage containerized applications on any cloud. Nirmata integrates natively with cloud provider-specific constructs and provides tremendous control over how infrastructure is configured, while presenting a uniform interface for deploying and managing applications on any cloud. This not only lowers the learning curve for your DevOps teams but also eliminates cloud provider lock-in.