Dynamic Resource Management for Application Containers

Nirmata provides multi-cloud container services for enterprise DevOps teams and is built to leverage Infrastructure as a Service (IaaS) providers for dynamic allocation and deallocation of resources. In this post, I will discuss a Nirmata feature that helps you optimize your cloud resource usage: Host Auto-Scaling.

Host Groups and Policies

A core principle in Nirmata is to directly leverage cloud provider constructs, not to hide or abstract away the underlying infrastructure services. Nirmata supports multiple public and private Cloud Providers, and makes it easy to define pools of compute resources using a construct called Host Groups. For example, you can define an OpenStack Cloud Provider and create several Host Groups, each with its own machine template and settings for security, networking, and storage.

In Nirmata, policies are used to map compute, network, and storage resources to applications and services. To auto-scale Hosts, you can define Host Scaling Rules that control how your Host Groups scale up or down based on the runtime characteristics of your services. One nice aspect of this, as with most features of the Nirmata platform, is that it works on any supported cloud provider: AWS, Digital Ocean, vCloud Air, OpenStack, Cisco Metapod, and VMware's vSphere.

Let’s take a look at this in action…

Host Scaling Policy Configuration

To demonstrate this feature, we are going to start with an OpenStack Host Group with only one host instance (a virtual machine). This instance has 3,953 MB of memory available and is not running any containers.

host-scaling-hostgroup-1

To enable auto-scaling on this Host Group, you need to create a Host Scaling Rule. There are three easy steps:

Step 1:
Provide a user-friendly name for the rule and select one or more Host Groups to which this rule will apply. In our case, we select the ‘openstack-hostgroup’.

host-scaling-scaling-rule-2

Step 2:
Next, define the condition that will trigger adding a host to this Host Group.

host-scaling-scaling-rule-3

This rule specifies that a new host must be created when the memory allocated across all the hosts of the Host Group exceeds 80% for more than a minute. Under this condition, Nirmata will keep adding hosts to this Host Group until it reaches a size of 10 hosts.

Since we are starting with only one host with 3,953 MB of memory, a second host will be automatically created when the first host reaches 3,162 MB of memory allocated.
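
As a rough sketch of the rule described above (this is illustrative only, not Nirmata's implementation, and it omits the one-minute duration condition for brevity), the scale-up check boils down to:

    # Illustrative scale-up check for the rule configured above.
    SCALE_UP_THRESHOLD = 0.80   # add a host when allocation exceeds 80%
    MAX_HOSTS = 10              # stop adding hosts at this group size

    def should_scale_up(allocated_mb, total_mb, host_count):
        """Return True when a new host should be added to the Host Group."""
        if host_count >= MAX_HOSTS:
            return False
        return allocated_mb / total_mb > SCALE_UP_THRESHOLD

    # With a single 3,953 MB host, the trigger point is 3,953 * 0.80 ≈ 3,162 MB.
    print(should_scale_up(allocated_mb=3200, total_mb=3953, host_count=1))  # True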

Step 3:
Finally, define when hosts can be removed from the Host Group(s).

host-scaling-scaling-rule-4

This condition specifies that an empty host can be removed from this Host Group when the total memory allocated is below 60% for more than a minute.

The percentage of memory allocated in the Host Group is computed without taking into account the host that we want to remove. For instance, let’s assume we have a Host Group with the following hosts:

  • Host 1: total memory=1000 MB, allocated memory=700 MB
  • Host 2: total memory=1000 MB, allocated memory=700 MB
  • Host 3: total memory=1000 MB, allocated memory=0 MB

Host 3 cannot be removed because the Host Group memory usage without Host 3 would be (700+700)*100/(1000+1000) = 70%, which is still above the 60% threshold.
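
The same check can be sketched in a few lines of Python (again, purely an illustration of the rule, not how the platform implements it). Note that the candidate host is excluded from the totals:

    # Illustrative scale-down check: the candidate host is excluded from the totals.
    SCALE_DOWN_THRESHOLD = 0.60

    def can_remove_host(hosts, candidate):
        """hosts: list of dicts with 'total_mb' and 'allocated_mb' keys."""
        if candidate["allocated_mb"] > 0:
            return False  # only empty hosts are candidates for removal
        remaining = [h for h in hosts if h is not candidate]
        allocated = sum(h["allocated_mb"] for h in remaining)
        total = sum(h["total_mb"] for h in remaining)
        return allocated / total < SCALE_DOWN_THRESHOLD

    hosts = [
        {"total_mb": 1000, "allocated_mb": 700},  # Host 1
        {"total_mb": 1000, "allocated_mb": 700},  # Host 2
        {"total_mb": 1000, "allocated_mb": 0},    # Host 3 (empty)
    ]
    print(can_remove_host(hosts, hosts[2]))  # False: 1400/2000 = 70%, above 60%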

Scaling Up

Now, let’s see how this Host Group will scale up automatically when we deploy and scale an application. To illustrate this behavior, we are going to use a simple ‘hello world’ application composed of only one service. Here is the blueprint of this application:

host-scaling-helloworld-blueprint-5

The two pieces of information that are important for our test are:

  1. The size of the container required to run this service is 256 MB
  2. We use a dynamically allocated host port, so multiple service instances can run on the same host.
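
For readers who think in plain Docker terms, these two settings correspond roughly to the following (a hypothetical sketch using the Docker Python SDK; the image name is made up, and this is not how Nirmata itself deploys the service):

    import docker

    client = docker.from_env()

    # 256 MB memory limit, and a dynamically allocated host port: mapping the
    # container port to None lets Docker pick a free ephemeral port, so several
    # instances can run side by side on the same host.
    client.containers.run(
        "hello-world-web",          # hypothetical image name
        detach=True,
        mem_limit="256m",
        ports={"8080/tcp": None},
    )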

We can now deploy this application on our OpenStack Host Group:

host-scaling-create-environment-6

After deploying the hello world application, we can take a look at the Host Group view to observe how much memory is now in use:

host-scaling-hostgroup-7

We can see that we are running 1 container and that this container occupies 6.5% of the Host Group memory (256 MB of 3,953 MB).

Going back to the environment running our application, we can now start scaling the application. Let’s add 11 more instances of the hello-world service:

host-scaling-scale-up-hello-8

Eleven service instances are added to the environment, bringing the total to twelve:

host-scaling-environment-9

The Host Group view now shows the increased memory allocation:

host-scaling-hostgroup-10

Since we haven’t crossed the 80% threshold, no new host is created yet. However, if we add one more hello-world service instance to the environment, a new host is added to the Host Group:

host-scaling-hostgroup-11
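
A quick back-of-the-envelope check (assuming each instance keeps its 256 MB reservation on the original 3,953 MB host) shows why the 13th instance is the one that crosses the threshold:

    # Memory allocated on the original host as instances are added.
    HOST_MB = 3953
    CONTAINER_MB = 256

    for count in (12, 13):
        allocated = count * CONTAINER_MB
        print(count, allocated, f"{allocated / HOST_MB:.1%}")
    # 12 instances -> 3072 MB, ~77.7%: below 80%, no new host
    # 13 instances -> 3328 MB, ~84.2%: above 80%, a second host is created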

Scaling Down

In order to understand when an empty host can be removed from a Host Group, there are two important points to consider:

  1. Container removal strategy
  2. Memory usage computation

Container Removal Strategy

When you are scaling down your application, Nirmata must decide which containers should be removed first. The strategy depends on whether or not you have configured a host scaling rule.

If you haven’t configured a host scaling rule, Nirmata will pick a container running on the most loaded host. This strategy guarantees that your services stay evenly distributed across your hosts.

If you have configured a host scaling rule, Nirmata will infer that you favor minimizing your cloud resource usage, and it will therefore pick a container running on the least loaded host. This strategy is applied to free up a host as soon as possible.

Let’s consider the following example:

host-scaling-hostgroup-12

If we disable the host scaling rule and then scale the application down from 16 to 15 service instances, Nirmata will remove one container from the host running 15 containers (IP=192.168.1.96).

If we perform the same operation with the host scaling rule enabled, Nirmata will remove one container from the host running 2 containers (IP=192.168.1.101).
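
Expressed as a small sketch (purely illustrative; not Nirmata's code), the selection logic behaves like this:

    # Pick the host from which a container will be removed when scaling down.
    def pick_host_for_removal(hosts, scaling_rule_configured):
        """hosts: list of dicts with 'ip' and 'containers' (running count)."""
        candidates = [h for h in hosts if h["containers"] > 0]
        if scaling_rule_configured:
            # Favor emptying a host quickly: remove from the least loaded host.
            return min(candidates, key=lambda h: h["containers"])
        # Keep the load balanced: remove from the most loaded host.
        return max(candidates, key=lambda h: h["containers"])

    hosts = [{"ip": "192.168.1.96", "containers": 15},
             {"ip": "192.168.1.101", "containers": 2}]
    print(pick_host_for_removal(hosts, False)["ip"])  # 192.168.1.96
    print(pick_host_for_removal(hosts, True)["ip"])   # 192.168.1.101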

Memory Usage Computation

The Host Group usage is not computed from the current allocation alone, but from the allocation that would result from removing an empty host. Let’s consider the following example:

host-scaling-hostgroup-13

We can see that there is an empty host and the current memory usage is 42%. Since we configured a host scaling rule with a threshold of 60%, one could expect the host to be removed. This is not the case, because removing the host would cause the Host Group usage to jump back to 84%, which is higher than our scale-down threshold of 60%. In this particular case, we have to scale the application down to 9 containers to see the host being removed.
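
The numbers line up with the earlier can_remove_host sketch, assuming two 3,953 MB hosts and 13 containers of 256 MB all placed on the first host:

    allocated = 13 * 256          # 3328 MB on the first host
    print(allocated / (2 * 3953)) # ~0.42 -> 42% across both hosts
    print(allocated / 3953)       # ~0.84 -> 84% once the empty host is excluded
    print((9 * 256) / 3953)       # ~0.58 -> below 60%, so the empty host can be removed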

host-scaling-hostgroup-14

What’s Next?

Host auto-scaling provides a powerful, easy-to-use, and fully automated way of optimizing your cloud resource usage. We designed and implemented this feature working closely with our customers. However, there is more to come: we will soon be releasing container-level auto-scaling as well as various container re-balancing strategies.

We would love to hear your feedback on these features, or anything else that could help you manage application containers across clouds.

Regards,

Damien Toledo



Sign up for a free trial!

For more updates and news, follow us on LinkedIn and Twitter.