Distributed Application Management

VoltStack Distributed Application Management service enables customers to deploy and operate their applications across distributed, heterogeneous infrastructure such as private and public clouds, network sites, and edge sites. VoltStack’s key differentiator is enabling enterprises to operate their distributed applications as a fleet. Operating as a fleet means that an enterprise declaratively defines its intent once, and the VoltStack service then takes over responsibility for ensuring the affected sites are aligned with that intent. Examples of intent include the locations where applications are deployed, the application’s software version, and the resources requested (CPU, memory). VoltStack Distributed Application Management service enables enterprises to deploy heterogeneous workloads, such as containers and virtual machines, as a fleet.

image5
Figure 1


Distributed Application Management Features

VoltStack Distributed Application Management service includes the following key features:

  • Namespaces
  • Fleet Segmentation
  • Virtual Kubernetes (vK8s)
  • Bring-your-own Registry

Namespaces

Each customer is allocated one or more tenants. A tenant is the owner of a given set of infrastructure sites and their associated configuration. Every tenant can have multiple namespaces, each representing an administrative domain such as a line of business, a department, an application team, or even an individual developer. Using namespaces, each administrative domain can discretely deploy and operate its applications on the same set of infrastructure sites, without impacting applications in other namespaces.

Fleet Segmentation

In every namespace, users can sub-segment their fleet using labels and virtual sites. Users often have a mix of sites in the fleet, such as dev, test, and production compute sites, and organizational policies may require that certain workloads be deployed only to a specific segment of sites (for example, dev workloads only on dev sites). Operators can tag such sites with flexible labels, each made up of two parts, a key and a value, for example:

  • region=US-west, region=JP-north
  • model-year=(2015, 2016, 2017, 2018)
  • function=(spray, weld)

Once the sites are tagged, operators can define a virtual site using conditions on label keys and values. These virtual sites are sub-segments that can be used for application deployment, service policies, network policies, and so on. The figure below illustrates this using a robotics vertical as an example.

image2
Figure 2
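Expressed as a YAML sketch, a virtual site for the robotics example above could select sites by label conditions. Note that the object and field names below are illustrative assumptions, not the exact VoltStack virtual-site schema:

```yaml
# Illustrative sketch only -- field names are assumptions, not the
# exact VoltStack virtual-site schema.
metadata:
  name: us-west-spray-robots     # hypothetical virtual-site name
  namespace: robotics            # hypothetical namespace
spec:
  site_selector:
    # Select every site labeled as a US-west spray robot.
    expressions:
      - "region == US-west"
      - "function == spray"
```

Because the selector is evaluated against labels rather than a fixed site list, newly added sites that carry matching labels join the segment automatically.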

Virtual Kubernetes (vK8s)

Virtual Kubernetes (vK8s) provides users the ability to manage applications across their fleet using Kubernetes APIs. An enterprise operator deploys applications using standard Kubernetes methods, with a deployment manifest file, and indicates the segment of sites (or the entire fleet) where the application needs to be deployed. The VoltStack virtual Kubernetes (vK8s) service then takes over responsibility for deploying the application to every site in (or segment of) the fleet. If there are failures during application deployment, such as connectivity or infrastructure failures, VoltStack virtual Kubernetes keeps retrying the deployment, following the Kubernetes paradigm of an eventually consistent model.

The steps to deploy an application using vK8s can be visualized in the figure below and described next.

image3
Figure 3

  1. There is one vK8s object per namespace. Each vK8s object has a kubeconfig file that can be downloaded from the VoltConsole.
  2. The user associates the vK8s object with one or more virtual sites.
  3. Users can then deploy applications to the vK8s using their regular deployment manifest files.
  4. Users can have multiple deployments per vK8s object. Users can optionally select virtual sites using annotations in their deployment manifest file.
  5. Users in two different namespaces can deploy different applications onto the same sites. Each namespace can sub-segment its fleet per its own requirements and deploy applications on different sub-segments of the fleet. This can be visualized as follows.

image1
Figure 4
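Steps 3 and 4 above can be sketched with an ordinary Kubernetes Deployment manifest. The manifest structure is standard Kubernetes; the annotation key used to select virtual sites, along with all names and image paths, are illustrative assumptions rather than the exact VoltStack API:

```yaml
# Standard Kubernetes Deployment; the virtual-site annotation key and
# all names/paths are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-collector
  annotations:
    example.io/virtual-sites: robotics/us-west-spray-robots  # assumed key
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sensor-collector
  template:
    metadata:
      labels:
        app: sensor-collector
    spec:
      containers:
        - name: sensor-collector
          image: registry.example.com/sensor-collector:v1
```

With the kubeconfig downloaded in step 1, such a manifest would be applied with the usual tooling, for example `kubectl --kubeconfig vk8s-kubeconfig.yaml apply -f deployment.yaml`.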

  6. There could be scenarios where the user wishes to test a new version of their application on a few test sites. They can achieve this on the same vK8s object using the override command, as follows.

    1. The user associates their vK8s object with a virtual site representing a segment of their fleet and deploys deployment-3, as shown in the figure below.
    2. The user creates a virtual site for test-sites that includes specific sites such as Tokyo, Site-1, and Site-4, as shown.
    3. The user then wishes to test an updated version of the application, deployment-4, specifically on test-sites. The user can do so using the override command.

image4
Figure 5
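The test-sites virtual site created in step 2 groups specific sites by name rather than by label. Under the same assumed (not actual) schema as the earlier virtual-site sketch, it might look like:

```yaml
# Illustrative sketch -- field names are assumptions, not the exact
# VoltStack schema. Groups the three named test sites from step 2.
metadata:
  name: test-sites
  namespace: robotics
spec:
  site_selector:
    expressions:
      - "site in (Tokyo, Site-1, Site-4)"
```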

Bring-your-own Registry

The customer has full control over where they store their application artifacts, i.e., container images. They can choose popular tools such as JFrog Artifactory or Docker Hub, or public cloud registries such as Azure Container Registry. The path to the image is indicated in the deployment manifest file, along with the credentials needed to pull the image.
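In standard Kubernetes terms, a private registry is referenced by the image path in the container spec plus an image-pull secret. The snippet below is plain Kubernetes; the registry host, image path, and secret name are examples:

```yaml
# Standard Kubernetes pattern for pulling from a private registry;
# registry host, image path, and secret name are examples.
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config with registry credentials>
---
# Pod template fragment referencing the private image and the pull secret.
spec:
  containers:
    - name: app
      image: myregistry.azurecr.io/team/app:1.0.0
  imagePullSecrets:
    - name: registry-pull-secret
```

The pull secret can also be created directly with `kubectl create secret docker-registry`, which builds the Docker config JSON from supplied server, username, and password values.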


Concepts

The following concepts are used by VoltStack’s Distributed Application Management features:


How-to’s

The following How-to guides are examples of using VoltStack’s Distributed Application Management features: