Multi-Cloud Networking

Objective

This guide provides instructions on how to seamlessly connect and secure applications between multiple cloud networks using VoltMesh and VoltConsole.

The steps to connect and secure applications between multiple cloud networks are:

Figure: Multi-Cloud Networking and Security Setup Steps

The following image shows the topology of the example used in this document:

Figure: Multi-Cloud Networking and Security Sample Topology

Using the instructions provided in this guide, you can set up an Amazon Virtual Private Cloud (Amazon VPC) and a data center cloud gateway, deploy your web application in your AWS VPC, set up secure networking between the two clouds, configure load balancing, secure the application, and set up end-to-end monitoring.

The example shown in this guide deploys secure gateways on the AWS VPC and the data center cloud. In the AWS VPC, it deploys an application called Hipster Webapp in two clusters, each representing a different environment: development and staging. The application consists of the following services:

  • frontend
  • cartservice
  • productcatalogservice
  • currencyservice
  • paymentservice
  • shippingservice
  • emailservice
  • checkoutservice
  • recommendationservice
  • adservice
  • cache

Prerequisites

  • VoltConsole SaaS account.

    Note: If you do not have an account, see Create a VES Account.

  • Amazon Web Services (AWS) account.

    Note: This is required to deploy a Volterra site.

  • Private cloud environment (data center) with network connectivity from the hardware to the internet and to the top-of-rack (TOR) switch.

    Note: The management IP address for your hardware is required.

  • Volterra vesctl utility.

    Note: See vesctl for more information.

  • Docker.
  • Self-signed or CA-signed certificate for your application domain.

Configuration

The use case provided in this guide sets up Volterra sites as gateways for the ingress and egress traffic of the two cloud networks. The example web application has a frontend service to which all user requests are sent; the frontend redirects requests to the other services accordingly. The data center gateway site runs on physical hardware in an on-premises data center. The data center also has a TOR switch behind which VM-based hosts sit on two different subnets.

The following actions outline the activities in setting up secure networking between the AWS VPC and private data center cloud.

  • The two cloud environments are connected using the Volterra global network and secured using network policies so that development services on the development EKS cluster can communicate only with development services on the private DC. Similarly, staging services on the staging cluster can communicate only with staging services on the DC.
  • The frontend service of the application needs to be externally available. Therefore, two HTTPS load balancers are created, one for each cluster.
  • The load balancer TLS configuration is secured by applying Volterra Blindfold encryption to the TLS private key.
  • Security policies are configured to selectively restrict ingress traffic using IP prefix sets.
  • A WAF configuration is applied to secure the externally available load balancer VIPs. The load balancer is also secured with a JavaScript challenge to protect against bots.

Note: Ensure that you keep the Amazon Elastic IP VIPs ready for later use in configuration.

Step 1: Deploy Site (AWS)

Perform the following steps to deploy Volterra sites and web application in your VPCs:

Step 1.1: Download the Volterra quickstart deployment script.

Download the latest quickstart utility:

docker pull volterraio/volt-terraform

Download the deployment script:

docker run --rm -v $(pwd):/opt/bin:rw docker.io/volterraio/volt-terraform:latest cp /deploy-terraform.sh /opt/bin
Step 1.2: Prepare the input variables file for the terraform deployment. The following example shows sample entries for the input variables:
{
    "vpc_cidr": "192.168.32.0/22",
    "deployment": "hipster-aws-mcns",
    "fleet_label": "hipster-aws-mcns",
    "access_key": "<aws_access_key>",
    "secret_key": "<aws_secret_key>",
    "region": "<your aws region>"
    "api_p12_file": "<path-to-api-credentials>",
    "api_url": "https://<tenant>.console.ves.volterra.io/api",
    "machine_public_key": "<public-key>"
}

Note: Download the API credentials in VoltConsole from the IAM -> Credentials -> My Credentials option as per the instructions in the Generate API Certificate document. The credentials are downloaded as a file with the .p12 extension. Use the export VES_P12_PASSWORD=<api credentials password> command to set the VES_P12_PASSWORD environment variable.
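
The following is a minimal sketch of preparing vesctl for the later steps, assuming the server-urls and p12-bundle keys documented for the ~/.vesconfig file; substitute your own tenant and credentials path:

export VES_P12_PASSWORD=<api credentials password>
# Point vesctl at your tenant API endpoint and the downloaded .p12 bundle:
cat > ~/.vesconfig <<EOF
server-urls: https://<tenant>.console.ves.volterra.io/api
p12-bundle: <path-to-api-credentials>.p12
EOF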

Step 1.3: Deploy the VPC with Volterra sites and the K8s clusters using the deployment script.
./deploy-terraform.sh apply -p aws -i <tfvars> -tn self-serve/secure_kgw

The deployment performs the following:

  • Creates AWS infrastructure such as VPC, subnets, and route-tables
  • Deploys AMI instance of Volterra node into the created VPC
  • Performs automatic site registration (approval)
  • Creates Volterra fleet and network connector objects with fleet label as hipster-aws-mcns
  • Creates the hipster-dev-cluster and hipster-staging-cluster
  • Creates the hipster-dev and hipster-stage kubernetes namespaces
  • Deploys the Hipster shop web application in the clusters

Note:

  • The deployment takes approximately 10 to 15 minutes to complete.
  • The deployed Volterra site acts as the Secure Kubernetes Gateway for the web application services.
Step 1.4: Verify the site creation and status from VoltConsole.
Step 1.4.1 Verify that the site and fleet are created.

Log into the VoltConsole and click Site Map on the left configuration menu. Click the Filter button on the map and select Labels in the loaded filter window. Enter the label key ves.io/fleet and select the label value you included in the variables file. This example uses the label value hipster-aws-mcns. Click Apply to verify that the site is created.

Figure: Site Verification Using Fleet Label

Step 1.4.2 Verify the site status using site monitoring functionality.
  • Place your cursor on the site shown in the map to display a brief site overview. Select the site to expand its short status view on the left side of the map and click Explore Site to load the site dashboard. The site dashboard gives you a snapshot of the overall site status such as health score, metrics, and RE connectivity.
  • Click the Site Status tab and verify that the Volterra Software Status and Volterra OS Status sections show Successful for the Last Upgrade field. Also check that the IPSEC Status is up in the RE Connectivity section.
Step 1.5: Download the kubeconfig files of the created K8s clusters.

Download the kubeconfigs and save them to files using the following commands:


./deploy-terraform.sh output -n hipster-aws-mcns dev_cluster_kubeconfig > hipster-dev-kubeconfig

./deploy-terraform.sh output -n hipster-aws-mcns stage_cluster_kubeconfig > hipster-stage-kubeconfig
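
Optionally, verify access to the clusters with the downloaded kubeconfig files. This is a minimal check, assuming kubectl is installed locally; the namespaces are the ones created by the deployment:

kubectl --kubeconfig hipster-dev-kubeconfig get pods -n hipster-dev
kubectl --kubeconfig hipster-stage-kubeconfig get pods -n hipster-stage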
Step 1.6: Create a secret policy.

The secret policy is used for encrypting the kubeconfig files and the private keys of the certificates using Volterra Blindfold.

Log into the VoltConsole and navigate to Security -> Secret Policies. Click Add secret policy and enter the following configuration:

  • Enter a name in the Name field. This example uses the name mcns-secret-policy.
  • Select First Rule Match for the Rule Combining Algorithm field.
  • Select the Allow Volterra checkbox. This allows Volterra infrastructure services to decrypt the secret.
  • Click Add secret policy to complete secret policy creation.

Figure: Create Secret Policy

Step 1.7: Obtain a public key and store the output to a file.

The public key is used for encrypting the kubeconfig files and the private keys of the certificates using Volterra Blindfold.

Note: The public key is part of the Volterra secret management key and is needed while performing the secret encryption.

vesctl request secrets get-public-key > hipster-co-public-key
Step 1.8: Obtain a policy document for the secret policy created in Step 1.6 and store the output to a file.

The policy document is used for encrypting the kubeconfig files and the private keys of the certificates using Volterra Blindfold.

Note: The policy document contains information about all the rules in the secret policy and policy_id.

vesctl request secrets get-policy-document --namespace system --name mcns-secret-policy > mcns-secret-policy-doc
Step 1.9: Encrypt the kubeconfig files using the Volterra Blindfold.

Use the public key and policy document created in the previous steps. Store the output to a file.

Note: The encrypted bytes can only be decrypted by the users and components defined in the policy document.

For the dev cluster:

vesctl request secrets encrypt --policy-document mcns-secret-policy-doc --public-key hipster-co-public-key hipster-dev-kubeconfig > hipster-dev-bf-secret

For the stage cluster:

vesctl request secrets encrypt --policy-document mcns-secret-policy-doc --public-key hipster-co-public-key hipster-stage-kubeconfig > hipster-stage-bf-secret
Step 1.10: Go back to VoltConsole and start discovery object creation.

Select Manage -> Site Management. Select Discovery and click Add discovery. Enter the configuration as per the following guidelines:

  • Enter a name in the Name field.
  • Select Site for the Where field.
  • Click Select ref, select hipster-aws-mcns as the site, and click Select ref.
  • Select Site Local Inside Network for the Network Type field.
  • Select Kubernetes for the Type field.
  • Select K8s for the Discovery Service Access Information field and select Kubeconfig for the Oneof field.

Figure: Discovery Object Configuration

Step 1.11: Perform configuration for the Kubeconfig.

Click Kubeconfig and enter the configuration as per the following guidelines:

  • Select Blindfold for the Secret info field.
  • Enter the encrypted secret in the Location field. Use the secret from the hipster-dev-bf-secret file generated in Step 1.9.

    Note: Use cat <filename> to copy the secret.

  • Select EncodingNone for the Secret Encoding field.

Figure: Discovery Object Kubeconfig Setting

Step 1.12: Complete creating the discovery object.

Select Apply and then Add discovery to create the discovery object.

Step 1.13: Repeat from Step 1.10 to Step 1.12 for the second K8s cluster.

Note: Create the second discovery object with the same site reference (hipster-aws-mcns) and the Blindfold secret from the hipster-stage-bf-secret file.


Step 2: Deploy Site (Private DC)

Deploying a site in your private data center consists of downloading the Volterra hardware site image, installing the gateway site in the data center, configuring network settings, and creating a fleet for the data center cloud.

Note: Refer to the Prerequisites chapter for data center site deployment prerequisites.

Perform the following steps to deploy the gateway site in the data center:

Step 2.1: Create site token.
  • Log into the VoltConsole and select Manage -> Site Management. Select Site Tokens and click Add site token.

    Figure: Create site token

  • Enter a name for your site token and click Add site token to create the token. Note down the token value (UID) for use during site installation.
Step 2.2: Download and install the Volterra hardware image on your physical hardware.
Step 2.3: Perform initial configuration and registration for your site.
  • Log into your hardware terminal using ssh.

    Note: Use the temporary password provided in the Hardware Installation guide for the initial configuration.

  • Press TAB, select configure on the shell menu, and press ENTER to set initial configuration.
  • Enter the site token and other optional parameters such as site name, hostname, etc.
  • Type Yes to confirm the configuration.

Figure: Hardware Site Initial Configuration

  • Log into VoltConsole and visit Manage -> Site Management -> Registrations. Click Pending Registrations tab and approve registration for your site.
Step 2.4: Create an internet-facing virtual network for your site.
  • Select Manage in the configuration menu. Select Virtual Networks under the Networking option. Click Add virtual network.
  • Enter a name and select Site Local Network for the Network Type field.
  • Click Add subnet. Enter the IP prefix of your hardware management network in the Prefix field. Enter the prefix length for your subnet in the Prefix Length field. Click Add subnet to apply the subnet to the virtual network.
  • Click Add virtual network to complete creating the network.

Figure: Hardware Site Network Configuration

Step 2.5: Create an internet-facing network interface for your site.
  • Select Manage in the configuration menu. Select Network Interfaces under the Networking option. Click Add network interface.
  • Enter a name for the interface.
  • Select Ethernet for the Type field and eth0 for the Device Name field.
  • Click Select virtual network and select the virtual network created in the previous step. Click Select virtual network again to add the selected virtual network.
  • Select Disable for the Enable DHCP Client field.
  • Select Disable DHCP Server for the Enable DHCP Server field.
  • Click Add network interface to complete creating the interface.

Figure: Hardware Site Network Interface Configuration

Step 2.6: Configure BGP for site-to-TOR connectivity.
  • Select Manage from the configuration menu and select BGP under Networking. Click Add a BGP object.
  • Enter a name.
  • Enter the ASN for the data center site. This example uses 65520 for the dc-cloud-gw-01 site.
  • Select Router ID type as From Interface.
    Figure: BGP Current Site ASN and Router ID Configuration
  • Click Add bgp peer and enter the BGP peer details. These are the details for your TOR:

    • Enter your TOR's BGP ASN as 65510 and select From IP Address for the peer address type.
    • Enter the IP address of your TOR.

    Figure: BGP Peer Configuration

  • Select data center cloud gateway site as the site where the BGP is to be installed.
  • Select network type as Site Local Network. Click Select network interface and select the network interface configured in the previous step.

    Figure: BGP Site and Network Configuration

  • Click Add BGP to complete creating the BGP configuration.
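
The TOR must also be configured to peer back with the gateway site. The following is an illustrative sketch only, assuming an FRRouting-based TOR; the neighbor address 172.16.10.2 is the outside IP address of the DC cloud gateway used later in this guide:

router bgp 65510
 ! Peer with the Volterra DC gateway site (ASN 65520):
 neighbor 172.16.10.2 remote-as 65520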
Step 2.7: Configure fleet for the data center cloud.
  • Select Manage from the configuration menu and select Fleet under Site Management. Click Add fleet.
  • Enter a name for this fleet. This example uses the name dc-cloud-gw.
  • Enter a fleet label value. This example uses the label dc-cloud-gw.
  • Click Add Device and enter the device configuration as per the following guidelines:

    • Enter eth0 for the device name.
    • Select Owner VER as the device owner.
    • Select Networking Device for the Device Instance field.
    • Click Select interface and select the interface you created in the previous step.
    • Select Outside Interface for the Use field.
    • Click Add device to add the device to the fleet configuration.
  • Click Add fleet to complete fleet configuration and create the fleet.

    Figure: Data Center Fleet Configuration

Step 2.8: Add the data center gateway site to the fleet.
  • Select Sites -> Site List from the configuration menu and find your data center site. Click ... -> Edit to open the site edit form.
  • Click in the Labels field and select ves.io/fleet as the label. Select the value as the fleet label you created in the previous step.

    Figure: Addition of Data Center Gateway Site to Fleet

Step 2.9: Verify the site and fleet using the VoltConsole site monitoring.
  • Click Site Map on the left configuration menu. Click the Filter button on the map and select Labels in the loaded filter window. Enter the label key ves.io/fleet and select the label of the fleet you created.
  • Place your cursor on the site shown in the map to display a brief site overview. Select the site to expand its short status view on the left side of the map and click Explore Site to load the site dashboard. The site dashboard gives you a snapshot of the overall site status such as health score, metrics, and RE connectivity.
  • Click the Site Status tab and verify that the Volterra Software Status and Volterra OS Status sections show Successful for the Last Upgrade field. Also check that the IPSEC Status is up in the RE Connectivity section.

Step 3: Connect & Secure Networks

Securely connecting the networks includes connecting the inside networks of the two cloud environments and adding security policies. The policies ensure that services on the development subnet of the AWS VPC can communicate only with services on the development subnet of the private data center. A similar policy is configured for services on the staging subnet.

Perform the following to connect and secure the two cloud networks:

Step 3.1: Create a global network.
  • Log into the VoltConsole, select Manage from the configuration menu, and select Networking -> Virtual Network from the options.
  • Click Add virtual network and enter a name. This example sets the name as mcns-global-network.
  • Select the network type as Global Network.
  • Click Add virtual network to complete the configuration.
Step 3.2: Create a network connector to connect the AWS VPC to the global network.
  • Select Manage from the configuration menu and select Networking -> Network Connectors in the options.
  • Click Add network connector and enter a name. This example sets the mcns-aws-global name.
  • Select Dynamic Gateway Without Snat for the Network Connector Type field.
  • Select Global Network for the Outside Virtual Network Type field.
  • Click Select outside network and select the global network created in the previous step.
  • Select Site Local Inside Network for the Inside Virtual Network Type field.
  • Click Select inside network and select the inside network created in chapter 1. This example uses hipster-aws-mcns-sli.
  • Select No Proxy for the Proxy Type field.
  • Click Add network connector to complete the configuration.

Figure: Network Connector for AWS VPC to Global Network

Step 3.3: Create a network connector to connect the data center network to the global network.
  • Select Manage from the configuration menu and select Networking -> Network Connectors in the options.
  • Click Add network connector and enter a name. This example sets the mcns-dc-gw-global name.
  • Select Dynamic Gateway Without Snat for the Network Connector Type field.
  • Select Global Network for the Outside Virtual Network Type field.
  • Click Select outside network and select the created global network (mcns-global-network).
  • Select Site Local Network for the Inside Virtual Network Type field.
  • Click Select inside network and select the inside network created in chapter 2. This example uses dc-cloud-slo.
  • Select No Proxy for the Proxy Type field.
  • Click Add network connector to complete the configuration.
Step 3.4: Add the network connectors to the fleets of both cloud networks.

Adding the created global network connectors to the fleets of both cloud sites enables connectivity through the Volterra global network.

  • Select Manage from the configuration menu. Select Site Management -> Fleet.
  • Find the AWS VPC fleet created in chapter 1. Click ... -> Edit to open the fleet edit form.
  • Click Select network connector and add the network connector connected to the global network. In this example, it is the mcns-aws-global network connector.
  • Click Save changes.
  • Repeat the above steps for the fleet of the data center cloud and the data center global network.
Step 3.5: Add inbound rules to the EKS worker node security group to allow the data center subnets.
  • Log into your EC2 console and select Instances on the left menu. Search for your EC2 instance using the deployment name you specified in the variables file in the Step 1 chapter.
  • Select the development cluster and select Security Groups in the left menu. Click Inbound rules tab and click Edit inbound rules.
    Figure: EC2 Security Groups Configuration
  • Add rules to allow all TCP traffic from the data center subnets.
    Figure: EC2 Inbound Rules Creation
  • Repeat the above steps for the staging cluster.
  • Verify the connectivity to the clusters from the data center networks.

    Note: Use the ping command to test connectivity to the Amazon Elastic IP VIP, as shown below.
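
    For example, from a host on a data center subnet (the address below is a placeholder for your Elastic IP):

    ping -c 4 <amazon-elastic-ip>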

Step 3.6: Set up network policies for the cloud networks

Create the policies such that the development subnet can communicate only with the development subnet, and the staging subnet only with the staging subnet.

Step 3.6.1: Create network policy rules allowing communication with the DC subnets.
  • Select Security from the configuration menu and select Network Policy Rules under Network Security. Click Add network policy rule.
  • Enter a name. This example sets dc-dev-subnet-mcns as the name to indicate that this rule allows communication from the dev subnet of the private DC.
  • Select Allow for the Action field.
  • Select IP Prefix for the Remote Endpoint field.
  • Click Add prefix and enter the prefix of the private dc development subnet. This example uses the 10.200.1.0/24 prefix.
  • Click Add network policy rule to complete rule creation.
    Figure: Network Policy Rule for Data Center Dev Subnet
  • Repeat the previous steps to create another rule for the staging subnet. This example creates the staging subnet rule dc-stage-subnet-mcns with the allow action and prefix 10.200.2.0/24.
    Figure: Network Policy Rules for Data Center Staging Subnet
Step 3.6.2: Create network policy rules allowing communication with the AWS VPC subnets.

Repeat the previous steps to create allow rules for the AWS VPC subnets. This example creates the rules aws-dev-subnet-mcns and aws-stage-subnet-mcns with the prefixes 192.168.33.0/24 and 192.168.34.0/24 respectively.

Step 3.6.3: Create network policies defining the local endpoints and applying the network rules for the data center cloud.
  • Select Security from the configuration menu and select Network Policies under Network Security. Click Add network policy.
  • Enter a name. This example sets dc-dev-subnet-mcns as the name.
  • Select Prefix for the Local Endpoint field.
  • Click Add prefix and enter the local endpoint subnet prefix 10.200.1.0/24 (the dev subnet of the DC).
  • Click Select ingress rule and add the policy rule created to allow traffic from AWS VPC dev subnet. This example adds aws-dev-subnet-mcns.
  • Click Select egress rule and add the policy rule created to allow traffic to AWS VPC dev subnet. This example adds aws-dev-subnet-mcns.
  • Click Add network policy to complete policy creation.
    Figure: Network Policy for Data Center Dev Subnet
  • Repeat the previous steps to create another policy for the staging subnet. This example creates the staging subnet policy dc-stage-subnet-mcns with the prefix 10.200.2.0/24 and aws-stage-subnet-mcns as both the ingress and egress rule.
    Figure: Network Policy for Data Center Staging Subnet
Step 3.6.4: Create network policies defining the local endpoints and applying the network rules for the AWS VPC.

Repeat the previous steps to create policies for the AWS VPC subnets. This example creates the policies aws-dev-subnet-mcns and aws-stage-subnet-mcns with the local endpoint prefixes 192.168.33.0/24 and 192.168.34.0/24 respectively. The dev policy uses dc-dev-subnet-mcns and the staging policy uses dc-stage-subnet-mcns as both the ingress and the egress rule.

Step 3.6.5: Create network policy sets for both cloud networks.
  • Select Security from the configuration menu and select Network Policy Sets under Network Security. Click Add network policy set.
  • Enter a name. This example sets dc-mcns, denoting that the policy set is for the data center.
  • Click Select policy and select the policies created for the data center subnets. This example selects dc-dev-subnet-mcns and dc-stage-subnet-mcns.
  • Click Add network policy set to complete creating the network policy set.
    Figure: Network Policy Set for Data Center Cloud
  • Repeat the previous steps to create a network policy set for the AWS VPC. Select the policies created for the dev and staging subnets of the AWS VPC. This example creates aws-mcns policy set with the aws-dev-subnet-mcns and aws-stage-subnet-mcns policies.
    Figure: Network Policy Set for AWS VPC
Step 3.7: Create network firewalls.
  • Select Security from the configuration menu and select Network Firewall under Network Security. Click Add network firewall.
  • Enter a name for this object. This example sets dc-mcns as name.
  • Click Select network policy set and add the network policy set created for the data center cloud. This example adds the dc-mcns policy set.
  • Click Add network firewall to complete creating the network firewall.

Figure: Data Center Network Firewall

  • Repeat the above steps to create another network firewall for the AWS VPC, selecting the network policy set created for the AWS VPC. This example creates a firewall named aws-mcns with the aws-mcns network policy set.
Step 3.8: Add the created firewalls to the fleets of their respective cloud networks.
  • Select Manage from the configuration menu. Select Fleets under Site Management.
  • Search for your fleet created for the DC. For this example, it is the dc-cloud-gw.
  • Click ... -> Edit to open the fleet edit form.
  • Click Select network firewall and now select the firewall created for the DC. For this example, it is dc-mcns.
  • Click Save changes to apply the firewall to the fleet.
  • Repeat the above steps to edit the fleet of AWS VPC and apply the firewall created for the AWS VPC. For this example, the fleet is hipster-aws-mcns-fleet and the firewall is aws-mcns.

Note: At this point, you can verify that attempts to communicate between the development and staging subnets are blocked in both directions, as in the example below.
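
For example, from a host on the DC development subnet (10.200.1.0/24), where the target addresses are hypothetical hosts on the AWS dev and staging subnets:

ping -c 3 192.168.33.10    # dev to dev: expected to succeed
ping -c 3 192.168.34.10    # dev to staging: expected to be blocked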


Step 4: Connect & Secure Apps

Securely connecting the applications requires configuring load balancers for the services of the web application and applying service policies to secure them.

Perform the following steps to setup secure connectivity to the applications.

Step 4.1 Log into VoltConsole and create Volterra namespaces to manage the EKS clusters separately.
  • Open the namespace dropdown and click Manage namespaces.
  • Click Add namespace and enter a name for your namespace. This example sets the hipster-dev name for the development environment.
  • Click Save to create the namespace.
  • Repeat the above steps to create another namespace for staging environment. This example creates the hipster-stage namespace.
Step 4.2 Create origin pools for the frontend service running on the development EKS cluster.

An origin pool defines the pool of origin servers where your services are present. Origin pools are defined by creating resources such as endpoint, cluster, and health check on the Volterra SaaS.

Step 4.2.1 Create endpoint.

Change to the hipster-dev namespace and select Manage from the configuration menu. Select Endpoints in the options pane and click Add endpoint.

  • Set a name for the endpoint. This example sets frontend as the name.
  • Select Virtual Site for the Where field and select hipster-aws-mcns for the Select ref field.
  • Select Site Local Inside Network for the network type.
  • Select Service Selector Info for Endpoint Specifier field.
  • Select Kubernetes for the Discovery field and Service Name for the Service field.
  • Enter the service name in the <servicename>.<k8s-namespace> format. This example uses frontend.hipster-dev as the service name.
  • Select TCP as the protocol.
  • Enter 80 for the Port field.
  • Click Add endpoint to create endpoint.

Figure: Endpoint Creation
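
Before relying on discovery, you can confirm that the referenced service exists. This is a quick check, assuming kubectl is installed, against the dev cluster kubeconfig downloaded in Step 1.5:

kubectl --kubeconfig hipster-dev-kubeconfig get svc frontend -n hipster-dev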

Step 4.2.2 Create health check.

Select Manage from the configuration menu. Select Health Checks in the options pane and click Add health check.

  • Enter a name in the Name field. This example sets the frontend name.
  • Select HTTP HealthCheck for the Health check field.
  • Enter / for the Path field.
  • Enter 5 and 5 for the Timeout and Interval fields respectively. This sets both the health check timeout and interval to 5 seconds.
  • Enter 3 and 1 for Unhealthy Threshold and Healthy Threshold fields.
  • Click Add health check.

Figure: Health Check Creation

Step 4.2.3 Create cluster.

Select Manage from the configuration menu. Select Clusters in the options pane and click Add cluster.

  • Enter a name in the Name field. This example sets the frontend name.
  • Select the endpoint created in Step 4.2.1 for the Select endpoint field.
  • Select the healthcheck object created in Step 4.2.2 for the Select healthcheck field.
  • Select Round Robin for the LoadBalancer Algorithm field.
  • Click Add cluster.

Figure: Cluster Creation

Step 4.3 Create a route object.

Select Manage -> Routes. Click Add route and enter the configuration as per the following guidelines:

  • Enter a name. This example sets the name as frontend.
  • Click Add match. Select ANY for the HTTP Method field and Regex for the Path Match field. Enter (.*?) for the Regex field and click Add match.
  • Select Destination List for the Route action field and click Add destination. Click Select cluster and select the cluster object created in the previous step. Click Select cluster again and then Add destination to add the cluster.
  • Click Add route to create the route.
    Figure: Route Creation
Step 4.4 Create an advertise policy.
  • Select Manage -> Advertise Policies and click Add advertise policy.
  • Enter a name. This example sets frontend-https as the name.
  • Select Virtual Site for the Where field and select the dc-cloud-gw site (DC site) for the Select ref field.
  • Select Site Local Network for the Network Type field.
  • Enter 443 for the TCP/UDP Port field and click Add advertise policy to complete creating advertise policy.

Figure: Advertise Policy Creation

Step 4.5 Encrypt the private key of the certificate using the Volterra Blindfold.

Securing the application requires you to encrypt the private key of the TLS certificate before applying it in the load balancer.

Note: The prerequisite for this step is a CA-signed or self-signed certificate.
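
If you need a certificate only for testing, the following is a minimal sketch that generates a self-signed certificate with openssl; the domain matches this example, and tls-demo.crt is an assumed name for the certificate file:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls-demo.key -out tls-demo.crt \
  -subj "/CN=hipster-shop-dev.demo.helloclouds.app"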

Change to the terminal and use vesctl to encrypt the key using Volterra Blindfold. Use the public key and policy document obtained in the Step 1: Deploy Site (AWS) chapter. The following sample generates a secret for your application domain. Store the output to a file.

vesctl request secrets encrypt --policy-document mcns-secret-policy-doc --public-key hipster-co-public-key tls-demo.key > tls-demo.key.secret

Note: The tls-demo.key is the private key of your certificate.

Step 4.6 Add a virtual host.

Select Manage -> Virtual Hosts. Click Add virtual host and set the configuration as per the following guidelines:

  • Enter name, application domain, and your proxy type. This sample uses HTTPS_PROXY as the proxy type and hipster-shop-dev.demo.helloclouds.app as the domain.
  • Select the route defined in Step 4.3.
  • Select the advertise policy created in Step 4.4.
  • Click TLS Parameters and click Add TLS certificate in the TLS configuration form.

    • Generate a Base64 string of your certificate and enter it in the string:/// format in the Certificate URL field.
  • Click Private key and select Blindfold for the Secret info field. Enter the secret in the Location field and select EncodingNone for the Secret Encoding field. Click Apply and then Add virtual host.

Note: Use the secret created in the previous step. Use the cat <secrets-file> command to display the secret and then copy it. For this example, the cat tls-demo.key.secret command is used.
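
The following is a sketch of producing the Base64 string for the certificate; tls-demo.crt is the assumed certificate file name, and the -w 0 flag is GNU coreutils (on macOS, use base64 -i tls-demo.crt instead):

# Produce a single-line Base64 encoding of the certificate:
base64 -w 0 tls-demo.crt
# Prefix the output with string:/// when pasting it into the Certificate URL field.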

Step 4.7 Add a service policy rule to allow only the dev subnet on the private DC to access the dev VIP of the load balancer created in the previous step.
  • Select Security from the configuration menu.
  • Select IP Prefix Sets under the Network Security and click Add IP prefix set.
  • Set a name. This example sets dc-dev-mcns.
  • Click Add prefix and enter the prefix of your DC dev subnet. This example adds 10.200.1.0/24. Click Add IP Prefix set to complete creating the prefix set.
  • Select Service Policy Rules under Network Security and click Add service policy rule.
  • Enter a name. This example sets allow-dev-subnet.
  • Select Allow for the Action field.
  • Click IP Prefix Matcher. Click Select prefix set and select the IP Prefix set created for the dev subnet. Click Select prefix set again to add the prefix set to the prefix matcher configuration.
  • Click Apply and click Add service policy rule to complete creating the service policy rule.
Step 4.8 Create service policy and policy set.
  • Select Service Policies under Network Security and click Add service policy.
  • Set a name. This example sets dc-vip.
  • Select First Rule Match for the Rule Combining Algorithm.
  • Click Select rule and add the rule created in the previous step.
  • Click Add service policy to complete creating the service policy.
  • Select Service Policy Sets under Network Security and click Add service policy set.
  • Set a name. This example sets hipster-dev.
  • Click Select policy and select the policy created in the previous step. Click Select policy again to add the policy to the policy set configuration.
  • Click Add service policy set to complete creating the service policy set.
Step 4.9 Create origin pools, load balancer, and service policy sets for the EKS staging cluster.

Change to the hipster-stage namespace and repeat Steps 4.2 through 4.8 for the EKS staging cluster. Ensure that you use the appropriate objects associated with the staging environment. This example configures the following objects:

  • Route - frontend
  • Advertise policy - frontend
  • Encrypted private key - tls-stage.key.secret
  • Virtual host - frontend
  • Virtual host domain name - hipster-shop-stage.demo.helloclouds.app
  • IP prefix set - stage-subnet with the prefix 10.200.2.0/24
  • Service policy rule - stage-subnet
  • Service policy - stage-vip
  • Service policy set - hipster-stage
Step 4.10 Create network policy rules to allow the HTTPS and DNS for the data center dev and staging subnets.

Change to the system namespace. Select Security from the configuration menu and select Network Policy Rules under Network Security. Click Add network policy rule.

  • Enter a name. This example uses the dc-https-mcns name.
  • Select Allow for the Action field.
  • Select IP Prefix for the Remote Endpoint field.
  • Add the prefix for the DC traffic source. This example adds 172.16.10.2/32, which is the outside IP address of the DC cloud gateway.
  • Select TCP for the protocol and 443 for the port.
  • Click Add network policy rule to complete creating the rule for HTTPS.
  • Click Add network policy rule.
  • Enter a name. This example uses the dc-dns-mcns name.
  • Select Allow for the Action field.
  • Select IP Prefix for the Remote Endpoint field.
  • Add the prefix for the DC traffic source. This example adds 172.16.10.2/32, which is the outside IP address of the DC cloud gateway.
  • Select UDP for the protocol and 53 for the port.
  • Click Add network policy rule to complete creating the rule for DNS.
Step 4.11 Apply the rules to the network policies created in the Step 3: Connect & Secure Networks chapter.

Select Security from the configuration menu and select Network Policies under Network Security.

  • Find the network policy created for the development subnet of the data center. For this example, it is the dc-dev-subnet-mcns policy.
  • Click ... -> Edit and scroll down to Egress Rules section.
  • Click Select egress rule and select the DNS and HTTPS rules created in the previous step. Click Select egress rule again to add the rules to the policy.
  • Click Save changes to apply the update to the policy.

Figure: Add Egress Rules to Data Center Dev Network Policy

  • Repeat the above steps to edit the data center staging subnet policy and add the DNS and HTTPS rules created in the previous step.

Verification

Perform the following to verify the web application security.

Log into the DC host on the staging subnet where the DC cloud gateway site is set as the DNS server.

Step 4.12 Verify that the DNS requests are resolved.

Enter the following commands:

nslookup hipster-shop-dev.demo.helloclouds.app 

nslookup hipster-shop-stage.demo.helloclouds.app
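
If dig is available, you can also query the gateway DNS service directly; <dc-gateway-dns-ip> is a placeholder for the DNS server address configured on the host:

dig +short hipster-shop-dev.demo.helloclouds.app @<dc-gateway-dns-ip>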
Step 4.13 Verify the HTTPS requests to the application VIPs.
  • Verify by posting a curl request to the staging VIP:
curl -sD - -o /dev/null https://hipster-shop-stage.demo.helloclouds.app
  • Verify by posting a curl request to the dev VIP:
curl -sD - -o /dev/null https://hipster-shop-dev.demo.helloclouds.app

A client on the staging subnet should not be able to access the development VIP.
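
For a quick pass/fail signal from a client on the staging subnet, the following prints only the HTTP status code; the staging VIP should return 200, while the dev request should time out because the service policy allows only the dev subnet:

curl -s -o /dev/null -w "%{http_code}\n" https://hipster-shop-stage.demo.helloclouds.app
curl -s -o /dev/null -w "%{http_code}\n" --max-time 10 https://hipster-shop-dev.demo.helloclouds.app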

