Secure Kubernetes Gateway

Objective

This guide provides instructions on how to create a Secure Kubernetes Gateway using VoltConsole and VoltMesh.

The steps to create Secure Kubernetes Gateway are:

SeqSecK8GW
Figure: Steps to Deploy Secure Kubernetes Gateway

The following image shows the topology of the example used in this document:

TopSecK8GW
Figure: Secure Kubernetes Gateway Sample Topology

Using the instructions provided in this guide, you can deploy a Secure K8s Gateway in your Amazon Virtual Private Cloud (Amazon VPC), discover the cluster services in that VPC, set up a load balancer for them, and secure those services with a JavaScript challenge and a Web Application Firewall (WAF).

The example shown in this guide deploys the Secure K8s Gateway on a single VPC for an application called hipster-shop deployed in an EKS cluster. The application consists of the following services:

  • frontend
  • cartservice
  • productcatalogservice
  • currencyservice
  • paymentservice
  • shippingservice
  • emailservice
  • checkoutservice
  • recommendationservice
  • adservice
  • cache

Prerequisites

  • VoltConsole SaaS account.

    Note: If you do not have an account, see Create a Volterra Account.

  • Amazon Web Services (AWS) account.

    Note: This is required to deploy a Volterra site.

  • Volterra vesctl utility.

    Note: See vesctl for more information.

  • Docker.
  • Self-signed or CA-signed certificate for your application domain. A sample command for generating a self-signed certificate is shown after this list.
  • AWS IAM Authenticator.

    Note: See IAM Authenticator Installation for more information.
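
If you do not have a CA-signed certificate, you can generate a self-signed certificate for the application domain with OpenSSL. The following is a minimal sketch; the domain hipster.skg.helloclouds.app and the file names are placeholders taken from the example in this guide, so substitute your own values. The -addext option requires OpenSSL 1.1.1 or later.

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout hipster-skg.key -out hipster-skg.crt \
  -subj "/CN=hipster.skg.helloclouds.app" \
  -addext "subjectAltName=DNS:hipster.skg.helloclouds.app"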


Configuration

The use case provided in this guide sets up a Volterra site as a Secure K8s Gateway for the ingress and egress traffic of the K8s cluster deployed in the Amazon VPC. The example web application has a frontend service to which all user requests are sent; the frontend then redirects requests to the other services as needed. The following actions outline the activities in setting up the Secure K8s Gateway:

  1. The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created with an origin pool pointing to the frontend service on the EKS cluster.
  2. The domain of the load balancer is delegated to Volterra so that Volterra can manage its DNS and TLS certificates.
  3. Network and forward proxy policies are configured to control egress DNS traffic to Google DNS and to allow GitHub, Docker, AWS, and other domains required for code repository and registry access.
  4. A WAF configuration is applied to secure the externally available load balancer VIP.
  5. A JavaScript challenge is set on the load balancer to provide further protection against attacks such as botnets.

Note: Ensure that you keep the Amazon Elastic IP VIPs ready for later use in configuration.
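
If you still need to allocate Elastic IPs, the AWS CLI can be used as shown below. This is an optional, hedged example that is not part of the original workflow; it assumes the AWS CLI is configured for your account and uses the us-east-2 region from this guide.

aws ec2 allocate-address --domain vpc --region us-east-2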

Step 1: Deploy & Secure Site

The following video shows the site deployment workflow:

Perform the following steps to deploy a Volterra site as the Secure K8s Gateway in your VPC.

Step 1.1: Log into the VoltConsole and create a cloud credentials object.
  • Select Manage -> Site Management -> Cloud Credentials in the configuration menu of the System namespace. Click Add Cloud Credentials.
  • Enter a name for your credentials object and select AWS Programmatic Access Credentials for the Select Cloud Credential Type field.
  • Enter your AWS access key ID in the Access Key ID field.

cc meta awskey
Figure: Credentials Meta and AWS Key Configuration

Step 1.2: Configure AWS secret access key.
  • Click Configure under the Secret Access Key field.
  • Enter your AWS secret access key into the Type field. Ensure that the Text radio button is selected.

cc secret
Figure: Secret Key Configuration

  • Click Blindfold to encrypt your secret using Volterra Blindfold. The Blindfold configured message gets displayed.
  • Click Apply.
Step 1.3: Complete creating the credentials.

Click Save and Exit to complete creating the AWS cloud credentials object.

Step 1.4: Start creating AWS VPC site object.
  • Select Manage -> Site Management -> AWS VPC Site in the configuration menu of the System namespace. Click Add AWS VPC Site.
  • Enter a name for your VPC object in the metadata section.
Step 1.4.1: Configure site type selection.
  • Go to the Site Type Selection section and perform the following:

    • Select a region in the AWS Region drop-down field. This example selects us-east-2.
    • Select New VPC Parameters for the Select existing VPC or create new VPC field. Enter the name tag in the AWS VPC Name Tag field and enter the CIDR in the Primary IPv4 CIDR blocks field.
    • Select Ingress/Egress Gateway (Two Interface) for the Select Ingress Gateway or Ingress/Egress Gateway field.

aws vpc basic
Figure: AWS VPC Site Configuration of Site Type

Step 1.4.2: Configure ingress/egress gateway nodes.
  • Click Edit to open the two-interface node configuration wizard and enter the configuration as per the following guidelines.

    • Select an option for the AWS AZ name field that matches the configured AWS Region.
    • Select New Subnet for the Select Existing Subnet or Create New field in the Subnet for Inside Interface section. Enter a subnet address in the IPv4 Subnet field.
    • Similarly configure a subnet address for the Subnet for Outside Interface section.

two int cidrs
Figure: Ingress/Egress Gateway Nodes Configuration

Step 1.4.3: Configure the site network policy.
  • Go to the Site Network Firewall section and select Active Network Policies for the Manage Network Policy field. Click the Network Policy field and select Create new network policy. Enter the configuration as per the following guidelines:

    • Enter a name and enter a CIDR for the IPv4 Prefix List field. This CIDR should be within the CIDR block of the VPC.

nw pol cidr
Figure: Network Policy Endpoint Subnet

  • Click Configure under Ingress Rules and enter the following configuration:

    • Set a name for the Rule Name field and select Allow for the Action field.
    • Select Match Protocol and Port Ranges for the Select Type of Traffic to Match field.
    • Select tcp for the Protocol field.
    • Enter a port range in the List of Port Ranges field.
    • Click Apply.

ingress rule
Figure: Network Policy Ingress Rule

  • Click Configure under Egress Rules and enter the following configuration:

    • Set a name for the Rule Name field and select Deny for the Action field. This example configures a deny rule for Google DNS query traffic.
    • Select IPv4 Prefix List for the Select Other Endpoint field and enter 8.8.4.4/32 for the IPv4 Prefix List field.
    • Select Match Application Traffic for the Select Type of Traffic to Match field.
    • Select APPLICATION_DNS for the Application Protocols field.

egress rules
Figure: Network Policy Egress Rule

  • Click Add item and configure another rule with the Allow action for the endpoint prefix 8.8.8.8/32. This is the other Google DNS endpoint prefix.
  • Click Add item and configure another rule with the Allow action to allow all remaining egress TCP traffic.
  • Click Apply.
  • Click Continue to apply the network policy configuration.
Step 1.4.4: Configure the site forward proxy policy.
  • Select Active Forward Proxy Policies for the Manage Forward Proxy Policy field. Click on the Forward Proxy Policies field and select Create new forward proxy policy. Enter the configuration as per the following guidelines:

    • Enter a name and select All Proxies on Site for the Select Forward Proxy field.
    • Select Allowed connections for the Select Policy Rules section and click Configure under the TLS Domains field.
    • Click Add item. Select Exact Value from the drop-down list of the Enter Domain field and enter github.com for the Exact Value field. Click Add item and add the following domains with the Suffix Value type.
    • gcr.io
    • storage.googleapis.com
    • docker.io
    • docker.com
    • amazonaws.com

tls doms
Figure: TLS Domains

Note: The Allowed connections option allows the configured TLS domains and HTTP URLs. Everything else is denied.

  • Click Apply. Click Continue to apply the forward proxy policy configuration.
Step 1.4.5: Configure a static route for the inside interface towards the EKS CIDR.
  • Enable Show Advanced Fields in the Advanced Options section.
  • Select Simple Static Route for the Static Route Config Mode field.
  • Enter a route for your EKS subnet in the Simple Static Route field. For example, 192.168.1.0/24 matches the EKS CIDR used in the Terraform input variables later in this guide.

simple static
Figure: Static Route Configuration

  • Click Apply to return to the AWS VPC site configuration screen.
Step 1.4.6: Complete AWS VPC site object creation.
  • Select Automatic Deployment for the Select Automatic or Assisted Deployment field.
  • Select the AWS credentials created in Step 1.1 for the Automatic Deployment field.
  • Select an instance type for the node for the AWS Instance Type for Node field in the Site Node Parameters section.
  • Enter your public SSH key in the Public SSH key field. This is required to access the Volterra site once it is deployed.

ssh rsa
Figure: Automatic Deployment and Site Node Parameters

  • Click Save and Exit to complete creating the AWS VPC object. The AWS VPC site object gets displayed.
Step 1.5: Deploy AWS VPC site.
  • Click Apply for the created AWS VPC site object. This creates the VPC site.

tf applied
Figure: Terraform Apply for the VPC Object

  • Click ... -> Terraform Parameters and then click the Apply Status tab. Copy the VPC ID from the tf_output section.

tf ouput vpc id
Figure: VPC ID from Terraform Apply Status

Step 1.6: Deploy the EKS cluster and hipster-shop application in it.

The EKS CIDR and VPC ID obtained in Step 1.5 are required for this step.

Step 1.6.1: Download the Volterra quickstart deployment script.

Download the latest quickstart utility:

docker pull gcr.io/volterraio/volt-terraform

Download the deployment script:

docker run --rm -v $(pwd):/opt/bin:rw gcr.io/volterraio/volt-terraform:latest cp /deploy-terraform.sh /opt/bin
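
Depending on how file permissions are preserved on your system, you may need to make the copied script executable before running it. This step is an assumption and may not be required in your environment.

chmod +x ./deploy-terraform.sh
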
Step 1.6.2: Prepare the input variables file for the Terraform deployment. The following example shows sample entries for the input variables:
{
    "access_key": "<aws_access_key>",
    "secret_key": "<aws_secret_key>",
    "region": "us-east-2",
    "vpc_id": "vpc-065cdb075a2f05deb",
    "eks_cidr": "192.168.1.0/24",
    "deployment": "eks-skg",
    "volterra_site_name": "aws-skg-svcs",
    "machine_public_key": "<pub-key>"
}
Step 1.6.3: Deploy the EKS cluster and application using the deployment script.
./deploy-terraform.sh apply -p aws -i <tfvars> -tn self-serve/eks_only 

Note: The deployment takes approximately 10 to 15 minutes to complete.

  • After the deployment is complete, download the kubeconfig file of the created EKS cluster using the following command:
 
./deploy-terraform.sh output -n eks-skg eks_kubeconfig > /tmp/eks-test
 
  • Set the KUBECONFIG environment variable with the downloaded kubeconfig file.

export KUBECONFIG=/tmp/eks-test

  • Verify that the EKS worker node joined the cluster and that the pods are deployed in the K8s namespace. Enter the following commands:
kubectl get node -o wide
kubectl get pods -n hipster

Note: The deployment creates the hipster K8s namespace.
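
Optionally, confirm that the frontend service exists in the hipster namespace; frontend.hipster is the service name used later for the origin pool in Step 3. This check is a suggestion rather than part of the original workflow.

kubectl get svc -n hipster frontend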


Step 2: Discover & Delegate

Discovering services in the VPC requires configuring a service discovery object for the frontend service. This step also includes delegating the domain to Volterra so that Volterra can manage DNS and certificates for the domain.

The following video shows the service discovery workflow:

Perform the following steps for discovering services.

Step 2.1: Create a service discovery object.

Log into the VoltConsole and navigate to Manage -> App Management -> Service Discovery. Click Add Discovery and enter the following configuration:

  • Enter a name in the Name field.
  • Select Site for the Where field and click Select ref object. Select the site you created as part of Step 1 and click Select ref object to add the site to the discovery configuration.
  • Select Site Local Inside Network for the Network Type field.
  • Select Kubernetes Service Discovery for the Select How Endpoints are Discovered field.
  • Select Kubeconfig for the Select Kubernetes Credentials field. Click Configure under the Kubeconfig field to open the secret configuration.
  • Enter the kubeconfig downloaded as part of Step 1 in the Type field. Ensure that the Text radio button is selected.

SecPol
Figure: Secret Encryption

  • Click Blindfold and wait till the Blindfold process is complete. Click Apply.

SecPol
Figure: Discovery Object Configuration

  • Click Save and Exit to create the discovery object.

Verify in VoltConsole that the discovery object is created and that services are discovered. Click ... -> Show Global Status for the discovery object to view the discovered services.

Step 2.2: Delegate your domain to Volterra.
  • From the system namespace, select Manage in the configuration menu. Select Networking -> Delegated Domains from the options pane and click Add delegated domain.
  • Enter the name for your domain in the Domain Name field as per RFC 1035. Ensure that this is a valid and functional domain. This example configures skg.helloclouds.app as the domain name.
  • Select Managed by Volterra for the Domain Method field.

create dd new
Figure: Create Delegated Domain

  • Click Continue to complete creating the delegated domain object.
  • A TXT string gets generated for the created object and the verification status is set to DNS_DOMAIN_VERIFICATION_PENDING. Copy the string for use in updating TXT records in your domain.
  • Add a TXT record in your domain with the verification string generated in the VoltConsole.
  • Wait for the domain verification to complete. The status DNS_DOMAIN_VERIFICATION_SUCCESS indicates that the domain is verified. The Name Servers field now shows the name servers for the delegated domain.

ns dd
Figure: Delegated Domain Object

  • Add NS records in your domain with the name servers obtained from the VoltConsole.
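
You can spot-check the DNS records from a terminal. This is an optional, hedged check that uses the skg.helloclouds.app domain from this example; substitute your own delegated domain.

dig TXT skg.helloclouds.app +short
dig NS skg.helloclouds.app +short

The TXT query should return the verification string, and the NS query should return the Volterra name servers once the NS records have propagated.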

Step 3: Load Balancer

An HTTP load balancer must be configured to make the frontend service externally available. As part of the HTTP load balancer, an origin pool is created that defines the origin servers where the frontend service is available.

The following video shows the load balancer creation workflow:

Perform the following steps to configure the load balancer:

Step 3.1: Create a namespace and change to it.
  • Select General on the namespace selector. Select Personal Management -> Manage Namespaces. Click Add namespace.

add ns
Figure: Add a Namespace

  • Enter a name and click Save.
  • Click the App tab of the namespace selector, then select your namespace from the namespace drop-down menu to change to it.

changeto ns
Figure: Change to Application Namespace

Step 3.2: Create an HTTP load balancer.

Select Manage -> Load Balancers in the configuration menu and HTTP Load Balancers in the options. Click Add HTTP load balancer.

Step 3.2.1: Enter metadata and set basic configuration.
  • Enter a name for your load balancer in the metadata section.
  • Enter a domain name in the Domains field. Ensure that its sub-domain is delegated to Volterra. This example sets the hipster.skg.helloclouds.app domain. The hipster part is the prefix, and skg.helloclouds.app is the domain delegated in Step 2.
  • Select HTTPS for the Select Type of Load Balancer field.
Step 3.2.2: Configure origin pool.
  • Click Configure in the Default Origin Servers section and configure the origin pool as per the following guidelines:

    • Click Add item in the Origin Pools configuration, then click Create new pool to load the pool creation form.
    • In the pool creation form, enter a name for your pool in the metadata section.
    • In the Select Type of Origin Server field of the Basic Configuration section, select k8s Service Name of Origin Server on given Sites.
    • Enter the service name in the <servicename.k8s-namespace> format for the Service Name field. This example sets frontend.hipster as the service name, where hipster is the K8s namespace created in Step 1.
    • Select Site for the Select Site or Virtual Site field and select the site you created in Step 1.
    • Enter 80 for the Port field.

    orig pools
    Figure: Origin Pool Configuration

    • Click Continue to apply the origin pool and click Apply to set the origin pool to the load balancer configuration.
Step 3.2.3: Complete load balancer creation.

Scroll down and click Save and Exit to create the load balancer. The load balancer object gets displayed with the TLS Info field value set to DNS Domain Verification. Wait for it to change to Certificate Valid.

vh ready
Figure: Created HTTP Load Balancer

The load balancer is now ready and you can verify it by accessing the domain URL from a browser.
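
A quick command-line check is sketched below. It assumes the hipster.skg.helloclouds.app domain from this guide, that the NS delegation has propagated, and that the certificate is valid; substitute your own domain.

curl -s -o /dev/null -w "%{http_code}\n" https://hipster.skg.helloclouds.app/

A 200 status code suggests that the frontend service is reachable through the load balancer.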


Step 4: Secure App

Securing the ingress and egress traffic includes applying a WAF and a JavaScript challenge to the load balancer.

The following video shows the workflow of securing the ingress and egress:

Perform the following steps to configure WAF and javascript challenge:

Step 4.1: Configure WAF for the load balancer.
  • Select Manage -> Load Balancers from the configuration menu and select HTTP Load Balancers in the options. Click ... -> Edit for the load balancer for which WAF is to be applied.
  • Navigate to the security configuration and enable the Show Advanced Fields option. Select Specify WAF Intent for the Select Web Application Firewall (WAF) Config field. Click the Specify WAF Intent field and click Create new WAF. This loads the WAF creation form.

lb sec cfg
Figure: Security Configuration for Load balancer

  • Set a name for the WAF and select BLOCK for the Mode field. Click Continue to create the WAF and apply it to the load balancer.

waf
Figure: WAF Configuration

  • Click Save and Exit to save the load balancer configuration.
  • Verify that the WAF is operational. Enter the following command to simulate an SQL injection attack:
docker run --name waf -e APP_URL=https://hipster.skg.helloclouds.app -t madhukar32/waf-client:v0.1

The return status 403 indicates that the WAF is operational and blocks the SQL injection attempt.
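
If you prefer not to run the container, a rough equivalent check with curl is sketched below. The query string is a generic SQL injection pattern chosen for illustration, and the assumption is that the WAF in BLOCK mode rejects it with a 403 status; the exact signatures that trigger blocking may differ.

curl -s -o /dev/null -w "%{http_code}\n" \
  "https://hipster.skg.helloclouds.app/?id=1%27%20OR%20%271%27=%271"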

  • Inspect the WAF events from the load balancer monitoring view. Navigate to Virtual Hosts -> HTTP Load Balancers in the configuration menu and click your load balancer in the displayed list. Click the App Firewall tab to view security information such as security events, bot requests, and top rules hit.

monitor waf
Figure: Load Balancer App Firewall View

Step 4.2: Configure the JavaScript challenge for the load balancer.
  • Select Manage -> Load Balancers from the configuration menu and select HTTP Load Balancers in the options. Click ... -> Edit for the load balancer to which the JavaScript challenge is to be applied.
  • Select Javascript Challenge for the Select Type of Challenge field and click Configure for the Javascript Challenge field.
  • Enter 3000 and 1800 for the Javascript Delay and Cookie Expiry period fields, respectively. This sets the delay to 3000 milliseconds and the cookie expiry to 1800 seconds.

jscript
Figure: Javascript Challenge Configuration

  • Click Apply to apply the JavaScript challenge to the load balancer.

lb final
Figure: Javascript Challenge Applied to Load Balancer

  • Click Save and Exit to save the load balancer configuration.
  • Verify that the JavaScript challenge is applied. Open your domain URL in a browser. The JavaScript challenge default page appears for 3000 milliseconds before the hipster website loads.
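
You can also observe the challenge from the command line. This is a hedged check: a client that cannot execute JavaScript should receive the challenge page rather than the application, assuming the challenge applies to the requested path.

curl -s https://hipster.skg.helloclouds.app/ | head -n 20

The response should contain the challenge markup instead of the hipster-shop frontend HTML.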
