Secure Kubernetes Gateway
Objective
This guide provides instructions on how to create a Secure Kubernetes Gateway using VoltConsole and VoltMesh.
The steps to create a Secure Kubernetes Gateway are presented in the sections that follow.
The following image shows the topology of the example use case provided in this document:
Using the instructions provided in this guide, you can deploy a Secure K8s Gateway in your Amazon Virtual Private Cloud (Amazon VPC), discover the cluster services from that VPC, set up load balancers for them, and secure those services with a JavaScript challenge and a Web Application Firewall (WAF).
The example shown in this guide deploys the Secure K8s Gateway on a single VPC for an application called hipster-shop deployed in an EKS cluster. The application consists of the following services:
- frontend
- cartservice
- productcatalogservice
- currencyservice
- paymentservice
- shippingservice
- emailservice
- checkoutservice
- recommendationservice
- adservice
- cache
Prerequisites
- VoltConsole SaaS account.
  Note: If you do not have an account, see Create a Volterra Account.
- Amazon Web Services (AWS) account.
  Note: This is required to deploy a Volterra site.
- Volterra vesctl utility.
  Note: See vesctl for more information.
- Docker.
- Self-signed or CA-signed certificate for your application domain.
- AWS IAM Authenticator.
  Note: See IAM Authenticator Installation for more information.
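Before starting, it can help to confirm the prerequisite command-line tools are installed. A minimal sketch, assuming the commonly used binary names (`vesctl`, `docker`, `kubectl`, `aws-iam-authenticator`); adjust the names if your installation differs:

```shell
# Report which of the required command-line tools are available on PATH.
# Binary names are assumptions based on the common installation defaults.
for tool in vesctl docker kubectl aws-iam-authenticator; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as `MISSING` should be installed using the links in the prerequisites above before proceeding.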
Configuration
The use case provided in this guide sets up a Volterra site as a Secure K8s Gateway for the ingress and egress traffic of the K8s cluster deployed in the Amazon VPC. The example web application has a frontend service that receives all user requests and redirects them to the other services accordingly. The following actions outline the activities in setting up the Secure K8s Gateway:
- The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created with an origin pool pointing to the frontend service on the EKS cluster.
- The domain of the load balancer is delegated to Volterra so that Volterra can manage its DNS and TLS certificates.
- Security policies are configured to block egress communication to Google DNS for DNS query resolution, and to allow GitHub, Docker, AWS, and other required domains for code repository management.
- A WAF configuration is applied to secure the externally available load balancer VIP.
- A JavaScript challenge is set for the load balancer for further protection from attacks such as botnets.
Note: Ensure that you keep the Amazon Elastic IP VIPs ready for later use in configuration.
Step 1: Deploy & Secure Site
The following video shows the site deployment workflow:
Perform the following steps to deploy a Volterra site as the Secure K8s Gateway in your VPC.
Step 1.1: Log into VoltConsole and create a cloud credentials object.
- Select **Manage** -> **Site Management** -> **Cloud Credentials** in the configuration menu of the **System** namespace. Click **Add Cloud Credentials**.
- Enter a name for your credentials object and select **AWS Programmatic Access Credentials** for the **Select Cloud Credential Type** field.
- Enter your AWS access key ID in the **Access Key ID** field.
Step 1.2: Configure the AWS secret access key.
- Click **Configure** under the **Secret Access Key** field.
- Enter your AWS secret access key into the **Type** field. Ensure that the **Text** radio button is selected.
- Click **Blindfold** to encrypt your secret using Volterra Blindfold. The **Blindfold configured** message gets displayed.
- Click **Apply**.
Step 1.3: Complete creating the credentials.
Click **Save and Exit** to complete creating the AWS cloud credentials object.
Step 1.4: Start creating the AWS VPC site object.
- Select **Manage** -> **Site Management** -> **AWS VPC Site** in the configuration menu of the **System** namespace. Click **Add AWS VPC Site**.
- Enter a name for your VPC object in the metadata section.
Step 1.4.1: Configure site type selection.
- Go to the **Site Type Selection** section and perform the following:
  - Select a region in the **AWS Region** drop-down field. This example selects `us-east-2`.
  - Select **New VPC Parameters** for the **Select existing VPC or create new VPC** field. Enter the name tag in the **AWS VPC Name Tag** field and enter the CIDR in the **Primary IPv4 CIDR blocks** field.
  - Select **Ingress/Egress Gateway (Two Interface)** for the **Select Ingress Gateway or Ingress/Egress Gateway** field.
Step 1.4.2: Configure ingress/egress gateway nodes.
- Click **Edit** to open the two-interface node configuration wizard and enter the configuration as per the following guidelines:
  - Select an option for the **AWS AZ name** field that matches the configured **AWS Region**.
  - Select **New Subnet** for the **Select Existing Subnet or Create New** field in the **Subnet for Inside Interface** section. Enter a subnet address in the **IPv4 Subnet** field.
  - Similarly, configure a subnet address for the **Subnet for Outside Interface** section.
Step 1.4.3: Configure the site network policy.
- Go to the **Site Network Firewall** section and select **Active Network Policies** for the **Manage Network Policy** field. Click on the **Network Policy** field and select **Create new network policy**. Enter the configuration as per the following guidelines:
  - Enter a name and enter a CIDR for the **IPv4 Prefix List** field. This CIDR should be within the CIDR block of the VPC.
- Click **Configure** under **Ingress Rules** and enter the following configuration:
  - Set a name for the **Rule Name** field and select **Allow** for the **Action** field.
  - Select **Match Protocol and Port Ranges** for the **Select Type of Traffic to Match** field.
  - Select **tcp** for the **Protocol** field.
  - Enter a port range for the **List Port of Range** field.
  - Click **Apply**.
- Click **Configure** under **Egress Rules** and enter the following configuration:
  - Set a name for the **Rule Name** field and select **Deny** for the **Action** field. This example configures a deny rule for Google DNS query traffic.
  - Select **IPv4 Prefix List** for the **Select Other Endpoint** field and enter `8.8.4.4/32` for the **IPv4 Prefix List** field.
  - Select **Match Application Traffic** for the **Select Type of Traffic to Match** field.
  - Select **APPLICATION_DNS** for the **Application Protocols** field.
- Click **Add item** and configure another **Deny** rule for the endpoint prefix `8.8.8.8/32`, the other Google DNS endpoint prefix, so that all egress to Google DNS is blocked.
- Click **Add item** and configure another rule with the **Allow** action to allow all remaining egress TCP traffic.
- Click **Apply**.
- Click **Continue** to apply the network policy configuration.
Step 1.4.4: Configure the site forward proxy policy.
- Select **Active Forward Proxy Policies** for the **Manage Forward Proxy Policy** field. Click on the **Forward Proxy Policies** field and select **Create new forward proxy policy**. Enter the configuration as per the following guidelines:
  - Enter a name and select **All Proxies on Site** for the **Select Forward Proxy** field.
  - Select **Allowed connections** for the **Select Policy Rules** section and click **Configure** under the **TLS Domains** field.
  - Click **Add item**. Select **Exact Value** from the drop-down list of the **Enter Domain** field and enter `github.com` for the **Exact Value** field. Click **Add item** and add the following domains with the **Suffix Value** type: `gcr.io`, `storage.googleapis.com`, `docker.io`, `docker.com`, `amazonaws.com`.

Note: The **Allowed connections** option allows the configured TLS domains and HTTP URLs. Everything else is denied.

- Click **Apply**. Click **Continue** to apply the forward proxy policy configuration.
Step 1.4.5: Configure a static route for the inside interface towards the EKS CIDR.
- Enable the **Show Advanced Fields** option in the **Advanced Options** section.
- Select **Simple Static Route** for the **Static Route Config Mode** field.
- Enter a route for your EKS subnet in the **Simple Static Route** field.
- Click **Apply** to return to the AWS VPC site configuration screen.
Step 1.4.6: Complete AWS VPC site object creation.
- Select **Automatic Deployment** for the **Select Automatic or Assisted Deployment** field.
- Select the AWS credentials created in Step 1.1 for the **Automatic Deployment** field.
- Select an instance type for the node in the **AWS Instance Type for Node** field of the **Site Node Parameters** section.
- Enter your public SSH key in the **Public SSH key** field. This is required to access the Volterra site once it is deployed.
- Click **Save and Exit** to complete creating the AWS VPC object. The AWS VPC site object gets displayed.
Step 1.5: Deploy the AWS VPC site.
- Click the **Apply** button for the created AWS VPC site object. This creates the VPC site.
- Click **...** -> **Terraform Parameters** and click the **Apply Status** tab. Copy the VPC ID from the `tf_output` section.
Step 1.6: Deploy the EKS cluster and hipster-shop application in it.
The EKS CIDR and VPC ID obtained in Step 1.5 are required for this step.
Step 1.6.1: Download the Volterra quickstart deployment script.
Download the latest quickstart utility:
docker pull gcr.io/volterraio/volt-terraform
Download the deployment script:
docker run --rm -v $(pwd):/opt/bin:rw gcr.io/volterraio/volt-terraform:latest cp /deploy-terraform.sh /opt/bin
Step 1.6.2: Prepare the input variables file for the Terraform deployment.
The following example shows sample entries for the input variables:
{
"access_key": "<aws_access_key>",
"secret_key": "<aws_secret_key>",
"region": "us-east-2",
"vpc_id": "vpc-065cdb075a2f05deb",
"eks_cidr": "192.168.1.0/24",
"deployment": "eks-skg",
"volterra_site_name": "aws-skg-svcs",
"machine_public_key": "<pub-key>"
}
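Before running the deployment script, it can save time to confirm that the variables file is valid JSON; Terraform otherwise fails with a less obvious parse error. A small sketch, assuming `python3` is available and using `/tmp/eks-skg.tfvars.json` as a hypothetical file path (substitute your own values for the placeholders):

```shell
# Write a skeleton input variables file and confirm it parses as JSON.
# The placeholder values must be replaced with your own credentials and IDs.
cat > /tmp/eks-skg.tfvars.json <<'EOF'
{
  "access_key": "<aws_access_key>",
  "secret_key": "<aws_secret_key>",
  "region": "us-east-2",
  "vpc_id": "<vpc_id_from_step_1.5>",
  "eks_cidr": "192.168.1.0/24",
  "deployment": "eks-skg",
  "volterra_site_name": "aws-skg-svcs",
  "machine_public_key": "<pub-key>"
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool /tmp/eks-skg.tfvars.json > /dev/null && echo "tfvars: valid JSON"
```

The resulting file path is what you pass as `<tfvars>` to the deployment script in the next step.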
Step 1.6.3: Deploy the EKS cluster and application using the deployment script.
./deploy-terraform.sh apply -p aws -i <tfvars> -tn self-serve/eks_only
Note: The deployment takes approximately 10 to 15 minutes to complete.
- After the deployment is complete, download the kubeconfig file of the created EKS cluster using the following command:
./deploy-terraform.sh output -n eks-skg eks_kubeconfig > /tmp/eks-test
- Set the `KUBECONFIG` environment variable with the downloaded kubeconfig file:
export KUBECONFIG=/tmp/eks-test
- Verify that the EKS worker node joined the cluster and that the pods are deployed in the K8s namespace. Enter the following commands:
kubectl get node -o wide
kubectl get pods -n hipster
Note: The deployment creates the `hipster` K8s namespace.
Step 2: Discover & Delegate
Discovering services in the VPC requires configuring a service discovery object for the frontend service. This also includes delegating the domain to Volterra so that Volterra manages the DNS and certificates for the domain.
The following video shows the service discovery workflow:
Perform the following steps for discovering services.
Step 2.1: Create a service discovery object.
Log into VoltConsole and navigate to **Manage** -> **App Management** -> **Service Discovery**. Click **Add Discovery** and enter the following configuration:
- Enter a name in the **Name** field.
- Select **Site** for the **Where** field and click **Select ref object**. Select the site you created as part of Step 1 and click **Select ref object** to add the site to the discovery configuration.
- Select **Site Local Inside Network** for the **Network Type** field.
- Select **Kubernetes Service Discovery** for the **Select How Endpoints are Discovered** field.
- Select **Kubeconfig** for the **Select Kubernetes Credentials** field. Click **Configure** under the **Kubeconfig** field to open the secret configuration.
- Enter the kubeconfig downloaded as part of Step 1 in the **Type** field. Ensure that the **Text** radio button is selected.
- Click **Blindfold** and wait till the Blindfold process is complete. Click **Apply**.
- Click **Save and Exit** to create the discovery object.
Verify in VoltConsole that the discovery object is created and that services are discovered. Click **...** -> **Show Global Status** for the discovery object to view the discovered services.
Step 2.2: Delegate your domain to Volterra.
- From the **System** namespace, select **Manage** in the configuration menu. Select **Networking** -> **Delegated Domains** from the options pane and click **Add delegated domain**.
- Enter the name of your domain in the **Domain Name** field as per the DNS 1035 standard. Ensure that this is a valid and functional domain. This example configures `skg.helloclouds.app` as the domain name.
- Select **Managed by Volterra** for the **Domain Method** field.
- Click **Continue** to complete creating the delegated domain object.
- A TXT string gets generated for the created object and the verification status is set to `DNS_DOMAIN_VERIFICATION_PENDING`. Copy the string for use in updating the TXT records in your domain.
- Add a TXT record in your domain with the verification string generated in VoltConsole.
- Wait for the domain verification to complete. The status `DNS_DOMAIN_VERIFICATION_SUCCESS` indicates that the domain is verified. The **Name Servers** field now shows the name servers for the delegated domain.
- Add NS records in your domain with the name servers obtained from VoltConsole.
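At the registrar, the TXT and NS records end up looking roughly like the following zone-file fragment. All values here are illustrative placeholders; use the exact verification string and name servers shown in VoltConsole for your own delegated domain:

```
; Hypothetical zone entries for the example delegated domain
skg.helloclouds.app.   300  IN  TXT  "<verification-string-from-VoltConsole>"
skg.helloclouds.app.   300  IN  NS   <name-server-1-from-VoltConsole>.
skg.helloclouds.app.   300  IN  NS   <name-server-2-from-VoltConsole>.
```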
Step 3: Load Balancer
An HTTP load balancer is required to make the frontend service externally available. As part of the HTTP load balancer configuration, origin pools are created that define the origin servers where the frontend service is available.
The following video shows the load balancer creation workflow:
Perform the following steps to configure the load balancer:
Step 3.1: Create a namespace and change to it.
- Select **General** from the namespace selector. Select **Personal Management** -> **Manage Namespaces** and click **Add namespace**.
- Enter a name and click **Save**.
- Click the **App** tab of the namespace selector, click the namespace drop-down menu, and select your namespace to change to it.
Step 3.2: Create the HTTP load balancer.
Select **Manage** -> **Load Balancers** in the configuration menu and **HTTP Load Balancers** in the options. Click **Add HTTP load balancer**.
Step 3.2.1: Enter metadata and set the basic configuration.
- Enter a name for your load balancer in the metadata section.
- Enter a domain name in the **Domains** field. Ensure that its sub-domain is delegated to Volterra. This example sets the `hipster.skg.helloclouds.app` domain. The `hipster` part is the prefix and the `skg.helloclouds.app` part is the domain delegated in Step 2.
- Select **HTTPS** for the **Select Type of Load Balancer** field.
Step 3.2.2: Configure the origin pool.
- Click **Configure** in the **Default Origin Servers** section and configure the origin pool as per the following guidelines:
  - Click **Add item** in the **Origin Pools** configuration and click **Create new pool** to load the new pool creation form.
  - In the pool creation form, enter a name for your pool in the metadata section.
  - In the **Select Type of Origin Server** field of the **Basic Configuration** section, select **k8s Service Name of Origin Server on given Sites**.
  - Enter the service name in the `<service-name.k8s-namespace>` format for the **Service Name** field. This example sets `frontend.hipster` as the service name, where `hipster` is the K8s namespace created in Step 1.
  - Select **Site** for the **Select Site or Virtual Site** field and select the site you created in Step 1.
  - Enter `80` for the **Port** field.

Figure: Origin Pool Configuration

- Click **Continue** to apply the origin pool and click **Apply** to set the origin pool in the load balancer configuration.
Step 3.2.3: Complete the load balancer creation.
Scroll down and click **Save and Exit** to create the load balancer. The load balancer object gets displayed with the **TLS Info** field value as **DNS Domain Verification**. Wait for it to change to **Certificate Valid**.
The load balancer is now ready, and you can verify it by accessing the domain URL from a browser.
Step 4: Secure App
Securing the ingress and egress traffic includes applying a WAF and a JavaScript challenge to the load balancer.
The following video shows the workflow of securing the ingress and egress:
Perform the following steps to configure the WAF and the JavaScript challenge:
Step 4.1: Configure WAF for the load balancer.
- Select **Manage** -> **Load Balancers** from the configuration menu and select **HTTP Load Balancers** in the options. Click **...** -> **Edit** for the load balancer to which the WAF is to be applied.
- Navigate to the security configuration and enable the **Show Advanced Fields** option. Select **Specify WAF Intent** for the **Select Web Application Firewall (WAF) Config** field. Click on the **Specify WAF Intent** field and click **Create new WAF**. This loads the WAF creation form.
- Set a name for the WAF and select **BLOCK** for the **Mode** field. Click **Continue** to create the WAF and apply it to the load balancer.
- Click **Save and Exit** to save the load balancer configuration.
- Verify that the WAF is operating. Enter the following command to apply an SQL injection attack:
docker run --name waf -e APP_URL=https://hipster.skg.helloclouds.app -t madhukar32/waf-client:v0.1
A return status of 403 indicates that the WAF is operational and blocks the SQL injection attempt.
- Inspect the WAF events from the load balancer monitoring view. Navigate to **Virtual Hosts** -> **HTTP Load Balancers** in the configuration menu and click on your load balancer from the displayed list. Click the **App Firewall** tab to view security information such as security events, bot requests, and top rules hit.
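If you prefer not to run the container above, the 403 check can be approximated with `curl`. The sketch below separates the pass/fail logic into a small function so it can be read on its own; the probe URL and query string are illustrative, and the real request requires the deployed load balancer to be reachable:

```shell
# Interpret the HTTP status of a WAF smoke test: 403 means the probe was
# blocked by the WAF; any other status means the WAF did not intervene.
waf_result() {
  if [ "$1" = "403" ]; then
    echo "blocked"
  else
    echo "not blocked (HTTP $1)"
  fi
}

# In a live environment you would obtain the status with a probe such as
# (hypothetical query string; the domain is the example from this guide):
#   status=$(curl -ks -o /dev/null -w '%{http_code}' \
#     "https://hipster.skg.helloclouds.app/?q=1%27%20OR%20%271%27%3D%271")
#   waf_result "$status"
waf_result 403
```

A "blocked" result corresponds to the 403 status described above; anything else suggests the WAF mode is not set to BLOCK or the policy did not match.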
Step 4.2: Configure the JavaScript challenge for the load balancer.
- Select **Manage** -> **Load Balancers** from the configuration menu and select **HTTP Load Balancers** in the options. Click **...** -> **Edit** for the load balancer to which the JavaScript challenge is to be applied.
- Select **Javascript Challenge** for the **Select Type of Challenge** field and click **Configure** for the **Javascript Challenge** field.
- Enter `3000` and `1800` for the **Javascript Delay** and **Cookie Expiry period** fields respectively. This sets the delay to 3000 milliseconds and the cookie expiry to 1800 seconds.
- Click **Apply** to apply the JavaScript challenge to the load balancer.
- Click **Save and Exit** to save the load balancer configuration.
- Verify that the JavaScript challenge is applied. Enter your domain URL in a browser. The JavaScript challenge default page appears for 3000 milliseconds before the hipster website loads.