App Delivery Network (ADN)

Objective

This guide provides instructions on how to deploy and secure network edge applications using VoltConsole and VoltMesh.

The following image shows the steps to deploy network edge applications:

Figure: Steps to Deploy and Secure Network Edge Applications

The following image shows the topology of the example used in this guide:

Figure: Network Edge Applications Sample Topology

Using the instructions provided in this guide, you can deploy your web application in the Volterra virtual K8s (vK8s) clusters, deliver the application using a load balancer, advertise the application services on the Volterra global network (exposing them to the internet), protect the application using Volterra security features, and monitor the application using Volterra monitoring features.

The example shown in this guide deploys a microservices application called hipster-shop across the Volterra global network using Volterra vK8s. The application consists of the following services:

  • frontend
  • cartservice
  • productcatalogservice
  • currencyservice
  • paymentservice
  • shippingservice
  • emailservice
  • checkoutservice
  • recommendationservice
  • adservice
  • cache

Prerequisites

  • VoltConsole SaaS account.

    Note: If you do not have an account, see Create a Volterra Account.

  • Volterra vesctl utility.

    Note: See vesctl for more information.

  • Docker.
  • Self-signed or CA-signed certificate for your application domain.
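
    Note: If you do not already have a certificate, you can generate a self-signed one for testing. The following is a minimal sketch using openssl; hipster.example.com is a placeholder for your own application domain.

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout app-key.pem -out app-cert.pem \
      -subj "/CN=hipster.example.com"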

Configuration

The use case provided in this guide deploys the web application in vK8s clusters across all of the Volterra Regional Edge (RE) sites. It then exposes the application to the Volterra global network using the Volterra load balancer and secures it using the Volterra security features. The following actions outline the activities involved in deploying the web app and securely exposing it to the internet.

  1. A Volterra vK8s cluster is created. Using its kubeconfig and the K8s manifest, the web application is deployed to the vK8s clusters in all RE sites.
  2. The frontend service of the application needs to be externally available. Therefore, an HTTPS load balancer is created for the application with the required origin pool objects such as endpoints, health checks, and clusters. The appropriate route and advertise policy are enabled to expose the application to the internet. The subdomain is also delegated to Volterra so that Volterra manages the DNS and certificates.
  3. A WAF configuration is applied to secure the externally available load balancer VIPs. The load balancer is also secured with a javascript challenge to protect against bots.
  4. The deployed services of the K8s application are monitored using observability features such as load balancer monitoring and service mesh monitoring.

Step 1: Deploy K8s App

The following video shows the site deployment workflow:

Perform the following steps to deploy the web application in Volterra vK8s clusters:

Step 1.1: Log into the VoltConsole and create a namespace.

This example creates a sample namespace called tutorial-hipster.

  • Click on the General option on the namespace selector. Select Personal Management -> My namespaces.

Figure: Navigate to Manage Namespaces

  • Click Add namespace and enter a name for your namespace. Click Save to complete creating the namespace.

Figure: Create Namespace

Step 1.2: Create vK8s cluster and download its kubeconfig.
  • Click on the application namespace option on the namespace selector. Select the namespace created in the previous step from the namespace dropdown list to change to that namespace.
  • Select Applications in the configuration menu and Virtual K8s in the options pane.

    Figure: Navigate to vK8s Creation

  • Click Add virtual K8s and enter a name for your vK8s cluster.

Figure: Select Virtual Site for vK8s

  • Click Select vsite ref object and select ves-all-res or ves-io-all-res. Click Continue to apply the virtual site to the vK8s configuration.
  • Click Save and Exit to complete creating the vK8s clusters in all RE sites.
  • Click ...-> Kubeconfig for the created vK8s object to download its kubeconfig file.
Step 1.3: Deploy the web application in all Volterra RE sites.

To deploy the web application in a K8s cluster, the following are required:

  • Kubeconfig of the K8s cluster. For this, use the vK8s kubeconfig downloaded in the previous step.
  • Manifest file of your web application. Download the sample manifest used in this example and edit its fields as per your application.

Enter the following command to deploy the application:

kubectl apply -f k8s-app-manifest.yaml --kubeconfig vk8s-kubecfg.yaml

This completes deployment of application across all RE sites.
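
You can optionally verify the deployment by querying the vK8s cluster with the same kubeconfig. The following is a minimal sketch; the file name matches the kubeconfig used in the deployment command above.

# List the deployed pods and services in the vK8s cluster
kubectl get pods --kubeconfig vk8s-kubecfg.yaml
kubectl get services --kubeconfig vk8s-kubecfg.yaml

All pods of the hipster-shop services should eventually reach the Running state.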


Step 2: Deliver K8s App

Delivering the application requires creating a load balancer and an origin pool for the services. Origin pools consist of endpoints and clusters. Routes and advertise policies are also required to make the application available to the internet. In addition, this example shows how to create an app type object and associate it with the load balancer for API discovery, and how to delegate your subdomain to Volterra so that Volterra manages the DNS and certificates for your domain.

The following video shows the application delivery workflow:

Perform the following steps to create the origin pool and load balancer for your application:

Step 2.1: Create an app type object.

An app type object is required for Volterra to perform API discovery for your application. After creating the app type, set it as a label on the load balancer that advertises your app.

Perform the following steps to create the app type object:

  • Change to the Shared namespace and navigate to Security in the left menu. Select AI & ML -> App Types from the options. Click Add app type to start app type creation.

Figure: Navigate to App Type Object

  • Set a name for the app type object. This is the value of the app type label that is assigned to the load balancer for which API discovery needs to be enabled.
  • Click Add item in the Features field of the Application Type Features section. Select API Discovery for the AI/ML Feature Type field.
  • Click Add item again and select Per API Request Analysis for the AI/ML Feature Type field.
  • Optionally, select Enable learning from redirect traffic in the Business Logic Markup Setting section. This enables the AI engine to learn the endpoints from redirect traffic.

Figure: App Type Object Configuration

  • Click Save and Exit to complete creating the app type object.
Step 2.2: Delegate your domain to Volterra.
  • Change to the system namespace and navigate to Manage -> Networking. Select Delegated Domains and click Add Delegated domain.

Figure: Delegated Domain Creation

  • Enter your domain name in the Domain Name field. Ensure that Managed Volterra is selected for the Domain Method field. Click Save and Exit.
  • Verify that the delegated domain object is displayed in the list and copy the value of the TXT Record field.

Figure: TXT Record for Delegated Domain

  • Add a TXT record in your domain records with the copied TXT string. This example shows how to add the record in Google Domains.

Figure: TXT Record Addition in Google Domains

  • Go back to VoltConsole and select your delegated domain entry. Click ... -> Start Verification. Click Start Verification in the confirmation dialogue box. After verification, the Verification Status field shows successful verification and the name servers are displayed in the Name Servers field.

Figure: Successful Domain Verification

  • Go back to your domain and add NS records with the name servers obtained from VoltConsole. This example shows how to add them in Google Domains.

Figure: NS Record Addition in Google Domains
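
Once the DNS changes propagate, you can optionally verify the delegation from a terminal. The following sketch assumes hipster.example.com is the delegated subdomain; replace it with your own domain.

# Confirm that the TXT record used for domain verification is published
dig TXT hipster.example.com +short

# Confirm that the NS records now point to the Volterra name servers
dig NS hipster.example.com +short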

Step 2.3: Create an HTTP load balancer.
  • Change to your application namespace and navigate to Virtual Hosts -> HTTP Load Balancers. Click Add HTTP Load Balancer.
  • Enter a name and your domain in the Name and Domains fields, respectively.
  • Click on the Labels field and select ves.io/app_type as the key. Type the name of the app type object created in Step 2.1 and click Assign Custom Value to add the app type label. This enables API discovery.
  • Select HTTPS With Automatic Certificate option for the Select Type of Load Balancer field.

Figure: HTTP Load balancer creation

  • Click Configure under the Origin Pools field. This opens origin pool configuration.
  • Click Add item and select Create new pool in the Origin Pool selection field. This opens new origin pool creation form.
  • Enter a name for your pool and select k8s Service Name of Origin Server on given Sites. Enter frontend.<namespace> for the Service Name field. This example sets frontend.adn.
  • Select Virtual Site for the Select Site or Virtual Site field and select Shared/ves-all-res for the Virtual Site field.

    Note: The virtual site reference can be Shared/ves-all-res or Shared/ves-io-all-res depending on your vK8s configuration.

  • Select VK8s Networks on Site for the Select Network on the site field. Enter 80 for the Port field and select Round Robin for the LoadBalancer Algorithm field. Click Continue.

Figure: Origin Pool Creation

  • Enter 0 for the Weight field and click Apply to apply the origin pool to the load balancer configuration. This will return to the load balancer configuration form.
  • Scroll down and click Save and Exit to create the load balancer.
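
After DNS resolution and the automatic certificate are in place, the application should respond on your domain. A quick check from a terminal could look like the following; hipster.example.com is a placeholder for the domain configured on the load balancer.

# Fetch the response headers for the frontend service through the load balancer
curl -sI https://hipster.example.com/

# Fetch the first lines of the home page
curl -s https://hipster.example.com/ | head -n 20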

Step 3: Secure K8s App

Securing the web application requires you to set up ingress filtering using BGP ASN sets, javascript challenge, DDoS protection using rate limiting, and WAF.

The following video shows the workflow of securing the K8s application:

The examples in this chapter demonstrate how to set up the javascript challenge and WAF on the load balancer to complete securing the application.

Note: The javascript challenge forces users to send requests through a browser, preventing automated attacks.

Step 3.1: Enable javascript challenge.

Create a custom message in plain text or as an HTML element and convert it to Base64. Enter the following command in the terminal:

echo '<h1> hi javascript challenge </h1>' | base64
PGgxPiBoaSBqYXZhc2NyaXB0IGNoYWxsZW5nZSA8L2gxPgo=

Copy the output. In this case, it is the PGgxPiBoaSBqYXZhc2NyaXB0IGNoYWxsZW5nZSA8L2gxPgo= string. Return to VoltConsole and perform the following:

  • Navigate to Virtual Hosts -> HTTP Load Balancers. Click ... -> Edit for your load balancer to edit its configuration. Scroll down to the security configuration or click Security Configuration in the left menu. Enable advanced settings using the Show Advanced Fields option.
  • Select Javascript Challenge for the Select Type of Challenge field. Click Configure to open the javascript challenge configuration and set the following:
  • Set javascript delay and cookie expiry periods. This example sets 2000 milliseconds of delay and 120 seconds of cookie expiry.
  • Enter the custom page in the string:///<base64-encoded-custom-page> format. Use the Base64 string generated for the custom page.
  • Click Apply to apply the javascript challenge and click Save and Exit to save the updated load balancer configuration.

Figure: Javascript Challenge Configuration

Note: You can verify the javascript challenge functionality by visiting your application domain from the browser. The request gets redirected to the custom page you configured. Ensure that you clear cookies or send the request in incognito or private mode.
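
You can also check this from a terminal. Because curl does not execute JavaScript, a request sent with curl should receive the challenge page instead of the application response. The domain below is a placeholder for your own.

# A client that cannot execute JavaScript should be served the challenge page
curl -s https://hipster.example.com/ | head -n 20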

Step 3.2: Create a Web Application Firewall (WAF) and apply to the load balancer.
  • Navigate to Virtual Hosts -> HTTP Load Balancers. Click ... -> Edit for your load balancer to edit its configuration. Scroll down to the security configuration or click Security Configuration in the left menu.

Note: Enable advanced settings using the Show Advanced Fields option in case the security configuration settings are not displayed.

Perform the configuration as per the following guidelines:

  • Select Specify WAF Intent for the Select Web Application Firewall (WAF) Config field.
  • Click on the Specify WAF Intent field and select Create new WAF.
  • Set a name for the firewall.
  • Select BLOCK for the Mode field. This blocks all the suspicious requests.
  • Click Continue to complete creating the WAF.

Note: The firewall is set to BLOCK mode by default. This blocks all suspicious requests.

Figure: Web Application Firewall Creation

  • Click Save and Exit to add WAF to the load balancer configuration.

Note: The WAF blocks suspicious requests and DDoS attacks even if the javascript challenge is disabled. The security is further enhanced when the javascript challenge is also enabled.
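
One simple way to exercise the WAF is to send an obviously malicious-looking request and confirm that it is not served by the application. The following sketch uses a classic script-injection probe in a query parameter; hipster.example.com is a placeholder, and the exact response depends on your WAF configuration and on whether the javascript challenge is also enabled.

# Send a request with a script-injection pattern in the query string;
# with the WAF in BLOCK mode, this request should be rejected.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://hipster.example.com/product?id=%3Cscript%3Ealert(1)%3C%2Fscript%3E"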


Step 4: Observe K8s App

You can monitor the deployed K8s application using the VoltConsole monitoring features.

The following video shows the workflow of using VoltConsole to monitor your application:
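
The dashboards and graphs described in the following steps are populated from live traffic. If your application is not yet receiving requests, you can generate some sample traffic from a terminal. This is a simple sketch, and hipster.example.com is a placeholder for your configured domain.

# Send a burst of requests to populate the monitoring dashboards.
# With the javascript challenge enabled, these requests are answered by the
# challenge page, but they should still appear in the load balancer metrics.
for i in $(seq 1 50); do
  curl -s -o /dev/null https://hipster.example.com/
  sleep 1
done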

Step 4.1: Open the application service mesh.
  • Log into the VoltConsole and change to your namespace.
  • Select Mesh and Service Mesh from the configuration menu.
  • Click on the service mesh object for your application to open its service graph. The service graph displays your application services and the connections between them.

Figure: Service Mesh Service Graph View

  • Click API Endpoints tab to display the details for all endpoints along with their trends in a graphical manner.

Figure: Service Mesh Endpoints View for all Endpoints

  • Click Dashboard tab to display the overall details for your services.

    Figure: Service Mesh Dashboard View

  • Click Metrics tab to display the details for service metrics.

    Figure: Service Mesh Metrics View

  • Click Virtual Services tab to display the list of all services. You can click on each entry to open its graph view.

    Figure: Service Mesh Virtual Services View

  • Click Alerts tab to display the alerts for the services.

    Figure: Service Mesh Alerts View

  • Click Requests tab to display the sampled requests to your services.

    Figure: Service Mesh Requests View

  • Click Connections tab to display the connection details for your services.
Step 4.2: Open the load balancer dashboard.
  • Log into the VoltConsole and change to your namespace.
  • Select Virtual Hosts from the configuration menu and HTTP Load Balancers in the options pane.
  • Click on your load balancer to open its dashboard. The dashboard loads the overall status of the load balancer such as health score, origin servers, latency, etc.

Figure: Load balancer Dashboard View

  • Click Metrics tab to check metrics such as request rate, error rate, latency, and throughput.

Figure: Load balancer Metrics View

  • Click Origin Servers tab to check the origin servers and the associated details such as requests, errors, latency, RTT, etc.

Figure: Load balancer Origin Servers View

  • Click Requests tab to check the information on the sampled requests trend and the list of requests.

Figure: Load balancer Sampled Requests View

  • Click App Firewall tab to check firewall details such as security events, bot requests, events by location, etc.

Figure: Load balancer Firewall View

  • Click Security Events tab to check the trend of security events in a graphical manner.

Figure: Load balancer Security Events View

  • Click Realtime tab to check the traffic overview in real time.

Figure: Load balancer Realtime View


Concepts