Troubleshooting Volterra Site

Objective

This guide provides quick troubleshooting instructions for a Volterra Customer Edge (CE) site. A site is a physical or cloud location where Volterra nodes are deployed. For more information on sites, see Volterra Site.

The instructions provided in this guide cover troubleshooting the following types of sites:

  • Single-node site with single or dual interfaces
  • Multi-node cluster site with single or dual interfaces

Prerequisites

The following prerequisites apply:


Troubleshoot During Site Provisioning

This chapter lists the commands used to check the site during provisioning.

Check VPM Configuration and Logs

Check current VPM configuration:

cat /etc/vpm/config.yaml

Check VPM logs:

journalctl -feu vpm

Check VPM container logs:

docker logs vpm
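When the journal is noisy, it can help to filter for error-level lines first. A minimal sketch (a hypothetical helper, not part of the VPM tooling) that reads from stdin, so the same filter works on live journal output or a saved log file:

```shell
# Hypothetical helper: keep only error-level lines from VPM log output.
# Reads stdin, so it works on journalctl output or a saved log file.
vpm_errors() {
  grep -iE 'error|fail|fatal'
}

# Usage on the node:
#   journalctl -u vpm --no-pager | vpm_errors
```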

Check the Operating System Version of Site

The current CE OS is Atomic CentOS. Check the OS version using the following command:

atomic host status

Monitor CE OS Upgrade Progress and Logs

You can upgrade the current CE OS version from the VoltConsole. While the upgrade is in progress, monitor it on the node using the following command:

journalctl -fu update-atomic-host.service

Also, check the CE uptime:

uptime

Note: The CE reboots during the OS upgrade.


Check CE Network Configuration

The CE site uses Atomic CentOS, and its networking configuration is available in the /etc/systemd/network/ directory. Check the configuration as shown in the following example:

[root@master-0 ~]# cd /etc/systemd/network/
[root@master-0 network]# ls -ltr
total 20
-rw-r--r--. 1 root root 64 Mar 30 04:15 10-wlan0.network
-rw-r--r--. 1 root root 39 Mar 30 04:15 10-vhost.network
-rw-r--r--. 1 root root 39 Mar 30 04:15 10-vhost1.network
-rw-r--r--. 1 root root 63 Mar 30 04:15 10-fabric-eth1-dhcp.network
-rw-r--r--. 1 root root 63 Mar 30 04:15 10-fabric-dhcp.network
lrwxrwxrwx. 1 root root 33 Mar 30 04:41 01-wlan0.network -> /var/run/vrouter/01-wlan0.network
[root@master-0 network]#

Note: Always update the network configuration using the CE configuration wizard.
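To see which interface a given unit file applies to, you can read its [Match] section. A small sketch (hypothetical helper; Name= is the standard systemd.network match key):

```shell
# Hypothetical helper: print the interface a systemd-networkd unit file
# applies to by reading its Name= line (standard systemd.network [Match] key).
unit_interface() {
  sed -n 's/^Name=//p' "$1"
}

# Usage on the node:
#   unit_interface /etc/systemd/network/10-fabric-dhcp.network
```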


Check Running K8s Pods

Check all pods running in CE:

kubectl get pods -A

Check pod events and containers in detail:

kubectl describe pod <pod-name>
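To quickly spot unhealthy pods, you can filter the listing down to pods that are not in the Running state. A minimal sketch (a hypothetical filter, assuming the default kubectl column layout):

```shell
# Hypothetical filter: keep only pods whose STATUS column is not "Running".
# Assumes the default `kubectl get pods -A --no-headers` column layout
# (NAMESPACE NAME READY STATUS RESTARTS AGE).
not_running() {
  awk '$4 != "Running"'
}

# Usage on the node:
#   kubectl get pods -A --no-headers | not_running
```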

Check logs of VER pods:

kubectl logs <pod_ver-name> -c vega
kubectl logs <pod_ver-name> -c argo
kubectl logs <pod_ver-name> -c envoy
kubectl logs <pod_ver-name> -c frr
kubectl logs <pod_ver-name> -c bfd
kubectl logs <pod_ver-name> -c ike
kubectl logs <pod_ver-name> -c openvpn1
kubectl logs <pod_ver-name> -c openvpn2
kubectl logs <pod_ver-name> -c wingman

Note: For more information, see the K8s Troubleshooting guide and the K8s Troubleshooting Flow Chart.


Capture Packets in a Pod Directly from PC

Install the Wireshark software, and then install ksniff. Ksniff is available for download on the Ksniff Installation page.

The following example directly captures IP packets from the pod productpage-v1-XXXX in your vK8s namespace:

kubectl sniff productpage-v1-79fb47655d-8cv98 -n ns-vk8s-qasim-02

Note: Before using kubectl, download the vK8s kubeconfig file and set the KUBECONFIG environment variable with the export KUBECONFIG=<vK8s kubeconfig> command.

Executing the above command opens Wireshark and starts capturing packets in real time. You can also save the capture as a PCAP file for later use.


Cascade Delete a Namespace

curl -v -k --cert-type P12 --cert ~/Downloads/<api-creds>.p12:volterra -X POST https://<tenant>.console.ves.volterra.io/api/web/namespaces/<namespace>/cascade_delete

Note: Replace <api-creds> with the path to your API credentials file, replace <tenant> with your tenant name, and replace <namespace> with the namespace name.
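The endpoint URL can also be built programmatically, which helps avoid substitution mistakes. A sketch (hypothetical helper; the URL shape is taken from the command above):

```shell
# Hypothetical helper: build the cascade-delete endpoint URL for a
# tenant/namespace pair (URL shape taken from the curl example above).
cascade_delete_url() {
  local tenant="$1" namespace="$2"
  echo "https://${tenant}.console.ves.volterra.io/api/web/namespaces/${namespace}/cascade_delete"
}

# Usage:
#   curl -v -k --cert-type P12 --cert ~/Downloads/<api-creds>.p12:volterra \
#     -X POST "$(cascade_delete_url <tenant> <namespace>)"
```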


Access Logs for Virtual Host in a Namespace

curl -v -k --cert-type P12 --cert ~/Downloads/volt-demos-dhhjefbh.console.ves.volterra.io.api-creds.p12:volterra -X GET https://<tenant>.console.ves.volterra.io/api/data/namespaces/ns-vk8s-snsss-8c/access_logs

POST query for access logs:

curl -v -k --cert-type P12 --cert ~/Downloads/<api-creds>.p12:volterra -X POST https://<tenant>.console.ves.volterra.io/api/data/namespaces/ns-vk8s-snsss-8c/access_logs --data-binary '{"query":"{vh_name=\"vho1-httpbin\"}","namespace":"ns-vk8s-snsss-8c","start_time":"2020-03-21T15:43:00.000Z","end_time":"2020-03-21T16:43:00.000Z"}' --compressed
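Because the query body embeds escaped quotes, it is easy to get the escaping wrong. A sketch of a hypothetical helper (field names match the POST request above) that assembles the body:

```shell
# Hypothetical helper: assemble the access-log query body. Field names
# match the POST example above; quote escaping is handled by printf.
access_log_query() {
  printf '{"query":"{vh_name=\\"%s\\"}","namespace":"%s","start_time":"%s","end_time":"%s"}' \
    "$1" "$2" "$3" "$4"
}

# Usage:
#   curl ... -X POST <url> \
#     --data-binary "$(access_log_query vho1-httpbin ns-vk8s-snsss-8c 2020-03-21T15:43:00.000Z 2020-03-21T16:43:00.000Z)"
```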

Inspect Logs for Service Policy

kubectl logs opera-4mrh9 -c opera -f

Troubleshoot Volterra CE Pods Using Script

To troubleshoot Volterra CE pods, you can use the functions.sh script. Download the script from the Functions.sh page.

Set environment variables from the script:

source functions.sh

Command to log in to Argo:

shell argo

Command to log in to Vega:

shell vega

Command to log in to Envoy:

shell envoy

Note: To log off from a pod, enter the exit command.
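If you do not have functions.sh at hand, the shell helper can be approximated. A hedged sketch of one way it could work (the ver- pod prefix and ves-system namespace are assumptions, not confirmed from the script):

```shell
# Hedged sketch of a `shell <container>`-style helper. The "ver-" pod
# prefix and the "ves-system" namespace are assumptions.
ver_pod() {
  # Pick the first pod whose name starts with "ver-" from a pod listing.
  awk '$1 ~ /^ver-/ {print $1; exit}'
}

# Usage on the node:
#   pod=$(kubectl get pods -n ves-system --no-headers | ver_pod)
#   kubectl exec -it "$pod" -n ves-system -c argo -- sh   # type exit to log off
```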


Connect to Envoy Pod and Check Configuration

source functions.sh

Obtain the current Envoy configuration and listeners:

envoy_get config_dump
envoy_get listeners
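The config_dump output is large; extracting just the names makes it easier to scan. A minimal sketch (hypothetical grep/sed filter over the JSON "name" fields in a saved dump):

```shell
# Hypothetical filter: list the unique "name" fields from a saved Envoy
# config_dump or listeners JSON file.
listener_names() {
  grep -o '"name": *"[^"]*"' "$1" | sed 's/.*"name": *"//; s/"$//' | sort -u
}

# Usage:
#   envoy_get config_dump > /tmp/config_dump.json
#   listener_names /tmp/config_dump.json
```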

Connect to VER Argo Pod and Capture Packets

vifdump -i 17 proto ICMP
vifdump -i 9 host 104.13.125.59 and port 8001
vifdump -i 9 host 104.13.125.59 and port 80
vifdump -i 9 host 104.13.125.59 and port 443
vifdump -i 17 host 192.168.2.10 and port 443 -w /tmp/github-http-proxy-connect.pcap

You can also create aliases and check objects directly from the CE side.

# Create the following aliases
alias vcc='vegactl -t -u localhost:8505 configuration create '
alias vos='vegactl -t -u localhost:8505 introspection show oper status '
alias vcl='vegactl -t -u localhost:8505 configuration list --namespace "*" '
alias vcg='vegactl -t -u localhost:8505 configuration get '
alias vcd='vegactl -t -u localhost:8505 configuration delete '
alias vcr='vegactl -t -u localhost:8505 configuration replace '
alias vdb='vegactl -u localhost:8505 -t introspection dump-db'
alias vdbt='vegactl -u localhost:8505 -t introspection dump-table '
alias vil='vegactl -t -u localhost:8505 introspection list '
alias vig='vegactl -t -u localhost:8505 introspection get '

vcl | grep end
vcl ves.io.vega.cfg.adc.endpoint.Object | grep qasim
vig ves.io.vega.cfg.adc.endpoint.Object a348e960-bda4-4cc9-a18d-f6facd4972e1
vcg ves.io.vega.cfg.adc.endpoint.Object a348e960-bda4-4cc9-a18d-f6facd4972e1

CURL Commands

Test the HTTP connect proxy using curl. Use the -x option to send all traffic through the HTTP connect proxy:

curl -x 172.16.1.10:80 https://www.cnn.com -vv

Save output in a file:

curl -x 172.16.1.10:80 http://www.apache.com -vv > /tmp/apache 2>&1
curl -x 172.16.1.10:80 https://github.com -vv > /tmp/git 2>&1
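When reviewing the saved verbose output, the HTTP status lines are usually the first thing to check. A small sketch (hypothetical helper) that pulls the final status line from a saved capture:

```shell
# Hypothetical helper: print the last HTTP status line recorded in a
# saved verbose curl capture (e.g. /tmp/apache or /tmp/git above).
status_line() {
  grep -o 'HTTP/[0-9.]* [0-9]\{3\}.*' "$1" | tail -n 1
}

# Usage:
#   status_line /tmp/git
```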

Concepts