Module 4: Exercises
This exercise section provides hands-on practice with accessing metrics and logs directly from Envoy sidecar proxies. You will learn how to query Envoy metrics, view access logs, and use built-in Istio tools to explore observability data without installing external tools.
Prerequisites
Before starting these exercises, ensure you have:
- Access to an OpenShift cluster with Service Mesh 3 (Istio >= 1.23) installed
- Your namespace ($NAMESPACE) configured with sidecar injection enabled
- The oc CLI tool installed and configured
- Services deployed in your namespace (from previous modules)
Exercise 1: Access Envoy Metrics
In this exercise, you will learn how to access Prometheus-formatted metrics directly from Envoy sidecar proxies.
Step 1: Deploy a Test Service
Deploy a simple service to generate metrics:
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-test
namespace: ${NAMESPACE}
spec:
replicas: 2
selector:
matchLabels:
app: metrics-test
template:
metadata:
labels:
app: metrics-test
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: metrics-test
namespace: ${NAMESPACE}
spec:
selector:
app: metrics-test
ports:
- port: 80
targetPort: 80
EOF
Wait for pods to be ready:
oc get pods -n $NAMESPACE -l app=metrics-test
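If you prefer to block until the pods are ready instead of re-running the command, oc wait can do this (the 120-second timeout here is an arbitrary choice):
# Wait until the metrics-test pods report Ready, or give up after two minutes
oc wait --for=condition=Ready pod -l app=metrics-test -n $NAMESPACE --timeout=120s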
Step 2: Generate Some Traffic
Generate traffic to create metrics:
for i in {1..20}; do
oc run curl-test-$i --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s http://metrics-test > /dev/null
done
Step 3: Access Envoy Metrics Endpoint
The sidecar exposes merged Prometheus metrics on port 15020 (the pilot-agent endpoint that aggregates Envoy statistics). Port-forward to access the metrics endpoint:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
oc port-forward -n $NAMESPACE $POD_NAME 15020:15020 &
Access the metrics endpoint:
curl http://localhost:15020/stats/prometheus
You should see Prometheus-formatted metrics. Look for metrics like:
* istio_requests_total
* istio_request_duration_milliseconds
* istio_request_bytes
* istio_response_bytes
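For reference, a single istio_requests_total sample looks roughly like the line below. The label names (reporter, destination_service_name, response_code, and so on) are standard Istio labels, but the values shown here are invented for illustration:
istio_requests_total{reporter="destination",source_workload="curl-test-1",destination_service_name="metrics-test",response_code="200",response_flags="-"} 12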
Step 4: Query Specific Metrics
Query for specific metrics using grep:
curl -s http://localhost:15020/stats/prometheus | grep istio_requests_total
Query for request duration metrics:
curl -s http://localhost:15020/stats/prometheus | grep istio_request_duration
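You can also narrow the output to specific label values, for example only successful requests (response_code is a standard label on this metric; the value 200 is just one example):
curl -s http://localhost:15020/stats/prometheus | grep 'istio_requests_total' | grep 'response_code="200"'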
Exercise 2: Query Metrics from Multiple Pods
In this exercise, you will query metrics from multiple Envoy sidecars and compare them.
Step 1: Get All Pod Names
Get the names of all pods for the service:
oc get pods -n $NAMESPACE -l app=metrics-test -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
Step 2: Query Metrics from Each Pod
Query metrics from the first pod:
POD1=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
oc exec -n $NAMESPACE $POD1 -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep istio_requests_total | head -5
Query metrics from the second pod:
POD2=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[1].metadata.name}')
oc exec -n $NAMESPACE $POD2 -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep istio_requests_total | head -5
Compare the request counts between pods to see load distribution.
Step 3: Aggregate Metrics Across Pods
Get total request count across all pods:
for pod in $(oc get pods -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[*].metadata.name}'); do
echo "Pod: $pod"
oc exec -n $NAMESPACE $pod -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep "istio_requests_total.*destination_service_name=\"metrics-test\"" | head -3
done
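To turn this into a single number, you can sum the counter values with awk. This is a rough sketch: it assumes the counter value is the last whitespace-separated field on each matching line, and because each pod reports its own label sets the result is only an approximation of total traffic:
TOTAL=0
for pod in $(oc get pods -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[*].metadata.name}'); do
  # Sum all matching istio_requests_total samples reported by this pod's sidecar
  COUNT=$(oc exec -n $NAMESPACE $pod -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus \
    | grep "istio_requests_total.*destination_service_name=\"metrics-test\"" \
    | awk '{sum += $NF} END {print sum+0}')
  echo "Pod $pod: $COUNT"
  TOTAL=$(awk -v a="$TOTAL" -v b="$COUNT" 'BEGIN {print a + b}')
done
echo "Approximate total requests across pods: $TOTAL"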
Exercise 3: View Envoy Access Logs
In this exercise, you will view and analyze Envoy access logs directly from the sidecar containers.
Step 1: Generate Traffic with Different Patterns
Generate various types of traffic to create interesting logs:
# Successful requests
for i in {1..5}; do
oc run curl-success-$i --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s http://metrics-test > /dev/null
done
# Requests with different methods
oc run curl-post --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s -X POST http://metrics-test > /dev/null
# Requests with headers
oc run curl-headers --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s -H "X-Test-Header: test-value" http://metrics-test > /dev/null
Step 2: View Access Logs
View the most recent access logs from a pod:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy --tail=20
The logs show detailed information about each request, including:
* Request method and path
* Response code
* Request and response sizes
* Duration
* Upstream service information
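With Istio's default TEXT log format, an individual entry looks roughly like the line below. All values here (timestamp, addresses, sizes, request ID) are invented for illustration, and the exact field order depends on how access logging is configured in your mesh:
[2024-01-01T12:00:00.000Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 615 2 1 "-" "curl/8.5.0" "7f3a9c2e-1b4d-4e8f-9a6b-0c5d7e8f9a1b" "metrics-test" "10.128.2.15:80" inbound|80|| 127.0.0.6:34567 10.128.2.15:80 10.131.0.22:41398 - default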
Step 3: Filter Logs by Response Code
View only successful requests (2xx status codes):
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy --tail=50 | grep "\" 200 "
View only error responses:
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy --tail=50 | grep -E "\" [45][0-9]{2} "
Step 4: View Logs with Timestamps
View logs with timestamps to see request timing:
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy --tail=20 --timestamps
Step 5: Follow Logs in Real-Time
Follow logs in real-time while generating traffic:
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy -f &
LOG_PID=$!
# Generate some traffic
for i in {1..3}; do
oc run curl-realtime-$i --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s http://metrics-test > /dev/null
sleep 1
done
# Stop following logs
kill $LOG_PID
Exercise 4: Analyze Request Patterns from Logs
In this exercise, you will analyze access logs to understand request patterns and service behavior.
Step 1: Generate Diverse Traffic
Generate traffic with different characteristics:
# Fast requests
for i in {1..10}; do
oc run curl-fast-$i --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s http://metrics-test > /dev/null
done
# Request with a 5-second client-side timeout (curl aborts if the service responds slowly)
oc run curl-slow --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s --max-time 5 http://metrics-test > /dev/null
Step 2: Extract Request Durations
Extract and analyze request durations from logs. The pattern below assumes your access logs include a duration field in key="value" form; adjust it to match your configured access log format:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy --tail=50 | grep -oP 'duration="\K[0-9]+' | head -20
Step 3: Count Requests by Response Code
Count requests by HTTP response code (this pattern assumes the response code follows the quoted request line, as in the default TEXT log format):
oc logs -n $NAMESPACE $POD_NAME -c istio-proxy --tail=100 | grep -oP '"\s\K[0-9]{3}' | sort | uniq -c
Exercise 5: Use istioctl for Observability
In this exercise, you will use istioctl commands to view proxy configuration and statistics.
Step 1: View Proxy Configuration
View the Envoy proxy configuration for a pod:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config all $POD_NAME -n $NAMESPACE | head -50
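The full configuration is large. istioctl can also show individual configuration sections, which are usually easier to read:
istioctl proxy-config listeners $POD_NAME -n $NAMESPACE
istioctl proxy-config routes $POD_NAME -n $NAMESPACE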
Step 2: View Proxy Stats
View Envoy proxy statistics using istioctl:
istioctl proxy-stats $POD_NAME -n $NAMESPACE
This shows a summary of key statistics.
Step 3: View Specific Proxy Stats
View specific statistics, such as request counts:
istioctl proxy-stats $POD_NAME -n $NAMESPACE | grep -i request
View connection statistics:
istioctl proxy-stats $POD_NAME -n $NAMESPACE | grep -i connection
Step 4: View Cluster Information
View cluster (upstream service) information:
istioctl proxy-config cluster $POD_NAME -n $NAMESPACE
This shows all the upstream services this pod can connect to.
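To see the individual endpoints behind those clusters, and the health status Envoy has recorded for each, you can also run:
istioctl proxy-config endpoints $POD_NAME -n $NAMESPACE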
Exercise 6: Monitor Metrics Over Time
In this exercise, you will monitor how metrics change over time by querying them periodically.
Step 1: Create a Monitoring Script
Create a simple script to monitor request counts:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
for i in {1..5}; do
echo "=== Sample $i at $(date) ==="
oc exec -n $NAMESPACE $POD_NAME -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep "istio_requests_total.*destination_service_name=\"metrics-test\"" | head -3
echo ""
sleep 5
# Generate some traffic
oc run curl-monitor-$i --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s http://metrics-test > /dev/null
done
Step 2: Monitor Error Rates
Monitor error rates over time:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
for i in {1..5}; do
echo "=== Error Check $i at $(date) ==="
oc exec -n $NAMESPACE $POD_NAME -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep "istio_requests_total.*response_code=\"5" | head -3
echo ""
sleep 5
done
Step 3: Compare Metrics Before and After Traffic
Get baseline metrics:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
BASELINE=$(oc exec -n $NAMESPACE $POD_NAME -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep "istio_requests_total.*destination_service_name=\"metrics-test\"" | head -1)
echo "Baseline: $BASELINE"
Generate traffic:
for i in {1..10}; do
oc run curl-compare-$i --image=curlimages/curl:latest --rm -i --restart=Never -n $NAMESPACE -- curl -s http://metrics-test > /dev/null
done
Check metrics after traffic:
AFTER=$(oc exec -n $NAMESPACE $POD_NAME -c istio-proxy -- curl -s http://localhost:15020/stats/prometheus | grep "istio_requests_total.*destination_service_name=\"metrics-test\"" | head -1)
echo "After traffic: $AFTER"
Exercise 7: Explore Envoy Admin Interface
In this exercise, you will explore the Envoy admin interface to access additional observability information.
Step 1: Access Admin Interface
The Envoy admin interface is available on port 15000. Port-forward to access it:
POD_NAME=$(oc get pod -n $NAMESPACE -l app=metrics-test -o jsonpath='{.items[0].metadata.name}')
oc port-forward -n $NAMESPACE $POD_NAME 15000:15000 &
Step 2: View Server Info
View Envoy server information:
curl http://localhost:15000/server_info
This shows Envoy version and build information.
Step 3: View Clusters
View cluster information via admin interface:
curl http://localhost:15000/clusters | head -50
This shows all upstream clusters with detailed statistics.
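The admin interface also serves Envoy's native statistics (the raw, non-Prometheus form) at /stats, which you can filter the same way. The downstream_rq counters shown here are standard Envoy HTTP stats; the grep pattern is just one example:
curl -s http://localhost:15000/stats | grep downstream_rq | head -20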
Step 4: View Config Dump
View the full configuration dump:
curl http://localhost:15000/config_dump | head -100
This provides a JSON representation of the entire Envoy configuration.
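If jq is available on your workstation, you can list which configuration sections the dump contains instead of paging through the raw JSON:
# Each entry in .configs carries an @type identifying the section (bootstrap, clusters, listeners, routes, and so on)
curl -s http://localhost:15000/config_dump | jq '.configs[]."@type"'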
Summary
In these exercises, you have:
- Accessed Prometheus-formatted metrics directly from Envoy sidecar proxies
- Queried metrics from multiple pods and compared load distribution
- Viewed and analyzed Envoy access logs to understand request patterns
- Used istioctl commands to view proxy configuration and statistics
- Monitored metrics over time to observe changes
- Explored the Envoy admin interface for additional observability data
These skills enable you to gain deep insights into your service mesh behavior using only the tools built into Istio and Envoy, without requiring external observability tools.
Troubleshooting
If you encounter issues:
- Cannot access metrics endpoint: Verify the pod has a sidecar injected and check that port 15020 is accessible
- No metrics appearing: Ensure traffic has been generated to create metrics
- Logs not showing: Verify access logging is enabled in your Istio configuration
- istioctl not working: Ensure istioctl is installed and can connect to your cluster
- Port-forwarding fails: Check that the pod is running and the ports are not already in use
- Admin interface not accessible: Verify port 15000 is available and not blocked by network policies