livekit-egress is the open-source recording and export service for LiveKit, the real-time audio and video platform: it captures rooms, individual tracks, and web pages, and delivers the output as files or live streams.
Chainguard Containers are regularly-updated, secure-by-default container images.
For those with access, this container image is available on cgr.dev:
docker pull cgr.dev/ORGANIZATION/livekit-egress:latest
Be sure to replace the ORGANIZATION placeholder with the name used for your organization's private repository within the Chainguard Registry.
Chainguard's livekit-egress container image is comparable to the livekit/egress image, with the following differences:
- Default Audio Encoder: libav AAC (Advanced Audio Coding) encoder
- Default Video Encoder: OpenH264 video encoder
Before deploying LiveKit Egress, you'll need:
- LiveKit Server - A running LiveKit server instance
- Redis - For state management and coordination
- API Credentials - API key and secret from your LiveKit server configuration
This guide walks you through setting up a complete LiveKit Egress environment using Docker, including all required dependencies.
First, create a shared Docker network so the containers can reach each other by name:
docker network create lknet
LiveKit Egress requires Redis for state management:
docker run -d \
--name redis \
--network lknet \
-p 6379:6379 \
cgr.dev/ORGANIZATION/redis:latest
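Optionally, confirm Redis is accepting connections before continuing (this assumes the image ships redis-cli, as Redis images typically do):
docker exec redis redis-cli ping
# Expected output: PONG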
Create a configuration file for LiveKit Server:
cat <<EOF > livekit.yaml
port: 7880
rtc:
  tcp_port: 7881
redis:
  address: redis:6379
keys:
  devkey: thisisasecretkeythatislongenoughforlivekit
room:
  auto_create: true
EOF
Note that LiveKit Server requires API secrets to be at least 32 characters long, so short placeholder secrets will be rejected at startup.
Start LiveKit Server:
docker run -d \
--name livekit-server \
--network lknet \
-p 7880:7880 \
-p 7881:7881 \
-v $(pwd)/livekit.yaml:/etc/livekit.yaml \
cgr.dev/ORGANIZATION/livekit-server:latest \
--config /etc/livekit.yaml
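As an optional sanity check, confirm the server is answering on its HTTP port before moving on:
curl http://localhost:7880
# Any HTTP response (LiveKit typically returns a simple OK) means the server is up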
Create the Egress configuration file:
cat <<EOF > egress.yaml
api_key: devkey
api_secret: thisisasecretkeythatislongenoughforlivekit
ws_url: ws://livekit-server:7880
redis:
  address: redis:6379
logging:
  level: info
EOF
Create an output directory for recordings:
mkdir -p ./egress-output
# World-writable so the container's non-root user can write recordings (fine for a local demo)
chmod 777 ./egress-output
Start LiveKit Egress with the required capabilities:
docker run -d \
--name livekit-egress \
--network lknet \
--shm-size=1g \
-e EGRESS_CONFIG_FILE=/etc/egress.yaml \
-v $(pwd)/egress.yaml:/etc/egress.yaml \
-v $(pwd)/egress-output:/out \
cgr.dev/ORGANIZATION/livekit-egress:latest
Check that all services are running:
docker ps
docker logs livekit-server
docker logs livekit-egress
You should see logs indicating successful connections to Redis and that services are ready.
Install the LiveKit CLI tool:
# macOS
brew install livekit-cli
# Linux
curl -sSL https://get.livekit.io/cli | bash
# Windows
winget install LiveKit.LiveKitCLI
# From source
git clone https://github.com/livekit/livekit-cli
cd livekit-cli
make install
Set environment variables for the CLI:
export LIVEKIT_URL="ws://localhost:7880"
export LIVEKIT_API_KEY="devkey"
export LIVEKIT_API_SECRET="thisisasecretkeythatislongenoughforlivekit"
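A quick way to confirm the CLI can reach the server is to list rooms, which should return an empty result on a fresh install:
lk room list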
Create a simple web page to record:
mkdir -p ./web-content
cat <<'EOF' > ./web-content/index.html
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>LiveKit Egress Test</title>
  </head>
  <body>
    <h1>Hello from LiveKit Egress!</h1>
    <p>This page is being recorded.</p>
    <div id="timestamp"></div>
    <script>
      setInterval(() => {
        document.getElementById('timestamp').textContent = new Date().toISOString();
      }, 1000);
    </script>
  </body>
</html>
EOF
Serve the web page:
docker run -d \
--name web-server \
--network lknet \
-v $(pwd)/web-content:/usr/share/nginx/html:ro \
-p 8080:8080 \
cgr.dev/ORGANIZATION/nginx:latest
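Optionally, verify the page is being served before recording it:
curl http://localhost:8080
# Should print the HTML from index.html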
Create a web egress configuration:
cat <<EOF > web-egress.json
{
  "url": "http://web-server:8080",
  "preset": "H264_720P_30",
  "file_outputs": [
    {
      "file_type": "MP4",
      "filepath": "/out/web-recording.mp4",
      "disable_manifest": true
    }
  ]
}
EOF
Start the egress job:
lk egress start --type web web-egress.json
The command will output an egress ID (e.g., EG_xxxxxxxxxxxxx). Let it run for about 15-20 seconds, then stop it:
# Replace with your egress ID
lk egress stop --id EG_xxxxxxxxxxxxx
# Check the status
lk egress list
The recording will be available in ./egress-output/web-recording.mp4.
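If you have ffprobe installed, you can confirm that the recording uses the default encoders noted above (H.264 video and AAC audio):
ffprobe -v error -show_entries stream=codec_type,codec_name -of default=noprint_wrappers=1 ./egress-output/web-recording.mp4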
This guide demonstrates how to deploy LiveKit Egress on Kubernetes using Helm charts. You'll need:
- A running Kubernetes cluster with kubectl configured
- Helm 3.x installed
- Redis instance (can be deployed alongside, as shown below)
Add the OT-Container-Kit Helm repository:
helm repo add ot-helm https://ot-container-kit.github.io/helm-charts/
helm repo update
Install the Redis operator:
helm install redis-operator ot-helm/redis-operator \
--namespace default \
--create-namespace
Wait for the operator to be ready:
kubectl wait --for=condition=ready pod \
--selector name=redis-operator \
--timeout=30s
Create a Redis instance:
cat <<EOF | kubectl apply -f -
apiVersion: redis.redis.opstreelabs.in/v1beta2
kind: Redis
metadata:
  name: redis-livekit
spec:
  kubernetesConfig:
    image: cgr.dev/ORGANIZATION/redis:latest
    resources:
      limits:
        cpu: 500m
        memory: 512Mi
      requests:
        cpu: 250m
        memory: 256Mi
EOF
Wait for Redis to be ready:
kubectl wait --for=condition=ready pod \
--selector app=redis-livekit \
--timeout=30s
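As an optional check, ping Redis from inside its pod (the pod name below assumes the operator's default StatefulSet naming for an instance called redis-livekit):
kubectl exec redis-livekit-0 -- redis-cli ping
# Expected output: PONG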
Add the LiveKit Helm repository:
helm repo add livekit https://helm.livekit.io
helm repo update
Create a values file for LiveKit Server:
cat <<EOF > livekit-server-values.yaml
image:
  repository: cgr.dev/ORGANIZATION/livekit-server
  tag: latest
livekit:
  redis:
    address: redis-livekit.default.svc.cluster.local:6379
  keys:
    devkey: thisisasecretkeythatislongenoughforlivekit
  rtc:
    use_external_ip: false
  room:
    auto_create: true
EOF
Install LiveKit Server:
helm install livekit-server livekit/livekit-server \
--namespace default \
-f livekit-server-values.yaml
Wait for LiveKit Server to be ready:
kubectl rollout status deployment/livekit-server --timeout=30s
kubectl wait --for=condition=ready pod \
--selector app.kubernetes.io/name=livekit-server \
--timeout=30s
Create a values file for LiveKit Egress:
cat <<EOF > livekit-egress-values.yaml
image:
  repository: cgr.dev/ORGANIZATION/livekit-egress
  tag: latest
egress:
  apiKey: devkey
  apiSecret: thisisasecretkeythatislongenoughforlivekit
  wsUrl: ws://livekit-server.default.svc.cluster.local
  insecure: true
  redis:
    address: redis-livekit.default.svc.cluster.local:6379
EOF
Install LiveKit Egress:
helm install livekit-egress livekit/egress \
--namespace default \
-f livekit-egress-values.yaml
Wait for the deployment:
kubectl rollout status deployment/livekit-egress --timeout=30s
kubectl wait --for=condition=ready pod \
--selector app.kubernetes.io/name=egress \
--timeout=30s
The upstream Helm chart doesn't expose volume configuration for the egress pod. If you need to write recordings to a persistent volume, you'll need to patch the deployment:
kubectl -n default patch deployment/livekit-egress --type=json -p='[
{"op":"add","path":"/spec/template/spec/volumes","value":[]},
{"op":"add","path":"/spec/template/spec/volumes/-","value":{"name":"out","emptyDir":{}}},
{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts","value":[]},
{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-","value":{"name":"out","mountPath":"/out"}},
{"op":"add","path":"/spec/template/spec/securityContext","value":{"fsGroup":65532}}
]'
For persistent storage, replace emptyDir with a persistentVolumeClaim, as sketched below.
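A minimal claim might look like the following; the egress-out name and 5Gi size are illustrative, so adjust them for your storage class and retention needs:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: egress-out
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
Then substitute {"persistentVolumeClaim":{"claimName":"egress-out"}} for {"emptyDir":{}} in the patch above.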
Port-forward to access the LiveKit server:
kubectl port-forward deployment/livekit-server 7880:7880
In another terminal, set up the LiveKit CLI:
export LIVEKIT_URL="ws://localhost:7880"
export LIVEKIT_API_KEY="devkey"
export LIVEKIT_API_SECRET="thisisasecretkeythatislongenoughforlivekit"
Create a test web server in Kubernetes:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-webpage
data:
  index.html: |
    <!doctype html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>K8s Egress Test</title>
      </head>
      <body>
        <h1>Hello from Kubernetes!</h1>
        <p>Recording test in Kubernetes environment.</p>
      </body>
    </html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-web-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-web
  template:
    metadata:
      labels:
        app: test-web
    spec:
      containers:
        - name: nginx
          image: cgr.dev/ORGANIZATION/nginx:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          configMap:
            name: test-webpage
---
apiVersion: v1
kind: Service
metadata:
  name: test-web-server
spec:
  selector:
    app: test-web
  ports:
    - port: 8080
      targetPort: 8080
EOF
Wait for the test-web-server Deployment to be ready:
kubectl rollout status deployment/test-web-server --timeout=30s
kubectl wait --for=condition=ready pod \
--selector app=test-web \
--timeout=30s
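To verify the page is reachable from inside the cluster, you can run a one-off curl pod (this assumes you have access to a curl container image, such as Chainguard's, whose entrypoint is curl, so the URL is passed as its argument):
kubectl run curl-test --rm -it --restart=Never \
  --image=cgr.dev/ORGANIZATION/curl:latest -- \
  http://test-web-server.default.svc.cluster.local:8080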
Create an egress configuration:
cat <<EOF > k8s-web-egress.json
{
  "url": "http://test-web-server.default.svc.cluster.local:8080",
  "preset": "H264_720P_30",
  "file_outputs": [
    {
      "file_type": "MP4",
      "filepath": "/out/k8s-recording.mp4",
      "disable_manifest": true
    }
  ]
}
EOF
Start the egress job:
lk egress start --type web k8s-web-egress.json
Let it run for 15-20 seconds, then stop it:
# Replace with your egress ID
lk egress stop --id EG_xxxxxxxxxxxxx
# Check status
lk egress list
To retrieve the recording from the egress pod:
# Get the egress pod name
EGRESS_POD=$(kubectl get pods -l app.kubernetes.io/instance=livekit-egress -o jsonpath='{.items[0].metadata.name}')
# Copy the recording
kubectl cp default/$EGRESS_POD:/out/k8s-recording.mp4 ./k8s-recording.mp4
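When you're finished, you can tear down everything this guide created (the names below match the releases and resources defined above):
# Remove the Helm releases
helm uninstall livekit-egress livekit-server redis-operator
# Remove the Redis instance and the test web server
kubectl delete redis redis-livekit
kubectl delete deployment/test-web-server service/test-web-server configmap/test-webpage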
Chainguard's free tier of Starter container images are built with Wolfi, our minimal Linux undistro.
All other Chainguard Containers are built with Chainguard OS, Chainguard's minimal Linux operating system designed to produce container images that meet the requirements of a more secure software supply chain.
The main features of Chainguard Containers include:
- Minimal design, with no unnecessary software bloat
- Automated nightly builds to ensure container images are completely up-to-date and contain all available security patches
- High quality build-time SBOMs (software bills of materials) attesting the provenance of all artifacts within the container image
- Verifiable signatures provided by Sigstore
- Reproducible builds with Cosign and apko
For cases where you need container images with shells and package managers to build or debug, most Chainguard Containers come paired with a development, or -dev, variant.
In all other cases, including Chainguard Containers tagged as :latest or with a specific version number, the container images include only an open-source application and its runtime dependencies. These minimal container images typically do not contain a shell or package manager.
Although the -dev container image variants have similar security features as their more minimal versions, they include additional software that is typically not necessary in production environments. We recommend using multi-stage builds to copy artifacts from the -dev variant into a more minimal production image.
To improve security, Chainguard Containers include only essential dependencies. Need more packages? Chainguard customers can use Custom Assembly to add packages, either through the Console, chainctl, or API.
To use Custom Assembly in the Chainguard Console: navigate to the image you'd like to customize in your Organization's list of images, and click on the Customize image button at the top of the page.
Refer to our Chainguard Containers documentation on Chainguard Academy. Chainguard also offers VMs and Libraries — contact us for access.
This software listing is packaged by Chainguard. The trademarks set forth in this offering are owned by their respective companies, and use of them does not imply any affiliation, sponsorship, or endorsement by such companies.