Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
Chainguard Containers are regularly-updated, secure-by-default container images.
For those with access, this container image is available on `cgr.dev`:
docker pull cgr.dev/ORGANIZATION/lvm-driver:latest
Be sure to replace the `ORGANIZATION` placeholder with the name used for your organization's private repository within the Chainguard Registry.
As of this writing, the upstream Helm chart is one version ahead of the lvm-driver app it deploys: chart version `v1.6.2` has an `appVersion` of `v1.6.1`, and `v1.6.1` of the chart itself has been retracted. The upstream discussion can be found here. Our image, however, ships `v1.6.2` of lvm-driver, which introduces the `OPENEBS_NAMESPACE` environment variable in place of `LVM_NAMESPACE`. Until upstream aligns the chart version with the app, you will need to apply the additional workaround step described later in this guide.
There are some prerequisites that need to be met before enabling lvm-localpv support in OpenEBS:
- All nodes must have the LVM2 utilities package installed
- All nodes must have the dm-snapshot (Device Mapper snapshot) kernel module loaded
- SSH into each worker node and install open-iSCSI:
sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid
sudo systemctl status iscsid
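Similarly, you can satisfy the first two prerequisites by installing the LVM2 utilities and loading the dm-snapshot kernel module. The following is a minimal sketch for Ubuntu; the volume group name lvmvg and the device /dev/sdb in the last command are illustrative placeholders for the storage you want lvm-localpv to provision from:
sudo apt-get install -y lvm2
sudo modprobe dm-snapshot
# confirm the module is loaded
lsmod | grep dm_snapshot
# optional: create a volume group for lvm-localpv to carve volumes out of
# (replace /dev/sdb with a free block device on the node)
sudo vgcreate lvmvg /dev/sdb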
NOTE: The commands above are for Ubuntu; replace them with the appropriate commands for your OS.
You can find more information in the project repository here.
There is an official Helm chart for deploying OpenEBS with lvm-localpv support enabled. You can find the chart here.
You can use the following command to install OpenEBS with lvm-localpv support enabled using Chainguard's image:
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs --namespace openebs openebs/openebs \
--set lvm-localpv.enabled="true" \
--set lvm-localpv.lvmPlugin.image.registry="cgr.dev/" \
--set lvm-localpv.lvmPlugin.image.repository="chainguard/lvm-driver" \
--set lvm-localpv.lvmPlugin.image.tag="latest" --create-namespace
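If you prefer a values file over --set flags, the same configuration can be expressed as follows (a sketch; the keys simply mirror the flags above, and the repository should be adjusted if your organization's name differs):
cat > openebs-values.yaml <<'EOF'
lvm-localpv:
  enabled: true
  lvmPlugin:
    image:
      registry: cgr.dev/
      repository: chainguard/lvm-driver
      tag: latest
EOF
helm install openebs openebs/openebs \
  --namespace openebs --create-namespace \
  -f openebs-values.yaml
With the chart installed, apply the workaround for the version mismatch noted earlier by editing the node daemonset: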
kubectl edit daemonset openebs-lvm-localpv-node -n openebs
You can then replace the references to `LVM_NAMESPACE` with `OPENEBS_NAMESPACE` and wait for the pods to roll out.
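Alternatively, if you would rather script the change than edit interactively, a sketch like the following performs the same rename and waits for the rollout (it assumes the string LVM_NAMESPACE appears only as the environment variable name in the daemonset manifest):
kubectl get daemonset openebs-lvm-localpv-node -n openebs -o yaml \
  | sed 's/LVM_NAMESPACE/OPENEBS_NAMESPACE/g' \
  | kubectl apply -f -
kubectl rollout status daemonset/openebs-lvm-localpv-node -n openebs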
Once done, you can verify with the command below, substituting the name of one of your `openebs-lvm-localpv-node` pods:
kubectl logs openebs-lvm-localpv-node-rdq9w -n openebs -c openebs-lvm-plugin
Here's how the logs might appear:
I1017 11:31:48.938020 1 main.go:149] LVM Driver Version :- 1.6.2 - commit :- 45e0fbdd6af38f3025132fff5b6d10c3c2eec1fb
I1017 11:31:48.938074 1 main.go:150] DriverName: local.csi.openebs.io Plugin: agent EndPoint: unix:///plugin/csi.sock NodeID: ip-10-0-3-185.us-east-2.compute.internal SetIOLimits: false ContainerRuntime: containerd RIopsPerGB: [] WIopsPerGB: [] RBpsPerGB: [] WBpsPerGB: []
I1017 11:31:48.938098 1 driver.go:49] enabling volume access mode: SINGLE_NODE_WRITER
I1017 11:31:48.938851 1 grpc.go:190] Listening for connections on address: &net.UnixAddr{Name:"//plugin/csi.sock", Net:"unix"}
I1017 11:31:48.939748 1 builder.go:84] Creating event broadcaster
I1017 11:31:48.939848 1 builder.go:90] Creating lvm snapshot controller object
I1017 11:31:48.939880 1 builder.go:99] Adding Event handler functions for lvm snapshot controller
I1017 11:31:48.939897 1 start.go:72] Starting informer for lvm snapshot controller
I1017 11:31:48.939916 1 start.go:74] Starting Lvm snapshot controller
I1017 11:31:48.939929 1 snapshot.go:195] Starting Snap controller
I1017 11:31:48.939937 1 snapshot.go:198] Waiting for informer caches to sync
I1017 11:31:48.940171 1 builder.go:84] Creating event broadcaster
I1017 11:31:48.940327 1 builder.go:90] Creating lvm volume controller object
I1017 11:31:48.940446 1 builder.go:101] Adding Event handler functions for lvm volume controller
I1017 11:31:48.940617 1 start.go:73] Starting informer for lvm volume controller
I1017 11:31:48.940762 1 start.go:75] Starting Lvm volume controller
I1017 11:31:48.940835 1 volume.go:295] Starting Vol controller
I1017 11:31:48.941021 1 volume.go:298] Waiting for informer caches to sync
I1017 11:31:48.963647 1 builder.go:95] Creating lvm node controller object
I1017 11:31:48.968900 1 builder.go:110] Adding Event handler functions for lvm node controller
I1017 11:31:48.970674 1 start.go:98] Starting informer for lvm node controller
I1017 11:31:48.970704 1 start.go:101] Starting Lvm node controller
I1017 11:31:48.970711 1 lvmnode.go:223] Starting Node controller
I1017 11:31:48.970716 1 lvmnode.go:226] Waiting for informer caches to sync
I1017 11:31:49.040068 1 snapshot.go:202] Starting Snap workers
I1017 11:31:49.040111 1 snapshot.go:209] Started Snap workers
I1017 11:31:49.042803 1 volume.go:302] Starting Vol workers
I1017 11:31:49.042996 1 volume.go:309] Started Vol workers
I1017 11:31:49.086469 1 lvmnode.go:231] Starting Node workers
I1017 11:31:49.086494 1 lvmnode.go:238] Started Node workers
I1017 11:31:49.287634 1 lvmnode.go:90] lvm node controller: creating new node object for &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:ip-10-0-3-185.us-east-2.compute.internal GenerateName: Namespace:openebs SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[{APIVersion:v1 Kind:Node Name:ip-10-0-3-185.us-east-2.compute.internal UID:1294e0df-187f-4ef6-927e-95d8e64e064d Controller:0xc00032f9a6 BlockOwnerDeletion:<nil>}] Finalizers:[] ManagedFields:[]} VolumeGroups:[]}
I1017 11:31:49.299925 1 lvmnode.go:94] lvm node controller: created node object openebs/ip-10-0-3-185.us-east-2.compute.internal
I1017 11:31:49.299952 1 lvmnode.go:306] Successfully synced 'openebs/ip-10-0-3-185.us-east-2.compute.internal'
I1017 11:31:49.301261 1 lvmnode.go:153] Got add event for lvm node openebs/ip-10-0-3-185.us-east-2.compute.internal
I1017 11:31:49.461978 1 grpc.go:72] GRPC call: /csi.v1.Identity/GetPluginInfo requests {}
I1017 11:31:49.463759 1 grpc.go:81] GRPC response: {"name":"local.csi.openebs.io","vendor_version":"1.6.2"}
I1017 11:31:49.477923 1 grpc.go:72] GRPC call: /csi.v1.Node/NodeGetInfo requests {}
I1017 11:31:49.479442 1 lvmnode.go:306] Successfully synced 'openebs/ip-10-0-3-185.us-east-2.compute.internal'
I1017 11:31:49.485697 1 grpc.go:81] GRPC response: {"accessible_topology":{"segments":{"kubernetes.io/hostname":"ip-10-0-3-185.us-east-2.compute.internal","openebs.io/nodename":"ip-10-0-3-185.us-east-2.compute.internal"}},"node_id":"ip-10-0-3-185.us-east-2.compute.internal"}
I1017 11:32:49.267418 1 lvmnode.go:306] Successfully synced 'openebs/ip-10-0-3-185.us-east-2.compute.internal'
I1017 11:33:49.297315 1 lvmnode.go:306] Successfully synced 'openebs/ip-10-0-3-185.us-east-2.compute.internal'
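With the node plugin healthy, you can point workloads at the driver through a StorageClass. The following is a minimal sketch: the provisioner name comes from the logs above, while the `openebs-lvmpv` name and `lvmvg` volume group are illustrative and must match a volume group that already exists on your nodes:
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
provisioner: local.csi.openebs.io  # driver name reported in the logs above
parameters:
  storage: "lvm"
  volgroup: "lvmvg"  # must match a volume group present on the node
EOF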
Chainguard's free tier of Starter container images are built with Wolfi, our minimal Linux undistro.
All other Chainguard Containers are built with Chainguard OS, Chainguard's minimal Linux operating system designed to produce container images that meet the requirements of a more secure software supply chain.
The main features of Chainguard Containers include:
- Minimal design, with no unnecessary software bloat
- Automated nightly builds to ensure Containers are completely up-to-date and contain all available security patches
- High quality build-time SBOMs (software bills of materials) attesting the provenance of all artifacts
- Verifiable signatures provided by Sigstore
- Reproducible builds with Cosign and apko
For cases where you need container images with shells and package managers to build or debug, most Chainguard Containers come paired with a development, or `-dev`, variant.
In all other cases, including Chainguard Containers tagged as `:latest` or with a specific version number, the container images include only an open-source application and its runtime dependencies. These minimal container images typically do not contain a shell or package manager.
Although the `-dev` container image variants have similar security features to their more minimal versions, they include additional software that is typically not necessary in production environments. We recommend using multi-stage builds to copy artifacts from the `-dev` variant into a more minimal production image.
To improve security, Chainguard Containers include only essential dependencies. Need more packages? Chainguard customers can use Custom Assembly to add packages, either through the Console, `chainctl`, or API.
To use Custom Assembly in the Chainguard Console: navigate to the image you'd like to customize in your Organization's list of images, and click on the Customize image button at the top of the page.
Refer to our Chainguard Containers documentation on Chainguard Academy. Chainguard also offers VMs and Libraries — contact us for access.
This software listing is packaged by Chainguard. The trademarks set forth in this offering are owned by their respective companies, and use of them does not imply any affiliation, sponsorship, or endorsement by such companies.