Chainguard Container for spark-operator-fips

A minimal, FIPS 140-3 compliant image for Spark Operator. Facilitates the deployment and management of Apache Spark applications in Kubernetes environments.

Chainguard Containers are regularly updated, secure-by-default container images.

Download this Container Image

For those with access, this container image is available on cgr.dev:

docker pull cgr.dev/ORGANIZATION/spark-operator-fips:latest

Be sure to replace the ORGANIZATION placeholder with the name used for your organization's private repository within the Chainguard Registry.

Compatibility Notes

Chainguard's Spark Operator FIPS image is comparable to the Kubeflow Spark Operator image on Docker Hub, but it is additionally FIPS 140-3 compliant and includes only the minimum set of dependencies needed to run Spark Operator.

Chainguard also provides a FIPS-compliant Spark image, spark-fips, which must be used with the operator.
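
This companion image appears again in the Submitting Applications section below; those with access can pull it the same way as the operator image:

docker pull cgr.dev/ORGANIZATION/spark-fips:latest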

FIPS Support

This image contains the Bouncy Castle cryptographic libraries for FIPS.

The FIPS-certified version of Bouncy Castle (CMVP certificate #4743) is compliant with the FIPS 140-3 standard when used in accordance with the Bouncy Castle Security Policy.

This image also ships with a validated redistribution of OpenSSL's FIPS provider module. For more on FIPS support in Chainguard Images, consult the guide on FIPS-enabled Chainguard Images on Chainguard Academy.
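
As a quick sanity check of the OpenSSL FIPS provider, you can list the active providers and look for a fips entry. This assumes the openssl binary is present on the image's path, which may not hold for every tag:

docker run --entrypoint openssl cgr.dev/ORGANIZATION/spark-operator-fips:TAG list -providers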

Getting Started

Creating a KeyStore

Before getting up and running with Spark Operator FIPS, you'll need to create a BCFKS KeyStore:

docker run -v $(pwd):/tmp/keystore --entrypoint keytool cgr.dev/ORGANIZATION/spark-operator-fips:TAG \
  -v -keystore /tmp/keystore/keystore.bcfks \
  -storetype bcfks \
  -providername BCFIPS \
  -alias "localhost" \
  -genkeypair -sigalg SHA512withRSA -keyalg RSA \
  -dname "CN=localhost" \
  -storepass "<YOUR TLS KEYSTORE PASSWORD>" \
  -keypass "<YOUR TLS KEY PASSWORD, can be the same>"

You can now use keytool to view the KeyStore:

docker run -v $(pwd):/tmp/keystore --entrypoint keytool cgr.dev/ORGANIZATION/spark-operator-fips:TAG \
  -v -keystore /tmp/keystore/keystore.bcfks \
  -list \
  -storepass "<YOUR TLS KEYSTORE PASSWORD>"

After the KeyStore has been generated, the operator will need to be configured to use it. To do so, you'll need to mount a custom configuration file that will be used by Spark when applications are submitted by the operator.

Create a file, spark.properties, with the following contents:

spark.ssl.enabled=true
spark.ssl.keyStorePassword=<YOUR TLS KEYSTORE PASSWORD>
spark.ssl.keyStoreType=BCFKS
spark.ssl.keyStore=/usr/lib/spark/conf/keystore.bcfks

You'll use this file later when deploying the operator. Consult the official documentation for configuring Spark for guidance on which properties are best for your environment.
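
For illustration, two commonly tuned Spark SSL properties are shown below. The values are examples only, not recommendations for your environment:

spark.ssl.protocol=TLSv1.2
spark.ssl.needClientAuth=false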

Creating a TrustStore

To create a TrustStore and import and trust an existing CA certificate, you can also use keytool:

docker run -v $(pwd):/tmp/keystore --entrypoint keytool cgr.dev/ORGANIZATION/spark-operator-fips:TAG \
  -v -keystore /tmp/keystore/truststore.bcfks \
  -storetype bcfks \
  -providername BCFIPS \
  -import -file /tmp/keystore/MyCA.crt \
  -storepass "<YOUR TRUSTSTORE PASSWORD>" \
  -trustcacerts \
  -noprompt

As with the KeyStore above, you can configure the Spark installation shipped with the operator to use your TrustStore through the same configuration file. For example, you might set the following properties:

spark.ssl.trustStorePassword=<YOUR TRUSTSTORE PASSWORD>
spark.ssl.trustStoreType=BCFKS
spark.ssl.trustStore=/usr/lib/spark/conf/truststore.bcfks

If you'd like to use a TrustStore type other than BCFKS, you'll need to set the environment variables below in your values manifest:

controller:
  env:
  - name: JAVA_TRUSTSTORE_OPTIONS
    value: "-Djavax.net.ssl.trustStoreType=<TRUSTSTORE TYPE>"
  - name: JDK_JAVA_OPTIONS
    value: "--add-exports=java.base/sun.security.internal.spec=ALL-UNNAMED --add-exports=java.base/sun.security.provider=ALL-UNNAMED -Djavax.net.ssl.trustStoreType=<TRUSTSTORE TYPE>"

The same variables will need to be overridden for the driver and executor in each job, as sketched below.
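
A sketch of those per-job overrides, reusing the same values as above (the <TRUSTSTORE TYPE> placeholder is yours to fill in), might look like this in a SparkApplication resource:

spec:
  driver:
    env:
    - name: JAVA_TRUSTSTORE_OPTIONS
      value: "-Djavax.net.ssl.trustStoreType=<TRUSTSTORE TYPE>"
    - name: JDK_JAVA_OPTIONS
      value: "--add-exports=java.base/sun.security.internal.spec=ALL-UNNAMED --add-exports=java.base/sun.security.provider=ALL-UNNAMED -Djavax.net.ssl.trustStoreType=<TRUSTSTORE TYPE>"
  executor:
    env:
    - name: JAVA_TRUSTSTORE_OPTIONS
      value: "-Djavax.net.ssl.trustStoreType=<TRUSTSTORE TYPE>"
    - name: JDK_JAVA_OPTIONS
      value: "--add-exports=java.base/sun.security.internal.spec=ALL-UNNAMED --add-exports=java.base/sun.security.provider=ALL-UNNAMED -Djavax.net.ssl.trustStoreType=<TRUSTSTORE TYPE>"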

Preparation

Before deploying Spark Operator FIPS, you'll need to create a ConfigMap, spark-props, containing your Spark configuration file:

kubectl create configmap spark-props --from-file=spark.properties
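
You can verify that the ConfigMap was created and holds your properties file:

kubectl describe configmap spark-props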

Then you'll need to create a PV, Secret, or ConfigMap that contains your KeyStore and TrustStore. As an example, to create a Secret containing both, you'd first encode each via base64:

base64 -i ./keystore.bcfks
base64 -i ./truststore.bcfks

Then create a resource for the secret in a file, keystores-secret.yaml:

apiVersion: v1
data:
  keystore.bcfks: |-
    <base64 encoded KeyStore goes here>
  truststore.bcfks: |-
    <base64 encoded TrustStore goes here>
kind: Secret
metadata:
  name: keystores
type: Opaque

And apply the resource:

kubectl apply -f ./keystores-secret.yaml
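
Alternatively, kubectl can construct the same Secret directly from the files, handling the base64 encoding for you:

kubectl create secret generic keystores \
  --from-file=./keystore.bcfks \
  --from-file=./truststore.bcfks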

Finally, you'll need to create a values manifest, values.yaml:

image:
  registry: "cgr.dev"
  repository: "ORGANIZATION/spark-operator-fips"
  tag: "latest"

controller:
  volumes:
  - name: "keystores"
    secret:
      secretName: "keystores"
  - name: "spark-props"
    configMap:
      name: "spark-props"
  - name: "tmp"
    emptyDir:
      sizeLimit: "1Gi"
  volumeMounts:
  - name: "keystores"
    mountPath: "/keystores"
    readOnly: false
  - name: "spark-props"
    mountPath: "/usr/lib/spark/conf/spark-defaults.conf"
    subPath: "spark.properties"
    readOnly: true
  - name: "tmp"
    mountPath: "/tmp"
    readOnly: false

Note that a volume must be mounted at /tmp, as it holds Spark artifacts.

Deployment

To deploy Spark Operator FIPS, start by adding the Helm chart repository:

helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update

Then deploy the operator with Helm:

helm install spark-operator spark-operator/spark-operator --values ./values.yaml
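
To confirm the release is healthy, you can check its status and pods. The label selector below assumes the chart's default app.kubernetes.io/name label:

helm status spark-operator
kubectl get pods -l app.kubernetes.io/name=spark-operator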

Submitting Applications

Every resource submitted through the operator must be updated, as each driver and executor must be configured to use the KeyStore and TrustStore you generated above. In addition to the KeyStore and TrustStore, you'll also need to provide the same configuration file you created for the operator.

Unlike with the operator, the configuration file must be mounted as spark.properties instead of spark-defaults.conf, because the operator overrides the default properties file.

As an example, every job's resource should have a similar template:

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
...
spec:
  ...
  image: cgr.dev/ORGANIZATION/spark-fips:latest
  ...
  volumes:
  - name: keystores
    secret:
      secretName: keystores
  - name: spark-props
    configMap:
      name: spark-props
      ...
  driver:
    ...
    volumeMounts:
    - name: keystores
      mountPath: /keystores
      readOnly: false
    - name: spark-props
      mountPath: /usr/lib/spark/conf/spark.properties
      subPath: spark.properties
      readOnly: true
  executor:
    ...
    volumeMounts:
    - name: keystores
      mountPath: /keystores
      readOnly: false
    - name: spark-props
      mountPath: /usr/lib/spark/conf/spark.properties
      subPath: spark.properties
      readOnly: true
      ...

Submitting your application is as easy as applying the resource that contains the definition for the job:

kubectl apply -f ./path/to/your/job/definition.yaml
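
Once applied, you can track the job through the operator's custom resources; replace the placeholder with your application's name:

kubectl get sparkapplications
kubectl describe sparkapplication <your-application-name>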

You should now be up and running with Spark Operator FIPS!

Documentation and Resources

What are Chainguard Containers?

Chainguard Containers are minimal container images that are secure by default.

In many cases, the Chainguard Containers tagged as :latest contain only an open-source application and its runtime dependencies. These minimal container images typically do not contain a shell or package manager. Chainguard Containers are built with Wolfi, our Linux undistro designed to produce container images that meet the requirements of a more secure software supply chain.

For cases where you need container images with shells and package managers to build or debug, most Chainguard Containers come paired with a -dev variant.

Although the -dev container image variants have security features similar to those of their more minimal counterparts, they include additional software that is typically not necessary in production environments. We recommend using multi-stage builds to leverage the -dev variants, copying application artifacts into a final minimal container that offers a reduced attack surface and won't allow package installations or logins.

Learn More

To better understand how to work with Chainguard Containers, please visit Chainguard Academy and Chainguard Courses.

In addition to Containers, Chainguard offers VMs and Libraries. Contact Chainguard to access additional products.

Trademarks

This software listing is packaged by Chainguard. The trademarks set forth in this offering are owned by their respective companies, and use of them does not imply any affiliation, sponsorship, or endorsement by such companies.

Licenses

Chainguard container images contain software packages that are direct or transitive dependencies. The following licenses were found in the "latest" version of this image:

  • Apache-2.0
  • BSD-2-Clause
  • BSD-3-Clause
  • Bitstream-Vera
  • FTL
  • GCC-exception-3.1
  • GPL-2.0-only

For a complete list of licenses, please refer to this Image's SBOM.

Compliance

This is a FIPS-validated image for FedRAMP compliance.

This image is STIG-hardened and scanned against the DISA General Purpose Operating System SRG, with reports available.
