
Understanding Workflows

Note

Throughout this walkthrough we only cover ArtifactType and WorkflowTemplate. Note, however, that cluster-wide equivalents exist (ClusterArtifactType and ClusterWorkflowTemplate).

ARC does not orchestrate workflows itself; it relies on Argo Workflows as its workflow engine.

Resource Relationships

The following diagram illustrates how ARC resources work together to instantiate and configure Argo Workflows:

graph TB
    Order["📋 Order<br/>(User Request)"]
    ArtifactWorkflow["📦 ArtifactWorkflow"]
    ArtifactTypeDef["🏷️ ArtifactType"]
    SrcEndpoint["🔌 Endpoint (Source)"]
    DstEndpoint["🔌 Endpoint (Destination)"]
    SrcSecret["🔐 Secret<br/>(Source Credentials)"]
    DstSecret["🔐 Secret<br/>(Destination Credentials)"]
    WorkflowTemplate["⚙️ WorkflowTemplate"]
    Workflow["🚀 Workflow"]

    Order -->|creates| ArtifactWorkflow
    Order -->|references| SrcEndpoint
    Order -->|references| DstEndpoint
    Order -->|specifies type| ArtifactTypeDef
    ArtifactWorkflow -->|references| WorkflowTemplate
    ArtifactWorkflow -->|references| SrcSecret
    ArtifactWorkflow -->|references| DstSecret

    ArtifactTypeDef -->|validates src/dst types| Order
    ArtifactTypeDef -->|references| WorkflowTemplate

    SrcEndpoint -->|references| SrcSecret
    DstEndpoint -->|references| DstSecret

    WorkflowTemplate -->|blueprint for| Workflow
    ArtifactWorkflow -->|provides params & instantiates| Workflow
    SrcSecret -->|mounts to| Workflow
    DstSecret -->|mounts to| Workflow

    style Order stroke:#e1f5ff,stroke-width:2px
    style ArtifactWorkflow stroke:#f3e5f5,stroke-width:2px
    style ArtifactTypeDef stroke:#e8f5e9,stroke-width:2px
    style SrcEndpoint stroke:#fff3e0,stroke-width:2px
    style DstEndpoint stroke:#fff3e0,stroke-width:2px
    style SrcSecret stroke:#fce4ec,stroke-width:2px
    style DstSecret stroke:#fce4ec,stroke-width:2px
    style WorkflowTemplate stroke:#f1f8e9,stroke-width:2px
    style Workflow stroke:#ffe0b2,stroke-width:2px

Walkthrough

A workflow created by ARC is composed of three parts:

  1. A workflowTemplateRef which references a WorkflowTemplate object
  2. Parameters passed to the entrypoint of the workflow
  3. Mounts for the source and destination secrets
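
Assembled, the Workflow submitted by ARC might look roughly like the following sketch. The metadata and exact field layout are assumptions for illustration; the volume names match those used by the WorkflowTemplate shown later in this document:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-frag-     # assumption: derived from the owning object
spec:
  workflowTemplateRef:            # 1. the referenced WorkflowTemplate
    name: oci-image-pipeline
    clusterScope: true
  arguments:
    parameters:                   # 2. parameters passed to the entrypoint
      - name: srcType
        value: oci
      # ...
  volumes:                        # 3. mounts for source and destination secrets
    - name: src-secret-vol
      secret:
        secretName: mysrc-creds
    - name: dst-secret-vol
      secret:
        secretName: mydst-creds
```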

When an ArtifactWorkflow is created (usually by an Order from a user), it might look as follows:

apiVersion: arc.opendefense.cloud/v1alpha1
kind: ArtifactWorkflow
metadata:
  name: example-frag
spec:
  workflowTemplateRef: # based on Artifact Type used in Order
    name: oci-image-pipeline
  srcSecretRef:
    name: mysrc-creds
  dstSecretRef:
    name: mydst-creds
  parameters:
    - name: srcType
      value: oci
      # ...

The two Endpoints referenced by srcRef and dstRef might look as follows:

apiVersion: arc.opendefense.cloud/v1alpha1
kind: Endpoint
metadata:
  name: mysrc
spec:
  type: oci # Endpoint Type!
  remoteURL: https://...
  secretRef:
    name: mysrc-creds
  usage: PullOnly
---
apiVersion: arc.opendefense.cloud/v1alpha1
kind: Endpoint
metadata:
  name: mydst
spec:
  type: oci # Endpoint Type!
  remoteURL: https://...
  secretRef:
    name: mydst-creds
  usage: PushOnly

How these objects are tied into a workflow is described by the ClusterArtifactType:

apiVersion: arc.opendefense.cloud/v1alpha1
kind: ClusterArtifactType
metadata:
  name: oci
spec:
  rules:
    srcTypes: # Endpoint Types
      - oci
    dstTypes:
      - oci
  parameters:
    - name: scanSeverity
      value: HIGH,CRITICAL
  workflowTemplateRef: # argo.ClusterWorkflowTemplate
    name: oci-image-pipeline

The Order specifies which ArtifactType is used, in our case oci, and the controller therefore instantiates the referenced oci-image-pipeline template.

The two endpoints satisfy the rules of the ClusterArtifactType, as this workflow only supports endpoints of type oci. It is important to understand that endpoint types and artifact types are distinct concepts; here both happen to be named oci.

The controller will verify the endpoints and retrieve the associated secrets.

Resulting parameters and runtime configuration

The above resources will instantiate the workflow with the following parameters:

  • srcType: oci
  • srcRemoteURL: https://...
  • srcSecret: true (special variable for conditional steps; true or false depending on whether a secret was provided)
  • dstType: oci
  • dstRemoteURL: https://...
  • dstSecret: true (see above)
  • specRepository: library/alpine
  • specTag: 3.18
  • specOverride: myteam/alpine:3.18-dev
  • scanSeverity: HIGH,CRITICAL (from the ClusterArtifactType)

Parameter names are derived from the API spec but translated to camelCase. The values are always strings!

The parameters do not contain secrets, but they can be used to interact with third-party tools and to create conditional steps in the workflow, e.g. for different supported source or destination types.
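
For instance, a template could gate a step on the source type. The step and template names below are hypothetical; the when syntax follows the expressions used in the pipeline later in this document:

```yaml
- - name: fetch-from-s3
    template: fetch-from-s3
    when: "{{workflow.parameters.srcType}} == s3"
```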

Note

Parameters can come from both the Order and the ArtifactType. They are merged when the ArtifactWorkflow is created, with the ArtifactType taking precedence over the Order.

The source and destination secrets themselves are mounted at /secrets/src/ and /secrets/dst/ respectively. If no secret was provided, an emptyDir is mounted instead to ensure the Argo Workflow can still run.
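
A step can therefore probe the mount to decide whether to authenticate. A minimal sketch, assuming the /secrets/src mount path used by the WorkflowTemplate below (check_src_auth is our illustrative helper, not part of ARC):

```shell
# Decide whether source credentials are present. ARC mounts an emptyDir
# when no secret was provided, so the directory exists but is empty.
check_src_auth() {
    dir="${1:-/secrets/src}"
    if [ -f "$dir/username" ] && [ -f "$dir/password" ]; then
        echo "authenticated"
    else
        echo "anonymous"
    fi
}

check_src_auth    # inside the workflow pod this inspects /secrets/src
```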

Example for an OCI use case

WorkflowTemplate

The following template is an example of a workflow that uses an oci source and destination. It can be used as a starting point for creating your own workflows.

apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: oci-image-pipeline
spec:
  retryStrategy:
    retryPolicy: OnError
    limit: 1

  serviceAccountName: arc-workflow
  securityContext:
    fsGroup: 65532
    runAsUser: 65532

  # Define the PVC template for inter-step data sharing
  volumeClaimTemplates:
    - metadata:
        name: work-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

  # Mount certificate bundle
  volumes:
    - name: ca-certificates
      configMap:
        name: "root-bundle"
        defaultMode: 0644
        optional: false
        items:
          - key: trust-bundle.pem
            path: trust-bundle.pem

  # --- Input Parameters ---
  arguments:
    parameters:
      - name: srcType
      - name: srcRemoteURL
      - name: srcSecret
      - name: dstType
      - name: dstRemoteURL
      - name: dstSecret
      - name: specRepository # The source image repository (e.g. library/alpine)
      - name: specTag # The source image tag or semver pattern, e.g. 3.18 or >3.18.0 < 3.19.0
      - name: specOverride # Defines how the destination repository:tag is overridden, supports go templates
        value: ""
      - name: scanSeverity # Additional parameter coming from ArtifactType

  entrypoint: oci-pipeline

  templates:
    # --- Main Pipeline Entrypoint ---
    - name: oci-pipeline
      steps:
        - - name: authenticate-registries
            template: authenticate-registries

        - - name: check-src-availability
            template: check-src-availability

        # If the source tag is unavailable we assume it is a semver constraint, so let's find the latest matching version
        - - name: retrieve-tags
            template: retrieve-tags
            when: "{{ steps.check-src-availability.outputs.result }} == 'unavailable'"

        - - name: filter-tags
            template: filter-tags
            when: "{{ steps.check-src-availability.outputs.result }} == 'unavailable'"

        - - name: compute-images
            template: compute-images

        - - name: check-dst-availability
            template: check-dst-availability

        - - name: pull-image
            template: pull-image
            when: "{{ steps.check-dst-availability.outputs.result }} == 'unavailable'"

        - - name: scan-image
            template: scan-image
            when: "{{ steps.check-dst-availability.outputs.result }} == 'unavailable'"

        - - name: push-image
            template: push-image
            when: "{{ steps.check-dst-availability.outputs.result }} == 'unavailable'"

        - - name: attest-scan
            template: attest-scan-results
            when: "{{ steps.check-dst-availability.outputs.result }} == 'unavailable'"
            arguments:
              parameters:
              - name: dst
                value: "{{steps.compute-images.outputs.parameters.dst}}"

        - - name: sign-image
            template: sign-image
            when: "{{ steps.check-dst-availability.outputs.result }} == 'unavailable'"
            arguments:
              parameters:
              - name: dst
                value: "{{steps.compute-images.outputs.parameters.dst}}"

    # --- Authenticate (Skopeo) ---
    - name: authenticate-registries

      script:
        image: quay.io/skopeo/stable:latest
        command: [sh, -c]

        # Mount the volumes
        volumeMounts: &volumeMounts
          - name: src-secret-vol
            readOnly: true
            mountPath: /secrets/src
          - name: dst-secret-vol
            readOnly: true
            mountPath: /secrets/dst
          - name: work-volume
            mountPath: /data
          - mountPath: /etc/ssl/certs/
            name: ca-certificates
            readOnly: true

        env: &env
          - name: DOCKER_CONFIG
            value: /data/.docker
          - name: REGISTRY_AUTH_FILE
            value: /data/.docker/config.json
          # Argo Workflows is considering disallowing accessing parameters in the body of the script,
          # so let's explicitly pass them in as variables:
          - name: SRC_TYPE
            value: "{{workflow.parameters.srcType}}"
          - name: SRC_REMOTE_URL
            value: "{{workflow.parameters.srcRemoteURL}}"
          - name: SRC_SECRET
            value: "{{workflow.parameters.srcSecret}}"
          - name: DST_TYPE
            value: "{{workflow.parameters.dstType}}"
          - name: DST_REMOTE_URL
            value: "{{workflow.parameters.dstRemoteURL}}"
          - name: DST_SECRET
            value: "{{workflow.parameters.dstSecret}}"
          - name: SPEC_REPOSITORY
            value: "{{workflow.parameters.specRepository}}"
          - name: SPEC_TAG
            value: "{{workflow.parameters.specTag}}"
          - name: SPEC_OVERRIDE
            value: "{{workflow.parameters.specOverride}}"
          - name: SCAN_SEVERITY
            value: "{{workflow.parameters.scanSeverity}}"

        source: |
          set -euo pipefail

          # Conditionally login to source registry
          if [ "{{workflow.parameters.srcSecret}}" = "true" ]; then
              echo "Authenticating to source registry {{workflow.parameters.srcRemoteURL}}..."
              cat /secrets/src/password | skopeo login -u "$(cat /secrets/src/username)" --password-stdin {{workflow.parameters.srcRemoteURL}} --compat-auth-file "${DOCKER_CONFIG}/config.json"
          fi

          # Conditionally login to destination registry
          if [ "{{workflow.parameters.dstSecret}}" = "true" ]; then
              echo "Authenticating to destination registry {{workflow.parameters.dstRemoteURL}}..."
              cat /secrets/dst/password | skopeo login -u "$(cat /secrets/dst/username)" --password-stdin {{workflow.parameters.dstRemoteURL}} --compat-auth-file "${DOCKER_CONFIG}/config.json"
          fi

    # --- Check Source Availability (Skopeo) ---
    - name: check-src-availability
      retryStrategy:
        limit: "2"
      script:
        image: quay.io/skopeo/stable:latest
        command: [sh, -c]
        volumeMounts: *volumeMounts
        env: *env

        source: |
          skopeo inspect "docker://$SRC_REMOTE_URL/$SPEC_REPOSITORY:$SPEC_TAG" >/dev/null 2>&1

          EXIT_CODE=$?

          if [ "$EXIT_CODE" -eq 0 ]; then
              echo "$SPEC_TAG" > /data/tag.txt
              echo "available"
          else
              # Image does not exist
              echo "unavailable"
          fi
          exit 0

    # --- Retrieve Tags (Skopeo) ---
    - name: retrieve-tags
      retryStrategy:
        limit: "2"
      script:
        image: quay.io/skopeo/stable:latest
        command: [sh, -c]
        volumeMounts: *volumeMounts
        env: *env

        source: |
          set -euo pipefail

          skopeo list-tags "docker://$SRC_REMOTE_URL/$SPEC_REPOSITORY" > /data/list-tags.json

    # --- Filter Tags (semver) ---
    - name: filter-tags
      retryStrategy:
        limit: "2"
      script:
        image: ghcr.io/trevex/semver:latest
        command: [bash, -c]
        volumeMounts: *volumeMounts
        env: *env

        source: |
          set -euo pipefail

          cat /data/list-tags.json | jq -r '.Tags[]' | semver filter "$SPEC_TAG" -i | semver latest > /data/tag.txt

    # --- Compute Images
    - name: compute-images
      retryStrategy:
        limit: "2"
      outputs:
        parameters:
        - name: src
          valueFrom:
            path: /tmp/src.txt
        - name: dst
          valueFrom:
            path: /tmp/dst.txt
      script:
        image: hairyhenderson/gomplate:alpine
        command: [sh, -c]
        volumeMounts: *volumeMounts
        env: *env
        source: |
          set -euo pipefail

          # escaping go templates sucks, so let's just not...
          cat > /tmp/images.tpl <<EOF
          <<- \$srcRemoteURL := "$SRC_REMOTE_URL" >>
          <<- \$dstRemoteURL := "$DST_REMOTE_URL" >>
          <<- \$repository := "$SPEC_REPOSITORY" >>
          <<- \$override := "$SPEC_OVERRIDE" >>
          <<- \$tag := "$(cat /data/tag.txt)" >>
          <<- \$values := dict "tag" \$tag "repository" \$repository >>
          << \$srcRemoteURL >>/<< \$repository >>:<< \$tag >>
          <<- if \$override >>
          <<- \$repoTag := tmpl.Inline \$override \$values >>
          << \$dstRemoteURL >>/<< \$repoTag >>
          <<- else >>
          << \$dstRemoteURL >>/<< \$repository >>:<< \$tag >>
          <<- end >>
          EOF

          sed -i 's/<</{{/g' /tmp/images.tpl
          sed -i 's/>>/}}/g' /tmp/images.tpl
          cat /tmp/images.tpl

          cat /tmp/images.tpl | gomplate --out - > /data/gomplate.txt

          sed -i '/^[[:space:]]*$/d' /data/gomplate.txt
          cat /data/gomplate.txt

          head -1 /data/gomplate.txt > /data/src.txt
          cp /data/src.txt /tmp/src.txt
          head -2 /data/gomplate.txt | tail -1 > /data/dst.txt
          cp /data/dst.txt /tmp/dst.txt

    # --- Check Destination Availability (Skopeo) ---
    - name: check-dst-availability
      retryStrategy:
        limit: "2"

      script:
        image: quay.io/skopeo/stable:latest
        command: [sh, -c]
        volumeMounts: *volumeMounts
        env: *env

        source: |
          skopeo inspect "docker://$(cat /data/dst.txt)" >/dev/null 2>&1

          EXIT_CODE=$?

          if [ "$EXIT_CODE" -eq 0 ]; then
              echo "available"
          else
              # Image does not exist
              echo "unavailable"
          fi
          exit 0

    # --- Pull Image (Skopeo) ---
    - name: pull-image
      retryStrategy:
        limit: "2"

      script:
        image: quay.io/skopeo/stable:latest
        command: [sh, -c]
        volumeMounts: *volumeMounts
        env: *env

        source: |
          set -euo pipefail

          echo "Pulling image from $(cat /data/src.txt)"
          mkdir -p /data/image-storage

          skopeo copy --preserve-digests --multi-arch all "docker://$(cat /data/src.txt)" "oci:/data/image-storage/scanned-image"

    - name: scan-image
      script:
        image: aquasec/trivy:latest
        command: [sh, -c]
        volumeMounts: *volumeMounts

        source: |
          echo "Scanning image for severities: $SCAN_SEVERITY"

          # Trivy scan in a directory mode
          trivy image \
            --cache-dir /data/.trivy/cache \
            --exit-code 1 \
            --severity "$SCAN_SEVERITY" \
            --format json \
            --output /data/scan-results.json \
            --input /data/image-storage/scanned-image

          if [ $? -eq 0 ]; then
              echo "✅ Scan successful and no vulnerabilities found above severity: $SCAN_SEVERITY"
          else
              echo "❌ Scan failed!"
              cat /data/scan-results.json
              exit 1
          fi
        env: *env

    # --- Push Image (Skopeo) ---
    - name: push-image
      retryStrategy:
        limit: "2"

      script:
        image: quay.io/skopeo/stable:latest
        command: [sh, -c]

        # Mount the secret volume to a temporary directory
        volumeMounts: *volumeMounts

        source: |
          set -euo pipefail

          echo "Pushing image to $(cat /data/dst.txt)"
          skopeo copy --dest-precompute-digests --preserve-digests "oci:/data/image-storage/scanned-image" "docker://$(cat /data/dst.txt)"
        env: *env

    # --- Attest Scan Results (Cosign) ---
    - name: attest-scan-results
      inputs:
        parameters:
        - name: dst
      container:
        image: cgr.dev/chainguard/cosign:latest
        command: [cosign, attest]
        args:
          [
            "--allow-http-registry=true",
            "--key",
            "k8s://{{workflow.namespace}}/cosign-key",
            "--predicate",
            "/data/scan-results.json",
            "--type",
            "vuln",
            "{{inputs.parameters.dst}}",
          ]
        env: *env
        volumeMounts: *volumeMounts

    # --- Sign the image (Cosign) ---
    - name: sign-image
      inputs:
        parameters:
        - name: dst
      container:
        image: cgr.dev/chainguard/cosign:latest
        command: [cosign, sign]
        args:
          [
            "--yes",
            "--allow-http-registry=true",
            "--key",
            "k8s://{{workflow.namespace}}/cosign-key",
            "{{inputs.parameters.dst}}",
          ]
        env: *env
        volumeMounts: *volumeMounts

Secrets

These are example secrets for pulling and pushing (a registry credential secret and an S3-style credential secret):

apiVersion: v1
kind: Secret
metadata:
  name: dst-reg-secret
data:
  username: YWRtaW4=
  password: YWRtaW4=
---
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: YWRtaW4=
  AWS_SECRET_ACCESS_KEY: YWRtaW5hZG1pbg==
kind: Secret
metadata:
  name: s3-secret
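
The data values are base64-encoded, as required by the Secret data field; the placeholders above decode as follows:

```shell
# Decode the placeholder values used in the example secrets.
echo 'YWRtaW4=' | base64 -d          # admin
echo 'YWRtaW5hZG1pbg==' | base64 -d  # adminadmin
```

Alternatively, `kubectl create secret generic dst-reg-secret --from-literal=username=admin --from-literal=password=admin` creates the same Secret without manual encoding.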

Workflow Example

Further examples for different use cases, covering Helm, OCI, OCM and blob stores, are available in the corresponding subdirectories.