Uyuni 2026.04 Proxy Deployment on Kubernetes

1. Proxy on Kubernetes changes

There were multiple changes in how to install Uyuni proxies running on Kubernetes:

  • mgrpxy no longer handles proxies on Kubernetes; Helm and the proxy-helm chart need to be used instead.

  • The TLS certificates have to be in secrets, rather than in the configuration tarball. This enables cloud-native TLS certificate management for the proxies.

  • The proxy queries the server at the start of the container to verify that the versions are compatible.

  • The required persistent volume claims have been reduced to the Squid cache only.

2. Prerequisites

Installing the Kubernetes cluster and configuring it is out of the scope of this document.

The cluster is assumed to be ready, with a user that has rights on a namespace dedicated to Uyuni.

Create Role and RoleBinding if they do not exist already. The minimum rights required to deploy proxy-helm are defined as:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-resource-manager
  namespace: $NAMESPACE
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services", "secrets", "configmaps", "persistentvolumeclaims"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["*"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-resource-manager-binding
  namespace: $NAMESPACE
subjects:
- kind: User
  name: $USERNAME
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-resource-manager
  apiGroup: rbac.authorization.k8s.io
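
As a sketch, the RoleBinding can be rendered with concrete values and applied by a cluster administrator. The namespace, user name, and file name below are example assumptions; the Role is handled the same way.

```shell
# Example values -- replace with your namespace and deploying user.
NAMESPACE=uyuni
USERNAME=uyuni-admin

# Render the RoleBinding with the placeholders substituted:
cat > uyuni-rolebinding.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-resource-manager-binding
  namespace: $NAMESPACE
subjects:
- kind: User
  name: $USERNAME
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-resource-manager
  apiGroup: rbac.authorization.k8s.io
EOF

# Apply with cluster-admin (or equivalent) rights:
# kubectl apply -f uyuni-rolebinding.yaml
```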

This guide assumes the reader knows how to work with Kubernetes: the concepts will not be explained here as they are extensively documented in the official Kubernetes documentation.

The Uyuni administrator needs to deploy the proxy-helm Helm chart. However, this chart requires the following to be prepared:

  • a TLS certificate chain for the proxy,

  • a ConfigMap for the proxy root CA certificate,

  • a persistent volume for the claim the chart will create, or a storage class that creates it automatically,

  • load balancers or other mechanisms to expose the Salt, SSH and TFTP ports.

Run the following command to read the full details on how to use the proxy Helm chart:

helm show readme --version 2026.4.0 \
    oci://registry.opensuse.org/uyuni/proxy-helm

Helm requires the version to be semver2 compatible when using OCI repositories. This is why the version (2026.4.0) is formatted differently from the release number (2026.04).

For older versions, the Helm chart is located in a snapshot repository like: oci://registry.opensuse.org/systemsmanagement/uyuni/snapshots/<version>/opensuse_tumbleweed/uyuni/proxy-helm.

2.1. TLS setup

A TLS secret named proxy-cert is expected. It contains the TLS certificate and key for the Ingress rule and needs to have the public FQDN as a Subject Alternative Name.

This secret can be created using the kubectl create secret tls -n $NAMESPACE command. The certificate file passed to this command needs to start with the server certificate, followed by the chain of intermediate CA certificates, if any. The root CA is not needed in this secret, as it is expected in a ConfigMap.

The root CA certificate of proxy-cert is expected in a ConfigMap named uyuni-ca, stored under the ca.crt key. It can be created with a command like kubectl create cm -n $NAMESPACE uyuni-ca --from-file=ca.crt=/path/to/uyuni-ca.crt.
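
For example, assuming the certificate files already exist locally (the file names are illustrative):

```shell
NAMESPACE=uyuni   # example namespace

# proxy.crt must start with the server certificate, followed by any
# intermediate CA certificates; the root CA is NOT included here.
kubectl create secret tls proxy-cert -n "$NAMESPACE" \
    --cert=proxy.crt --key=proxy.key

# The root CA certificate goes into the uyuni-ca ConfigMap under the
# ca.crt key:
kubectl create cm uyuni-ca -n "$NAMESPACE" \
    --from-file=ca.crt=uyuni-ca.crt
```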

2.2. Storage

The proxy chart defines a volume as a Persistent Volume Claim (PVC).

  • The creation of the underlying PV is the responsibility of the cluster administrators.

  • The PVC uses the ReadWriteOnce access mode.

The created PVC can be tuned with Helm chart values; the following values are available:

  • size: to set the requested size of the PVC.

  • storageClass: can be used to select the storage class to use for the PVC.

  • extraLabels: can be used to add custom labels to the PVC.

  • annotations: can be used to set custom annotations on the PVC.

  • volumeName: can be used to hard code which volume the PVC should be bound to.

  • selector: the YAML fragment of the PVC selector used to find the PV to bind to.
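
As an illustration, a values fragment tuning the cache PVC could look like the following. The squid-cache key name, the storage class, and the concrete values are assumptions; check the chart's README via helm show readme for the actual structure.

```yaml
# Hypothetical values fragment -- verify the key names against the
# proxy-helm README before using.
squid-cache:
  size: 100Gi
  storageClass: longhorn          # example storage class name
  extraLabels:
    app.kubernetes.io/part-of: uyuni
  annotations:
    backup.example.org/enabled: "false"   # example annotation
```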

Refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for more information on persistent volumes and their claims.

Refer to the proxy-helm README for the list of persistent volume claims which will be created and will need to be bound to persistent volumes.

While default sizes are provided, it is highly recommended to adjust them based on the distributions you plan to synchronize.

For more information on storage requirements see General Requirements.

2.3. Exposing ports

Uyuni proxy requires some TCP and UDP ports to be routed to its services. Refer to the proxy-helm README for the list of ports to be exposed.

RKE2 ships with nginx as the default ingress controller. However, as nginx is deprecated and soon to be unsupported, the proxy-helm chart defaults to using Traefik as the ingress controller. Using the nginx ingress controller might work, but it is not documented; use it at your own risk.

The proxy-helm chart supports Gateway API version 1.4. Since this requires experimental CRDs which are not shipped with RKE2 1.35, it is not recommended for production use.

There are multiple ways to expose the ports, but this documentation only describes how to configure RKE2’s Traefik. This is not a task for the Uyuni administrator but for the Kubernetes cluster administrator, as it requires configuration on the cluster nodes.

To configure Traefik to expose and route the needed ports, create a /var/lib/rancher/rke2/server/manifests/uyuni-traefik.yaml file on each node with the following content. Note that Traefik takes a few seconds to be reinstalled after the file is saved.

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      ssh:
        port: 8022
        expose:
          default: true
        exposedPort: 8022
        protocol: TCP
        hostPort: 8022
      salt-publish:
        port: 4505
        expose:
          default: true
        exposedPort: 4505
        protocol: TCP
        hostPort: 4505
        containerPort: 4505
      salt-request:
        port: 4506
        expose:
          default: true
        exposedPort: 4506
        protocol: TCP
        hostPort: 4506
        containerPort: 4506

If Traefik is used as the Ingress controller, the user needs access to additional resources. Add the following to the rules of the previously defined role:

- apiGroups: ["traefik.io", "traefik.containo.us"]
  resources: ["ingressroutetcps"]
  verbs: ["*"]

If Gateway API is used instead, add the following to the rules of the previously defined role:

- apiGroups: ["gateway.networking.k8s.io"]
  resources: ["gateways", "httproutes", "tcproutes"]
  verbs: ["*"]

TFTP is complex to expose from a Kubernetes pod due to the nature of the protocol: the TFTP server receives requests on port 69, but negotiates another, random port to continue. This port also needs to stay the same throughout the session for the server to recognize the client. This means there are only two ways to use the TFTP server:

  • using a load balancer compatible with TFTP,

  • using the host network for the TFTP pod. This can be achieved by setting the tftp.hostNetwork Helm chart value to true.
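
The second option corresponds to the following values fragment, using the tftp.hostNetwork value mentioned above:

```yaml
tftp:
  hostNetwork: true
```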

3. Configuration generation

Before deploying the Uyuni proxy, a configuration archive needs to be generated.

3.1. Generate the Proxy Configuration with Web UI

Procedure: Generating a Proxy Container Configuration Using Web UI
  1. In the Web UI, navigate to Systems > Proxy Configuration and fill in the required data:

  2. In the Proxy FQDN field, type the fully qualified domain name of the proxy.

  3. In the Parent FQDN field, type the fully qualified domain name of the Uyuni Server or another Uyuni Proxy.

  4. In the Proxy SSH port field, type the SSH port on which the SSH service listens on the Uyuni Proxy. It is recommended to keep the default, 8022.

  5. In the Max Squid cache size [MB] field, type the maximum allowed size for the Squid cache. It is recommended to use at most 80% of the storage available to the containers.

    2 GB is the default Squid cache size. Adjust this value for your environment.

  6. In the SSL certificate selection list, choose whether a new server certificate should be generated for the Uyuni Proxy or an existing one should be used. Generated certificates can be considered Uyuni built-in (self-signed) certificates. If the Uyuni Server runs on Kubernetes, the generated certificate option is not available and is replaced with no SSL certificate, as certificates are managed outside the containers.

    Depending on the choice, provide either the path to the signing CA certificate, to generate a new certificate, or the paths to an existing certificate and its key to be used as the proxy certificate.

    The CA certificates generated by the server are stored in the /var/lib/containers/storage/volumes/root/_data/ssl-build directory.

    For more information about existing or custom certificates and the concept of corporate and intermediate certificates, see Import SSL Certificates.

  7. Click Generate to register a new proxy FQDN in the Uyuni Server and generate a configuration archive (config.tar.gz) containing details for the container host.

  8. After a few moments you are presented with a file to download. Save this file locally.

3.2. Generate Proxy Configuration With spacecmd and Self-Signed Certificate

You can generate a proxy configuration using spacecmd. This is only possible if the Uyuni Server runs on Podman and has a self-signed root CA certificate.

Procedure: Generating Proxy Configuration with spacecmd and Self-Signed Certificate
  1. SSH into your container host.

  2. Execute the following command replacing the Server and Proxy FQDN:

    mgrctl exec -ti 'spacecmd proxy_container_config_generate_cert -- dev-pxy.example.com dev-srv.example.com 2048 email@example.com -o /tmp/config.tar.gz'
  3. Copy the generated configuration from the server container:

    mgrctl cp server:/tmp/config.tar.gz .

3.3. Generate Proxy Configuration With spacecmd and Custom Certificate

You can generate a Proxy configuration using spacecmd for custom certificates rather than the default self-signed certificates.

Procedure: Generating Proxy Configuration with spacecmd and Custom Certificate
  1. SSH into your Server container host.

  2. Execute the following commands, replacing the Server and Proxy FQDN:

    for f in ca.crt proxy.crt proxy.key; do
      mgrctl cp $f server:/tmp/$f
    done
    mgrctl exec -ti 'spacecmd proxy_container_config -- -p 8022 pxy.example.com srv.example.com 2048 email@example.com /tmp/ca.crt /tmp/proxy.crt /tmp/proxy.key -o /tmp/config.tar.gz'
  3. If your setup uses an intermediate CA, copy it as well and include it in the command with the -i option (which can be provided multiple times if needed):

    mgrctl cp intermediateCA.pem server:/tmp/intermediateCA.pem
    mgrctl exec -ti 'spacecmd proxy_container_config -- -p 8022 -i /tmp/intermediateCA.pem pxy.example.com srv.example.com 2048 email@example.com /tmp/ca.crt /tmp/proxy.crt /tmp/proxy.key -o /tmp/config.tar.gz'
  4. Copy the generated configuration from the server container:

    mgrctl cp server:/tmp/config.tar.gz .

3.4. Generate Proxy Configuration With spacecmd and no Certificate

You can generate a proxy configuration using spacecmd with no TLS certificates. This is needed for Uyuni running on Kubernetes, as the certificates are handled outside of the containers.

Procedure: Generating Proxy Configuration with spacecmd and no Certificate
  1. SSH into your Server container host.

  2. Execute the following commands, replacing the Server and Proxy FQDN:

    mgrctl exec -ti 'spacecmd proxy_container_config_nossl -- -p 8022 pxy.example.com srv.example.com 2048 email@example.com -o /tmp/config.tar.gz'
  3. Copy the generated configuration from the server container:

    mgrctl cp server:/tmp/config.tar.gz .

4. Deploying the Uyuni Proxy Helm Chart

Copy the generated configuration tar.gz file, extract it, and then install using helm:

helm install uyuni-proxy \
    oci://registry.opensuse.org/uyuni/proxy-helm \
    -n $NAMESPACE \
    --description "Proxy installation" \
    --set-file global.config=path/to/config.yaml \
    --set-file global.ssh=path/to/ssh.yaml \
    --set-file global.httpd=path/to/httpd.yaml

When setting multiple values, using a YAML values file is recommended instead of passing several --set parameters. Refer to the helm command help for more details.
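
For example, the remaining tunables can go into a values file while the extracted configuration files stay in --set-file options. The file names and the paths below are illustrative; the tftp.hostNetwork value is described in the ports section above.

```shell
# Sketch: values file for tunables, --set-file for the extracted
# configuration files. $NAMESPACE is the namespace dedicated to Uyuni.
cat > proxy-values.yaml <<'EOF'
tftp:
  hostNetwork: true
EOF

helm install uyuni-proxy \
    oci://registry.opensuse.org/uyuni/proxy-helm \
    -n "$NAMESPACE" \
    -f proxy-values.yaml \
    --set-file global.config=path/to/config.yaml \
    --set-file global.ssh=path/to/ssh.yaml \
    --set-file global.httpd=path/to/httpd.yaml
```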

5. Example helm charts

Some Helm charts using the proxy-helm chart can be found in the main branch of the uyuni-charts git repository. They showcase how TLS certificates can be generated using cert-manager and trust-manager. These examples may assume Kubernetes cluster administrator permissions.