Migrating the Uyuni Server to a Containerized Environment

To migrate a legacy Uyuni Server (RPM installation) to a container, a new machine is required.

It is not possible to perform an in-place migration.

Self-trusted GPG keys, and GPG keys that are trusted only in the RPM database, are not migrated. As a result, synchronizing channels with spacewalk-repo-sync can fail.

The administrator must migrate these keys manually from the previous Uyuni installation to the container host after the actual server migration.

  1. Copy the keys from the previous Uyuni server to the container host of the new server.

  2. Later, add each key to the migrated server with the command mgradm gpg add <PATH_TO_KEY_FILE>.
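    For example, assuming a key file was exported on the legacy server as /tmp/customer-build.key (a placeholder name for this sketch):

    # On the new container host: fetch the key file from the legacy server
    scp root@<oldserver.fqdn>:/tmp/customer-build.key /tmp/customer-build.key
    # Import the key into the migrated server
    mgradm gpg add /tmp/customer-build.key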

The migration procedure currently does not include any hostname renaming functionality. The fully qualified domain name (FQDN) on the new server will remain identical to that on the old server. Therefore, following migration, it will be necessary to manually adjust the DNS records to point to the new server.

1. Initial Preparation on the Legacy Server

Procedure: Initial preparation on the legacy server
  1. Stop the Uyuni services:

    spacewalk-service stop
  2. Stop the PostgreSQL service:

    systemctl stop postgresql

2. Prepare the SSH Connection

Procedure: Preparing the SSH connection
  1. Ensure that an SSH key exists for root on the new 2024.10 server. If a key does not exist, create it with:

    ssh-keygen -t rsa
  2. The SSH configuration and agent should be ready on the new server host for a passwordless connection to the legacy server.

    To establish a passwordless connection, the migration script relies on an SSH agent running on the new server. If the agent is not active yet, start it by running eval $(ssh-agent). Then add the SSH key to the running agent with ssh-add followed by the path to the private key. You will be prompted for the passphrase of the private key during this process.
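    For example, assuming the private key is at the default location ~/.ssh/id_rsa:

    # Start the agent if it is not already running
    eval $(ssh-agent)
    # Add the key; you will be prompted for its passphrase
    ssh-add ~/.ssh/id_rsa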

  3. Copy the public SSH key to the legacy Uyuni Server (<oldserver.fqdn>) with ssh-copy-id. Replace <oldserver.fqdn> with the FQDN of the legacy server:

    ssh-copy-id <oldserver.fqdn>
    The SSH key will be copied into the legacy server's ~/.ssh/authorized_keys file.
    For more information, see the ssh-copy-id manpage.
  4. Establish an SSH connection from the new server to the legacy Uyuni Server to verify that no password is needed and that there is no problem with the host fingerprint. In case of trouble, remove old fingerprints from the ~/.ssh/known_hosts file and try again. The fingerprint of the legacy server will be stored in the local ~/.ssh/known_hosts file.
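    For example, to clear a stale fingerprint and verify the passwordless connection:

    # Remove an old fingerprint for the legacy server, if one exists
    ssh-keygen -R <oldserver.fqdn>
    # This connection must succeed without a password prompt
    ssh root@<oldserver.fqdn> exit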

3. Perform the Migration

When planning your migration from a legacy Uyuni to a containerized Uyuni, ensure that your target instance meets or exceeds the specifications of the old setup. This includes, but is not limited to, memory (RAM), CPU cores, storage, and network bandwidth.

Procedure: Performing the Migration
  1. This step is optional. If custom persistent storage is required for your infrastructure, use the mgr-storage-server tool.

    • For more information, see mgr-storage-server --help. This tool simplifies creating the container storage and database volumes.

    • Use the command in the following manner:

      mgr-storage-server <storage-disk-device> [<database-disk-device>]

      For example:

      mgr-storage-server /dev/nvme1n1 /dev/nvme2n1

      This command will create the persistent storage volumes at /var/lib/containers/storage/volumes.

      For more information, see List of persistent storage volumes.

  2. Execute the following command to install a new Uyuni server. Replace <oldserver.fqdn> with the FQDN of the legacy server:

    mgradm migrate podman <oldserver.fqdn>
  3. Migrate trusted SSL CA certificates.

Trusted SSL CA certificates that were installed as part of an RPM and stored on the legacy Uyuni Server in the /usr/share/pki/trust/anchors/ directory are not migrated. Because SUSE does not install RPM packages in the container, the administrator must migrate these certificate files manually from the legacy installation after migration:

  1. Copy the file from the legacy server to the new server. For example, as /local/ca.file.

  2. Copy the file into the container with:

    mgrctl cp /local/ca.file server:/etc/pki/trust/anchors/
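Putting both steps together, and assuming the certificate on the legacy server is /usr/share/pki/trust/anchors/ca.file (the example name used above):

  # On the new server: fetch the certificate from the legacy server
  scp root@<oldserver.fqdn>:/usr/share/pki/trust/anchors/ca.file /local/ca.file
  # Copy it into the container
  mgrctl cp /local/ca.file server:/etc/pki/trust/anchors/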

After successfully running the mgradm migrate command, the Salt setup on all clients will still point to the legacy server.

To redirect them to the new 2024.10 server, it is required to rename the new server at the infrastructure level (DHCP and DNS) to use the same fully qualified domain name (FQDN) and IP address as the legacy server.
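To check which server a client currently points to, inspect the Salt minion configuration on that client. As a sketch, on a traditionally registered client the master is usually set in /etc/salt/minion.d/susemanager.conf; clients using the Salt bundle keep it under /etc/venv-salt-minion/minion.d/ instead:

  # On a client: show the configured Salt master
  grep master /etc/salt/minion.d/susemanager.conf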

4. Prepare for Kubernetes

Before executing the migration with the mgradm migrate command, it is essential to predefine the persistent volumes, especially because the migration job starts the container from scratch. For comprehensive guidance on preparing these volumes, see List of persistent storage volumes in the installation section.
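As an illustration only, a minimal PersistentVolume backed by a host path could be created as follows. The volume name var-spacewalk, the capacity, and the hostPath location are assumptions for this sketch; take the authoritative volume names and sizes from List of persistent storage volumes:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: var-spacewalk          # assumed name; see the volume list
spec:
  capacity:
    storage: 100Gi             # assumed size; adjust for your repositories
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/var-spacewalk   # assumed local path for this sketch
EOF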

5. Migrating

Execute the following command to install a new Uyuni server, replacing <oldserver.fqdn> with the FQDN of the legacy server:

mgradm migrate podman <oldserver.fqdn>

or

mgradm migrate kubernetes <oldserver.fqdn>

After successfully running the mgradm migrate command, the Salt setup on all clients will still point to the old server. To redirect them to the new server, it is required to rename the new server at the infrastructure level (DHCP and DNS) to use the same FQDN and IP address as the old server.