Migrating the Uyuni Server to a Containerized Environment
To migrate a legacy Uyuni Server (RPM installation) to a container, a new machine is required.
It is not possible to perform an in-place migration.
Self-trusted GPG keys are not migrated. GPG keys that are trusted only in the RPM database are not migrated, so synchronizing channels signed with such keys will fail. The administrator must migrate these keys manually from the previous Uyuni installation to the container host after the actual server migration.
The migration procedure currently does not include any hostname renaming functionality. The fully qualified domain name (FQDN) on the new server will remain identical to that on the old server. Therefore, following migration, it will be necessary to manually adjust the DNS records to point to the new server.
1. Initial Preparation on the Legacy Server
- Stop the Uyuni services:
  spacewalk-service stop
- Stop the PostgreSQL service:
  systemctl stop postgresql
2. Prepare the SSH Connection
- Ensure that an SSH key exists for root on the new 2024.10 server. If a key does not exist, create it with:
  ssh-keygen -t rsa
- The SSH configuration and agent should be ready on the new server host for a passwordless connection to the legacy server.
  To establish a passwordless connection, the migration script relies on an SSH agent running on the new server. If the agent is not active yet, start it by running eval $(ssh-agent). Then add the SSH key to the running agent with ssh-add followed by the path to the private key. You will be prompted to enter the password for the private key during this process.
- Copy the public SSH key to the legacy Uyuni Server (<oldserver.fqdn>) with ssh-copy-id. Replace <oldserver.fqdn> with the FQDN of the legacy server:
  ssh-copy-id <oldserver.fqdn>
  The SSH key will be copied into the legacy server's ~/.ssh/authorized_keys file. For more information, see the ssh-copy-id manpage.
- Establish an SSH connection from the new server to the legacy Uyuni Server to check that no password is needed. There must also be no problems with the host fingerprint. In case of trouble, remove old fingerprints from the ~/.ssh/known_hosts file, then try again. The fingerprint will be stored in the local ~/.ssh/known_hosts file.
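The SSH preparation above can be sketched end to end as follows. This is a minimal sketch assuming OpenSSH; the FQDN, the scratch key path, and the empty passphrase (-N "") are assumptions made so the example is self-contained and non-interactive. On a real server, keep the key under /root/.ssh and set a passphrase, as described above.

```shell
OLD=oldserver.example.com          # placeholder for <oldserver.fqdn>
KEY=$(mktemp -d)/id_rsa

ssh-keygen -t rsa -N "" -f "$KEY" >/dev/null   # create the key pair
eval "$(ssh-agent)" >/dev/null                 # start an agent for this shell
ssh-add "$KEY" 2>/dev/null                     # load the key into the agent

# BatchMode makes ssh fail instead of prompting, so a missing key or a
# rejected fingerprint is detected non-interactively.
check_passwordless() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true 2>/dev/null
}

if ! check_passwordless "$OLD"; then
    # A stale fingerprint is a common cause: drop it and retry. The new
    # fingerprint is stored in ~/.ssh/known_hosts on the next connection.
    ssh-keygen -R "$OLD" 2>/dev/null
    check_passwordless "$OLD" || echo "cannot reach $OLD without a password" >&2
fi
```

Once the check succeeds, the migration script can connect without any interactive prompt.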
3. Perform the Migration
When planning your migration from a legacy Uyuni to a containerized Uyuni, ensure that your target instance meets or exceeds the specifications of the old setup. This includes, but is not limited to, CPU, memory, and disk storage.
- This step is optional. If custom persistent storage is required for your infrastructure, use the mgr-storage-server tool.
  For more information, see mgr-storage-server --help. This tool simplifies creating the container storage and database volumes.
  Use the command in the following manner:
  mgr-storage-server <storage-disk-device> [<database-disk-device>]
  For example:
  mgr-storage-server /dev/nvme1n1 /dev/nvme2n1
  This command will create the persistent storage volumes at /var/lib/containers/storage/volumes. For more information, see List of persistent storage volumes.
- Execute the following command to install a new Uyuni server. Replace <oldserver.fqdn> with the FQDN of the legacy server:
  mgradm migrate podman <oldserver.fqdn>
- Migrate trusted SSL CA certificates.
  Trusted SSL CA certificates that were installed as part of an RPM and stored on the legacy Uyuni Server are not migrated automatically. Copy these certificates to the container host manually.
After the migration has completed successfully, clients will still attempt to connect to the legacy server. To redirect them to the 2024.10 server, rename the new server at the infrastructure level (DHCP and DNS) so that it uses the same fully qualified domain name and IP address as the legacy server.
4. Prepare for Kubernetes
Before executing the migration with the mgradm migrate command, it is essential to predefine Persistent Volumes, especially considering that the migration job initiates the container from scratch.
For more information, see the installation section for comprehensive guidance on preparing these volumes in List of persistent storage volumes.
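As an illustration only, a pre-created PersistentVolume might look like the following sketch. The volume name, capacity, and hostPath backing are assumptions for this example; the set of volumes the migration actually requires is given in List of persistent storage volumes.

```yaml
# Illustrative sketch: adjust the name, capacity, and backing storage
# (hostPath here) to your cluster; a real deployment typically uses a
# storage class rather than hostPath.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: var-pgsql          # hypothetical volume name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/var-pgsql
```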
5. Migrating
Execute the following command to install a new Uyuni server, replacing <oldserver.fqdn> with the FQDN of the old server:
mgradm migrate podman <oldserver.fqdn>
or
mgradm migrate kubernetes <oldserver.fqdn>