Server Major Version Upgrade
Migrating Uyuni from one major version to the next major version must be done using two systems. The migration happens from the original source system to a new target system. In-place migration is not available.
While this means that you temporarily need two systems, it also means that the source system remains fully functional. This is useful to reduce downtime, and can act as a fallback if the migration is not successful.
Given the complexity of this process, if you experience any problems during the migration, you will need to start over from the beginning. The migration involves exporting the entire database from the source system and restoring it on the target system. Additionally, all of the channels and packages need to be copied to the target system. You should expect the entire process to take several hours.
Migrating to 4.x from an older version such as version 3.2 can be difficult. We strongly recommend that you contact SUSE Consulting to assist with this process.
The source system must be running Uyuni 3.2 with all the latest updates applied. Before you start, ensure that the system is up to date and all updates have been installed successfully.
It is important that PostgreSQL 10 is already running on your Uyuni 3.2 system. For more information, see Database Migration from Version 9 to 10.
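One way to verify the running database version is to check the PG_VERSION marker file in the data directory; the path below assumes the default PostgreSQL location on the source system.

```shell
# Prints the PostgreSQL major version of this data directory
# (default path; adjust if your installation uses a different location)
cat /var/lib/pgsql/data/PG_VERSION
```

If this prints anything other than 10, perform the database migration before continuing.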
During the migration, the database on the source system is exported, compressed, and temporarily stored on the target system. Compression is done with gzip using the default compression options; maximum compression yields only about 10% additional space savings.
Before you begin, check the size of the database on the source system with:
du -sch /var/lib/pgsql/data
Ensure you have at least 30% of the total database size available in
/var/spacewalk/tmp on the target system.
The /var/spacewalk/tmp directory will be created if it does not exist.
If you want the export to be stored somewhere else, change the
$TMPDIR variable at the beginning of the migration script.
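The disk space requirement above can be sketched as a small helper; the function name and paths here are illustrative, not part of Uyuni.

```shell
# Illustrative helper (not part of Uyuni): check that the filesystem
# holding DEST has free space of at least 30% of the size of SRC.
check_export_space() {
    src="$1"
    dest="$2"
    # Size of the database directory, in KiB, and 30% of it
    need_kb=$(( $(du -sk "$src" | cut -f1) * 30 / 100 ))
    # Free space on the filesystem holding the destination, in KiB
    free_kb=$(df -Pk "$dest" | awk 'NR==2 {print $4}')
    if [ "$free_kb" -ge "$need_kb" ]; then
        echo "OK: ${free_kb} KiB free, ${need_kb} KiB needed"
    else
        echo "INSUFFICIENT: ${free_kb} KiB free, ${need_kb} KiB needed"
    fi
}

# On a real system you would run something like:
#   check_export_space /var/lib/pgsql/data /var/spacewalk/tmp
```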
As the target system, install Uyuni Server 4.1 using the unified installer. Uyuni Server 4.1 is based on SUSE Linux Enterprise 15 SP2. For more information about installing Uyuni, see Installing Uyuni 2020.07 Server.
From the command prompt, run the YaST Uyuni setup tool:
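On Uyuni and SUSE Manager systems the setup module is typically started with the command below; the module name is an assumption based on standard packaging, so verify it on your installation.

```shell
# Assumed module name from standard Uyuni/SUSE Manager packaging
yast2 susemanager_setup
```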
On the setup screen, check Migrate a SUSE Manager compatible server. In the Hostname of source SUSE Manager Server field, enter the source system hostname and domain.
Enter the database credentials of the source system.
Enter the IP address of the target system, or accept the default value if it is correct. If multiple IP addresses are available, ensure you specify the correct one.
Follow the prompts to complete the migration. YaST will terminate after the process is complete.
Be careful when you specify the database credentials. Ensure you use the same database parameters as the source system. Even if you intend to change them later on, the credentials must match during the migration.
During the migration process, the target system will fake its hostname to match the source system. Do not change the hostname during the process. Be careful when you log in to your systems during migration, as they will both show the same hostname.
To speed up the actual migration and reduce server downtime, you can copy the system data in advance. For more information, see Copy System Data to the Target System.
When your target system is ready, begin the migration with this command:
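In standard Uyuni packaging the migration is started with the mgr-setup script in migration mode; the path and the -m flag are assumptions, so check the script's help output on your system first.

```shell
# Assumed path and flag from standard Uyuni packaging; verify locally
/usr/lib/susemanager/bin/mgr-setup -m
```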
While the data migration is in progress, the Uyuni services are shut down. This is to ensure that no data is written to the database during the migration.
This command reads the data that was gathered during the setup procedure, sets up Uyuni on the new target system, and transfers all of the data from the source system.
Several operations need to be performed on the source system using SSH, so you will be prompted once for the root password of the source system.
A temporary SSH key named migration-key is created and installed on the source system, so you need to give the root password only once. The temporary SSH key will be deleted after the migration is finished.
Depending on the size of the installation, the migration can take several hours.
When the migration has finished successfully, a migration complete message is shown, and you are prompted to shut down the source system.
When you have received the migration complete message, you need to reconfigure the network of the target system to use the same IP address and hostname as the original system.
You will also need to restart the target system before it can be used.
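A minimal sketch of the renaming step, assuming systemd's hostnamectl is available; uyuni.example.com is a placeholder for the original source hostname.

```shell
# Placeholder FQDN: replace with the hostname of the original source system
hostnamectl set-hostname uyuni.example.com
# Reconfigure the IP address with your usual network tooling
# (for example YaST), then restart the system before first use
reboot
```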
A complete migration can consume a lot of time, mostly because of the amount of data that must be copied. Here are some hints on how to reduce the required downtime.
These numbers from a test installation illustrate the approximate time it takes to export and import a small 1.8 GB database:
14:53:37 Dumping remote database to /var/spacewalk/tmp/susemanager.dmp.gz on target system. Please wait...
14:58:14 Database successfully dumped. Size is: 506M
14:58:29 Importing database dump. Please wait...
15:05:11 Database dump successfully imported.
In this example, exporting the database took around five minutes, and importing the export onto the target system took an additional seven minutes. For big installations this can take up to several hours.
You also need to account for the time it takes to copy all the package data to the target system. Depending on your network infrastructure and hardware, this can also take a significant amount of time.
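To estimate the volume of package data to be transferred, you can check the size of the package store on the source system; using /var/spacewalk as the package location is an assumption based on the default layout.

```shell
# Approximate size of the channel and package data to copy
# (default location assumed; adjust for your installation)
du -sh /var/spacewalk
```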
You can copy the data at any time before the migration process. Copying the data before you migrate can significantly reduce the amount of downtime required when you perform the migration.
At any time before the migration, you can copy data with this command:
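In standard Uyuni packaging the pre-copy is typically done with the mgr-setup script in refresh mode; the path and the -r flag are assumptions, so verify them against the script's help output.

```shell
# Assumed path and flag from standard Uyuni packaging; verify locally
/usr/lib/susemanager/bin/mgr-setup -r
```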
This command performs a copy using rsync, and does not require system downtime.
When you perform the migration, some data will still need to be copied, but it will be significantly reduced if you have recently copied the data.
This can make a significant difference to the amount of downtime required for a migration.
If you have package data on external storage, you do not need to copy this data to the new system. For example, your package data might reside on an NFS mount.
Follow this procedure after migration is finished, and before you start your target system for the first time.
Open the migration script, locate the rsync command on or around line 442, delete or comment it out, and save the file.
Ensure your external storage is mounted on the target system.
If /srv/www/htdocs/pub exists on your external storage, ensure it is mounted.
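To confirm that the storage is mounted on the target system, findmnt can be used; the path here is an example.

```shell
# Succeeds (exit 0) and prints mount details only if the
# directory is a mount point
findmnt /srv/www/htdocs/pub
```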
Start the upgraded target system for the first time, and ensure it can access your external storage device.
All files and directories that have not been copied by the migration tool will need to be manually copied to the new system.