VHM and SUSE CaaS Platform

You can use a Uyuni VHM to gather instances from SUSE CaaS Platform.

The VHM allows Uyuni to obtain and report information about your virtual machines. For more information on VHMs, see client-configuration:vhm.adoc.

Onboarding CaaSP nodes

You can register each SUSE CaaS Platform node to Uyuni using the same method as you would a Salt client. For more information, see client-configuration:registration-overview.adoc.

We recommend that you create an activation key to associate SUSE CaaS Platform channels, and to onboard the associated nodes. For more on activation keys, see client-configuration:clients-and-activation-keys.adoc.

If you are using cloud-init, we recommend that you use a bootstrap script in the cloud-init configuration. For more on bootstrapping, see client-configuration:registration-bootstrap.adoc.
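As a sketch, assuming a bootstrap script has already been generated and published on the Uyuni Server, a cloud-init user-data fragment could fetch and run it on first boot. The host name and script name below are examples only:

```yaml
#cloud-config
runcmd:
  # Example only: fetch the bootstrap script published on the Uyuni
  # Server and run it to register the node.
  - curl -Sks https://uyuni.example.com/pub/bootstrap/bootstrap.sh | /bin/bash
```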

When you have added the SUSE CaaS Platform nodes to Uyuni, the system lock Salt formula is automatically applied to the registered systems to prevent unintended actions on the clients. When a system is locked, the Web UI shows a warning; you can still schedule actions using the Web UI or the API, but they will fail. For more information about system locks, see client-configuration:system-locking.adoc.

You can prevent the System Lock formula from being applied automatically by editing the configuration file. Open /etc/rhn/rhn.conf and add this line at the end of the file:

java.automatic_system_lock_cluster_nodes = false

Restart the spacewalk service to pick up the changes:

spacewalk-service restart
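The change can also be scripted. This is a minimal sketch that appends the option to a temporary copy of the file so it is safe to run anywhere; on a real Uyuni Server you would edit /etc/rhn/rhn.conf directly as root and then restart the services:

```shell
# Sketch: append the option to a temporary copy of rhn.conf so these
# commands are safe to run anywhere. On a real Uyuni Server, edit
# /etc/rhn/rhn.conf directly as root instead.
conf=$(mktemp)
echo 'java.automatic_system_lock_cluster_nodes = false' >> "$conf"
grep 'automatic_system_lock_cluster_nodes' "$conf"
# On the real server, restart the services afterwards:
# spacewalk-service restart
```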

Updates related to Kubernetes are managed using the skuba-update tool. For more information, see https://documentation.suse.com/suse-caasp/4/html/caasp-admin.
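On the nodes themselves, skuba-update is driven by a systemd timer; the timer name used here (skuba-update.timer) follows the CaaS Platform 4 administration guide and is an assumption of this sketch. The check is guarded so it degrades gracefully on systems without systemd:

```shell
# Sketch: inspect the timer that drives skuba-update on a CaaS Platform
# node. The timer name is an assumption based on the CaaSP 4 admin guide.
if command -v systemctl >/dev/null 2>&1; then
    systemctl list-timers 'skuba-update*' --no-pager 2>/dev/null || true
    timer_check=done
else
    echo "systemctl not available; run this on a SUSE CaaS Platform node"
    timer_check=skipped
fi
```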

When using Salt or Uyuni (either via the Web UI or the API) on any SUSE CaaS Platform node, do not perform any of the following operations:

  • Do not apply a patch that is marked as interactive

  • Do not mark a system to automatically install patches

  • Do not perform an SP migration

  • Do not reboot a node

  • Do not issue any power management action via Cobbler

  • Do not install a package if it breaks or conflicts with the patterns-caasp-Node-x.y pattern

  • Do not remove a package if it breaks or conflicts with the patterns-caasp-Node-x.y pattern, or is one of the packages related to it

  • Do not upgrade a package if it breaks or conflicts with the patterns-caasp-Node-x.y pattern, or is one of the packages related to it

Issuing these operations could render your SUSE CaaS Platform cluster unusable. Uyuni does not prevent you from issuing them if the system is not locked.

Autoinstallation Profile of a SUSE CaaS Platform 4 Node

SUSE CaaS Platform 4 provides an AutoYaST profile that you can use to autoinstall a node. The profile is in the patterns-caasp-Management package. For more information about the profile, see https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-deployment/#_autoyast_preparation.

For an example script based on the SUSE CaaS Platform 4 template, customized to make use of Uyuni, see client-configuration:caasp-autoinstallation-example.adoc.

Manage a SUSE CaaS Platform Cluster With Uyuni

You can use Uyuni to manage one or more existing SUSE CaaS Platform clusters.

Only SUSE CaaS Platform 4 is currently supported.

Before you begin, ensure you have installed your SUSE CaaS Platform cluster.

Elect a Management Node

To manage a SUSE CaaS Platform cluster, you need to elect a client as the management node for the cluster. The management node cannot be part of the cluster, and it must have the SUSE CaaS Platform channels associated with it before you begin. You can use a single management node for multiple clusters, as long as the clusters are all of the same kind.

Procedure: Electing a Management Node
  1. In the Uyuni Web UI, navigate to Systems  System List and click the name of the client to elect as the management node.

  2. Navigate to the Formulas  Configuration tab, and check the CaaSP Management Node formula.

  3. Click Save and apply the highstate.

You will not be able to use the management node until the highstate has been completed.

List all known clusters by navigating to Clusters  Overview. This list displays all existing clusters, along with the cluster type and the management node each cluster is associated with. It also shows the nodes within the cluster, if the nodes are registered to Uyuni. For the nodes within a cluster, additional information from skuba and the Kubernetes API is shown, including the role, status, and whether any updates are available.

For more information about the data available for nodes, see https://documentation.suse.com/suse-caasp/4/html/caasp-admin/_cluster_updates.html.

Before you add the cluster in the Uyuni Web UI, you need to prepare the cluster configuration on the management node:

  1. Copy the skuba configuration directory from your cluster to the management node. This is the directory that the skuba service creates after the cluster has been bootstrapped. Take a note of the new file location for adding the cluster in the Uyuni Web UI.

  2. Provide a way to authenticate. There are two methods you can use; choose the one that best suits your environment:

    • Copy the passwordless private SSH key used to access the cluster nodes to the Uyuni Server, and take a note of the file location. You need the current keys, and keys for any clients that you want to use in the future.

    • You can use an ssh-agent socket, and provide the path to the socket when setting up the cluster. There are two ways of using the ssh-agent with SUSE CaaS Platform:

      • By using ssh-agent locally:

        • Start the ssh-agent locally: eval $(ssh-agent)

        • Add the SSH key: ssh-add <key>

        • The socket used to access the agent is available in the $SSH_AUTH_SOCK environment variable.

      • Forward the ssh-agent to the management node from another machine:

        • From your source machine: ssh -A <management node>. The socket path is available in the $SSH_AUTH_SOCK environment variable on the management node.

If you are using the ssh-agent method, the path of the socket changes every time a new ssh-agent is started or a new ssh -A connection is opened. The socket path can be updated at any time from the Uyuni Web UI, and can also be overridden when starting any cluster action that requires SSH access.
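The preparation steps above can be sketched as follows. The copy of the skuba configuration directory is simulated with local temporary directories so the sketch runs anywhere (the real transfer between hosts is shown as a comment), and all host names, paths, and key names are examples only:

```shell
# Step 1 (sketch): copy the skuba cluster definition directory to the
# management node. Simulated locally here; in production you would use
# scp or rsync between hosts (see comment).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/my-cluster" && touch "$src/my-cluster/kubeconfig"
cp -r "$src/my-cluster" "$dst/"   # real: scp -r ~/my-cluster root@mgmt-node:/root/
echo "cluster config copied: $(ls "$dst/my-cluster")"

# Step 2 (sketch): start a local ssh-agent and note the socket path to
# enter in the Uyuni Web UI. Guarded in case openssh is not installed.
if command -v ssh-agent >/dev/null 2>&1; then
    eval "$(ssh-agent -s)" >/dev/null   # exports SSH_AUTH_SOCK and SSH_AGENT_PID
    echo "agent socket: $SSH_AUTH_SOCK"
    # ssh-add ~/.ssh/id_caasp           # example key path: add the cluster key
    ssh-agent -k >/dev/null 2>&1        # stop the agent again when done
else
    echo "agent socket: (ssh-agent not installed)"
fi
```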

Manage Clusters

To manage a cluster in Uyuni, add the cluster in the Web UI.

Procedure: Adding an Existing Cluster
  1. In the Uyuni Web UI, navigate to Clusters  Overview and click FIXME.

  2. Follow the prompts to provide information about your cluster, including the cluster type, and select the management node to associate.

  3. Type the path to the skuba configuration file for the cluster.

  4. Type the path to the passwordless SSH key you want to use, or the path to the ssh-agent socket.

  5. Type a name, label, and description for the cluster.

  6. Click FIXME.

For each cluster you manage with Uyuni, a corresponding system group is created. By default, the system group is called Cluster <cluster_name>. Refresh the system group to update the list of nodes. Only nodes known to Uyuni are shown.

You can remove clusters from Uyuni by navigating to Clusters  Overview, selecting the cluster to be deleted, and clicking Delete Cluster.

Deleting a cluster removes it from Uyuni; it does not delete the cluster nodes. Workloads running on the cluster continue uninterrupted.

Manage Nodes

When you have the cluster created in Uyuni, you can manage nodes within the cluster.

Before you add a new node to the cluster, check that the management node can access the node you want to add, using passwordless SSH or the forwarded ssh-agent socket.

You also need to ensure that the node you want to add is registered to Uyuni, and has a SUSE CaaS Platform channel assigned.

Procedure: Adding Nodes to a Cluster
  1. In the Uyuni Web UI, navigate to Clusters  Overview and click Join Node.

  2. Select the nodes to add from the list of available nodes. The list of available nodes includes only nodes that are registered to Uyuni, are not management nodes, and are not currently part of any cluster.

  3. Follow the prompts to enter the SUSE CaaS Platform parameters for the nodes to be added.

  4. OPTIONAL: Specify a custom ssh-agent socket that is valid only for the nodes that are being added.

  5. Click Save to schedule an action to add the nodes. During this action, Uyuni prepares the nodes for joining by disabling swap, then joins the nodes to the cluster.

Procedure: Removing Nodes from a Cluster
  1. In the Uyuni Web UI, navigate to Clusters  Overview, check the nodes to remove, and click Remove Node.

  2. Follow the prompts to define the parameters for the nodes to be removed.

  3. OPTIONAL: Specify a custom ssh-agent socket that is valid only for the nodes that are being removed.

  4. Click Save to schedule an action to remove the nodes.

Upgrade the Cluster

If the cluster has available updates, you can use Uyuni to schedule and manage the upgrade.

Uyuni upgrades all control planes first, and then upgrades the workers. For more information, see https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-admin/#_cluster_updates.

Procedure: Upgrading the Cluster
  1. In the Uyuni Web UI, navigate to Clusters  Overview, and click the cluster to upgrade.

  2. OPTIONAL: There are no SUSE CaaS Platform parameters available for you to customize for the upgrade. However, you can specify a custom ssh-agent socket that is valid only for the nodes that are being upgraded.

  3. Click Save to schedule an action to upgrade the cluster.

Uyuni only interacts with skuba to upgrade the cluster. Any other required actions, such as configuration changes, are not issued by Uyuni.

For more information about upgrading, see https://www.suse.com/releasenotes/x86_64/SUSE-CAASP/4.