[DEPRECATION WARNING]: ANSIBLE_COLLECTIONS_PATHS option, does not fit var naming
standard, use the singular form ANSIBLE_COLLECTIONS_PATH instead. This feature
will be removed from ansible-core in version 2.19. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
No config file found; using defaults
running playbook inside collection fedora.linux_system_roles

PLAY [Test qdevice - all options] **********************************************

TASK [Gathering Facts] *********************************************************
Thursday 25 July 2024  08:24:28 -0400 (0:00:00.008)       0:00:00.008 *********
[WARNING]: Platform linux on host managed_node1 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-core/2.17/reference_appendices/interpreter_discovery.html
for more information.
ok: [managed_node1]

TASK [Set qnetd address] *******************************************************
Thursday 25 July 2024  08:24:29 -0400 (0:00:01.149)       0:00:01.157 *********
ok: [managed_node1] => {
    "ansible_facts": {
        "__test_qnetd_address": "localhost"
    },
    "changed": false
}

TASK [Run test] ****************************************************************
Thursday 25 July 2024  08:24:29 -0400 (0:00:00.020)       0:00:01.178 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/template_qdevice.yml for managed_node1

TASK [Set up test environment] *************************************************
Thursday 25 July 2024  08:24:29 -0400 (0:00:00.022)       0:00:01.200 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Set node name to 'localhost' for single-node clusters] ***
Thursday 25 July 2024  08:24:29 -0400 (0:00:00.028)       0:00:01.228 *********
ok: [managed_node1] => {
    "ansible_facts": {
        "inventory_hostname": "localhost"
    },
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Ensure facts used by tests] *******
Thursday 25 July 2024  08:24:29 -0400 (0:00:00.023)       0:00:01.252 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "'distribution' not in ansible_facts",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
Thursday 25 July 2024  08:24:29 -0400 (0:00:00.016)       0:00:01.268 *********
ok: [managed_node1] => {
    "changed": false,
    "stat": {
        "exists": false
    }
}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  08:24:30 -0400 (0:00:00.419)       0:00:01.687 *********
ok: [managed_node1] => {
    "ansible_facts": {
        "__ha_cluster_is_ostree": false
    },
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Do not try to enable RHEL repositories] ***
Thursday 25 July 2024  08:24:30 -0400 (0:00:00.021)       0:00:01.709 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "ansible_distribution == 'RedHat'",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Copy nss-altfiles ha_cluster users to /etc/passwd] ***
Thursday 25 July 2024  08:24:30 -0400 (0:00:00.013)       0:00:01.722 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "__ha_cluster_is_ostree | d(false)",
    "skip_reason": "Conditional result was False"
}

TASK [Clean up test environment for qnetd] *************************************
Thursday 25 July 2024  08:24:30 -0400 (0:00:00.021)       0:00:01.744 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] ***
Thursday 25 July 2024  08:24:30 -0400 (0:00:00.030)       0:00:01.775 *********
changed: [managed_node1] => {
    "changed": true,
    "rc": 0,
    "results": [
        "Removed: corosync-qnetd-3.0.3-6.el10.x86_64"
    ]
}

TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present] ***
Thursday 25 July 2024  08:24:31 -0400 (0:00:01.285)       0:00:03.060 *********
ok: [managed_node1] => {
    "changed": false,
    "path": "/etc/corosync/qnetd",
    "state": "absent"
}

TASK [Set up test environment for qnetd] ***************************************
Thursday 25 July 2024  08:24:32 -0400 (0:00:00.453)       0:00:03.513 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Install qnetd packages] ***********
Thursday 25 July 2024  08:24:32 -0400 (0:00:00.034)       0:00:03.548 *********
changed: [managed_node1] => {
    "changed": true,
    "rc": 0,
    "results": [
        "Installed: corosync-qnetd-3.0.3-6.el10.x86_64"
    ]
}
lsrpackages: corosync-qnetd pcs

TASK [fedora.linux_system_roles.ha_cluster : Set up qnetd] *********************
Thursday 25 July 2024  08:24:33 -0400 (0:00:01.641)       0:00:05.190 *********
changed: [managed_node1] => {
    "changed": true,
    "cmd": [
        "pcs",
        "--start",
        "--",
        "qdevice",
        "setup",
        "model",
        "net"
    ],
    "delta": "0:00:01.203868",
    "end": "2024-07-25 08:24:35.343951",
    "failed_when_result": false,
    "rc": 0,
    "start": "2024-07-25 08:24:34.140083"
}

STDERR:

Quorum device 'net' initialized
Starting quorum device...
quorum device started

TASK [Back up qnetd] ***********************************************************
Thursday 25 July 2024  08:24:35 -0400 (0:00:01.635)       0:00:06.825 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/tests/ha_cluster/tasks/qnetd_backup_restore.yml for managed_node1

TASK [Create /etc/corosync/qnetd_backup directory] *****************************
Thursday 25 July 2024  08:24:35 -0400 (0:00:00.026)       0:00:06.852 *********
changed: [managed_node1] => {
    "changed": true,
    "gid": 0,
    "group": "root",
    "mode": "0700",
    "owner": "root",
    "path": "/etc/corosync/qnetd_backup",
    "secontext": "unconfined_u:object_r:etc_t:s0",
    "size": 6,
    "state": "directory",
    "uid": 0
}

TASK [Back up qnetd settings] **************************************************
Thursday 25 July 2024  08:24:35 -0400 (0:00:00.351)       0:00:07.204 *********
changed: [managed_node1] => {
    "changed": true,
    "cmd": [
        "cp",
        "--preserve=all",
        "--recursive",
        "/etc/corosync/qnetd",
        "/etc/corosync/qnetd_backup"
    ],
    "delta": "0:00:00.007859",
    "end": "2024-07-25 08:24:36.072003",
    "rc": 0,
    "start": "2024-07-25 08:24:36.064144"
}

TASK [Restore qnetd settings] **************************************************
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.342)       0:00:07.546 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "operation == \"restore\"",
    "skip_reason": "Conditional result was False"
}

TASK [Start qnetd] *************************************************************
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.014)       0:00:07.560 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "operation == \"restore\"",
    "skip_reason": "Conditional result was False"
}

TASK [Run HA Cluster role] *****************************************************
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.013)       0:00:07.574 *********
included: fedora.linux_system_roles.ha_cluster for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.057)       0:00:07.631 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/set_vars.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Ensure ansible_facts used by role] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.022)       0:00:07.653 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "__ha_cluster_required_facts | difference(ansible_facts.keys() | list) | length > 0",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Check if system is ostree] ********
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.023)       0:00:07.677 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "not __ha_cluster_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Set flag to indicate system is ostree] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.017)       0:00:07.694 *********
skipping: [managed_node1] => {
    "changed": false,
    "false_condition": "not __ha_cluster_is_ostree is defined",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Set platform/version specific variables] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.018)       0:00:07.712 *********
ok: [managed_node1] => (item=RedHat.yml) => {
    "ansible_facts": {
        "__ha_cluster_cloud_agents_packages": [],
        "__ha_cluster_fence_agent_packages_default": "{{ ['fence-agents-all'] + (['fence-virt'] if ansible_architecture == 'x86_64' else []) }}",
        "__ha_cluster_fullstack_node_packages": [
            "corosync",
            "libknet1-plugins-all",
            "resource-agents",
            "pacemaker",
            "openssl"
        ],
        "__ha_cluster_pcs_provider": "pcs-0.10",
        "__ha_cluster_qdevice_node_packages": [
            "corosync-qdevice",
            "bash",
            "coreutils",
            "curl",
            "grep",
            "nss-tools",
            "openssl",
            "sed"
        ],
        "__ha_cluster_repos": [],
        "__ha_cluster_role_essential_packages": [
            "pcs",
            "corosync-qnetd"
        ],
        "__ha_cluster_sbd_packages": [
            "sbd"
        ],
        "__ha_cluster_services": [
            "corosync",
            "corosync-qdevice",
            "pacemaker"
        ]
    },
    "ansible_included_var_files": [
        "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/RedHat.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml"
}
skipping: [managed_node1] => (item=CentOS.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__vars_file is file",
    "item": "CentOS.yml",
    "skip_reason": "Conditional result was False"
}
ok: [managed_node1] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "__ha_cluster_cloud_agents_packages": [
            "resource-agents-cloud",
            "fence-agents-aliyun",
            "fence-agents-aws",
            "fence-agents-azure-arm",
            "fence-agents-compute",
            "fence-agents-gce",
            "fence-agents-ibm-powervs",
            "fence-agents-ibm-vpc",
            "fence-agents-kubevirt",
            "fence-agents-openstack"
        ],
        "__ha_cluster_repos": [
            {
                "id": "highavailability",
                "name": "HighAvailability"
            },
            {
                "id": "resilientstorage",
                "name": "ResilientStorage"
            }
        ]
    },
    "ansible_included_var_files": [
        "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}
ok: [managed_node1] => (item=CentOS_10.yml) => {
    "ansible_facts": {
        "__ha_cluster_cloud_agents_packages": [
            "resource-agents-cloud",
            "fence-agents-aliyun",
            "fence-agents-aws",
            "fence-agents-azure-arm",
            "fence-agents-compute",
            "fence-agents-gce",
            "fence-agents-ibm-powervs",
            "fence-agents-ibm-vpc",
            "fence-agents-kubevirt",
            "fence-agents-openstack"
        ],
        "__ha_cluster_repos": [
            {
                "id": "highavailability",
                "name": "HighAvailability"
            },
            {
                "id": "resilientstorage",
                "name": "ResilientStorage"
            }
        ]
    },
    "ansible_included_var_files": [
        "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/CentOS_10.yml"
    ],
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS_10.yml"
}

TASK [fedora.linux_system_roles.ha_cluster : Set Linux Pacemaker shell specific variables] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.044)       0:00:07.756 *********
ok: [managed_node1] => {
    "ansible_facts": {},
    "ansible_included_var_files": [
        "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/vars/shell_pcs.yml"
    ],
    "changed": false
}

TASK [fedora.linux_system_roles.ha_cluster : Enable package repositories] ******
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.018)       0:00:07.775 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-package-repositories.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.024)       0:00:07.800 *********
ok: [managed_node1] => (item=RedHat.yml) => {
    "ansible_facts": {
        "__ha_cluster_enable_repo_tasks_file": "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/RedHat.yml"
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "RedHat.yml"
}
ok: [managed_node1] => (item=CentOS.yml) => {
    "ansible_facts": {
        "__ha_cluster_enable_repo_tasks_file": "/var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml"
    },
    "ansible_loop_var": "item",
    "changed": false,
    "item": "CentOS.yml"
}
skipping: [managed_node1] => (item=CentOS_10.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file",
    "item": "CentOS_10.yml",
    "skip_reason": "Conditional result was False"
}
skipping: [managed_node1] => (item=CentOS_10.yml) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "__ha_cluster_enable_repo_tasks_file_candidate is file",
    "item": "CentOS_10.yml",
    "skip_reason": "Conditional result was False"
}

TASK [fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.039)       0:00:07.839 *********
included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/enable-repositories/CentOS.yml for managed_node1

TASK [fedora.linux_system_roles.ha_cluster : List active CentOS repositories] ***
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.036)       0:00:07.875 *********
ok: [managed_node1] => {
    "changed": false,
    "cmd": [
        "dnf",
        "repolist"
    ],
    "delta": "0:00:00.202257",
    "end": "2024-07-25 08:24:36.936179",
    "rc": 0,
    "start": "2024-07-25 08:24:36.733922"
}

STDOUT:

repo id              repo name
appstream            CentOS Stream 10 - AppStream
baseos               CentOS Stream 10 - BaseOS
beaker-client        Beaker Client - RedHatEnterpriseLinux9
beaker-harness       Beaker harness
beakerlib-libraries  Copr repo for beakerlib-libraries owned by bgoncalv
highavailability     CentOS Stream 10 - HighAvailability

TASK [fedora.linux_system_roles.ha_cluster : Enable CentOS repositories] *******
Thursday 25 July 2024  08:24:36 -0400 (0:00:00.535)       0:00:08.411 *********
skipping: [managed_node1] => (item={'id': 'highavailability', 'name': 'HighAvailability'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "false_condition": "item.id not in __ha_cluster_repolist.stdout",
    "item": {
        "id": "highavailability",
        "name": "HighAvailability"
    },
    "skip_reason": "Conditional result was False"
}
skipping: [managed_node1] => (item={'id': 'resilientstorage', 'name': 'ResilientStorage'}) => {
"ansible_loop_var": "item", "changed": false, "false_condition": "item.name != \"ResilientStorage\" or ha_cluster_enable_repos_resilient_storage", "item": { "id": "resilientstorage", "name": "ResilientStorage" }, "skip_reason": "Conditional result was False" } skipping: [managed_node1] => { "changed": false } MSG: All items skipped TASK [fedora.linux_system_roles.ha_cluster : Install role essential packages] *** Thursday 25 July 2024 08:24:37 -0400 (0:00:00.020) 0:00:08.432 ********* ok: [managed_node1] => { "changed": false, "rc": 0, "results": [] } MSG: Nothing to do lsrpackages: corosync-qnetd pcs TASK [fedora.linux_system_roles.ha_cluster : Check and prepare role variables] *** Thursday 25 July 2024 08:24:37 -0400 (0:00:00.682) 0:00:09.115 ********* included: /var/ARTIFACTS/work-generalqe6kbm9_/plans/general/tree/tmp.ytRvH8g1eY/ansible_collections/fedora/linux_system_roles/roles/ha_cluster/tasks/shell_pcs/check-and-prepare-role-variables.yml for managed_node1 TASK [fedora.linux_system_roles.ha_cluster : Discover cluster node names] ****** Thursday 25 July 2024 08:24:37 -0400 (0:00:00.037) 0:00:09.152 ********* ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_node_name": "localhost" }, "changed": false } TASK [fedora.linux_system_roles.ha_cluster : Collect cluster node names] ******* Thursday 25 July 2024 08:24:37 -0400 (0:00:00.023) 0:00:09.176 ********* ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_all_node_names": [ "localhost" ] }, "changed": false } TASK [fedora.linux_system_roles.ha_cluster : Fail if ha_cluster_node_options contains unknown or duplicate nodes] *** Thursday 25 July 2024 08:24:37 -0400 (0:00:00.025) 0:00:09.201 ********* skipping: [managed_node1] => { "changed": false, "false_condition": "(\n __nodes_from_options != (__nodes_from_options | unique)\n) or (\n __nodes_from_options | difference(__ha_cluster_all_node_names)\n)\n", "skip_reason": "Conditional result was False" } TASK [fedora.linux_system_roles.ha_cluster : 
Extract node options] ************* Thursday 25 July 2024 08:24:37 -0400 (0:00:00.021) 0:00:09.222 ********* ok: [managed_node1] => { "ansible_facts": { "__ha_cluster_local_node": {} }, "changed": false } TASK [fedora.linux_system_roles.ha_cluster : Fail if passwords are not specified] *** Thursday 25 July 2024 08:24:37 -0400 (0:00:00.026) 0:00:09.249 ********* failed: [managed_node1] (item=ha_cluster_hacluster_password) => { "ansible_loop_var": "item", "changed": false, "item": "ha_cluster_hacluster_password" } MSG: ha_cluster_hacluster_password must be specified TASK [Clean up test environment for qnetd] ************************************* Thursday 25 July 2024 08:24:37 -0400 (0:00:00.026) 0:00:09.275 ********* included: fedora.linux_system_roles.ha_cluster for managed_node1 TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed] *** Thursday 25 July 2024 08:24:37 -0400 (0:00:00.054) 0:00:09.329 ********* changed: [managed_node1] => { "changed": true, "rc": 0, "results": [ "Removed: corosync-qnetd-3.0.3-6.el10.x86_64" ] } TASK [fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present] *** Thursday 25 July 2024 08:24:38 -0400 (0:00:01.074) 0:00:10.403 ********* changed: [managed_node1] => { "changed": true, "path": "/etc/corosync/qnetd", "state": "absent" } PLAY RECAP ********************************************************************* managed_node1 : ok=32 changed=7 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0 Thursday 25 July 2024 08:24:39 -0400 (0:00:00.358) 0:00:10.762 ********* =============================================================================== fedora.linux_system_roles.ha_cluster : Install qnetd packages ----------- 1.64s fedora.linux_system_roles.ha_cluster : Set up qnetd --------------------- 1.64s fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 1.29s Gathering Facts --------------------------------------------------------- 1.15s 
fedora.linux_system_roles.ha_cluster : Make sure qnetd is not installed --- 1.07s
fedora.linux_system_roles.ha_cluster : Install role essential packages --- 0.68s
fedora.linux_system_roles.ha_cluster : List active CentOS repositories --- 0.54s
fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present --- 0.45s
fedora.linux_system_roles.ha_cluster : Check if system is ostree -------- 0.42s
fedora.linux_system_roles.ha_cluster : Make sure qnetd config files are not present --- 0.36s
Create /etc/corosync/qnetd_backup directory ----------------------------- 0.35s
Back up qnetd settings -------------------------------------------------- 0.34s
Run HA Cluster role ----------------------------------------------------- 0.06s
Clean up test environment for qnetd ------------------------------------- 0.05s
fedora.linux_system_roles.ha_cluster : Set platform/version specific variables --- 0.04s
fedora.linux_system_roles.ha_cluster : Find platform/version specific tasks to enable repositories --- 0.04s
fedora.linux_system_roles.ha_cluster : Check and prepare role variables --- 0.04s
fedora.linux_system_roles.ha_cluster : Run platform/version specific tasks to enable repositories --- 0.04s
Set up test environment for qnetd --------------------------------------- 0.03s
Clean up test environment for qnetd ------------------------------------- 0.03s
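The run above fails at "Fail if passwords are not specified" because `ha_cluster_hacluster_password` was never set. A minimal sketch of how a caller could supply it when invoking the role (the `vault_ha_cluster_password` variable name and vault lookup are illustrative assumptions, not taken from this run):

```yaml
# Hypothetical caller playbook; the password source shown here is an
# assumption -- in practice the value should come from Ansible Vault or
# another secret store, never plain text under version control.
- hosts: managed_node1
  vars:
    ha_cluster_hacluster_password: "{{ vault_ha_cluster_password }}"
  roles:
    - fedora.linux_system_roles.ha_cluster
```

With the variable defined, the role's password check passes and the play proceeds instead of aborting before cluster configuration.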