Packages changed:
  libxml2
  nvme-cli (1.13 -> 1.14)
  rook (1.5.10+git4.g309ad2f64 -> 1.6.2+git0.ge8fd65f08)
  snapper

=== Details ===

==== libxml2 ====
Subpackages: libxml2-2 libxml2-tools

- Security fix: [bsc#1185698, CVE-2021-3537]
  * NULL pointer dereference in valid.c:xmlValidBuildAContentModel
  * Add libxml2-CVE-2021-3537.patch

==== nvme-cli ====
Version update (1.13 -> 1.14)

- Update to 1.14
  * nvme-discover: add json output
  * nvme: add support for lba status log page
  * nvme: add support for endurance group event aggregate log
  * nvme: add endurance group event configuration feature
  * nvme: add latest opcodes for command supported and effects log
  * zns: print select_all field for Zone Management Send
  * print topology for NVMe nodes in kernel and path
  * nvme: add support for predictable latency event aggregate log page
  * nvme: add support for persistent event log page
  * Show more async event config fields

==== rook ====
Version update (1.5.10+git4.g309ad2f64 -> 1.6.2+git0.ge8fd65f08)

- Update to v1.6.2
  * Set base Ceph operator image and example deployments to v16.2.2
  * Update snapshot APIs from v1beta1 to v1
  * Documentation for creating static PVs
  * Allow setting primary-affinity for the OSD
  * Remove unneeded debug log statements
  * Preserve volume claim template annotations during upgrade
  * Allow re-creating erasure coded pool with different settings
  * Double mon failover timeout during a node drain
  * Remove unused volumesource schema from CephCluster CRD
  * Set the device class on raw mode osds
  * External cluster schema fix to allow not setting mons
  * Add phase to the CephFilesystem CRD
  * Generate full schema for volumeClaimTemplates in the CephCluster CRD
  * Automate upgrades for the MDS daemon to properly scale down and scale up
  * Add Vault KMS support for object stores
  * Ensure object store endpoint is initialized when creating an object user
  * Support for OBC operations when RGW is configured with TLS
  * Preserve the OSD topology affinity during upgrade for clusters on PVCs
  * Unify timeouts for various Ceph commands
  * Allow setting annotations on RGW service
  * Expand PVC size of mon daemons if requested
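The mon PVC expansion noted above is driven from the CephCluster CR. A minimal sketch (storage class name and sizes are illustrative; expansion only works if the storage class supports it):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 3
    volumeClaimTemplate:
      spec:
        storageClassName: fast-ssd   # must allow volume expansion
        resources:
          requests:
            storage: 20Gi            # raising this requests PVC expansion
```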
- Update to v1.6.1
  * Disable host networking by default in the CSI plugin with option to enable
  * Fix the schema for erasure-coded pools so replication size is not required
  * Improve node watcher for adding new OSDs
  * Operator base image updated to v16.2.1
  * Deployment examples updated to Ceph v15.2.11
  * Update Ceph-CSI to v3.3.1
  * Allow any device class for the OSDs in a pool instead of restricting the schema
  * Fix metadata OSDs for Ceph Pacific
  * Allow setting the initial CRUSH weight for an OSD
  * Fix object store health check in case SSL is enabled
  * Upgrades now ensure latest config flags are set for MDS and RGW
  * Suppress noisy RGW log entry for radosgw-admin commands
- Update to v1.6.0
  * Removed storage providers: CockroachDB, EdgeFS, YugabyteDB
  * Ceph
    * Support for creating OSDs via Drive Groups was removed
    * Ceph Pacific (v16) support
    * CephFilesystemMirror CRD to support mirroring of CephFS volumes with Pacific
    * Ceph CSI Driver
      * CSI v3.3.0 driver enabled by default
      * Volume Replication Controller for improved RBD replication support
      * Multus support
      * GRPC metrics disabled by default
    * Ceph RGW
      * Extended the support of Vault KMS configuration
      * Scale with multiple daemons in a single deployment instead of a
        separate deployment for each RGW daemon
    * OSDs
      * LVM is no longer used to provision OSDs
      * More efficient updates for multiple OSDs at the same time
    * Multiple Ceph mgr daemons are supported for stretch clusters
      and other clusters where HA of the mgr is critical (set count: 2 under mgr in the CephCluster CR)
    * Pod Disruption Budgets (PDBs) are enabled by default for Mon,
      RGW, MDS, and OSD daemons. See the disruption management settings.
    * Monitor failover can be disabled, for scenarios where
      maintenance is planned and automatic mon failover is not desired
    * CephClient CRD has been converted to use the controller-runtime library
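The multi-mgr setting mentioned in the list above ("count: 2 under mgr") can be sketched as a CephCluster CR fragment (cluster name and namespace are illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mgr:
    count: 2   # second mgr daemon for HA, e.g. in stretch clusters
```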

==== snapper ====
Subpackages: libsnapper5

- fixed systemd sandboxing (gh#openSUSE/snapper#651)
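For context, systemd sandboxing is configured through hardening directives in a service unit. The directives below are an illustrative sample of that mechanism only, not the exact set touched by gh#openSUSE/snapper#651:

```ini
# Illustrative systemd sandboxing directives (see systemd.exec(5));
# not the specific change made in snapper's units.
[Service]
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=yes
ProtectKernelModules=yes
```

Overly strict directives can break a service that legitimately needs the restricted resource, which is the general class of problem such a fix addresses.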