August 15th 2024
Please read this section carefully as it may contain breaking changes.
vcluster.yaml
This release introduces the new `vcluster.yaml` file, which centralizes all configuration options for vCluster and serves as the Helm values at the same time. This new configuration features a completely revamped format designed to enhance the user experience:
Validation: We provide a JSON schema for `vcluster.yaml`, which the vCluster CLI and vCluster Platform UI now use to validate configurations before creating or upgrading virtual clusters. This schema has also been published to SchemaStore, so most IDEs will recognize the `vcluster.yaml` file and provide autocomplete and validation directly in the editor.
Consolidated configuration: All configuration is centralized in the `vcluster.yaml` file, eliminating the confusion previously caused by the mix of CLI flags, annotations, environment variables, and Helm values.
Consistent grouping and naming: Fields in `vcluster.yaml` are logically grouped under topical categories, simplifying navigation and enhancing discoverability of related features.
Docs alignment: Our documentation now mirrors the structure of `vcluster.yaml`, making it easier to cross-reference settings in the file with the corresponding sections in the docs.
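As a small illustration of the consolidated format, this minimal `vcluster.yaml` sketch places settings side by side that previously lived in separate places (Helm values and syncer CLI flags); the values shown are examples, not recommendations:

```yaml
# vcluster.yaml — minimal sketch combining formerly split settings
sync:
  toHost:
    ingresses:
      enabled: true        # formerly the Helm value sync.ingresses.enabled
controlPlane:
  proxy:
    extraSANs:             # formerly the syncer flag --tls-san=...
      - my-vcluster.example.com
```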
vcluster.yaml
To make it easy to convert your old `values.yaml` (v0.19 and below) to the new `vcluster.yaml` format, you can run the new `vcluster convert config` command. For example, take these pre-v0.20 configuration values:
```yaml
# values.yaml
sync:
  ingresses:
    enabled: true
  nodes:
    enabled: true
  fake-nodes:
    enabled: false
syncer:
  replicas: 3
  extraArgs:
    - --tls-san=my-vcluster.example.com
```
Running `vcluster convert config --distro k3s < /path/to/values.yaml` will generate the following `vcluster.yaml`:
```yaml
# vcluster.yaml
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
controlPlane:
  distro:
    k3s:
      enabled: true
  proxy:
    extraSANs:
      - my-vcluster.example.com
  statefulSet:
    highAvailability:
      replicas: 3
    scheduling:
      podManagementPolicy: OrderedReady
```
For more details on upgrading from older versions to v0.20, please read our configuration conversion guide.
We consolidated the distro-specific vCluster Helm charts (`vcluster` (k3s), `vcluster-k8s`, `vcluster-k0s`, and `vcluster-eks`) into a single, unified chart. This change is designed to simplify management and upgrading of virtual clusters:
Single source: No more juggling multiple charts. The `vcluster.yaml` file serves as the single source of configuration in a unified Helm chart for all distros.
Enhanced validation: We've introduced a JSON schema for the Helm values, ensuring that upgrades will only proceed if your configuration matches the expected format to reduce deployment errors.
Customizable distributions: The new unified chart structure enables easier customization of Kubernetes distributions directly via the Helm chart values:
```yaml
# vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
```
So far, virtual clusters running the vanilla k8s distro only supported etcd as a storage backend, which made this distro comparatively harder to operate than k3s. With vCluster v0.20, we’re introducing two new backing store options for vanilla k8s besides etcd:
SQLite offers a more lightweight solution for data storage without the overhead associated with more complex choices like etcd or external databases. It is the new default for virtual clusters running the vanilla k8s distro.
External Databases allow users to use any MySQL- or Postgres-compatible database as the backing store for virtual clusters running the vanilla k8s distro. This is especially useful for users who plan to outsource backing store operations to managed database offerings such as AWS RDS or Azure Database.
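As a sketch of how an external database might be configured, the snippet below assumes the v0.20 `controlPlane.backingStore` schema; the `dataSource` connection string is a placeholder, and the exact field names should be verified against the `vcluster.yaml` reference for your version:

```yaml
# vcluster.yaml — external backing store sketch (field names assume the v0.20 schema)
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        # placeholder connection string for a MySQL-compatible database
        dataSource: "mysql://user:password@tcp(my-rds-instance:3306)/vcluster"
```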
Note: Switching backing stores is currently not supported. To use a new backing store, you will need to deploy net new virtual clusters and migrate the data manually with backup and restore tooling such as Velero. Upgrading your configuration via `vcluster convert config` will explicitly write the previously used data store into your configuration to ensure that upgrading an existing virtual cluster does not require changing the backing store.
Previously, vCluster offered the option to use EKS as a distro to run vCluster. However, this led many users to believe they had to use the EKS distro to run vCluster on an EKS host cluster, which is not correct: any vCluster distro can run on an EKS host cluster. Given that the EKS distro did not provide any benefits beyond the vanilla k8s distro and introduced unnecessary confusion and maintenance effort, we decided to discontinue it. If you want to deploy virtual clusters on an EKS host cluster, we recommend using the k8s distro going forward. If you plan to upgrade a virtual cluster that used EKS as a distro, please carefully read and follow the upgrade guide in the docs.
There are several changes in the default configuration of a vCluster that are important for any users upgrading to v0.20+ or deploying net new clusters.
We changed the default distribution for the vCluster control plane from K3s to K8s. This is the least opinionated option, offering greater flexibility and compatibility:
Flexibility: More customization and scalability options, catering to a broader range of deployment needs.
Compatibility: In addition to embedded and external etcd, you can now use various storage backends including SQLite, Postgres, and MySQL. This addition addresses previous challenges with using K8s for smaller virtual clusters.
Upgrade Notes: Switching distributions is not supported, so in order to use this new default, you will need to deploy net new virtual clusters.
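If you are upgrading an existing virtual cluster and want to keep the previous K3s default rather than adopting the new one, you can pin the distro explicitly, as in the converted example shown earlier:

```yaml
# vcluster.yaml — pin the distro for an existing K3s-based virtual cluster
controlPlane:
  distro:
    k3s:
      enabled: true
```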
We've updated the default image repository for vCluster to `ghcr.io/loft-sh/vcluster-pro`. This change allows users to seamlessly test and adopt vCluster Pro features without having to switch images from OSS to Pro. The Pro features are integrated into the Pro image but remain inactive by default to ensure that your experience remains exactly the same as with the OSS image.
Upgrade Notes: When upgrading from previous versions, the image will automatically be updated to pull from the new repository. For users who prefer to continue using the open-source image, simply adjust your `vcluster.yaml` configuration to set the repository to `loft-sh/vcluster-oss`. See the docs for details.
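A sketch of that override, assuming the image settings live under `controlPlane.statefulSet.image` as in the v0.20 schema:

```yaml
# vcluster.yaml — switch back to the open-source image
controlPlane:
  statefulSet:
    image:
      registry: ghcr.io
      repository: loft-sh/vcluster-oss
```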
We’ve updated the scheduling rule of the control plane from `OrderedReady` to `Parallel`. Since vCluster typically runs as a StatefulSet, this setting cannot be changed after the virtual cluster has been deployed.
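For net new virtual clusters where you prefer the previous ordered startup behavior, the policy can be set explicitly; this is the same field that `vcluster convert config` writes for upgraded clusters:

```yaml
# vcluster.yaml — opt back into the previous ordered startup behavior
controlPlane:
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
```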
We increased the default resource requests for vCluster:
Ephemeral storage from 200Mi to 400Mi (to ensure that SQLite powered virtual clusters have enough space to store data without running out of storage space when they are used over a prolonged period of time)
CPU from 3m to 20m
Memory from 16Mi to 64Mi
These changes are minimal and won’t have any significant impact on the footprint of a virtual cluster.
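If you need different values, the requests can be overridden; this sketch assumes the standard Kubernetes resource-request fields under `controlPlane.statefulSet.resources`, which you should verify against the `vcluster.yaml` reference:

```yaml
# vcluster.yaml — override the default control plane resource requests (example values)
controlPlane:
  statefulSet:
    resources:
      requests:
        ephemeral-storage: 400Mi
        cpu: 20m
        memory: 64Mi
```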
When deploying virtual clusters with the vCluster CLI, syncing real nodes is no longer automatically enabled for Kind clusters.
Upgrade Notes: If you want to keep this syncing enabled, you will need to add this configuration to your `vcluster.yaml`:
```yaml
sync:
  fromHost:
    nodes:
      enabled: true
controlPlane:
  service:
    spec:
      type: NodePort
```
There have been significant CLI changes, as the changes above required refactoring how the CLI works in some areas. In addition, we merged the overlapping commands found in `loft` and `vcluster pro`. The full summary of CLI changes can be found in our docs at the following sites:
General List of CLI Changes - Listing out what’s new, what’s been renamed or dropped.
Guide on using `vcluster convert` to convert `values.yaml` files of pre-v0.20 virtual clusters to the updated `vcluster.yaml` format used when upgrading to v0.20+
Reference guide mapping `loft` CLI commands to the new `vcluster` commands
Prior to v0.20, enabling the syncing of Ingresses from the virtual to the host cluster would also automatically sync all IngressClasses from the host cluster. However, this required a cluster role that some vCluster users don’t have. We’ve now decoupled these behaviors so you can enable syncing of Ingresses and IngressClasses individually:
```yaml
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
```
Our Cluster API (CAPI) provider has been updated with a new version (v0.2.0) that supports the new `vcluster.yaml` format.