Changelog

Follow new updates and improvements to vCluster.

August 15th 2024

Major Changes

Please read this section carefully as it contains breaking changes.


New config format: vcluster.yaml

This release introduces the new vcluster.yaml file which centralizes all the configuration options for vCluster and serves as the Helm values at the same time. This new configuration features a completely revamped format designed to enhance the user experience:

  • Validation: We provide a JSON schema for vcluster.yaml, which the vCluster CLI and vCluster Platform UI now use to validate configurations before creating or upgrading virtual clusters. This schema has also been published to SchemaStore, so most IDEs will recognize the vcluster.yaml file and provide autocomplete and validation directly in the editor (see the modeline sketch after this list).

  • Consolidated configuration: All configurations are centralized in the vcluster.yaml file, eliminating confusion previously caused by the mix of CLI flags, annotations, environment variables, and Helm values.

  • Consistent grouping and naming: Fields in vcluster.yaml are logically grouped under topical categories, simplifying navigation and enhancing discoverability of related features.

  • Docs alignment: Our documentation now mirrors the structure of vcluster.yaml, making it easier to cross-reference settings within the file and corresponding sections in the docs.
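
If your editor does not pick up the published schema automatically, the YAML language server's modeline convention offers a manual fallback. This is a minimal sketch; the schema URL is left as a placeholder rather than an assumed location:

# vcluster.yaml
# yaml-language-server: $schema=<URL of the published vcluster.yaml JSON schema>
controlPlane:
  distro:
    k8s:
      enabled: true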

Migrating to vcluster.yaml

In order to make it easy to convert your old values.yaml (v0.19 and below) to the new vcluster.yaml format, you can run the new vcluster convert config command. For example, let's take these pre-v0.20 configuration values:

# values.yaml
sync:
  ingresses:
    enabled: true
  nodes:
    enabled: true
  fake-nodes:
    enabled: false
syncer:
  replicas: 3
  extraArgs:
  - --tls-san=my-vcluster.example.com

Running vcluster convert config --distro k3s < /path/to/values.yaml will generate the following vcluster.yaml:

# vcluster.yaml
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
controlPlane:
  distro:
    k3s:
      enabled: true
  proxy:
    extraSANs:
    - my-vcluster.example.com
  statefulSet:
    highAvailability:
      replicas: 3
    scheduling:
      podManagementPolicy: OrderedReady

For more details on upgrading from older versions to v0.20, please read our configuration conversion guide.


Unified Helm chart for simplified deployment

We consolidated the distro-specific vCluster Helm charts (vcluster (k3s), vcluster-k8s, vcluster-k0s, and vcluster-eks) into a single, unified chart. This change is designed to simplify management and upgrading of virtual clusters:

  • Single source: No more juggling multiple charts. The vcluster.yaml serves as the single source for all configuration in a unified Helm chart for all distros.

  • Enhanced validation: We've introduced a JSON schema for the Helm values, ensuring that upgrades will only proceed if your configuration matches the expected format to reduce deployment errors.

  • Customizable distributions: The new unified chart structure enables easier customization of Kubernetes distributions directly via the Helm chart values:

# vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true

K8s distro now supports SQLite & external databases

So far, virtual clusters running the vanilla k8s distro only supported etcd as a storage backend, which made this distro comparatively harder to operate than k3s. With vCluster v0.20, we’re introducing two new backing store options for vanilla k8s besides etcd:

  • SQLite offers a more lightweight solution for data storage without the overhead associated with more complex choices like etcd or external databases. It is the new default for virtual clusters running the vanilla k8s distro.

  • External Databases allow users to use any MySQL or Postgres compatible database as a backing store for virtual clusters running the vanilla k8s distro. This is especially useful for users who plan to outsource backing store operations to managed database offerings such as AWS RDS or Azure Database (see the sketch below).

Note: Switching backing stores is currently not supported. In order to use this new backing store, you will need to deploy net new virtual clusters and migrate the data manually with backup and restore tooling such as Velero. Upgrading your configuration via vcluster convert config will explicitly write the previously used data store into your configuration to make sure upgrading an existing virtual cluster does not require changing the backing store.
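
For illustration, here is how the two new backing stores are selected in vcluster.yaml. This is a sketch based on the v0.20 configuration format; the connection string is a hypothetical example value:

# vcluster.yaml (embedded SQLite, the new default for the k8s distro)
controlPlane:
  backingStore:
    database:
      embedded:
        enabled: true

# vcluster.yaml (external database)
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        # hypothetical connection string; point this at your MySQL or Postgres instance
        dataSource: mysql://username:password@tcp(mysql-host:3306)/vcluster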

EKS distro has been discontinued

Previously, vCluster offered the option to use EKS as a distro to run vCluster. However, this led many users to believe they had to use the EKS distro to run vCluster on an EKS host cluster, which is not correct because any vCluster distro is able to run on an EKS host cluster. Given that the EKS distro did not provide any benefits beyond the vanilla k8s distro and introduced unnecessary confusion and maintenance effort, we decided to discontinue it. If you want to deploy virtual clusters on an EKS host cluster, we recommend using the k8s distro for vCluster going forward. If you plan on upgrading a virtual cluster that used EKS as a distro, please carefully read and follow this upgrade guide in the docs.


Changes in defaults for vCluster

There are several changes in the default configuration of a vCluster that are important for any users upgrading to v0.20+ or deploying net new clusters.

Default distro changed from k3s to vanilla k8s

We changed the default distribution for the vCluster control plane from K3s to K8s. This is the least opinionated option, offering greater flexibility and compatibility:

  • Flexibility: More customization and scalability options, catering to a broader range of deployment needs.

  • Compatibility: In addition to embedded and external etcd, you can now use various storage backends including SQLite, Postgres, and MySQL. This addition addresses previous challenges with using K8s for smaller virtual clusters.

Upgrade Notes: Switching distributions is not supported, so in order to use this new default, you will need to deploy net new virtual clusters.
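
If you prefer to keep K3s for newly created virtual clusters, the distro can still be selected explicitly, using the same field shown in the conversion example above:

# vcluster.yaml
controlPlane:
  distro:
    k3s:
      enabled: true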

Default image vcluster-pro

We've updated the default image repository for vCluster to ghcr.io/loft-sh/vcluster-pro. This change allows users to seamlessly test and adopt vCluster Pro features without having to switch images from OSS to Pro. The Pro features are integrated into the Pro image but remain inactive by default to ensure that your experience remains exactly the same as with the OSS image.

Upgrade Notes: When upgrading from previous versions, the image will automatically be updated to pull from the new repository. For users who prefer to continue using the open-source image, simply adjust your vcluster.yaml configuration to set the repository to ghcr.io/loft-sh/vcluster-oss, as shown below. See the docs for details.
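
For reference, this is the same setting shown in the v0.20.0-beta release notes further below:

# vcluster.yaml
controlPlane:
  statefulSet:
    image:
      repository: ghcr.io/loft-sh/vcluster-oss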

New Default Scheduling of Control Plane Pod: Parallel

We’ve updated the scheduling of the control plane pod from OrderedReady to Parallel. Since vCluster typically runs as a StatefulSet, this setting cannot be changed after the virtual cluster has been deployed.
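
This is the same field that vcluster convert config writes out (see the conversion example above); to keep the previous behavior on a newly deployed virtual cluster, set it back explicitly:

# vcluster.yaml
controlPlane:
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady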

Increased Resource Requests

We increased the following default resource requests for vCluster:

  • Ephemeral storage from 200Mi to 400Mi (to ensure that SQLite powered virtual clusters have enough space to store data without running out of storage space when they are used over a prolonged period of time)

  • CPU from 3m to 20m

  • Memory from 16Mi to 64Mi

These changes are minimal and won’t have any significant impact on the footprint of a virtual cluster.
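
If your workloads need different values, the requests can be overridden in vcluster.yaml. A minimal sketch, assuming the controlPlane.statefulSet.resources field; the values shown are the new defaults:

# vcluster.yaml
controlPlane:
  statefulSet:
    resources:
      requests:
        ephemeral-storage: 400Mi
        cpu: 20m
        memory: 64Mi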

Disabled Node Syncing for Kind Clusters

When deploying virtual clusters with the vCluster CLI, syncing real nodes is no longer automatically enabled for Kind clusters.

Upgrade Notes: If you want to continue to enable this syncing, you will need to add this configuration to your vcluster.yaml:

sync:
  fromHost:
    nodes:
      enabled: true
controlPlane:
  service:
    spec:
      type: NodePort

Behavior Changes

CLI Updates

There have been significant CLI changes, as the changes above required refactoring how the CLI works in some areas. In addition, we merged the overlapping commands found in loft and vcluster pro. The full summary of CLI changes can be found in our docs:

  • General List of CLI Changes - Listing out what’s new, what’s been renamed or dropped.

  • Guide on using vcluster convert to convert pre-v0.20 values.yaml files to the updated vcluster.yaml format when upgrading to v0.20+

  • Reference guide mapping loft CLI commands to the new vcluster commands

Ingress syncing behavior has changed

Prior to v0.20, when you enabled syncing Ingresses from the virtual to the host cluster, all IngressClasses from the host cluster were automatically synced as well. However, this required a cluster role that some vCluster users don’t have. We’ve now decoupled these behaviors so you can enable syncing Ingresses and IngressClasses separately.

sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true

Updated CAPI Provider Support

Our Cluster API (CAPI) provider has been updated with a new version (v0.2.0) that supports the new vcluster.yaml format.

July 11th 2024

Announcing vCluster Platform v4.0.0-beta

View the full changelog here

Highlights

Deploy vCluster your way

Deploy vCluster with your existing tools like Argo CD without requiring a Platform Agent to be installed in the host cluster. Externally deployed instances will now connect and register directly with the Platform after running the vCluster CLI command: vcluster add vcluster VCLUSTER_NAME

Alternatively, configure the Platform secret in the vcluster.yaml configuration file:

external:
  platform:
    apiKey:
      secretName: "vcluster-platform-api-key"
      namespace: "" # empty defaults to the Helm release namespace

The Platform now supports multiple vCluster deployment types:

  • Deployed by Platform, managed by Platform.

  • Deployed by Helm, managed by Platform with Platform Agent on host cluster.

  • Deployed by Helm, managed by Platform without a Platform Agent on host cluster.

Support for vCluster v0.20

Now you can use the latest vCluster version v0.20.0-beta together with the Platform v4.0.0-beta capabilities and activate vCluster Pro features.

Migrating vCluster from v0.19 to v0.20

The Platform automatically attempts to convert existing vCluster v0.19 values to the new v0.20 vcluster.yaml configuration file when upgrading via the UI. This is in addition to the vCluster v0.20 CLI command you can run to convert pre-v0.20 values: vcluster convert config --distro k3s -f VALUES_FILE > vcluster.yaml


Redesigned vCluster UI editor

  • The new vCluster UI editor brings together configuration, cluster resource visibility and audit logs into one full page view.

  • vCluster v0.20 instances display a new vcluster.yaml viewer and editor to make it easier to configure with validation and auto-complete.

  • "Spaces" has been renamed to "Host Namespaces" for clarity, however, functionality remains the same as Platform v3.4

Other Changes

  • vCluster v0.20.x is now the default version when creating virtual clusters via the Platform.

  • Offline virtual clusters without an Agent on the host cluster are automatically deregistered and removed from the Platform after 24 hours of being disconnected from the Platform.

  • Added a status filter to the Namespaces product page, formerly called "Spaces".

Breaking Changes

  1. Project namespaces: The default namespace prefix changed from loft-p- to just p-.
    Note: Existing Platform users upgrading or re-installing from pre-v4 to v4 need to explicitly set projectNamespacePrefix: loft-p- in the Platform configuration to keep the existing namespace prefix.

  2. Isolated Control Plane: Isolated Control plane configuration moved from the Platform to the vcluster.yaml configuration file under experimental.isolatedControlPlane.

  3. Spaces: Existing users of the Loft Spaces product need to use the vCluster v0.20 CLI in conjunction with this Platform v4.0.0-beta release.

  4. Removed APIs: virtualclusters.cluster.loft.sh and spaces.cluster.loft.sh

  5. Externally deployed: Externally deployed virtual clusters now have a spec.external boolean field on the VirtualClusterInstance CRD instead of the previous loft.sh/skip-helm-deploy annotation.

Deprecations

  • Loft CLI: The Loft CLI is now deprecated. The majority of commands have been migrated to the vCluster v0.20 CLI.

  • Auto-import: Automatically importing via annotation is no longer supported. Virtual clusters can be automatically imported by configuring external.platform.apiKey.secretName or by creating them via the vCluster CLI while logged into the Platform: vcluster create VCLUSTER_NAME --driver platform.

Upgrading

  • Ensure that you have first upgraded to v3 before attempting to upgrade to v4.

  • Existing virtual clusters cannot have their vCluster version modified via the UI at the moment. This will be enabled in a subsequent release. However, upgrading from v0.19 to v0.20 is currently possible via the vCluster list page within the "vCluster Version" column.

  • Upgrading from Platform v3 to v4 is only possible with vCluster v0.20 CLI. The UI will support upgrading in a future release.

View upgrade guide

View the full changelog here

April 17th 2024

Changes

  • feature: Add support for istio ingress gateway sleep mode activity tracking (by @lizardruss in #2519)

  • enhancement: Performance improvements for loft use space and loft use vcluster commands (by @lizardruss in #2609)

  • fix(agent): NetworkPeer proxy is now running highly available on all agent replicas (by @ThomasK33 in #2527)

  • fix(loftctl): Loftctl is now printing additional debug messages if the --debug flag is set (by @ThomasK33 in #2521)

  • fix: The Platform will now wait with project deletion until the underlying namespace is deleted correctly (by @neogopher in #2544)

  • fix: fix an issue where generic sync was blocked without pro subscription (by @rohantmp in #2537)

  • fix: Fixed an IPAM race condition, potentially causing multiple network peers to have the same IP assigned on startup (by @ThomasK33 in #2550)

  • fix: The cluster controller will update a cluster's phase status during its initialization (by @ThomasK33 in #2566)

  • fix: Fixed an issue where the Platform wasn't able to deploy vClusters with version v0.20.0-alpha.1 or newer (by @FabianKramm in #2572)

  • fix: Automatically fix incorrect IPAM state in NetworkPeer CRDs (by @ThomasK33 in #2573)

  • fix: Fixed an issue with clusters not connecting to a highly available control plane (by @ThomasK33 in #2579)

  • fix: Fixed an issue where ts net server would restart if multiple access keys were found (by @FabianKramm in #2612)

  • chore: Updated default vCluster version to 0.19.5 (by @ThomasK33 in #2551)

April 15th 2024

We’re thrilled to introduce the beta release of vCluster v0.20, marking a significant milestone driven by user feedback and insights gathered over the three years since we launched vCluster.

Read the blog post for more details or the conversion guide to get started.

⚠ Breaking Changes ⚠

Unified Helm chart for simplified deployment

We've streamlined the deployment process by consolidating all different vCluster Helm charts (vcluster, vcluster-k8s, vcluster-k0s, and vcluster-eks) into a single, unified chart. This change is designed to simplify management and upgrading of virtual clusters:

  • Single source: No more juggling multiple charts.

  • Value conversion: A new vCluster CLI command to convert vCluster v0.19 to v0.20 values is provided (view conversion guide)

  • Enhanced validation: We've introduced a JSON schema for the Helm values, ensuring that upgrades will only proceed if your configuration matches the expected format to reduce deployment errors.

  • Customizable distributions: The new unified chart structure enables easier customization of Kubernetes distributions directly via the Helm chart values:

controlPlane:
  distro:
    k8s:
      enabled: true

View the new format

New intuitive vcluster.yaml configuration & docs

We're excited to introduce the new vcluster.yaml file, replacing the previous Helm values.yaml. This new configuration features a completely revamped format designed to enhance the user experience:

  • Validation: The vCluster CLI and Platform UI now validate configurations when creating virtual clusters. In addition, most IDEs will now automatically provide validation and autocomplete for vCluster configurations.

  • Consolidated configuration: All configurations are centralized in the vcluster.yaml file, eliminating confusion previously caused by the mix of CLI flags and Helm values. Please note that this release has a set of unsupported CLI flags (view release notes); however, the vcluster convert config command makes it easy to transition to the new vcluster.yaml format.

  • Renamed fields: We've updated field names to be more intuitive, making them easier to understand and remember.

  • Reorganized structure: Fields are now logically grouped under topical categories, simplifying navigation and enhancing discoverability of related features.

  • Docs alignment: Our documentation now mirrors the structure of vcluster.yaml, making it easier to cross-reference settings within the file and corresponding sections in the docs.

New vCluster CLI command to convert old values to vcluster.yaml

In order to make it easy to convert your old values (pre-v0.20) to the new vcluster.yaml format, you can leverage the new vcluster convert config command. For example, let's take these pre-v0.20 configuration values:

service:
  type: NodePort
sync:
  nodes:
    enabled: true

Passing the above old values using the vCluster CLI command vcluster convert config --distro k8s < /path/to/this/file.yaml will generate the following values:

controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true
  distro:
    k8s:
      enabled: true
  service:
    spec:
      type: NodePort
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
sync:
  fromHost:
    nodes:
      enabled: true

View configuration conversion guide

Vanilla K8s is now the default distribution

We changed the default distribution for the vCluster control plane from K3s to K8s. This is the least opinionated option, offering greater flexibility and compatibility:

  • Flexibility: More customization and scalability options, catering to a broader range of deployment needs.

  • Compatibility: In addition to embedded and external etcd, you can now use various storage backends including SQLite, Postgres, and MySQL. This addition addresses previous challenges with using K8s for smaller virtual clusters.

Embedded SQLite is now the default backing store

Embedded SQLite has been set as the default backing store for the K8s distribution. This is to simplify operations and enhance performance for smaller virtual clusters:

  • Efficiency: SQLite offers a more lightweight solution for data storage without the overhead associated with more complex choices like etcd.

  • Simplicity: Setup is more straightforward, reducing the complexity and time required to get virtual clusters up and running.

Continued Support for etcd: For users with larger deployments or those needing more advanced features, external etcd deployed by vCluster remains a fully supported option:

controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      deploy:
        enabled: true

Pro is now the default image

We've updated the default image for vCluster to ghcr.io/loft-sh/vcluster-pro. This change allows users to seamlessly test and adopt vCluster Pro features without disrupting the existing open-source functionality. The Pro features are integrated into the Pro image but remain inactive by default to ensure that your experience remains consistent with the open-source version unless you specifically activate Pro features.

For users who prefer using the open-source image, simply adjust your vcluster.yaml configuration to use ghcr.io/loft-sh/vcluster-oss:

controlPlane:
  statefulSet:
    image:
      repository: ghcr.io/loft-sh/vcluster-oss

Ingress syncing behavior has changed

Pre-v0.20.0-beta.1, when you enabled syncing Ingresses from the virtual to the host cluster, all IngressClasses from the host cluster were automatically synced as well. However, this required a cluster role that some vCluster users don’t have. We’ve now decoupled these behaviors so you can enable syncing Ingresses and IngressClasses separately.

sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true

See the full release notes

March 7th 2024

Egress-Only Agent Connections

Connected Host Clusters now communicate with the Platform via egress connections only. Previously, the Platform would reach out to each connected cluster to communicate with the remote vClusters. This change enhances security, simplifies connecting private clusters to the Platform, avoids creating non-expiring kubeconfigs, and makes the setup more scalable by moving functionality into the Platform's Agent. For more information, please take a look at the documentation.

Notable Changes

  • vClusters that were created externally (e.g. Helm, Argo CD) are now automatically imported into the Platform

  • We added the ability to clone vCluster templates

  • vCluster and Platform configurations are now validated in the editor UI

  • Backup and restore the Platform via the UI

  • We now support installing Helm Apps whose charts include schema validation, which previously would fail.

  • The Platform now uses vCluster v0.19.4 as the default version

Other Changes

  • feat: Added loft devpod rebuild $WORKSPACE_NAME --project=$PROJECT_ID command to force rebuild cloud workspaces (by @pascalbreuninger in #2496)

  • feat: Check connectivity to the Platform router prior to configuring it by @ThomasK33

  • feat: We now allow ignoring user agents for Sleep Mode

  • feat: Added embedded derp mesh to the Platform by @ThomasK33

  • feat: Added import / export to workspaces / fix nonce by @FabianKramm

  • feat: Added passwordRef & usernameRef to apps by @FabianKramm

  • feat: Added clusters/accesskey by @FabianKramm

  • feat: Added online status to clusters by @ThomasK33

  • feat: Added automatic virtual cluster import by @FabianKramm

  • feat: Agent Upgrade Flow using NetworkPeer by @ThomasK33

  • feat(apigateway): Added endpoints to create, serve and apply Platform backups by @pascalbreuninger

  • feat(cli): Create instance now allows specifying labels/annotations for instances

  • feat(cli): Added --product flag to loftctl start by @pascalbreuninger

  • feat(cli): Added get cluster-access-key command by @ThomasK33

  • feat(ui): The Platform admin UI now displays a settings preview

  • feat(ui): runners ui by @andyluak

  • feat(ui): Accessing sleeping spaces if pods found is now allowed by @andyluak

  • feat(ui): Hide refresh license when offline by @andyluak

  • feat(ui): Added editor for workspace template definition by @pascalbreuninger

  • feat(ui): Icons and monaco are now lazy loaded by @andyluak

  • feat(ui): Tooltip displayed when Project quota is reached by @andyluak

  • feat(ui): DevPod workspaces by @andyluak

  • feat(ui): Restrict UI based configuration of Helm managed virtual cluster instances by @pascalbreuninger

  • feat(ui): New cluster connect flow by @andyluak

  • feat(ui): Added trial ended page by @pascalbreuninger

  • feat(ui): Preserve view in project change by @andyluak

  • feat(ui): Standardized status columns by @andyluak

  • feat(ui): Improved visibility of sleep mode by @andyluak

  • feat(ui): Hide editor secret values @andyluak

  • fix(ui): Crashing page on yaml editor by @andyluak

  • fix(ui): Style consistency by @andyluak

  • fix(ui): Sidebar updates by @andyluak

  • fix(ui): Age column filtering by @andyluak

  • fix(ui): Scrollbar when collapsing by @andyluak

  • fix(ui): Find default DevPod template if all templates are allowed by @pascalbreuninger

  • fix(ui): Align profile menu avatar with other nav items by @pascalbreuninger

  • fix(ui): Remove helm managed warning from vcluster objects by @pascalbreuninger

  • fix(ui): Login improvements by @andyluak

  • fix(ui): Force logout after error during login on /login page by @pascalbreuninger

  • fix(ui): vCluster status sync by @andyluak

  • fix(ui): unable to deploy manifests by @andyluak

  • fix(ui): Quota percentage now shows a status bar for bigint based quantities by @pascalbreuninger

  • fix(ui): Remove confusing text about kubectl app by @Oleg Matskiv

  • fix(loftctl): filter DevPod.Pro projects for runner access by @pascalbreuninger

  • fix(router): Disable Platform router on startup if config does not use router domains by @ThomasK33

  • fix(agent): Fixed a network peer issue when running agents in a highly-available setup (by @ThomasK33 in #2511)

  • fix: Fixed an issue where ArgoCD strategic merge patches failed with an unknown format error. (by @lizardruss in #2514)

  • fix: Rename default devpodworkspacetemplates by @pascalbreuninger

  • fix: Improved handling of editor errors by @andyluak

  • fix: Use insecure skip verify for runner task logs by @lizardruss

  • fix: Reconciler panic & vcluster upgrade by @FabianKramm

  • fix: Improve logging for failed cluster access by @rohantmp

  • fix: Use forked Helm with skip-schema-validation by @rohantmp

  • fix: Platform login message & remove trace output by @FabianKramm

  • fix: Proxy-handler connection by @FabianKramm

  • fix: Project roles now include DevPod.Pro workspace instances by @pascalbreuninger

  • fix: Platform config update & Platform upgrade timeout by @FabianKramm

  • fix: Allow features that are active, included & preview by @FabianKramm

  • fix: Validate clusterref for existing VirtualCluster and Spaces by @neogopher

  • fix: Allow service account token for cluster / direct virtual cluster by @FabianKramm

  • fix: Conditions & patch helper usage issues by @lizardruss

  • fix: Updated Platform vars cluster to work with projects by @lizardruss

  • fix: A Space from a different project is able to take over namespace owned by someone else by @lizardruss

  • fix: Rancher integration problem by @FabianKramm in #2427 [Alpha]

  • fix: Logging of debug errors by @FabianKramm in #2428

  • refactor(ui): Improved vCluster logging by @FabianKramm

  • refactor: Improved DevPod logging by @FabianKramm

  • refactor: Use new vCluster telemetry

  • refactor: Automatically set forwardToken to true on import by @FabianKramm

  • refactor: Remove replace imports by @FabianKramm

February 11th 2024

Changes made since: v0.18.1

vCluster.Pro Changes

Embedded Etcd for EKS, K0s & K8s

We previously released embedded etcd for K3s and have now added support for the EKS, K0s, and K8s distributions. When enabled, vCluster will start managing an embedded etcd cluster within the Syncer container and will automatically add or remove peers as the replicas of the StatefulSet change. This makes running HA a lot easier.
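
As a rough sketch of enabling this in the pre-v0.20 Helm values format (the exact key is an assumption based on this release's values layout; see the linked doc for the authoritative field):

# values.yaml (v0.19, field name assumed)
embeddedEtcd:
  enabled: true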

For more information, refer to the doc

Centralized Admission Control

The Centralized Admission Control feature allows platform admins to enforce webhook configurations (both validating and mutating) referencing the host cluster or external policy services from within the vCluster.

These configurations will be read-only within the vCluster and can only be set from the vCluster CLI or Helm values upon creation. This provides assurance to platform admins that vCluster admins will not be able to bypass or alter the hooks they set for a vCluster.
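
As a purely hypothetical illustration of the shape such a configuration takes (the top-level field names are assumptions, not documented v0.19 values; the webhook entry itself follows the standard Kubernetes ValidatingWebhookConfiguration schema):

# values.yaml (sketch, top-level field names assumed)
centralizedAdmissionControl:
  validatingWebhooks:
    - apiVersion: admissionregistration.k8s.io/v1
      kind: ValidatingWebhookConfiguration
      metadata:
        name: example-policy
      webhooks:
        - name: example.policy.example.com
          admissionReviewVersions: ["v1"]
          sideEffects: None
          clientConfig:
            url: https://policy-service.example.com/validate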

For more information, refer to the doc

Other Changes

  • Allow node port service for remote vCluster by @FabianKramm

  • Added offline license support by @FabianKramm

  • Added OSS license report automation by @ThomasK33

  • Bumped k8s version by @FabianKramm

  • Added Kyverno guide to docs by @facchettos

  • Removed enableHA field by @facchettos

  • Added migration support for etcd by @facchettos

  • Fix remote vCluster kubeconfig creation by @FabianKramm

vCluster OSS Changes

Merged K8s Api-Server and Controller-Manager into Syncer

vClusters are now even more streamlined, with only 1 Pod instead of 3+ Pods. Similar to how we refactored K3s and K0s in an earlier version, we have now refactored the K8s and EKS distros to copy the api-server and controller-manager binaries directly into the Syncer container. This reduces complexity, makes the different vCluster distributions more similar, and streamlines certain features such as metrics-server proxying.

Plugin API v2

We refactored how plugins in vCluster work and moved from a sidecar pattern to an init container pattern, where plugin binaries are copied through an init container into the syncer container. This allows us to reuse go-plugin, one of the most widely used plugin frameworks. It also makes logging easier, since there is only a single container, and allows you to package the plugin binary directly into the syncer image if needed.

Besides changing the architecture of plugins, we now also allow specifying plugin configuration through a config Helm value:

plugin:
  my-plugin:
    version: v2
    image: ...
    config:
      my-plugin-config: my-value
      other-plugin-config: other-value

This config will be passed to the plugin and can easily be used within the plugin to unmarshal into a config struct. We also got rid of a lot of tech debt with this refactoring and added a new example plugin to sync secrets from the host cluster to the virtual cluster.

For more information about plugins, refer to the doc

Full Changelog: https://github.com/loft-sh/vcluster/compare/v0.18.1...v0.19.0

December 6th 2023
