Follow new updates and improvements to vCluster.
August 15th 2024
Please read this section carefully as it may contain breaking changes.
vcluster.yaml
This release introduces the new vcluster.yaml file, which centralizes all the configuration options for vCluster and serves as the Helm values at the same time. This new configuration features a completely revamped format designed to enhance the user experience:
Validation: We provide a JSON schema for vcluster.yaml, which the vCluster CLI and vCluster Platform UI now use to validate configurations before creating or upgrading virtual clusters. This schema has also been published to SchemaStore, so most IDEs will recognize the vcluster.yaml file and provide autocomplete and validation directly in the IDE editor.
Consolidated configuration: All configurations are centralized in the vcluster.yaml file, eliminating confusion previously caused by the mix of CLI flags, annotations, environment variables, and Helm values.
Consistent grouping and naming: Fields in vcluster.yaml are logically grouped under topical categories, simplifying navigation and enhancing discoverability of related features (illustrated in the example below).
Docs alignment: Our documentation now mirrors the structure of vcluster.yaml, making it easier to cross-reference settings within the file and the corresponding sections in the docs.
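To illustrate the grouped structure, here is a minimal example assembled only from fields that appear elsewhere in these notes; the values are illustrative rather than recommended defaults:
# vcluster.yaml (illustrative example using fields shown elsewhere in these notes)
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    nodes:
      enabled: true
controlPlane:
  distro:
    k8s:
      enabled: true
  statefulSet:
    highAvailability:
      replicas: 3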
Converting to vcluster.yaml
In order to make it easy to convert your old values.yaml (v0.19 and below) to the new vcluster.yaml format, you can run the new vcluster convert config command. For example, let's take these pre-v0.20 configuration values:
# values.yaml
sync:
  ingresses:
    enabled: true
  nodes:
    enabled: true
  fake-nodes:
    enabled: false
syncer:
  replicas: 3
  extraArgs:
    - --tls-san=my-vcluster.example.com
Running vcluster convert config --distro k3s < /path/to/values.yaml will generate the following vcluster.yaml:
# vcluster.yaml
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
    nodes:
      enabled: true
controlPlane:
  distro:
    k3s:
      enabled: true
  proxy:
    extraSANs:
      - my-vcluster.example.com
  statefulSet:
    highAvailability:
      replicas: 3
    scheduling:
      podManagementPolicy: OrderedReady
For more details on upgrading from older versions to v0.20, please read our configuration conversion guide.
We consolidated the distro-specific vCluster Helm charts (vcluster (k3s), vcluster-k8s, vcluster-k0s, and vcluster-eks) into a single, unified chart. This change is designed to simplify management and upgrading of virtual clusters:
Single source: No more juggling multiple charts. The vcluster.yaml file serves as the single source for all configuration in a unified Helm chart for all distros.
Enhanced validation: We've introduced a JSON schema for the Helm values, ensuring that upgrades will only proceed if your configuration matches the expected format to reduce deployment errors.
Customizable distributions: The new unified chart structure enables easier customization of Kubernetes distributions directly via the Helm chart values:
# vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
Until now, virtual clusters running the vanilla k8s distro only supported etcd as the storage backend, which made this distro comparatively harder to operate than k3s. With vCluster v0.20, we’re introducing two new backing store options for vanilla k8s besides etcd:
SQLite offers a more lightweight solution for data storage without the overhead associated with more complex choices like etcd or external databases. It is the new default for virtual clusters running the vanilla k8s distro.
External Databases allow users to use any MySQL or Postgres compatible database as the backing store for virtual clusters running the vanilla k8s distro. This is especially useful for users who plan to outsource backing store operations to managed database offerings such as AWS RDS or Azure Database (see the sketch below).
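As a hedged sketch of what this looks like in vcluster.yaml (the controlPlane.backingStore.database.external fields and the connection-string format are assumptions; consult the backing store docs for the exact schema):
# sketch only - field names and connection-string format are assumptions, see the docs
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        # placeholder connection string for a MySQL-compatible database
        dataSource: "mysql://username:password@tcp(my-database-host:3306)/vcluster"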
Note: Switching backing stores is currently not supported. In order to use one of these new backing stores, you will need to deploy net new virtual clusters and migrate the data manually with backup and restore tooling such as Velero. Upgrading your configuration via vcluster convert config will explicitly write the previously used data store into your configuration to make sure upgrading an existing virtual cluster does not require changing the backing store.
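For example, for a pre-v0.20 virtual cluster that used the vanilla k8s distro with deployed etcd, the converted configuration pins the existing backing store explicitly (the same fields appear in the beta conversion example further down):
controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true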
Previously, vCluster offered the option to use EKS as a distro to run vCluster. However, this led many users to believe they had to use the EKS distro to run vCluster on an EKS host cluster, which is not correct because any vCluster distro can run on an EKS host cluster. Given that the EKS distro did not provide any benefits beyond the vanilla k8s distro and introduced unnecessary confusion and maintenance effort, we decided to discontinue it. If you want to deploy virtual clusters on an EKS host cluster, we recommend using the k8s distro for vCluster going forward. If you plan on upgrading a virtual cluster that used EKS as a distro, please carefully read and follow the upgrade guide in the docs.
There are several changes in the default configuration of a vCluster that are important for any users upgrading to v0.20+ or deploying net new clusters.
We changed the default distribution for the vCluster control plane from K3s to K8s. This is the least opinionated option, offering greater flexibility and compatibility:
Flexibility: More customization and scalability options, catering to a broader range of deployment needs.
Compatibility: In addition to embedded and external etcd, you can now use various storage backends including SQLite, Postgres, and MySQL. This addition addresses previous challenges with using K8s for smaller virtual clusters.
Upgrade Notes: Switching distributions is not supported, so in order to use this new default, you will need to deploy net new virtual clusters.
We've updated the default image repository for vCluster to ghcr.io/loft-sh/vcluster-pro. This change allows users to seamlessly test and adopt vCluster Pro features without having to switch images from OSS to Pro. The Pro features are integrated into the Pro image but remain inactive by default to ensure that your experience remains exactly the same as with the OSS image.
Upgrade Notes: When upgrading from previous versions, the image will automatically be updated to pull from the new repository. If you prefer to continue using the open-source image, simply adjust your vcluster.yaml configuration to set the repository to loft-sh/vcluster-oss, as sketched below; see the docs for details.
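A minimal sketch of that change, assuming the image registry defaults to ghcr.io so only the repository needs to be overridden:
# sketch - switch back to the open-source image; the registry default is an assumption
controlPlane:
  statefulSet:
    image:
      repository: loft-sh/vcluster-oss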
We’ve updated the scheduling rule of the control plane StatefulSet from OrderedReady to Parallel. Since vCluster typically runs as a StatefulSet, this setting cannot be changed after the virtual cluster has been deployed.
We increased the default resource requests for vCluster:
Ephemeral storage from 200Mi to 400Mi (to ensure that SQLite-backed virtual clusters have enough space to store data without running out of storage space when used over a prolonged period of time)
CPU from 3m to 20m
Memory from 16Mi to 64Mi
These changes are minimal and won’t have any significant impact on the footprint of a virtual cluster.
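As a hedged sketch, these defaults correspond to the StatefulSet resource requests in vcluster.yaml and can be tuned there (the controlPlane.statefulSet.resources block is assumed; the values mirror the new defaults listed above):
# sketch - assumes controlPlane.statefulSet.resources; values mirror the new defaults
controlPlane:
  statefulSet:
    resources:
      requests:
        ephemeral-storage: 400Mi
        cpu: 20m
        memory: 64Mi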
When deploying virtual clusters with the vCluster CLI, syncing real nodes is no longer enabled automatically for Kind clusters.
Upgrade Notes: If you want to keep this syncing enabled, you will need to add this configuration to your vcluster.yaml:
sync:
  fromHost:
    nodes:
      enabled: true
controlPlane:
  service:
    spec:
      type: NodePort
There have been significant CLI changes, as the changes above required refactoring how the CLI works in some areas. Besides the above changes, we also merged the overlapping commands found in loft and vcluster pro. The full summary of CLI changes can be found in our docs at the following pages:
General List of CLI Changes - Listing out what’s new, what’s been renamed or dropped.
Guide on using vcluster convert to convert values.yaml files of pre-v0.20 virtual clusters to the updated vcluster.yaml used when upgrading to a v0.20+ vCluster
Reference guide mapping loft CLI commands to the new vcluster commands
Prior to v0.20, when you enabled syncing Ingresses from the virtual to the host cluster, it would also automatically sync all IngressClasses from the host cluster. However, this required a cluster role which some vCluster users don’t have. We’ve now decoupled these syncing behaviors so you can enable syncing Ingresses and IngressClasses separately.
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
Our Cluster API (CAPI) provider has been updated with a new version (v0.2.0) that supports the new vcluster.yaml format.
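For orientation, here is a hedged sketch of how vcluster.yaml content is typically passed through the provider's VCluster resource; the apiVersion, kind, and spec.helmRelease.values fields shown are assumptions based on earlier provider versions and are not confirmed by these notes, so check the provider docs for the exact schema:
# sketch only - resource shape is an assumption, see the CAPI provider docs
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: VCluster
metadata:
  name: my-vcluster   # hypothetical name
spec:
  helmRelease:
    values: |-
      # inline vcluster.yaml content
      controlPlane:
        distro:
          k8s:
            enabled: true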
July 11th 2024
Deploy vCluster with your existing tools like Argo CD without requiring a Platform Agent to be installed in the host cluster. Externally deployed instances will now connect and register directly with the Platform after running the vCluster CLI command: vcluster add vcluster VCLUSTER_NAME
Alternatively, configure the Platform secret in the vcluster.yaml configuration file:
external:
  platform:
    apiKey:
      secretName: "vcluster-platform-api-key"
      namespace: "" # empty defaults to the Helm release namespace
The Platform now supports multiple vCluster deployment types:
Deployed by Platform, managed by Platform.
Deployed by Helm, managed by Platform with Platform Agent on host cluster.
Deployed by Helm, managed by Platform without a Platform Agent on host cluster.
Now you can use the latest vCluster version v0.20.0-beta together with the Platform v4.0.0-beta capabilities and activate vCluster Pro features.
The Platform automatically attempts to convert existing vCluster v0.19 values to the new v0.20 vcluster.yaml configuration file when upgrading via the UI. This is in addition to the vCluster v0.20 CLI command you can run to convert pre-v0.20 values: vcluster convert config --distro k3s -f VALUES_FILE > vcluster.yaml
The new vCluster UI editor brings together configuration, cluster resource visibility and audit logs into one full page view.
vCluster v0.20 instances display a new vcluster.yaml viewer and editor to make configuration easier, with validation and auto-complete.
"Spaces" has been renamed to "Host Namespaces" for clarity, however, functionality remains the same as Platform v3.4
vCluster v0.20.x is now the default version when creating virtual clusters via the Platform.
Offline virtual clusters without an Agent on the host cluster are automatically deregistered and removed from the Platform after 24 hours of being disconnected from the Platform.
Added a status filter to the Namespaces product page, formerly called "Spaces".
Project namespaces: The default namespace prefix changed from loft-p- to just p-.
Note: Existing Platform users need to explicitly set projectNamespacePrefix: loft-p- in the Platform configuration when upgrading or re-installing from pre-v4 to v4 to ensure the existing namespace prefix is maintained.
Isolated Control Plane: The isolated control plane configuration moved from the Platform to the vcluster.yaml configuration file under experimental.isolatedControlPlane.
Spaces: Existing users of the Loft Spaces product need to use the vCluster v0.20 CLI in conjunction with this Platform v4.0.0-beta release.
Removed APIs: virtualclusters.cluster.loft.sh and spaces.cluster.loft.sh
Externally deployed: Externally deployed virtual clusters now have a spec.external boolean field on the VirtualClusterInstance CRD instead of the previous loft.sh/skip-helm-deploy annotation.
Loft CLI: The Loft CLI is now deprecated. The majority of commands have been migrated to the vCluster v0.20 CLI.
Auto-import: Automatically importing via annotation is no longer supported. Virtual clusters can be automatically imported by configuring external.platform.apiKey.secretName or by creating them via the vCluster CLI while logged into the Platform: vcluster create VCLUSTER_NAME --driver platform.
Ensure that you have upgraded to v3 first before attempting to upgrade to v4.
Existing virtual clusters cannot have their vCluster version modified via the UI at the moment. This will be enabled in a subsequent release. However, upgrading from v0.19 to v0.20 is currently possible via the vCluster list page within the "vCluster Version" column.
Upgrading from Platform v3 to v4 is only possible with vCluster v0.20 CLI. The UI will support upgrading in a future release.
April 17th 2024
New
Improved
Fixed
feature: Add support for istio ingress gateway sleep mode activity tracking (by @lizardruss in #2519)
enhancement: Performance improvements for loft use space and loft use vcluster commands (by @lizardruss in #2609)
fix(agent): NetworkPeer proxy is now running highly available on all agent replicas (by @ThomasK33 in #2527)
fix(loftctl): Loftctl now prints additional debug messages if the --debug flag is set (by @ThomasK33 in #2521)
fix: The Platform will now wait with project deletion until the underlying namespace is deleted correctly (by @neogopher in #2544)
fix: fix an issue where generic sync was blocked without pro subscription (by @rohantmp in #2537)
fix: Fixed an IPAM race condition, potentially causing multiple network peers to have the same IP assigned on startup (by @ThomasK33 in #2550)
fix: The cluster controller will update a cluster's phase status during its initialization (by @ThomasK33 in #2566)
fix: Fixed an issue where the Platform wasn't able to deploy vClusters with version v0.20.0-alpha.1 or newer (by @FabianKramm in #2572)
fix: Automatically fix incorrect IPAM state in NetworkPeer CRDs (by @ThomasK33 in #2573)
fix: Fixed an issue with clusters not connecting to a highly available control plane (by @ThomasK33 in #2579)
fix: Fixed an issue where ts net server would restart if multiple access keys were found (by @FabianKramm in #2612)
chore: Updated default vCluster version to 0.19.5 (by @ThomasK33 in #2551)
April 15th 2024
New
Improved
Fixed
We’re thrilled to introduce the beta release of vCluster v0.20, marking a significant milestone driven by user feedback and insights gathered over the three years since we launched vCluster.
Read the blog post for more details or the conversion guide to get started.
We've streamlined the deployment process by consolidating all different vCluster Helm charts (vcluster, vcluster-k8s, vcluster-k0s, and vcluster-eks) into a single, unified chart. This change is designed to simplify management and upgrading of virtual clusters:
Single source: No more juggling multiple charts.
Value conversion: A new vCluster CLI command to convert vCluster v0.19 to v0.20 values is provided (view conversion guide)
Enhanced validation: We've introduced a JSON values schema to the Helm chart, ensuring that upgrades will only proceed if your configuration matches the expected format to reduce deployment errors.
Customizable distributions: The new unified chart structure enables easier customization of Kubernetes distributions directly via the Helm chart values:
controlPlane:
  distro:
    k8s:
      enabled: true
vcluster.yaml configuration & docs
We're excited to introduce the new vcluster.yaml file, replacing the previous Helm values.yaml. This new configuration features a completely revamped format designed to enhance the user experience:
Validation: The vCluster CLI and Platform UI now validate configurations when creating virtual clusters. In addition, most IDEs will now automatically provide validation and autocomplete for vCluster configurations.
Consolidated configuration: All configurations are centralized in the vcluster.yaml file, eliminating confusion previously caused by the mix of CLI flags and Helm values. Please note that this release has a set of unsupported CLI flags (view release notes); however, the vCluster CLI's vcluster convert config command makes it easy to transition to the new vcluster.yaml format.
Renamed fields: We've updated field names to be more intuitive, making them easier to understand and remember.
Reorganized structure: Fields are now logically grouped under topical categories, simplifying navigation and enhancing discoverability of related features.
Docs alignment: Our documentation now mirrors the structure of vcluster.yaml, making it easier to cross-reference settings within the file and the corresponding sections in the docs.
Converting to vcluster.yaml
In order to make it easy to convert your old values (pre-v0.20) to the new vcluster.yaml format, you can leverage the new vcluster convert config CLI command. For example, let's take these pre-v0.20 configuration values:
service:
  type: NodePort
sync:
  nodes:
    enabled: true
Passing the above old values using the vCluster CLI command vcluster convert config --distro k8s < /path/to/this/file.yaml will generate the following values:
controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true
  distro:
    k8s:
      enabled: true
  service:
    spec:
      type: NodePort
  statefulSet:
    scheduling:
      podManagementPolicy: OrderedReady
sync:
  fromHost:
    nodes:
      enabled: true
View configuration conversion guide
We changed the default distribution for the vCluster control plane from K3s to K8s. This is the least opinionated option, offering greater flexibility and compatibility:
Flexibility: More customization and scalability options, catering to a broader range of deployment needs.
Compatibility: In addition to embedded and external etcd, you can now use various storage backends including SQLite, Postgres, and MySQL. This addition addresses previous challenges with using K8s for smaller virtual clusters.
Embedded SQLite has been set as the default backing store for the K8s distribution. This is to simplify operations and enhance performance for smaller virtual clusters:
Efficiency: SQLite offers a more lightweight solution for data storage without the overhead associated with more complex choices like etcd.
Simplicity: Setup is more straightforward, reducing the complexity and time required to get virtual clusters up and running.
Continued Support for etcd: For users with larger deployments or those needing more advanced features, external etcd deployed by vCluster remains a fully supported option:
controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      deploy:
        enabled: true
We've updated the default image for vCluster to ghcr.io/loft-sh/vcluster-pro. This change allows users to seamlessly test and adopt vCluster Pro features without disrupting the existing open-source functionality. The Pro features are integrated into the Pro image but remain inactive by default to ensure that your experience remains consistent with the open-source version unless you specifically activate Pro features.
For users who prefer using the open-source image, simply adjust your vcluster.yaml configuration to use ghcr.io/loft-sh/vcluster-oss:
controlPlane:
  statefulSet:
    image:
      repository: ghcr.io/loft-sh/vcluster-oss
Prior to v0.20.0-beta.1, when you enabled syncing Ingresses from the virtual to the host cluster, it would also automatically sync all IngressClasses from the host cluster. However, this required a cluster role which some vCluster users don’t have. We’ve now decoupled these syncing behaviors so you can enable syncing Ingresses and IngressClasses separately.
sync:
  toHost:
    ingresses:
      enabled: true
  fromHost:
    ingressClasses:
      enabled: true
See the full release notes
March 7th 2024
New
Improved
Fixed
Egress-Only Agent Connections
Connected Host Clusters now communicate with the Platform via egress connections only. Previously, the Platform would reach out to each connected cluster to communicate with the remote vClusters. This change enhances security, simplifies connecting private clusters to the Platform, avoids creating non-expiring kubeconfigs, and makes the setup more scalable by moving functionality into the Platform's Agent. For more information, please take a look at the documentation.
Notable Changes
vClusters that were created externally (e.g. Helm, Argo CD) are now automatically imported into the Platform
We added the ability to clone vCluster templates
vCluster and Platform configurations are now validated in the editor UI
Backup and restore the Platform via the UI
We now support installing Helm Apps with schema validation, which previously would fail.
The Platform now uses vCluster v0.19.4 as the default version
Other Changes
feat: Added loft devpod rebuild $WORKSPACE_NAME --project=$PROJECT_ID command to force rebuild cloud workspaces (by @pascalbreuninger in #2496)
feat: Check connectivity to the Platform router prior to configuring it by @ThomasK33
feat: We now allow ignoring user agents for Sleep Mode
feat: Added embedded derp mesh to the Platform by @ThomasK33
feat: Added import / export to workspaces / fix nonce by @FabianKramm
feat: Added passwordRef & usernameRef to apps by @FabianKramm
feat: Added clusters/accesskey by @FabianKramm
feat: Added online status to clusters by @ThomasK33
feat: Added automatic virtual cluster import by @FabianKramm
feat: Agent Upgrade Flow using NetworkPeer by @ThomasK33
feat(apigateway): Added endpoints to create, serve and apply Platform backups by @pascalbreuninger
feat(cli): Create instance now allows specifying labels/annotations for instances
feat(cli): Added --product flag to loftctl start by @pascalbreuninger
feat(cli): Added get cluster-access-key command by @ThomasK33
feat(ui) - The Platform admin ui now displays a settings preview
feat(ui): runners ui by @andyluak
feat(ui): Accessing sleeping spaces if pods found is now allowed by @andyluak
feat(ui): Hide refresh license when offline by @andyluak
feat(ui): Added editor for workspace template definition by @pascalbreuninger
feat(ui): Icons and monaco are now lazy loaded by @andyluak
feat(ui): Tooltip displayed when Project quota is reached by @andyluak
feat(ui): DevPod workspaces by @andyluak
feat(ui): Restrict UI based configuration of Helm managed virtual cluster instances by @pascalbreuninger
feat(ui): New cluster connect flow by @andyluak
feat(ui): Added trial ended page by @pascalbreuninger
feat(ui): Preserve view in project change by @andyluak
feat(ui): Standardized status columns by @andyluak
feat(ui): Improved visibility of sleep mode by @andyluak
feat(ui): Hide editor secret values @andyluak
fix(ui): Crashing page on yaml editor by @andyluak
fix(ui): Style consistency by @andyluak
fix(ui): Sidebar updates by @andyluak
fix(ui): Age column filtering by @andyluak
fix(ui): Scrollbar when collapsing by @andyluak
fix(ui): Find default DevPod template if all templates are allowed by @pascalbreuninger
fix(ui): Align profile menu avatar with other nav items by @pascalbreuninger
fix(ui): Remove helm managed warning from vcluster objects by @pascalbreuninger
fix(ui): Login improvements by @andyluak
fix(ui): Force logout after error during login on /login page by @pascalbreuninger
fix(ui): vCluster status sync by @andyluak
fix(ui): unable to deploy manifests by @andyluak
fix(ui): Quota percentage now shows a status bar for bigint based quantities by @pascalbreuninger
fix(ui): Remove confusing text about kubectl app by @Oleg Matskiv
fix(loftctl): filter DevPod.Pro projects for runner access by @pascalbreuninger
fix(router): Disable Platform router on startup if config does not use router domains by @ThomasK33
fix(agent): Fixed a network peer issue when running agents in a highly-available setup (by @ThomasK33 in #2511)
fix: Fixed an issue where ArgoCD strategic merge patches failed with an unknown format error. (by @lizardruss in #2514)
fix: Rename default devpodworkspacetemplates by @pascalbreuninger
fix: Improved handling of editor errors by @andyluak
fix: Use insecure skip verify for runner task logs by @lizardruss
fix: Reconciler panic & vcluster upgrade by @FabianKramm
fix: Improve logging for failed cluster access by @rohantmp
fix: Use forked Helm with skip-schema-validation by @rohantmp
fix: Platform login message & remove trace output by @FabianKramm
fix: Proxy-handler connection by @FabianKramm
fix: Project roles now include DevPod.Pro workspace instances by @pascalbreuninger
fix: Platform config update & Platform upgrade timeout by @FabianKramm
fix: Allow features that are active, included & preview by @FabianKramm
fix: Validate clusterref for existing VirtualCluster and Spaces by @neogopher
fix: Allow service account token for cluster / direct virtual cluster by @FabianKramm
fix: Conditions & patch helper usage issues by @lizardruss
fix: Updated Platform vars cluster to work with projects by @lizardruss
fix: A Space from a different project is able to take over namespace owned by someone else by @lizardruss
fix: Rancher integration problem by @FabianKramm in #2427 [Alpha]
fix: Logging of debug errors by @FabianKramm in #2428
refactor(ui): Improved vCluster logging by @FabianKramm
refactor: Improved DevPod logging by @FabianKramm
refactor: Use new vCluster telemetry
refactor: Automatically set forwardToken to true on import by @FabianKramm
refactor: Remove replace imports by @FabianKramm
February 11th 2024
New
Improved
Fixed
Changes made since: v0.18.1
We previously released embedded etcd for K3s and have now added support for the EKS, K0s, and K8s distributions. When enabled, vCluster will start managing an embedded etcd cluster within the Syncer container. vCluster automatically adds or removes peers as the number of StatefulSet replicas changes. This makes using HA a lot easier.
For more information, refer to the doc
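As a hedged sketch in the pre-v0.20 values format (the embeddedEtcd block is an assumption based on the v0.19 chart; syncer.replicas appears elsewhere in these notes), enabling embedded etcd for an HA setup looked roughly like this:
# sketch - pre-v0.20 values format; the embeddedEtcd field name is an assumption
embeddedEtcd:
  enabled: true
syncer:
  replicas: 3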
The Centralized Admission Control feature allows platform admins to enforce webhook configurations (both validating and mutating) referencing the host cluster or external policy services from within the vCluster.
These configurations will be read-only within the vCluster and can only be set from the vCluster CLI or Helm values upon creation. This provides assurance to platform admins that vCluster admins will not be able to bypass or alter the hooks they set for a vCluster.
For more information, refer to the doc
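A rough sketch of what such a configuration could look like in the pre-v0.20 Helm values. The top-level centralAdmission key is an assumption, while the entry itself follows the standard ValidatingWebhookConfiguration shape; refer to the linked docs for the authoritative schema:
# sketch only - the top-level key is an assumption for the pre-v0.20 values format
centralAdmission:
  validatingWebhooks:
    - apiVersion: admissionregistration.k8s.io/v1
      kind: ValidatingWebhookConfiguration
      metadata:
        name: example-policy # hypothetical name
      webhooks:
        - name: example.policy.example.com
          admissionReviewVersions: ["v1"]
          sideEffects: None
          failurePolicy: Fail
          rules:
            - apiGroups: [""]
              apiVersions: ["v1"]
              operations: ["CREATE", "UPDATE"]
              resources: ["pods"]
          clientConfig:
            url: https://policy-service.example.com/validate # hypothetical external policy service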
Allow node port service for remote vCluster by @FabianKramm
Added offline license support by @FabianKramm
Added OSS license report automation by @ThomasK33
Bumped k8s version by @FabianKramm
Added Kyverno guide to docs by @facchettos
Removed enableHA field by @facchettos
Added migration support for etcd by @facchettos
Fix remote vCluster kubeconfig creation by @FabianKramm
vClusters are now even more streamlined with only 1 Pod instead of 3+ Pods. Similar to how we refactored K3s and K0s in an earlier version, we have now refactored the K8s and EKS distros to copy the api-server and controller-manager binaries directly into the Syncer container to reduce complexity, make the different vCluster distributions more similar, and streamline certain features, such as metrics-server proxying.
We refactored how plugins in vCluster work and moved from a sidecar pattern to an init container pattern, where plugin binaries are copied through an init container into the syncer container.
This allows us to reuse go-plugin, which is one of the most widely used plugin frameworks out there. It makes logging easier, as there is only a single container, and also allows you to directly package the plugin binary into the syncer image if needed.
Besides changing the architecture of plugins, we also now allow specifying plugin configuration through a config Helm value:
plugin:
  my-plugin:
    version: v2
    image: ...
    config:
      my-plugin-config: my-value
      other-plugin-config: other-value
This config will be passed to the plugin and can easily be used within the plugin to unmarshal into a config struct. We also got rid of a lot of tech debt with this refactoring and added a new example plugin to sync secrets from the host cluster to the virtual cluster.
For more information about plugins, refer to the doc
Added basic comparison matrix for vCluster distro versions by @ishankhare07 in https://github.com/loft-sh/vcluster/pull/1411
Disabled dualstack for k0s by @facchettos in https://github.com/loft-sh/vcluster/pull/1413
Added connect cluster command by @ThomasK33 in https://github.com/loft-sh/vcluster/pull/1415
Now writes the config to disk to avoid race condition with secret update by @facchettos in https://github.com/loft-sh/vcluster/pull/1418
Added the cp subcommand by @facchettos in https://github.com/loft-sh/vcluster/pull/1423
Feat: add node port config by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1426
Added initial Generic Sync Example for Traefik by @MarkTurney in https://github.com/loft-sh/vcluster/pull/1431
Added how to enable-ssl-passthrough so users can avoid leaving the docs by @mpetason in https://github.com/loft-sh/vcluster/pull/1441
Merged k8s api-server, controller-manager, scheduler into syncer container by @facchettos in https://github.com/loft-sh/vcluster/pull/1440
Removed special cases for setup with k8s by @facchettos in https://github.com/loft-sh/vcluster/pull/1443
Added OSS license report action by @ThomasK33 in https://github.com/loft-sh/vcluster/pull/1447
Changed distro detection by @facchettos in https://github.com/loft-sh/vcluster/pull/1451
Added field to specify dedicated loadbalancer annotations by @ThomasK33 in https://github.com/loft-sh/vcluster/pull/1450
Use external package to manage values & fix imports by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1452
Added plugin v2 by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1455
Added hint about wildcard support for sync-labels field in docs by @neogopher in https://github.com/loft-sh/vcluster/pull/1461
Added cli info command by @facchettos in https://github.com/loft-sh/vcluster/pull/1462
Added loft crds to scheme by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1470
Added ignore-not-found flag by @mariuskimmina in https://github.com/loft-sh/vcluster/pull/1458
Removed unused syncer.noargs by @facchettos in https://github.com/loft-sh/vcluster/pull/1475
Improved startup by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1479
Now handles both deprecated replicas and syncer.replicas by @facchettos in https://github.com/loft-sh/vcluster/pull/1474
Added embedded etcd in k8s by @facchettos in https://github.com/loft-sh/vcluster/pull/1459
Added volume mount by @facchettos in https://github.com/loft-sh/vcluster/pull/1482
Migrated all replicas to new format by @facchettos in https://github.com/loft-sh/vcluster/pull/1485
Sync endpoint updates for service mappings of headless services by @neogopher in https://github.com/loft-sh/vcluster/pull/1481
Changed the default to not delete the persistent volume claim by @facchettos in https://github.com/loft-sh/vcluster/pull/1488
Removed unused values since the merge into a single container by @facchettos in https://github.com/loft-sh/vcluster/pull/1476
Show vCluster output only in debug by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1495
Changed migrate arguments by @facchettos in https://github.com/loft-sh/vcluster/pull/1494
Renamed kubelet-config to worker-config as it is removed in k0s 1.29 by @facchettos in https://github.com/loft-sh/vcluster/pull/1516
Updated analytics client lib by @facchettos in https://github.com/loft-sh/vcluster/pull/1520
Bumped k3s to 1.29 by @ishankhare07 in https://github.com/loft-sh/vcluster/pull/1442
Bumped k8s dependencies by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1471
Bumped k8s to 1.29 and kind to 1.28 by @ishankhare07 in https://github.com/loft-sh/vcluster/pull/1410
Refactor: add syncer watch on host by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1493
Refactor: enqueue host events by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1497
Refactor: events controller by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1510
Refactor: add isRemote to WriteKubeConfigToSecret by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1433
Refactor: allow extra sans by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1434
Fix: issue where vcluster would fallback to 8.8.8.8 in isolated mode without any way to configure it by @facchettos in https://github.com/loft-sh/vcluster/pull/1511
Fix: show pro vclusters if not logged in by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1416
Fix: increase limits for init containers by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1422
Fix: wrong volumes check by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1432
Fix: metrics server proxying by @FabianKramm in https://github.com/loft-sh/vcluster/pull/1480
Fix: serviceCIDR bug by @facchettos in https://github.com/loft-sh/vcluster/pull/1477
Fix: issue where vcluster would create pvcs even with persistence disabled by @facchettos in https://github.com/loft-sh/vcluster/pull/1492
Fix: failing Conformance test - evicts pods with minTolerationSeconds by @neogopher in https://github.com/loft-sh/vcluster/pull/1506
Fix: issue where emptyDir data volume never gets created regardless of .Values.syncer.storage.persistence value by @Guent4 in https://github.com/loft-sh/vcluster/pull/1513
Fix: Resolved an issue where running applications in vCluster on ARM64 nodes were encountering architecture label mismatches by @yeahdongcn in https://github.com/loft-sh/vcluster/pull/1514
@MarkTurney made their first contribution in https://github.com/loft-sh/vcluster/pull/1431
@mariuskimmina made their first contribution in https://github.com/loft-sh/vcluster/pull/1458
@Guent4 made their first contribution in https://github.com/loft-sh/vcluster/pull/1513
@yeahdongcn made their first contribution in https://github.com/loft-sh/vcluster/pull/1514
Full Changelog: https://github.com/loft-sh/vcluster/compare/v0.18.1...v0.19.0
December 6th 2023
Improved
added the load-testing docs to vcluster-pro by @facchettos in #182
add e2e test for cross vcluster coredns targets by @ishankhare07 in #183
chore: bump vcluster dep by @FabianKramm in #186