Homelab Considerations for VMware vSphere 8

Article Credit: WilliamLam.com

There has been a lot of great technical content from both VMware and the broader community since the announcement of vSphere 8, which happened a few weeks ago. I know many of you are excited to get your hands on both vSphere 8 and vSAN 8, and while we wait for GA, I wanted to share some of my own experiences, along with some considerations for those interested in running vSphere 8 in their homelab.

As with any vSphere release, you should always carefully review the release notes when they are made available and verify that all of your hardware and its underlying components are officially listed on the VMware HCL, which will be updated when vSphere 8 and vSAN 8 go GA. This is the only way to ensure that you will have the best possible experience and a supported configuration from VMware.

Disclaimer: The following considerations are based on early observations using pre-GA builds of vSphere 8 and they do not reflect any official guidance or support from VMware.

CPU Support

The following Intel and AMD CPU generations will not be supported with vSphere 8:

  • Intel
      • SandyBridge-DT, SandyBridge-EP/EN
      • IvyBridge-DT, IvyBridge-EP/EN, IvyBridge-EX
      • Haswell-DT, Haswell-EP, Haswell-EX
      • Broadwell-DT/H
      • Avoton
  • AMD
      • Bulldozer – Interlagos, Valencia, Zurich
      • PileDriver – Abu Dhabi, Seoul, Delhi
      • Steamroller – Berlin
      • Kyoto

If you boot the ESXi 8.0 installer and it detects that you have an unsupported CPU, you will see the following error message.

The default behavior of the ESXi 8.0 installer is to prevent users from installing ESXi on a system that has a CPU that is not officially supported.

ProTip: With that said, there is a workaround for those who wish to forgo official support from VMware, or for homelab and testing purposes: you can add the following ESXi kernel option (press SHIFT+O during boot):

allowLegacyCPU=true

which turns the error message into a warning and enables the option to install ESXi, with the understanding that the CPU is not officially supported and that you accept any risks in doing so.

I/O Device Support

Similarly, for I/O devices such as networking and storage controllers that are not supported with vSphere 8, the ESXi 8.0 installer will also list the type of device and its respective Vendor and Device ID (see the screenshot above for an example).

To view the complete list of unsupported I/O devices for vSphere 8, please refer to VMware KB 88172 for more information. I know many in the VMware Homelab community make use of the Mellanox ConnectX-3 for networking, so I just wanted to call out that it is no longer supported and folks should look to using either the ConnectX-4 or ConnectX-5 as an alternative.

ProTip: A very easy and non-impactful way to check whether your existing CPU and I/O devices will run ESXi 8.0 is to simply boot the ESXi 8.0 installer from USB and check whether it detects all of your devices. You do NOT have to perform an installation to check for compatibility, and you can also drop into the ESXi Shell (Alt+F1) by logging in as root with no password to perform additional validation. If you are unsure whether ESXi 8.0 will run on your platform, this is the easiest way to validate without touching your existing installation and workloads.
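If your hosts are already running an earlier ESXi release and are managed by vCenter Server, another way to gather the same information without rebooting anything is to pull the CPU model and the PCI Vendor/Device IDs over the vSphere API and compare them against the release notes and VMware KB 88172. Below is a minimal pyVmomi sketch of that idea; the hostnames, credentials and host name are placeholders, and this is just my own quick-check approach rather than anything official from VMware.

# Quick compatibility pre-check: dump the CPU model and PCI Vendor/Device
# IDs from an existing host via the vSphere API (pyVmomi), so they can be
# compared against the vSphere 8 release notes and VMware KB 88172.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcsa.lab.local"        # placeholder vCenter (or ESXi) hostname
USERNAME = "administrator@vsphere.local"
PASSWORD = "VMware1!"
ESXI_NAME = "esxi-01.lab.local"   # placeholder ESXi host to inspect

ctx = ssl._create_unverified_context()   # homelab: skip cert validation
si = SmartConnect(host=VCENTER, user=USERNAME, pwd=PASSWORD, sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == ESXI_NAME)
    view.Destroy()

    # CPU model string, e.g. to spot Haswell/Broadwell era processors
    for pkg in host.hardware.cpuPkg:
        print("CPU package %d: %s" % (pkg.index, pkg.description))

    # PCI devices with their Vendor:Device IDs (signed shorts in the API,
    # so mask to 16 bits to get the familiar hex form, e.g. 15b3:1003)
    for dev in host.hardware.pciDevice:
        print("%s:%s  %s %s" % (format(dev.vendorId & 0xffff, "04x"),
                                format(dev.deviceId & 0xffff, "04x"),
                                dev.vendorName, dev.deviceName))
finally:
    Disconnect(si)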

For those that require the Community Networking Driver for ESXi Fling to detect onboard networking, like some of the recent Intel NUC platforms, folks should be happy to learn that this Fling has been officially productized as part of vSphere 8, and a custom ESXi ISO image will no longer be needed. For those that require the USB Network Native Driver for ESXi Fling, a new version of the Fling that is compatible with vSphere 8 will be required, and folks should wait for that to be available before installing and/or upgrading to vSphere 8.

USB Install/Upgrade Support

Last year, VMware published revised guidance in VMware KB 85685 regarding installation media for ESXi, which also applies to ESXi 8.0, specifically when using an SD or USB device. While ESXi 8.0 will continue to support installation and upgrade using an SD/USB device, it is highly recommended that customers consider more reliable installation media such as an SSD, especially for the ESXi OSData partition. Post-ESXi 8.0, installations and upgrades using an SD/USB device will no longer be supported, and it is better to have a solution in place now than to wait for that to happen, if you ask me.

If you do decide to install and/or upgrade ESXi 8.0 using SD/USB device, the following warning message will be presented before allowing you to proceed with the install or upgrade.
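If you are not sure what your existing hosts boot from today, one low-effort check is to list the mounted filesystem volumes on the host over the vSphere API. Here is a minimal pyVmomi sketch of that; the hostnames and credentials are placeholders, and whether the bootbank and OSData volumes show up in this view is an assumption based on my own hosts, so cross-check with esxcli storage filesystem list from the ESXi Shell if anything looks off.

# List mounted filesystem volumes on a host (type, name, capacity, path)
# to see where the bootbanks and OSData currently live before planning a
# move away from SD/USB media. Hostnames and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi-01.lab.local")
    view.Destroy()

    for mount in host.config.fileSystemVolume.mountInfo:
        vol = mount.volume
        print("%-12s %-24s %8.1f GB  %s" % (
            vol.type, vol.name, vol.capacity / (1024 ** 3),
            mount.mountInfo.path))
finally:
    Disconnect(si)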

Hardware Platform Support

While this is not an exhaustive list of hardware platforms that can successfully run ESXi 8.0, I did want to share the list of systems that I have personally tested, and I hope others will contribute to it over time to help the broader community.

The following hardware platforms can successfully run ESXi 8.0:

  • Intel NUC 9 Extreme (Ghost Canyon)
  • Intel NUC 9 Pro (Quartz Canyon)
  • Intel NUC 10 Performance (Frost Canyon)
  • Intel NUC 11 Performance (Panther Canyon)
  • Intel NUC 11 Pro (Tiger Canyon)
  • Intel NUC 11 Extreme (Beast Canyon)
  • Intel NUC 12 (Dragon Canyon)
  • Intel NUC 12 Pro (Wall Street Canyon)
  • Supermicro E200-8D
  • Supermicro E302-12D

VCSA Resource Requirements

With all the new capabilities in vSphere 8, it should come as no surprise that additional resources are required for the vCenter Server Appliance (VCSA). Compared to vSphere 7, the only change is the amount of memory for each of the VCSA deployment sizes, which has increased by 2GB. For example, in vSphere 7 a “Tiny” configuration required 12GB of memory, and in vSphere 8 it now requires 14GB of memory.

ProTip: It is possible to change the memory configuration after the initial deployment, and from my limited use, I have been able to run a Tiny configuration with just 10GB of memory without noticing any impact or issues. Depending on your usage and feature consumption you may need more memory, but so far it has been working fine for a small 3-node vSAN cluster.
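For reference, here is a minimal pyVmomi sketch of the memory change described above. The VM name, host and credentials are placeholders, and 10GB is simply the value that has worked for me; since vCenter itself is down while the VCSA is powered off, the sketch connects directly to the ESXi host running the appliance.

# Reduce the memory of an already-deployed VCSA (e.g. Tiny from 14GB to
# 10GB) via the vSphere API. The appliance must be cleanly shut down
# before the reconfigure; names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-01.lab.local", user="root",
                  pwd="VMware1!", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vcsa = next(v for v in view.view if v.name == "vcsa80")
    view.Destroy()

    # Memory can only be lowered while the VM is powered off
    if vcsa.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        raise SystemExit("Shut down the VCSA guest OS first")

    spec = vim.vm.ConfigSpec(memoryMB=10 * 1024)   # 10GB
    WaitForTask(vcsa.ReconfigVM_Task(spec))
    print("New memory size: %d MB" % vcsa.config.hardware.memoryMB)
finally:
    Disconnect(si)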

Nested ESXi Resource Requirements

Using Nested ESXi is still by far the easiest and most efficient way to try out all the cool new features that vSphere 8 has to offer. If you plan to kick the tires with the new vSAN 8 Express Storage Architecture (ESA), at least from a workflow standpoint, make sure you can spare at least 16GB of memory per ESXi VM, which is the minimum required to enable this feature.

Note: If you intend to only use the vSAN 8 Original Storage Architecture (OSA), then you can ignore the 16GB minimum, as that only applies to enabling vSAN ESA. For basic vSAN OSA enablement, 8GB (per ESXi VM) is sufficient; if you plan to run workloads, you may want to allocate more memory, but the behavior should be the same as in vSphere 7.x.
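If you build your nested ESXi VMs by hand, the two settings that matter most for a vSAN ESA test, at least in my experience, are the 16GB of memory and exposing hardware-assisted virtualization to the guest. A minimal pyVmomi sketch of applying both to an existing, powered-off nested ESXi VM might look like this; the VM name, CPU count and credentials are placeholders.

# Bump an existing (powered-off) nested ESXi VM to 16GB of memory and
# expose hardware-assisted virtualization to the guest, which is what a
# vSAN ESA-capable nested host needs. Names/credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "nested-esxi-80-1")
    view.Destroy()

    spec = vim.vm.ConfigSpec(
        memoryMB=16 * 1024,        # 16GB minimum to enable vSAN ESA
        numCPUs=4,                 # arbitrary lab-sized CPU count
        nestedHVEnabled=True)      # expose VT-x/AMD-V to the nested host
    WaitForTask(vm.ReconfigVM_Task(spec))
    print("%s: %d MB, nested HV: %s" % (
        vm.name, vm.config.hardware.memoryMB, vm.config.nestedHVEnabled))
finally:
    Disconnect(si)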

A bonus capability that I think is worth mentioning is that configuring MAC Learning on a Distributed Virtual Portgroup is now possible through the vSphere UI as part of vSphere 8. The MAC Learning feature was introduced back in vSphere 6.7 but was only available using the vSphere API and I am glad to finally see this available in the vSphere UI for those looking to run Nested Virtualization!
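For anyone still on the API path, or looking to automate this, here is roughly what the API-based configuration has looked like since vSphere 6.7, sketched with pyVmomi. The portgroup name and credentials are placeholders, and the limit and flooding values are just the ones I tend to use for Nested Virtualization, so adjust to taste.

# Enable MAC Learning on an existing Distributed Virtual Portgroup via the
# vSphere API (pyVmomi), the way it had to be done prior to the vSphere 8
# UI support. Portgroup name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "Nested-Trunk-PG")
    view.Destroy()

    mac_learning = vim.dvs.VmwareDistributedVirtualSwitch.MacLearningPolicy(
        inherited=False, enabled=True, allowUnicastFlooding=True,
        limit=4096, limitPolicy="ALLOW")
    mac_mgmt = vim.dvs.VmwareDistributedVirtualSwitch.MacManagementPolicy(
        inherited=False, macLearningPolicy=mac_learning)

    port_setting = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        macManagementPolicy=mac_mgmt)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=port_setting)
    WaitForTask(pg.ReconfigureDVPortgroup_Task(spec))
    print("MAC Learning enabled on %s" % pg.name)
finally:
    Disconnect(si)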
