
Create and Maintain a Virtual Infrastructure with bhyve in FreeBSD

 

From 0 to Bhyve on FreeBSD 13.1 

FreeBSD has its own high-performance hypervisor called “bhyve”. Much like the Linux kernel’s KVM hypervisor, bhyve enables the creation and maintenance of virtual machines—aka “guests”—which run at near-native speed alongside the host operating system.

Although bhyve got a later start than Linux KVM, in most ways it has caught up with its primary rival—and in some ways surpassed it. When configured properly, bhyve guests perform similarly to KVM guests, and in some cases outperform them.

The major remaining hurdle bhyve needs to overcome to reach parity with KVM revolves around the tooling and documentation needed for a system administrator to create and manage their virtual machines. Today, we’re going to take a crack at the latter by providing a full, newbie-friendly guide to setting up a bhyve-based virtual machine host on FreeBSD 13.1.

Once you finish following this guide, you'll have a shiny new FreeBSD 13.1 system that runs VMs with operating systems including but not limited to Linux, FreeBSD, and Windows—and it will automatically take and prune snapshots of both the host OS and the guests, according to policies you control.

 

Installing FreeBSD 13.1 

You don’t need incredible hardware to make virtualization practical—any recent AMD CPU and most recent Intel CPUs will fit the bill nicely. Similarly, you can get by with 8GiB of RAM—or even less, depending on how many guests you need and what you expect them to do.

In addition to hardware virtualization support on the CPU itself, you’ll need to make certain this feature is enabled in BIOS. This is unfortunately out of scope for the article, since every BIOS is different—but don’t gloss over this step. In my experience, the majority of motherboard vendors disable virtualization by default—particularly on brand-name OEM machines and laptops.
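
If you already have a FreeBSD environment handy (the installer's live shell works too), one quick sanity check is to grep the kernel's boot messages for the relevant CPU feature lines—on Intel systems you should see a line beginning with VT-x:, and on AMD systems one beginning with SVM:. This is just a rough check; the exact flag list varies by CPU, and no output usually means the feature is missing or disabled in BIOS:

root@fbsd:/# grep -E 'VT-x|SVM' /var/run/dmesg.boot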

Once you’ve made sure hardware virtualization is supported and enabled on your hardware, it’s time to install FreeBSD 13.1. Grab yourself a copy of the amd64-dvd1 ISO from the official download site, prep a thumbdrive with it, and get to installing!

For the purposes of this guide, we recommend treating the install as a next-fest—don’t mess with the defaults unless you know what you’re doing. In particular, we strongly recommend letting it automatically partition your drive with a ZFS root—that’s going to be crucial for some of our later steps.

When FreeBSD asks if you’d like to create additional users at the end of the installation process, tell it yes—you’ll need one standard user account for yourself, attached to the wheel group.
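
If you skip that step or forget the wheel group, it's easy to fix after the fact; substitute your own account name for the placeholder "youruser" below:

root@fbsd:/# pw groupmod wheel -m youruser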

 

A shortcut for the impatient 

Although we walk you through both installing the GNOME desktop and enabling and initializing VM support step by step here, we’ve also made a script available to do all the grunt work for you.

If you'd prefer to let the script handle installation, git clone our 0-to-bhyve repository into a working directory, run the script, and let the machine handle the magic—once it's done, you'll just need to manually install the correct graphics driver for your system and reboot.

On a freshly installed system, the process looks like this (with some of the commands' output elided):

root@fbsd:/# pkg update  
root@fbsd:/# pkg install -y git 
root@fbsd:/# git clone https://github.com/jimsalterjrs/0-to-bhyve 
root@fbsd:/# cd 0-to-bhyve 
root@fbsd:/# sh ./install-bhyve.sh 

After the reboot, you’ll have:

  • GNOME desktop installed, enabled, and configured for automatic start
  • bhyve itself enabled and configured
  • The vm-bhyve guest management system installed and configured
  • A custom uefi.conf template to use when creating new VMs

—and you can skip the next couple of sections, picking the guide back up at Creating a Windows Server 2019 VM using vm-bhyve.

However, we still recommend reading the guide first—that way, you understand what the script is doing, and can better troubleshoot in case of any issues!

 

Installing the GNOME desktop 

Now that you’ve installed FreeBSD 13.1 and rebooted, it’s time to get yourself a GUI so that you can manage your VMs.

Purists may consider this step optional, and if you want to run your machine headless you can—but without a GUI, you won’t be able to pull graphical consoles on your VMs locally. (You can still manage them graphically from a remote machine using VNC, but for this tutorial we’re going to assume you’ve got a local desktop available.)
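
If you do go headless, one common approach—once your guests are up and running—is to tunnel a guest's VNC console over SSH from whatever desktop you're sitting at. The hostname, username, and port below are illustrative, and this assumes a VNC viewer is installed on the remote desktop:

you@desktop:~$ ssh -fN -L 5900:127.0.0.1:5900 youruser@fbsd
you@desktop:~$ vncviewer 127.0.0.1:5900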

The first step is installing the necessary packages:

root@fbsd:/# pkg update 
root@fbsd:/# pkg install xorg gdm gnome gnome-desktop 

You will also probably need to install a package that supports your GPU. Our test hardware for this tutorial is a SuperMicro server which uses on-motherboard AST graphics; without also installing the xf86-video-ast package, xorg wouldn’t start at all.

You can get a list of supported video drivers with the command pkg search xf86-video, which shows you a list of all packages beginning with xf86-video. Once you've found the correct package for your video chipset, pkg install it just like you did for xorg and gnome themselves.
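
For example, on the AST-based test hardware described above, that search-then-install step looks roughly like this (your driver package will almost certainly differ):

root@fbsd:/# pkg search xf86-video
(a list of available xf86-video-* driver packages)
root@fbsd:/# pkg install xf86-video-ast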

In the next step, we need to enable the graphical environment in /etc/rc.conf by appending the following lines:

# enable GNOME3 desktop environment 
gnome_enable="YES" 
moused_enable="YES" 
dbus_enable="YES" 
hald_enable="YES" 
gdm_enable="YES" 

Most guides recommend enabling the optional /proc filesystem. We did not find that necessary in our testing, but it’s possible that some features of the GNOME3 desktop depend on it and we just didn’t notice them. If you’d like to enable /proc, add the following line to the bottom of /etc/fstab:

proc	/proc	procfs	rw	0	0 
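
If you'd like /proc available immediately rather than waiting for the next reboot, you can mount it by hand once the fstab entry is in place:

root@fbsd:/# mount /proc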

Now that you’ve installed your packages and configured /etc/rc.conf (and optionally enabled /proc in /etc/fstab), it’s time to reboot the system into your new graphical desktop!

root@fbsd:/# shutdown -r now

When the system comes back up, you’ll have a graphical login prompt which will take you to a full GNOME3 desktop environment.

 

Installing, enabling, and initializing vm-bhyve 

Now that we’ve got a nice desktop environment active, let’s enable and configure bhyve. We will be using the vm-bhyve package to create, configure, and manage our VMs. (You may have noticed virt-manager in the pkg repositories and be tempted to use it instead—but it’s nowhere near ready for production yet.)

If you followed our recommendations in the “Installing FreeBSD” section above, you’ll have a ZFS pool named zroot, and the following directions can be followed exactly. If you did something different, you’ll need to modify these steps accordingly!

root@fbsd:/# pkg install vm-bhyve bhyve-firmware tigervnc-viewer 
root@fbsd:/# zfs create zroot/bhyve 
root@fbsd:/# zfs set recordsize=64K zroot/bhyve 
root@fbsd:/# zfs create zroot/bhyve/.templates 

With the above steps, we first installed the vm-bhyve package (along with the UEFI firmware bhyve needs and a VNC viewer we'll use to pull graphical consoles on our VMs later), then created a parent dataset for the VMs we will be creating.

We then changed the ZFS recordsize property from its default value of 128K to 64K to better suit the workload our VMs will impose, and created a dataset to store the templates we'll use to create VMs.

For now, let’s go ahead and initialize our VM root. This step is only performed once—don’t repeat it when you create more VMs later! First, add the following lines to /etc/rc.conf:

# needed for virtualization support 
vm_enable="YES" 
vm_dir="zfs:zroot/bhyve" 

Now, add the following line to the end of /boot/loader.conf:

# needed for virtualization support 
vmm_load="YES" 
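
If you'd rather not edit those files by hand, the same changes can be made from the command line—sysrc handles the /etc/rc.conf entries, and kldload lets you load the vmm kernel module right away instead of waiting for the reboot. The following is simply an equivalent sketch of the edits described above:

root@fbsd:/# sysrc vm_enable=YES 
root@fbsd:/# sysrc vm_dir="zfs:zroot/bhyve" 
root@fbsd:/# echo 'vmm_load="YES"' >> /boot/loader.conf 
root@fbsd:/# kldload vmm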

With this done, it’s time to configure networking for our bhyve guests. First, we’ll need to figure out the device name of our network interface. This device name will differ from one system to the next based on the type of network card being used, but you should be able to find it like this:

root@fbsd:/# ifconfig | grep RUNNING 
ix1: flags=8963<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500 
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 

On a brand-new, freshly installed system you should only see two interfaces listed: loopback (lo0) and your actual hardware network interface. On this system, that’s ix1—but other systems may have designations like em0, re0, and so forth.

Now that you know your interface name, substitute it for "ix1" in the commands below if necessary!

root@fbsd:/# vm init 
root@fbsd:/# vm switch create public 
root@fbsd:/# vm switch add public ix1 

We now have a switch interface named vm-public which will show up in future ifconfig commands; the VMs that we create will automatically get new tap interfaces created and assigned to them using that vm-public switch.
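
You can confirm the new switch and its member interface with vm-bhyve itself; the output below is illustrative, and your interface name and column values may differ:

root@fbsd:/# vm switch list 
NAME    TYPE      IFACE      ADDRESS  PRIVATE  MTU  VLAN  PORTS 
public  standard  vm-public  -        no       -    -     ix1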

At this point, we're done with our initial configuration, so let's reboot the system one more time to apply all the changes we made in /etc/rc.conf and /boot/loader.conf:

root@fbsd:/# shutdown -r now 

When the system comes back up, log into your GNOME3 desktop once more, and we’ll be ready to create our first VMs!

 

Creating a Windows Server 2019 VM using vm-bhyve

In this tutorial, our first guest will be a Windows 2019 server. First, download the ISO for Windows Server 2019. (If you don’t already have a web browser to use for this step, pkg install firefox-esr will get you one.)

If you're willing to register with Microsoft, you can download the ISO for Server 2019 directly from Microsoft. If you'd prefer to bypass registering, you can use isoriver's landing page instead. In either case, the actual ISO file will come from Microsoft—isoriver just deeplinks directly to the file, bypassing the registration wall.

Whichever route you take to get the ISO, download it directly to /zroot/bhyve on your new FreeBSD system. You may also want to shorten the filename of the downloaded ISO to make it less cumbersome on the command line later—we renamed ours to a simple “Server2019.iso,” which you’ll see referenced in the actual creation steps below.

Once you’ve downloaded your ISO and renamed it Server2019.iso, you’ll want to download an ISO containing the latest stable virtio drivers for Windows. The stable download directory offers two ISO files—they’re both the same file, but one is just named “virtio-win.iso” while the other includes the version number; as of this writing that’s “virtio-win-0.1.217.iso”.

We’re going to download it using the simpler “virtio-win.iso” name, but in practice it’s generally a better idea to use the name which includes the version number; this makes it easier to see whether you need to update your local copy in the future.
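
If you'd rather grab the driver ISO from the command line, fetch(1) can download it straight into the VM datastore; the URL below points at the stable-virtio directory as of this writing and may change over time:

root@fbsd:/# fetch -o /zroot/bhyve/virtio-win.iso https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso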

Now that we've got both our Windows Server 2019 ISO and our virtio driver ISO, it's time to create the guest! None of the templates that vm-bhyve ships with are quite right, so first we'll create our own at /zroot/bhyve/.templates/uefi.conf:

# If you want to pull a graphical console, you'll need the UEFI loader, 
# no matter what OS you're installing on the guest. 

loader="uefi" 
graphics="yes" 
xhci_mouse="yes" 

# If not specified, cpu=n will give the guest n discrete CPU sockets. 
# This is generally OK for Linux or BSD guests, but Windows throws a fit 
# due to licensing issues, so we specify CPU topology manually here. 

cpu=2 
cpu_sockets=1 
cpu_cores=2 

# Remember, a guest doesn’t need extra RAM for filesystem caching-- 
# the host handles that for it. 4G is ludicrously low for Windows on hardware, 
# but it’s generally more than sufficient for a guest. 
memory=4G 

# Put up to 8 disks on a single ahci controller. This avoids the creation of 
# a new "controller" on a new "PCIe slot" for each drive added to the guest. 

ahci_device_limit="8" 

# e1000 works out-of-the-box, but virtio-net performs better. Virtio support 
# is built in on FreeBSD and Linux guests, but Windows guests will need 
# to have virtio drivers manually installed. 

#network0_type="e1000" 

network0_type="virtio-net" 
network0_switch="public" 

# bhyve/nvme storage is considerably faster than bhyve/virtio-blk 
# storage in my testing, on Windows, Linux, and FreeBSD guests alike. 

disk0_type="nvme" 
disk0_name="disk0.img" 

# This gives the guest a virtual "optical" drive. Specifying disk1_dev="custom" 
# allows us to provide a full path to the ISO. 

disk1_type="ahci-cd" 
disk1_dev="custom" 
disk1_name="/zroot/bhyve/virtio-win.iso" 

# windows expects the host to expose localtime by default, not UTC 

utctime="no" 

This newly created template will serve us well for guests running FreeBSD, Linux, or Windows. Now that we’ve got a nice clean template to use, let’s create our first guest!

root@fbsd:/# vm create -t uefi -s 100G windows2019

This command creates the framework for our new guest. -t uefi tells vm-bhyve to use the uefi.conf template we created in /zroot/bhyve/.templates, and -s 100G specifies a 100GiB image file for our VM's virtual C: drive. The final argument, windows2019, is the name we give the VM.

If you like, you can inspect and/or modify the new guest’s configuration before you begin its actual installation. The command vm config windows2019 brings the configs up in our system default text editor... which is vi, on a newly installed FreeBSD 13.1 host.

If you’ve already opened the guest’s configs in vi and you don’t know what’s happening, type in :q! and press enter to get out; once you’re back at a command prompt you can setenv EDITOR ee to use FreeBSD’s much friendlier Easy Editor in future invocations of vm config.

Now that our windows2019 guest’s hardware configuration is the way we want it, it’s time to actually install Windows on it:

root@fbsd:/# vm install windows2019 /zroot/bhyve/Server2019.iso 
Starting windows2019 
  * found guest in /zroot/bhyve/windows2019 
  * booting... 

We can now find our VM—and the VNC port to connect to its console with—as follows:

root@fbsd:/# vm list 
NAME         DATASTORE  LOADER     CPU  MEMORY  VNC           AUTOSTART  STATE 
windows2019  default    uefi       2    2G      0.0.0.0:5900  Yes [1]    Locked (FreeBSD) 

The VNC column reads 0.0.0.0:5900, which tells us that we can connect on localhost port 5900. So, we now run vncviewer from the command line, which pops up a TigerVNC app window; it already defaults to localhost, so we can just click “Connect” and we’ll get a GUI window with our Windows Server 2019 install process in it!
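
If you'd rather skip the connection dialog, you can also pass the target directly on the command line; the double-colon form tells vncviewer to treat 5900 as a raw TCP port rather than a display number:

root@fbsd:/# vncviewer 127.0.0.1::5900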

We’re not going to walk you through every step of the Windows installation process, since it’s mostly no different under bhyve than it would be under KVM or even on bare metal—but if you’re a newbie to Windows Server specifically, we strongly recommend choosing “Standard Evaluation (Desktop Experience).”

When you get the “Where do you want to install Windows?” prompt, you should click “Load driver” down at the bottom of that window. Doing so will allow you to browse through the VirtIO driver ISO we attached to the guest during configuration, where you’ll want to head to NetKVM\2k19\amd64 and install the RedHat VirtIO Ethernet Adapter.

From here on out, it’s smooth sailing and a next-fest. When you’re done, the guest will reboot—and when it gets to the Windows login screen, press F8 to bring up TigerVNC’s context menu, from where you can tell it to send Ctrl-Alt-Del to the guest and allow you to type in your username and password.

Congratulations—you’re now the proud owner of a Windows VM running under bhyve!

 

Creating non-Windows guests under bhyve 

If you’d like to create guests based on other operating systems, you don’t actually need to do anything different—the uefi.conf template we created earlier is perfectly suited to installations of FreeBSD and Linux distributions as well.

FreeBSD and Linux guests both support virtio-net adapters out of the box, so you don’t need the virtio-win driver ISO for them. You might also be tempted to use virtio-blk storage in those guests—but we don’t recommend it; in our testing bhyve/nvme greatly outperformed bhyve/virtio-blk, regardless of the guest operating system type.
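
As a concrete sketch, creating a FreeBSD 13.1 guest with the same template might look like the following—the guest name is just an example, and the ISO URL is illustrative, since release images move to FreeBSD's archive mirrors over time. vm-bhyve's vm iso subcommand fetches the installation image into the datastore's .iso directory for you:

root@fbsd:/# vm iso https://download.freebsd.org/releases/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-disc1.iso 
root@fbsd:/# vm create -t uefi -s 50G freebsd13 
root@fbsd:/# vm install freebsd13 FreeBSD-13.1-RELEASE-amd64-disc1.iso 

From there, you connect to the installer with vncviewer exactly as you did for the Windows guest.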

You might also be interested in “headless” guest installs here, which can be managed over faux serial console by using the command vm console guestname. If you prefer to omit the VNC server and manage your FreeBSD or Linux guests solely via serial console, just change the “graphics” setting in vm config from “yes” to “no.”

Once you've started a serial console session with vm console guestname, you can exit it with the same shortcut you'd use to exit a running SSH session—namely, ~. (tilde, dot, enter). (If you're already in an SSH session, you can exit the console without exiting SSH by typing ~~. instead.)

A final note—FreeBSD and Linux guests can also be started headless with parameters “loader=bhyveload” and “loader=grub” respectively, but for general purpose use cases we recommend sticking with UEFI.

 

Automatically starting guests on boot 

If you'd like some of your guests to start automatically when the host system boots, you just need to add a couple of lines to /etc/rc.conf. In the following example, we auto-start three VMs:

# start the following vms automatically, at vm_delay second intervals 
vm_list="windows2019 freebsd ubuntujammy" 
vm_delay="5" 

It's worth noting that vm_delay isn't really necessary on most systems. The 5-second delay shown above will slightly decrease the spike in storage load caused by booting all the VMs simultaneously, but it also means it takes longer to get them all booted.

On a laptop—or a host with conventional hard drives instead of solid state—you will likely want to tune vm_delay as shown, or perhaps even higher. On a fast desktop or server, you may want to tune the delay sharply downward, or off entirely.

 

Automatically snapshot your host and guests with Sanoid 

There’s one last box to check before we can call this build done—our guests need automated snapshot protection, so that if anything goes wrong with one, we can simply power it off and roll back its storage.

We recommend the sanoid package for this task—it’s actively maintained, has a wide userbase, is easy to configure, and it’s available directly from the FreeBSD package repository itself:

root@fbsd:/# pkg install sanoid 

Once you’ve installed the package, you’ll need to do a bit of minor configuration. FreeBSD’s package manager installs a default config file in /usr/local/etc/sanoid/sanoid.conf which offers both usable templates and a sample configuration. We’re going to keep the templates but change the above-the-fold configuration to match our system.

When you open sanoid.conf in your favorite text editor, look for this stanza:

############################# 
# templates below this line # 
############################# 

Everything below that line is a template, and (for now at least) we’ll leave it unmodified. Everything above that line is just a sample config, which we’ll replace with the following:

[zroot] 
        use_template = production 
        recursive = yes 

This tells sanoid to snapshot every dataset and zvol beneath the zroot pool's root dataset, according to policies laid out in the production template. You can find that template further down in the config file; it specifies keeping 36 hourly snapshots, 30 daily snapshots, and 3 monthly snapshots.
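
For reference, the production template in the stock config file looks something like this—check your own sanoid.conf for the authoritative copy, since the shipped defaults can change between releases:

[template_production] 
        frequently = 0 
        hourly = 36 
        daily = 30 
        monthly = 3 
        yearly = 0 
        autosnap = yes 
        autoprune = yes 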

With that done, we just need a crontab entry for sanoid. Assuming you’re already root, enter the command crontab -e and add the following cron job:

*/5 * * * * /usr/local/bin/sanoid --cron 

That’s it! Your system will now check to see if it should take new snapshots and/or delete old, expired snapshots once every five minutes. If you should, for instance, get ransomware on your shiny new Windows VM, getting rid of it is as easy as finding and rolling back to a prior snapshot:

root@fbsd:/# zfs list -rt snap zroot/bhyve/windows2019 
NAME                                                                    USED  AVAIL     REFER  MOUNTPOINT 
zroot/bhyve/windows2019@syncoid_usp-dev1_2022-05-26:20:32:32-GMT00:00  1.13G      -     5.98G  - 
zroot/bhyve/windows2019@autosnap_2022-06-10_16:08:39_monthly              0B      -     6.16G  - 
zroot/bhyve/windows2019@autosnap_2022-06-10_16:08:39_daily                0B      -     6.16G  - 
zroot/bhyve/windows2019@autosnap_2022-06-11_00:00:00_daily                0B      -     12.9G  - 
zroot/bhyve/windows2019@autosnap_2022-06-12_00:00:00_daily                0B      -     12.9G  - 
zroot/bhyve/windows2019@autosnap_2022-06-12_07:00:01_hourly               0B      -     12.9G  - 

root@fbsd:/# vm poweroff -f windows2019 

root@fbsd:/# zfs rollback -r zroot/bhyve/windows2019@autosnap_2022-06-12_07:00:01_hourly 

root@fbsd:/# vm start windows2019 

That’s all there is to it—your VM has “time traveled” back to the instant the snapshot was created, presumably prior to the ransomware infection (or other catastrophe, such as a botched Windows Update) we need to undo.

Sanoid can do much, much more—such as simplified or even automated ZFS replication using the syncoid command, or functioning as a Nagios plugin to alert you if your pool throws a disk or a backup system stops getting fresh backups—but the rest is outside the scope of this article; interested readers are encouraged to check out the project’s upstream repository.

 

Conclusions

Assuming you’ve followed our guide from start to finish, you should now have a shiny new FreeBSD 13.1 server capable of and ready to run high performance virtual machines with a wide variety of guest operating systems.

FreeBSD’s bhyve hypervisor—and its associated tooling—is considerably younger than better-known hypervisors, but it’s off to a strong start with competitive performance and fewer bits of legacy cruft than most of its peers.
