I am not overly happy with my current firewall setup and am looking into alternatives.
I was previously somewhat OK with OPNsense running on a small APU4, but I would like to upgrade from that, and OPNsense feels like it is holding me back with its convoluted web UI and (for me at least) FreeBSD strangeness.
I tried setting up IPFire, but I can’t get it to work reliably on hardware that runs OPNsense fine.
I thought about doing something custom, but I don’t really trust myself to get the firewall rules right on the first try. Also, for things like DHCP and port forwarding, a nice, easy web GUI is convenient.
So one idea that came up is to run a normal Linux distro on the firewall hardware and set up OPNsense in a VM on it. That way I guess I could keep a barebones OPNsense around for convenience, but be more flexible in how I use the hardware otherwise.
Am I assuming correctly that if I bind the VM to hardware network interfaces for WAN and LAN respectively, it should behave like, and be about as secure as, a bare-metal firewall?
I’d been running OPNsense in a VM for some time. I used Xen as the hypervisor, but that shouldn’t really be a requirement. Passed the NICs through and it was golden! All the benefits of a VM - quick boot-up, snapshots on the hypervisor - it’s truly glorious :)
Sounds great. What about the hardware acceleration features of the NIC? I read somewhere that it’s better to disable support for those in OPNsense when running it in a VM?
Dunno, it worked well for me. Give it a shot and see if anything needs to be disabled.
In my case the driver had a bug with power management, so I had to disable that on the hypervisor.
Other than that everything worked well; passing the NICs through also passes through all of their features.
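For what it’s worth, the offload advice mostly applies when the VM uses virtio NICs rather than passed-through hardware. OPNsense exposes checkboxes for this under Interfaces → Settings; the equivalent boot-time tunables for the FreeBSD virtio driver would be something like this (a sketch for the vtnet driver only, not needed for passthrough):

```
# /boot/loader.conf.local on OPNsense - only relevant for virtio (vtnet) NICs
hw.vtnet.csum_disable="1"   # disable checksum offload
hw.vtnet.tso_disable="1"    # disable TCP segmentation offload
hw.vtnet.lro_disable="1"    # disable large receive offload
```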
Another option is to pass through the PCIe devices to the VM.
I just saw that option. What would be the advantages and disadvantages of this?
I guess when I pass the actual NIC device through, the hardware acceleration should work?
Edit: Looks like my host system does not support this, at least that is the error I get when trying ;)
For one, you offload the entire processing and driver handling to the VM, so if the guest OS wants to do something funky, it can.
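Regarding the “not supported” error: that often just means the IOMMU isn’t enabled, either in the BIOS/UEFI (VT-d / AMD-Vi) or on the kernel command line. Assuming a Linux hypervisor booting via GRUB on an Intel CPU, a minimal sketch:

```
# /etc/default/grub on the hypervisor (Intel example; AMD uses amd_iommu=on)
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# afterwards: run update-grub, reboot, and verify with
#   dmesg | grep -e DMAR -e IOMMU
```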
I run it in a Proxmox VM, and since I have 3 nodes with the same hardware (2 NICs each), I configured the networking identically on all three and use HA for OPNsense. It’s been triggered a couple of times, in fact, and the only way I knew is that I got a notification that it had jumped nodes, because I couldn’t tell just sitting there streaming while it happened.
Big fan of virtualizing it; you can take snapshots before upgrading, and online backups are seamless. I’ve restored a backup when it acted a bit weird after an upgrade: I restored the previous backup in an inactive state, then cut over pretty much live as I started up the restored VM and downed the borked one.
Edit: I wouldn’t use passthrough if you’re running a multi-node setup like this. Just configure network bridges with the same name on every node and giv’er.
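As a sketch of what identical bridge definitions on each Proxmox node might look like (interface names and addresses here are examples, not taken from the posts above):

```
# /etc/network/interfaces - same bridge names on every node
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24      # this node's management address
    gateway 192.168.1.1         # the firewall VM
    bridge-ports eno1           # physical LAN port
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual         # WAN bridge, no host IP on purpose
    bridge-ports eno2
    bridge-stp off
    bridge-fd 0
```

Because the VM only references the bridge names (vmbr0/vmbr1), it can migrate to any node with the same definitions.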
Yes, this is totally possible, and I did it for a couple of years with OPNsense. I actually had an OPNsense box and a pfSense box both on Hyper-V. I could toggle between them easily and it worked well. There are CPU considerations which depend on your traffic load. Security is not an issue as long as you have the network interface assignments correct and have not accidentally attached the WAN interface to any other guest VMs.
Unfortunately, when I upgraded to 1Gb/s (now 2Gb/s) on the WAN, the VM could not keep up. No amount of tuning in the Hyper-V host (dual Xeon 3GHz) or the VM could resolve the poor throughput. I assume it came down to the 10Gb NICs and their drivers, or the Hyper-V virtual switch subsystem. Depending on what hardware offload and other tuning settings I tried, I would get perfect throughput one way but terrible performance in the other direction, or some compromise in between on either side. There was a lot of iperf3 testing involved. I don’t blame OPNsense/pfSense – these issues impacted any 10Gb links attached to VMs.
Ultimately, I eliminated the virtual router and ended up where you are, with a baremetal pfSense on a much less powerful device (Intel Atom-based). I’m still not happy with it – getting a full 2Gb/s up and down is hard.
Aside from performance, one of the other reasons for moving the firewall back to a dedicated unit was that I wanted to isolate it from any issues that might impact the host. The firewall is such a core component of my network, and I didn’t like it going offline when I needed to reboot the server.
Hyper-V is a dog. I wouldn’t blame the VM.
Try VyOS. I run it on an APU2 myself. No GUI, no convolution.
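To give a flavor of what that looks like, a minimal router config on VyOS might be sketched like this (interface names and addresses are examples, and the exact syntax varies slightly between VyOS releases):

```
configure
set interfaces ethernet eth0 address dhcp               # WAN
set interfaces ethernet eth1 address '192.168.1.1/24'   # LAN
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '192.168.1.0/24'
set nat source rule 100 translation address masquerade
commit
save
```

Everything is done through that one `set`/`commit` CLI; there is no web UI to speak of.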
I come from VyOS and really liked it, but I still prefer OPNsense for the GUI, constant updates, and plugins. VyOS started losing its appeal once they put stable ISO access behind a subscription (even though they gave me a free subscription for a small contribution in their repo). Also, I have to admit that VyOS needs a fraction of the resources OPNsense does.
Open source projects need to make money somehow, and I found VyOS’s method quite acceptable. They give good instructions and tools to build your own stable ISO, so either put in the effort or contribute in some way. Unfortunately, their paid support costs too much. I was considering pushing for VyOS as a virtual router at work, but it costs more than a Cisco C8000v.
I keep wanting to look into that one. Can it be easily extended from the Debian repositories?
Nope, it is a very deeply customized Debian. It needs to be installed from scratch.
So you’re planning to reuse the same hardware the firewall is running on now, by installing a hypervisor and then only running OPNsense in that?
It is more powerful hardware with much higher single-thread performance, which should help with OPNsense networking. Ultimately the goal is to handle more than 1 Gbit/s of WAN input, which my current firewall hardware is incapable of, although that is still in the future.
But I feel like I could utilize this hardware better if it were running something other than just OPNsense, hence the idea to run it in a VM.
Ah, OK. I’ve run OPNsense and pfSense both virtualized in Proxmox and on bare metal, at two workplaces now and at home. I vastly prefer bare metal. Managing it in a VM is a pain. NIC passthrough is fine, but it complicates configuration and troubleshooting. If you’re not getting the speeds you want, there are now two systems to troubleshoot instead of one. Additionally, you now need to keep your hypervisor up and running in addition to the firewall, which makes updates and other maintenance more difficult. Hypervisors do provide snapshots, but OPNsense is easy enough to back up that it’s not really a compelling argument.
My two cents: get the right equipment for the firewall and run bare metal. Having more CPU is great if you want to do intrusion detection, DNS filtering, VPNs, etc. on the firewall. Don’t feel like you need to run everything under a hypervisor.
Yeah, I did do a test setup with OPNsense in a VM today and it mostly works. But I see where you are coming from, and I usually also prefer setups that are easier to maintain and have fewer footguns. I guess I’ll sleep on it first.
Am I assuming correctly that if I bind the VM to hardware network interfaces for WAN and LAN respectively, it should behave like, and be about as secure as, a bare-metal firewall?
Correct.
I did that in my old playground VMware stack. I’ll leave you with my cautionary tale (though depending on the complexity of your network, it may not fully apply).
My pfSense (OPNsense didn’t exist yet) firewall was a VM on my ESX server. It was also managing all of my VLANs and firewall rules, and everything was connected to distributed vSwitches in VMware… Everything worked great until I lost power for longer than my UPS could hold on and had to shut down.
Shutdown was fine, but the cold start left me in a chicken/egg situation. vSphere couldn’t connect to the hypervisors because the firewall wasn’t routing to them. I could log into the ESX host directly to start the pfSense VM, but since vSphere wasn’t running, the distributed switches weren’t up.
The moral is: If you virtualize your core firewall, make sure none of the virtualization layers depend on it. 😆
Thanks for the quick reply.
What about the LAN side: can I bridge that adapter into the VM host’s internal network somehow, to avoid an extra hop to the main switch and back via another network port?
It may depend on your hypervisor, but generally yes. You should be able to give the VM a virtual NIC in addition to the two physical ones you bind, and OPNsense shouldn’t care about the difference when you create a LAN bridge interface.
Depending on your setup/layout, either enable spanning tree or watch out for potential bridge loops, though.
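As a rough sketch of that layout on a Debian-style hypervisor, the host itself sits on the LAN bridge alongside the firewall VM’s virtual LAN NIC (all names and addresses here are made up for illustration):

```
# /etc/network/interfaces on the hypervisor
auto br-lan
iface br-lan inet static
    address 192.168.1.10/24     # the hypervisor's own LAN address
    gateway 192.168.1.1         # the OPNsense VM's LAN IP
    bridge-ports enp3s0         # physical LAN port toward the switch
    bridge-stp on               # guard against accidental bridge loops
```

The firewall VM then gets a virtual NIC attached to `br-lan`, so host-to-firewall traffic never leaves the box.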
Well, I can’t speak to your setup specifically, but I have an old server that I use as a hypervisor running Proxmox. On it I have an OPNsense VM I’ve been running for years now. I don’t do any crazy passthrough stuff; I just added each NIC as a normal network device bridged to the specific hardware port. I also use it as a reverse proxy for all my internal services, such as Emby.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

DNS: Domain Name Service/System
HA: Home Assistant automation software ~ High Availability
PCIe: Peripheral Component Interconnect Express
VPN: Virtual Private Network
I use a Proxmox cluster and assigned dedicated NICs to my OPNsense VMs (also clustered). I connected the NIC ports assigned to the OPNsense VMs directly with a cable and reserved that link for CARP usage. I can easily download at 1 Gb/s, and the VMs switch without any packet loss during failover. 10/10, would do it again.
I followed a guide to put OPNsense on Proxmox. I pass through 2 NICs and set the VM’s CPU type (in the Proxmox create-a-VM GUI) to the host’s own CPU for that extra speed (though that setting precludes easy transfer to a new box with a different CPU). Plenty fast, and I run another Linux VM on the same box that does the stuff I’d otherwise want OPNsense to do (DNS, VPN, etc.).
If I did it again I’d probably use LXD (Incus now); Proxmox has a long startup time and is fiddly to use (for me at least). It looks like Incus can do the same KVM thing, just with fewer steps and on stock Debian.
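The CPU setting described above corresponds to the `host` CPU type in the Proxmox VM config; a sketch (the vmid and the fallback type are examples):

```
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
cpu: host              # expose the host CPU model; fastest, but ties the VM to this CPU
# a more portable alternative that still keeps AES acceleration:
# cpu: x86-64-v2-AES
```

The trade-off is exactly as stated: `host` gives the guest every CPU feature, at the cost of easy migration to a box with a different CPU.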
If you have a managed switch you can also just do vlan tags for your wan and not have to pass any nics to the VM.
Yeah, I thought about that, but it sounds like a footgun waiting to happen.
I’ve been doing it for years, no issues. It’s fairly common in the enterprise as well.
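On Proxmox this is typically done with a VLAN-aware bridge on a single trunk port; a sketch (interface names and VLAN IDs are examples):

```
# /etc/network/interfaces - one trunk port, VLAN-aware bridge
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

You would then give the firewall VM two vNICs on vmbr0, one tagged with the WAN VLAN, and tag the corresponding switch port the same way; no NIC passthrough needed.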
I can’t remember all the details, but depending on the CPU you are running, you may need some extra configuration for OPNsense.
There were a few issues on my servers, which run older Intel Xeon CPUs, but I eventually fixed them by adding the proper CPU flags to deal with the various errata.
Other than that, running it in a VM is really handy.
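On Proxmox, for example, such flags go on the VM’s CPU line; the specific flags below are illustrative examples of the kind of mitigation flags meant here, not the exact ones from the post:

```
# /etc/pve/qemu-server/<vmid>.conf (flags are illustrative)
cpu: host,flags=+pcid;+spec-ctrl;+ssbd
```

Which flags (if any) you need depends on the CPU generation and microcode, so check what your hypervisor reports before setting anything.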