It’s certainly doable, and something like that was my setup for a few years. There isn’t much in the way of distros or software packages that provide such a ‘personal multiseat’ configuration out of the box.
I wanted bare metal GUI access, so instead of using Proxmox I configured Debian for the task. This might not directly answer any questions, but here's an idea of what it looked like.
Hardware
- i7, 48 GB RAM, 500 W PSU
- GTX 1650 (passed through to VM), Radeon R5 340X (basic bare metal output)
- 60 GB SSD boot disk
- 1 TB SSD for VM images
- 2 x 4 TB HDD for NAS
- 1 TB HDD for testing, “overflow”, etc.
Boot disk
- Debian stable with XFCE
- Virtual machines set up through virt-manager, with each one's services port-forwarded to LAN (rough sketch after this list)
- unattended-upgrades, ufw / iptables firewall
- GUI more for ease of management, software on bare metal kept to a minimum
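With virt-manager's default NAT network, the port forwarding amounts to an iptables DNAT rule per service on the host. A rough sketch (addresses and interface names here are made up):

```
# Forward Samba from the host's LAN interface (eno1) to a guest on
# libvirt's default NAT network
iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 445 \
    -j DNAT --to-destination 192.168.122.10:445
# -I rather than -A so the rule lands ahead of libvirt's own reject rules
iptables -I FORWARD -d 192.168.122.10 -p tcp --dport 445 -j ACCEPT
```

Rules like these don't persist by themselves, so they typically end up in a libvirt hook script or something like iptables-persistent, and they have to play nicely with whatever ufw sets up, which is part of the manual configuration mentioned below.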
Virtual machines (RAM allotment)
- Desktop (10 GB): I would use this VM while seated at the machine for productivity and web browsing.
- NAS / media server (4 GB): both 4 TB HDDs passed through to this VM, which hosted a Samba file server and Jellyfin. It also served as file storage for a couple of other VMs via internal connections. Usable capacity was 4 TB rather than 8, since the second drive was a mirror: an rsync job copied the first drive onto it at 02:30 every morning (sketch after this list).
- Misc. services (4 GB): a second Samba file server for devices I wanted to sync but didn’t trust with access to my full 4 TB library, plus an Apache server hosting a couple of HTML pages on LAN. Various other services were tested here as well.
- Windows (8 GB)
- GPU access (16 GB): the GTX 1650 was passed through to this VM (rough sketch below). Intended for gaming, but I ended up using it for Stable Diffusion and LLMs for reasons below.
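The passthrough side was the standard VFIO setup on the host: enable the IOMMU, bind the 1650 to vfio-pci instead of the host driver, and hand the PCI device to the VM in virt-manager. Roughly, from memory (the device IDs below are placeholders; `lspci -nn` shows the real ones):

```
# /etc/default/grub: turn on the IOMMU, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf: claim the GPU and its audio function for
# vfio-pci; replace the IDs with your card's from lspci -nn
options vfio-pci ids=10de:xxxx,10de:yyyy

# rebuild the initramfs so the binding takes effect at boot
update-initramfs -u
```

After a reboot the card can be added under Add Hardware → PCI Host Device in virt-manager. You may also need vfio-pci to load before nouveau/nvidia, and the card has to sit in its own IOMMU group, which depends on the motherboard.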
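The mirroring on the NAS VM was just a scheduled rsync, something along these lines, plus a plain Samba share (paths and user names are illustrative):

```
# /etc/crontab on the NAS VM: mirror the first 4 TB drive onto the second
# every morning at 02:30
30 2 * * *  root  rsync -a --delete /srv/media/ /srv/mirror/
```

```
# /etc/samba/smb.conf share definition for the same data
[media]
    path = /srv/media
    valid users = myuser
    read only = no
```

Jellyfin can point at the same directory, so one copy of the data serves both the file share and the media server.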
I’d suggest starting with anything graphically intensive running on bare metal and setting up a VM with virt-manager / VirtualBox / etc. for the NAS part. Get a couple of disks specifically to pass through to the NAS VM, forward its ports to LAN, and connect to them on the host as you would to any other machine. For a desk further away, you may be able to get away with a KVM extender, but I can’t say I have any experience with them.
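For the disk passthrough, virt-manager can hand a whole block device to the guest from the Add Hardware dialog, or it's a single virsh command. A sketch (the VM name and device path are made up; using /dev/disk/by-id keeps it stable across reboots):

```
virsh attach-disk nas /dev/disk/by-id/ata-EXAMPLE_DRIVE_SERIAL vdb --persistent
```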
If you try to virtualize everything like I did, there are a couple of hurdles:
- It takes a lot of time and manual configuration at the command line
- Atrocious graphical and input latency on remote connections
- Very high RAM usage
- Input glitches and general slowness on the VM with GPU passthrough, which remained unresolved despite scouring tutorials from people who somehow managed to get buttery-smooth gaming in a VM
- Lots of bandwidth used while updating all of the VMs. Probably optimizable, but not out of the box.
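The last point could probably be handled with a caching proxy on the host (apt-cacher-ng or similar) that the Debian VMs all point at. I didn't get around to it, but the client side is a one-line apt config (host address illustrative):

```
# /etc/apt/apt.conf.d/01proxy in each Debian guest
Acquire::http::Proxy "http://192.168.122.1:3142/";
```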
Go for AMD if you can, but NVIDIA hasn’t given me much trouble either. Make sure to install the driver from your distro’s repo, not NVIDIA’s website. IMO, this is less of an issue if you decide to pass through the GPU to a VM since any NVIDIA driver shenanigans will be contained to the VM.
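On Debian that means enabling the contrib and non-free components in sources.list and installing the packaged driver, roughly:

```
sudo apt update
sudo apt install nvidia-detect   # optional: recommends the right driver package
sudo apt install nvidia-driver
```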
It is evident from the current top-level comments that more education is needed.