I’m thinking about making some changes to my home server to make it a little more robust and let me do some cool new things with it (like actually trusting it to back up data with NextCloud, replicate VMs or data across sites, etc.). I’m just looking for any advice people might have for this process of migrating hypervisors.

What I currently have:

  • Windows 10 Pro OS with Hyper-V
  • Running some applications on the host OS (Plex/PRTG/Sonarr/Radarr)
  • Running a few VMs for things I set up after I realized “I should be doing these in VMs…”
  • 4 HDDs for data, each just mounted individually. 2 for TV, 1 for Movies, 1 for Backups

What I’d like to have:

  • Better OS for running the hypervisor (Proxmox is what I’m reading may be best, but I’m open to suggestions)
  • Nothing running on the host OS other than a hypervisor
  • All my services running virtualized, whether via Docker in an LXC or in a full guest OS.
  • All my drives in a RAID 5 array; planning to add more drives at some point as well.

My thoughts on the process are that the “easiest” way may be:

  1. Just throw a new OS drive in to install Proxmox on
  2. Export my VMs from Hyper-V and import them into Proxmox
  3. Set up the services I had running on the host OS previously in their own VMs/containers
  4. Make a new RAID, either:
     a. with brand-new disks, or
     b. by consolidating data from my existing disks to free up a few to start the RAID with, then moving data into the RAID and clearing out more disks to add to it, rinse and repeat until done (that’s a lot of data moving I’d like to avoid…). Rough sketch below.
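
From what I’ve read, option 4b with Linux’s mdadm would go roughly like this (device names are placeholders for whichever disks get freed up; ZFS would be the alternative under Proxmox):

```
# create a 3-disk RAID 5 from the freed-up drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0

# later, once another disk is emptied: add it and grow the array
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=4
# after the reshape finishes, grow the filesystem too
resize2fs /dev/md0
```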

I wasn’t sure if it would be smarter to do something more like this instead (assuming it’s all even possible; I’m not sure that it is). If it is possible, it might reduce my downtime and let me tackle this in bits at a time rather than having an outage the entire time and feeling like I need to rush to get it all done:

  1. New OS drive for Proxmox
  2. Use Proxmox to boot my Windows 10 drive (this is the part I’m not sure about; see the sketch after this list) so that everything continues as it’s currently set up.
  3. Slowly migrate everything out of the Windows 10-hosted VMs and the host-installed services.
  4. I probably still have to deal with the RAID the way I mentioned above
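
For step 2, what I’ve read suggests Proxmox can pass the existing physical Windows disk through to a VM rather than booting it on the host directly. Something like this after creating an empty VM (the VM ID and disk ID are placeholders; the real ID would be under /dev/disk/by-id/):

```
# hand the physical Windows 10 disk to VM 100 as its boot disk
qm set 100 -sata0 /dev/disk/by-id/ata-WDC_WD10EZEX_EXAMPLE
qm set 100 --boot order=sata0
```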

Is there any other method I’m just totally not thinking of? Any tips/tricks for migrating those Hyper-V VMs? That part seems straightforward enough, but I’m looking for any gotchas.

The reason I haven’t done anything yet is that I only have so much time in the day, and I’m not trying to dedicate an entire weekend to this migration all at once. If I could split up the tasks it’d be much easier to do, though obviously some parts will be time-consuming.

Thanks in advance!

      • Bakkoda@sh.itjust.works · 10 months ago

        The wiki is all I used to move. I exported to VHDX, qemu-img convert to qcow2, qm importdisk into some premade VMs. As for VM settings, they were all Debian VMs from an old Hyper-V install.
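
        Roughly, per disk, it was along these lines (the VM ID, storage name, and file names are placeholders, not my actual setup):

        ```
        # convert the exported Hyper-V disk to qcow2
        qemu-img convert -f vhdx -O qcow2 myvm.vhdx myvm.qcow2

        # attach the converted disk to a pre-created Proxmox VM
        # (it lands as an 'unused disk' you then enable in the VM's hardware tab)
        qm importdisk 101 myvm.qcow2 local-lvm
        ```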

  • Sethayy@sh.itjust.works · 10 months ago

    Personally I tried Proxmox and ended up switching back to plain Debian with QEMU.

    Proxmox is certainly easier, but they make some weird decisions, like not supporting shared filesystems (albeit those are a bit ghetto, and they probably don’t want to deal with those issues on an enterprise front).

    You can even use Broadway in GTK3 to get a web-accessible interface for something like virt-manager.
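
    Roughly like this, if I remember right (the display number is arbitrary; Broadway serves display :5 over HTTP on port 8085):

    ```
    # start the Broadway HTML5 backend
    broadwayd :5 &

    # run virt-manager rendered through Broadway instead of X/Wayland
    GDK_BACKEND=broadway BROADWAY_DISPLAY=:5 virt-manager
    ```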

    LXD has a built-in web UI as well.
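
    If you run the snap, enabling it is something like this (going from memory on recent LXD releases):

    ```
    # expose the LXD API/UI over HTTPS
    lxc config set core.https_address :8443

    # turn on the bundled web UI and reload the daemon
    snap set lxd ui.enable=true
    systemctl reload snap.lxd.daemon
    ```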

    • shadshack@sh.itjust.works (OP) · 10 months ago

      How hard is replication across servers with just Debian and QEMU? I’m honestly not super great on Linux knowledge. I’m a Windows sysadmin by trade, with maybe 10-20% Linux. I run a few Ubuntu Server VMs at home and some RHEL at work. So I’m looking for something as easy to set up, well-documented, and well-supported as possible. Proxmox just seemed like the “industry standard” for selfhosting, but I was also looking at Unraid (which is supposedly better at storage and less good at virtualization) or even ESXi, but I didn’t want to get into the VMware payment bubble if I needed anything more than a simple host.

      • Sethayy@sh.itjust.works · 10 months ago

        Ngl, Proxmox would be your best bet then, especially for reliability and being able to forget about most of the Linux background.

        QEMU is what Proxmox uses behind the scenes; Proxmox just puts an easy-to-use web UI in front of everything, minus a couple of less-used/less-stable features some power users like.
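
        For the replication question specifically: Proxmox has storage replication built in via pvesr, assuming ZFS-backed storage on both nodes (the job ID, target node, and schedule here are placeholders):

        ```
        # replicate the disks of VM 100 to node pve2 every 15 minutes
        pvesr create-local-job 100-0 pve2 --schedule '*/15'
        ```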

  • Anony Moose@lemmy.ca · 10 months ago

    Noob question. What kind of apps are you (or other self-hosters) running in VMs? With containerized apps, it seems like VMs would be largely unnecessary, unless you need some particular device to be virtualized? Or am I misunderstanding?

    • skadden@ctrlaltelite.xyz · 10 months ago

      I can’t speak for everyone else, but I run about 6 different VMs solely to run different docker containers. They’re split out by use case, so super critical stuff on one VM, *arr stuff on another, etc. I did this so my tinkering didn’t take down Jellyfin and other services for my wife and kids.

      Beyond that, I also have two VMs for virtualized Pi-hole running Gravity Sync on different hosts, and another I intend to use for virtualized OPNsense.

      Everything is managed via Ansible, with each Docker project in its own Forgejo repo.

      • Anony Moose@lemmy.ca · 10 months ago

        So in addition to docker containers, you use separate VMs for isolation? Or are you just referring to the docker containers themselves?

        I have separate docker containers for different apps too, but no virtualization beyond that.

        • skadden@ctrlaltelite.xyz · 10 months ago

          Yeah I use different VMs to separate out the different containers into arbitrary groups I decided on.

          I run my docker containers inside different Debian VMs that are on a couple different Proxmox hosts.

          • Anony Moose@lemmy.ca · 10 months ago

            Interesting. Seems like it might be worth it for me to look into Ansible, and perhaps even Proxmox for stability/reliability on the new SFFPC I’m building as a media center.

            Thanks!

  • Voroxpete@sh.itjust.works · 10 months ago

    Don’t run Docker directly on your host. It likes to fuck with your iptables rules in ways that break VM networking. Create a single VM to be your Docker host and put all the containers on there (KVM, which is what Proxmox uses under the hood, offers near-native CPU and memory performance anyway).
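
    The usual failure mode is Docker flipping the iptables FORWARD policy to DROP, which silently cuts off bridged VM traffic. If you do end up with Docker on the host anyway, one common workaround is a rule in the DOCKER-USER chain (vmbr0 here is Proxmox’s default bridge name):

    ```
    # let bridged VM traffic keep flowing despite Docker's FORWARD policy
    iptables -I DOCKER-USER -i vmbr0 -o vmbr0 -j ACCEPT
    ```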

    When building your VM environments, I’d suggest going with XFS for your filesystem. At my work we recently switched our containerised apps from ext4 to XFS and got a 4-6x improvement in IOPS as a result.

    • Moonrise2473@feddit.it · 10 months ago

      I think exactly the opposite; in 5 years I never had any problem with iptables (although I assign a manually created network to each docker).

      One of the biggest advantages of Docker is that it can use memory and storage from the host without reserving a specific amount of either. If it uses 200 MB right now, it doesn’t waste the 8 GB you would have assigned to a VM. And if tomorrow it needs 10 GB, it can use the extra unused RAM.

      • YIj54yALOJxEsY20eU@lemm.ee · 10 months ago

        Docker will block networking to KVM VMs if they’re configured with a network bridge, i.e. giving the VMs their own LAN IP address. I had so much trouble I switched to Podman. Also, you’ve been working with containers for 5 years and call them dockers? Not the strongest ethos-based claim.

        • Moonrise2473@feddit.it · 10 months ago

          Never happened to me

          I’m not saying it’s impossible, but it’s not a common issue that happens by default.

          • YIj54yALOJxEsY20eU@lemm.ee · 10 months ago

            This whole post is in the context of virtual machines, and the claim you felt exactly the opposite about was that Docker will mess up VM networking. If you don’t use a network bridge with KVM, that’s fair, but I’m not sure why you’d disagree with advice about a problematic situation just because it doesn’t apply to you.

            • Moonrise2473@feddit.it · 10 months ago

              I use a network bridge with KVM and I have containers with no issues… It worked fine on first install, and I don’t think I’m the only one in the world.