Setting Up a Hyper-V VM Lab on Home Network

When I started tinkering with Hyper-V, I was looking for guidance on setting up a VM lab behind a cable modem and a router. While I found plenty of how-to posts on specific Hyper-V tasks, I found little about networking and other best-practice concerns specific to a home network. Granted, the Hyper-V activities don’t have to be any different on a home network than on any other kind of network, but I didn’t know enough at the time to be sure of that. I have since learned some lessons the hard way, and with this post I am trying to compile and share my thoughts and experience with such a setup.


I had the following requirements for this lab setup:

  1. The lab will run a Windows domain.
  2. Some VMs will not be in the domain.
  3. Every VM will be able to access the Internet.

Setup Considerations

I had numerous questions and dealt with several issues during this setup process. Addressing these questions and issues guided my decision making as discussed below.


One of the first questions that we need to answer is, “What hardware do I need?” The basic requirement turns out to be that the CPU must support hardware virtualization (Intel VT-x or AMD-V). Most servers today will probably meet this requirement, but it won’t hurt to check the CPU features on the server under consideration.
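As a quick sanity check, newer Windows versions (Windows 8 / Server 2012 and later) report the Hyper-V prerequisites directly in `systeminfo`. Something along these lines, run on the prospective host, shows whether the CPU-side requirements are met; the exact output wording varies by Windows version, so treat this as a sketch:

```shell
:: Filter systeminfo output down to the Hyper-V prerequisite lines
systeminfo | findstr /C:"Hyper-V Requirements" /C:"Virtualization Enabled In Firmware"
```

If virtualization shows up as disabled, it usually just needs to be switched on in the BIOS/UEFI settings.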

Another item to address is the resource capacity, such as the number of CPUs and the memory capacity on the motherboard. The answer really depends on the number of VMs you plan to use concurrently. I went with 2 quad-core CPUs and 32GB RAM capacity. I started with 16GB installed and later expanded it to full capacity. One thing to remember about server RAM installation is that there are usually restrictions on the capacity of each memory stick and the combinations in which the sticks can be installed. It is important to consult the server manual about the allowed combinations and which slots to use for them before buying and installing memory.
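To size the memory, a simple budget helps: add up the RAM you plan to assign to the VMs you expect to run concurrently, plus some headroom for the host OS. The numbers below are made-up examples, not my actual allocations:

```shell
# Hypothetical concurrent set: a DC (2GB), two test servers (8GB + 4GB),
# and two workstations (4GB each), plus headroom for the host OS itself.
vm_total=$((2 + 8 + 4 + 4 + 4))
host_overhead=4
total=$((vm_total + host_overhead))
echo "${total}GB concurrent footprint"
```

With a budget like this, 16GB covers a small set of VMs and 32GB leaves room to grow, which matches how my installation evolved.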

As hard disks are cheap, we may consider loading up the server to use all the available slots. However, the bigger questions to answer are whether to use RAID at all, and if so, which RAID configuration, and whether the server has a hardware RAID controller. My server had only a software RAID controller, and I went with RAID 1 to keep identical copies on two disks. Basically, that gave me redundancy with no performance benefit. As the number of VMs grew, I realized that they were all trying to access the same disk resources simultaneously. Further, I was also restricted in the kind and number of disks I could install in the remaining slots. After some research and heartburn, I rebuilt the setup without any RAID. My reasoning was that this is just a lab setup, and I worked out a backup approach to ease the restore process should a disk fail.


One of the first questions on the software side is, “Which virtualization software should I use?” This one was easy, as I wanted to learn Hyper-V. VMware’s vSphere Hypervisor (free ESXi) was another option; its free license has some limitations, though none that would be a big problem for this setup.

Once I had decided on Hyper-V, the next question was about the host OS edition (the OS installed on the physical server). Initially, I went with 2008 R2 Server Core with Hyper-V. This provided a minimal text-based interface for managing the server, and I was using the Hyper-V Manager client (remote management) on Windows 7 to manage the VMs. It was a nightmare to make the client connect, and every so often it would stop connecting to the Hyper-V server. I found a utility that temporarily eased my pain, but I was amazed that someone needed to create a utility just to configure the client and server for remote management. On the other hand, I have toyed with the VMware hypervisor a little bit on another server, and its remote management just works without additional setup.
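For reference, the manual configuration that such a utility automates involves WinRM, credential, DCOM, and firewall plumbing on both ends. The commands below are an illustrative sketch of that plumbing for a workgroup setup, not a complete recipe, and the host name is a placeholder:

```shell
:: On the Server Core host: enable WinRM so remote management tools can connect
winrm quickconfig

:: On the Windows 7 client: cache credentials for the (hypothetical) workgroup host
cmdkey /add:HYPERV-HOST /user:HYPERV-HOST\Administrator /pass

:: DCOM permissions (dcomcnfg) and firewall rules (netsh advfirewall) also need
:: adjusting on both machines - exactly the tedium the utility automates.
```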

Another issue I ran into was that certain Windows updates would keep failing on the server. Therefore, I ended up rebuilding the setup with the full Windows Server 2008 R2 with the Hyper-V role as the host OS. Now, I manage the VMs by connecting to the server via Remote Desktop and using Server Manager on the host OS (no remote management). In this setup, I have not encountered any of the issues mentioned above.


My key networking concern was the co-existence of the existing network and a separate Windows domain for the VMs. While it seemed confusing at first, the setup turned out to be quite simple. Each VM participates in two networks – a private virtual network (say, with IPs 10.0.1.*) and an external virtual network (say, with IPs 192.168.1.*). In other words, there are two virtual NICs (Network Interface Cards) on each VM. One is connected to the private virtual network for the Windows domain, and the other is connected to the existing network. The connection to the existing network is what gives each VM access to the Internet. There are probably other network configurations that would work, but this is what I set up.
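On my 2008 R2 host I created these virtual networks through Hyper-V Manager; on Server 2012 and later, the equivalent two switches can be scripted with the Hyper-V PowerShell module. The switch names, adapter name, and VM name below are placeholders:

```powershell
# Private switch for the Windows domain traffic (the 10.0.1.* side)
New-VMSwitch -Name "LabPrivate" -SwitchType Private

# External switch bound to a physical NIC for Internet access (the 192.168.1.* side)
New-VMSwitch -Name "LabExternal" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

# Give a VM one virtual NIC on each switch
Add-VMNetworkAdapter -VMName "LabVM1" -SwitchName "LabPrivate"
Add-VMNetworkAdapter -VMName "LabVM1" -SwitchName "LabExternal"
```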

Note that the virtual NIC (VNIC) for the private virtual network is not needed on any VM that will not participate in the Windows domain. I did encounter some cryptic issues connecting to the Internet from the VMs in the Windows domain, even when everything seemed to be working fine. Finally, I ended up with static IP configuration for the VNICs connected to the private virtual network, and DHCP-provided IP addresses for the VNICs connected to the external virtual network, with static DHCP reservations for those VNICs set up in the router. With this setup, all VNICs get fixed IP addresses, and all network connectivity works properly.
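Inside a domain-joined VM, the private-side VNIC can be given its static address with netsh. The connection name, addresses, and DNS server below are examples for the 10.0.1.* network described above, with the domain controller assumed to be at 10.0.1.1:

```shell
:: Static IP on the VNIC attached to the private virtual network
netsh interface ipv4 set address name="Domain NIC" static 10.0.1.10 255.255.255.0

:: Point DNS at the domain controller so domain name resolution works
netsh interface ipv4 set dnsservers name="Domain NIC" static 10.0.1.1 primary
```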

My server hardware has two NICs. I have dedicated one to the external virtual network and use the other for connecting to the Hyper-V host.

Windows VM Cloning

Once we have created a VM with the base OS install, it’s easy to create another VM by copying its virtual hard disk (VHD) file. We need a separate install and VHD copy for each OS type/edition. For example, we need separate VHDs for Windows 2008 R2 Server, Windows 7, and Ubuntu Server. For any Windows VM cloned by copying the VHD file, it will serve you well to run Sysprep with the Generalize option selected, right after the cloned VM is started up for the first time. This prevents duplication of internal identifiers (such as the machine SID) assigned by Windows. Until I learned about Sysprep, the cloned VMs were unable to participate in Windows domain security properly.
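Concretely, on the first boot of the cloned VM the step looks like this (the Sysprep path is standard on Windows 7 / Server 2008 R2):

```shell
:: Run inside the freshly cloned Windows VM; it generalizes the image,
:: shuts down, and reconfigures with fresh identifiers on the next boot.
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```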

VHD Location

One bottleneck mentioned above is contention on the disk resources. Therefore, I decided to distribute the VHDs among all the hard disks on the server. This approach allows multiple VMs to work off separate disks concurrently. Very likely, there will still be multiple VHDs on some disks (once there are more VMs than disks), but that is better than all VHDs being on one disk.
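With separate data disks mounted as, say, D: and E:, spreading the VHDs is just a matter of pointing each VM at a different disk at creation time. As a sketch using the Server 2012+ PowerShell cmdlets (VM names, sizes, and paths are illustrative):

```powershell
# Place each new VM's disk on a different physical drive to spread the I/O load
New-VM -Name "LabVM1" -MemoryStartupBytes 2GB -VHDPath "D:\VHDs\LabVM1.vhd"
New-VM -Name "LabVM2" -MemoryStartupBytes 2GB -VHDPath "E:\VHDs\LabVM2.vhd"
```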

Linux VMs

Linux is not an “enlightened” guest OS for Hyper-V out of the box (I do not like this term, but I am using the Hyper-V jargon); without integration components, it cannot fully participate in the Hyper-V virtualization scheme. I tried a few recent versions of Fedora as the guest OS but couldn’t get the mouse to work, and I also ran into some networking issues. On the other hand, setting up a 64-bit Ubuntu VM was a piece of cake: the mouse worked, and the network was configured automatically.


The following diagram summarizes my setup.

[Diagram: Sample Hyper-V VM Lab Setup]

I will update the post with additional details if/when I recall them.


11 thoughts on “Setting Up a Hyper-V VM Lab on Home Network”

  1. Wonderful post. I’ve read lots of information about how to set up Hyper-V, but much of it is written for those who already know their way around it (pretty useless imho). I have a question that I’m pretty clueless about, though; some help on it would be great:

    How do you connect to the VMs from a remote location? Do you RDP using mstsc to the host machine first and from there use Server Manager to reach the VM? What does your setup look like on this topic?

    I have activated Remote Desktop on a few VMs and am using port forwarding to their respective IP addresses and ports. But this is pretty cumbersome; maybe there is a better way (probably) to do this.

    1. Either RDP to the host and then connecting onward, or RDP directly to the VM, will work. I prefer not to open too many ports (one per VM) for security reasons. If you have a Linux machine or VM, you could tunnel pretty much anything through one SSH connection. This approach lets you have multiple simultaneous TCP connections to different IP addresses on your network while opening just one port for SSH. If you don’t want to set up a Linux instance, you could set up an SSH daemon on the Hyper-V host in case you would like to try this approach.
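    As a sketch of that tunneling approach (the host name, internal IPs, and local ports below are examples):

```shell
# One SSH connection to the lab gateway, forwarding RDP (3389) of two internal VMs
ssh -L 13389:10.0.1.10:3389 -L 13390:10.0.1.11:3389 user@home.example.com

# Then, from the same remote machine, RDP to a VM through the tunnel:
mstsc /v:localhost:13389
```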

  2. Thanks for the answer and suggestions.

    Was pretty tired and forgot to mention that I already have a Debian machine with SSH that I use for port forwarding to the VMs. I also think it’s nice to have as few ports open as possible, and encrypted communication is a necessity.

    This approach works great when I think about it. The only thing I would want is an easier way of connecting when using a new computer somewhere, i.e., setting up PuTTY, configuring the tunnel, and connecting using mstsc. Maybe the only thing I need is a pre-configured tool to reduce the steps on the client when connecting. Googling away.

    1. PuTTY is pretty portable and stores its session config in the registry, so you may export the registry entries to a .reg file. Alternatively, instead of tunneling RDP through SSH, you could carry a VNC client executable and use rdesktop from Linux to RDP to Windows. That way, you carry a few small files and work with one tunnel for VNC; as there is only one tunnel, you could just configure it as needed instead of carrying the .reg file.
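    For instance, PuTTY keeps its saved sessions under a well-known registry key, so the export is a one-liner (the output file name is arbitrary):

```shell
:: Export saved PuTTY sessions to a portable .reg file
reg export "HKCU\Software\SimonTatham\PuTTY\Sessions" putty-sessions.reg
```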

  3. Wow, I am not sure if someone has said thank you enough for sharing this great info. This is exactly what I am looking for. I have just built my server and I was about to install WS 2008 R2; there are some nuanced differences between my server hardware and yours, but overall it is close. Thanks again for sharing.

  4. How many VMs did you end up creating? We are working on doing this setup at home as well and are on the fence about whether to splurge for a motherboard with dual-socket CPUs or if we can get away with just one high-powered one. I want to be ‘future-proofed’, but at the same time this will just be for our own personal home use. I just love the idea of a crazy ‘seats’ setup – each of us being able to log in to our personal hosts from anywhere in the house, easily switching between rooms and devices. Most of the time it will probably only be running 2 VMs for ourselves (power users – gaming, photo editing, etc.) and probably a VM for the DC that won’t require much in the way of resources. Any others (guests, media), if we create them, will only be spun up occasionally, I think. Any advice will be greatly appreciated!

    1. I have several (10-15), but I don’t have them all running at the same time. I do have a couple of workstations (Win7 and Win8), a DC, some Windows servers to test out server software, and a Linux VM. For anything UI-focused, just remember that you will be remoting in, so the UI responsiveness might not be great. I don’t use the VMs for gaming or photo/video editing. If you just need 4-5 VMs, you could get away with a single quad-core CPU.

  5. Hi, I found your site whilst searching for how to set up Hyper-V on a home server/lab. This is something I am also looking into doing, the biggest difference being that I would use Hyper-V 2012 R2 stand-alone (maybe 2016 Nano…). Have the remote management tools improved since 2008 R2? My main PC is running Windows 10 Professional, and I was toying with the idea of utilising the Windows 10 RSAT on it to perform all the necessary configuration.

    Anyway, the main question I have is what setup you utilise for VM redundancy/backups. I am looking at running around 4x VMs: one for file sharing/personal cloud, another for Plex media streaming, and 2x OTHER. Ideally I want the first two to have some form of redundancy/backup, as they will host data that cannot be lost.

    The server I am looking to purchase for this is a Dell PowerEdge T20. It has software RAID on the motherboard, so I could in theory configure a number of disks in RAID 1 or 6, or alternatively create a pool of disks as mirrored within Storage Spaces. Is this the best way, or am I better off looking at alternative backup solutions for the VMs only?

