VMware to Hyper-V architecture, planning, and migration

Sometimes, even the best planning needs to be tweaked on the fly.

In this case, what appeared to be a 'simple' 9 TB, six-server VMware to Hyper-V migration morphed into an "opportunity to get creative."

Issue: Replace aging server hardware and implement a more robust disaster recovery architecture (shorter time to recovery).

On the surface, this was a slam dunk: migrate the VMware virtual machines to new hardware running Hyper-V, set up Hyper-V replication, and configure a second Exchange Server for a DAG (Database Availability Group).
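For the Exchange piece, the DAG itself boils down to a few Exchange Management Shell commands once the second server is built. A minimal sketch of that end state; the server, database, and witness names below are placeholders:

    # Create the DAG and add both Exchange servers to it.
    New-DatabaseAvailabilityGroup -Name 'DAG1' -WitnessServer 'FS01' -WitnessDirectory 'C:\DAG1'
    Add-DatabaseAvailabilityGroupServer -Identity 'DAG1' -MailboxServer 'EX01'
    Add-DatabaseAvailabilityGroupServer -Identity 'DAG1' -MailboxServer 'EX02'

    # Seed a copy of the existing mailbox database onto the new second server.
    Add-MailboxDatabaseCopy -Identity 'DB01' -MailboxServer 'EX02' -ActivationPreference 2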

While the architecture and planning seemed solid (we had done a lot of this type of migration), what we didn't expect was the limited throughput of the existing network, the slowness of the older server hardware, and file names and paths that exceeded the Windows path-length limit (didn't see that coming!).

The first four machines migrated without a hitch (although more slowly than expected), inside their outage windows, and ran great on the new Hyper-V platform.

Regarding the migration speed, it turned out that the hardware drivers on the old Dell VMware hosts were the out-of-the-box VMware versions, not the Dell-optimized versions. The client had never noticed this when opening a single file or checking email on the Exchange Server.

We, on the other hand, were trying to pull multiple terabytes of data off the drives (old controller drivers) and across the network (old network drivers).

The first big gotcha was existing corruption in the VMware VMDK files. Every time we attempted to convert a copied-out VMDK file to VHD(X), it failed. We tried all the different ways to copy it out, and multiple converters. No luck.

Here is the first (and unsuccessful) clever part: we built a temporary virtual Windows Server 2012 R2 machine on the new Hyper-V server and joined it to the AD domain. Next, we started a robocopy from the old virtual machine's D: drive to the temporary machine's D: drive; basically the same setup as any physical-to-physical migration.
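That first copy attempt was a single robocopy run from the temporary server; a sketch, with a hypothetical server name, share, and log path:

    # Run from the temporary 2012 R2 server: mirror the old server's D: drive
    # to the local D: drive, with retries tuned down so flaky files don't stall the job.
    robocopy \\OLDFS01\D$ D:\ /MIR /COPYALL /R:2 /W:5 /MT:16 /LOG:C:\Logs\d-drive-copy.log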

Sounds like a no-brainer, right?   

Not quite. The next issue was long file names. Classic Windows APIs limit a full path to 260 characters (MAX_PATH) and a single file or folder name to 255. While not recommended, users can sidestep this by mapping a drive letter to a folder deeper in the tree (which shortens the effective path), but it was a big problem for robocopy (and, as it turns out, it was also the problem for our usual assortment of VMDK-to-VHD(X) migration tools)!
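Before picking a workaround, it helps to know how much of the data is affected. A quick sketch (the 255 threshold mirrors the limit above; on older PowerShell versions, Get-ChildItem can itself choke on the very longest paths):

    # List files whose full path exceeds 255 characters, longest first.
    Get-ChildItem -Path D:\ -Recurse -File -ErrorAction SilentlyContinue |
        Where-Object { $_.FullName.Length -gt 255 } |
        Sort-Object { $_.FullName.Length } -Descending |
        Select-Object FullName, @{ Name = 'PathLength'; Expression = { $_.FullName.Length } }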

Here is the second, and in this case successful, clever part. Robocopy clearly wasn't going to work on the long file names. So we used the backup software to back up the old D: drive and then restore it to the temporary server (on a Wednesday). None of the files the client was actively updating had paths over the limit, so we ran robocopy every night to sync the changes.
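The nightly sync was just a scheduled robocopy of the changes; a sketch, with hypothetical paths, names, and run time:

    # Schedule the nightly delta sync on the temporary server.
    # /MIR copies changes and removes deletions; unchanged files are skipped.
    $action  = New-ScheduledTaskAction -Execute 'robocopy.exe' `
        -Argument '\\OLDFS01\D$ D:\ /MIR /R:1 /W:5 /NP /LOG+:C:\Logs\nightly-d-sync.log'
    $trigger = New-ScheduledTaskTrigger -Daily -At '11:00 PM'
    Register-ScheduledTask -TaskName 'Nightly D Drive Sync' -Action $action -Trigger $trigger `
        -User 'NT AUTHORITY\SYSTEM' -RunLevel Highest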

Then on Friday night, for the migration, we ran one last robocopy to get a solid D: drive. All that was left was to shut down the old server, copy out the VMware VMDK file for the C: drive, convert it to a VHDX file, and create a new virtual machine on the Hyper-V server with it.
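The conversion and rebuild step can be scripted as well. The post doesn't say which converter finally succeeded, so treat this as a sketch using the Microsoft Virtual Machine Converter 3.0 module; the module path, file paths, names, and sizes are all assumptions:

    # Convert the copied-out VMDK to a dynamic VHDX.
    Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'E:\Staging\OLDFS01-C.vmdk' `
        -DestinationLiteralPath 'D:\VHDs\FS01-C.vhdx' `
        -VhdType DynamicHardDisk -VhdFormat Vhdx

    # Build the replacement VM around the converted system disk
    # (Generation 1, since the source server was BIOS-booted under VMware).
    New-VM -Name 'FS01' -MemoryStartupBytes 16GB -Generation 1 `
        -VHDPath 'D:\VHDs\FS01-C.vhdx' -SwitchName 'External vSwitch'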

We started the virtual server in its new home on the Hyper-V server. After everything looked good, we shut it down, along with the temporary server. We removed the temporary server's D: drive and attached it to the server in its new location. After starting up, the server saw its D: drive and purred like a kitten!
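Re-homing the data disk is two cmdlets once both VMs are powered off (the VM names, controller position, and VHDX path are hypothetical, and the controller numbers depend on how the disk was originally attached):

    # Detach the D: drive VHDX from the temporary VM...
    Remove-VMHardDiskDrive -VMName 'TEMP01' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
    # ...and attach it to the migrated server.
    Add-VMHardDiskDrive -VMName 'FS01' -ControllerType SCSI -Path 'D:\VHDs\TEMP01-D.vhdx'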

Now out of the woods, we configured the Hyper-V servers (three total) to replicate between each other as part of the new disaster recovery plan (plain-vanilla certificate-based replication).
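Hyper-V Replica with certificate authentication is a couple of cmdlets per host and per VM. A minimal sketch, assuming a suitable certificate is already installed on each host; the host names, port, storage path, and thumbprint are placeholders:

    # On each host: accept inbound replication over HTTPS with certificate auth.
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Certificate `
        -CertificateAuthenticationPort 443 `
        -CertificateThumbprint '<replica-cert-thumbprint>' `
        -DefaultStorageLocation 'D:\Replica' `
        -ReplicationAllowedFromAnyServer $true

    # On the primary host, per VM: replicate to a partner host and seed the initial copy.
    Enable-VMReplication -VMName 'FS01' -ReplicaServerName 'HV02.contoso.local' `
        -ReplicaServerPort 443 -AuthenticationType Certificate `
        -CertificateThumbprint '<replica-cert-thumbprint>'
    Start-VMInitialReplication -VMName 'FS01'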

After deploying the new solution, network and server response are dramatically faster, the disaster recovery window is much smaller (10 minutes), and the server footprint went from 16U to 6U.

Not the easiest migration we've done, but certainly one of the most educational!

Wireless network for 500 simultaneous users, with roaming and multiple SSIDs

Wireless networks get complex when the use case includes roaming, high user density, multiple SSIDs, VLANs, or interference.

In this case, the client needed to provide a more reliable wireless network to church attendees, volunteers and employees. 

Issue: Wireless users were getting dropped due to access point congestion (SonicPoint access points).

After an onsite wireless assessment, we spec'd the models and locations of the new access points and ordered the hardware (Ruckus wireless access points and a controller in this case).

While waiting for the hardware, we assisted the client in testing the existing Internet connection (50 Mbps) and configured the firewall with an additional isolated network.

After deploying the new solution, there are no dropped connections or lag, and the public/private networks are separated.

It's here! It's here! Server 2016 and Containers!

At App Gap, we do a lot of virtualization. That includes P2V (physical to virtual), V2V (virtual to virtual), tuning, replication, backup, and so on... you get the picture.

It isn't unusual to P2V six servers onto one physical Hyper-V server.

But containers! Well, containers are THE next evolution of virtualization. Because containers share the host operating system instead of each carrying a full one, they allow for EVEN MORE efficient use of hardware. Think dozens of containers on one physical server!

So what does that mean for my Windows Server environment?  Well... guess what! Container support is built into Windows Server 2016!
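If you want to kick the tires on a Server 2016 box, the rough shape of it is below. Treat it as a sketch: installing the Docker engine has its own steps (see Microsoft's docs), and the base image names have changed over time.

    # Add the Containers feature (a reboot is required), then install the Docker engine.
    Install-WindowsFeature -Name Containers
    Restart-Computer

    # With Docker running, pull a Windows base image and start a container.
    docker pull microsoft/windowsservercore
    docker run -it microsoft/windowsservercore cmd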

Microsoft has a great five-minute read on containers here:

https://msdn.microsoft.com/en-us/virtualization/windowscontainers/about/about_overview

Then take another minute to at least check out the picture on this page, and read the paragraph above it, for a quick overview:

https://azure.microsoft.com/en-us/blog/containers-docker-windows-and-trends/

And for you hard-core types, Docker is what started it all:

https://www.docker.com/whatisdocker