SDDC in a Box – Part 2 (Decision-Making Process)
Decision-Making Process (Hardware):
I am sorry for those who were waiting on the step-by-step build guide. I felt it was important to share my decision-making process first. I get questions about why I chose certain things over others, or why I did it this way and not another. These are important questions that determine the outcome of the design, so I thought I should answer them before I dive into the nitty-gritty details. I know all the techies (including me) like to jump straight to the good stuff, but hear me out!
So why did I build it? My purpose was to build a lab that I can use at home and take with me to customers for live demos. I want to demonstrate cloud automation, stretched clustering, monitoring, DR, containers, and other cool VMware solutions. In the past, I’ve seen people load up ESXi on laptops with custom drivers. I thought about doing this too, but laptops have limited RAM and don’t have the cool factor.
Why not just do the software demo over the Internet? I’ve done demos over the Internet and so have millions (I made this number up) of others. It works well most of the time, but I wanted to do something different. If you’ve done demos, you know that the Internet connection is not always reliable. At some point, you have probably apologized to customers repeatedly for slow speeds, or abandoned the demo altogether and whipped out PowerPoint slides. Having a demo unit with you guarantees end-to-end control. Also, it’s always cool to have something to look at and touch; it brings a different perspective to the table on what an SDDC is and can be. And no… I don’t carry this around everywhere I go.
The Case:
The case determined the form factor of the SDDC box. I needed to pick and choose equipment based on the case size and cram everything into it. The case also needed to conform to airline carry-on luggage size limits. SKB had a 4U travel case for audio equipment that fit the bill. As I explained in my previous post, audio gear rack-mount points are compatible with server rack mounts. I would need 2U for four E200-8D servers, 1U for the switch, and 1U for the patch panel. In the back, 3U for three 120mm exhaust case fans and 1U for the PDU.
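If you want to double-check the space math before buying a case, here is a quick sanity check of the rack-unit budget in Python. The component heights are just the ones listed above, nothing more.

# Rack-unit budget check for the 4U case, front and rear.
# Heights mirror the layout described above.
front = {
    "E200-8D shelves (4 servers)": 2,
    "10GbE switch": 1,
    "CAT6a patch panel": 1,
}
rear = {
    "120mm exhaust fan panel": 3,
    "PDU": 1,
}

for side, parts in (("front", front), ("rear", rear)):
    used = sum(parts.values())
    print(f"{side}: {used}U of 4U used")
    assert used <= 4, f"{side} layout does not fit a 4U case"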
Server:
After researching reviews and videos, I decided to go with the Supermicro E200-8D server. The server has five Ethernet ports: two 10GbE, two 1GbE, and one management port. It comes with a Hyper-Threaded 6-core Xeon processor and can take up to 128GB of RAM. vSphere 6.5/6.7 works with this server on a fresh install without needing custom drivers. Also, the size was just right to fit four of these servers into 2U of space in the SDDC case. Please note that the server is not on the VMware HCL.
Supermicro sells a dual rack-mount kit for the E200-8D. The dual mount kit is expensive and won’t fit into the SKB case because it is too long. If you are not taking these servers on the road and want to mount them in your rack, the rack kit may be a good choice. Instead, I found a cheap 1U, 8-inch-deep shelf with NO LIP. Most shelves have lips in the front or back to make the shelf more rigid. If you get shelves with lips, chances are you will not be able to fit all the gear in a 4U case. I know this because I bought them and had to return them. I’ll talk about how to secure the servers on the shelf in the next post.
Network:
I wanted full 10GbE throughput on a smart switch that supports creating VLANs and trunks. There are only a few budget 10GbE switch vendors out there, and the Buffalo 12-port 10GbE managed switch stood out. It was the only managed 10GbE switch that had all Ethernet-based (RJ45) ports, and it was cheap! Since the E200-8D has 10GbE Ethernet ports, it was a match made in heaven. The switch came with a rack-mounting kit, which was great. Going with copper-based 10GbE is a lot cheaper than Twinax or fiber with SFPs.
Since I was building a lab/demo case for travel, I wanted everything wired up. The sad thing was that I had a total of 20 ports and my 10GbE switch only has 12. The solution was to add a CAT6a patch panel and wire everything to it. I was short on time to prepare the box for the customer demo, so instead of punching down the cables on the patch panel, I opted for a more expensive ready-made, shielded plug-in patch panel. If I do it again, I may opt for punching down my own cables on the patch panel.
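For anyone checking my math, here is the port count that drove the patch panel decision. The "on the road" breakdown matches my reply in the comments below.

# Port math behind the patch panel decision.
servers = 4
ports_per_server = 5                 # 2x 10GbE + 2x 1GbE + 1x IPMI/management
total_ports = servers * ports_per_server
switch_ports = 12

print(f"Server ports to terminate on the patch panel: {total_ports}")   # 20
print(f"10GbE switch ports available:                 {switch_ports}")  # 12

# On the road, only the 10GbE pairs plus one uplink get patched through:
patched_on_the_road = servers * 2 + 1
print(f"Ports actually patched to the switch:         {patched_on_the_road}")  # 9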
Real estate inside the case is valuable. Every millimeter counts, and I couldn’t use thick, heavy shielded CAT6a cables. I had to look for the thinnest and lightest CAT6a cable I could find. I noticed a couple of thin cables on Tinkertry.com, googled them, and found them on Amazon.com. I picked up twenty 3-foot cables for the internal cabling.
Last but not least, I wanted a small router for Wi-Fi, routing, VPN, and the internet gateway. This router would route traffic between the physical network and the NSX Edge router in my SDDC box. In other words, the router handles North-South traffic and NSX handles all the East-West traffic. I searched for routers that were DD-WRT or OpenWrt based and came in small sizes, because I wanted the SDDC box to connect out to the internet over my phone hotspot or any other internet connection out there. I found the Mango router (OpenWrt-based) on Amazon, which had all the features I wanted plus more. I’ve been using DD-WRT at home for a very long time and I am a huge fan of it. I buy consumer routers off the shelf, flash them with DD-WRT, and they become super juiced-up routers with all the bells and whistles.
Storage:
I knew from the beginning that I was going to do all-flash vSAN. The E200-8D has limited internal space: one SATA SSD and one NVMe drive (I didn’t know about the PCI-E 3x NVMe 1U adapter at the time of the build). I decided to use a 250GB NVMe drive for the vSAN cache tier and a 1TB SATA SSD for the capacity tier. With vSAN RAID-5 erasure coding and dedupe/compression, it should be enough space for what I need. The sad thing about the E200-8D is that the SATA controller only has a queue depth of 32. Read why this can be a problem in “Why queue depth matters!” at yellowbricks.com.
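Here is a rough back-of-the-napkin capacity estimate for this setup. The RAID-5 efficiency is the standard 3+1 erasure coding ratio; the dedupe/compression ratio below is just an assumption for illustration, since real savings depend entirely on the workload.

# Rough usable-capacity estimate for the 4-node all-flash vSAN.
# Only the capacity tier counts; the NVMe cache tier does not add capacity.
hosts = 4
capacity_per_host_tb = 1.0              # 1TB SATA SSD per node
raw_tb = hosts * capacity_per_host_tb

raid5_efficiency = 3 / 4                # RAID-5 erasure coding: 3 data + 1 parity
usable_tb = raw_tb * raid5_efficiency

assumed_dedupe_ratio = 1.5              # hypothetical ratio; depends on the workload
effective_tb = usable_tb * assumed_dedupe_ratio

print(f"Raw capacity:                 {raw_tb:.1f} TB")
print(f"Usable after RAID-5:          {usable_tb:.1f} TB")
print(f"Effective with ~1.5x dedupe:  {effective_tb:.1f} TB")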
I am in the process of upgrading the vSAN to all-NVMe. NVMe drives are extremely fast and unlock up to 65,536 queues with up to 65,536 commands per queue. They are so fast that they can take full advantage of a 10GbE pipe and saturate it. Read about the benefits of running NVMe drives on vSAN on the VMware blogs. I’ll be using the spare 1TB SATA drives as a VMFS datastore for nested vSAN; running nested vSAN on top of vSAN does not work, which I found out on my old home vSAN lab. Here’s the helpful blog link: thinkcharles.net.
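To put those queue numbers side by side: the 32 is the controller queue depth mentioned above, and the NVMe figures are the protocol maximums, not what a single small drive will actually sustain.

# Queue comparison: onboard SATA controller vs. the NVMe spec maximums.
sata_controller_queue_depth = 32
nvme_max_queues = 65536
nvme_max_commands_per_queue = 65536

print(f"SATA controller outstanding I/Os:  {sata_controller_queue_depth}")
print(f"NVMe theoretical outstanding I/Os: {nvme_max_queues * nvme_max_commands_per_queue:,}")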
For the ESXi boot drive, I decided to go with a low-profile USB 3.0 thumb drive. Why not a SATADOM? Because USB 3.0 thumb drives are cheaper and they are pretty fast.
Cooling:
I live in a small apartment in the DC metro area. My home office (den) is open and adjacent to the living room, so I can’t have loud fan noise; it would drive my wife crazy. The E200-8D is not a quiet server by any means. Based on reviews and research, I decided to go with Noctua fans. These are expensive little buggers, but they are well worth it. Since they spin at a slower speed, I had to make sure I got enough cooling by adding a third fan into each server. To make sure all the hot air gets pumped out of the case, I also wanted a 3U fan panel with punch-outs for three 120mm Noctua fans. I couldn’t find any 3U panel for three 120mm fans that would fit the case, so I ended up building my own. I actually like the look.
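If you want to sanity-check the cooling, a common rule of thumb for air cooling is roughly CFM ≈ 1.76 × watts / ΔT(°C). The wattage numbers below are my assumptions for illustration, not measurements.

# Rough exhaust airflow estimate (rule of thumb; wattages are assumptions).
assumed_watts = {
    "E200-8D x4": 4 * 90,        # assumed ~90W per node under load
    "10GbE switch": 60,          # assumed
    "router and misc": 20,       # assumed
}
total_w = sum(assumed_watts.values())

delta_t_c = 10                   # allow a ~10C rise from intake to exhaust
required_cfm = 1.76 * total_w / delta_t_c

print(f"Estimated heat load: {total_w} W")
print(f"Airflow needed for a ~{delta_t_c}C rise: {required_cfm:.0f} CFM")
# Three 120mm Noctua fans move very roughly 40-60 CFM each at full speed,
# so there is headroom even when they are slowed down for noise.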
Power:
I didn’t want all the power cables hanging out of the box. I wanted it to have an “appliance” feel, with one power cable and a switch. I looked for rack-mount 1U PDUs that had a switch and outlets on both the inside and outside: inside outlets to route all power cables internally, and outside outlets to power any other devices and gadgets. The stock server and switch power cables are too long to fit in the case, so I needed shorter ones and went with 1 ft server power cables and a 3 ft right-angle power cable for the switch. I think I could have gotten away with 2 ft for the switch.
19 thoughts on “SDDC in a Box – Part 2 (Decision-Making Process)”
Hi David,
Great article so far, and I can hardly wait for the next posts!
But I still don‘t get how you managed your network cabling. Your 10G switch has 12 ports but you need 20 ports. How does the patch panel help here?
Best regards,
Karl
I have everything wired to the patch panel, but I only need to patch two 10GbE ports from each server to the switch: a total of 8 ports, plus one for the uplink. At home, I use an external switch for the Supermicro management ports. On the road, I do without the management ports and use the good old power button.
Thanks for this series. Will there be a part 3 about how you’re using this?
Part 3 will be the final blog post about the SDDC hardware build. I will be moving on to the SDDC software components and the vRealize suite.
So, you need to plug cables from the patch panel to the switch to use the lab, yes? Do you carry these cables with you? Do they fit in the case? 🙂
I have Monoprice slim 1-foot and 2-foot patch cables. They can stay patched and I can close the cover for transport.
Did you try this 3U cooling device? I’m curious if it would fit.
https://www.racksolutions.com/3u-horizontal-19-rack-fan-3.html
I’ve been inspired to build something similar, but I wanted full network connectivity and went with this switch instead (not entirely sure how I’m going to cable it, but I’m thinking of patching directly through the gap between the servers).
https://www.netgear.com/support/product/XS724EM.aspx
Any fan unit that mounts inside the box will not fit if you are using the same case as mine. That’s why I had to mount the fans outside (see the rear picture).
Good to know. I’ve got a single E200-8D with a 55GB Optane 800p (too bad it’s only x2) for cache, along with a 1TB 970 EVO on the NGFF PCI adapter, connected into the XS724EM. LAG works and VLANs are confirmed working, so now I just need to get 3 more E200-8Ds, build it all into the SKB case, and figure out the cable routing.
Sounds like you are building the same box as mine. There are two versions of the 4U SKB case; be sure to get the short version. Get thin cables. Thick, normal shielded cables won’t fit.
I’m looking into duplicating this to create a small SDDC for my office rack.
I’ve been looking at your 10G switch. Did you consider any other ones?
The Cisco SG350XG-2F10 is nearly the same price, but can also be stacked (important for me, perhaps not for a mobile setup).
Cisco is a trusted brand for sure. It is still more expensive and it won’t fit into my SDDC carry-on box. It’s too long.
Hi David
Awesome work. I was also curious whether the replacement model, the Buffalo BS-MP20 12-Port 10GbE Network Switch, will fit in the carry-on case.
Also, did you find any PCI adapters for the all-NVMe vSAN? Amazon says unavailable for now. I’m also trying to build an all-NVMe PCI solution.
It looks like other brands are available.
https://www.amazon.com/NGFF-Adapter-heatsink-Server-Profile/dp/B077QRPR9S/ref=sr_1_1?ie=UTF8&qid=1540147819&sr=8-1&keywords=pcie+m2+1u
Hi David,
Very interesting reading – thanks for posting info on your build. Can I ask, please… does the Buffalo switch allow you to configure a gateway or SVI for each VLAN, or do you have to have the DD-WRT router do that work? Looking at the Buffalo manuals, I can’t see that it works as a top-of-rack switch in the sense that the VMware Validated Design would expect. Thanks.
Hi Bob,
Sorry for the very, very late reply. The Buffalo switch is a managed L2 switch, so it does not provide gateways. You can use external routers or NSX Edge routers for L3 functions.
Great post! I’m looking at building a similar rig. Looking forward to the next post. Keep up the great work!
Thanks
Love this setup. I haven’t found the post where you mention how you anchor the E200s to the rack; how is that done? Also, where are you placing the power supplies? If you’re running ESXi off the USB drive, are you then using both NVMe drives as storage for the VMs to emulate vSAN? With your three-fan panel, how did you power the three fans? Thank you for a great write-up.