To learn OpenStack, you need to build OpenStack… regularly. Having a physical lab is imperative, as all-virtual labs just don't cut it. I started my lab journey on eBay, where you can find previous generations of server gear cheap. I was most familiar with the HP ProLiant series, so I went with the HP ProLiant DL360 G6 and G7 generations. Servers are all pretty much the same these days, but here are a few things to keep in mind:

  1. Memory is expensive on its own, so buy servers with the amount of memory you need already included. You can save a lot of money this way.
  2. Stick with the same hardware brand and model. This makes buying spare parts (drives, memory, power supplies, RAID batteries, etc.) much easier.
  3. Choose a server brand that still offers free firmware downloads. HP, for example, no longer provides firmware/BIOS downloads for its servers unless you have an active support contract for the particular model you are trying to obtain software for. At the time of this writing, I believe Dell does not have that requirement.
  4. Network switch. This is the heart and veins of your environment. Don't skimp on it. I have always had working knowledge of Cisco, so I chose to stick with it. I wanted to test out OpenStack Neutron plug-in capabilities, so I opted for the Cisco Nexus 3048. You can find them used on eBay for around $800.00 with 1-2 years of support. For me, having an enterprise-class switch that I work with in the field has made testing multiple lab scenarios very easy.
  5. CPUs: choose power efficiency over performance. I chose the opposite, and I pay for it every month when the power bill comes. I do recommend going dual-proc, though, as it's much cheaper to buy used servers with dual processors than to add them later.
  6. Cabling. Don't skimp on cabling. You can find multi-color packs on Amazon for cheap. At first, I decided to make my own cables. It was a pain and problematic in the long run, plus they were only CAT5e. I found these 3-foot CAT6 patch cables on Amazon at 10 x $14.99.
  7. Rack enclosure. I use an XRackPro2 12U Server Rack Noise Reduction Rackmount Enclosure Cabinet. In my opinion, it is worth the money for a home lab, as it's quiet, has built-in cooling, and, most importantly, looks awesome.
  8. Neatness. Spend the time on cable management, labeling, and switch port descriptions. It will save you lots of time in the long run.
  9. IPMI. Make sure the server hardware you choose supports IPMI for lights-out management. If you plan on doing bare-metal provisioning (which you should be), you will need it. Stick with the major server brands, IMHO, as they just work: HP, Dell, Cisco, Supermicro. Avoid IBM like the plague (trust me on that one).
  10. Hard drives. If you want to test Ceph and brag to your friends about your I/O performance, you are going to need a lot of them. All drives are not created equal, so if you have a mixture of 10k/15k, single-port/dual-port, SAS/SATA drives, expect horrible performance. Also, consumer SSDs don't always negotiate the fastest link speeds with servers, making them actually slower than SAS drives (at least in HP ProLiant servers). You can find used 15k dual-port SAS drives with sleds on Amazon for around $10.00 each for the 72GB models. I don't find that I need space; I need performance.
  11. Make one server a utility hypervisor. This way, you can build multiple versions of OpenStack Platform Director, Windows AD domain controllers, LDAP, BIND, monitoring, etc., all using VMs. I am using one of my HP DL360 G7 servers with 128GB of RAM. I installed 4 x Samsung 850 EVO Pro 512GB SSDs in a RAID 10 array, which I find performs quite well for virtual machines. I just use RHEL 7 and KVM with multiple NICs for bridges.
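Items 4 and 8 pay off together on the switch. As a sketch of what I mean by port descriptions on a Nexus 3048 — the interface numbers, VLANs, and hostnames here are made up for illustration:

```
! NX-OS: describe every port so tracing a cable never means pulling it
interface Ethernet1/1
  description dl360-g7-01 / eth0 / provisioning
  switchport access vlan 10
!
interface Ethernet1/48
  description dl360-g7-01 / iLO
  switchport access vlan 99
```

With every port described, `show interface description` becomes a living map of your cabling.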
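For item 9, the usual way to drive IPMI from a provisioning host is `ipmitool`. These commands are a sketch rather than a runnable script — the BMC address and credentials are placeholders for your own:

```shell
# Check a node's power state via its BMC (address/credentials are placeholders)
ipmitool -I lanplus -H 192.168.99.11 -U admin -P changeme chassis power status

# Force PXE on next boot, then power-cycle -- this is essentially what
# bare-metal provisioning tools do under the hood
ipmitool -I lanplus -H 192.168.99.11 -U admin -P changeme chassis bootdev pxe
ipmitool -I lanplus -H 192.168.99.11 -U admin -P changeme chassis power cycle
```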
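Before trusting a pile of mixed used drives (item 10), it's worth actually measuring them. A minimal `fio` job file along these lines will expose the differences — the target device and settings are placeholders you would adjust:

```ini
; fio job: 4k random I/O, the pattern that punishes slow drives
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[randrw-test]
filename=/dev/sdb    ; WARNING: writes are destructive -- use a scratch drive
rw=randrw
bs=4k
iodepth=16
```

Run it with `fio <jobfile>` on each drive type and compare IOPS before you build your Ceph cluster on them.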
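As a quick sanity check on the array in item 11: RAID 10 mirrors pairs of drives and stripes across the mirrors, so usable space is half the raw total.

```shell
# RAID 10 usable capacity = (drive count / 2) * drive size
drives=4
size_gb=512
usable=$(( drives / 2 * size_gb ))
echo "${usable} GB usable"   # 4 x 512 GB in RAID 10 -> 1024 GB
```

You trade half the raw capacity for mirroring, but for VM workloads the read/write performance is worth it.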