
Technology Short Take #27

Welcome to Technology Short Take #27! This is my usual collection of links, thoughts, rants, and ideas about data center-related technologies. Here’s hoping you find something useful!

Networking

  • If you’re interested in learning more about OpenFlow and software-defined networking but need to do it on a shoestring budget in your home lab, a number of guides have been written to help out. I haven’t personally used any of these guides yet, but I’m working my way in that direction. (I needed to fill in some other knowledge gaps first.) First up is Brent Salisbury’s guide on how to build an SDN lab without needing OpenFlow hardware. Brent is creating some fantastic content that I’ve found extremely useful. His earlier getting-started post on building an OpenFlow and Open vSwitch tutorial lab is also quite good. Another good resource is Dan Hersey’s guide to building an SDN-based private cloud in an hour. I encourage you to have a look at these posts if you’re at all interested in any of these technologies. (For a taste of what such an all-software lab looks like, see the Mininet sketch at the end of this section.)

  • Bruce Davie and Martin Casado (of Nicira, now part of VMware) have written a post comparing the VXLAN and STT tunneling protocols. Unsurprisingly, one of the key advantages highlighted for STT is its improved performance, thanks to its ability to leverage TSO support in NIC hardware. VXLAN, on the other hand, is seeing broader adoption across multiple vendors. There’s no mention of NVGRE (or just plain GRE). (A small sketch of the VXLAN header format appears at the end of this section.)

  • Related to the bare-metal provisioning work (see below under “Servers/Hardware”), Mirantis also detailed the networking configuration they put in place for OpenStack to support bare-metal nodes.
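
If you want a feel for what an all-software SDN lab looks like before diving into those guides, here’s a minimal sketch using Mininet (which builds its topologies out of Open vSwitch). The controller address and port are assumptions for a lab controller running on the same machine, and none of this is taken from Brent’s or Dan’s write-ups; it needs root privileges on a Linux box with Mininet installed.

```python
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

# One software switch (Open vSwitch under the hood) and three hosts, wired
# to an external OpenFlow controller. The controller address and port are
# assumptions for a lab controller running locally.
topo = SingleSwitchTopo(k=3)
net = Mininet(
    topo=topo,
    controller=lambda name: RemoteController(name, ip="127.0.0.1", port=6633),
)
net.start()
net.pingAll()   # exercises whatever flows the controller installs
net.stop()
```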
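
And since the VXLAN/STT comparison ultimately comes down to framing, here’s a tiny sketch of the VXLAN encapsulation itself: an 8-byte header carrying a 24-bit VNI, built with nothing but the struct module. The VNI value is arbitrary and chosen purely for illustration.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte with the I bit set
    (meaning the VNI field is valid), three reserved bytes, the 24-bit
    VNI, and one final reserved byte."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    flags = 0x08  # I flag
    return struct.pack("!B3xI", flags, vni << 8)  # VNI sits in the upper 24 bits

# VNI 5000 is an arbitrary tenant segment ID chosen purely for illustration.
header = vxlan_header(5000)
assert len(header) == 8
print(header.hex())  # -> 0800000000138800
```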

Servers/Hardware

  • Mirantis published an article discussing a framework they built for bare-metal provisioning with OpenStack, which allows OpenStack to place workloads onto bare-metal nodes instead of onto a hypervisor. It’s interesting work, but unfortunately it looks like it won’t be contributed back to the community (it was developed for one or more of their clients). There are also a few follow-up posts, such as this one on placement control and multi-tenancy isolation and this one on preparing images for bare-metal nodes. Also see the “Networking” section above for a related post on the networking aspects involved. (A purely illustrative sketch of the placement idea follows below.)
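
Since the Mirantis code itself isn’t public, here’s a purely hypothetical sketch of the placement idea described above: requests flagged as bare-metal get matched against a pool of physical nodes (each handed whole to a single tenant, which is where the isolation comes from), while everything else falls through to the normal hypervisor path. None of these names come from Mirantis’s framework or from OpenStack itself.

```python
# Hypothetical illustration only: bare-metal requests get a whole physical
# node, everything else goes to the usual hypervisor scheduling path.

class NoCapacityError(Exception):
    pass

BARE_METAL_POOL = [
    {"node": "bm-node-01", "ram_mb": 65536, "cpus": 16, "allocated": False},
    {"node": "bm-node-02", "ram_mb": 32768, "cpus": 8, "allocated": False},
]

def place(request):
    """Pick a target for a workload request (a dict with 'bare_metal',
    'ram_mb', and 'cpus' keys)."""
    if not request.get("bare_metal"):
        return "hypervisor-scheduler"  # the usual virtualized path
    for node in BARE_METAL_POOL:
        if (not node["allocated"]
                and node["ram_mb"] >= request["ram_mb"]
                and node["cpus"] >= request["cpus"]):
            node["allocated"] = True  # the whole node goes to one tenant
            return node["node"]
    raise NoCapacityError("no bare-metal node satisfies the request")

print(place({"bare_metal": True, "ram_mb": 16384, "cpus": 8}))  # -> bm-node-01
```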

Security

I don’t have anything for this area this time around, but I’ll stay alert for articles to add next time. Feel free to share something in the comments!

Cloud Computing/Cloud Management

  • I might have mentioned this before, but Ken Pepple’s OpenStack Folsom architecture post is just awesome. It’s well worth reading and reviewing in depth.

  • This OpenStack-on-Debian HOWTO is a bit older (and probably out of date), but it does give a decent overview of the components that are involved and—via the configuration—how they relate to each other. While the details for installing a current version of OpenStack are likely to be different now, you might still find this conceptually helpful.

  • These articles are a bit long in the tooth, but CSS Corp has a useful series of articles on bundling various Linux distributions for use with OpenStack: bundling CentOS, bundling CentOS with VNC, bundling Debian, and bundling OpenSUSE. It would be interesting to me to see how much of this, if any, could be automated with something like Puppet. If any enterprise Puppet experts want to give it a go, I’d be happy to publish a guest blog post for you with full details on how it’s done.

  • Much like the SDN lab guides mentioned in the “Networking” section earlier, there are also some great write-ups on running OpenStack in a lab. For example, Cody Bunch published this article on running OpenStack Private Cloud on ESXi, and Brent Salisbury (there he is again!) posted an older guide to OpenStack Essex on Ubuntu on VirtualBox as well as a newer guide to OpenStack DevStack on Fusion.

Operating Systems/Applications

Storage

  • I don’t fully understand all the details involved, but this post on changes in block protocol scalability in Xen outlines what sounds like good progress in improving efficiency.

  • This article is a bit older, published at the start of October, but it talks about an interesting project (product?) by QLogic called “Mt. Rainier.” (Stu Miniman of Wikibon has more information here as well.) Apparently, “Mt. Rainier” will allow customers to combine PCIe-based SSD storage inside servers into a “virtual SAN” (now there’s an original and not over-used term). The really interesting aspect, in my opinion, is the use of “Mt. Rainier” to create shared caches across servers. Is this the beginning of the data center fractal edge?

Virtualization

  • Big news in the QEMU world: with the QEMU 1.3 release, the QEMU-KVM and QEMU projects have been merged. Why is this important? It’s first necessary to understand the relationship between QEMU and KVM. KVM is the set of kernel modules that leverage the hardware virtualization functionality in Intel and AMD CPUs; that hardware assist is what makes it possible to run unmodified guests, including closed-source operating systems like Windows. QEMU, on the other hand, emulates everything else a VM needs: networking, storage, USB, keyboard, mouse, and so on. Both KVM and QEMU are needed for a full virtualization solution. Until the 1.3 release, QEMU (without hardware acceleration via KVM) lived in one branch and QEMU-KVM (with KVM hardware acceleration) lived in a separate branch; QEMU 1.3 completes the work of merging the two into a single development tree. (The first sketch at the end of this section shows how the two pieces fit together on the command line.)

  • The merge of QEMU and QEMU-KVM isn’t the only cool thing happening with QEMU; the 1.3 release also includes GlusterFS integration. This integration dramatically improves the performance of GlusterFS-backed VM disks by allowing QEMU’s block layer to talk to the Gluster backend directly (via libgfapi) rather than going through the userspace FUSE components. (The second sketch at the end of this section shows the gluster:// drive syntax.)

  • Erik Scholten of VMGuru.nl has posted a good hypervisor feature comparison document. It includes RHEV 3.1 in the comparison, even though RHEV 3.1 wasn’t released (was still in beta) at the time the comparison was written.

  • Speaking of RHEV: apparently RHEV 3.1 was released yesterday (Tuesday, December 4, 2012), although I haven’t been able to find any sort of official press release or announcement.

  • This article by Frank Denneman, on using SIOC with multiple datastores backed by a single pool of disks, debunks an argument I’ve heard quite a bit.

  • Need to compact a virtual hard disk in Windows 8/Windows Server 2012? Ben Armstrong shows how here.

  • I enjoyed this article by Josh Townsend on using SUSE Studio and HAProxy to create a (free) open source load balancing solution for VMware View.
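
To make the QEMU/KVM relationship described above a bit more concrete, here’s a minimal sketch of launching a guest: the same QEMU binary supplies the emulated devices, and the -enable-kvm flag hands CPU and memory virtualization to the KVM kernel modules. The disk image path is hypothetical.

```python
import subprocess

# The QEMU binary provides the machine model (disk, NIC, keyboard, and so
# on); -enable-kvm hands CPU and memory virtualization to the KVM kernel
# modules instead of QEMU's built-in software emulation (TCG).
# The disk image path is hypothetical.
cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                                    # requires /dev/kvm
    "-m", "2048",                                     # 2 GB of guest RAM
    "-drive", "file=/var/lib/images/guest.qcow2,format=qcow2",
]
subprocess.run(cmd, check=True)  # drop -enable-kvm to fall back to pure emulation
```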
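
And for the GlusterFS integration, a similar sketch: with QEMU 1.3’s native gluster block driver, the disk is specified as a gluster:// URI so the block layer talks to the volume directly instead of going through a FUSE mount. The server name, volume name, and image path are all made up for illustration.

```python
import subprocess

# Same guest, but the disk now lives on a GlusterFS volume and is accessed
# through QEMU's native gluster block driver rather than a FUSE mount.
# Server name, volume name, and image path are all hypothetical.
cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "2048",
    "-drive", "file=gluster://gluster-server/vmvol/guest.qcow2,format=qcow2",
]
subprocess.run(cmd, check=True)
```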

That’s it for this time around; no need to overwhelm you with too much information! Besides, I have to keep a few items around for Technology Short Take #28.

As always, comments, thoughts, rants, or corrections are welcome below.
