FIRST LOOK: HP takes giant leap in server design

Moonshot hyperscale server brings high density and low power consumption to the data center

By Tom Henderson, Network World 
February 10, 2014 05:59 AM ET

Network World - When it comes to data center servers, the goal is to pack the most power into the smallest, most efficient package. HP has leapfrogged past traditional blade servers with its new Moonshot line that delivers high density and low power in a space-saving "cartridge-based" chassis.

We received the first publicly reviewable unit in the Moonshot series, and at the end of testing we were exhausted, but also in awe. The enormous effort required for initial configuration, we found, pays off handsomely.

Strategically, HP has launched Moonshot as a response to the white-box server makers who are grabbing an increasing share of the server market, especially among cloud service providers and large enterprises. Moonshot falls into the general category of hyperscale computing, which means it’s designed for data center and Big Data environments where the ability to quickly add large numbers of servers is important.

So, what exactly is it? For about $62,000, you get 45 server cartridges, a 4.3U chassis (7.5 inches tall), power supplies, a management unit, an internal crossbar switch (ours had two), a 10 Gigabit Ethernet uplink controller, power cords, and rack mounts. In other words, a server farm in a box.

While one might have expected low-power ARM processors, HP went with x64 dual-core Intel Atom CPUs. Each core has two threads. And each cartridge came with a 1TB conventional hard drive along with 8GB of DDR3/1333MHz memory. SSD options are available, too.

Another interesting wrinkle: Only Linux distributions are supported: Red Hat, CentOS, Fedora, and Ubuntu. No Windows.

When it comes to density, Moonshot hits its target. Eight Moonshot chassis fit into a 42U rack for a total of 360 discrete server cartridges. That full rack would consume only 9,600 watts (eight chassis at roughly 1,200 watts apiece, or about 27 watts per server), a small fraction of the power and heat-removal load of an equivalent rack of 42 1U four-CPU/four-core servers.

Inside Moonshot, there are four high-speed buses. Network I/O is handled by a Broadcom uplink chassis adapter with six 10G Ethernet SFP+ connectors, for a gross total of 60Gbps of Ethernet. Also on the rear of the chassis are power supply connections, an HP Integrated Lights-Out (iLO) Gigabit Ethernet port for chassis control (but not switch control, initially), serial ports, and a microSD card slot.

In a typical blade server chassis, all of the blades connect to one backplane for networking and storage. With Moonshot, cartridges are managed in three zones: two of equal size, and one smaller. One or two Ethernet switches can be installed internally. We obtained HP's Cluster Management Utility (CMU) software for testing, and we suspect most purchasers of the Moonshot system will want to license it.

Each cartridge is of a uniform type, not shielded with metal casings as blades often are, and each Moonshot chassis is homogeneously built of a specific cartridge type. The cartridges look very much like single-board computers with a bus. Because the cartridges aren't shielded, the chassis needs no channels or barriers to route airflow. The cartridges use comparatively little power, and the overall chassis, including switches and infrastructure, draws less than 1,200 watts in aggregate. At peak during our testing, the unit pulled 1,174 watts.

Configuration caveats

Unless an optional configuration service has been purchased, the unit arrives totally unconfigured, and it isn't provisioned in conventional ways initially. Installation can be very arduous for those not well versed in HP servers and networks, but subsequent provisioning and re-provisioning is highly automated and can be fast. The documentation very clearly needs work.

Because of its internal switching architecture and reliance on Linux, you'll need a hybrid network and systems engineer to make it work. We changed hats frequently as we ran Moonshot through its paces.

HP offers “Factory Direct” pre-installation and configuration options. For the faint of heart, we recommend going Factory Direct with pre-installed options, as our installation wasn't fun.

Initial configuration comes through a connection to a serial port, which we found painful for many reasons. A microSD card built into the management module on the rear of the Moonshot chassis can serve as configuration storage.

We connected a notebook, running a terminal program through a USB-to-serial adapter, to one of two serial ports on the rear of the chassis. One serial port is used for HP’s iLO management, and the other is for the Broadcom switch that connects Moonshot to the rest of the world. HP doesn't have an exact formula for what kind of serial connection is needed; a standard 9-pin D-sub cable ought to work, but two of ours failed before HP sent one that finally did.

The firmware seems primitive. We made it work, but with more guessing than we like; there are marks on the wall at the lab from throwing things in frustration. In summary, the procedure is to use the serial jack to initialize user passwords and to provision the switch with IP address settings so that the chassis can talk to the world through the switch. From that point onwards, the serial cable is recyclable. For all its other security, the chassis initially uses guessable passwords, and it in no way vets the strength of subsequent passwords. They must be changed.
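
For the curious, here's a minimal sketch of the kind of scripted serial session involved, using Python and the pyserial library. The device path, baud rate, and every command shown are placeholders of our own invention, not HP's or Broadcom's actual CLI syntax:

    # Sketch of a first-boot serial session using pyserial.
    # The device path, baud rate, and every command below are placeholders;
    # the real iLO and Broadcom switch consoles have their own syntax.
    import time
    import serial

    def send(console, command, settle=1.0):
        """Send one command, wait briefly, and return whatever was echoed."""
        console.write((command + "\r\n").encode("ascii"))
        time.sleep(settle)
        return console.read(console.in_waiting or 1).decode("ascii", "replace")

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as console:
        print(send(console, ""))        # wake the console, show the prompt
        print(send(console, "enable"))  # placeholder: enter privileged mode
        print(send(console, "ip address 10.10.1.2 255.255.255.0"))  # placeholder
        print(send(console, "write memory"))  # placeholder: save the config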

Once the switch talks and the chassis is alive, Moonshot is provisioned by setting up IP address schemes to match the PXE requests that the cartridges will make. Once that's done, the cartridges can be provisioned via PXE to become server instances. Each server, in turn, has two cores and four threads to use for apps, whether those run natively, inside LXC (the containerizing scheme Ubuntu has been pushing), or under other partitioning methods for scale-out and compression.
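
To give a flavor of what "IP address schemes to match PXE requests" means in practice, here's a sketch that generates ISC dhcpd host entries, one per cartridge, so each PXE-boots to a predictable address. The subnet, MAC addresses, and boot filename are illustrative stand-ins, not values from the review unit:

    # Sketch: emit ISC dhcpd host entries so each of the 45 cartridges
    # PXE-boots to a predictable fixed address. The subnet, MACs, and boot
    # filename are illustrative; substitute what your chassis reports.
    SUBNET = "10.10.1"
    BOOT_FILE = "pxelinux.0"
    MACS = [f"02:00:0a:0a:01:{n:02x}" for n in range(1, 46)]  # placeholder MACs

    entries = []
    for n, mac in enumerate(MACS, start=1):
        entries.append(
            f"host cartridge{n:02d} {{\n"
            f"  hardware ethernet {mac};\n"
            f"  fixed-address {SUBNET}.{n + 10};\n"
            f'  filename "{BOOT_FILE}";\n'
            f"}}"
        )

    with open("moonshot-hosts.conf", "w") as out:
        out.write("\n".join(entries) + "\n")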

The Linux distros are customizable, and there are methods to let HP's Cluster Management Utility software do the bus provisioning that allows cartridges to be used in scale-out schemes.

We made the Moonshot swallow CentOS, the OS most used in HP's documentation examples. The procedure: designate one of the 45 cartridges as the “master cartridge,” then use the master to PXE-provision each of the other nodes in whatever flavor combinations are desired. We provisioned the cartridges as fast as we could, which didn't take long, as the internal Gigabit Ethernet switch is non-blocking and most of installing a distro amounts to copying unneeded, seldom-used, once-in-a-lifetime-if-we're-lucky stuff. Each cartridge consumes about 11.3 watts at maximum, not including chassis overhead, as measured by our handy Kill-A-Watt meter.
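
As an illustration of the master-cartridge step, the sketch below writes a pxelinux configuration that points the other nodes at a CentOS installer and kickstart file served from the master. The kernel, initrd, and kickstart paths are assumptions that must match your own TFTP and HTTP roots:

    # Sketch: write the pxelinux.cfg/default that PXE-booting cartridges
    # will fetch. All paths and the kickstart URL are assumptions; they
    # must match what your TFTP server and master node actually serve.
    from textwrap import dedent

    PXE_CONFIG = dedent("""\
        DEFAULT centos-install
        LABEL centos-install
          KERNEL centos6/vmlinuz
          APPEND initrd=centos6/initrd.img ks=http://10.10.1.11/ks.cfg ksdevice=eth0
        """)

    with open("/var/lib/tftpboot/pxelinux.cfg/default", "w") as out:
        out.write(PXE_CONFIG)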

We then connected to the master node through the internal Virtual Serial Port (VSP, analogous to watching a VMware console or an RDP-like remote instance boot) to prep it as the image source for the 44 other nodes. Each cartridge can be used whole by one OS, split into two, or, if you want to play around, divided up using a virtualization method that doesn't depend on Intel VT support.
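
A sketch of scripting that VSP connection follows, using the pexpect library. On a standard iLO you SSH in and start the virtual serial port from the CLI; the per-node "connect node vsp" syntax, the prompt, and the address shown here are our guesses at a Moonshot chassis-manager equivalent, not documented commands:

    # Sketch: attach to a cartridge's boot console over the Virtual Serial
    # Port by SSH-ing to the chassis iLO. The address, credential, prompt,
    # and especially the per-node VSP command are assumptions, not HP's
    # documented Moonshot CLI.
    import pexpect

    session = pexpect.spawn("ssh Administrator@10.10.1.2", timeout=30)
    session.expect("password:")
    session.sendline("not-the-default-password")  # placeholder credential
    session.expect("->")                          # assumed iLO CLI prompt
    session.sendline("connect node vsp c1n1")     # hypothetical command
    session.interact()                            # hand the console to us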

One node failed early. We updated the firmware, a fairly simple process, but something else was wrong, and HP overnighted a working replacement. This is where the ability to use a virtual serial port to watch cartridge boot-time messages came in handy.

Tests: Processing Power

The Atom processor used in the cartridges isn't a Xeon-family CPU, so it lacks tremendous processing power, but it’s reasonably fast.

We compared the Moonshot cartridge with several other servers and desktop units to find where its musculature fits. We matched memory and the number of running daemons (killing and adding them as needed), and used a physical Linux drive to ascertain disk speed with LMBench3, a tired but reasonable benchmark for Linux boxes. We abbreviated the test, but used equal memory and other settings across the types of systems we tested.

For this test, each cartridge was made available in whole to LMBench3, which we compiled with gcc. We limited the VMs we tested to one vCPU, equalized the daemons, and otherwise used default settings. We also used the Phoronix Test Suite to gauge cartridge speed against several types of dual-core systems.
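
For readers who want to reproduce the flavor of this, here's a sketch of a harness that runs a couple of LMBench3 microbenchmarks on every cartridge over SSH and dumps the raw output. It assumes the lmbench binaries are built and on each node's PATH, that key-based SSH is in place, and that the addresses match your provisioning scheme:

    # Sketch: run two LMBench3 microbenchmarks (memory bandwidth, memory
    # latency) on every cartridge via SSH and print the raw results.
    # Addresses are illustrative; the lmbench binaries are assumed to be
    # on each node's PATH. lmbench reports on stderr, so keep both streams.
    import subprocess

    NODES = [f"10.10.1.{n}" for n in range(11, 56)]   # 45 cartridges
    TESTS = ["bw_mem 64m rd", "lat_mem_rd 64 128"]

    for node in NODES:
        for test in TESTS:
            result = subprocess.run(
                ["ssh", f"root@{node}", test],
                capture_output=True, text=True, timeout=300,
            )
            print(node, test, result.stdout.strip(), result.stderr.strip())
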
Bottom line: The cartridges aren't state-of-the-art Xeons in terms of speed, but they're not slow either, given their low power consumption.

Conclusion

HP aptly calls Moonshot “a software-defined server,” and we agree. Because Moonshot is delivered “raw,” it is both an industrial controller and a small server-farm-in-a-box, and it lends itself to the “maker” world as well as to green-minded server buyers.
This is a radically different server designed to compete with the custom infrastructure used by some of the major websites and CDNs, but as scale-out rather than scale-up. It's not a big machine for virtualization; indeed, the processors support only the most rudimentary virtualization schemes, because they're not designed for it.
In aggregate, Moonshot's computational density is comparatively awesome, especially for the power consumed. It's neither a blade server nor a dense-core server, and it's a dramatic change from an otherwise conservative server vendor.

Where separate physical instances need to be scaled 45 at a time, there is no real equivalent without building custom (or otherwise) infrastructure. We see numerous clustering opportunities for Moonshot, and look forward to new cartridge modules. The software-defined server moniker HP applies to Moonshot is apt, but the software needed to truly define Moonshot is still elusive, and what's there currently requires immediate security bolt-down.

As an array, and a possible element of a cluster, it's almost revolutionary in bucking the mainstream of 1U-defined computing. Moonshot needs some additional simplicity, lacquer, and manageability -- but once those hurdles are overcome, it's a key and highly efficient puzzle piece in the NOCs of the future.

Henderson is principal researcher for ExtremeLabs, of Bloomington, Ind. He can be reached at kitchen-sink@extremelabs.com.

How We Tested

We installed the Moonshot chassis in a separate cabinet in our NOC at Expedient/nFrame in Indianapolis; it's a long chassis, and our rack cabinet won't quite accommodate it with the back door still on -- just slightly too long. We connected the serial cable described above and instructed the chassis to wake up in various ways. We later connected the Moonshot Gigabit Ethernet ports to a switch and began to program the configuration. We then obtained two Extreme Networks switches and connected them to two other Extreme Networks switches in our cabinet, using SFP+ and fiber cables to allow crossbar connectivity between the Moonshot ports and our internal network.

We configured a master node running CentOS 6.4 and developed an internal network for the Moonshot cartridges, allowing them to boot by PXE. With DHCP, TFTP, and images ready, we configured the remaining 44 cartridges with the same version of CentOS and proceeded to PXE-boot them. We used Puppet Enterprise's puppet communications among the cartridges for status and state observation, then proceeded to test loop-back output of the cartridges.
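
As a simple example of the kind of state check involved, this sketch sweeps the cartridge addresses (illustrative ones, matching the earlier examples) and reports which nodes came up after the PXE boot:

    # Sketch: confirm every PXE-booted cartridge is answering. Addresses
    # are illustrative and should match your dhcpd assignments.
    import subprocess

    NODES = [f"10.10.1.{n}" for n in range(11, 56)]

    up, down = [], []
    for node in NODES:
        alive = subprocess.run(
            ["ping", "-c", "1", "-W", "2", node],
            stdout=subprocess.DEVNULL,
        ).returncode == 0
        (up if alive else down).append(node)

    print(f"{len(up)} cartridges answering; missing: {down}")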

In each test we equalized running daemons to the minimum needed to run the test, and these were the same among the systems.

Notes: We strongly recommend HP's Cluster Management Utility for purchasers of Moonshot; although this product isn't reviewed here, it makes Moonshot more livable.

http://www.networkworld.com/reviews/2014/021014-hp-server-278512.html
