HPC with OpenStack - DRAFT

Where am I?

This page is intended to provide an overview and FAQ on the state of high-performance computing (HPC) on OpenStack clouds. It is maintained by the Scientific Working Group and updated based on information gathered within the OpenStack community and from outreach efforts at various HPC forums around the world.

OpenStack? HPC? What?!

OpenStack is many things to different people and organisations. Likewise HPC is an overwhelmingly broad area. When we talk about OpenStack and HPC together it is important to realise we might be talking about:

  • Architecture and enabling technology for HPC-focused OpenStack cloud deployments
    • These might be typical virtualised infrastructure clouds, or
    • Bare-metal infrastructure clouds, or
    • A mix of both
  • Complementary/"shoulder" self-service infrastructure deployed alongside, and integrated with, the real big-iron supercomputer(s)
  • Infrastructure clouds for high-performance data processing/data science (i.e. throughput-oriented workloads, as opposed to those bound by inter-process or shared-memory communication)
  • OpenStack used to provide IaaS to end-users, enabling them to dynamically create isolated HPC environments
  • OpenStack hidden from end-users and used simply as the underlying infrastructure provisioning system for a managed HPC platform/service
  • Modern HPC applications or services with native cloud integration wanting to make use of OpenStack cloud services

Current State

With the above caveats in mind, we can say that roughly 80% of people interested in HPC on OpenStack today are approaching it from the perspective of wanting to build flexible infrastructure that walks the line between orderly priority queues and unmanaged chaos. OpenStack is a highly capable and (often confusingly) flexible infrastructure provisioning framework. HPC requires infrastructure (compute, network, storage), and OpenStack can certainly deliver and manage it. The key is to make deployment choices that suit your use-cases. The most obvious up-front choice is whether you will use full system virtualisation or bare metal (containers can be layered atop both). Networking requirements are the next major consideration.
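By way of illustration only, here is one way the virtualised/bare-metal split can be surfaced to end users as separate flavors, using the standard OpenStack CLI. The flavor names, sizes and resource-class name below are invented, and bare-metal scheduling via custom resource classes assumes a recent Nova/Ironic release:

  # Hypothetical virtualised flavor for general HPC-ish instances
  openstack flavor create hpc.v16 --vcpus 16 --ram 65536 --disk 40

  # Hypothetical bare-metal flavor mapped to an Ironic node resource class
  # (recent releases schedule bare metal via custom resource classes)
  openstack flavor create hpc.bm --vcpus 32 --ram 262144 --disk 400
  openstack flavor set hpc.bm \
    --property resources:CUSTOM_BAREMETAL_HPC=1 \
    --property resources:VCPU=0 \
    --property resources:MEMORY_MB=0 \
    --property resources:DISK_GB=0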


Cruft below this point

Text from a relevant openstack-hpc list thread a while back that I (b1airo) am intending to coerce into something useful...

That choice largely depends on your typical workloads and what style of cluster you want. Beyond that, compared to "typical" cloud hardware, expect faster CPUs, faster memory, a faster network (probably with much greater east-west capacity), and integration of a suitable parallel file-system.

However, OpenStack is not an HPC management / scheduling / queuing / middleware system - there are lots of those already, and you should pick one that fits your requirements and then (if it helps) run it atop an OpenStack cloud. It might help, e.g., if you want to run multiple logical clusters on the same physical infrastructure, if you want to mix in other more traditional cloud workloads, or if you're just doing everything with OpenStack like the other cool kids. There are lots of nuances here, e.g., one scheduler might lend itself better to more dynamic infrastructure (adding/removing instances), another might be lighter-weight for use with a Cluster-as-a-Service deployment model, whilst another suits a multi-user managed-service style cluster. I'm sure there is good experience and opinion hidden on this list if you want to interrogate those sorts of choices more specifically.
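As one concrete (but purely illustrative) example of the "more dynamic infrastructure" case: Slurm's elastic/power-saving hooks can be pointed at scripts that create and delete OpenStack instances on demand. Slurm is just the example scheduler chosen here, and the script paths, node names and sizes are hypothetical placeholders:

  # Illustrative slurm.conf fragment - the resume/suspend scripts are
  # hypothetical wrappers around the OpenStack API/CLI
  ResumeProgram=/usr/local/sbin/openstack-boot-nodes.sh
  SuspendProgram=/usr/local/sbin/openstack-delete-nodes.sh
  SuspendTime=600
  ResumeTimeout=300
  NodeName=cloud[001-064] State=CLOUD CPUs=16 RealMemory=65536
  PartitionName=elastic Nodes=cloud[001-064] Default=YES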

Most of the relevant choices you need to make with respect to running HPC workloads on infrastructure that is provisioned through OpenStack will come down to your hypervisor choices. My preference for now is to stick with the OpenStack community's most popular free OS and hypervisor (Ubuntu and KVM+Libvirt) - when I facilitated the hypervisor-tuning ops session at the Vancouver summit (with a bunch of folks interested in HPC on OpenStack) there was no-one in the room running a different hypervisor, though several were using RHEL. With the right tuning KVM can get you to within a hair's breadth of bare-metal performance for a wide range of CPU, memory and inter-process comms benchmarks, plus you can easily make use of PCI passthrough for latency-sensitive or "difficult" devices like NICs/HCAs and GPGPUs. And the "right tuning" is not really some arcane knowledge; it's mainly about exposing host CPU capabilities, pinning vCPUs to pCPUs, and tuning or pinning and exposing NUMA topology - most of this is supported directly through OpenStack-native features now.
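Building on the hypothetical hpc.v16 flavor sketched earlier, that kind of tuning maps roughly to Nova flavor extra specs and libvirt settings as follows (a minimal sketch; exact extra-spec support varies by release):

  # Pin vCPUs to dedicated pCPUs, keep the guest on one NUMA node,
  # and back guest memory with huge pages
  openstack flavor set hpc.v16 \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=require \
    --property hw:numa_nodes=1 \
    --property hw:mem_page_size=large

  # On each compute node, pass the host CPU model straight through
  # (nova.conf, [libvirt] section)
  #   cpu_mode = host-passthrough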

To answer the GPU question more explicitly - yes, you can do this. Mainly you need to ensure you're getting compatible hardware (GPU and relevant motherboard components) - most of the typical GPGPU choices (e.g. K80, K40, M60) will work, and you should probably be wary of PCIe switches unless you know exactly what you're doing (recommend trying before buying). At the OpenStack level you just whitelist the PCI devices you want Nova to pass through and then define custom instance-types/flavors that request a GPU. The same approach goes for networking devices.
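A rough sketch of the Nova side of GPU passthrough. The vendor/product IDs, alias and flavor names are placeholders, and the option names/sections have moved between OpenStack releases, so check the documentation for yours:

  # nova.conf on GPU compute nodes (the alias is also needed where the
  # scheduler/API runs) - IDs are placeholders, find yours with: lspci -nn
  #   [pci]
  #   passthrough_whitelist = {"vendor_id": "10de", "product_id": "102d"}
  #   alias = {"vendor_id": "10de", "product_id": "102d", "device_type": "type-PCI", "name": "gpu"}

  # A hypothetical flavor that requests one passed-through GPU
  openstack flavor create hpc.gpu --vcpus 8 --ram 65536 --disk 40
  openstack flavor set hpc.gpu --property "pci_passthrough:alias"="gpu:1"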

Lastly, just because you can do this doesn't make it a good idea... OpenStack is complex, HPC systems are complex, and layering one complicated thing on another is a good way to create tricky problems that hide in the interface between the two layers. So make sure you're gaining something from having OpenStack in the mix here.

Links to be incorporated somehow...