Every year or so the industry comes up with another term that dominates all marketing messaging, regardless of what you are building or selling.  Marketing terminology creates a lot of confusion and makes it more difficult for those of us in the field to explain technology to our clients.  The latest term to become fashionable is hyper-converged/hyper-storage.  So what is it, really?

Let’s start with hyper-converged infrastructure.  This is where all infrastructure services run off a single appliance that consists of CPU, memory, networking (NIC/HBA), and storage.  It requires a hypervisor to carve the available resources into individual machines.  These architectures have become possible as a result of SSD/Flash technology entering the market.  There are advantages to this model: it is simple to procure, typically as a single line item; it is a single-vendor solution that doesn’t require integration of components; and it promises simplicity and cost reduction.  There are some potential drawbacks as well: many offerings have no way to isolate the CPU used for compute workloads from the CPU used for storage services, which can cause unpredictable performance; in server virtualization environments it is challenging to predict how much CPU, memory, and storage is required to run a given set of virtual machines, and that varies from organization to organization and application to application (it is more predictable in VDI implementations); it is challenging to scale only the resource that is actually needed without over-procuring in other areas; and a single vendor is responsible for the full stack (good enough at everything but not great at anything).  There are some newer hyper-converged offerings that allow compute resources to be increased without affecting storage and vice versa; in those cases, though, the software is often sold separately from the hardware.
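As a rough illustration of the sizing challenge mentioned above, here is a minimal sketch of how whichever resource runs out first forces you to buy whole nodes and over-procure the rest.  All of the figures (node specs, per-VM demands, the cores reserved for storage services) are hypothetical placeholders, not numbers from any particular product.

```python
# Rough hyper-converged sizing sketch (all numbers are hypothetical).
# Illustrates why sizing is hard: whichever resource runs out first
# (CPU, memory, or storage) dictates how many appliances you buy.
import math

# Assumed per-node capacity (hypothetical appliance).
NODE_CORES = 32
NODE_RAM_GB = 512
NODE_USABLE_TB = 20
STORAGE_SERVICE_CORES = 4      # cores reserved for the storage stack (assumption)

# Assumed average demand per virtual machine.
VM_CORES = 2
VM_RAM_GB = 16
VM_STORAGE_TB = 0.5

def nodes_required(vm_count: int) -> dict:
    """Return nodes needed per resource; the max drives the purchase."""
    by_cpu = math.ceil(vm_count * VM_CORES / (NODE_CORES - STORAGE_SERVICE_CORES))
    by_ram = math.ceil(vm_count * VM_RAM_GB / NODE_RAM_GB)
    by_storage = math.ceil(vm_count * VM_STORAGE_TB / NODE_USABLE_TB)
    return {
        "cpu": by_cpu,
        "ram": by_ram,
        "storage": by_storage,
        "nodes_to_buy": max(by_cpu, by_ram, by_storage),
    }

if __name__ == "__main__":
    print(nodes_required(200))
    # {'cpu': 15, 'ram': 7, 'storage': 5, 'nodes_to_buy': 15}
    # CPU is the bottleneck here, so RAM and storage get over-procured.
```

With these made-up numbers, CPU forces the purchase of 15 nodes while only 5 nodes’ worth of storage is actually needed, which is exactly the over-procurement problem described above.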

Hyper-performance storage arrays have only recently been making their way to market.  Most vendors in this space are not GA yet, but they have been developing a different way to do storage.  Most of these vendors have based their technology on the NVMe standard, which delivers SSDs with high IO performance and very low latency.  In its current state, there is no way to share NVMe drives across multiple hosts.  What these vendors are trying to do is enable users to share NVMe storage over a fabric without losing any, or much, of its performance gains.  The idea is that an all-Flash/SSD array that connects to clients over iSCSI, FC, or NFS can only deliver a fraction of the performance inherent in the drives themselves.  Additionally, there is latency to contend with as a result of using standard storage protocols such as SCSI.  This means that a typical all-SSD/flash array caps out, depending on the number of drives, at 250K-500K IOPS.  The hyper-performance arrays claim that they can deliver microseconds of latency and millions of IOPS in a relatively small footprint.  To assure low latency and high throughput, these vendors require the use of RDMA over Ethernet or InfiniBand.  Are there enough applications that demand the sort of performance these systems can deliver?  Will this be another turning point in the industry, as the introduction of iSCSI was?  That all remains to be seen.
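A back-of-the-envelope comparison helps put those numbers in perspective.  The sketch below assumes an illustrative per-drive NVMe IOPS figure and uses the 250K-500K array cap cited above; the values are illustrative, not measurements from any vendor.

```python
# Back-of-the-envelope IOPS comparison (all figures are assumptions
# for illustration, not vendor measurements).

DRIVE_COUNT = 24                 # drives in a small array (assumption)
NVME_DRIVE_IOPS = 500_000        # assumed 4K random-read IOPS per NVMe drive
SCSI_ARRAY_CAP_IOPS = 500_000    # upper end of the 250K-500K cap cited above

raw_drive_iops = DRIVE_COUNT * NVME_DRIVE_IOPS
utilization = SCSI_ARRAY_CAP_IOPS / raw_drive_iops

print(f"Raw drive capability: {raw_drive_iops:,} IOPS")
print(f"Delivered through a SCSI-based controller: {SCSI_ARRAY_CAP_IOPS:,} IOPS")
print(f"Fraction of the drives' potential reaching clients: {utilization:.1%}")
# Roughly 4% in this example -- the rest is lost in the controller and
# the SCSI protocol path, which is the gap these fabric-attached NVMe
# designs are aiming to close.
```

Even with generous assumptions for the controller, the drives can collectively do an order of magnitude more work than the traditional protocol stack lets out of the box, which is the whole pitch behind sharing NVMe over an RDMA fabric.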

So what will be next in hyper(everything)?
