There are many metrics folks use to determine whether a startup is a success or not.  Of course the basic ones are: are they profitable? were they acquired for more than was invested? do they matter to customers?  There are also more subtle metrics, in my humble opinion, that are not as obvious.  These metrics are what I would like to discuss.

Whenever we discuss success or failure, it is important to put the results in the context of the overall market.  On the surface, an acquisition of $300M on an investment of $50M seems like a slam-dunk.  Financially, that is true, and if financial metrics are all that matter on an individual deal, the story ends there.  But what if the company’s closest competitor was acquired for $1.2B?  Would the $300M still seem like success?  If you were an investor, would you prefer an exit of $300M or $1.2B?

Another way to look at it is via market penetration.  Consider two companies with the same number of customers, where company A has more revenue per customer than company B.  In five years, company A’s technology is obsolete while company B’s technology is the default standard.  Which one was the success?

Yet another way to look at success is growth rate, which has a lot to do with the go-to-market strategy of choice.  A direct sales model will deliver high-end enterprise customers, potentially at premium margins.  In the short term, that is a winning strategy, but over time it is expensive and time consuming to scale, which limits the growth rate and potential of the company.  Conversely, a company may choose a channel strategy from the start, where the channel is empowered.  Initially, the investment might outweigh the rewards, but as the business scales so does its ability to deliver.  The growth rate can actually accelerate over time, driving the overall cost of sale down.  A well executed go-to-market strategy, which includes sales and marketing, may be the determining factor of success or failure, even when failure means coming in second or third.


Storage performance is core to application performance and data access.  When we talk about storage performance, we typically talk about IOPS and throughput, but there is a third variable, latency.  Latency measures the time it takes for storage to respond to a request or instruction issued by the CPU.  The lowest latency is achieved by delivering data from memory at the speed of memory, typically around 1us.  If data could be delivered at that latency, we would have a highly efficient server architecture, but there are a number of factors that prevent us from seeing latency at that level.

  • Application latency – the inherent architecture of the application may make it impossible to achieve microsecond latency.  The application’s own operations add to the overall latency.
  • Local file system – since DRAM is volatile, data that requires persistence must be committed to persistent media before an acknowledgement is returned.  The local file system is responsible for taking blocks off DRAM and copying them to other media on the I/O bus.  A common Linux file system such as XFS or EXT4 can add as much as 250us.  Even with the replacement of DRAM by NVDIMM (persistent memory), the latencies remain at a minimum of 250us.  Though 250us may seem like nothing, in a typical database environment removing that 250us alone would increase IOPS and throughput per core by 350% and 410% respectively (a rough sketch of the mechanism follows this list).
  • Network – when data travels over the network, whether it is FC or IP, there are added latencies.  Most SSD/Flash arrays deliver performance at 1ms or more of latency.  If the SSD/Flash is sitting on the PCIe bus, that latency may be reduced to a range between 500 and 800 microseconds.  Recently, a new protocol has been developed to allow shared storage (SAN) to deliver the same latency as storage on the PCIe bus.  This is the NVMe standard, extended across the network as NVMe over Fabrics.
  • Drive media – Flash has a lower latency profile than HDD, which is not surprising since an HDD is a mechanical device where the speed at which the platters spin correlates to the time it takes to pull data off the drive.  Flash is not a mechanical medium and doesn’t have the same delays built in.
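To make the mechanism concrete, here is a rough back-of-the-envelope sketch.  It assumes a single outstanding synchronous I/O per core, and the component latencies are illustrative assumptions rather than measurements from any particular system; the point is to show why shaving off 250us matters, not to reproduce the exact 350% and 410% figures.

```python
# Back-of-the-envelope sketch: per-core IOPS for a synchronous I/O path.
# All latency numbers below are illustrative assumptions, not measurements.

MICROSECOND = 1e-6

def per_core_iops(latency_components_us):
    """With one outstanding synchronous I/O per core, IOPS ~= 1 / total latency."""
    total_latency_s = sum(latency_components_us) * MICROSECOND
    return 1.0 / total_latency_s

# Hypothetical path: memory + local file system + network + drive media
baseline = {"memory": 1, "file_system": 250, "network": 500, "drive": 100}
reduced = dict(baseline, file_system=0)  # shave the ~250us file system overhead

before = per_core_iops(baseline.values())
after = per_core_iops(reduced.values())
print(f"{before:,.0f} -> {after:,.0f} IOPS/core ({after / before - 1:.0%} gain)")
```

The shorter the rest of the path, the bigger the relative payoff from removing that file system overhead, which is exactly why it matters so much once the media itself is fast.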

Of course we can’t leave out IOPS and throughput.  IOPS measures how many operations can be performed per second, while throughput is how much data can be transferred through a given pipe.  Depending on the application, one or the other of these metrics will be more relevant.
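The two metrics are tied together by the I/O size the application issues.  Here is a minimal sketch of that relationship; the workload numbers are assumptions chosen purely for illustration.

```python
# Minimal sketch: throughput = IOPS x I/O size.  Workload numbers are assumed.

def throughput_mb_s(iops, io_size_kb):
    """Megabytes per second delivered by a given IOPS rate and I/O size."""
    return iops * io_size_kb / 1024.0

# Small random I/O (e.g., a transactional database) vs. large sequential I/O
print(throughput_mb_s(iops=50_000, io_size_kb=8))    # ~391 MB/s from 50,000 IOPS
print(throughput_mb_s(iops=2_000, io_size_kb=1024))  # 2,000 MB/s from only 2,000 IOPS
```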

Applications that stream data sequentially require more bandwidth and are therefore more concerned with throughput.  Throughput is bounded by the aggregate bandwidth of the drives in a given system, the controllers, and the network, whichever is lowest.  Even if you have a system capable of delivering gigabytes of data per second, it still needs the network to carry that data, and there is often an imbalance between the network and system capabilities.  Recently a client expressed a concern that exemplifies this issue.  They are a research institution where a lot of data is created by the labs and then processed by the investigators.  The challenge they face is that the amount of data being created and moved to a centralized location is much greater than what the network can handle.  As a result, they are unable to transfer data over the wire; some use tape or don’t move the data at all.
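Here is a rough sketch of that bottleneck arithmetic; the component bandwidths are hypothetical.

```python
# Rough sketch: delivered throughput is gated by the slowest component in the
# path.  The component bandwidths below are hypothetical.

def delivered_gb_s(drive_aggregate, controllers, network):
    """Effective throughput is the minimum of drives, controllers, and network."""
    return min(drive_aggregate, controllers, network)

# 24 drives x 0.5 GB/s, dual controllers at 6 GB/s each, one 10GbE link (~1.25 GB/s)
print(delivered_gb_s(24 * 0.5, 2 * 6.0, 1.25))  # -> 1.25 GB/s: the network is the limit
```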

IOPS measures the number of operations a drive or a system can perform.  We have seen huge gains with the adoption of SSD/Flash.  Where a 15K RPM drive can deliver around 180 IOPS, a flash drive can deliver thousands of IOPS.  About 10-15 years ago storage administrators were forced to over-provision capacity in order to get enough drives in a RAID set to deliver the required number of IOPS.  As an example: if your application needed 1 TB of data and 1,500 IOPS, using 15K drives at 300GB of capacity each, an administrator would need only 4 drives to reach the required capacity but 9 drives to reach the required IOPS (the arithmetic is sketched below).  Today, capacity and IOPS can be balanced.
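Here is that drive-count arithmetic as a quick sketch (ignoring RAID parity overhead and hot spares):

```python
# The drive-count arithmetic from the example above (ignoring RAID overhead
# and hot spares).
import math

def drives_needed(capacity_gb, iops_required, drive_capacity_gb, drive_iops):
    by_capacity = math.ceil(capacity_gb / drive_capacity_gb)
    by_iops = math.ceil(iops_required / drive_iops)
    return by_capacity, by_iops, max(by_capacity, by_iops)

# 1 TB and 1,500 IOPS on 300GB 15K drives (~180 IOPS each)
cap, iops, total = drives_needed(1000, 1500, 300, 180)
print(f"{cap} drives for capacity, {iops} for IOPS -> provision {total}")
# -> 4 for capacity, 9 for IOPS: capacity is over-provisioned just to buy IOPS
```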

Not all applications require microsecond latency, thousands of IOPS, and gigabytes of throughput, but when a higher-performing system is properly designed, it can operate at a much higher level of efficiency, both operational and financial.  Next time we talk about performance, let’s make sure we are clear about which performance we need.


I haven’t written a blog post in some time; I have been working on many interesting projects.  More on that later.

This week I saw that HPE has signed a definitive agreement to acquire Nimble Storage for $1B.  Nimble builds a solid system, and the company has done a great job bringing it to market.  Nimble did all the right things from a sales and marketing perspective.  I can understand why it is an attractive target for HPE or any other potential acquirer.  That said, here are some reasons why I question the value of this merger:

  • HPE has a hybrid SSD/HDD system (3PAR); they have spent a lot of time porting it to a smaller entry point.  There is a significant overlap between 3PAR and Nimble from a target market perspective.  BTW, 3PAR seems to be one area in storage at HPE that is growing, so why cannibalize or compete with yourself?
  • There are areas of HPE’s portfolio that are really lacking in offerings, including NAS/file system storage, object storage, tier 3 bulk storage, and others.  There is growth in storage for unstructured data and virtualization.  If you want growth, go where there is growth in the market.
  • Part of Nimble’s success is its go-to-market strategy and execution.  HPE takes a different approach to the channel than Nimble.  One consideration is whether the challenge in gaining more growth in storage is associated more with how HPE sells and markets than with what products they carry, at least in some areas.  If you want to compete with younger, more agile firms, then try being more agile.  Agility in the sales process is something other large companies lack.

Moving forward, I hope HPE doesn’t take too long to integrate Nimble and adopt some of Nimble’s strategies.

Our industry operates in two parallel universes.  In universe A, everything is moving to software defined.  If all you did was listen to presentations by vendors or read trade journals, you might think that everyone is buying software defined everything.  How does the industry explain software defined?  Software defined is where commodity hardware’s personality/function is defined by the software it is running.  In other words, if you take a commodity server and add SDS (software defined storage), you get a storage array; if you add SDN (software defined networking), you get a switch or any other networking device.  This, of course, sounds great.  As an end user, I buy hardware from my favorite vendor, load the software, and WHAM, I have a solution.  The reality is what we get in universe B.

In universe B, the end users want a solution that guarantees performance, reliability, and capacity.  The users want real-time support of the solution; when something goes wrong, and there is always something that goes wrong, the end user doesn’t want to start calling the different vendors to figure out who is at fault.  This is why solution architects and systems administrators seek offerings that have been tested, have certified interoperability with other components, and where the “how to” is straightforward.  So how do we reconcile what the industry touts and the customer wants to hear with what the customer ends up buying and using?

The answer is that we need to define the concept a bit differently.  If the idea of software defined is that the software runs on commodity hardware, but the customer doesn’t want to do all the integration and support, then what we really have is “Software Designed”.  Software designed means that the software is developed to run on commodity hardware, but there is a defined hardware configuration that has been tested and certified for reliability, performance, and interoperability, and the software and hardware are delivered together with one point of contact for support.  This concept is not about how the software is developed but about how a solution is brought to market.

Let’s remember: customers don’t want cloud, they want a more efficient, just-in-time paradigm for consuming and paying for IT resources.  Customers don’t want software defined, they want a system where the hardware is commodity, and therefore lower cost to procure and maintain, and where the software is agile and responsive to the evolving needs of the users.

I often write about what works and doesn’t work in the world of channels.  There are new challenges we encounter daily, but most can be overcome with open communications.  Of course there are always risks associated with partnerships in a reseller model, and every entity wants to minimize its exposure.  In the end, though, the business defines how the partnership will work, and legal then works to put it into terms that are acceptable to all parties.

As an example, if the reseller will not be providing any post-sale support, there is no need for wording that dictates how support will be managed.

This is how most discussions work; sometimes, though, we come across an organization where legal dictates business terms.  The challenge with these situations is that legal operates without business context and, therefore, often proposes terms that are hard for a reseller to accept because they don’t represent reality.  I think when forging a partnership, it is best to first discuss and agree on the business terms, and then put these into legal terms.  In all situations, remember: a partnership has to be a win-win for all parties; otherwise, it won’t work.

Here is a question frequently asked by end users when deciding on a purchase from a “younger” company: “Is this vendor safe?  Will they be around in five years?”

The answer is never simple; it depends on the end user’s risk tolerance profile.  Here are some musings.

  • Make sure you understand what the objection is.  Some are concerned with support being there 3-5 years from now, others are concerned about the company being a stand-alone concern 5 years from now, and yet others wonder whether the product is mature enough to perform adequately in their environment.


  • What are you selling?  Is it software, services, or hardware?  From a functionality perspective, it may be less risky to purchase software and hardware than services.  There are numerous organizations running old versions of software without support.  I am not saying I recommend it, but depending on the criticality of the software or hardware offering, there may be support options.  With services such as anything-as-a-service, this may be more problematic.  If a company shuts its doors, the subscribers may end up without any services very quickly or be forced to move their business elsewhere on very short notice.  This has happened a few times in the industry already.


  • Some organizations may be willing to take greater risk if the technology is uniquely positioned to offer them market differentiation.  If there are alternatives that are good enough, the risk of a new vendor may not be worth it.


  • At what point is a vendor considered viable?  Of course, it depends on the timeline in question.  If you are looking out 3-5 years, the considerations are different than if you are considering a longer timeline.  The shorter the timeline and the smaller the financial investment, the lower the perceived risk.


Considerations for determining viability:

  • How many paying customers the vendor has.  The key stepping stones are 20, 100, 250, 500, and 1,000.  The other key metric that goes along with the number of customers is how quickly those customers were acquired.  A newly GA vendor with 20 customers acquired in three months is on a different trajectory than one that has acquired 20 customers in 12 months.  It is also important to understand the typical sales cycle timeline and the expected value of a closed deal.
  • How the company is funded.  Most startups are VC funded.  Knowing what round of funding they are in and how much money they have taken is relevant to how well they may be doing.  A well run company will take money with a clear purpose: a round for development, a round for marketing and sales, a round for growth.  Taking too much money quickly may, though not always, signal poor management or lack of focus.
  • If the company is well managed and has money in the bank, the next thing to consider is whether the product is doing well.  Does it have good references?  Does it pass the POC?  What is the support like?  Sometimes you can glean some of this from interactions with the sales team.  A competent sales team with resources in the back office to assist is a good sign.
  • What position the product takes in the market.  Is it a true disrupter or is it a me-too?  How much competition is in the market?  How early or late is the vendor to market?  If a market is saturated with competition, the product has to stand out and the execution of the firm must be superb.  The other key is whether the competition is other startups or established vendors, the big four in the industry with the majority of the market share.  If the competition is primarily other startups, there will most likely be a consolidation phase within a few years, meaning the established vendors will look at the startups that have been successful and may purchase them to add to their overall portfolios.  Remember that innovation often starts with startups.
  • Many established vendors buy other companies when they are still relatively small.  What does this mean for those vendors that have reached the 500 or even 1,000 client mark?  It implies that they may or may not be acquired, but the key is that it would take some time for them to truly fail.  Meaning, a well run startup with 500-plus customers will not disappear from the field in a matter of days (barring corruption).  It may take years for the company to spend what it has raised and to alienate its customers.  More commonly, companies like that either continue to grow organically, eventually going public and expanding into other areas of the market, or they are acquired.  With a substantial customer base, there is value in a company even if the product is not super exciting.  The residual value of a company with a customer base is enough to attract industry buyers as well as equity management firms and other private investors.

Answering the question “Will this vendor be around?” requires more than just providing financials.  It requires understanding the basis for the objection and then presenting the case in light of industry trends and market positions.  In the end, not all objections can be addressed to the satisfaction of the salesperson or the vendor they represent.  Sometimes, the perceived risk is too high for an individual or an organization to move forward.  It is always best to ascertain the buyer’s risk profile as early as possible in the sales process.

One of my biggest challenges every day is to cut through the industry noise and get to the bottom of what vendors are selling and what customers are buying.  It is a challenge because vendors message to what they think customers want to buy (not necessarily what they have to sell), and customers want to buy what the industry tells them they need.  The reality is a lot simpler; what customers want to buy hasn’t changed in decades.

Enterprises want to leverage their IT resources to drive more business, more revenue, more profitability.  This means that IT must be more efficient, effective, differentiating, agile, and responsive.  These are the high level wants and needs.  Each organization translates these requirements into technical specifications based on criteria such as performance, scalability, cost, simplicity, risk, etc.  How these are prioritized depends mostly on the person or organization making the decision.

The noise complicates the conversation.

Enterprises need to become operationally more efficient and cut costs.  This doesn’t mean they want to buy cheap stuff.  It is about price only when all other variables are equal.  The industry has instilled in users the idea that cloud is cheaper and more flexible; you pay only for what you use.  There are many ways to define what cloud is, but if we take cloud infrastructure offerings, once you really look, they may not be cheaper or more flexible.  Here are two examples to demonstrate:

  • Company XYZ needs to store 1PB of data for 7 years.  It is not clear whether the data will be accessed regularly or not, but there is a need for it to be secure.  Option 1 is to use cloud storage (S3, Glacier, Google Nearline).  A single-location public cloud storage tier averages $0.01 per GB per month.  Without accounting for egress and transaction costs, that equates to roughly $123 per TB per year.  Over 4 years, the cost of keeping a PB in the lowest tier of cloud, in a single location, would be $503,808.  Keep in mind that depending on where the cloud data center is located, you might need to concern yourself with Mother Nature.  If you store two copies for geographic distribution, your cost doubles to over a million dollars in 4 years.  Conversely, you may procure an object storage system to host 1 PB of data for $400 per TB over 4 years.  The total cost of this solution would be $409,600.  Some object storage vendors support geo-dispersal, which allows you to stretch the system across 3 sites with the ability to sustain a site failure without data loss; the cost of such a deployment would not be different than the already stated $410K.  The facility costs may be offset by the lack of egress and transaction costs.


  • Another example: company ABC is running a marketing campaign and requires compute and storage resources for the duration of the program, which is 9 months.  Provisioning a decent server in the cloud with a few TB of data and snapshots may cost $210/month, which equates to $1,890 for the duration of the project.  You might need to add a backup client for the data, but that could be another few hundred dollars.  If you had to purchase a server, it could cost you $4,500.  (Both calculations are sketched below.)
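Here is a sketch of both comparisons; the prices are the round numbers used in the examples above, not quotes from any particular provider.

```python
# A sketch of both cost comparisons above.  Prices are the round numbers used
# in the examples, not quotes from any particular provider.

GB_PER_TB, TB_PER_PB, MONTHS_PER_YEAR = 1024, 1024, 12

def cloud_cold_tier_cost(tb, years, usd_per_gb_month=0.01, copies=1):
    """Lowest-tier cloud storage, excluding egress and transaction costs."""
    return tb * GB_PER_TB * usd_per_gb_month * MONTHS_PER_YEAR * years * copies

def object_system_cost(tb, usd_per_tb_4yr=400):
    """Object storage system priced per TB over a 4-year term."""
    return tb * usd_per_tb_4yr

print(cloud_cold_tier_cost(TB_PER_PB, years=4))            # ~$503K, single copy
print(cloud_cold_tier_cost(TB_PER_PB, years=4, copies=2))  # doubles with a second region
print(object_system_cost(TB_PER_PB))                       # $409,600 for the 1 PB system

# Company ABC's 9-month campaign: cloud rental vs. buying a server outright
print(210 * 9)  # $1,890 in the cloud vs. roughly $4,500 to purchase
```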

No one wakes up and says, “I want to go cloud.”  What they really want is a faster and simpler way to deploy IT resources and applications, to pay only for the resources they consume rather than paying up front, and to simplify the management of their infrastructure.  Some will be willing to pay more to achieve these results; others may not.

There is a way for some to achieve these goals on premises or in a hybrid configuration.  First, identify applications that are not core to your business and can be better served via a service provider.  This could be CRM, email, or SCM.  Then evaluate your environment for places where resources can be shared among departments.  The more an organization centralizes IT services, the more efficiency can be achieved and the greater the opportunity for flexibility in how resources are assigned and consumed.  The private cloud concept is exactly this: centralized IT services where end users can select the resources they need, with an orchestration and management layer that simplifies provisioning, allocation of resources, and tracking of consumption.

Though there are many variables that go into any buying decision, the conversation has to start with what the business needs.  Messaging the market that cloud is the only way, that cloud is cheaper and faster, that all-SSD or Flash is the answer to all your prayers, or that you need 32 Gbps FC when you can barely fill an 8 Gbps pipe doesn’t help users make good decisions.  Instead of the hype and the noise, let’s build, package, and deliver products and services that will move the enterprise forward.  I seem to have an idealistic view of the world, but a girl can dream.