I have recently been working with two different universities on projects involving hardware.  In both cases, the acquisition was not new but an expansion of an existing system.  And yet, it still took us over five months to receive a PO.  I wouldn't really care that it takes five months to process an order, but what made this challenging is that the actual buyer was running out of space and really needed this capacity to continue working.  In this situation, the procurement department was standing in the way of the order.  There were all sorts of contract requirements; we spent weeks negotiating the terms of a remote four-hour install service.  Was the procurement department just following protocol, or is there an unwillingness to see the bigger picture?  Are legacy policies impeding progress in these and other organizations, or are they really protecting them from harm?

The intent of the protocols set out in procurement processes is to protect the organization from undue harm and to procure the best solution for the money.  I have been working with contracts throughout my career, and business context is a critical component of any negotiation.  It is here that communication and consensus break down.  It is as though we were all walking down the same road at one time; then the industry turned left, but not all procurement protocols noticed.  So here are some thoughts to put business context back into the conversation:

  • Many procurement departments treat complex IT purchases the same way they treat buying commodity items like paper or pencils.  Cost is typically only one factor in an IT buying decision; the more relevant criteria may be performance, resiliency, density, and support.  Allowing the users to select the technology first may allow the organization to make better decisions overall that, in the long run, will save money and headaches.  That said, we understand that there are required processes in place to ensure fair play.  The RFP process most commonly used lacks the necessary detail and flexibility.  If you ask a yes-or-no question, you don't get to learn how the product actually works or whether it makes sense for your application and business.  An alternative approach that has worked in many situations is to issue an RFI to many vendors first.  This yields responses with enough detail to select the two or three that best fit the need.  There may be pricing presented in the RFI, but it is not typically contractually binding.  The user can then spend time working with the various vendors to architect a solution that fits their specific need.  This also allows them to perform any POC work and speak to references.  After an extensive evaluation, the user will be ready to proceed with a solution that has a detailed configuration and will require pricing.  This is where procurement can step in and assist.  Once there is a solution configuration, pricing may be negotiated either directly with the supplier or through an RFQ process where the submitted quotes are contractually binding.

  • There is another phenomenon in the industry that procurement departments are either not aware of or don't understand: channel partner commitment.  It means that if a channel partner (reseller, VAR) is working with the user on a solution, the vendor/manufacturer treats this partner as a contracted sales representative of their organization.  The channel partner therefore has the same pricing/discounts as if the user were working directly with the vendor/manufacturer.  In the background, the vendor pays a sales commission to its channel partner.  When procurement departments spend a lot of time sending out requests for competitive quotes in these situations, they are really wasting time and resources.  They could probably get a better return on their effort by negotiating further with the partner presenting the solution.

  • Another mistake some procurement departments make, as do users, is thinking that if they go directly to the vendor they will get better pricing.  The industry has changed over the past two decades from having a few large solution providers to a diverse ecosystem of vendors, manufacturers, and software developers.  It is cost prohibitive in most cases to grow a business by building out a direct sales force.  To stay competitive in the market, vendors have contracted with regional and national reseller and VAR organizations to be the extension of their sales force: more feet on the street without carrying full-time employees.  Resellers and VARs provide a lower cost of sale to the vendor community.  What all this means is that, in many cases, there is no way to buy direct from the manufacturer/vendor.  It is not to frustrate users or procurement departments; it is just what makes business sense for the vendors.

  • Finally, I know everyone wants to protect themselves as much as possible, but please be reasonable.  Most companies and people are not out to screw you or harm you in any way.  We all know that sometimes things happen, and we all agree that protections need to be in place for when these situations occur.  That said, let's not go overboard; be reasonable and consider what you are buying first (business context).  I have seen contracts from procurement that cover every possible situation that may occur across all products and services they may purchase.  This means that I might need to agree to terms and be held liable for situations that don't apply.  A good example: the contract says that if the product I am selling malfunctions and causes the death of a patient, my organization is held liable.  Here is the problem with this: I am just selling a storage box.  I am not developing or managing the application.  I am not making the decision whether the system is adequate from a resiliency perspective for your needs.  I don't have any control over where this system will be deployed.  Holding me liable in this situation is not logical.  The key message here is that we are all in this together.  Our goal is to do right by our customers, to go the extra mile when it matters most, and in return, we want clients who value what we do for them.  We want a win-win.

My hope is that the next time I work with a procurement department on a purchase of IT solutions, there is a clear understanding that I am not selling pencils and that it does matter what the end user ends up with.  Let's work collaboratively to serve those who are customers to us both.  Maybe I am a bit idealistic here, I am sure I am, but a girl can dream.

I spend a lot of time talking to end users about their needs, what is working and what is not.  What often surprises me is the view they have of the cloud: cloud is cheaper, it is more agile, it is deployed instantly…  There is no argument that, conceptually, using a public cloud is easier than provisioning servers on premises, though outsourcing an application to a SaaS provider is even easier.  And yet, there are gotchas in each scenario.  Here are a few things I learned recently:

  • SaaS providers today provide application availability SLAs, not data integrity or data availability SLAs.  This means that data loss or accidental anything has no effect on the service provider's compliance with their promises.  In other words, if the data is that important to you, you need to back it up.  Seems like a simple concept, except that you don't have a dedicated server or an application instance; this is a multi-tenant environment and there is nothing to put a backup agent on.
  • Putting data in the cloud seems like the safest place for it to be; the cloud provider says so.  You pay $x per GB per month and the provider stores your data.  Data placed in the cloud is stored in a RAID, mirrored, or erasure-coded configuration within the chosen data center location.  If you used to replicate your data between sites for business continuity or disaster recovery, well, you don't automatically get that with cloud.  The providers only store in a single location, and if you want your data in a separate location, you have to pay a separate fee.  This means that if you are paying $0.01/GB/month, which is about $120/TB/year, that price applies to only one data center.  If you want a second location, that will be an additional $120/TB/year (see the quick cost sketch after this list).
  • We love the idea that we can provision whatever resources we need, both compute and storage.  Sounds really good: I can provision what I want and need, and it is available to me immediately, unlike when I have to ask my IT folks to give me a virtual machine.  That is not exactly how it works.  Most cloud providers offer a variety of templates that can be selected.  These are machines that have already been designed with a fixed amount of CPU, memory, cache, and storage.  If you need more of one thing and less of another, you still have to use what you are given.  At times, this means that your machines may be over-provisioned in some areas or under-provisioned in others.  Though there is always a cost attached to each resource, it might be insignificant compared to the value the end user sees in the service.
  • We often look at other companies using cloud services and say to ourselves, well, if they are using it for all their IT needs, why shouldn't I?  One common example is Netflix.  Here is a question to ask oneself: what is my business model, and what are the dependencies and drivers of my business?  This is a really important question, because whether you can benefit economically and operationally from the cloud will depend on your business.  As an example: if you are Netflix and you are providing a streaming service, you need to support as many streams as possible, for a single asset and for many different assets.  If each stream equates to a user and each user represents a revenue amount, then paying on the fly for more resources is covered by the value those resources create.  On the other hand, a less dynamic business like pharma or oil and gas conducts numerous studies that may become revenue producing over time.  Their investment must go as far as possible in order to contain costs.  The business driver for Netflix is agility; the business driver for oil and gas is cost containment.  Speaking of costs, did you know that IaaS is not less expensive than infrastructure on premises?
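
To make the two-location cost point above concrete, here is a minimal back-of-the-envelope sketch.  The $0.01/GB/month rate is the figure quoted in the list; the 50 TB capacity and the assumption that a second region is billed at the full rate again are hypothetical, and real provider price tiers will differ.

```python
# Back-of-the-envelope cloud storage cost sketch (hypothetical numbers).
# The $0.01/GB/month rate is the example quoted above; everything else is assumed.

RATE_PER_GB_MONTH = 0.01      # $ per GB per month, single region
GB_PER_TB = 1000              # decimal TB, as providers typically bill
MONTHS_PER_YEAR = 12

def yearly_cost(capacity_tb: float, regions: int = 1) -> float:
    """Yearly cost of storing capacity_tb, kept in `regions` locations.

    Assumes each additional region is billed at the full rate again,
    which is the scenario described in the post.
    """
    per_tb_year = RATE_PER_GB_MONTH * GB_PER_TB * MONTHS_PER_YEAR  # ~$120/TB/year
    return capacity_tb * per_tb_year * regions

if __name__ == "__main__":
    capacity = 50  # TB, hypothetical working set
    print(f"Single region : ${yearly_cost(capacity, regions=1):,.0f}/year")
    print(f"Two regions   : ${yearly_cost(capacity, regions=2):,.0f}/year")
    # Single region : $6,000/year
    # Two regions   : $12,000/year
```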

It may not seem like I am a fan of cloud, but I am.  I remember back in 2000 when we were trying to figure out how to better utilize resources by sharing them across departments and even organizations.  We didn't have the right technology then, but we are on our way to having it now.  What cloud really offers is the promise of even greater efficiency than virtualization alone, and with greater efficiency come lower cost and more productivity per dollar spent.  If we change the conversation from cloud first to what drives my business, then we can come up with an architecture that consists of on-premises and cloud environments, where the decision to use one or the other is based on what serves my needs in the most cost-effective, relevant way.

As part of my effort to get a handle on who our partners are and where they fall in terms of industry segments, I sat down and made a list of all the manufacturers I could think of off the top of my head.  I got to 97 when I had to stop because I had a meeting to go to, and I hadn't even covered the backup solution vendors, big data, or HPC.

What does it mean that I can list that many vendors without even using a web search engine?  It means that our industry is very fractured, both in terms of real or perceived need and in the number of solutions that are available.  Is this a good thing?  From an innovation perspective, it is exciting to see so many advances in technology that can make us more productive and better connected.  Of course, the reality is that the market can't sustain all these solution providers, and over time some of them will close their doors or have their IP acquired for some other use case.  Based on the premise that some will fail, what can startups do to improve their chances of being the winner in the end game?

I can't stress this enough: there are three things that are critical to success:

  1. You need to know what you are building and what REAL problem it is going to solve.  The problem statement has to be put in the context of what is most important to the client, which typically includes cost reduction, increase in performance/productivity, simplicity of management, and scalability.  Customers don't buy specific features, only the outcome of those features as they relate to the problem statement above.
  2. Nothing is perfect and never will be, so don't wait; go to market earlier rather than later.  If the core product is ready, start putting it in the field.  Not all customers need all features; remember the adoption curve?  There are early adopters, and you want to engage them early.  Waiting to have the full set of features may take away the advantage of timing.  When launching a new product or technology, there is always some time required to educate the market, so starting that conversation sooner is better.  Don't be afraid; it might be more useful for the long term than keeping silent.
  3. People make up an organization, and with the right people, an organization may thrive; with not-so-right people….well…..  A strong management team, a good sales team, the right incentive program for the channel, and a well-worded marketing message can make a huge difference in the success or failure of a young company.  Decisions need to be made in a timely manner, and no, there is no such thing as perfect information to make perfect decisions.  You have to take in as much as you can and go for it.

Do you remember the story about VHS versus Beta?  Well, that is an excellent lesson: best doesn't always win.

Every year or so the industry comes up with another term that dominates all marketing messaging regardless of what you are building or selling.  Marketing terminology creates a lot of confusion and makes it more difficult for us in the field to explain technology to our clients.  The latest terms to become fashionable are hyper-converged and hyper-performance storage.  So what are they really?

Let's start with hyper-converged infrastructure.  This is where all infrastructure services run off a single appliance that consists of CPU, memory, networking (NIC/HBA), and storage.  It requires a hypervisor to carve the available resources into individual machines.  These architectures have become possible as a result of SSD/flash technology entering the market.  There are advantages to this model: it is simple to procure, typically a single line item; it is a single-vendor solution that doesn't require integration of components; and it promises simplicity and reduction of costs.  There are some potential drawbacks as well: many don't have a way to isolate CPU for compute workloads from CPU for storage services, which may cause unpredictable performance; in server virtualization environments it is challenging to predict how much CPU, memory, and storage is required to run a set of virtual machines, and it definitely varies from organization to organization and application to application (this is more predictable in VDI implementations); it is challenging to scale the resources you actually need without over-procuring in other areas; and a single vendor is responsible for the full stack (good enough at everything but not great at anything).  There are some newer versions of hyper-converged that allow compute resources to be increased without affecting storage and vice versa.  In those cases, though, the software is often sold separately from the hardware.
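
To illustrate the scaling drawback, here is a small hypothetical sketch: because an appliance node bundles a fixed ratio of CPU and storage, a workload that is heavy in one dimension forces you to buy surplus in the other.  The node specs and workload numbers below are made up for illustration; real appliances vary.

```python
# Sketch of the hyper-converged scaling trade-off described above.
# Node specs and workload numbers are hypothetical.

import math

NODE_CORES = 24        # usable CPU cores per appliance node (assumed)
NODE_STORAGE_TB = 20   # usable storage per appliance node (assumed)

def nodes_required(cores_needed: int, storage_needed_tb: float) -> dict:
    """Because compute and storage scale together, you buy the larger of the two."""
    by_cpu = math.ceil(cores_needed / NODE_CORES)
    by_storage = math.ceil(storage_needed_tb / NODE_STORAGE_TB)
    nodes = max(by_cpu, by_storage)
    return {
        "nodes": nodes,
        "surplus_cores": nodes * NODE_CORES - cores_needed,
        "surplus_storage_tb": nodes * NODE_STORAGE_TB - storage_needed_tb,
    }

# A storage-heavy workload: modest compute, lots of capacity.
print(nodes_required(cores_needed=48, storage_needed_tb=200))
# -> 10 nodes, 192 surplus cores, 0 surplus TB: the CPU side is over-procured.
```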

Hyper-performance storage arrays have only recently been making their way to the market.  Most vendors in this space are not GA yet, but they have been developing a different way to do storage.  Most of these vendors have based their technology on the NVMe standard.  NVMe delivers SSDs with high I/O performance and very low latency.  In its current state, there is no way to share NVMe drives across multiple hosts.  What these vendors have done, or are trying to do, is enable users to share NVMe storage over a fabric without losing any, or much, of its performance gains.  The idea is that an all-flash/SSD array that connects to clients over iSCSI, FC, or NFS can only deliver a fraction of the performance inherent in the drives themselves.  Additionally, there is latency to contend with as a result of using standard storage protocols such as SCSI.  This means that a typical all-SSD/flash array caps out, depending on the number of drives, at 250K-500K IOPS.  The hyper-performance arrays claim that they can deliver microseconds of latency and millions of IOPS in a relatively small footprint.  In order to assure low latency and high throughput, these vendors require the use of RDMA over Ethernet or InfiniBand.  Are there enough applications that demand the sort of performance these systems can deliver?  Will this be another turning point in the industry, as the introduction of iSCSI was?  All to be seen.
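
As a rough, hypothetical illustration of those numbers, the sketch below compares the raw capability of the media with the ceiling of a classic SCSI-protocol array and with an NVMe-over-fabric design.  The per-drive IOPS, drive count, and fabric efficiency factor are assumptions, not benchmark results.

```python
# Rough illustration of why protocol/controller overhead caps delivered IOPS.
# All figures are illustrative assumptions, not measurements.

DRIVE_IOPS = 500_000          # a single NVMe SSD, small random reads (assumed)
DRIVES_IN_ARRAY = 24          # assumed drive count

raw_iops = DRIVE_IOPS * DRIVES_IN_ARRAY      # what the media could do in aggregate
scsi_array_cap = 500_000                     # typical ceiling range cited above
nvme_fabric_efficiency = 0.8                 # assumed loss when sharing over NVMe-oF/RDMA

print(f"Raw media capability : {raw_iops:,} IOPS")
print(f"Classic SCSI array   : {min(raw_iops, scsi_array_cap):,} IOPS delivered")
print(f"NVMe over fabric     : {int(raw_iops * nvme_fabric_efficiency):,} IOPS delivered")
```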

So what will be next in hyper(everything)?

Today the news is that Verizon will be exiting the public cloud services market.  It has given its existing clients a month to move all their workloads and data out of the cloud, because Verizon doesn't guarantee availability after April 12th.  The reason cited for the exit is the competitive landscape: it is too hard to compete with AWS, Azure, and Google on price.

This leads me to ask a few questions:

  • Are public cloud service providers differentiated solely on cost?  Are there other variables that could make price less of a decision criterion than it is now?
  • If Verizon couldn't make it in this market, does it mean that the barrier to entry is so high that it is unlikely anyone will enter the rink other than those who are already there?  Is this really a competitive space, or has the ship sailed?
  • If we assume that the infrastructure providers in the public cloud space have been defined, where does the opportunity lie for everyone else?

I don't believe there are simple answers to any of these questions.  I do believe that the ship hasn't sailed for everyone, though the only way to enter the race is to differentiate on something other than cost.

Who has the opportunity to compete with Google, AWS, and Azure?  I think the ones with the most potential to make it are actually the component manufacturers.  A lot of the functionality needed to manage a public cloud is already available in projects such as OpenStack and CloudStack; there is no need to reinvent the wheel here.  What the component manufacturers have is the lowest cost of hardware and the supply chain.  Why is this important?  Well, there are really only two hard drive manufacturers left, and there are only two real processor suppliers.

What does it mean for the rest of the industry?  There are many areas where solutions either don't yet exist, even though the problems are acute, or are not very mature or comprehensive.  Developers need to keep in mind that cloud doesn't equate to public cloud and that many organizations are opting to build in-house private clouds; solutions that leverage both public and private will therefore have the broadest audience and the largest TAM.

There have recently been a number of technologies that make using public and private clouds easier.  These are tools that enable seamless machine mobility across hypervisors and clouds, which simplifies the use of cloud for disaster recovery, bursting, and other temporary or continuous use cases.  To mention one specifically: Ravello, recently bought by Oracle, is able to move machines between clouds.  Buying Ravello gives Oracle a way to help customers migrate workloads out of other clouds and move workloads into the Oracle cloud from on-prem infrastructure.  We might see similar moves by Azure, AWS, and Google.

How do you describe what the channel does?  Let's first understand the business relationship.

A manufacturer has a product or a service it would like to sell on the market.  To reach the broadest market, there needs to be a large sales force.  One option is to hire lots and lots of salespeople, pay them either a draw or a salary and benefits, and let them make a commission when they sell.  Another option is to contract with sales organizations (resellers, VARs, etc.) to sell your product and pay them a commission when they close a deal.  In this situation, the investment from the manufacturer is smaller, yet the number of feet on the street selling is greater.  Most organizations have a mix of these two strategies: there are some sales reps to help drive awareness and sales, but most of the business goes through the channel.
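
A hypothetical way to see the difference between the two options: the direct team is a fixed cost whether or not revenue closes, while the channel commission is variable and paid only on closed business.  All the numbers below are made up for illustration.

```python
# Hypothetical comparison of a direct sales force versus a channel model.
# Every figure below is an assumption for illustration only.

def direct_model(reps: int, loaded_cost: float, quota: float) -> tuple:
    """Annual fixed cost of a direct team and the revenue it can reach."""
    return reps * loaded_cost, reps * quota

def channel_model(closed_revenue: float, commission_rate: float) -> float:
    """Channel cost is variable: commission is paid only on deals that close."""
    return closed_revenue * commission_rate

# Direct: 10 reps at $150k fully loaded, each carrying a $1.5M quota.
fixed_cost, reachable = direct_model(reps=10, loaded_cost=150_000, quota=1_500_000)
# Channel: the same $15M of revenue sold by many partner reps at a 10% commission.
commission = channel_model(closed_revenue=15_000_000, commission_rate=0.10)

print(f"Direct : ${fixed_cost:,.0f} fixed, win or lose, to reach ${reachable:,.0f}")
print(f"Channel: ${commission:,.0f} paid out, and only after the revenue closes")
```

Even when the totals come out similar, the channel spend carries no fixed risk and buys far more coverage, which is the "more feet on the street" point above.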

So what are the expectations of the channel?  It is simple: to make a living.  This means that the channel enterprise is sustained on the "commission" it receives from a closed deal.  In order to close a deal, the channel organization must invest heavily in acquiring clients and establishing credibility with end users so that they are open to hearing about the products the channel has to sell that address their needs.  The channel organization also needs to invest in systems and people to support its marketing and sales efforts and to ensure the end user has a positive experience working with them.  Most channel organizations establish relationships for the long term; they are interested in becoming the supplier for all your needs, not just selling you one product.  The relationship is worth more than any one deal.

So what does this mean to the manufacturer?  It means that the channel organization is both your client and your employee.  If you don't pay your employee, with no guarantee of future compensation, the employee won't stay.  If you mistreat the channel over one deal, you are mistreating a client who has more to offer than just that deal.  Just as the channel is in it for the long term with the end user, so should manufacturers be in the relationship with their channel for the long term.  Always ask yourself: what is important to the client, and what will make them buy?

Every year towards the end of December, numerous prediction blogs, papers, and articles are published.  Flash and SSD vendors predict that this medium will overtake HDD demand and that everyone will want and need flash.  Tape vendors tout the return of market demand for tape, and cloud service providers and enablers show numbers that say everyone will be using cloud.  It is incredibly convenient when the product you are selling is in line with industry trends.

Of course, for those of us who are in the field talking to customers, it is not so simple or black and white.  There are definite moves toward adopting cloud models, primarily in the form of software as a service.  It is actually more challenging to find an email archiving solution that will be housed on premises rather than delivered as a service.  There is also great interest in leveraging public cloud services for individual file sharing, office applications, email in general, and other specific applications in each vertical.  All these models make economic and operational sense.  Let's remember that the reason cloud is attractive is not because we all want to be in the cloud, but because it offers some economic or operational incentive.

So what are my predictions?  I don't like to predict, but I can share the trends I am seeing that I believe will continue to gain momentum in the coming year:

  • Adoption of alternative hypervisors for virtualization in the data center.  We already see many organizations bringing in Hyper-V for VDI and other workloads, especially if the applications are Microsoft heavy.  The reasons are simple: the current version of Hyper-V is enterprise ready, and it makes economic sense.  There is also interest in KVM, but that is more common in test and dev environments and in research institutions where there is no need for the enterprise features and costs of VMware.
  • There is a level of distress around protecting data for the longer term.  Though backups continue to deliver operational protection of live data, there is now greater focus on addressing the need to store data long term, in some cases indefinitely, and still be able to perform backups.  Separating the archiving function from backups has become a necessity for many organizations with large immutable data sets.  Figuring out the best way to achieve the desired results has been challenging from both a business and an operations perspective.  There are tape-based and disk-based archive solutions available; determining which one is the best fit is complicated.
  • Back to the cloud for a moment.  Most storage cloud offerings are either based on a proprietary API or are accessed via S3, Swift, REST…  Many enterprise applications and users are not equipped to speak these and require a translation layer.  This is where cloud gateways come into play (see the short example after this list for what "speaking S3" looks like).  There have been a few on the market for the past four to five years, but none have offered a fully vetted solution that addresses the variety of needs that exist.  Some vendors are well positioned to take advantage of the need, but they will have to better align their packaging and pricing models to gain real estate.
  • In the storage world there is a war going on for the hearts and minds of IT professionals.  It is between the various array vendors and hyper-converged system vendors: traditional, scale-out, scale-up, all-flash, hybrid, etc.  There is a race to the bottom on pricing for these systems, and what differentiates one from the next is very specific and can't be applied to the whole of the market.  In other words, what one buyer cares about may be irrelevant to another.  This war is going to accelerate as use cases become more rigidly defined.  The addressable market is no longer all the data in the data center but only the applications that need the performance; everything else will go into purpose-built systems or the cloud (private or public).  Does this mean that the addressable market will shrink?  Probably not.  The adoption of new paradigms is slow; I think we will see the effects only in two to three years.
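
For reference, here is what "speaking S3" looks like to an application, using the widely used boto3 library.  The bucket and key names are placeholders; a cloud gateway effectively performs calls like these on behalf of clients that only know how to talk NFS or SMB.

```python
# A minimal S3 interaction with boto3 (bucket/key names are placeholders).
import boto3

s3 = boto3.client("s3")  # credentials and region come from the normal AWS config

# PUT an object -- roughly what a gateway does when a user drops a file
# onto an exported NFS/SMB share.
with open("report-2016.pdf", "rb") as f:
    s3.put_object(Bucket="example-archive-bucket",
                  Key="projects/report-2016.pdf",
                  Body=f)

# GET it back.
obj = s3.get_object(Bucket="example-archive-bucket", Key="projects/report-2016.pdf")
data = obj["Body"].read()
```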

So which technologies will see a lot of attention?  I think we will see more on object-addressable storage and gateways to object and cloud; adoption of NVMe SSD/flash in the host to accelerate application performance by reducing latency (where it matters); greater utilization of public cloud services for disaster recovery and business continuity; and more options for interacting with your data through file sync and share applications.  The core of storage will not change much.  We will continue to see price pressure in the field and confusion among end users trying to figure out why they should care and what difference it makes.