What does 'stable' mean?

08/04/2009
I'm working hard right now on a software special report - I've got some 8,000+ words of notes already - and it's shaping up to be a look at what it means for software companies to really be "stable" and "reliable." Of course, these are terms thrown around all the time by software manufacturers. Who's going to say they're NOT stable or reliable? But at face value they appear to be pretty empty terms. How do you measure stability? I hear people throw around the term "five-nines," but how many of them actually have documentation that shows five-nines reliability? (For reference, five-nines means 99.999 percent uptime, which works out to a little over five minutes of downtime in an entire year.) But I'm starting to uncover some metrics that companies use internally that make some sense, and that you should be asking about as a reseller:

What percentage of customers make use of customer support in a given month? This doesn't necessarily speak to reliability and stability, as many customer support calls are because of user error or a reseller who's still unfamiliar with the product, but if a high percentage of customers are calling customer support on a regular basis, that's not good. You can also ask what the ratio of customer support staff to customers is - a high ratio can be a good thing, showing they're committed to customer support, but if all those people are busy all the time, it might make you wonder.

Following on that is: What percentage of customer support calls get elevated to the engineering team? Basically, how many of those calls are actually due to a user-discovered error in the software? If this percentage is high, that definitely speaks to unreliability. There simply shouldn't be that many problems that engineering needs to fix or issue an emergency patch for. If lots of patches are being issued on an unscheduled basis, that's a problem, because you know the first call is likely to be to you, and you'll be rolling a truck way more often than you'd like.

How many square feet is your test lab, and what does your testing lab look like? Sure, beta tests in the field are great, but there should be rigorous testing done in the lab before the software even gets to beta. You should see a big area and lots of actual product running alongside the software: cameras, readers, etc. If all of the testing is done through simulation, that's a problem - real-world testing is vital. Really, if they're often working with video or panels that drive outdoor gates and the like, the company should have an outdoor test bed as well, so that environmental factors can be considered.

Further, you could ask: What's the ratio of money spent on development to the money spent on testing? Essentially, if all of the money is being spent on development, that's not a sign of a mature product, and you should wonder how reliable the end product is going to be. Some manufacturers told me that testing spend should be at least double development spend. I think that's only fair for more established companies; if it's a young company still building out its feature set, it seems unreasonable to expect it to spend double on testing. But maybe not, if you want the software to actually work.

And, finally, though this isn't a metric: Who are some customers who can speak to the stability of your software? Perhaps this is a no-brainer, and maybe it should go without saying that if a company doesn't have a customer who's raving about their software then you should run away, but it's something to make sure you don't forget. Go talk to the integrators who've installed the software and to the end users who are using it. If neither is happy, neither will you be.
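If you want to see how those first few numbers fit together, here's a rough sketch in Python. Every figure and function name below is mine, invented purely for illustration - none of it comes from any real manufacturer:

    # A quick sketch of the metrics discussed above. All numbers are
    # invented for illustration; no real manufacturer data is used.

    def support_contact_rate(customers_calling, total_customers):
        """Percentage of the customer base that called support this month."""
        return 100.0 * customers_calling / total_customers

    def escalation_rate(calls_escalated, total_calls):
        """Percentage of support calls elevated to engineering as real defects."""
        return 100.0 * calls_escalated / total_calls

    def test_to_dev_ratio(testing_spend, development_spend):
        """Testing spend as a multiple of development spend."""
        return testing_spend / development_spend

    # Hypothetical vendor figures
    print(support_contact_rate(150, 2000))        # 7.5 (% of customers per month)
    print(escalation_rate(12, 300))               # 4.0 (% of calls)
    print(test_to_dev_ratio(1_500_000, 750_000))  # 2.0 (meets the 2x rule of thumb)

The arithmetic is trivial, of course; the hard part is getting a vendor to hand over the raw numbers at all.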
Anyway, that's a start - I'll have a more robust report on this in about a week. Feel free to make notes in the comments about other questions that should be asked of software manufacturers as you're considering partnering with them or becoming a reseller.

Comments

A couple of thoughts:

"What percentage of customers make use of customer support in a given month?"

I am not sure this is comparable amongst manufacturers of different sizes and growth rates. An IP camera company growing 40% per year almost certainly has a higher percentage of customer support calls than an analog company growing at 5% per year. Even if the reliability were the same, the higher proportion of new customers almost always results in more calls, as the toy calculation below shows.
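To make that concrete, here is a back-of-the-envelope model (every rate here is invented) showing how growth alone can move the number:

    # Toy model (all rates invented): first-year customers call support
    # more often than established ones, so growth alone raises the blended
    # percentage even when product reliability is identical.

    NEW_RATE = 0.30          # 30% of first-year customers call in a given month
    ESTABLISHED_RATE = 0.10  # 10% of established customers call in a given month

    def blended_call_rate(annual_growth):
        new_share = annual_growth / (1 + annual_growth)  # fraction of base added this year
        return new_share * NEW_RATE + (1 - new_share) * ESTABLISHED_RATE

    print(f"IP company at 40% growth: {blended_call_rate(0.40):.1%}")    # ~15.7%
    print(f"Analog company at 5% growth: {blended_call_rate(0.05):.1%}") # ~11.0%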

"What percentage of customer support calls get elevated to the engineering team?"

I think this is a very interesting question, and an honest answer would be revealing. I doubt that most vendors will tell the truth, and even those that try will be inclined to under-report.

"How many square feet is your test lab/What’s your testing lab look like?"

I think beta tests in the field are far more important, mainly because even a very sophisticated, well-run QA team can never test and use systems in the myriad ways that real end users do. That's not to say I am against spending more on QA and test labs, but I think it only goes so far.

If I were really looking at reliability/stability, I would focus on finding out how many customers use the current version of the product and how long they have been doing so. You can ask the manufacturer for references, but they are likely to give you customers without any problems and/or customers with whom they have a strong relationship. If you can directly contact existing end users (without the manufacturer setting it up), that would be best.

I have always found this aspect to be tough, short of testing the product oneself.