By Dan Sarel
May 7, 2016 01:00 PM EDT
As security challenges have grown across the organization, so have security spending and the number of products an enterprise buys. But have we, as an industry, been able to show whether we are better or worse off? There is no clear yardstick to measure whether a given security product is making an enterprise more secure and delivering the ROI it promised. How do we get out of this FUD-driven (fear, uncertainty and doubt) spending and get to a point where we measure impact based on risk and assess value more objectively?
I have been involved in producing and selling security products for two decades now. As a security vendor you always feel that there are too many products and tools out there and that it is not easy to grab the buyer's attention. During the last four years or so, however, things have accelerated considerably: we are all facing explosive growth in the number of cyber security companies, product categories, and a multitude of conflicting protections against newer, more sophisticated, more vicious, more targeted attacks. I often ask myself: what would I do if I were the buyer? How would I decide where to spend my budget? Especially since it is really difficult to assess the benefit of one product, or to compare the benefit of one product against another. There is no clearly defined yardstick to measure whether a certain security product has made an enterprise more secure and is delivering the effectiveness it promised. Anton Chuvakin addresses this issue in his blog post RSA 2016: Musings and Contemplations. I really laughed when he quoted an older article:
"You're proposing to build a box with a light on top of it. The light is supposed to go off when you carry the box into a room that has a Unicorn in it. How do you show that it works?"
I believe that a lot of buyers find themselves buying such boxes never knowing whether they will really do the job. We as an industry often steer away from making any promises that the light will in fact go off.
If I were a buyer, here are some of the things I would consider. They are really examples of applying logic instead of swallowing FUD:
- Will the light go off when the unicorn shows up? Since you don't know, you have to devise a test that will give you some confidence. Having spent some time in the endpoint security business, I will use the example of malware protection. The unicorn here is a zero-day malware sample. Depending on how much time, staff and budget you have, you can plan a very elaborate test and a bake-off between several vendors. Even if you don't, you must do the simplest due diligence, which is testing the product with known malware. A product that purports to identify zero-days must be able to identify any old malware. You can run a test of your own with existing malware, or ask the vendor for third-party lab reports. While this is a controlled environment, any outcome that shows less than a 100% detection rate should be a strong red flag that you will not be protected from zero-days.
- Layered defense? When a vendor offers you a new product, and you remind them that you already have 10 appliances fulfilling 20 different security roles and that you already own a product in the same category as theirs, they might pull the "layered defense" card, or you might think that maybe they provide a layer you are missing. How many layers do you need, exactly? A lot has been said about the risk-based approach, and I'm afraid there is no better alternative. Enterprises need to assess their risks, understand what mitigations they already have, which risks they do not mitigate, and what residual risk they are willing to live with. Once you do this exercise (which needs to be repeated and refreshed with new eyes every time), you can figure out which products will complete your current defenses. Sometimes they will be products you already have but are not utilizing correctly; sometimes you will need to look for new products.
- Listen to your own story. If anything should teach you what to do next, and provide the best feedback loop into your risk assessment, it is your own security incidents. Almost every Incident Response plan or playbook I have ever seen has a "lessons learned" section at the end. While some enterprise security teams take this seriously, many find it very difficult to discuss and rehash solved cases, or they are already on the next case and don't have the time or focus to learn from past incidents. Here we really need to learn from militaries, which have been in the business of defense long before our industry came to be. I would be surprised if there is any good military unit anywhere in the world that does not conduct serious debriefings after every major incident. That is how enterprises should treat their lessons-learned sessions. Perhaps even more important, enterprises need to keep tabs on all incidents and watch the trends that are emerging.
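The known-malware due-diligence test from the first bullet can be sketched in a few lines. This is a minimal, hypothetical illustration (the sample names and scan outcomes below are invented, not real test data): score a product against a corpus of well-known malware and treat anything under 100% detection as a red flag for its zero-day claims.

```python
# Minimal sketch of the known-malware due-diligence test. All sample names
# and scan outcomes are hypothetical, for illustration only.

def detection_report(results):
    """results: dict mapping sample name -> True if the product flagged it."""
    total = len(results)
    detected = sum(1 for flagged in results.values() if flagged)
    rate = detected / total if total else 0.0
    missed = sorted(name for name, flagged in results.items() if not flagged)
    # Anything short of 100% on *known* malware is a strong red flag.
    return {"rate": rate, "missed": missed, "red_flag": rate < 1.0}

# Example: outcomes from scanning a small corpus of old, well-known samples.
scan_outcomes = {
    "eicar_test_file": True,
    "conficker_sample": True,
    "zeus_variant_2010": False,   # a miss on known malware
}
report = detection_report(scan_outcomes)
```

In practice the corpus would be far larger and sourced from a reputable feed or a third-party lab, but the logic of the check stays the same.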
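Keeping tabs on incidents, as the last bullet suggests, can start as simply as tallying them by category per quarter and flagging categories that keep growing. A minimal sketch, with an invented incident log (the categories and counts are hypothetical):

```python
# Minimal sketch of surfacing incident trends: tally incidents by category
# per quarter and report categories that grow quarter over quarter.
# The incident log below is hypothetical.
from collections import defaultdict

def rising_categories(incidents):
    """incidents: iterable of (quarter, category); quarters sort naturally."""
    counts = defaultdict(lambda: defaultdict(int))
    for quarter, category in incidents:
        counts[category][quarter] += 1
    rising = []
    for category, by_quarter in counts.items():
        ordered = [by_quarter[q] for q in sorted(by_quarter)]
        # Flag only categories with a strictly increasing quarterly count.
        if len(ordered) >= 2 and all(a < b for a, b in zip(ordered, ordered[1:])):
            rising.append(category)
    return sorted(rising)

incident_log = [
    ("2016Q1", "phishing"), ("2016Q2", "phishing"), ("2016Q2", "phishing"),
    ("2016Q1", "malware"),  ("2016Q2", "malware"),
]
```

Here phishing grows from one incident to two while malware stays flat, so only phishing would be flagged; a real program would feed this from the incident tracker and fold the output back into the risk assessment.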
These are just examples of how I think security buyers should be thinking. The bottom line: FUD? No. Test everything? Yes. Apply logic? Yes.