Security Is Not One-Size-Fits-All
When it comes to security, and vulnerability management in particular, one size does not fit all. Organizations need to customize their tools to a wide variety of business requirements: scan windows, frequently changing credentials, report distribution, and, most importantly, the architecture and volume of the data that needs to be processed.
As a matter of experience, and I am sure many of you can relate, products that work fantastically in the lab often fail miserably in production because of a one-size-fits-all philosophy. Problems range from basic scalability to usability, and even to the security of the security solution itself.
There have been many times when I have found myself answering an RFP, or running a pilot in an environment with tens of thousands of assets to assess, only to watch the client choose a dozen assets to scan and report on. This typically involves a makeshift lab with virtual machines and only a single scan engine. Users often fail to consider firewalls, a multi-tier architecture, role-based security, ticketing systems, and implementation time when installing a solution of this nature. The ones that do have usually deployed other technologies of this magnitude, or are in a rip-and-replace scenario due to shortcomings with their current technology. One size does not fit all when you need to scale from a single asset all the way to every TCP/IP-enabled device on the wire.
Consider this simple fact: a typical vulnerability management solution needs to assess every single device connected to your network. Is there any one device on your network that can communicate with every other device, using unrestricted ports and protocols, from a single location? The answer is probably no, and in many organizations, absolutely not; it is simply not permitted. This is why organizations deploy multiple domain controllers, DNS servers, and mail servers to service devices downstream (beyond bandwidth and resource scalability concerns), and keep networks segregated for regulatory compliance requirements such as PCI. A single vulnerability scanner installation will fail miserably at assessing these remote devices and isolated networks. This is why architecture, scaling, and management are so important.
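You can see the reachability problem for yourself with a few lines of code. The sketch below, a minimal illustration rather than a scanner, simply attempts TCP connections from one location; on a segmented network, firewalled or isolated hosts show up as failed probes, which is exactly why a single scan engine cannot assess everything. The host and port values are placeholders.

```python
import socket


def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True if the port answers within the timeout.

    A scanner placed on one segment can only assess hosts it can reach,
    so firewalls and network segmentation surface here as failed probes.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def reachability_report(targets):
    """Summarize which (host, port) pairs one scan location can reach.

    `targets` is an iterable of (host, port) tuples; the names you pass
    in are your own inventory, not anything a product discovers for you.
    """
    return {(host, port): can_reach(host, port) for host, port in targets}
```

Run `reachability_report` from the machine where you intend to place a scan engine; every `False` entry is a device that engine could never assess without additional components or firewall changes.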
In a typical enterprise, a vulnerability management solution is an independent, multi-tier, agnostic installation that can communicate with everything in order to determine vulnerabilities and calculate risk. The typical RFP and lab environment fail to take these critical components into consideration when producing the desired results. It is about much more than determining whether a host is vulnerable or not, or whether a vendor has a particular “check box feature.” What matters is how the data is identified, escalated, reported, and managed throughout a complex infrastructure. Consider that no one person can tell you the complete architecture of your enterprise and every device online at any given time. This is why we have information technology tools to help us document and illustrate the status of the infrastructure.
When building a vulnerability management program, remember that one size does not fit all. Sometimes you need scan agents, scan engines, management consoles, and even a data warehouse to complete the task and design the proper architecture for your business. The next time a vendor presents you with a one-size-fits-all solution, make sure it is not the same solution for all sizes. It needs components that truly scale, allow for growth or shrinkage, and provide flexibility for any of your needs. How could a single installation of a scan engine do all that? Realistically, it cannot. That is why a fixed subscription price for a single scan engine simply does not work for most organizations.
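To make the scaling argument concrete, here is a back-of-the-envelope sizing sketch: because segmentation blocks a single scanner from reaching every host, each isolated segment needs its own engine pool. The per-engine capacity figure is an illustrative assumption, not a specification for Retina or any other product.

```python
import math


def engines_needed(segment_asset_counts, assets_per_engine=2500):
    """Rough sizing: scan engines required per isolated network segment.

    `segment_asset_counts` maps a segment name to its asset count.
    `assets_per_engine` is an assumed capacity for illustration only;
    real capacity depends on scan windows, checks, and bandwidth.
    """
    return {
        segment: math.ceil(count / assets_per_engine)
        for segment, count in segment_asset_counts.items()
    }
```

Even this toy model shows why a flat, single-engine subscription breaks down: add a PCI segment or grow a site, and the engine count (and therefore the architecture and cost) changes with it.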
If you would like more information on how Retina can scale to your needs, and on how its components can manage your requirements, contact us.