The Dilemma: How to Balance Network Security and Performance with Testing
Ongoing testing can help measure the effect and effectiveness of change.
Enterprises need to balance security and performance. Ongoing testing can help measure the effects of changes, solve problems more quickly, inform investment decisions, and demonstrate how changes have impacted your network, explains Sashi Jeyaretnam, senior director of product management for security solutions at Spirent.
Managing network security for a modern enterprise can feel like walking a tightrope. True, one misstep won’t have quite the same consequences as it would on an actual high wire (though with cyberattacks continuing to hit the world’s largest organizations, the stakes can feel almost as high). But as on a tightrope, the secret to protecting an enterprise network is maintaining excellent balance – in this case, between security on the one hand, and application performance and user experience on the other.
Focus too much on ease of access and user experience, and you risk leaving your business exposed. But lean too heavily into inspecting network traffic for threats, and your applications can slow to a crawl, frustrating users and disrupting day-to-day business operations.
Plenty of companies have solved this dilemma – particularly large service providers and financial institutions. They do it by conducting in-depth security and performance assessments as part of ongoing change management. By implementing proactive testing—both to verify the efficacy of security controls and to measure their effects on user experience—these companies can keep application traffic moving without exposing the business to undue risk. Now, as organizations in every industry rely on increasingly distributed applications and clouds, more enterprises should be following their lead.
A Shifting Landscape
The push and pull between security and user experience isn’t a new phenomenon. But several recent trends have converged to make striking the right balance – and maintaining it – far more urgent. These include:
More distributed users and applications: The days when businesses could easily classify traffic as “internal” versus “external” are over. Today’s corporate network is a tangled web of distributed applications, clouds, and connected devices, where the “edge” can be literally anywhere. The good news is that modern architectures like Secure Access Service Edge (SASE) bake security directly into these distributed environments. The bad: understanding how security controls affect a given application or user group has become far more complex.
More complex and dynamic environments: Where yesterday’s enterprise networks were largely static, today’s constantly change. With software-defined networks, shifting cloud infrastructures, and continuous software updates to infrastructure and applications, the network you had this morning could look very different by this afternoon. Even security solutions themselves, which might have received software updates once or twice per year in the past, can now change multiple times per month.
More encryption: The percentage of network traffic using Transport Layer Security (TLS) encryption continues to grow. Google estimates that 95% of web traffic is now encrypted. While this is good news for users in many respects, it also means that inspecting network traffic for threats has become much more computationally intensive—and much more likely to affect user experience. Some enterprises find that inspecting encrypted traffic cuts firewall performance in half. It’s the biggest reason why 50% of deployed firewalls capable of TLS inspection have that feature turned off.
Enterprises continue to adopt ever-more powerful security controls to protect against the expanding threat surface that these trends expose. But if enterprises can’t measure the real-world impact of those controls, they won’t be able to use them effectively.
Living with Change
What’s the secret to solving this riddle? The companies doing it best don’t fight against change; they embrace it. They assume they will always be modifying and expanding the security controls in their distributed environments. Where they differ from some businesses, though, is in prioritizing user experience just as highly. They adopt change management frameworks to continually assess security efficacy and performance as their distributed environment evolves.
Organizations that are serious about balancing network security and performance deploy test agents at strategic points in their environment (within on-premises networks, at access points to public and private clouds, in branch offices, and more) to simulate the network topology. They then generate emulated traffic to test the performance limits of network devices, web applications, and media services. And they do it in a way that emulates real-world traffic patterns as closely as possible: engaging all security controls as they’ll be configured in the production environment, and simulating real-world threats, including the evasion and obfuscation techniques used in actual cyberattacks.
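In miniature, the emulated-traffic idea above amounts to a concurrent load generator that times each transaction and summarizes the results. The sketch below is purely illustrative: `probe` is a hypothetical stand-in for whatever request a real test agent would send through the security stack under test, and the percentile math is deliberately simple.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(probe, requests=100, concurrency=10):
    """Run `probe` (one emulated transaction) `requests` times across
    `concurrency` worker threads and summarize per-request latencies."""
    def timed_call(_):
        start = time.perf_counter()
        probe()  # e.g. an HTTPS GET sent through the firewall under test
        return time.perf_counter() - start

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(requests)))
    wall = time.perf_counter() - wall_start

    return {
        "p50_s": statistics.median(latencies),          # median latency
        "p95_s": latencies[int(len(latencies) * 0.95)],  # tail latency
        "throughput_rps": requests / wall,               # effective rate
    }
```

In practice, the probe would replay a realistic mix of application and attack traffic rather than a single request type; dedicated traffic-generation tools do this at far higher fidelity and scale than a script like this can.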
Using these techniques, these companies establish a baseline to maintain acceptable performance with the right level of inspection for their business. And they repeat this assessment on an ongoing basis—every time security controls, configurations, or network software changes.
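A baseline comparison of the kind described above can be sketched in a few lines. The latency figures and the 10% regression threshold here are invented for illustration; a real assessment would track many more metrics and use thresholds tied to your own SLAs.

```python
import statistics

def compare_to_baseline(baseline_ms, current_ms, max_regression=0.10):
    """Compare a fresh latency sample against the stored baseline.

    Flags a regression when median latency grows by more than
    `max_regression` (10% by default; an illustrative threshold)."""
    base = statistics.median(baseline_ms)
    cur = statistics.median(current_ms)
    change = (cur - base) / base
    return {
        "baseline_median_ms": base,
        "current_median_ms": cur,
        "change_pct": round(change * 100, 1),
        "regression": change > max_regression,
    }

# Hypothetical numbers: re-running the same emulated traffic mix
# after enabling TLS inspection on a firewall policy.
before = [42, 45, 41, 44, 43]  # ms, from the last approved change window
after = [61, 64, 59, 66, 62]   # ms, after the change
print(compare_to_baseline(before, after))
```

The point is less the arithmetic than the discipline: the comparison only works if the same emulated traffic is replayed against every revision of the environment.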
Striking the Right Balance
If this sounds like a better strategy than waiting for users to alert you that a security control change has made applications unusable, it is. By adopting ongoing testing into your change management process, you can:
- Consistently and proactively balance network security and performance: By pre-emptively measuring the effects of network and security changes, you can understand their impact before they affect applications. You can keep users happy even as you keep them safe, and avoid scrambling to roll back changes after the fact.
- Solve problems more quickly: Being able to test against performance baselines makes it easier to understand how a network or security change impacts applications. You can pinpoint exactly where and how the user experience is affected and quickly zero in on a solution.
- Make smarter investments: With ongoing testing, you can understand exactly what you need from your security solutions—and the size and number of those solutions—ahead of time. For example, you can simulate inspecting all encrypted traffic and identify how many firewalls you’ll need in each location to ensure enough capacity for your users.
- Hold your vendors accountable: If you don’t conduct ongoing security and performance testing, you don’t have baselines—making it much harder to keep tabs on your vendors. If a vendor issues an update that diminishes performance, all you can say is that users are complaining. When you conduct ongoing testing, you can demonstrate exactly how a change has affected your business. And you can validate that your security solutions are living up to vendors’ claims.
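The capacity-sizing exercise from the "smarter investments" point above reduces to simple arithmetic once testing has produced real numbers. In this sketch, every figure is a placeholder: the default 50% TLS-inspection penalty echoes the firewall statistic cited earlier, but it should be replaced with throughput you have measured on your own devices.

```python
import math

def firewalls_needed(peak_gbps, per_unit_clear_gbps,
                     tls_inspection_penalty=0.5, headroom=0.25):
    """Rough sizing: firewalls required to inspect all traffic at peak.

    `tls_inspection_penalty` models the throughput drop with TLS
    inspection enabled (placeholder default of 50%); `headroom`
    adds capacity margin above measured peak."""
    effective = per_unit_clear_gbps * (1 - tls_inspection_penalty)
    required = peak_gbps * (1 + headroom)
    return math.ceil(required / effective)

# Hypothetical site: 20 Gbps peak, firewalls rated at 10 Gbps clear-text.
print(firewalls_needed(20, 10))
```

Run per location, this kind of estimate turns "we probably need more firewalls" into a defensible budget line, and the same measured numbers feed directly into the vendor-accountability point that follows.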
Cyber threats will continue to evolve, and you can expect enterprise networks to continue growing more distributed and harder to protect. But if you embrace constant change – and build network security efficacy and performance testing into change management – you can walk that tightrope with confidence.