How to eliminate the default route for greater security

How-To
Feb 01, 2018 | 11 mins
Firewalls, Networking, Routers

Here’s how and why to remove default routes that lead to the internet and instead configure outbound proxies to better ensure security.


If portions of enterprise data-center networks have no need to communicate directly with the internet, then why do we configure routers so every system on the network winds up with internet access by default?

Part of the reason is that many enterprises use an internet perimeter firewall performing port address translation (PAT) with a default policy that allows access to the internet, an approach that leaves open a path attackers can use to breach security.


Traditional Network Design with Default Routing

Network engineers configure static or dynamic (e.g. BGP) routing with upstream ISPs on the routers northbound of the firewall.  It is tradition and habit for network engineers to also configure a static default route (e.g. 0.0.0.0/0, ::/0) toward the firewall on the router on the internal side of the firewall.  As shown in the diagram below, this internal router then redistributes the static default into the internal dynamic routing protocol (e.g. OSPF, EIGRP).  Therefore, when any internal router receives a packet destined for an IP address that does not appear in its routing table, it forwards the packet using the default route (the gateway of last resort).  As a result, every system on every internal network has a path to the Internet whether it needs it or not.

[Diagram: Default route (Scott Hogg / IDG)]
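To make the traditional pattern concrete, the following is a minimal Cisco IOS-style sketch of the internal router configuration; the next-hop addresses (203.0.113.1 and 2001:db8::1) and the OSPF process number are hypothetical values for illustration only.

! Static default routes pointing at the firewall (hypothetical next hops)
ip route 0.0.0.0 0.0.0.0 203.0.113.1
ipv6 route ::/0 2001:DB8::1
!
! Advertise the IPv4 default into the interior routing protocol (OSPF assumed)
router ospf 1
 default-information originate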

Removing the Default Gateway

Now, the end nodes connected to these internal edge routers also use a default gateway that directs all non-local network traffic toward the first-hop router.  On access networks, end-user devices receive this default route via DHCP options.  Although it is possible to remove the default route from all hosts, doing so manually for each and every server would be an administrative burden.  It is easier to configure the presence or absence of the default route on a limited set of data-center network equipment and achieve the same result.
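As a quick illustration of why doing this host by host does not scale, the iproute2 commands below show and remove the default route on a single Linux server; note that the route will typically return at the next DHCP lease renewal unless the client configuration is changed as well.

$ ip route show default

$ sudo ip route del default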

Preventing Malware Communication Channels

Even though these nodes are on internal networks, malware can still reach them.  Once a host is infected, the malware phones home via the default route to download or drop more malicious software and to reach attacker command-and-control networks.  To help mitigate these threats, enterprises employ perimeter defenses such as e-mail/web content filters, IPSes, file inspection, DNS-based security and security systems that leverage threat-intelligence feeds.  The default route directs outbound packets through these protection measures so that outgoing connections can be scrutinized.  Without an outbound default route, however, those connections would not be routable at all; the packets would be dropped at the internal routers before they ever reached the perimeter.

Determining Need for the Default Route

End users would complain if they didn’t have Internet access, but the reality is that not all enterprise systems need to reach the Internet.  For example, building automation systems, video surveillance systems, badge access systems, and data center power and cooling equipment don’t necessarily need to make outbound connections to the Internet.

If you have sensitive internal applications that should only be reachable by internal resources, then they don’t need a default route.  For example, if you have a server in your environment that has credit-card data on it and is subject to PCI DSS compliance, then you should be using a reverse proxy, Web Application Firewall (WAF) or stateful firewall to reach this system, and it should not have a direct outbound path to the Internet.

One of the requirements often cited for giving these internal systems Internet reachability is patch management.  We can all agree that systems must be regularly patched as part of a healthy security program.  However, there are patch management systems that can retrieve patches from trusted sources and then deliver those patches to the internal systems.  Therefore, the patch management system requires Internet access, but the internal systems may not.

Furthermore, the only systems on the internal network that should be sending DNS queries to the Internet are the internal DNS resolvers.  If every system on the internal network were allowed to send DNS packets to the Internet and use external DNS resolvers, then DNS exfiltration would be something to worry about.  With the removal of the default route, software patches can still be applied, servers can still be internally managed, and legitimate users can still reach the applications hosted on internal servers.
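One way to enforce this at the network edge is an outbound ACL that permits DNS only from the internal resolvers.  The following Cisco IOS-style sketch is illustrative only; the resolver address 10.10.53.10 and the interface name are hypothetical.

ip access-list extended OUTBOUND-DNS
 permit udp host 10.10.53.10 any eq domain
 permit tcp host 10.10.53.10 any eq domain
 deny udp any any eq domain
 deny tcp any any eq domain
 permit ip any any
!
interface GigabitEthernet0/1
 ip access-group OUTBOUND-DNS out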

Multi-Tier Application Architectures

The technique used in data centers to mitigate these inbound threats and implement a layered security defense model is a multi-tier server architecture, as pictured below.

[Diagram: Default route tiers (Scott Hogg / IDG)]

Data centers and cloud environments commonly use a three-tier application architecture, with a web tier that receives the end-user connections, a back-end application tier, and a database tier that is tucked safely behind the other two.  This is the same security approach used in cloud-based, Infrastructure as a Service (IaaS) virtual data centers.  The web-tier load balancers, reverse proxies and web servers require Internet reachability using the default route.  However, the application tier and the database tier only need reachability to their own private networks.

In a data center or in a cloud environment you can have either physical or logical isolation.  Enterprises should take a page from military and financial organizations’ best practices and perform greater segmentation and separation of their environments based on data sensitivity, asset valuation and trust.  The default route is not needed when you are creating a layered security approach.  Your most trusted applications should be sealed away in the inner sanctum: the private networks of your enterprise network or cloud infrastructure, without any Internet reachability.

Controlling Default Routing in Cloud Infrastructures

Similar to the out-of-band (OOB) management network an enterprise would build on premises, the same construct can be created in a cloud IaaS service.  Enterprise system and network administrators have a higher level of access than typical employees, and they are sometimes placed on special management networks with access restrictions.  For example, when operating in a public cloud like Amazon Web Services (AWS), it is considered a security best practice to use Virtual Private Clouds (VPCs) and separate subnets for the various application tiers mentioned above.  The VPC route tables associated with those subnets do not necessarily need a default route to the Internet.  The absence of a default route to the Internet Gateway (IGW) and the absence of public/Elastic IPs (EIPs) prevent EC2 instances on private subnets from being directly reachable from the Internet.  When operating in an IaaS environment, you can keep your most sensitive services and data on such private networks.  If these sequestered instances are only reachable via a bastion server in a management VPC, and can only be reached by an app server acting as the web-server tier, then these systems do not need a default route.
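As a concrete sketch (the VPC, subnet and route-table IDs below are placeholders), a route table created for a private subnet contains only the VPC-local route; as long as no 0.0.0.0/0 route toward an IGW or NAT gateway is added, instances in that subnet have no path to the Internet.

$ aws ec2 create-route-table --vpc-id vpc-0example11111

$ aws ec2 associate-route-table --route-table-id rtb-0example22222 --subnet-id subnet-0example33333

$ aws ec2 describe-route-tables --route-table-ids rtb-0example22222 --query "RouteTables[].Routes"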

Implementation of Default Route Removal

If you want to be secure, you need to control your routing and packet forwarding.  You can control routing tables and dynamic routing on your physical on-premises routers or on the virtual routers in your cloud environments.  You can use the presence or absence of IP routes to control whether and how traffic can reach specific destinations.  You can also use Access Control Lists (ACLs) on routers and stateful packet filters on firewalls.  In an IaaS environment like AWS, this means configuring stateless network ACLs (NACLs) on subnets or stateful Security Groups on instances, with a least-privilege policy.
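For example, a least-privilege security group for an instance on a private subnet could have its default allow-all egress rule removed and be allowed to talk only to the local proxy on TCP port 3128.  This is a sketch; the group ID is a placeholder, and the proxy address matches the example used later in this article.

$ aws ec2 revoke-security-group-egress --group-id sg-0example44444 --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

$ aws ec2 authorize-security-group-egress --group-id sg-0example44444 --protocol tcp --port 3128 --cidr 10.10.10.10/32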

Using an Outbound Proxy Instead of a Default Route

For the private subnets in an AWS VPC, we can remove the default route from the route tables associated with those subnets.  Then, to allow the EC2 instances on those private subnets to safely reach the Internet, we can configure them to use a local web proxy server.  The outbound proxy server carries a whitelist of trusted software repositories from which updates may be retrieved.  One popular method is to use a Squid 3.5+ caching proxy and configure advanced whitelisting for the secure software repositories.

A Squid proxy can be configured to permit Windows instances to receive their updates from Windows Update and to cache those packages for subsequent requests from other local Windows instances.  It is easy to configure a Squid proxy for whitelisting and caching.  Following is an example of a Squid /etc/squid/squid.conf configuration file that allows a local EC2 instance to receive updates only from AWS software repositories.

acl manager proto cache_object

# ACLs for localhost and the local VPC subnet

acl localhost src 127.0.0.1/32 ::1

acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.10.0.0/16

# Allow only these ports

acl SSL_ports port 443

acl Safe_ports port 80          # http

acl Safe_ports port 443         # https

acl CONNECT method CONNECT

# Allow access to regional AWS software repositories

acl yum dstdomain repo.us-east-1.amazonaws.com

acl yum dstdomain repo.us-west-1.amazonaws.com

acl yum dstdomain repo.us-west-2.amazonaws.com

acl yum dstdomain repo.eu-west-1.amazonaws.com

acl yum dstdomain repo.eu-central-1.amazonaws.com

acl yum dstdomain repo.ap-southeast-1.amazonaws.com

acl yum dstdomain repo.ap-southeast-2.amazonaws.com

acl yum dstdomain repo.ap-northeast-1.amazonaws.com

acl yum dstdomain repo.sa-east-1.amazonaws.com

acl yum dstdomain packages.us-east-1.amazonaws.com

acl yum dstdomain packages.us-west-1.amazonaws.com

acl yum dstdomain packages.us-west-2.amazonaws.com

acl yum dstdomain packages.eu-west-1.amazonaws.com

acl yum dstdomain packages.eu-central-1.amazonaws.com

acl yum dstdomain packages.ap-southeast-1.amazonaws.com

acl yum dstdomain packages.ap-northeast-1.amazonaws.com

acl yum dstdomain packages.sa-east-1.amazonaws.com

acl yum dstdomain packages.ap-southeast-2.amazonaws.com

# Only allow cachemgr access from localhost

http_access allow manager localhost

http_access deny manager

# Deny requests to certain unsafe ports

http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports

http_access deny CONNECT !SSL_ports

# Deny requests to localhost (protects local web apps that trust "localhost")

http_access deny to_localhost

# Allow outbound access

http_access allow localnet

http_access allow localhost

http_access allow yum

# And finally deny all other access to this proxy

http_access deny all

# Squid normally listens to port 3128

http_port 3128

#… other default squid proxy caching and logging configuration

Proxy Client Configuration for Linux

For the proxy client EC2 instances to utilize the Squid proxy on their private subnet, they do not need a default route; they only need the proxy service configured locally.  The following two commands enable the proxy for HTTP and HTTPS communications, using port 3128 to communicate with the proxy server at IPv4 address 10.10.10.10.

$ export http_proxy=http://10.10.10.10:3128

$ export https_proxy=http://10.10.10.10:3128

When operating in AWS, we don’t want to use the proxy when retrieving instance metadata from IP address 169.254.169.254, so we need to prevent this IP address from using the proxy.

$ export no_proxy="169.254.169.254"

We can then try to connect to a public web site; with the proxy variables exported, curl sends the request through the proxy.

$ curl -I http://www.google.com

We can also specify the proxy explicitly to test communication to a public web site or to an AWS S3 bucket.

$ curl -I --proxy http://10.10.10.10:3128 http://www.google.com

$ curl -I --proxy http://10.10.10.10:3128 http://calculator.s3.amazonaws.com/index.html

Then you can test if this EC2 instance can still receive its software updates from the AWS repositories.

$ sudo yum update
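Note that, depending on the sudoers policy, environment variables exported in the shell may not be passed through sudo.  A common alternative, sketched here with the same proxy address, is to set the proxy directly in the yum configuration file.

# /etc/yum.conf (excerpt)
proxy=http://10.10.10.10:3128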

Proxy Client Configuration for Windows

This same method of configuring a local proxy service also works for Windows instances.  The following two commands configure the proxy, and the third prevents the proxy from being used for the instance metadata address.

C:> set HTTP_PROXY=http://10.10.10.10:3128

C:> set HTTPS_PROXY=http://10.10.10.10:3128

C:> set NO_PROXY=169.254.169.254
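These set commands affect only the current command-prompt session; to persist the variables across sessions you could use setx instead (same hypothetical addresses assumed).

C:> setx HTTP_PROXY http://10.10.10.10:3128

C:> setx HTTPS_PROXY http://10.10.10.10:3128

C:> setx NO_PROXY 169.254.169.254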

On Windows systems, you can also use Netsh.exe commands to configure local outbound proxies.  The following command configures the proxy server on port 3128; the second argument is the bypass list of addresses that should not go through the proxy.

C:> netsh winhttp set proxy 10.10.10.10:3128 “localhost;10.10.10.10”

If you want to reset the host’s proxy settings for WinHTTP then you can use this command.

C:> netsh winhttp reset proxy

Summary

Not every private internal enterprise network or cloud-based virtual subnet needs a default route to the Internet.  Network and security administrators often don’t give this question a second thought, and that oversight leads to security issues.  When routing allows Internet reachability, malware can easily report back to the attacker and data can easily be exfiltrated.  Removing the default route is not practical in every circumstance, but it should be considered for environments where security is of the utmost concern.  Using OOB management networks, bastion administrative hosts, patch management systems, and outbound proxies, you can create a functional and secure environment without a default route to the Internet.

(Scott Hogg is a co-founder of HexaBuild.io, an IPv6 consulting and training firm, and has over 25 years of cloud, networking and security experience.)