Next Generation Firewall Limitations
If product marketing were to be fully believed, a particular brand of cleaning product would make hard work a thing of the past, fizzy drinks would be the key to happiness and buying the right car would give you superhuman powers. Who wants to hear that bog standard dishwasher salt is the same as the one with fancy packaging, that cola will leave you fat with rotten teeth, or that the car you drive is of no consequence to anyone? The reality is that any given company has a vested interest in presenting its product in the best possible light, even when the reality is somewhat different and the shortcomings are conveniently left in the shadows. No one has a duty to explain a product's shortcomings; you are left to feel these out for yourself, and that is not without consequences. Surmountable when we are talking about the latest innovation in toothpaste, but in the context of network security it is simply not good enough: a false sense of security is as bad as, if not worse than, no security at all.
The Context of the Firewall
There is no doubt that a firewall is an essential tool in enforcing network security policy. Next generation firewall products offer tangible improvements over traditional firewalls insofar as they can provide context for traffic, as opposed to allowing or denying traffic based purely on packet headers (OSI layers 2, 3 and 4). Essentially, a next generation firewall is a decision engine that inspects traffic to a greater or lesser degree, drawing on dynamic, external information sources (e.g. AD, DNS, DHCP), rather than a filtering device leveraging the very static information seen in traditional firewalls. It seeks to qualify the legitimacy of a particular flow by, amongst other things, validating the application in use. That sounds great, doesn't it? And there it is: a real benefit over and above the traditional approach. But to qualify the marketing blurb further we need to understand how this works under the hood. The devil is always in the detail!
Every vendor has its own technology that identifies traffic against a signature list of protocols and applications, but for all the different marketing names they work in much the same way under the hood. To identify traffic with complete certainty, the firewall would have to hold packets indefinitely while it ran its checks, and the resulting latency would be unacceptable. To keep performance acceptable, shortcuts are employed that allow the firewall to do its job without impacting the end user experience. Let's not forget that performance always trumps security. Nor can we have a situation (or too many instances of one) where an application changes so much, for example a new release of a messaging client or online game, that it no longer matches its signature; otherwise the product becomes too unreliable, or too demanding of maintenance, to stomach, and features get switched off. So what shortcuts are used, and what are the implications for security?
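The shortcut amounts to a lookup over a small set of match conditions. As a minimal sketch, and assuming a purely hypothetical signature format (no vendor publishes theirs in this form), the logic looks something like this:

```python
# Illustrative signature list: each entry matches on a handful of
# attributes only. The fields here are invented for the sketch.
SIGNATURES = [
    {"app": "facebook", "proto": "http", "host_contains": "facebook.com"},
    {"app": "dns", "proto": "udp", "port": 53},
]

def classify(flow):
    """Return the first signature whose conditions the flow satisfies."""
    for sig in SIGNATURES:
        if sig.get("proto") and sig["proto"] != flow.get("proto"):
            continue
        if sig.get("port") and sig["port"] != flow.get("port"):
            continue
        if sig.get("host_contains") and sig["host_contains"] not in flow.get("host", ""):
            continue
        return sig["app"]
    # Until enough packets arrive to satisfy a signature, the flow
    # sits in this bucket.
    return "unknown"

print(classify({"proto": "http", "host": "www.facebook.com", "port": 80}))  # facebook
print(classify({"proto": "tcp", "port": 4444}))  # unknown
```

The point to notice is how few attributes a match rests on: anything that presents those attributes gets the label, and everything else falls into "unknown" until proven otherwise.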
SSL Blind Spot
Encrypted traffic is necessary to provide integrity of data in transit between endpoints. Nearly everything these days is a 'web application' and, if implemented properly, SSL can and should be leveraged to give those programmes a common underlying layer of security. The problem is that if the firewall can't inspect the traffic, it can't judge what application is in use, nor spot threats or data exfiltration. In other words, SSL can be used to hide nasties, and the devices responsible for policing the traffic shrug their shoulders and allow it past. Instead, the firewall needs to sit as a virtuous man-in-the-middle of these streams: decrypting the traffic to plain text, parsing it, then re-encrypting it towards the endpoint if all is well. This is processor intensive if performed on the firewall, especially where there is a large amount of traffic; it doesn't lend itself well to single-pass architectures, since SSL traffic will very likely need to be offloaded; and it is difficult to implement in BYOD environments, where trusted certificate authority information needs to be distributed to every device.
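The decrypt-inspect-re-encrypt flow can be sketched structurally. To keep the sketch self-contained, a single-byte XOR stands in for the two real TLS sessions (one per leg), and the policy check is a toy marker search; none of this resembles production interception, it only shows the shape of the pipeline:

```python
# Structural sketch of SSL interception: decrypt the client leg,
# apply policy to the plaintext, re-encrypt toward the server.
# XOR keys are stand-ins for the two TLS session keys.
KEY_CLIENT, KEY_SERVER = 0x2A, 0x5C

def xor(data, key):
    return bytes(b ^ key for b in data)

def inspect(plaintext):
    """Toy policy check: block a known-bad marker seen in the clear."""
    return b"EVIL" not in plaintext

def proxy(ciphertext_from_client):
    plaintext = xor(ciphertext_from_client, KEY_CLIENT)  # decrypt client leg
    if not inspect(plaintext):                           # policy in the clear
        return None                                      # drop the flow
    return xor(plaintext, KEY_SERVER)                    # re-encrypt server leg

print(proxy(xor(b"GET / HTTP/1.1", KEY_CLIENT)) is not None)  # True: passed
print(proxy(xor(b"EVIL payload", KEY_CLIENT)) is None)        # True: blocked
```

Every byte of every inspected stream passes through both cryptographic legs and the policy engine, which is where the processor cost described above comes from.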
Next generation firewalls work on the principle of flows: if returning traffic belongs to outbound traffic that has already been inspected and validated then, in the main, it will not be inspected.
Default Behaviour for Unknown Traffic
Next generation firewalls need to see a certain amount of traffic before they can decide what an application is; to put it another way, every connection starts from a position of there being insufficient data to determine the application. The amount of data required beyond the full connection handshake varies from application to application: it could be two packets, or it could be ten or more. This leaves the potential for data to leak through the firewall so long as it moves in small chunks, which could be leveraged to exfiltrate intellectual property or other sensitive information from the network.
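The leakage window above can be sketched in a few lines: if classification needs, say, more packets than a short connection ever sends, payloads sliced below that budget complete before the verdict arrives. The chunk size and secret here are purely illustrative:

```python
# Sketch of "small chunk" exfiltration: slice the payload so each
# piece fits inside the firewall's pre-classification window, then
# (in a real abuse) send one piece per short-lived connection.

def chunk(data, size):
    """Split data into pieces small enough to finish before classification."""
    return [data[i:i + size] for i in range(0, len(data), size)]

secret = b"internal-design-doc-contents"
pieces = chunk(secret, 8)          # one short connection per piece
print(len(pieces))                 # 4
print(b"".join(pieces) == secret)  # True: receiver reassembles losslessly
```

The defence implication is that the leak is invisible per-flow; only correlation across many small, short flows would reveal it.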
Next generation firewalls rely on a library of application definitions which detail characteristics and classification: for example, the standard TCP/UDP ports an application uses, what other applications it depends on, and so on. These definitions use match conditions and rely on a small, limited set of attributes to make a positive match. We therefore have a situation whereby application signatures use only basic information to categorise an application. For example, a signature definition for Facebook might specify nothing more than HTTP as the method and facebook.com (or .co.uk, and so on) as the host string. If those conditions are met, the firewall categorises the flow as Facebook application traffic even if it is destined for an IP address that is not Facebook! So not only can data be exfiltrated in small chunks; it can be moved in large chunks that the application identification engine classifies as legitimate traffic.
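Satisfying such a signature is trivial, because the HTTP Host header is client-controlled. As a sketch (the request is only assembled here, never sent, and the destination would be an arbitrary IP of the attacker's choosing):

```python
# Sketch of a client-controlled Host header. A signature matching only
# "HTTP method + host string contains facebook.com" would label this
# flow as Facebook traffic regardless of the actual destination IP.

def build_request(host_header, path="/"):
    """Assemble a minimal HTTP/1.1 request with an arbitrary Host header."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host_header}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

req = build_request("www.facebook.com")
print(req.splitlines()[1])  # Host: www.facebook.com
```

Bulk data appended to such a request rides out of the network under an application label the policy explicitly permits.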
These behaviours are fundamental to how next generation firewalls work, so they are not bugs waiting to be fixed. The truth of the matter is that the firewall is a very large part of the answer, but not the entire answer, and as such it forms only part of the overall security posture. Event monitoring and correlation through analytics are critical to network security, as are the agility and responsiveness of your security incident response.
By James Townsend, Technical Architect at Data Integration (An Xchanging Company)
Custom Application Signatures in PAN: https://live.paloaltonetworks.com/t5/Tech-Notes/Custom-Application-Signa...
PacketKnockOut - Exploration of data exfiltration by port numbers: https://github.com/JousterL/PacketKnockOut
FireAway - Next Generation Firewall Bypass Tool: https://github.com/tcstool/fireaway
"Network Application Firewalls Exploits and Defense" - Brad Woodberg, Defcon 19
"Bypassing Next-Gen Firewall Rules" - Dave Lassalle, Nolasec, 27 September 2012
"Sinking the Next Generation Firewall" - Russell Butturini, Derbycon 2016