
Zero Trust Networking

Published July 24, 2018

Abstract

The combination of a largely disappearing perimeter, increased enterprise security threats, and escalating damages from breaches has driven stronger levels of inspection and overall security. This includes a new security model called a zero-trust network architecture, in which NOTHING on the network is trusted. Zero Trust Networking focuses on locking down all aspects of the network, inside and outside of traditional firewalls.

The goal of a Zero Trust Network (ZTN) is to stop malicious traffic at the edge of the network, before it is allowed to discover, identify, and target other networked devices. Getting to ZTN is a continuation of the migration from zone-based network security models to micro-segmentation, and finally to ZTN itself, where trust is evaluated for every user, device, service, and application.

This report describes what a Zero Trust Network is, the compelling case for moving to a ZTN, the roadblocks faced, and migration planning. To better frame the vulnerabilities, we provide mechanisms for measuring network attack surfaces, along with a detailed set of best practice recommendations and a summary of vendors participating in this space.

Authors:

Sorell Slaymaker

Principal Consulting Analyst

[email protected]

 

Executive Summary

The foundation of great IT security starts at the network. Once the IP network is compromised, malicious actors can use it to discover, identify, and target the digital assets of an organization. With more attacks originating from within organizations even as external breaches continue to expand, TechVision Research recommends that enterprises and government agencies consider adopting a Zero Trust Networking (ZTN) approach.

With Mobile, Cloud, IoT, and Edge Computing, the boundary between a secure perimeter and an internal trusted network becomes blurred. Add to this the threats from sophisticated hackers, malware everywhere, and the knowledge that less than 100% of employees can be trusted, and it is clear why leading-edge organizations are moving to Zero Trust models with least-privileged access. The Zero Trust premise is that not a single user, device, thing, service, or application is trusted. Trust is granted and reestablished through Identity and Access Management (IAM) services designed to provide the proper controls.

Zero Trust Networking is a subset of the Zero Trust model, focused on the IP network.  ZTN means that not a single network session from any device is allowed to be established prior to authentication and authorization. Today’s networking security models stop malicious traffic in the middle or at the endpoint of the network. ZTN focuses on stopping malicious traffic at the edge of the network, before it is allowed to discover, identify, and target other networked devices.

While Network Admission Control (NAC), firewalls, intrusion detection, and Virtual Private Networks (VPNs) have been added to networks to improve security, the internal IP routing of networks is not integrated with security. Some of the next generation of software defined networking companies are integrating IAM with IP routing to enable ZTN. As networking becomes 100% software based, the legacy costs and performance barriers are removed allowing secure routing to be everywhere within a network.

The biggest change in enterprise networking will be pushing routing and security to the very edge of the network in order to minimize the attack surface for everything that is connected. The adage of “if you cannot measure it, you cannot manage it” applies to network access security too. Calculating the network attack surface is one way of measuring network access security risks. Organizations can use this calculation to measure existing network security risks and the residual risk after a ZTN solution has been implemented.
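As a rough illustration of how such a calculation might work, the sketch below scores attack surface as the number of (device, port) pairs reachable on the network, before and after a ZTN-style whitelist is applied. The metric, device names, and whitelist here are hypothetical illustrations, not taken from this report.

```python
# Hypothetical attack-surface metric: count the open ports on each device
# that are reachable from any other device on the network.

def attack_surface(devices, reachable):
    """devices   -- dict of device name -> set of listening ports
    reachable -- function(src, dst) -> True if src can reach dst"""
    total = 0
    for src in devices:
        for dst, ports in devices.items():
            if src != dst and reachable(src, dst):
                total += len(ports)
    return total

devices = {
    "laptop":  {22},
    "printer": {9100, 80},
    "db":      {5432},
}

# Flat (zone-based) network: everything can reach everything.
flat = attack_surface(devices, lambda s, d: True)

# ZTN: only whitelisted source/destination pairs are reachable.
allowed = {("laptop", "printer"), ("laptop", "db")}
ztn = attack_surface(devices, lambda s, d: (s, d) in allowed)

print(flat, ztn)  # 8 3
```

Comparing the two scores gives a simple before/after measure of how much a default-deny whitelist shrinks the reachable surface.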

This research report examines a set of compelling reasons why we believe every organization should ultimately move to a ZTN architecture and the current best practices in adopting ZTN.

Introduction

The combination of a largely disappearing perimeter, increased enterprise security threats, and escalating damages from breaches has driven stronger levels of inspection and overall security. This includes a new security model called a zero-trust network architecture, in which NOTHING on the network is trusted. Zero trust networking focuses on locking down all aspects of the network, inside and outside of traditional firewalls.

Zero Trust Networking starts with the premise that malicious traffic should be stopped at its origin, not after it is within the network trying to access an endpoint or application.  This means that all network sessions must be authenticated and permitted prior to allowing the source and destination to communicate.

Traditional network security architectures protect the perimeter, but once an attacker or malware gets inside the network, users, applications, services, and data are left vulnerable. Legacy network security architectures often operate on the outdated assumption that everything inside an organization’s private network can be trusted, which is no longer true. With the acceleration of mobility, cloud, and IoT, plus ever-increasing security threats, enterprises can no longer trust anyone or anything. Given the increased network attack surface, the accelerating speed and sophistication of attacks, and growing insider threats, every enterprise must begin moving to (or at least assessing) a Zero Trust Networking (ZTN) architecture.

TechVision believes that Enterprises and Government agencies trying to more proactively prevent the exfiltration of sensitive data and improve their ability to defend against modern cyberthreats should move to a ZTN architecture. Protecting the perimeter is no longer enough; everything behind the firewall must be protected. All it takes is one infected IoT device, backdoor, or malicious employee to compromise your network.

While devices, servers, and applications can all have security software on them, this isn’t enough to combat the escalating and dynamic threats that are relentlessly being introduced. The network must also be secured to prevent denial of service and attacks on devices, servers, and applications. With the sophistication and speed with which new threats are created, the first line of security defense is the network, preventing a threat from ever reaching a device, server, or application.

The solution is to move to a network architecture where nothing on the network is trusted. This means that in order to establish a network session, explicit authorization and authentication must occur. Organizations can leverage a new breed of routing protocols that tie into directories defining what users, devices, services, and applications have access to as well as verifying identity.

That said, ZTN is much more than just leveraging an Identity and Access Management (IAM) system to validate a user or device and provide appropriate network access. ZTN is far more granular, looking to IAM to work with network routing to control where every user, device, service, application, and data set is allowed to go on the network, starting at the very edge.

Zero Trust architectures apply to known users accessing systems and applications. The concept is simple: Don’t Trust Anything. ZTN is a networking subset of this and applies to anything, known or unknown, intelligent (such as a mobile device) or unintelligent (such as a simple web camera or sensor), accessing a network at the edge, where communication starts.

This research report examines a set of compelling reasons why we believe every organization should ultimately move to a ZTN architecture and the current best practices in adopting ZTN. Getting to ZTN is a continuation of the migration from zone-based network security models to micro-segmentation, and finally to ZTN itself, where trust is evaluated for every user, device, service, and application.

What is a Zero Trust Network?

A Zero Trust Architecture means that every user, device, service, or application is implicitly untrusted and must go through an identity and access management process to gain a least-privileged level of trust and associated access privileges. Zero Trust Networking, ZTN, is a subset of this architecture focused on the IP network, where all network traffic is considered untrusted. This means all network resources are accessed securely regardless of location or device, and a least-privilege network access strategy strictly enforces access controls. Every session that a user creates with other users or applications must be Authenticated, Authorized, and Accounted (AAA) for before a communication session is allowed to be established. ZTN enforces security policies at the edge of networks and stops malicious traffic at its origin, not in the middle of the network or at the front door to an endpoint or application.
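The AAA gate described above can be sketched in a few lines; the credential store, directory entries, and names below are illustrative assumptions, not a specific vendor implementation.

```python
# Illustrative ZTN session establishment: every session is Authenticated,
# Authorized, and Accounted for (AAA) at the edge before any packets flow.

audit_log = []                              # accounting record

CREDENTIALS = {"alice": "s3cret"}           # authentication store (assumed)
GRANTS = {("alice", "crm-app")}             # authorization directory (assumed)

def establish_session(user, password, destination):
    if CREDENTIALS.get(user) != password:   # 1. authenticate
        return False
    if (user, destination) not in GRANTS:   # 2. authorize
        return False
    audit_log.append((user, destination))   # 3. account
    return True

print(establish_session("alice", "s3cret", "crm-app"))  # True
print(establish_session("alice", "s3cret", "hr-db"))    # False: no grant
```

The key design point is the ordering: nothing is forwarded until all three checks pass, so an unauthorized session never reaches its target.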

Today’s enterprise networks (commercial and government) are made up of a conglomeration of many virtual networks. Each of these can be broken up into sub-networks which are then segmented. Network segmentation is taking a physical network and partitioning it into different logical networks. Network segmentation is used primarily to ensure the performance and security of the users and applications that are running over it.

Within a physical site, layer 2 (Ethernet) segmentation is most common, using Virtual Local Area Networks (VLANs). Between VLANs, layer 3 (IP) is used to route from one VLAN to another, whether locally or over a wide area network using MPLS or IPsec tunnels. One example is putting voice traffic and credit card traffic on their own VLANs and utilizing MPLS VRFs over the WAN to map to application servers on their own VLANs within a data center.

One analogy is to think of networking like our road system: the packets that run over networks are like cars and trucks going from one location to another. Every car or truck can leave a residence and start traveling toward its destination. Along the way, there may be toll roads, or borders when going from one country to another. As travelers go through a border, they are identified and then denied or allowed entry. In networking, firewalls play a large role in determining which traffic is allowed to go between network segments.

Next, we’ll look at how networks are segmented as a basis for figuring out how we secure the network and the building blocks behind the network. First, we will start with how networks use zones today, and are migrating to micro-segmentation, and then taking this further to the end goal of zero trust networking.

Zone Based Networking

Traditional networks are broken up into various segments or building blocks that are typically viewed as zones. This segmentation defines users, devices, services, and applications into different trust domains. Typically, users, computers, and servers within a given zone can freely talk with each other. In a LAN environment, this would equate to sharing an Ethernet broadcast domain. Moving between zones, however, generally requires routing and security policies. These policies can be as simple as allowing the cross-segment IP networks/addresses to communicate with each other but can also require additional rules such as TCP/UDP port numbers, user IDs, and website addresses via DNS names.
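A minimal sketch of such a zone policy check follows; the zone address ranges and the single cross-zone rule are assumed for illustration.

```python
# Illustrative zone-based policy: traffic within a zone flows freely;
# crossing zones requires a rule matching (src zone, dst zone, port).
from ipaddress import ip_address, ip_network

ZONES = {
    "clients":      ip_network("10.1.0.0/16"),
    "presentation": ip_network("10.2.0.0/16"),
}

RULES = {("clients", "presentation", 443)}  # allow HTTPS across the boundary

def zone_of(addr):
    for name, net in ZONES.items():
        if ip_address(addr) in net:
            return name
    return None

def allowed(src, dst, port):
    s, d = zone_of(src), zone_of(dst)
    if s == d:                       # same zone: any-to-any
        return True
    return (s, d, port) in RULES     # cross-zone: must match a rule

print(allowed("10.1.0.5", "10.1.9.9", 22))   # True  (intra-zone)
print(allowed("10.1.0.5", "10.2.0.7", 443))  # True  (rule match)
print(allowed("10.1.0.5", "10.2.0.7", 22))   # False (no rule)
```

Note how the intra-zone branch returns True unconditionally; that any-to-any behavior inside a zone is exactly the weakness the rest of this section examines.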

Zone-based networking evolved as part of a defense-in-depth strategy, in which a hacker would have to breach multiple network zones in order to get to confidential data. Because firewalls are expensive to purchase and support, organizations could create trusted zones protected by a firewall rather than putting a firewall on every server or endpoint. This model is fast, fairly simple, and cost effective.

Figure 1 is an example of a common enterprise zone-based security model. Next Generation Firewalls (NG-FWs) are used to control the “north & south” movement of network traffic and allow any-to-any communication within a segment.

Figure 1 – Traditional Zone Based Network Segmentation

Starting at the upper left in Figure 1, users and devices are allowed onto the trusted network once they have been authenticated. A device as depicted above has Network Admission Control (NAC) software on it that works with an authentication server such as RADIUS, which is integrated with Active Directory or an LDAP service to provide the AAA for network access. Once on the network in this scenario, the user is allowed to go anywhere within the trusted client network segment/zone.

In the lower left portion of Figure 1, untrusted users must use a Virtual Private Network (VPN) and go through the AAA process in order to get a private IP address, which then allows them onto the network. As users wish to access applications, they must cross network boundaries via NG-FWs, which apply security policies based on IP address and TCP/UDP ports. Users are allowed into the presentation and voice/video zones, but no further. Only the presentation services are allowed to access the application services, which in turn are only allowed to access the data services. This type of sequential access is called North-South networking, where security policies are applied when going between zones, not within a zone.

Continuing with the road analogy we described earlier: each enterprise is like a country, and within that country there are public and private places where people are allowed or denied access, depending on who they are and what they are doing. Within a country you are given identification such as a social security number or national ID number, a driver’s license, or some other means of government-supplied identification. People from outside the country must validate themselves through customs before entering. The businesses, clubs, and associations an individual has the right to access correlate to the various zones in the data center.

The traditional zone-based security model has many flaws including:

  • No East-West Security – East-West networking refers to traffic between devices or servers within the same segment. For instance, if one database server is compromised, the hackers can use it to launch an attack on the other database servers within that zone without interference or detection from a firewall.
  • Non-Symmetrical Rules – All the servers are allowed access to the management zone and the management zone to the servers. If the management zone is compromised, this gives attackers a “back door” into servers.
  • Large User Zone – All users and devices can be within a single zone domain, allowing malware to spread unfettered within this zone. The assumption that every endpoint has a security software client to prevent this breaks down in an IoT world with millions of devices with light and proprietary operating systems or no operating system at all.
  • Not End-to-End – Segments reside within each network, and when users are on different networks than the applications they are accessing, end-to-end control is lacking, which increases complexity. This is further aggravated by each cloud provider having its own proprietary segmentation scheme.

Due to these limitations, organizations are revisiting their zone-based network security architecture and adding micro-segmentation, especially within their private data centers and public cloud hosting environments.

Micro-Segmentation Based Networking

Because security within a domain/zone is only as strong as its weakest link, enterprises have begun further segmenting many networks into micro-segments. For instance, if hackers gain access to one server/segment in the data zone, they should not be allowed to try to hack another server within that zone. Thus, in the data center, micro network segmentation was created.

Micro network segmentation takes the traditional zone-based architecture and further segments networks within the zone to further enhance security. Micro-segmentation is most often used within data center networks to further segment applications within a zone. For instance, within the application zone, separating the servers that provide CRM and ERP applications.

Virtualization further complicates segmentation, since a virtualized server may have a single physical network port but support many logical networks whose services and applications reside across multiple security zones. Thus, micro-segmentation must work both at the physical network layer and within the virtualized networking layer.

Micro-segmentation controls both the “north & south” movement of traffic and the “east & west”, further limiting the size of broadcast domains. Figure 2 shows how different security zones, represented in different colors, could reside in a virtualized environment utilizing shared hardware.

Figure 2 – Micro-Segmentation Across a Virtual Computing Environment

Micro-segmentation is also occurring at the user level, placing different users on different networks in different segments. An enterprise may have nine different classifications or personas for users, from outside guest, to contractor, to employees with different levels of access. The nine segments have different levels of network trust and performance requirements, and different security and routing rules apply to each. Figure 3 is an example of a 3×3 segmentation model based on trust and network performance requirements.

Figure 3 – User Micro-Segmentation Example

Continuing our road analogy: enterprise security is similar to national security, and micro-segmentation is the equivalent of creating gated communities or further restricting where one can go within an office building. The “circle” of trust is much smaller within these areas, and additional security measures must be passed to be granted access.

The challenges with micro-segmentation include:

  • Non-Standard Virtualized Networking – While the world has standardized on Ethernet and IP at the physical network layer, the logical networking layer varies between virtualized platforms. Thus, if an organization has different virtualized platforms or is doing hybrid computing between multiple private and public cloud services, mapping of virtualized networks becomes an orchestration nightmare.
  • IP Address Centric – The rules governing what users and devices have access to are tied to their IP address and the TCP/UDP ports of the services and applications they wish to access. As users roam between networks (LAN, WLAN, Mobile, VPN, Partner) and get a new IP address on each network, providing a consistent Role Based Access Controls (RBAC) is challenging.
  • Binary Rules – Routers and firewalls are configured with Allow or Deny rules, an all-or-nothing access model that does not allow for more intelligent, dynamic, and granular rules granting various levels of access based on location, identity, time of day, and the number of resources to which access is granted.
  • IoT – Creating micro-segmented networks is expensive and thus is focused in data centers to protect confidential data. As users consume applications through many devices, and with the proliferation of sensors and cameras as part of the Internet of Things, today’s micro-segmentation tools do not scale cost-effectively.

Micro-segmentation is a step between having trusted zones, to further limiting the size of trusted zones, on a path towards a zero-trust network.

Zero-Trust Networking

While progress in security zones and micro-segmentation is a step in the right direction, it isn’t enough. This brings us to Zero Trust Networking. In a ZTN, all network traffic is untrusted. This means all network resources are accessed securely regardless of location or device, and a least-privilege network access strategy strictly enforces access controls. Every session that a user creates with other users or applications must be authenticated, authorized, and accounted for at the edge of the network where the network session is established.

According to Computer Security Institute in San Francisco, California, 60 to 80 percent of network misuse incidents originate from the inside network.[1] One reason for the severity of the internal threat is that the traditional firewall and Intrusion Detection Systems (IDS) are ineffective against much of the network misuse originating internally since their focus is on the externally originated traffic.

Zero Trust Networking takes segmentation all the way to the absolute end points of every user, device, service, and application on the network. The foundation is that “nothing can be trusted” and it starts with a premise that zero network sessions will be allowed to be established on the network and then builds from there. Whitelist rules, for example, are then created to allow users, devices, services, and applications to talk to one another.
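The default-deny premise described above can be sketched in a few lines; the endpoint and service names are hypothetical.

```python
# Illustrative default-deny model: zero sessions are permitted until an
# explicit whitelist entry grants them.

WHITELIST = set()   # start from zero trust: nothing may talk to anything

def may_connect(src, dst, service):
    return (src, dst, service) in WHITELIST   # default deny

print(may_connect("hvac-sensor", "billing-db", "sql"))   # False by default

WHITELIST.add(("pos-terminal", "billing-db", "sql"))     # explicit grant
print(may_connect("pos-terminal", "billing-db", "sql"))  # True
```

Everything not explicitly granted stays unreachable, which is the inverse of the zone model, where everything inside a zone is reachable by default.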

Zero Trust segmentation will break down an entire network to define every single end point with its own IP address and the services on that end point. This is the equivalent of having a network firewall at every IP address on an enterprise-routed network.

In the early days, the Internet architects focused on connecting anything and everything together with little consideration for security. As security became more of a focal point, firewalls were introduced to create boundaries within the Internet and between public and private networks. Additional firewalls were then added within an enterprise to further segment the network. ZTN is continuing this evolution all the way to the edge of the network where an end point connects to the network and passes and receives IP packets.

Zero Trust Networks as currently structured can only go so far. There is a missing link that will need to be addressed over time: the lack of integration between routing protocols such as BGP and Identity and Access Management (IAM) services. Routing has the intelligence to take packets from a source device, with an IP address that resides within a sub-IP network, and send them to the destination sub-IP network, IP address, and application. The routing rules reside within the routers that interconnect networks. While IAM can be used to get onto a network, today it is not used in determining how the packet is routed. Figure 4 illustrates how ZTN combines the routing table of routers with the AAA policies of a directory to allow or disallow a packet going from a source to a destination. More granular routing rules can be applied versus today’s binary rules, which can improve network performance and security controls.
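A simplified sketch of this combination follows, with an assumed routing table and directory; a real implementation would use longest-prefix matching against BGP-learned routes and a full IAM service rather than the toy structures here.

```python
# Illustrative next-hop decision that consults both a routing table and an
# IAM directory: a packet is forwarded only if its identity is authorized.
from ipaddress import ip_address, ip_network

ROUTING_TABLE = [                              # most-specific prefix first
    (ip_network("10.2.0.0/16"), "router-b"),
    (ip_network("0.0.0.0/0"), "router-gw"),    # default route
]

DIRECTORY = {("alice", "10.2.0.7")}            # identity -> grants (assumed)

def route(identity, dst):
    if (identity, dst) not in DIRECTORY:       # IAM check first
        return None                            # drop at the origin
    for prefix, next_hop in ROUTING_TABLE:     # then the prefix lookup
        if ip_address(dst) in prefix:
            return next_hop
    return None

print(route("alice", "10.2.0.7"))  # router-b
print(route("bob", "10.2.0.7"))    # None: unauthorized, never forwarded
```

The point of the sketch is the ordering: the directory check happens before the routing lookup, so an unauthorized packet is dropped at its origin rather than deep in the network.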

For decades, routers had trouble keeping up with the growth of network traffic, so the focus was on creating routers that could simply process packets as fast as possible. This created network architectures based on edge, distribution, and core, where the routing was done in the distribution layer and switching done at the edge and in the core. As routers move away from proprietary appliances to software that runs on the edge of the networks on commodity hardware, the restrictions on adding additional security and intelligence in routing have been lifted.

In order for IP routing to work with directories and enforce a zero-trust networking policy, the network must be able to maintain state. While firewalls and other security equipment maintain state, IP networks to date are stateless. Routers were kept stateless, as mentioned above, to keep them simple and fast. By adding state to routers, additional services can be added to make routing more dynamic, intelligent, and secure.
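A stateful edge router of this kind might be sketched as follows; flows are simplified to source/destination/port triples, and the whitelist entries are assumed for illustration.

```python
# Illustrative stateful forwarding: the first packet of a flow triggers the
# policy check; later packets in the same flow hit the session table.

WHITELIST = {("10.1.0.5", "10.2.0.7", 443)}
sessions = set()   # established flow state held in the router

def forward(src, dst, port):
    flow = (src, dst, port)
    if flow in sessions:        # known flow: fast path, no re-check
        return True
    if flow in WHITELIST:       # first packet: policy check at the edge
        sessions.add(flow)
        return True
    return False                # default deny

print(forward("10.1.0.5", "10.2.0.7", 443))  # True, session established
print(forward("10.1.0.5", "10.2.0.7", 443))  # True, served from state
print(forward("10.1.0.5", "10.2.0.7", 22))   # False, never whitelisted
```

The session table is what keeps the per-packet cost low: the expensive directory lookup happens once per flow, not once per packet.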

Figure 4 – Routing Plus Directories Enable ZTN

The final part of our road analogy is that every place someone lives or works has an address. The physical address is equivalent to an IP address, and the logical address is equivalent to the floor and room within a building where someone has an office. To be able to leave one address and go to another, there must be a specific policy allowing one to get onto the road and travel to and from the other address. In today’s world, you are allowed to leave your house and start traveling to your destination, and you may be blocked somewhere along the way, such as at airport customs. With ZTN, you are not allowed to leave your home until you have prior authentication and authorization to reach your destination.

While the way we socially interact in the physical world is not at a zero-trust level, in the virtual (digital) world we need to move to this zero-trust model all the way down to the network edge to mitigate escalating risk and to ensure the right access to the right resources at the right time. Security needs to include all aspects of an ecosystem, right down to the physical network level.

Why Enterprises Must Move to A Zero Trust Network Architecture

An enterprise can have great firewalls, end point protection software, server and application security capabilities and still be at risk for breaches and loss of data. The old days of perimeter-based security and providing limited segmentation within an enterprise are gone as more users are mobile, applications are moving to the public cloud, billions of IoT devices are being added, malware is residing everywhere, and hackers are becoming more sophisticated.

The network is the foundation for security. Designing a network security architecture and strategy with a default of denying all network access, and then building whitelists that explicitly allow access and sessions to be established, reduces enterprise risk from DDoS attacks, malicious software infections, and data breaches. If a malicious user cannot get to the “front door” of an asset, they do not have the ability to breach it.

Enterprises create identity and access management systems that validate users and grant permissions to access resources. These IAM systems are the foundation of security and of creating levels of trust, and they need to be incorporated into the IP routing within a network.

The market is changing, and previous assumptions are no longer valid. Enterprises and government agencies need to reconsider their network security assumptions. Following are the reasons why a zero-trust network is worth consideration.

Fluid Network Perimeter

The success of cyberattacks against enterprises and government agencies provides evidence that perimeter-centric security strategies are not effective. As examples, Target was compromised by a back-door attack through their heating and cooling vendor that led to millions of credit card numbers being compromised, and the city of Atlanta suffered a ransomware attack that crippled critical systems for over a week. No longer can one assume that everything inside an organization’s network can be trusted and that legacy countermeasures provide adequate controls to protect application traffic going across networks.

Gone are the days when an enterprise could force all external traffic through a single point in the network and trust everything within it. Trends driving the decline of the network perimeter are:

  • BYOD – Proliferation of mobile users who bring their own devices. Many enterprises still try to force all their users onto company-owned hardware with security software installed. The increasing reliance on temporary contractors, employees who work from home or at a customer site, the proliferation of intelligent mobile devices, and a next generation of workers who have never used Windows or the devices the company offers and who refuse to learn them, are some of the reasons enterprises are allowing mobile users to bring their own devices. These devices can be infected with malware, stolen, or left without the latest OS or application security patches.
  • Cloud – Greater adoption of public cloud applications. Enterprises are adopting 100+ different cloud services and applications, a number expected to grow into the thousands over the next five years. The migration to the cloud creates a hybrid data center hosting environment. Backhauling users to a data center and then out to a cloud service provider adds network latency, which in turn has a negative impact on application performance.
  • Pervasive Internet – Digital business is the integration of users and things. The Internet of Things (IoT) is forecast to grow from 12 billion connected things today to 40B in the next 5 years. Enterprises are connecting remote office sites using the public Internet and moving away from private MPLS networks as part of the push toward Software Defined WANs, to save money and improve application performance. Backhauling Internet-destined traffic is expensive and impacts application performance, which in turn impacts business revenues.
  • Everyone is a User – The distinction between employee, partner, and customers is getting blurred. Customers and partners can belong to on-line communities to promote and support business products and services. Customers provide feedback and offer new ideas for product management, videos are shared, and social media tracks everything in real-time. Increases in self-service offerings means employees, partners, and customers all utilize the same applications to get quotes, buy/sell, support, and bill for products and services. In some cases, partners have greater access and authority than internal employees.
  • Virtual Office Sites – Instead of all employees working at fixed locations owned by the company, employees are working everywhere: home, coffee shop, hotel, customer site, car. Different sites mean using different networks connected to the Internet, with associated IP addresses, such as LTE, broadband, public Wi-Fi, or a partner network.
  • Data and Applications Everywhere – Keeping confidential and private data in a couple of data centers is nice in theory, but not practical in today’s digital world. Users will cache data for when they are not connected to a network, data is available via APIs to thousands of partners, and enterprises are adopting a multi-cloud hosting strategy so they are not locked into a single vendor. Plus, many countries have adopted legislation on what data can be stored where, forcing much private and confidential data to be stored in-country.

TechVision’s “Identity is the New Perimeter” report by Doug Simmons, Gary Rowe and Nick Nikols provides details on new models for using identity services and verifiable claims to create a more secure “perimeter-less” enterprise infrastructure.

With a fluid network perimeter, there is no longer a clear demarcation point of network trust. Malware, hackers, and malicious employees can reside anywhere and everywhere. The only true defense is to start with a zero-trust network model and then create a whitelist of rules. Figure 5 shows the migration from a secure network to a fluid network.

Figure 5 – Fluid Network Perimeter

Internet of Things and 10x Growth in Number of Devices

Every 2.5-3 years, the number of devices on a common enterprise network doubles, along with corresponding bandwidth utilization. To add fuel to the fire, the explosion in the IoT market, with sensors and cameras, drives even greater growth in network-addressable devices. For instance, today there are 60-100 sensors on a car; by 2020 the number of sensors will grow to 200, and they will all be connectable to a network. By 2025, self-driving cars are projected to have over 500 sensors and 40 cameras. The IoT market is expected to grow from 250B “things” today to over 500B by 2021.

Over the past few years, network growth was driven by users with an increasing number of devices, including computers, tablets, and phones. Because these are intelligent devices, user authentication and authorization could be enforced before allowing them on the network. Also, the operating systems and applications were standard and could be updated on an automated and regular basis.

In the foreseeable future, we will see explosive network growth driven by IoT. The challenge with IoT is that most sensors and cameras are dumb and do not have a full or standard operating system that can be updated easily and regularly. It has been reported that some IoT devices were shipped with malware, while others have been infected in the field. The Satori IoT malware family, which recruits bots for DDoS attacks, has targeted ARC processors, which are embedded in over 1B IoT devices shipped each year.[2]

If the endpoint is not secured, then the only secure approach is to make the network intelligent and enforce AAA security policy. This is most effective at the very edge of the network where the IoT device is connected.

The dramatic growth in devices will drive large enterprises and government agencies to utilize IPv6 for addressing. Since users and applications will still largely be IPv4-based, Network Address Translation (NAT) will occur between the IoT devices and the IPv4 users and applications, which will further obscure and complicate network security.

Figure 6 – IoT Everywhere

Application Encryption Leaves Today’s Network Security in The Dark

In today’s 3-tiered network architectures of edge, distribution layer, and core, it is the distribution layer that provides the security controls through firewalls, intrusion detection, and deep packet inspections systems. In order for these systems to be effective, they must be able to see all the data within each packet. TLS encryption of applications from end-to-end leaves the network security equipment in the middle of the network blind and ineffective.

75% of today’s Internet traffic is encrypted between the client and the application. This will move to 90% by 2020.  The increase in TLS encryption is driven by:

  • Google – In 2015, Google started ranking content that was encrypted higher in its search engine than un-encrypted content. This drove most websites to encrypt their content
  • Cloud – Software as a Service (SaaS) providers such as Salesforce, Microsoft O365, ADP encrypt their applications. As more companies adopt SaaS offerings supporting users who are everywhere, application encryption is becoming the norm
  • Regulatory – HIPAA, PCI, and GDPR all require that financial, private, confidential, or regulated data be encrypted both at rest and in transit.

The level of encryption is going to increase, further making today’s network security solutions useless. Google announced[3] that it’ll be adding “DNS over TLS,” an extra level of encryption, in Android. This will prevent enterprises and ISPs from being able to monitor which sites a user goes to.

Web applications use proxies as Internet relays to identify a user’s intended destination and to permit or deny the user access to that site, as well as to log each site a user visits. DNS, the Domain Name System, is at the core of this process, and encrypting DNS queries will make simple proxies useless. Users will be able to visit any sites they’d like without a third party logging their Internet journeys. The spread of YouTube cute cat videos will continue.
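To make the exposure concrete, here is a small illustrative sketch (the hostname and query ID are made up) that builds a standard DNS query in Python and shows the destination name sitting in the clear on the wire — exactly what DNS over TLS would hide from an on-path observer:

```python
import struct

def build_dns_query(hostname: str) -> bytes:
    """Build a minimal DNS A-record query in RFC 1035 wire format."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed and sent as plain bytes
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("intranet.example.com")
# Every label of the queried name is readable by anyone on the path,
# so a simple proxy (or an attacker) can log the user's destination.
assert b"intranet" in packet and b"example" in packet
```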

Figure 7 – DNS Query Without TLS to a TLS-Encrypted Application

The next version of TLS, 1.3, will also hide the DNS name. Today, in the TLS 1.2 handshake to set up an application session, the DNS name is passed in clear text as shown in figure 8 below.

Figure 8 – Application & Site Identification Using TLS

To get around this, enterprises are obtaining the encryption keys of their application providers so they can inspect all the data going in and out of their enterprise. But as discussed above, with a fluid network perimeter, it becomes nearly impossible to inspect all data going in and out of an enterprise. Plus, the key management requirements and operations become overwhelming.

The only way for networks to truly be secure, when every application is encrypted, is for the network to enforce security at the edge of the network. When a user or device wants to establish a TLS session, the first packet must be analyzed against a whitelist allowing the session to be established or not. This zero-trust network policy approach is the only effective way of enforcing network security policies.
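The whitelist-driven first-packet check described above can be sketched roughly as follows; the identities, service names, and rules are hypothetical placeholders, not a real policy schema:

```python
from typing import NamedTuple

class SessionRequest(NamedTuple):
    user: str    # authenticated identity of the initiator
    device: str  # device identity/posture tag
    dest: str    # destination service name
    port: int    # destination TCP/UDP port

# Hypothetical whitelist: only explicitly granted (user, device, service, port)
# tuples may establish a session; everything else is denied by default.
WHITELIST = {
    ("alice", "managed-laptop", "payroll-app", 443),
    ("hvac-sensor-07", "iot-sensor", "telemetry-collector", 8883),
}

def admit_session(req: SessionRequest) -> bool:
    """Zero-trust admission: deny unless an explicit whitelist rule matches."""
    return (req.user, req.device, req.dest, req.port) in WHITELIST

assert admit_session(SessionRequest("alice", "managed-laptop", "payroll-app", 443))
assert not admit_session(SessionRequest("alice", "byod-phone", "payroll-app", 443))
```

The key design point is that the default answer is "deny": a session is refused unless a rule affirmatively grants it, which is the inverse of traditional blacklist-style filtering.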

Malware is Everywhere

Malware is any program or file that can harm a user, device, service or application. The AV-TEST Institute registers over 250,000 new malicious programs every day! [4]

Figure 9 shows the dramatic size and growth of types of Malware.

Figure 9 – Growth in last 5 years in Malware

Malware includes computer viruses, worms, Trojan horses, and spyware. These malicious programs can perform a variety of functions, including stealing, encrypting or deleting sensitive data, altering or hijacking core computing functions, and monitoring users’ computer activity without their permission.

Unintended malware includes backdoors put in intentionally by vendors or support organizations so that, if a system is not working properly, back-door network access can be used to diagnose and fix problems. The challenge is that these backdoors can be acquired by malicious users. They can exist on network equipment such as routers and firewalls; for instance, Cisco has announced the closing of four backdoors in the past four months.[5]

According to Risk Based Security’s 2017 Data Breach QuickView Report, there were 5,207 breaches recorded last year, surpassing the previous high mark by nearly 20%, set in 2015. The number of records compromised also surpassed all other years, with over 7.8 billion records exposed, a 24.2% increase over 2016’s previous high of 6.3 billion.

Edge Computing at Hyper-Scale

Gone are the days when an enterprise had a couple of data centers where all applications ran and a full security stack to isolate this environment. Low-latency application requirements will require that network and security architectures be pushed out to the edge.

The proliferation of IoT in combination with distributed computing will move the center of gravity from centralized data center to the edge. In a hyper-converged digital world, milliseconds matter. People using virtual reality and self-driving cars require near instantaneous data, computing, and connectivity. The round-trip network latency required for the next generation of applications will shrink down to five milliseconds.

This new Cloud 2.0 model will force a totally new network and security architecture. In today’s enterprise networks, backhauling traffic to a central data center is common. Enterprises centrally deploy and manage virtual private networks (VPNs), mobile device management, proxies, and private/public internetworking. They backhaul traffic since putting all the network security controls on the edge is too expensive. With highly distributed data centers in Cloud 2.0, security will also become more distributed.

Figure 10 – Example of Backhauling in an Enterprise Network

The digital world is where things (as in the Internet of Things) and users converge. Augmented reality applications are always on, providing contextual, dynamic, and interactive experience that is hyper-latency-sensitive. Orchestration will take place in centralized cloud data centers, but applications with artificial intelligence with massive amounts of local data will require widespread distribution to thousands of data centers for processing. The internetworking between all the distributed data centers owned by hundreds of providers using many differently managed IPv4/6 networks, both wireline and wireless, will add millions more segments.

Video – Cisco forecasts that 82% of traffic on public and private IP networks will be video by 2020.[6]  Enterprise IT will need to segment different types of video applications for both security and performance requirements. Figure 11 demonstrates nine different video segments based on levels of security trust and network performance requirements. Within each of these segments, IT can provide further user and application security, creating hundreds of segments just for video applications.

Figure 11 – Video Traffic Segmentation Example

The cost of putting networking and security software at the edge of networks is dramatically falling as startups bring new approaches to the market. Instead of a network architecture of edge, distribution, and core, where all the routing intelligence and security controls are at the distribution layer, future network architectures will have an intelligent edge running on any commodity hardware/VM with no distribution layer. The core in this type of architecture is a superfast IP forwarding plane.

The overhead in adding zero trust network security is low, since it is only the initial packets that are used to establish a network session that need to be heavily processed to determine authentication and authorization of the session. If the session is allowed to be established, then the rest of the packets in the session can flow unencumbered.
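A rough sketch of this first-packet-only processing, assuming a simple in-memory flow table and a toy policy function standing in for real authentication and authorization:

```python
# Heavyweight policy checks run only on the first packet of a flow; once the
# session is admitted, later packets match a flow table and are forwarded
# without re-evaluation. Names and the toy policy are illustrative.
authorized_flows = set()

def expensive_policy_check(five_tuple) -> bool:
    # Placeholder for authentication/authorization against IAM and a whitelist.
    src_ip, dst_ip, proto, src_port, dst_port = five_tuple
    return dst_port in (443, 8883)  # toy policy: only these services allowed

def forward(five_tuple) -> bool:
    if five_tuple in authorized_flows:      # fast path: O(1) lookup
        return True
    if expensive_policy_check(five_tuple):  # slow path: first packet only
        authorized_flows.add(five_tuple)
        return True
    return False

flow = ("10.0.0.5", "10.0.1.9", "tcp", 51512, 443)
assert forward(flow)                 # first packet: full check, flow installed
assert flow in authorized_flows
assert forward(flow)                 # subsequent packets: flow-table hit
```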

The Growing Sophistication of Hackers

Attacks on enterprises and government agencies are getting dramatically more sophisticated. The militarization of attacks includes state sponsored groups, organized crime, and malicious on-line communities that can profit from the disruption or theft from a breach. Hospitals and the medical community were hit especially hard in 2017 with medical records being sold for as high as $1,000 (USD) per record as compared to credit card numbers that are only worth 25 cents.[7]

Hackers are also becoming more difficult to catch as they are taking extraordinary steps to hide their activities. Like espionage, organized groups will deploy a multitude of strategies to steal information that can be worth hundreds of millions of dollars.  These strategies include:

  • Recruiting Employees – From temporary secretaries supporting high-level executives to disgruntled IT employees, money is used to incentivize employees to provide information or to use Webex remote desktop capability to gain access inside the network.
  • Patience – The process of hacking has been the same for decades: discover, identify, target. Once an organization and asset have been targeted, it is usually just a matter of time. For instance, if I know that company XYZ’s database server is running a specific OS, DB, and application firewall software, then all I have to do is wait until a known vulnerability is disclosed and hit the system faster than the company can patch. With thousands of known targets, new opportunities are announced daily.
  • Third Parties – The digital economy is all about APIs and utilizing information from partners. A TLS connection is set up from the enterprise to a third party, and encrypted data flows in and out of the organization with very little governance and oversight.
  • Stealing Assets – Executives and experts who travel internationally are frequently targeted. Cameras are used in public places to record login information and key fobs are carried with the laptop or tablet. Cloning devices is a common capability of most secret government agencies.

Measuring Network Security Vulnerability

ZTN is all about mitigating the risk of the increasingly sophisticated attacks. But what is the risk and is it worth the expense associated with the Zero Trust Network approach? Calculating the network attack surface is one way to measure network access risks.

Effective IT security is composed of good identity management, access controls, and logging with associated event management and analytics. Restricting access is one of the primary objectives of network security and why many enterprises are moving to a zero-trust security model.

At a high level, there is generally a 3-step process to hacking. The first step is to discover what is out there, second is to identify and analyze what is discovered, and the final step is to choose targets. By minimizing the network attack surface, hackers will find it difficult infiltrating other devices on a network, thus they are stopped in the discovery step and cannot proceed to identification and targeting of systems.

The network attack surface is the totality of all network devices and services that have access to an endpoint or an application on the network. The lower the score, the more secure the endpoint or application on the network is.

These are the four variables used to calculate the network attack surface:

  1. Number of devices with access – Number of devices that have network access to said device or application. On the LAN, this is all devices within the broadcast domain of a VLAN. On the WAN, it is all devices that have an IP address that can route to said device.
  2. Number of services – Number of ports that are open on said device for communication. Common ones are HTTP (port 80), HTTPS (port 443), SSH (port 22), and the list goes on.
  3. Directionality – Whether controls restrict who can initiate a TCP or UDP session (1 = yes, 10 = no)
  4. Application Encryption – A TLS 1.2 (with 1.3 on its way) session that validates the certificate for a session and provides 256-bit AES encryption and SHA-256 authentication, mitigating man-in-the-middle attacks (1 = yes, 10 = no)

If this seems complicated, it is because it is very complicated. The layers of potential areas of attack are deep and wide, and hackers often need only one weak spot to breach a network and its associated assets. A good example is an IoT surveillance camera that sits alone on its own VLAN with a private IP address and sets up an HTTPS connection to a server; firewall and routing rules do not allow the camera to talk to anything else, and the server initiates the conversation. The score here is 1 {1 device * 1 port * 1 direction * 1 (yes) for TLS}.

Number of IP devices with Access 1
Number of TCP/UDP Ports Open 1
Session Directionality Controls 1
TLS Encryption Used 1
Total – Network Attack Surface 1

Table 1 – The Score of a ZTN Device

A common but poor example of network security is to set up a camera to watch the entrance of a remote warehouse. At the warehouse, a router/firewall only allows sessions to be initiated to the Internet from within that network. There are 50 computers and systems on the LAN at this warehouse. The camera can be accessed through 40 different ports/services.

So, the warehouse camera has a network attack surface score of 200,000 {50 devices on the local network * 40 known open ports on this camera * 10 (no directionality controls on this camera for setting up network sessions, just the router) * 10 (no encryption on this model of camera, because the manufacturer assumes it will be used on a private network and the hardware/CPU cost of adding encryption is too high)}.

Network Attack Surface Calculator  
Number of IP devices with Access 50
Number of TCP/UDP Ports Open 40
Session Directionality Controls 10
TLS Encryption Used 10
Total – Network Attack Surface 200,000

Table 2 – The Score of a Non-ZTN Device
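The scoring in Tables 1 and 2 can be expressed directly as a small calculator; the function simply multiplies the four variables defined earlier:

```python
def network_attack_surface(devices: int, open_ports: int,
                           directionality_controlled: bool,
                           tls_encrypted: bool) -> int:
    """Compute the network attack surface score defined above.

    Directionality and TLS score 1 when present ("yes") and 10 when
    absent ("no"), per the four variables in the report.
    """
    directionality = 1 if directionality_controlled else 10
    encryption = 1 if tls_encrypted else 10
    return devices * open_ports * directionality * encryption

# ZTN camera (Table 1): isolated VLAN, one port, server-initiated, TLS
assert network_attack_surface(1, 1, True, True) == 1
# Warehouse camera (Table 2): 50 LAN devices, 40 open ports, no controls
assert network_attack_surface(50, 40, False, False) == 200_000
```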

If one of the devices in the warehouse gets infected with malware, this malware can then easily infect the new camera. There is then nothing stopping this camera from initiating a connection to a server in eastern Europe or anywhere in the world. The entire enterprise network is now compromised, plus this camera can be used in DDoS attacks.  The local router/firewall does not have the intelligence to generate an alert when suspicious events are occurring on this network.

Enterprises, which are, of course, a lot bigger and more complex than just one local network, have an exponentially greater risk. The current network architecture of a simple edge, complex distribution layer, and fast core, where routing and security are done at the distribution layer, will not meet the needs of a zero-trust network. As compute power moves to the edge, ZTN will follow this trend.

The biggest change in enterprise networking will be pushing routing and security to the very edge of the network in order to minimize the attack surface for everything that is connected. The adage of “if you cannot measure it, you cannot manage it” applies to network access security too. Calculating the network attack surface is one way of measuring network access security risks.

Figure 12 – Gauging Level of Risk

OSI Layer 5 – The Foundation for an Intelligent Network

To achieve a Zero Trust Network, organizations must make their networks more intelligent and push the intelligence to the very edge of the network. Plug and pray is not a networking strategy anymore. Security is the new top network design requirement, surpassing cost, performance, and reliability. An intelligent network needs to move up to layer 5 of the OSI protocol stack to be able to effectively and efficiently choreograph a Zero Trust Network.

The foundation of IP routing has not changed in over 20 years and is a standalone function that is not integrated with any other services. Networks operate by aggregating IP addresses into networks that are subsequently routed from source to destination addresses. IP addresses only provide location information. Security functions are bolted onto networks to provide point solutions including Network Admission Control (NAC), Firewalls, Intrusion Detection/Prevention Systems, Virtual Private Networks (VPNs). The foundation of the problem goes back to the OSI model for networking where historically layer 3 IP networking does not manage state and sessions. This is why layer 5 security functions need to be added to future intelligent networks.

Compounding the problem is that users, devices, services, and applications reside on different networks, leading to lack of end-to-end network security controls. Internetworking between networks requires going through a security stack. Every network has its own routing domain and associated security policies.

To solve this problem, networks must become more intelligent by moving up the stack to Layer 5, the session layer, where intelligent services reside (see Figure 13 below for an overview of the OSI model and the security services provided in the session layer). The session layer, the most critical layer for an intelligent high performing and secure network, provides the state and security functions for lower level network functions.

Figure 13 – OSI Reference Model with Session Layer Details

The session layer provides the mechanism for opening, closing, and managing a session between end users and applications. Sessions are stateful and end-to-end, which provides more granular network and security controls for application services. Firewalls, proxies, Session Border Controllers, WAN optimizers, load balancers, and IDS all manage network state and provide higher-level networking and security functions. Instead of requiring all of these “middleboxes” to be bolted on, network routers must provide these functions natively in next-generation Software Defined Networks (SDNs).

One of the core attributes of new SDNs is separating the data plane which is a high-speed IP forwarding plane, from the control plane which manages routes and enforces network performance and security policies. A lot of the focus of the software-defined WAN (SD-WAN) market is to provide the glue between applications and the network.

The challenge is that SD-WANs use tunnels and overlays such as IPsec and VXLAN, which are Layer 2 and Layer 3 based. These overlays don’t work well through firewall/NAT boundaries, so lack end-to-end user-to-application performance and security controls. SD-WAN vendors also partner with the session players to provide many of the stateful, intelligent network services, but finding one that provides all services is difficult.

Some will argue that intelligent routing is a Layer 3 function, such as using BGP. While routing is a Layer 3 function, intelligent routing has state and end-to-end performance and security controls. For instance, if a link has a burst of errors or high jitter, a Layer 3 routing protocol will not re-route the application unless the link goes down. Layer 3 routers prioritize the BGP keepalive packets that monitor link health at the highest level, so they are not impacted by network congestion, only by network outages.

Just like next-generation firewalls are moving further up the stack, next-generation networking has to do the same. The intelligent session layer features ensure the performance and security requirements are met for every organization.

The Software Defined Perimeter (SDP) is a research working group that was established in 2013 with the goal to develop a solution to stop network attacks against application infrastructure. Appendix 1 goes over SDP in more detail. SDP is a step toward ZTN, but it relies on an overlay network and clients on the end devices, instead of embedding IAM into core network routing.

Best practices on moving to a Zero Trust Network

So, if you buy into the concepts we’ve outlined, and your enterprise is considering ZTN, we’ll now provide a set of practical steps and best practices to get started in this journey. Zero Trust Networking is both an architecture and a philosophy that an organization has to adopt. We need to move from a “connectivity first” approach to one that considers security and risk mitigation early and throughout the design and implementation phases. IP networking was originally designed to be open and connect everything. Firewalls, Intrusion Detection, Deep Packet Inspection, Network Admission Control are all solutions that are “bolted on” to an IP network to add security by restricting IP networks, TCP/UDP ports, and users/devices accessing the network.

The starting point in this process is intelligent networking. A network must be intelligent in order to enforce Zero Trust Networking. This intelligence lies at OSI layer 5, where TLS, identity, UID, session state, and other services reside. Firewalls manage state and identity, while routers do not. One reason Directory Enabled Networks (DEN) did not take off 20 years ago was that IP routing (BGP, OSPF) is stateless, and enforcing DEN security requires that the network maintain session state. Moving from this intelligent-networking baseline to zero trust networks is supported by the best practices outlined below.

Best Practice #1 – Encrypt Everything on the Network, End-to-End

Since no user, device, service, or application can be trusted on a ZTN, all data in motion must be encrypted. It is no longer good enough to just encrypt data at rest.

The most common way to do this is to use TLS to encrypt the data end-to-end, from users to applications. Within IP networks, IPsec is the most common way to encrypt data. The problem with IPsec tunnels is that they are a point-to-point solution and do not go across network boundaries where firewalls are used. While it is true that an IPsec tunnel can be configured to run through a firewall, the firewall is blind to this traffic, which creates a network security hole.

The intelligent network will first check to see if an application is using TLS before deciding to encrypt the traffic at the network level. It is estimated that 75% of the traffic on the public Internet is already TLS encrypted, a figure that will grow to 90% by 2020. Too many of the early SD-WAN vendors use IPsec tunnels and encrypt all network traffic, even if it is already TLS encrypted at the application layer. This is a waste of router CPU resources and network bandwidth. The next generation of SDN solutions provides selective encryption and does not use point-to-point IPsec tunnels. Figure 14 shows the IPsec overhead added when encrypting an IP packet.

Figure 14 – IPsec Overhead
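One plausible way to implement the selective-encryption decision described above is to look at the first bytes of a flow for a TLS handshake record before applying network-level encryption. This heuristic sketch is illustrative, not a vendor implementation:

```python
def looks_like_tls(first_payload: bytes) -> bool:
    """Heuristic: a TLS session begins with a handshake record whose
    content type is 0x16 and whose version major byte is 0x03."""
    return (len(first_payload) >= 3
            and first_payload[0] == 0x16
            and first_payload[1] == 0x03)

def needs_network_encryption(first_payload: bytes) -> bool:
    # Selective encryption: skip flows already TLS-protected end-to-end,
    # avoiding the double-encryption waste described above.
    return not looks_like_tls(first_payload)

tls_client_hello = bytes([0x16, 0x03, 0x01, 0x00, 0x2F])  # truncated ClientHello
plain_http = b"GET / HTTP/1.1\r\n"
assert not needs_network_encryption(tls_client_hello)  # already encrypted
assert needs_network_encryption(plain_http)            # encrypt at network level
```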

Another disadvantage of IPsec encryption is that it encrypts the TCP/UDP header. Thus, any network optimization in the middle of the network is rendered useless. This is important on international and satellite links where bandwidth is at a premium and latencies are long.  Satellite modems for example will do a local TCP ACK and optimize the TCP window size to ensure the best throughput possible.

In the past, network engineers have been reluctant to support encryption for the following reasons which are no longer relevant:

  • Slows Routers Down – Routers have traditionally been proprietary appliances that were quite expensive, and adding network encryption meant a 10x CPU performance hit. In intelligent next-generation SDNs, routing is just software that runs on Commercial Off-The-Shelf (COTS) hardware. Thanks to Intel’s AES New Instructions, which accelerate encryption, the cost and performance hit of network encryption have become nominal. Plus, selective encryption keeps the router from having to encrypt everything, since only 10-15% of the traffic on a network needs to be encrypted at the network level. Finally, SDNs can scale both vertically and horizontally; horizontal scaling means that to support 100Gbps speeds, one can run 10 routers in parallel, each processing 10Gbps of traffic.
  • Loss of Visibility – With application encryption, network support tools and security appliances lose visibility into what is inside a packet. This makes troubleshooting application performance issues and providing Data Loss Prevention (DLP) at the network level difficult. Since the majority of hackers and malware now use TLS themselves, and network management tools and DLP equipment can accept the encryption keys, this is less of an issue.
  • Increased Complexity – For point-to-point network encryption, it is a best practice to rotate encryption keys on a regular basis, depending on the volume of traffic. Encryption key management and security add to network complexity. Thanks to tools dedicated to this function, key management has been automated.

To ensure additional security, banks, network service providers, and government agencies will do point-to-point encryption on their fiber, copper, and wireless links. This encryption is done at OSI layer 1 and hides all the meta-data of network sessions such as which addresses are in communication with each other and how often.

While most enterprises and government agencies use TLS to encrypt all their critical applications that carry private, confidential, or regulated data, there is still a small percentage of unencrypted traffic on the network that exposes critical data. Some examples include computer-to-printer and fax traffic, Computer Telephony Integration (CTI) data, phone calls, and peer-to-peer file sharing.

Even non-critical network sessions need to be encrypted to prevent man-in-the-middle attacks, where an attacker inserts themselves into the middle of an unencrypted network session and uses it to gain access to a device or application.

Thus, ZTN ensures every session going across the network is encrypted end-to-end, without double encrypting traffic unnecessarily. 100% of data in motion is encrypted.

Best Practice #2 – Authenticate All Network Equipment and Sessions

In order to ensure complete network security, all networking systems such as routers and switches should be authenticated prior to being allowed to pass network traffic. Users will add wireless routers and other network equipment to bypass IT, either to get something done quickly or to act maliciously. Many a network has been brought down by an unauthorized router being added to the network and changing the enterprise default route.

The second part of authentication is to ensure every new TCP/UDP session that is being established is first authenticated. Deny forwarding any packet at the ingress of the network unless there is an explicit policy to allow it onto the network. This means getting rid of layer 2 broadcast domains along with default routing in IP. While layer 2 Ethernet is very popular for being fast, simple, and cheap, it is not natively secure. IP routing and network intelligence must be pushed to the edge of the network, the distribution layer should be done away with, and the core of the network should be simple and fast.

A signaling mechanism can be used, as in our phone systems, to see if a session is allowed to be established prior to establishing it. With TCP, this is done in the SYN/SYN-ACK setup, and with TLS, in the exchange of keys. One of the benefits of the newly approved TLS 1.3 standard is that it completes the secure setup with a single exchange of packets, which makes it both faster and more secure.

Figure 15 – TLS 1.2 & 1.3 Setup Comparison

A best practice is to follow the FIPS 140-2 standard, using 256-bit AES encryption and HMAC-SHA256-128 authentication. One should check with prospective vendors to see if they are FIPS 140-2 certified.
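As a sketch of what HMAC-SHA256-128 authentication looks like in practice (the key and payload below are placeholders), using Python's standard hmac module with the tag truncated to 128 bits:

```python
import hashlib
import hmac
import os

def authenticate_packet(key: bytes, packet: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag truncated to 128 bits (HMAC-SHA256-128)."""
    return hmac.new(key, packet, hashlib.sha256).digest()[:16]

def verify_packet(key: bytes, packet: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(authenticate_packet(key, packet), tag)

key = os.urandom(32)  # 256-bit shared key (placeholder; real keys come from IAM/KMS)
pkt = b"session-setup-payload"
tag = authenticate_packet(key, pkt)
assert len(tag) == 16                       # 128-bit truncated tag
assert verify_packet(key, pkt, tag)         # authentic packet accepted
assert not verify_packet(key, b"tampered-payload", tag)  # forgery rejected
```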

Best Practice #3 – Use Intelligent Segmentation

Network segmentation logically separates traffic over the same physical network.  Enterprises rely on segmentation to isolate users and applications for security and performance requirements. The most common uses of segmentation are prioritizing real-time traffic such as voice and isolating credit card authorization traffic to meet PCI requirements. But future business needs for segmentation will be 1,000x greater than they are today, creating an untenable situation for enterprise IT with today’s routers and firewalls.

As discussed earlier in this research, enterprises are going from zone-based segmentation to micro-segmentation. Intelligent segmentation takes this a level deeper, creating a segment for every user, device, service, and application. In the ZTN industry, there are different names for this type of segmentation, including 1-1, nano-, and hyper-segmentation.

Today’s network segmentation technologies are limited due to their OSI layer 2 & 3 dependencies. VxLAN is the most common segmentation used in data centers, VLANs are most common in offices, and VRFs are used over the WAN. The problem with these layer 2 and 3 segmentation methods is they use only MAC or IP addresses. An intelligent segmentation strategy has the following additional benefits:

  • Service and Application Segmentation – Being able to segment network traffic not only on the device MAC and/or IP address but also being able to segment traffic based on user service and application. For instance, a point of sale device in a retail store should carry its credit card authorization data on a different network segment than an inventory or device management application as demonstrated in figure 16. Also, an application using TLS (service port 443) can be on a different segment than a standard web-based application (service port 80). So, a TLS session can be routed directly from device to application, while other sessions are routed through a security stack for additional security controls.

Figure 16 – Point of Sale Device With 6 Segments

  • Hierarchical Rules – The ability to create segments and policies based on a hierarchy that mirrors the IAM directory. The management and administration of Access Control Lists (ACLs) can be overwhelming without hierarchical rules, because ACLs only work on contiguous IP address space and have to be applied to each defined segment. A user, device, or service can belong to many different network segments, and segments can be defined in a hierarchy with the rules from one segment cascading down into other segments based on security attributes such as trust level and network attributes such as performance. These attributes then define levels of access, authorization/encryption, and QoS from the user/device to the application/service. Figure 17 represents an example of an organizational hierarchy where executives would be given the greatest trust and network performance while guest wi-fi would get the least.

Figure 17 – Example of Hierarchical Segmentation

  • End-to-End – The ability to provide segmentation across many networks and through Firewall/NAT boundaries. Many networks can include cloud providers, Internet, MPLS, IPv4 & IPv6 networks and utilize multiple network paths for a single session. The ability to pass security and network policy through a firewall that is not owned or administered by the organization is part of this requirement. To be able to manage and log this, a unique session ID for every flow across networks is required. This creates a unified service-centric multi-tenanted fabric with a common segmentation/security model across a largely fragmented network infrastructure.
  • Full Isolation – The ability to encrypt every segment and provide both hop-by-hop and end-to-end authentication. All traffic within a segment should use the same encryption, while traffic on different segments should have different encryption keys. Each segment should be invisible to other segments, so if one is compromised, hackers cannot discover the others. Different rules can apply based on the directionality of the traffic and who is initiating a session. Note: most SD-WAN vendors today use the same IPsec tunnel encryption for all their segments, so if one segment is compromised, so are the others.
  • Infinite – No set limit on the number of segments that can be defined and the ability to dynamically add, move, change, and delete segments. Remain elastic and scalable as usage fluctuates. One reason that VxLAN was introduced is that VLANs only scale to 4096 segments.
  • Auto-Distribution – Ability to distribute segment definitions and policies within a domain/authority and across boundaries to another domain/authority. This is feasible with SDN architectures that separate the control plane from the data plane.
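
As an illustrative sketch (not drawn from any specific product), hierarchical rule cascading can be modeled as attribute inheritance: a segment resolves any policy attribute it does not override by walking up to its parent. The segment names and attributes below are hypothetical.

```python
# Hierarchical segmentation sketch: child segments inherit policy
# attributes (trust level, QoS class) from their parent unless they
# override them. All names and attribute values are illustrative.

class Segment:
    def __init__(self, name, parent=None, **overrides):
        self.name = name
        self.parent = parent
        self.overrides = overrides  # attributes this segment redefines

    def policy(self, attribute):
        """Resolve an attribute by walking up the hierarchy."""
        if attribute in self.overrides:
            return self.overrides[attribute]
        if self.parent is not None:
            return self.parent.policy(attribute)
        raise KeyError(attribute)

# The root policy applies to the whole organization...
org = Segment("org", trust="low", qos="best-effort")
# ...while child segments override only what differs.
execs = Segment("executives", parent=org, trust="high", qos="priority")
guest_wifi = Segment("guest-wifi", parent=org, trust="none")

print(execs.policy("trust"))     # "high" (overridden)
print(guest_wifi.policy("qos"))  # "best-effort" (inherited from org)
```

Defining only the deltas at each level is what keeps rule counts manageable compared with flat, per-segment ACLs.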

In order to scale network segmentation, application and endpoint identity and access controls using directories must integrate with network routing and security policies. The software-defined WAN (SD-WAN) space is starting to provide this from the branch office to the data center and cloud, but to succeed, software-defined networking needs to go from the endpoints of users and things all the way to the containers hosting the applications. TechVision Research is currently developing a research report that focuses on the rapidly emerging SD-WAN market and enterprise use cases.

One foundational challenge is that the segmentation technologies in the LAN, WAN, data center, and cloud are all different. If you’re a retailer with 20 VLANs in a store, going across multiple networks (MPLS, Internet, LTE, and VSAT) to multiple data centers and many different cloud providers, managing network segmentation through routers, firewalls, load balancers, and WAN optimizers leads to “ACL hell.” This complexity results in networks that are fragile, costly, inflexible, and insecure. This is a stark contrast to most enterprise future state models that focus on agile, fast, flexible, dynamic and secure development and deployment models.

To support future requirements, network routing and security must work together, not be diametrically opposed as they are with today’s routers and firewalls.

Best Practice #4 – Incorporate IAM Into IP Routing

Getting identity and access management right is the foundation of great security and is a necessary prerequisite for modern IP network routing. Every organization has directories and identity stores that define users, entitlements, and the controls governing who has access to which resources.

Network Admission Control (NAC) tries to do this but fails to meet the requirements of a Zero Trust Network, since it starts with the assumption that there is a trusted network perimeter. NAC uses a directory to provide AAA services to a user or device wanting to gain access to a network. Figure 18 shows a common network architecture using NAC; below we describe why this model is no longer effective in most modern enterprises.

Figure 18.  Common NAC Architecture

NAC has the following deficiencies:

  • Requires Client – NAC software must reside on the device requesting access to a network. IoT devices, printers, and BYOD endpoints typically do not run the software client. NAC software is written for common operating systems, so with the explosion of IoT devices running proprietary OS’s, it will not work for next generation networks.
  • Lacks Granular Access Controls – Once a user and/or device is granted network access, controlling what devices, services, and applications the user has access to is limited, especially when going across network security boundaries. When a user gets access to a VLAN, they have access to every device on that VLAN. Levels of trust are usually binary – allow or deny, which does not allow for more granular access. Access is based on IP address and network, not services or applications.
  • Devices & Users Only – NAC solutions do not extend into data centers or resources running in the cloud. In virtualized and cloud environments that share an underlying physical network interface and IP address, NAC does not work.
  • Static – NAC does not dynamically encrypt network traffic and any policy changes require an administrator to go in and make changes.

IP networking by itself is simple and dumb. IP addresses are aggregated into IP networks which are used to define a path from one location to another. With ZTN, least privileged access control policies are pushed all the way down to the network level.

Adding strong IAM services enables the IP network to become more intelligent and provide dynamic and granular network security controls. All users, devices, services, and applications on the network can be associated with an identity without the need for a client at either end of the network session. 1:1 network security access policies can be created that allow a specific user with a specific device to access only specific services and applications.
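
A minimal sketch of the 1:1 policy idea described above, with hypothetical identities: the network defaults to deny and permits a session only when the exact user/device/service triple appears in a policy set derived from the IAM directory.

```python
# Zero trust default-deny sketch: a session is allowed only when the
# (user, device, service) triple matches an entry derived from the IAM
# directory. All identities below are hypothetical.

ALLOWED = {
    ("alice", "laptop-042", "payroll-api"),
    ("bob", "tablet-007", "crm-app"),
}

def session_permitted(user, device, service):
    """Permit only explicit 1:1:1 matches; everything else is denied."""
    return (user, device, service) in ALLOWED

print(session_permitted("alice", "laptop-042", "payroll-api"))  # True
print(session_permitted("alice", "tablet-007", "payroll-api"))  # False
```

Note that the same user is denied from an unregistered device: trust attaches to the combination, not to the user alone.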

Best Practice #5 – Simplify Network Security with Address Abstraction

One of the many (oh so many) “Holy Grails” for IP networking is the ability for an application to automatically request network resources and security.  Intent-based networking (IBN) is the latest iteration, following past failures such as RSVP, IGMP, and NSIS.

Intent-Based Networking is defined by:

  • Translation – The network takes a higher-level policy (what) as input and translates it to the necessary network configuration (how).
  • Automation – The network automatically configures the appropriate network changes via network automation and orchestration.
  • Dynamic – The network ingests real-time network status information and will reroute traffic when a network path becomes congested and performance falls below a pre-defined threshold.
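
The translation step above can be sketched as compiling a declarative intent (what) into per-device configuration (how). The device names, fields, and ACL syntax below are illustrative assumptions, not any vendor's API.

```python
# Intent-based networking sketch: a high-level intent is translated
# into device-level configuration stanzas. Field names and the ACL
# syntax are invented for illustration.

def translate_intent(intent):
    """Map a declarative intent to per-device configuration."""
    configs = []
    for device in intent["devices"]:
        configs.append({
            "device": device,
            "acl": f"permit {intent['source']} -> {intent['destination']}",
            "qos_class": intent.get("qos", "best-effort"),
        })
    return configs

intent = {
    "source": "branch-users",
    "destination": "erp-app",
    "qos": "priority",
    "devices": ["edge-router-1", "dc-firewall-1"],
}
for cfg in translate_intent(intent):
    print(cfg["device"], cfg["acl"], cfg["qos_class"])
```

The automation step would then push each generated stanza to its device; the dynamic step would re-run translation when telemetry shows a path falling below threshold.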

If we combined the best of snail mail, telephony, and IP addressing, we could come up with a new IP addressing schema that would simplify, secure, and scale all global IP communications. While IPv6 addresses the scalability issue with IPv4, it does not fix the complexity, insecurity, and lack of guaranteed performance of today’s IP networks, which is why, after 20 years, it is still a niche protocol.

Addressing is the backbone of all communication systems such as the physical address for snail mail, the telephone number for phones, and the IP address for data networks. Each of these addressing schemas has attributes in common, and each has limitations. By combining the best of all three, we could establish a new IP addressing schema.

This new, next generation addressing schema should include the following attributes to support the networks and use cases most enterprises are aspiring towards:

  • Name-Based – An address based on names that can be easily understood. Snail mail addresses are a good example; telephone numbers and IP addresses are bad examples. IP networks use DNS to translate numbers to names to help users, but because the underlying IP networks are not natively name-based, they remain very complex, especially when internetworking across many IP networks through firewalls with network address translation boundaries.
  • Location Independent – An address that is free to roam across networks and to different physical locations. Telephone numbers are a good example of an address schema that can be ported between different networks and can roam around the world. Snail mail and IP networks require a change of address to do this.
  • Hierarchical – Addresses that are in a hierarchy to ensure optimal network routing performance along with ensuring security policies through infinite segmentation. All of today’s addressing schemas are hierarchical, starting with country and then region; beyond that, their hierarchies differ.
  • Signaling – Addressing that first signals the far end for access and authentication for security, routing, and billing purposes. Telephone networks use signaling to seek permission and routing rules to establish a call. Snail mail and IP networks allow all traffic onto the network and only block it at the far end. This generates a lot of unwanted and unneeded traffic and enables Distributed Denial of Service attacks.
  • Multi-Address – Enabling a single device or user to have many different addresses. One can have many different identities at the same physical snail mail address. Multiple addresses also exist at the application layer, such as email accounts and social media handles. Different identities and communities drive users to have multiple phone and network addresses on the same physical device.

To start the move toward naming instead of IP addresses, the National Science Foundation created an initiative called Named Data Networking (NDN) in 2010 as a research project to create the architecture for the future Internet. NDN changes the paradigm used by traditional networks, moving away from IP and its use of numbers for address assignment and routing. NDN defines a network that transports data containers between two endpoints, or nodes, with unique names (similar to URLs). The NDN scheme allows blocks of data to be stored, digitally signed, and transmitted across nodes, under names that higher-level applications can understand.
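
Because NDN forwards on hierarchical names rather than numeric addresses, its forwarding decision is a longest-prefix match over name components. The sketch below illustrates the idea with a hypothetical forwarding table; the names and next hops are invented for illustration.

```python
# Name-based forwarding sketch: longest-prefix match over the
# components of a '/'-separated NDN-style name. Table entries are
# hypothetical.

FIB = {  # name prefix -> next hop
    ("edu",): "isp-gateway",
    ("edu", "ucla"): "peer-link-1",
    ("edu", "ucla", "cs"): "lab-router",
}

def next_hop(name):
    """Return the next hop for the longest matching name prefix."""
    parts = tuple(name.strip("/").split("/"))
    for i in range(len(parts), 0, -1):  # longest prefix first
        if parts[:i] in FIB:
            return FIB[parts[:i]]
    return None  # no route: the request is dropped

print(next_hop("/edu/ucla/cs/video/frame1"))  # "lab-router"
print(next_hop("/edu/mit/ee"))                # "isp-gateway"
```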

Figure 19 – IP vs NDN Protocol Stack

Whether NDN or a variation of it replaces IPv4/6, what’s becoming clear in software-defined networking (SDN) is the concept of IP address abstraction. So, while an endpoint still has an IP address for now, this address can be associated with a name that’s readable by humans and applications. With IP address abstraction, it’s possible to support duplicate IPv4 addresses and networks, allowing an enterprise to use RFC 1918 private addressing such as 10.x.x.x multiple times. It could do this to support new IoT devices, for instance.

One potential example of a next generation IP addressing schema uses names based on tenants (people, places, or things) and services (applications, media types). A Qualified Service Name (QSN) combines tenants and services into a hierarchical naming schema and displaces the need for overlay networks such as VxLAN and IPsec. This allows for more granular and dynamic routing and security controls within networks and supports Zero Trust Networking end-to-end.
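
As a sketch only: if a QSN concatenates tenant and service hierarchies, it can be split back into its components for routing and policy decisions. The "tenant.site.device/service.media" format below is an assumption for illustration, not a published specification.

```python
# Hypothetical QSN parser: splits a qualified service name into its
# hierarchical tenant and service components. The format is assumed
# for illustration only.

def parse_qsn(qsn):
    tenant_part, service_part = qsn.split("/")
    return {
        "tenant": tenant_part.split("."),    # e.g. org / site / device
        "service": service_part.split("."),  # e.g. app / media type
    }

qsn = parse_qsn("acme.chicago.pos-17/payments.https")
print(qsn["tenant"])   # ['acme', 'chicago', 'pos-17']
print(qsn["service"])  # ['payments', 'https']
```

Each component of the parsed hierarchy could then carry its own routing and security policy, in the spirit of the hierarchical segmentation described earlier.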

Figure 20 – Example Word-Based Naming Schema

Universal Resource Name, Locator, and Identifier (URN, URL, URI) is the current strategy for defining services on IP networks. The problem is that DNS, which translates names to IP addresses, is not embedded in the network infrastructure of routers, firewalls, and WAN optimizers. Until the two can be combined with the above enhancements from the PSTN and mail systems, the Internet and private enterprise IP networks will continue to be plagued with security, performance, and complexity challenges that IPv6 will not solve.

5G and other next generation networks are testing NDN with support from many universities, large network carriers, and new and traditional networking companies.

Best Practice #6 – Real-Time Anomaly Detection

Hackers are usually inside an enterprise for months or years before they are discovered. Since enterprises have not spent the time precisely configuring how every user, device, service, and application is supposed to behave on the network, when an anomaly occurs, it goes undetected.

ZTN can immediately detect and shut down a user, device, service, or application once it becomes compromised and starts behaving badly. As soon as an unauthorized attempt is made to establish a network session to another user, device, service, or application, an alert can be generated. While one false alert is not a critical alarm, as the number of alerts grows, so does the criticality. Typically, when a device is infected with malware, it will attempt hundreds of network sessions to discover what is out there or to try to flood the network in a DoS attack.
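
The session-flood behavior described above lends itself to a simple sliding-window detector: count new-session attempts per source and escalate once a threshold is crossed. The window size and threshold below are illustrative.

```python
# Sliding-window anomaly sketch: track session attempts per source and
# alert when the count in the window exceeds a threshold. Values are
# illustrative, not tuned recommendations.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 100  # an infected host often attempts hundreds of sessions

attempts = defaultdict(deque)  # source -> timestamps of recent attempts

def record_attempt(source, now):
    """Log an attempt; return True once the source exceeds the threshold."""
    q = attempts[source]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop attempts that fell out of the window
    return len(q) > THRESHOLD

# A compromised device scanning the network trips the alert quickly:
alerted = [record_attempt("10.0.0.5", now=0) for _ in range(150)]
print(alerted.index(True))  # 100 -> alert fires on the 101st attempt
```

In a ZTN, crossing the threshold would trigger revoking the device's session privileges rather than merely logging.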

One evolving best practice as enterprises move to public cloud services is to create a specific account just for logging, with unique credentials and multi-factor authentication. The second thing a malicious actor does once they have broken into a system is to start deleting logs to hide their tracks. By making this logging account read-only and giving it different credentials, it becomes nearly impossible for a malicious actor to mask what they are doing. Adding anomaly detection to the logging system enables organizations to identify in near real-time when a service or application has become compromised.

Best Practice #7 – Unique Network Session ID

Every network session that’s established should have a unique ID. This unique ID allows the session to be identified as it crosses NAT/PAT borders (usually through a firewall) where the IP addresses and port numbers are changed. This ID is also critical for logging and auditing purposes. Enterprises should already have user, service, application, and data IDs as part of the metadata used to govern, control, and log usage. The network session ID then maps into these other IDs, bringing governance, control, and logging all the way down to the very edge of networks.
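
A minimal sketch of minting a unique session ID and binding it to the other identity metadata; the field names are hypothetical.

```python
# Session ID sketch: a globally unique ID is generated per session and
# mapped to user/device/service identity so the flow stays traceable
# across NAT rewrites. Field names are illustrative.
import uuid

def new_session(user_id, device_id, service_id):
    """Mint a unique session ID and bind it to identity metadata."""
    return {
        "session_id": str(uuid.uuid4()),  # survives NAT address rewrites
        "user_id": user_id,
        "device_id": device_id,
        "service_id": service_id,
    }

s = new_session("alice", "laptop-042", "payroll-api")
print(len(s["session_id"]))  # 36 -> a standard UUID string
```

Because the ID, not the mutable IP/port tuple, is logged at each hop, audit records from different network domains can be correlated into one end-to-end view of the session.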

Sampling of Zero Trust Networking Vendors

128 Technology

128 Technology is a next generation routing and security software company based out of Boston, MA.  Founded in 2014, they incorporate session state into routing.

128 Session Smart routing and security software provides adaptive encryption, session authentication, and real-time anomaly detection. Unlike most SD-WAN vendors, which use IPsec tunnels, 128 Technology routes sessions natively. 128 Technology believes the key to true Zero Trust Security lies in being session-aware. With session directionality built in, each route becomes a secure vector that tightly controls access to the destination or service.

Aporeto

Aporeto is a Zero Trust security solution for microservices, containers and the cloud based out of San Jose, CA.  Founded in 2015, they incorporate IAM into the network and application connectivity.

Aporeto uses identity context, vulnerability data, threat monitoring and behavior analysis to build and enforce authentication, authorization and encryption policies for applications. With Aporeto, enterprises implement a uniform security policy decoupled from the underlying infrastructure, enabling workload isolation, API access control and application identity management across public, private or hybrid cloud.

Centrify

Centrify delivers Zero Trust Security through next generation access and is based out of Santa Clara, CA. Founded in 2004, Centrify builds its Zero Trust Security model on the assumption that users inside a network are no more trustworthy than those outside the network.

Centrify verifies every user, validates their devices, and limits access and privilege. They also utilize machine learning to discover risky user behavior and apply conditional access.  Centrify’s Next-Gen Access uniquely converges Identity-as-a Service (IDaaS), enterprise mobility management (EMM) and privileged access management (PAM).

Cyxtera

Cyxtera is a large data center colocation company that has a division that focuses on security.  Founded in 2017 and based out of Coral Gables, FL, Cyxtera security creates a software defined perimeter.

Cyxtera’s Appgate software enforces the zero-trust model so that anyone attempting to access a resource must authenticate first. All unauthorized resources are invisible. This applies the principle of least privilege to the network and dramatically reduces the attack surface. By default, users are not allowed to connect to anything. Instead, zero trust ensures that once the proper access criteria are met, a dynamic one-to-one connection is generated from the user’s machine to the specific resource needed. Everything else is completely invisible.

Illumio

Illumio is a computer software company focused on micro-segmentation and based out of Sunnyvale, CA.  Founded in 2013, Illumio uses micro-segmentation to prevent the spread of breaches inside data center and cloud environments.

The Illumio Adaptive Security Platform® protects critical information with real-time application dependency and vulnerability mapping coupled with micro-segmentation that works across any data center, public cloud, or hybrid cloud deployment on bare-metal, virtual machines, and containers.

Luminate

Luminate was founded in 2017 and is based out of Palo Alto, CA. Its agentless platform provides a Zero Trust secured access architecture to all corporate resources and applications, on demand.

Based on Software-Defined Perimeter (SDP) principles, Luminate gives users one-time access to a requested application while all other corporate resources are cloaked without granting access to the entire network. This prevents any lateral movements to other network resources and eliminates the risk of network-based attacks.

Palo Alto

Palo Alto Networks is a leader in cyber security, including its firewall products, and is based out of Santa Clara, CA. Founded in 2005, Palo Alto recently hired John Kindervag, the former Forrester analyst who introduced the Zero Trust Security model in 2010.

Palo Alto uses Zero Trust Security to address lateral threat movement within the network by leveraging micro-segmentation and granular perimeters enforcement, based on user, data and location. Lateral movement defines different techniques that attackers use to move through a network in search of valuable assets and data.

Next Steps

While it will take enterprises and government agencies years to implement zero trust networking, TechVision Research recommends that organizations start the journey as soon as possible. Leading enterprises that have moved forward in implementing ZTN started with the following use cases:

  • Remote Access – Replacing their VPN solution with a ZTN solution that is tightly integrated with their directory. In the rollout, enterprises should start with their higher risk users as classified by:
    1. International Travelers – Those who travel on business to foreign countries with sensitive information
    2. System Administrators – IT professionals who have administrative rights to services and applications and make changes remotely
    3. Executives – Those who have access to confidential, private, or regulated data
  • Cloud Access – New applications being developed and deployed in the cloud that are mission critical to an organization. In Cloud 1.0 migrations, enterprises moved applications that were used to manage the business such as email, productivity tools, and sales management. In Cloud 2.0 migrations, enterprises are moving critical applications that are used to run the business such as ERP, Contact Center, and financial.
  • Multi-Cloud – Critical applications that utilize services across many different cloud providers. Since each cloud provider has their own proprietary security services, providing ZTN creates a security foundation that can be consistently applied to multi-cloud services.

The biggest barrier to moving to ZTN is the process of automating the access policies. Importing and utilizing IAM information to create dynamic routing policies governing which network sessions are allowed to occur, and controlling access accordingly, is just starting to evolve in the market.

Conclusion

With security as the top network design priority, we can no longer implicitly trust any user, device, service, or application. Zero Trust Networking is part of a defense in depth strategy.  An intelligent network is required to enforce security starting at the network edge. We cannot rely solely on security software on devices and within applications to prevent DDoS attacks, the spread of malware, or data breaches.

Zero Trust Networking means stopping malicious traffic at its origin, not within the network nor at an endpoint or application. In the physical world, anyone can leave their home or business, come up your driveway, and knock on your door. They may not have the keys to get in, but they can check out your place and wait for an opportunity. In the digital world, the same thing occurs on networks. But with a Zero Trust Network, someone must be authenticated and authorized before their traffic ever leaves their device to communicate with yours.

The dramatic growth in IoT devices that are “dumb”, the loss of a controlled enterprise network perimeter, and the increased sophistication of hackers will drive enterprises to consider ZTN. Trust domains have been getting smaller for years, and ZTN takes them all the way to the edge with a 1:1 trust domain between user, device, service, and application.

There are three primary benefits of moving to a zero-trust network. The first is the elimination of public and private network borders and treating all private and public IP networks with the same zero trust policy. The second is decoupling security from the underlying IP network and adding OSI layer 5 intelligence to the very edge of networks.  The third is that every user, device, service, application, and data identity can be mapped to the network session creating a security in depth strategy that spans the entire OSI model.

Enterprises tired of applying thousands of ACLs will find that ZTN is actually simpler and requires fewer rules. How easily an organization can move to ZTN depends on automating the access policies. Importing and utilizing IAM information to create dynamic routing policies governing which network sessions are allowed to occur is the foundation for creating these rules.

About TechVision

World-class research requires world-class consulting analysts and our team is just that. Gaining value from research also means having access to research. All TechVision Research licenses are enterprise licenses; this means everyone that needs access to content can have access to content. We know major technology initiatives involve many different skill sets across an organization and limiting content to a few can compromise the effectiveness of the team and the success of the initiative. Our research leverages our team’s in-depth knowledge as well as their real-world consulting experience. We combine great analyst skills with real world client experiences to provide a deep and balanced perspective.

TechVision Consulting builds off our research with specific projects to help organizations better understand, architect, select, build, and deploy infrastructure technologies. Our well-rounded experience and strong analytical skills help us separate the “hype” from the reality. This provides organizations with a deeper understanding of the full scope of vendor capabilities, product life cycles, and a basis for making more informed decisions. We also support vendors in areas such as product and strategy reviews and assessments, requirement analysis, target market assessment, technology trend analysis, go-to-market plan assessment, and gap analysis.

TechVision Updates will provide regular updates on the latest developments with respect to the issues addressed in this report.

About the Authors

Sorell Slaymaker has 30 years of experience designing, building, securing, and operating IP networks and the communication services that run across them. His mission is to help make communication easier, cheaper and more secure since he believes that the more we communicate, the better we are. Prior to joining TechVision Research, Sorell was an Evangelist for 128 Technology which is a routing and security software company.  Prior to that, Sorell was a Gartner analyst covering enterprise networking, security, and communications.

Sorell is an IT Architect with a focus on network, security, and communications architecture.  He specializes in IT Architecture – Network Architecture, SIP Trunking, Contact Centers, Unified Communications, and Security Architecture.

Appendix 1 – Software Defined Perimeter

The Software Defined Perimeter (SDP) is a research working group that was established in 2013 with the goal of developing a solution to stop network attacks against application infrastructure. While it advocates many of the principles used in Zero Trust Networking, its initial concepts rely on an overlay network and a software client rather than fundamentally integrating IAM with the underlying IP network. Below is an overview of SDP.

Figure 21 – SDP Design

The initial commercial SDP products implemented the concept as an overlay network for enterprise applications such as remote access to high-value data or protecting cloud instances from network attacks. The SDP Initiating Host became a client and the Accepting Host became a Gateway.

SDP Client

The SDP Client handles a wide range of functions from verifying device and user identity to routing whitelisted applications (local) to authorized protected applications (remote). The SDP Client is configured in real time to ensure the certificate-based mutual TLS VPN only connects to services the user is authorized for. The SDP Client becomes the policy enforcement point for organizations, as that is where access control is implemented after user device and identity have been cryptographically verified.

SDP Controller

The SDP Controller functions as a trust broker between the SDP Client and backend security controls such as Issuing Certificate Authority and Identity Provider. Once the identity of the SDP client has been verified and the services that it is authorized for determined, the SDP Controller configures both the SDP Client and Gateway in real time to provision a mutual TLS connection.

SDP Gateway

The SDP Gateway is the termination point for the mutual TLS connection from the Client. It is usually deployed as topologically close to the protected application as possible. The SDP Gateway is provided with the SDP Client’s IP address and certificates after the identity of the requesting device has been verified and the user’s authorization determined.

Working together, the three SDP architectural components provide a number of unique security properties:

1) Information Hiding – No DNS information or visible ports of protected application infrastructure. SDP protected assets are considered “dark” as it is impossible to port scan for their presence.

2) Pre-authentication – Device identity (of the requesting host) is verified before connectivity is granted. Device identity is determined via an MFA token that is embedded in the TCP or TLS setup.

3) Pre-authorization – Users are provisioned access only to application servers that are appropriate for their role. The identity system utilizes a SAML assertion to inform the SDP Controller of the host’s privileges.

4) Application Layer Access – Users are only granted access at an application layer (not network). Additionally, SDP typically whitelists the applications on the user’s device – thus provisioned connections are app-to-app.

5) Extensibility – SDP is built on proven, standards-based components such as mutual TLS, SAML and X.509 Certificates. Standards based technology ensures that SDP can be integrated with other security systems such as data encryption or remote attestation systems.

Pre-authentication combined with pre-authorization creates networks that are dark to unknown hosts while providing need-to-know access to authorized users. A key aspect of SDP is that pre-authentication and pre-authorization happen before a TCP connection between the user and the protected application occurs. Additionally, users are only granted access to authorized applications, eliminating the threat of lateral movement from compromised devices.
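
The combined effect of pre-authentication and pre-authorization can be sketched in miniature: unknown hosts get no response at all (so the service stays dark), and known hosts reach only the application provisioned for them. The token handling below is a deliberate simplification of SDP's certificate-based mutual TLS and SAML flow.

```python
# "Dark service" sketch: requests that fail pre-authentication are
# silently dropped, so the service cannot be discovered by scanning.
# Tokens and app names are hypothetical stand-ins for certificates
# and SAML assertions.

KNOWN_DEVICE_TOKENS = {"device-7f3a": "inventory-app"}

def handle_request(device_token, requested_app):
    # Pre-authentication: unknown devices get no reply at all.
    if device_token not in KNOWN_DEVICE_TOKENS:
        return None  # silence -> the service stays invisible
    # Pre-authorization: only the app provisioned for this device.
    if KNOWN_DEVICE_TOKENS[device_token] != requested_app:
        return None
    return f"mTLS tunnel established to {requested_app}"

print(handle_request("scanner-x", "inventory-app"))    # None (dark)
print(handle_request("device-7f3a", "inventory-app"))  # tunnel message
```

The crucial property is that both checks complete before any application connection exists, which is what denies a port scanner even the knowledge that there is something to attack.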

