This article is part of a series that explores zero trust, cyber resiliency and similar topics.
The recently released federal zero-trust strategy from the Office of Management and Budget (OMB) and the Homeland Security Department’s Cybersecurity and Infrastructure Security Agency (CISA) has one action area that has raised a few eyebrows within the zero trust community: Go ahead and open your applications to the internet. Wait… what?
Outlined in Applications: Action 4, OMB states that “agencies must select at least one FISMA [Federal Information Security Management Act] Moderate system that requires authentication and is not currently Internet-accessible, and allow access over the Internet.” This very direct mandate may, frankly, seem nonsensical and counterintuitive. However, with some more context, it makes sense within the framework of a zero trust architecture.
There are a few reasons why OMB is mandating this particular action. Mostly, it revolves around leveraging modern cloud protective services and reducing reliance on self-managed firewalls, which are always mere moments away from a configuration error. It also forces traffic separation, especially in the modern cloud era, when it is easier than ever for any administrator with the right permissions to open an organization’s cloud-based application to the world. But while a zero-trust architecture may shift the concept of defense-in-depth, protecting resources at the edge still very much matters. After all, just because an agency moves an application to zero trust doesn’t mean the adversary will stop throwing malicious payloads at it.
What does change with zero trust-enabled defense-in-depth is how an authorized user traverses those defensive measures, and how risk is managed, or more specifically, how that risk is now shared. With zero-trust network access (ZTNA) and secure access service edge (SASE) architectures, we no longer connect directly to an internal application or to the security device sitting in front of it. Instead, we broker through a second party, such as an internal component or IT division, or a third-party system to enforce ZTNA principles. Thus, in most cases, we transfer some risk to a provider or internal component, such as a software-as-a-service-based web application protection platform.
If we took OMB’s mandate at face value and simply opened any in-house application we have handy that meets the definition of “requires authentication,” the results could be disastrous.
Take, for example, the on-premise Microsoft Exchange hack disclosed in early 2021, which affected Microsoft Exchange mail servers running Outlook Web Access. This is a system that requires authentication; however, the attack bypassed the authentication requirements of its login screen by exploiting previously unknown vulnerabilities. So, what requirement should we add? The answer is to validate a user’s identity and access before their computer can communicate with the application at all.
To illustrate, take the classic train ticket example. A train conductor will often check your ticket once you’re already on the train and comfortably in your seat. If your ticket is not in order in some way, or if you don’t have one at all, the conductor will boot you off.
The problem with this is that you’ve already been on the train! You’ve traveled at least a station or two before your joyride is over. However, if there is a subway-style fare gate to check your ticket before you reach the platform, you never even get to the train. Your ticket was prevalidated and only then were you allowed to approach the train.
So, how do we prevalidate and open our applications safely to the Internet? The answer is relatively simple in concept, if not in application: pre-authentication. Whether self-managing or using a cloud service approved under the Federal Risk and Authorization Management Program, pre-authentication plays a major role in zero trust.
Pre-authentication, or pre-auth for short, is not new. The concept and supporting technologies have been around for decades. It is exactly what it sounds like—a means to preliminarily authenticate something, most often users, before they reach the intended target. One of the most basic examples of pre-auth would be familiar to anyone who has ever used Windows Remote Desktop to connect to another computer.
In the Windows XP era, the normal mode of operation was to “remote in” to a computer directly, reach a login prompt on the remote desktop, and then enter your password within an already established session. In other words, you typed your password into the Windows login screen over a channel that had already been opened, meaning that if that computer was vulnerable due to missing patches, exploits could be leveraged against it with relative ease.
Today, the standard practice is to force users to authenticate first and only then establish the session that presents the background and login screen. This is a subtle but very important difference: a communication channel with the remote computer cannot be established without first authenticating, which greatly reduces risk.
In the context of applications, pre-auth follows the same principles. It leverages different technologies and associated protocols, implemented by any number of proprietary and open-source products. It’s also ubiquitous in everyday technology, such as choosing to sign into Zoom with a Google or Facebook account.
In that case, you’ve used another party, either Google or Facebook, to authenticate to Zoom. This is a key concept of pre-authentication: using a second or third party, rather than the application itself, to prove your identity. It’s also more convenient, as one set of credentials can be used for multiple applications. However, if those logins can be used for numerous applications, it had better be a secure service, right? Absolutely!
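The brokering idea can be reduced to a small decision function. The sketch below is illustrative only: the `Assertion` class, the issuer URL and the `pre_authorize` helper are all hypothetical stand-ins for what a real OIDC or SAML library would provide, and a production gateway would also verify cryptographic signatures on the assertion.

```python
# Minimal sketch of pre-authentication brokering. All names here are
# hypothetical; real deployments use OIDC/SAML libraries and verify
# the assertion's cryptographic signature.
from dataclasses import dataclass


@dataclass
class Assertion:
    """A simplified stand-in for an identity provider's signed assertion."""
    subject: str
    issuer: str
    expired: bool


# Hypothetical trusted identity provider(s) for this broker.
TRUSTED_ISSUERS = {"https://idp.example.gov"}


def pre_authorize(assertion):
    """Decide whether a network channel to the application may even open.

    The application itself never sees unauthenticated traffic: if this
    check fails, no connection to the app is brokered at all.
    """
    if assertion is None:
        return False  # no ticket, no platform access
    if assertion.issuer not in TRUSTED_ISSUERS:
        return False  # assertion from an unknown identity provider
    if assertion.expired:
        return False  # stale credential
    return True


# A valid assertion from the trusted provider is brokered through...
print(pre_authorize(Assertion("alice", "https://idp.example.gov", False)))  # True
# ...while anonymous traffic never reaches the application.
print(pre_authorize(None))  # False
```

Note the contrast with the train-conductor model above: the unauthenticated case is rejected before any channel exists, not after the user is already "on the train."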
While the term “zero trust” specifically pertains to placing zero trust in any user or device to implicitly connect to another device or application, one must place trust—and a lot of it—into the authentication and authorization service to make this happen.
This service can be a cloud-based identity provider such as Okta, Azure AD, Ping Identity, OneLogin and more. Or it can be an on-premise system or a self-maintained, cloud-based infrastructure-as-a-service deployment, such as Red Hat SSO/Keycloak, AD FS, Shibboleth and others.
Regardless of the vendor or technical solution, using a trusted, vetted and mature solution is critical as a code flaw within the authentication system itself can spell disaster. In fact, with golden SAML (security assertion markup language) attacks a possibility, it may no longer be wise to run your own federated service.
Considerations for OMB Application Action 4 can include:
Is your application safe to open to the Internet?
More than likely, “no,” at least not in its native form. If an application has any type of input form field and is not capable of pre-authentication, pick something else.
As more agencies continue their journey to the cloud and software-as-a-service, this mandate potentially leaves legacy on-premise applications as the “guinea pig” for this action.
Regardless of your authentication provider, don’t choose your decades-old legacy enterprise resource planning system. Remember, defense-in-depth still matters.
Use common sense: “allow access over the Internet” does not necessarily mean globally open to the Internet. Use conditional access methods, policies, firewall rules and more to filter to the U.S., known office locations or other criteria.
At the bare minimum, this will cut down on noise in the logs when trying to pinpoint events.
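That kind of coarse filtering can happen before authentication even runs. The sketch below uses Python's standard `ipaddress` module with hypothetical allowlisted ranges (drawn from the RFC 5737 documentation blocks); in practice this logic lives in a firewall, CDN, or the identity provider's conditional access policy rather than in application code.

```python
# Sketch of a coarse conditional-access filter. The CIDR ranges are
# hypothetical placeholders (RFC 5737 documentation blocks); real rules
# would come from a firewall or conditional access policy engine.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical office range
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical VPN egress range
]


def is_source_allowed(source_ip):
    """Return True only if the source falls inside a known network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)


print(is_source_allowed("203.0.113.45"))  # True: traffic from a known office
print(is_source_allowed("192.0.2.10"))    # False: dropped before auth runs
```

Traffic rejected here never generates an authentication event, which is exactly why such filtering cuts down on log noise.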
Is an identity provider all that is needed?
In short, no, as applications still require a network protection solution. Some identity providers have this built-in or as an optional add-on; however, it’s more common to use a network protection solution such as Zscaler Private Access or Palo Alto Prisma and then integrate identity proofing into it.
If time and budget allow, consider the use of two separate providers.
For example, take into account two separate high-value asset systems. If one high-value asset system uses provider A and another uses provider B, then a zero-day code flaw at provider A would only affect one of those systems. This effectively halves this particular risk metric; however, it doubles complexity.
Finally, once we take pre-auth into account as part of a zero trust architecture, OMB's recommendation starts to look far less counterintuitive. But success, and safety, come down to how well we implement our solutions. Implementation is where those devilish details live, and it's vital that we focus on the give-and-take, dialectical nature of creating a zero trust architecture. Once this element is understood, implementing pre-auth and opening your applications to the internet begins to make a lot of sense.
Dan Schulman is founder and CTO of Mission: Cyber LLC. Dan and his team develop and implement holistic zero trust solutions focusing on the people carrying out the mission, whether they do so using modern or legacy technology. Mission: Cyber is currently engaged with multiple organizations who are actively defining their individual paths to zero trust implementation and optimization. He is a member of the AFCEA Zero Trust Strategies Subcommittee.
Any reference to individual product vendors is for illustration only and is not an endorsement of any kind.