In his 2015 book, “Thinking Security: Stopping Next Year’s Hackers,” Steven Bellovin wrote:
“Companies are spending a great deal on security, but we read of massive computer-related attacks. Clearly, something is wrong.
The root of the problem is two-fold: We’re protecting (and spending money on protecting) the wrong things, and we’re hurting productivity in the process.
InfoSec is hurting productivity in the organization it is supposed to be protecting.”
Bellovin clarified what he meant by “the wrong things”: security measures tend to stick around even after the threat they were intended to address has faded into the past. Those security costs have only increased since the book was published, and today represent a trillion-dollar drag on the global economy despite the emergence of security-integrated models of software delivery such as DevSecOps, shift left and shift right.
The 2021 IBM Security X-Force Threat Intelligence Index reports that the No. 1 attack vector was a reconnaissance technique known as scan-and-exploit. Defenders face an asymmetric engagement with such network threats: a successful attacker needs to exploit only a single vulnerability, whereas defenders must secure 100% of the network’s ports of entry 100% of the time. The fundamental problem is the anonymous and permissionless nature of the internet, which makes it both useful and easily abused. Anyone anywhere may conduct reconnaissance and launch attacks at massive scale with little cost or risk. How can we make it prohibitively difficult or costly to successfully surveil and attack our own applications? What if we could reliably defend against the entire category of active scanning tactics?
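To make the asymmetry concrete, here is a minimal sketch (Python, illustrative only) of the scan half of scan-and-exploit: a TCP connect scan that reports which ports accept a handshake. An attacker needs any one of these to answer; a defender must account for all of them, all the time.

```python
import socket

def scan(host: str, ports, timeout: float = 0.25) -> list[int]:
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake succeeds,
            # i.e., when something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Nothing here requires permission or identity: the network happily completes a handshake with any caller, which is exactly what makes mass scanning so cheap.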
There are no secure networks, only isolated networks. Taking this back to first principles, a network’s purpose is to convey packets of data from sender address to recipient address. The network has no built-in ability to identify either party or to discern the legitimacy of any activity. Active scanning techniques identify potential targets that are then probed by scripts that may be intrusive or stealthy or both. We could simply reject all incoming network traffic with a firewall, but application servers must be continually reachable in order to listen for incoming requests. Any interruption of availability causes errors like “connection refused” and “service unavailable” for the user.
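That exposure is visible in a few lines of (illustrative) Python: a listening socket completes the TCP handshake with any caller before a single byte of application data, let alone a credential, is exchanged. At accept time the server knows nothing about the peer except a network address.

```python
import socket
import threading

def accept_one(srv: socket.socket, out: dict) -> None:
    # accept() completes as soon as the TCP handshake does -- before any
    # application-layer credential is exchanged. All the server knows
    # about the peer at this point is a network address.
    conn, peer = srv.accept()
    out["peer"] = peer
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # OS picks a free port
srv.listen(1)
seen = {}
t = threading.Thread(target=accept_one, args=(srv, seen))
t.start()

# Any client anywhere can reach this point; no identity is required.
client = socket.create_connection(srv.getsockname())
t.join()
client.close()
srv.close()

print(seen["peer"])          # the peer is just an (ip, port) tuple
```

Everything the server can learn about who connected is that tuple, and an IP address is neither an identity nor a permission.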
This necessary network exposure leaves the upper layers of the network, and the application server itself, vulnerable to intrusion and denial-of-service attacks. “Private” enterprise applications are by no means immune: they are similarly exposed to the famously vulnerable devices of VPN users. This presents the problem of automatically discerning which incoming connections are legitimate, continually, forever. Are we trying to solve the wrong problem?
Ideally, we would identify the sender before they connect to the application server; then we would know whether they have permission to connect at all. This is something we simply cannot do when the sender is known only by a network address. We can’t ask for a credential, because that introduces yet another application layer with all of the same problems. This is precisely why VPNs do not go far enough: a superficial barrier is erected for users, but the application server cannot check for permission until after surveillance and subsequent attacks have occurred.
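A toy model makes the ordering concrete. Here a shared-secret HMAC stands in for the X.509 certificates and mutual TLS a real system such as OpenZiti uses; the names and registry are hypothetical. The point is only the sequence: the sender’s identity is verified before any connection to the service is permitted, so an unenrolled sender never gets far enough to probe it.

```python
import hashlib
import hmac

# Toy model of "authenticate before connect": a controller issues each
# enrolled identity a secret key at enrollment time, and a dialer proves
# possession of it by signing its identity name. Real systems (OpenZiti
# included) use X.509 certificates and mTLS rather than shared secrets.

enrolled = {"alice": b"key-issued-at-enrollment"}   # controller's registry

def sign(identity: str, key: bytes) -> str:
    return hmac.new(key, identity.encode(), hashlib.sha256).hexdigest()

def may_connect(identity: str, proof: str) -> bool:
    key = enrolled.get(identity)
    if key is None:
        return False   # unknown identity: no dial, no handshake, no probe
    return hmac.compare_digest(proof, sign(identity, key))
```

An attacker scanning from an arbitrary address holds no enrolled key, so `may_connect` fails before the service is ever reachable, and there is nothing listening for the scan to find.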
Embedded connectivity-as-code solves the right problem. This represents a shift in thinking about the relationship between the application and the network. Now, the application brings its own point-to-point overlay with a verifiable identity on both ends. This has significant implications for the security of the application, which can be summarized in terms of shifting security left in the development timeline and shifting it right in the life span after deployment.
Shift left: OpenZiti makes embedding connectivity-as-code convenient for developers and transparent for auditors and users. As open source software, it fits into DevSecOps processes that scan the source code. The management side of the OpenZiti network is cloud-native and API-driven and fits into the DevOps paradigm of continuous integration testing. For example, a network-as-code may be continually created and destroyed by a pipeline to exercise any imaginable configuration of endpoints, services and policies, or to facilitate dynamic scanning of the data edge and fabric APIs.
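As a sketch of that pipeline pattern, a CI job might stand up a disposable network on every run, exercise it, and tear it down afterward. The job layout and command shapes below are hypothetical and simplified, not exact `ziti` CLI syntax:

```yaml
# Hypothetical CI job; names and command shapes are illustrative only.
network-as-code-test:
  script:
    - ziti edge create identity ci-client -o ci-client.jwt  # enroll a throwaway identity
    - ziti edge create service echo-service                 # declare a service under test
    - ziti edge create service-policy echo-dial Dial        # grant the identity dial access
    - ./run-integration-tests.sh                            # exercise endpoints, services, policies
  after_script:
    - ./teardown-network.sh                                 # destroy the disposable network
```

Because the network itself is declared in code, it can be recreated from scratch for every pipeline run, just like the application it carries.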
Shift right: OpenZiti allows us to shift security right, measuring and monitoring applications after development, in production, with novel ease and without compromising security. Operators maintain full visibility of deployed applications and need not cope with the complexity of hopping through security gateways or bastions, or worry about external malicious actors and inbound connections. This best-of-both-worlds reality is a natural effect of an access control scheme based on verifiable cryptographic identity, not a weak identifier like an IP address. Importantly, this means it is always knowable which identity accessed which services, when they were accessed and to what extent.
OpenZiti gives applications superpowers! To name a few: identity-driven addressability, a private domain name system, shelter from unauthorized connections, portable and programmable micro-segmentation, end-to-end encryption, relief from the IPv4 address shortage without IPv6, smart routing that chooses optimal paths across the internet, and transparency to end users.
Learn more about OpenZiti by checking out our Quickstarts or engaging in conversation on Discourse.