We mentioned host intrusion detection and network intrusion detection in an earlier blog, and firewalls a couple of times in passing. Let’s delve into the history a bit to understand how these tools’ functionality has evolved over time.
On the host side, we had virus detection, in one form or another, from early in the evolution of the Internet. The idea was that if an antivirus program could scan your files, it could help protect you from malicious applications. Applied to an organization, if antivirus applications could scan all users’ desktops, they could help protect the organization. And it did kind of work: most organizations were protected, at one point or another, by their antivirus software. Then they realized the need to see more than simple file scanning could show. A file that looked innocent on disk could assemble malicious code while executing, so runtime environments also needed to be watched. Generally speaking, the same antivirus companies led the charge into runtime protection. But there is more to protecting systems than watching running applications. Scripts, even database entries, could be made malicious, and activity over time could indicate a bigger problem than a virus: an intrusion. That problem is why host intrusion detection (HIDS) was born. The idea that all of your systems needed protection all of the time created a mostly agent-based environment, with agents monitoring several aspects of the running system to watch for suspicious activity.
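To make the “scan your files” era concrete, here is a minimal sketch of signature-based scanning, the simplest form of what early antivirus did. Everything here is illustrative: the signature set, the hash-matching approach, and the function names are assumptions, and real engines use far richer signatures (byte patterns, heuristics) than whole-file hashes.

```python
import hashlib
from pathlib import Path

# Hypothetical signature set: SHA-256 digests of known-malicious files.
# The digest below is the hash of an empty file, used purely as a stand-in.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_file(path: Path) -> bool:
    """Return True if the file's hash matches a known-bad signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

def scan_directory(root: Path) -> list[Path]:
    """Walk a directory tree and collect every file that matches a signature."""
    return [p for p in root.rglob("*") if p.is_file() and scan_file(p)]
```

The limitation the paragraph above describes falls straight out of this sketch: a file that only becomes malicious at runtime never matches any on-disk signature, which is exactly why scanning had to be supplemented with runtime monitoring.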
On the network side, firewalls were implemented early in the growth of the Internet to allow limited access to corporate resources over the network. If no one ever needed to telnet to the machines on subnet X, the telnet port for subnet X could simply be blocked at the firewall, and no sleep need be lost over connection attempts. Almost immediately after their inception, firewalls started logging attempts to access ports, and that information was made available to network administrators to review. This had an interesting impact on the management of hosts: now the firewall could tell administrators if an internal host was constantly attempting something it shouldn’t – possibly indicating an intrusion or a bad employee (more often than not, it turned out to be an undertrained, curious employee just poking around). But firewalls – whether built on a whitelist or a blacklist – were fragile, interruptive, or both, and they simply plugged holes in the perimeter without being proactive. That is when some bright people realized that if you monitored the streams passing through the firewall, you could detect attacks and intrusions and give admins proactive information about potential threats. That was the genesis of network intrusion detection (at least, as with all of these, as much of it as I can cram into a blog).
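The “block telnet to subnet X, and log the attempts” behavior can be sketched in a few lines. This is an illustration only: the rule format, the crude string-prefix subnet match, and the `Packet` shape are all made up for the example, not how any real firewall represents rules.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("firewall")

# Hypothetical blacklist rule: block telnet (port 23) into the 10.0.5.0/24
# subnet, mirroring the "no one needs telnet to subnet X" example.
# The string-prefix match is a stand-in for real CIDR matching.
BLOCKED = {("10.0.5.", 23)}

@dataclass
class Packet:
    src: str
    dst: str
    dport: int

def allow(pkt: Packet) -> bool:
    """Blacklist-style check: drop and log matching packets, pass the rest."""
    for prefix, port in BLOCKED:
        if pkt.dst.startswith(prefix) and pkt.dport == port:
            log.warning("blocked %s -> %s:%d", pkt.src, pkt.dst, pkt.dport)
            return False
    return True
```

Note that everything interesting in the rest of this story lives in that `log.warning` line: once blocked attempts were recorded, administrators could review patterns over time, which is the seed of intrusion detection.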
But as time moved on, there was a need for more. Network intrusion prevention was tried, but most organizations don’t want automated systems cutting off connections for fear of impacting real users; while it is still around, it hasn’t gained much traction. The next step was to “connect the dots,” so to speak. If the HIDS on host A was detecting a suspicious pattern of behavior, it only made sense to have the HIDS agents on the other hosts in the network look for similar behavior. Command and control across HIDS installs is at the heart of endpoint detection and response (EDR), which gathers information from multiple HIDS agents and correlates it to build a more holistic view, alerting on activity that a single host might not notice. A super-simplified example: say the same user account makes a single attempt to gain super-user privileges on each of a dozen machines. A HIDS might not raise a high-priority alert for one attempt, but EDR sees twelve attempts across twelve machines, not just the one on a given box, and can raise a high-priority alert.
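The twelve-machines example boils down to counting distinct hosts per user across the combined event stream. Here is a minimal sketch of that correlation step; the event-tuple shape, the event name, and the threshold are all hypothetical, chosen only to mirror the example above.

```python
from collections import defaultdict

# Hypothetical threshold: alert when one account probes this many hosts.
PRIV_ESC_THRESHOLD = 10

def correlate(events):
    """events: iterable of (host, user, event_type) tuples, as if reported
    by per-host HIDS agents. Count distinct hosts per user for
    privilege-escalation attempts and flag users who touch many machines."""
    hosts_by_user = defaultdict(set)
    for host, user, event_type in events:
        if event_type == "priv_esc_attempt":
            hosts_by_user[user].add(host)
    return [
        f"HIGH: {user} attempted privilege escalation on {len(hosts)} hosts"
        for user, hosts in hosts_by_user.items()
        if len(hosts) >= PRIV_ESC_THRESHOLD
    ]
```

The point of the sketch is the shape of the problem: each individual event is below any per-host alert threshold, and only the aggregation layer has enough context to escalate.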
Extended detection and response (XDR) extends this amalgamation and analysis of information to the network side, including attempts to gain access to the network (in our simple example above, imagine the same user account also trying to ssh into machines it doesn’t have network access to).
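Extending the same counting idea across telemetry sources is what gives XDR its reach. A sketch, again with made-up event shapes and names: host-side escalation attempts and network-side denied ssh attempts both add to the same per-user tally of machines touched.

```python
# Hypothetical XDR-style merge: host events are (host, user, event_type)
# tuples from HIDS agents; net events are (user, dst, action) tuples from
# network logs. A user touching many distinct machines across either
# source gets flagged, even if each source alone stays under threshold.
def xdr_correlate(host_events, net_events, threshold=10):
    touched = {}
    for host, user, etype in host_events:
        if etype == "priv_esc_attempt":
            touched.setdefault(user, set()).add(host)
    for user, dst, action in net_events:
        if action == "ssh_denied":
            touched.setdefault(user, set()).add(dst)
    return {user: len(t) for user, t in touched.items() if len(t) >= threshold}
```

Neither the host view (six machines, say) nor the network view (five more) would trip a ten-machine threshold on its own; merged, the same account shows up on eleven.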
As of this writing, EDR and XDR are using machine learning (ML) to process all the data they see faster, limiting false positives while raising critical alerts to human operators. At this point in the ongoing feud between attackers and security personnel, I’ll go so far as to call it state-of-the-art. For now. Other attack vectors and XDR-defeating attacks will no doubt come along and will have to be answered, but for now, if you have a publicly exposed network, it is the best you can get. Vendors have moved both EDR and XDR analysis to the cloud, ostensibly to improve analysis. It does reduce the amount of maintenance customers must perform, but cloud usage comes with an operational price tag that prospects should understand before signing on the dotted line. Most mid-sized or larger organizations don’t have a storage or processing-power shortage, so operational efficiency is what you are really paying for, in most cases. Make sure it’s worth the cost, and that you have a plan if you lose access to that ML engine in the cloud.
Meanwhile, keep kicking rear. Mark each day with, “At least we’re not SolarWinds,” and remember: you’re doing your best to protect the org in a dynamic and hostile environment. Celebrate each day without a breach as a success – because someone did try.