Hello, and welcome back to CS615 System Administration! This is week 11, segment 3, and we continue our discussion of system security concepts. In our first video in this series, we covered in somewhat broad strokes how to perform a Risk Assessment, and in our last video, we talked about how to develop a threat model and to keep in mind the attacker's point of view, motivation, and capabilities. With that in mind, we'll now take a look at the common processes our adversaries will follow in their goal to compromise our systems so that we may then develop a conceptual model of how to defend against these. Note that this will not be attack-vector specific -- we don't need to discuss whether an attacker might try to compromise our systems by phishing some employee credentials or if they will use a code injection attack because an internal application doesn't validate the input -- these two are practically a given anyway -- but rather we once again try to go one level higher and consider the common sequence of the attack life cycle and how that influences our defenses, leading us to a phrase you may have heard used a lot in the information security domain recently: Zero Trust. --- One of the core principles in system security is that we require what is known as "Defense in Depth". That is, we don't just deploy a single defensive mechanism and call it a day, - instead, we apply different protective and detective controls on multiple different layers in a "belt and suspenders" kind of approach. We want to be sure that no one layer assumes that the other protects it - or that any one mechanism is sufficient. This aligns well with our concept of a risk assessment and threat model, where we identify different components all across our environment that may face different threats and thus require different protections.
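To make the "no layer trusts another" idea concrete, here is a toy Python sketch; the layer names and rules are invented purely for illustration, not taken from any real system:

```python
# Toy model of Defense in Depth: each layer is an independent predicate.
# A request is admitted only if *every* layer approves it on its own;
# no layer assumes a previous layer has already done its job.

def network_acl(req):
    # Layer 1: only traffic from an allowed source network (placeholder prefix)
    return req["src"].startswith("10.0.")

def authenticated(req):
    # Layer 2: the request must carry a valid credential (placeholder check)
    return req.get("token") == "valid-token"

def authorized(req):
    # Layer 3: the authenticated principal must be permitted this action
    return req.get("action") in {"read"}

LAYERS = [network_acl, authenticated, authorized]

def admit(req):
    # "Belt and suspenders": all layers must independently agree.
    return all(layer(req) for layer in LAYERS)

# Bypassing one layer (say, an attacker already on the internal network)
# is not enough: the remaining layers still reject the request.
print(admit({"src": "10.0.1.5", "token": "valid-token", "action": "read"}))  # True
print(admit({"src": "10.0.1.5", "token": "stolen", "action": "read"}))       # False
```

Note that compromising any single check leaves the request blocked by the others, which is exactly the property the layered approach is after.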
Furthermore, we - never assume that whatever protection we do deploy will be 100% bulletproof: you should always operate under the assumption that your protection mechanism can be compromised or circumvented, and then what? Defense in Depth helps us minimize the impact of such a compromise, and this concept of employing protections on multiple layers leads us towards the Zero Trust model we'll discuss in more detail in just a minute. But so where exactly do we need to implement our defenses? To make an informed decision about the value of your protections, it's best to understand the path your adversaries may take when they stage an attack. After all, as we discussed in our last video, our attackers are dedicated humans with specific objectives, so they will follow a logical path towards their end goal. --- This is known as the "Attack Life Cycle", a sequence of processes and procedures that we have observed time and again to be followed by necessity in any major, targeted attack. Now note that I'm explicitly mentioning _targeted_ attack here, since of course _some_ opportunistic attackers may jump into the attack life cycle at any stage, but by and large, our attackers do follow the path we'll outline here. So suppose you're an attacker, and you want to gain access to a specific asset. Let's say you want to gain access to the user data of a large email service provider, for example. Where do you start? - Well, the first step is of course to learn about your target: to, for example, perform network scans, identify the software used and exposed, probe for known vulnerabilities, and in general gather any and all data about the systems, company, and operations. This stage also includes collecting information about the people who work in the target organization: their positions, their access, their routines. After that, --- we move on to the - initial compromise.
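As a toy illustration of what this reconnaissance step looks like at the network level, here is a minimal TCP port-probe sketch; the host and port list are placeholders, and real attackers would of course use far more capable tooling:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on a successful connect instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Probing localhost as a harmless stand-in for a real target:
print(scan("127.0.0.1", [22, 80, 443, 8080]))
```

Real reconnaissance tooling additionally fingerprints service banners and software versions -- which is precisely the information you want to minimize exposing in the first place.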
Here, our attackers will identify and exploit a vulnerability that allows them to gain access to some systems. This can take the form of triggering a Remote Code Execution by way of a weakness in a library known to be used on the target systems, or it may involve tricking an employee into revealing their access credentials by way of phishing, or by installing malware on an employee's laptop via a watering hole attack. --- Once a way into the systems is found, - the attackers have to find a way to keep their access, to keep the system compromised. Typically, they will install a persistent backdoor, allowing them to access the system at will, without relying on the exploit they may have used initially. --- But the initial compromise usually does not give the attackers access to the data they are really after, and often does not yield sufficient privileges to accomplish their objective, nor even to move within the infrastructure or organization, so the attackers - will typically attempt to gain elevated privileges. In practical terms, consider a vulnerability in, say, PHP, whereby an attacker gained access to the web server as 'nobody', the Unix user running the HTTP server. In order to access a private database on the system, or to gain access to a private key protected by Unix permissions, the attacker would try to chain an additional local privilege escalation vulnerability to become 'root'. Another example of escalating privileges might be to install a key logger on an employee's laptop to gain access to the password used to authenticate to another system. --- Now even with elevated privileges, it is rare that an attacker manages to immediately compromise the final target. Think of the way that a web application might work: the web server exposed to the internet may not have access to an internal data store or the end-user database you're after. But the server may have access to a service that contains access credentials to reach into the database.
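The Unix permission check behind that 'nobody' versus 'root' example can be sketched like this -- a toy demonstration (the file contents and mode are stand-ins for a real private key), not anything from the lecture slides:

```python
import os
import stat
import tempfile

# Create a stand-in "private key" file readable only by its owner
# (mode 0600), the way ssh or TLS private keys are typically protected.
fd, path = tempfile.mkstemp()
os.write(fd, b"-----BEGIN PRIVATE KEY-----")
os.close(fd)
os.chmod(path, 0o600)

mode = os.stat(path).st_mode
owner_can_read = bool(mode & stat.S_IRUSR)   # the owner's read bit
others_can_read = bool(mode & stat.S_IROTH)  # the "other" users' read bit

# A process running as 'nobody' falls into the "other" class here, so the
# kernel denies it access; 'root' bypasses these permission checks entirely,
# which is why the attacker chains a local privilege escalation.
print(owner_can_read, others_can_read)  # True False

os.unlink(path)
```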
In the first step (Initial Reconnaissance), the attacker collected all the information she could get from the outside, but now she has to identify the next steps, the next target on the way to the final objective. Hence, she will perform - Internal Reconnaissance, again perhaps scanning the network (now from the initially compromised host, possibly with elevated access), identifying systems of interest ("Oh, look, a CI/CD service that's able to build and deploy software to all systems. Interesting!"), or collecting access credentials. --- With the gained knowledge, the attacker can then try to further broaden her access, - and often does hop from one system to the next, repeating the previous steps in order to further elevate privileges as needed. That is, --- while repeating some of these steps, the attacker has to be careful to - continue to maintain her presence and elevated privileges, possibly going around - a few more times in this loop here before she - finally accomplishes her mission and gets to the beef steak, I mean, end user data she was after. And so --- our complete attack life cycle looks like this. Now granted, you don't always get to have it illustrated with all these good dogs here, but hey, that's one of the benefits of taking this class. They're all good dogs. --- Perhaps more professionally -- but distinctly less amusingly -- illustrated, the attack life cycle looks like this: - initial recon, identifying possible targets, - initial compromise, followed by - lateral movement until the attacker - reaches their intended goal and begins to - exfiltrate the data. Now what's useful about understanding this attack life cycle is that it breaks down so neatly into individual stages, which then allows you to better identify specific defenses. Rather than trying to "secure all the things", you can now think about how you can disrupt the attack life cycle at each stage. --- This is what that might look like, then.
In order to disrupt the initial recon, you'd - try to reduce your attack surface, limit what systems are exposed, for example; to disrupt the initial compromise, you'd - harden specific systems and libraries; to disrupt the lateral movement of an attacker, you might - restrict what systems can talk to what other systems, as well as - protect the assets with appropriate authentication and authorization controls, while also - disrupting the data exfiltration with additional egress controls. Note how each of these is an independent stage with its own protections that do not depend on the other mechanisms, nor make assumptions about the environment. --- Which gets us to the concept of "Zero Trust". "Zero Trust" is currently a big buzzword in the industry, and I'm always happy to use it as an excuse to reference an old TV show none of you is old enough to remember, but that's ok. I don't even remember whether "Sledge Hammer" actually _was_ a good show, but there's this insane cop with the slogan "trust me, I know what I'm doing", which really reflects what the old, traditional information security model was like. --- That is, in the old world, it was turtles all the way down: a hard shell protecting squishy internals. Once you passed the perimeter, you were in: applications trusted you, you could freely move around the network, access services without authentication, and the like. This model is now being obsoleted by the concept of "Zero Trust" networks, a world where a given network position does not confer inherent trust upon you. Different people will interpret "Zero Trust" to mean different things, and for many --- it refers to the work done by Google around 2009 in their "BeyondCorp" papers, which describe an implementation of the Zero Trust security model, though not quite the _same_ as the model itself.
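As a toy sketch of that last control, egress disruption amounts to a deny-by-default allowlist of outbound destinations; the hostnames and ports here are invented placeholders:

```python
# Toy egress policy: data exfiltration is disrupted by allowing outbound
# connections only to an explicit, audited set of destinations.

EGRESS_ALLOWLIST = {
    ("updates.example.com", 443),   # e.g., OS package mirror
    ("logs.example.com", 6514),     # e.g., central syslog over TLS
}

def egress_allowed(host, port):
    """Deny by default: only explicitly listed destinations may be reached."""
    return (host, port) in EGRESS_ALLOWLIST

print(egress_allowed("logs.example.com", 6514))     # True
print(egress_allowed("dropzone.example.net", 443))  # False
```

In practice this lives in your firewall or proxy configuration rather than in application code, but the deny-by-default shape of the decision is the same.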
And while "Zero Trust" is a buzzword in the industry --- and many vendors try to sell you solutions that will turn your environment into Zero Trust just like that, - I'm afraid "Zero Trust" is not a product - you can't buy yourself a Zero Trust. You can't deploy a Zero Trust. It's not an initiative or a project with a fixed goal. --- Instead, "Zero Trust" is a core concept, similar to "Defense in Depth", "Least Privilege", "Fail Safe", or Kerckhoffs's principle. Here, it is the simple assumption of a compromised or hostile environment. That's it. I know, it doesn't seem like much, and doesn't even sound novel. But what follows does overthrow a few decades of operational practices. It means you can drive and measure very specific initiatives: If we assume all networks to be hostile - then we necessarily require transport encryption for all traffic. Operating in a hostile environment requires that clients - authenticate the services they talk to, just as services need to authenticate the clients connecting to them; mutual authentication becomes mandatory. But authentication by itself is not sufficient: authenticated clients require - explicit authorization to be allowed to perform actions, and authorization needs to always be limited to the least privilege required. So you need to integrate a granular RBAC system, for example. Because we assume our adversaries to be persistent and thus able to compromise a previously trusted account or system, any trust, once established, - needs to be renewed periodically, and any actions need to be logged to ensure a complete audit trail. That is, a system's access capabilities derive - explicitly from its _identity_, such that its access can be audited, extended, restricted, or revoked, and is not inherited implicitly from any specific physical or logical position within the network. Now building a PKI, developing suitable RBAC, and monitoring for and enforcing mutual auth and encryption on layer 7 as well as below sounds like a lot of work.
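A minimal sketch of what identity-based authorization with an audit trail might look like -- the service names, roles, and permission strings are all invented for illustration:

```python
# Identity-based access: what a principal may do derives from roles bound
# to its identity, not from where on the network it connects from, and
# every decision is recorded for the audit trail.

ROLE_PERMISSIONS = {
    "web-frontend": {"cache:read"},
    "billing-service": {"db:read", "db:write"},
    "auditor": {"db:read", "logs:read"},
}

IDENTITY_ROLES = {
    "svc-frontend-7f3a": {"web-frontend"},
    "svc-billing-01": {"billing-service"},
}

AUDIT_LOG = []

def authorize(identity, permission):
    """Grant only if some role bound to this identity carries the permission."""
    roles = IDENTITY_ROLES.get(identity, set())
    allowed = any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)
    AUDIT_LOG.append((identity, permission, allowed))  # complete audit trail
    return allowed

print(authorize("svc-billing-01", "db:write"))     # True
print(authorize("svc-frontend-7f3a", "db:write"))  # False: least privilege
```

Note that revoking or restricting access is now a single, auditable change to the identity's role bindings -- no firewall rules or network moves required.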
And it is. It'll take you years to move your old infrastructure into this new world. --- But it's not only a clear security win: Zero Trust enables _identity-based_ deployment of services with automated access controls & capabilities at time of birth and lets you ditch manual configuration or high-risk, broad network manipulation. And you can get there incrementally. All of these rules then enable you to specifically disrupt each - of the stages of the attack life cycle. And somewhat paradoxically, you can actually make certain counter-intuitive access decisions and, for example, allow connections to internal services from the internet, because you treat your "internal" network as equally untrusted as the internet. You can deploy services without having to think about which security zone or what network they have to go into, and you are still assured that lateral movement is restricted, because everything requires explicit authentication and authorization. But you only get the benefits if you don't think of it as a single _thing_, a one-time effort, a product, a temporary industry trend, a buzzword. Instead, accept it as a mindset, a principle, a core concept. It's simple: the environment is assumed hostile - the rest follows. And with that, I'm going to leave you with just one additional reading assignment for today: - Ken Thompson's "Reflections on Trusting Trust", the talk he gave when he accepted his Turing Award, and which illustrates the necessity of deploying multiple layers of protections and the difficulty of proving trust. The link to this talk is in the slides, but you will quickly find it by using your favorite internet search engine as well. It has been almost 40 years since he gave this talk, but it remains a seminal paper and discussion, so please do make sure to read it. We'll pick this up in our next video. For now, thanks for watching - until the next time! Cheers!