Linux is not a bulletproof operating system; no doubt flaws, vulnerabilities, and exploits exist at this very moment that are unidentified and unpatched. It's also very likely that many Linux systems are running old, unpatched kernels that are just waiting to be owned by some nefarious persons driven by who knows what motivations. Linux, like Windows, has a human flaw built into it: businesses don't want downtime for updates, users want too many permissions, and sysadmins don't want to risk breaking an application by running an OS update. In fact, at times it has been a point of pride for administrators of Unix systems to boast about uptimes of a year or more. That said, you don't see the problems with Linux that you do with Windows, in either scale or degree of damage. Why is that?
I've been a little hesitant to write about the "WannaCrypt" ransomware campaign that has been plaguing Windows computers over this past weekend, for a couple of reasons, and this post isn't really about "WannaCrypt" anyway; it's about the Linux security features and best practices that I think are relevant to this particular worm. First, my area of expertise is systems management. I'm not a researcher, a hacker, or a policy writer, and I don't spend my days analyzing viruses or writing bug fixes. However, systems administrators do need to understand the attack vectors their systems are exposed to and how to mitigate those vulnerabilities where they exist. Second, this particular piece of malware primarily affects Windows systems and so doesn't have an immediate impact on me, which makes it feel a little disingenuous and discourteous to say "just use Linux" while the rest of the world is putting out this fire. I say primarily because any unpatched Windows client mounting a share from a Linux SMB file server could encrypt the files shared from that Linux host. This post should not be interpreted as a call for Linux users to become lax on security; just the opposite, actually. A lot of valuable data lives on Linux systems around the world, and we should carefully consider how that data is protected and what we can do to prevent this type of problem from occurring on our servers. On the other hand, I think it would be foolish to sit back and pretend there isn't a better way to provide IT services, or a better platform for providing them. It simply isn't true that Linux and Unix systems are just as vulnerable as Windows, and I intend to outline my reasons why below.
Linux is not security through obscurity
Before I begin to lay out my reasons, I want to address a misconception: Linux is not an obscure, rarely used OS. One of the more annoying ideas I've seen shooting around the internet, especially on social media, is that Linux users are hiding behind obscurity to keep themselves safe. This is false on two counts. First, it is claimed that Linux has too small a market share to be targeted by large-scale ransomware attacks like the one we are seeing now. That claim is completely indefensible. Linux is the backbone of essentially all cloud services, including AWS. Linux powers some of the world's busiest websites (https://www.linux.com/news/learn/intro-to-linux/how-facebook-uses-linux-and-btrfs-interview-chris-mason), and it is trusted by banks and Fortune 500 companies to host databases, critical applications, and internal websites. Not to mention that many of you reading this with skeptical eyes have an Android phone (built on Linux) in your pocket or on your desk at this very minute. Linux is everywhere, and a successful large-scale attack on Linux or Unix platforms would be very lucrative indeed (https://www.wired.com/2016/08/linux-took-web-now-taking-world/). To suggest that hackers wouldn't have as much to gain from a widespread Linux exploit is naive, which is also why Linux engineers need to be careful about configuration practices and aware of current threats.
Second, Linux is the very definition of an open platform. There is nothing obscure about the code base, anyone can view the code at any time. Good guys, bad guys, professional kernel hackers, governments, businesses, and students. One of the great strengths of Linux is that there are so many eyes on the core of the operating system, which means that bugs can be identified and patched much faster.
With that out of the way, I will now present the reasons why I think Linux is a safer option for critical systems. I say "safer" because I know that no computer system that is turned on and connected to a network is ever completely safe. Safety comes in varying degrees of risk, and no operating system can save you from yourself if you choose to ignore best practices. I think Linux makes it easier to follow industry practices and provides sane defaults on which to base a larger security framework.
User account permissions
Linux is committed to the idea that each user should only have the minimum permissions required to do their job. This idea is enforced from the moment a user is created and requires an administrator to elevate those permissions.
This is something that Windows has been getting better at: user accounts need to be restricted for normal day-to-day activities in order to prevent them from causing inadvertent or intentional damage. However, even with UAC (User Account Control) enabled by default, too many administrators turn this critical security feature off. Sometimes UAC is turned off because it's easier to disable it than it is to listen to user complaints about the warnings. Sometimes it's because businesses run legacy software that can't handle UAC. Whatever the reason, it is obvious that UAC is not being implemented in a way that will have a meaningful impact on the security landscape for Windows, at least not yet.
In Linux, sudo is comparable to UAC; however, separation of powers is so ingrained into Linux systems that it is not even possible (without hacking the crap out of your box) to turn this feature off. When a user is created on a Linux system, that user has permission to manipulate only the files they create or the files they are explicitly given permission to. This includes service accounts that run web servers, databases, LDAP, or any other service on a system, none of which run as root by default. Users who require more permissions can be given sudo access, either for full administrative capability or for specific commands. For example, in Linux, a user can be given permission to start or stop a particular service without having permission to install, remove, or change the configuration of that service. If Windows has this ability, I am unaware of it and have never seen it used in practice.
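As a rough sketch of what that looks like (the user "alice", the nginx service, and the file name below are hypothetical, and the systemctl path can vary by distribution), a drop-in sudoers rule granting nothing but service restarts might be:

    # /etc/sudoers.d/alice-nginx  (always edit with: visudo -f /etc/sudoers.d/alice-nginx)
    # alice may restart or stop nginx as root, and nothing else
    alice ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/systemctl stop nginx

Anything not listed there, such as installing packages or editing the nginx configuration, still requires real administrative credentials.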
Sudo allows far more fine-grained control over what a user can and cannot do than UAC does on a Windows system, and as a result Linux machines are protected from negligent or malevolent users when admins follow best practices. In fact, on some Linux systems, like Ubuntu, the root password is locked by default and no one can log in as the root user. In those cases, all administrative functions on the machine must be run through sudo. (https://help.ubuntu.com/community/RootSudo)
What these account controls do is ensure that the integrity of the entire system is not compromised when one particular user is affected by malware. If a Linux user were to pick up some kind of drive-by ransomware infection, the damage would be limited to the files that user has access to, which on a correctly configured system means only what that user owns. Correctly configured in this case means the default configuration, at least for every major Linux OS family (Fedora, SUSE, Debian) and all of the derivatives of those families that I've used.
File permissions
Along with, and complementary to, user controls, file permissions in Linux prevent any application that is not running as root from encrypting, copying, or modifying files owned by any other user. On a Unix or Unix-like operating system, everything is a file, and every file has an owner. By default, when a user creates a file in their home directory, that file is given permissions of 644, which means that only the file owner can modify it and nothing is executable. Since no file is executable until a user makes it so, and only the file owner can modify it, any malware would either need a way to escalate privileges (not impossible, but not trivial either) or be limited to harming the user who downloaded and executed it.
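A quick terminal sketch of what that looks like in practice (the user and filename are made up, and the exact default mode depends on your umask):

    $ touch notes.txt
    $ ls -l notes.txt
    -rw-r--r-- 1 alice alice 0 May 15 10:12 notes.txt   # 644: owner read/write, group and others read-only
    $ chmod u+x notes.txt                                # only the owner (or root) can flip the execute bit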
For the extra cautious among us, Linux file attributes can be used to prevent even a process running as root from deleting, modifying, or encrypting a file. If you set the immutable flag on a backup archive in Linux, it would be very hard for a ransomware campaign to strong-arm you into giving up any cash, since your backups would be untouched. (Plus, you keep your backups off-site and away from your other systems anyway, right?)
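Here is a sketch of the immutable flag in action, using a hypothetical archive path (output is approximate, and setting or clearing the flag itself requires root):

    $ sudo chattr +i /backups/home-backup.tar.gz
    $ lsattr /backups/home-backup.tar.gz
    ----i--------e---- /backups/home-backup.tar.gz       # the "i" marks the file immutable
    $ sudo rm /backups/home-backup.tar.gz
    rm: cannot remove '/backups/home-backup.tar.gz': Operation not permitted
    $ sudo chattr -i /backups/home-backup.tar.gz          # clear the flag when it is time to rotate the archive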
SELinux / AppArmor
SELinux and AppArmor are two forms of mandatory access control. If you are using Ubuntu or SUSE, you are very likely using AppArmor; Fedora, CentOS, and RHEL use SELinux. Both confine applications within a set of policies layered on top of the normal file permissions, which can stop an application from reading, writing, or executing a file it would otherwise have access to. If, for example, the Apache process were compromised and a user had decided to keep a file with 777 permissions in their home directory, normal file permissions would allow the attacker to steal that file. With SELinux or AppArmor enabled, however, that file, 777 permissions and all, is still inaccessible to the attacker, because Apache is contained in a predefined context (SELinux) or profile (AppArmor) that denies it access regardless of the permissions set by the user. For a good write-up on this topic see: https://www.linux.com/news/firewall-your-applications-apparmor
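On an SELinux system you can see this containment directly (output below is approximate and the file path is hypothetical; AppArmor users can get the equivalent overview with aa-status):

    $ getenforce
    Enforcing
    $ ps -eZ | grep httpd                      # apache runs confined in the httpd_t domain
    system_u:system_r:httpd_t:s0    2412 ?  00:00:01 httpd
    $ ls -Z /home/alice/secret.txt             # user files carry the user_home_t label,
    unconfined_u:object_r:user_home_t:s0 /home/alice/secret.txt
    # which the policy will not let httpd_t read, whatever the rwx bits say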
This is often thought of as a zero-day defense. Software packages with known vulnerabilities that have yet to be patched can still be protected by the mandatory access controls available on your server or desktop. You should still patch when you can, obviously, but these tools provide a stop-gap measure between the time a vulnerability is discovered and the time a package update becomes available.
Official software channels
Every Linux distribution provides officially supported package repositories. These repositories are essentially app stores containing officially supported, distribution-provided packages. A great advantage of using Linux is that you will rarely, if ever, need to download a package from an unknown source on the internet. Ubuntu, for instance, has around 45,000 packages available in its official repositories, everything from note-taking apps to office applications to high-availability database programs and web servers. We still need to be cautious about installing applications from third-party repositories, but by default your server or desktop is far better protected simply because most of the software you will ever want is available at the click of a button or with a quick command in the terminal. The repositories are constantly updated, and patches are made available to users on a continuous basis.
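On a Debian or Ubuntu system, for example, the whole cycle of finding, installing, and patching software stays inside the signed official repositories (Fedora and RHEL use dnf or yum, and SUSE uses zypper, but the idea is the same):

    $ sudo apt update                   # refresh the package index from the official mirrors
    $ apt-cache search markdown editor  # search the archive without touching a browser
    $ sudo apt install nginx            # install a web server packaged and signed by the distribution
    $ sudo apt upgrade                  # pull in the latest security patches for everything installed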
Online patching
Ubuntu, Red Hat, and SUSE all offer options for applying kernel patches without a reboot. Ubuntu even offers this service for free for up to three desktops or servers. This removes the last real barrier to patching on a regular cycle. If live patching is implemented in your environment, you can reduce the downtime due to reboots by an enormous margin; SUSE will support patching your OS for up to a year without a reboot, which is pretty incredible. I'm still inclined to reboot, if for nothing else than the peace of mind that comes with knowing my systems will come back up in the event of an outage, but it's nice to know that I don't have to.
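For the curious, here is roughly what enabling Ubuntu's free Livepatch service looks like (the token below is just a placeholder for the one tied to your Ubuntu account; Red Hat's kpatch and SUSE's kGraft play the same role on their platforms):

    $ sudo snap install canonical-livepatch
    $ sudo canonical-livepatch enable YOUR-TOKEN   # token comes from your Ubuntu One / Livepatch account
    $ canonical-livepatch status --verbose         # confirm kernel fixes are being applied without reboots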
Conclusion
I think Linux systems are more secure, and provide better tools for keeping a system secure in the long run, than proprietary operating systems. A default Linux installation, while not bulletproof, provides an excellent starting point to build from, and with proper management it is more secure than most of the alternatives available. If you disagree, let me know in the comments; I'm always open to learning new things. Hopefully I've convinced at least some of you to switch to Linux… but in the interest of full disclosure, I do have a bias in this regard, since I make a living using Linux 🙂
Luke has an RHCSA for Red Hat Enterprise Linux 7 and currently works as a Linux Systems Administrator in Ohio.
This post, re-published here with permission, was originally published on Luke’s site here.