Access Control


Before you can deploy technology to support access control of your machine, you need clear answers to the following questions; otherwise you might end up putting secure locks on every door of the house while keeping your valuables in the garage.

  • The purpose of the machine

  • Who can request services from the machine

  • The damage a compromised machine could cause

The purpose of the machine will greatly influence the access policy. If it is a web server, it will face the outside world for anyone to contact. If it is a source repository server, it might tightly restrict the personnel authorized to access it.

The physical loss of a machine is better evaluated in the context of a backup strategy. Damages from a compromised machine are different. They often involve intangible value such as reputation, trade secrets, etc. Unlike the physical loss of a machine, the damages from a compromised machine will often be public and cannot be undone. The central question thus becomes: what happens if the machine goes rogue? For example, a mail server might start spamming on your behalf, or strategic plans might become known to your competitors.

A major decision with regards to running services is whether access will be unrestricted or restricted. Web and mail services are meant to reach the maximum number of people, by definition people you might never have been in contact with before. Therefore anyone should be able to talk to such web and mail services. Of course, that also means such services will see a lot of unwanted traffic from shady botnets. Access policies for unrestricted services need to filter requests a posteriori, implemented through a broad range of categorization tools such as log analysers, spam filters and virus scanners, together with pro-active denial if necessary. The major challenge for unrestricted services quickly becomes dealing with false positives.
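A posteriori log analysis can start very simply. The sketch below, assuming a hypothetical nginx-style access log where the first field is the client address, ranks clients by request volume so abusive bots stand out:

```shell
# Count requests per client IP in a web server access log and list
# the top ten offenders; the log path is an assumption.
awk '{ print $1 }' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10
```

The heavy hitters at the top of that list are candidates for closer inspection or pro-active denial.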

Any service that writes information, such as source repository commits, will typically require clients to authenticate before sending requests. A common approach is to implement an authentication service that acts as a gateway in front of restricted services. The authentication gateway itself is an unrestricted service that denies or grants access to the restricted services, which can thus assume all requests are valid.
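The ssh daemon itself can play the gateway role. A minimal sketch of an sshd_config fragment, where the group name "tunnel" is an assumption, lets members of that group authenticate and forward ports to restricted services without ever getting a shell:

```
# Fragment of /etc/ssh/sshd_config (group name is hypothetical).
# Members of "tunnel" may authenticate and open tunnels, but are
# denied a tty and an interactive shell.
Match Group tunnel
    AllowTcpForwarding yes
    PermitTTY no
    X11Forwarding no
    ForceCommand /usr/sbin/nologin
```

Clients then connect with `ssh -N`, requesting forwarding only, and the restricted services behind the gateway see only authenticated traffic.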


One of the first principles of controlling information is partitioning. For example, the same person will have different passwords for different roles and systems. Partitioning confines compromised information to a specific subsystem.

Imagine you have two machines, one where development is on-going and one with a stable release that serves your Software-as-a-Service website. Let's call them stage and production. In that case, an example partition policy might look like:

  • There should be one unique ssh key per person and per environment (i.e. stage or production).
  • A valid e-mail address should be embedded in the public key comment.
  • Access to the production environment requires two-factor authentication.

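The first two points of the policy above can be sketched with ssh-keygen; the name, e-mail address and file paths are hypothetical, and each invocation will prompt for a passphrase:

```shell
# One key pair per person and per environment, with the e-mail
# address embedded as the public key comment (-C).
ssh-keygen -t ed25519 -C "alice@example.com" -f ~/.ssh/id_ed25519_stage
ssh-keygen -t ed25519 -C "alice@example.com" -f ~/.ssh/id_ed25519_production
```

Keeping the environment name in the file name makes it obvious which key to hand out, and which single key to revoke if one environment is compromised.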

It is better to discover and address a vulnerability before it is taken advantage of by an unauthorized party. Active penetration auditing measures the time it takes to compromise a system. Password cracking tools, for example, should continuously be run against password files because, no matter how complex a password policy is, only the actual time to crack the first password matters.
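Continuous cracking is easily scheduled. A sketch of a cron fragment, assuming John the Ripper is installed and the paths and schedule are hypothetical:

```
# /etc/cron.d/audit-passwords (paths and schedule are assumptions).
# Combine passwd and shadow, then let John the Ripper chew on the
# result every night at 02:00.
0 2 * * *  root  unshadow /etc/passwd /etc/shadow > /root/audit/passwd.db && john /root/audit/passwd.db
```

Any account that shows up in `john --show /root/audit/passwd.db` has a password that was cracked faster than your rotation interval, and should be reset immediately.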

Another category of auditing tools focuses on intrusion detection. These measure the delay between a system actually being compromised and the intrusion being detected. Activity logging and analysis are a big part of this kind of auditing.

Physical Access

Passwords, file permissions, firewalls, etc. do not protect against unauthorized access to the physical infrastructure. Stolen laptops account for the majority of unauthorized physical accesses. Encrypting persistent data is the only way to prevent access to controlled information in that case. As most applications use temporary files and generate data caches in various places on a hard drive, only full-disk encryption at the device level guarantees information is actually stored encrypted.
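On Linux, device-level encryption is typically done with LUKS through cryptsetup. A minimal sketch, where the device name and mapper name are assumptions and the first command destroys whatever is currently on the device:

```
# Format /dev/sdb as a LUKS container, open it under an arbitrary
# mapper name, then create and mount a filesystem on the mapping.
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb cryptdata
mkfs.ext4 /dev/mapper/cryptdata
mount /dev/mapper/cryptdata /mnt
```

Everything written under /mnt, including temporary files and caches, then reaches the platters encrypted; the passphrase is only needed at open time.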

Remote Access

The same way encryption is used to prevent unauthorized physical access to information stored on a disk, it can be used in communication protocols to prevent eavesdropping. The most commonly used set of tools for that purpose is the OpenSSH software package.

A remote machine provides services on open ports. Since each open port is a potential attack vector, the tightest access control policy is to have only one port publicly open, for the ssh daemon, and to use the local forwarding mechanism to redirect traffic to other services through tunnels.
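A sketch of the local forwarding mechanism, assuming a hypothetical host example.com whose web service is bound to localhost on the server side:

```
# Only port 22 is public on example.com. Forward local port 8080
# through the ssh tunnel to port 80 on the server's loopback.
# -N requests forwarding only, no remote command.
ssh -N -L 8080:localhost:80 alice@example.com
```

While the tunnel is up, browsing http://localhost:8080/ on the local machine reaches the remote web service, yet an outside scanner sees nothing but the ssh port.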

One of the most versatile services on a remote machine is the command shell. Within the shell, access to restricted commands is controlled through filesystem permissions and a local authentication command to assume different identities. On Unix, for example, permissions usually boil down to read, write and execute bits for an owner, a group and everyone else, while users can assume a different identity through the sudo command. As a shell provides access to a wide variety of commands, it is an important policy decision which accounts are granted a shell login and which are not.
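A minimal sketch of those permission bits in action, with hypothetical file and account names:

```shell
# Owner may read and write, group may read, everyone else gets
# nothing: 640 in octal notation.
touch secrets.txt
chmod 640 secrets.txt
stat -c '%a' secrets.txt    # prints 640 on Linux
# Assuming an /etc/sudoers entry grants it, an operator can then
# act as the "backup" identity for a single command:
# sudo -u backup ls /var/backups
```

The octal triplet reads owner/group/other, each digit the sum of read (4), write (2) and execute (1).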


Users have restricted permissions to access files and services. Authentication consists of a system verifying that a user is who he claims to be. In everyday life, you might be asked to present your driver's licence before you can access your bank account; this is authentication. The pieces of information provided to the system for authentication usually fall into one of the following categories:

  • What you know, for example, a password
  • What you have, for example, a key
  • What you are, for example, a fingerprint

On the other side, a user often also needs to verify the authenticity of a system before starting to interact with it. Authentication, like trust, is a two-way street.

On today's computer systems, public/private key cryptography is considered one of the most secure methods of authentication and is thus heavily used. Both ssh and gpg use public/private keys for authentication. Keys are typically very long, so a shorter fingerprint is usually used to validate the authenticity of the key itself. To obtain fingerprints for ssh and gpg keys, use the following commands respectively.

ssh-keygen -lf keyfile
gpg --fingerprint username

Backups are an important part of any software infrastructure. It is often advisable to set up automated backups at scheduled intervals. This has important consequences with regards to authentication because, in order to enable automated backups, we need both a key pair with an empty passphrase and an account privileged enough to access the most sensitive information. Fortunately, backups only require read-only privileges. As a result, access policies often define a special backup role.
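The exposure of a passphrase-less key can be narrowed further by restricting what it may do on the server. A sketch of an authorized_keys entry, where the rsync server invocation, paths and key material are assumptions:

```
# Server-side ~/.ssh/authorized_keys entry for the backup role.
# "restrict" disables forwarding and ttys; command= pins the key to
# a single read-only rsync sender, nothing else.
restrict,command="rsync --server --sender -a . /var/data/" ssh-ed25519 AAAA... backup@example.com
```

Even if the unencrypted private key leaks, the attacker gains read access to one directory rather than a shell.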

Each service typically comes with its own custom-built authentication mechanism. That not only increases the risk of a system being compromised but also creates management headaches for IT staff. On Linux, PAM has slowly emerged as a unifying standard for authentication. Each service that cares about authentication can use PAM's simple programming interface to leverage a well-behaved authentication library.
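A PAM-aware service is configured, not coded, into the system policy. A sketch of a per-service stack, assuming a hypothetical service name and the Debian-style common-* include files:

```
# /etc/pam.d/myservice (service name is an assumption).
# Delegate every phase to the system-wide stacks, so the service
# authenticates however the rest of the machine does.
auth     include  common-auth
account  include  common-account
session  include  common-session
```

Swapping the whole machine from local passwords to, say, a directory server then means editing the common-* stacks once rather than touching every service.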

Example policy

A software development group and its supporting infrastructure often define four roles with regards to access policies:

Rights                         Contributor  Committer  Backup  Admin
website                        yes          yes        yes     yes
read repository                yes          yes        yes     yes
dev mailing list               yes          yes        yes     yes
ssh tunnel                     no           yes        yes     yes
repository write               no           yes        no      yes
mail address                   no           yes        yes     yes
admin files read (/etc, /var)  no           no         yes     yes
admin files write              no           no         no      yes
shell                          no           no         no      yes
noc mailing list               no           no         yes     yes
sudo                           no           no         no      yes


At a minimum, services that allow modifications to underlying information will require authentication. As time goes on, more and more services pile up, each one providing a different password store by default.

Contributors' access privileges are thought of through the notion of role. Of course, roles most often do not map one-to-one to products, so what you end up with is a default password store per product. Once a new contributor's roles have been defined, an account needs to be added to each service. Doing so is a little annoying. Changing credentials is annoying. De-authorization is a definite issue. When a contributor is no longer deemed trustworthy, his authorization to use the services associated with the role has to be revoked, often as soon as possible. With a proliferation of machines, products and password stores mashed up through the years, the task of de-authorization is at best complex, at worst not fully executed, leaving a security hole. What you really want is a central authentication directory, and that is where LDAP comes in.

LDAP is tricky to set up and each product has its own way to configure authentication through LDAP, if it supports it at all. Nonetheless, LDAP is versatile and common enough that it has become a standard for solving password store proliferation issues. I strongly recommend shying away from applications that do not have LDAP support, either directly or through PAM.