
Credential management with HashiCorp Vault

Key Master

Admin teams can use secret sharing to centrally manage shared access to user accounts and services. HashiCorp Vault is one of the few tools that has proven effective when it comes to implementing this solution. Here's how to use this open source tool and keep important credentials safe. By Matthias Wübbeling

Depending on a company's security paradigm, the admin team members share a joint account or assign admin rights to personal accounts. In the first case, each member must know the account's shared password. This sometimes leads to unfortunate situations when an established employee leaves the team or even the company. In the second case, the administrators handle all their daily tasks with a user account that has rights far above normal levels.

Additionally, cloud infrastructures with dynamic resource management adapt to the actual needs of the respective services. In high-utilization situations, instances added at short notice carry out the necessary calculations. User accounts for databases or APIs in such dynamic environments typically share secret keys for encrypting communication or for access to shared filesystems. Each password is then stored in a widely distributed configuration file that is read when the instance is set up. Alternatively, the shared admin password is used to log in, and the remaining configuration is completed semiautomatically.

In both scenarios, different instances use the same secrets to perform the work to be done. On the one hand, this leads to accountability issues for the work performed – for example, traceability: Who logged into the administrative account on a system and when? If cloud instances have the same access credentials for database access, it is impossible to determine unambiguously which instance is responsible for errors or malicious actions. On the other hand, existing structures can change after distributing the secrets. This results in new problems: What happens to the old password? Do you need to equip all systems with a new password or is it sufficient to do this for future instances?

Installation and Configuration

HashiCorp [1] is known in the field of dynamic services, mainly for its Vagrant and Packer tools. HashiCorp Vault [2] gives you access to shared resources and services, cryptographic keys, and dynamic access to user accounts. Vault is developed as an open source client-server application, primarily in the Go programming language. Installing it from the GitHub source is a trouble-free process; the vendor provides ready-to-run packages for popular operating systems. After the installation, the program should be located in the execution path to simplify its use.

Vault manages the secrets in a separate server process, which is controlled through the well-documented HTTP API. The Vault client mediates between command-line arguments and the API on the server. Before you can safely store the first secret, however, you need to configure the server. I am assuming a central service on a publicly accessible server system. Access rules in the packet filter can increase security, but you should choose them such that access from a shared cloud infrastructure remains possible. A configured subdomain (e.g., vault.example.com) allows for easy configuration of the client. You need to secure the connection between the Vault client and server with SSL; for this, you need a certificate for the relevant subdomain.

If you have set up the subdomain and the SSL certificate is available, initial configuration of the server can take place. You create this in the specially developed HashiCorp Configuration Language (HCL). Because HCL is compatible with JSON, the format is quickly accessible. The vault.example.com.config configuration file might look like Listing 1.

Listing 1: HCL Configuration File

backend "file" {
  path = "/opt/vault/store.db"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/server.pem"
  tls_key_file = "/opt/vault/key.pem"
  tls_min_version = "tls12"
}

First, configure the server's back end, the place where Vault safely stores secrets. You can choose between different targets here, ranging from Amazon S3 or a MySQL database to the filesystem or plain memory, as needed. For this first attempt, I am choosing the file back end, which stores the data on the local hard disk.

The only currently supported listener configuration is TCP. In this configuration block, address configures the local address and port to be used by the server. The tls_cert_file and tls_key_file options define the location of the SSL certificate and the associated secret key. To improve transport security, you should not allow TLS versions before 1.2; to do this, use the tls_min_version option with the tls12 value.

You are probably wondering whether storing secrets in a file on the local hard drive or on Amazon S3 can be secure. The answer is yes, because Vault stores the secrets entrusted to it in encrypted form. The key is kept in system memory only. To prevent it from being swapped out to a hard disk, Vault uses the Linux mlock kernel function, which is reserved for root, so you need to launch the Vault server with the following command:

$ sudo vault server --config vault.example.com.config

Setting Up the Server

After the first launch of the server, you must prepare it for use. This is done with the same Vault program you used to launch the server. For convenience, it makes sense to store the address of the Vault server in an environment variable. On Linux, you can do this for the current terminal session as follows:

$ export VAULT_ADDR="https://vault.example.com:8200"

The following command ensures that you are able to access the server and that it has not been previously initialized:

$ vault status

The result of the status request should be * server is not yet initialized. Now you can initialize the server. During this process, Vault generates the key used to protect the secrets. Because the key is never written to the hard disk, you have to input it each time you restart the server to unlock the vault. Yet another feature increases the security of the Vault server: The key itself is never displayed, not even at initialization, because Vault uses the secret sharing procedure developed by Adi Shamir (Shamir's Secret Sharing).

Vault divides the key used in the default configuration into five subkeys, which together describe the master key. After restarting the server, Vault requires at least three of the five subkeys to generate the master key again. Therefore, responsibility for the master key can be split between all members of a team. You can initialize the vault with the command:

$ vault init
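
If the default of five subkeys with a threshold of three does not suit your team, you can adjust both values at initialization time with the corresponding vault init flags, for example:

$ vault init -key-shares=7 -key-threshold=4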

In addition to the five generated subkeys, the dialog also shows the root token (Figure 1), which authorizes your actions on the server. Distribute the subkeys and make a note of the displayed token.

Figure 1: The subkey and root token after initialization. Of the five keys, only three are needed to open the vault.

If you now check the status of the vault, you will receive information about the state, which is still sealed, as indicated by sealed: true displayed in the status output. The next step is to open the vault. As previously mentioned, three subkeys are necessary. Vault prompts you to enter your subkey after you run the command:

$ vault unseal

If you provide the required subkeys by running unseal multiple times, Vault regenerates the master key, and the state of the vault is marked as unsealed. From this point on, you can store and retrieve secrets. Before using Vault, though, you need to know which back ends are available for storage, authentication, and auditing.

Customization via Vault Back Ends

Vault lets you adapt to individual conditions with different back ends, distinguishing between storage, authentication, and auditing back ends.

Storage back ends manage the secrets themselves, as well as the dynamic generation of time-limited credentials and the corresponding access to them. You need to mount the required back ends in the vault before you can use them. A back end is similar to a virtual filesystem: The supported operations are read, write, and delete, and secrets are stored under paths, with one secret per path. You can control access to individual paths using rule sets, or policies; an example of this is shown later. First, the following command shows the currently mounted storage back ends:

$ vault mounts

When you first start, the generic, system, and cubbyhole back ends are already mounted. The generic back end is storage for arbitrary secrets. The system back end stores the server's configuration settings, including the previously mentioned policies for access control, as well as the vault's seal state. In the cubbyhole back end, Vault manages secrets separately for each user token. Access here is not regulated by policies but by authentication against the server; you can store secrets that you do not want to share in this storage area (Figure 2).

Figure 2: Access to the secret is impossible without authorization.

To learn more about the use of the server, first store a simple password with the integrated generic back end under secret/ and then read it right back:

$ vault write /secret/secret value="secret"
$ vault read /secret/secret
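
The generic back end is not limited to a single value field: You can store several key-value pairs under one path, and a ttl field gives Vault a hint about the desired lease duration. The path and field names in this sketch are just examples:

$ vault write secret/webapp user="admin" pass="s3cret" ttl="1h"
$ vault read secret/webapp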

It is important to note that although Vault encrypts the value stored in value, the path remains unencrypted. You should therefore avoid storing confidential information in the path itself. For an overview of the paths used, simply use the command:

$ vault list /secret

The list shows the paths and folders used at the requested folder level; a secret and a subfolder can have the same name. You can access the cubbyhole back end just like the generic back end. The difference is that the list command only shows the paths used by your own token.
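
For example, a token-private note can be stored and read back as follows (the private path name is arbitrary):

$ vault write cubbyhole/private value="my private note"
$ vault read cubbyhole/private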

In addition to those already mentioned, other back ends have different purposes, including, for example, back ends for dynamic access to SSH or MySQL servers, which are particularly useful in cloud environments. Before setting up these two, I'll take a closer look at the authentication and audit back ends in Vault.

Authentication via Tokens and Policies

Authentication against the server relies on tokens by default. When the server was initialized, your root token was stored in the ~/.vault-token file. If you delete this file or want to authenticate with another user account, Vault asks for your token when you enter the following command:

$ vault auth

Then you can see information such as the token's lifetime or its applicable policies. On creation, a new token inherits the associated policy of the parent token. When a token is revoked, all child tokens that were created are also revoked. The allocation of a token to policies cannot be changed later. To create a token without additional access options, run the following command when logged in with a root token:

$ vault token-create -policy=default
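
The counterpart for withdrawing a token – along with all its child tokens, as described above – is the token-revoke command (the token value is a placeholder):

$ vault token-revoke <token>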

When you log in with the created token, you will no longer have access to the previously created secret in /secret/secret. Next, create a policy that grants a token read rights to the /secret/secret path and assigns all rights to the associated /secret/secret/ folder. Listing 2 shows a corresponding policy. The policies are formatted in HCL, like the Vault configuration. They describe the respective rights (capabilities) for specified paths. The available rights are create, read, update, delete, list, and deny. For access to specially protected areas, you also have sudo.

Listing 2: Policy Example

path "/secret/*" {
  capabilities = ["deny"]
}
path "/secret/secret" {
  capabilities = ["read"]
}
path "/secret/secret/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

To store the policy from the policy.hcl file under the name ita, use the command:

$ vault policy-write ita policy.hcl

Use token-create to create a new token, as shown above, and replace default with the newly created ita policy. The response from the server shows that the token is now assigned both the default and the ita policies. The default policy is assigned automatically; access to all paths not explicitly mentioned is denied.
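
A quick test makes the effect of the policy visible; the token placeholder stands for the value returned by token-create:

$ vault token-create -policy=ita
$ vault auth <token>
$ vault read /secret/secret
$ vault write /secret/secret value="new-secret"

The read succeeds, whereas the write is rejected, because the ita policy only grants read access to /secret/secret. Log back in with the root token before continuing.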

Setting up External Access

Managing a large number of tokens can quickly turn into a huge documentation task for the administrator. Vault therefore offers further options for user authentication beyond tokens. In addition to TLS certificates, LDAP, and other back ends, GitHub can be used to minimize the maintenance overhead. For this, you first enable the authentication back end; then, you configure the organization for the back end – in this case, IT-Administrator:

$ vault auth-enable github
$ vault write auth/github/config organization=IT-Administrator

For successful access, you have to configure a policy that Vault assigns to users after login. This can be done via GitHub teams within the organization. All members of the administration team are given root access to the server with the following command:

$ vault write auth/github/map/teams/administrators value=root
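
Teams that should not have full rights can be mapped to a more restrictive policy in the same way – for example, to the ita policy created earlier (the devops team name is a placeholder):

$ vault write auth/github/map/teams/devops value=ita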

Logging in as a GitHub user who is a member of this GitHub organization now works with the user's personal access token stored on GitHub:

$ vault auth -method=github token=<GitHub Personal Access Token>

SSH Login to a Cloud Environment

In today's dynamic cloud environments, administrators need an overview of logins on different machines. This is particularly difficult if the admin team is subject to regular changes. It would be useful if you no longer had to worry about maintaining SSH public keys or distributing and updating shared passwords for administration. Using the example of an Amazon Elastic Compute Cloud (EC2) instance, I will now configure Vault for SSH login (Figure 3). EC2 generates instances based on a template, the Amazon Machine Image (AMI). If this template already contains Vault's SSH back end [3], spontaneous use becomes child's play.

Figure 3: The process of an SSH login with Vault, in this example with an OTP.

The setup on CentOS, for example, is quite easily managed. To verify the login data, you need a special pluggable authentication module (PAM) that communicates with the Vault server and verifies the one-time password (OTP). To guard against failures in communication with the Vault server during testing, you should install a public key as a backup on the machine. Start by adding the SSH back end to your vault:

$ vault mount ssh

You can pick up the vault-ssh-helper PAM from the Git repository [3]. After installing on the target server, you need to verify the PAM configuration for the SSH daemon and sshd_config to ensure that SSH actually uses PAM for logins. Following the documentation on GitHub [3], you verify the configuration using the following command:

$ vault-ssh-helper -verify-only -config=/etc/vault-ssh-helper.d/config.hcl
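
The config.hcl file referenced here essentially tells the helper where to find the Vault server and the SSH back end. A minimal sketch might look like the following; the option names follow the vault-ssh-helper documentation [3], and the CA file path is an example:

vault_addr = "https://vault.example.com:8200"
ssh_mount_point = "ssh"
ca_cert = "/etc/vault-ssh-helper.d/vault-ca.pem"
allowed_roles = "*"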

For the login to work, you need to configure access to the vault. This is achieved with the following command

$ vault write ssh/roles/ec2instance key_type=otp default_user=ec2-user cidr_list=w.x.y.z/32

which sets the key type to otp and the default user to ec2-user. You can choose any name for the ec2instance role. Use of the key is restricted to the machines listed in cidr_list (or to the appropriate subnet). To check whether the server generates an OTP, test it with the next command. Note that the SSH back end generates the OTP and then writes it to the protected storage (Figure 4), which is why the command starts with vault write. Request credentials for the previously selected role name:

$ vault write ssh/creds/ec2instance ip=w.x.y.z

Figure 4: SSH logins are handled by the OTP generated here.

Despite successful tests on both sides, login can fail if SELinux is active and in enforcing mode. If this is the case, you might need to modify the rights for network access by the SSH login process. First, however, try the login from your client machine with the command:

$ vault ssh -role ec2instance ec2-user@w.x.y.z

The OTP is now in the first line of the output. Verify the SSH fingerprint and enter the password. For further automation, you can install the sshpass program; however, manual verification of the SSH fingerprint is then no longer possible. If the login fails, check the log data in the /var/log/audit/audit.log file on the server for an entry similar to:

[Error]: Put https://vault.example.com:8200/v1/ssh/verify: dial tcp w.x.y.z:8200: connect: permission denied

If it exists, you can adapt the SELinux configuration directly on the basis of these entries:

$ grep 'avc' /var/log/audit/audit.log | audit2allow -R -M vault.allow
$ semodule -i vault.allow.pp

Now the login should be successful. Next, create a policy for access to the various SSH logins; then, you can conveniently use SSH to log in to the servers in your cloud environment. In case of staff changes, you simply revoke the token or update the GitHub team. Auditing in Vault lets you trace user logins on shared SSH user accounts.

Managing Database Logins

Much like the SSH login, you can use Vault to manage access to a database server, such as a MySQL server. This is a valuable feature for cloud instances, because you no longer need to maintain a static password in the template. Every instance gets its own username:password combination, which is also time-restricted.

A word of caution: For many cloud services, such as Amazon EC2 and Amazon RDS, the machines and databases reside together on virtual private networks. If that is not the case in your setup, make sure you encrypt the MySQL connections with TLS, which MySQL does not do by default.

First add the MySQL back end and write the connection to your MySQL server in the back end configuration path:

$ vault mount mysql
$ vault write /mysql/config/connection connection_url="user:password@tcp(w.x.y.z:3306)/"

The connection settings can no longer be read out after doing this. Make sure that MySQL also distinguishes users by hostname. In other words, you need to specify the correct host entry of the vault server for the user. Additionally, the user must have GRANT rights to create additional users with different rights.
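
A setup for the Vault user could look like the following sketch; the vaulthost host entry and the password are placeholders, and you should narrow the privileges down to what your role definitions actually require:

$ mysql -u root -p -e "CREATE USER 'vault'@'vaulthost' IDENTIFIED BY 'password'; GRANT SELECT ON *.* TO 'vault'@'vaulthost' WITH GRANT OPTION; GRANT CREATE USER ON *.* TO 'vault'@'vaulthost';"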

Next, set the lease duration and the maximum lease time for MySQL access in the back end's configuration path. The following command sets the validity period to one hour; before the hour expires, the lease can be renewed repeatedly, up to the defined maximum of 24 hours:

$ vault write /mysql/config/lease lease=1h lease_max=24h

The MySQL back end supports arbitrary names for the roles that are granted user access. A role definition includes the MySQL statement for creating the user. As an example, I will define a readonly role that can only perform SELECTs on all tables:

$ vault write /mysql/roles/readonly sql="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';"

Vault replaces the placeholders {{name}} and {{password}} with the generated values. The following commands show the stored roles and the associated SQL statements:

$ vault list /mysql/roles
$ vault read /mysql/roles/readonly

For access to the MySQL database, the application itself – or a job that runs at regular intervals – requests access and receives the username/password combination:

$ vault read /mysql/creds/readonly
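
If an instance is decommissioned early, you do not need to wait for the lease to expire: The read command returns a lease ID below the creds path, which you can renew or revoke manually (the lease ID here is a placeholder):

$ vault renew mysql/creds/readonly/<lease_id>
$ vault revoke mysql/creds/readonly/<lease_id>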

Otherwise, the access assignments are revoked automatically when the validity period expires. If you manage multiple MySQL servers from the same vault, you can mount the MySQL back end a second time under a different path, which you pass to the mount command with the path argument:

$ vault mount -path mysql2 mysql
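
The second back end is then configured independently under the new path – for example, with placeholder connection data:

$ vault write /mysql2/config/connection connection_url="user:password@tcp(a.b.c.d:3306)/"
$ vault write /mysql2/config/lease lease=1h lease_max=24h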

Auditing via Log Data

Vault provides two back ends for logging access to the vault server for auditing purposes. You can choose between file and syslog, or use both back ends at the same time. In this example, I am using only the syslog back end, which you have probably already integrated into your existing monitoring. To activate it, run the following command, which sets the vault tag and the AUTH syslog facility so that syslog can sort the logs in the usual way:

$ vault audit-enable syslog tag="vault" facility="AUTH"
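
If you prefer a dedicated logfile instead of, or in addition to, syslog, you can enable the file back end in the same way; the file_path option name follows the Vault documentation, and the logfile location is an example:

$ vault audit-enable file file_path=/var/log/vault_audit.log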

Vault's log data is very detailed and contains information about the login credentials used. However, Vault hashes these by default using SHA256 and a salt, so they do not end up in plain text in the logs.

Conclusions

Far beyond the capabilities of classic password managers, Vault provides options for the structured management and distribution of secrets and for the dynamic handling of user access. Policy-based authorization, together with the authentication and audit back ends, enables technically and procedurally safe deployment, and not only in cloud environments. Compared with Keywhiz [4], which is similar in its basic concept, Vault's structure and data storage are more sophisticated and more flexible.

The use of Shamir's Secret Sharing is a good way to distribute responsibility reliably across several shoulders; stopping provisioning (e.g., in an emergency, by sealing the vault), however, is something a single administrator can do alone. Authentication back ends like the GitHub back end used here are useful and shift work from the production to the administrative level.

Only the creation of your own back ends, which you have to compile directly into Vault, is a little awkward; support for adding back ends dynamically at run time would be a welcome improvement. Nevertheless, thanks to the low barriers to entry, Vault is a useful optimization tool in the daily working life of an admin team.