Secure your database credentials with HashiCorp Vault

Rafael Remondes
9 min read · Aug 31, 2019

Managing sensitive information such as passwords or tokens can be complicated. Too often they are stored in clear text right next to the code, or in a way that is accessible to other users or processes, leaving them vulnerable.

Encrypting all the configuration files can be a solution that helps you sleep well at night. However, passwords can still be compromised in other ways, and it's a security best practice to rotate credentials regularly. If you only have a couple of passwords, that's not such a big deal, but as you integrate more services (or more accounts), things get confusing and complex as the number of passwords grows. It's hard to manage all of these credentials. Luckily, there is a solution to this sort of problem, a centralized yet secure way to store all of your sensitive data: HashiCorp Vault.

What is Vault?

Vault is a secrets manager that encrypts and stores data securely, allowing users to interact with it through a CLI or an HTTP API. Access is validated through policies written and uploaded by you, the administrator. All the data you store in Vault is encrypted and written to a backend of your choice: a Consul cluster, an S3 bucket, or even memory. Vault offers many options for storing your secrets. Since everything is encrypted, the data can only be accessed through Vault. When you, someone on your team, or a service needs to access the stored data, Vault handles the encryption/decryption process transparently and checks whether the user or application is allowed to read, write, or edit that resource. If you think some credentials were compromised, fear not: you can always revoke the credentials through Vault or, in extreme cases, seal it, blocking access to everyone until it is unsealed.

The sealing process

Sealing? What is sealing?

As stated before, Vault encrypts everything that you store in it. How is that done? Vault uses a single key to encrypt and decrypt data. This encryption key is itself stored encrypted in Vault using yet another key, called the master key. So you have an encryption key to encrypt your secrets, and a master key to encrypt the encryption key.

When Vault starts for the first time, it has no master key. You first need to create it by running vault operator init, which initializes Vault. During initialization, the master key is created and split into several shards. The shards are printed to STDOUT and then discarded by Vault. After this process is completed, Vault starts in a sealed state, meaning it holds the encrypted encryption key but has no way to decrypt it.

To unseal it, you need to provide Vault with the key shards that were previously printed so that Vault can reconstruct the master key and decrypt the encryption key. This is possible because the master key is split using a cryptographic scheme called Shamir's Secret Sharing.

After enough key shards are provided, Vault is finally unsealed and ready to use. If Vault restarts, it starts sealed again because the master key is kept only in memory.

You can always manually seal Vault, which means throwing away the master key.
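
For reference, both operations are a single CLI command; the unseal command has to be repeated until enough key shards have been supplied:

vault operator unseal   # prompts for one key shard; repeat until the threshold is met
vault operator seal     # throws the master key away, blocking access until unsealed again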

From Vault version 1.0 onwards, an auto-unseal option is available. Instead of asking for the key shards to rebuild the master key each time it starts, Vault uses a key held in a remote key management service to decrypt the master key automatically. You can use AWS KMS, GCP Cloud KMS, Azure Key Vault, or any other supported key management service. See the list here.

In our setup, we use the AWS KMS service to protect the master key. To let Vault know which service it should use, we just need to set it in the config file, written in HCL format:

seal "awskms"{
region = "us-east-2"
}

You can pass the config file path with the -config flag when starting Vault with vault server.
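
For example, assuming the configuration file lives at /vault/config/vault.hcl (the path here is just a placeholder):

vault server -config=/vault/config/vault.hcl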

We also need to supply the corresponding AWS credentials, associated with an IAM profile that has permission to use the KMS service, as well as the ID of the KMS key to use.

We can either pass the credentials through environment variables or set them up in the config file. In our example, we will set this up using environment variables:

export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
export AWS_SECRET_ACCESS_KEY=<your_aws_secret_key_id>
export VAULT_AWSKMS_SEAL_KEY_ID=<kms_key_id_to_be_used>

Vault can then read these environment variables and use the KMS key to decrypt its master key. Make sure the IAM profile associated with these AWS credentials has KMS usage privileges.
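
As a rough sketch, an IAM policy for that profile could look like the following; the key ARN is a placeholder, and the exact action list is an assumption about what the awskms seal needs to encrypt and decrypt the master key:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
      "Resource": "arn:aws:kms:us-east-2:<account_id>:key/<kms_key_id>"
    }
  ]
}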

Using Vault to store database credentials

The auto-unseal feature is very useful as it allows us to automate the deployment of our Vault cluster without any human intervention. Just a simple script will do the trick, and we have an operational cluster up and running.

Despite how cool this is, a Vault cluster is of no use if we don’t use it to store and manage our sensitive data. Vault offers a simple Key-Value store that can be used to store passwords.
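
For instance, with the key-value secrets engine mounted at secret/ (the path and field name below are just placeholders for this example):

vault kv put secret/my-app db_password=s3cr3t
vault kv get secret/my-app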

But say we want to store database credentials in Vault. It's good that we have safe storage to use, but for things like rotation and role-based access to our database, a fixed password in a Key-Value store is not the most convenient choice. Rotating those credentials becomes complex, boring, and time-consuming as more users gain access to our database, and one password shared by all users is out of the question.

Considering all of these problems with credential rotation and role-based access, Vault offers a database secrets engine that creates temporary credentials per user. Credentials are created per role, so we can have, for example, a read-only role stored in Vault to share with applications or users that only consume data, and another role that allows writing to some tables for the others.

In this example, we are assuming a PostgreSQL database but you can use another one of your choice as long as Vault offers support for it as well.

First, enable the database secret engine:

vault secrets enable database

Then, add the necessary configuration (allowed roles and connection string) so that Vault can connect to our PostgreSQL database, called expenses_database:

vault write database/config/expenses_database \
plugin_name=postgresql-database-plugin \
allowed_roles="expenses-db-role" \
connection_url="postgresql://{{username}}:{{password}}@postgres-database:5432/<your_database>?sslmode=disable" \
username="<your_db_user>" \
password="<your_db_user_password>" \
verify_connection=false

We need to pass Vault the allowed roles (Vault roles, not PostgreSQL roles). In this case, we use only one. We are not using an SSL connection to the database, which is why you see the option sslmode=disable and verify_connection=false. It goes without saying that you should always use SSL connections in production.

So now we can create a database role called expenses-db-role that we allowed access to when configuring the database:

vault write database/roles/expenses-db-role \
db_name=expenses_database \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
GRANT SELECT, UPDATE, INSERT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"

In creation_statements, we set the SQL statements that will run in PostgreSQL each time we ask Vault for credentials. Don't worry about the username and password: those will be generated by Vault. In default_ttl, we set the time to live for these credentials. It can be customized by the client accessing Vault, up to the maximum set in max_ttl.

A running example

To wrap all this up, we’ll start a container with Vault running, another one running Consul that’ll be used as our storage backend, and, lastly, one container running a PostgreSQL database.

Our Vault configuration is very simple. We just need to configure the auto-unseal option and the storage backend:

ui = true
storage "consul" {
  address = "consul-cluster:8500"
  path = "vault"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}
seal "awskms" {
  region = "us-east-2"
}

This configuration goes in the same file where we configured the seal method.

Next, our docker-compose file used to start all the services:

version: "3.1"
services:
  vault-cluster:
    image: "vault:latest"
    command: server
    ports:
      - 8200:8200
    volumes:
      - ./vault/config:/vault/config
    cap_add:
      - IPC_LOCK
    env_file: ./compose-env-vars.env
  consul-cluster:
    image: "consul:latest"
    command: agent -server -bootstrap-expect=1
    volumes:
      - ./consul/config:/consul/config
      - ./consul/data:/consul/data
  postgres-database:
    image: "postgres:11"
    env_file: ./compose-env-vars.env
    ports:
      - 5432:5432
    volumes:
      - ./postgres/init.sql:/docker-entrypoint-initdb.d/init.sql

We need to add the IPC_LOCK capability (the cap_add entry above, equivalent to docker run --cap-add=IPC_LOCK) to prevent sensitive values from being swapped to disk, as explained in the container documentation on memory locking:

Vault container will attempt to lock memory to prevent sensitive values from being swapped to disk and as a result must have cap-add=IPC_LOCK provided to docker run. Since the Vault binary runs as a non-root user, setcap is used to give the binary the ability to lock memory

After that, we can deploy it with a docker-compose up. The next step is to initialize Vault:

vault operator init

Vault will print the recovery keys. As we are using auto-unseal, they are only needed for privileged operations such as generating a new root token. An initial root token is also printed. We'll use it to log in, configure access to the database secret engine, and create an AppRole.

To login, simply type:

vault login

Vault will ask you to enter your token. Using our root token, we're logged in and can start configuring Vault by enabling the database secret engine:

vault secrets enable database

Next, we add our database configuration and role by following the steps described above. Our database is called expenses_database.

After the PostgreSQL access is configured, Vault is ready to be used as our database credential manager. We can simply retrieve a username and password using the CLI:
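
vault read database/creds/expenses-db-role

The response contains a freshly generated username and password, along with a lease ID and a lease duration that reflect the TTLs we configured for the role.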

And to login:
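
psql -h 127.0.0.1 -p 5432 -U <generated_username> expenses_database

This sketch assumes we connect from the host machine, where the compose file publishes port 5432; psql will prompt for the password that Vault generated.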

To use it in an application, we'll need to create credentials for accessing Vault. In this case, we use AppRole-based credentials.
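
The AppRole auth method is not enabled by default, so enable it first if you haven't already:

vault auth enable approle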

Since Vault only grants access to data when a policy authorizes it, we first need to create a policy file that authorizes our code to retrieve database credentials for expenses-db-role:

path "auth/approle/login" {
capabilities = ["create", "read"]
}
path "database/creds/expenses-db-role" {
capabilities = ["read"]
}

And upload the created file to Vault to create the policy:

vault policy write database-access-policy database-access-policy.hcl

With our policy uploaded, we can now create an AppRole set of credentials (a role ID and a secret ID) for our app to use:

vault write auth/approle/role/database-role \
token_ttl=20m \
token_max_ttl=30m \
policies=database-access-policy

To fetch the newly created credentials, the following commands return the role ID and the secret ID, respectively:

vault read auth/approle/role/database-role/role-id
vault write -f auth/approle/role/database-role/secret-id

Finally, we can include the credentials in our code and use any of the available SDKs to interact with Vault. In our case, we interact directly with the HTTP API in a very simple Python script:

# Part of our Vault client class; assumes requests and json are imported
def requestVaultToken(self):
    # Exchange the AppRole role_id/secret_id for a Vault token
    payload = {
        "role_id": self.vault_role_id,
        "secret_id": self.vault_secret_id
    }
    return requests.post(self.vault_addr + '/v1/auth/approle/login', data=json.dumps(payload), verify=False)
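
The Vault token comes back in the JSON response under auth.client_token; a minimal way to keep it on our client for the following requests could be:

resp = self.requestVaultToken()
self.vault_token = resp.json()['auth']['client_token']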

That token is then passed, via the X-Vault-Token header, in the next request to retrieve the temporary database credentials:

def buildRequest(self, path):
    # Build the full URL and attach the Vault token header
    return {
        "url": self.vault_addr + path,
        "header": {'X-Vault-Token': self.vault_token}
    }

def getDBcredentials(self):
    # Ask Vault for temporary credentials for our database role
    req = self.buildRequest('/v1/database/creds/expenses-db-role')
    resp = requests.get(req['url'], headers=req['header'], verify=False)
    resp_json = self.checkResponse(resp, json_parse=True)
    if resp_json is not None:
        return resp_json['data']

The resp_json variable includes the credentials we need, so we extract those and create a new PostgreSQL client:

credentials = vaultClient.getDBcredentials()
myDB = DatabaseClient(HOSTNAME, credentials['password'], credentials['username'], DB_NAME)

And that's it: we have everything ready to initialize a database connection and execute SQL queries.

Conclusion

This is a simple example that explains how to set up a small Vault cluster to store a set of database credentials, together with a Python script that interacts with Vault to retrieve a temporary username and password for accessing the database.

From this example, you can add more database roles, add other types of databases, or use Vault to store other kinds of secrets. Vault offers a variety of secret engines that allow the creation of credentials with a time-to-live.

You can check all the available secret engines and other features of Vault in their documentation.

If you're planning to use Vault in production, keep in mind that monitoring and logging should be in place, as well as SSL connections to Vault and to your database.

Happy coding.
