KeyChest Blog

HashiCorp Vault and PKI

Jan 28, 2020 2:57:45 PM / by Dan

I started playing with HashiCorp Vault about two years ago and I really struggled at first - I didn't expect how simple it actually turns out to be underneath. Here are some of my notes that may help you hit the ground running.


Note 1: I have updated the steps for Vault version 1.3+ (tested with v1.3.2) as the syntax has changed since my first deployments.

Note 2: There is an official how-to page which I used for this description and it's worth having a look at. I have added a description of basic concepts to help you understand Vault more quickly, plus some details that took me a while to figure out.

Vault is an open-source key management system by HashiCorp. You can use it with Consul and other CI/CD tools to securely manage passwords, keys, and certificates - basically any sensitive data items in your software configurations. Vault is a universal tool to manage all kinds of secrets: API keys, passwords, symmetric keys, certificates. It is free if you can learn and use its CLI or RESTful API; if you want a graphical interface, you can go for a paid license.

In a sense, the overall concept of Vault is simple. At the bottom is data storage - the Backend. On top of that sit Secrets Engines, each providing access and logic for a particular type of secret. On top of those are Auth Methods, which provide access control according to your access control policy. Use of the Vault is logged in JSON files, and there's an "audit" command to inspect those.

I haven't mentioned the "seal" concept yet. When you start a Vault server, you need to "unseal" the backend storage - this basically gives the server the encryption key to access the backend storage. Vault keeps this secret in memory for as long as it runs. It is not unusual to require 2 or more people (so-called dual control) to provide their seal secrets to start a new instance of the Vault server. If you don't do this, you expose yourself to the internal threat of a rogue employee: as you can run many instances of the server against any backend, it would be easy for them to launch a new server and pull secrets from it.

To sum up sealing/unsealing: the backend is encrypted with a key that has to be reconstructed from "key shares". The server keeps this key in memory while it runs; if it fails, you need to unseal again. This is one of the main reasons why you need several servers in a production environment - to prevent denial of service.
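The "key shares" and "threshold" come from Shamir's Secret Sharing, which Vault uses for its default seal. The toy sketch below is not Vault's implementation - just the underlying math: a random polynomial whose constant term is the secret, with each share being one point on it.

```python
# Toy Shamir's Secret Sharing - the idea behind Vault's key shares/threshold.
# NOT Vault's implementation; illustration only.
import random

PRIME = 2**127 - 1  # all arithmetic is modulo this prime
random.seed(1234)   # deterministic for the example

def split(secret: int, shares: int, threshold: int):
    """Split `secret` into `shares` points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    points = []
    for x in range(1, shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        points.append((x, y))
    return points

def combine(points):
    """Recover the secret (polynomial value at x=0) via Lagrange interpolation."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, shares=5, threshold=3)
assert combine(shares[:3]) == 123456789   # any 3 of 5 shares recover the key
assert combine(shares[1:4]) == 123456789  # a different subset works too
```

This is why each operator can hold one share: no subset smaller than the threshold learns anything about the master key.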

Let's get started with the PKI Secrets Engine using Vault's CLI. All commands can be replicated with RESTful API calls.

Download & Installation - HashiCorp provides binaries for a quick start; no installation needed, just one file of about 50MB that unzips to 140MB or so. The supported platforms are macOS, Windows, Linux, FreeBSD, NetBSD, OpenBSD, and Solaris.

As the Vault provides a service, you need to run it as a server, and you need a client to run commands. The same binary file works for both if you use the CLI. Postman is my preferred option for testing RESTful APIs.

Before you start the server, you need an initial configuration file that will define:

  • a path to data; and
  • the address and port for the server

You can use '#' to comment out lines you don't need. Here's my simple test.hcl file:

storage "file" {
  path = "/Users/dcvrcek/Downloads/vault.cfg/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}

# alternative listener on a non-localhost address (the UI needs this);
# the address below is just an example - use your machine's own:
#listener "tcp" {
#  address     = "192.168.1.10:8200"
#  tls_disable = 1
#}
Note: I have played with a free UI, which I didn't find that useful, really. However, it needs a non-localhost address to work.

OK, now we can start the server:

./vault server --config test.hcl

You should see something like this:

              Cgo: disabled
       Listener 1: tcp (addr: "", cluster address: "", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
        Log Level: info
            Mlock: supported: false, enabled: false
    Recovery Mode: false
          Storage: file
          Version: Vault v1.3.2

Besides a few technical messages, you can see that we have successfully set up a basic configuration. The only mandatory items are the listener and the backend/storage, but you can add the following items later on:

  • listener - where we can reach the server.
  • storage - the selection of a Backend - in our case it's "file"; once you stop testing, you can choose from a wide selection, from Zookeeper to MySQL, PostgreSQL, S3, or in-memory.
  • seal - it's an additional protection of the Vault - you can use an HSM, one of the cloud key vaults (AliCloud, Google, AWS, Azure), and a couple more.
  • telemetry - i.e., performance data - here you can specify an upstream server to collect the Vault's performance data.
  • entropy - an additional source of entropy for secrets generation.

Vault Client

Now that we have a server running in one terminal window, we move to another terminal window and set an environment variable VAULT_ADDR that reflects the listener configuration:

export VAULT_ADDR=http://127.0.0.1:8200

The important bit is to use "http" - the client would otherwise complain about an HTTPS/HTTP mismatch. We have a server running and an environment set up for the client. What we need to do first is to initialize the storage - create a "seal".

Initialize the Vault instance

First you need to initialize the storage backend. You can skip the parameters key-shares and key-threshold; the default values are 5 and 3, i.e., you need to enter 3 shares out of 5 to unseal the backend storage.

./vault operator init -key-shares=1 -key-threshold=1

The client will print the share and "root token" into the terminal:

Unseal Key 1: pvoVBIL9i+mcvU3iGXaToEAuocpACuO++IVZ0nGqPqY=

Initial Root Token: s.pST0K6ecmyK0eCXbACrtohjZ

Now comes the really hard bit - at least for long-term use of the Vault: what to do with these values? You need to keep them somewhere. The unseal keys should be distributed to different persons AND no person should have access to more than one key (unless your security policy says otherwise).

The root token is needed to load the initial security policy. You can then create tokens to manage particular parts of the policy and access particular Secrets Engines and secrets within. This initial token should be revoked as soon as practically possible.

Note: when you stop the server (don't forget to start it again), you can see that it has created a data folder - at the path set in the configuration file - with an initial structure.

The First Unseal

We have created an encrypted backend storage. The next step is the first unseal - you need the "threshold" number of keys. It's worth mentioning that different clients can send unseal keys to the same server. So if you need your team's cooperation, each member can use a client on their own laptop ... as long as they can reach the server's API.

./vault operator unseal pvoVBIL9i+mcvU3iGXaToEAuocpACuO++IVZ0nGqPqY=

When enough keys are entered, the server can reconstruct the Master Key, which stays in memory while the server is running. The client will also print the server's status:

  • Seal Type - shamir
  • Initialized - true
  • Sealed - false
  • Total Shares - 1
  • Threshold - 1
  • Version - 1.3.2
  • Cluster Name  - vault-cluster-35aa9a47
  • Cluster ID - a5e043be-f857-2f8a-bd92-c048acdcdda1
  • HA Enabled - false

The next step is to authenticate using the root token - it's basically an API key of sorts.

./vault login s.pST0K6ecmyK0eCXbACrtohjZ

You should see a "Success!" message with some important details:

  • token - value of the token you used;
  • token_accessor - a handle you can use to query properties of the token - quite useful for auditing existing tokens;
  • token_duration - how long until the token expires; for the root token the default validity is forever;
  • token_renewable - some tokens require renewals and will expire if you don't renew them; a long-running process can, e.g., renew its token every 5 minutes, and if it dies, the access automatically expires;
  • token_policies - ["root"] - the scope of the token;
  • ...

Alright, you should be set up and we can start playing with the PKI Secrets Engine. Just a quick recap. We have:

  1. started a server with a simple default configuration;
  2. initialized the data storage;
  3. unsealed the storage; and
  4. authenticated ourselves so we now have full access to the Vault server and its backend.

PKI Initialization

Each Secrets Engine has to be enabled ("mounted") before its first use; this creates an instance of the selected Secrets Engine. For the PKI engine, you need a new mount for each CA you want to run. If you need a root CA and 2 issuing CAs, you will need to mount the pki engine three times. We will start with a root CA

./vault secrets enable -path=rootca pki

and one issuing CA.

./vault secrets enable -path=ca pki

That's it - we have created two certification authorities! Now comes the tuning. Let's start with the validity of the root CA cert. In the Vault's language it's "max-lease-ttl". Let's try 10 years.

./vault secrets tune -max-lease-ttl=87600h rootca

And the issuing CA will have a cert valid for 1 year.

./vault secrets tune -max-lease-ttl=8760h ca
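Vault's TTL values take an "h" suffix for hours, so the two numbers above are just year counts converted to hours:

```python
# max-lease-ttl values are plain hour counts ("h" suffix):
root_ca_ttl_hours = 10 * 365 * 24   # 10 years for the root CA
issuing_ca_ttl_hours = 365 * 24     # 1 year for the issuing CA
assert root_ca_ttl_hours == 87600
assert issuing_ca_ttl_hours == 8760
```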

That's the basic setup. Let's generate the first certificate for the root CA. We set its name with the "common_name" parameter, and we can, e.g., specify the validity of the cert itself.

./vault write rootca/root/generate/internal \
    common_name="Root CA" \
    ttl=87600h

You will get a new certificate printed on the screen. The private key is stored in the backend.

Note: notice that the path in the command contains a keyword "root". This tells the Vault that it is a root CA and it will automatically create a self signed certificate. The next command is for "intermediate" CA. As a result, you will only get a CSR that will have to be signed by another CA.

./vault write ca/intermediate/generate/internal \
    common_name="Issuing CA"

Note: mind the space before "\".

We can now store the CSR in a file and get it signed. Create a file pki_int.csr, e.g., with vim or any other editor, and copy&paste the PEM string of the CSR into it.

We can now ask the rootca CA to sign the certificate request. We want the result as PEM data with the whole chain ("pem_bundle").

./vault write rootca/root/sign-intermediate csr=@pki_int.csr format=pem_bundle

You need to copy&paste the result into a file or redirect stdout when you call this "write" command.
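A pem_bundle is just a concatenation of PEM blocks. If you capture it programmatically, a few lines are enough to split it back into individual certificates (toy helper, not a Vault tool; the bundle below is fake placeholder data):

```python
# Split a PEM bundle (concatenated certificates) into separate PEM blocks.
def split_pem_bundle(bundle: str):
    certs, current = [], []
    for line in bundle.strip().splitlines():
        current.append(line)
        if line == "-----END CERTIFICATE-----":
            certs.append("\n".join(current))  # one complete certificate
            current = []
    return certs

# Fake two-certificate bundle standing in for real sign-intermediate output:
bundle = """-----BEGIN CERTIFICATE-----
AAAA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
BBBB
-----END CERTIFICATE-----"""

certs = split_pem_bundle(bundle)
assert len(certs) == 2  # issued certificate + the signing chain
```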

Hooray - we have a certificate for the issuing CA. Let's import the signed certificate into the issuing CA:

./vault write ca/intermediate/set-signed certificate=@signed_certificate.pem

You should get back a "Data written" message.

Policy/user and first certificate

The next step is to create a role that clients use to request certificates - let's say we have one agent requesting certificates for many clients in the example.com domain. Let's restrict the issuing CA to this domain. We can also set the certificate validity.

./vault write ca/roles/example-dot-com \
    allowed_domains="example.com" \
    allow_subdomains=true \
    max_ttl="72h"
This command created a role "example-dot-com". You can reference it from a policy and assign that policy to a new token; whoever shows this new token will only be able to issue certificates under the "example-dot-com" role.

Let's create a first end-user certificate.

./vault write ca/issue/example-dot-com \
    common_name="test.example.com"
The command will generate a private key and create a certificate for it. You get back:

  • ca_chain - [ root CA in PEM, CA in PEM ]
  • certificate - PEM data
  • expiration - timestamp
  • issuing_ca - PEM data
  • private_key - PEM / PKCS1
  • private_key_type - rsa
  • serial_number = string

You may need a bit more structured output for automation. I like JSON so I would just add "-format=json" to the command.
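With "-format=json" the fields above arrive inside a "data" object, which is easy to process. A sketch of capturing the one-time private key (the response here is a hypothetical placeholder with the shape listed above; a real one comes from the vault command):

```python
import json

# Hypothetical response, mimicking the fields of "vault write -format=json ca/issue/...":
response = json.loads("""
{
  "data": {
    "certificate": "-----BEGIN CERTIFICATE-----\\n...leaf...\\n-----END CERTIFICATE-----",
    "issuing_ca": "-----BEGIN CERTIFICATE-----\\n...ca...\\n-----END CERTIFICATE-----",
    "private_key": "-----BEGIN RSA PRIVATE KEY-----\\n...key...\\n-----END RSA PRIVATE KEY-----",
    "private_key_type": "rsa"
  }
}
""")

data = response["data"]
# The private key appears only in this one response - persist it immediately.
with open("tls.key", "w") as f:
    f.write(data["private_key"] + "\n")
with open("tls.crt", "w") as f:
    f.write(data["certificate"] + "\n")
```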

Note: the private key is printed out only once so you need to carefully capture the output so you can install it.

This tutorial loosely follows the official HashiCorp how-to mentioned above. Hopefully the text should help you avoid some of the difficult bits that took me a while to get right.


We really like Vault and we are working on integrating it with KeyChest and with our hardware root CA service. The goal is to offer very secure storage of root keys with convenient remote access to generate keys for intermediate CAs or even end-points. If you'd be interested in this integration, drop us a line.

While the CLI is convenient for first testing, I would certainly recommend switching to the RESTful API as soon as you get to grips with the PKI secrets engine.

Here's a query to get the configuration of our Issuing CA so you can see the flexibility you have:

curl --header "X-Vault-Token: s.pST0K6ecmyK0eCXbACrtohjZ" \
    http://127.0.0.1:8200/v1/ca/roles/example-dot-com

The response contains the role's full configuration:
{
  "request_id": "97dfbae4-0a68-5e88-df0c-3c6db990d70d",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "allow_any_name": false,
    "allow_bare_domains": false,
    "allow_glob_domains": false,
    "allow_ip_sans": true,
    "allow_localhost": true,
    "allow_subdomains": true,
    "allow_token_displayname": false,
    "allowed_domains": [""],
    "allowed_other_sans": null,
    "allowed_serial_numbers": [],
    "allowed_uri_sans": [],
    "basic_constraints_valid_for_non_ca": false,
    "client_flag": true,
    "code_signing_flag": false,
    "country": [],
    "email_protection_flag": false,
    "enforce_hostnames": true,
    "ext_key_usage": [],
    "ext_key_usage_oids": [],
    "generate_lease": false,
    "key_bits": 2048,
    "key_type": "rsa",
    "key_usage": ["DigitalSignature", "KeyAgreement", "KeyEncipherment"],
    "locality": [],
    "max_ttl": 259200,
    "no_store": false,
    "not_before_duration": 30,
    "organization": [],
    "ou": [],
    "policy_identifiers": [],
    "postal_code": [],
    "province": [],
    "require_cn": true,
    "server_flag": true,
    "street_address": [],
    "ttl": 0,
    "use_csr_common_name": true,
    "use_csr_sans": true
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}
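Note that the numeric fields are reported in seconds, not the "h" notation used when writing the role:

```python
# Role TTLs come back in seconds; 259200 is the 72-hour limit on the role:
max_ttl_seconds = 259200
assert max_ttl_seconds // 3600 == 72  # i.e., 3 days
# not_before_duration: 30 backdates new certificates by 30 seconds,
# which tolerates small clock skew between the CA and its clients.
```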


Tags: certificate, key management


Written by Dan