When building services that need to run reliably, you face three fundamental problems: managing secrets securely, deploying workloads consistently, and enabling services to find each other. HashiCorp’s stack addresses these with Vault, Nomad, and Consul. This guide shows how to set up and integrate all three based on what I learned while building infrastructure for distributed services.

Why This Stack?

Before diving into configurations, let me explain why I chose this combination. I was looking for tools that could handle production workloads without the complexity overhead of larger platforms.

Vault solves secrets management. Instead of scattering API keys and credentials across configuration files, Vault centralizes them with access control and audit logging. You can version secrets, rotate them automatically, and revoke access when needed.

Nomad handles workload orchestration. It’s simpler than Kubernetes but still provides scheduling, automatic restarts, and resource allocation. If a container fails, Nomad restarts it. If a node goes down, Nomad reschedules the workloads elsewhere.

Consul enables service discovery. When you deploy multiple instances of a service, Consul tracks where they are and provides DNS-based lookups. Your applications query api.service.consul instead of hardcoded IP addresses.

Together, they form a complete platform: Vault manages credentials, Nomad runs the workloads, and Consul helps them find each other.

The Three Core Components

Vault: Secrets Management

Vault stores sensitive data like database passwords, API tokens, and certificates. Instead of environment variables or config files, applications authenticate to Vault and request secrets at runtime.

The key concept is policies. You define who can access which secrets:

path "secret/data/dev/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

This policy allows full access to anything under secret/data/dev/. The path structure matters: Vault's KV version 2 engine uses secret/data/ as the API path, even though CLI commands use secret/.
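
In practice, the kv CLI helper inserts data/ into the path for you, while a raw API call spells it out. A quick sketch with a made-up secret/dev/example path (the curl call assumes VAULT_TOKEN is exported):

# CLI shorthand: the kv helper adds data/ to the path automatically
vault kv put secret/dev/example foo=bar
vault kv get secret/dev/example

# The same secret through the raw HTTP API uses the full data/ path
curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
    http://localhost:8200/v1/secret/data/dev/example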

Vault also provides versioning. Every time you update a secret, it creates a new version while keeping the old ones:

vault kv put secret/dev/api-key key=v1
vault kv put secret/dev/api-key key=v2
vault kv get -version=1 secret/dev/api-key  # Retrieves v1
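
Older versions can also be restored. As a quick example, kv rollback writes the data from the specified version back as a new, current version:

# Restore version 1 by writing its data back as a new version
vault kv rollback -version=1 secret/dev/api-key
vault kv get secret/dev/api-key  # key=v1 again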

Nomad: Workload Orchestration

Nomad schedules and runs your applications. You define jobs in HCL that specify resource requirements, restart policies, and health checks. Nomad handles placement across your cluster.

Key concepts in a job definition:

  • Resources: CPU and memory limits ensure fair allocation across the cluster
  • Network: Port mapping; Nomad assigns dynamic host ports that map to container ports
  • Restart policies: Define how Nomad handles failures
  • Service blocks: Enable automatic registration with Consul for service discovery

We’ll see a complete job definition when we deploy a real workload below.

Consul: Service Discovery

Consul maintains a real-time registry of services. When Nomad starts a task with a service block, it automatically registers with Consul. Other services can then discover it via DNS or HTTP API.

DNS queries follow this pattern:

# Basic service lookup
dig @localhost -p 8600 nginx-test.service.consul

# With specific datacenter
dig @localhost -p 8600 nginx-test.service.dc1.consul

# Tagged services
dig @localhost -p 8600 web.nginx-test.service.consul

Consul also performs health checks. If a service fails its health check, Consul marks it unhealthy and removes it from DNS results.
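
The same registry is available over the HTTP API, which is handy for scripting. For example, once the nginx-test service from the deployment later in this guide is registered, the passing filter returns only instances that are currently healthy:

# List only healthy instances of a service
curl -s "http://localhost:8500/v1/health/service/nginx-test?passing=true" | jq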

How They Work Together

The integration happens at multiple levels:

  1. Nomad ↔ Consul: Nomad jobs declare services that automatically register with Consul. Health checks in the job definition become Consul health checks.

  2. Nomad ↔ Vault: Nomad can authenticate to Vault and inject secrets into tasks as environment variables or files.

  3. Applications ↔ All Three: Your application uses Consul for service discovery, retrieves credentials from Vault, and runs as a Nomad task.

Here’s a concrete example. An API service needs to:

  • Store its database password in Vault
  • Run as a Nomad job with 3 replicas
  • Register in Consul so other services can find it

The workflow:

  1. Store database credentials in Vault
  2. Define a Nomad job that retrieves those credentials
  3. Include a service block so Consul can track instances
  4. Other services discover the API via api.service.consul

Hands-On Setup

Let me walk through setting up all three components together.

Starting Vault

For development, Vault’s dev mode keeps everything in memory:

vault server -dev -dev-listen-address="0.0.0.0:8200"

This outputs a root token and unseal key. In production, you’d initialize Vault properly and distribute unseal keys, but for learning, dev mode works fine.

Configure authentication and policies:

export VAULT_ADDR="http://localhost:8200"
vault login token=$ROOT_TOKEN

# Enable userpass authentication
vault auth enable userpass

# Create a policy for developers
vault policy write dev-policy - <<EOF
path "secret/data/dev/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
EOF

# Create a user
vault write auth/userpass/users/developer \
    password="devpass" \
    policies="dev-policy"

Now non-root users can authenticate and access secrets within their permissions.
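
As a quick check, log in as the new user and store a secret under the allowed path. The values below are placeholders, but the Go example later in this guide reads this same secret/dev/database path:

# Authenticate as the developer user
vault login -method=userpass username=developer password=devpass

# Write and read a secret inside the permitted prefix
vault kv put secret/dev/database username=apiuser password=changeme
vault kv get secret/dev/database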

Nomad-Consul Integration

The integration requires specific configuration. Here’s the Consul config:

datacenter = "dc1"
data_dir   = "/tmp/consul/data"
server     = true
bootstrap_expect = 1
bind_addr = "0.0.0.0"

ui_config {
  enabled = true
}

client_addr = "0.0.0.0"

ports {
  dns      = 8600
  http     = 8500
  grpc     = 8502
}

connect {
  enabled = true
}

The key points:

  • bind_addr is set to 0.0.0.0 rather than localhost so the agent is reachable for cross-service communication
  • dns = 8600 enables service discovery queries
  • connect.enabled allows service mesh features

Nomad configuration includes Consul integration:

data_dir = "/tmp/nomad/data"
bind_addr = "0.0.0.0"

server {
  enabled = true
  bootstrap_expect = 1
}

client {
  enabled = true

  host_volume "docker_sock" {
    path      = "/var/run/docker.sock"
    read_only = false
  }
}

consul {
  address = "localhost:8500"
  auto_advertise = true
  server_auto_join = true
  client_auto_join = true
}

The consul block enables automatic service registration. When you deploy a job with a service stanza, Nomad registers it with Consul automatically.

Starting Nomad and Consul

Start both services in the background using their configuration files:

# Start Consul agent
consul agent -config-file=consul.hcl > /tmp/consul.log 2>&1 &

# Start Nomad agent
nomad agent -config=nomad.hcl > /tmp/nomad.log 2>&1 &

Verify they’re running by checking their status endpoints:

curl -s http://localhost:8500/v1/status/leader  # Consul
curl -s http://localhost:4646/v1/status/leader  # Nomad

Access the web UIs at http://localhost:8500 (Consul) and http://localhost:4646 (Nomad).
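
A couple of extra checks confirm the wiring. With auto_advertise enabled, Nomad's own services should also show up in Consul's catalog:

# Cluster membership
consul members
nomad node status

# Nomad registers itself in Consul when the integration is working
curl -s http://localhost:8500/v1/catalog/services | jq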

Deploying a Real Workload

Let’s deploy an nginx service with full integration:

job "nginx-test" {
  datacenters = ["dc1"]
  type = "service"

  group "web" {
    count = 1

    network {
      port "http" {
        to = 80
      }
    }

    service {
      name     = "nginx-test"
      tags     = ["web", "frontend"]
      port     = "http"
      provider = "consul"

      check {
        type     = "http"
        path     = "/"
        interval = "10s"
        timeout  = "3s"
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }

    restart {
      attempts = 2
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }
  }
}

Key components:

  • Service block: Registers with Consul automatically
  • Health check: HTTP check on path / every 10 seconds
  • Restart policy: Allows up to 2 restart attempts within a 30-minute window, then marks the allocation as failed
  • Resources: Limits CPU and memory usage

Deploy it:

export NOMAD_ADDR=http://localhost:4646
nomad job run nginx-job.hcl
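
For later changes, nomad job plan previews what the scheduler would do without applying anything:

# Dry run: show placements and changes without applying them
nomad job plan nginx-job.hcl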

Verify the deployment:

# Check Nomad job status
nomad job status nginx-test

# DNS lookup
dig @localhost -p 8600 nginx-test.service.consul

# Health check
curl -s http://localhost:8500/v1/health/service/nginx-test | jq

The service is now discoverable via DNS at nginx-test.service.consul.
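
Because Nomad assigned a dynamic host port, the A record alone isn't enough to connect. A SRV query returns the port together with the address:

# SRV records include the dynamically assigned port
dig @localhost -p 8600 nginx-test.service.consul SRV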

Integrating with Vault

Applications can retrieve secrets from Vault programmatically. Here’s a minimal example using the Vault Go client:

import "github.com/hashicorp/vault/api"

// Create Vault client
config := api.DefaultConfig()
config.Address = "http://localhost:8200"
client, _ := api.NewClient(config)
client.SetToken(os.Getenv("VAULT_TOKEN"))

// Read secret
secret, _ := client.Logical().Read("secret/data/dev/database")
data := secret.Data["data"].(map[string]interface{})

// Use credentials
dbUser := data["username"].(string)
dbPass := data["password"].(string)

The pattern is straightforward: authenticate with a token, read from the secret path, and extract the data. This centralizes secret management, since rotating credentials requires only updating Vault, not redeploying code.

Production and Next Steps

While dev mode works for learning, production requires additional hardening. Here are the key considerations and next steps:

  • Initialize Vault with proper seal/unseal using Shamir’s Secret Sharing
  • Use AppRole or JWT authentication instead of static tokens (see the sketch after this list)
  • Enable TLS for all communication across all three tools
  • Deploy multi-node clusters (3-5 servers) for fault tolerance
  • Enable ACLs with default-deny policies
  • Set resource quotas to prevent cluster exhaustion
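
As a sketch of the AppRole recommendation, each service gets a role ID plus short-lived secret IDs instead of a long-lived token. The role name and TTL below are illustrative:

# Enable AppRole and create a role bound to the dev policy
vault auth enable approle
vault write auth/approle/role/api-service \
    token_policies="dev-policy" \
    token_ttl=1h

# The service authenticates with its role ID and a generated secret ID
vault read auth/approle/role/api-service/role-id
vault write -f auth/approle/role/api-service/secret-id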

Conclusion

HashiCorp’s stack provides a complete foundation for running services in production. Vault manages secrets securely, Nomad orchestrates workloads reliably, and Consul enables service discovery automatically.

The integration between these tools reduces operational complexity. Services register themselves, credentials are injected securely, and DNS-based discovery works out of the box.

This setup handles the fundamental infrastructure problems so you can focus on building your applications instead of managing configuration files and manual deployments.