Set up a Nomad cluster on GCP
This tutorial will guide you through deploying a Nomad cluster with access control lists (ACLs) enabled on GCP. Consider checking out the cluster setup overview first as it covers the contents of the code repository used in this tutorial.
Prerequisites
For this tutorial, you will need:
- Packer 1.7.7 or later installed locally
- Terraform 1.2.0 or later installed locally
- Nomad 1.3.3 or later installed locally
- A GCP account and the gcloud CLI tool installed locally
Note
This tutorial creates GCP resources that may not qualify as part of the GCP free tier. Be sure to follow the Cleanup process at the end of this tutorial so you don't incur any additional unnecessary charges.
Clone the code repository
The cluster setup code repository contains configuration files for creating a Nomad cluster on GCP. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.
Clone the code repository.
$ git clone https://github.com/hashicorp/learn-nomad-cluster-setup
Navigate to the cloned repository folder.
$ cd learn-nomad-cluster-setup
Check out the v0.3 tag of the repository as a local branch named nomad-cluster.
$ git checkout v0.3 -b nomad-cluster
Navigate to the gcp folder.
$ cd gcp
Configure gcloud
Log in to GCP with gcloud and follow the prompts to complete the login process.
$ gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?response_type=code[...]
You are now logged in as [YOUR_GCP_ACCOUNT].
Your current project is [YOUR_CURRENT_PROJECT]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
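If you are working on a machine without a browser, gcloud can print the authorization URL instead of opening one. This is an optional alternative, not a step the tutorial requires:
$ gcloud auth login --no-launch-browser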
Set the project, region, and zone configurations in gcloud.
Tip
If you already have a project in your GCP account, these configurations will be set for you as part of the login step. If not, set them with the gcloud config set command after creating a project.
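If you need to create a project first, the gcloud projects commands cover this; a minimal sketch, where the project ID is a placeholder of your choosing:
$ gcloud projects create <GCP_PROJECT_ID>
$ gcloud projects list
Note that a newly created project generally also needs billing linked and the Compute Engine API enabled before Packer and Terraform can create resources in it.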
First, set project to the project ID of your preferred project.
$ gcloud config set project <GCP_PROJECT_ID>
Then, set region to the associated region.
$ gcloud config set compute/region <GCP_REGION>
Finally, set zone to the associated zone. Note that the zone must be in the region set above.
$ gcloud config set compute/zone <GCP_ZONE>
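If you are unsure which zones belong to your chosen region, you can list them with a filter; a sketch assuming the region you set in the previous step:
$ gcloud compute zones list --filter="region:<GCP_REGION>"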
Create the Nomad cluster
There are two main steps to creating the cluster: building a Google Compute Engine image with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require some configuration before they run; these configuration variables are defined in the variables.hcl.example file.
Update the variables file for Packer
Rename variables.hcl.example to variables.hcl and open it in your text editor.
$ mv variables.hcl.example variables.hcl
Warning
The .gitignore file in the example repo is set to ignore variables.hcl, so if you push the code to your own source code repository, your configuration values will not be included. Do not commit sensitive data like credentials to your source code repository.
Update the project, region, and zone variables with the values from gcloud by first listing the configurations and then copying the values for project, region, and zone into variables.hcl. In this example, those would be hc-3ff63253e6a54756b207e4d4727, us-east1, and us-east1-b.
$ gcloud config list
[compute]
region = us-east1
zone = us-east1-b
[core]
account = [GCP_ACCOUNT]
disable_usage_reporting = True
project = hc-3ff63253e6a54756b207e4d4727
Update the retry_join variable with the project ID by replacing the GCP_PROJECT_ID placeholder in the value with the same project ID as the project variable above. Replace the GCP_ZONE placeholder with the same zone as the zone variable. Save the file.
gcp/variables.hcl
# Packer variables (all are required)
project = "hc-3ff63253e6a54756b207e4d4727"
region = "us-east1"
zone = "us-east1-b"
# Terraform variables (all are required)
retry_join = "project_name=hc-3ff63253e6a54756b207e4d4727 zone_pattern=us-east1-b provider=gce tag_value=auto-join"
# ...
Build the GCE image
Initialize Packer to download the required plugins.
Tip
packer init returns no output when it finishes successfully.
$ packer init image.pkr.hcl
Then, build the image and provide the variables file with the -var-file flag.
Tip
Packer will print a Warning: Undefined variable message notifying you that some variables set in variables.hcl are not used. This is only a warning, and the build will still complete successfully.
$ packer build -var-file=variables.hcl image.pkr.hcl
googlecompute.hashistack: output will be in this color.
==> googlecompute.hashistack: Checking image does not exist...
==> googlecompute.hashistack: Creating temporary RSA SSH key for instance...
==> googlecompute.hashistack: Using image: ubuntu-minimal-1804-bionic-v20221026
==> googlecompute.hashistack: Creating instance...
googlecompute.hashistack: Loading zone: us-east1-b
# ...
==> googlecompute.hashistack: Creating image...
==> googlecompute.hashistack: Deleting disk...
googlecompute.hashistack: Disk has been deleted!
Build 'googlecompute.hashistack' finished after 4 minutes 31 seconds.
==> Wait completed after 4 minutes 31 seconds
==> Builds finished. The artifacts of successful builds are:
--> googlecompute.hashistack: A disk image was created: hashistack-20221121163551
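If you want to confirm the image exists before moving on, you can list images whose names match the hashistack prefix shown in the build output. This is an optional check, not part of the tutorial's required steps:
$ gcloud compute images list --filter="name~hashistack"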
Update the variables file for Terraform
Open variables.hcl in your text editor again.
Update machine_image with the value output from the Packer build. In this example, the value would be hashistack-20221121163551.
Then, open your terminal and use the built-in uuid() function of the Terraform console to generate two new UUIDs for the token's credentials.
$ terraform console
> uuid()
> "a90a52ae-bcb7-e38a-5fe9-6ac084b37078"
> uuid()
> "d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"
> exit
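If you prefer not to open a Terraform console, most systems also ship a uuidgen utility that produces equivalent values. A sketch, with the tr pipe lowercasing the output on systems that print uppercase UUIDs:
$ uuidgen | tr '[:upper:]' '[:lower:]'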
Copy these UUIDs and update the nomad_consul_token_id and nomad_consul_token_secret variables with the UUID values. Save the file. In this example, the value for nomad_consul_token_id would be a90a52ae-bcb7-e38a-5fe9-6ac084b37078 and the value for nomad_consul_token_secret would be d14d6a73-a0f1-508d-6d64-6b0f79e5cb44.
gcp/variables.hcl
# Packer variables (all are required)
project = "hc-3ff63253e6a54756b207e4d4727"
region = "us-east1"
zone = "us-east1-b"
# Terraform variables (all are required)
retry_join = "project_name=hc-3ff63253e6a54756b207e4d4727 zone_pattern=us-east1-b provider=gce tag_value=auto-join"
machine_image = "hashistack-20221121163551"
nomad_consul_token_id = "a90a52ae-bcb7-e38a-5fe9-6ac084b37078"
nomad_consul_token_secret = "d14d6a73-a0f1-508d-6d64-6b0f79e5cb44"
# ...
The remaining variables in variables.hcl are optional.
- allowlist_ip is a CIDR range specifying which IP addresses are allowed to access the Consul and Nomad UIs on ports 8500 and 4646 as well as SSH on port 22. The default value of 0.0.0.0/0 will allow traffic from everywhere.
Note
We recommend that you update allowlist_ip to your machine's IP address or a range of trusted IPs (see the example after this list).
- name is a prefix for naming the GCP resources.
- server_instance_type and client_instance_type are the virtual machine instance types for the cluster server and client nodes, respectively.
- server_count and client_count are the number of nodes to create for the servers and clients, respectively.
Deploy the Nomad cluster
Initialize Terraform to download required plugins and set up the workspace.
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v4.43.1...
- Installed hashicorp/google v4.43.1 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
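Optionally, you can preview the resources Terraform will create before applying, using the same variables file:
$ terraform plan -var-file=variables.hcl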
Provision the resources and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. The provisioning takes several minutes. Once complete, the Consul and Nomad web interfaces will become available.
$ terraform apply -var-file=variables.hcl
# ...
Plan: 11 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
# ...
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Outputs:
IP_Addresses = <<EOT
Client public IPs: 52.91.50.99, 18.212.78.29, 3.93.189.88
Server public IPs: 107.21.138.240, 54.224.82.187, 3.87.112.200
The Consul UI can be accessed at http://107.21.138.240:8500/ui
with the bootstrap token: dbd4d67b-4629-975c-e9a8-ff1a38ed1520
EOT
consul_bootstrap_token_secret = "dbd4d67b-4629-975c-e9a8-ff1a38ed1520"
lb_address_consul_nomad = "http://107.21.138.240"
Verify the services are in a healthy state. Navigate to the Consul UI in your web browser with the URL in the Terraform output.
Click on the Log in button and use the bootstrap token secret consul_bootstrap_token_secret from the Terraform output to log in.
Click on the Nodes page from the sidebar navigation. There are six healthy nodes, including three Consul servers and three Consul clients created with Terraform.
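If you also have the Consul CLI installed locally (it is not one of this tutorial's prerequisites), you can run the same check from your terminal using the Terraform outputs; a sketch:
$ export CONSUL_HTTP_ADDR=$(terraform output -raw lb_address_consul_nomad):8500
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_bootstrap_token_secret)
$ consul members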
Set up access to Nomad
Run the post-setup.sh script.
Warning
If the nomad.token file already exists from a previous run, the script won't work until the token file has been deleted. Delete the file manually and re-run the script, or use rm nomad.token && ./post-setup.sh.
Note
It may take some time for the setup scripts to complete and for the Nomad user token to become available in the Consul KV store. If the post-setup.sh script doesn't work the first time, wait a couple of minutes and try again.
$ ./post-setup.sh
The Nomad user token has been saved locally to nomad.token and deleted from the Consul KV store.
Set the following environment variables to access your Nomad cluster with the user token created during setup:
export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646
export NOMAD_TOKEN=$(cat nomad.token)
The Nomad UI can be accessed at http://107.21.138.240:4646/ui
with the bootstrap token: 22444f72-c222-bd26-6c2c-584fb9e5b698
Apply the export commands from the output.
$ export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646 && \
export NOMAD_TOKEN=$(cat nomad.token)
Finally, verify connectivity to the cluster with nomad node status.
$ nomad node status
ID Node Pool DC Name Class Drain Eligibility Status
06320436 default dc1 ip-172-31-18-200 <none> false eligible ready
6f5076b1 default dc1 ip-172-31-16-246 <none> false eligible ready
5fc1e22c default dc1 ip-172-31-17-43 <none> false eligible ready
Navigate to the Nomad UI in your web browser with the URL in the post-setup.sh script output. Click on Sign In in the top right corner and log in with the bootstrap token saved in the NOMAD_TOKEN environment variable. Set the Secret ID to the token's value and click Sign in with secret. Click on the Clients page from the sidebar navigation.
Cleanup
Use terraform destroy to remove the provisioned infrastructure. Respond yes to the prompt to confirm removal.
$ terraform destroy -var-file=variables.hcl
# ...
google_compute_instance.server[1]: Destruction complete after 51s
google_compute_instance.client[2]: Destruction complete after 51s
google_compute_instance.server[2]: Destruction complete after 51s
google_compute_instance.client[1]: Destruction complete after 51s
google_compute_instance.server[0]: Destruction complete after 51s
google_compute_instance.client[0]: Destruction complete after 51s
google_compute_network.hashistack: Destruction complete after 52s
Destroy complete! Resources: 11 destroyed.
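The GCE image built by Packer is not managed by Terraform, so terraform destroy does not remove it. If you want to delete it as well, you can do so with gcloud, substituting the image name from your own build output (the name below is the example from this tutorial):
$ gcloud compute images delete hashistack-20221121163551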
Next steps
In this tutorial you created a Nomad cluster on GCP with Consul and ACLs enabled. From here, you may want to:
- Run a job with a Nomad spec file or with Nomad Pack
- Test out native service discovery in Nomad
For more information, check out the following resources.
- Learn more about managing your Nomad cluster
- Read more about the ACL stanza and using ACLs