
DISCLAIMER: This is no longer supported.

This repository is no longer supported and will eventually be deprecated. Please use the latest versions of our products going forward, or fork this repository to continue using and developing it for your personal or business needs.


Nomad GCP Module

This repo contains a Module for deploying a Nomad cluster on Google Cloud using Terraform. Nomad is a distributed, highly available, data center-aware scheduler. A Nomad cluster typically includes a small number of server nodes, which are responsible for being part of the consensus protocol, and a larger number of client nodes, which are used for running jobs:

[Figure: Nomad architecture]

This Module includes the following submodules, each of which is used in the deployment walkthroughs below:

  • install-nomad: scripts for installing Nomad, meant to be used in a Packer template to create a Google Image.
  • run-nomad: a script for configuring and running Nomad on a node during boot, in either --server or --client mode.
  • nomad-cluster: a Terraform module for deploying a cluster of Nomad nodes (servers or clients).

What's a Terraform Module?

A Terraform Module refers to a self-contained package of Terraform configurations that are managed as a group. This repo is a Terraform Module and contains many "submodules" which can be composed together to create useful infrastructure patterns.
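For example, a root configuration composes a submodule with an ordinary module block. The sketch below is illustrative only: the input variable names and values are assumptions, so check modules/nomad-cluster for its actual interface.

```hcl
# Minimal sketch of composing a submodule from this repo's modules folder.
# NOTE: variable names and values are illustrative assumptions; see
# modules/nomad-cluster for the real inputs.
module "nomad_servers" {
  source = "./modules/nomad-cluster"

  cluster_name = "nomad-servers"
  cluster_size = 3

  # A custom Google Image with Nomad installed, built with Packer as
  # described later in this README.
  source_image = "nomad-example-image"
}
```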

Who created this Module?

These modules were created by Gruntwork, in partnership with HashiCorp, in 2017 and maintained through 2021. They were deprecated in 2022 in favor of newer alternatives (see the top of the README for details).

How do you use this Module?

This Module has the following folder structure:

  • root: This folder shows an example of Terraform code to deploy a Nomad cluster co-located with a Consul cluster in Google Cloud.
  • modules: This folder contains the reusable code for this Module, broken down into one or more submodules.
  • examples: This folder contains examples of how to use the submodules.
  • test: Automated tests for the modules and examples.

To run a Nomad cluster, you need to deploy a small number of server nodes (typically 3), which are responsible for being part of the consensus protocol, and a larger number of client nodes, which are used for running jobs. You must also have a Consul cluster deployed (see the Consul GCP Module) in one of the following configurations:

  1. Deploy Nomad and Consul in the same cluster
  2. Deploy Nomad and Consul in separate clusters

Deploy Nomad and Consul in the same cluster

  1. Use the install-consul module from the Consul GCP Module and the install-nomad module from this Module in a Packer template to create a Google Image with both Consul and Nomad installed.

    Ideally, we would publish a "public" image you could use for trial purposes, but Google Cloud does not yet support custom public Images, so for now you must build your own Google Image to use this module.

  2. Deploy a small number of server nodes (typically, 3) using the consul-cluster module. Execute the run-consul script and the run-nomad script on each node during boot, setting the --server flag in both scripts.

  3. Deploy as many client nodes as you need using the nomad-cluster module. Execute the run-consul script and the run-nomad script on each node during boot, setting the --client flag in both scripts (a sketch of this wiring follows these steps).
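For the client nodes in step 3, the wiring might look roughly like the sketch below. Only the --client flags come directly from the steps above; the module inputs and install paths are assumptions, and the real run-consul and run-nomad scripts take additional arguments (such as a cluster tag for Consul discovery), so consult the submodule documentation.

```hcl
# Sketch of step 3: Nomad/Consul client nodes built from the shared image.
# Variable names, install paths, and any flags other than --client are
# assumptions for illustration; see modules/nomad-cluster and the run-*
# script docs for the real interface.
module "nomad_clients" {
  source = "./modules/nomad-cluster"

  cluster_name = "nomad-clients"
  cluster_size = 6
  source_image = "nomad-consul-image"  # image produced by the Packer template in step 1

  # Run both agents in client mode when the node boots (step 3 above).
  startup_script = <<-EOF
    #!/bin/bash
    /opt/consul/bin/run-consul --client
    /opt/nomad/bin/run-nomad --client
  EOF
}
```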

Check out the nomad-consul-colocated-cluster example for working sample code.

Deploy Nomad and Consul in separate clusters

  1. Deploy a standalone Consul cluster by following the instructions in the Consul GCP Module.

  2. Use the scripts from the install-nomad module in a Packer template to create a Google Image with Nomad installed.

  3. Deploy a small number of server nodes (typically 3) using the nomad-cluster module. Execute the run-nomad script on each node during boot, setting the --server flag. You will need to configure each node with the connection details for your standalone Consul cluster (one way to do this is sketched after these steps).

  4. Deploy as many client nodes as you need using the nomad-cluster module. Execute the run-nomad script on each node during boot, setting the --client flag.
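Steps 3 and 4 require each Nomad node to know how to reach the standalone Consul cluster. One common way to express this is Nomad's consul configuration stanza; the fragment below is only a sketch, and the address is a placeholder for your Consul cluster's actual endpoint (often a local Consul agent joined to that cluster, or a load balancer in front of it).

```hcl
# Fragment of a Nomad agent configuration that points the agent at a
# standalone Consul cluster. The address is a placeholder.
consul {
  address = "consul.example.internal:8500"
}
```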

Check out the nomad-consul-separate-cluster example for working sample code.

How is this Module versioned?

This Module follows the principles of Semantic Versioning. You can find each new release, along with the changelog, in the Releases Page.

During initial development, the major version will be 0 (e.g., 0.x.y), which indicates the code does not yet have a stable API. Once we hit 1.0.0, we will make every effort to maintain a backwards compatible API and use the MAJOR, MINOR, and PATCH versions on each release to indicate any incompatibilities.
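In practice, this means you can pin your own Terraform code to a specific release of this Module rather than tracking the default branch. The repository URL and version tag below are placeholders; substitute the path and release you actually use.

```hcl
# Pin a submodule to a specific release tag. The repository URL and ref
# below are placeholders, not real values.
module "nomad_cluster" {
  source = "github.com/<your-org>/<nomad-gcp-module>//modules/nomad-cluster?ref=v0.0.1"

  # ... submodule inputs go here ...
}
```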

License

This code is released under the Apache 2.0 License. Please see LICENSE and NOTICE for more details.

Copyright © 2017 Gruntwork, Inc.