Minions are added and configured from salt-prime with the following Minion ID
schema: `HST__POD__LOC`

- `HST` is the hostname or role. It indicates what services are running on the host or the role that it serves.
- `POD` is the pod or group. It indicates the logical grouping of the host.
- `LOC` is the location. It indicates where the host is.
Examples:

- `bastion__core__us-east-2`
- `salt-prime__core__us-east-2`
- `chapters__prod__us-east-2`
- `chapters__stage__us-east-2`
This host classification allows multiple levels of specificity to minimize the configuration required between similar hosts.
Like Apache2, SaltStack pillar data uses a last-declared-wins model. This repository layers pillar data in the following order (from least-specific to most-specific):
1. `LOC` (location)
2. `POD` (pod/group)
3. `HST` (host/role)
4. `POD__LOC` (pod/group and location)
5. `HST__POD` (host/role and pod/group)
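As an illustration of the last-declared-wins layering (file names and values below are hypothetical, not taken from this repository), a value set at the location level is overridden by a host/role-level file because the host/role layer is declared later:

```yaml
# pillars/us-east-2.sls (hypothetical location-level pillar)
ntp_server: ntp.us-east-2.example.com
```

```yaml
# pillars/chapters.sls (hypothetical host/role-level pillar)
# Declared after the location layer in top.sls, so this value wins
# for chapters hosts.
ntp_server: ntp.chapters.example.com
```

A `chapters__prod__us-east-2` minion matching both layers would end up with `ntp_server: ntp.chapters.example.com`.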
This method of setting least-specific to most-specific pillar data was inspired by Puppet Hiera.
The `HST__POD__LOC` schema is implemented using Jinja2 in the
`pillars/top.sls` file.
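A minimal sketch of how such a top file can apply the least-specific to most-specific ordering (this illustrates the technique only; it is not the repository's actual `pillars/top.sls`):

```yaml
# pillars/top.sls (illustrative sketch)
# Split the Minion ID into its three classification parts.
{% set hst, pod, loc = grains['id'].split('__') %}
base:
  '*':
    - {{ loc }}                 # least specific: location
    - {{ pod }}                 # pod/group
    - {{ hst }}                 # host/role
    - {{ pod }}__{{ loc }}      # pod/group and location
    - {{ hst }}__{{ pod }}      # most specific, declared last, wins
```

A real implementation would also need to guard against pillar files that do not exist for every combination.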
Implementation is also supported by three configuration values:

- Master: `pillarenv_from_saltenv: True` (see Configuring the Salt Master)
- Minion: `pillarenv_from_saltenv: True` (see Configuring the Salt Minion)
- Minion: `top_file_merging_strategy: same` (see Configuring the Salt Minion)
The only grain which can be safely used is `grains['id']`, which contains the
Minion ID. (FAQ Q.21)
It is important to rely only on the Minion ID because all other grains can be
manipulated by the client. This means a compromised client could change its
grains to collect secrets if a dedicated grain (e.g. `role`) were used for host
classification.
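In template terms, deriving the classification from the Minion ID is safe, while reading a minion-settable grain is not (a sketch assuming the `HST__POD__LOC` schema):

```yaml
# Safe: the master assigns and verifies the Minion ID, so it cannot
# be spoofed by the minion.
{% set hst = grains['id'].split('__')[0] %}

# Unsafe: a compromised minion could set this grain to any value to
# receive another role's pillar data.
# {% set role = grains['role'] %}
```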
See Orchestration.md
for how these classification parts
are used with orchestration.
Node groups provide similar functionality. However, they are far less flexible and have a number of issues:

- They are configured at server run time (`salt-master` must be restarted to apply changes)
- They do not allow a least-specific to most-specific configuration path
- open "nodegroup" Issues · saltstack/salt
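For comparison, node groups would be defined in the master configuration, for example with compound matchers against the Minion ID (a hypothetical sketch; changing these entries requires restarting `salt-master`):

```yaml
# /etc/salt/master (hypothetical)
nodegroups:
  prod: 'G@id:*__prod__*'
  us-east-2: 'G@id:*__us-east-2'
```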