Description
We use Foreman as our bare-metal provisioning system. Foreman installs Puppet during the server kickstart, already configured to connect back to the Puppet Master. The Puppet Master CA has a policy to auto-sign certificate requests from machines with a valid *.tier2 DNS name. The reasoning is that Foreman controls the local DNS, so if a name is there, it is because we added it through Foreman. Since this is a local network, we don't need to be more secure than that.
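A minimal sketch of what that auto-sign policy can look like on a Puppet 3-era master; the file paths and the bare "*.tier2" glob are assumptions based on the description above, not copied from the live configuration:

  # /etc/puppet/puppet.conf on the Puppet Master / CA (sketch)
  [master]
      # point the CA at a whitelist of certname globs to sign automatically
      autosign = /etc/puppet/autosign.conf

  # /etc/puppet/autosign.conf -- one glob per line; any CSR whose certname
  # matches is signed without manual approval
  *.tier2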
Tier-2
At the Tier-2, the server that manages Foreman and all Puppet components is t2-headnode-new. It runs a Foreman Smart-Proxy, a Foreman Server, a Puppet Master served through Apache with Passenger, and a Puppet CA, all deployed following the respective default installation manuals on each product's website.
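For reference, an all-in-one deployment of this kind roughly corresponds to a default foreman-installer run; the flags below are a sketch of the usual options, not the exact command that was used on t2-headnode-new:

  # sketch: Foreman server, smart-proxy, Puppet master and CA on one host
  foreman-installer \
    --enable-foreman \
    --enable-foreman-proxy \
    --enable-puppet \
    --foreman-proxy-tftp=true \
    --foreman-proxy-dhcp=true \
    --foreman-proxy-dns=true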
Tier-3
The T3 setup is more minimal, mostly because we can reuse the same Foreman Server as the Tier-2. As a consequence, only a Smart-Proxy with its associated services (DNS, DHCP, TFTP, etc.) was deployed, on "nas-1-1".
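A proxy-only install of that kind looks roughly like the sketch below; the Foreman URL is a placeholder and the exact flags used on nas-1-1 may have differed:

  # sketch: smart-proxy only, registered against the Tier-2 Foreman server
  foreman-installer \
    --no-enable-foreman \
    --no-enable-puppet \
    --enable-foreman-proxy \
    --foreman-proxy-tftp=true \
    --foreman-proxy-dhcp=true \
    --foreman-proxy-dns=true \
    --foreman-proxy-foreman-base-url=https://t2-headnode-new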
We're using Chef for the T3 and the code is here. I don't think it was ever tested outside of VMs, though. We have a Chef server on t3-headnode-new.
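For a workstation talking to that Chef server, the client configuration is roughly the standard knife.rb below; the user name, key paths and server URL details are placeholders, not our actual values:

  # ~/.chef/knife.rb (sketch; names and paths are placeholders)
  log_level               :info
  node_name               "someuser"
  client_key              "#{ENV['HOME']}/.chef/someuser.pem"
  validation_client_name  "chef-validator"
  validation_key          "#{ENV['HOME']}/.chef/chef-validator.pem"
  chef_server_url         "https://t3-headnode-new"
  cookbook_path           ["#{ENV['HOME']}/chef-repo/cookbooks"]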
Specifics
- The best host group for T2 installs in the private network is "tier-2"
- The best partitioning schema for non-critical machines (nodes) is "dynamic partitioning rocks migration". Its features are: 1 - it sizes /wntmp to 20 GB * NCores. 2 - if /data1 is present on /dev/sda, every other partition is destroyed and re-created, but /data1 is left alone. There's an edge case when we re-use a disk from a datanode that still carries a /data25 label; the cure is to go to TTY2 during the reinstall, delete the only partition on /dev/sda and reboot, after which the normal Anaconda partitioning is picked up. A sketch of the %pre logic is shown below.
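A sketch of how that dynamic sizing and the /data1 preservation could be expressed in a kickstart %pre script; the other partition names, sizes and the data1 disk label are illustrative assumptions, not the deployed template:

  %pre
  # count CPU cores and size /wntmp at 20 GB per core (sizes below are in MB)
  NCORES=$(grep -c ^processor /proc/cpuinfo)
  WNTMP_MB=$((NCORES * 20 * 1024))

  # write partition directives that the main kickstart pulls in with
  # "%include /tmp/partition-include"
  {
      echo "part /boot  --size=512   --ondisk=sda"
      echo "part swap   --size=8192  --ondisk=sda"
      echo "part /      --size=40960 --ondisk=sda"
      echo "part /wntmp --size=${WNTMP_MB} --ondisk=sda"
      # if a partition labelled data1 already exists, reuse it without formatting
      if blkid -L data1 >/dev/null 2>&1; then
          echo "part /data1 --onpart=$(basename $(blkid -L data1)) --noformat"
      fi
  } > /tmp/partition-include
  %end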
-- Main.samir - 2015-04-27