Kubernetes on AWS with Auto-Applying
Overview
This workflow demonstrates the creation of a multinode Kubernetes cluster on AWS, using Flight Solo to automatically configure nodes as Kubernetes workers when they boot.
Prepare Network
- Create a VPC with
    - Name: kubecluster1
    - In "VPC and More"
        - IPv4 CIDR Block: 10.10.0.0/16
        - Number of Availability Zones: 1
        - Public Subnets: 1
        - Private Subnets: 0
        - VPC Endpoints: None
- Create a Security Group with
    - Name: kubecluster1-sg
    - Description: "Security group for kubecluster1 cluster"
    - VPC: kubecluster1
    - Inbound Rules
        - "SSH" from "Anywhere-IPv4"
        - "HTTP" from "Anywhere-IPv4"
        - "HTTPS" from "Anywhere-IPv4"
        - "All Traffic" from 10.10.0.0/16
Build Login Node
Launch the Flight Solo image from the AWS Marketplace
- Select "Launch from EC2"
- Instance Type: t3.2xlarge
- Click "Edit" on Network Settings and
    - VPC: kubecluster1
    - Auto-assign public IP: Enable
    - Select existing security group: kubecluster1-sg
- Set root volume size to at least 20GB
- Under Advanced Details -> User Data, add this:
  ```yaml
  #cloud-config
  write_files:
    - content: |
        SHAREPUBKEY="true"
        AUTOPARSEMATCH=".*"
        AUTH_KEY=kubecluster1
      path: /opt/flight/cloudinit.in
      permissions: '0600'
      owner: root:root
    - content: |
        /opt/flight/bin/flight profile prepare openflight-kubernetes-multinode
      path: /var/lib/firstrun/scripts/00-prepare-profile.bash
      permissions: '0600'
      owner: root:root
  ```
Note
The above user data enables sharing the public key with clients, automatically adds any nodes that connect to hunter with a correct auth key, and secures node hunting with the authorisation key kubecluster1. For more information, see the user data documentation.
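If you would rather script this launch, a rough AWS CLI equivalent is sketched below. Every identifier is a placeholder (Flight Solo AMI ID, key pair, subnet and security group IDs), the user data above is assumed to be saved locally as `login-user-data.yaml`, and the root device name in the block device mapping may differ for the AMI you use.

```bash
# Sketch only: the AMI, key pair, subnet and security group IDs are placeholders.
AMI_ID=ami-xxxxxxxxxxxxxxxxx      # Flight Solo AMI from the Marketplace listing
SUBNET_ID=subnet-xxxxxxxx         # the public subnet in kubecluster1
SG_ID=sg-xxxxxxxx                 # kubecluster1-sg

aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type t3.2xlarge \
  --key-name my-keypair \
  --subnet-id "$SUBNET_ID" \
  --security-group-ids "$SG_ID" \
  --associate-public-ip-address \
  --user-data file://login-user-data.yaml \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=20,VolumeType=gp3}'
```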
Configure Login Node
- Parse & Label Login Node
    - Run the command `flight hunter parse`.
    - Select the node with Space.
    - Press Down to erase the field.
    - Type in `login1` as the label and press Enter.
- Run `flight profile configure`, select `OpenFlight Kubernetes Multinode`, and answer the questions as follows:
    - Cluster Name: kubecluster1
    - Default user: flight
    - Set user password to: Some secure password
    - NFS server (hostname or flight-hunter label): login1
    - IP or FQDN for Web Access: the Public IPv4 DNS of the login node, from the AWS EC2 Console
    - IP Range of Compute Nodes: 10.10.0.0/16
    - IP Range of Kubernetes Pods (must not overlap with Compute Node IP Range): 192.168.0.0/16
- Apply the master profile to login1 with `flight profile apply login1 master`
- Wait for `flight profile list` to show the status of login1 as completed (`flight profile view login1 --watch`)
- Setup Automatic Application of Hunter Nodes
    - Add the following lines to `/opt/flight/opt/hunter/etc/config.yml` to automatically apply the worker profile to hunter nodes with labels containing `node`:

      ```yaml
      auto_apply:
        node: worker
      ```
Note
You will need to use sudo to have permissions to edit this config file.
    - Restart the hunter service with `flight service restart hunter` (a scripted sketch of the edit and restart follows below)
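A minimal sketch of making that edit non-interactively, assuming `config.yml` does not already contain an `auto_apply` section (if it does, merge the keys by hand instead of appending):

```bash
# Append the auto_apply mapping and restart hunter (editing the file requires sudo).
# Assumes config.yml has no existing auto_apply section; merge manually if it does.
sudo tee -a /opt/flight/opt/hunter/etc/config.yml > /dev/null <<'EOF'
auto_apply:
  node: worker
EOF
flight service restart hunter
```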
Launch a Compute Node
Launch the Flight Solo image from the AWS Marketplace
- Select "Launch from EC2"
- Instance Type: t3.2xlarge
- Click "Edit" on Network Settings and select:
- VPC: kubecluster1
- Auto-assign public IP: Disable
- Select existing security group: kubecluster1-sg
- Set root volume size to at least 20GB
- Under Advanced Details -> User Data, add the following (replace `<LOGIN_SERVER_IPV4_PRIVATE_ADDRESS>` with the private IP of the login node, which can be found with `ip addr` on that system):

  ```yaml
  #cloud-config
  write_files:
    - content: |
        SERVER=<LOGIN_SERVER_IPV4_PRIVATE_ADDRESS>
        LABEL=node01
        AUTH_KEY=kubecluster1
      path: /opt/flight/cloudinit.in
      permissions: '0600'
      owner: root:root
    - content: |
        /opt/flight/bin/flight profile prepare openflight-kubernetes-multinode
      path: /var/lib/firstrun/scripts/00-prepare-profile.bash
      permissions: '0600'
      owner: root:root
  ```
- Repeat the above to create more nodes, changing the `LABEL=` field in the cloud-init data to a unique label for each node (see the sketch below).
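One way to script the repetition is a small loop that rewrites the `LABEL=` line before each launch. This is a sketch under the same assumptions as before: placeholder AMI, subnet and security group IDs, and a template file `node-user-data.yaml` containing the cloud-config shown above.

```bash
# Sketch only: launches node01..node03, each with a unique LABEL in its user data.
AMI_ID=ami-xxxxxxxxxxxxxxxxx   # Flight Solo AMI
SUBNET_ID=subnet-xxxxxxxx      # kubecluster1 public subnet
SG_ID=sg-xxxxxxxx              # kubecluster1-sg

for i in 01 02 03; do
  sed "s/LABEL=node01/LABEL=node${i}/" node-user-data.yaml > "node${i}-user-data.yaml"
  aws ec2 run-instances \
    --image-id "$AMI_ID" \
    --instance-type t3.2xlarge \
    --key-name my-keypair \
    --subnet-id "$SUBNET_ID" \
    --security-group-ids "$SG_ID" \
    --no-associate-public-ip-address \
    --user-data "file://node${i}-user-data.yaml" \
    --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=20,VolumeType=gp3}'
done
```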
Checking it Works
Congratulations! You now have an automatically expanding Kubernetes cluster!
Once the compute node has come up, it will be automatically added to the accepted list of hosts for the cluster (see `flight hunter list`) and will have the Kubernetes worker profile applied to it automatically (see `flight profile list`).
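To confirm that a worker has actually joined Kubernetes, you can also query the cluster from the login node. A brief check, assuming `kubectl` is available to the default user once the master profile has completed:

```bash
# Run on the login node; each compute node should appear with STATUS "Ready"
# once its worker profile has finished applying.
kubectl get nodes -o wide
```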
What you can do next:
- Seamlessly access the cluster via the Flight Web Suite in a web browser by visiting the Public IPv4 DNS (from the AWS EC2 Console) for the login node
- Run some Kubernetes pods (see the example below)
- Remove unused nodes from the cluster before deleting them from AWS
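For example, a quick way to run some pods and check that they schedule onto the new workers (a sketch, assuming kubectl access on the login node):

```bash
# Start a small nginx deployment, expose it, and confirm the pods are running.
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide        # shows which worker node each pod landed on

# Clean up when done
kubectl delete service nginx
kubectl delete deployment nginx
```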