
Provisioning EKS clusters

For now, EKS is only used for the manager cluster, which mainly hosts the Cloud Platform Concourse CI. We are working on migrating our live cluster to EKS.

IMPORTANT: All cluster names must be globally unique (EKS or kOps), because each cluster creates a Route53 hosted zone named after it.

You can create a new EKS test cluster using the cluster build pipeline.

Alternatively, you can use the create-cluster script.

Execute the script, passing the -k eks or --kind eks option and the desired name of your new cluster, e.g.:

./create-cluster.rb --name mogaal-eks --kind eks

Check the create a cluster runbook for the pre-requisites and environment variables needed to run the script.

Alternatively, if you need more control over the test cluster parameters, or you just prefer to do it manually, the rest of this document describes the process.

Pre-requisites

  • Make sure you have access to the Cloud Platform AWS account (you can verify this as shown below)
  • Set your environment variables (you might also want to include AWS_* envs):
    export KOPS_STATE_STORE=s3://cloud-platform-kops-state
    export AWS_PROFILE=moj-cp
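
To confirm the credentials are being picked up, you can check which account and role you are using (aws sts get-caller-identity is a standard AWS CLI call; the account returned should be the Cloud Platform one):

aws sts get-caller-identity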

Provisioning

1. VPC

We need to create a VPC in which to deploy the cluster. VPCs are deployed using the terraform code under the cloud-platform-infrastructure/terraform/aws-accounts/cloud-platform-aws/vpc folder:

cd cloud-platform-infrastructure/terraform/aws-accounts/cloud-platform-aws/vpc
terraform init
terraform workspace new <WorkspaceName>
terraform apply

You should be able to see your new VPC (named after your workspace) in the AWS Console. Check it before jumping to the next step.
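
You can also check from the command line. This assumes the VPC carries a Name tag matching the workspace name; adjust the filter if your tagging differs:

aws ec2 describe-vpcs --region eu-west-2 --filters "Name=tag:Name,Values=<WorkspaceName>"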

NOTE: For consistency, please use the same terraform workspace name across the VPC, cluster and components layers.

2. Creating EKS cluster

Now it’s time to provision the cluster itself. Change directory to terraform/aws-accounts/cloud-platform-aws/vpc/eks and follow a very similar terraform workflow:

terraform init
terraform workspace new <WorkspaceName>
terraform apply -var="vpc_name=$VPC_NAME"

vpc_name must be set; it is the only required variable. You can also set variables such as cluster_node_count and worker_node_machine_type to configure the node group parameters.
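
For example, a hypothetical apply overriding both (the values are purely illustrative, not recommendations):

terraform apply -var="vpc_name=$VPC_NAME" -var="cluster_node_count=4" -var="worker_node_machine_type=m5.xlarge"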

Once terraform finishes, you should be able to generate your kubeconfig file (used by kubectl to establish a connection to the cluster) using aws-cli in the following way:

aws eks --region eu-west-2 update-kubeconfig --name mogaal-eks

NOTE: Replace “mogaal-eks” with your cluster name (the workspace name)

The final check for this step is confirming you can connect to the cluster:

$ kubectl cluster-info
Kubernetes master is running at https://B04C5FC7828A0515457E141A9610815D.sk1.eu-west-2.eks.amazonaws.com
CoreDNS is running at https://B04C5FC7828A0515457E141A9610815D.sk1.eu-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

3. Deploy components

Installing the components is the last step. Before running terraform, make sure the KUBECONFIG variable is exported and that you have access to the cluster.
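
A quick way to verify access, assuming the kubeconfig generated in step 2 is at the default path (adjust if yours differs):

export KUBECONFIG=~/.kube/config
kubectl get nodes

To deploy the components, change directory to terraform/aws-accounts/cloud-platform-aws/vpc/eks/components and follow exactly the same terraform workflow: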

terraform init
terraform workspace new <WorkspaceName>
terraform apply

4. Delete the EKS cluster

Delete the EKS cluster using the script

There is a destroy-cluster.rb script which you can use to delete your cluster.

Read the script before using it. Deleting a cluster is something you should be very cautious about, and ensure you know exactly what you’re doing.

The script is entirely non-interactive, and will not prompt you to confirm anything. It just destroys things.

The destroy-cluster script must always be run in a container. This ensures that the environment of the script is fully controlled, and you don’t run into problems such as the kubernetes context being changed in another window, or extra environment variables causing unwanted effects.

First, run make tools-shell to start the container.

Then invoke the script like this:

./destroy-cluster.rb --name [short cluster name] --kind eks --yes

Run without --yes to do a dry run, and see what commands would be executed.
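
For example, a dry run against the test cluster created earlier would look like this (cluster name reused from the example above):

./destroy-cluster.rb --name mogaal-eks --kind eks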

You can get more information using:

./destroy-cluster.rb --help

If any steps fail:

  • Fix the underlying problem
  • Edit the script to comment out any sections of the ClusterDeleter.run function which you no longer need to run
  • Re-run the script

Delete the EKS cluster manually

Follow these steps to delete the EKS cluster manually.

First, set the kubectl context for the EKS cluster you are deleting. The easiest way to do this is with the aws command:

$ export KUBECONFIG=~/.kube/config
$ export cluster=<cluster-name>
$ aws eks --region eu-west-2 update-kubeconfig --name ${cluster}

You should see this output:

Added new context arn:aws:eks:eu-west-2:754256621582:cluster/<cluster-name> to .kube/config
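
You can confirm kubectl now points at the right cluster:

$ kubectl config current-context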

Then, from the root of a checkout of the cloud-platform-infrastructure repository, run these commands to destroy all cluster components, and delete the terraform workspace:

$ cd terraform/aws-accounts/cloud-platform-aws/vpc/eks/components
$ terraform init
$ terraform workspace select ${cluster}
$ terraform destroy

The destroy process often gets stuck on the prometheus operator. If that happens, running this in a separate window usually works:

$ kubectl -n monitoring delete job prometheus-operator-operator-cleanup

$ terraform workspace select default
$ terraform workspace delete ${cluster}

Change directories and perform the following to destroy the EKS cluster, and delete the terraform workspace.

$ cd .. # working dir is now `eks`
$ terraform init
$ terraform workspace select ${cluster}
$ terraform destroy
$ terraform workspace select default
$ terraform workspace delete ${cluster}
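
Optionally, confirm the cluster itself is gone (region assumed from the earlier examples; your cluster should no longer be listed):

$ aws eks list-clusters --region eu-west-2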

Change directories and perform the following to destroy the cluster VPC, and delete the terraform workspace.

$ cd .. # working dir is now `vpc`
$ terraform init
$ terraform workspace select ${cluster}
$ terraform destroy
$ terraform workspace select default
$ terraform workspace delete ${cluster}
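
As a final check, you can confirm the VPC has been removed, using the same Name-tag convention assumed in the earlier check:

$ aws ec2 describe-vpcs --region eu-west-2 --filters "Name=tag:Name,Values=${cluster}"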