Access EKS cluster
In order for kubectl to find and access an EKS cluster, it needs a kubeconfig file.
When you create a new EKS cluster, a kubeconfig file is created locally within cloud-platform-eks/files/, named kubeconfig_${WorkspaceName}.
This file is only available to the user who created the cluster.
Other authorized users can create their own kubeconfig file to access the cluster as follows:
Pre-requisites
- The AWS CLI
- An AWS profile named moj-cp, with credentials for the Cloud Platform AWS account (see the sketch after this list if you need to set one up)
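If you do not already have the moj-cp profile, you can add it with the AWS CLI. This is a minimal sketch: it assumes you already hold an access key ID and secret access key for the Cloud Platform AWS account, and the values shown are placeholders.

$ aws configure --profile moj-cp
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: eu-west-2
Default output format [None]: json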
Set the AWS_PROFILE environment variable:
export AWS_PROFILE=moj-cp
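Before generating a kubeconfig, you can optionally check that the profile resolves to the right account and list the clusters it can see. These are standard AWS CLI calls; they assume your IAM identity is allowed to call sts:GetCallerIdentity and eks:ListClusters.

$ aws sts get-caller-identity
$ aws eks list-clusters --region eu-west-2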
Create a kubeconfig file using the AWS CLI
Run this command:
$ export KUBECONFIG=~/.kube/config
$ aws eks --region eu-west-2 update-kubeconfig --name <cluster-name>
You should see this message:
Added new context arn:aws:eks:eu-west-2:754256621582:cluster/<cluster-name> to .kube/config
Check that kubectl is properly configured by getting the cluster state:
$ kubectl cluster-info
If kubectl is correctly configured to access the EKS cluster, you should see output similar to:
Kubernetes master is running at https://B04C5FC7828A0515457E141A9610815D.sk1.eu-west-2.eks.amazonaws.com
CoreDNS is running at https://B04C5FC7828A0515457E141A9610815D.sk1.eu-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
You should now be able to run kubectl commands, such as kubectl get namespaces.
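If you work with more than one cluster, the context added above sits alongside any existing contexts in the same kubeconfig file. You can list them and switch between them with kubectl; the context name below is the one reported in the message above.

$ kubectl config get-contexts
$ kubectl config use-context arn:aws:eks:eu-west-2:754256621582:cluster/<cluster-name>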
Create kubeconfig manually using a template
You can create a kubeconfig file manually, using the template below.
Login to the AWS console. In the EKS service, select the cluster you need to access. You will see all the information related to the EKS cluster.
Copy the values for API server endpoint, Certificate authority and Cluster name into this template:
apiVersion: v1
preferences: {}
kind: Config
clusters:
- cluster:
    server: <API server endpoint>
    certificate-authority-data: <Certificate authority>
  name: eks_<Cluster name>
contexts:
- context:
    cluster: eks_<Cluster name>
    user: eks_<Cluster name>
  name: eks_<Cluster name>
current-context: eks_<Cluster name>
users:
- name: eks_<Cluster name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<Cluster name>"
Save the kubeconfig with a filename (e.g. ~/.kube/config_eks) and use it to access the EKS cluster.
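For example, point KUBECONFIG at the file you just saved and re-run the cluster check from earlier (the filename is only the example suggested above):

$ export KUBECONFIG=~/.kube/config_eks
$ kubectl cluster-info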