Terminal access to EKS managed nodes
Overview
AWS Systems Manager Session Manager is used to configure and manage terminal access to EKS worker nodes.
All EKS clusters are created with the permissions AWS Systems Manager Session Manager needs to start sessions on the worker nodes.
On the AWS Console, Services -> AWS Systems Manager -> Quick Setup for Host Management is configured to detect the live EKS cluster worker nodes (default & monitoring) by the tag `Cluster` with the value `live`.
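Before starting a session, it can help to confirm that the worker nodes are actually registered with Systems Manager. A minimal check, assuming the AWS CLI is configured for the account and region hosting the cluster:

```
# List instances currently managed by Systems Manager; a node must show up
# here (with PingStatus "Online") before a session to it will work.
aws ssm describe-instance-information \
  --query 'InstanceInformationList[].[InstanceId,PingStatus,PlatformName]' \
  --output table
```

If a node is missing from this list, the usual cause is the node's instance role lacking the `AmazonSSMManagedInstanceCore` managed policy.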
Prerequisites
- AWS Console access
- Session Manager plugin (if using the AWS CLI). To install the Session Manager plugin on macOS:

```
brew install --cask session-manager-plugin
```
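To verify the plugin is installed and on your PATH, run it with no arguments; it prints a confirmation message rather than starting a session:

```
# Prints a confirmation that the plugin was installed successfully.
session-manager-plugin
```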
Steps to get terminal access via the Console
Get the Instance Name of the node you want to log in to using kubectl:

```
kubectl get nodes
```
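If it is useful to see the EC2 instance IDs alongside the node names before switching to the Console, a small variation using kubectl's standard custom-columns output shows both (the instance ID is the last path segment of the provider ID):

```
# Node names with their provider IDs (aws:///<availability-zone>/<instance-id>).
kubectl get nodes -o custom-columns='NAME:.metadata.name,PROVIDER_ID:.spec.providerID'
```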
Log in to the AWS Console, go to Services -> EC2, and search for the Instance Name.
Select the instance and then click Connect
Select the Connection method -> Session Manager
Steps to get terminal access via the AWS CLI
Get the Instance ID of the node you want to log in to using kubectl:

```
# To get the list of nodes and their names
kubectl get nodes

# To get the instance ID for a given node name
kubectl describe node <node name you want to shell into> | grep 'ProviderID' | awk -F '/' '{print $NF}'
```
Start the session with the CLI command:

```
aws ssm start-session --target <Instance ID>
```
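The two steps can be combined into a small shell helper. This is just a sketch (`eks-shell` is an illustrative name, not existing tooling) that resolves a node name to its instance ID and opens the session:

```
# Usage: eks-shell <node name>
eks-shell() {
  local instance_id
  # The provider ID has the form aws:///<availability-zone>/<instance-id>;
  # take the last path segment.
  instance_id=$(kubectl get node "$1" -o jsonpath='{.spec.providerID}' | awk -F '/' '{print $NF}')
  aws ssm start-session --target "$instance_id"
}
```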
Access via SSH/SCP
It can be more convenient to access the nodes using the standard ssh client, especially when copying files.
To set up, add the following to `~/.ssh/config`:

```
Host i-*
    User ssm-user
    ProxyCommand bash -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```
Connect and append your `~/.ssh/id_rsa.pub` key to the `/home/ssm-user/.ssh/authorized_keys` file on the node.
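One way to do this (a sketch, assuming your public key lives at `~/.ssh/id_rsa.pub`) is to copy the key locally, start a plain SSM session, and paste it in on the node:

```
# On your workstation: copy the public key to the clipboard (macOS).
pbcopy < ~/.ssh/id_rsa.pub

# In an SSM session on the node (aws ssm start-session --target <Instance ID>):
mkdir -p /home/ssm-user/.ssh
echo '<paste public key here>' >> /home/ssm-user/.ssh/authorized_keys
chmod 700 /home/ssm-user/.ssh
chmod 600 /home/ssm-user/.ssh/authorized_keys
```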
Now you can `ssh i-0abcdef` or `scp i-0abcdef:file .`.