Kubernetes Ingress on EKS (AWS), step by step

Rajesh
6 min read · Oct 17, 2021

I was working on ingress some time back and had to implement a working EKS setup. I found several documents online, and most of them were missing something or the other, leaving me thinking "whaaat"? So I decided to take things into my own hands!


I will try to show a working implementation of Kubernetes ingress on an AWS EKS cluster, end to end. This ingress will host a website/app, with the controller and external-DNS running in parallel to update Route 53 DNS entries based on the YAML code, which will direct ALB traffic to different services based on path-based ingress routing. You won't need to refer to any other resource to make this work.

We will be using eksctl to create the EKS cluster on AWS.

Why not use Terraform or CFT?
Because it would take more time and effort to achieve the same result.

Prerequisites for EKS on AWS for our setup:
- AWS CLI, kubectl, and eksctl installed
- An IAM user with the required permissions (an Admin role is easiest, though not recommended)
- Both VPC (EKS) subnets with a tag entry (for subnet auto-discovery to work):
[Subnet tag] kubernetes.io/cluster/${cluster-name} = owned or shared
- A Linux server or PowerShell to run commands, with jq installed (optional though)
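The subnet tag from the prerequisites can be applied with the AWS CLI. A minimal sketch, assuming the cluster name from this article; the subnet IDs are made-up placeholders, and the aws call is left commented so you can substitute your own values first:

```shell
# Sketch: tag both EKS subnets so the controller's subnet auto-discovery works.
# CLUSTER and the subnet IDs below are placeholders - substitute your own.
CLUSTER=myeks
TAG_KEY="kubernetes.io/cluster/${CLUSTER}"
echo "$TAG_KEY"
# aws ec2 create-tags \
#   --resources subnet-0aaa1111 subnet-0bbb2222 \
#   --tags Key=${TAG_KEY},Value=shared
```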

Clone this repo; it has all the files needed to implement ingress: https://github.com/iamraj007/EKS_with_Ingress.git

Replace the EKS cluster name (i.e., myeks) and AWS region (i.e., us-east-1) with your own as you implement.

1 Create the EKS cluster (this creates the control plane, etc.)

$ eksctl create cluster --name=myeks --region=us-east-1 --zones=us-east-1a,us-east-1b --without-nodegroup

2 Create a node group for the above EKS cluster (with some additional parameters)

$ eksctl create nodegroup --cluster=myeks --region=us-east-1 --name=myeks-ng-public1 --node-type=t3.small --nodes=2 --nodes-min=2 --nodes-max=3 --node-volume-size=20 --ssh-access --ssh-public-key=ps-poc --managed --asg-access --external-dns-access --full-ecr-access --appmesh-access --alb-ingress-access

Validate EKS if it’s ready (via CLI or check AWS Console):

$ eksctl get cluster
$ eksctl get nodegroup --cluster=myeks
$ kubectl get nodes

3 Create an IAM policy that allows managing AWS resources like ALBs, checking EC2 and VPC details, and creating or adjusting the required resources.

$ aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
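The iam_policy.json referenced above normally ships with the load balancer controller's release assets. A hedged sketch of where to fetch it from; the URL layout is an assumption, so match the version to the controller manifest applied in step 7 and check the repo you cloned first:

```shell
# Assumed URL layout of the kubernetes-sigs/aws-load-balancer-controller repo;
# the version (v2.2.1) matches the manifest applied in step 7.
LBC_VERSION=v2.2.1
POLICY_URL="https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/${LBC_VERSION}/docs/install/iam_policy.json"
echo "$POLICY_URL"
# curl -o iam_policy.json "$POLICY_URL"
```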

4 Create an IAM OIDC identity provider for our cluster

$ eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=myeks --approve
$ aws eks describe-cluster --name myeks --query 'cluster.identity.oidc' --output text
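To cross-check that the provider was actually created, you can extract the issuer ID from the cluster identity and look it up in IAM. A sketch; the issuer URL below is a made-up example of the format AWS returns:

```shell
# In practice, fetch the real issuer URL with:
#   OIDC_URL=$(aws eks describe-cluster --name myeks \
#     --query 'cluster.identity.oidc.issuer' --output text)
OIDC_URL="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"
OIDC_ID=${OIDC_URL##*/}   # keep everything after the last '/'
echo "$OIDC_ID"
# This ID should appear in: aws iam list-open-id-connect-providers
```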

5 Create a service account on our EKS cluster in a namespace and attach the IAM policy created in step 3

$ eksctl create iamserviceaccount --cluster=myeks  \
--namespace=kube-system --name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::458XXXXXX123:policy/AWSLoadBalancerControllerIAMPolicy --override-existing-serviceaccounts --approve

Verify assume role policy statement:

$ SALB_ROLE=$(aws iam list-roles | jq -r '.Roles[] | select(.RoleName|match("eksctl-myeks-addon-iamserviceaccount-kube-s-")) | .RoleName')
$ aws iam get-role --role-name $SALB_ROLE --query "Role.AssumeRolePolicyDocument.Statement[].Condition" --output text
Assume role policy statement

Validate service account in kube-system namespace:

$ kubectl describe sa aws-load-balancer-controller -n kube-system

6 Deploy cert-manager, a Kubernetes add-on that automates TLS certificate management

$ kubectl apply -f cert-manager.yaml --validate=false

7 Create Load balancer controller

Now that our prerequisites for the cluster and LB controller are done, create the LB controller. This single file creates the multiple components the controller needs to function correctly, such as the ClusterRole, RoleBinding, CustomResourceDefinitions, etc.

$ kubectl apply -f  aws-load-balancer-controller_v2_2_1_full.yaml

a) Verify LB controller deployment :

$ kubectl get deployment aws-load-balancer-controller -n kube-system
$ kubectl describe deployment aws-load-balancer-controller -n kube-system
$ kubectl get sa -n kube-system | grep aws-load-balancer-controller
AWS LB controller in kubernetes

b) Check the load balancer controller logs (for exceptions, if any) in the kube-system namespace:

$ kubectl logs -f $(kubectl get po -n kube-system | egrep -o 'aws-load-balancer-controller-[A-Za-z0-9-]+') -n kube-system

8 Create an SA in the cluster for external-DNS to work properly

$ eksctl create iamserviceaccount --name external-dns \
--namespace kube-system --cluster myeks --attach-policy-arn arn:aws:iam::458XXXXXXXXXXX123:policy/AllowExternalDNSUpdates \
--approve

a) Verify assume role string (using aws CLI):

$ SADNS_ROLE=$(aws iam list-roles | jq -r '.Roles[] | select(.RoleName|match("eksctl-myeks-addon-iamserviceaccount-kube-s")) | .RoleName')
$ aws iam get-role --role-name $SADNS_ROLE --query "Role.AssumeRolePolicyDocument.Statement[].Condition"

b) Check the external-dns SA attributes:

$ kubectl describe sa external-dns -n kube-system

9 Deploy external DNS on cluster

Make sure to update two important fields in this YAML file:

  • --txt-owner-id : [your Route 53 hosted zone ID]
  • --domain-filter : [the domain name you wish to use]

$ kubectl create -f external-dns-deploy.yaml
$ kubectl get sa -n kube-system | grep dns
$ kubectl describe sa external-dns -n kube-system
External-dns SA
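For reference, the relevant portion of external-dns-deploy.yaml looks roughly like the sketch below; the image tag and filter values are assumptions, so use the ones in the repo you cloned:

```yaml
# Sketch only: container spec fragment of the external-dns Deployment.
spec:
  serviceAccountName: external-dns   # the SA created in step 8
  containers:
  - name: external-dns
    image: k8s.gcr.io/external-dns/external-dns:v0.7.6   # assumed version
    args:
    - --source=service
    - --source=ingress
    - --domain-filter=subdomain.domain.com      # your domain
    - --provider=aws
    - --txt-owner-id=Z0123XXXXXXXXXXXXXXXNNN    # your hosted zone ID
```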

a) Verify the external-dns deployment; it should be running without any errors:

$ kubectl get deployments  -n kube-system | grep external-dns
$ kubectl describe deployments external-dns -n kube-system

b) A pod for external-dns should have started in kube-system; we can check its logs:

$ kubectl logs -f $(kubectl get po -n kube-system | egrep -o 'external-dns-[A-Za-z0-9-]+') -n kube-system

10 Deploy the application that you wish to host on Kubernetes

Now deploy a sample application (you can also do this as the first step) that you wish to host on Kubernetes. We will deploy a customized nginx image using a Deployment, and also add a NodePort Service to expose the app on a port (i.e., 80).
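The repo's manifests likely resemble the sketch below; the names (frontend, frontend-svc) and image are illustrative assumptions, not the repo's actual content:

```yaml
# Hypothetical sketch of app-deploy-service-frontend.yaml: an nginx
# Deployment plus a NodePort Service exposing it on port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
```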

$ kubectl create -f app-deploy-service-backend.yaml
$ kubectl create -f app-deploy-service-frontend.yaml

Verify the pods are running and the services are exposed on the defined ports:

$ kubectl get pods
$ kubectl get svc

11 Deploy Ingress on our EKS cluster

Finally, all our prerequisites are done; it's time to deploy our ingress on the cluster.

Annotations in an ingress are very critical; if you miss important ones, things might not work as expected, though you can always check the ingress controller pod logs to see any exceptions.
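A hedged sketch of what ingress-pathbased.yaml might contain; the annotation set, service names, and paths are illustrative assumptions, so compare against the file in the cloned repo:

```yaml
# Hypothetical sketch of ingress-pathbased.yaml; names and paths are
# illustrative - the repo's actual manifest may differ.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    external-dns.alpha.kubernetes.io/hostname: svc.subdomain.domain.com
spec:
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: frontend-svc
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: backend-svc
            port:
              number: 80
```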

$ kubectl create -f ingress-pathbased.yaml
Ingress
# Be sure to update the ingress-pathbased.yaml file with your own domain:
external-dns.alpha.kubernetes.io/hostname: svc.subdomain.domain.com

# Also make sure the zone ID of the above domain matches the one given in
# the external-dns-deploy.yaml file in step 9:
args:
- --domain-filter=subdomain.domain.com
- --txt-owner-id=Z0123XXXXXXXXXXXXXXXNNN

Now verify the ingress; it should be working as expected:

$ kubectl get ingress
$ kubectl describe ingress my-ingress

Now, after a few minutes, you can access your domain, where based on the path you enter, a different service will respond as per the ingress configuration you defined. We no longer have to update the Route 53/DNS entry every time the load balancer changes; just update the ingress YAML and the rest will be taken care of. #Automate

Ingress traffic flow


Rajesh

A tech guy doing Cloud and DevOps stuff; loves tech, cloud, networking, and IT security.