Installation of Cloud-Native Application Stacks in Amazon EKS Cluster

Learnlogik
7 min read · Feb 24, 2022

In the current phase of cloud evolution, most enterprises are shifting gears: either building and owning their applications in a cloud-native way, or adopting other vendors' stacks into their production environment as SaaS by subscribing through a cloud marketplace. It is fair to say that every company in this space wants to own a SaaS boutique.

To achieve this model, they need a platform that is quick to stand up and easy to scale. The fastest route is a managed Kubernetes orchestration platform from any of the major cloud providers. To build software that is loosely coupled, easily detachable, and more efficient, the work should be split into microservices, pipelined, and upgraded stage by stage. For example, the London-based fintech bank Monzo and the Indian furniture e-commerce startup Furlenco built their entire applications on a microservices strategy, and the worldwide German courier DHL migrated its whole backend and logistics systems onto new strands of microservices. Both DevOps engineers and Kubernetes/SRE engineers play a vital role here, orchestrating services effectively and meeting SLA standards on every feature upgrade, rollout, and deployment of heterogeneous applications.

With IT organizations that want to set up their own cloud-native boutique in mind, this walkthrough shows how to build a simple single-tenant, multi-node EKS cluster in the AWS cloud.

Make sure you have a verified AWS account before starting. The steps below are run on a Linux machine, so first install the AWS CLI in your local Linux terminal.

Install AWS CLI:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

$ unzip awscliv2.zip

$ sudo ./aws/install

Once the AWS CLI is installed, authenticate the local terminal to the AWS account by providing the Access Key and Secret Key you generated in IAM. Plain text is the preferred default output format, but you can also use JSON or YAML.

Configure AWS Account to Local Machine

$ aws configure --profile produser

AWS Access Key ID [None]: xxxxxxxxxxxxxxxx

AWS Secret Access Key [None]: xxxxxxxxxxxxxxx

Default region name [None]: us-east-1

Default output format [None]: text
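Behind the scenes, aws configure writes these values to profile files under your home directory. A sketch of what they end up looking like (keys redacted):

```ini
# ~/.aws/credentials
[produser]
aws_access_key_id = xxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxx

# ~/.aws/config
[profile produser]
region = us-east-1
output = text
```

Later commands can then select this profile with --profile produser or by exporting AWS_PROFILE=produser.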

Or

You can import the keys from the CSV file downloaded when the IAM user was created.

$ aws configure import --csv file://credentials.csv

The next step is to install eksctl in the terminal before launching the EKS cluster. Alternatively, you could configure the cluster from the AWS console GUI, but the CLI method is more convenient.

Installing EKSCTL:

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

sudo mv /tmp/eksctl /usr/local/bin

eksctl version

Once eksctl is successfully installed, verify it with eksctl version and make sure it is the latest release. To create the EKS cluster we must specify a cluster name and a stable Kubernetes version; at the time of writing that is v1.21, but always use the most recent supported version.

Note: do not load a full config file describing the region, namespace, and node limits at this stage, as that will throw invalid installation parameters. It is advised to create the node group after the cluster has been created.

Install EKS Cluster

eksctl create cluster \
  --name my-cluster \
  --version 1.21 \
  --without-nodegroup
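For reference, eksctl also accepts a config file via -f; a minimal sketch that creates only the control plane (the name and region shown are illustrative) might look like the following. Declaring no node groups in the file is consistent with the note above about creating them separately.

```yaml
# cluster.yaml -- illustrative; create with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
# no nodeGroups/managedNodeGroups section: node groups are created afterwards
```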

Once the cluster is created, verify it in the AWS Console GUI under the EKS service. Nodes then have to be created under a new node group. Choose the node type according to your application workload: whether it is memory-intensive, CPU-intensive, or storage-heavy. We chose t3.medium for demonstration purposes, but for heavyweight applications such as SAP HANA, prefer a larger node type such as m5.large for large-scale deployments.

To host cloud-native deployments, the cluster needs a minimum number of nodes matching the engineering specification. For hosting a small application, best practice is to bring up at least a three-node cluster; the nodes counted here are worker nodes only, not the control plane. On EKS, the control plane is provisioned and operated by AWS itself once we create the node group as managed.

eksctl create nodegroup \
  --cluster alpha \
  --region us-east-2 \
  --name ng-mp-test \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 4 \
  --managed

To delete the node group later:

eksctl delete nodegroup --cluster alpha --name ng-mp-test

To check that the cluster is active, look in the EKS GUI or run the query below.

aws eks --region us-east-2 describe-cluster --name yobimp --query "cluster.status"

Then connect the EKS cluster prepared above to the local PC that is already configured with the AWS CLI.

Add AWS EKS Cluster into AWS CLI Local Machine

aws eks --region us-east-2 update-kubeconfig --name alphapc

output:

Added new context arn:aws:eks:us-east-2:241087790595:cluster/alphapc to /home/pcboy/.kube/config
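update-kubeconfig merges a context entry like the following into ~/.kube/config (a simplified sketch; the real entry also carries cluster endpoint, certificate, and exec-based user settings):

```yaml
# excerpt from ~/.kube/config after update-kubeconfig (abbreviated for illustration)
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:241087790595:cluster/alphapc
    user: arn:aws:eks:us-east-2:241087790595:cluster/alphapc
  name: arn:aws:eks:us-east-2:241087790595:cluster/alphapc
current-context: arn:aws:eks:us-east-2:241087790595:cluster/alphapc
```

From this point, kubectl commands run against the EKS cluster by default.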

Install Helm:

To deploy the cloud-native application stack we use Helm, the Kubernetes deployment package manager. Helm is considered one of the most stable projects built by a group of CNCF contributors. The next step is to install the helm-s3 plugin into the same Helm client, making sure the client is Helm 3.

helm plugin install https://github.com/hypnoglow/helm-s3.git

Once the Helm plugin has been successfully installed, we should create a new S3 bucket in the AWS account.

Create an S3 bucket named alphatestmarket in the AWS GUI

After the S3 bucket has been created, create an empty repository under the application's name in the bucket with helm s3 init, specifying the application folder "spark-operator" (S3 bucket name: alphatestmarket). The packaged application will be parked there later.

Create an Empty Repository in S3 Folder

helm s3 init s3://alphatestmarket/spark-operator/

Initialized empty repository at s3://alphatestmarket/spark-operator/

Once Helm initializes the S3 repository, it generates a valid index.yaml under the application folder spark-operator. This file acts as a pointer to the charts we will add from the local machine after Helm packaging.

A freshly initialized index.yaml looks like this:

apiVersion: v1
entries: {}
generated: 2022-02-10T14:26:15.247888154-07:00

To work with the chart repository by name instead of the full URL, you can add an alias; for example, register it as the "alphaapp" repository. For this demonstration we choose a stable chart, the Spark Operator cloud-native stack from the Apache community, downloading the chart from a Helm repository, GitHub, or Docker Hub and keeping the chart files on the local PC. In your case, your boutique application should already be with you as a proper chart: Chart.yaml, values files, and templates. If you are not confident about the layout, compare it against any published Helm chart for reference.

helm repo add alphaapp s3://alphatestmarket/spark-operator/

“alphaapp” has been added to your repositories

helm repo update

helm repo list # check the repository list

Now package the required stack folder on the local machine. Here the application folder, named spark-plus-alpha, contains the chart, values, and templates; when packaged, the stack is archived in .tgz format and versioned per the version recorded in the chart file, producing spark-2.4.5.tgz.

helm package ./spark-plus-alpha
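The archive name is derived from the name and version fields of the chart's Chart.yaml. A minimal illustrative sketch consistent with the spark-2.4.5.tgz output above (field values are assumptions based on the search output later in this walkthrough):

```yaml
# spark-plus-alpha/Chart.yaml (illustrative)
apiVersion: v2
name: spark
version: 2.4.5        # chart version -> produces spark-2.4.5.tgz
appVersion: "1"
description: Fast and general-purpose cluster computing system.
```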

Push the Local Helm package to S3

Now push the packaged application stack from the local machine to the AWS S3 repository under alphaapp.

helm s3 push ./spark-2.4.5.tgz alphaapp

Installing the Stack in the EKS Cluster

Now check that the same application stack is available in the repository added from S3 under alphaapp.

helm search repo alphaapp

NAME            CHART VERSION   APP VERSION   DESCRIPTION
alphaapp/spark  2.4.5           1             Fast and general-purpose cluster computing system.

Having confirmed that all of the above prerequisites are installed and that the repository and application stack have been added, we now install the stack on the production three-node cluster.

helm install spark alphaapp/spark --namespace ng-spark-test1 --create-namespace --wait

Wait for the installation to complete; once it has succeeded, check the pods in the cluster as shown below.

kubectl get pods --all-namespaces

kubectl get pods -n ng-spark-test1

kubectl describe pod -n ng-spark-test1

Finally, the cloud-native boutique application stack is installed on the AWS-managed EKS cluster; open the application in a browser on the appropriate ports or via an Ingress port-mapping policy.
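One hedged sketch of exposing an application through an Ingress follows; the host, service name, and port are assumptions for illustration, not values taken from the Spark chart, so substitute the service your chart actually creates.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spark-ui                # hypothetical name
  namespace: ng-spark-test1
spec:
  rules:
  - host: spark.example.com     # assumed host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: spark-webui   # assumed service name exposed by the chart
            port:
              number: 8080      # assumed port
```

An Ingress controller (for example, the AWS Load Balancer Controller) must be running in the cluster for this to serve traffic.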

Sometimes installation errors occur because the cluster has insufficient memory for memory-intensive workloads; update the cluster with more appropriate node instances and perform the installation again. If an error occurs with the namespace, try a new namespace. It is mandatory to give the application its own namespace during installation: keeping it in the cluster's default namespace creates confusion and prevents clean segregation during upgrades or when deleting the app.
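If memory pressure is the problem, many charts also let you lower resource requests instead of resizing nodes. A sketch of a values override, passed as helm install ... -f small-values.yaml; the keys shown are illustrative and chart-specific, so verify them against the chart's own values.yaml:

```yaml
# small-values.yaml (illustrative keys -- check the chart's values.yaml)
resources:
  requests:
    memory: 512Mi
    cpu: 250m
  limits:
    memory: 1Gi
```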

For production purposes, if you are an enterprise company in need of support, Yobitel will assist on a timely basis with qualified staff. If you need advanced production-level training on Kubernetes and Docker orchestration for your employees, we will take on enhancing their skill sets to meet industry standards.


Learnlogik

Learn Logik — A Virtual Training and Self-Learning Provider on Cloud, Container, DevOps, Kubernetes and IT Technologies worldwide based in London, UK