Monday, November 26, 2018

JTD-DesignerSeries-15-Microservices-101


A brief Context
The past decade was about SOA architectures & the SOAP protocol, which provided a standard, platform-neutral way for applications to communicate in an enterprise environment. SOA services are usually deployed as application packages on application servers running on VMs, and SOA business & architecture initiatives evolved to support reusability, composability, portability, service discovery etc.

Cloud computing, Docker-based containers, RESTful architecture & tooling, and success stories like Netflix all suggest designing application features as small, autonomous, deployable containers that can be created, destroyed & scaled independently. This approach of dividing monoliths into small services, or creating platforms backed by thousands of small services working together, improved & released iteratively with CI/CD, can be defined as Microservices.

Enterprises get several benefits by adopting microservices & cloud computing, like cost, time to market & automation, but there are significant design challenges & management complexity that have to be embraced along with them.

Heroku suggested the 12-factor app design practices for cloud-native applications, which guide architects & developers toward a Microservices architecture.


12-Factor App Methodology
Apps developed with 12-Factor design generally employ immutable infrastructure [containers rather than VMs], ephemeral applications [non-persistent, can be deleted or restarted], declarative setup [IAAS, Deployments with desired state] and automation at scale [fault-tolerance, auto-scaling, self-healing], and are considered more scalable, maintainable & portable.

I. Codebase
It suggests using version control properly, which is to say a single repository per application. In a literal sense, you shouldn't entangle commits across applications, so as to maintain clean isolation at the source-code level.

II. Dependencies
It is likely that modules or services will have dependencies, and the suggested approach is to declare them explicitly in a pom.xml [Java / Maven] or package.json [NodeJS]. Jars or node modules shouldn't be checked into code repositories & manually included in the application.

III. Config
Configuration is anything that changes between environments, like DB URLs & credentials. It should not be part of the code; rather, define config variables in a properties file and let the runtime environment provide their values.
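To make the factor concrete, here is a minimal Java sketch of reading config from the runtime environment instead of hard-coding it. The key name DB_URL and the fallback value are hypothetical; injecting the environment map keeps the lookup testable.

```java
import java.util.Map;

// Minimal sketch: read configuration from the runtime environment with a fallback.
// The key name (DB_URL) and fallback values are illustrative, not prescriptive.
public class AppConfig {
    private final Map<String, String> env;

    public AppConfig(Map<String, String> env) {
        this.env = env; // pass System.getenv() in production, a plain map in tests
    }

    public String get(String key, String fallback) {
        String value = env.get(key);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        AppConfig config = new AppConfig(System.getenv());
        System.out.println("DB URL: " + config.get("DB_URL", "jdbc:postgresql://localhost:5432/dev"));
    }
}
```

The same binary then runs unchanged in every environment; only the environment variables differ.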

IV. Backing Services
Consider things like a Postgres DB or Redis as attachable resources. One pattern is to treat these resources as external: maybe use a CNAME to map the DB URL, and certainly use config vars that get the URL from the runtime environment.

V. Build, Release, Run
As you push code to the repository, the build process should create a single artifact [a jar with mvn install, a container with docker build], which then picks up the environment-specific configuration to create a deployable image. Once the release is tagged, the runtime will run that application image [java -jar myApp.jar / node app.js].

VI. Processes
It is quite important that the service design doesn't rely on in-memory application state between restarts; rather, have the application pull the current state from persistent storage like Redis or a DB like MongoDB.

VII. Port Binding
Traditional application deployment relies on a server container for request routing, which means requests are received at a specific listener on the server; in modern application development, port binding is managed inside the application itself, which exports its service on a port.
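As a sketch of what self-contained port binding looks like, the following Java example uses the JDK's built-in com.sun.net.httpserver to export HTTP directly from the app. The PORT variable name follows the Heroku convention, and the default of 8080 is an assumption.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// The app itself binds a port and serves HTTP; no external server container.
public class SelfContainedApp {
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        // Port comes from the environment (Factor III), with an assumed default.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        start(port);
    }
}
```

In a framework like Spring Boot the embedded Tomcat plays the same role: the port is part of the app, not of a shared server.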

VIII. Concurrency
It may help to design the application as processes that can scale out easily depending upon the availability of worker nodes, rather than always scaling up during peak loads.

IX. Disposability
This refers to a fast-starting, responsive service with a graceful shutdown. Microservices are treated more as cattle, provisioned with pay-as-you-need cloud resources, and not as pets, the way application servers have been over the last decade.

X. Dev/Prod Parity
This principle is quite important as it advocates that the app's dev environment be the same as its prod environment, along with the other envs in between. As all the code & dependencies are packaged in the same image that runs everywhere, it solves the problem of "it worked on my machine".

XI. Logs
It suggests keeping the logging process separate from the application processes and treating logs as event streams.

XII. Admin Processes
One-off admin tasks like migrating DBs or generating reports should be run just like other application processes, e.g. as containers.


Guiding Principles in Microservices Architecture
It is nice to have some high-level overarching principles that you can rely on while designing your small autonomous microservices. Sam Newman has explained these quite well at conferences, and it is worthwhile to make some notes out of them.


I. Modelled around Business Domain
It helps to design a microservice for each feature / capability of a platform or application. Instead of designing services in horizontal layers like system, process & UI, it is better to design across vertical business capabilities.

II. Culture of Automation
As there is a large number of deployable units, automation is the way to manage containers at scale. It is quite important to enable small teams to provision their infrastructure & manage & operate their services. Orgs should adopt CI/CD tools to automate deployments & releases as much as possible.

III. Hide Implementation Details
In a microservice architecture, the service design will also define the bounded context, and an API layer will manage the exposed data rather than multiple services hitting the same objects or tables.

IV. Decentralize All the Things
It is about service teams making their own smart decisions & sharing best practices without centralized governance, and keeping messaging & middleware as dumb pipes rather than letting business context creep into gateway services.

V. Deploy Independently
This directly relates to deployable dependencies and whether the service runs on a VM or as a container image. In the SOA ecosystem, developers can develop independently but have to wait for scheduled deployments & releases; a progressive approach is to reduce such dependencies while evolving toward a microservice architecture.

VI. Consumer First
It is about getting feedback as you develop your APIs. In SOAP-driven web services, there is a WSDL document, a spec-driven way of telling consumers how to use your service. Similarly, API documentation tools like Swagger & service registries provide your consumers a way of discovering & calling the service.

VII. Isolate Failure
As you design many microservices, you have to think about how not to fail the whole application when a service stops accepting requests or is brought down. Patterns like circuit breakers & appropriate timeouts help increase the resiliency of your application.
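A circuit breaker can be sketched in a few lines. This is an illustrative toy, not a specific library's API; the class name, thresholds & timings are all made up for the example.

```java
// Toy circuit breaker: after `maxFailures` consecutive failures the breaker
// opens and calls fail fast until `resetMillis` has elapsed (then half-open).
public class CircuitBreaker {
    private final int maxFailures;
    private final long resetMillis;
    private int failures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int maxFailures, long resetMillis) {
        this.maxFailures = maxFailures;
        this.resetMillis = resetMillis;
    }

    public synchronized boolean allowRequest() {
        if (failures < maxFailures) return true;             // closed: pass through
        if (System.currentTimeMillis() - openedAt >= resetMillis) {
            failures = 0;                                    // half-open: probe again
            return true;
        }
        return false;                                        // open: fail fast
    }

    public synchronized void recordSuccess() { failures = 0; }

    public synchronized void recordFailure() {
        failures++;
        if (failures >= maxFailures) openedAt = System.currentTimeMillis();
    }
}
```

In practice you would reach for a battle-tested implementation (Hystrix was the well-known one in the Netflix stack), but the state machine is essentially this.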

VIII. Highly Observable
It is about understanding your deployed system, monitoring it as it runs & providing a faster & easier way to troubleshoot issues. It is suggested to have a log aggregation system which collects logs from all your services & provides some kind of query framework against the aggregated logs. Another design approach is to create a correlation ID that travels along the path of a transaction & can provide a runtime view of the service model.
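Correlation-ID propagation can be as simple as reusing the inbound header when present and minting a new ID otherwise. The header name below is a common convention, not a standard, and the class is a sketch rather than a specific framework's API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Reuse the inbound correlation ID if present; otherwise mint one so the
// first service in a transaction starts the trace.
public class Correlation {
    public static final String HEADER = "X-Correlation-ID";

    public static String ensure(Map<String, String> headers) {
        return headers.computeIfAbsent(HEADER, k -> UUID.randomUUID().toString());
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        String id = ensure(headers);
        // Every downstream call and every log line should carry this same ID.
        System.out.println(HEADER + "=" + id);
    }
}
```

Querying the aggregated logs by this ID then reconstructs the full path of a transaction across services.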

Sunday, November 25, 2018

JTD-DesignerSeries-14-Kubernetes-101


A brief Prep-up
Though the Dockerfile has standardized the application containerization format, complex features usually require multiple containers communicating together. Containers are lightweight operating-system virtualization, and an application can be packed into a docker image at build/release time. With containers deployed across different IAAS vendors like AWS, Google, Microsoft & the opensource Openstack, Kubernetes provides a portable, extensible open source platform for managing containerized workloads in a streamlined way across the IAAS landscape. For eg: the Deployment Kubernetes Object provides a JSON spec that allows you to create the infrastructure for your application / service anywhere a Kubernetes Cluster is running. Kubernetes is an extensible framework that allows vendors to build tools to support things like Dashboard, Visualizer & Operators.

An example of this is the "Kubernetes Operator for MongoDB" mentioned in my previous post
[https://integrationshift.blogspot.com/2018/11/jtd-designerseries-13.html].


Concepts
Kubernetes Object - A persistent entity that defines the cluster workload & the desired state of the cluster. For eg: a Kubernetes Deployment Object represents an application running on the cluster with 3 replicas as the desired cluster state.
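A minimal sketch of such a Deployment spec, declaring 3 replicas as the desired state; the object name & image (my-service:1.0) are placeholders.

```yaml
# Sketch of a Deployment object; names & image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: my-service:1.0
        ports:
        - containerPort: 8080
```

Applying this with kubectl hands the desired state to the cluster; the controllers then work to make the actual state match it.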

Pods: The Pod is the basic building block, representing a running process in the cluster. The idea of a pod is to provide a layer of abstraction around the container image, enabling communication by managing the network & storage layers. One container per Pod is the most common Kubernetes use case, but a Pod can contain multiple co-located containers that are tightly coupled & need to share resources. As Pods are ephemeral, disposable entities, Kubernetes uses controllers to manage the lifecycle of Pods.

Controller: It can create & manage multiple Pods for you, handling replication & rollouts to provide self-healing capabilities at cluster scope. Usually pod specifications are included in objects like ReplicaSets, StatefulSets & Deployments, which help Kubernetes controllers manage the runtime state of pods.

Kubernetes Architecture
At a really high level abstraction, Kubernetes documentation categorizes components as follows:
Master Components - Usually constitute the cluster control plane, providing administration & management features. [kube-apiserver, etcd, kube-scheduler, kube-controller-manager, cloud-controller-manager]

Node Components - Run on every node, maintaining running pods and providing the kubernetes runtime environment. [kubelet, kube-proxy, container runtime]


Links
a) https://kubernetes.io/

Sunday, November 18, 2018

JTD-DesignerSeries-13-KubernetesOperatorForMongoDB-101


A Brief on Containerization
Generally, transporting goods involves packaging & shipping containers that move between different modes of transportation by different shipping companies to deliver your goods at the doorstep. Similarly, modern software techniques like microservices divide software applications into smaller independent functions that are built, packaged, scaled & managed independently as isolated containers. Containers can be described as an operating-system-level virtualization method for running multiple isolated linux systems on a host with a single linux kernel.


Docker has become the de-facto standard for managing container images defined using a Dockerfile, and allows them to run on any machine with Docker installed. However, complex applications require container orchestration solutions, like Kubernetes, that can manage the lifecycle of containers & how these containers communicate with each other.

A Brief on Kubernetes
Kubernetes is an open source platform for managing containerized workloads & services. Kubernetes, with support from major cloud vendors, has emerged as the de-facto standard for container orchestration, governed by the Cloud Native Computing Foundation a.k.a CNCF. Kubernetes, born at Google and backed by a large open source community, has quite an advantage compared to other products like Docker Swarm & Apache Mesos.
The master node in a kubernetes cluster contains services to support the REST API, scheduler & controller manager. Each cluster contains one or more worker nodes, which contain the components to communicate with the master node & manage the containers running on the node. Worker nodes run containers managed as a logical layer represented by Pods.


A Brief on MongoDB Ops Manager & Operator
Ops Manager 4.0 contains a specialized component called the MongoDB Ops Manager Kubernetes operator, simply referred to as the Operator. The Operator implementation, built on the Kubernetes operator pattern, is a continuously running lightweight process deployed as a Pod with a single container.


The Operator defines & registers custom types within the Kubernetes cluster, which allows it to receive notifications about events occurring on the registered types. Notifications such as object creation or deletion allow the Operator to trigger custom logic for Kubernetes tasks, such as adding a mongod replica set to the Ops Manager project. The Operator essentially acts as a proxy between Kubernetes & Ops Manager, performing the needed tasks against each system. Helm, a tool for managing packaging & deployment in Kubernetes, can be used to deploy the Operator Pod with a helm chart.


Lab - Setup Kubernetes Cluster with MongoDB


Pre-requisite Step: Virtual Box, Docker, Kubectl, Minikube, Helm



Virtual Box: virtualbox --help

Docker: docker version

Kubectl: kubectl version

Minikube: minikube version

Helm: helm version



Step: MiniKube

minikube start
minikube status

eval $(minikube docker-env) - This sets the shell environment variables so that docker points to the docker daemon running inside the minikube VM. Running docker images will then list the kubernetes images deployed in the minikube cluster.


Lab - Kubernetes Operator

a) Create a MongoDB namespace.

Jeetans-MacBook-Pro:dirKubernetes home$ kubectl create namespace mongodb
namespace/mongodb created

b) Configure Kubectl to mongodb namespace
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl config set-context $(kubectl config current-context) --namespace=mongodb

Context "minikube" modified.

c) Check for deployed resources in mongodb namespace.
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl get all

No resources found.

d) Clone MongoDB-Enterprise-Kubernetes repository
Jeetans-MacBook-Pro:dirKubernetes home$ git clone https://github.com/mongodb/mongodb-enterprise-kubernetes
Cloning into 'mongodb-enterprise-kubernetes'...
remote: Enumerating objects: 21, done.
remote: Counting objects: 100% (21/21), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 132 (delta 8), reused 14 (delta 6), pack-reused 111
Receiving objects: 100% (132/132), 29.71 KiB | 2.29 MiB/s, done.
Resolving deltas: 100% (52/52), done.
Jeetans-MacBook-Pro:dirKubernetes home$ ls
mongodb-enterprise-kubernetes

e) Create a service account for helm
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created

f) Create a cluster role binding for the account
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

g) Initialize the helm system
Jeetans-MacBook-Pro:dirKubernetes home$ helm init --service-account tiller
Creating /Users/home/.helm 
Creating /Users/home/.helm/repository 
Creating /Users/home/.helm/repository/cache 
Creating /Users/home/.helm/repository/local 
Creating /Users/home/.helm/plugins 
Creating /Users/home/.helm/starters 
Creating /Users/home/.helm/cache/archive 
Creating /Users/home/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /Users/home/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

h) Create a secret & verify it with the describe command.
$ kubectl -n mongodb create secret generic madajeeblog-credentials --from-literal="user=some@example.com" --from-literal="publicApiKey=my-public-api-key"

secret/madajeeblog-credentials created
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl describe secrets/madajeeblog-credentials -n mongodb
Name:         madajeeblog-credentials
Namespace:    mongodb
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
publicApiKey:  36 bytes
user:          21 bytes

i) Install the operator with helm chart.
Jeetans-MacBook-Pro:mongodb-enterprise-kubernetes home$ helm install helm_chart/ --name mongodb-enterprise
NAME:   mongodb-enterprise
LAST DEPLOYED: Sun Nov 18 09:33:07 2018
NAMESPACE: mongodb
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME                         AGE
mongodb-enterprise-operator  1s

==> v1beta1/CustomResourceDefinition
mongodbstandalones.mongodb.com      1s
mongodbreplicasets.mongodb.com      1s
mongodbshardedclusters.mongodb.com  1s

==> v1/Role
mongodb-enterprise-operator  1s

==> v1/RoleBinding
mongodb-enterprise-operator  1s

==> v1/Deployment
mongodb-enterprise-operator  1s

==> v1/Pod(related)

NAME                                          READY  STATUS             RESTARTS  AGE
mongodb-enterprise-operator-74fbcbd9b7-p944v  0/1    ContainerCreating  0         1s

j) Operator is up & running.

Jeetans-MacBook-Pro:mongodb-enterprise-kubernetes home$ kubectl get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/mongodb-enterprise-operator-74fbcbd9b7-p944v   1/1     Running   0          9m

NAME                                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-enterprise-operator   1         1         1            1           9m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-enterprise-operator-74fbcbd9b7   1         1         1       9m
Jeetans-MacBook-Pro:mongodb-enterprise-kubernetes home$ 


Lab - MongoDB Ops Manager
a) Simple Test Ops Manager - A Deployment with one pod: one container running a mongoDB instance for the Ops Manager application DB & another container running an instance of Ops Manager.
Jeetans-MacBook-Pro:dirKubernetes home$ curl -OL https://raw.githubusercontent.com/jasonmimick/mongodb-openshift-dev-preview/master/simple-test-opsmanager-k8s/simple-test-opsmgr.yaml

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3228  100  3228    0     0  14934      0 --:--:-- --:--:-- --:--:-- 14944

b) Use kubectl & downloaded yaml configuration to create an instance of Ops Manager.
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl create -f simple-test-opsmgr.yaml
persistentvolume/mongodb-opsmgr-appdb-pv-volume created
persistentvolumeclaim/mongodb-opsmgr-appdb-pv-claim created
persistentvolume/mongodb-opsmgr-config-pv-volume created
persistentvolumeclaim/mongodb-opsmgr-config-pv-claim created
secret/mongodb-opsmgr-global-admin created
service/mongodb-opsmgr created
deployment.apps/mongodb-opsmgr created

c) Ops Manager is up & running.
Jeetans-MacBook-Pro:dirKubernetes home$ kubectl get all
NAME                                               READY   STATUS             RESTARTS   AGE
pod/mongodb-enterprise-operator-74fbcbd9b7-p944v   1/1     Running            0          5h
pod/mongodb-opsmgr-8c44d98f8-97jvs                 0/2     Running            0          1m

NAME                     TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/mongodb-opsmgr   NodePort   10.100.253.9   <none>        8080:30080/TCP   1m

NAME                                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-enterprise-operator   1         1         1            1           5h
deployment.apps/mongodb-opsmgr                1         1         1            0           1m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-enterprise-operator-74fbcbd9b7   1         1         1       5h
replicaset.apps/mongodb-opsmgr-8c44d98f8                 1         1         0       1m
Jeetans-MacBook-Pro:dirKubernetes home$ 

d) The Ops Manager runtime contains the configuration for creating the config map, project & secrets objects.
e) Create a mongoDB replica set config file with the appropriate project, credentials & namespace.
f) Apply the config file to create a replica set whose members are mongod container pods. The container Pods will be associated with StatefulSets.
g) You can then connect to the mongoDB replica set with the minikube IP & the external port exposed by the kubernetes replica set service.





Thanks

Monday, November 12, 2018

JTD-DesignerSeries-12-SpringBasics-101


A Brief on JEE applications
Enterprise Applications can be categorized as large scale distributed, transactional, highly available applications designed to support mission critical business functions.
JEE applications are made of modules & components deployed in their appropriate containers that provide the execution environment along with management & control services.
Architecturally speaking, distributed applications are layered across multiple tiers [Web Tier, Application/Service Tier, Database Tier] and primarily based on the Model 2 [Model-View-Controller, MVC] pattern.

Out of necessity, Rod Johnson released the modular Spring framework in 2003 under the Apache 2.0 license to reduce the complexity of application development for J2EE developers.


The Beans & Core modules apply the principles of DI / IOC and delegate the responsibility of object creation to the BeanFactory. The Context module allows you to access the created objects based on configuration files.

The Data Access modules help you abstract lower-level JDBC connection protocols & allow easy management of complex features like transactions & ORM.

The Spring Web modules provide implementations of the DispatcherServlet, ViewResolver, model objects, request mapping & controller classes, which can be configured to implement complex web applications.


A Brief on IOC / DI
Generally, loose coupling is enabled by defining dependencies as interfaces & delegating the initialization of dependent objects to a factory class. The factory class can then choose the implementation based on configuration, enabling complex features like transactions, caching, logging etc. Let's say, for example, the service layer [TransferServiceImpl] needs access to the DAO layer for database functions [findAccount, updateAccount]. TransferServiceImpl can then define the AccountRepository interface as a dependency and delegate to the Spring factory to instantiate an appropriate implementation like JDBCAccountRepository or HibernateAccountRepository based on configuration.

public class TransferServiceImpl implements TransferService {
    private AccountRepository acctRepository;

    public TransferServiceImpl(BeanFactory beanFactory) {
        // the factory returns the implementation configured for "accountRepository"
        this.acctRepository = (AccountRepository) beanFactory.getBean("accountRepository");
    }
}

Spring Container, thus creates & manages bean lifecycle based on the configuration file and injects them as dependencies as needed. This pattern of giving responsibility to the container to create objects & injecting its dependencies based on the configuration files is referred as Inversion of Control / Dependency Injection.

beans.xml
<bean id="accountRepository" class="JDBCAccountRepository">
    <constructor-arg ref="basicDataSource"/>
</bean>
<bean id="transferService" class="TransferServiceImpl">
    <property name="accountRepository" ref="accountRepository"/>
</bean>

ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
TransferService transferSrv = (TransferService) ctx.getBean("transferService");
transferSrv.transfer(fromAcct, toAcct, amount);

The Spring container can inject dependencies either through constructor args or with setter methods. Mandatory dependencies like datasources use the constructor-based injection approach, whereas optional features like notifications can be injected via setters.
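A simplified sketch of the two styles; the classes here are stand-ins for the example above, not Spring API, and NotificationService is a hypothetical optional dependency.

```java
// Stand-in types for the TransferService example.
interface AccountRepository { }
class JDBCAccountRepository implements AccountRepository { }
interface NotificationService { }

class TransferServiceImpl {
    private final AccountRepository accountRepository;  // mandatory dependency
    private NotificationService notificationService;    // optional dependency

    // Constructor injection: the object cannot exist without its repository.
    TransferServiceImpl(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    // Setter injection: the container may or may not wire this in.
    void setNotificationService(NotificationService notificationService) {
        this.notificationService = notificationService;
    }

    AccountRepository repository() { return accountRepository; }
}
```

Marking the mandatory field final also lets the compiler enforce that it is always supplied.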


A Brief on Maven
Maven is a software project management tool based on the POM configuration file (pom.xml). It helps you easily manage the build lifecycle of software projects by running commands like mvn package or mvn install. Maven helps you pull dependencies from central repositories into your project by specifying groupId:artifactId:version.

Maven is based around the central concept of a build lifecycle; a lifecycle is composed of build phases, and each phase is an ordered set of goals executed with a plug-in execution engine. Some of the common build phases of the default lifecycle are [validate, compile, test, package, verify, install, deploy].

Friday, November 9, 2018

JTD-DesignerSeries-11-AWS-CloudEssentials-101


Cloud Concepts
Cloud computing allows the on-demand delivery of IT resources via the internet with pay-as-you-go pricing. AWS lets you provision server, database & storage resources in seconds, which can be treated as disposable rather than as long leases of IT infrastructure created in a datacenter.

Cloud computing generally increases agility, and the AWS infrastructure, designed with regions & availability zones, supports elastic, highly available & easily scalable computing resources.

The AWS infrastructure spreads across several regions around the globe in secure data centers, and customers can satisfy data residency regulations by choosing their own region.

Core Services
a) Compute - An example service is Amazon EC2
b) Storage - An example service is Amazon S3, Amazon EBS.
c) Databases - An example service is Amazon RDS, Amazon DynamoDB, Amazon Redshift.
d) Networking & Content Delivery - Example services are Amazon VPC, Amazon Route 53.
e) Security, Identity & Compliance - An example service is AWS Identity & Access Management.

AWS Global Infrastructure
a) Region - Helps you optimize latency, minimize cost & support regulatory requirements. Resources in one region are not automatically replicated to other regions. A region generally constitutes two or more Availability Zones (AZs).
b) Availability Zones (AZ) - Collection of data centers in a region, which are isolated from one another but connected with a low latency network. Multiple AZs in a region support reliability & availability requirements of distributed systems.
c) Edge Locations - Hosts the content delivery network (Amazon Cloudfront) to support low latency & fast delivery of content to the customers.

Amazon Virtual Private Cloud (VPC)
Amazon VPC - Allows you to create a private network within the AWS cloud and lets you configure the IP address spaces, subnets & routing tables. These configurations help you control what you expose to the internet & what you isolate in the VPC. Other AWS services can then be deployed into the foundational VPC infrastructure designed with your custom configuration.
An Amazon VPC lives within a region & can span multiple AZs. A VPC defines an address space that is further divided into subnets. Route tables control the traffic between subnets & between subnets & the internet. Subnets are categorized as public [with a route to the internet] & private [no direct access to the internet].


AWS Security with Shared Responsibility Model
AWS datacenters & network architecture are designed & built to satisfy the security requirements of the most sensitive & controlled environments, and at cloud scale all customers benefit from it. AWS manages security of the cloud with core AWS IAM, provides logging & monitoring capabilities with services like AWS CloudWatch, encryption & key management with the likes of AWS KMS & Certificate Manager, network segmentation with VPC & AWS Direct Connect, and standard DDoS protection with AWS Shield.

Under the shared responsibility model, AWS operates, manages & controls the components from the virtualization layer down to the physical data centers. AWS protects the global infrastructure to secure the AWS cloud services.
When using AWS services, customers manage their content, including decisions like [What to store, Which AWS services, In what location, Content format & structure, Access to the content]. Customer responsibility differs slightly across AWS services: with a managed service like DynamoDB, AWS secures the OS layer, but with EC2 the customer is responsible for securing the host OS. Thus the customer is responsible for security in the cloud.


Identity & Access Management (IAM)
Amazon IAM - Allows you to manage access to your AWS account: create users, groups, roles & policies, enable Identity Federation & multi-factor authentication, integrate with other AWS services & configure access to the account's AWS resources.
User: A person who logs into your AWS account.

Groups: A collection of users with a common set of permissions. For eg: you can create a marketing group whose members need to access the same files in an S3 bucket.

Role: Defines a common set of permissions, for eg: S3 bucket access; a role can then be assigned to either users or AWS resources (like an EC2 instance) to grant access to the S3 bucket.

Policies: A document that defines one or more permissions, which can then be attached to users, groups or roles.
IAM is a universal service, not region-specific. Users are initially created with no permissions (least privilege) and are assigned an access key & secret which can be used to access AWS with APIs & the command line.


Amazon EC2
AWS EC2, the Elastic Compute Cloud, allows you to create & destroy server instances of resizable compute capacity in the AWS cloud infrastructure. Amazon EC2 instances are virtual servers attached to virtual disks backed by Elastic Block Storage (EBS).
Region-->EC2 Wizard-->AMI-->Select Instance Type-->Configure Network-->Configure Storage-->Configure KeyPairs-->Launch & Connect. Putty can be used to connect to a running instance after configuring the private key. EC2 offers various instance types for different purposes; some of them are mentioned below:
T2: Lower Cost & General Purpose usually used for Web Servers & Small DBs
M5: General Purpose usually used for Application Servers
D2: Dense Storage usually used for File Servers / Data Warehousing/ Hadoop


Elastic Load Balancers
Elastic Load Balancing is a distributed load balancing service that allows you to configure listeners & targets and helps you scale EC2 instances, storage, containers & other services across multiple AZs. AWS categorizes elastic load balancers as Classic & Application Load Balancers.

Classic Load Balancers address use-cases like [accessing web servers through a single load-balanced endpoint] & [providing scalability & high availability by enabling internal load balancing in application environments]. The ELB can work at layer 7 to support the HTTP/HTTPS protocols and can also operate at layer 4 to support TCP.

Application Load Balancers add more features to the classic load balancers and support additional scenarios, like load balancing across multiple containers on an EC2 instance. They also add support for additional protocols, enhanced CloudWatch metrics & targeted health checks. You can enable path-based routing, IPv6 support, dynamic ports with port-mapping rules etc.

While defining load balancers, you configure listeners [processes that check for connection requests for a specified protocol on a configured port], listener rules to route the traffic to targets [destinations like application servers on EC2 instances] & target groups.


Amazon Route 53
AWS Route 53 is a DNS service that maps domain names to EC2 instances, load balancers & S3 buckets. Typically you will create an A record that maps to the IPV4 address of a load balancer, which has a registered target of an EC2 instance running the website. You can then type the DNS address in the browser, which should bring up the home page of the website deployed on the EC2 instance.


Auto Scaling
Auto Scaling helps you ensure that you have the correct number of EC2 instances available to handle the load for your application. With CloudWatch you can determine the appropriate capacity for the instances & then configure autoscaling rules [e.g. capacity over 60%] to automate on-demand provisioning [scale out by launching instances] or de-provisioning [scale in by terminating instances] of EC2 instances.
Autoscaling is quite important in environments with fluctuating performance requirements. You can configure multiple Auto Scaling groups & auto scaling policies to manage different capacities & load scenarios.

Elastic Block Storage
EC2 instances can be allocated storage using disk volumes managed by the Amazon Elastic Block Storage service on a pay-as-you-go basis. EBS volumes are designed to be durable & available, as data is automatically replicated across multiple servers within the volume's AZ. AWS lets you choose storage volumes based on IO speed & cost-benefit analysis, and also lets you re-create volumes from snapshots.

AWS with CLI Interface
AWS CLI lets you manage & run AWS services from the command line. Let's say you want to create a bucket & upload some files from your local disk.

aws configure - lets you authenticate by providing the Access Key ID & Secret Access Key.
aws s3 mb [bucket name] - creates an S3 bucket [for eg: s3://demobucket-111718]
aws s3 cp hello.txt s3://demobucket-111718 - copies hello.txt from the local filesystem to the s3 bucket.
aws s3 ls s3://demobucket-111718 - lists the files in the bucket [hello.txt]

You can access the bucket's files from the S3 service on the AWS console. You can also access a file via its DNS address if you have enabled public access.