AWS Tutorial For Beginners | AWS Full Course – Learn AWS In 10 Hours | AWS Training | Edureka

Cloud computing is at the cusp of technological advancement, and when you talk about cloud computing you cannot shy away from mentioning Amazon Web Services (AWS), which is one of the leading cloud service providers in the market. If you are looking for a career in this domain, you have landed at the right place. Edureka brings you a complete course on Amazon Web Services, which not only touches upon the fundamentals but also dives deeper at a conceptual level. So let us take a look at the offerings of this session first. We would start with the fundamentals of cloud computing and Amazon Web Services. Moving on, we will talk about the core services that Amazon Web Services has to offer to you. The first domain is the compute domain, where we would be exploring services like EC2, Elastic Beanstalk and Lambda. Moving on, we'll talk about the storage domain, where we'll be exploring services like S3 and EFS. Next in line is the networking domain, where we'll be talking about services like VPC, Route 53, etc.

Then we would be talking about management and monitoring services like CloudWatch, CloudFormation, load balancers, etc. Moving on, we'll take a look at cloud security and services like IAM, etc. Then comes the database part, where we'll be exploring services like Amazon Redshift. Once we are done with the core services, we will also be discussing DevOps on AWS, where we will be talking about AWS services like AWS CodePipeline, AWS CodeCommit, etc. Now that the DevOps part and the core part of AWS are over, we can also switch to the career part, where we'd be discussing some numbers like job trends, salaries, etc., and would also take a look at the roles and responsibilities, and the kind of things that you should know when you talk about making a career in this particular domain. So before we get started, feel free to subscribe to our YouTube channel to get the latest updates on the trending technologies. Firstly, let's understand why cloud. To understand this, we need to understand the situation that existed before cloud came into existence.

So what happened back then? Firstly, in order to host a website you had to buy a stack of servers, and we all know that servers are very costly, so that meant we ended up paying a lot of money. Next was the issue of traffic: as we all know, if you are hosting a website, you are dealing with traffic that is not constant throughout the day, and that meant more pain; we would understand that as we move further. And the other thing was monitoring and maintaining your servers. Yes, this is a very big problem. Now all these issues led to certain disadvantages. What are those? As I mentioned, servers are very costly.

Yes, the setup was costly, and thus you ended up paying a lot of money, and there were other factors contributing to this point. Let's discuss those as well. One, troubleshooting was a big issue. Since you're dealing with a business, your prime focus is on taking good decisions so that your business does well, but if you end up troubleshooting problems or you focus more on infrastructure-related issues, then you cannot focus on your business, and that was a problem. So either you had to do multitasking, or you had to hire more people to focus on those issues; thus, again, you ended up paying more money. As I've discussed, the traffic on a website is never constant, and since it varies, you are not certain about its patterns. Say, for example, I need to host a website, and for that I decide to reserve two petabytes of total storage for my usage based on the traffic. As the traffic varies, there would be times when the traffic is high and my whole two petabytes of space is consumed, right? But what if the traffic is very low for certain hours of the day?

Then I'm actually not utilizing these servers, so I end up paying more money for the servers than I should be. So yes, upscaling was an issue. All these things were an issue because we were paying more money, we did not have sufficient time to take our decisions properly, there was ambiguity, and there was more trouble monitoring and maintaining all these resources. Apart from that, one important point which we need to consider is the amount of data that is being generated now versus what was being generated then. Back then it was okay, but nowadays if you take a look at it, the amount of data that is generated is huge, and this is another reason why cloud became so important. As I mentioned, we all know that everything is going online these days: we shop online, we buy food online, we do almost everything online, and whatever information we need, we get it online, bookings and reservations included.

Everything can be taken care of online. That means we have a lot of data that is being generated these days, and this is digital data. Back in those times we were communicating through verbal discussions and through paperwork, and that was a different kind of data to maintain. Since everything is moving to the cloud, or moving online, the amount of data that we have these days is huge, and when you have this huge amount of data, you need a space where you can actually go ahead and maintain this data. So yes, again there was a need for this space, and there were all these issues, that is, your cost.

Your monitoring, your maintenance, the provision of sufficient space: everything was taken care of by cloud. So let us try to understand what this cloud is exactly. Well, think of it as a huge space that is available online for your usage. Now, this is a very generic definition, so to be more specific, I would be saying this: think of it as a collection of data centers. Now data centers, again, are a place where you store your data or host applications, basically. So when you talk about these data centers, they were already existing. So what did cloud do differently? Well, what cloud did was it made sure that you are able to orchestrate your various functionings and applications and manage your resources properly, by combining all these data centers together through a network and then providing you the control to use these resources and to manage them properly. To make it even simpler,

I would say there was a group of people or organizations, basically, that went ahead and bought these servers, these compute capacities, storage spaces, compute services and all those things, and they have their own channel or network. All you had to do was go ahead and rent those resources, only to the amount you needed and also for the time that you needed. So yes, this is what cloud did: it let you rent the services that you need and use only those services that you need. So you ended up paying only for the services that you rented, and you ended up saving a lot of money. The other thing is, these service providers take care of all the issues, like your security, your underlying infrastructure and all those things, so you can freely focus on your business and stop worrying about all these issues. So this is what cloud is, in simple words: it's a huge space which has all these services available, and you can just go ahead and pick and rent those services that you want to use. So what is cloud computing? Well, I've already discussed that. Just to summarize it, I would say it is nothing but an ability, or it is a place, where you can actually store your data.

You can process it, and you can access it from anywhere in the world. Now, this is an important point. Say, for example, you decide to choose a region for your infrastructure somewhere in the US. You can be sitting maybe in China, or maybe in India, and you can still have access to all your resources that are there in the US. All you need is a good internet connection. So that is what cloud does: it makes the world accessible, it lets you have your applications wherever you want to and manage them the way you want to. Next, we would be discussing different service models.

Now you need to understand one thing: you are being offered cloud services, the platform to use your services or your applications, basically, but different people have different requirements. There are certain people who just want to consume a particular resource, and there are certain people who actually want to go ahead and create their own applications, create their own infrastructure and all those things. So based on these needs, we have particular service models, that is, your cloud providers provide you with a particular model which suits your needs. So let us try to understand these models one by one. We have these three models, that is, your IaaS, your PaaS and your SaaS.

I would be discussing them in the reverse order, that is, I would be talking about SaaS first, and then I would go upwards. So let us start with SaaS. SaaS is nothing but Software as a Service. Now what happens here is, basically, you're just consuming a service which is already being maintained and handled by someone else. To give you a valid example:

We have Gmail. All you do is send mails to people and receive mails, and whatever you do, you just use the service that is there. You do not have to maintain it, you do not have to worry about upscaling, downscaling, security issues and all those things. Everything is taken care of by Google; since it is Gmail I'm talking about, Google manages everything here. So all you have to worry about is consuming that service. Now this model is known as Software as a Service, that is, SaaS. Next we have PaaS, that is, Platform as a Service. Now here you are provided with a platform where you can actually go ahead and build your own applications. To give you an example, we have our Google App Engine. Now when you talk about Google App Engine, what you can do is you can go ahead, you can create your own applications, and you can put them on Google App Engine so that others can use them as well. So in short, you're using the platform to create your own applications. And lastly we have IaaS, that is, Infrastructure as a Service.
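The three models above can be sketched as a small lookup in code. The "you manage" lists follow the usual shared-responsibility picture described in this session; the example products for SaaS and PaaS are the ones named above, while using EC2 as the IaaS example is my own assumption for the sketch.

```python
# Sketch of the three cloud service models: what the customer
# still has to manage under each one. Purely illustrative.
SERVICE_MODELS = {
    "SaaS": {
        "example": "Gmail",               # you only consume the service
        "you_manage": [],
    },
    "PaaS": {
        "example": "Google App Engine",   # platform given, you build on it
        "you_manage": ["applications", "data"],
    },
    "IaaS": {
        "example": "Amazon EC2",          # assumed example: a raw server
        "you_manage": ["applications", "data", "runtime",
                       "middleware", "OS"],
    },
}

def responsibility(model: str) -> list:
    """Return the layers the customer manages under a given model."""
    return SERVICE_MODELS[model]["you_manage"]

print(responsibility("SaaS"))   # []
print(responsibility("IaaS"))
```

Reading it this way makes the ordering obvious: moving from SaaS down to IaaS, more and more layers shift from the vendor's side to yours.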

Now, what do I mean by this? Well, the whole infrastructure is provided to you so that you can go ahead and create your own applications. That is, an underlying structure is given to you, and based on that you can go ahead and choose your operating systems, the kind of technology you want to use on that platform, the applications you want to build, and all those things. So that is what IaaS, Infrastructure as a Service, basically is. So these were the different models that I wanted to talk about. Now, this is the architecture that gives you a clear depiction of what happens as far as the service models are concerned. Now, you have something called your SaaS. Here, as you see, all you're doing is consuming or using your data, that's it. Everything else is managed by your vendor.

That is, your applications, runtime, middleware, OS, virtualization, servers and network: everything. As far as your PaaS is concerned, your data and applications are taken care of by you. That is, you can go ahead, you can build your own applications, and you can use the existing platform that is provided to you. And finally you have your IaaS. Now what happens here is, only the basic part, that is your networking, storage, servers and virtualization, is managed by your vendor, while the middleware, OS, runtime, applications and data reside on your end; you have to manage all these things. That is, you are just given the parts of a car, for example: you go ahead, you assemble it, and you use it for your own sake. That is what IaaS is. To give you another example, think of it as eating a pizza.

Now there are various ways of doing that. One: you order it online, you sit at home, you order the pizza, it comes to your place, you consume it. That is more of your SaaS, that is, Software as a Service: you just consume the service. Next is Platform as a Service. Now when I say Platform as a Service, you can think of it as going to a hotel and eating a pizza. Say, for example, I go there; they have the infrastructure, as in the tables and chairs, I just go sit, order the pizza, it is given to me, I consume it and I come back home. And IaaS, now, this is where you go ahead and make your own pizza.

You have the infrastructure, you buy the ingredients from somewhere or wherever it is, you make your pizza, you put it in the oven, you put spices and all those things, and then you eat it. Now this is the difference between these three services. So let us move further and discuss the next topic, that is the different deployment models that are there. Now when you talk about deployment models, you can also call them different types of clouds that are there in the market. We have these three types: that is your public cloud, your private cloud and your hybrid cloud. Let us try to understand these one by one. Now, as the name suggests, the public cloud is available to everyone: you have a service provider who makes these services or these resources available to people worldwide through the internet. It is an easy and very inexpensive way of dealing with the situation, because all you have to do is go ahead and rent this cloud, and you're good to go.

And it is available publicly. Next we have the private cloud. Now, this is a little different: here you are provided with the service and you can actually go ahead and create your own applications, and since it's a private cloud, you are protected by a firewall and you do not have to worry about various other issues that are there at hand. And next we have our hybrid cloud. Now, it is a combination of your private cloud and your public cloud. Say, for example, you can go ahead and build your applications privately, you can use them, you can consume them, you can use them efficiently.

When you sense that peak in your traffic, you can actually move to public, that is, you can move it to the public cloud, and even others can have access to it and they can use it. So these are the three basic deployment models that are there for your exposure, or your usage rather, and you can go ahead and use those as well. I hope this was clear to all of you. So let us move further and try to understand the next topic, that is the different cloud providers that are there in the market. Now, as I've mentioned, what happened was, since cloud came into existence, quite a few people went ahead and bought their own infrastructure, and now they rent the services to other people, and when you talk about this infrastructure, there are quite a few people out there who are actually providing these cloud services to different people across the globe.

Now, when you talk about these cloud providers, the first thing that should come to your mind is Amazon Web Services, because it is highly popular and it leaves other cloud providers way behind. The reason I'm saying this is the numbers that talk about Amazon Web Services. To give you an example, if you talk about its compute capacity, it is six times larger than that of all the other service providers that are there in the market. Say, for example, if the compute capacity of all the other service providers in the market combined was x, Amazon Web Services alone gives you a capacity of 6x, which is huge. Apart from that, there is its flexible pricing and various other reasons, that is, the services it provides and all those things. It is rightly a global leader, and the fact that it had a head start, it started way before many other services that are there in the market, means it actually gained popularity, and now we see quite a few organizations going ahead and using Amazon Web Services. Apart from that,

we have Microsoft Azure, which is a Microsoft product, and we all know that when Microsoft decides to do something, they expect to kill all the competition that is there in the market. It is still not on par with Amazon Web Services, not very neck and neck, but it is probably the second best when you talk about the cloud service providers in the market. So yep, it has a lot of catching up to do when you compare it with Amazon Web Services, but it is still a very good cloud service provider in the market. Then we have something called Google Cloud Platform, again a very good cloud provider in the market. Now, why am I saying this? We all know the infrastructure that Google has to offer: it has one of the best search engines that is there in the market, and the amount of data they deal with every day is huge.

So they are the pioneers when you talk about data and all those things, and they know how to actually handle this amount of data and how to have an infrastructure that is very good. That is why they have a very good facility, and that leads to it being one of the cheapest service providers in the market. Yes, there are certain features that GCP offers which are better even than Amazon Web Services when you talk about its pricing, and the reason for it is that it has various other services: what it does is it helps you optimize various costs; it uses analytics and various other ways by which it can optimize the amount of power you use, and that leads to less usage of power.

And since you are paying less for power, you end up paying less for your services as well. So that is why it is so cost-efficient. Then there are the other service providers: we have DigitalOcean, we have IBM, which are again very popular, but that is a discussion for some other time. As far as these service providers go, these are the major ones, that is, we have Amazon Web Services, we have Microsoft Azure, we have GCP, which are talked about a lot. This was about the basic cloud providers and the basic intro which I wanted you all to have. I hope you are all clear with whatever concepts we've discussed in this time.

Let's try to understand a little more about AWS. Well, it is a complete software suite, or a cloud service provider, which is highly secure. It provides you with various compute, storage, database and a number of other services, which we would be discussing in further slides as well. And when you talk about the market, it is the best, and it has various reasons to be the best in the market, one being its flexibility, its scalability and its pricing, and another being its compute capacity. Now, why is its compute capacity so important? Well, if you talk about the compute capacity, you need to understand one thing: if you take all the other cloud service providers in the market, that is, you leave out AWS and take all the others into consideration, their combined capacity would be somewhere equal to, say, x, and if you compare it with AWS, it is 6x.

So AWS has more compute capacity, six times more than all the other service providers that are there in the market. That is a huge amount. So these are the reasons that make AWS one of the best in the market. Let's try to find out what are the other reasons about AWS that make it so good, what are its services, features and uses, basically. So I would be discussing some use cases now. Now, if you are talking about a manufacturing organization, the main focus is to manufacture goods, but most of the businesses focus so much on various other services or practices that need to be taken care of, that they cannot focus on the manufacturing goal. This is where AWS steps in: it takes care of all the IT infrastructure and management. That means businesses are free to focus on manufacturing, and they can actually go ahead and expand a lot. Architecture consulting: now, the main concern here is prototyping and rendering. AWS takes care of both the issues; it lets you have automated or sped-up rendering as far as prototyping is concerned, and that is why architectural businesses benefit a lot when you talk about using AWS or any cloud provider, AWS being the best in the market. Again, the services are the best. A media company now: as far as a media company goes, the main concern is generating content and a place to dump it or to store it. Again, AWS takes care of both these situations.

Large enterprises: when you talk about large enterprises, their reach is worldwide, so they have to reach the customers and the employees globally, or across different places. So AWS gives you that option, because it has a global architecture, and your reach can be very wide. As far as these points are concerned, the advantages of AWS, as I mentioned, I won't say advantages exactly, I would say features as well: flexibility. Now, as far as AWS is concerned, it is highly flexible, and there are reasons to support it, one of the major reasons being that it's very cost-effective. Let us try to understand these two points together. Now, when you talk about flexibility, the first concern you should have is that you are dealing with big organizations; they have a lot of data that needs to be managed, deployed and taken care of. Now, when you talk about a cloud provider, if it is flexible, all these things are taken care of. The second thing is, it is highly cost-effective. Now, when I say cost-effective, AWS takes care of almost every aspect.

If you are a beginner or a learner, they have something called a free tier. That means you have sufficient resources to use for free, and that too for one long year, so you'd have sufficient hands-on without paying anything. Plus it has something called the pay-as-you-go model. Now, when I say pay-as-you-go model, what it does is it charges you only for the services which you are using, and only for the time you're using them. Again, that lets you scale up nicely, and hence you end up paying very less. And since you are paying very less, and since you have so many options when you are actually buying services, that gives you a lot of flexibility. Scalability: again, the first two points are related to this point. Now, how is that? When I say scalability, what happens is, as I mentioned, it is very affordable.
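The pay-as-you-go idea above can be put into a tiny bill calculation. The hourly rates and server names here are made up purely for illustration; real AWS pricing varies by service, instance type and region.

```python
# Toy pay-as-you-go bill: you are charged only for the hours a
# service actually ran. Rates below are invented for the sketch.
HOURLY_RATE = {"small-server": 0.02, "large-server": 0.16}  # $/hour, assumed

def monthly_bill(usage_hours: dict) -> float:
    """usage_hours maps a server type to the hours it ran this month."""
    return round(sum(HOURLY_RATE[k] * h for k, h in usage_hours.items()), 2)

# A server that runs only 8 hours a day costs a third of one that
# you would otherwise pay for around the clock:
part_time = monthly_bill({"large-server": 8 * 30})   # 240 hours
full_time = monthly_bill({"large-server": 24 * 30})  # 720 hours
print(part_time, full_time)  # 38.4 115.2
```

This is exactly the contrast with the pre-cloud situation described earlier: with your own purchased servers, you would pay the "full-time" cost no matter how little traffic you actually served.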

So you're paying only for what you use: if you're using a particular service for one hour, you'll be paying only for that one hour. That is how flexible it is, and what that does is it gives you the freedom to scale up and even scale down. Since it is easy to scale up, it is always advisable that you start with less and then scale as per your needs. Plus, there are quite a few services that can be automatically scheduled. Now what that means is, you would be using them only when there is uptime, and in downtime you can have those services get automatically shut down, so you do not have to worry about that as well. So when you talk about scalability, scaling up and down is very easy as far as AWS goes. Security again: now, security has been a topic of debate when you talk about cloud services especially, but AWS puts all those questions to rest.

It has a great security mechanism. Plus it provides you with various compliance programs that again help you take care of security, and when you talk about real-time security, even that is taken care of: you can take care of all the suspicious activities that are there, and AWS takes care of all those things, and you're left free to focus on your business rather. So these are the advantages which I feel AWS adds value to, and apart from that there are quite a few other points, like automatic scheduling, which I just mentioned, and various integrated APIs. Now, these APIs are available in different programming languages, and that makes its architecture really very strong: it's easy to switch from one programming language to another. So these are some of the features I feel make AWS a wonderful service provider in the market.

So let's move further and try to understand other things as far as AWS is concerned: its global architecture. When you talk about AWS, as I've mentioned, it is the best service provider in the market. So what makes AWS this popular? One of the reasons is its architecture. Now, when I talk about its architecture, it is very widely spread, and it covers almost every area that needs to be covered. So let's try to understand how it works exactly. Well, if you talk about the AWS architecture, the architecture is divided into two major parts, that is, regions and availability zones. Now, when you talk about the regions and availability zones, regions are nothing but different locations across the world where they have their various data centers put up. Now, as far as one region goes, it might have more than one data center, and these data centers are known as Availability Zones. You, being a consumer or an individual, can actually access these services by sitting anywhere in the world. To give you an example:

If I'm sitting in some part of the world, say, for example, I am in Japan right now, I can actually have access to the services or data centers that are there in the US right now. So that is how it works: you can choose your region, and accordingly you can pick your availability zones and use those, so you do not have to worry about anything. To throw some more light on it, you can take a look at this small map, which is the global map, and it shows the different places which have regions and availability zones. Now, as far as this map goes, I believe it's fairly old, and it has been updated in recent times, because AWS is putting in a lot of effort to have more data centers, or more availability zones, as far as their

wide reach is concerned, and we can expect some in China as well. So yes, they are actually reaching far and wide. So when you talk about these regions and availability zones, if you take a look at this map, what you can see is that you have your regions in orange color, and the number inside is the number of availability zones that the region has. To give you an example, we have São Paulo, which says that it has three availability zones; so that is how it is. And the ones that are in green are the ones which are coming soon, the regions that are in progress, and some of these have actually gone ahead and already started, or have been made available to people. So yes, this is how the architecture works, and this is how the AWS architecture looks. Okay, so let's move further and take a look at the next concept: domains of AWS. When you talk about its domains, the first domain that we are going to discuss is compute, and when you talk about compute, the first thing that should come to your mind is EC2. What is EC2? It is Elastic Compute Cloud, and what it does is it lets you have a resizable compute capacity.
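The region and Availability Zone idea discussed above can be sketched as a small lookup. São Paulo's three zones come from the map described in the session; the other region names and AZ counts here are my own assumptions for illustration, not current AWS numbers.

```python
# A region is a geographic location; each region contains one or
# more Availability Zones (data centers). Counts are illustrative.
AVAILABILITY_ZONES = {
    "sa-east-1 (Sao Paulo)": 3,      # stated in the session's map
    "us-east-1 (N. Virginia)": 6,    # assumed count for the sketch
    "ap-northeast-1 (Tokyo)": 4,     # assumed count for the sketch
}

def pick_region(min_azs: int) -> list:
    """Regions with at least `min_azs` zones, e.g. for redundant deployments."""
    return sorted(r for r, n in AVAILABILITY_ZONES.items() if n >= min_azs)

print(pick_region(4))
```

The point of the lookup is the consumer's view described above: you pick a region (and zones within it) wherever you like, regardless of where you yourself are sitting.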

It's more of a raw server where you can host a website, and it is a clean slate. Now, what do I mean by this? Say, for example, you go ahead and buy a laptop. It is a clean device where you can have your own OS, you can choose which OS you want, and all those things, accordingly. Your EC2 is again a clean slate, and you can do so many things with it. Next you have Elastic Beanstalk, which lets you deploy your various applications on AWS, and the only thing you need to know about it is that you do not have to worry about the underlying architecture. Now, it is very similar to your EC2, and the only difference between the two is that, as far as your Elastic Beanstalk is concerned, you can think of it as something that has predefined libraries, whereas your EC2 is a clean slate. When I say predefined libraries: say, for example, you want to use Java. As far as EC2 goes (now, this is just an example, don't take it literally), you'll have to, say, install everything from the beginning and start fresh.

But as far as your Elastic Beanstalk is concerned, it has these predefined libraries, and you can just go ahead and use those, because there's an underlying architecture which is defined. Let me say it again: I just gave you an example, don't take these sentences literally. So next we have migration. When you talk about migration, you need to understand one thing: AWS has a global architecture, and there would be a requirement for migration. And what AWS does is it lets you have physical migration as well. That means you can physically move your data to the data center
Which you desire now,
why do we need to do that? Say, for example,
I am sending an email. Somebody I can do
that through internet, but imagine if I have
to give somebody a movie. So instead of sending it online. I can actually go ahead
and give it to someone if that person is means
reachable for me and that way it would be
more better for me. My data remains secure and so many other things so same
is with data migration as well. And when you talk about AWS, it has something
called as snowball which actually lets you move
this data physically now, it's a storage service and it actually helps you
in migration a lot security. And compliance now
when you talk about security, we have various services. Like I have I am we have KMS now
when I say I am it is nothing but your identification and
authentication management tool. We have KMS which lets
you actually go ahead and create your own public
and private keys and that helps you keep your system secure the quite
a few other services as well, but I would be mentioning one
or two services from each domain because as we move further
in future sessions, we would be discussing
each of these services in detail and that is when I would be throwing a lot
more Done these topics for now.

For now, I would be giving you one or two examples, because I want you all to understand these to some extent; getting into the details of all these things would be too heavy for you people, because there are quite a few domains and quite a few services that we need to cover, and as we move further we would definitely be covering all those services in detail. Then we have storage. Now, when I talk about storage, again AWS has quite a few services to offer to you. We have something called
S3. Now, S3 works on a bucket-object kind of a model: your storage place is called a bucket, and your objects, which you store, are nothing but your files. Now, these objects are stored inside the buckets, basically. And then we have something called CloudFront, which is nothing but a content delivery network. We have something called Glacier. Now, when you talk about Glacier, you can think of it as a place where you can store archives, because it is highly affordable.
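The bucket-object idea just described can be sketched in plain Python. This is only a mental model, not the real service: actual S3 access goes through an SDK or the web API, and the bucket name and keys here are hypothetical.

```python
# Conceptual sketch of S3's storage model: a bucket is a named
# container, and an object is data stored under a key inside it.
class Bucket:
    def __init__(self, name: str):
        self.name = name
        self.objects = {}          # key -> object body

    def put_object(self, key: str, body: bytes) -> None:
        """Store (or overwrite) an object under the given key."""
        self.objects[key] = body

    def get_object(self, key: str) -> bytes:
        """Retrieve the object stored under the given key."""
        return self.objects[key]

bucket = Bucket("my-demo-bucket")            # hypothetical bucket name
bucket.put_object("photos/cat.jpg", b"...binary data...")
print(bucket.get_object("photos/cat.jpg"))
```

Note that keys like "photos/cat.jpg" only look like folder paths; in this model, as in S3, the bucket is really a flat key-to-object mapping.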

We have networking. When you talk about networking, we have services like VPC, Direct Connect and Route 53, which is a DNS service. When I say VPC, it is a virtual network which actually lets you launch your resources, that is, your AWS resources, basically. When you talk about Direct Connect, you can think of it as a leased internet connection which can be used with AWS. Next on this list, we have something called messaging. Yes, AWS assures secured messaging, and there are quite a few applications to take care of that as well. We have something called CloudTrail, we have OpsWorks; all these things help you in messaging or communicating with other parties, basically. Databases: now, storage and databases are similar, but you have to understand one difference: when you talk about your storage, that is where you store your executable files.

So that is the difference between the two. And when you talk about databases, we have something called Aurora, which is very SQL-like; it lets you perform various SQL operations at a much faster rate, and what Amazon claims is that it is five times faster than MySQL. So yes, when you talk about Aurora, again a great service to have. We also have something called DynamoDB, which is a non-relational DBMS. When you talk about non-relational DBMSs, I won't be discussing those in depth, but this helps you in dealing with various unstructured data sources as well. Next on this list, we have the last domain, that is the management tools. Now, when you talk about management tools, we have something called CloudWatch, which is a monitoring tool, and it lets you set alarms and all those things. Hopefully, when we are done with the demo part today, you'd have at least one part of your CloudWatch covered, because we would be creating alarms using CloudWatch today.
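Before the demo, here is a rough sketch of the alarm idea in plain Python. This is not the CloudWatch API; the function name, thresholds, and evaluation rule are simplified assumptions for illustration only.

```python
# Illustrative sketch of how a CloudWatch-style alarm evaluates a metric.
# Not the real AWS API -- names and numbers are assumptions for this example.

def evaluate_alarm(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all breach `threshold`,
    otherwise 'OK' (a simplified version of consecutive-period evaluation)."""
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(dp > threshold for dp in recent) else "OK"

# Example: CPU utilization samples; alarm when 3 consecutive periods exceed 80%.
cpu = [35, 60, 85, 90, 92]
print(evaluate_alarm(cpu, threshold=80, periods=3))  # ALARM
```

The real service adds notification actions (for example via SNS) on top of this kind of threshold check.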

So stay tuned for that as well. So this is about AWS and its basics, as in the points which we just discussed: what it is, its uses, its advantages, its domains, its global architecture. So guys, what I've done is I've gone ahead and I've switched into my AWS account. The first thing you need to understand is that AWS offers you a free tier. Now, while I was talking about these things, I just rushed through them, because I knew that I was going to give you a demo on them, and I wanted to discuss this in detail. Now, when you talk about AWS, if you are a beginner, this is where you start. What AWS does is it provides you with its free tier, which is accessible to you for twelve months, and quite a few services which we just discussed are available to you for free. And when I say free, there are certain limitations on it, as in: these many hours is what you can use it for, and this is the amount of memory or storage you can use in total, and all those things. Based on its capacity and everything, you have different instances which you can create, and all those things.

Now, what AWS does is it gives you these services for free, and as long as you stay in the limits that AWS has set, you won't be charged anything. And trust me, when it is for learning purposes, that is more than enough. So let's quickly go ahead and take a look at these services first, and then there are a few other points which I would like to discuss as well. But firstly, the free tier services. This is what it has to offer to you: 12 months of free usage, plus always-free products. When you talk about EC2, which is one of its most popular compute services: 750 hours, and that is per month.

Next, you have Amazon QuickSight, which gives you 1 GB of SPICE capacity. Now, I won't get into the details of these things, as in what SPICE capacity is and all those things. When you have time, I would suggest that you go ahead and explore them, as in what they do. Today we are going to focus more on the EC2 part. So for now, let's quickly take a look at these one by one. First, Amazon RDS, which again gives you 750 hours of your t2.micro instance; Amazon S3, which is a storage service, which again gives you 5 GB of standard storage; and AWS Lambda: 1 million free requests.
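The free-tier caps just listed (750 EC2 hours, 5 GB of S3 standard storage, 1 million Lambda requests per month) can be turned into a tiny checker. The limits below are the ones quoted in this session; the function itself is just an illustration, not an AWS tool.

```python
# Free-tier limits quoted in the session (per month). Illustrative only.
FREE_TIER = {
    "ec2_hours": 750,          # t2.micro hours
    "s3_gb": 5,                # standard storage
    "lambda_requests": 1_000_000,
}

def over_limit(usage):
    """Return the list of services whose usage exceeds the free-tier cap."""
    return [svc for svc, cap in FREE_TIER.items() if usage.get(svc, 0) > cap]

usage = {"ec2_hours": 800, "s3_gb": 2, "lambda_requests": 50_000}
print(over_limit(usage))  # ['ec2_hours']
```

Staying at or under every cap returns an empty list, which matches the point made next: inside the limits, you are not charged.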

So there are some videos here, actually, which would introduce you to these things, and that would help you get started with how to create an account and all those things. And this is the other important point which I would like to mention: when you do create an AWS account, the first thing you need to consider is that they would be asking you for your credit card details. So how does the sign-up process work? Firstly, you go there, you enter your email ID and your basic details, as in why you want to use it and all those things. Next, what it would do, just to verify your account, is ask you for your credit card details; even debit card details work.

I've actually tried those, so you can go ahead and give your credit card or debit card details. And when you do that, what it does is it subtracts a very small amount from your account. I did this in India, and I know that I was charged two rupees, which is fairly little, and that was again refunded back to me in two to three working days. The only reason they cut those two rupees was just for verification purposes: that my account is up and running and that I am a legitimate user.

Now, as long as you stay in the limits, you won't be charged anything. But if you do cross those limits, you'll be charged. You might be worried, as in: what if I do cross the limit, would I be charged? Yes, you would be, but the fact is you actually won't go beyond it. And even if you do, you'll be notified, saying that you are going over the limit or are about to reach it. Even when your free subscription ends, you are notified, asking: do you want to enter your billing details, and do you want to start billing? And if you say yes, only then would you be charged for the subsequent months. And that is a very stringent process; you don't have to worry about it.

That is, you won't be losing out on any money as long as you follow these rules. So if you do not have an account, my suggestion would be: go ahead, log into AWS, and create your free tier account, which is a very easy two-to-three-step process.

So guys, I would start this session by talking about what an instance is. We would understand what the AWS EC2 service is, which is core for understanding instances in AWS. Then we'll talk about different types of EC2 instances, we'll understand how instance pricing models work, and we'll take a look at a use case, which would be followed by a demo that walks you through all the stuff that we have talked about. So it is fairly good content and a lot of stuff to study today. So let us just quickly move further and take a look at these things one by one. First and foremost, guys, we would be talking about an instance. So when you talk about an instance, we have this definition here. Let's try and understand what this definition has to say first, and then probably I would throw in some light on that.

So as far as this definition goes, it says an instance is nothing but a virtual server for running applications on Amazon EC2. It can also be understood as a tiny part of a larger computer, a tiny part which has its own hardware, network connection, operating system, et cetera, but which is actually virtual in nature. So there are a lot of words here and a lot of stuff has been said; let me try and simplify this particular definition for you people.

So guys, when I say a virtual server running your application (not "on your application"; a virtual server that basically hosts your application is what I should say), what do I mean by this? What do I mean by a virtual instance, a virtual presence of a particular device? Well guys, when you talk about software development and application development, what you do is you build applications and run those on servers, right? But at times there are a lot of constraints, like the space that you use and the resources that you want to use. Say, for example, certain applications run on Windows, certain run on macOS, and certain run on your Ubuntu OS, right? So in that case, I cannot always go ahead and have different systems with different operating systems on them and then run my applications on top of those, because it is time-consuming and also consumes a lot of money that you invest into it.

So what is the solution for that? What if I could have a single device on top of which I could create virtual compartments, in which I could store my data separately and run my applications separately? Wouldn't that be nice? Well, when you talk about an instance, that is exactly what it does. You can think of it as a tiny part of a computer; that is what it is trying to symbolize. I mean, you have a system on top of which you can run different applications, and how it works is: if you are running an application A in part A and an application B in part B of your server, these applications have the feeling that they are running individually on that system and that there is nothing else running on top of it.

So this is what virtualization is. It creates a virtual environment for your application to run in, and one such slice of this virtual environment is called an instance. So when you talk about virtualization, it is not something that is very complicated. As you can see in the first image, you can see a man surrounded by various virtual images, something that you see in an Iron Man movie. But when you talk about virtualization, it is very simple: it can be a simple computer which is shared by different people, and those people are working quite independently on that server. That is what virtualization is; that is what an instance is. In the second image, each of these individuals would be using a different instance. So this is what an instance is when you talk about virtualization. So guys, let us move further and take a look at some other pointers. Now, we understood what an instance is and what virtualization is, to some extent at least, guys. As far as this session goes, I believe this information is enough. If you wish to know more about virtualization, you can visit our YouTube channel and take a look at the VMware tutorial.

It talks about this particular topic in more detail. So let us move further and try to understand EC2 now. Now, EC2 is an Amazon Web Services compute service; it stands for Elastic Compute Cloud. Now, what do I mean by this? When you say Elastic Compute Cloud, that means basically it is a service which lets you actually go ahead and carry out computation tasks, and when I say elastic, it means that it is fairly resizable and fairly reusable. Once we get into the demo part, probably you'd get a better picture of what I mean by elasticity. It is highly flexible, highly scalable, it is very cost-efficient, and it serves a lot of purposes. Now, these are some of the features that I just mentioned, right? Let me throw in some more light on these pointers as well. What do I mean by scalable? Now, when you talk about a cloud platform, one of its best features is that it gives you a high amount of scalability. That means your applications can scale up and down depending upon the load that you put on them.

So if the traffic increases, you need more performance, and your application should be able to scale to those needs, right? That is what cloud computing provides you with, and that is what EC2 also provides you with. When I say an instance, basically what you're doing is you're launching a virtual machine; it is called an instance in AWS terms. So this virtual machine should be scalable. That means it should scale up and scale down, both in terms of memory and storage, and even in terms of the computation that it is providing. So when you talk about EC2, it is highly scalable.

Once we get into the demo part, you would see this. Now, it being scalable and it being cost-efficient makes it highly flexible. So that is the third point. Let us try and understand the second point as well: what makes EC2 cost-efficient? When you talk about cost optimization, what EC2 does is it lets you scale up and down, as I just mentioned, right? So instead of buying a number of instances, or instead of buying a number of services, you can actually go ahead and scale a single instance up and down with minimal cost changes. So you're saving money. Apart from that, there are burstable instances, and there are various pricing models that EC2 boasts of, using which you can actually save a lot of money. As we move further, we'd be talking about those models as well. So meanwhile, just bear with me. So EC2, well, it is a computation service, and it takes care of the following pointers.

I mean, it is easily resizable, it is cost-efficient, it is highly scalable, and all these features make it highly flexible as well. So guys, let us move further and take a look at some other pointers as well. So, what are the types of instances? Now, when you talk about EC2, it is one of the oldest AWS services. So if you talk about the types of instances that are there in the market, well, there are quite a few types of instances that you can deal with, and these are some of the popular ones. Once I move into the demo part, I will maybe talk about other instances, but to keep it simple: basically, these instances belong to different families. I mean, you have the T series, you have the M series, the C series.

Well, basically these series consist of different kinds of instances that serve different purposes. To simplify this, what AWS has done is it has gone ahead and categorized these instances into the following types. The first one is your general purpose instance. Now, it is basically suited for applications that require a balance of performance and cost; that means places where you require quick responses, but it still has to be cost-effective. Take the example shown here: email response systems. Now, you require a quick response, and there will be n number of responses, or n number of emails, that would pop in, but you do not want to pay a lot of money for this kind of service. So in this case you need cost optimization as well, and you need quick response as well. So this is where your general purpose instances come into the picture. Next on this list.

You have your compute instances. Now, what are compute instances? These are for applications that require a lot of processing. When you say computation, these have better computation power; that means if there is a lot of data that needs quicker computation, you can use these kinds of instances. What is an example? Analyzing streaming data. Now, if you don't know what streaming data is: it is data that continuously flows in and flows out.

That means you are streaming the data. Say, for example, this session: it is being streamed, right? I mean, the information, or whatever is happening here, is going live. So in order to process this kind of data, you need systems that give you good computation power, which are very active and very responsive in nature. So when you talk about compute instances, they provide you with these kinds of capabilities, and that is why, if you are dealing with streaming data, if you wish to analyze this kind of data, definitely go for compute instances.

So next on this list, we have memory instances. Now, what are these instances for? These are the instances required for applications that need more memory, or in better terms, more RAM, random access memory. So these are for applications that require good computation power again, like the previous ones, but when you talk about RAM, it is something that resides in your local system, right? So you need instances which have good memory capacity. And what kind of applications do they serve? Well, you can think of applications that need multitasking and multiprocessing. Say, for example, I need a single system that fetches data for me, processes it for me, dashboards it for me, and then gives it to the end customer as well. These kinds of applications require memory instances. Moving further, guys, we have the storage instances. As the name suggests, these instances are for applications that require you to store huge amounts of data. Say, for example, you have large-size applications, like your big data applications, where the amount of data used is huge. So you would be requiring more storage and more storage flexibility; in that case,

you can opt for instances that are specifically optimized for storage kinds of requirements. And then you have your GPU instances. If you know what a GPU is, you would understand what these serve: if you are interested in graphical kinds of work, where you basically have heavy graphics rendering, in that case you can opt for GPU instances, which basically help you serve purposes like 3D modeling and stuff like that. So guys, this was about the different kinds of instances.

Now, let us try and understand the different instance pricing models that are out there. So guys, when you talk about pricing, EC2, or AWS in general, ensures that you can save a lot of money. But normally, people are under the assumption that if we just go ahead and move to the cloud, we would automatically save a lot of money. Yes, the cloud does support applications in such a way that you would spend a very small amount, but it involves a lot of planning, guys.

So each time you use a particular service, it is very important to understand how that particular service works, and if you actually plan the services in that manner, you would actually end up saving a lot of money. So let us try and understand how the pricing models work when you talk about EC2 in particular. So guys, these are some of the pricing models that EC2 has to offer to you: you have your on-demand, dedicated, spot, and reserved instances. Now, let me try and simplify what these instances are and what I mean by them. When you say an on-demand instance, as the name suggests, it is an instance that you demand and you get. These instances are made available to you for a limited time frame. Say, for example, I need a particular instance on an hourly basis.

So I would be wanting to use that instance for only that duration. To use it for that particular duration, what I do is I actually go ahead and demand this instance. So AWS would give me that instance, but it would work for an hour only, and my price for that instance would be fixed on that basis. I mean, given that I would be using it for one hour, basically, I would be charged only for that one hour, and once that hour is complete, that instance gets terminated on its own. It's similar to renting a flat for one month. Suppose I move to a new city and I'm looking for something temporary; say, for example, I'm looking for a hostel or a paying guest kind of living arrangement.

Right? So in that case, what I would do is I would go up front and tell the owner that I would be staying here for a month; you can charge me for a month only, and even if it is a thousand more than the normal charge, that is fine. But once the month is over, I would like to leave. Right? So that kind of service, or that kind of instance demand, is what on-demand instances are, basically. Dedicated now, guys: these instances are given to a particular organization so that their security is assured better than others'. Say, for example, I need to protect my data, I need my data to be privatized. Now, understand this: AWS, and the other cloud platforms, are highly secure. Your data is secure no matter whether it is on a dedicated instance or not. But what happens is, you normally share your space with someone else; the data remains private, but there are companies that deal with highly confidential data, and in that case they want that extra assurance, as in: okay, I am using a space which is not shared by anyone.

So in that case, you have dedicated instances, which basically serve your needs, like high security and basically isolation from the other tenants as well. So that is what dedicated instances do: they are costlier, but yes, they give you that isolation. Spot now, guys: when I say a spot instance, it is like bidding. Say, for example, I am buying a particular share, and I have a particular budget, right? So I might have a budget of $300. So what I do is I go ahead to buy the share and I set a cap, as in: okay, the max I can bid for this share is $300. So if the price goes above $300, I'm not taking that share, right? Similarly, if there is a particular instance, you can bid for that instance, as in: okay, this is the maximum price that I will pay for this instance.
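This bidding idea can be sketched in a few lines: you set a maximum price, and you hold the instance only while the market price stays within your bid. This is a simplification of how spot pricing behaves, not the real API; the prices are made up.

```python
def spot_instance_running(max_bid, spot_prices):
    """For each hourly spot price, True while the price stays within the bid;
    once the market price exceeds the bid, the instance is interrupted
    (simplified: it does not come back in this sketch)."""
    states = []
    running = True
    for price in spot_prices:
        if price > max_bid:
            running = False  # market price crossed the cap; instance terminated
        states.append(running)
    return states

# Bid $0.30 per hour; the instance is lost as soon as the price crosses the cap.
print(spot_instance_running(0.30, [0.10, 0.25, 0.35, 0.20]))
# [True, True, False, False]
```

This is exactly why, as discussed next, spot instances suit interruptible workloads rather than critical real-time ones.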

So if that instance is available at that price, it is given to you. But after a particular duration, the price of this instance can change, so it is available to you only for a limited period of time. So if you are dealing with data that is volatile and you want to work on the data in real time, you should not opt for this instance, because after a while the price of this instance might change, and the instance might be terminated, and you might not be able to use it for a longer while. But the thing is, it is available to you at a cheaper price, at the price bid that you put on it; that is why it is more affordable. But again, it is good only for workloads that can tolerate interruption. Finally, you have the reserved instance. It is like renting an apartment on a lease for a longer period, right? I mean, suppose I am getting a flat on an agreement basis, where I sign an agreement for a year.

That means I am reserving this flat for one complete year, right? So nobody else can come and say, okay, you have to vacate this flat. Right, so that is one benefit. And the other thing is you have a fixed rent. So if you're taking something for a longer duration, there is a chance that you might end up paying less money for it as well. Now, what happens here, from the instance perspective: suppose you know that you would be needing this much configuration for this duration. You can reserve that particular instance for that duration, and probably you end up saving a lot of money. Now, when you talk about AWS, it gives you flexibility, where you can actually go ahead and upscale and downscale your instances to your needs.

You can kind of terminate stuff and move to the next step. But if you are certain about certain things, as in, okay, I have to use this no matter what happens, for a longer duration, in that case you can opt for reserved kinds of instances, and those are more affordable for you. So guys, these were the different types of instances based on the pricing that is there. Now, we have talked about the general classification of instances, like the general purpose and GPU ones, which was based on their functioning, right? Then we learned about the pricing models as well.
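The renting-versus-lease analogy translates directly into arithmetic. The rates and upfront fee below are made-up illustrative numbers, not real AWS prices, but they show why a reserved commitment only pays off for long, steady usage.

```python
def cheaper_option(hours, on_demand_rate=0.10, reserved_rate=0.06, upfront=200.0):
    """Compare total cost of on-demand vs reserved for a given number of hours.
    Reserved has an upfront commitment plus a lower hourly rate, so it only
    wins when usage is long and steady. All numbers are illustrative."""
    on_demand = hours * on_demand_rate
    reserved = upfront + hours * reserved_rate
    return "reserved" if reserved < on_demand else "on-demand"

print(cheaper_option(100))    # short usage: the upfront fee is not recovered
print(cheaper_option(8760))   # a full year of steady usage: reserved wins
```

The same break-even reasoning is what the speaker means by "it involves a lot of planning": you commit only when you know the workload will run long enough.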

Now, there is one more type, or one more classification, that we need to understand. Let us take a look at those as well. So we are classifying instances based on their general functioning. Now, what do I mean by this? Well, these are the types; let us take a look at them one by one. First, when I say burstable instances: we've talked about general purpose instances, right? So there is a category of general purpose burstable instances, which start with a base utilization level available to you. That means if you want to utilize your CPU at a certain baseline level, burstable instances are good here.

Let me throw in some more light, as in what am I talking about exactly. Suppose I need a CPU utilization of 20%, and I know that, so I can go for burstable instances. What they do is they start with a baseline of 20%, but in case I'm dealing with load that is not constant, that might change with time (say, for example, if my website experiences more traffic, I might need more performance, right?), what burstable instances do in that case is burst above their baseline performance, up to 100% CPU utilization, so that you can get more performance. Now, what happens here is you are charged a particular amount for these instances, and you have certain credits with which you can use the burst performance; and if you do not use the burstable performance, those credits can be used later as well. So you are getting optimized performance as well, and you are saving some money as well, in case there is some sudden traffic that you experience. Then you have something called EBS-optimized. Now, when you talk about EBS-optimized, these are for applications where you are basically processing data at a higher speed.

Say, for example, there is some application where data is flowing in continuously, so I need a quick response, right? So what EBS-backed, or EBS-optimized, instances do is give you high input/output processing, and that is why these are good instances to opt for in those situations. Cluster networking: basically, these form clusters of instances. Now, a particular cluster serves one kind of purpose. Say, for example, in my application I have different sections: my first section requires data to be processed at a faster rate, while the other one I want to be storage-optimized; so I can define different clusters of instances that serve different purposes here.

And then I have the dedicated ones. We've already talked about dedicated instances; they are more related to the data security part. So guys, these were the different types of instances. I know I've talked about a lot of stuff; once we get into the demo part, probably this would ease up a little more for you people. I believe you people are with me and you are following this session. So guys, now let us move further and take a look at the use case, so that we can then move on and take a look at the demo part as well. For this use case,

I've considered Edureka itself. Let us try and understand what could be the possible problems that can be solved by using these instances. Now, imagine that Edureka used AWS as their cloud partner and they used the EC2 service. What kinds of problems could be solved by the instances that we just talked about? Suppose we have the first problem, where you have to analyze the data of the customers. So what kind of instance would you use? Can you guess that for me? I won't be looking at your answers; let me just quickly go ahead and give you the other examples as well, so that we can discuss these one by one.

Suppose you also have an auto-response email system. Now compare these two, and let me know which one you believe would be served better by which of the instances that we've just talked about. So when you talk about the performance here, guys: when you talk about analysis of the customers' data, it is never constant, right? At times the data is huge, at times it is less. So in this case I would need burstable performance; my general purpose burstable performance instances would serve me better, right? Auto-response email system: I need quick response, but I do not want to invest a lot of money; EBS-optimized instances with high IOPS would help me better. Search engine and browsing: I believe it is fairly clear I'm talking about browsing and search engines, two different things I want to do, so I would be opting for cluster networking instances, right? And confidential data: well, I would be opting for the dedicated instances here. So guys, this was a very simple use case. So let us move into the demo part and try to understand EC2 a little more, shall we? So guys, what I've done is I've gone ahead and I've signed into my AWS Management Console.
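The matching we just did boils down to a lookup from workload characteristics to instance category. A minimal sketch, with the pairings taken from the discussion above (the keys and labels are just descriptive strings, not AWS identifiers):

```python
# Workload-to-instance pairings as discussed in the use case.
WORKLOAD_TO_INSTANCE = {
    "variable customer analytics": "general purpose (burstable)",
    "auto-response email system": "EBS-optimized",
    "search engine and browsing": "cluster networking",
    "confidential data": "dedicated",
}

def recommend(workload):
    """Return the instance category for a known workload; default to
    general purpose when the workload is not in the table."""
    return WORKLOAD_TO_INSTANCE.get(workload, "general purpose")

print(recommend("confidential data"))  # dedicated
```

Defaulting to general purpose mirrors the earlier point that it is the balanced, safe starting choice.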

Please forgive me, guys; I have a bad cold today, and that is why my voice is a little shaky and echoing. I hope you people are not bothered by that. Moving further, guys, this is the AWS Management Console. You can sign in to an AWS free tier account and avail these services; you can practice a lot of stuff by signing into your free tier account. How do you do that? Just go ahead and look for the AWS free tier and sign up with your credit card or debit card. You won't be charged; you have these services for free for one complete year, and you can practice most of the services that are there. There is some free tier limit on these services, so check the upper cap, as in what those limits are, so that you don't get charged. So guys, this is how the console looks. We are going to go ahead and learn about EC2 here, that is, the instance service in AWS. So let's search for EC2.

And you would be redirected to this page, guys. Now, when you talk about EC2, there are a lot of things that you can do. You have the Amazon Marketplace, where you have AMIs; I will tell you what AMIs are, do not worry. You can just go ahead and launch your instances, you can attach volumes to them, you can detach volume storage from these instances. And when I say AMIs, those are Amazon Machine Images; that means once you create an instance, you can create an image of that instance as well.

That means a template of that instance. Suppose you have certain applications running on top of that instance, certain specific settings that you've done for that instance, and you do not want to do those settings again and again: you can create images of those instances as well. So let us see what all we can do with these instances. Let us first launch an instance. So guys, once you click on that Launch Instance button, you are given a number of options to choose from: you can launch Linux instances, Ubuntu instances, Windows instances, and you can choose EBS-backed or non-EBS-backed. So there are a lot of choices when you actually go ahead and launch these instances. You can see this: Ubuntu, Red Hat, Microsoft Windows, and there are specific instances specialized for deep learning, with particular server specifications. You can see that there are quite a few instances, but ensure that, if you are practicing, you choose the free tier eligible ones. For now, I'm going to go ahead and launch a simple Windows instance. Let's not get into the Ubuntu one, because it requires some additional sign-in steps.

So let us not do that. So guys, once you click on Launch Instance, you can see that you are redirected to this page. Now, if you take a look at the information here, it tells you a lot. This instance is general purpose; we've discussed the other families, right? This is one of them. This one is t2.micro; there are t2 and t3 micro, medium, and bigger instances as well. The sizes vary, guys; the t2.micro one is free tier eligible. You have t2.nano, you have small, and you have medium and other larger instances as well. So when you look at the micro one, it has 1 vCPU and one gigabyte of memory. Instance storage: it is EBS-backed. And what kind of network performance does it give you? Low to moderate.

So I would say configure further. These are some configuration details: what network it is following, what subnet ID it is under. That means it falls under the cloud network, guys; your cloud would have a network, and under that network lies our instance, so that it's accessible. Access policies and security policies can be managed; let them be basic for now. Let us move further. Storage now, guys: this is your root storage, with 30 GB of space. You can change it if you want, say to a hundred, but let us keep it at 30 for now. And guys, you can see these are the types.

You have your general purpose, you have your provisioned IOPS, and magnetic. Now, there is one more type of volume, guys, that is the HDD kind, but when you talk about root storage, you cannot attach an HDD to it, right? Because root storage is something that is accessed constantly. If you wish to have HDD kind of storage, it has to be attached as secondary. So if I add a new volume here, you can see, and if I search the types now, it gives me an option of Cold HDD, right? So that is what I mean, guys: in order to have this HDD kind of volume, you need to use secondary storage for it.

So let us cancel this for now and just go ahead and say next. You can add in tags, guys, for the simplicity of naming; say, for example, "sample today", and let's just say next. Security group, guys. Security group: what do I mean by this? Well, basically you have a set of policies, as in who gets to access what: what kind of traffic do you want to reach your instance, and what kind of traffic do you want to flow out of your instance. So you can create a security group, and you can use a customized one as well. When you create one, this type is RDP; that means it can allow traffic from a desktop, or a remote desktop app, through which I can log in to my system. I can add other rules as well; I can add TCP, HTTP kinds of rules, and these are the port ranges, you can specify those. For now, I'm allowing traffic from everywhere through RDP, and I can say review and launch. "Improve your security", it says, but this is a basic one, guys; you can add in more rules, as I've already mentioned. So let's not do that.
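A security group is essentially a set of allow rules over protocol, port, and source. The check below models that idea in plain Python; the rule shape and addresses are assumptions for illustration, not the actual AWS data model.

```python
# Each rule: (protocol, port, allowed source). "0.0.0.0/0" means anywhere,
# matching the wide-open RDP rule used in the demo. Illustrative only.
RULES = [
    ("RDP", 3389, "0.0.0.0/0"),   # remote desktop from anywhere
    ("HTTP", 80, "0.0.0.0/0"),
]

def is_allowed(protocol, port, source):
    """Inbound traffic is allowed only if some rule matches the protocol and
    port, and the rule is open to anywhere or to that exact source."""
    return any(p == protocol and pt == port and src in ("0.0.0.0/0", source)
               for p, pt, src in RULES)

print(is_allowed("RDP", 3389, "203.0.113.7"))   # True
print(is_allowed("SSH", 22, "203.0.113.7"))     # False: no matching rule
```

Note how anything without a matching rule is denied by default; tightening the source from "anywhere" to specific addresses is exactly the "add in more rules" improvement the console suggests.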

Let's say launch. Generate a key pair now. A key pair is something that lets you log into your instance. It is a double security for your instance; you do not want your instance to be left insecure, right? So in that case, you need to generate a key pair. You can use an existing one, or you can create a new one as well. So let's just say that I want to create a new key pair. So I say create, and let us say Vishal 3 4 1 2 1, and let's just say download. So guys, once you download this key, what I do is cut it from here, and I'm going to go ahead and paste this key to the desktop, guys. And let's just say paste. Here it is.
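For reference, the same key-pair creation can be done programmatically. This is a hedged sketch using the real boto3 call name `create_key_pair`; the key name and file path are placeholders, and the helper takes the client as a parameter so it can be tried without a live AWS account.

```python
import os

def save_key_pair(ec2_client, key_name, path):
    """Create an EC2 key pair and write the private key to disk.

    AWS returns the private key material exactly once, at creation
    time, so losing the .pem file means losing access to the instance.
    """
    resp = ec2_client.create_key_pair(KeyName=key_name)
    with open(path, "w") as f:
        f.write(resp["KeyMaterial"])
    os.chmod(path, 0o400)  # owner read-only, as SSH/RDP tooling expects
    return path
```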

So the reason I'm doing this is because basically we would be needing this thing: if you lose this key, there is no other way to access your instance, so make sure you keep it safe. And I say launch. So guys, now this process takes a minute or two to go ahead and launch our instance, so meanwhile you'd have to bear with me. So what happens is, once you do actually go ahead and launch this instance, it involves a couple of steps: basically it does some security checks, some status checks, and while these status checks happen, it takes a minute or two. And once the instance is up and ready, we can actually go ahead and take a look at it. So meanwhile, guys, what I'm going to do is go ahead and switch to the EC2 part. Now, there are three instances that are running, guys.

Now, this is somebody else's account, so there are quite a few other instances that are running. You can see that there must be some instance here which basically is initializing. So this is the one that we are going to use. This is the ID; let's not remember that, we know that this is the one getting initialized. So these are the other instances, and this one is stopped.

Let us take a look at this instance as well, to understand what happens. So guys, these are the options that I have, right? You can actually go ahead and get the password; you can create a template for your instance. What you can also do is you can start or stop.

Now, this instance is already stopped, so you do not have the options like stop, hibernate and reboot. You can start this instance, and probably you can go ahead and do that. Now, when you stop an instance: if you want to actually take snapshots, or you want to create Amazon Machine Images (AMIs) out of it, what you do is you stop that instance, so that you prevent any activity from happening in that instance, so that you can take an exact snapshot of it. So that is why you stop an instance when you wish to do these kinds of operations. Once you start it again, you can make it function normally, as it was functioning. If you are done using an instance, you can terminate it then and there, guys. So these are the options under instance settings. Okay, so these are the options: you can add tags to it.
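A rough way to summarize which console actions apply in each state, as described above (this mapping is a simplification for illustration, not an exhaustive list of EC2 states or actions):

```python
# Which actions make sense in which instance state: a stopped instance
# cannot be stopped or rebooted again, but it can be started, resized,
# imaged, or terminated.
ACTIONS_BY_STATE = {
    "running": {"stop", "reboot", "hibernate", "terminate", "create-image"},
    "stopped": {"start", "terminate", "create-image", "change-instance-type"},
}

def allowed(state, action):
    """Return True if the given console action applies in this state."""
    return action in ACTIONS_BY_STATE.get(state, set())
```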

You can attach or replace IAM roles, that is, access management policies, guys. So you have user access management; here you can attach roles to it as well. You can change the instance type, guys; you can click on it and go ahead and do that. You can change it to higher versions as well. Now, why do you need to do this? Suppose I am experiencing a particular traffic, and my instance supports that need, but if I move further in the future, I need to cater to more traffic. What do I do in that case? In that case, guys, I can actually go ahead and update it to a larger version, unlike your on-premise infrastructure, where you have to actually go ahead and get new servers and move your data on top of them. Here, what you do is you just click on this thing, and it happens in a couple of seconds; your instance gets optimized, or upscaled, to a better level. And that is why it is highly scalable. What you can also do is change the termination protection; this is for data security. Suppose I am using a particular instance, and I accidentally delete it: my data would be lost.
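Scripted, the resize-and-protect flow described above might look like this sketch. The call names (`stop_instances`, `modify_instance_attribute`) are real boto3 APIs; the instance ID and type are placeholders, and a real script would wait for the instance to actually reach the stopped state before modifying it.

```python
def resize_and_protect(ec2_client, instance_id, new_type):
    """Stop an instance, move it to a larger type, and turn on
    termination protection so it cannot be deleted accidentally."""
    ec2_client.stop_instances(InstanceIds=[instance_id])
    # Instance type can only be changed while the instance is stopped.
    ec2_client.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_type})
    # Termination protection: the instance cannot be terminated until
    # this attribute is switched off again.
    ec2_client.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={"Value": True})
```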

Right? So what this does is it turns my termination protection on. That means if I have to delete this instance, I have to get into the instance, I have to change the policy, and then delete it. I mean, I cannot delete it unknowingly, right? So that is why this setting helps. Now, while we were talking about these things, guys, our instance is up and ready. Let us just launch it. I say connect, and it says download remote desktop file: the RDP part that I talked about, right? And I need to get my password as well, guys, to log in. How do I do that? I click here. I choose the file; for that, I'm going to go to the desktop, I'm going to scroll down, and there is a file called Vishal. I open it and I decrypt it, and there you go, guys. My password is here. I can just copy it.

So if this is copied, I can launch this remote desktop file. It would ask me for the password; I would say take this, and okay. Do you want to log in securely? Yes. And guys, a Windows instance would be launched. It is just like your Windows operating system, but it is running on top of my existing system, guys. You can see personalized settings; it is setting up personalized settings for me, and in half a minute, maybe in ten seconds, my Windows app would be up and running. So just like my Windows device, I have one more Windows device, so I can do something in this device and something else in my normal Windows device as well, guys. So this is what your instance does: it basically creates an instance of a virtual machine for you to work on. I believe by now everyone has understood what a virtual machine is. So guys, we are done with this part. So let us just use it for now.

Let us see if there is anything else that we need to talk about now. If I come back here, I've mentioned that you can take snapshots, right? So these are AMIs. What an AMI is: it is an image, basically, so I can actually go ahead and create an AMI for an instance that I already have; I can create an image of it. There is a volume here. So my instances are EBS-backed, right? So there is a block storage attached to it.

Can I add another storage to it? Yes, I can remove the previous storage and attach a different storage to it. Say for example, this is the storage that I have with me: if I click on it and go into actions, I can create a snapshot out of it. Once I create a snapshot out of it, I can attach it to the existing instance. So we just launched an instance, right? So if I want to replace the volume that is already attached to it, what I do is actually go ahead and detach the volume that is already attached. So I would be stopping my instance first. Once I've stopped the instance, I can come to the volume; assume that this volume is attached to some instance, so I need to detach it from here. And the snapshot that I've already created, or if I have created one, I can select that, and I can attach that to the existing instance.

All I have to do is go ahead and create an image here. Once I create an image, it would ask me what I can do with it. It would ask me to actually go ahead and give the region in which the instance was created. Now, my instance that I just used was created in a particular region; I'm working in the Ohio region for now. What do I mean by these regions? Well, basically, AWS has different data centers in different regions of the world, so you can choose the region that is convenient to you, that suits your business needs, right? So I can create instances in those particular regions. So if my instance was in a particular region, I need to create a snapshot in that region and then attach that snapshot, or that volume, to my instance. So guys, I believe by now you've understood a lot of things: you've understood what instances are, how to launch those, how to create those, and how to make those work. So as far as this session goes, guys,

I wanted to talk about these pointers. One more important point that I would like to mention here is: make sure that you terminate your instances, so as to avoid any charges if there are any. Now, this being a free tier account, I don't think there would be a lot of charges, but still I would request you to actually go ahead and terminate the instances even if they don't charge you a lot, because that is a good practice, and because there are certain services that might charge you a lot more, guys. So I'm going to terminate my instances, the ones that I have created today. So let's just wait, and in a minute or two, guys, these instances would be terminated. Today's session is going to be all about AWS Lambda. So without further ado, let's move on to today's agenda to understand what all will be covered today. So we'll start off today's session by discussing the main services in the AWS compute domain. After that,

we're going to see why AWS Lambda exists as a separate service. We're going to discuss what AWS Lambda actually is, and then we'll move on to the part where we'll see how you can use AWS Lambda using the AWS SDKs. Once we're done with that, I'll teach you guys how you can integrate your SDK with the Eclipse IDE, and in the end we'll be doing a demo. So let me quickly show you guys how we will be using AWS Lambda in today's demonstration. So guys, this is a website that I created, which is hosted on the localhost. Now, what this website does is it uploads a file onto the S3 file system. Now, once the file is uploaded, it sends me a mail regarding that. Now, that mail is generated by AWS SES.

Now, let me quickly show you what that mail actually looks like. So let me upload a file over here. So let the file be this; I click on open, and before uploading the image, I will show you my inbox. So as of now, I don't have any emails, as you can see, right? So what I'll do is I'll click on upload image. Now it says S3 upload complete. Now, what this website does is it will upload my file, and it will rename the file according to the system time, so that there is no conflict in the name of the object, right? So whatever file that I've uploaded right now will be uploaded in this bucket.
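The system-time renaming described above might be implemented along these lines. This is only a sketch: the demo's actual Java code may format the name differently.

```python
import os
import time

def timestamped_key(filename, now=None):
    """Rename an upload after the current epoch time, keeping the
    file extension, so two uploads never collide on the object key."""
    now = int(time.time() if now is None else now)
    _, ext = os.path.splitext(filename)  # keep ".png", ".jpg", etc.
    return f"{now}{ext}"
```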

So if I refresh this, you can see that there's a file over here, right? So this file has now been renamed, right? And I also have an email over here, which says AWS test, right? So if I click on this email, I can see that I have got a mail from this address saying that an object has been uploaded: the name of the object is this, the size of the object is this, the bucket name is this, and it was last modified on 12/31 UTC, right? So let me quickly compare whether this file name is the same. So it's seven four eight, and it's the same here as well. Awesome. Now, the next cool thing that you can do over here is you can move this file to some other folder. So all you have to do is reply to this mail by saying move; you click on send. Now, when I send move to this email address that I have configured in my code, what it does is it will basically move this file from this bucket to some other bucket.

So let me quickly refresh it and see whether my file has been moved. So as you can see, my bucket is now empty. Now, let me go back. So basically, my file was there in the edureka demo bucket; now it will be there in the quarantine demo bucket. So as you can see, the seven four eight file has now been moved to the quarantine demo bucket, simply by writing a mail over here that says move. So we'll be creating this demo today. Let's move on to the first topic of today's discussion, that is, the AWS compute domain. So the main services under this domain are EC2, Elastic Beanstalk, and AWS Lambda.

Now, among these three, the most important service is EC2. So EC2 is basically just like a raw server; it is like a personal computer that you're working on remotely, right? So you can install any kind of operating system of your choice which is supported by the AWS infrastructure, and then you can use it in any manner you want. You can configure it to become a web server; you can configure it to become a worker for your environment, anything. The next service is Elastic Beanstalk, which is an automated version of EC2. So with Elastic Beanstalk, you don't get access to the operating system, but you still have control over the configuration of your system, so you can choose what kind of instance you want to launch, right? So Elastic Beanstalk is used to deploy an application.

So basically, you just upload your code, and your application is deployed on the AWS infrastructure, right? So this is what Elastic Beanstalk is all about. Then we have the AWS Lambda service. So the Lambda service is again an automated version of EC2 wherein you don't get access to the operating system. With AWS Lambda, you don't even have the choice to choose what kind of configuration you want with your server, right? So with AWS Lambda, you just have to upload your code and it executes. It's that simple. But then why do we have an AWS Lambda service when we have Elastic Beanstalk? So let's understand that. So AWS Lambda, like I told you guys, is an automated version of EC2, just like Elastic Beanstalk, but with AWS Lambda you can only execute background tasks, right? You cannot deploy an application. So AWS Lambda is not used to deploy an application; it is used to execute background tasks. Other than that, like I told you guys, with AWS Lambda you don't have to choose the configuration; you don't have to choose what kind of servers you want depending on your workload.
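To make "just upload your code" concrete, a minimal Python Lambda handler looks like this. The `(event, context)` signature is the standard Lambda contract; the body here is purely illustrative, and the event contents are whatever the trigger passes in.

```python
def lambda_handler(event, context):
    """Minimal Lambda handler: read a field from the incoming event
    and return an HTTP-style response. No servers to configure."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```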

The server configuration is assigned to you, right? So this is why we use AWS Lambda. But then let's go on to the definition part and see what AWS Lambda actually is. So according to its definition, it's a serverless compute service, because you're not choosing the servers, right? You're not choosing what kind of configuration you want in your server. It's a serverless compute service: you just upload your code.

And the code is executed. It's that simple, right? And also, like it's mentioned in the definition, and as I told you guys again and again, it is used to execute background tasks; it is not used to deploy an application, guys. This is the main difference between Elastic Beanstalk and AWS Lambda. So as an architect, you should know what the use case is and which service will suit it better. So moving on: now you've understood what AWS Lambda actually is and why we use it, right? So let's move ahead to see how you can use this service. So you can use the service using the software development kits which are provided by AWS. So before moving ahead and understanding how you can use these SDKs, let's understand what these SDKs are all about. So the software development kits are basically APIs which are used by developers to connect to the desired service on AWS. So it makes the life of the developer easy, because he can now concentrate on the logical part of his application rather than wasting time on understanding how to connect his code to the service which is there on AWS, right? The other thing is that these SDKs are used with IDEs.

Right. So currently we have only two IDEs which are supported, that is, Eclipse and Visual Studio. So today, in this session, I'm going to teach you guys how you can connect your SDKs with the Eclipse IDE. So let's do that. So before that: we are going to code our AWS Lambda function in Java, right? And that is the reason we're using Eclipse. Now, first of all, you have to install Eclipse on your system. Once you do that, this is the Eclipse screen, guys; this is how your Eclipse dashboard will look.

So for installing the AWS SDK on your Eclipse, you have to click on Help, and then you'll go to install new software. Once you have reached here, you will enter the website name, that is, /eclipse. Once you have entered that, just hit enter, and it will list for you guys all the SDKs which are available, all the tools which are available. Select all the tools and click on finish, and then it will take some time to download the SDK, but then it will integrate everything into your Eclipse, and then you'll have a button like this over here.

Right? So with this button, you can actually deploy a new server which is configured according to AWS. So guys, this is how you install SDKs with the IDE. All right guys, so it's time for the demo now; enough of theory. So what we'll be doing is, our aim is to create an application which will be uploading our files onto the S3 file system. And what the Lambda function here will be doing is, like I told you guys, a Lambda function basically executes your background tasks, right? So we don't want to burden the server on which the website is hosted with this task.

We want some other server to execute this task. What is this task? We basically want to get an email with all the details of the file which has just been uploaded on the S3 file system. So that email will be sent by the Lambda server. Now, once we get that email, if we reply to that email saying that the file has to be moved, Lambda will pick up that email, it will read that email, and it will perform the necessary operation. So if we specify move, what it will basically do is pick that file, move it to some other bucket, and store it over there. So this is the project that we will be doing right now. Sounds simple, right? But let me show you the architecture.

Let me explain what the architecture tells you. So basically, this is our website. So what our website will be doing is, it will be uploading a file onto the S3 file system. At the same time, it will also be making an entry into SQS, which is nothing but a simple queue service, which queues your data, right? So as soon as your file is uploaded to S3, the S3 bucket is configured in a way to invoke the Lambda function. Now, as soon as the Lambda function is invoked: now, Lambda functions are stateless; they don't know anything about what file you have uploaded or what you have done. You have to feed them information, and that is the reason we have added an entry in SQS for the file which has recently been uploaded, right? So what AWS Lambda will do is read this queue, get the file name, and actually retrieve all the properties of that file from S3.

Now, once it has retrieved all the properties of that file, it'll actually mail me the details of that file using the SES service in AWS. Now, once I receive the details of that file, I have an option to reply to that email, right? Now, how will I reply to that email? Like this: I will open the email client on my computer, and I will reply to that email. That email will actually go to that address, which is actually pointed at my DNS server, and that DNS server will redirect that email to SES. Now, SES, on receiving that email, has been configured to invoke the Lambda function, so that Lambda function will be invoked again.

The file will be read from SQS, that file will be moved to a new bucket, and in the end that message will be deleted from SQS. Now, my SQS has been configured like this: in case I don't reply to that email within two or three minutes, that message will automatically be deleted from the queue. And in that case, if you try to move that file, you will not be allowed to do so, because that file is no longer available in the queue; hence, you cannot move it, right? So this is what our project is going to be all about. Now, I have already shown you how the project works, so let me quickly delete the project and again show you how it can be configured from scratch, right? So give me a moment. All right, so everything is set. Now, the first thing that I'll be doing is I'll be configuring my S3 to interact with my Lambda function, right? So what I have not done is, I've not deleted the Lambda function, because there's no point.

You just have to click next, and your function will be created. What matters is the code. So I have uploaded the code in your LMS with the proper documentation; if you have any doubts, you can actually email me regarding the doubt and I'll clear it for you. So as an architect, your job will be to create this architecture, not the coding; the coding part has to be done by the AWS developer, but it is good-to-know knowledge, right? So that is the reason I have uploaded the code for the website and AWS Lambda to your LMS.

Okay. So like I said, I have to configure my S3 so that it can interact with AWS Lambda. Now, my website's code is written such that it will upload the file to a bucket in S3 called edureka demo, right? So what we'll be doing is, we will be going to the edureka demo bucket, which is here. I click on the edureka demo bucket, I click on properties, I'll click on events, and let me delete this event right now. Right? So I will be adding a notification now. Now, let me call this notification AWS-Lambda, right? Now, what I want it to do is: whenever the event is a put event, that is, an upload event,

I want it to send a notification to my Lambda function. So I have to select the Lambda function. So my function should be this one, and I will click on save. Let me check if everything has been filled. Yes, it has; let's click on save. All right, so I have one active notification now. Now, you might get an error over here saying that you don't have sufficient permissions. So if you get that error, on the right hand side you'll have a button called add permissions.
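Scripted, the event notification just created (put event triggers the Lambda function) looks roughly like this sketch. The function ARN is a placeholder, and in practice you would pass this dict to `put_bucket_notification_configuration`.

```python
def notification_config(lambda_arn):
    """Build an S3 bucket notification configuration that invokes the
    given Lambda function on every put (i.e. upload) event."""
    return {
        "LambdaFunctionConfigurations": [{
            "Id": "AWS-Lambda",  # same name the demo gives the notification
            "LambdaFunctionArn": lambda_arn,
            "Events": ["s3:ObjectCreated:Put"],
        }]
    }
```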

Just click on that button, and everything will be set up automatically. Basically, those permissions are for your Lambda function: your Lambda function may not have permission to get notifications from S3, but once you click on that button, you will get the proper permissions automatically. Right? So this is how you will configure your S3 bucket. Now, let's go back to our slide to see what other things we have to do. So we have configured our S3
to our slide to see what other things we have to do. So we have configured RS3
to invoke a Lambda function once a file is Loaded to S3. Now. A Lambda has already
been configured to interact with ses through the code, which is so through the code should be calling
the SES service and we'll be living
in a meal now the next function or the next thing is
to configure SES or before that lets configure
our sqs, right? So our sqs is basically
a simple queue service. So we have to create a queue
in a COS in which our website.

will be adding entries for files, right? So let's do that. Let's just go back to our dashboard. So this is our dashboard, guys, and we'll go to the SQS service. We'll click on create new queue, a FIFO queue, and that queue has to be named hello-Lambda. And since it's a FIFO queue, you have to give the extension as .fifo.

All is done; let's click on quick-create queue. Okay, so my queue has now been created, and now I have to configure this queue so that whenever a message comes in, it gets automatically deleted after 2 minutes. All right, so let us configure it: we'll click on configure queue, and we set this to two minutes. All right, all is done; let's click on save changes. All right, so my queue has also been configured. Let me go back to my slide. All right, so my SQS has been configured now, so let me configure my SES now. Now, this might be a little tricky, so hang on with me. We'll go back to the dashboard.
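For reference, the queue we just configured could be created programmatically with parameters like these. This is a sketch: the `create_queue` argument names are the real SQS ones, and the two-minute retention mirrors what was set in the console.

```python
def fifo_queue_params(name, retention_seconds=120):
    """Build create_queue arguments for a FIFO queue whose messages
    are kept for two minutes. FIFO queue names must end in .fifo."""
    if not name.endswith(".fifo"):
        name += ".fifo"
    return {
        "QueueName": name,
        "Attributes": {
            "FifoQueue": "true",
            # SQS attribute values are strings; 120 s = 2 minutes.
            "MessageRetentionPeriod": str(retention_seconds),
        },
    }
```

You would then call `sqs.create_queue(**fifo_queue_params("hello-Lambda"))`.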

We'll go to the SES service. Now, first of all, in the SES service, you actually have to add the email addresses. Now, how will you add email addresses? You will actually have to verify a new email address. Now, you have to verify the recipient as well. So since I want to receive the email from the SES service, I'll have to type in my email address, and we have to verify this email address.

Now, I'll receive a verification email on my email address. So let me quickly go back and click on inbox. Now, I have got a verification request, right? So I'll click on this verification link. Okay, so my email address has now been verified. So it says congratulations; awesome. So let me go back to my SES; it says pending verification. Let me quickly refresh it. All right, so it says verified now. Now, let's go back to our slide. All right, so guys, we have configured the recipient of SES, but what about the sender, right? So we have to configure the sender as well. And why do we have to configure the sender? The sender has to be from a domain name that you own, right? You have to own that domain name so that you can send emails via that domain name. Now, what I mean by that is, you may say, okay, why not use the recipient address for sending the email? But our application also receives emails, if you would have noticed, right? So for receiving the emails through SES, you have to actually own the domain name. Now, since I'm an employee, I don't own edureka.co, right? So what I've done is I have actually created a domain name; I can get a free domain name from this website.

This website, it is my.dot.tk. You can go to this website and create a domain for yourself for free. So basically, you will be getting this domain name free for three months. All right, I am almost at the expiry date, so I might have to renew it. Okay, but since this is a demo, let me quickly show you. All right, so I have actually created this domain name, and I can use this domain

name to send or receive emails. Now, what I'll have to do, or how I configure this in my SES, is like this. So you will go to your SES; you see this tab, it says email receiving, right? So we will click on rule sets, and you'll have to create a new rule set. Before that, you have to actually verify a domain; you basically have to verify that the domain is actually owned by you. Now, how will you do that? We'll click on verify a new domain, and you will give your domain name here, which is edureka.tk. Click on verify this domain, and you will get these two records over here.

Now, where will you enter these two records? Actually, in the DNS server. So the domain name edureka.tk has to point to a DNS server, right? And in that DNS server, you will be putting in these two records. Now, how will you point edureka.tk to a DNS server? So the DNS server is basically Route 53, so we'll be configuring Route 53 with edureka.tk. Let me show you quickly how you can do that. Let me open my Route 53 service. So this is my Route 53 service; I don't have any hosted zones as of now. So let's click on get started now, and click on create hosted zone. So my domain name is edureka.tk, right? Click on create. All right, so I have created a hosted zone now in my Route 53. Now, what I have to do is connect this domain to my Route 53. Now, how will you do that? You will click on manage domain, and you will click on management tools, and you'll click on name servers, right? So these name servers have to be updated with the name servers provided to you over here, right? So let me quickly show you: you will copy this and paste it here.

Remember guys, don't include the dot at the end; otherwise, it will give you an error. So without the dot, copy the name server. Right, so I'll first save two and see if it's working: click on change name servers. All right, it says changes saved successfully. All right, so it's saving the servers now. So let me copy the remaining two as well. All right, so I've copied my name servers, I click on change name servers, and fingers crossed. Okay, so it says changes saved successfully. All right, so my domain name is now pointing to Route 53; awesome. So now, in Route 53, I have to include those two records. Now, how will I do that? Let me quickly show you. So you go to Route 53, and you will click on create record set. Now, you don't have to type anything here; just in the type, click on MX, and then the value.

So as you can see, there's a value for MX over here; just copy this value and paste it here, right? This is it, guys; nothing else has to be done here. Click on create. Awesome, so I have an MX record now. Now we have to create one more record set, and its name has to be like this, right? So I'll copy this part and paste it here. As you can see now, the name is _amazonses.edureka.tk, and as you can see, the name over here is the same, right? So this name has to be the same, and the type is TXT. Select TXT from here, and then you have to enter the value. So the value is this; enter this value over here and click on create. Awesome. So my Route 53 is now configured to receive the emails for the edureka.tk domain; cool. So we'll go back to our SES now. Close it; it says pending verification; refresh it. All right, so as you can see, my domain name is now verified.
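The two DNS records being added here can also be expressed as a Route 53 change batch, roughly as below. The MX endpoint shown is the usual us-east-1 SES inbound address, which is an assumption; the SES console tells you the exact values to use for your region.

```python
def ses_dns_change_batch(domain, txt_token,
                         mx_value="10 inbound-smtp.us-east-1.amazonaws.com"):
    """Build a Route 53 ChangeBatch with the two records SES asks for:
    an MX record so the domain can receive mail, and the _amazonses
    TXT record proving domain ownership."""
    return {"Changes": [
        {"Action": "CREATE", "ResourceRecordSet": {
            "Name": domain, "Type": "MX", "TTL": 300,
            "ResourceRecords": [{"Value": mx_value}]}},
        {"Action": "CREATE", "ResourceRecordSet": {
            "Name": f"_amazonses.{domain}", "Type": "TXT", "TTL": 300,
            # TXT record values must be wrapped in double quotes.
            "ResourceRecords": [{"Value": f'"{txt_token}"'}]}},
    ]}
```

In practice you would pass this as the `ChangeBatch` argument of `route53.change_resource_record_sets`.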

All right, so let's just go to the rule sets now. So we have to configure email receiving, so I click on view active rule set. There are no rule sets, so we'll create a rule. Now, I have to specify a recipient, so let me specify hello at edureka.tk, right? I'll click on add recipient. So my verification status is verified, because my domain name is verified now. We'll click on next step. Now, what action do you want it to take, right? So if you receive an email on this email ID, what do you want to do? So what we want to do is invoke a Lambda function.

Now, what Lambda function do you want to invoke? I want to invoke my function two, and we'll click on next step. So everything seems fine; we'll click on next step again. So it is asking me for the rule name; let me give the rule name as Lambda-demo. Click on next step and click on create rule. Okay, so my rule set has now been enabled; awesome. So I have configured my SES as well. So let me go back to my slide. All right, so I've configured my SES, I have configured my Route 53, I've configured my AWS Lambda, I have configured my SQS, I have configured my S3, and my website is also configured. Right, so we re-created our SQS queue, so we may have to change the URL in our code too. Let's quickly do that. We'll go back here, go to the dashboard, and click on SQS. All right, so this is our queue, and this is its URL.

So basically, I have named the queue the same, and if you do that, sometimes the URL doesn't change. So let me see if I have to update the code or not. So I'll go to my Lambda function handler, and go to the part where my queue is saved. All right, let me anyway paste the queue URL over here. I think it is the same. Yes, it is the same. Anyway, let us save it. This is my function one, so let me upload the code now. So it's my function, and I click on finish. Right, so it is uploading the function right now. So meanwhile, let me go to my function two and configure the queue address, which is this.

Paste it here, Ctrl+S, save it. And once this process is complete, I will upload this code as well. So while this is uploading, let me change the address in my index file as well. This is my website's index file, so I'll go to the queue URL, which is this. I will change it, save it, and close it. All right, so my website's address has also been done. All right, so my code is uploaded for this function. Let me upload the code for function two as well, because we made changes: upload function to AWS. So it is my function two, that is, my function two in Lambda; click on next, and click on finish. All right, so my code is being uploaded. Let's wait a while so that my code gets uploaded, and then we can proceed with our demonstration.

All right, so my code has now been uploaded to both my Lambda functions. Now, what I'll do is I will go to my localhost website and click on refresh, and I will upload a file. So let me go back and see what is there in my bucket right now, so that it becomes easier for us to verify that a file has been uploaded. So as of now, my bucket is empty; there's nothing in my edureka demo bucket, and my other bucket is quarantine demo. This is the place where my other file will go, right? Let me empty this as well, so that we are clear when some object has been added. All right, so this bucket has also been cleared. So we'll go to the localhost, and we'll choose a file. So let's upload some image; let it be this image, right? I click on open and click on upload image. All right, so it says S3 upload complete. All right, so let me check if a file has been added in my edureka demo bucket.

I'll click on refresh. Awesome. So one file has been added
and it's called 1492546097. Let me check in my email,
too; let me check if I got an email. So yes, I've got an email. Let me click on it. All right. So this is the name
of the file that I got, which is the same right? So, let me see if there is something
in my quarantine-demo bucket. So there's nothing there.

I'll come back now. I'll reply to
this email with 'move'. So this basically means move
my file to the other bucket, and I'm replying to the hello address
on my demo domain, right? So now we'll hit on send. So my message has
been sent via Route 53 to SES, which will invoke the Lambda
function, which will move my file to the other bucket. So let us check if that is done. So first let us check if my edureka-demo bucket
has been emptied so will click on refresh.

Alright, so my edureka-demo
bucket has now been emptied. Let's go back and check if something has been added
in my quarantine-demo bucket. Alright guys, so my file
has successfully moved to this bucket Let
Us verify the name. So this is one phone nine to
five four six zero nine seven. Let us check that in the email. So the email that we replied to had then
the object name as one phone nine to five four six
eight or 7 so this is the same file you guys. All right guys, so we have completed
our demonstration successfully.
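The "move" step the demo just verified boils down to an S3 copy followed by a delete, triggered when the email reply starts with "move". A minimal sketch, with hypothetical bucket names standing in for the demo's two buckets:

```python
# Sketch: move an S3 object between buckets (copy, then delete the
# original), as the Lambda does when the email reply says "move".

def wants_move(reply_text):
    """True if the email reply asks to move the file."""
    return reply_text.strip().lower().startswith("move")

def move_object(src_bucket, dst_bucket, key):
    import boto3  # assumed available; needs AWS credentials configured
    s3 = boto3.client("s3")
    s3.copy_object(Bucket=dst_bucket, Key=key,
                   CopySource={"Bucket": src_bucket, "Key": key})
    s3.delete_object(Bucket=src_bucket, Key=key)

if __name__ == "__main__":
    # Placeholder bucket names for the demo's source and quarantine buckets.
    move_object("edureka-demo", "quarantine-demo", "1492546097")
```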

Welcome to the session
on elastic Beanstalk a web application hosting
platform offered by Amazon. So without any delay, let me give you
a brief overview of what we will be
discussing today firstly. We will see what elastic
Beanstalk exactly is, and then we'll discuss certain salient features
of Elastic Beanstalk. Moving on, we'll try to understand Elastic
Beanstalk a Little Deeper by taking a look
at its components and then at
its architecture. And finally, we'll try to deploy an
application on elastic Beanstalk for practical understanding
of the concept. So let's get started. What is Elastic Beanstalk? If I have
to define Elastic Beanstalk in Amazon terminology, then it is a platform
as a service where you can deploy your application, which you might have developed with programming languages
like Java, .NET, PHP, Node.js and many others, on familiar servers such as
Apache, Nginx, Passenger and Tomcat. The definition which I just mentioned
seems to have a lot of technical terms in it.

Well, let's try to figure
out what Elastic Beanstalk is in simple terms. All right, let's say you need
to build a computer today. Well, you have two ways
to go at it first. You can go to a computer
Warehouse Computer Warehouse is a place where you have different
components of a computer laid out in front of you, like CPUs, motherboards,
disk drives and many other components.
You can choose whichever components you need
and assemble them to form a brand new computer. This is similar to the situation when you try to
deploy an application without using Elastic Beanstalk. When you try to develop an
application by yourself, you will have a list of tasks
which you need to do.

Like you might have to decide on how powerful you want
your ec2 instance to be then you have to choose
a suitable storage and infrastructure stack
for your application. You might have to install
substrate surface for monitoring and security purposes as
well moving on to option b, you can always visit
an electronic retail store which has pre-configured
computers laid out in front of you. Let's say you are
a graphic designer and you want a computer which has a modern graphical
user interface installed in it. All you have to do
is specify this requirement to a salesperson and walk out
with a computer of your choice. Well, I personally
prefer this option. This is similar to the situation
where you're trying to deploy an application
using elastic Beanstalk when you use elastic Beanstalk
to develop your application. All you have
to do is concentrate on your code; the rest of the tasks, like installing EC2 instances, auto scaling groups,
maintaining security and monitoring, etc., is done by
elastic Beanstalk. That is the beauty
of elastic Beanstalk.

So let's go back and take a look
at the definition again and see if we'll understand
it this time. Well, Elastic Beanstalk is
a platform as a service
where developers just have
to upload their application; load balancing, auto scaling
and application health monitoring are all handled automatically by Elastic Beanstalk. Now, let's try to understand how Elastic Beanstalk
as a platform as a service is beneficial to an app developer. I'm sure most of you know what platform as a service
is, but let's try to refresh what we know. Platform as a service is
a cloud computing service which provides you a platform where you can deploy
and host your application elastic Beanstalk
makes the process of app development much more fun and less complex and I
have five points to prove that to you. Firstly, it
offers quicker deployment. Suppose you're developing
an app by yourself.

Then you'll have to do a lot
of tasks by yourself, like you might have to decide
on an EC2 instance, choose a suitable storage and infrastructure stack, as well as install
auto-scaling groups as well. And then you might have
to install separate software for monitoring and
security purposes. Well, this will take
quite a lot of time but if you have used
platform-as-a-service to develop your app then all you have
to do is develop proper code for your application; the rest will be handled by
platform as a service or elastic Beanstalk
in this case, which makes the entire process of app development
much faster. Now secondly, Elastic Beanstalk simplifies the entire app
development process. All that developers have
to do is concentrate on developing a code
for their application; the rest, like monitoring servers,
storage, networking, etc. and managing virtualization,
operating system databases is done by elastic Beanstalk, which simplifies
the entire process for a developer using
platform as a service to deploy your application makes
the entire app development process more cost-effective. If you're trying
to deploy it by yourself, then you might have to install
separate software for monitoring and security purposes,
and I'm sure for that you'll have to pay
a lot of extra money.

But if you're using
Elastic Beanstalk to deploy your application, it will provide you all
this additional software as a package, and you can avoid paying
unnecessary operating costs. Also, Elastic Beanstalk offers a multi-tenant
architecture; by that I mean it makes it easy for users to share
their application on different devices, and that too with high security. When I say high
security, platform as a service will provide
you a detailed report regarding your application usage and the
different people or users who are trying to access
your application as well.

With this information,
you can be sure that your application is
not under any cyber threat. And finally, platform as a service provides you
an option where you can know if the user who is using
your application is getting a better experience out of it or
not. With platform as a service, you can collect feedback at several stages
of your app development, like the development stage, testing stage,
production stage and design stage. By doing so, you will have
a report regarding how your application
is performing at every level and you can make
improvements if needed. So this is how platform as a service like a are an elastic
Beanstalk makes it easy for developers to develop
an all-around perfect up guys will be able
to relate to this point when we try to deploy
an application using elastic Beanstalk in the later
part of this session.

You'll understand how Elastic
Beanstalk is beneficial to an app developer. In the market, there are quite a lot
of application hosting platforms which are providing
platform as a service. Let's have a look
at a few of them. First, we have something
called openshift. It is a web hosting platform
offered by Red Hat. Then you have Google App Engine which, as we all know,
is a platform as a service where you can deploy your
application in just a few minutes; apparently it will provide you
a production ready environment where all you have to do
is deploy your application code.

Then you have PythonAnywhere. It is an online
integrated development environment and web hosting service as well, based on the Python language. Then you have Elastic Beanstalk
offered by Amazon. Moving on, we have Azure App Services
by Microsoft and many others. But today our main focus will be
on elastic Beanstalk, which is a web hosting platform
offered by Amazon. Now that you have a basic
understanding of Elastic Beanstalk, let's go ahead and take a look at a few
of its features. Mostly all the features
are similar to the ones which we discussed earlier, like elastic Beanstalk makes
the app development process faster and simpler
for a developer. Moreover, all a developer has
to do is concentrate on developing code; the rest
of the configuration details and managing and monitoring details will be handled
by elastic Beanstalk.

Also elastic Beanstalk
automatically scales the resources which have been
assigned to your application by Elastic Beanstalk
based on your application's specific needs. But
there is one feature which is specific
to elastic Beanstalk suppose. You have deployed an application
using elastic Beanstalk, but now you want to make changes
to the configurations which have been already assigned to your application by
Elastic Beanstalk. Though Beanstalk is a platform
as a service, it provides you with an option where you can change
the pre-assigned configurations, like you do in infrastructure
as a service. Well, if you remember,
when you're trying to use infrastructure as a service to
deploy an application, you will have full control
over AWS resources. Similarly Beanstalk also
provides you with full control over your AWS resources and you can have access
to the underlying resources at any time.

Now, let's try to understand elastic Beanstalk
a little deeper. First, we'll be discussing a few
components of elastic Beanstalk, then we'll have a look
at its architecture. What we have your first we
have something called application suppose you
have decided to do a project. So what do you do? You go ahead and create a separate folder
on your personal computer, which is dedicated
to your project. Let's say your project needs
an Apache server, a SQL database and a programming
software like Eclipse. So you install
all the software and store it in the folder which is dedicated
to your project, so that it will be easy for you to access
it whenever you need it. Similarly, when you try to
deploy an application on Elastic Beanstalk, Beanstalk
will create a separate folder which is dedicated
to your application.

In AWS terms, this folder is
what we call an application. If I have to define this folder or
application in technical terms, then it is a collection
of different components like environments
your application versions and environment configuration. Let's try to understand each
of these components one by one. We have something called
application version. Suppose you have written code, stored it in a file and deployed
this code on Elastic Beanstalk, and your application
has been successfully launched but now you want to make
certain changes to the code. So what you do you go ahead and open the file make
changes to it save it and then again deployed on elastic Beanstalk
elastic Beanstalk again, successfully launches
your application. So you have two versions
of your application now, it's just a copy
of your application code, but with different changes and elastic Beanstalk
will provide you with an option where you can upload
different versions of your application without even deleting
the previous ones.

Then we have something called environment. An environment is a place
where you actually run your application. When you try to launch an Elastic Beanstalk
environment, Beanstalk starts assigning various AWS resources, like EC2 instances,
auto scaling groups, load balancers and security groups,
to your application. The point which you have to remember
is that at a single point of time, an environment can run
only a single version of your application. Elastic Beanstalk will provide
you with an option where you can create multiple environments for
your single application. Suppose I want a different environment
for different stages of my app. Like I want an environment
for development stage one for production stage and one
for testing stage. I can go ahead and do that create a different
environment for different stages of my application. And suppose you have the same
version or different versions of your application installed
on all these environments.

It's possible to run all
these application versions at the same time. I hope that was clear. Well, you'll understand
them practically when we try to deploy an application in
the later part of the session. Then we have something
called environment tier. When you try to launch
an Elastic Beanstalk environment, Elastic Beanstalk asks
you to choose among two environment tiers, which are web
server environment. And then you have
worker environment. If you want your application
to handle HTTP request, then you choose
web server environment. And if you want your application
to handle background tasks, that is where a worker environment
comes into picture.

We'll see which to choose,
either web server or worker environment,
and how to work with them when we try to deploy
an application in later part. And lastly we have
something called environment health. Based on how your application is running,
Beanstalk reports the health of your web server environment, and it uses different
colors to do so. First, grey indicates that your environment
is currently being updated. Let's say you
have installed one version and now you're trying
to upload a different version.

Well, it's taking a lot of time so that time
it shows gray color. It means your environment is
still under updating process. Then you have green which means that your environment has passed
the recent health check. Then you have yellow, which means that your environment has failed
one or more checks, and red, which means it failed three or more
checks. Moving on, let's try to understand the architecture of
elastic Beanstalk. Like I said early on
when you try to launch an elastic Beanstalk environment
Beanstalk asks you to choose among two different environment
tiers. Firstly, we have web server environment;
a web server environment usually handles HTTP
requests from clients and it has different components
firstly we have something called environment.

You know, what environment
is: it's a place where you actually run your application, and Beanstalk provides
you with an option where you can create
multiple environments and the main point is
that at a point of time this particular environment
can run only one version of your application. Moving
on, we have something called elastic load balancer. Let's say your application
is receiving a lot of requests. So what elastic load balancer
does is it distributes all these requests
among different EC2 instances so that all the requests
are handled and no request is being delayed. What actually happens is, when you launch an environment,
a URL is created, and this URL in the form
of a CNAME is made to point to the elastic load balancer.
A CNAME is nothing but an alternate name for your URL. So when your application
receives requests all these requests are forwarded
to elastic load balancer and this load
balancer distributes these requests among EC2
instances of the auto scaling group. Then we have the auto
scaling group. Well, if your web server is trying
to handle a lot of traffic and it's having a scarcity
of EC2 instances, then the auto scaling group automatically launches
a few EC2 instances.

Similarly. If traffic is very low, then it automatically terminates
underused EC2 instances. Then we have EC2 instances. So whenever you try
to launch an Elastic Beanstalk environment, Beanstalk
will assign your application with a suitable ec2 instance, but the software stack like
the operating system the servers and different software's which are supposed
to be installed on your instance are decided by
something called container type. For example, let's say my environment
has an Apache Tomcat container. So what it does is install the Amazon Linux operating
system, Apache web server and Tomcat software on your EC2 instance.
similarly depending on your application requirements
it installs different software stack on your ec2 instances. Then we have a software
component called host manager which runs on every
EC2 instance that has been assigned
to your application.

This host manager is
responsible for various tasks. Firstly, it will provide
you a detailed report regarding the performance
of your application. Then it provides
instance-level events. It monitors your application
log files as well and it monitors
your application server. You can view all
these metrics log files and create various alarms on
cloudwatch monitoring dashboard. Then you have security
groups. A security group is like a firewall
for your instance; not anybody can
access your instance. It's just for security purposes. So elastic Beanstalk has
a default security group which allows clients to access
your application using port 80. You can define
more security groups if you need and then elastic Beanstalk also
provides you with an option where you can define
a security group for your database
for security purposes as well.
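As a sketch, the default rule Beanstalk sets up — allowing clients to reach the application on port 80 — corresponds to an ingress permission like the one below; the security group ID is a placeholder.

```python
# Sketch: an ingress rule opening HTTP port 80 to all IPv4 clients,
# comparable to the default Elastic Beanstalk security group rule.

def http_ingress_rule(port=80):
    """Ingress permission opening a TCP port to the world."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }

def open_http(group_id):
    import boto3  # assumed available; needs AWS credentials configured
    boto3.client("ec2").authorize_security_group_ingress(
        GroupId=group_id, IpPermissions=[http_ingress_rule()])
```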

We have something
called worker environment. The first question that comes
to our mind is what is worker. Well suppose your web server has
received a request from client. But on the way while it's trying to process the
request it has come across tasks which are consuming
a lot of resources and taking a lot of time, because of which
it's quite possible that your web server
might deny other request. So what it does it forwards
these tasks to something called a worker. This worker
handles all these tasks on behalf of the web server. So basically, a worker is a process
that handles background tasks which are time intensive
and resource intensive.

And in addition, if you want, you can use a worker to send email notifications,
generate metric reports and clean up databases when needed. Let's try to understand why we need a worker
with the help of a use case. So I have a client; he has made
a request to a web server and the web server
has accepted the request and it starts
processing the request but While it's processing
the request, it comes across tasks which
are taking a lot of time. Meanwhile, this client has sent
another request to the web server. Since the web server is still
processing the first request it denies second request.

So what is the result
of this? The performance and the number of requests accepted by
the web server will drastically decrease. Alternatively, let's say
a client has made a request and your web server has accepted it and starts processing
the request, and again it comes across tasks which are taking a lot
of time. This time,

what it does is it transfers, or passes, all these tasks
to the worker environment, and this worker environment
will handle all these tasks, and request one
is successfully completed. Meanwhile, if it
receives a second request, since it has completed
processing request one, it will accept request two. I
hope the scenario was clear. Well, all we are doing
by installing a worker environment is avoiding spending a lot
of time on a single request here. Now you know what
a web server environment is and a worker environment is, and why
we need a worker environment. But there has to be some way so
that this web server environment can pass on these tasks
to the worker environment.

Let's see how. So you have
your web server environment. It has received a request, and while processing
it, it has encountered tasks which are taking a lot of time. So what it does is it creates an
SQS message. SQS is the Simple Queue Service offered by Amazon, and this message is then put
into an SQS queue, and the different
requests are arranged based on priority in this SQS queue. Meanwhile, when you try
to install a worker environment, Elastic
Beanstalk installs something called a daemon. What the daemon does is it pulls
the SQS message from the queue and then sends the tasks
to the web application which is running on the worker environment. As
a result, or as a response, this application
handles all the tasks and responds with
an HTTP response. So this is
how the entire process of transferring
and then handling tasks goes on. So you have a client; he has made
a request to a web server, but the web server
encounters tasks which are time-consuming
and resource-consuming.
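The hand-off described above — the web tier offloading a slow task as an SQS message that the worker environment's daemon later pulls — can be sketched like this; the queue URL and task fields are assumptions for illustration.

```python
# Sketch: the web tier serializes a background task and puts it on the
# worker environment's SQS queue instead of processing it inline.
import json

def task_message(task_name, payload):
    """Serialize a background task as an SQS message body."""
    return json.dumps({"task": task_name, "payload": payload})

def enqueue_task(queue_url, task_name, payload):
    import boto3  # assumed available; needs AWS credentials configured
    boto3.client("sqs").send_message(
        QueueUrl=queue_url,
        MessageBody=task_message(task_name, payload),
    )
```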

So it passes this
request to the SQS queue. And when you try
to install the worker environment, there's a daemon which pulls
out all these messages or tasks from your SQS queue. And then this daemon
sends all the tasks to your application; the application
resolves all the tasks and then it responds
with an HTTP response. So this is how your two
applications communicate. I know that was a lot of theory. Don't worry. We have arrived
at the fun part of session where we'll be trying
to deploy an application using Elastic Beanstalk. Here, by creating an application on
Elastic Beanstalk practically, you'll understand the different
concepts, its architecture and the different environment tiers
and all this.

So let's go ahead. So this is my AWS
Management Console. And if you want to take a look
at all the services, then you have all the services here, but we're mainly
concerned with Elastic Beanstalk, which I have recently used. So it shows all recently
used resources or services here. So I'm going to choose
Elastic Beanstalk, and this is my Beanstalk console. If you're trying to deploy
an application for the first time, this is the page where you land.
When we scroll down, it says that I can deploy an application
in three easy steps. All I have to do is select
a platform of my choice, then upload my application code if I have one, or use
a sample application code and then run it. Let's see if it's
as easy as it says here. So go ahead and click on the create
new application option here.

It will ask you for application
name and description. I'm going to name my application
as Tomcat app then description as my new web app. And then I'm going to click
on this create option. See, when I try to
create an application, it has created a separate folder which is dedicated
to my application. And in that folder, we have different components
as you can see here. I have my environment then I
have application versions and if I've saved any configuration, it will show all
the saved configurations here. Now let's go ahead and create an environment.
On the right side, you see an actions option,
and when you click on that, you get different choices. You can just select
the create environment here. So again, it's asking you to choose among two different
environment tiers. You have web server environment and worker environment.
In a web server environment, your application handles
HTTP requests from clients. Then you have worker environment, where your application
will process background tasks, like time-intensive and resource-consuming
tasks. In this demo, I'm going to work only
with the web server environment.

You can go ahead explore
and create a worker environment once you understand how to deploy an application
on elastic Beanstalk. So I'm going to click
on the select option here. It will take me to a page
where it asks me to give a domain name, or in technical terms a URL,
to my application. You can give any URL
of your choice and see if it's available. So let's say 'mytomapp',
and see if it's available. It says the domain name is
available. Then description: I'm going to give
it the same as before, so my new web app. Then when I scroll down, it asks me
for a platform of my choice. There are different options: you have Go, then you have .NET, Java, Ruby, PHP,
Node.js, Python and Tomcat. And if you're trying
to deploy an application on a platform which is not here, you can configure
your own platform and deploy it on
Elastic Beanstalk.

It provides an option here. You can see there's
a custom platform option here. So I'm going to choose the Tomcat
platform for my application. And since I'm not
any kind of developer, I'm just going to go ahead
and use the sample application provided by Amazon. But if you have
any application code, if you have created or developed
some code, you can store that in a file and upload
it. It says you can upload your code; you need to convert
your file to a zip or WAR file and then upload it here.

So I'm going to just
select sample application and then click on create
an environment here. So it's going to take a while
for Elastic Beanstalk to launch my environment, though it's not as much time as it would have
taken me to develop the entire application by myself. While Elastic Beanstalk
is trying to launch the environment, let's discuss some points. In
the earlier part of the session, we discussed some benefits of
Elastic Beanstalk. Firstly, I said that it speeds up the process of developing an entire app. It's true, isn't it? All I did was
select the platform of my choice; the rest is done
by Elastic Beanstalk itself, thereby saving a lot
of time. Similarly, it simplifies the process
of app development. Again,

All I did was select
a platform of my choice; tasks like installing EC2
instances, security groups, auto scaling groups and assigning IP addresses, the rest
is done by Elastic Beanstalk. I even mentioned
a point where I said that Elastic Beanstalk provides you
with an option where you can change
the pre-assigned configuration. We'll explore that once the environment is created. Let's go ahead and see
what elastic Beanstalk is doing. It says that it has created
a storage for my environment, an S3 bucket,
so all my files, where I have my application code,
are stored there. Then it has created a security group as well and an elastic IP address. Then it says
it's launching an EC2 instance.
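The same create-application and create-environment steps can also be driven through the API. A hedged sketch — the names here are made up, and the solution stack string is illustrative (real ones come from ListAvailableSolutionStacks):

```python
# Sketch: create a Beanstalk application and a Tomcat environment via the
# API. Names and the solution stack string are illustrative placeholders.

def environment_request(app, env, cname, stack):
    """Parameters for create_environment, kept separate for clarity."""
    return dict(ApplicationName=app, EnvironmentName=env,
                CNAMEPrefix=cname, SolutionStackName=stack)

def create_app_and_env():
    import boto3  # assumed available; needs AWS credentials configured
    eb = boto3.client("elasticbeanstalk")
    eb.create_application(ApplicationName="tomcat-app",
                          Description="my new web app")
    eb.create_environment(**environment_request(
        "tomcat-app", "tomcat-app-env", "mytomapp",
        "64bit Amazon Linux running Tomcat 8"))  # illustrative stack name
```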

So you see it's as easy as that. All you have to do is select
a platform of your choice; the rest is handled by Elastic
Beanstalk. And later on, if you're not satisfied, if you want to change
some configuration, you can go ahead
and do that here. Look at this. This is the URL
or domain name which is assigned to my app. It says a new instance has been added, and in addition
it is showing each task while it's doing it.
Isn't that cool? You'll know what your
environment is currently doing. So it's still taking a while.

So it says it has installed and added an instance
to my application, and my environment has now been launched. It has finished
almost all the tasks. It has taken me
to the environment page now. So this is my environment page, or you can say
a dashboard. First, you have environment health here. It says green. It means that my environment
has successfully passed the health check. Then
it shows the version of your application; since I've used the sample
application, it's saying sample application here. Since I've chosen
Tomcat as my platform, it has installed
a suitable infrastructure stack, like Amazon Linux, and you have Java
8 as the programming language. Let's go ahead
and explore this page first. We have something
called configuration here. Like I said, though,
it is a platform as a service. It provides you with an option
where you can change the configuration. So you will have full control
of your resources. First, we have something
called instances here. When I click on the modify option, you can see that Elastic Beanstalk
has assigned a micro instance to our application. If I want, I can go
ahead and change it to a different instance based on my application
requirements. Scrolling down,

I have cloudwatch monitoring. If I want detailed monitoring,
then I can go for one minute; if I want basic
monitoring, not so detailed monitoring, then I
can choose five minutes here. Then I have an option
of assigning storage to my application as
well. It says we have magnetic storage, general purpose and provisioned IOPS as well. When we scroll down again, we see different
security groups. I can just click on that and the security group
will be added to my application. So once you've made
the changes, you can click on the apply option, though I
haven't made any changes; I'm just going to click here. So now Elastic Beanstalk is
trying to update my environment. So it's showing gray color here.

If you recollect, I mentioned
during the earlier part that grey indicates my environment is being updated. Okay, let's go back
to configurations. We did have a look at instances. Then you have something
called capacity. Apparently, Elastic Beanstalk has assigned a single instance
to my application. If I want, I can go ahead and
change to auto scaling groups. You have an option
called load balanced, so you can click on that here,
and you can set the minimum and maximum number of instances
that your auto scaling group can launch as well. Then, if you had chosen
the load balancer option earlier, a load balancer
would have been enabled here. Then we have monitoring details, which provides you with
two options enhanced monitoring and basic monitoring and when we scroll down
you can see a streaming to cloudwatch logs option here. So if you want your log files, you can view them
on cloudwatch dashboard as well.

You can set the retention period
according to your choice and suppose you want your application for
some private purpose; then you can generate
a private VPC for your application. Similarly, you can add or decrease
the amount of storage as well. So by explaining all this what I want to say is your hands
are not tied; you can make changes to configurations if you want. Then we have the logs option. If you want to have a look
at the last 100 lines of your log files,
then you have an option; it says last 100 lines. Then,
if you want full log files, you click on that;
it provides you a file in download format. You can just download it. Then we have the health option here,
where it provides the health of your EC2 resources; basically, it
shows the EC2 instance here. It says it's been
7 minutes or six minutes since my ec2 instance
has been installed. Then you have monitoring where it shows
different monitoring details like CPU utilization Network
in network out. If you want you can go ahead
and create an alarm with alarm option here suppose you want notifications
to be sent to you when the CPU utilization
or when the number of ec2 instances are scarce
in your auto scaling group.
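An alarm like the one described — notify when CPU utilization stays high — can be sketched with the CloudWatch API; the alarm name and SNS topic ARN below are placeholders.

```python
# Sketch: a CloudWatch alarm that fires when average CPU utilization of
# EC2 instances stays above 80% for two 5-minute periods.

def cpu_alarm_params(alarm_name, topic_arn, threshold=80.0):
    """Parameters for put_metric_alarm on the EC2 CPUUtilization metric."""
    return dict(
        AlarmName=alarm_name,
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],  # e.g. an SNS topic for notifications
    )

def create_cpu_alarm(alarm_name, topic_arn):
    import boto3  # assumed available; needs AWS credentials configured
    boto3.client("cloudwatch").put_metric_alarm(
        **cpu_alarm_params(alarm_name, topic_arn))
```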

Then you have events here. Events are basically nothing
but a list of things which have happened since you started launching
an environment. When I go down, the same things
we have seen earlier on the black screen are listed here. So it says create
environment starting; then we saw that an EC2 instance has been installed, security
groups, elastic IP address. So basically, it
shows all the events that have happened from the time
Elastic Beanstalk started to launch our environment till the time you
terminated the environment. So that's it. Then you have tags; you can assign different
key-value pairs as well. Let's go back. This is the sample application which I've used; now let
me try to upload and deploy a new application version here.

Okay, I'm gonna go
to the documentation here. I'm interested in
Elastic Beanstalk, so I'm going to select that, and then the developer guide. Click
on getting started, and when you scroll down, on deploy a new application version. Based
on your sample application, you have different versions
of your application. Since I've selected
Tomcat as my platform, I have a Tomcat zip file here. I've already downloaded that, so I'm just going
to upload the file. So let's go back,
and it says upload and deploy. But let's go
back to our folder.

Then there's an application
versions option here. So it gives you deploy and
upload option separately here. I'm just going to upload
first, then we'll deploy. Version label: new version,
and upload the file. I have the zip file here. I'm just going to attach
the file and then click on the upload option. The new version of
my application has been uploaded, but it's not been deployed yet. So when I go back, you can see that I can still see
the same version which was there before. Now let's go back and deploy it.

Okay. I'm going to select this, and then I'm going to click on the deploy option and select deploy. Let's go back to the environment and check. So my environment is being updated, hence the gray color here; once it's updated, it should show the new version name here. There, it is updated. So as you can see, it's showing the version name of my new application version. And like I said, both my application versions are still there; they haven't been deleted. You don't have to delete your application versions when you create a new one; similarly, you can upload multiple versions of your application. Going back to the actions option: you have load configuration, which will definitely load your configuration. Then you have save; we can save this configuration. Suppose you want to create an application with the same configuration again: you don't have to start from the beginning, from creating the application, the environment, and all that. You can just save the configuration and use it for another application or another environment of your application. Then you can clone your environment as well, rebuild an environment, and terminate it as well. So here I have saved configurations.

If you have saved a configuration, the configuration will be listed here, and I can use that configuration when I'm creating a new environment. Okay, let's see if we have explored all the options in the environment. Oh well, I forgot to show you one most important thing: when I click on this URL, it takes me to a page where it shows that my application has been successfully installed. Well, that's it. So now you know how to deploy an application using Elastic Beanstalk. Though I have used the sample application, you can go ahead and upload code of your own if you have any and try it out. Well, all the options here seem to be user-friendly, so you will know what to do; it is an easy process. You'll understand it better when you try to deploy an application by yourself. So first and foremost, I would start by talking about what cloud storage exactly is. Then we would move further and understand some of the myths that surround cloud storage, but also discuss certain cloud storage practices, and we would understand how different cloud storage service providers work.

Finally, I would finish things off with the demo part, where I would be talking about how cloud storage services work on Amazon Web Services. So I hope this agenda is clear to you guys. So let's not waste any time and quickly get started then. So what exactly is cloud storage? Now, first and foremost, let me tell you what prompted me to actually go ahead and take this session. Well, recently

I had been interviewing, and I asked people what they knew about cloud computing, and they told me that cloud computing is a place online where you actually store data. Well, to some extent I agree: yes, cloud computing helps you store data, but that is not the definition in the longer run. So that is why I thought that we should actually go ahead and have this session, so that we can discuss some of the myths that surround cloud computing and cloud storage in particular.

So guys, let's start with a basic definition first. Cloud storage: well, it is storage that is made available in the form of a service, which is connected over a network. So guys, this is a very basic definition, and to throw some more light on it, I would like to actually go ahead and give certain examples as well, to specify what this definition means. But to some point this definition is correct: it says that it is nothing but storage which is available as a service and which is connected over a network. Now again, you might wonder, this is what people told me in the interview, right? I mean, it is a place where you store data.

So yes, cloud storage, to some extent, this is what it is. But when you talk about cloud storage, it is a lot more than this basic definition. Let's try to understand what all this cloud storage exactly has to offer to you people. Well, first and foremost, as I've already mentioned, it is storage: it can let you store emails and media. Now, when I say media, you can store different kinds of media, whether it's your images, whether it's your videos, or maybe other kinds of files. It also lets you host services as well. Yes, we are living in the world of the internet right now, and there are various services and websites that are online, and this data can be stored by using a cloud platform. And finally, it is nothing but backup. Now, when I say backup, guys, we are talking about large enterprises that back up their data, and they're using the cloud platform to do that.

But again, this still holds the same point, right? I mean, when I say emails, media, services, backup for large organizations, it is still simple storage, no? Now, let me tell you what it does. When I say backup for large organizations, we are referring to a lot of pointers here: data coming in from different sources, the way it is processed, the way it is integrated and stored into a particular storage, how it is handled, and what all you can do with it. Now, when you talk about cloud storage, it actually takes care of all these things. That means it's not redundant or dead storage where you just take your data and put it in; you can think of it as smart data storage. So to understand that, let's talk about cloud computing a little. What cloud computing does is it lets you have this data on the platform, and it is a platform which has a number of services that let you compute or process this data to suit your business needs. Now, it can be using machine learning or big data, finding out certain patterns using BI tools.

And you can also do a lot of other things, like maybe use a cloud platform where the data can be used for marketing purposes, maybe AI or bots and stuff like that. So this is what a cloud computing platform does: it basically lets you pull data from different sources and use this particular data to do multiple or different kinds of things. So when I say cloud storage, it basically ensures there is a mechanism that in the first place stores data and then lets you perform some of the actions that you can actually perform on this data. So as we move further, I would be discussing quite a few pointers that support this claim or this

definition of mine. So let's just move further and try to understand some more pointers that talk about cloud storage. But to keep it simple: it is storage that lets you do a lot of things with the data, the primary one being storing the data, and the others being processing it or managing it as well. So let's move further and take a look at the next pointer. So what are the myths that surround cloud storage? Well, when you talk about the myths, this is what some people assume: that cloud storage is suitable only for large-scale organizations. No, this is not true.

Let me give you an example. Recently, what happened was, one of my friends actually happened to format his mobile phone, and he lost all the images and other data that was there on that phone. The problem was he never backed that data up on any drive, neither on Google Drive nor anywhere else, so he lost the data. So he came to us and he told us that this is what happened.

So we told him that he should have backed it up, maybe on Google Drive. So the next time, he did that. And again, he being used to losing his data, he lost his data again. So he again comes up and he's like, I've lost the data. So we reminded him that he had his data stored on Google Drive. Now, when you talk about Google Drive, it is nothing but online storage where you actually make a copy of your data. So he had made a copy of his data, and he could actually get that data back. So when I say cloud storage, it gives you a simple application or a simple interface where you can actually go ahead and just put in your data; just like Google Drive, you can put in your data as well. So it is not limited to large-scale organizations only: even if you are a single individual who just needs to store your data, you can use cloud storage. Now, there are various cloud service providers that actually meet or cater to different cloud computing needs. So based on that, the cloud storage might get more complicated and might give you more functionality.

But even if your need is as basic as storing data, don't worry: cloud computing or cloud storage is for you as well. Now, if you talk about small-scale businesses, yes, these days the amount of data that is generated is huge. And that is why, even for small-scale organizations, you need a place where you can store your data and somebody can manage the data for you, so you can focus on your business goals. So this is where cloud storage comes into the picture for even small-scale businesses as well.

So if you ask me whether only large-scale organizations are suited for cloud storage: this is a myth. Next, complexity with cloud, guys. Now, what does this term symbolize? People normally assume that having their own private infrastructure makes it easier for them to actually go ahead and put in their data. That is not true. The fact is that people are used to certain methods or methodologies.

They feel comfortable with them. Whether cloud is complex or not, I would say it is not. Why? Because if you get used to certain services, you would realize that storing or moving data to the cloud is actually a lot easier than with your previous or traditional infrastructures, is what I would say. So whether cloud is complex: I would say no. As we move into the demo part, we would probably be talking about this pointer, or once I give the demo, you would probably have a clearer picture of how easy it is to actually move your data to the cloud.

Not eco-friendly. Now, this might sound out of the blue. I mean, you might wonder, this is not a sociology session, so where did this point come from? I mean, not eco-friendly? Yes, what people assume is the fact that a large amount of data is being stored on these platforms, so we have huge numbers of data centers which are big in size, and they consume a lot of electricity. So there is power wastage, electricity wastage. Well, that is a myth again. First and foremost, the fact is you're getting a centralized storage somewhere; that means most of the data would be stored there. So yes, you are automatically saving on your power consumption when you talk about it from a global or an eco perspective. The other thing is, I would not want to single out a particular cloud service provider, but when you talk about GCP, that is Google Cloud Platform, they normally provide their cloud services at a very affordable price. Now, why is that? The reason for that is they've actually put in a lot of effort into the research part.

They researched a lot on how they can actually minimize the cost. And how did they do it? They basically ensure that the amount of power that is consumed by the resources is optimized to a minimum, so that they are charged less, and in a way you are charged less. So if they're optimizing that particular process, obviously you're consuming less electricity. So whether it's eco-friendly: definitely, it is eco-friendly. Zero downtime, again: there's no such thing as zero downtime. Now, the fact that I'm talking about cloud storage does not mean that I tell you that it has zero downtime and you're completely secured. No, there is a possibility that there might be downtime; the fact is that cloud ensures that this downtime is very small. Now, that is a plus point. What cloud also does is it ensures that there is disaster recovery, and there is always a backup of your data or your resources. So even if something goes down for a very little time, and normally it happens for a very short time if it does happen, and it happens very rarely, but even if it happens, care is taken that nothing harms your resources or your data.

So, zero downtime: no, that is not true. But definitely, downtime is taken care of when you talk about cloud storage. There is no need for cloud storage: okay, this is one of the biggest myths, whether people agree or not. If you go back like ten years from now, probably people did not know a lot about cloud computing. But with time, people are actually moving to the cloud, and if you take a look at recent statistics, they would agree as well. I mean, people will be wanting to switch to the cloud in the near future. And the reason for that is the quite a few services and facilities that cloud gives you, and that is why people are moving to the cloud. And if you do move to the cloud, you'll be using cloud storage inevitably. So yes, that is going to happen. And if you think that there is no need for cloud storage, I would assure you that in the near future even you would be moving to the cloud. So guys, these are some of the major myths. There are some other myths as well; as we move further, don't worry, we would be discussing those in some other pointers.

So let's just go ahead and talk about some of the benefits of using cloud storage for data storage, or basically using cloud for data storage. So what are the benefits of doing so? Well, I purposely kept this pointer for the later half and first discussed the myths, because these pointers would definitely help you understand some of those myths better. Now, a cloud platform is customer-friendly. What do I mean by this? Well, first and foremost, when you talk about cloud storage, what you're able to do is scale up your storage, scale down your storage, keep it secure, monitor it, and you can ensure that there is a constant backup taken of your data.

So when you talk about it from a security perspective, it is secure as well. Plus, what cloud service providers do is offer so many services: you talk about any popular cloud service provider in the market, they have a lot of services that are made available. What these services do is ensure that your functioning on the cloud platform is very smooth, and the same is true for cloud storage as well. You can utilize various services which ensure that your working on the cloud becomes easy. Again, as I have been reiterating for a while now, I would be talking about these in future slides; don't worry, as we get into the demo part, you would see how user-friendly these cloud platforms are. Security, now: again, this is an important point. When you talk about cloud platforms and cloud storage, are they secure or not? Definitely, they are very secure. There was a time when people believed that these platforms were not secure to a greater extent, and that doubt was understandable.

I mean, if there is something that is new in the market, you tend to doubt it. But if you talk about cloud platforms, these platforms are actually more secure than your on-premise or your traditional setups, which people are used to using. The reason for this is, if you talk about cloud service providers, let's talk about AWS, that is Amazon Web Services, in this case: what it does is it gives you a shared security model. Now, what do I mean by this? You have service-level agreements between the customer and the AWS provider.

They basically come to terms where they decide what kind of security or what kind of principles are to be implemented on the architecture, and you can take control as a user. You can decide what accesses you want to give to the vendor, and what are the accesses you want to keep to yourself. So when you do combine this approach, it ensures that security is at the optimum, and you get to take control of your security as well.

So yes, if you talk about cloud storage being secure or not: yes, it is very secure. To name one, we have S3 in AWS. It is highly durable and it is highly reliable. So when you talk about disaster recovery, it is almost all the way there; and as I've already mentioned, not everything is a hundred percent, as when I talked about the downtime part. So yes, not everything is a hundred percent. But when you talk about security and durability, when you talk about S3 in particular, it is designed for eleven nines of durability, that is, 99.999999999% durable. So that does make a system very secure. Another benefit, guys: it is pocket-friendly. Now, if you talk about cloud service providers, whether it's storage, whether it's compute services, database services, all these services you can actually go ahead and use on a rental basis. It's just like paying for electricity. I mean, if you're using a particular service, you would be paying for that service for the duration you use it, and you would be paying only for the resources that you've used.

So it is a pay-as-you-go kind of model, where you pay only for the resources you use and only for the time duration you use them. So, whether it's pocket-friendly or not: yes, it is pocket-friendly. And as you move further, I mean, if you are using more storage, the cost again comes down to a greater extent. So it is already cheaper, and if you decide to scale up, it would be cheaper still. So yeah, these are some of the benefits. Now, if you talk about cloud computing and storage, again there are other benefits, like, as I've already mentioned, durability, scalability, and various others, but these are the core ones. I would not want to get into the details, because I wish to keep everyone on the same page: people who have been attending this session for the first time, and people who probably know a bit about cloud computing. Again guys, if some of the terms that I'm talking about in this session feel fairly new to you and I'm probably going at a faster pace, I would suggest that you actually go ahead and check the sessions that we have on our YouTube channel, because we've talked about a lot of stuff there.

I mean, other cloud services, what cloud computing is, what cloud service providers are, what the different service models are, and quite a few other videos and sessions, to be honest. So I would suggest that you go through those sessions as well. And I'm sure that by now many of you might have been wondering whether this session would be recorded and a copy of it would be available to you people or not. Well, most of our sessions go on YouTube, so probably a copy of it would be there on YouTube. And if not, you can actually share your email IDs as well; if it does not go on YouTube, somebody would share a copy of the session with you people. So guys, if I'm happening to go a little faster than what you're expecting, do not worry, you'd be having a copy of this as well. But for now, just try to keep up with the pace that I am going at, and I'm sure that by the end of the session we all would be good.

So guys, what are some of the cloud storage practices that you should take care of now? These are the practices that should concern somebody who is planning to move to the cloud. Again, if you are a newbie and you're just here to practice, we are not talking about you in particular, though these pointers are important for you as an individual as well. But I'm talking about it from more of a business or industrial perspective.

So if your organization is planning to move to the cloud, definitely these are some of the practices or pointers that you should take care of. So first and foremost, scrutinize the SLA. As I've already mentioned, you have SLAs where your service providers or vendors basically come to terms with you: you actually go ahead and decide on particular rules, as in, these are the terms and these are the services that, as a vendor, I would be providing you people, and you as a customer agree to certain terms, as in, okay, this is what you would be giving us, and this is what we would be paying you. So there are certain pointers that you should consider while you are actually signing your SLAs. What you need to understand is, when they say this is the base charge, try to understand what the charges would be when you decide to scale up and stuff like that. The other thing that you need to consider: I've talked about downtime.

Right? So normally you have SLAs where people talk about stuff like, there won't be an outage which is more than 10 minutes. So yes, I mean, this sounds fairly good, right? Say, in an hour's time. This is a hypothetical example; do not assume that there would actually be a downtime of 10 minutes, this is just for your understanding. Let's assume that there's a downtime of maybe 10 minutes in an hour's time, which is too high, but let's assume that. So what the service provider would claim is, if there is one downtime, probably this is what the charge would be, but if it goes down beyond that, probably you get some discount, and those kinds of things. So if there is an SLA where you say that the limit is 10 minutes, what if there were two downtimes of nine minutes each in an hour? That is fairly close, right? So you've been robbed of your right. So that is what I'm trying to say.
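The point about two nine-minute outages can be made concrete with a small sketch; the per-outage limit and outage durations are the hypothetical numbers from the example above:

```python
# Hypothetical sketch of why a per-outage SLA threshold can hide real
# downtime: two 9-minute outages breach nothing under a "no single
# outage over 10 minutes" clause, yet cost you 18 minutes in the hour.
def breaches_sla(outage_minutes, per_outage_limit=10):
    """True if any single outage exceeds the per-outage limit."""
    return any(o > per_outage_limit for o in outage_minutes)

outages = [9, 9]                  # two made-up outages in one hour
print(breaches_sla(outages))      # no single outage exceeds 10 minutes
print(sum(outages))               # but total downtime is 18 minutes
```

This is why an SLA worded around total downtime per period, not just per incident, better protects the customer.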

I mean, if you do actually go ahead and sign particular SLAs, make sure that you put in the right points that suit your business as well. Follow your business needs: again guys, as we move further, we will be discussing the different kinds of storage. When you talk about cloud service providers, they provide n number of storages, or n types of storage is what I should say. So depending upon the business you're dealing with and the kind of data that is generated, you should be able to choose a proper storage for your requirements. I mean, whether you're dealing with real-time data, stationary data, or archival data, based on that you should be able to actually go ahead and set up your cloud storage.

Also, you need to understand, as in, okay, this is the data I would be putting in, and these are the integrations I would be needing because I'm using these kinds of tools. So, are those compatible with my cloud platform? You probably need to consider these pointers as well. And if you follow these rules, a business would probably end up saving a lot of money.

Now, there have been use cases where businesses have actually gone ahead and saved lakhs of dollars, thousands of dollars. So yes, considering these pointers and understanding your business also becomes important. You need to ensure that the security which you are actually managing or monitoring is defined properly. I've already mentioned that if you talk about cloud service providers, they let you have an SLA where you both come to a mutual agreement. So understand the security: what are the accesses that you have, what are the accesses you want to give, what kind of data are you dealing with, and based on that you can probably come to terms when you're actually moving to the cloud. Plan your storage future: what we are trying to say here is, plan the future of your storage.

Do you need to scale up in the future? What are the peak times that you can expect, and stuff like that? So when you initially set your storage up, you would probably be in a much better position to scale up. I'm not denying the fact that cloud providers are already scalable, but just to be secure, you can do that. When you talk about cloud providers, mostly they give you the option of scaling readily or instantly, but still, having an understanding of how much storage you need, and of where you are going to be in like two or three years' time, would definitely hold you in a much better position. Be aware of hidden costs: again guys, I have talked about the first pointer, the SLA, right? So it is similar to that; understand what you're paying for and how much you are paying for it. It is a pay-as-you-go model, but having an understanding of which services would cost you how much would help you in framing proper SLAs or having proper policies for your storage. So these are some of the do's and don'ts of cloud storage, guys.

Again, if you need more insights on different services as well, we have a video or a session on YouTube on best practices; you can take a look at that as well, where we talk about different services and how you can actually perform certain tasks, which would ensure that you are in the best possible position. So guys, we've talked about quite a few things. We understood what cloud storage is, what the benefits are, what some of the myths are, and what some of the practices are that you should take care of. Now, let's take a look at some of the different cloud service providers that provide you with these services, and once we are done with that, we would probably move into the demo part.

So guys, there are quite a few cloud service providers which also provide you with storage services. We have Google Cloud Platform, which is one of the leading ones. DigitalOcean: its ads are probably everywhere when you browse the internet. Terremark: again, this is a popular cloud service provider. IBM has been there in storage, or in cloud, for a very long time, guys. Now, if you go way back: I happened to attend a session, I believe it was an AWS re:Invent session, where I do not remember the name of the speaker, but he made a very valid point.

He said that in the 1980s he happened to visit a facility, I believe it was IBM's, I'm not sure whose, but I think it was IBM's. So he said that they had this huge machine which was for storage. I mean, it looked very cool for the 1980s, a huge machine, and it was very costly, somewhere around thousands of dollars, and the storage space was 4 MB. Yes, for 4 MB the cost was thousands of dollars. So you can understand how far storage has come, how far cloud has come. And yes, IBM has been there, I mean, it has been there since then. So if you talk about IBM, or you talk about Google Cloud Platform, these are principal cloud service providers.

Then you have Microsoft Azure. Now, if you talk about the current market, I mean, if you go by the stats alone, Microsoft Azure and AWS are the leading cloud service providers. AWS is way ahead of all the other cloud service providers, I'm sorry to say, but if you talk about Microsoft Azure, it is actually catching up with Amazon Web Services, and recent stats show that Microsoft Azure is doing fairly well. So yes, these are some of the popular cloud service providers, and more or less all of them have good storage services as well.

But as I've already mentioned, Amazon Web Services is one of the best in the market, and in today's session, we would be understanding some of the popular cloud services that Amazon Web Services has to offer to you, and when I say popular services, I would be focusing on storage services specifically. So guys, let me switch into the console, and we can discuss some of these services there and directly move into the demo part. So yes guys, I hope this screen is visible to you people. This is how the AWS Management Console looks. So again, for people who are completely new to the cloud platform, let me tell you that what Amazon Web Services and most of the other cloud service providers do is give you a free tier account. What they're trying to say here is: you come, you use our services for free for a short duration of time, and if you like them, go ahead and buy our services. So these services are actually made available to you for free for one complete year.

Yes, there are certain limits or bounds on these services. So if you exceed those limits, you would be charged, but if you stay within the bounds or limits, you won't be charged, and if you talk about exploring these services, the free tier limits are more than enough. So again guys, if you are completely new, you should come here, that is, the Amazon Web Services Management Console, and create a free tier account. It is a very simple process: put in certain details, where you work, why you want to use these services, other basic details, and then you would probably have to enter your debit card or credit card details. Don't worry, they won't charge you; this is for verification purposes. And again, if you're worried about whether you would be charged or an amount would be deducted from your credit card, that does not happen, guys. AWS gives you a notification saying, okay, you've been using these services, and probably you might be over-using some of them. Also, you can set alarms, where if you reach a particular limit, you can actually go ahead and ensure that there is an alarm, so that you do not exceed the free tier limit.
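A trivial sketch of the "warn before you exceed the free tier" idea; the 5 GB limit mirrors the S3 free-tier storage allowance, while the 80% warning line is an arbitrary choice for the example:

```python
# Hypothetical free-tier watchdog: warn when usage crosses a fraction
# of the limit, flag when it exceeds the limit entirely.
def usage_status(used_gb, limit_gb=5.0, warn_fraction=0.8):
    if used_gb > limit_gb:
        return "OVER_LIMIT"       # charges would apply beyond the free tier
    if used_gb >= warn_fraction * limit_gb:
        return "WARN"             # time to raise an alarm
    return "OK"

print(usage_status(2.0))   # well under the 5 GB limit
print(usage_status(4.5))   # above the 80% warning line
print(usage_status(6.0))   # beyond the free-tier limit
```

In practice you would wire this kind of threshold into a billing alarm rather than checking it by hand.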

So yes, once you do have an account, you can avail all the services that are here, guys. So let's just go ahead and take a look at the console a little, and just jump into the storage services right away. So when you click on this icon here, guys, storage, or services rather, you get access to all these services. As I've already mentioned, AWS provides you with quite a few services, around a hundred services, guys, and they cover different domains. You can see the domain names at the top: compute, robotics, analytics, business applications, storage; you have management and governance, security and identity management, and all those services, guys. So there are n number of services, whether it's migration, whether it's media services: there are services for almost everything. We would be focusing on the storage services, but before we go there:

This is one thing: you can probably select a region you want to operate from; that is, you want to create your resources in this particular region. You always have this option. So what is the reason, guys? Your data is based in a data center, right? I mean, your data is copied somewhere, so if you are using those resources, your data would probably be fetched from that particular location. So you can choose a region which is close to you, if you like; if your business is located somewhere else, you can probably choose that region as well. So you need to go through the list of regions that are available and make a decision accordingly. Now, this being a simple demo, guys, I would be sticking to Ohio, basically. So let's just go ahead and jump into the cloud services part, and let's talk about storage in particular.
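The "pick a nearby region" advice above can be sketched as a tiny helper; the region names are real AWS identifiers, but the latency numbers are invented for illustration:

```python
# Hypothetical measured round-trip latencies (ms) to a few AWS regions.
LATENCY_MS = {
    "us-east-2": 180,   # Ohio
    "ap-south-1": 40,   # Mumbai
    "eu-west-1": 130,   # Ireland
}

def nearest_region(latencies):
    """Pick the region with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(nearest_region(LATENCY_MS))
```

Latency is only one input; data-residency rules and per-region pricing matter just as much when choosing.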

So guys, if you take a look at the storage services that are here, you can see these are the storage services that AWS has to offer to you. We have S3, we have EFS, you have FSx, you have S3 Glacier, Storage Gateway, and AWS Backup. Let me just try and throw some light on some of these services, and then we would probably just go ahead and get into the demo of at least one or two of them.

So guys, when you talk about S3, it is the Simple Storage Service; that is what S3 stands for. Now, this storage is basically an object-and-bucket kind of storage. I mean, the container where you put in your data, where you store your data, is called a bucket, and your data or your files are stored in the form of objects. Let's just go ahead and quickly create a small bucket; this would be a very small introduction to the service. So when you click on this icon, guys, that is S3, it redirects you to the S3 console, where you can actually go ahead and create a bucket. I've mentioned the pointer that there are different services that make your job very easy with cloud service providers, and when you talk about storage services, it is no different.

I mean there are
Services which ensure that your job is fairly easy. So let's just go ahead and see
how easy it is to work with S3. If you wish to create
a bucket guys, if you wish to
create a container, it is very easy. Just go ahead and click
on create bucket and give it some name, say sample-for-today maybe, guys. I'm very bad at naming conventions, but please forgive me for that. Again, the names here should be unique. I mean, if the name is taken somewhere else, you cannot use that name again. So ensure that your name is unique, and probably, guys, you should try and name your buckets in such a way that those are more relatable. Say, for example, if you have a bucket for maybe a particular application, then name the bucket for that application.
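Those naming rules can be captured in a small check. A minimal sketch in Python, based on the common S3 bucket rules (3 to 63 characters; lowercase letters, digits, dots and hyphens; must start and end with a letter or number). The function name is just for illustration:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Rough check against common S3 bucket-naming rules."""
    if not 3 <= len(name) <= 63:
        return False  # too short or too long
    # lowercase letters, digits, dots and hyphens only,
    # starting and ending with a letter or digit
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

print(is_valid_bucket_name("sample-for-today"))  # True
print(is_valid_bucket_name("Sample_For_Today"))  # False: caps and underscores
```

Note this sketch does not cover every edge case (such as names formatted like IP addresses), so treat the console's own validation as the final word.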

Or something like that, so that you have a hierarchy, and in that way you can assign IAM users access to those buckets in a particular order, because you would not want all your users to have access to that bucket, right? So naming convention
becomes very important. So just go ahead and say next. Keep all the versions, guys; versioning becomes very important again. Let's not get into the details, but let me give you a small idea of what happens here with versions. Each time my bucket gets updated, I would want a version or a copy of it, and I would want the latest one. So when I version it, it maintains those copies, and if I need to go back, I can actually go back to a particular level or a benchmark which I set the previous time.
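To picture what versioning buys you, here is a tiny in-memory sketch (plain Python, no AWS calls, all names illustrative): every put keeps the old copies, the latest is returned by default, and you can still go back to any earlier version.

```python
class VersionedBucket:
    """Toy model of S3 versioning: every put keeps prior copies."""
    def __init__(self):
        self.objects = {}  # key -> list of versions, oldest first

    def put(self, key, data):
        self.objects.setdefault(key, []).append(data)
        return len(self.objects[key]) - 1  # version id (just an index here)

    def get(self, key, version=None):
        versions = self.objects[key]
        return versions[-1] if version is None else versions[version]

b = VersionedBucket()
b.put("report.txt", "draft")
b.put("report.txt", "final")
print(b.get("report.txt"))             # final
print(b.get("report.txt", version=0))  # draft
```

Real S3 uses opaque version IDs rather than list indices, but the behavior, latest by default and older copies still reachable, is the same idea.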

Let's stick to basic one and I'd not want
any logging details either. So just next. Again, guys, there are certain public access settings which have been given, so permissions and access. We would talk about that; do not worry for now. Just say next, and I would say create bucket. And guys, the bucket
is already ready. I mean, my container
is already ready so I can just go ahead
and probably open this bucket and put in a file if I want and that was
very easy guys. I say upload and if I'm
connected to my local system, I just say add files.

Let's pick this random file, which uses this name, and I say upload. And there you go, guys, the file is already there. I mean, we've created a bucket,
a container will put in a files. It's as simple as that permissions
as I've already mentioned now, let me talk about this point. I skip this point, right? So let's discuss this a little
so guys security something that you can handle. So you would decide
or you need to decide what are the users that need to access
a particular bucket suppose. Your organization has
different people working on different different teams. I mean you have somebody
who is a developer. There's somebody who's working
on maybe The administrative part on maybe on the designing part. So for particular bucket, you have particular data so you can decide
who gets to access what so setting
in policies becomes important. You can create your own policies
as well. Initially, we saw that certain public access is restricted to this bucket. I said, let's skip that for now. So when I say that public access is restricted, that means not just any public policy can come in and dictate terms, saying use this policy. Why? Because there is a restriction.

This is a private bucket
and not anyone can use it. So guys, when you talk about S3 in particular, you can create buckets, you can have backups. You can have your EBS backups also moved here. You can move your data from here to Glacier; we would be talking about Glacier, do not worry. You can have your Elastic Beanstalk applications, your PaaS applications, and the data can be stored in your S3 buckets. You can have your CI/CD pipelines, and the data can be moved again to the S3 bucket. Now, this is a highly durable and highly reliable way of storing data, and it gives you fast retrieval of data as well.

Let's go ahead and try to understand some other
services as well, guys. So when I come back here, I see EFS, Elastic File System rather. So here, basically, in this storage you can store files. Yes, we are talking about data that is in the form of files, and if you wish to connect it better with the network, you can go for EFS as well. Then you have something called S3 Glacier. Yes, we talked about S3, right, where data is durable and it can be accessed very quickly. Glacier, on the other hand, lets you store archival data. Let me tell you what
archival data is first. So guys when you talk
about archival data, basically what happens with archival data is
you're dealing with data that you do not need
to use every day.

Let me give you an analogy. I'm not sure whether you'd be able
to relate to that. So guys, take your birth certificate now. I belong to India, and we've been digitizing a lot, but we still have a lot of data that is in the form of papers. Even if you go to a hospital and request a birth certificate, it might take days for you to get that birth certificate. Why? Because there is some person who will be going through all those documents and giving you that document. This is just an example; do not take it very seriously. But yeah, so it might take a couple of days, right? And the birth certificate thing, I mean, I might not need a birth certificate every day. It might be once in a decade that I might go to a hospital and probably request that particular birth certificate, right? So this is a kind
of data which I probably do not need regularly or in real time. So I can compromise a little: if the person is giving me that data in two days' time, it's okay, because that does not cost me anything.

I can wait for two days maybe. But that's not always the case; at times you need the data to be retrieved very quickly. If that is the case, you should store it in S3, but if you're fine with this delay, probably you would want
to store it in Glacier. Why? Because Glacier normally takes a longer while to retrieve your data, but the advantage of Glacier is that it is very affordable compared to S3, and S3 is already affordable. You can check the prices.

But if you have archival data, which you won't be using
everyday, you can store it here and the fact
that it takes a longer while won't cost you, I mean, in the perspective of accessing your data in real time, right? So if the data is something that is not needed regularly, you can move it to S3 Glacier. So what happens is, into S3 you can actually move all your data, and then if you realize that there is certain data which you would not need every day, just move it from S3 to S3 Glacier, where the data is stored in archival form and it does not cost you a lot. So again, guys, I won't be getting
into the demo of S3 Glacier.
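The S3-versus-Glacier decision in that birth-certificate example boils down to two questions: how often do you read the data, and can you wait for retrieval? A hedged sketch of that decision; the threshold is made up for illustration, not an AWS rule:

```python
def pick_storage_class(reads_per_month: float, can_wait_hours: bool) -> str:
    """Toy rule of thumb: rarely-read data that can tolerate
    slow retrieval belongs in Glacier; everything else in S3."""
    if reads_per_month < 1 and can_wait_hours:
        return "S3 Glacier"   # archival: cheap storage, slow retrieval
    return "S3 Standard"      # frequent or latency-sensitive access

# Birth certificate: read roughly once a decade, a delay is fine.
print(pick_storage_class(reads_per_month=0.01, can_wait_hours=True))   # S3 Glacier
# Website images: read constantly, must come back fast.
print(pick_storage_class(reads_per_month=100, can_wait_hours=False))   # S3 Standard
```

In practice S3 lifecycle rules can make this transition automatically after an object reaches a certain age.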

We have a session on S3 Glacier, or Amazon Web Services Glacier rather, and to use it, what you need is probably a third-party tool that makes it easier for you to retrieve the data. So I won't be getting into the stuff where I download that tool and show you how it works. It's very simple: just like we created buckets, you create vaults there, and you probably move in your data, and you can retrieve that data. But again, it takes a long while to retrieve that data. So it is similar to S3, but a little different. So yeah, that is S3 Glacier. We understood what EFS is
and what S3 is then again guys, you have some other
services as well here if I Scroll down you have
your storage Gateway.

You have your AWS
backup as well. So what are these things? And what do these things
do? Well, Storage Gateway and AWS Backup. Basically, Backup, as it says, lets you have a backup of your data, and you can save it from going down and stuff like that. When you talk about Storage Gateway, these are services that let you move your data from your on-premise infrastructure to the cloud. So if you already have data that is on your existing on-premise infrastructure, you can actually move that data to the cloud as well. So there are services to help you do that, and those services are your Storage Gateway services. So guys, we've discussed some of these services;
there is something else which is called Elastic Block Store. What it does is it lets you create volumes, snapshots and copies of the volume that is attached to your instances.

Let's go ahead and take
a look at how this works. I mean there are a lot
of pointers to talk about it. So as I move further, I would be discussing
those pointers while I also show you how to do it. So guys, when I say EBS or Elastic Block Store, what that does is it lets me attach some kind of volume to my instance. Now, instances: let me tell you what instances are first.

Now, when you talk about cloud services, they give you compute services where you can spawn instances, or spawn temporary servers, where you want to host your data. Now, I won't be going out and buying a new machine each time, right? Instead, that is what the cloud does for me. Okay, guys, I'm not sure whether there was a lag while you were going through this session. Let me tell you what happened: the streaming connection to the software which I'm using to stream this session did go down a minute back, and it shows now that it is connected.

So I would like to know whether I'm audible to you people or not. If yes, then we can continue with this session, guys. Okay, I'm guessing we're fine, so I'm just gonna go ahead and continue with the session. I was talking about instances; let me talk a little more about them. So when I talk about these servers that are ready to use, basically these servers are something that you can use, and you can have some memory attached to them. So what we're going to do is we're going to go ahead and launch one instance and understand how memory or storage works with it.

So to do that, we are going to go ahead and just launch that particular service. It is called EC2, which is a compute service, guys. So here I can actually go ahead and create servers, or launch instances in simple words. So let's just go ahead and launch a particular instance. Now, I have the freedom of launching Linux-based, Windows-based, Ubuntu-based kinds of instances. So you have the freedom of choosing what kind of instance you want. This being a simple demo, guys, I'm going to stick with the Windows instance. I'm not going to show you how to deal with that instance because I've done that in previous sessions; you can take a look at some of those sessions as well, guys. Let's just go ahead and launch this particular instance. Now guys, this is a Windows instance and, okay, not this one, let me launch a basic one.

This is also free tier, guys. But yeah, I would want this. Make sure that your instance is EBS-backed. So guys, your backing up works in two ways: you can back it up on S3, or you can back it up on EBS, that is Elastic Block Store. Now, Elastic Block Store is important. Why? It lets you create images and volumes. What are those? We'll talk about that once we create this instance. So ensure that for now it is EBS. So if I click on this icon, it would give me details of what kind of instance I'm launching when I say t2.micro.

It is a small instance which has one vCPU and one gigabyte of memory. For now, I can just go ahead and say next. Okay, some of the other details, whether you want a VPC or not; let's not discuss that. And then you get into the storage part, guys. This is the device to which I am attaching my root volume, so this is the path rather. I need to focus on this: it is sda1, guys, that is /dev/sda1. You need to remember this when you create new volumes. And the types of volumes that you can attach to your instance are these: you have general-purpose SSD, provisioned IOPS and magnetic.

Magnetic is something that is getting outdated and probably might be replaced. So these are the few ones; you also have some other kinds of volumes that you can attach, but the point that you need to remember is, when you talk about having a primary volume, in that case you have only these options, because these are bootable, guys. There are certain other volumes that you can attach; if I attach a secondary volume, you see the options are more.
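That device path matters later when we attach volumes, so here is a small sketch of the convention: the root volume commonly sits at /dev/sda1, and AWS's documentation recommends /dev/sdf through /dev/sdp for additional data volumes on Linux instances. The function name is just for illustration:

```python
import re

# Root volume commonly lives at /dev/sda1; AWS recommends
# /dev/sdf through /dev/sdp for additional Linux data volumes.
ROOT_DEVICE = "/dev/sda1"

def is_valid_data_device(name: str) -> bool:
    """Check a secondary-volume device name against the /dev/sd[f-p] convention."""
    return re.fullmatch(r"/dev/sd[f-p]", name) is not None

print(is_valid_data_device("/dev/sdf"))  # True
print(is_valid_data_device("/dev/sda"))  # False: sda is the root device range
```

Windows instances and newer Nitro-based instances map these names differently inside the OS, so always note the exact path the console shows you, as we do in this demo.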

I have throughput optimized HDD and then I have cold HDD as well. But this is a basic thing; we're not going to get into the details of that. You would skip that. So guys, all I'm trying to say is this is the device, this is the size, and this is the type of volume that would be attached to my instance. So let's just go ahead and say next. Tags, for now, let's not add anything, and then let me configure the settings. So guys, when I launch an instance, it says that security is not optimum. It's okay, I mean, you can assign the ports you want when you use it for a higher security purpose. And then this is important, guys: for each of your instances,

you need a key pair, which is a secure way of logging in. So this is a second level of authentication: once you're logged into your account, you would be needing a key pair if you wish to use this instance. So make sure you create one and you store it as well. If you have one which you can use, you can do that, or you can just create one, say new-key, and I say download, guys. Once you download it, keep it safe somewhere; it is stored in the form of a .pem file. So do that, and then I say launch the instance. So guys, once this happens, if I just go back to the EC2 dashboard, I can see that there is an instance. For now it shows 0 running. Why? Because, guys, my instance is still getting launched. It takes a minute or two probably to launch an instance. The reason for this is that a lot of things happen in the background.

I mean, a certain network is associated. If you talk about an instance, it needs to communicate with other instances, right? So in that case, you need to have a network that lets all these instances connect. So a network is set up here basically, the storage volume is attached, and a lot of things happen. That is why there are certain status checks that your instance needs to go through, and hence it takes a minute or so to launch this instance. So if you take a look at the status checks, it says that it is initializing. So if you refresh it, it finishes at times. So let's just try our luck and see whether... no, it's still initializing, but guys, we can see the volume that would be attached to it.

So let me just come here, or rather go here: if I click on volumes, there is a volume that is attached to it. So there is a volume that has a size of 30 GB. It is here already and it is in use, so it would be attached to my instance once it is up and running. So the point I'm trying to make here is, what Elastic Block Store does is it lets you manage all these things. Now, there are two ways to manage these things: either you create a copy of this volume, detach this volume and then attach the new one, or you can directly scale your existing volume or make changes to it right away.

So what Elastic Block Store does is it lets you manage the storage. So again, let me tell you how it works. When I create an instance, it gets created in a particular region, right? So in that particular region, say, for example, now I'm based in India, so I have a data center in Mumbai. My instance would be created at that data center, and the storage for it would also be there.

So there is no latency when I try to use that storage. So this is what EBS does: it lets you manage that particular storage. So how it works is, I can create a copy of it. What this copy does is it serves two purposes: next time, if I wish to make changes to that storage, I can do that, and if this particular storage or volume goes down, I have a backup copy. Again, I can create snapshots as well. Now, what snapshots do basically is they let me replicate my instance and the volume that is attached to it. So instead of creating an instance again and again, if I've defined certain properties for my instance and do not want to worry about defining those properties again and again, I can just create a snapshot, or I can rather create an AMI out of it, which I can store and use next time if I want to spawn a similar instance. So this is where EBS helps: it lets you have backups of all these storages, and it lets you create copies of them.

So even if something goes down, you can work on the copy that you have. So guys, by now our instance would be created; let's just go ahead and take a look at it. It says it is running, guys, and we've already taken a look at the volume. Let us create a copy of this volume. To do that, I'm going to go to the actions, my instance is selected already. I can just go to modify and make changes to this volume right away, which is an easier way, but I'm going to show you how it can be done the other way as well, how it used to work previously. So I can just say create a snapshot, give in the details, say sample, and I say create. So guys, our snapshot is created. If I come here, I can take a look at the snapshot again.

It is pending; it might take half a minute for the snapshot to get created, so I can just come here and refresh rather. These things at times take a little while. So guys, we would be creating a copy of it, probably by detaching the volume that we have created, which is attached to our instance, and we would replace that with the copy that we are creating now. So once this thing is done and created, we can do that. For some reason it's taking a longer while today; let's hope that it gets done quicker. Look, it's still processing. Just bear with me while this happens. Again, guys, if I was too fast and if I missed out on certain things, I would like to tell you that you can go through our other sessions on YouTube, and probably you would be in a much better state to understand what has happened here. Again, there was an outage, or not an outage rather.

It's that my software, the streaming software, did not work properly, and probably there was a lag of a minute or two. So I'm hoping that you did not miss out on anything that was happening. Meanwhile, just hope that this snapshot gets created quickly. It is still pending, and this is irritating at times when it takes a long while. It's completed, guys; the snapshot is ready. I can just go ahead and create a volume out of it, which I wish to attach. So guys, there are certain details that we need to fill in. So for that, let's go back first, let's go back to the instance that we have and see where the instance is created, guys.

So as you can see, if you come here, it would give you the details of the place where the instance is created. It is us-east-2c. So when you create a volume, it is necessary that you create it in the same availability zone, guys, because, as I've already mentioned, the benefit of having it in the same zone is that you can attach it to your existing instance, and it saves you from various latencies. So, yep, let's go back to the snapshots part and say create a volume out of it.
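That same-zone rule can be sketched as a simple guard: before attaching, compare the volume's availability zone with the instance's. This is a toy model with illustrative names, not the actual EC2 API:

```python
def attach_volume(volume_az: str, instance_az: str) -> str:
    """Toy guard: EBS volumes attach only within their availability zone."""
    if volume_az != instance_az:
        raise ValueError(
            f"volume in {volume_az} cannot attach to instance in {instance_az}"
        )
    return "attached"

print(attach_volume("us-east-2c", "us-east-2c"))  # attached
try:
    attach_volume("us-east-2a", "us-east-2c")     # zone mismatch
except ValueError as e:
    print("error:", e)
```

To move a volume's data across zones you do exactly what this demo does: snapshot it, then create a new volume from the snapshot in the target zone.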

I say create, and then, let's say I want more storage, guys, say 90. Okay, this is general purpose and it is in 2a. So let's go to 2c; if I'm not wrong, it was 2c. Let's just go ahead and create it in 2c and say create volume. Close. So guys, our volume is created successfully. Again, guys, now you can take a look at it.

From this perspective, I have my snapshot here, right? So this snapshot says 30 GB. That does not mean that the snapshot which I took is 30 GB in size; it says that it was created from a volume whose size is 30 GB. So there's a difference between these two things, guys; understand that as well. So I have a volume which is based in availability zone 2c, and I have an instance which is here, and it again is in availability zone 2c, so we can attach the volume to it.

Let's just again go back to the volumes part. So guys, I have two volumes. I created this one, and this one is attached to my instance. Let me just try and remove this first: detach volume. Okay, it's giving me an error. Try to understand why this error is there, guys. My instance is already running, so I cannot directly remove this volume from here. For that, I would have to select this instance, go to instance state and say stop, so it stops working for now, and once it does, I can detach the volume. So for now, what you can see is, there are these volumes here; it is in use, right? So once the instance stops, it would be available and won't be in use, so I can replace it for this instance. It is stopping; it hasn't stopped yet. So guys, do not worry, we would be done with the session very soon.
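The error we just hit follows a simple rule: a root volume cannot be detached while its instance is running, so you stop first, then detach. A tiny state sketch of that flow, with illustrative names rather than the real EC2 API:

```python
class Instance:
    """Toy model of the stop-then-detach flow for a root volume."""
    def __init__(self):
        self.state = "running"
        self.root_volume = "vol-1"

    def stop(self):
        self.state = "stopped"

    def detach_root_volume(self):
        if self.state == "running":
            raise RuntimeError("cannot detach root volume while instance is running")
        vol, self.root_volume = self.root_volume, None
        return vol

i = Instance()
try:
    i.detach_root_volume()        # same error the console just showed us
except RuntimeError as e:
    print("error:", e)
i.stop()
print(i.detach_root_volume())     # vol-1
```

Non-root data volumes can usually be detached from a running instance (after unmounting inside the OS); it is the root volume that forces the stop.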

And once we are done probably
you all would be free to leave. I believe that this session
has taken longer than my normal sessions. But yeah, there was
a lot of stuff to talk about. We talked about the complete storage services that AWS has to offer to you people, hence this session was so long. So let's just quickly go ahead and finish the stuff.

It has stopped. So guys, I can now go ahead and remove the volume, or detach this volume, and go ahead and attach the other one. If I say detach, it would detach. Yeah, see, both are available now. Let's try to attach this volume and say attach volume. Search, this is the instance, guys, which I have created, and you need to give in the device details. Which was slash what? What were the details? Let's just go back and take a look at the details that we're supposed to enter in here. So guys, you need to give in the path that we talked about, which is the drive that we've discussed, right? So that is the path that you need to enter. And then you actually go ahead and say /dev/sda1, and probably you would be good to go.

So this is the other thing: I do not remember the exact path. So you need to go ahead and put in these details here. If you put in these path details, guys, you can just go ahead and attach your volume right away, and this volume would get attached to your instance. So this is how it works, and you can actually go back and do other things as well. So if I just come here, I have this instance. So what you have to do is you have to actually go ahead and click on this thing. For now, it's not working, but if you just come back here, to the volumes part, which we were at in the previous

slide, you can actually go ahead and attach the volumes. Now, here you go: I just go to instances, probably go back, and I say EC2 again. Yeah, if I come back to volumes, guys, you can attach the volumes that are there, you can delete those, and you can do a number of changes that you wish to do. So just go ahead and attach these volumes, and you would be more than good to actually go ahead and launch your instances or manage the storages that are there. Again, the only thing that I missed out on is the path; I told you to note the path, the device name, right? You just have to go ahead and enter in the device name here. And if you enter in the device name while attaching your volume, your volume would get attached to that instance right away. So yes, guys, this pretty much sums up today's session. We've talked about quite a few things here, guys. We've talked about S3 services, and we've talked about EBS in particular.

We've understood how to detach a volume and how to attach one; I just did not show you how to attach the volume, but you can do that. The reason I'm not showing you that is that I probably lost out on the device name, which normally comes in here. So before you detach your volume, make sure that you have this name, and when you do launch or attach your volume to that particular instance, all you have to do is go to the volumes part, and when you say attach to a particular instance, put in that device name there, and your volume would be attached to your instance. Then you can just go ahead and start this so-called instance again, and you'll be good to go, guys. So as far as this particular session goes, guys, these are the pointers I wanted to talk about. I hope that I've talked about most of these pointers and I've cleared all your doubts that were there. So that's when you talk about S3. Now, it is Simple Storage Service, which is simple or easy to use in a real sense.

It lets you store
and retrieve data which can be in any amount which can be of any type
and you can move it from anywhere using
the web or the internet. So it is called the storage service of the internet. What are the features of this particular service? It is highly durable, guys. Now, why do I call it durable? It provides you durability of 99.999999999%, that is eleven 9s. Now, when you talk about that amount of durability, it is understandable how durable this service is. What makes it this durable? It uses a method of checksums.

It constantly uses checksums to analyze whether your data was corrupted at a particular point, and if yes, that is rectified right away. That is why this service is so durable. Then, it is flexible as well. As I've already mentioned, S3 is a very simple service, and the fact that you can store any kind of data, and you can store it in any available region, is what I would mean by that sentence. It makes it highly flexible to store the data in this particular service. And the fact that you can use so many APIs, you can secure this data in so many ways, and it is so affordable; it meets different kinds of needs, thus making it so flexible. Is it available? Yes, definitely, it is very much available. As we move into the demo part, I would be showing you which regions basically let you create these kinds of storages and how you can move and store data in different regions as well.
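That checksum idea is easy to demonstrate with Python's hashlib: store a digest alongside the data, recompute it on read, and flag any corruption. This shows the general technique only, not S3's internal implementation:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a blob of data."""
    return hashlib.sha256(data).hexdigest()

# "Store" an object together with its checksum.
data = b"hello S3"
stored = {"data": data, "checksum": digest(data)}

# On read, recompute and compare to detect corruption.
print(digest(stored["data"]) == stored["checksum"])  # True

stored["data"] = b"hello S3 (corrupted)"
print(digest(stored["data"]) == stored["checksum"])  # False: corruption detected
```

Once a mismatch like this is detected, a durable store repairs the bad copy from one of its redundant replicas, which is how the eleven 9s figure is achieved.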

So if you talk
about availability, yes, it is available in different
regions, and the fact that it is so affordable makes making it available all the more easy. Cost-efficient? Yes. Now, to start with, we normally do not get anything for free in life, but if you talk about S3 storage, AWS has a free tier which lets you use AWS services for free for one complete year, but within certain limits. Now, when you talk about S3, you can store 5 GB of data

I believe that is more
than enough and what it also does is it lets you have somewhere
around 20,000 get requests and somewhere around 2,000
put requests as well. So these are something
that let you store and retrieve data
apart from that. You can move in 15 GB
of data every month outside. Side of your S3 Service as well. So if you are getting
this much for free, it is definitely
very much affordable. Also, it charges you on pay
as you go model. Now. What do I mean by this? Well, when I say pay
as you go model what we do here is we pay only
for the time duration that we use the service
for and only for the capacity that we use this service form.
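Those free-tier numbers, 5 GB stored, 20,000 GET requests, 2,000 PUT requests and 15 GB transferred out per month as quoted above, can be wrapped in a quick usage check. The limits below are simply the ones mentioned in this session, so verify them against current AWS pricing:

```python
# Free-tier limits as quoted in this session; check AWS pricing for current values.
FREE_TIER = {
    "storage_gb": 5,
    "get_requests": 20_000,
    "put_requests": 2_000,
    "transfer_out_gb": 15,
}

def over_free_tier(usage: dict) -> list:
    """Return the metrics where monthly usage exceeds the free-tier limit."""
    return [k for k, limit in FREE_TIER.items() if usage.get(k, 0) > limit]

usage = {"storage_gb": 4.2, "get_requests": 25_000,
         "put_requests": 500, "transfer_out_gb": 1}
print(over_free_tier(usage))  # ['get_requests']
```

This is essentially what the billing alarms mentioned later in the demo do for you: compare usage against a limit and warn before you are charged.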

So that is why
as you move along if you need more services,
you would be charged more. If you do not need more amount of the service you
won't be charged to that. Extent, so is it cost efficient? Definitely it is scalable. Yes. That is the best thing
about AWS Services. Most of them are scalable. I mean you can store
huge amount of data, you can process
huge amount of data. You can acquire
use amount of data if it is scalability that is your concern you do
not have to worry about it here because even this
service readily scales to the increasing data that you
need to store and the fact that it is pay as you go model
did not have to worry about the cost Factor as well. Is it secure definitely? It is now you can encrypt
your data you have various bucket policies as well that let you decide
who gets to access your data who gets to write data
or gets to read data.

And when I said you can encrypt your data, you can actually go ahead and encrypt data both on the client side and on your server side as well. So, is it secure? I believe that answers the question on its own. So guys, these were some of the features of Amazon S3. So guys, now let us try to understand how S3 storage actually works. Now, it works with the concept of objects and buckets. Now, a bucket, you can think of it as a container, whereas an object is a file that you store in your container. These can be thought of as AWS S3 resources. Now, when I say an object, basically an object is your data file. I've already mentioned that you can store any kind of data, whether it's your images, whether it's your files or blobs, whatever it is. These are nothing but your data, and this data

It is a combination of your data plus some metadata, or information about the data. What kind of information? Basically, you have the key, that is the name of the file that you use, and the version ID is something that tells you which version you are using. As we discussed versioning, I would talk about the version ID a little more, but meanwhile, I believe this is more than enough: your objects are nothing but your files with the required metadata, and the buckets, as I've already mentioned.
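An object being "data plus metadata" can be pictured as a small record: the key (the name), a system-generated version ID, and the bytes themselves. A plain-Python sketch, not the actual S3 wire format:

```python
import uuid

def make_object(key: str, data: bytes) -> dict:
    """Toy S3 object: the data itself plus system-generated metadata."""
    return {
        "key": key,                       # the object's name
        "version_id": uuid.uuid4().hex,   # generated for you, as S3 does
        "size": len(data),
        "data": data,
    }

obj = make_object("photos/cat.jpg", b"\xff\xd8\xff")
print(obj["key"], obj["size"])  # photos/cat.jpg 3
```

Notice that you never choose the version ID yourself; the system stamps it on each write, which is what makes the versioning behavior shown earlier possible.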

These are nothing but containers
that hold your data. So how does it work, guys? Well, what happens is, basically, you go ahead and create buckets in regions, and you store your data in those regions. How do you decide what buckets to use, what regions to use, where to create the bucket and all those things? Well, it depends on quite a few factors. When I say I have to create a bucket, I need to decide what region would be more accessible to my customers or to my users, and how much that region charges me, because depending upon the region, your cost might vary. So that is one factor that you need to consider, and latency as well. I mean, if you put your data in an S3 bucket that is far away from you, fetching it might cause a high amount of latency as well.

So once you consider these factors, you can create a bucket, and you just store your objects. When I said version ID and key, actually the system automatically generates these for you. So for you, it is very simple: create a bucket, pick up your object, put it in, or just go ahead and retrieve the data from the bucket whenever you want. So I believe this gives you some picture of what S3 is. Now, let me quickly switch into the demo part and give you a quick idea or quick demo as to how S3 works, so that it is not too much theory for you people. So guys, what I've done is I've actually gone ahead and switched into my Amazon Management Console. Now, as I've already mentioned, AWS gives you a free tier with which you can use AWS services for free for one complete year. Mine is not a free tier account, but yeah, if you are a starter, you can create a fresh account.

You just have to go ahead and give in certain details. All you do is go to your web browser, search for AWS free tier and sign up with the required details. They would ask you for your credit card or debit card details; enter any one of those for the verification purpose. And you can actually go ahead and set up alarms as well, which would tell you, okay, this is the limit to which you have used the services, and that way you won't be charged for excess data usage or service usage. Having said that, guys, this is about creating an account; I believe it is fairly simple. Once you create an account, this is the console that would be available to you. What you have to do is go ahead and search for Amazon S3. If you search S3, it would redirect you to that service page. So guys, as you can see, this is the company's account; probably somebody uses it in the company, and they have buckets that are already created.

Let's not get into that; let's just go ahead and create our own bucket and put some data into it. It is fairly simple, guys, I've already mentioned it is a very simple-to-use kind of service. All I have to do is click on create bucket and enter a name for the bucket, guys. Now this name is unique; it is globally unique. Once you enter a name for the bucket, you cannot use the same name for some other bucket.

So make sure you put in a valid name. And the fact that I used the term "global" reminds me of something that needs to be explained. So guys, as you can see, if I go back here, I want you to notice this part. When you are in the Management Console and you open any service, by default the region is North Virginia. Okay? So if I create a resource, it would go to this region. But when I select the service that is S3, you can see that this region automatically changes to Global. That means it is a global service. It does not mean that you cannot create buckets in particular regions; you can do that, but the service is global, is what they're trying to say. So let us go ahead and create the bucket. Let us call it "todaydemo". You cannot use caps, guys, and you cannot use some symbols, so you have to follow the naming convention as well.

"todaydemo" it is. Sorry, I'm very bad at naming conventions, guys; I hope it is okay. Let it be in U.S. East. You can choose other regions as well, guys, but for now let it be whatever it is, so I'm going to stick to North Virginia. There are 76 buckets that are being used. Let us just say next. "Bucket name already exists"; so this one was already taken, guys, see? You cannot use it. Let's call it, say, ramos bucket 1 3 1 1 3. Okay. Do you want to keep all the versions of the object? We will talk about what versions are, okay guys; meanwhile, just bear with me. I'm just going to go ahead and create a bucket, create bucket, and there you go, guys.

I'm sure the ramos bucket should be here somewhere. Here it is. If I open it, I can just go ahead and create folders inside it, or I can directly upload data. So I say upload, select a file. Let's just randomly select this file; it is Guido van Rossum, the founder of Python, basically. Let's just say next, next, next, and the data is uploaded, guys. You can see the data being uploaded, and my file is here for usage. So guys, this is how the object and bucket kind of stuff works. You can see that this is the data that I have; if I click on it, I would get all the information.
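The create-a-bucket, upload, and retrieve flow just demonstrated can be sketched in a few lines. This is only an illustrative in-memory model of the concepts (bucket, key, auto-generated version ID), not the AWS API, and the bucket and file names are made up:

```python
import uuid

# Toy model of the S3 ideas from the demo: a bucket maps a key (the
# object's name) to data, and the system generates a version ID on each put.
class Bucket:
    def __init__(self, name):
        self.name = name               # globally unique in real S3
        self.objects = {}              # key -> (version_id, data)

    def put_object(self, key, data):
        version_id = uuid.uuid4().hex  # an opaque ID, generated for you
        self.objects[key] = (version_id, data)
        return version_id

    def get_object(self, key):
        version_id, data = self.objects[key]
        return data

bucket = Bucket("todaydemo")
bucket.put_object("guido.jpg", b"...image bytes...")
print(bucket.get_object("guido.jpg"))
```

In the real service these three steps correspond to the console's create bucket, upload, and open actions shown above.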

What is the key? What is the version value? For now, let's not discuss versions, but this is the key, or the name of the file that I've uploaded. So it is fairly clear, right guys? So let us just quickly switch back to the presentation and discuss some other stuff as well. Well now guys, another important topic that is to be discussed here is S3 storage classes. Now, we've discussed how the data is stored, or how buckets and objects work, but apart from that we need to discuss some other pointers as well, as in how does AWS charge me, or what kind of options do I have when it comes to storing this data.

So it provides you with three options, guys: standard, infrequent access, and Glacier. Let me quickly give you an explanation of what these storage classes mean and what they offer us. When I say standard, it is the standard storage which gives you low latency. So in case there is some data that needs to be fetched right away, you can actually go ahead and use standard storage. Say, for example, I go to a hospital for a certain kind of checkup. In that case my details would be entered in, and given that I am getting myself checked or diagnosed in the hospital, this data is important, and if it is needed right away, it should be available. So this kind of data can be stored in your standard storage, where the latency is very low. Next we have infrequent access. Now, what do I mean by that? In this case my latency still has to be low, because I'm talking about data that I would actually need at any time if I want it, but I store this data for a little longer duration; all I want is for this data to be retrieved quickly. Say, for example, I get a particular report or a particular test done.

So in that case I actually go ahead and submit my details or, say, my blood samples, but I need this information maybe after three days. So in this scenario, I would want to store this data for a longer term, but the retrieval should still be fast. In the first case that was not the case: if I needed that data right away, and if I wanted it to be stored for a very short duration, I would use standard. But if I want to store it for a longer duration, and I want quick retrieval, in that case I would be using infrequent access. And finally, Glacier. We have already discussed this: here your retrieval speed is low and the data needs to be put in for a longer duration, and that is why it is more affordable. If you take a look at the stats that are there in the image above, you can see that the minimum storage duration is nothing for standard; for infrequent,

it is 30 days, and for Glacier it is 90 days. If you take a look at latency, it is milliseconds, milliseconds, and four hours. So that itself explains a lot of stuff here, as in what these classes are and what they do. I believe some idea is clear to you people; again, as we move into the demo part, we would be discussing this as well, and we would also discuss expiration and transition, which build on these terms. But let us move further and try to understand something else first: versioning and cross region replication. Now guys, when I say versioning, I'm actually talking about keeping multiple copies of my data. Now, why do I need versioning? And why do I need multiple copies of my data? I've already mentioned that AWS S3 is highly durable and secure. How is that? Because you can fix the errors that are there, and you can also have multiple copies of your data; you can replicate your data. So in case your data center goes down, a copy of it is maintained somewhere else as well.

How is this done? By creating multiple versions of your data. Say, for example, an image: I store it in my S3 bucket. What happens here is there is a key, the name, say "image", and the version is some 33333. Right? Now take a look at the other image. If I actually go ahead and create a copy of the first image, its name would remain the same but its version would be different. So suppose both of these images reside in one bucket: what they are doing is giving me multiple copies. Now, in the case of an image not a lot would change, but if I have doc files or data files, versioning becomes very important, because if I make changes to particular data, or if I delete a particular file, a backup should always be there with me, and this is where versioning becomes very, very important.

What are the features of versioning? By default, versioning is disabled when you talk about S3; you have to go ahead and enable it. Versioning prevents overwriting or accidental deletion; we've already discussed that. You can get a non-current version by specifying its version ID as well. What do I mean by this? That means if I actually go ahead and create one more copy of the data and store it, the latest copy would be available on top, but I can go to the versions option, put in the ID that belongs to the previous version, and fetch that version as well. So what is cross region replication? Now guys, we've discussed versioning.

Let us talk about another important topic, that is, cross region replication. Now, when you talk about S3, basically what happens is you create a bucket in a region and you store data in that region. But what if I want to move my data from one region to another, from one bucket in this region to another bucket in another region? Can we do that? Yes, cross region replication lets you do that. So what you do is you basically go ahead and create a bucket in one region, you create another bucket in another region, and you give access to the first bucket to move data from itself to the other bucket.

So this was about versioning, this was about cross region replication, and I believe we've also talked about storage classes. Let me quickly switch into the demo part and discuss these topics in a little more detail. So guys, moving back: what we have done is we've actually gone ahead and created a bucket already, right? Now, what was the name of the bucket?

It was ramos, if I'm not wrong. Yep. So if you click on the bucket name, ramos, it basically shows you these details, guys. Now you can see that your versioning is disabled, right? So if I click on it, I can come to this page and say enable versioning. That means a copy of the data that I create is always maintained. So if I go to the ramos bucket, or I just move back (this interface can be a little irritating at times; you have to move back and forth every now and then), so guys, there is a file which we have stored.

You can just take a look at this date first. It says that it is 2:35; that was the time when the object was uploaded. Let me just upload the same file again. This is the file to be uploaded, as in next, next, next, upload. So where is this file getting uploaded? You can see the name of the file is still the same; we have only one file here. Why? Because it was recently modified at 2:45; from 2:35 it got changed to 2:45. So it is fairly clear, guys, what is happening here: your data is getting modified. And if you wonder what happened to the previous version, don't worry. If you click on this show option, you can see that both of your versions are still here, guys: this one was created at 2:35, and this one at 2:45. So this way data replication and data security work much better. You can secure your data, you can replicate your data, so in case you lose your data, you always have the previous versions to deal with. How does the previous version thing work? What happens is, if I delete this file, what Amazon S3 would do is set a marker on top of this file.

And once I delete it, if I search for that ID, that ID won't be available. Why? Because the marker has switched to the next ID now. So whatever I want to do, I can do with the next ID as well. There is one more thing that you also need to understand here: what happens to the file? I mean, I've actually deleted a file, but a version is there with me; can I delete all the versions? Yes, you can specify the ID and delete all the versions that you want. You can also do one thing: set a particular lifecycle for your files. When I say lifecycle, you can decide the following.
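Before moving on to lifecycle: the versioning and delete-marker behaviour described above can be sketched the same way. Again, this is an illustrative in-memory model, not the AWS API. A delete only pushes a marker on top, and older versions remain retrievable by their IDs:

```python
import itertools

# Toy model of S3 versioning: every put keeps the older copies, and a
# delete places a "delete marker" on top instead of erasing anything.
class VersionedBucket:
    def __init__(self):
        self.versions = {}            # key -> list of (version_id, data)
        self._ids = itertools.count(1)

    def put(self, key, data):
        vid = f"v{next(self._ids)}"
        self.versions.setdefault(key, []).append((vid, data))
        return vid

    def delete(self, key):
        # a delete marker hides the object but keeps the old versions
        self.versions[key].append((f"v{next(self._ids)}", None))

    def get(self, key, version_id=None):
        history = self.versions[key]
        data = history[-1][1] if version_id is None else dict(history)[version_id]
        if data is None:
            raise KeyError("latest version is a delete marker")
        return data

b = VersionedBucket()
v1 = b.put("report.doc", "draft")
b.put("report.doc", "final")
b.delete("report.doc")
# a plain get now fails, but the older version is still there by ID
print(b.get("report.doc", version_id=v1))   # -> draft
```

This mirrors the demo: after the delete, fetching without an ID fails (the marker is on top), while fetching by a previous version ID still returns that copy.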

I have a file in that storage we've discussed; the storage classes are standard, infrequent access, and Glacier. What you can do with lifecycle management is decide: okay, for a particular time duration I want this file to stay in standard; maybe after a while I want to move it to infrequent access, and after a while I want to move it to Glacier. Say, for example, there is certain data which was very important for me, but having used that data, I don't want to use it for the next few months.

So in that case I can move it to the other storage classes, where probably I won't be needing to use that data for a long while, and by doing that I won't be paying for this data as I used to pay for standard, because standard is the costliest of the three. So let us quickly see: can we do that, or how does it work at least? If I just go back, this is my file. I can just go ahead and switch to management; in that I have the option of lifecycle. If I click here, there is no lifecycle yet; I can add a lifecycle rule.

Let me call it "new" and say next. It asks me what I want to do. You can add rules in a lifecycle configuration to tell Amazon S3 to transition objects to another storage class; there are per-request fees when using lifecycle to transition data to any other S3 or S3 Glacier storage class. So which version do I wish to use? Current. I can say yes, add a transition, and I can select the tier to transition to after 30 days. And if I say next, it brings up expiration; you can select other policies as well. So guys, when I say transition, the first thing it does is tell Amazon S3 what time to transition to which storage class; and expiration tells me when the object expires, so I can decide when to clean up the objects and when not to. Let's not do that for now. Let's just say next, next. So guys, what will happen here is that after 30 days my data would move to Standard-IA storage. So you can actually go ahead and then decide whether you want to move to Glacier; in that drop-down you had more options as well.
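For reference, the rule configured above can also be expressed as a lifecycle configuration document, roughly in the shape S3's lifecycle API expects. The rule ID and day counts mirror the demo; treat this as a sketch rather than a copy of the console's output, and note the storage-class names (STANDARD_IA, GLACIER) are S3's own identifiers:

```python
import json

# Sketch of a lifecycle configuration: keep objects in standard, move them
# to infrequent access after 30 days, to Glacier after 90, expire at a year.
lifecycle = {
    "Rules": [
        {
            "ID": "new",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},   # empty prefix: apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # optional expiration, i.e. the clean-up discussed above
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```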

I did not do that, but it is pretty understandable: you can move to Glacier as well. So this is about lifecycle, guys. One more thing: you have something called replication, and you can add replication as well if you wish to replicate your data across regions. I believe, guys, I do not have access to do that because I'm using someone else's account for now, but let me just give you some idea as to what you can do to replicate your data. You can just go ahead and click on get started. So replication, to remind you people, is nothing but a process of moving data from a bucket in one region to another bucket in some other region. So for that I need to select the source bucket.

So let us just say that this is the bucket that I have, and say next. Now guys, in my case I haven't created the second bucket. What you can do is just go ahead and create one more bucket; once you create it, you can select it as the destination bucket. For now, let us just say that this is a bucket that has been created by someone else. I'm not gonna transfer data there, but let's just select this for the demo's sake. This is the bucket that I have. See, it says that the bucket does not have versioning enabled. This is a very important point, guys. I showed you how to enable versioning, right? If you select the bucket, there is an option on the right side saying versioning; you can go ahead and enable versioning there. Once you enable versioning, you would be able to use this bucket. Do you want to change the storage class for the replicated objects? If you say yes, it would give you the option of selecting what storage class you want. If you don't, you don't have to; you can say next. Then you have to enter an IAM role.

If you do not have any, you just say create a role, and then enter the role name. In this case I do not have any details about this, and I don't want to create a role because this account does not belong to me; sorry for that inconvenience. But you can actually go ahead and select create a role, just say next, and your cross region replication starts working. What happens after that is, once you store your object in a particular bucket, you can move the data from that bucket to the other bucket, and a copy of your data is maintained in both the buckets that you use.
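For reference, here is roughly what such a replication setup looks like as a configuration document, in the shape of S3's replication API. The IAM role ARN and bucket names are made-up placeholders, and as noted above, versioning must be enabled on both buckets for this to work:

```python
import json

# Sketch of a cross-region replication configuration: replicate every
# object to a destination bucket, optionally changing its storage class.
replication = {
    # the role S3 assumes to copy objects on your behalf (placeholder ARN)
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Destination": {
                "Bucket": "arn:aws:s3:::ramos-bucket-replica",
                # the storage-class change offered in the console wizard
                "StorageClass": "STANDARD_IA",
            },
        }
    ],
}

print(json.dumps(replication, indent=2))
```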

So this is what cross region replication is, guys. I believe we have discussed what storage classes are, we have discussed what cross region replication is, and we've discussed versioning in general. Let's quickly move back to the presentation and discuss the remaining topics as well. So guys, I've switched into the presentation part. Till now we've discussed how cross region replication works, we've discussed how versioning works, and we have seen how to carry out those processes.

The other important topic that we need to focus on is this: we now know how to create versions and how to move data from one place to the other, but what if I have to move data from a particular location to a location that is very far away from me, and still ensure that there is not too much latency? Because if you're moving data from one location to a location that is far away from you, it is understandable that it would take a longer while: we are moving data over the internet, so the more data you move and the further you move it, the longer it should take. So how do you solve that problem? You have S3 transfer acceleration. You can do that by using other services as well; we discussed Snowball and Snowmobile, but they physically move the data, and at times it takes a number of days to move your data. With S3 transfer acceleration that is not the issue, because it moves your data at a very fast pace.

So that is a good thing. So how can you move your data at a faster pace by using S3 transfer acceleration? Okay, let us first understand what it is exactly. What it does is enable fast, easy, and secure transfer of files over long distances between your client and the S3 bucket, and to do that it uses a service called CloudFront and the edge locations it provides. As I move further I would be talking about what CloudFront is, so do not worry about it yet.

First, let us take a look at this diagram. Normally, if you are moving your data, or directly uploading your data, to a bucket that is located at a faraway distance (I mean, suppose I'm a customer and I wish to put my data into an S3 bucket which is located maybe a continent away from me), using the internet it might take a longer while. Instead, what I can do is use transfer acceleration. So how is it different? Now guys, there is a service called AWS CloudFront. What it does

is, it basically lets you cache your data. When I say cache your data, that means you can store your data at a location that is in the interim, or that is close to your destination. Now this service is basically used to ensure that data retrieval is faster. Suppose I'm searching for a particular URL. What happens is, when I type that URL, a request is sent to the server; it fetches the data and sends it to me. So if the server is located at a very far location, it might take a long while for me to fetch the data. So what people do is they analyze how many requests are coming from a particular location, and if there are frequent and numerous requests, they set up an edge location close to that particular region. So you can cache your data at that edge location, and the data can be fetched from that edge location at a faster rate. This is how edge locations work. What transfer acceleration does is basically put your data at the edge location so that it can be moved to your S3 bucket at a quicker pace, and that is why it is fast. So guys, this was about S3 transfer acceleration.

Let us quickly move into the console part and try to understand how S3 transfer acceleration works. So guys, I've switched into the console. S3 transfer acceleration, or data transfer acceleration, is a very easy thing to set up. I do not remember the bucket name; I think it was ramos or something. Okay, if I select this and open it, I actually go to the Properties part. There are other things that you might want to consider; you can come here and take a look at those as well. For now, I'm just going to go ahead and enable transfer acceleration.

It is suspended; I can enable it. It gives me the endpoint as well, and I say save. So guys, what this means is that if I'm putting my data into this bucket, it would be transferred very quickly, or I can use this bucket to transfer my data at a quicker pace by using S3 transfer acceleration. Again guys, I missed out on one important point. Given that we have been talking about buckets and stuff like that, there is something important that I would like to show you people. First, let us just go back and disable this part. I do not want to keep transfer acceleration going; I just wanted to show you people how it is done, so I set it back to suspended. And one more thing, guys: once you actually enable the transfer part, if you upload a file, you can see the difference in speed. The problem is you need a third-party tool to do that. So you can actually go ahead and download a third-party tool, and using that you can see how it works.
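To make the endpoint the console showed a bit more concrete, here is a small sketch of how the accelerate endpoint differs from the regular regional one. The bucket name and region are illustrative; the `s3-accelerate` hostname is the one S3 hands out when acceleration is enabled:

```python
# Regular uploads go to the bucket's regional endpoint; accelerated
# uploads go to the s3-accelerate endpoint, entering AWS at the nearest
# edge location instead of travelling the public internet all the way.
def regional_endpoint(bucket, region):
    return f"https://{bucket}.s3.{region}.amazonaws.com"

def accelerate_endpoint(bucket):
    return f"https://{bucket}.s3-accelerate.amazonaws.com"

print(regional_endpoint("ramos-bucket-1311", "us-east-1"))
print(accelerate_endpoint("ramos-bucket-1311"))
```

Switching a client to the accelerate endpoint is all it takes to route uploads through the edge network once the bucket property is enabled.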

Having said that, I was talking about buckets in general, so let us just go back to ramos again. There you go. And I'm going to copy the ARN; I'll tell you why I've copied the ARN. Now, when I open this bucket, guys, we have quite a few things, like permissions. I talked about security, right? So you can decide public access, as in who gets to access your bucket. So guys, you can actually go ahead and decide who gets to access what kind of buckets. Say, for example, here in your block public access settings you can decide who gets to access what data publicly. For that you have access control lists; using these ACLs you can decide who gets to access what. The other thing you can do is just go ahead and create a bucket policy and decide who gets to access your bucket, or who gets to put or delete your data, and do all these things. Let us just go ahead and create a policy. Now, you can write your own policy, or you can just use the policy generator tool.

So I want to create a bucket policy for S3, so let's just say S3 bucket policy, and choose what kind of effect I want. I mean, do I want to allow someone to access my system, or do I want to deny someone from accessing my system? I can decide that. So let's for now just say that I want to deny someone from doing something, and what I want is to deny a particular thing for that person for all the objects. I mean, I do not want that person to access any of the objects that are there. So what I say is star; that means nobody should be able to do anything to any of the objects that are there in this bucket. So it says star; the service is Amazon S3; and what action do I want? I want to prevent someone from deleting an object. There you go. And this is the ARN; that is why I copied it. It should be followed by a forward slash and a star. Add a statement, and I say generate policy.

So guys, the policy has been generated. I just have to copy it; if I copy this thing and go back to the console, I paste it here and say save. It saved; I'll save it again just to be safe. So guys, we have actually gone ahead, and now let me just go back to ramos again. There is an object here; let me just try and delete this object. If I just go to the actions part here and say delete, see, the file is still here. Is it the other version? No, it's not deleted. See, there's an error here. If I click on it, it says hundred percent failed. Why? Access denied, because I do not have the access to delete the object right now.

Why? Because I've created a bucket policy, guys. So that is what bucket policies and ACLs do: they let you make your objects, or your data, more secure. And as you saw, there are quite a few options that you have at your disposal, which you can choose from, mix and match, and decide: look, this is what I want to do. I want to give someone access to delete a bucket, I want to give someone access to do this or do that. So guys, this was about S3 data transfer acceleration, and we've also seen how you create a bucket policy, how you attach it to your bucket, and stuff like that. Now, let me just go back and kind of wrap this session up with a use case, so that you can probably understand the topics that we've discussed a little better. Let us go to the use case, guys. So guys, I've switched into my presentation console again, and we would be discussing IMDb now. For people who watch movies,

they might know what IMDb is: it is a website that gives you details about movies. They tell you which movies are nice; if you select or type a particular movie name, they give you details about it as a whole: who the actors were, how the movie was, how the reviews were, a short snippet explaining what the movie is about, its genre, and stuff like that. Plus, they have their own ratings to engage the customers even better: IMDb being a popular site, when they say that this movie or this person is liked by this many people, people normally believe it, so they have that score as well. So if you talk about a website that basically deals with movies, you understand the number of movies that are released worldwide, and if most of them are present on IMDb, that means the database is huge. We are talking about data that is being processed in great numbers, great amounts; I mean, think about the data that is here.

What is happening here is you have n number of movies that are being released. So if someone searches for a particular movie, it has to go through the database, and the data has to be fetched for them right away. So how do you deal with the latency issue? Well, this would answer a lot of questions, or it would sum up a lot of topics that we've discussed here. Let us go through this use case. What happens here is, in order to get the lowest possible latency, all the possible results for a search are pre-calculated, with a document for every combination of letters in the search. What this means is that, based on the letters,

you have a document that is created, and it is traversed in such an order that all the data is scanned letter-wise. When you actually go ahead and put forth a query, suppose there is a 20-character word that you put in: there is an enormous number of letter combinations possible. So your computer would have to go through all these combinations. What S3 does is let you store the data that IMDb has, and once IMDb has stored that data, they use CloudFront (again, we have discussed what CloudFront is) to distribute this data to the nearest possible location, so that when a user fetches this data, it is fetched from that location. So basically, when these many possibilities or combinations are to be dealt with, it becomes complicated; but in practice, what IMDb does is use analytics in such a way that these combinations become fewer. So in order to search for a 20-character query, they basically have to go through somewhere around 150,000 documents, and because of S3 and CloudFront you can distribute all the data to different edge locations and to buckets within AWS. And since we're talking about a huge amount of data, it is more than terabytes;

it is like hundreds of thousands of terabytes of data, so we can understand how much data we are talking about, and S3 actually serves a number of such use cases or requirements. So, as I believe by now you've understood what S3 is, let me give you a quick sum-up or a quick walkthrough of what we've studied, because we've talked about a lot of topics, guys. First we started with the basics of different storage services; we understood services like EFS, EBS, and Storage Gateway. We've talked about Glacier. We've talked about Snowmobile and Snowball, and then we moved to S3. In S3 we talked about buckets, we talked about objects, and we talked about versioning; we understood why versioning is needed, so that we can replicate our data, prevent it from deletion, and prevent it from corruption. We also talked about cross region replication, where you can move data from one region to the other. We talked about how we can move data faster by using S3 data transfer acceleration. And then we also took a look at the basics, like what the storage classes are, what bucket policies are and how to create them, and we also discussed an important topic called transition and expiration, where if your data expires it is deleted, and if your data needs to be transitioned to different storage classes, you can do that as well.

So all these topics were discussed, we also covered some important features, and finally we finished the session up with a use case. Now, the networking domain basically offers three kinds of services: VPC, Direct Connect, and Route 53. Let's discuss each one of them. So VPC is a Virtual Private Cloud; it's a virtual network. If you include all your AWS resources that you have launched inside one VPC, then all these resources become visible to each other, or can interact with each other, mind you, inside the VPC. Now the other use for VPC is this: when you have a private data center and you are using AWS infrastructure as well, and you want your AWS resources to be used as if they were on your own network, in that case you will establish a virtual private network, that is, a VPN connection, to your virtual private cloud, in which you have included all the services that you want on your private network.

You will connect your private network to the VPC using the VPN, and then you can access all your AWS resources as if they were on your own network. And that is what VPC is all about: it provides you security, it makes communication between the AWS services easy, and it also helps you connect your private data center to the AWS infrastructure. So guys, this is what VPC is all about. Let's go ahead to our next service, which is Direct Connect. So Direct Connect is a replacement for an internet connection.

It is a leased line, a direct line to the AWS infrastructure. So if you feel that the bandwidth of the internet is not enough for your data requirements or your networking requirements, you can take a leased line to the AWS infrastructure in the form of the Direct Connect service. So instead of using the internet, you would now use the Direct Connect service for your data stream to flow between your own data center and the AWS infrastructure.

And that is what Direct Connect is all about; nothing much further to explain. Let's move on to our next service, which is Route 53. So Route 53 is a Domain Name System. What is a Domain Name System? Basically, whatever URL you enter has to be directed to a domain name system, which converts the URL to an IP address: the IP address of the server on which your website is being hosted. The way it functions is like this: you buy a domain name, and the only setting that you have to do in that domain name, or the setting which is required, is the name servers. Now, these name servers are provided to you by Route 53, and these name servers are to be entered in the settings of that domain name. So whenever a user points to that URL, he will be pointed to Route 53; the work in the domain name settings is done. Then you have to configure Route 53. Now that your request has reached Route 53, it has to be pointed to the server on which your website is hosted. So in Route 53 you now have to enter the IP address, or the alias, of the instance to which you want your traffic to be directed. So you feed in the IP address, or you fill in the alias, and it's done.
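The two hops described (name servers at the registrar handing the domain to Route 53, and Route 53 mapping the name to your server's IP) can be sketched like this. All domain names, name servers, and addresses here are made-up examples:

```python
# Step 1: the registrar's NS settings point queries for the domain at
# Route 53's name servers (these hostnames are illustrative placeholders).
NAME_SERVERS = {
    "example.com": ["ns-1.awsdns-01.org", "ns-2.awsdns-02.net"],
}

# Step 2: the record set you configure in Route 53 maps the domain to the
# IP of the instance hosting the website (203.0.113.10 is a dummy address).
ROUTE53_RECORDS = {
    "example.com": "203.0.113.10",
}

def resolve(domain):
    # the domain's name-server settings hand the query to Route 53...
    assert domain in NAME_SERVERS, "name servers not configured"
    # ...and Route 53 answers with the A record you configured
    return ROUTE53_RECORDS[domain]

print(resolve("example.com"))   # -> 203.0.113.10
```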

The loop is now complete: your URL will now get pointed to Route 53, and Route 53 in turn will point to the instance on which your application or website is being hosted. So this is the role which Route 53 plays. It's a domain name system: it basically redirects your traffic from your URL to the IP address of the server on which your application or website is hosted. All right guys, so we're done with the networking domain. In today's session we will be understanding what AWS CloudFront is, but before we go ahead and understand what CloudFront exactly is, let's start by taking a look at today's agenda. First and foremost,

First and foremost, I would be talking about what AWS exactly is. We'd also understand why we need AWS CloudFront and what it is exactly. Then we'd talk about how content gets delivered using Amazon CloudFront and what its applications are. Finally, I would finish things off with the demo part, where I would be talking about AWS CloudFront distributions. Having said that, let's not waste any time and jump into the first topic of discussion, that is: what is AWS? Well, AWS stands for Amazon Web Services, which is a leading cloud service provider in the market, and it has the highest market share when you talk about any cloud service provider. Now what Amazon Web Services does is it provides you with 70-plus services, and these services are growing. To name some of these services, we have something called as your computation services, your storage services, your database services, and all these services are made available to you through cloud.

That means you can rent all these services and pay only for the services that you use, and only for the time duration you use these services for. If you want to know more about how AWS works exactly, I would suggest that you go through the videos that we have on YouTube. We have quite a few videos on YouTube which talk about AWS in particular. All you have to do is go to our YouTube channel and type 'Edureka AWS', and you'd be having all the videos that are related to AWS.

But that is not the discussion for today. We are here to discuss what CloudFront is, and I would like to stick to that. So coming back to CloudFront: when you talk about AWS, you have some services. Now what AWS does is it offers you various Infrastructure as a Service and even Platform as a Service offerings. These services are made available to you in the form of infrastructure or platforms where you can actually go ahead and host applications or websites.

So when you do go ahead and host these applications online, what your cloud provider has to worry about is the way data is fetched, because if you have a website online now, that website would be visited by quite a few people, and they would be requesting particular content or data, right? So in that case, that data has to be made available to your customers. So how does it happen exactly, and how does AWS make it happen? To understand that, consider this scenario: suppose you are a particular user and you're trying to visit a particular website, and imagine that that website is based somewhere at a very far location. Suppose you are based somewhere in the USA.

And that website's server is actually hosted or based in Australia. Now in that case, when you make a request for a particular object, or a particular image, or maybe content, your request is sent to the server that is in Australia, and then it gets delivered to you. In this process, there are quite a few interrelated networks that deal with your request which you are not aware about. The content directly gets delivered to you, and you have a feeling where you type in a particular URL and the content is directly made available to you. But that is not how it works; quite a few other things happen in the interim, and due to that.

What happens is, the data that gets delivered to you does not get delivered very quickly. Why is that? Because you'd be sending in a request, it would go to the original server, and from there the content is delivered to you. Now, if you are based in the USA, the situation would be convenient if the data were delivered to you from somewhere close by. When you talk about a traditional system where you are sending a request to somewhere in Australia, this is what happens: your data or your request is sent to the server based in Australia, and then it processes that request, and that data is made available to you, which gets delivered to you.

But if you have something like CloudFront, what it does is it sets in an intermediate point where data actually gets cached first, and this cached data is made available to you on your request. That means the delivery happens faster and you save a lot of time. So how does AWS CloudFront exactly do it? Let's try to understand that. When you talk about AWS CloudFront, what it does is, first and foremost, it speeds up the distribution process, and you can have any kind of content, whether it's static or dynamic, and it is made available to you quickly.

What CloudFront does is it focuses on these three points: one is your routing, two is your edge locations, and three is the way the content is made available to you. Let's try to understand these one by one. When you talk about routing, I just mentioned that the data gets delivered to you through a series of networks. So what CloudFront does is it ensures that there are quite a few edge locations that are located close to you, and the data that you want to access gets cached there so that it can be delivered to you quickly. And that is why the data that is being delivered to you is more available than in any other possible case.

So what happens exactly, and how does this content get delivered to you? Let's try to understand this with the help of this diagram. Suppose you are a user. Basically, what you would do is you would send in a request that needs to reach a particular server. Now in this case, what happens is, first your request goes to an edge location, and from there to your server. To understand this, you have to understand two scenarios. First and foremost, suppose you're based in the USA and you want to fetch particular data that is based in Australia. You would be sending in a request, but what AWS does is, instead of sending the request directly to your server, which is based in Australia, it has these interim edge locations which are closer to you. So the request goes to the edge location first, and it checks whether the data that you are requesting is already cached there or not.

If it is not cached, then the request is sent to your original server, and from there the data is delivered to the edge location, and from there it comes to you. Now, you might wonder that this is a very complex process, and if it is taking these many steps, how is it getting delivered to me quicker than in a normal situation? Well, think of it from this perspective. If you do send in this request directly to the main server, again the data would flow through some network and then it would be delivered to you. Instead, what happens here is, at your edge location, the data gets cached. So if you request it again, it would be delivered to you quicker; if it is requested by anyone else, it would be delivered to them quicker. Plus, the way edge locations work is, when you do send in this request and the edge location fetches this data from your so-called original server, in that case too, when the first byte arrives at your edge location, it directly gets delivered to you. And how does this content exactly get stored here? Well, first and foremost, what your edge location has is some regional cache as well.
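The flow just described, check the edge cache first and fall back to the origin while caching for next time, can be sketched as a toy cache-aside lookup in Python; paths, contents, and labels here are invented placeholders:

```python
# Minimal sketch of the edge-location flow described above:
# check the edge cache first; on a miss, fetch from the origin
# server and cache the result so later requests are served locally.

ORIGIN = {"/logo.png": b"...image bytes..."}   # origin server far away
edge_cache = {}                                # edge location near the user

def get(path):
    if path in edge_cache:                     # cache hit: fast, local
        return edge_cache[path], "edge (cached)"
    content = ORIGIN[path]                     # cache miss: go to origin
    edge_cache[path] = content                 # cache it for the next request
    return content, "origin (now cached at edge)"

_, first = get("/logo.png")
_, second = get("/logo.png")
print(first)    # origin (now cached at edge)
print(second)   # edge (cached)
```

The first request pays the trip to the origin; every later request for the same object, from you or anyone nearby, is served from the edge.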

Now this cache would basically hold all the content that is requested more frequently in your region. Suppose a website has some amount of content, and out of it, some content is requested a lot in a particular region. So surrounding that region, the closest edge location would have a regional cache which would hold all the content that is more relevant for those users, so that it can be frequently delivered to these users and can be made available to them quickly. In case this data gets outdated and is no longer being requested, then this data can be replaced with data that is requested more frequently. So this is how CloudFront works. What it does is it creates a distribution, and you have some edge locations through which you can actually request the data faster. So what are the applications that CloudFront has to offer to you? Now, I won't say applications; instead, I would say some of the benefits of using CloudFront. Let's try to understand those one by one. First and foremost, what it does is it accelerates your static website content delivery.

We just discussed that: if you are requesting a particular image or something like that, it gets delivered to you quicker. Why? Because it is cached at your edge location, and you do not have to worry about any latency issues. Next, what it does is it serves you both static and even dynamic content. Suppose you need some video, or a live session, or something like that; even that gets delivered to you quickly.

I just mentioned that when you request a particular thing, when the first byte arrives at your edge location, CloudFront starts streaming that to you, or starts delivering that to you. The same happens with live streaming videos as well; you would be getting those streams instantly without any latency. Next, server encryption: now, when you do access this content, what AWS CloudFront does is it lets you have this so-called domain where you put in HTTPS and you get secured data. So you already have one layer of security, but it also lets you add another

layer of security by giving you something called encryption. By encrypting your data, or by using key pairs, you're actually ensuring that your data is more secure, and it can be accessed privately as well. Then, customization at the edge: now, what do I mean by this? There is some content that needs to be delivered to the user, or the end user. If the customization happens at the server, again, it might be time-consuming, and there are quite a few drawbacks to it. Say, for example, I need a particular piece of content and it needs to be processed or customized at the very last moment. These things can be done at the edge location as well, thus helping you save time, money, and various other factors as well.

And finally, what it does is it uses something called Lambda@Edge, which again lets you deal with various customizations and lets you serve your content privately. So these are some of the applications or uses of CloudFront. What I'm going to do now is I'm going to switch into my AWS console, and I'm going to talk about AWS CloudFront distributions and how you can go ahead and create one. So stay tuned, and let me quickly switch into the console first. So yes guys, what I've done is I've gone ahead and logged into my AWS console.

Now, for people who are completely new to AWS, what you can do is actually go ahead and create a free tier account. You have to visit the AWS website and search for free tier; you would get this option. Just create an account. They would ask you for your credit or debit card details; probably a minimal amount is charged, and that is reverted back to your account. That is for verification purposes. And after that, what AWS does is it offers you certain services which are made available to you for free for one complete year, as long as you stay within the limits, or the specified limits, which AWS has set. Those limits are more than enough to practice or to learn AWS.

So if you do want to go ahead and get proper hands-on with various AWS services, I would suggest that you visit their website and create this free tier account. Once you do have that account, you have all these services that are made available to you. As I just mentioned, there are 70-plus services, and these are the services that are there, which you can actually go ahead and use for different purposes. Our focus today, however, is creating a CloudFront distribution, which we just discussed in the so-called theory part. I would be repeating a few topics here too while we go ahead and create our CloudFront distribution. Now, as I've already mentioned, we want to fetch data, or fetch a particular object, and if that is placed at a particular edge location, that would be made available to me.

So what we are doing here is, imagine that our data is placed at a particular original server; in our case, let's consider it as an S3 bucket. Now S3 is nothing but a storage service with AWS; that is Simple Storage Service, rather. That is S-S-S, and that is why we call it S3. So what we are going to do is go ahead and create an S3 bucket; in that we would be putting in certain objects, and we'd be accessing those by using our CloudFront distribution. So let's just go ahead and create a bucket first. You can see we have S3 in my recently used services.

You can just type S3 here, and that would be made available to you. You can click on it, and your Simple Storage Service opens. You would be required to go ahead and create a bucket. This is how you do it: you click on Create and you give it some name, using small letters, say maybe bucket-for-aws-demo, and I would give in some number, 000. I say next, next, next; I need a basic bucket, so I won't be putting in any details. Do we have a bucket here? There you go, we have a bucket here. And in this bucket, what I'm going to do is put in some content that we can actually request. So let's just go ahead and create an HTML file and put in maybe an image or something. I have a folder here; in that folder, I have a logo of Edureka. I would be using that logo, and I would want to go ahead and create an HTML file which I can refer to. So I would open my Notepad and I would write a simple HTML code.
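The index.html file created in Notepad during the demo can equally be produced with a few lines of Python; here is a minimal sketch matching the demo's content:

```python
# Recreates the index.html used in the demo without Notepad.
# The title and body text match what is typed in the walkthrough.
html = """<html>
<head><title>Demo</title></head>
<body>Welcome to Edureka</body>
</html>"""

with open("index.html", "w") as f:
    f.write(html)

# Read it back to confirm the file was written as expected.
print(open("index.html").read())
```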

I won't get into the details of how to write HTML code; I assume that you all know it. If not, you can use this code. So let's create a head tag first. Let's say a demo title, maybe, and I close this head tag. I need a body in here, right? So in the body we say 'Welcome to Edureka', and I end the body here, and I save this file. Save as: where do I want to save it? I save it here, and I would save it as maybe index.html. I save it; it probably got saved somewhere else. Let me just copy it and paste it here. I've done that. This is the file now. We have these files; let's upload them to our S3 bucket. Come here, I say upload, I want to add files.

So, add files. Where do I go? I go to the folder, I go to demo, and I select these two files and I say upload. There you go, my files are here, and I say upload. They're small files, so it should not take a long time. Fifty percent successful, hundred percent successful. There you go, you have these two files. Now we have our S3 bucket and we have two files. This is our origin server. Now I need to create a distribution and use it. To do that, I would click on Services and come here, and I would search for CloudFront. There you go. And I say create a distribution, so I click on this icon. Now you have two options. The first one is something that lets you have your static data moved in or moved out; if you want to live stream your data, you should go for the other option. But that is not the case here.

We would be sticking with the first option. I say get started. I need to enter in a domain name, so it gives me suggestions, and this is the first one, which I just created. Origin path is something that you can give in for the folders from where you want to access the data, but mine directly resides in the bucket; there are no extra folders, so I don't need to enter anything. Origin ID: this is what I have here. Basically, I can use this, or I can just go ahead and change the name if I want to, but I would let it stay the way it is. Restrict bucket access?

Yes, I want to keep it private, so I say restrict, and I create a new identity. There you go, I have a new identity created here. Apart from that, grant read permissions on bucket? 'Yes, update my bucket policy' accordingly is what I would say. Then I would scroll down. Custom headers? No, I don't need to put in these details. How do I want my data to be accessed? The viewer protocol policy: I would say redirect HTTP to HTTPS, so that it is secured. If I scroll down, I have some other options as well, like allowed HTTP methods and all those things. Do I need to change these? Object caching: can I customize it? Yes, I can, but again, I would be using the default one; if you want to, you can customize it. Smooth streaming? No. These are some of the things that you need to focus on if you have some streaming data; you can put in details accordingly, but we are not doing that.

What is the price class that you want to choose? You have some options here which you can pick from; I would be going for the default one, and then I just scroll down and I say create a distribution. So your distribution is getting created now, and this process might take a long while. If you click on this thing, you realize that it is in progress, and it takes somewhere around 10 to 12 minutes for this distribution to get created. So meanwhile, I'm going to pause this session, and I would come back with the remaining part once this distribution is completed. So bear with me for that while. So there you go, the distribution has been deployed. The status is 'deployed' here, so we can actually go ahead and use this thing. Now, we have a domain name here which I can use; I can just enter it here, and we would be redirected to the page. And what happens here is you would actually be given access to this page through the edge location. That means you're not going to the server directly.

The data is being served from your distribution, or your edge location rather. So you enter this website and you hit the enter button. Ah, an error; it shouldn't have been there. Oh, I know what just happened. When you do go ahead and create your so-called distribution, you actually have an option of selecting a default root object, which I did not. So I will have to give an extension here, saying /index.html, and if I hit the enter button now, it should redirect you to the demo page which says 'Welcome to Edureka', right? So this was the HTML file that we created, and we also had a PNG file which we wanted to access.

The name was logo.png. Okay, this is funny; this should not happen. Why is this happening? Let's take a look at whether we have that file there, because if it was there, we should be able to access it. And what was my bucket? This was the one. Oh, this is what happened: when I uploaded that file, it got saved with this extension, .png.png. So if I come here and I type .png again, there you go. You have that object delivered to you through your so-called
distribution. In this session, we will be discussing Amazon CloudWatch. So without any delay, I'll walk you through the topics which we will be discussing today. Firstly, we will see what Amazon CloudWatch is and why we need it. Then we'll discuss certain Amazon CloudWatch concepts. Moving on, we'll take a look at the two most important segments of Amazon CloudWatch, which are Amazon CloudWatch Events and Amazon CloudWatch Logs. And finally, to make this session more fun and interesting for you, I've included a demo as well.

So let's get started. First, let us try to understand why we need cloud-based monitoring with a couple of scenarios. In our first scenario, consider that you have hosted a messenger app on cloud, and your app has gained a lot of fame, but lately the number of people using the application has gone down tremendously, and you have no idea what the issue is. Well, it could be due to two reasons. Firstly, since your application has a complex multi-tier architecture, monitoring the functionality of every layer by yourself will be a difficult task.

Don't you think? And secondly, since you're not using any kind of monitoring tool here, you wouldn't know how your application is performing on cloud. Well, one solution for that is to employ a monitoring tool. This monitoring tool will provide you insights regarding how your application is performing on cloud, and with this data, you can make necessary improvements, and you can also make sure that your application is on par with today's customer needs. Definitely, after a while, you'll notice that the number of people using your application has increased. Moving on to our next scenario: let's say your manager has assigned you a project, and he wants you to make this project as cost-effective as possible. As you can see, in this project you're using five virtual servers which perform highly complex computations, and all these servers are highly active during the daytime; that is, they see the most traffic during the daytime. But during nighttime, the servers are idle. By that I mean the CPU utilization of these servers during nighttime is less than 15%, and yet, as you notice here, in both the cases you are paying the same amount of money.

You have to notice two points here. Firstly, all your virtual servers are underused during nighttime, and secondly, you're paying for resources which you are not using, and this is definitely not cost-effective. So one solution is to employ a monitoring tool. This monitoring tool will send you a notification when the servers are idle, and you could schedule to stop the servers on time. So guys, this is one way to make your project more cost-effective and avoid paying unnecessary operating costs.
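The stop-idle-servers idea above boils down to a simple threshold check; here is a toy sketch in Python (the 15% figure comes from the scenario, the CPU samples are invented):

```python
# Sketch of the cost-saving idea above: if average CPU utilization
# overnight is below a threshold, flag the server so it can be
# stopped on schedule instead of running (and billing) while idle.

def should_stop(night_cpu_samples, threshold=15.0):
    """Return True when the average overnight CPU % is below threshold."""
    avg = sum(night_cpu_samples) / len(night_cpu_samples)
    return avg < threshold

print(should_stop([5.0, 12.0, 8.0]))   # True  -> schedule a stop
print(should_stop([40.0, 55.0]))       # False -> keep running
```

In a real setup, the monitoring tool computes the average and a notification or automated action does the stopping; this sketch only shows the decision logic.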

Let's consider another scenario for better understanding. So let's say I have hosted an e-commerce website on cloud, and during sale season many customers are trying to access my website, which is definitely a good thing. But for some unfortunate reason, application downtime has occurred, and you guys have to remember that I'm not using any kind of monitoring tool here. So it's a little bit difficult for me to identify the error and troubleshoot it in a reasonable amount of time, and it's quite possible that in this period my customer might have moved on to a different website. So you see that I've lost a potential customer here. If I had had a monitoring tool in this situation, it would have identified the error in the early stages itself and rectified the problem. Well, I could have easily avoided losing my customer. So I hope, guys, with the help of these use cases, you were able to understand why we need cloud-based monitoring. So let me just summarize what we have learnt till now.

We need monitoring, firstly, because it provides a detailed report regarding the performance of your applications on cloud, and secondly, it helps us to reduce unnecessary operating costs which we are paying to the cloud provider. Moreover, it detects problems at an early stage itself so that you can prevent disasters later, and finally, it monitors the user's experience and provides us insights so that we can make improvements. Alright guys, in this session we will be discussing one such versatile monitoring tool called Amazon CloudWatch. Amazon CloudWatch basically is a powerful monitoring tool which offers you a most reliable, scalable, and flexible way to monitor your resources or applications which are currently active on cloud. It usually offers you two levels of monitoring, which are basic monitoring and detailed monitoring. If you want your resources to be eligible for basic monitoring, all you have to do is sign up for the AWS Free Tier. In basic monitoring, your resources are monitored less frequently, like say every five minutes, and you're provided with a limited choice of metrics to choose from, whereas in detailed monitoring, all your resources are monitored more frequently, like say every one minute.

And you're provided with a wide range of metrics to choose from. But if you want your resources to be eligible for detailed monitoring, you'll have to pay a certain amount of money according to AWS pricing details. Now, let's have a look at a few monitoring services offered by Amazon CloudWatch. Firstly, it provides a catalog of standard reports which you can use to analyze trends and monitor system performance. Then, it monitors, stores, and provides access to system and application log files. Moreover, it enables you to set up high-resolution alarms and send notifications if needed, and Amazon CloudWatch also sends system events from AWS resources to AWS Lambda functions, SNS topics, etc. So if you have not understood any terms which I've used here, don't worry; we'll get to know more about these terms as we progress through the course of this session. Earlier, I mentioned that Amazon CloudWatch allows administrators to monitor multiple

resources and applications from a single console. These resources include virtual instances hosted in Amazon EC2, databases located on Amazon RDS, data stored in Amazon S3, Elastic Load Balancers, and many other resources like Auto Scaling groups, Amazon CloudTrail, etc. So guys, now let's try to understand Amazon CloudWatch a little deeper. Firstly, we'll have a look at a few Amazon CloudWatch concepts, and then I'll explain to you how Amazon CloudWatch
actually operates. First, it's metric. A metric represents a time-ordered set of data points that are published to CloudWatch. What I mean by that is: suppose, let's say, you have three variables X, Y, and Z, and you have created a table which has values of X with respect to Y over a period of time. In this scenario, the variable X, which you have been monitoring till now, is a metric. So you can think of a metric as a variable which needs monitoring. Next, we have dimensions. Let's consider the same variables X, Y, and Z. Basically, you had created a table which has values of X with respect to Y; now let's create another table which has values of X with respect to Z.

So basically we have two tables which describe the same variable X, but from two different perspectives. These are nothing but dimensions. So basically, a dimension is a name-value pair that uniquely identifies a metric, and Amazon CloudWatch allows you to assign up to ten dimensions to a metric. Then you have statistics. Previously, we had created two tables with values of X with respect to Y and with respect to Z. Now, you can combine data from these tables, like to create a chart or maybe plot a graph for analytical purposes. This combination of data is nothing but statistics; statistics are metric data aggregations over a specific period of time. Then you have alarms. Let's say you have been monitoring this variable X for some time now, and you want a notification to be sent to you when the value of X reaches a certain threshold. All you have to do is set an alarm to send you a notification. So basically, an alarm can be used to automatically initiate
actions on your behalf. Now that you have a clear understanding of the concepts of Amazon CloudWatch, let's see how Amazon CloudWatch operates. Amazon CloudWatch has complete visibility into your AWS resources and applications which are currently running on cloud. So firstly, it collects metrics and logs from all these AWS resources and applications, and then, by using these metrics, it helps you visualize your applications on the CloudWatch dashboard. Moreover, if there is some sort of operational change in the AWS environment, Amazon CloudWatch becomes aware of these changes and responds to them by taking some sort of corrective action; like maybe it sends you a notification, or it might activate a Lambda function, etc.
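The metric, statistics, and alarm concepts above can be tied together in a small Python sketch; the CPU numbers and the 25% threshold are illustrative, mirroring the low-CPU alarm created later in the demo:

```python
# Toy sketch of the CloudWatch concepts above: a metric is a
# time-ordered series of data points, a statistic aggregates them
# over a period, and an alarm fires when the statistic crosses a
# threshold. Values are invented sample data.

cpu_metric = [12.0, 18.5, 9.0, 14.5, 11.0]   # CPU % sampled over a period

def statistic(points, kind="Average"):
    """Aggregate the metric's data points over the period."""
    if kind == "Average":
        return sum(points) / len(points)
    if kind == "Maximum":
        return max(points)
    raise ValueError(kind)

def alarm_state(points, threshold=25.0):
    # Alarm when the average CPU drops below the threshold,
    # like the 'low CPU utilization' alarm in the demo.
    return "ALARM" if statistic(points) < threshold else "OK"

print(statistic(cpu_metric))   # 13.0
print(alarm_state(cpu_metric)) # ALARM
```

In CloudWatch itself, the period, statistic, and threshold are alarm settings rather than code, but the evaluation works the same way.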

And finally, it provides you real-time analysis by using CloudWatch metric math. So if you're wondering what CloudWatch metric math is, it is a service which integrates multiple CloudWatch metrics and creates a new time series, and you can view this time series on the CloudWatch dashboard as well. So, working this way, Amazon CloudWatch provides you with system-wide visibility. It even provides you actionable insights so that you can monitor your application performance. Moreover, it allows you to optimize resource utilization if needed, and finally, it provides a unified view of the operational health of your AWS environment.

So I hope that by now you know what Amazon CloudWatch is. Now let's try to understand how Amazon CloudWatch works with the help of a demo. So guys, this is my AWS console, that is, the AWS Management Console, and the services which you can see on the screen are the services offered by Amazon AWS. But in this demo, we are going to use only a few services: CloudWatch, then EC2, and a service called Simple Notification Service. When I click on EC2, it takes me to the EC2 dashboard, where you can see that I have four instances which are currently active. Now, in this demo,

I'm supposed to get a notification saying that the CPU utilization of my instance is less than 25 percent. For me to receive a notification, first I'll have to create a topic and subscribe to it with my email ID. So let's explore a service called Simple Notification Service, where you can create a topic and subscribe to it. Once you reach the SNS dashboard, click on the topics option on the navigation pane and click on create new topic. Give your topic a name, let's say CW-topic, and give the display name as well; let's give the same name and click on the create topic option here. You can see that I've successfully created a topic. Now click on the topic which you have created, select actions, and then the subscribe to topic option. Well, I want notifications to be sent to me in the form of email; you have different options as well, in the form of a Lambda function or JSON, etc. But I'm going to choose email and give my email ID, which is here, and then click on the create subscription option. So now, whenever the AWS console wants to send me a message,
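The topic-and-subscription flow just walked through can be mimicked with a toy in-memory pub/sub sketch in Python; the topic name and email address are placeholders from the demo, and this is not the real SNS API:

```python
# Minimal sketch of the SNS flow described above: create a topic,
# subscribe an endpoint (here, an email address), and every
# published message fans out to all of the topic's subscribers.

topics = {}

def create_topic(name):
    topics[name] = []                    # list of (protocol, endpoint)

def subscribe(topic, protocol, endpoint):
    topics[topic].append((protocol, endpoint))

def publish(topic, message):
    """Return the deliveries that would be made to each subscriber."""
    return [(endpoint, message) for _, endpoint in topics[topic]]

create_topic("CW-topic")
subscribe("CW-topic", "email", "user@example.com")
deliveries = publish("CW-topic", "CPU utilization below 25%")
print(deliveries)
```

With real SNS, the alarm does the publishing and SNS handles delivery; this sketch only shows the topic/subscription relationship.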

It will send it to the email ID which I used to subscribe to the topic. Now, let's go back to the CloudWatch dashboard. So guys, this is my CloudWatch dashboard, and you can see different options on the navigation pane. Firstly, I have dashboards, where I can view all my metrics in the same place. Then you have alarms, which shows the list of alarms which you have configured, and then you have events and logs, which we'll be exploring later. Our topic of interest is the last one, which is metrics. Select the metrics option here and then choose EC2, and then per-instance metrics. When you do that, a list of metrics will be shown to you, like network out, CPU utilization, network packets in, network packets out, and various other metrics for various resources which are currently active on your cloud. But we are interested only in CPU utilization, so I'm going to type that here. Well, it shows the list of instances which are active on my cloud, and I'm going to choose my Windows instance and then click on the graphed metrics option here.

Okay, let's select the Windows instance only, and then on the right side, you can see you have an alarm button. When you click on that, a dialog box will open where you can configure your alarm. Firstly, let's give the alarm a name, let's say 'low CPU utilization', and a brief description as well, let's say 'lower than 25 percent CPU utilization'.

Now I'm going to set the threshold, which is less than 25 percent in this case, and on the right side, you can see a period option. If your resources are eligible for basic monitoring, the period option by default is five minutes, and if your resources are eligible for detailed monitoring, it's usually one minute. When you scroll down, you can see a 'send notification to' option here, so select the topic which you have previously created; that will be CW-topic in my case, and then click on create alarm. But there is some error. Okay, it says there's an alarm already with this name, so let's give it another name: 'low CPU utilization of my instance'. Now let's try again; and when I click on this alarm button and click on the refresh option here, it says that I've successfully created an alarm. Here you can see 'low CPU utilization of my instance', and when you click on that, it shows you all the details, like the description, the threshold, what action it is supposed to take when the alarm is triggered, and all the details. So guys, try it out.

It'll be easy for you to understand the CloudWatch console much better. Okay guys, now you know what Amazon CloudWatch is, what it does, and how it operates. But to understand the capabilities of Amazon CloudWatch completely, we should be aware of two important segments of Amazon CloudWatch, which are CloudWatch Events and CloudWatch Logs. So let's discuss them one by one. Firstly, we have Amazon CloudWatch Events. Consider this scenario: let's say you've created an Auto Scaling group, and this Auto Scaling group has currently terminated an instance. You can see this as some sort of operational change in the AWS environment. When this happens, Amazon CloudWatch becomes aware of these changes.

It responds to them by taking some sort
of corrective action. Like in this case, it might send you
a notification saying that your Auto Scaling group
has terminated an instance, or it might invoke
a Lambda function which updates the record in an
Amazon Route 53 hosted zone. So basically, what Amazon
CloudWatch Events does is deliver a near real-time stream
of system events that describe changes
in your AWS resources. Now, let's have a look
at a few concepts related to CloudWatch Events. Firstly, we have events:
an event indicates a change in your AWS environment, and
AWS resources generate events whenever their state changes. Let's say you have terminated
an active EC2 instance. So the state of this EC2 instance has changed
from active to terminated, and hence an event is generated.
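To make that concrete, here is a hedged sketch of what such a state-change event looks like on the event stream, together with a deliberately simplified matcher standing in for how a rule's pattern filters events (the field layout follows the documented "EC2 Instance State-change Notification" shape, but the IDs are made up and the real matching logic is richer than this):

```python
# an example event, as CloudWatch Events would deliver it (IDs are fictitious)
sample_event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"instance-id": "i-0123456789abcdef0", "state": "terminated"},
}

# a rule is just a constraint: this pattern matches any EC2 state-change event
rule_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
}

def matches(event, pattern):
    """Toy matcher: every pattern key must list the event's value for that key."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

# if the pattern matches, the event is routed to the rule's target
routed = matches(sample_event, rule_pattern)
```

An event from another service (say `"source": "aws.s3"`) would fail the constraint and not be routed to this rule's target.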

Then you have rules. Rules are
nothing but constraints: every incoming event
is evaluated to see if it has met the constraint, and if so, the event is routed
to a target. A target is where the events are handled; targets can include
Amazon EC2 instances, a Lambda function,
an Amazon SNS topic, etc. Now, let's try to understand
Amazon CloudWatch Events better with the help of a use case.

In this use case, we are going to create a system that closely mimics
the behavior of dynamic DNS. And for those who don't know what dynamic DNS is, let
me give an example. Let's say you want to access
the internet at home; your internet service provider
assigns you an IP address. But since internet service
providers use different kinds of online systems, this IP address keeps changing, because of which it
might be difficult for you to use this IP address
with other services like a webcam, security camera,
thermostat, etc. So this is where dynamic
DNS comes into the picture. What dynamic DNS does is
assign a custom domain name to your home IP address, and this domain name
is automatically updated when the IP address
changes. So basically, dynamic DNS is a service that automatically
updates a name server in the Domain Name System, and Amazon offers you a similar kind of service
called Amazon Route 53.

So in this use case, we are going to update
Amazon Route 53 whenever an Amazon EC2 instance
changes its state. Now, let's see how the use case
actually works. This use case precisely works this way: whenever an EC2
instance changes its state, Amazon CloudWatch
Events becomes aware of these operational changes and
triggers a Lambda function. This Lambda function
uses different kinds of information regarding
the instance, like its public and private IP address, and it updates a record in the appropriate Route
53 hosted zone. So let's say you have
an EC2 instance and you have terminated the instance. Amazon CloudWatch Events
becomes aware of this and triggers
a Lambda function, and this Lambda function
deletes the record from Amazon Route 53. Similarly, if you have created
a new instance, once again Amazon CloudWatch
Events becomes aware of this and triggers
a Lambda function, and this Lambda function creates
a new record in Amazon Route 53.
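As a rough sketch of what that Lambda function would send to Route 53's ChangeResourceRecordSets API, here is the change batch it might build (the domain name, IP and TTL are illustrative placeholders; the real handler would pass this to a boto3 `route53` client, which we skip here):

```python
def route53_change(action, name, ip, ttl=60):
    """Build a ChangeResourceRecordSets body.

    action is 'UPSERT' when an instance starts (create/update the A record)
    and 'DELETE' when it terminates (remove the record).
    """
    return {
        "Changes": [{
            "Action": action,
            "ResourceRecordSet": {
                "Name": name,                 # e.g. a per-instance hostname
                "Type": "A",                  # A record maps name -> IPv4
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }

# instance started: create/update the record; instance terminated: delete it
create = route53_change("UPSERT", "i-0abc.example.com.", "203.0.113.10")
delete = route53_change("DELETE", "i-0abc.example.com.", "203.0.113.10")
```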

I hope you have understood
what Amazon CloudWatch Events is and what it does. Now, let's discuss how Amazon CloudWatch Events
works with the help of a demo. In this demo, we will schedule the stop
and start of EC2 instances with the help of a Lambda function
and CloudWatch Events. So let's go ahead with the demo. So guys, you can see
that I have four instances which are currently active. First, I'm going to create
a Lambda function which is going to stop
my Windows EC2 instance, and you guys need to know that
for a Lambda function to do that, we need to assign permissions. So Amazon provides you
with a service called IAM, which is Identity
and Access Management, where you can assign
permissions. When you search for IAM in the tab, it shows you the service. Select that, and on the IAM dashboard,
on the navigation pane,

you can see a Policies option
here. Select that and click on the create policy option. First, it's asking you
for a service, which will be EC2
in our case. Click on the EC2 service, and for actions, we want to start
and stop my EC2 instances. So let's search
for StartInstances. Well, a predefined action
is already there, so you can choose that. Then you have StopInstances;
again, select that. And then I want it to be
eligible for all the resources, so I'm going to choose
all resources here and click on the review policy option. Let's give our policy a name, that is, to start
and stop EC2 instances, and a brief description
as well, let's say "to start
and stop instances". And now click
on create policy. It's taking a while. So, I've successfully
created a policy here.
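Behind the visual editor, what gets created is an IAM policy document. A sketch of the kind of JSON the selections above produce (allow only the two EC2 actions, on all resources) looks like this; you could equally paste such JSON into the policy editor's JSON tab:

```python
import json

# the policy we built in the console: start/stop EC2 instances, all resources
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "*",
    }],
}
policy_json = json.dumps(policy, indent=2)
```

Scoping `Resource` to specific instance ARNs instead of `"*"` would be tighter; we chose all resources here to match the demo.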

Next, we have to assign
this policy to the Lambda function. So click on Roles here, then click on create role, choose
Lambda here and click on next: permissions. Search for the policy
which we have created earlier, that is, to start and stop
instances. I've found the policy; select that and click
on the next: review option. It's asking for a name. Let's give it the name
start-stop-instances and click on create role. I've successfully
created a role. So what we have done here is
assign permission for the Lambda function
to control EC2 instances. Now, let's create
a Lambda function. You can search
for Lambda in the search tab, and there, click on create function. Give your
Lambda function a name, let's say stop-instance,
and select the role which you have previously
created, and click on create function. You can see
that I've successfully created a Lambda function, and now I'm just going
to copy the code to stop EC2 instances here. I'm going to select this
and paste it over here, and make sure to save it. As you can see here,
this function asks for an instance region
and instance ID.
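The exact code pasted in the demo isn't readable on screen, so here is a minimal sketch of what such a stop-instances handler typically looks like. REGION and INSTANCE_IDS are the two details the function asks you to fill in (placeholders below); the EC2 client is passed in as a parameter so the logic can be exercised without AWS credentials, whereas in real Lambda you would default to `boto3.client("ec2", region_name=REGION)`:

```python
REGION = "us-east-1"                    # instance region (placeholder)
INSTANCE_IDS = ["i-0123456789abcdef0"]  # instance ID(s) to stop (placeholder)

def lambda_handler(event, context, ec2=None):
    if ec2 is None:
        import boto3  # available in the Lambda runtime
        ec2 = boto3.client("ec2", region_name=REGION)
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    return {"stopped": INSTANCE_IDS}

# quick dry run with a stub client instead of a real AWS call
class _StubEC2:
    def __init__(self):
        self.calls = []
    def stop_instances(self, **kwargs):
        self.calls.append(kwargs)

stub = _StubEC2()
result = lambda_handler({}, None, ec2=stub)
```

The start-instance function created later in the demo is the same shape with `start_instances` in place of `stop_instances`.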

So let's configure the details. Let's give it the name stop-instance, and here you will have to insert
the instance region and instance ID. Now we'll have to copy the instance region
and ID of the instance which we need. So let's go
to the EC2 dashboard here. Now, let's say I want my Windows
instance to be stopped. So this is the instance ID, which I'm going
to paste over there.

Similarly, the instance
region. Now, well, in this case, I'm choosing
the Windows instance; you can choose whichever
instance you want to stop. Once you're done
with that, you click on the create option here and test
the configuration details. When you scroll down, you can see
the execution results here. It says that my instance
has been successfully stopped. Let's go and check on the EC2 dashboard. Here
on the EC2 dashboard, I'm going to refresh
it, and you can see that my Windows instance
has successfully stopped. Now we'll create
another Lambda function which will restart this instance.
Again the same: search for Lambda
in the search tab and click on the create function
option. It asks for a name, so let's say start-instance, and choose the role which you previously
created and click on create function. Again, you'll have to paste the code to
start the instances over here and click on the save option. Let's try to configure this. Let's name it start-instance, and again it asks for two attributes, which are
the instance region and ID.

Now, what we have to do is copy
the instance region and ID here like we did earlier. Let's go to the EC2
dashboard and copy the instance ID and region. Well, you guys
can see here that my Windows instance has been
successfully stopped. Now I'll copy this
and paste it over there, similarly the instance region as
well, and click on the create option. Now test the configuration, and
when you scroll down you can see that my instance
has successfully restarted. On the EC2 dashboard,

I'm going to refresh this. Well, my Windows instance is on its way
to getting restarted. Till now, I've used Lambda functions
to start and stop my instances, but now I'm going to automate
this process with the help of Amazon CloudWatch. So let's go to the
CloudWatch dashboard here. Well, it's taking a while to
load. Then choose the Events option and click on create rule.

So here we are going to schedule
a rule to stop my instances every day at 6:30 p.m. and to restart these instances
every day at 6:30 a.m. So click on schedule. If you want to know more
about cron expressions, you can visit
the Amazon documentation. So let me show you: it has
six fields. Firstly it's minutes, then you have hours, then day
of month, month, day of the week and year. We're concerned only with minutes and hours, because we want
our instances to start and stop every day, every month. So let's give the details. So we're going to create
a rule to stop the instance at, let's say, 6:30 in the evening:
30 minutes and 18 hours, which is nothing but 6:30 p.m. And for the rest you
don't have to mention anything. When you give a proper
cron expression, sample timings will be provided to you; you can see here the rest
of the sample timings. And now let's add
the target, which is a Lambda function
in our case; select the stop-instance function and click on configure details.
Give your rule a name.
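The two schedules just described can be written out in CloudWatch's six-field cron form (minutes, hours, day-of-month, month, day-of-week, year; one of day-of-month/day-of-week must be `?`). A small sketch, assuming the times are meant in UTC as CloudWatch schedules always are:

```python
def daily_cron(hour, minute):
    """Build a CloudWatch cron() expression that fires once a day (UTC)."""
    # fields: minutes hours day-of-month month day-of-week year
    return f"cron({minute} {hour} * * ? *)"

stop_rule_schedule = daily_cron(18, 30)   # every day at 18:30, i.e. 6:30 p.m.
start_rule_schedule = daily_cron(6, 30)   # every day at 06:30, i.e. 6:30 a.m.
```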

Let's say stop-my-ec2-instance, and the description, "to stop
my EC2 instance at 6:30 p.m. every day", and click on create rule. You can see that I've successfully created
a rule to stop my instance every day at 6:30 p.m. Now, let's create another rule
to restart this instance every day at 6:30 a.m. in the morning. Again, choose the schedule
here and the cron expression, which will be 6:30 a.m. in the morning. Again, the sample timings
are shown here. Then set the target,
again a Lambda function, and select the function, that is, start-instance,
and click on configure details. Let's name it start-my-ec2-instance,
and the description is "to start my EC2 instance
every day at 6:30 a.m."; click on create rule. So now we have successfully
created two rules to start and stop the EC2
instances at 6:30 p.m.

and 6:30 a.m. respectively. So what we have done is
save our time here: we've automated the
process of stopping and starting EC2 instances. So try it on your own; it will be easier
for you to understand. So guys, now let's discuss
our next topic, which is Amazon CloudWatch Logs. Have you guys heard
of log files? Well, log files are nothing but detailed records
of events that occur when you are using
your AWS environment. You can view log files
on your on-premise server as well: search for an app called
Event Viewer, select the app, click on Windows Logs
and select System, and a list of log files
will be shown to you. When you choose a particular
log file, all the details regarding the log file will be shown, like the number of
keywords, the log time, the number

of hours the file
has been logged, and various other details. Similarly, you have log files created when you use the AWS
environment as well. So you can consider these log
files as a data repository. Most of the metrics are
generated from this log data: whenever a metric
is generated, a part of the data is extracted
from this log data. So you're designing metrics
according to your liking by choosing a part of the data
from this log data. So basically, these log files are what we call
a primary data store, and Amazon CloudWatch
Logs is used to monitor, store and access log files
from AWS resources like EC2 instances, CloudTrail, Route 53, etc.

Let's try to
understand CloudWatch Logs better with the help of some features. Firstly, you can use Amazon CloudWatch Logs
to monitor your application and system log files. Let's say you have made
a lot of errors while trying to deploy
your application on the cloud. In this scenario, you can use CloudWatch Logs
to keep track of your errors and send a notification to you when the error rate
exceeds a certain threshold, so that you can
avoid making those errors again. Then you have log retention: by default, logs
are kept indefinitely, but CloudWatch provides
you with an option where you can set the retention period
anywhere between one day and ten years.

Then you have log storage. You can use CloudWatch Logs
to store your log data in highly durable storage,
and in case of system errors, you can access the raw log data
from this storage space. And then you have DNS queries:
you can use CloudWatch Logs to log information
about the DNS queries that Route 53 receives. Now, let's have a look
at a few concepts regarding CloudWatch Logs. Firstly, we have something
called a log event. A log event is just a record of
some activity that has occurred in the AWS environment;
it's straightforward. Then you have a log
stream: a log stream is a sequence of log events
that have the same source. Then you have something called
a log group: a log group defines a group of log streams that share the same retention, monitoring and access control
settings. By default, you have to make sure that each log stream
belongs to one or the other log group.
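As a toy illustration of those three concepts (timestamped log events, grouped into a log stream per source, with streams grouped into a log group that carries shared settings), here is a sketch; the names are made up, though real PutLogEvents payloads use the same `timestamp`/`message` event shape:

```python
log_group = {
    "logGroupName": "/my-app/web",
    "retentionInDays": 30,   # retention is configurable, 1 day up to 10 years
    "streams": {
        "i-0123456789abcdef0": [   # one stream per source, e.g. per instance
            {"timestamp": 1700000000000, "message": "GET /index.html 200"},
            {"timestamp": 1700000001000, "message": "GET /missing 404"},
        ],
    },
}

# count error-ish events in one stream: the kind of check a metric filter
# performs when turning log data into a metric
stream = log_group["streams"]["i-0123456789abcdef0"]
errors = sum("404" in event["message"] for event in stream)
```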
Guys, now let's try to understand
CloudWatch Logs better with the help of this use case. In this use case, we are going to use
Amazon CloudWatch Logs to troubleshoot
system errors. You can see that I have
three instances here and a CloudWatch agent which is monitoring all
these three instances.

So what the CloudWatch agent does is
collect custom-level metrics from all these EC2 instances,
and then the metrics and logs collected by the agent
are processed and stored in Amazon CloudWatch
Logs. Amazon CloudWatch then continuously
monitors these metrics, as you can see here. Then, you can set an alarm
which will send you a notification when some sort of error
occurs in the system. So whenever you receive
a notification saying that there's some sort of error in the system, you can access
the original log data, which is stored in CloudWatch
Logs, to find the error. So this is how you can use Amazon CloudWatch Logs to
troubleshoot system errors. So basically, you are having
a look at the original data, so you can solve your problems
faster. So this is it, guys. Today,
in this session, we are going to discuss
the service AWS CloudFormation. So without wasting
any more time, let's move on to today's agenda.

So we'll start today's
session by discussing why CloudFormation
is actually needed in the first place. Once we're done with that,
we'll move on to the what: what is CloudFormation,
actually. After that, we'll be discussing what things
are needed to get started with the CloudFormation service. Now, among those things, you have a JSON document. So we will be learning
how to create a JSON document. Before that, we'll
be seeing the structure of a JSON document; once we learn
the structure, we'll see how a JSON document
actually looks, so we'll see a sample JSON document. And in the end, we'll be
doing a demonstration. So in the demonstration
we'll be doing two demos: the first one will be
a really simple one, and the other one will be
a little advanced. Let's move on
to the first topic, that is, why AWS CloudFormation? So why do we
need CloudFormation? So, for example, you have an application. Now, most
of you guys know, and we have done this
in the previous sessions as well, that we created
an application, right?

Now, the application is
actually dependent on a lot of AWS resources. If we were to deploy and manage all these resources
separately, it would take up a lot of your time, right? So to reduce that time, or to
manage all these resources, what if I told you
you have a service? Yes. Yes, you got that right. You have a service
called AWS CloudFormation. Using AWS CloudFormation, you can create, manage
and provision all these resources
in a single place. Now, this is
what CloudFormation does. But what is
CloudFormation exactly? So, CloudFormation
is basically a service which helps you model and set
up your AWS resources so that you can spend more time
on your application rather than on setting up and provisioning
these resources, right? So basically, it's a tool using which you can create
your applications quickly.

Also, you can create templates
in AWS CloudFormation. Now, how do you
create templates? Basically, you would be using
the CloudFormation designer: you'd be putting in
all the resources that are needed, you would be defining the
dependencies of these resources, and then you'd be saving this
design as a template, right? Now, what will you do
with this template? This template can be used
to create as many copies as you want. Say, for example,
you have a use case wherein you want your application
in multiple regions for backup purposes, right? So if you want that, you won't be implementing
or creating each and every resource one by one
in each of the regions.

What you can do is
create it in one place in CloudFormation, have
the template in your hand, and deploy that template
in the other regions as well, right? So what will this do? First of all, your replication will
be very precise, so there won't be
any changes in the copies that you have made. Second of all,
you will be doing it quickly, because you don't have to do
the whole process all over again.

You just have to click a button, and that template
will be provisioned, or will be launched,
in that region. So this is what
AWS CloudFormation is all about. It makes your life simpler by handling all the creation and
the provisioning part, right? So this is what
AWS CloudFormation is. Now, how do we get started with CloudFormation, since
it's a very useful service? How can you as a user use it? So let's move on. For using
the CloudFormation service, first of all,
you need a JSON script. Now, why do you need a JSON script? Because you would be creating
a template, right? In the CloudFormation designer, you would be using
the drag-and-drop option and filling in the AWS
resources. Now, when you are doing
that, in the back end it will actually
be creating a JSON script.

Now, what you can do as a user is,
if you're good with JSON, you can create
your own JSON script; otherwise you can use
the CloudFormation designer to create a template. Now,
for creating a template, like I said,
you need a JSON script. So what is a JSON script then? A JSON script is basically a
JavaScript Object Notation file, which is an open standard format. That means
it is human readable, so you can read it as well
as the computer, and you don't need
programming knowledge for this. What you as a user
would be doing is designing your template
in the CloudFormation designer, and that will
automatically create a JSON script. You can do it
the other way as well: like I said, you can create your own
JSON script and feed it into the CloudFormation designer. So this is
how CloudFormation works; this is how you would
be using AWS CloudFormation. But then, how can you
learn the JSON script? It's very easy. Basically, you have
to follow a structure in the JSON document. What is this structure? The structure is like this: you would be creating
the following fields. The first field will be the
AWS template format version. This will basically contain the
version of your template. Next up is the description. Description is a text-only
field wherein you will be describing
your template in words, right? So if I'm a user
and I want to know what your JSON does without reading your JSON script
from beginning to end, I can read the description
in simple English and understand what the JSON template does.
Then you have the metadata.
then you have the metadata.

So metadata will basically
contain the properties of your template. Then
you have the parameters: any values that you have to pass to
the template will be included in the parameters.
Next comes mappings: mappings would basically
include the dependencies between your AWS resources. Then come conditions. The conditions are
basically the conditions that you would be giving
to your template, which are evaluated while the stack is being created
or while the stack is being updated. So if your stack
is being created or your stack is being updated, these conditions will be looked
at. Then comes outputs: whatever outputs
your template, or your creation of a stack,
will provide will come under the outputs header.

Then you have
the resources field. Resources will basically
include all the AWS resources that you want to include in
your infrastructure. Now, if you look carefully, you
will actually be dealing mostly with the resources part, right? Because you will just
be populating the resources and creating the dependencies. So basically, you'd be populating
the resources part, and that is what it was all
about, the resources part. But right now,
this is theory. Now, what does a JSON document
actually look like? A JSON document looks
something like this. So like I said, you would be working
on the resources field, right? So you'd be including
the resources field, and in that, if you had noticed,
this JSON document is all about S3: you are basically
including an S3 bucket. So you specify the name
of the bucket, and inside the braces
you'll be specifying the type, that is, which service
this bucket belongs to.
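Putting the fields just described together, a minimal template for a single S3 bucket might look like this sketch (the logical name "EdurekaCFBucket" is illustrative, and only the Resources section is strictly required in a template):

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Creates a single S3 bucket",
    "Resources": {
        "EdurekaCFBucket": {            # logical name of the resource
            "Type": "AWS::S3::Bucket",  # the service/resource type
        },
    },
}
template_body = json.dumps(template, indent=2)
```

This is the same shape the designer generates when you drag an S3 bucket onto the canvas.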

Here, you'll be specifying
the S3 service. Don't worry, I'll be showing you guys
this JSON document in a moment, but before that
you should understand how a JSON document
is structured, and this is what we're doing right now. Now guys, this is
the CloudFormation dashboard. Now, you have to create
a stack over here, right? And for the creation of a stack,
you require a template. So first we'll design a template,
and then we'll create a stack. So this is my CloudFormation
designer. Let's go back
to our slide and see what we actually have to do. So, this is our first
demonstration, wherein we'll be creating an S3 bucket
from CloudFormation. So we'll be designing a template around that first, and then
we'll be deploying this code, right? So let's do that. Let's go to our CloudFormation
window now. So we have to create
an S3 bucket, so we'll scroll down
to the S3 service.

So here is the S3 service. We click on this S3 service, click on bucket
and drag it over here. Right, so this is
the S3 bucket, guys. Now, you can edit the name
of the template over here. You can name it
as edureka-CF, which means Edureka
CloudFormation, right? So you specify that. Now, this is your JSON code; you
can compare the JSON code, guys. Let me make it a little
bigger for you guys. Yeah, so this is the JSON
code, guys. Now, I didn't code
this JSON script, right? I just dragged and dropped this bucket
over here in CloudFormation, and it automatically generated
this script. Comparing it with the code that we have
in our presentation, let's see. So we have resources. Yes, we have resources. We have the name-of-your-bucket
part.

So basically, this is
the name of your bucket, and then there's a type,
wherein you'll be specifying the S3 service. So you have Type, specifying
the S3 service over here, right? So if you want to change
the name of the bucket, we can do that over here. Let's specify it as
edureka-CF. Alright, so we are done. This is it, guys; this is
all you have to do. So now, for running this
in CloudFormation, all you have to do is click
on this create stack icon. Now this will lead
me to this page, which is the create stack page. Now, it has automatically
uploaded this template to the S3 bucket, and it has specified
the URL here, right? We click on next, and you specify
the stack name.

Let's specify it as edureka-CF, right? So you don't have to
specify anything else. Let's click on next, then click on create. So you'll be seeing
the events on this page. Let's refresh this. So it says create
in progress, right? So my template is now
being created into a stack, and that stack will have
the AWS resource in it, which is the S3 bucket.

Right? So I think that's enough time. Let's refresh it and check
if our stack has been created. So it's still
in the creation phase. Let's wait. Alright, so now it shows me
that the creation is complete. Alright guys, so let's go to our S3 service
and check whether we have the bucket that AWS CloudFormation
created for us. So we go to the S3 service, and here it is, guys. So this is the bucket
that we created, right? And you can see the time.

It's March 28th, 2017. Today is March 28th, 2017, and the time is 7:05,
and the time is 7:07 here. Alright, so this bucket
has just been created by CloudFormation. So guys, like I said,
it is very easy. It is easy to understand
and to deploy as well. You basically just have
to create a template, and that is it; AWS CloudFormation
will do the rest for you. And the cool part is that you can replicate
the template as many times as you want, right? So it will save you time. Okay, this demonstration is done. So we have created an S3 bucket
using CloudFormation. Let's see what our second
demonstration is all about. So now we'll be creating
an EC2 instance in which we will be
deploying the LAMP stack, which means on that
EC2 instance we'll be installing Linux,
we'll be installing Apache, we'll be installing MySQL, and
we'll be installing PHP as well.

Right? So, let's see how we will do that. For our second demonstration, we will again go back
to the CloudFormation console. We will click on create stack, and now we have
to launch a LAMP stack. A LAMP stack is basically
a sample template in AWS, so we can select
the sample template, and we'll click on view
or edit template in designer. So a LAMP stack is basically
an EC2 instance with Linux, Apache, MySQL and PHP
installed onto it, right? You can see in the designer
that we have only specified an EC2 instance and attached
a security group to it. You need
the security group, obviously, because you have
to connect to this EC2 instance. Now, a LAMP stack is basically
a web server, remember? Now, let's see the template
for this LAMP stack. So we discussed the structure
of a JSON document, if you guys remember;
the first part was the AWS template format version.

Then you have the description, then you have
parameters. Parameters, if you guys remember,
are basically the values that will be passed
to the template. Now, if you are creating a LAMP stack, you'd be needing
the database name, you'd be needing
the database password, you'd be needing a lot
of things, right? If you're installing MySQL,
you'll be needing the username, you'll be needing the password. So all of that you can feed
in here in the parameters. So you can specify the key name: if you are connecting to this instance
through an SSH connection, you'd be needing a key pair, right? So you would be specifying
the key pair here.
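A hedged sketch of what that Parameters section looks like in the template: the parameter names below follow AWS's sample LAMP template (DBName, DBUser, DBPassword, KeyName), but the defaults are placeholders, and the real sample template adds constraints like length limits that are omitted here:

```python
parameters = {
    "KeyName": {
        "Type": "AWS::EC2::KeyPair::KeyName",   # key pair for SSH access
        "Description": "EC2 key pair for SSH access",
    },
    "DBName": {"Type": "String", "Default": "edureka"},
    "DBUser": {"Type": "String", "NoEcho": "true"},      # hidden in console
    "DBPassword": {"Type": "String", "NoEcho": "true"},  # hidden in console
}
```

`NoEcho` is what keeps the password masked on the stack-creation page you are about to see.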

Then you will be
specifying the DB name and the other details. Now, how will that look
when you are creating a stack? So let's do that. We will click on this icon,
which will now create a stack automatically, so
we'll be prompted with this page. Click on next,
then you will reach this page wherein you are filling
in the entries, right? So you would specify
the stack name; this is by default,
so we'll be specifying
the stack name first. So let the stack
name be lamp-demo, and then we move on
to the parameters part. So whatever you specified in the JSON parameters field
will be reflected over here. So we specified
the DB name over here, so it is asking me
for the DB name. So let's give it as edureka, and let's give the DB password
and the DB root password as something, the DB user
as edureka, and the instance type as t1.micro. Why
t1.micro? Because, if you guys noticed
in the template, we didn't specify
a virtual private cloud, that is, a VPC. Now,
all the instances which are launched these days,
with all the new instance types which are there in EC2, have
to be launched in a VPC by default.

But since we are creating
a JSON file and we didn't specify a VPC, you have
to select an older generation of EC2 instance. So let it be t1; t1
is an older generation, and it runs without a VPC as well. And then you have to specify
a key name; the key name would basically be used
to create an SSH connection to your instance. Right? So our key pair was
edureka_a; we'll select that and will click on next. Now, the SSH location is
basically your IP address. If you want to, you can specify it; I
don't want to specify it, so we'll click on next. You don't
have to enter anything over here; click on next, confirm,
and click on create. Now, what is happening in the background is it
is picking up that JSON file and creating a stack: it will first
launch an EC2 instance, it will then install Linux
onto that, it will then install Apache and MySQL, and then in
the end the PHP installation.

So what we will do is,
once it says that the creation is complete,
we will go and check if everything has been installed on our server by creating
an SSH connection, right? So let's wait until the stack is
complete. Alright guys, so as you can see
in the events, the creation
is now complete. So let's check if our installation
has been correct; we'll go to the EC2 instance. Now, this is our instance
which has just been created. We can check that
it's been created
on March 28, right? So today is the 28th. Alright, so now let's connect
to this instance. For that, we will have
to copy the IP address; this is the public IP address. For those of you who don't know
how to connect to EC2, you'll be pasting
the IP address here, right? And then you have
this private key file, right? So this is of the .pem extension, but the PuTTY software
needs a .ppk extension. So you have to convert
this .pem file to .ppk; that can be done using
the PuTTYgen software.

So this is the PuTTYgen
software. So I will be dragging this file here. Okay, it doesn't work. So we'll click on load, go
to downloads, click on all files, select my .pem file,
click on open, click on OK, and then click
on save private key. So let's name it edureka_a,
click on save. So the file
has been saved; we'll close it, go back to our PuTTY software
here, enter the IP address here, then click on SSH,
click on Auth, click on browse, go
to your .ppk file, click on open, and click on open here. So now you'll be connected
through your SSH to your EC2 instance. For any Linux installation
on your AWS infrastructure, the login will be
ec2-user. I see, you're in. Let's see if you can connect
to the MySQL installation. So mysql -h,
so it is on localhost, -P and the port number, which is 3306,
and then the user that we gave was edureka,
and the password was this.

Okay guys, so we are in. So that means we successfully
created the edureka username which we specified
in the JSON script. That worked well. And then, okay, we also specified
that we need a database, right? So, let's see if it
shows the databases, or whether our databases
have been created as well. Okay, so it has a database
called edureka, right? So the JSON script worked
well. Now, the thing
to notice here is how granularly you
can configure your JSON file, right? First of all, it launched an EC2 instance,
then installed Linux, then installed MySQL, it
configured its settings, and inside MySQL it gave
you a database, right? So this is awesome, guys. This gives you
whole control of AWS just through a JSON script, right? And this is the power
of CloudFormation. Now, if you want
this infrastructure, or whatever you have created
right now, to be replicated again on some other instance, that can be done
with a single click of a button, right? And it is
actually pretty awesome, because if you were
to install this LAMP stack on a server, or on AWS again, if you launch an EC2 instance
with the Linux OS, installing Apache, MySQL and PHP
may take time.

It actually takes time: you have
to open the console, you have to open
the terminal, you have to enter the commands, and depending on
your internet speed you will install all those packages. So this is neat: it does everything for
you automatically, right? So guys, this is what CloudFormation was all about. So I'll close the session; let me go back to my slide. Alright, so guys, we are done
with the LAMP stack demo. Today's session is going to be
on auto scaling and load balancing. So today
I'm going to tell you how you can auto
scale your resources so that they become
highly available, and this is what we're going to do today.

All right, so with that, guys, let's start today's session with the agenda. This is what we are going to do today. First, we're going to see what snapshots and AMIs are; these are basically the entities using which you will be auto scaling your resources. Once you know what snapshots and AMIs are, we'll move on to why we actually need auto scaling and what auto scaling exactly is. After that we're going to see what a load balancer is, and towards the end we'll be doing a hands-on, which is going to be very interesting, because I don't think there's a demo out there like the kind of demo that I'm going to show you today. All right, and guys, if you're thinking about moving to the cloud industry, auto scaling and load balancing are very important topics in this domain, so you should know about them.

So if you don't know about them, please pay attention today, because you're going to gain a lot of knowledge. All right, moving on, guys. Let's start with the first topic, which is snapshots and AMIs, and see what those are. I guess most of you are aware of what an EC2 instance is; for those of you who are not, an EC2 instance is just like a raw, fresh piece of computer that you have just bought. On that computer, you can choose any operating system that you want.

Once you have the operating system, you can install any kind of software on it. So every time you launch a new EC2 instance, you have to install all the required software on it. But there's a workaround. What if you want a specific configuration of EC2 instance, say five EC2 servers which are exactly like each other, right? One way of doing that would be to launch a new instance every time and install the required packages, which is a tedious way of going about it. The other way of doing it would be to configure your EC2 instance once and then actually create an image of it.

So you'll be creating an image of your EC2 instance, and using that image you can actually deploy four more EC2 servers. This image is basically what an AMI is. An AMI, which is an Amazon Machine Image, is nothing but an executable image of your already existing EC2 instance, right? But before an AMI can be created, there is a thing called a snapshot. Now what are snapshots? Snapshots are nothing but a copy of the data that you have on your hard drive.

So basically, if you have your C drive and you copy your C drive onto some external drive, that becomes a snapshot. But if you can boot from that external drive, so that your whole operating system comes up on some other machine, then it becomes an AMI. So this is basically the difference between the two: a snapshot is not a bootable copy, and an AMI is a bootable copy that you have. Alright, so I hope you got the difference between what an AMI is and what a snapshot is. I'll repeat it again: you use an AMI to basically replicate an EC2 instance again, so that you don't have to do the configurations all over again, right? Now you might be wondering, weren't we going to talk about what auto scaling is?
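Before moving on, the snapshot-versus-AMI distinction above can be made concrete with a tiny Python sketch. This is purely a conceptual model, not the AWS API: a snapshot is just copied volume data and is not bootable on its own, while an AMI is bootable and can stamp out new, preconfigured instances.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """A point-in-time copy of a volume's data -- not bootable by itself."""
    volume_data: str
    bootable: bool = False

@dataclass
class AMI:
    """A bootable machine image built on top of a snapshot."""
    snapshot: Snapshot
    bootable: bool = True

    def launch_instance(self) -> str:
        # A new server comes up preconfigured with the snapshot's data.
        return f"instance running with: {self.snapshot.volume_data}"

snap = Snapshot(volume_data="Linux + Apache + website files")
image = AMI(snapshot=snap)
print(image.launch_instance())
```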

What is load balancing? Why do we need AMIs? Be patient; everything will be clear by the end of the session. All right, moving on, guys, let's now discuss why we need auto scaling. Now, the way I will be going through the session is that I'll explain each topic to you and then show it to you in the AWS console. So we just discussed what snapshots and AMIs are, so let me quickly show you how you can create an AMI of an already existing EC2 instance in the AWS console.

So give me a second; I'll just go to my browser and my AWS console. Guys, this is my AWS console. I hope it's visible to you. The first thing you'll be doing is going on to your EC2 console. In your EC2 console you will have all your servers that are running right now, right? For the sake of simplicity, I have already deployed two servers, which are server 1 and server 2, and I have configured them both with Apache so that they can host a website. Let me quickly show you what the website actually looks like. If I go to this particular IP address of server 1, this is server 1, right? So this is how the website looks. Similarly for my server 2: if I go into my server 2, this is how my server 2 looks. Here it is. All right, so these are my two servers. Now what I want is to create an exact copy of these servers so that they can be replicated. When I say replicated, everything from the software to this website will be copied onto an image, and when I deploy that image, it will be deployed inside one more EC2 server in which I don't have to do anything. This website will be there; I just have to go to the IP address and I can see this website.

All right. So now what I'll be doing
is I'll be creating an Ami of both the server. So let's create an EMF
or server one first. I'll select the server one. I'll go to actions. I'll go to image I
click on create image and all I have to do is
give an image name for it. So let me give the name
as live server one, right? This is my image name. I click on create image
and that is it. It takes in your request
for Eating an Ami and it does that right
pretty simple now similarly. I will be doing it
for server to as well. I'll select server
to I go to image. I'll create an image and I'll name the image
say live server to So once I've done that you can see the images
in your am I tab? So if you look at over here in the images section
you can look at Ami is if you go to your aim is you
can see there are two images which are just being created which are in the pending State
as of now and they are live.

So one and lives over to Now using these images you
can create any kind of server that you can create
the exact same server with just a click of a button. All right, you don't have
to configure anything much. Alright, so this is
how you create a new map pretty straightforward guys. Let's move on and discuss. Why do we need auto-scaling now? So you learned how to create
an Ami, let's go ahead and stand auto-scaling and see how they are connected
to Ami is all right.
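For reference, the console clicks above boil down to a single EC2 API call. The sketch below is hedged: `create_image` with `InstanceId` and `Name` is the real boto3/EC2 operation, but the helper function and the stub client are mine, so the example runs offline without AWS credentials.

```python
def create_ami(ec2_client, instance_id: str, name: str) -> str:
    """Create an AMI from a running instance; returns the new image id.
    With real AWS you would pass boto3.client('ec2') as ec2_client."""
    response = ec2_client.create_image(InstanceId=instance_id, Name=name)
    return response["ImageId"]

class FakeEC2:
    """Stub standing in for the boto3 EC2 client, so the sketch is runnable."""
    def create_image(self, InstanceId, Name):
        return {"ImageId": f"ami-from-{InstanceId}-{Name}"}

image_id = create_ami(FakeEC2(), "i-0abc123", "live-server-1")
print(image_id)
```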

So say you have an application, a website. Now, this website is hosted on servers, guys, right? And servers are nothing but machines. Now, every machine has its limitation, right? For example, say this machine has around 8 GB of RAM plus a Core i5 processor, so say it can host a hundred people. Only a hundred people can come to this website and easily navigate inside it, but if more than a hundred people come in, this computer or server becomes slow.

All right, so say there are a hundred people as of now trying to access your website, and they can access it easily. Now your website becomes a hit overnight, and a lot of people are trying to access your website, which makes your server overburdened. In this scenario you can do only one thing: deploy more servers and distribute the traffic equally among those servers so that the requests can be handled.

Now, this is a manual task, and manual is a big no-no in the IT world, guys. So a service called auto scaling was invented. With auto scaling, what happens is that it actually analyzes the kind of load which is coming in, and it deploys servers according to that. So say around 300 people are coming in, and it sees that you need three servers to handle that kind of traffic.
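The arithmetic behind that, each server handling about a hundred users, can be sketched in a couple of lines. The 100-users-per-server figure is just the example's assumption, not an AWS number.

```python
import math

def servers_needed(concurrent_users: int, users_per_server: int = 100) -> int:
    """How many servers are needed if each can handle users_per_server users."""
    return max(1, math.ceil(concurrent_users / users_per_server))

print(servers_needed(300))   # 300 users -> 3 servers
print(servers_needed(301))   # just over capacity -> 4 servers
```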

It will do that automatically, right? And that is where your AMI comes in, guys, because the new servers that you will be launching have to be created from some template. The second server has to be an exact copy of server 1, the third server as well has to be an exact copy of server 1, right? And that is where the AMI comes in. What basically happens is that in the auto scaling service you attach the AMI which you created, and using that AMI it deploys more servers, right? This is why the AMI is significant, or this is how the AMI is related to auto scaling, and this is why we need auto scaling. Let's move ahead and give a definition of what auto scaling exactly is. So like I said, whenever your load increases and you have to scale automatically up and down, you use auto scaling. It's not only about scaling up, that is, deploying a third or fourth server when your load increases; when your load decreases, you don't want all four servers to still be sitting there idle, right? That is not the case with auto scaling: you can scale down as per your needs, and you can configure everything you can imagine about scaling up and scaling down in the auto scaling properties.

All right, so this is why we need auto scaling. Now, one more thing you need with auto scaling: if you would have noticed, I said servers get deployed during auto scaling, so say there are four servers which get deployed during auto scaling right now. The traffic has to be distributed equally among them, right? This distribution of traffic has nothing to do with auto scaling; it has to be done by a separate entity, and that is what we are going to discuss in the next section. But before that, let me show you how you can configure the auto scaling properties and attach the related AMI so that the related servers are launched. So let me go to my AWS console. Here, as you can see, the AMIs have already been created; they are live server one and live server two. Now what I'll be doing is creating auto scaling groups, or configuring the auto scaling properties, so that these servers can be auto scaled as and when required. But before that I actually have to create a launch configuration.

Now, what is a launch configuration? If you look at the AMI, guys, you have only specified what kind of data should be there in your server. What you have not specified is what kind of machine you should launch every time there's a need, right? That is exactly what you do in a launch configuration. You have the data, but you don't have the information about the kind of machine that you want to launch, so that kind of stuff you will be specifying in the launch configuration. So what I'll be doing is clicking on Create launch configuration, and now it gives me a wizard, the same as that for an EC2 server. In the EC2 wizard I had to choose an operating system, right? So it gives me the same wizard, but I don't have to go here; I have to go to a separate tab, which is called My AMIs. I'll select My AMIs, and now I'll select the newly created AMI which I just created. Say we are creating a launch configuration for server 1 right now.

So I'll select live server one, click on Select, and now it asks me the kind of configuration I want for my server. I need a t2.micro, because we are doing a demo today and don't need much computing power. So we just have to select t2.micro, and we'll name the launch configuration.

So let's name it as live server one. The IAM role is not required, and I click on Next. Now it asks me about adding storage; the default EBS volume is enough for a t2.micro machine. I'll go to Configure Security Group, and in the security group I just have to add the HTTP rule, because I have to connect over HTTP to all the instances that I'm launching. So I'll select the HTTP rule from here and click on Review. That is it, guys; nothing else has to be configured. It asks me to check everything that I've just configured, and everything seems fine.

I click on Create launch configuration. Now it asks me for the key pair. Every server that is launched will be associated with the key pair which we specify here, right? You can create a new one if you don't have one already; I already have a key pair, so let me choose mine, and I acknowledge that I have this key pair and create the launch configuration. It just takes a second or two, and we are done. Alright, so now we have created a launch configuration: we have specified what kind of machine we want and what kind of data should go into that machine. Now we'll be creating the auto scaling group, in which we'll specify in which cases we want to auto scale. So let's create an auto scaling group now. It has automatically picked up the launch configuration that we just created, that is live server one. Let's name this group as live server one group. And what is the initial size that you want in your auto scaling group? That is the minimum number of servers that you want.
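As an aside, the launch-configuration wizard we just walked through corresponds to a single Auto Scaling API call. In the sketch below, `create_launch_configuration` with `LaunchConfigurationName`, `ImageId`, `InstanceType`, `KeyName`, and `SecurityGroups` mirrors the real boto3 Auto Scaling operation, but the wrapper function and the stub client are my own, so it runs offline; the names are the demo's.

```python
def make_launch_configuration(asg_client, name, image_id,
                              instance_type="t2.micro",
                              key_name=None, security_groups=None):
    """Mirror of the console wizard: AMI + machine type + key pair + security group."""
    params = {
        "LaunchConfigurationName": name,
        "ImageId": image_id,
        "InstanceType": instance_type,
    }
    if key_name:
        params["KeyName"] = key_name
    if security_groups:
        params["SecurityGroups"] = security_groups
    asg_client.create_launch_configuration(**params)
    return params

class FakeASG:
    """Stand-in for boto3.client('autoscaling'), records the call."""
    def create_launch_configuration(self, **kwargs):
        self.last_call = kwargs

params = make_launch_configuration(
    FakeASG(), "live-server-1", "ami-12345678",
    key_name="my-key", security_groups=["http-only"])
print(params["InstanceType"])
```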

So let it be 1. And remember, guys, this is the most important part: when you are creating an auto scaling group, ensure that you're doing it in your default VPC, to be on the safe side, because there are a lot of settings you have to do if you create a VPC on your own, and that becomes a hassle. If you accidentally delete your default VPC, which I did, you have to contact the AWS support team and they'll help you out with it; they'll basically create one for you, because you cannot recreate it on your own. So always ensure that you are in the default VPC whenever you're creating an auto scaling group. Alright, so now I will be specifying the subnets. Basically you have to select a minimum of two subnets. I'll not get into what subnets are, because then it would become a three-hour session.

I will click on Configure scaling policies now. Over here you can specify the properties I was talking about: when do you want your servers to scale? Here you can specify the average CPU utilization. Now, what do I mean by average CPU utilization? Say there are four servers running as of now, right? It takes the average across all four servers, and if the average goes beyond whatever number you specify here, say I specify 70 over here, then whenever the average CPU utilization goes beyond 70 it will launch one more server. Similarly,

I can configure one more property here, which says that if it goes below 20 percent, scale down by one server. So if there are five servers and the CPU utilization has gone below 20 percent, it will scale down by one server and come down to four servers. You can also set how many seconds it should wait, say the traffic is spiking up and down too frequently, right? For that, you can set a time: if the 20 percent mark has not been crossed for, say, five minutes, then it will scale down a server, or if the 70 percent mark of CPU utilization has been crossed for five minutes, it will then scale up. It will not scale up the instant it becomes 71 percent for only one second. So you can specify all of that over here. But since I cannot load test my instance here, I'll just keep it at its initial size, which just means that one instance has to be there in any case; even if I delete the instance, it will automatically launch it again.
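The scale-up/scale-down rule just described can be sketched as a plain function: sustained average CPU above 70 percent adds a server, sustained utilization below 20 percent removes one (never dropping under the minimum), and brief spikes inside the wait window are ignored. The thresholds are the demo's example values; the function itself is a conceptual sketch, not the AWS policy engine.

```python
def desired_capacity(avg_cpu: float, current: int, sustained_minutes: float,
                     high: float = 70.0, low: float = 20.0,
                     cooldown_minutes: float = 5.0, min_size: int = 1) -> int:
    """New server count given the sustained average CPU utilization (%)."""
    if sustained_minutes < cooldown_minutes:
        return current                     # spike too brief -- don't react
    if avg_cpu > high:
        return current + 1                 # sustained overload: scale up
    if avg_cpu < low and current > min_size:
        return current - 1                 # sustained idle: scale down
    return current

print(desired_capacity(75, current=4, sustained_minutes=6))    # -> 5
print(desired_capacity(15, current=5, sustained_minutes=6))    # -> 4
print(desired_capacity(71, current=4, sustained_minutes=0.1))  # -> 4 (ignored)
```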

Alright, so we'll select "Keep this group at its initial size" and go to configure notifications. I don't want to configure the notifications, nor the tags, so I click on Review and then on Create Auto Scaling group. Alright, I've successfully created an auto scaling group for my live server one. Similarly, I will do the same steps for my server 2 as well.

I'll click on Create Auto Scaling group and select the launch configuration for my server 2. I haven't made that yet, so let's create a launch configuration first for server 2. We'll go to My AMIs and select the server 2 image here. Alright, so I've selected server 2, and I do the same steps that I did earlier. Let me give it the name live server two. I click on Add storage, configure the security group over here, add the HTTP rule, click on Review, create the launch configuration, select the key pair, acknowledge it, and create the launch configuration, doing the same steps as before, nothing new here. I've created the launch configuration. Now I create the auto scaling group, which is live server two group. Then the VPC, as I said, should be the default, a minimum of two subnets should be selected, then click on configure scaling policies, keep it at the initial size, configure, review, and create the auto scaling group. All right, nothing much, guys; the same things that I did for my server 1,

I've done for my server 2 as well. So since I've created the auto scaling groups, if you go to your EC2 dashboard, you will notice that two more servers are now being deployed, right? You can actually identify them over here; see, these two servers are being initialized. These have just been created by your auto scaling groups, because we specified that a minimum of one server should be there at all times. Now, if you try to go to the IP address of this server,

you will see that it has the exact same settings as my EC2 instance. So this is my server 1, right? As you can see, a new instance got created, but with the exact same settings. I didn't have to do anything; it automatically created an instance with the same settings. And the same is the case with server 2 as well, guys. If I go to my server 2 and try to access it, I'll see the same things over there as well. I'll show you in a bit. Yeah, so this is my server 2. Alright, so my auto scaling groups are functioning fine.

So let us come back to our slide now. We are done with auto scaling. Like I said, you need to have an entity which will equally divide the traffic between the servers that you have just deployed. So I've created two auto scaling groups as of now, guys, and why I have created a second auto scaling group, I will tell you in a bit; for now, understand that there is an auto scaling group.

All right, and inside that auto scaling group, say there are five servers. If a person, a customer who has logged onto your website, is coming in, how would his traffic be treated? How would he know which server to go to, right? There comes in the third entity, which is called the load balancer. What a load balancer does is this: your customer will basically be coming to your load balancer, and the load balancer will decide, based on the usage of your servers,

which server is more free, and then give the connection to that server. So this is basically the role of a load balancer. Like I said, a load balancer is a device that acts as a proxy and distributes network or application traffic across a number of servers. Now, I've been saying repeatedly that your traffic is actually distributed equally among the servers, right, but in a few moments I'll tell you that there is one more way of distributing your traffic. Before that, let me again stress the point that this was your auto scaling group, guys; this is just the example that I took in the beginning. There are these sets of users, they're trying to access your website, and they are being routed to these servers. This routing is actually done by a load balancer. Now, like I said, the traffic is distributed in two ways, right? The first way would be to distribute it equally among the number of servers, like say there are five servers,

so it will distribute it equally among the five servers. But say there are two kinds of servers now. Your load balancer can identify what kind of request is being made by a user. For example, in your website or application, you have a part wherein you can process images, and you have a part where you have your blogging section. If you want to process an image, you want your traffic to go to a different set of servers, which are auto scaled in their own auto scaling group, right? And for the blogging section, you have a different auto scaling group, which is auto scaled separately, but you want everything to go through one single link. The way to do that is by using an application load balancer. So let me just repeat what I just said. Say this set of servers hosts your image processing part; they do all your image processing. And this set of servers hosts the blogs that you have on your application.

All right, a user comes in. He just logs onto your website and goes to a URL which says, say, edureka.co/image. If he goes to /image, your load balancer will see, okay, he's asking for the image kind of content, so he should go to this set of servers, because these servers serve the image purpose. And if he goes to edureka.co/blog, your load balancer identifies, okay, this user is asking for the blog content, so he should go to this set of servers. All of that is done using your load balancer. If you compare it with a classic load balancer, the classic one does not have that kind of intelligence. What it does is basically take all the traffic that is coming to it

and distribute it equally among the number of servers that are under it. But with an application load balancer you have this option wherein you can divide the traffic according to the needs of the customers. Now, once you have divided the traffic, the same thing happens here as in a classic load balancer: at this point it will equally distribute traffic among the number of image servers, and similarly it will equally distribute the traffic of the people who want to access the blog among the blog servers. So this is what an application load balancer is all about. The classic load balancer was invented earlier, and these days nobody uses the classic load balancer anymore; people are using the application load balancer, and that is what our demonstration is going to be all about today. All right, so enough of talks; let's move on to the hands-on, that is, the demo part. Let me quickly show you what we are going to accomplish today.
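The classic load balancer's behavior described above, blindly handing each incoming request to the next server in turn, can be simulated in a few lines; this is a conceptual sketch of equal distribution, not any AWS API.

```python
from itertools import cycle

def round_robin(servers, requests):
    """Distribute requests equally across servers, classic-LB style."""
    rotation = cycle(servers)
    assignment = {s: 0 for s in servers}
    for _ in range(requests):
        assignment[next(rotation)] += 1
    return assignment

# 100 requests over five servers: each server gets 20.
print(round_robin(["s1", "s2", "s3", "s4", "s5"], 100))
```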

So basically, a user will come in. He will have the address of your load balancer, and if he asks for the image path, or say server 1 in our case, he will go to the auto scaling group of server 1; if he asks for server 2, he will go to server 2. But all of them will use the same address, that is, the address of your load balancer. This is what we are going to accomplish today. Now, for those of you who didn't understand why we created two auto scaling groups: it is because we want these servers, that is, the image processing servers, to be scaled, and at the same time we want the blog servers to scale as well. That is the reason we created two auto scaling groups. So I created one for server 1, which you can imagine is for your image processing, and I created an auto scaling group for server 2, which you can imagine is for your blogging section. Having said that, guys, now let's move on to my AWS console and go to our load balancers.

All right, so what I'll be doing now is creating a new load balancer, and that load balancer will be of the type application load balancer. You can see I have two options here: either I can create a classic load balancer or I can create an application load balancer. I'll go with the application load balancer and name it live load balancer, and the scheme is internet-facing, since mine is a website that I want you guys to access. Otherwise, if you are working in a company and that company wants a load balancer for the internal websites the company has, you can actually opt for an internal load balancer as well. But since we have a website and we want it to be accessed from outside, we will use the internet-facing load balancer. The listener is HTTP, that's fine, and then the availability zones:

like I said, you have to select a minimum of two availability zones, and then you click on Configure security settings. Now you'll be specifying the security group. For the security group, it's better to create a new one. Remember, guys, don't use the default security group for your load balancer; it's good practice to always create a new security group so that you can customize your rules according to your needs. So I'll create a new security group, specify the HTTP rule, and click on Next. Now comes the part wherein we'll be specifying the targets. Now, what are targets? In an application load balancer, guys, targets are basically your auto scaling groups, right? Target 1 would be your auto scaling group 1, your target 2 would be auto scaling group 2, then target 3, target 4; you can have as many targets as you want.

But in this wizard, you have to specify a minimum of one. So we'll create a new target group and call it, say, live auto one. The protocol is HTTP, the port is 80, we'll click on Next, and I'll review everything. I think everything is fine, so I'll create this load balancer. We have not done all the settings yet, guys; I'll show you how to do all the settings. For now, we have just created a plain load balancer. So I have created a load balancer which is pointing toward target group one, and that target group is not pointing to my auto scaling group as of now. We will do that now. In this part, I just created a target group called live auto one.

I'll create one more target group, which will be called live auto two, for my second auto scaling group. So I will create this, and done. I now have two target groups, that is, live auto one and live auto two. Now these two target groups have to point to my auto scaling groups respectively. The way to do that: you cannot point them here; you have to go to your auto scaling groups, and in your auto scaling groups you have to select the auto scaling groups that you just launched, that is, live server one group and live server two group.

So I will go to live server one group, go to Details, and over here click on Edit. Inside Edit you have this option for target groups. You don't have to specify anything in the load balancers field; that option is only for the classic load balancer, but we are creating an application load balancer, right? So we'll be specifying everything in the target groups. For live server one group, we'll be specifying, sorry, it will be live auto one, the target group that I just created, and live auto one is connected to your load balancer.

So basically your load balancer points to your target group, your target group is now pointing to your auto scaling group 1, and the auto scaling group points to your instances. This is how the visibility comes in, so I save it. Target group one goes with live server one group, and target group two I'll be specifying in the second auto scaling group, which is here, that is live auto two.

I'll save it, and let me quickly verify that I've done everything right. So this is live server one group and it has live auto one, fine; this is live server two group and it has live auto two, fine. So my load balancer can now see the auto scaling groups that I've just configured. Let me quickly go to my load balancer. Now comes the part, guys, wherein I'll be specifying when to go to auto scaling group 1 and when to go to auto scaling group 2. Like I said, we'll be specifying it using the kind of request that the user has made. The way to do that is by first selecting your load balancer and going to Listeners.

Once you go to Listeners, guys, you will reach this particular page, and in this you have to click on View/edit rules. Once you click on View/edit rules, you will reach this page, which is structured kind of like an if-else. You can see that there is a default rule as of now: any request which is made will go to live auto one, which means any request which is made will straight away be pointed to auto scaling group 1. Now we'll specify that if the user is asking for server 2,

he should be pointed to server 2. So let us do that. The way we'll do it is like this: we'll click on Add rules, click on Insert Rule, and now I'll specify. You have two options here: the routing could be based on your host, that is, the address of your website, or it could be based on the path. Now, what is the difference? Say edureka.co; this is the host name. Now, if I type in resources.edureka.co, it still points to my domain, and if I write it over here and specify that it has to go to server 2, it will go to server 2. Otherwise, if you type in resources.edureka.co and you have not configured anything, nothing will happen. So that is host-based routing. With paths, the difference

is that, say, you write edureka.co/blog; that /blog becomes the path. With host, the difference is that resources.edureka.co becomes one host name, right? But with path, you're basically putting a slash and going into a particular folder. So you can specify the path here. Now, the other way you could have done the image processing and blog, rather than having them on two sets of servers, is that you could have configured them inside two folders in your root directory on one server: it could be /server1 for your image processing and /server2 for your blogs. But I don't want that, because the more distributed a system is, the more reliable it becomes, right? And that is the reason we have two different sets of servers for two different sets of things.

So the way you can route your traffic to both the servers is by typing in the path. Say if I have to go to server one, I'll type in server1/*. The star basically means anything after server1 can be accepted, but the request will be forwarded to auto scaling group one. All right, so if I have server1 anywhere in my path, it will go to auto scaling group one. So I'll save this rule. Similarly, I say that if it has server2 in its path, and anything after that, it has to go to auto scaling group two, right, and save it.
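The rules just configured can also be expressed through the load balancer API. Below is a minimal sketch of what those two path rules look like as CreateRule parameters for an Application Load Balancer; the ARNs are placeholders, and the target group names merely stand in for the two auto scaling groups from the demo.

```python
# Sketch of the two path-based routing rules, as boto3's elbv2 API expects
# them. All ARNs below are placeholders, not real values.

def path_rule(path_pattern, target_group_arn, priority):
    """Build create_rule arguments: forward any request whose path
    matches `path_pattern` to the given target group."""
    return {
        "ListenerArn": "arn:aws:elasticloadbalancing:region:acct:listener/placeholder",
        "Priority": priority,
        "Conditions": [
            {"Field": "path-pattern", "Values": [path_pattern]}
        ],
        "Actions": [
            {"Type": "forward", "TargetGroupArn": target_group_arn}
        ],
    }

# /server1/* goes to the first target group, /server2/* to the second
rule1 = path_rule("/server1/*", "arn:aws:elasticloadbalancing:region:acct:targetgroup/one", 1)
rule2 = path_rule("/server2/*", "arn:aws:elasticloadbalancing:region:acct:targetgroup/two", 2)

# With real ARNs you would apply them like this:
# import boto3
# elbv2 = boto3.client("elbv2")
# elbv2.create_rule(**rule1)
# elbv2.create_rule(**rule2)
```

A host-based rule would look the same, except the condition's Field would be "host-header" with a value like "resources.edureka.co".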

And that is it, guys; my load balancer has now saved its settings. Let's hope for the best and try executing it. So this is the link, guys; if you just type in this link, it will by default go to server one. Right, so if I go to this link, you can see it is going to server one as of now. But if I specify /server1, it will go to my server one, and if I specify /server2, it will go to my second server.

Now, you might be wondering, "Hemanth, you might just have a different directory in the same server." So let me clear that doubt. What I'll do is go to my EC2 dashboard, and here you have server one; I'll quickly show you what happens if I type in server two here. All right, so this is the IP address, right? So if I type in this IP address, I'm going to server one.

If I type in /server2, it will give me a 404, because there is no folder called server2, right? Same is the case here. So if I go to this IP, you can see server one. If I don't specify anything after my address, it will still go to the same server, that is, this IP address. But if I specify /server2 over here, it will not be able to do so, because this is not a load balancer; it is directly your IP address. But over here, on the load balancer, if I specify server two,

it will redirect me to the second server. One second. Right, it will redirect me to the second server, and that is all that I need. All right. So with one address you are actually pointing to two servers, which solves your two problems. Now, the real-life use case, like I told you, could be for two different kinds of tasks: say you have a blogging section on your website and you have an image processing section on your website.

If you want two different servers to host your two different services, you can do that easily using a load balancer. All right guys, so with this I conclude my session for today. Today in this session we'll be talking about cloud security. Without any further ado, let's move on to today's agenda and understand what all will be covered in today's session. So we'll start off the session by discussing the why and what of cloud security. After that,

we'll see how we can choose between a public, a private, and a hybrid cloud. Then we'll see whether cloud security is really a concern among companies who are planning to make a move to the cloud. Once we have established that cloud security is really important, we'll see how secure you should make your application. After that, we'll look into the process of troubleshooting a threat in the cloud, and then we'll implement that process in AWS. So guys, this is our agenda for today. Let's move on to the first topic of today's session: why cloud security is important. Let's take an example here and talk of three very popular companies: LinkedIn, Sony, and iCloud. LinkedIn in 2012 experienced a cyberattack wherein 6.5 million usernames and passwords were made public by the hackers. After that, Sony experienced one of the most aggressive cyberattacks in history, wherein their highly confidential files, like their financials and their upcoming movie projects, were made public by the hackers, right? And this made a huge impact on the business front of Sony.

iCloud, which is a service from Apple, also experienced a cyberattack wherein personal or private photos of users were made public by the hackers, right? So guys, in all these three companies you can see there's a breach in security which needs to be addressed. Right? So cloud security has to be addressed; it needs to be there in the cloud computing world. Since we've now established that cloud security is really important, let's move on to understand what cloud security actually is. So what is cloud security? It is the use of the latest technologies and techniques in programming to secure the application which is hosted on the cloud, the data which is hosted on the cloud, and the infrastructure which is associated with cloud computing. Right? And the other part of this is that whatever security techniques or technologies you are using to secure your application should be updated as frequently as possible, because every day new threats are coming up. Every day,

there are new workarounds to problems, right? And you should be able to tackle these problems and these workarounds, and hence you should upgrade your security as frequently as possible. Right, moving ahead, let's understand how we can choose between a public, a private, and a hybrid cloud. So we have understood what cloud security actually is; now let's talk in terms of security and understand how we can choose between a public, private, and hybrid cloud. If you were to choose between these three infrastructures, what should be our basis for judging which cloud we should choose? So, you would opt for a private cloud when you have highly confidential files that you want to store on the cloud platform. Now, there are two ways of setting up a private infrastructure: you can either set up private servers, or a private infrastructure, on your own premises, or you can look for dedicated servers offered by a cloud provider. Right? So all of that comes under the private infrastructure. Then we have the public cloud infrastructure; on public cloud infrastructure, you would basically host websites that are public facing.

So say if you have a products page where you have an application which can be downloaded by the public, that can be hosted on the public cloud, because there is nothing that has to be kept secret over there, right? So things like websites, and data that is not confidential and that you don't mind the public seeing, can be hosted on your public cloud.

The third infrastructure is the most important infrastructure, which is the hybrid infrastructure, and this is the setup that most companies go for, right? So what if there's a use case wherein you have private or highly confidential files, and a website as well? If you have this kind of use case, you might go for a hybrid infrastructure, which is kind of the best of both worlds: you get the security, or the comfort, of the private infrastructure, and the cost effectiveness of the public cloud as well. Right? So your hybrid cloud is basically this: if you want your highly confidential files to be stored on your own premises and your website to be hosted on your public cloud, that infrastructure would be a hybrid cloud infrastructure. So basically, you would choose a private cloud if you have highly confidential files; you would choose a public cloud if you have files that are not that important, or files that you don't mind people seeing; and you would choose a hybrid cloud infrastructure if you want the best of both worlds, right? So this addresses how we can choose between a public, private, and hybrid cloud. Moving on,

let's understand whether cloud security is really a concern. So we've discussed why cloud security is important, and we've discussed what cloud security is, right? Now let's see whether this really makes sense. If we say that cloud security is really important, but there is no one who is actually thinking about it, there's no point, right? So let's see if companies who are making a move to the cloud actually think about cloud security. Here's a Gartner research study on companies who are making a plan to move to the cloud, or who have not moved to the cloud yet. So what are their concerns? Why are they not doing so? The topmost reason listed by these companies was security and privacy concerns, right? So as you can see, these companies who want to make a move to the cloud are also worried about the security of the cloud infrastructure, and this makes it clear that cloud security is actually very important. Now, we have understood that cloud security is very important, and we have understood that companies looking at cloud security are actually following

the practices for cloud security. But now, how secure should you make your application? What is the extent to which you should make an application secure? Let us start with this line: it is said that cloud security is a mixture of art and science. Why? Let's see. It's a science because obviously you have to come up with new technologies and new techniques to protect your data and your application, right? So it's a science because you have to be prepared with the technical part. But it is art as well. Why? Because you should create your techniques, or create new technologies, in such a way that your user experience is not hindered. Let me give you guys an example: suppose you make an application, and to make it secure you think, okay, after every 3 or 4 minutes I'll ask the user for a password. From the security point of view it seems okay, but from the user's point of view it is actually hindering his user experience.

Right? So you should have that artist in you, in that you should understand when to stop, or to what extent you should take your security techniques; and you should also be creative as to what security techniques can be implemented so that the user experience is not hindered. For example, there is the two-step authentication you get when you're logging into your Gmail account, right? Just knowing your password is not enough; you need an OTP as well to log into your Gmail account. So this might hinder the user experience to some extent, but it is making your application secure as well. Right? You should have a balance between the science part and the art part that you're applying to cloud security. Moving on, let's now discuss the process of troubleshooting a threat in the cloud. So let's take an example here.

Say you're using Facebook, and you get a random message from a person; there's some kind of story, like you usually get on Facebook, that such and such thing happened, click here to know more. You get a similar kind of message here, and by mistake you actually click on that link. You didn't know that it was spam, and you clicked on it. Now what happens is all your friends on Facebook chat get that message, right? And they get furious as to why these kinds of spam messages are in their inbox. You get scared, you get angry as well, and you take your frustration out on Facebook. So you contact Facebook, and you get to know that they already know the problem, that they're already working on it, and that they are near to a solution.

Now, how did they come to know that there is this kind of problem that needs to be solved? So basically, cloud security, or the threat identification process, is done in three stages. The first stage is monitoring data. You have AI algorithms which know what normal system behavior is, and any deviation from this normal system behavior creates an alarm, and this alarm is then monitored by the cloud experts, or the cloud security experts, sitting over there. And if they see there's a threat, they go to the next step, which is gaining visibility. So you should understand what caused that problem, or, precisely, who caused that problem. Your cloud security experts look for tools which give them the ability to look into the data and pinpoint the statement, or the event, which caused this problem. Right, so that is done in the gaining visibility stage. And once we have established, okay, this is the problem, then comes stage three, which is managing access.

What this basically does is give you a list of users; in case we are tracking the "who", it will give you a list of users who have access, and we will pinpoint the user who did that, right? And that user can be wiped out of the system using the managing access stage. So these are the stages which are involved in cloud security. Now, if we were to implement these stages in AWS, how would we do that? Let's see. The first stage was monitoring data, right? So if you have an application in AWS and you are experiencing this same kind of thing, what will you do for monitoring data? You have a service in AWS called AWS CloudWatch. Now, what is AWS CloudWatch? Basically, it's a monitoring tool, so you can monitor your EC2 and your other AWS resources on CloudWatch. How can you monitor them?

You can monitor the network in and network out of your resource, and you can also monitor the traffic which is coming on to your instance, right? You can also create alarms in your CloudWatch. So if there's a deviation from normal system behavior, like I said, it will create an alarm for you. It'll escalate the event and alert you about it, so that you can go and see what that problem actually is. So CloudWatch is the monitoring tool, right? So this was about AWS CloudWatch. Let me give you a quick demo of how the AWS CloudWatch dashboard actually looks. Okay.

As I said, this is your AWS dashboard. Now, for accessing CloudWatch, you can go under the management tools; here is CloudWatch, and we'll click on it. Over here you can monitor anything, right? We'll go to Metrics, and you can see there are three metrics over here: you can monitor your EBS, you can monitor your EC2, you can monitor your S3. Now suppose I want to monitor my EC2. So as you can see, I have two instances running in my EC2; one is called a batch instance,

and the other is called a WPS instance. Now, these are all the metrics which are there, so I can check the metrics for my WPS instance: for network in, I can check the disk read ops, and so on. So let me select the network out metric, and there'll be a graph over here, so I can see this graph, and as you can see, between six o'clock and 6:30, I experienced

a surge in my traffic, right? So basically, this is how you monitor your instance in CloudWatch, and you have all these default metrics to check how your instance is doing in AWS, right? So this is what CloudWatch is. You can also set alarms here. If you go to Alarms and click on Create Alarm, you go to EC2, and you can select your metric from over here; now let's select the disk read bytes. Once I do that, it will ask me if there's a time range for which I want to monitor that instance. Okay, let's not set any time range; let's click on next.

When I click next, you will be prompted with this page, so you can set your alarm name, you can set your alarm description here, and then you can specify at what read/write number you should get this alarm, right? So you'll be setting that over here. After that, we will go to actions. So once an alarm is triggered, who should that alarm go to? You can see, as I said, over here: whenever the state is "alarm", what should we do? When the state is "alarm", you can send a notification to your SNS topic. Now, what is SNS? Basically, it's a notification service; we'll be discussing what SNS is in the next session.
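The settings filled into this wizard map directly onto CloudWatch's PutMetricAlarm API. Here is a hedged sketch of the same alarm expressed as API parameters; the instance ID, SNS topic ARN, and threshold are made-up example values, not from the demo.

```python
# Sketch of the alarm configured in the demo, as the parameters that
# CloudWatch's PutMetricAlarm API expects. IDs and ARNs are placeholders.

alarm = {
    "AlarmName": "disk-read-bytes-high",           # the name set on the wizard page
    "AlarmDescription": "Alert on high disk reads",
    "Namespace": "AWS/EC2",
    "MetricName": "DiskReadBytes",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    "Statistic": "Average",
    "Period": 300,                                 # evaluate in 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1_000_000.0,                      # example: alarm when reads exceed ~1 MB
    "ComparisonOperator": "GreaterThanThreshold",
    # when the state is ALARM, notify the SNS topic ("notify me" in the demo)
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:notify-me"],
}

# With boto3 this would be applied as:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```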

Don't worry if you don't understand; basically, for now, what you can understand is that SNS is a service where you set, if you get a notification, what to do with that notification, or whom to send that notification to, right? So say there's a topic called "notify me" in SNS; in "notify me", I have configured an email address, that is, my email address, so that whenever a notification comes to the SNS service, or the "notify me" topic to be precise,

it sends me an email with that message. So I will get a message with this alarm, that such and such thing has happened in CloudWatch, and then you do whatever is required. The other thing that you can do over here, on the same SNS topic, is configure a Lambda function to be executed. Now, what will that Lambda function do? Say, suppose I configure the metric to be CPU usage, and I say whenever the 40 percent mark is crossed, create an alarm, or go to an alarm state, and notify the SNS "notify me" topic about this. In the "notify me" topic, I can configure a Lambda function to clear all the background processes in that EC2 instance, right? So if I do that, the CPU usage will automatically come down. So this becomes a use case where you want to launch a Lambda function whenever your CPU usage goes beyond 40 percent, and this is the way you would do it. So this was about CloudWatch; there's nothing much to it. You create alarms and you monitor metrics, right? Moving ahead, let's move on to the second process, which is gaining visibility.
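Before moving on, here is a minimal sketch of the Lambda function described in that use case. It assumes the function is subscribed to the SNS topic that the alarm notifies; the actual clearing of background processes is only stubbed, since that part depends entirely on your instance setup.

```python
import json

# Minimal sketch of a Lambda subscribed to the alarm's SNS topic. SNS
# delivers the CloudWatch alarm notification as a JSON string inside the
# event; we parse it and decide whether to run the (stubbed) cleanup.

def handler(event, context):
    # the alarm document sits in the SNS message body as a JSON string
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    alarm_name = message["AlarmName"]
    if message["NewStateValue"] == "ALARM":
        # placeholder: e.g. trigger a script that stops background jobs
        return f"cleanup triggered by {alarm_name}"
    return "nothing to do"
```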

For gaining visibility, basically, you have to track whatever activity is happening in your AWS account. There is a service in AWS for this called CloudTrail. The CloudTrail service is basically a logging service, wherein a log of each and every API call is made. Now, how is it useful? Let's talk about the security perspective. Say a hacker got access to your system; you should know how he got access to it. So if you have a timeframe, say he got access to your system, or you started to face the problem, around four o'clock, you can set the time range between two o'clock and whenever the damage happened, and monitor all that has been going on, and hence you can identify the place where that hacker got access to your system. This is the part where you will get to know who that person actually is, or you can isolate the problem, or which call caused it. So if you take a cue from our Facebook example over here,

you can actually pinpoint who is responsible for those spam messages, because you have all those logs, right? You will see the origin of those messages. Now, once you've done that, the next step is managing this guy out of the system, or wiping this guy out of the system. But before that, let me show you guys how CloudTrail actually looks. So let's go back to our AWS dashboard and go to the CloudTrail service. Again, under the management tools you have the CloudTrail service; you click on CloudTrail and you will reach this dashboard. All right, so here you have the logs. As you can see, you can set the time range here, but I'm not doing that; I'm just showing you the logs. So even for logging into my console, it is showing me that I logged into my console at this time on this date, right? So every event is logged, guys.
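Programmatically, the same investigation can be done with CloudTrail's LookupEvents API. Below is a sketch of narrowing the logs to a suspect time window, as described above; the dates and the event name filter are example values of my own, not from the demo.

```python
from datetime import datetime

# Sketch of querying CloudTrail for a suspect time window: you know roughly
# when the trouble started, so you look only at events between two o'clock
# and four o'clock, optionally filtered to a particular API call.

query = {
    "StartTime": datetime(2017, 3, 28, 14, 0),   # 2:00 p.m., example window
    "EndTime": datetime(2017, 3, 28, 16, 0),     # 4:00 p.m.
    "LookupAttributes": [
        # e.g. only bucket deletions; any recorded API name can be used here
        {"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}
    ],
}

# With boto3 you would run the query and print who did what, and when:
# import boto3
# for e in boto3.client("cloudtrail").lookup_events(**query)["Events"]:
#     print(e["EventTime"], e.get("Username"), e["EventName"])
```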

Every event that is happening on your AWS console is being logged. So let's talk about the S3 bucket: somebody deleted a bucket, and that has again been logged, right? It happened at 7:38 p.m. on the 28th of March 2017. So any activity, any kind of activity which happens in AWS, would be logged here. Okay guys, so this is about CloudTrail. Let's go back to our slides and move ahead with the session. So, like I said, you have now identified who is responsible for your problem. The next step is managing access, so now you should be able to throw that person out, or remove that person from the system. Most of the time, what happens is, if we take our Facebook use case, there was a user who triggered that problem, right? So there are two things that you have to do: first of all, you have to remove that spam from the system. You've got to know where it originated, so now you start wiping it. After that,

you have to debar that user from doing it again, right? From the source, you'll get to know who that user is. Now, using managing access, you will actually get the access to do all of that. If you talk about AWS, this service is called AWS IAM. So what AWS IAM does is basically authenticate that particular access. Now, you are a root user, so you can do anything. But what if you have employees? Obviously, all employees will not have all the rights. What if you want to give granular permissions to your employees? For example, in our case, what if one specific employee is capable of tracking down this problem, or tracking down what has to be done? You can give that particular person the rights. How? Using IAM. So IAM is used to provide granular permissions.

It actually secures your access to the EC2 instances by giving you a private key file, and it is also free to use. So, let's see how IAM is used. Let me go back to my AWS console. Okay, as I said, this is my AWS dashboard. I will go to the Security, Identity and Compliance domain and then click on IAM. Now, over here, I'll click on Roles, and I can see all the roles which are there in my IAM. Since I would have identified which role is creating a problem, I'll go to that role.

For example, say I have a problem in the AWS Elastic Beanstalk EC2 role; I click on this, and once I click, I get this screen. Now I can see the trust relationships, the access advisor, and the revoke sessions tabs, right? I'll go to Revoke Sessions and click on Revoke Active Sessions, and hence I will be able to stop that user from accessing my AWS resources. So this is how you use IAM, guys. Now, one more thing that you can do over here: you'll go back to your dashboard and go to Roles. As I told you, guys, you can actually create a role for a person who would be able to access restricted things on your AWS account. So let me quickly show you how you can do that. You will click on Create New Role and you will give your role some name; let's give it "hello" over here.

Right, click on Next Step and go to "Role for identity provider access". Now you can select how that user of yours will be accessing your AWS account: allow users from Amazon Cognito, or from Facebook or Google identity providers. All right, so let's select this; let us select Facebook and give it some random application ID. Anyway, I'm not going to actually create this role.
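For reference, behind this wizard AWS generates a trust policy document for the role. The sketch below shows the likely shape of that document; the application ID and user ID are made up, and the exact condition key names are my assumption and should be verified against the IAM documentation before use.

```python
# Sketch of the trust policy the "role for identity provider access" wizard
# generates. The IDs are fake, and the condition keys shown are assumptions
# to illustrate the document's shape; check the IAM docs for the exact keys.

facebook_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # the role trusts Facebook as a web identity provider
            "Principal": {"Federated": "graph.facebook.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # your Facebook application ID (fake example)
                    "graph.facebook.com:app_id": "123456789012345",
                    # optional: only this specific Facebook user may assume the role
                    "graph.facebook.com:id": "987654321",
                }
            },
        }
    ],
}
```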

I'm just showing you guys how to do it. So basically, you get an application ID from Facebook; since you are using Facebook to authenticate that guy to your AWS account, you'll get an application ID by going to Facebook's developer portal. You can do all of that over there. Okay, so that is not the concern; you'll enter the application ID and click on Next Step. Then you get the policy document: whatever you configured in your text boxes has actually been created as a JSON file, so you don't have to edit anything over here; click on Next Step. Now you have to attach a policy. What are policies? Policies are basically all the permissions you want to grant that user.

Right? So if you want to grant him the execution role for Lambda, you can do that; you can grant him the S3 execution role, right? Whatever policy you need, you can actually create that policy in IAM. I'm not going much into the details of this, because all of this is covered in your IAM session, but since I just told you guys this can be done, let me show you how it can be done. So you'll select whatever policy you want, click on Next Step, review it, and create that role. This is it, guys; you can actually select whatever policy you want that role to have. Hence, a policy is basically a permission that you want a role to have. So if you give it permissions to just review your instances, it'll only be able to review your instances. Okay, one more thing I want to make clear is that you don't have to give your security credentials to that guy anymore, because now you'll be specifying that the user will be able to connect through Facebook.
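As an illustration of that kind of granular permission, here is a sketch of a policy document that only allows reviewing EC2 instances, along the lines of the "review your instances" example above. The policy body is a minimal example of my own, not the one generated in the demo.

```python
import json

# Sketch of a granular IAM policy: a user or role with this document
# attached can only *look at* EC2 instances, nothing else.

review_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],   # read-only EC2 actions only
            "Resource": "*",
        }
    ],
}

# IAM expects the policy as a JSON document
policy_json = json.dumps(review_only_policy, indent=2)
```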

Okay. So also, you have a part here wherein you can specify which specific user can access it. So I can type in my name here, and if I'm logged in through Facebook and my username is Hemanth Sharma, only then will I be able to connect to my AWS account. Right now this is set to ID; I could also set the other parameter, but I think ID is fine, wherein you will be adding the ID of the guy whom you want this AWS account to be accessed by. So you all have Facebook IDs, right? You all just have to punch in your Facebook IDs

over here, click on Next Step, and then you'll be able to access this AWS account, if I create this role right now, with the policies that I will be attaching to the role. So this is how you use IAM, guys. Let us go back to our session. Okay, so these are the three services, guys: you have IAM, you have CloudTrail, and you have CloudWatch, using which you can control, or actually see, what is going on in your AWS account. So let's go ahead and start with today's session, with the first topic, which is: why do we need access management? All right, so to discuss this topic, let's understand it using an example. Say you have a company in which you have a server, and the server has everything in it; it has all the modules in it, and it gives different users the permission to use the different services. Right? Now, in your company,

first of all, you should have an administrator, who will have all the rights to access the server, right? Nobody in today's IT world works on the root account, so there has to be an administrator account. So first we will create an administrator account with all the permissions. Now, tomorrow, say a UI developer comes into your company. A UI developer will only work on the graphical tools, right? So he should only be allowed the graphical tools and not some other tools. Maybe he shall not be given internet access, or something like that; maybe he's not given the PowerPoint access; maybe he's not given access to some folders or some drives, anything like that. So all of that can be defined on the server by the administrator, and specific rights will be given to the UI developer. Similarly, if after that a business analyst comes in, he should only be able to access the analytics module which is there on your server, right? He should not be able to get into the UI development

part, or be able to see the other aspects of what is there on your server. So each and every user, each and every role, will have specific rights assigned to them, right? And this is done by policies, which are in turn given by administrators. So this is what access management is: giving each role the specific rights that they deserve, and this is what we are going to accomplish today in AWS. So this is why we need access management.

Let's go ahead and understand how we can accomplish this in AWS. To accomplish this in AWS, you have a service called IAM, which uses this concept of access management and allows you to apply it to the users who are going to use your account. All right.

So what is IAM? IAM is basically a service from AWS using which you can give permissions to different users who are using the same AWS account that you have created. So in any company, you don't have to have two or three AWS accounts; you can have one AWS account on which a number of people can work. For example, you can define that maybe a developer would like to work on your AWS account and he should only work on the EC2 instances; you decide that, right? So you can define a policy like that: that the developers will only be able to access the EC2 instances on the AWS account. Similarly, if, say, a database administrator comes in, he should only be able to access the DB instances on your AWS account, and so on. So all of that is possible using IAM. But IAM is not only about creating users and creating policies.

There is more to IAM, and hence we'll be discussing the different components of IAM now. So let's go on and see what the different components are. There are basically four different components in the IAM service: first we have users, then we have groups, then we have roles, and then we have policies. The way we are going to go about these is: first I'm going to explain each component in IAM to you, and then we're going to see how we can execute them, or create them, in the AWS console. So let's start with the users. The very first time you actually create an AWS account, that is basically the root account that you have created, right? So there is no user inside it.

So why do we basically need a user? You need a user because you are supposed to give permissions to someone, right? Say, first of all, I want to give administrator rights to a user. So you understand, you have to have an entity first to which you can assign permissions, and these entities are called users in AWS. Any person who wants to access your AWS account has to be added as a user in IAM, and then you can attach different policies to that user. So this is what a user is all about. Let me go to my AWS Management Console and show you how you can create a user in IAM. All right, so give me a second. All right guys, so this is my AWS sign-in page. When you log in with your email ID and your password, that is basically your root account.

So what I'm going to do right now is log in using my root account and first create an admin account for myself. All right guys, so you should never work in your root account; you should always have an administrator account to work through. The root account should only be used when there is an emergency, say you have been locked out of your administrator account; only then should you be using your root account. The first thing that you should do when you enter the root account is go to IAM, which is just right here; go to IAM and then you will have this dashboard right over here. You can see there is a thing called Users; you will click on Users and then click on Add User. All right, so now it will ask you for the username, so you can provide a username; I'll add my name first, so that'll be hemanth, right? And what kind of access do I want to give to this particular user? There are basically two kinds of access that I can give: first is the AWS Management Console access, and then we have the programmatic access. So what are these two? Well, there are basically two ways you can access the AWS resources: you can either access them

using APIs, that is, using your code; say you have created an application which is interacting with your AWS resources. In that case, when you're interacting using the APIs, that is called programmatic access. Secondly, there is the AWS Management Console access: that is when you are using the AWS website to actually deploy resources, or create or remove policies, or whatever. So that is called the AWS Management Console access. For my user, I'll be giving it both the accesses, that is, programmatic access and Management Console access. Also, when you enable programmatic access, you basically get the access key and the secret key as well. What are these? I will be explaining that to you in a bit. All right, so we have selected both of these options; then we move ahead to choose the password. So, do you want an auto-generated password or a custom password? I'll choose a custom password, since I'm creating the account for myself. And do I want the password to be reset on the first login? No, I don't want that.

So I'll click
on next permissions, right? So what kind of permissions do I want my account to have? I
will be configuring that over here. So as of now there
are no groups, there is no existing user
that I can copy from. So I'll attach
existing policies. And since I want to attach
the administrator access that is the first
policy over here. I'll select that and click on next right so you
can review all the settings that you did over here
and click on create user. This will create a new user
in your AWS account. So as you can see, I have got my access key ID and
a secret access key. Now guys, the secret access key you only get to see one time, only one time,
when you created your account. So it is essential that you store your access key
and secret access key once you get this page. All right, let
me store it quickly. So this is my access key ID.
Why are we copying it?

You'll get to know
during the session. Don't worry and
my secret access key, which is this let me copy this
and paste it in the notepad. All right, so don't worry. You might be thinking that I've exposed
my secret key to you. So I will be deleting
this account afterwards so you don't have
to worry about that. All right, so I've got
my access key ID and my secret access key. So that is done.

Now what I'll be doing is
I'll be logging out from my root account
and logging in to this user account that I just created. All right. So one more thing that you
have to be very careful of: you will not be logging in
through the same login page that you just saw, right?
You'll have to log in through a different
login page now, and the URL for that is this, right? So you will be logging in
through this link from now on. So whenever you create a user, if you want them to log
into your account, you have to give them
this link to log in, right? So let us copy this link
over here and log out from the root account.

All right. So I've logged out. I'll close
this and I'll come here and go to this particular link. All right. So once you reach
this particular link, it will be asking you the account name, which will
be pre-filled by your link. Right? So you have to give
your username now, which is hemant, and then the password. So I'll type in
the password that I've given it and click on sign in. So now I have basically signed in
to the user that I've just created
on my root account.

Right? So I no longer have
to use my root account. I can basically lock
away my root account for emergency purposes. I'll be using my administrator
account from now on. I can do everything from the administrator
account that could be done from the root account as well. But there are cases wherein you get locked out
from your administrator account; in those cases you will
not be able to get back in without the root account. So moving on guys, I'll go to IAM now. So as you can see, we
have created a user and we have logged
in to that user. And if I go to IAM
now, you can see that it will show
that one user has been created. That is here. All right, so let's get
back to our slide and discuss the next component. All right, so we've discussed what users are; let's move
on to the second component, which is groups. All right. So whenever you create
users, they can also be combined into groups. Now, why do we need groups? We need groups because, let's take an example:
say you have five users, and these five users have
to be given identical access.

Right say these five users
belong to the development. And the developing team has
to have some common access that they all will have right. Now one way of doing
this would be that I would go to each and every user
and attach a policy that they need right
the smart way to do this would be to to include
them inside one group and to that group. I will once only once I will attach the policy
and it will apply to all these five users, right? So these are why groups are
very important now how we can create groups. Let me shed a light on. On that so you will go to you can see you can click
on groups over here. And what you'll do is
basically is you'll click on create new group, right? So, let me give
the group name as live demo.

All right, and I
click on next step. Now, lastly, the policy that I want to attach
to this particular group. All right, so say for example, I just want this group
to be able to access the S3 service from AWS. So what I'll do is I
will select the policy which says Amazon S3 full access,
and I'll click on next step. Now this policy basically
tells you that you can only use the S3 service
in the Management Console and no other service.

All right, so I'll
click on create group. And now whichever
user I put inside this group will
have this property. All right, so I don't have
to configure the policy for any user now. So what I'll do is
I'll create a new user now. So say I create
a new user called test. All right, and this time I'm not giving him
the programmatic access; I'm just giving him
the Management Console access. All right, I'll click
on this and I'll give it a custom password. And then I don't want
him to reset his password, and click on next. Right, and now it is asking me whether I want to include
it inside a group. So yes, I do. I want to include it
inside the group that I've just created,
and I'll click on next, review all
the settings I adjusted, and click on create user. All right. So the test account
has just been created. Now, as you can see guys,
in the case of my account, which I created,

I got an access key
and a secret access key, right? So in this case, I'm not getting any, because I didn't select
the programmatic access. Only when you select the programmatic
access will it give you the keys, so that your application
can actually interact with the services
that you have launched. All right, so I have created
a test user successfully. Let's log into this test user. So I will type in the URL
that has been given to me. Right, now when I
reach this page, I'll enter the username as test
and the password as what I have entered, right,
and I click on sign in. Now with this you can see that

I will now be able to see
the Management Console, and the Management Console
will look exactly like how I used to see it
in my root account or my administrator account. But when you try
to access, say, a service which you have not
been assigned to: say, for example, I only have
access to S3 right now, because I've put this user in the group which has
only the access to S3. If I try to go inside EC2,
let's see what'll happen. Right. So it says you
are not authorized to describe running instances. As a matter of fact, I'm not authorized to see
anything on my EC2 page.

Alright, so that is because I don't have
access to the EC2 dashboard. But let's see
if I can see the S3 dashboard. So I'll quickly go to S3, and
if I have the S3 access, I will be able to see all
the buckets which are there in S3. And yes, I do. So let me go inside a bucket and delete something.
All right, let me delete an object
from this particular bucket. So yes, I can delete it. All right, so let me check
what happens if I detach
this particular policy from that group. All right. Let's see what happens. So I will go to IAM
and I will go to groups. I'll go to this particular
group, and I can see that the policy
is listed over here.

What I'll do is I'll click
on detach policy, and let's see what happens now, right? So I'll go
to the Management Console. So now if I
try to access S3, it will show me
that access is denied. Right, so I no longer have access to the S3 service
on my AWS console. So this is how you can control
access for different users. You can revoke access,
you can include access, right, you can do all
of that in IAM.

So let us come back to our slide
to discuss our next component. So we've discussed what users are,
we have discussed what groups are; now let's come
down to roles. All right, so roles
are similar to users, but roles are actually
assigned to applications. All right, so users are actually
assigned to people, right? So whenever you have
a developer in the company, you will assign him
a user. But roles are basically
assigned to applications. How? Let me explain. Say
you create an EC2 instance, and inside that EC2 instance you're hosting
your web application. Now that web application
has been designed in such a way that it has to interact
with your S3 service.

For example, I will be showing you a
demonstration of exactly this today. Right. So say that application has
to interact with the S3 service. Now, if I want that application to interact
with the S3 service, I have to give it permissions,
and to give it permissions I will use roles. So I will create a role
wherein I will specify that this role can
access the S3 service, and I will attach
this particular role to that particular
EC2 instance in which my application is hosted, and in this case my application
will be able to interact with the S3 service, right? It might sound complicated guys, but it is very
easy to implement.

Let me show you how. So what I'll do now is I'll go back
to my Management Console, which is here. All right, I'll go
to the dashboard and I will go to roles now. All right, so I'll create a new
role. Now, roles can be assigned to any AWS service
which is listed here. What I'll do is I'll create
a role of the EC2 type, so I will select Amazon EC2. And what type of access
do I want this role to have? I want it to have
access to S3. Right? So I'll select Amazon S3
full access over here and I'll click on next step. It'll ask me the role name, so let me specify the role name as edureka
underscore one, right, and I'll click on create role.

So with this, the role
has now been created, but mind you guys,
I have not attached this role to any EC2 instance yet. Right? So what I'll do now is I'll go
to my EC2 console. Over there, I already have
an EC2 instance built. It is stopped, so I'll start it and attach
this particular role to that EC2 instance. Alright, so my EC2 instance name
is hemant underscore one. So here it is. I go to actions and I start
this particular instance. Right. And what I can do is I
can attach the role using instance settings. It says attach/replace IAM role. I'll go here. I will go to the drop-down
and select the role that I just created, which is
edureka underscore one. I'll select that and
I'll click on apply. Now with this, what will happen is
my EC2 instance is now
configured to interact with the S3 service
in this particular account. Alright, so any application that I deploy in this EC2
instance will be able to interact with S3. Okay, so I don't have
to specify any access key or any secret access key.

If you're still confused
with that, be patient. We are getting on to where we
actually use these keys and where we do not. All right. So this is what
your roles are all about. Right, so roles, like I said, are for resources
in AWS; users are for people. Roles and users are similar things: you attach
policies onto them, and they basically
identify a particular instance or a particular person as the owner of that
particular service, right? So we've discussed
what roles are; let's move on and discuss policies. So if you think about it guys, we've actually been
dealing with policies already. Policies
are nothing but permissions that you give to
whatever role or user or group
that you have created, right? So, for example, I want to give
the EC2 instance access; that EC2 instance
access is basically a policy that I will be attaching
to the user or to the role.

All right. Let's see how we
can create policies guys. So I'll go to
my Management Console, I'll go to IAM. Right. So you can either create
policies or you can actually use already existing ones. There are a couple of policies
that have already been created
in your AWS account, but you can go ahead and create
your own policy as well. Alright, so let me show you how. Say for my test account, what I'll do is I will go
inside the test account. All right, and I
will add permissions, and I will attach
existing policies directly. And here I am guys. So now you
can create policies as well. So you see the tab
over here guys, it says create policy. So if you feel the kind of policy
that you want to create
is not listed over here in the default policies,

you can actually create one, and creating a policy
is very easy guys. You just click on create policy
and you will see this page. All right, so you'll
have three options. You can either copy
an AWS managed policy, that is a default policy; you can create your own policy
by just typing in the JSON code; and if you're
not comfortable with coding, what you can do is you
can use the policy generator. Now, what is the policy generator? Let me explain. With the policy generator, you just have to select
what effect you want: do you want it to allow
or do you want it to deny? Right? So say I want to allow
the EC2 service to this particular test account. All right, so I'll go
to EC2. Right, here it is. I selected EC2. What kind of actions can he perform? Say
I want to give him all the actions, so he can do
anything with EC2. And the ARN is
basically a particular resource identifier.
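For reference, the document the policy generator ends up producing is just JSON following the standard IAM policy grammar. A minimal sketch of the allow-all-EC2 policy we are building here (the Sid label is an illustrative placeholder):

```python
import json

# Sketch of the generated IAM policy document. The "Sid" is an
# illustrative label; the rest follows the standard policy grammar.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAllEc2",   # placeholder statement name
        "Effect": "Allow",      # allow, not deny
        "Action": "ec2:*",      # every EC2 action
        "Resource": "*",        # the star: every resource
    }],
}

print(json.dumps(policy, indent=2))
```

Swapping `"Effect": "Allow"` for `"Deny"` is exactly the deny policy we will create in a moment.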

So with the ARN you can
identify a particular resource. But I don't want one particular
resource to be assigned to him; I want him to be able to access
every resource in EC2, right? So I just add star
for all of them and click on next step. So with this, as you can see, it
has automatically created a policy document for you. All you have to do now
is click on create policy, and it will create
the policy for you. As you can see, there were
18 customer managed policies and there are now 19, so I
can go here and select the policy over here. Alright, so if I go
to my user now, which is test, I'm
going to permissions. I will just click on add
inline policy, click on select, again choose EC2,
select all actions, and set the resource to star. So I click on add statement,
click on next step, and click on apply policy. So a policy has been applied
on the test user such that it can actually access
the EC2 instances now. So if I go to my test user, which was not allowed
to access the EC2 instances, I can actually use
EC2 instances now. So if I go to EC2, you can see it no longer gives
me the access denied thing, right? So I can access all
the instances over here as if I was using
the root account, but only for
the EC2 service, right? If I go to S3, you
can see I will still have the access denied page.

Because I have not been
assigned the access to this particular service. Alright, one more thing:
what if you add an allow and a deny policy
together for the same user? What will happen then? So, since I
have allowed EC2 access, what I'll do is I'll deny
EC2 access as well in this particular user. So I'll create
one more policy and I'll say deny.
I'll select EC2, select the actions
as all actions, I will give the resources as all,
add the statement, and click on next step, then apply the policy. So now I have denied
EC2 instances and also allowed
EC2 instances in the same user. What do you think
will happen now? So if I now try
to go to EC2, let's see what will happen. It will say you're
not authorized to use EC2 anymore, because whenever
you create a policy guys, you either get the allow option
or the deny option.

If you have selected
both of them, it will always prefer the most restrictive permission
that you have given: an explicit deny
overrides an allow. So in our case
that is the deny option, right? So it will always
deny access, even if you have allowed
it in the same user, if you have mentioned that that particular
service has to be denied to that particular user. Alright, so this was
about policies guys.
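The allow-plus-deny behaviour we just saw can be sketched as a toy evaluation function. This is a simplification (real IAM also evaluates resources, conditions, permission boundaries, and so on), but the ordering is the point: an explicit deny short-circuits everything, and with no matching statement at all you get an implicit deny.

```python
# Toy model of IAM policy evaluation for a single action.
# Statements are dicts like {"effect": "Allow", "action": "ec2:*"}.
def evaluate(statements, action):
    service = action.split(":")[0]
    decision = "implicit-deny"                    # default: nothing matched
    for stmt in statements:
        matches = stmt["action"] in (action, service + ":*", "*")
        if not matches:
            continue
        if stmt["effect"] == "Deny":
            return "explicit-deny"                # an explicit deny always wins
        decision = "allow"
    return decision

attached = [
    {"effect": "Allow", "action": "ec2:*"},
    {"effect": "Deny", "action": "ec2:*"},
]
print(evaluate(attached, "ec2:DescribeInstances"))  # explicit-deny
```

Note the deny wins regardless of the order in which the two policies were attached, which matches what we observed in the console.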

Let me come back to my slides. So we have discussed
what users, groups, roles, and policies are.
Let's go ahead and discuss a very
important part of authentication, which is called
multi-factor authentication. So what is multi-factor
authentication guys? Multi-factor authentication
is basically something like the OTP that you get when you log
into your Gmail account, right? So you enter your Gmail email ID,
you enter your password, and when you click on continue, it will ask you
for your OTP, right? So it's the same case
here as well. You can configure
your AWS account in such a way that you will enter your username,

you'll enter your password, and when you click on login, it will also
ask you for a code that has to be given to it. Now that code is basically the multi-factor authentication
thing that we are talking about. So there are basically
two layers of security now: one layer is the password,
and the second layer is the code that you will be entering.
Right, now with AWS,

there is an application called
Google Authenticator, right, which you can use to create a virtual multi-factor
authentication device. Now for those of you who are already using
multi-factor authentication in your companies, there's
a thing called Gemalto, right? So people who work from home and have to connect
to the company's network connect
using a Gemalto token. So those of you who are from the IT background
can relate to it. Right, but if you want to go
through a simpler way, you can actually create a virtual multi-factor
authentication device, and to create that
in AWS is pretty simple.

You just have to download
an application called Google Authenticator
on your phone, and you have to connect that
application to your AWS account. And that is it. Now, it might sound tough,
but it's very simple. Let me show you how. So you will basically go
to your AWS Management Console and you will go
to the particular user that you want that multi-factor authentication
to be assigned to. All right. So for example, I want it to be assigned
to the test user, right? So what I'll do is
I'll go to users, I'll go to test, right, and
in the security credentials tab I will have this page
which says assigned MFA device. So it says no as of now, so I'll assign it
a device. I click on edit, and now it'll give me an option
between a virtual MFA device and a hardware MFA device. Now, I have to choose among the two. Since, like I said, you can create a virtual
MFA device very easily,

I'll select
the virtual MFA device. And now it's basically
asking you to install the application on your phone. So we have already done that. Let's click on next step, and now you'll be presented
with this screen. So basically now
what you have to do is log in to
your Google Authenticator app, and you will be scanning
this barcode from your phone. So let me show you how.
Let me connect my phone to the computer so that you can see the screen. Give me a second. Alright, so this is
the screen of my phone guys. So what I have
to do now is go to the Google Authenticator app. It'll ask me to
create an account.

So I click on begin,
and once I have that, basically now I'll have to
scan the barcode from my mobile. So the way to do that is I'll click
on scan a barcode, and then I'll scan
this barcode over here. Right, it might take some time,
so be patient. Yeah, so it's done now;
you're all set. Right. So you just click on done, and now you have
to enter two codes that you
will be receiving on your Google Authenticator. So basically these codes change
every 30 seconds, right? So I have to enter
these codes over here. So it's 020353, zero
two zero three five three, and I have to enter
the next code as well. So let's wait for
the next code, and it's 127891, so I'll enter
that over here as well. So it's 127891,
and that is it guys. So now I'll click
on activate virtual MFA, and it says the MFA device
was successfully associated.
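If you're curious why those six-digit codes change every 30 seconds: Google Authenticator implements TOTP (RFC 6238). The QR code you scanned simply hands the app a shared secret, and both sides then compute an HMAC over the current 30-second time window. A minimal sketch (the secret below is the RFC's published test secret, not a real one):

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, at_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute a TOTP code (RFC 6238) for a given Unix time."""
    key = base64.b32decode(secret_b32)
    counter = at_time // step                  # a new counter every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890", base32-encoded.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, 59))   # the code for the 30-second window covering t = 59
```

This is why AWS asks you for two consecutive codes during activation: it proves your phone's clock and secret are both in sync.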

So I'll click on finish, and that is it guys,
you're done, right? So now, if I log out
from my test account, that is from here, right,
this is my test account. So if I log out
from here right now and try to log in
again using test, I come to my normal
login page, right? So I'll enter my username
and my password, which is this, and now I'll click
on sign in. So now it will ask me for the MFA code. So let's see.

What is our MFA code as of now? So it has changed to
seven three four five five two. So let us enter that: seven
three four five five two, and click on submit. So with this I will
now be able to log into my AWS console
using the test account, which I configured using
the administrator account in IAM, right? So it's
very simple guys. You can actually get
world-class security with the click of a button
using IAM. Alright, so we have seen how we can do
multi-factor authentication. Let's move on
to the hands-on part now. So this is the part you
guys have been waiting for, so just give me a second so that I can configure
everything on my end.

All right. So what we'll be doing now is,
I have created an application which can interact
with the S3 service. All right. So using that application, we will be
uploading files to our S3 console. And how are we going
to do that? First, we are going to do it using
localhost, and that is where our secret key
and my access key come in, and then, since we have assigned a role
to our EC2 instance,

we'll be accessing
that website using EC2 without the access key
and the secret access key, and we'll see whether we
get the access to our S3 service or not. Alright, so let us do that. So now what I'll do is I will go
to my localhost application. So guys, this is
basically my application. What I have to do is choose
a file, upload a picture from my sample pictures, and then it will upload it
to a particular bucket that I've defined in S3, and the bucket looks
something like this. That bucket's name
is quarantine demo. So let me show you the bucket. So as of now,
I think there are some objects. So let's delete those objects. So here it is. This is the
bucket quarantine demo. So I have like three objects
over here as of now. So let's delete these objects. Alright, so now, this is the code
for my application guys. All right. So in this code,
as you can see, I have not specified the key
and the secret key as of now. So I'll get the key and the secret key
from here, right? So let me quickly.

So let me show you how this localhost
website functions without the secret key and access key. So if I try to upload a file
as it is now. See, this is the file
that I want to upload. I click on upload image
and I will get an error, right, because it
is not authenticating itself to the service
that I want to go to. So now I'll add the credentials, that is, the key
and the secret key. Now the way to do that is
like this: so I'll copy it and I'll paste it here. I'll delete this, and this as well is not required,
and now I'll paste my key and my secret key, which is this, right?
So I'll copy the key over here and then
my secret key as well over here, and now I'll save it. If I try to access
my localhost website now, I should be able
to upload a file, right? So if I try to upload the file now, it says well done,
S3 upload complete. So these credentials that I have just entered
are basically the credentials for my hemant account.

So if you want to see where I got
these credentials from: again, you can basically go
to users, you can go to your user, and you can go
to security credentials, and over here it will list you
the access key ID. It will not list you the secret access key, because it is only available
once; you can only see it once. Copy it then, because you will
not be able to see it again. And if I make this particular
key inactive from over here, and if I try to
upload anything again, I will again get an error, because without the keys
my account will not be authenticated
to the S3 service. As you can see, it says
invalid access key, because it is not valid anymore. All right, so I can make
it active again, but that is not required as of now. So what I'll do now: I
have already configured this website on the EC2 console. All right, so let me go
to my EC2. Right, here it is. So remember, in the starting
of the session we created a role for S3 full access, right? So that role has been attached
to my EC2 instance.

So let me show you the website. Here it is. All right, so I can access
the website on my EC2. Now if I choose a file
and I try to upload the file, I'll be able to do so, because my role
is attached. Now let's see what happens
if I detach the role. All right, so I'll go to this, and I'll select
no role, click on apply, yes, detach. And now if I try
to upload a file again, as you can see, I see a blank
page, which basically means that an error has occurred. All right, so I am
not able to upload any file because my role has been
detached from my EC2 instance. So if I want it
to be working again, I'll just simply go here, go
to actions, settings, attach the role,
that is this, click on apply, and it will again work. Right, I'll choose a file, see,
this file, upload the image, and it again works
like a charm, right? So that is it guys.

You don't have
to configure much. You just have to have
the knowledge of IAM, and with that you can do complex procedures
with the click of a button, and you don't have
to sweat about it, right? You might be wondering: did I change
anything in the code when I uploaded it to EC2? So you don't have
to do anything guys. You just have to delete
the key and the secret and upload the code as it is; you don't have
to change anything. If it doesn't have
the key mentioned in this particular function, it will basically get those keys
from the metadata of EC2, and the metadata is the place where your role is actually
attached, right? So if it doesn't find
the key in the code, it basically goes to the metadata and picks
the keys from over there.
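That fallback behaviour can be sketched as a toy credential resolver. This is an illustration of the idea only: the function and argument names are made up, and real SDKs have a longer chain that also checks config files and makes an HTTP call to the instance metadata endpoint.

```python
import os

def resolve_credentials(code_keys=None, instance_metadata=None):
    """Toy version of the SDK credential chain: code, then env, then metadata."""
    if code_keys:                                   # keys hard-coded in the app
        return ("code", code_keys)
    env = (os.environ.get("AWS_ACCESS_KEY_ID"),
           os.environ.get("AWS_SECRET_ACCESS_KEY"))
    if all(env):                                    # environment variables
        return ("env", env)
    if instance_metadata:                           # role keys from EC2 metadata
        return ("metadata", instance_metadata)
    raise RuntimeError("no credentials found")

# With no keys in the code (and none in the environment), the attached
# role's temporary keys are picked up from the instance metadata.
source, _ = resolve_credentials(instance_metadata=("TEMP-KEY", "TEMP-SECRET"))
```

The role's keys from metadata are also temporary and rotated automatically, which is exactly why attaching a role beats pasting long-lived keys into your code.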

All right. So guys that is it for the demo part
in this session. We will be discussing
about Amazon redshift the most popular
cloud-based data warehouse. So let me run you
through today's agenda quickly. We will Begin by taking a look
at traditional data warehouse will be discussing
its underlying architecture and the disadvantages of using
traditional data warehouse, and then we'll move on
to our today's topic which is I'm redshift
here will be discussing its architecture its key
Concepts its unique features and the advantages
of using Amazon redshift.

And finally, we'll be doing a demo
on Amazon Redshift. In this demo, we'll see how to import
data from Amazon S3 to Amazon Redshift and perform queries
on this data very easily. So I hope that was
clear to you guys. Let's get started. I'm sure you know
what a data warehouse is: you can think of a data warehouse
as a repository where data generated from your organization's
operational systems and many other external sources
is collected, transformed, and then stored. You can host
this data warehouse on your organization's
mainframe server or on cloud. But these days companies are increasingly moving towards
cloud-based data warehouses instead of traditional
on-premise systems, and to know why, we need to understand
the underlying architecture and the disadvantages of using
traditional data warehouses.

So let's begin
by looking at the architecture. First, it is important to
understand where the data comes from. Traditionally, data sources
are divided into two groups. First, we have internal data,
that is, the data which is being generated and consolidated from
different departments within your organization. And then we have external data,
that is, the data which is not generated
in your organization; in other words, the data which is coming
from external sources. So this traditional
data warehouse follows a simple three-tier
architecture. To begin with, we have the bottom tier:
in the bottom tier we have a
warehouse database server, or you can say
a relational database system. In this tier, using different kinds
of back-end tools and utilities, we extract data
from different sources, then cleanse the data and transform it before loading
it into the data warehouse. Then comes the middle tier: in the middle tier we
have the OLAP server.

OLAP is an acronym
for online analytical processing. This OLAP server performs
multi-dimensional analysis of business data and transforms the data
into a format such that we can perform complex
calculations for analysis and data modeling
on this data very comfortably. Finally, we have the top tier. The top tier
is like a front-end client layer; this tier
holds different kinds of query and reporting tools, using which the client applications
can perform data analysis, query reporting, and data mining. So to summarize what we have learned till now:
a traditional data warehouse has a simple three-tier architecture.
In the bottom tier,

we have back-end tools using
which we collect and cleanse the data; then
in the middle tier we have the OLAP server, using which we transform the data
into the way we want; and then finally the top tier,
in which, using different query and reporting tools, we can perform data analysis
and data mining. Moving on
to the disadvantages of the traditional data
warehouse concept: there is this leading
US business service company, and this company is running a commercial enterprise data
warehouse. This data warehouse has data coming
from different sources across different regions. The first problem
that this company faced was when it was setting up
a traditional data warehouse. As we discussed earlier, the architecture of
a traditional data warehouse is not very simple.

It consists of data
models, extract-transform-and-load processes, which we call ETL, and you
have BI tools sitting on top. So this US-based business
had to spend a lot of money and resources to set up a traditional
data warehouse. The data warehouse, which was initially
5 terabytes, was growing over 20% year-over-year,
and it was expected that there might be
higher growth in future. So to meet these continuously
increasing storage and compute needs,
the company had to continuously keep upgrading
the hardware. Again, this task of upgrading the hardware
continuously involves a lot of money, manpower,
and so many resources. So scaling a traditional
data warehouse is not an easy task, and since the company
could not meet all the storage and compute needs easily, it was facing a lot
of performance issues as well. And finally, the company
had to deal with increasing cost: initially they had to spend a lot
on setting up the data warehouse; they had to spend
on hardware, manpower, electricity, security,
real estate, deployment cost, and many others; and as their data warehouse grew,
they had to spend again to meet storage and compute needs. So to sum it up, setting
up a data warehouse, deploying it, and managing it later
involves a lot of money and resources; moreover, auto-scaling in a traditional data
warehouse is not an easy concept. Because of all these reasons, many companies
are increasingly moving towards cloud-based
warehouses instead of traditional on-premise systems.

So guys, in this session
we'll be dealing with one of the most famous cloud-based
data warehouses provided by Amazon, which is Amazon Redshift. In simple
words, Amazon Redshift is a fast, scalable data warehouse that makes it simple
and cost-effective for you to analyze all your data
across your data warehouse and data lake. Guys, I have a definition
which is put up on the screen, and I have a few words which I have
highlighted over there. So as we progress
through the course of the session, we'll know
what those words exactly mean.

So let's ignore them for now, but there are certain key concepts which you should be aware of when you're dealing with Amazon Redshift, so we'll discuss them now. An Amazon Redshift data warehouse is a collection of compute resources, which we call nodes, and when these nodes are organized into a group, they become a cluster. Each of these clusters runs an Amazon Redshift engine and contains one or more databases. This cluster has a leader node and one or more compute nodes. As for the leader node, it receives queries from client applications, then it parses these queries and develops a suitable query execution plan, and then it coordinates the parallel

execution of these plans with one or more compute nodes. Once the compute nodes finish executing this plan, the leader node aggregates the results from all these intermediate compute nodes and then sends them back to the client application. Then we have compute nodes. You can think of these compute nodes as the compute resources that execute the query plan which was developed by the leader node, and while they are executing this plan, they transmit data among themselves to solve many queries.

These compute nodes are further divided into slices, which we call node slices. Each of these node slices receives a portion of the node's memory and disk space. The leader node distributes data and part of the user query that it receives from the client application to these node slices, and all these node slices work in parallel to perform operations and increase the performance of your Redshift data warehouse. So, to sum up, we have a leader node, compute nodes, and node slices.

But how do they interact with client applications? That is the question here. These client applications are basically BI tools, or they can be any other analytical tools, which communicate with Amazon Redshift using drivers like JDBC and ODBC. JDBC refers to the Java Database Connectivity driver; it is an API for the programming language Java. Then we have ODBC, which refers to the Open Database Connectivity driver, and it uses SQL to interact with the leader node. So basically, using these drivers, a client application sends a query to the leader node. On receiving the client application's queries, the leader node parses these queries and develops a suitable execution plan. Once the plan is set up, the compute nodes and node slices start working on this plan; they transmit data among themselves to solve these queries. Once the execution is done, the leader node again aggregates the results from all these intermediate nodes and sends them back to the client application.
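For reference, a Redshift JDBC connection URL generally looks like the following; the cluster endpoint and database name here are placeholders, not this course's actual cluster:

```
jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev
```

The host part is the cluster endpoint shown on the console, 5439 is Redshift's default port, and the final path segment is the database name.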

So that is a simple explanation of Amazon Redshift's concepts. Moving on: when you launch a cluster, you need to specify the node type. Basically, we have two types of nodes. Dense storage nodes are storage-optimized and are used to handle huge data workloads; they basically use hard disk drive (HDD) storage. Then we have dense compute nodes, which are compute-optimized and are used to handle high-performance, intensive workloads; they mainly use solid-state drive (SSD) storage. But there are three things that you should keep in mind when choosing one among them. Firstly, you should be aware

of the amount of data that you want to import into your Amazon Redshift; then the complexity of the queries that you run on your database; and the needs of the downstream systems that depend on the results of these queries. Keeping these three things in mind, you can choose either dense storage nodes or dense compute nodes. So guys, that was the architecture and its key concepts. Now we'll take a look at a few reasons as to why Amazon Redshift is very popular. As we discussed earlier, setting up a traditional data warehouse involves a lot of money and resources, but it's very easy to set up, deploy, and manage a suitable data warehouse using Amazon Redshift. On the Amazon Redshift console,

you will find a Create cluster option. When you click on that option, Amazon Redshift asks you for certain details, like the type of node you want to choose, the number of nodes, the VPC in which you want to create your data warehouse, the user ID, password, and many other details. Once you feel that you have given the right set of details, you have an option which says Launch cluster, and with one click your data warehouse is created.

So with one click you can easily create a data warehouse in Amazon Redshift. Once your data warehouse is set up, Amazon Redshift automates most of the common administrative tasks, like managing, monitoring, and scaling your database, so you don't have to worry about managing or scaling your database yourself. That's how easy it is to set up a data warehouse using Amazon Redshift. We also learned that auto-scaling is difficult in a traditional data warehouse, but you can scale quickly to meet your needs in Amazon Redshift.

Well, we already know that a cluster has a leader node and one or more compute nodes. So if you want to scale in Amazon Redshift, all you have to do is resize your cluster. As we know, the compute nodes are the compute resources, so if you want to scale up, you can increase the number of compute nodes; similarly, if you want to scale down, you just have to decrease the number of compute nodes. Alternatively, we have something called single-node and multi-node clusters. In a single-node cluster, one node takes on the responsibilities of both the leader and compute functionalities, and a multi-node cluster contains one leader node and a user-specified number of compute nodes.

So suppose you want to resize your cluster and you are using a single-node cluster; then you can change from a single-node cluster to a multi-node cluster. Similarly, you can change from a multi-node cluster to a single-node cluster if there is a need. So that's how easy it is to scale up and down in Amazon Redshift. Moving on: we learned earlier that while using traditional data warehouses, it's possible that the performance of your data warehouse might decrease, but with Amazon Redshift you can get ten times better performance than any other traditional data warehouse. It uses a combination of different strategies, like columnar storage and massively parallel processing, to deliver high throughput and fast response times. So let's discuss these strategies one by one. First we have columnar data storage. To understand what that is, we should first know about row storage; most traditional data warehouses and databases use row storage. In row storage:

All the data about a record is stored in one row. Okay, so let's say I have this database here: I have three columns and two rows. The first column contains the unique number associated with a student, the second column contains the name of the student, and the third column contains the age. As we already know, data is stored in the form of blocks in databases or data warehouses. So, as you can see, in row storage the first block contains all the information there is about a particular student: his SSN, his name, and then his age. So basically it stores all the information that there is in a single block.

So in the first block you have information about the first student, in the second block you have information about the second student, and it goes on. Now for columnar storage. Again, I'm using the same database; again I have three columns and two rows, but columnar storage stores data by columns, with the data for each column stored together. So again we have blocks, but the first block here has all the data that is there in the first column. So you have all the SSNs stored in the first block, all the names stored in the second block, and all the ages stored in the third block, and it goes on. There are a lot of advantages of using this columnar storage. Firstly, since in columnar storage a single block contains the same type of data, you can achieve better data compression. Secondly, as you can see, columnar storage can hold three times as many records per block as row storage, and because of this the number of input/output operations decreases. And thirdly, by storing all the records for one field together, a columnar database can query and perform analysis on similar types of data far quicker than row storage.
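To make this concrete, here is a minimal sketch in Redshift SQL of the student table discussed above. Because each column's values live together in their own blocks, Redshift lets you choose a compression encoding per column; the table name and the encodings chosen here are illustrative, not from the course script:

```sql
-- Hypothetical version of the student table from the example above.
-- Each column is stored separately, so each can carry its own encoding.
CREATE TABLE student (
    ssn  INTEGER     ENCODE delta,  -- numeric IDs compress well with delta encoding
    name VARCHAR(30) ENCODE lzo,    -- text compresses well with LZO
    age  SMALLINT    ENCODE raw     -- stored uncompressed
);
```

If you omit the ENCODE clauses, Redshift picks compression encodings for you.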

So this is how the concept of columnar storage, which is used by Amazon Redshift, provides us better performance. And then we have massively parallel processing. I'm sure you might have heard of parallel processing in computer science; it's simply that a number of different processors work together, or compute in parallel. Similarly, massively parallel processing in Amazon Redshift is nothing but the cluster we have already discussed earlier: we have a cluster, this cluster has a leader node and one or more compute nodes, and these compute nodes are further divided into something called node slices.

So when this leader node receives a query, it develops an execution plan, and the compute nodes and node slices work together, in parallel, to execute this plan; later, the leader node sends the results back to the client application. So basically, these node slices and compute nodes work in parallel to achieve better performance. Moreover, Amazon Redshift is also able to smartly organize the data on nodes before running a query, which dramatically boosts performance. So that's how we can get up to ten times better performance using Amazon Redshift. And then the cost: with traditional data warehouses, people had to spend a lot of money to set up, and then later to maintain, the data warehouse. But Amazon Redshift is the most cost-effective cloud-based data warehouse. If you remember, with a traditional data warehouse they had to spend on hardware, real estate, manpower, electricity, deployment costs, and many others, and as their data warehouse grew, they had to spend again on meeting the storage and compute needs. But in Amazon Redshift we don't have to pay any upfront cost. So Amazon Redshift is most cost-effective, and it costs about one tenth of a traditional data warehouse.

You can start small, for as little as 0.25 dollars per hour, without any commitments, and you can gradually scale up later if you need to. In addition to all those advantages, Amazon Redshift allows you to query data from a data lake. A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed, so in a data lake you have data in different formats. You can load data from Amazon S3 into your Amazon Redshift cluster for analysis; that is, from the data lake you can store data in Amazon Redshift, but it needs more effort and cost. Firstly, because loading data into an Amazon Redshift cluster involves extract, transform, and load, which we simply call the ETL process, and this process is very time-consuming and compute-intensive. And it's costly, because loading lots of cold data from Amazon S3 for analysis

means growing your clusters, which is again costly and requires a lot of resources. So as a solution we have something called Amazon Redshift Spectrum, which acts as the interface between your Amazon S3, or data lake, and Amazon Redshift. So you can directly query data stored in Amazon S3 or a data lake with this Redshift Spectrum, without the need for unnecessary data movement.
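As a rough sketch of what that looks like in SQL: with Redshift Spectrum, you create an external schema and then query external tables in place. The database name, IAM role ARN, and table name below are placeholders:

```sql
-- Sketch: expose S3-backed tables through an external schema,
-- then query them without loading the data into the cluster.
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/mySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

SELECT COUNT(*) FROM spectrum_schema.sales_in_s3;
```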

I hope that was clear. And finally, with Amazon Redshift your data is safe and secure: it offers backup and recovery. As soon as data is created or stored in Amazon Redshift, a copy of that data is made, and through secure connections a snapshot of it is sent to Amazon S3 for later use. So suppose you lose your data, or you have deleted the data from Amazon Redshift by mistake; you can restore the data easily from the Amazon S3 service. Amazon Redshift also provides you with an option to encrypt your data.

So when you enable this encryption option, all the data in your cluster, in your leader node, compute nodes, and node slices, is encrypted, and this way your data is very safe and secure. So guys, these are all the advantages of using Amazon Redshift. So now you have a basic idea of its architecture and its various key concepts, like clusters, nodes, leader nodes, and node slices. Now it's time to work on a demo. In this demo,

we'll see how to transfer data from Amazon S3 to an Amazon Redshift data warehouse and perform simple queries. So I hope that was clear to you guys; let's get started. First thing: there is certain software which you need to pre-install so that you can start working with Amazon Redshift. First, suppose you want to perform queries on the data in Amazon Redshift.

Then you need a SQL workbench where you can perform your queries, and as we learned earlier, the client application needs a driver to communicate with Redshift, so we need to install a JDBC driver; and for that JDBC driver to run, we need to have a Java runtime environment. So we have three things to install. Now I'll show you how to install them. I have this Java runtime environment download link; it says Free Download, and when you click on that, it will be downloaded. You can store it anywhere, and once you're done with that, search for the Amazon Redshift documentation. So here it is. When you scroll down, it says Amazon Redshift Getting Started; click on that, and in step 1 we have the prerequisites.

Okay, scroll down, and in step 2 you have an option where you can go to the SQL Workbench website and download it. So click on that, and here it says Build Current Version, and you have the generic package for all systems; you can download that. Once you click on it, it'll start downloading. And there is one more thing, which is the JDBC driver. Go back to the documentation and scroll down; in step 4 you can see Configure a JDBC Connection. Click on that, and it will take you to a page where you have JDBC drivers of different versions. You can download the first one; click on it and it will be downloaded. So once all these three things are downloaded, store them in a folder of your choice.

Well, I have stored them on my desktop. I have this AWS folder, and in that, a Redshift folder. So here's my Workbench zip file; it was a zip file, so I extracted all the files, and then I have my JDBC driver here. The Java runtime environment is in Downloads, so that's okay. So I hope it was easy to install all these things, and you are set to go. Now you're back at the Amazon Management Console. I have previously used Amazon Redshift, so I have Amazon Redshift in my recently visited services; anyway, you can search for Amazon Redshift here. Here it is; it's taking some time to load. Okay, this is my Amazon Redshift console page, and you have different kinds of options in the navigation pane on the left side, and there are two ways to create or launch a cluster. First, you have the Quick Launch Cluster option and the Launch Cluster option. Quick launch is the very easy way to launch a cluster, but suppose you want the freedom to specify all the details, such as the VPCs,

the security groups, the different types of nodes, username, password, and all that; then you can go for the Launch Cluster option. Let's go ahead and explore it. So first it asks for a name, let's say my cluster, and a database name. And the port: 5439 is the default port which will be used by Amazon Redshift. Then the master username, let's say awsuser, and a password; confirm your password and click on the Continue option. So the cluster details are done and dusted; then you have the node configuration. Well, for the free tier you only have dc2.large, but suppose you have a premium membership; then you can choose any of these. For this dc2.large,

this is the CPU capacity, memory, and storage, and the input/output performance is moderate. You can go ahead and choose the cluster type; we discussed this: we have multi-node and single-node. In single-node, both the leader and compute node responsibilities are handled by a single node; in multi-node, we have a single leader node and a user-specified number of compute nodes. Click on Continue, and then here it asks for the VPC details, the parameter group, whether you want encryption or not, and all the other details. So basically, in this Launch Cluster option you have the freedom to specify all the details, but for this demo I'm going to use the Quick Launch Cluster option. So again, for the free tier, I'm using the dc2.large node type; it says two compute nodes, and let's retain the same cluster name, and for the master user, awsuser. Now,

let me give the password. The default port is 5439, and for the last option we have to choose among the available IAM users or IAM roles. But the question is, why do we need an IAM role here? In this demo, I said that we're trying to import data from Amazon S3, but you need a certain set of permissions to access data which is stored in Amazon S3; for that we need to create an IAM role. So let's go back to the IAM service. Let me close all the tabs.

Okay, here you have the Roles option. You can click on that and click Create Role, and since we're dealing with Amazon Redshift, select Redshift, then Redshift - Customizable, and click on Next: Permissions. We want Amazon Redshift to access data from Amazon S3, so search for S3, and you have a permission which says AmazonS3ReadOnlyAccess. Well, for this demo this is enough, but there is one more permission, AmazonS3FullAccess, with which you can perform read and write operations as well. For this demo I'm going to choose the AmazonS3ReadOnlyAccess permission, which provides read-only access to all the buckets in Amazon S3, and click on Next. Then give your role a name, let's say my redshift role, and click on Create Role. So now our Amazon Redshift database has permission to access data from Amazon S3. Let's go back
to the Redshift console. Okay, let me refresh this, and now it's showing the role which has been created. So as you can see, unlike the other launch option, here I didn't have to specify as many details: just the node type, the number of nodes, the master username, the cluster identifier and password, and the default database port; and then you can click on the Launch Cluster option.

So with one click you have easily deployed a database on Amazon Redshift. If you remember, when we tried the Launch Cluster option, we had an option to select a default database or create our own database; but when you use this Quick Launch Cluster option, a default database called dev is created for us. So guys, this cluster has been created. Before we connect it to our SQL Workbench,

let's try to explore it here. You need to make sure that the database health status and the maintenance state are all in green. As for the cluster status, it should be Available, and for the database health it should be Healthy; only then can you make a proper connection with your SQL Workbench. You have this icon here; click on that and you get all the information there is about your cluster, or you can just go ahead and click on the cluster itself. So this is the endpoint; this tells me all about how to make a connection with this cluster. When I click on that, it says publicly accessible: yes, the username is awsuser, and for the security groups it just shows the TCP rules which are set. So that's about the endpoint. Then the cluster name; you have the cluster type, node type, and it shows the nodes, the zone, and the date and time when it was created, and you have the cluster version as well. On the right side,

you have the cluster status, which is Available, and the database health, which is Healthy. Is it currently in maintenance mode? No. And then you have the parameter group apply status, which is in sync with your database, and there are a few other properties as well. But here you can see this VPC security group; click on that, go to Inbound, and make sure it is set for TCP. Okay, edit, make this a Custom TCP rule, and here the port is 5439, custom. That's it; click on the Save option. So that's the default port with which you can access Redshift. Let's go back.

Clusters. Okay, where were we? We just changed the default security group of the VPC. So this is the URL with which you can connect to the SQL Workbench; let's copy it and paste it in our text file. I pasted it over there. Well, if you're using an ODBC connection, you can use this other URL. When you scroll down, you have the capacity details of your entire cluster: it's dc2.large, so seven EC2 compute units, total memory, storage, and platform. Okay, let's go back to the IAM role; I should have an IAM role option here. Let me check. Okay, there's an option; it says IAM roles. You can copy this entire ARN and paste it in the editor too, so that while connecting it will be easy for us to find it.

Okay, so now our cluster has been created; your database, or data warehouse, is set up. Now you can just connect to it with SQL Workbench and start working on it. So let's go back to the folder where I stored my Workbench. Here it is; when you scroll down there's a file which says SQL Workbench executable JAR file. Open it, so here it is. It's asking for a default profile name; let's say new profile 1. Okay, then the driver: that was the Amazon Redshift JDBC driver. And this was the URL we copied earlier into the editor, so I'm going to paste it over here.

Now, this is the URL: Ctrl+C and paste. Then awsuser and the password; okay, that should work. Make sure that you select this Autocommit option, save it, and then click on OK. It says connecting to new database, and now it's successfully connected, so I can easily perform queries. First, let's create some tables. Well, I'm using the sample database from Amazon S3. You have this AWS Redshift documentation; go back to that, and here it says Getting Started, and in step 6 you have these default SQL queries and tables provided. You can go ahead and use them. I have them stored in my editor, so I'm going to copy them; first, I'm going to create all the tables.

Ctrl+C and paste it over there. Let's check what tables are there. First we have the users table. Well, this is like an auction data schema: you have the users table with many users; then you have the category table, with the different categories; then you have a date table, with the date on which a particular event occurred; then you have the event table, with all the details regarding an event; the listing table, where the items which are being sold are listed with all their details; and then you have sales, as in which user is buying how much of which item, and other details. So basically we have six to seven tables. I'm going to select all of that and hit the Run option. So here it says table users created, table venue created, then category, date, event, listing, and sales. So all the tables are easily created. Now for the next part: we need to copy the data for the database from Amazon S3 to Amazon Redshift. Let's go back to the editor, and I have this copy command.

I'll explain the format to you. Ctrl+C, and let's paste it there. Okay, let's explore this copy command. It says copy to the table users, which you just created, from this path, that is, from the file which is stored in an S3 bucket. Then this is the credential, the AWS IAM role which we copied to the editor earlier; apparently we're just giving it permission to access the data from S3, so we need to copy this IAM role here. And then we have the delimiter. Let me go back to the editor and show you an example. Okay, let's say I've written a child's name, age, and hobbies, separated by that straight line; this is the delimiter, as in the thing which we are using to separate

all the fields, or the columns. So, going back: that's the delimiter which separates the data, and this is the region in which your S3 bucket is located. So that's it. We have to replace the IAM role: this is the ARN of the role, so I'm going to copy it, and wherever this placeholder is, you just need to paste it, Ctrl+V. And that's done. Lastly, select everything and click on the Execute button. It might take a while, because the dataset which is stored in Amazon S3 might contain a large number of rows. As you can see, it says executing statement; here it says one out of seven finished, so we have six more to go.
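For reference, the shape of the copy command being described is roughly the following; the bucket path and role ARN are placeholders you would replace with your own:

```sql
-- Load the pipe-delimited users file from S3 into the users table.
COPY users
FROM 's3://my-sample-bucket/tickit/allusers_pipe.txt'
IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
DELIMITER '|'
REGION 'us-west-2';
```

The IAM_ROLE clause is where the ARN we copied earlier goes, DELIMITER names the field separator, and REGION is where the S3 bucket lives.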

So the SQL Workbench has successfully executed all the script which we have written here. Let's go and start performing some simple queries. Let's say I want to extract the metadata of the users table. I have this query, okay: select star from pg_table_def. Since we are extracting metadata, for the table name let's say users, and click on the Execute option. So you have many columns; you can see the first column, userid, of type integer and encoding delta. Then you have username, first name, last name, city, state, email.

So basically that's the metadata, or the structure, of the users table. Similarly, for sales we have sales ID, list ID, seller ID, buyer ID, and many other details. Let's execute another command. Let's say I want to find the total sales on a given date. Okay, the sum: you have the sum function here, which will count the number of sales, from sales and date, joined where the sales date ID matches the date ID, and the date on which I want to calculate it is specified here; then click Execute. So the sum is that number there. Let's try working on another one; that one is not working. I've selected the users table and asked it to display all the entries in the users table, so this has the data: select star from users. Now I want to extract the names of people who are from, let's say, some state. Let's take NH, so state like 'NH'; it should work. Now it is executing the statement.
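To recap, the queries walked through in this part look roughly like this against the sample auction schema; the column names, date, and state values below follow the AWS getting-started sample and are illustrative:

```sql
-- Metadata (structure) of the users table:
SELECT * FROM pg_table_def WHERE tablename = 'users';

-- Total tickets sold on a given date:
SELECT SUM(qtysold)
FROM sales, date
WHERE sales.dateid = date.dateid
  AND caldate = '2008-01-05';

-- People from a given state:
SELECT firstname, lastname FROM users WHERE state = 'NH';
```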

So these are the people who are from the state NH. So basically, once you make a proper connection from your SQL Workbench to your Amazon Redshift, you can perform whatever queries you like. So let's go back to our Amazon Redshift console. Well, so this is the cluster; I'm going to click on it. Here you have Queries; when you click on that, all the queries which you performed till now will be shown.

So this is the query: it says first name from users who were from state NH; this was the query which we performed earlier. So you have all the data, or all the information, regarding the queries which were executed. Well, that's all about Amazon Redshift. So guys, this is how easy it is to create a data warehouse using Amazon Redshift. Go ahead and explore the many other features of Amazon Redshift; I've just showed a part of them here. So go ahead and create a database, perform various queries, and have fun. Now, when you talk about software development, you have to mention DevOps. Let's try to understand why. To do that,

Let me give you this basic definition first. DevOps is nothing but a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality. Yes, very textbookish, and again, for people who do not know what DevOps is, this might seem a little vague. So let me just simplify this definition for you people. Again, you can see an image here: you see a developer, you see an operator, and there is a deployment wall, which neither of these two is ready to take responsibility for; they're pushing the responsibility onto someone else. So yes, this is
what the scenario is when you talk about software development. Again, let me give you a little more of an idea about this particular term. Let's try to understand how developers work and how operators work. When you talk about developers, their responsibility is to create code, to update this code whenever required, to wait for the next releases, and, if there are any changes, to commit those changes, submit those changes, and again move them to the production environment, where the operators take care of them; then to wait for feedback from the operators, if there is any, and then again go through the changes, if there are any; and likewise, to wait for newer software and newer products to work on.

So yes, this is what their responsibility is: create code, create applications, right? So what happens here is that when you create a software product, there are constant releases that you need to focus on. We all know that every now and then you'll be getting a Windows update or a mobile phone update saying that okay, you have a new operating system, a new release, a new version update. So this is how technology is working: everything gets updated every now and then. The reason this is happening is that people want to stay competitive in the market, the software companies at least, and they want to ensure that the product has the latest features.

So this puts a burden on the developers, because they have to constantly update the software. Now, once they update a particular software product, it has to go and work in the production environment, but at times it does not, because the developer environment and the production environment might be a little different. So something that works in the developer environment might not work in the production environment. So again, some changes are thrown back by the operators, and the developers get stuck: they have to wait till they get a response from the operators, and if it takes a long while, their work is stuck.

Now, if you take a look at it from the operators' perspective, their job is to ensure that whatever works in the developer environment works in the production environment as well. They deal with the customers, get their feedback, and if there are any changes which need to be implemented, at times they implement them themselves; if there are any core or important changes required, those have to be forwarded to the developers. So yes, what happens at times is, as I've already mentioned, that what works in the developer environment does not work in the production environment, and the operators might feel that this was the responsibility of the developers, which they did not fulfill, and that they are facing problems because of it. Again, if the customer inputs are forwarded back to the developers' team, the operator team has to depend on the developers to make those changes, right? So as you can see, these two teams are interdependent on each other, and at times each feels it is doing somebody else's work.

The developers' work is pushed upon the administrators, or the developers feel that the administrators' work is pushed onto their side. So there is this constant tussle which the company owners have to take care of. They have to think: okay, if this goes on, how can I generate or produce new releases and new software every now and then? This could be a problem, right? So this is what DevOps solves. As the name suggests, it is Dev plus Ops; that means it combines the operations team and the development team. When I say combines, they bring in an approach where integration, deployment, and delivery happen continuously, and because these things happen continuously, we do not see the tussle between these two teams. So yes, as you move further, DevOps helps you unite these two teams, and they can work happily together. So this is what happens in DevOps: you code, you plan, you release; there's deployment,

there's operations, there's monitoring, there's testing; everything happens in a pipeline. And these are some of the popular DevOps tools that let you take care of all these things. Now, again, this is DevOps in general: you have Git, you have Puppet, you have Chef, you have Ansible and SaltStack, which help you automate this process of integration and deployment of your software. But given that everything is moving to the cloud these days, we are thinking about how we can do all these things from the cloud. Do I need to move in these many tools? If you want to, definitely you can move all these tools, but a platform

Ew s which is a popular
cloud service provider what they have done
is that ensured that all the requirements
of develops can be taken care on the platform itself and you
have various services that are made available to you that help you in this process
Now say, for example, you have EC2 instances. Now you can launch servers at your will, you can launch instances at your will, so if your concern is scaling up and down, AWS takes care of it. You
have various Services, which help you
monitor your process. So monitoring is something
that is taken care of. There's auto-scaling; there are various other services, like CloudFront, which actually lets you create content delivery networks. I mean, you can have temporary caches where you can store your data and stuff like that. So there are various AWS services that actually help you carry out the devops or the CI/CD process with a lot more ease, and that is why devops and AWS form a very good combination, or a combo; hence we are talking about this term today, that is, AWS devops. Now that we have some idea
about what AWS is what devops is let's try to understand how continuous integration
delivery and deployment work with AWS and how they incorporate
the devops approach to do that.

Let's try to understand continuous integration and delivery first. So let's take a look at this diagram to understand this process. These are the four steps that are there: you have to split the entire chunk of code into segments. So guys, think of it as more of a MapReduce kind of an action. I mean, what happens in your continuous integration and delivery is that we are trying to bridge the gap between the developer team and the operations team, right? So we try and automate this process
of integration and delivery. So the fact that continuously you have
various software updates, which I just mentioned right? So what if I have like
50 or maybe a hundred developers who are working in parallel? Now, there are certain resources that need to be used by everyone, right? So the problem it creates is: suppose I'm working
on a particular code.

I work on that piece of code. And if somebody else is working
on that piece of code and we have this Central system where the data
needs to be stored. So I'm working
on this piece of code. I make a particular change
and I store it there now someone else is working
on this piece of code and that someone
makes a change and he or she stores it there, right? So tomorrow if I come back
probably I need a fresh copy of this piece of code.

What if I just keep working on the piece of code that I'm working on and then I submit that code there? There would be an ambiguity, right: whose code should be accepted, whose code's copy should be kept? So we need this central system to be so smart that each time I submit a piece of code, it updates, it runs tests on it and sees whether it's the most relevant piece; and if someone else submits their piece of code, then tests are run on that piece of code. This system should be able to ensure that each of us, the next time we go and pick up the piece of code,

We get the latest piece of code, the most updated one, or the best piece of code. So this process of committing the code, putting in that piece of code, and automating this whole process so that as it moves further it also gets delivered and deployed to production in a similar manner, with the tests that need to be conducted, is called continuous integration and delivery. Now, integration, as I've mentioned here, covers the continuous updates to the source code: the code that I'm building gets built and compiled. And when I talk about delivery and deployment, the pieces of code, once they're ready to move to the production environment, are continuously deployed to the end customer. Now, deployment seems a very easy process, right? I mean, picking up the code and giving it to the end customer.

No, it's not that easy. Deployment actually involves taking care of all the servers and stuff like that, and spawning up these servers is a difficult task. So automating this process becomes very important, and if you do it manually you're going to suffer a lot. So yes, this is where continuous integration and delivery come into the picture. Code is continuously generated; it is compiled, built and compiled again, then tested, and then delivered, making sure that it gets deployed to the end customer the way it was supposed to be. So you can see that there are certain steps: it says split the entire chunk of code into segments,
keep small segments, of course, in a manageable form; basically, integrate these segments multiple times a day, for which, as I mentioned, there should be a central system; and then adopt a continuous integration methodology to coordinate with your team. So this is what happens: I mean, you have a source code repository where the developers work; they continuously submit their pieces of code. Now, a repository, think of it as a central place where the changes are constantly committed.

Then you have a build server where everything gets compiled, reviewed, tested, integrated, and then packaged as well. Finally, certain final tests are run to go through the final integrity checks, and then it goes to the production environment. So this process, the building, the staging and the committing, gets kind of automated to reduce your efforts. So guys, when you talk about AWS in particular, you have something called AWS CodePipeline, which lets you simplify this process. It lets you create a channel or a pipeline in which all these processes can be automated. So let's take a look at those processes as well. First, let's get through the definition part; let's see what it has to say. I would be quickly reading this thing, and then promptly we'd have the explanation part that follows. So as the definition says, CodePipeline is nothing but a continuous delivery service; we talked about continuous delivery already, and you can use the service to model, visualize and automate the steps required to release your software, something that we've already discussed in continuous integration and delivery.
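As a rough mental model of what a pipeline service does (hypothetical code, not the real CodePipeline API): stages run in a fixed order, execution stops at the first failing stage, and every stage's status is recorded so you can see exactly where a release got stuck.

```python
# Rough mental model of a delivery pipeline (hypothetical, not the real
# CodePipeline API): stages run in order, execution stops at the first
# failure, and each stage's status is recorded for later inspection.

def run_pipeline(stages):
    """stages: list of (name, action) pairs; each action returns True/False."""
    history = []
    for name, action in stages:
        ok = action()
        history.append((name, "Succeeded" if ok else "Failed"))
        if not ok:
            break  # nothing past a failing stage ever gets deployed
    return history

history = run_pipeline([
    ("Source", lambda: True),   # fetch the latest committed code
    ("Build",  lambda: True),   # compile and package it
    ("Test",   lambda: False),  # a failing test stops the release here
    ("Deploy", lambda: True),   # never reached in this run
])
print(history)
# [('Source', 'Succeeded'), ('Build', 'Succeeded'), ('Test', 'Failed')]
```

The recorded history is the same idea as the pipeline view the session describes: you can tell at a glance that the release stopped at the Test stage and that Deploy never ran.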

So this is basically a continuous delivery service which lets you automate all these processes. As I mentioned, automating these processes becomes very important. So once you do use the service, these are some of the features it provides you. It lets you monitor your processes in real time, which becomes very important because we are talking about deploying software at a greater pace. So if this can happen in real time, I mean, if there is any change and it is committed right away, you're probably saving a lot of time, right? You ensure a consistent release process; yes, as I've told you, deploying servers is a difficult and time-consuming task, and if this can be automated, a lot of effort is saved. You improve the speed of delivery while improving quality; yes, we've talked about this as well. And you can view pipeline history details: monitoring becomes very important, guys. So what CodePipeline does is it actually lets you take a look at all the processes that are happening. I mean, your application is built, it goes to the source, then it moves to deployment.

All these processes can be tracked in the pipeline. You get constant updates, as in: okay, this happened at this stage. If anything failed, you can detect, okay, this is the stage where it is failing, maybe stage number three, stage number four, and accordingly you can edit the stuff that has happened at that stage only. So viewing the pipeline details actually helps a lot, and this is where CodePipeline comes into the picture. So this is what the architecture of CodePipeline looks like; it's fairly simple, guys. Some of this might seem a little repetitive to you people because the concepts are similar: the concepts which we discussed can be implemented by using CodePipeline. So I've talked about these things, but let's try to understand how the architecture works. We will be using some other terms, and we will discuss some terms in the future slides as well which we've already talked about; but each of these services does this task a little differently or helps you automate these processes, hence the discussion.

So let's see how much we can keep it unique, and let's go ahead with this discussion as well. So, let's see how CodePipeline works. Basically, there are developers; as I've already mentioned, these developers would be working on various pieces of code, so you have continuous changes and fixes that need to be uploaded. So you have various services. One of them is CodeCommit, which gives you a source management system of sorts, which lets you basically take care of repositories and stuff like that. So it lets you directly connect with Git. I would be talking about what Git is, but for people who know what Git is: if you have to manage your Git repositories, you have a service called CodeCommit.

So this is what happens: if there are any changes, those go to the source; developers can commit those changes there, and then it goes into the build stage. This is where all the development happens: your source code is compiled and it is tested. Then it goes to the staging phase, where it is deployed and tested. Now, when I say tested, these are some final tests that have to be implemented before the code gets deployed. Then it has to be approved manually; it has to be checked manually whether everything is in place. And finally, the code is deployed to the public servers where customers can use it. Again, if they have any changes, as I've mentioned, those can be readily taken from them, and it goes back again to the developers, and the cycle continues so that there is continuous deployment of code. This is another look at it; it is very simple, but this is more from the AWS perspective. So if there are any changes that developers commit, those go to the source. Now, your data is stored in a container called S3, that is, Simple Storage Service, in the form of objects.

So if there is anything that has to happen, the data is fetched from the storage container, which is S3, the changes are built, and then again a copy of it is maintained in the form of a zip, as you can see here. There are continuous changes that are happening, and those get stored in the S3 bucket. Now, S3 should preferably be in the region or in the place where your pipeline is; that helps you carry out the process of continuous integration and delivery with ease. In case you are concerned with multiple regions, you need to have a bucket in each region to simplify these processes. So again, here too, the code gets to the source; it is then submitted to the build stage, where the changes happen and a copy is maintained in S3.
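CodePipeline really does hand intermediate artifacts between stages as zip objects stored in an S3 bucket. As a purely local sketch of that idea (the file names here are hypothetical), build output can be packaged into a zip and unpacked again, unchanged, at the next stage:

```python
# Local sketch of how pipeline artifacts are handled: build output is
# packaged as a zip (the same format CodePipeline stores in its S3
# artifact bucket) and unpacked again at the next stage.
# File names are hypothetical.
import io
import zipfile

build_output = {
    "index.html": b"<h1>deployment app</h1>",
    "appspec.yml": b"version: 0.0\nos: linux\n",
}

# "Upload": zip the build artifacts into an in-memory object.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, data in build_output.items():
        zf.writestr(name, data)

# "Download" at the next stage: read the artifact back out.
buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    restored = {name: zf.read(name) for name in zf.namelist()}

print(restored == build_output)  # the next stage sees identical files
```

Keeping the artifact bucket in the same region as the pipeline, as mentioned above, just avoids cross-region copies of exactly this kind of zip.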

And then it goes to staging, again a copy is maintained, and then it gets deployed. So this is how CodePipeline works, and to actually go ahead and implement all the actions of CodePipeline, you have services, that is, your CodeDeploy, CodeBuild and CodeCommit in AWS. These services actually help you carry out some or most of these processes that are there. Let's take a look at those services and understand what they do. So first and foremost, you have your CodeDeploy, CodeBuild and CodeCommit.

So this is not the order in which you deal with these things. Now, these things actually help you in automating your continuous delivery and deployment process; they have their individual commitments. Let's talk about them one by one. First, let's talk about CodeCommit, which is last on the slide. Basically, I talked about moving a piece of code to a central place where you can continuously commit your code and get the freshest or the best copy that is there, right? So CodeCommit, what it does is it helps you manage your repositories in a much better way; I mean, think of it as a central repository. It also lets you connect with Git, which itself is a central storage or a place where you can commit your code: you can push and pull a piece of code from there, work on it, make your own copy of it, submit it back to the main server or your main or central operating place where your code gets distributed to everyone. So that is Git, and what CodeCommit does is it lets you integrate with Git in a much better way, so you do not have to worry about working on two different things.

It helps you with automatic authorization, pulling in the repositories that are there in your Git account, and a number of other things. So yeah, that is what CodeCommit is. Then you have something called CodeBuild; as the name suggests, it helps you automate the process of building your code, where your code gets compiled and tested, certain tests are performed, and again it makes sure that artifacts, or the copies of your code, are maintained in your S3 and stuff like that. So that is what CodeBuild is. And then you have CodeDeploy. As I've already mentioned, deployment is not an easy task: I mean, if we are stuck in a situation where we are supposed to manage the repositories, we're supposed to work on quite a few things, and in that case, if we are forced to kind of take a look at the servers as well, spawning new instances, new servers, that could be a tedious task.

So CodeDeploy helps you automate these processes as well. So this was some basic introduction to these things. Let's just move further and take a look at the demo, so that we can talk about some of these terms, and the terms that we've discussed previously, in a little more detail. Now, in one of my previous sessions I did give you a demo on continuous integration and delivery; I believe there were certain terms that people felt were taken care of in a speedy way. I hope that I've explained most of the terms with more finesse this time and in more detail. As we go through the demo too, I will try and be as slow as possible so that you understand what is happening here. So let's just jump into the demo part, guys. So guys, what I've done is I've gone ahead and switched into my AWS console. For people who are new to AWS, again, you can have a free tier account with AWS.

It's very easy: you have to go and sign up, put in your credit card or debit card details, a free verification would happen, and you would be given access to these services. Most of these services are made available to you for free for one complete year, and there are certain limitations on these services, so you have to follow those limitations; if you cross those limitations, maybe you'd be charged, but that happens rarely. I mean, if you want to get started, definitely this one-year free subscription is more than enough to get hands-on with most of the services. So I would suggest that you create this free tier account. If you've taken a look at my previous videos, you know how to create a free tier account; if not, it's fairly simple.

Just go to your browser and type AWS free tier
and probably you would be guided as in what details
have to be entered. It's not a complex process. It is fairly simple
and it happens very easily. So we just have to go
ahead and do that. Once you do that again, you'd be having access
to this console guys. So once you have an access
to this console, you have all the services
that you can use. So in today's session we would be working on a demo similar to the one we worked on in one of the previous sessions. Here, we would be creating an application, a PaaS application, that is, a platform-as-a-service application, and we would be deploying that application using our CodePipeline.

We would be talking about other terms as well, like CodeCommit, CodeDeploy, CodeBuild. So do not worry, we would
be discussing those as well. So this is what the demo is
for today's session. So guys, let's start by creating a PaaS application. To do that, we would be using Elastic Beanstalk, which gives you a ready-to-use template using which you can create a simple application. This being a demo, guys, we would be creating a very simple and basic application. So just come here and type Elastic Beanstalk. So when I come to this page, guys, if you've created an application, it would show you those applications; but if you're using it for the first time, this is the console that you'd be getting, and that is why I have created this demo account.

So that we get to see how you can start from scratch. So if you click on get started, creating an application here is very easy, like extremely easy: you have to enter only certain details. It takes a while to create an application; understandable, and I would tell you why it takes that time, but once it happens, it happens very quickly. So all you have to do is give your application name; let's call it, say, deployment app. I'm very bad at naming conventions.

Let's assume that this is good. You can choose a platform, guys; you can choose whatever platform you want. Say PHP is what I'm choosing right now. As I told you, it's a PaaS service; PaaS, that is, platform as a service, means that you have a ready-to-use platform, guys. That is why you can just choose your platform, and your Elastic Beanstalk would ensure that it takes care of all the background activities.

You do not have to set up your infrastructure; it takes care of it. So once I select the platform, I can use the sample application or use my own code if I have one. In this case, I would be using the sample code that AWS has to offer, and I say create. There you go, guys; this thing is creating my application. So whatever is happening here, it shows that these are
the processes now, it is creating a bucket
to store all the data and stuff like that. So it would take care
of all these things guys. It might take a couple
of minutes; meanwhile, let's just go ahead and do something else. Let me just open up the AWS console again somewhere else. I hope it does not ask me to sign in again.

I've already signed in. So meanwhile, as that application gets created, let me just go ahead and create a pipeline, guys. CodePipeline, again, is fairly simple, guys; what happens here is very easy: I just go ahead and put in certain details here as well, and my pipeline would be created. So, do you want to use
the new environment or stick to the old one? You can click on Old, right, and you can go back and create it the way it was done, or you can use the previous environment. I'm going to stick with the old one; I was very comfortable with that, so let's just stick with it. If you want, you can use the new interface; there's not a lot of difference, just certain little or minor differences. So you can just come here and add the name of the pipeline that you want to create, say demo pipeline. I say next. Source provider, guys: I would be using GitHub here, because I want to basically pick up a repository from GitHub that helps me in deployment.

So I need to connect to GitHub for that. It would ask me to authorize; if you have an account, you can always do that so that it can basically bring in all the repositories that you have. So just say authorize; if not, you'll have to sign in once. So my account has been added here, guys. Repository: I need to pick a repository, and this is the repository that I would be picking.

Do not worry, I would be sharing this piece of code. Or, what you can do is just go to GitHub and type aws-codepipeline-s3-codedeploy-linux. Now, it is a repository given to you by AWS; if you type it just the way it is named here, you should get that repository in GitHub. You just have to go ahead and fork it into your GitHub account, and you would be able to import that repository directly. You can see that the repository has been forked here into my GitHub account. You just type this name, search it, and there would be an option there, Fork. I forked it already, so it does not activate this option for me; in your case it would be activated. You have to just click on it, and the repository would be forked into your account. So I am getting or importing a fork from my GitHub.

I've authorized my account, and then I can just go ahead and do the stuff: branch, master branch, yes, and just do the next step. Build provider: no build here. I don't have anything to build, so I don't need to go ahead and provide a build provider. You can use CodeBuild, right guys, if you want to build or basically deploy your code to EC2 instances.

You can use CodeBuild if you want. In this case, I have an application in which I have an EC2 instance and stuff like that, so I don't need to go ahead and do any building stuff; hence, no build for me. So I say next. Deployment provider: in this case, my deployment provider would be my Elastic Beanstalk, so we have that option. Yes, select Elastic Beanstalk, not EBS; EBS stands for Elastic Block Store, and that is a different thing, guys. Elastic Beanstalk, make sure you do that. Application name: deployment app, that's the name, right? Yep. And the environment: this is the environment; it creates the environment
on its own. I believe that it
has created the environment. It says it is starting. I hope the environment
has been created. So guys, let's just see whether our application
is up and running so that probably I
can pass in the details.

Yes, the application has been created, guys. So let's just go back, select this, and say next. Now it is already asking to create an IAM role, so let's say sample. Okay guys, so what happens normally is an IAM role gets created each time you create a pipeline like this. So in this case it is asking me to create one, so I say create a new IAM role for AWS CodePipeline, and it shows successful. So the role has been created; next step now. It gives me the details, guys; basically, it would tell me what are the things that I've done. So everything is here. I don't think I need to cross-check it; you might just cross-check the stuff that has happened, and say create pipeline. So guys, the pipeline has been created here, as you can see. These are the stages that have happened. If you want, you can just go ahead and say release change. Now these things are happening, guys, and let's hope the deployment also happens successfully.

We've just created an IAM role; let's see whether it falls in place. Everything is in place: as far as the source part is concerned, it has succeeded, and now the deployment is in progress. So it might take a while; meanwhile, let's just go back and take a look at this application. So if I open this application, guys, it would give me an overview of what has happened with this application. As you can see, these were the steps that were implemented.

Now the application
is available for deployment. It successfully launched
the deployment environment. It started with everything that it was supposed
to do like create or launch an ec2 instance
and stuff like that. So everything is mentioned here, what happened at what time. So this is a PaaS service, guys, and it works in the background. I mean, if you actually go ahead and launch an instance on your own, configure IAM users, security groups, it takes a longer while; but what this service does is it automates that process. It understands that you need an EC2 instance; it launches that instance, it assigns security groups, VPCs and stuff like that. All you have to do is run your application on top of it, as simple as that. So it has taken care of everything and run a PHP application for me. So yes, this is what has happened here. If I just go back here, meanwhile, let's see whether our code has successfully run. You can see what has happened here; I have released the change as well, and you can view the pipeline history.

If you want, you can click on this icon and all the details would be given to you, what happened at what stage. So these are the things that have happened till now. Now guys, let's just go back and take a look at something. So I'm going to come here and say services, EC2, because my app launched an EC2 instance. So there should be an instance created by Elastic Beanstalk; see, one instance is running. It has a key pair attached to it as well, and other details, guys: I have a public IP associated with it. If I copy it, there you go, copy this IP, and I say run this IP: "You have successfully created a pipeline that retrieved this source application from an Amazon S3 bucket and deployed it to three instances using CodeDeploy." But no, it did not deploy to three instances using CodeDeploy.

It deployed it to only one instance. You see this message saying that it deployed to three instances because the code, or the repository, that I used is supposed to deploy to different instances if there are multiple instances, and hence this message would have made more sense then. But given that we've deployed it to only one EC2 instance, it shouldn't actually display that message. So, for the message that it's supposed to give, you can actually come back here and make a change to the piece of code that you worked on. If you go to the README.md file, I think this is where the piece of code is. There you go; no, not here. Where is that file that needs to be edited? Let me just take a look at some other files as well. Yeah, this is the file, sorry. So if you go to the index file, here is the message, guys; so you can probably make a change to this message: instead of saying three, you can say one here, edit this piece of code, and then you submit the code again.

So when you do launch or type in this IP address, that change would be reflected. So guys, what we've done is we've actually gone ahead and created a pipeline successfully, and in that process we've actually gone ahead and moved or deployed our application from here. So guys, in case I do go ahead and commit changes to the code that I just talked about, those would get reflected right away in my history when I talk about this pipeline. So it does give you continuous integration and deployment. I hope that this session made sense to you people, and we've touched upon most of the stuff that I wanted to talk about. And as far as the session goes, guys, I would be resting it here. So let's start
with the first question. Now, our first question says: I have some private servers on my premises; also, I have distributed some of my workload on the public cloud. What is this architecture called? So basically, our workload has been divided between the public cloud and the private cloud, and now they're asking me what this architecture is called. It's a pretty basic question, guys, but if you look at the options, they are quite confusing. The first option is a virtual private network; then we have private cloud, which is obviously not it.

Then we have virtual private cloud, which could be the option, and then we have hybrid cloud. Alright guys, so what do you think? What do you think is the right answer for this? Come on guys, let's be more interactive in this session, because if it's a two-way thing, then it's going to be interesting for you and for me as well. So let's make it as interactive as possible and let's get the most out of this session today. Alright, so one attendee says it's either virtual private cloud or hybrid cloud. As usual, it's actually only one out of all the four, so give one answer. Okay, I can see some of you are saying the right answer, some are confused; it's okay, I shall clear your doubts. Alright guys, so the answer is hybrid cloud. Now, why hybrid cloud? Okay, so let's actually discuss the first three options, which are actually not the right answer. So it is not a virtual private network, because a virtual private network

is something that you use to connect your private cloud and your public cloud, right? So to connect between your private cloud and the public cloud, you actually have to make a connection, and that connection is done using a virtual private network. Alright. Then we have private cloud; a private cloud is something wherein you have your own servers on your own premises, right? But in our case we have the public cloud involved, so it is obviously not private cloud. Virtual private cloud is not the answer as well, because a virtual private cloud is basically a logical isolation kind of thing, wherein you isolate your instances from the rest of the instances on your AWS infrastructure.

And this logically isolated cloud is called a virtual private cloud. And then you have hybrid cloud, which I think fits aptly by its name as well, wherein it's a mixture of your public cloud and your private cloud infrastructure, right? So let's see the answer. The answer is hybrid cloud, and the explanation is like this: because we are using both the public cloud and your on-premises servers, which is a private cloud, it will be called a hybrid architecture, right? And it says here that it would be better if your private and public cloud were all on the same network, right? So basically, when you connect your public cloud and private cloud together using a virtual private network, you basically are accessing one network, and you feel that all your resources are on the same network.

That is, the resources on the public cloud and the private cloud feel like they are actually there in one network, right? So it is virtual, and virtually you feel that you are on the same network, but they are actually two different resources, or two different locations, from where you are accessing your resources. Alright guys, so, any questions regarding the first question that we have discussed, anything that you're not clear about? It was a very basic question, but then we are getting a lot of concepts here: we have the virtual private network concept, then we have the virtual private cloud concept, right? So it can be confusing, and this is how they ask you in interviews as well.

Right? So you have to be very clear in your answer; you have to be very clear in your thoughts as to what shall be the right answer. Alright, so I can see that people are giving me a go, they're all clear. Okay guys, so let's move on to the next question then. So our next question starts with our Section 1 questions, and from here we'll be talking all about AWS. So let's start with the question first. We have a video transcoding application, and the videos are processed according to a queue. If the processing of a video is interrupted in one instance, it is resumed in another instance. Okay, good enough. Then, currently there is a huge backlog of videos which needs to be processed. For this you need to add more instances, but you need these instances only until the backlog is cleared, right? So once your backlog
is reduced, you don't need those many servers. So which pricing option should be the most cost-efficient for this? Okay guys, so first of all, when you have a question like this, a lot of things are added into it to make it confusing.

First of all, the first line reads that it's a video transcoding application; that is not relevant to your question, right? It is not relevant to what is being asked, so you discard that. Then it says the videos are processed according to a queue; again, that's there to confuse you. Now, the first thing that you should look for in a question, when you are trying to figure out an answer, is the important part. What is important in the question? You should be able to infer that. So according to me, the thing that is important is that there is a huge backlog of videos. So there is a lot of pending work, and this pending work has to be reduced, right? And once

it is reduced, we will not be needing those many servers. So basically, we are increasing our number of servers to actually reduce the backlog that we have, and once we have reduced it, we have an application wherein we don't need that many servers anymore, so we should get rid of them, right? So now it is asking me which pricing option should be the most efficient for this scenario.

Now, you have three kinds of pricing options: you have on-demand pricing, then you have spot pricing, and then you have reserved pricing, right? So, spot pricing is basically used when you want servers at the minimum cost. The reason why spot pricing was introduced is this: AWS has data centers, right? It has availability zones where it has a lot of servers. Now, it's not all the time that the servers are actually being used; some of the time they are idle, right? So in times like this, when the servers are idle, what AWS does is it gives you a discount, saying that since the servers are not being used,

I shall give you a discount if you want to use my servers. In this case you use spot pricing.
So if you go for spot pricing, you see these reduced rates from AWS
whenever their servers are idle, and you set a bid rate, right?
So say, for example, servers are being offered at some particular price,
and you say: okay, I want these many servers, but I can only afford $10.
So as long as the servers can be allotted to me for $10, I shall use them.
Right, so you set your price at $10 and then you use the servers.
But the moment the demand increases in that particular server location, the prices go up again.
All right, and if the price crosses $10, your server shall be shut down;
you will not be able to access that server anymore. Right?
So this is what spot pricing is: you basically bid a maximum price, and whenever the price

goes up beyond your bid, your server is taken from you.
Then the second type of pricing is called reserved pricing.
is called reserved pricing. When you reserve your servers
for a particular amount of time say a one-year term
or a three-year term, right? So it the application
for this could be when say I have a company right? And my company has a website. So my website is hosted on AWS. Now, my website
is going to be there till my company is there right? So it makes sense for me to actually
reserved the instances for like maximum Dome.

Possible because I have
no plan to sell my company and hence take down
my website right now. The reason people offer
reserved instances is because as compared
to the on demand pricing the reserve pricing is
actually pretty cheap, right? So if you reserve your instances
for a longer term, you get discounts
from AWS, right and then we have
on demand pricing where and we can get
as many servers as you want at the time what we want as
per your requirement at whatever time you Choir and the pricing for them
are standard right? I'll not say they are high
but they are standard but they are more
than reserved pricing and your spot pricing.
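To make the spot bidding mechanic described above concrete, here is a small Python sketch. This is not an AWS API call, and the hourly prices are made up purely for illustration; it just simulates the rule that you run, paying the market rate, for as long as the spot price stays at or below your bid, and the instance is taken away the moment the price crosses it:

```python
# Hypothetical hourly spot prices for one server location (illustrative only).
spot_prices = [4.0, 6.5, 9.0, 11.0, 8.0]

MAX_BID = 10.0  # the most we are willing to pay per hour, like the $10 in the example


def spot_hours(prices, max_bid):
    """Return the hourly rates we actually get billed: we keep the server
    while the market price is at or below our bid; the moment the price
    crosses the bid, the instance is shut down and we lose access."""
    billed = []
    for price in prices:
        if price > max_bid:
            break  # price crossed our bid -> instance is taken away
        billed.append(price)
    return billed


hours = spot_hours(spot_prices, MAX_BID)
print(len(hours), sum(hours))  # runs 3 hours, paying 4.0 + 6.5 + 9.0 = 19.5
```

Note that even though the price drops back to 8.0 in the last hour, the simple sketch stops at the first interruption, which matches the risk described here: work in progress on a spot instance can be cut off at any time.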

Now, our question says that we have to reduce the backlog,
and once the backlog has been reduced, we have to get rid of the servers.
So obviously we will not be using reserved instances,
because our backlog will be ending and we cannot commit to a full term, right?
We cannot be using spot pricing, because we want that backlog to be reduced as soon as possible.
So what we'll do is use on-demand instances, or on-demand pricing,
and using that we will reduce the workload, that is, reduce the backlog of videos.
And once it has been reduced, we will reduce the number of servers. Right?
So the answer for this should be on-demand instances, and if you read the explanation:
you should be using an on-demand instance because the workload has to be processed now,
meaning it is urgent. Secondly, you don't need them.

Once the backlog is cleared, reserved instances are therefore out of the picture.
And since the work is urgent, you cannot stop the work on the instances
just because the spot price went up, right?
So spot pricing can also not be used, and hence we will be using on-demand instances.
All right guys, so any doubts about this question,
anything that you're not clear with about why we are using on-demand pricing?
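As a recap of why on-demand wins here, the trade-off can be sketched with some made-up hourly rates (real AWS prices vary by region and instance type, so these numbers are assumptions for illustration only). The point is that reserved pricing is cheap per hour but commits you to a full term, spot is cheapest but interruptible, and on-demand is the cheapest option that is both immediate and temporary:

```python
# Illustrative hourly rates only -- not real AWS prices.
ON_DEMAND_RATE = 1.00   # pay per hour, start and stop any time
RESERVED_RATE = 0.60    # cheaper per hour, but billed for the whole term
SPOT_RATE = 0.30        # cheapest, but the instance can be interrupted

HOURS_NEEDED = 200           # assumed time to clear the video backlog
RESERVED_TERM_HOURS = 8760   # a one-year reserved term

on_demand_cost = ON_DEMAND_RATE * HOURS_NEEDED
# A reserved instance is committed for the full term whether we use it or not.
reserved_cost = RESERVED_RATE * RESERVED_TERM_HOURS

print(on_demand_cost)  # 200.0
print(reserved_cost)   # 5256.0 -- far more than on-demand for a short backlog

# Spot would only cost SPOT_RATE * HOURS_NEEDED if it were never interrupted,
# but since the backlog is urgent we cannot risk losing the servers mid-job,
# so on-demand is the most cost-efficient *viable* choice in this scenario.
```

Under these assumed numbers, paying the standard on-demand rate for just the hours you need is far cheaper than committing to a year of reserved capacity, which mirrors the reasoning in the answer above.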
