The motive behind the DevOps automation

Create three Jobs in Jenkins

Job#1

If a developer pushes to the dev branch, Jenkins will fetch from dev and deploy it on the dev-docker environment.

Job#2

If a developer pushes to the master branch, Jenkins will fetch from master and deploy it on the master-docker environment. The dev-docker and master-docker environments run in separate Docker containers.

Job#3

The QA team will manually test the website running in the dev-docker environment. If it is running fine, Jenkins will merge the dev branch into the master branch and trigger Job#2.
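As a rough sketch, the "Execute shell" build steps for these jobs might look like the following. The container name, port, web-server image, and the `--deploy` flag here are illustrative assumptions, not part of the original setup:

```shell
#!/usr/bin/env bash
# Hypothetical build step for Job#1 (dev branch). Job#2 would be the same
# with BRANCH="master", CONTAINER="master-docker", and a different port.
BRANCH="dev"
CONTAINER="dev-docker"
PORT="8081"

deploy() {
  # Fetch the latest code for the branch that triggered the job.
  git fetch origin "$BRANCH" && git checkout "origin/$BRANCH"
  # Replace the old container with one serving the freshly fetched code.
  docker rm -f "$CONTAINER" 2>/dev/null || true
  docker run -d --name "$CONTAINER" -p "$PORT":80 \
    -v "$PWD":/usr/local/apache2/htdocs/ httpd
}

# Job#3: once QA approves the dev site, merge dev into master;
# the resulting push to master then triggers Job#2.
merge_dev_into_master() {
  git checkout master
  git merge origin/dev
  git push origin master
}

# Guarded so nothing runs unless explicitly asked to on a Jenkins node.
if [ "${1:-}" = "--deploy" ] && command -v docker >/dev/null; then
  deploy
fi
```

The key point is that both deploy jobs are identical except for the branch, container name, and port, which is why they are kept as separate Jenkins jobs wired to different webhook branch filters.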

Project Introduction:

The workflow of the project:


You will have understood the task from the image and description above. Now, some of the major changes made in this version of the project compared to the previous version…


About Dynamic Jenkins Slaves

Static systems use computing resources inefficiently: when you create a static Jenkins cluster, a lot of capacity sits idle. Hence we prefer dynamic slaves, which are launched and run only when they are needed. This saves a lot of computing resources, and therefore money, for an organization.

But why do we actually need different slaves?

Because we can’t install and run everything on one OS. For example, we can’t run Maven, Jenkins, Scala, Node apps, a testing system, etc. on a single OS. …
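As a sketch of the dynamic-slave lifecycle described above: Jenkins (for instance via its Docker plugin) starts an agent container on demand when a matching job is queued, and removes it when the build finishes. The image name below is the official Jenkins agent image; the container name is an assumption:

```shell
#!/usr/bin/env bash
# Rough sketch of what a Docker-based dynamic Jenkins slave amounts to.
AGENT_IMAGE="jenkins/inbound-agent"   # official Jenkins agent image
AGENT_NAME="dynamic-slave-$$"

launch_agent() {
  # Started only when a build with a matching label is queued...
  docker run -d --name "$AGENT_NAME" "$AGENT_IMAGE"
}

retire_agent() {
  # ...and torn down as soon as the build completes, freeing resources.
  docker rm -f "$AGENT_NAME"
}
```

Each tool chain (Maven, Node, testing, etc.) gets its own agent image, so no single OS has to host everything.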


Photo by Markus Winkler on Unsplash

What is Artificial Intelligence?

Back in the 1950s, the fathers of the field Minsky and McCarthy described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

In simple words, AI-powered systems and software exhibit some behaviors of human intelligence, such as learning, reasoning, problem-solving, creativity, and perception.

What is Machine Learning?

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.


AWS — Amazon Web Services

Goals or steps:

  • Create a key pair.
  • Create a security group.
  • Launch an instance using the key pair and security group created above.
  • Create an EBS volume of 1 GB.
  • Finally, attach the EBS volume to the instance created in the previous steps.

All of the above steps must be performed using the AWS CLI.

Create a script to automate most of the steps.
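The steps above can be sketched as a single script, assuming the AWS CLI is installed and configured (`aws configure`). The key-pair name, security-group name, AMI ID, availability zone, and `--execute` flag are all placeholders for illustration, not values from the original task:

```shell
#!/usr/bin/env bash
set -euo pipefail

KEY_NAME="task-key"
SG_NAME="task-sg"
AMI_ID="ami-0xxxxxxx"    # placeholder: any Amazon Linux AMI in your region
AZ="ap-south-1a"         # placeholder: must match the instance's zone

provision() {
  # 1. Create a key pair and save the private key locally.
  aws ec2 create-key-pair --key-name "$KEY_NAME" \
    --query KeyMaterial --output text > "$KEY_NAME.pem"

  # 2. Create a security group.
  SG_ID=$(aws ec2 create-security-group --group-name "$SG_NAME" \
    --description "task security group" --query GroupId --output text)

  # 3. Launch an instance using the key pair and security group.
  INSTANCE_ID=$(aws ec2 run-instances --image-id "$AMI_ID" \
    --instance-type t2.micro --key-name "$KEY_NAME" \
    --security-group-ids "$SG_ID" \
    --query 'Instances[0].InstanceId' --output text)

  # 4. Create a 1 GiB EBS volume in the same availability zone.
  VOLUME_ID=$(aws ec2 create-volume --size 1 --availability-zone "$AZ" \
    --query VolumeId --output text)

  # 5. Attach the volume to the instance.
  aws ec2 attach-volume --volume-id "$VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/sdf
}

# Runs only when explicitly asked to and the AWS CLI is present.
if [ "${1:-}" = "--execute" ] && command -v aws >/dev/null; then
  provision
fi
```

Note that the EBS volume must be created in the same availability zone as the instance, or the attach step will fail.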

Let’s see some prerequisites

For this task, you need to know about the AWS public cloud, the AWS CLI program, the AWS EC2 service, etc. So, let’s go through these first:

What is the AWS CLI?

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. For more info: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html



What is Dropbox?

Dropbox is a file hosting service operated by the American company Dropbox, Inc., headquartered in San Francisco, California, that offers cloud storage, file synchronization, personal cloud, and client software.

The Dropbox software enables users to drop any file into a designated folder. The file is then automatically uploaded to Dropbox’s cloud-based service and made available to any other of the user’s computers and devices that also have the Dropbox software installed, keeping the file up-to-date on all systems. …



What is Big Data?

Big Data refers to the large volume of data — both structured and unstructured — that grows at an ever-increasing rate.

Big Data is not only about the volume of data: how that data is processed by a company or organisation, and the time that processing takes, matter even more.

Big data often includes data with sizes that exceed the capacity of traditional software to process within an acceptable time and value.

4 V’s of Big Data

1. Volume of Big Data

The volume of data refers to the size of the data sets that need to be analyzed and processed, which are now frequently larger than terabytes and petabytes.

About

Adarsh Saxena

Hey everyone, I am a DevOps practitioner; cloud computing, Big Data, and machine learning are my favorite areas. Connect with me on LinkedIn to know more about me.
