Tuesday, January 24, 2023

Highly Available and Scalable With Cloud

 

To make cloud services highly available and scalable, we can follow the steps below.

For High Availability, applications should be deployed in multiple regions.

For Scalability, we can set up an autoscaling mechanism or add extra instances.

The communication steps are as follows:

1. The user sends a request to the Web Application Firewall (WAF).
2. If the request contains a security vulnerability, the WAF blocks it.
3. Otherwise, the request reaches the Auto Scaling group and the Application Load Balancer.
4. The Application Load Balancer forwards the request to a web server based on availability.
5. The web servers communicate with the application servers via an Application Load Balancer.
6. Auto Scaling increases or decreases the number of instances based on request traffic.
7. The database runs in a master/slave setup.
8. If the master (or a slave) goes down, one of the remaining available slaves acts as the master until the master is back.
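The master/slave failover in steps 7 and 8 can be sketched as a small shell routine. This is a minimal sketch: the host names and the health check below are hypothetical placeholders, not a real database cluster API.

```shell
# Placeholder health check -- replace with a real ping or port probe.
is_up() {
  ping -c1 -W1 "$1" >/dev/null 2>&1
}

# Failover sketch: keep the current master if it is healthy; otherwise
# promote the first healthy slave. Prints the node acting as master.
promote_master() {
  master="$1"; shift
  if is_up "$master"; then
    echo "$master"        # master is healthy, nothing to do
    return
  fi
  for slave in "$@"; do
    if is_up "$slave"; then
      echo "$slave"       # first healthy slave becomes the new master
      return
    fi
  done
  echo "none"             # no node available at all
}
```

In a real deployment this promotion is handled by the database's own replication tooling; the sketch only illustrates the decision order.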


Please reach out to me if you need any clarification or have suggestions.

Thanks.







Thursday, January 12, 2023

Application Scalability and Availability

1. Application with High Scalability and Availability 

Step 1: Build the front end with React Native / VueJS
Step 2: Create a Docker image
Step 3: Deploy in containers
Step 4: Build the back end with Spring Boot / microservices
Step 5: Create a Docker image
Step 6: Deploy in containers
Step 7: Test the application
Step 8: Test the database connectivity
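Steps 2 through 6 can be sketched with a few shell commands. The image names, version, ports, and directory layout below are hypothetical, and the docker commands assume a local Docker daemon, so they are shown as comments.

```shell
# Hypothetical image coordinates -- adjust to your project.
FRONTEND_IMAGE="myapp-frontend"
BACKEND_IMAGE="myapp-backend"
VERSION="1.0.0"

# Compose the full repository:tag string used for build and deploy.
image_tag() {
  echo "$1:$VERSION"
}

# Build and deploy (require a Docker daemon, so shown as comments):
#   docker build -t "$(image_tag "$FRONTEND_IMAGE")" ./frontend
#   docker build -t "$(image_tag "$BACKEND_IMAGE")" ./backend
#   docker run -d -p 80:80     "$(image_tag "$FRONTEND_IMAGE")"
#   docker run -d -p 8080:8080 "$(image_tag "$BACKEND_IMAGE")"
```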


Sunday, May 31, 2020

Hadoop Installation With Cluster Step by Step Guide



Hadoop Cluster Setup:

Hadoop is a highly scalable, fault-tolerant distributed system for data storage.
Hadoop has two important parts:-

1. Hadoop Distributed File System (HDFS):- A distributed file system that provides high-throughput access to application data.

2. MapReduce:- A software framework for distributed processing of large data sets on compute clusters.

In this tutorial, I will describe how to set up and run a Hadoop cluster. We will build the cluster using three Ubuntu machines.

The following are the roles in which nodes may act in our cluster:-

1. NameNode:- Manages the namespace, file system metadata, and access control. There is exactly one NameNode in each cluster.

2. SecondaryNameNode:- Downloads periodic checkpoints from the NameNode for fault-tolerance. There is exactly one SecondaryNameNode in each cluster.

3. JobTracker:- Hands out tasks to the slave nodes. There is exactly one JobTracker in each cluster.

4. DataNode:- Holds file system data. Each DataNode manages its own locally-attached storage (i.e., the node's hard disk) and stores a copy of some or all blocks in the file system. There are one or more DataNodes in each cluster.

5. TaskTracker:- Slaves that carry out map and reduce tasks. There are one or more TaskTrackers in each cluster.

In our case, one machine in the cluster is designated as the NameNode, SecondaryNameNode, and JobTracker. This is the master. The rest of the machines in the cluster act as both DataNode and TaskTracker. These are the slaves.

The diagram below shows how the Hadoop cluster will look after installation:-

Fig: Hadoop cluster after installation.

Installing, configuring, and running the Hadoop cluster is done in three steps:
1. Install and configure the Hadoop namenode.
2. Install and configure the Hadoop datanodes.
3. Start and stop the Hadoop cluster.

INSTALLING AND CONFIGURING HADOOP NAMENODE

1. Download hadoop-0.20.2.tar.gz from http://www.apache.org/dyn/closer.cgi/hadoop/core/ and extract it to some path on your computer. From now on, I will refer to the Hadoop installation root as $HADOOP_INSTALL_DIR.

2. Edit the file /etc/hosts on the namenode machine and add the following lines.

            192.168.41.53    hadoop-namenode
            192.168.41.87    hadoop-datanode1
            192.168.41.67    hadoop-datanode2

Note: Run the command “ping hadoop-namenode”. This checks that the namenode hostname resolves to its actual LAN IP, not the localhost IP.
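The same check can be done without ping. The helper below is a hypothetical sketch that looks a hostname up in /etc/hosts-style input and prints the IP it maps to, so you can confirm it is the LAN address and not 127.0.0.1.

```shell
# Print the IP that a hostname maps to in /etc/hosts-style input.
# $1 = hostname, stdin = hosts file content.
resolve_host() {
  awk -v h="$1" '$2 == h { print $1 }'
}

# Usage against the entries added above:
resolve_host hadoop-namenode < /etc/hosts
# should print 192.168.41.53, not 127.0.0.1
```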

3. We need to configure passwordless login from the namenode to all datanode machines.
            3.1. Execute the following commands on the namenode machine.
                        $ ssh-keygen -t rsa
                        $ scp .ssh/id_rsa.pub ilab@192.168.41.87:~ilab/.ssh/authorized_keys
                        $ scp .ssh/id_rsa.pub ilab@192.168.41.67:~ilab/.ssh/authorized_keys

4. Open the file $HADOOP_INSTALL_DIR/conf/hadoop-env.sh and set $JAVA_HOME.
            export JAVA_HOME=/path/to/java
            e.g.: export JAVA_HOME=/usr/lib/jvm/java-6-sun
Note: If you are using OpenJDK, then give the path of that OpenJDK.

5. Go to $HADOOP_INSTALL_DIR and create a new directory hadoop-datastore. This directory is created to store metadata information.

6. Open the file $HADOOP_INSTALL_DIR/conf/core-site.xml and add the following properties. This file is edited to configure the namenode details such as the port number and the metadata directories. Add the properties in the format below:
            <!-- Defines the namenode and port number -->
            <property>
                        <name>fs.default.name</name>
                        <value>hdfs://hadoop-namenode:9000</value>
                        <description>This is the namenode uri</description>
            </property>
            <property>
                        <name>hadoop.tmp.dir</name>
                        <value>$HADOOP_INSTALL_DIR/hadoop-0.20.2/hadoop-datastore</value>
                        <description>A base for other temporary directories.</description>
            </property>

7. Open the file $HADOOP_INSTALL_DIR/conf/hdfs-site.xml and add the following properties. This file is edited to configure the replication factor of the hadoop setup. Add the properties in the format below:
           
            <property>
                        <name>dfs.replication</name>
                        <value>2</value>
                        <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.
                        </description>
            </property>

8. Open the file $HADOOP_INSTALL_DIR/conf/mapred-site.xml and add the following properties. This file is edited to configure the host and port of the MapReduce job tracker in the namenode of the hadoop setup. Add the properties in the format below:
            <property>
                        <name>mapred.job.tracker</name>
                        <value>hadoop-namenode:9001</value>
                        <description>The host and port that the MapReduce job tracker runs
                        at.  If "local", then jobs are run in-process as a single map and reduce 
                        task.
                        </description>
            </property>

9. Open the file $HADOOP_INSTALL_DIR/conf/masters and add the machine names where the secondary namenode will run. This file is edited to configure the Hadoop secondary namenode:
            hadoop-namenode
           
Note: In my case, both primary namenode and Secondary namenode are running on same machine. So, I have added hadoop-namenode in $HADOOP_INSTALL_DIR/conf/masters file.

10. Open the file $HADOOP_INSTALL_DIR/conf/slaves and add all the datanode machine names:-
            hadoop-namenode     /* if you want the namenode to also store data (i.e., the namenode also behaves like a datanode), it can be mentioned in the slaves file */
            hadoop-datanode1
            hadoop-datanode2

INSTALLING AND CONFIGURING HADOOP DATANODE

1. Download hadoop-0.20.2.tar.gz from http://www.apache.org/dyn/closer.cgi/hadoop/core/ and extract it to some path on your computer. As before, I will refer to the Hadoop installation root as $HADOOP_INSTALL_DIR.

2. Edit the file /etc/hosts on the datanode machine and add the following lines.

            192.168.41.53    hadoop-namenode
            192.168.41.87    hadoop-datanode1
            192.168.41.67    hadoop-datanode2

Note: Run the command “ping hadoop-namenode”. This checks that the namenode hostname resolves to its actual LAN IP, not the localhost IP.

3. We need to configure passwordless login from all datanode machines to the namenode machine.
            3.1. Execute the following commands on each datanode machine.
                        $ ssh-keygen -t rsa
                        $ scp .ssh/id_rsa.pub ilab@192.168.41.53:~ilab/.ssh/authorized_keys2

4. Open the file $HADOOP_INSTALL_DIR/conf/hadoop-env.sh and set $JAVA_HOME.
            export JAVA_HOME=/path/to/java
            e.g.: export JAVA_HOME=/usr/lib/jvm/java-6-sun

Note: If you are using OpenJDK, then give the path of that OpenJDK.

5. Go to $HADOOP_INSTALL_DIR and create a new directory hadoop-datastore. This directory is created to store metadata information.

6. Open the file $HADOOP_INSTALL_DIR/conf/core-site.xml and add the following properties. This file is edited to configure the datanode to determine the host, port, etc. for the filesystem. Add the properties in the format below:
            <!-- The uri's authority is used to determine the host, port, etc. for a filesystem. -->
            <property>
                        <name>fs.default.name</name>
                        <value>hdfs://hadoop-namenode:9000</value>
                        <description>This is the namenode uri</description>
            </property>
            <property>
                        <name>hadoop.tmp.dir</name>
                        <value>$HADOOP_INSTALL_DIR/hadoop-0.20.2/hadoop-datastore</value>
                        <description>A base for other temporary directories.</description>
            </property>

7. Open the file $HADOOP_INSTALL_DIR/conf/hdfs-site.xml and add the following properties. This file is edited to configure the replication factor of the hadoop setup. Add the properties in the format below:
            <property>
                                    <name>dfs.replication</name>
                                    <value>2</value>
                                    <description>Default block replication.
                                    The actual number of replications can be specified when the file 
                                    is created. The default is used if replication is not specified at
                                    create time.
                                    </description>
            </property>

8. Open the file $HADOOP_INSTALL_DIR/conf/mapred-site.xml and add the following properties. This file is edited to identify the host and port at which the MapReduce job tracker runs in the namenode of the hadoop setup. Add the properties in the format below:
            <property>
                        <name>mapred.job.tracker</name>
                        <value>hadoop-namenode:9001</value>
                        <description>The host and port that the MapReduce job tracker runs
                         at.  If "local", then jobs are run in-process as a single map and reduce 
                         task.
                        </description>
            </property>

Note:- Steps 9 and 10 are not mandatory.

9. Open $HADOOP_INSTALL_DIR/conf/masters and add the machine names where the secondary namenode will run.
            hadoop-namenode

Note: In my case, both primary namenode and Secondary namenode are running on same machine. So, I have added hadoop-namenode in $HADOOP_INSTALL_DIR/conf/masters file.

10. Open $HADOOP_INSTALL_DIR/conf/slaves and add all the datanode machine names:-
            hadoop-namenode     /* if you want the namenode to also store data (i.e., the namenode also behaves like a datanode), it can be mentioned in the slaves file */
            hadoop-datanode1
            hadoop-datanode2

  
Note:-
The above steps are required on all the datanodes in the hadoop cluster.

START AND STOP HADOOP CLUSTER

1. Formatting the namenode:-
Before we start our new Hadoop cluster, we have to format Hadoop’s distributed filesystem (HDFS) for the namenode. We need to do this only the first time we set up our Hadoop cluster. Do not format a running Hadoop namenode; this will cause all your data in the HDFS filesystem to be lost.
Execute the following command on namenode machine to format the file system.
$HADOOP_INSTALL_DIR/bin/hadoop namenode -format
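Since formatting a live namenode destroys HDFS data, it can help to guard the command. The helper below is a hypothetical sketch: it refuses to format when the metadata directory (the hadoop.tmp.dir location configured earlier) already has contents.

```shell
# Refuse to format when the metadata directory is non-empty, to avoid
# wiping a running cluster. safe_format is a hypothetical helper; the
# actual format command is shown as a comment.
safe_format() {
  dir="$1"
  if [ -z "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "formatting"
    # $HADOOP_INSTALL_DIR/bin/hadoop namenode -format
  else
    echo "refusing: $dir is not empty"
  fi
}
```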

2. Starting the Hadoop cluster:-
            Starting the cluster is done in two steps.
           
2.1 Start HDFS daemons:-
           
Execute the following command on namenode machine to start HDFS daemons.
            $HADOOP_INSTALL_DIR/bin/start-dfs.sh
            Note:-
            At this point, the following Java processes should run on namenode
            machine. 
                        ilab@hadoop-namenode:$jps // (the process IDs don’t matter of course.)
                        14799 NameNode
                        15314 Jps
                        14977 SecondaryNameNode
                        ilab@hadoop-namenode:$
            and the following Java processes should run on the datanode machines.
                        ilab@hadoop-datanode1:$jps //(the process IDs don’t matter of course.)
                        15183 DataNode
                        15616 Jps
                        ilab@hadoop-datanode1:$

            2.2 Start MapReduce daemons:-
            Execute the following command on the machine you want the jobtracker to run on.
            $HADOOP_INSTALL_DIR/bin/start-mapred.sh
            // In our case, we will run bin/start-mapred.sh on the namenode machine:
           Note:-
           At this point, the following Java processes should run on namenode machine.       
                        ilab@hadoop-namenode:$jps // (the process IDs don’t matter of course.)
                        14799 NameNode
                        15314 Jps
                        14977 SecondaryNameNode
                        15596 JobTracker                 
                        ilab@hadoop-namenode:$


            and the following Java processes should run on the datanode machines.
                        ilab@hadoop-datanode1:$jps //(the process IDs don’t matter of course.)
                        15183 DataNode
                        15616 Jps
                        15897 TaskTracker               
                        ilab@hadoop-datanode1:$

3. Stopping the Hadoop cluster:-
            Like starting the cluster, stopping it is done in two steps.
3.1 Stop MapReduce daemons:-
Run the command $HADOOP_INSTALL_DIR/bin/stop-mapred.sh on the jobtracker machine. In our case, we will run it on the namenode machine:
            3.2 Stop HDFS daemons:-
                        Run the command $HADOOP_INSTALL_DIR/bin/stop-dfs.sh on the namenode machine.

Friday, August 3, 2012

Please let me back


http://www.goorulearning.org/gooru/index.g#/collection/f3027b45-3543-4f33-a018-3b0c874af3d0/play?utm_source=Dharmaraja&utm_medium=personal%2Boutreach&utm_term=collection&utm_campaign=Sharing%2Bis%2BCaring

Monday, August 22, 2011

Hi dear friends, walk-in details here.

Some job-related websites:

1. www.naukri.com
2. www.timesjobs.com
3. www.durgajobs.com
4. www.jobstreet.com
5. www.fresherscafe.com
6. www.freshersworld.com

Saturday, February 5, 2011

Steps to grow in your life

Do you want to grow in your life? Follow these steps:
1. Don't dwell on the past or the future.
2. Enjoy the current day's life.
3. Always be cool.
4. Be innovative and open-minded.
5. Share your knowledge with others.
6. Make friendships around the world.
7. Always keep positive thoughts.





Then finally you will be a successful person in the world.