Monday, February 8, 2021

Server - Docker and Hadoop Development Environment.

■ Docker & Kubernetes Development Environment.
Host Name       IP Address      Installed Software
Kubernetes01    192.168.0.71    Docker() - njoonk, Kubernetes() - njoonk
Kubernetes02    192.168.0.72    Docker() - njoonk, Kubernetes(POD) - njoonk
Kubernetes03    192.168.0.73    Docker() - njoonk, Kubernetes(POD) - njoonk

■ Hadoop Development Environment.
Host Name    IP Address      Installed Software
hadoop301    192.168.0.51    hadoop (master name node)
hadoop302    192.168.0.52    hadoop (secondary name node)
hadoop303    192.168.0.53    hadoop (data node 1)
hadoop304    192.168.0.54    hadoop (data node 2)
hadoop305    192.168.0.55    hadoop (data node 3)

Saturday, December 5, 2020

How to install Hadoop 3.3.0 with clusters

■ Install OpenJDK 1.8.0.
$ sudo dnf install java-1.8.0-openjdk-devel
■ Make the hadoop user (as root).
$ /usr/sbin/useradd -d /home/hadoop -m hadoop
 
■ Download hadoop-3.3.0 (/usr/local/src)
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
$ tar xvf ./hadoop-3.3.0.tar.gz
$ mv ./hadoop-3.3.0 ../hadoop
$ chown -R hadoop:hadoop ../hadoop

Set environment variables
$ vim /home/hadoop/.bash_profile 
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

Add the following lines to /home/hadoop/.bashrc on all the nodes
$ vim /home/hadoop/.bashrc 
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_HOME
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
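
To confirm the variables are picked up, reload the profile and print the version:
$ source /home/hadoop/.bashrc
$ hadoop version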

■ Add the host names to /etc/hosts on all the nodes

$ vim /etc/hosts 
192.168.0.51 hadoop301
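and likewise for the remaining nodes (addresses from the environment table above):
192.168.0.52 hadoop302
192.168.0.53 hadoop303
192.168.0.54 hadoop304
192.168.0.55 hadoop305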

Thursday, December 3, 2020

Yarn - How to install Hadoop 3.3.0

■ Install OpenJDK 1.8.0.
$ sudo dnf install java-1.8.0-openjdk-devel
■ Make the hadoop user (as root).
$ /usr/sbin/useradd -d /home/hadoop -m hadoop
 
■ Download hadoop-3.3.0 (/usr/local/src)
$ wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
$ tar xvf ./hadoop-3.3.0.tar.gz
$ mv ./hadoop-3.3.0 ../hadoop
$ chown -R hadoop:hadoop ../hadoop

Set environment variables
$ vim /home/hadoop/.bash_profile 
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

Add the following lines to /home/hadoop/.bashrc on all the nodes
$ vim /home/hadoop/.bashrc 
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_HOME
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
■ Add the host name to /etc/hosts
$ vim /etc/hosts 
192.168.0.51 hadoop301

■ Make a public key (as the hadoop user)
$ ssh-keygen -t rsa

Distribute the public key to the master and slave servers for the hadoop user.
Append the public key from id_rsa.pub to authorized_keys.
$ vim /home/hadoop/.ssh/authorized_keys
Change the permissions
$ chmod 644 /home/hadoop/.ssh/authorized_keys
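
Alternatively, ssh-copy-id does the copy-and-append in one step; a sketch, assuming password SSH login is still allowed at this stage:
$ for host in hadoop301 hadoop302 hadoop303 hadoop304 hadoop305; do ssh-copy-id hadoop@$host; done
Afterwards, confirm a passwordless login works:
$ ssh hadoop@hadoop302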

■ Add the following to /usr/local/hadoop/etc/hadoop/core-site.xml
$ vim /usr/local/hadoop/etc/hadoop/core-site.xml 
 <configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop301:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/yarn-${user.name}</value>
    </property>
</configuration>
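
To check that the NameNode URI is actually being read from this file, hdfs getconf prints the effective value:
$ hdfs getconf -confKey fs.defaultFS
hdfs://hadoop301:9000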

■ Make the NameNode, DataNode, and MapReduce directories (as the hadoop user)
$ mkdir -p /home/hadoop/namenode/name
$ mkdir -p /home/hadoop/datanode/data01
$ mkdir -p /home/hadoop/datanode/data02
$ mkdir -p /home/hadoop/mapred/local 
$ mkdir -p /home/hadoop/mapred/system

■ Add the following to /usr/local/hadoop/etc/hadoop/hdfs-site.xml for HDFS configuration
$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml 
<configuration>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoop/namenode/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoop/datanode/data01,/home/hadoop/datanode/data02</value>
        </property>
</configuration>

■ Add the following to /usr/local/hadoop/etc/hadoop/yarn-site.xml for YARN configuration
$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml 
<configuration>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>2</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
    </property>
</configuration>
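
For a multi-node cluster like the one in the environment table, two more properties are commonly added inside the same <configuration> block; a sketch, assuming hadoop301 hosts the ResourceManager:
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop301</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>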

■ Add the following to /usr/local/hadoop/etc/hadoop/mapred-site.xml
$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml 
<configuration>
  <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
  </property>
  <property>
      <name>mapreduce.cluster.local.dir</name>
      <value>/home/hadoop/mapred/local</value>
  </property>
  <property>
      <name>mapreduce.jobtracker.system.dir</name>
      <value>/home/hadoop/mapred/system</value>
  </property>
</configuration>
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

■ Format the NameNode (run this once on the master; reformatting destroys existing HDFS metadata)
$ /usr/local/hadoop/bin/hdfs namenode -format

■ Start and stop Hadoop (HDFS and YARN).
$ /usr/local/hadoop/sbin/start-all.sh
$ /usr/local/hadoop/sbin/stop-all.sh
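
start-all.sh and stop-all.sh are deprecated in Hadoop 3; the split scripts do the same job, and jps (shipped with the JDK) shows which daemons came up:
$ /usr/local/hadoop/sbin/start-dfs.sh
$ /usr/local/hadoop/sbin/start-yarn.sh
$ jps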


■ Check that it is working.
http://hadoop301:9870
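The YARN ResourceManager also serves a web UI, by default on port 8088:
http://hadoop301:8088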

Wednesday, December 2, 2020

Linux - Initial setup after a Linux install

■ Register the user in /etc/sudoers (edit it with visudo rather than directly).
$ sudo visudo
njoonk  ALL=(ALL)  ALL

@ Disallowing Root Access
■ Edit the following file and set the PermitRootLogin parameter to no.
$ vim /etc/ssh/sshd_config
PermitRootLogin no
sshd should be restarted on CentOS 8.2:
$ systemctl restart sshd
$ systemctl status sshd

■ Change a host name
@ Before
[root@localhost home]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
@ After
[root@localhost home]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=centos04
GATEWAY=192.168.11.1
Check the hostname information
[root@localhost home]# hostnamectl
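Note that /etc/sysconfig/network is a legacy file; on CentOS 8 the supported way to change the name is hostnamectl, which persists across reboots:
$ hostnamectl set-hostname centos04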
■ Edit hosts file
@ Before
[root@localhost home]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
@ After
[root@localhost home]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 centos04
192.168.11.23 centos04 localhost
■ Restart the network (the legacy network service is gone on CentOS 8; restart NetworkManager instead)
$ sudo systemctl restart NetworkManager

@ Set DNS
@ Before
[root@centos04 sysconfig]# cat /etc/resolv.conf
search centos04
@ After
[root@centos04 sysconfig]# cat /etc/resolv.conf
search centos04
nameserver 192.168.11.1

Tuesday, December 1, 2020

Linux - Changing from DHCP to a static IP on CentOS 8.2

■ Changing from DHCP to a static IP.

1, Add the static IP information to the interface configuration file, as in the sketch below.


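A minimal sketch of the static settings, assuming the interface is enp0s3 (check the real name with nmcli device) and borrowing this network's 192.168.0.x addressing; the host address itself is hypothetical:
$ sudo vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.51
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
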
2, Edit /etc/resolv.conf and add the line nameserver 192.168.0.1

3, Restart the network (don't run the following commands over a remote session; turning networking off will drop the connection)
    $ sudo nmcli networking off
    $ sudo nmcli networking on

OR
    $ sudo systemctl restart NetworkManager.service

4, Check the logs
    $ sudo journalctl -fu NetworkManager

To resolve a conflicting or changed MAC address

■ You might encounter a MAC address mismatch error when importing an image into VirtualBox.

■ How to troubleshoot it on CentOS

1, Check the network interfaces and their current MAC addresses (see the sketch after these steps)


2, Edit /etc/udev/rules.d/70-persistent-net.rules
    $ vim /etc/udev/rules.d/70-persistent-net.rules

3, Restart the network and check that everything works.
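
A sketch of the check, assuming the iproute2 tools are present: compare the MAC address that ip link reports with the one recorded in the udev rule.
$ ip link show
$ grep 'ATTR{address}' /etc/udev/rules.d/70-persistent-net.rules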

■ How to install net-tools (ifconfig, etc.) on CentOS 8.2

$ sudo yum -y install net-tools