Hi, I'm an architecture engineer and a native Korean speaker.
I recently moved to Seoul, Korea.
I have been developing the Abilists tool.
Please feel free to visit abilists.com
Wednesday, May 29, 2013
Hadoop - commands
@Find the files you want to see.
hadoop dfs -lsr /hadoop/flume/ | grep [search_term]
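@The same search can be done from Java as well. A minimal sketch, assuming a Hadoop 1.x client jar on the classpath; the namenode address and search term below are placeholders:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsFind {

    // Walk the directory tree and print every path whose name contains
    // the search term (the Java analogue of "dfs -lsr | grep").
    static void find(FileSystem fs, Path dir, String term) throws Exception {
        FileStatus[] entries = fs.listStatus(dir);
        if (entries == null) {
            return; // path does not exist
        }
        for (FileStatus entry : entries) {
            if (entry.getPath().getName().contains(term)) {
                System.out.println(entry.getPath());
            }
            if (entry.isDir()) {
                find(fs, entry.getPath(), term);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode-host:9000"); // placeholder address
        FileSystem fs = FileSystem.get(conf);
        find(fs, new Path("/hadoop/flume/"), "search_term"); // placeholder term
    }
}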
@Hadoop error (leave safe mode)
$./bin/hadoop dfsadmin -safemode leave
Hadoop - Sample: get the contents of a file from HDFS
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestMain {

    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://xxx.28.xxx.51:9000");
        FileSystem dfs = FileSystem.get(conf);

        Path filenamePath = new Path("/home/hadoop/data/flume/20130529/12/ch1.1369796403013");
        FSDataInputStream fsIn = dfs.open(filenamePath);
        try {
            // org.apache.commons.io.IOUtils.toByteArray reads the stream
            // to the end, so no separate read() call is needed.
            byte[] fileBytes = IOUtils.toByteArray(fsIn);
            // Create a string from the byte array
            String strFileContent = new String(fileBytes);
            System.out.println(strFileContent);
        } finally {
            IOUtils.closeQuietly(fsIn);
        }
    }
}
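@Note: this assumes the hadoop-core and commons-io jars are on the classpath.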
Tuesday, May 28, 2013
Hadoop Manager URL
I will organize the links below later.
http://confluence.openflamingo.org/pages/viewpage.action?pageId=5537913&focusedCommentId=7209528&#comment-7209528
Monday, May 27, 2013
Flume - Startup shell script for Flume NG 1.3.1
# If you face this error, install the package below.
$ sudo yum install redhat-lsb.x86_64
# Run this script as the hadoop user.
$ sudo su - hadoop
Shell - init-functions
@If you want to install /lib/lsb/init-functions on Linux, just install the package below.
$ yum install redhat-lsb
@=======================================
#!/bin/sh
# LSB initscript functions, as defined in the LSB Spec 1.1.0
#
# Lawrence Lim <llim@redhat.com> - Tue, 26 June 2007
# Updated to the latest LSB 3.1 spec
# http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic_lines.txt
start_daemon () {
/etc/redhat-lsb/lsb_start_daemon "$@"
}
killproc () {
/etc/redhat-lsb/lsb_killproc "$@"
}
pidofproc () {
/etc/redhat-lsb/lsb_pidofproc "$@"
}
log_success_msg () {
/etc/redhat-lsb/lsb_log_message success "$@"
}
log_failure_msg () {
/etc/redhat-lsb/lsb_log_message failure "$@"
}
log_warning_msg () {
/etc/redhat-lsb/lsb_log_message warning "$@"
}
Friday, May 24, 2013
Shell - init.d Script
I will edit this later.
http://werxltd.com/wp/2012/01/05/simple-init-d-script-template/
Thursday, May 23, 2013
Flume - Ganglia Install in Flume NG 1.3.1
$ yum install apr apr-devel
$ yum install rrdtool rrdtool-devel
$ yum install libconfuse libconfuse-devel
$ yum install pcre pcre-devel
$ yum install expat expat-devel
$ yum install zlib zlib-devel
@ Install libconfuse (from source)
$ ./configure --with-pic
$ make
$ make install
@ Install Ganglia
$ mkdir -p /home/hadoop/ganglia/rrd/
$ chown nobody.nobody /home/hadoop/ganglia/rrd/
$ cd ./ganglia-3.6.0
$ ./configure --with-librrd=/home/hadoop/ganglia/rrd/ --with-gmetad --prefix=/usr/local/
$ make
$ make install
@You can confirm
$ ls /usr/local/bin/gstat
$ ls /usr/local/bin/gmetric
$ ls /usr/local/sbin/gmond
$ ls /usr/local/sbin/gmetad
@[.] is the directory where Ganglia was compiled
@ Register to service
$ cp ./gmond/gmond.init /etc/rc.d/init.d/gmond
$ chkconfig --add gmond
$ chkconfig --list gmond
$ vi /etc/rc.d/init.d/gmond
--> Edit -> GMOND=/usr/local/sbin/gmond
$ cp ./gmetad/gmetad.init /etc/rc.d/init.d/gmetad
$ chkconfig --add gmetad
$ chkconfig --list gmetad
$ vi /etc/rc.d/init.d/gmetad
--> Edit -> GMETAD=/usr/local/sbin/gmetad
@ Copy conf
$ /usr/local/sbin/gmond --default_config > /usr/local/etc/gmond.conf
@ Set rrd tool
$ vi /usr/local/etc/gmetad.conf
-> rrd_rootdir "/home/hadoop/ganglia/rrd"
@ Start
# /etc/rc.d/init.d/gmond start
# /etc/rc.d/init.d/gmetad start
@ Confirm process
# telnet localhost 8649
--> Output XML
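@The telnet check can be scripted as well. A minimal Java sketch, assuming gmond is listening on the default port 8649 on localhost:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

public class GmondCheck {
    public static void main(String[] args) throws Exception {
        // gmond dumps its cluster state as XML to any client that connects.
        try (Socket socket = new Socket("localhost", 8649);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}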
@ Reference
http://blog.daum.net/_blog/BlogTypeView.do?blogid=0N9yp&articleno=25&_bloghome_menu=recenttext#ajax_history_home
http://apexserver.iptime.org/users/yk.choi/weblog/7eca7/
http://ahmadchaudary.wordpress.com/tag/ganglia-monitoring/
Git - Adding a tag, step by step
1.Edit pom.xml
- e.g., change 1.0-SNAPSHOT to 1.1
2.Add to the index
$ git add *
3.Commit
$ git commit -m "Tag v1.1"
4.Add a tag
$ git tag v1.1
5.Push to the remote server
@ When I ran the following command against GitHub,
the system did not ask me for an ID and password
(master = tag version)
$ git push origin v1.1
6.Return to development
$ git fetch origin
$ git reset --hard origin/master
Tuesday, May 21, 2013
How to use Flume NG
bin/flume-ng agent --conf-file conf/flume.conf --name agent1 -Dflume.monitoring.type=GANGLIA -Dflume.monitoring.hosts=172.xx.xxx.xx:5455
The -Dflume.monitoring.* properties make the agent report its metrics to the Ganglia gmond at the given host:port.
Friday, May 17, 2013
Flume - Install Flume NG 1.3.1
■What's Changed?
- There are no more logical or physical nodes. We call all physical nodes agents, and agents can run zero or more sources and sinks.
- There is no master and no ZooKeeper dependency anymore. At this time, Flume runs with a simple file-based configuration system.
■Install Flume NG 1.3.1
$ git clone https://git-wip-us.apache.org/repos/asf/flume.git flume
$ cd flume
$ git checkout trunk
OR
$ wget http://ftp.kddilabs.jp/infosystems/apache/flume/1.3.1/apache-flume-1.3.1-bin.tar.gz
■Configuration
$ cp conf/flume-conf.properties.template conf/flume.conf
$ cp conf/flume-env.sh.template conf/flume-env.sh
■Change the file (conf/flume.conf)
#=====================================================
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414
# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = logger
# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = log-sink1
#=====================================================
■Execute
$ bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1
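To confirm that avro-source1 receives events, here is a minimal sender sketch using the Flume client SDK (this assumes the flume-ng-sdk jar is on the classpath; host and port follow the config above). The event should show up in the logger sink:
import java.nio.charset.Charset;

import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class AvroSendTest {
    public static void main(String[] args) throws Exception {
        // Connect to the avro source defined in conf/flume.conf.
        RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 41414);
        try {
            Event event = EventBuilder.withBody("hello flume", Charset.forName("UTF-8"));
            client.append(event); // delivered to ch1, then to log-sink1
        } finally {
            client.close();
        }
    }
}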
■Reference
https://cwiki.apache.org/FLUME/getting-started.html
Thursday, May 16, 2013
Linux - user commands
# Add user
$/usr/sbin/useradd -d /home/njoonk -m njoonk -g njoonk
# Set password
$/usr/bin/passwd njoonk
# Delete user
$userdel testuser #Deletes only the user.
$userdel -r testuser #Deletes the user along with the user's home directory.
# Create a public key in Linux
$ ssh-keygen -t rsa
# Put the public key into authorized_keys
$ vi authorized_keys
$ chmod 644 .ssh/authorized_keys
Wednesday, May 15, 2013
Java - GC
JAVA_OPTS="-server"
JAVA_OPTS="${JAVA_OPTS} -Xms1024m -Xmx1024m -Xmn768m -XX:SurvivorRatio=2 -XX:PermSize=64m -XX:MaxPermSize=256m"
JAVA_OPTS="${JAVA_OPTS} -XX:+PrintGCDetails -Xloggc:/usr/local/tomcat/logs/gc.log"
Reference URL
http://fly32.net/438
http://www.javaservice.com/~java/bbs/read.cgi?m=&b=weblogic&c=r_p&n=1221718848&p=6&s=t
JAVA_OPTS="${JAVA_OPTS} -Xms1024m -Xmx1024m -Xmn768m -XX:SurvivorRatio=2 -XX:PermSize=64m -XX:MaxPermSize=256m"
JAVA_OPTS="${JAVA_OPTS} -XX:+PrintGCDetails -Xloggc:/usr/local/tomcat/logs/gc.log"
Reference URL
http://fly32.net/438
http://www.javaservice.com/~java/bbs/read.cgi?m=&b=weblogic&c=r_p&n=1221718848&p=6&s=t
Monday, May 13, 2013
Assorted Git information
@Reset local master to the remote (discards local changes)
git fetch origin
git reset --hard origin/master
@Download the new version from the remote (you need to run fetch first)
git checkout HEAD
@Delete a tag on the remote
git push origin :tags/{tag name}
@Upload tags to the remote
git push origin v1.5
git push origin --tags
@Delete a tag locally
git tag -d {tag name}
@Create a tag
git tag v1.0
@Upload a tag to the remote
git push --tags
@Show the commit id
git rev-parse [Tag Name]
@Delete a tag
git tag -d [Tag Name]
@Stage, commit, and push
git add *
git commit -m "This is the first commit"
git push
@Download all branches from the remote
git fetch origin
@Set the upstream for master in ./.git/config
1.$ vim ./.git/config
2.[branch "master"]
remote = origin
merge = refs/heads/master
@It is a good idea to put the following into the config file
[alias]
hist = log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short
[color]
ui = true
How to get a thread dump in Linux.
Use jstack as below; the -F option forces a dump when the process does not respond.
./jstack -l -F 22431 > /home/share/kim_joon/thread3.txt
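If jstack is not available, a thread dump can also be taken from inside the JVM. A minimal sketch using Thread.getAllStackTraces():
import java.util.Map;

public class ThreadDump {
    public static void main(String[] args) {
        // Capture a stack trace for every live thread in this JVM.
        Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
            Thread t = entry.getKey();
            System.out.println("\"" + t.getName() + "\" state=" + t.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}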
Friday, May 10, 2013
Ruby
http://dimdim.tistory.com/56
http://www.jacopretorius.net/2012/01/ruby-map-collect-and-select.html
http://ruby-doc.org/core-2.0/Array.html