Sunday, December 29, 2013

MyStory - When I argued with my wife.

Just after getting married, I often had disputes with my wife.
Actually, I can't remember everything,
but I will try to describe the situation.
We were arguing about something.
At that time, I was very angry for some reason.
I felt that my wife was ignoring me.
"Who am I?" I asked loudly. "Who am I?" I asked again.
She didn't say anything.
So, I answered for her,
"You are my husband!"
As soon as I said it, I realized that I had misspoken --
I wasn't fluent in Japanese at that time.
It was silent for a while.
Then she burst out laughing until she shed tears.
In my mind, I also burst out laughing.
I tried to suppress my laughter, but she noticed that I was laughing.
I can't remember exactly why I was angry.
Now we are lovey-dovey....

Monday, December 16, 2013

Troubleshooting - Hive

@When you see the following error.
----------------------------------------------------------------
FAILED: SemanticException [Error 10035]: Column repeated in partitioning columns
----------------------------------------------------------------
@Solution - the partition column must not be repeated in the table's column list; define dt only in PARTITIONED BY.
sudo -u hdfs hive -e "CREATE TABLE table_temp (time string, aaa string, bbb string) PARTITIONED BY (dt string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS SEQUENCEFILE;"
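@Note
The partition column dt is still available in queries as a pseudo-column. As a quick check (the partition value '20131216' is only an example):
sudo -u hdfs hive -e "ALTER TABLE table_temp ADD PARTITION (dt='20131216');"
sudo -u hdfs hive -e "SELECT time, aaa FROM table_temp WHERE dt='20131216';"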

Friday, December 13, 2013

Troubleshooting - An error has occurred in Eclipse

To resolve this problem, start Eclipse with the -clean option.
$ eclipse -clean
---------------------------------------------------------------------------------
!ENTRY org.eclipse.osgi 4 0 2013-12-13 18:44:55.618
!MESSAGE Startup error
!STACK 1
java.lang.RuntimeException: Exception in org.eclipse.osgi.framework.internal.core.SystemBundleActivator.start() of bundle org.eclipse.osgi.
    at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.resume(InternalSystemBundle.java:233)
    at org.eclipse.osgi.framework.internal.core.Framework.launch(Framework.java:657)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.startup(EclipseStarter.java:274)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:176)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629)
    at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
    at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.osgi.framework.internal.core.SystemBundleActivator.start() of bundle org.eclipse.osgi.
    at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734)
    at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
    at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.resume(InternalSystemBundle.java:225)
    ... 10 more
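@If -clean does not help
Deleting the OSGi bundle cache by hand sometimes works as well (a hedged fallback; the path assumes a default Eclipse installation directory):
$ rm -rf eclipse/configuration/org.eclipse.osgi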

Monday, December 9, 2013

Gradle - Add the jars to the dependencies in Eclipse


You need to install the Gradle integration plug-in: [https://github.com/spring-projects/eclipse-integration-gradle]
I installed this version: [http://dist.springsource.com/release/TOOLS/gradle]

build.gradle
apply plugin: "java"
apply plugin: 'eclipse'
defaultTableTasks = ['assemble']
version = '1.0'
sourceCompatibility = 1.7
targetCompatibility = 1.7
repositories {
mavenLocal()
mavenCentral()
}
eclipse {
classpath {
downloadSources=true
}
}
dependencies {
compile 'org.apache.poi:poi:3.9'
compile 'commons-codec:commons-codec:1.5'
compile 'commons-lang:commons-lang:2.6'
testCompile 'junit:junit:4.9'
}

$ gradle cleanEclipse eclipse

You can see the jar files in the Eclipse project's classpath.

DataArtists - What is a Data Artist?

Let's watch the TED talk.

Thursday, December 5, 2013

MyDesign - This is an office partition design.

15 years ago, I was a professional 3D rendering designer.
When I was learning the 3D Studio program, I used to make renderings all night.
Of course, I read all of the 3D books I could find, and I even thought about writing a book of 3D renderings.
The two pictures are my work from when I was a college student.

Wednesday, December 4, 2013

Shell - Starting and stopping Apache and Tomcat

This is a shell script used during releases to start and stop Apache and Tomcat over ssh with expect.

#!/bin/sh
#========================================================================
# Apache and Tomcat Start, Stop
#========================================================================
# Apache #
apacheInvoker="/etc/init.d/httpd"
apacheTargetLog="/usr/local/apache/logs/error_log"
# Tomcat #
tomcatInvoker="/etc/init.d/${tomcat}"
tomcatWebappsPath="/usr/local/${tomcat}/webapps"
tomcatOtherWebappsPath="/usr/local/${tomcat}/${webapp}"
tomcatTargetLog="/usr/local/${tomcat}/logs/catalina.out"
#========================================
# goal : Start apache
# parameter :
# 1.expectPath
# 2.expectTimeOut
# 3.adminUserId
# 4.ip
# 5.sudoPath
# 6.apacheInvoker
# 7.tailPath
# 8.tailOption
# 9.apacheTargetLog
#========================================
apacheStart(){
echo "Attempt to startup apache..."
${expectPath} <<EOF
set timeout ${expectTimeOut}
spawn ssh ${adminUserId}@$ip
expect "Last login:"
send "${sudoPath} ${apacheInvoker} start\r"
expect "resuming normal operations"
send "${tailPath} ${tailOption} ${apacheTargetLog}\r"
expect "OK. Normally startup."
close
EOF
}
#========================================
# goal : Stop apache
# parameter :
# 1.expectPath
# 2.expectTimeOut
# 3.adminUserId
# 4.ip
# 5.sudoPath
# 6.apacheInvoker
# 7.tailPath
# 8.tailOption
# 9.apacheTargetLog
#========================================
apacheStop(){
echo "Attempt to shutdown apache..."
${expectPath} <<EOF
set timeout ${expectTimeOut}
spawn ssh ${adminUserId}@$ip
expect "Last login:"
send "${sudoPath} ${apacheInvoker} stop\r"
expect "shutting down"
send "${tailPath} ${tailOption} ${apacheTargetLog}\r"
expect "OK. Normally shut down"
close
EOF
}
#========================================
# goal : Start tomcat
# parameter :
# 1.expectPath
# 2.expectTimeOut
# 3.adminUserId
# 4.ip
# 5.sudoPath
# 6.tomcatInvoker
# 7.tailPath
# 8.tailOption
# 9.targetLog
# 10.expectLog
#========================================
tomcatStart(){
echo "Attempt to startup tomcat..."
${expectPath} <<EOF
set timeout ${expectTimeOut}
spawn ssh ${adminUserId}@$ip
expect "Last login:"
send "${sudoPath} ${tomcatInvoker} start\r"
expect "Using JRE_HOME:"
send "${tailPath} ${tailOption} ${tomcatTargetLog}\r"
expect "Server startup"
close
EOF
}
#========================================
# goal : Stop tomcat
# parameter :
# 1.expectPath
# 2.expectTimeOut
# 3.adminUserId
# 4.ip
# 5.sudoPath
# 6.tomcatInvoker
# 7.tailPath
# 8.tailOption
# 9.targetLog
# 10.expectTomcatLog
#========================================
tomcatStop(){
echo "Attempt to shutdown tomcat..."
${expectPath} <<EOF
set timeout ${expectTimeOut}
spawn ssh ${adminUserId}@$ip
expect "Last login:"
send "${sudoPath} ${tomcatInvoker} stop\r"
expect "Using JRE_HOME:"
send "${tailPath} ${tailOption} ${tomcatTargetLog}\r"
expect "${expectTomcatLog}"
close
EOF
}
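@Usage
The functions assume that expectPath, expectTimeOut, adminUserId, ip, sudoPath, tailPath and tailOption are set before the call. A minimal sketch (the values are only examples):
expectPath="/usr/bin/expect"
expectTimeOut=30
adminUserId="release"
ip="xxx.xxx.xxx.10"
sudoPath="/usr/bin/sudo"
tailPath="/usr/bin/tail"
tailOption="-f"
apacheStop
apacheStart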

Thursday, November 28, 2013

Gradle - Release Script on Jenkins

You just put the following into the Execute shell of the Post Steps on Jenkins.
APP_HOME=/usr/local/app
APP_NAME=appName
APP_JAR=appName.jar
APP_ZIP=appName.zip
for ip in ${IP_ADDRESS}
do
    scp -o StrictHostKeyChecking=no ${WORKSPACE}/build/distributions/${APP_ZIP} user_id@${ip}:${APP_HOME}/
    ssh -o StrictHostKeyChecking=no user_id@${ip} /usr/bin/unzip -o ${APP_HOME}/${APP_ZIP} -d ${APP_HOME}/${APP_NAME}/
    ssh -o StrictHostKeyChecking=no user_id@${ip} /bin/mv -v ${APP_HOME}/${APP_NAME}/${APP_JAR} ${APP_HOME}/${APP_NAME}/bin/
done
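The loop assumes IP_ADDRESS is a space-separated list of target hosts, typically defined as a Jenkins build parameter. For example (the addresses are placeholders):
IP_ADDRESS="xxx.xxx.xxx.11 xxx.xxx.xxx.12"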

Also, you need to configure the following option on Jenkins.
Build > Invoke Gradle script > Invoke Gradle > Gradle Version >> Default
Build > invoke Gradle script > Tasks >> zip

Gradle - Sample build.gradle for Batch

This Gradle build file is for a batch application; the batch task packages it as a zip.

apply plugin: 'java'
// The version number is appended to the name of the zip file.
version = ''
// apply from: 'gradle/maven.gradle'
defaultTasks = ['assemble']
sourceCompatibility = '1.7'
targetCompatibility = '1.7'
archivesBaseName = 'vertxWebSocketServer'
repositories {
    mavenLocal()
    mavenCentral()
}
dependencies {
    compile group: 'commons-configuration', name: 'commons-configuration', version: '1.6'
    compile group: 'commons-daemon', name: 'commons-daemon', version: '1.0.15'
    compile group: 'org.slf4j', name: 'slf4j-api', version: '1.7.5'
    runtime group: 'org.slf4j', name: 'jcl-over-slf4j', version: '1.7.5'
    compile group: 'ch.qos.logback', name: 'logback-core', version: '1.0.13'
    compile group: 'ch.qos.logback', name: 'logback-classic', version: '1.0.13'
    compile group: 'org.springframework', name: 'spring-core', version: '3.1.4.RELEASE'
    compile group: 'org.springframework', name: 'spring-context', version: '3.1.4.RELEASE'
    compile group: 'org.springframework', name: 'spring-context-support', version: '3.1.4.RELEASE'
    compile group: 'org.springmodules', name: 'spring-modules-jakarta-commons', version: '0.9'
    compile group: 'io.vertx', name: 'vertx-core', version: '2.0.2-final'
    compile group: 'io.vertx', name: 'vertx-platform', version: '2.0.2-final'
}
task batch(type: Zip) {
    from 'src/dist'
    from jar.outputs.files
    baseName = 'vertxWebSocketServer'
    into('libs') {
        from configurations.runtime
    }
}
batch.dependsOn(assemble)
//assemble.dependsOn(batch)
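Running the batch task builds the jar and should produce build/distributions/vertxWebSocketServer.zip, with the runtime dependencies under libs/:
$ gradle batch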

Monday, November 25, 2013

Memcached - Conf files for Repcached

# memcached1 run config
-u cy_memcached
-p 11211
-l xxx.xxx.xxx.69
-d
-c 1024
-m 128
# Set replication
-x xxx.xxx.xxx.56
-X 11212
# memcached2 run config
-u cy_memcached
-p 11211
-l xxx.xxx.xxx.56
-d
-c 1024
-m 128
# Set replication
-x xxx.xxx.xxx.69
-X 11212
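In repcached, -x is the address of the replication peer and -X is the replication port, so the two nodes above point at each other. A quick check that both instances are up (assuming nc is installed):
$ echo stats | nc xxx.xxx.xxx.69 11211
$ echo stats | nc xxx.xxx.xxx.56 11211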

Tuesday, November 19, 2013

Vert.x - Vert.x and Highchart

@This is a sample that shows the chart in real time.
<html>
<head>
<title>Vert.x and Highchart</title>
<meta http-equiv='cache-control' content='no-cache'>
<meta http-equiv='expires' content='0'>
<meta http-equiv='pragma' content='no-cache'>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="http://code.highcharts.com/stock/highstock.js"></script>
<script src="http://code.highcharts.com/stock/modules/exporting.js"></script>
<script src="js/sockjs-0.2.1.min.js"></script>
<script src="js/vertxbus.js"></script>
<script src="js/json2.js"></script>
</head>
<body>
<script language="javascript" type="text/javascript">
var xx = 0;
var yy = 0;
function myFunction(x, y) {
xx = x;
yy = y;
var div = document.getElementById('demo');
div.innerHTML = div.innerHTML + ', y=' + y;
}
var eb = new vertx.EventBus("http://localhost:8091/eventbus");
function browserHandler(msg, replyTo) {
var obj = JSON.parse(msg.text);
myFunction(obj.x, obj.y);
}
eb.onopen = function() {
eb.registerHandler('app.conduit', browserHandler);
};
// eb.publish('app.conduit', {text: 'Publish message: testaaaaaa'});
// This is from highchart.
$(function() {
Highcharts.setOptions({
global : {
useUTC : false
}
});
// Create the chart
$('#container').highcharts('StockChart', {
chart : {
events : {
load : function() {
// set up the updating of the chart each second
var series = this.series[0];
setInterval(function() {
var x = (new Date()).getTime(), // current time
y = yy;
//y = Math.round(Math.random() * 100);
series.addPoint([x, y], true, true);
}, 1000);
}
}
},
rangeSelector: {
buttons: [{
count: 1,
type: 'minute',
text: '1M'
}, {
count: 5,
type: 'minute',
text: '5M'
}, {
type: 'all',
text: 'All'
}],
inputEnabled: false,
selected: 0
},
title : {
text : 'Live random data'
},
exporting: {
enabled: false
},
series : [{
name : 'Random data',
data : (function() {
// generate an array of random data
var data = [], time = (new Date()).getTime(), i;
for( i = -999; i <= 0; i++) {
data.push([
time + i * 1000,
Math.round(Math.random() * 100)
]);
}
return data;
})()
}]
});
});
</script>
<div id="container" style="height: 500px; min-width: 500px"></div>
<p id="demo">Pushed Data from Server >>> </p>
</body>
</html>

Reference

Saturday, November 9, 2013

MyDesign - This is a clock design on the desk.

I designed the following drawing when I worked at an industrial design company.


OS: Windows 97
Graphic Tool: Corel Draw

Why did I decide to enter college for Industrial Design?
I had a reason, and I will tell you why in another post.


Friday, November 8, 2013

Java - Commons Daemon in Java

@Download Commons Daemon
$ wget http://ftp.meisei-u.ac.jp/mirror/apache/dist//commons/daemon/source/commons-daemon-1.0.15-src.tar.gz
@Decompress
$ tar xvf ./commons-daemon-1.0.15-src.tar.gz
@Change Directory
$ cd /usr/local/src/commons-daemon-1.0.15-src/src/native/unix
@You need to build the "configure" program with:
$ ./support/buildconf.sh
@Set configuration and compile
$ ./configure --with-java=/usr/local/java
$ make
@Move jsvc to the apps home
$ mv /usr/local/src/commons-daemon-1.0.15-src/src/native/unix/jsvc /usr/local/app/
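@Check that jsvc was built correctly by printing its usage (jsvc supports a -help flag):
$ /usr/local/app/jsvc -help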

#!/bin/sh
# Batch process check script - start
# declare -i batchCnt
# batchCnt=`ps -ef | grep VertxWebSocketServerMain | grep -v "grep VertxWebSocketServerMain" | wc -l`
# if [ $batchCnt -ge 1 ]
# then
# echo "VertxWebSocketServerMain already started !!"
# exit 0
# fi
# Batch batch process check script - end
DAEMON_USER=njoonk
# jsvc was moved to /usr/local/app above, so use that as the daemon home.
DAEMON_HOME=/usr/local/app
PID_FILE=/var/run/vertxWebSocket.pid
JAVA_HOME=/usr/local/java
BASEDIR=/usr/local/app/vertxWebSocketServer
PROGRAM_NAME=njoonk.vertx.main.VertxWebSocketServerMain
export JAVA_HOME
for f in `find $BASEDIR/lib -type f -name "*.jar"`
do
CLASSPATH=$CLASSPATH:$f
done
CLASSPATH=${CLASSPATH}:${BASEDIR}/bin/vertxWebSocketServer.jar
case "$1" in
"start")
$DAEMON_HOME/jsvc -user $DAEMON_USER -home $JAVA_HOME \
-wait 10 -pidfile $PID_FILE -outfile $DAEMON_HOME/logs/vertxWebSocket.out \
-server -Xmx128m -Xms128m -Xmn64m \
-errfile '&1' -cp $CLASSPATH $PROGRAM_NAME
#To get a verbose JVM
#-verbose \
#To get a debug of jsvc.
#-debug \
echo "Daemon start"
exit $?
;;
stop)
$DAEMON_HOME/jsvc -stop -pidfile $PID_FILE $PROGRAM_NAME
echo "Daemon stop"
exit $?
;;
*)
echo "Usage vertxWebSocket.sh start/stop"
exit 1;;
esac
exit 0
import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;
import org.apache.commons.daemon.DaemonInitException;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class VertxWebSocketServerMain implements Daemon {
@Override
public void destroy() {
// TODO Auto-generated method stub
System.out.println("destory");
}
@Override
public void init(DaemonContext arg0) throws DaemonInitException, Exception {
// TODO Auto-generated method stub
System.out.println("start >>> init");
new ClassPathXmlApplicationContext("springConfig.xml");
System.out.println("stop >>> init");
}
@Override
public void start() throws Exception {
// TODO Auto-generated method stub
System.out.println("start");
}
@Override
public void stop() throws Exception {
System.out.println("stop");
}
}
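jsvc drives the lifecycle: it calls init() and start() when the daemon starts, and stop() and destroy() on shutdown, so the Spring context above is created during init(). Start it through the script above:
$ ./vertxWebSocket.sh start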

Link - Gradle

@Plug-in in Eclipse
http://www.kaczanowscy.pl/tomek/2010-03/gradle-ide-integration-eclipse-plugin

@Multi-modules
http://blog.tamashumi.com/2012/11/muliti-module-gradle-project-with-ide.html

Thursday, November 7, 2013

Troubleshooting - Vert.x

@The following error occurred in Vert.x when vertx-core-1.3.1.final.jar was included in the lib directory.
The resolution is to use vertx-core-2.0.2.final.jar instead.
------------------------------------------------------------------------------
nested exception is java.lang.IncompatibleClassChangeError: Found interface org.vertx.java.core.VertxFactory, but class was expected

Monday, October 28, 2013

HTML5&CSS3 - Link

@Xcode with Web
http://cordova.apache.org/#about

@Maker Css3
http://www.css3maker.com/index.html

@CSS3
http://www.hongkiat.com/blog/html5-web-applications/

@Fonts - You can use these for free.
http://crazypixels.net/50-precious-free-fonts-for-commercial-use/

Java - How to install Java on CentOS

@ How to install Java on CentOS.

@ Change to the root user.
$ sudo -s

@ Decompress jdk-7u75-linux-x64.tar.gz (or a newer version).
$ tar xvf /usr/local/src/jdk-7u75-linux-x64.tar.gz

@ Move the Java directory under the local directory.
$ mv /usr/local/src/jdk1.7.0_75 /usr/local/java

@ Change the user and group ownership of the files to root.
$ chown -R root.root /usr/local/java

@ Add the following lines to /home/njoonk/.bash_profile
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
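@ Apply the profile and check the installation.
$ source ~/.bash_profile
$ java -version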

Friday, October 18, 2013

Objective-C - How to remove Cocos2d

@When you can't update Cocos2d to a new version.

$ cd "/Users/username/Library/Developer/Xcode/Templates/File Templates"
$ rm -rf ./cocos2d

$ cd /Users/username/Library/Developer/Xcode/Templates
$ rm -rf ./cocos2d

$ cd "/Users/username/Library/Application Support/Developer/Shared/Xcode/File Templates"
$ rm -rf ./cocos2d\ 1.0.0/

$ cd "/Users/username/Library/Application Support/Developer/Shared/Xcode/Project Templates/"
$ rm -rf ./cocos2d\ 1.0.0/

Objective-C - iPhone to Server

I will write more about this later.
@You need to get the following library.
https://github.com/msgpack/msgpack-objectivec

@ On the iPhone
static void listenerCallback(CFSocketRef socket, CFSocketCallBackType type,
                             CFDataRef address, const void *data, void *info) {

    NSString* str = nil;
    switch (type) {
        case kCFSocketNoCallBack:
            str = @"kCFSocketNoCallBack";
            break;
        case kCFSocketReadCallBack:
            str = @"kCFSocketReadCallBack";
            break;
        case kCFSocketAcceptCallBack:
            str = @"kCFSocketAcceptCallBack";
            break;
        case kCFSocketDataCallBack:
            str = @"kCFSocketDataCallBack";
            break;
        case kCFSocketConnectCallBack:
            str = @"kCFSocketConnectCallBack";
            break;
        case kCFSocketWriteCallBack:
            str = @"kCFSocketWriteCallBack";
            break;
        default:
            break;
    }

    if(type == kCFSocketDataCallBack) {
        // Get a message from server
        NSData* receiveData = (NSData*)data;
        NSDictionary* parsed = [receiveData messagePackParse];
        NSNumber *numx = [parsed objectForKey:@"x"];
        NSNumber *numy = [parsed objectForKey:@"y"];
        NSLog(@"numx is %f", [numx floatValue]);
        NSLog(@"numy is %f", [numy floatValue]);

        /* another way to print
         UInt8 *gotData = CFDataGetBytePtr((CFDataRef)data);
         int len = CFDataGetLength((CFDataRef)data);
         for(int i=0; i < len; i++) {
             NSLog(@"%c",*(gotData+i));
         }
         */
    } else if(type == kCFSocketWriteCallBack) {
        // Send a message to server
        CGPoint translation = CGPointMake(5.0, 6.0);
        NSNumber *numx = [NSNumber numberWithFloat:translation.x];
        NSNumber *numy = [NSNumber numberWithFloat:translation.y];
        NSDictionary *someDictionary = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        numx, @"x",
                                        numy, @"y",
                                        nil];
        NSData* packed = [someDictionary messagePack];
        CFSocketSendData(socket, NULL, (CFDataRef)packed, 10);
        // CFRelease((CFDataRef)packed);
    }

}

@ On the Server
....

Saturday, October 12, 2013

MyStory - An earthquake happened on 11 March 2011 in Japan

There was a big earthquake in Japan.
It was a very sad thing.
When the earthquake happened,
I was very anxious about my wife and my daughter.
So I called my wife as soon as possible,
but I couldn't reach her.
I feared for both her and my daughter's safety.
After I was allowed to leave the company,
I left the office a little early, at 5 o'clock, to get to my wife's workplace.
I went to Shinagawa Station from Shibuya.
I think it took about 5 hours.
But we didn't meet there, because the communication network wasn't working.
After a little time had passed, we were able to reach each other by smartphone.
My wife said she was walking to the child-care institution.
I walked to the same place.
I reached home at 5 AM.
So altogether I had walked for 12 hours.
I was happy that my family was safe.
But I am very worried about the accident at the nuclear power plant.
I worry about the radioactivity.

Tuesday, October 8, 2013

Java - Setting server.xml on tomcat 7

@ In server.xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           acceptCount="150"
           backlog="200"
           enableLookups="false"
           maxThreads="200"
           minSpareThreads="75"
           maxPostSize="0"
           maxSavePostSize="0"
           maxKeepAliveRequests="1"
           server="false"
           URIEncoding="UTF-8"
           compression="on"
           compressionMinSize="2048"
           noCompressionUserAgents="gozilla, traviata"
           compressableMimeType="text/html,text/xml,text/x-json" />
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="false">
    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs/AccessLog"
           prefix="access_log."
           pattern="common"
           fileDateFormat="yyyyMMdd"
           resolveHosts="false"/>
</Host>

Java - Basic Authentication on Tomcat 7

@$ cd /usr/local/tomcat/conf
@Add the following to web.xml
<security-constraint>
        <web-resource-collection>
                <web-resource-name>
                        My Protected WebSite
                </web-resource-name>
                <url-pattern> /* </url-pattern>
                <http-method> GET </http-method>
                <http-method> POST </http-method>
        </web-resource-collection>
        <auth-constraint>
                <!-- the same as in your tomcat-users.xml file -->
                <role-name> aname </role-name>
        </auth-constraint>
</security-constraint>
<login-config>
        <auth-method> BASIC </auth-method>
        <realm-name>  Basic Authentication </realm-name>
</login-config>
<security-role>
        <description> aname role </description>
        <role-name> aname </role-name>
</security-role>
---------------------------------------------------------------------------------------
 @tomcat-users.xml
  <role rolename="manager-gui"/>
  <role rolename="admin-gui"/>
  <role rolename="aname" />

  <user username="tomcat" password="pwd" roles="manager-gui,admin-gui"/>
  <user username="aname" password="pwd" roles="aname"/>

Mysql - Remove the bin log

@Remove the bin log
mysql -e "PURGE MASTER LOGS BEFORE DATE_SUB(CURRENT_DATE, INTERVAL 60 DAY)"

Friday, September 13, 2013

Mysql - One sequence table can manage all tables

This shows that one sequence table can manage the IDs of all tables.

// These are schema designs
CREATE TABLE zz_app
(
    id BIGINT UNSIGNED NOT NULL DEFAULT '0',
    app_id VARCHAR(45) NOT NULL,
    app_aaa VARCHAR(45) NULL,
    app_bbb VARCHAR(45) NULL,
    app_status_flag CHAR(1) NULL,
    insert_time TIMESTAMP NOT NULL,
    update_time TIMESTAMP NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE INDEX zz_app_idx1 ON zz_app(app_id);
CREATE INDEX zz_app_idx2 ON zz_app(insert_time);

CREATE TABLE zz_sequence
(
    seq_name VARCHAR(30) NOT NULL,
    id BIGINT UNSIGNED NOT NULL DEFAULT '0',
    PRIMARY KEY (seq_name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

----------------------------------------------------------------------

// Update sequence number into the Mysql
    <insert id="updateSequece" parameterType="map">
        UPDATE
            zz_sequence
        SET
            id=LAST_INSERT_ID(id+1)
        WHERE
            seq_name = #{seqName}
        <selectKey resultType="Long" order="AFTER">
            SELECT
                LAST_INSERT_ID()
        </selectKey>
    </insert>

// Insert a data into the Mysql
    <insert id="insertApp" parameterType="map">
        <selectKey keyProperty="id" resultType="Long" order="BEFORE">
            SELECT
                id
            FROM
                zz_sequence
            WHERE
                seq_name = #{seqName};
        </selectKey>
        INSERT INTO zz_app (
            id,
            app_id,
            app_aaa,
            app_bbb,
            app_status_flag,
            insert_time,
            update_time
        ) VALUES (
            #{id},
            #{appId},
            #{appAaa},
            #{appBbb},
            #{appStatusFlag},
            now(),
            now()
        )
    </insert>
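Note: the sequence row must exist before the update statement can increment it. A minimal seed for the zz_app table's sequence (run against your database; the seq_name value is just an example):
mysql -e "INSERT INTO zz_sequence (seq_name, id) VALUES ('zz_app', 0);"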

Thursday, September 12, 2013

Mysql - How to remove mysql on the mac

  • sudo rm /usr/local/mysql
  • sudo rm -rf /usr/local/mysql*
  • sudo rm -rf /Library/StartupItems/MySQLCOM
  • sudo rm -rf /Library/PreferencePanes/My*
  • edit /etc/hostconfig and remove the line MYSQLCOM=-YES-
  • sudo rm -rf /Library/Receipts/mysql*
  • sudo rm -rf /Library/Receipts/MySQL*
  • sudo rm -rf /var/db/receipts/com.mysql.*

Objective-C - How to remove Xcode

@Remove the old Xcode
>sudo /Developer/Library/uninstall-devtools --mode=all

Java - Jetty to run in eclipse

・・Main
・Location
/usr/share/maven/bin/mvn

・Working Directory
1.Browser Workspace
2.Select the project name

・Arguments
-P staging
jetty:run

・Execute
$ cd /.../workspace
$ mvn jetty:run -P staging

・・Environment
@ For Debugging
MAVEN_OPTS = -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=y

・・Run/Debug Configure....
Then, pull up the "Run/Debug Configure...." menu item, select "Remote Java Application" and click the "New" button. Fill in the dialog by selecting your webapp project for the "Project:" field, and make sure you are using the same port number as you specified in the address= property above.
Now all you need to do is run Run/External Tools and select the name of the Maven tool setup you created in step 1 to start the plugin, and then Run/Debug and select the name of the debug setup you created in step 2.
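From a plain shell, the same debug setup looks like this (a sketch; port 4000 matches the address= property above):
$ export MAVEN_OPTS="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=y"
$ mvn -P staging jetty:run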

@pom.xml - Sample
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>jp.xxxx.ns</groupId>
<artifactId>html5</artifactId>
<packaging>war</packaging>
<version>1.0-SNAPSHOT</version>
<name>html5 Maven Webapp</name>
<url>http://maven.apache.org</url>
<repositories>
<repository>
<id>public</id>
<name>Internal Repository</name>
<url>http://xxx.xxx.xxx.xx1:x0x1/nexus/content/groups/public/</url>
</repository>
<repository>
<id>stg-common-mvn01</id>
<name>stg-common-mvn01-releases</name>
<url>http://xxx.xxx.xxx.xx2:x0x1/artifactory/libs-releases-local</url>
</repository>
</repositories>
<distributionManagement>
<repository>
<id>releases</id>
<name>Internal Release Repository</name>
<url>http://xxx.xxx.xxx.xx1:x0x1/nexus/content/repositories/releases</url>
</repository>
<snapshotRepository>
<id>snapshots</id>
<name>Internal Snapshot Repository</name>
<url>http://xxx.xxx.xxx.xx1:x0x1/nexus/content/repositories/snapshots</url>
</snapshotRepository>
</distributionManagement>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.jetty.version>6.1.26</project.jetty.version>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.5.1</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
<encoding>${project.build.sourceEncoding}</encoding>
<verbose>true</verbose>
</configuration>
</plugin>
<plugin>
<groupId>org.mortbay.jetty</groupId>
<artifactId>maven-jetty-plugin</artifactId>
<version>${project.jetty.version}</version>
<configuration>
<contextPath>/</contextPath>
<scanIntervalSeconds>3</scanIntervalSeconds>
<connectors>
<connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
<port>8080</port>
<maxIdleTime>60000</maxIdleTime>
</connector>
</connectors>
</configuration>
</plugin>
</plugins>
<sourceDirectory>${basedir}/src/main/java</sourceDirectory>
<testSourceDirectory>${basedir}/src/test/java</testSourceDirectory>
<testResources>
<testResource>
<directory>${basedir}/src/test/resources</directory>
</testResource>
<testResource>
<directory>${basedir}/src/main/webapp/WEB-INF</directory>
</testResource>
</testResources>
<finalName>ROOT</finalName>
</build>
<profiles>
<profile>
<id>staging</id>
<build>
<resources>
<resource>
<directory>config/staging/resources</directory>
</resource>
<resource>
<directory>src/main/resources</directory>
</resource>
</resources>
</build>
</profile>
<profile>
<id>product</id>
<build>
<resources>
<resource>
<directory>config/product/resources</directory>
</resource>
<resource>
<directory>src/main/resources</directory>
</resource>
</resources>
</build>
</profile>
</profiles>
<dependencies>
<dependency>
<groupId>org.apache.tomcat</groupId>
<artifactId>servlet-api</artifactId>
<version>6.0.33</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.3</version>
</dependency>
</dependencies>
</project>

Link - Good Information

@ Service for Serverside
https://baas.io/

@Install Visual Studio Express 2012
http://ariy.kr/71

@Manager tool
https://trello.com/

@Prediction system
http://www.iaeng.org/publication/WCECS2008/WCECS2008_pp804-809.pdf

Monday, September 9, 2013

Link - Html5 and JavaScript

@Sample Game
http://www.gamedevacademy.org/create-a-html5-mario-style-platformer-game/

@Open Wysiwyg editor
http://www.openwebware.com

@Plug-in for javascript in Eclipse
http://www.aptana.com/products/studio3/download

@Can test the JavaScript on WEB
http://jsfiddle.net/b9ndZ/1/

@Pick up color as HTML CODE
http://html-color-codes.info/Korean/

@Tutorial
http://www.cadvance.org/?leftmenu=doc/include/total_menu.asp&mainpage=doc/java/tutorial/js_function.asp

Wednesday, August 14, 2013

Hbase - Testing MapReduce

@If you are facing the following error, you should change commons-io to version 2.1.
------------------------------------------------------------------------
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.io.FileUtils.isSymlink(Ljava/io/File;)Z
        at org.apache.hadoop.fs.FileUtil.getDU(FileUtil.java:456)
        at org.apache.hadoop.filecache.TrackerDistributedCacheManager.downloadCacheObject(TrackerDistributedCacheManager.java:463)
        at org.apache.hadoop.filecache.TrackerDistributedCacheManager.localizePublicCacheObject(TrackerDistributedCacheManager.java:475)
        at org.apache.hadoop.filecache.TrackerDistributedCacheManager.getLocalCache(TrackerDistributedCacheManager.java:191)
        at org.apache.hadoop.filecache.TaskDistributedCacheManager.setupCache(TaskDistributedCacheManager.java:182)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:124)
        at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:437)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:912)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:912)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:500)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
        at jp.ameba.hadoop.main.FreqCounter1.main(FreqCounter1.java:92)

Monday, August 12, 2013

Link - Hadoop, Hbase, Zookeeper

@All information about Hadoop
https://www.ibm.com/developerworks/data/library/techarticle/dm-1209hadoopbigdata/

http://www.ne.jp/asahi/hishidama/home/tech/apache/hbase/Filter.html#h_class

@Good install process
http://knight76.tistory.com/entry/hbase-Hbase-%EC%84%A4%EC%B9%98%ED%95%98%EA%B8%B0-Fully-Distributed-mode

@Hbase and Zookeeper
http://blog.naver.com/PostView.nhn?blogId=albertx&logNo=100187419333
http://promaster.tistory.com/82

@Hbase
http://engineering.vcnc.co.kr/2013/04/hbase-configuration/

@Hadoop - Good Install Information
http://blog.beany.co.kr/archives/1373#hdfs-sitexml
http://crazia.tistory.com/entry/%ED%95%98%EB%91%A1-%ED%95%98%EB%91%A1Hadoop-%EC%B4%88-%EA%B0%84%EB%8B%A8-%EC%84%A4%EC%B9%98-%EC%99%84%EC%A0%84-%EB%B6%84%EC%82%B0-Full-Distributed-%EB%B0%A9%EC%8B%9D

Hbase - Other filters on the Scan

// Hbase
BinaryComparator comparator = new BinaryComparator(Bytes.toBytes("key"));
Filter filter = new RowFilter(CompareOp.EQUAL, comparator);
// Mysql
[select * from aTable where ROW='key']

// Hbase
byte[] prefix = Bytes.toBytes("key");
Filter filter = new PrefixFilter(prefix);
// Mysql
[select * from aTable where ROW like 'key%']

// Hbase
byte[] stop = Bytes.toBytes("key");
Filter filter = new InclusiveStopFilter(stop);
// Mysql
[select * from aTable where ROW<='key']

Hbase - Page limit


@For paging like [SELECT * FROM aTable LIMIT 10]

// HbaseDao.java
    public ResultScanner resultScanner(String tableName, int intPages) throws Exception {

        // Get a object from Pool
        HTableInterface hTable =  htablePool.getTable(tableName);

        long pageSize = intPages;
        Filter filter = new PageFilter(pageSize);
        Scan s =new Scan();
        s.setFilter(filter);

        ResultScanner rs = hTable.getScanner(s);

        return rs;
    }

Monday, August 5, 2013

Link - How to do in Java

@How to configure a Netty 4 project using Spring 3.2+ and Maven
http://nerdronix.blogspot.jp/2013/06/netty-4-configuration-using-spring-maven.html

@HOW TO
http://www.kodejava.org/how-do-i-convert-inputstream-to-string/

@Like tail in Linux
http://blog.naver.com/PostView.nhn?blogId=jchem95&logNo=60008769821&redirect=Dlog&widgetTypeCall=true

@Jetty Document
http://www.eclipse.org/jetty/documentation/current/jetty-maven-plugin.html#get-up-and-running

@Jetty of Eclipse
http://wiki.eclipse.org/Jetty_WTP_Plugin/Jetty_WTP_Install

Spring - Quartz

public class JobDetails {
@Autowired
private HbaseService hbaseService;
public JobDetails() {
}
public void executeJob() {
try {
// Business Logic
} catch (Exception e) {
logger.error("Exception >> ", e);
}
}
}
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:mvc="http://www.springframework.org/schema/mvc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd">
<bean id="jobDetails" class="xx.xxx.xxxx.admin.quartz.JobDetails" />
<task:scheduler id="scheduler" pool-size="10"/> <!-- pool-size attribute optional -->
<task:scheduled-tasks scheduler="scheduler"> <!-- scheduled job list -->
<task:scheduled ref="jobDetails" method="executeJob" cron="0 * * * * ?"/>
<!-- Add more job here -->
</task:scheduled-tasks>
</beans>

Tuesday, July 23, 2013

Hbase - Important thing


# When you execute a client app on Tomcat, if you face the following error:
→Will not attempt to authenticate using SASL (unknown error)
# Add the hosts' names to the hosts file. This is a sample:
17x.2x.xxx.xx1   master01
17x.2x.xxx.xx2   slave02
17x.2x.xxx.xx3   slave03
17x.2x.xxx.xx4   slave04
# A client app (HBase) on Tomcat needs to resolve all of the HBase servers' host names.

Friday, July 19, 2013

Hadoop - Exclude a node on a running server

■dfs.hosts.exclude:
Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.

# Add below this to hdfs-site.xml
       <property>
              <name>dfs.hosts.exclude</name>
              <value>/home/hadoop/hadoop/conf/excludes</value>
      </property>

■mapred.hosts.exclude
Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded. # Add below this to mapred-site.xml
    <property>
        <name>mapred.hosts.exclude</name>
        <value>/home/hadoop/hadoop/conf/excludes</value>
    </property>

# Execute
$ bin/hadoop dfsadmin -refreshNodes

# Execute the balancer to rebalance the data
$ bin/hadoop balancer

Thursday, July 18, 2013

Hbase - Connection Pool


@org.springframework.context.annotation.Configuration
public class HBaseConfig {
    final Logger logger = LoggerFactory.getLogger(HBaseConfig.class);
    @Autowired
    private org.apache.commons.configuration.Configuration configuration;
    @Bean
    public Configuration defaultHBaseConfig() throws IOException {
        Configuration conf = null;
        try {
            conf = HBaseConfiguration.create();
            conf.set("hbase.master", configuration.getString("hbase.master"));
            conf.set("hbase.zookeeper.quorum", configuration.getString("hbase.zookeeper.quorum"));
            conf.set("hbase.zookeeper.property.clientPort", configuration.getString("hbase.zookeeper.property.clientPort"));
        } catch (Exception ex) {
            logger.error("Exception", ex);
        }
        return conf;
    }
    @Bean
    public HTablePool defaultHTablePool() throws IOException {
        HTablePool tablePool = new HTablePool(defaultHBaseConfig(), 30);
        return tablePool;
    }
}
-------------------------------------------------------------------------
# You need to set ips in the hosts file.
    <hbase>
        <master>server1:6000</master>
        <zookeeper>
            <quorum>server1</quorum>
            <property>
                <clientPort>2181</clientPort>
            </property>
        </zookeeper>
    </hbase>

Flume - flume-env.sh

JAVA_HOME=/usr/local/java

FLUME_CLASSPATH="/home/hadoop/flume/lib"

Wednesday, July 17, 2013

Git - Push to remote And Delete a branch

# Push origin branch, not upstream
$ git push origin service_branch

@ Delete a local branch, on the master branch.
$ git branch -d branch_name

@ If you get an error like this, change to the master branch and then
@ run git branch -d branch_name at the prompt again. To force the deletion:
@---------------------------------------------------
@error: The branch 'branch_name' is not fully merged.
@If you are sure you want to delete it, run 'git branch -D branch_name'.
@---------------------------------------------------
$ git branch -D dev

@ Delete a remote branch
$ git push origin --delete dev

@ Make a branch in Local
$ git checkout -b branch_name

@ Make a branch in Remote
$ git push origin branch_name

@ List the branches that cannot be deleted because they are not merged
$ git branch --no-merged


Thursday, July 11, 2013

Troubleshooting - hadoop

#   ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000.
# You'd better check your hosts file
http://gh0stsp1der.tistory.com/66

# Hbase
#Unable to find region for 99999999999999 after 10 tries
http://nosql.rishabhagrawal.com/2013/04/hbase-orgapachehadoophbaseclientnoserve.html

Monday, July 8, 2013

Spring - Connecting Spring 3 to MyBatis with mybatis-spring

// A part of ServiceImpl.java
List<HadoopGameModel> hadoopGameList = slaveAdminDao.getMapper(SlaveDao.class).selectGameList(mapSelectGameList);


<bean id="masterAdminDao" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionMasterFactory" />
</bean>
<!--
<bean id="slaveAdminDao" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionSlaveFactory" />
</bean>
-->
<bean id="slaveDao" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionSlaveFactory" />
</bean>
<!-- Master DB-->
<bean id="master" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://${my.master.admin}:3306/database_name?useUnicode=true&amp;characterEncoding=utf8&amp;autoReconnect=true" />
<property name="username" value="${mysql.id}" />
<property name="password" value="${mysql.pwd}" />
<!-- Pool Setting -->
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="maxWait" value="10000" />
<property name="poolPreparedStatements" value="true"/>
<!-- Delete when it will release to real searvice -->
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="7200000"/>
</bean>
<!-- Slave DB 00 -->
<bean id="slave00" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://${my.slave.admin.00}:3306/dual_hadoop?useUnicode=true&amp;characterEncoding=utf8&amp;autoReconnect=true" />
<property name="username" value="${mysql.id}" />
<property name="password" value="${mysql.pwd}" />
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="maxWait" value="10000" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="7200000"/>
</bean>
<!-- Master-->
<bean id="sqlSessionMasterFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="master" />
<property name="configLocation" value="classpath:masterAdminMap.xml"/>
<!-- <property name="mapperLocations" value="classpath:sqlMap/masterAdminSql.xml" /> -->
</bean>
<!-- Slave-->
<bean id="sqlSessionSlaveFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="sg2Slave00" />
<property name="configLocation" value="classpath:slaveAdminMap.xml"/>
<!-- <property name="mapperLocations" value="classpath:sqlMap/slaveAdminSql.xml" /> -->
</bean>
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
"HTTP://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
<!-- These settings control SqlMapClient configuration details, primarily to do with transaction
management. They are all optional (more detail later in this document). -->
<settings>
<setting name="defaultStatementTimeout" value="5000" />
</settings>
<mappers>
<mapper resource="sqlMap/slaveSql.xml" />
<!-- <mapper resource="sqlMap/slaveAdminSql.xml" /> -->
</mappers>
</configuration>
public interface SlaveDao {
public HadoopGameModel selectGame(Map<String, Object> map) throws SQLException;
public List<HadoopGameModel> selectGameList(Map<String, Object> map) throws SQLException;
}
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="jp.hadoop.admin.dao.SlaveDao">
<select id="selectGame" parameterType="map"
resultType="jp.hadoop.admin.bean.model.HadoopGameModel">
SELECT
game_id AS gameId,
game_domain AS gameDomain,
game_title AS gameTitle,
game_explain AS gameExplain,
game_file AS gameFile,
game_status_flag AS gameStatusFlag,
insert_time AS insertTime,
update_time AS updateTime
FROM
dual_game
WHERE
game_id = #{gameId}
</select>
<select id="selectGameList" parameterType="map"
resultType="jp.hadoop.admin.bean.model.HadoopGameModel">
SELECT
game_id AS gameId,
game_domain AS gameDomain,
game_title AS gameTitle,
game_explain AS gameExplain,
game_file AS gameFile,
game_status_flag AS gameStatusFlag,
insert_time AS insertTime,
update_time AS updateTime
FROM
dual_game
</select>
</mapper>

Git - Let's Fork the project

$ #Fork the "project-name" repository
$ git clone git@github.com:njoon/project-name.git
$ cd project-name/
$ git remote add upstream git@github.com:organizations/project-name.git
$ git fetch upstream

@If you want to remove the upstream
$ git remote remove upstream
----------------------------------------------------------------------------------------------

# If you want to merge from original branch(not forked master)
$ git fetch upstream

# To merge its changes into our local branch.
$ git branch -va
$ git checkout master
$ git merge upstream/master
# And you'd better use the Pull Request.


https://help.github.com/articles/syncing-a-fork


Friday, June 28, 2013

Flume - This is the flume.conf on a service server in Flume NG 1.3.1

# To collect logs and send them to another source

agent1.channels = ch1
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 100000
agent1.channels.ch1.transactionCapacity = 1000

# Define an Exec Source called exec1
agent1.sources = exec1
agent1.sources.exec1.type = exec
agent1.sources.exec1.command = tail -F /usr/local/tomcat/logs/api/api.log
agent1.sources.exec1.interceptors = ts
agent1.sources.exec1.interceptors.ts.type = timestamp
agent1.sources.exec1.channels = ch1

# properties of hdfs-Cluster1-sink
agent1.sinks = avro-sink1
agent1.sinks.avro-sink1.type = avro
agent1.sinks.avro-sink1.channel = ch1
agent1.sinks.avro-sink1.hostname = 1xx.xxx.111.01
agent1.sinks.avro-sink1.port = 41414

Flume - This is the flume.conf in Flume NG 1.3.1

# To save log data to HDFS.

agent1.channels = ch1
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 100000
agent1.channels.ch1.transactionCapacity = 1000

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to xxx.xxx.xxx.xxx:41414. Connect it to channel ch1.
agent1.sources = avro-source1
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 1xx.xxx.111.01
agent1.sources.avro-source1.port = 41414

agent1.sinks = hdfs-sink1
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://xxx.xxx.xxx.xxx:9000/home/hadoop/data/flume/%Y%m%d/%H
agent1.sinks.hdfs-sink1.hdfs.filePrefix = ch1
agent1.sinks.hdfs-sink1.hdfs.inUseSuffix = .txt
agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1.hdfs.rollInterval = 1200
agent1.sinks.hdfs-sink1.hdfs.writeFormat = text
agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
agent1.sinks.hdfs-sink1.hdfs.rollCount = 1000000
agent1.sinks.hdfs-sink1.hdfs.batchSize = 10
agent1.sinks.hdfs-sink1.hdfs.threadsPoolSize=10



----------------------------------------------------
http://www.nextree.co.kr/p2704/

Tuesday, June 25, 2013

Install - Memcached

1. libevent

wget http://www.monkey.org/~provos/libevent-1.3a.tar.gz

tar xvfz libevent-1.3a.tar.gz
cd libevent-1.3a
./configure --prefix=/usr/local/libevent
make
make install



2. memcached

wget http://www.danga.com/memcached/dist/memcached-1.2.1.tar.gz

tar xvfz memcached-1.2.1.tar.gz

cd memcached-1.2.1

./configure --prefix=/usr/local/memcached-1.2.5 --with-libevent=/usr/local/libevent
make
make install

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/libevent/lib
echo "/usr/local/lib" >>/etc/ld.so.conf
echo "/usr/local/libevent/lib" >>/etc/ld.so.conf
/sbin/ldconfig

memcached -d -m 1024 -l 127.0.0.1 -p 11211 -u root


# to replication
./configure --enable-replication --prefix=/usr/local/memcached-1.2.8 --with-libevent=/usr/local/libevent

MyStory - I went to Tokyo to work in Japan.

I worked as a programmer in Korea, but the IT bubble had burst.
I was 29 years old and my job was gone, so I was concerned about my future.
At that time, I thought, "Maybe I should move to another company?
Or should I start a business that provides some online Internet service?"
I hesitated to make a decision.
At home, I studied computer programming and English. (This was in 2002.)
But it was difficult to make steady progress,
because of my nephew.
While I was studying, he would interrupt me.
For example, he would cry in front of me and loudly knock on the door.
So my parents and I had some problems at home.
I wanted to move out of the house, but I couldn't afford to live on my own.
The reason was that I didn't have enough money to rent a house.
By the way, I was working a part-time job (as a part-time lecturer at a college).
While I was working, I saw a web page from a government educational institution recruiting software engineers to work abroad, and I decided to apply.
I needed some money, and I had to take classes in Java and Japanese.
But I didn't care about any of that and applied anyway.
For 10 months, I studied advanced Java, Oracle and Japanese.
It was particularly difficult to study Japanese.
Honestly, I wondered, "Can I do this?"
Sometimes I felt frustrated and despaired.
I just went on, although I was very tired of studying.
One classroom had about 25 students.
I took eight exams, but I failed half of them.
Still, I didn't get discouraged, and I tried again.
In the end, I didn't pass the exam.
However, I passed the interview for a job at the company.
Therefore, I would be going to Japan to work.
Since it was my first time going abroad,
I was very nervous.
I was afraid that an accident might happen while I was in Japan.
So I wasn't at all calm. I took an airplane in February 2003.
At last, I arrived at my destination, Narita Airport.

Monday, June 24, 2013

Mysql - Install mysql5.5

cmake install - manual
--------------
./bootstrap
make
make install
--------------

mysql5.5 install
-------------
$ yum groupinstall "Development Tools"
$ yum install ncurses-devel
$ yum install cmake

$ cmake . \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
-DMYSQL_DATADIR=/usr/local/mysql/data \
-DSYSCONFDIR=/etc \
-DWITH_ARCHIVE_STORAGE_ENGINE=1 \
-DWITH_BLACKHOLE_STORAGE_ENGINE=1 \
-DWITH_FEDERATED_STORAGE_ENGINE=1 \
-DWITH_PARTITION_STORAGE_ENGINE=1 \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DENABLED_LOCAL_INFILE=1 \
-DENABLED_PROFILING=1 \
-DMYSQL_TCP_PORT=3306 \
-DMYSQL_UNIX_ADDR=/tmp/mysql.sock \
-DWITH_DEBUG=1 \
-DWITH_EMBEDDED_SERVER=1;
$ make
$ make install

$ useradd mysql

$ cd /usr/local/src/mysql-5.5.38

$ chmod 755 scripts/mysql_install_db
$ scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data

@ Start and Stop
$ cp /usr/local/mysql/support-files/mysql.server /etc/init.d/
@ Configuration
$ cp /usr/local/mysql/support-files/my-medium.cnf /etc/my.cnf
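@ Start the server with the init script copied above.
$ /etc/init.d/mysql.server start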

@ For Mac
http://hoyanet.pe.kr/1942

Hadoop - Searching for something

/**
 * Search the file system for something in Hadoop.
 * @param path the path in the Hadoop directory, for example [/home/hadoop/data/flume]
 * @param searchWord key words, for example [test1 test2]
 * @return the search result
 */
public String readLogsPlural(String path, String searchWord) throws Exception {
    String findStr;
    StringBuffer sb = new StringBuffer();
    String[] multiWord = searchWord.split(" ");
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://172.xx.xxx.xxx:9000");
    FileSystem dfs = FileSystem.get(conf);
    // You need to pass in your hdfs path
    FileStatus[] status = dfs.listStatus(new Path(path));
    for (int i = 0; i < status.length; i++) {
        FSDataInputStream fsIn = dfs.open(status[i].getPath());
        BufferedReader br = new BufferedReader(new InputStreamReader(fsIn));
        String line;
        while ((line = br.readLine()) != null) {
            // Append a line only when it matches every search word.
            for (int n = 0; n < multiWord.length; n++) {
                logger.info("multiWord[" + n + "] >>> " + multiWord[n]);
                findStr = ".*" + multiWord[n] + ".*";
                if (line.matches(findStr)) {
                    if ((multiWord.length - 1) == n) {
                        logger.info("last n >>> " + n);
                        sb.append(line);
                    }
                } else {
                    break;
                }
            }
        }
    }
    return sb.toString();
}

Monday, June 17, 2013

Link - Collection

@Iphone Emoji
http://www.easyapns.com/category/just-for-fun

@Best Site(How to install)
http://xmodulo.com/

@UML
http://www.objectaid.com/home

@Data Compression
https://code.google.com/p/snappy/

@Monitor
https://github.com/Netflix/servo/

@Arrange Json
http://jsonformatter.curiousconcept.com/

@Using sequence in Mysql
http://bryan7.tistory.com/101

@C++ Tutorial
http://www.soen.kr/

@Mysql with cash
https://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL/blob/master/docs-en/installation.en.txt

@Oracle Function List
http://jhbench.tistory.com/27

@Java Sample
http://kodejava.org/

Tuesday, June 4, 2013

Hadoop - Remove node

@1. Get the IP or host list by running the "report" command
$ $HADOOP_HOME/bin/hadoop dfsadmin -report | grep Name

@2. Include IP:Port in the following file.
@$HADOOP_HOME/conf/excludes
00.xx.xxx.001:50010

@3. invoke command:
$ $HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes

@4. Verification
$ $HADOOP_HOME/bin/hadoop dfsadmin -report | grep -Eiw 'Name|Decommission'

@5. This time for MapReduce
@If there is an exclude file, you can run this:
$ $HADOOP_HOME/bin/hadoop mradmin -refreshNodes

http://pearlin.info/2012/04/best-way-to-blacklist-node-from-live-hadoop-cluster/

Linux - command

@Register a user into a group
# /usr/sbin/usermod -g groupname username
$ /usr/sbin/usermod -g hadoop hadoop

@Find files that contain a string
find . -exec grep -l "Contents of directory" {} \; 2>/dev/null

@Find files
find . -name "*Status*" -print

@Make ssh keys
ssh-keygen -t dsa -> Makes a DSA key
ssh-keygen -t rsa -> Makes an RSA key

Monday, June 3, 2013

Logback - Setting for Spring3.1.4

    <properties>
        <org.slf4j.version>1.7.5</org.slf4j.version>
        <org.logback.version>1.0.13</org.logback.version>
    </properties>

        <!-- Logging -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${org.slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>${org.slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-core</artifactId>
            <version>${org.logback.version}</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${org.logback.version}</version>
        </dependency>


#logback.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

  <appender name="HADOOP_FLUME" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${user.dir}/logs/flumeAdmin.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- daily rollover -->
        <fileNamePattern>${user.dir}/logs/flumeAdmin.log.%d{yyyy-MM-dd}.log.zip</fileNamePattern>
        <!-- keep 90 days' worth of history -->
        <maxHistory>90</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
        <charset>UTF-8</charset>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n</pattern>
        </layout>
    </encoder>

  </appender>

  <root>
      <level value="info" />
    <appender-ref ref="HADOOP_FLUME" />
  </root>

</configuration>

Wednesday, May 29, 2013

Hadoop - commands

@Find files you want to see.
hadoop dfs -lsr /hadoop/flume/ | grep [search_term]

@Hadoop error (leave safe mode)
$ ./bin/hadoop dfsadmin -safemode leave

Hadoop - Get the content of a file from Hadoop (sample)

public class TestMain {
    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://xxx.28.xxx.51:9000");
        FileSystem dfs = FileSystem.get(conf);

        Path filenamePath = new Path("/home/hadoop/data/flume/20130529/12/ch1.1369796403013");
        FSDataInputStream fsIn = dfs.open(filenamePath);

        // org.apache.commons.io.IOUtils reads the whole stream into the byte array
        byte[] fileBytes = IOUtils.toByteArray(fsIn);

        //create string from byte array
        String strFileContent = new String(fileBytes);
        System.out.println(strFileContent);
    }
}

Tuesday, May 28, 2013

Hadoop Manager URL

I will organize the link below later.

http://confluence.openflamingo.org/pages/viewpage.action?pageId=5537913&focusedCommentId=7209528&#comment-7209528

Monday, May 27, 2013

Flume - Shell script for starting Flume NG 1.3.1

# If you hit an error about missing LSB init functions, install the package below.
sudo yum install redhat-lsb.x86_64

# This script should be run as the hadoop user.
$ sudo su - hadoop


#!/bin/bash
. /lib/lsb/init-functions
[ -e /etc/sysconfig/flume ] && . /etc/sysconfig/flume

STATUS_RUNNING=0
STATUS_DEAD=1
STATUS_DEAD_AND_LOCK=2
STATUS_NOT_RUNNING=3
ERROR_PROGRAM_NOT_INSTALLED=5

FLUME_LOG_DIR=/home/hadoop/flume/logs
FLUME_CONF_DIR=/home/hadoop/flume/conf
FLUME_RUN_DIR=/home/hadoop/var/run/flume
FLUME_HOME=/home/hadoop/flume
FLUME_USER=hadoop
FLUME_LOCK_DIR="/home/hadoop/var/lock/subsys/"
LOCKFILE="${FLUME_LOCK_DIR}/flume-ng-agent"
desc="Flume NG agent daemon"
#FLUME_CONF_FILE=${FLUME_CONF_FILE:-${FLUME_CONF_DIR}/flume.conf}
FLUME_CONF_FILE=${FLUME_CONF_DIR}/flume.conf
EXEC_PATH=${FLUME_HOME}/bin/flume-ng
FLUME_PID_FILE=${FLUME_RUN_DIR}/flume-ng-agent.pid

# These directories may be tmpfs and may or may not exist
# depending on the OS (ex: /var/lock/subsys does not exist on debian/ubuntu)
for dir in "$FLUME_RUN_DIR" "$FLUME_LOCK_DIR"; do
    [ -d "${dir}" ] || install -d -m 0755 -o $FLUME_USER -g $FLUME_USER ${dir}
done

#DEFAULT_FLUME_AGENT_NAME="agent1"
#FLUME_AGENT_NAME=${FLUME_AGENT_NAME:-${DEFAULT_FLUME_AGENT_NAME}}
FLUME_AGENT_NAME="agent1"

start() {
    # The original tested an undefined $exec; the binary we launch is $EXEC_PATH.
    [ -x "$EXEC_PATH" ] || exit $ERROR_PROGRAM_NOT_INSTALLED
    pidofproc -p $FLUME_PID_FILE java > /dev/null
    status=$?
    if [ "$status" -eq "$STATUS_RUNNING" ]; then
        exit 0
    fi
    log_success_msg "Starting $desc (flume-ng-agent): "
    # /bin/su -s /bin/bash -c "/bin/bash -c 'echo \$\$ >${FLUME_PID_FILE} && exec ${EXEC_PATH} agent --no-reload-conf --conf $FLUME_CONF_DIR --conf-file $FLUME_CONF_FILE --name $FLUME_AGENT_NAME >>${FLUME_LOG_DIR}/flume.init.log 2>&1' &" $FLUME_USER
    /bin/bash -c "/bin/bash -c 'echo \$\$ >${FLUME_PID_FILE} && exec ${EXEC_PATH} agent --no-reload-conf --conf $FLUME_CONF_DIR --conf-file $FLUME_CONF_FILE --name $FLUME_AGENT_NAME >>${FLUME_LOG_DIR}/flume.init.log 2>&1' &"
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch $LOCKFILE
    return $RETVAL
}

stop() {
    if [ ! -e $FLUME_PID_FILE ]; then
        log_failure_msg "Flume agent is not running"
        exit 0
    fi
    FLUME_PID=`cat $FLUME_PID_FILE`
    if [ -n "$FLUME_PID" ]; then
        kill -TERM ${FLUME_PID} &>/dev/null
        status=0
        # Wait until the process has actually exited.
        while [ $status -eq 0 ]; do
            sleep 1
            ps -p $FLUME_PID &> /dev/null
            status=$?
        done
    fi
    rm -f $LOCKFILE $FLUME_PID_FILE
    log_success_msg "Stopping $desc (flume-ng-agent): "
    return 0
}

checkstatus() {
    pidofproc -p $FLUME_PID_FILE java > /dev/null
    status=$?
    case "$status" in
        $STATUS_RUNNING)
            log_success_msg "Flume NG agent is running"
            ;;
        $STATUS_DEAD)
            log_failure_msg "Flume NG agent is dead and pid file exists"
            ;;
        $STATUS_DEAD_AND_LOCK)
            log_failure_msg "Flume NG agent is dead and lock file exists"
            ;;
        $STATUS_NOT_RUNNING)
            log_failure_msg "Flume NG agent is not running"
            ;;
        *)
            log_failure_msg "Flume NG agent status is unknown"
            ;;
    esac
    return $status
}

restart() {
    # restart was called below but never defined in the original script.
    stop
    start
}

condrestart() {
    [ -e ${LOCKFILE} ] && restart || :
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        checkstatus
        ;;
    restart)
        restart
        ;;
    condrestart|try-restart)
        condrestart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|try-restart|condrestart}"
        exit 1
esac

exit $RETVAL
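
@Usage (assuming the script is saved as flume.sh and made executable)
$ chmod +x flume.sh
$ ./flume.sh start
$ ./flume.sh status
$ ./flume.sh stop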

Shell - init-functions

@If you want /lib/lsb/init-functions on your Linux machine,
@just install the package below.
$ yum install redhat-lsb
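
@You can confirm
$ ls /lib/lsb/init-functions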

@=======================================
#!/bin/sh

# LSB initscript functions, as defined in the LSB Spec 1.1.0
#
# Lawrence Lim <llim@redhat.com> - Tue, 26 June 2007
# Updated to the latest LSB 3.1 spec
# http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic_lines.txt

start_daemon () {
        /etc/redhat-lsb/lsb_start_daemon "$@"
}

killproc () {
        /etc/redhat-lsb/lsb_killproc "$@"
}

pidofproc () {
        /etc/redhat-lsb/lsb_pidofproc "$@"
}

log_success_msg () {
        /etc/redhat-lsb/lsb_log_message success "$@"
}

log_failure_msg () {
        /etc/redhat-lsb/lsb_log_message failure "$@"
}

log_warning_msg () {
        /etc/redhat-lsb/lsb_log_message warning "$@"
}

Friday, May 24, 2013

Shell - init.d Script

I'll edit this post later.

http://werxltd.com/wp/2012/01/05/simple-init-d-script-template/

Thursday, May 23, 2013

Flume - Ganglia install for Flume NG 1.3.1

$ yum install apr apr-devel
$ yum install rrdtool rrdtool-devel
$ yum install libconfuse libconfuse-devel
$ yum install pcre pcre-devel
$ yum install expat expat-devel
$ yum install zlib zlib-devel

@ Install libconfuse
$ ./configure --with-pic
$ make
$ make install

$ mkdir -p /home/hadoop/ganglia/rrd/
$ chown nobody.nobody /home/hadoop/ganglia/rrd/
$ cd ./ganglia-3.6.0
$ ./configure --with-librrd=/home/hadoop/ganglia/rrd/ --with-gmetad --prefix=/usr/local/
$ make
$ make install

@You can confirm
$ ls /usr/local/bin/gstat
$ ls /usr/local/bin/gmetric
$ ls /usr/local/sbin/gmond
$ ls /usr/local/sbin/gmetad

@[.] below refers to the directory where Ganglia was compiled
@ Register as a service
$ cp ./gmond/gmond.init /etc/rc.d/init.d/gmond
$ chkconfig --add gmond
$ chkconfig --list gmond
$ vi /etc/rc.d/init.d/gmond
--> Edit -> GMOND=/usr/local/sbin/gmond

$ cp ./gmetad/gmetad.init /etc/rc.d/init.d/gmetad
$ chkconfig --add gmetad
$ chkconfig --list gmetad
$ vi /etc/rc.d/init.d/gmetad
--> Edit -> GMETAD=/usr/local/sbin/gmetad

@ Copy conf
$ /usr/local/sbin/gmond --default_config > /usr/local/etc/gmond.conf

@ Set the rrd tool directory
$ vi /usr/local/etc/gmetad.conf
 -> rrd_rootdir "/home/hadoop/ganglia/rrd"


@ Start
# /etc/rc.d/init.d/gmond start
# /etc/rc.d/init.d/gmetad start

@ Confirm the gmond process
# telnet localhost 8649
--> Outputs XML
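
@ You can check gmetad the same way; its XML port defaults to 8651 (assuming the stock gmetad.conf ports)
# telnet localhost 8651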

http://blog.daum.net/_blog/BlogTypeView.do?blogid=0N9yp&articleno=25&_bloghome_menu=recenttext#ajax_history_home

http://apexserver.iptime.org/users/yk.choi/weblog/7eca7/

http://ahmadchaudary.wordpress.com/tag/ganglia-monitoring/

Git - Add a tag in order

1.Edit pom.xml
 -e.g. change 1.0-SNAPSHOT to 1.1

2.Add to the index
 $ git add *

3.Commit
 $ git commit -m "Tag v1.1"

4.Add a tag
 $ git tag v1.1

5.Push to the remote server
  @ When I ran the following command against GitHub,
       it didn't ask me for an ID and password
       (master = tag version)

 $ git push origin v1.1

6.Return to development
 $ git fetch origin
 $ git reset --hard origin/master
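
If you prefer annotated tags (they record the tagger and a message), steps 4 and 5 work the same way; a sketch using the same v1.1:
 $ git tag -a v1.1 -m "Tag v1.1"
 $ git push origin v1.1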

Tuesday, May 21, 2013

How to use Flume NG

bin/flume-ng agent --conf-file conf/flume.conf --name agent1 -Dflume.monitoring.type=GANGLIA -Dflume.monitoring.hosts=172.xx.xxx.xx:5455
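
Flume NG can also expose the same counters as JSON over HTTP instead of Ganglia; a sketch, assuming port 34545 is free:

bin/flume-ng agent --conf-file conf/flume.conf --name agent1 -Dflume.monitoring.type=http -Dflume.monitoring.port=34545

@Then fetch the metrics
$ curl http://localhost:34545/metrics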

Monday, May 20, 2013

Hive - join

@Join

SELECT a.uu_id, a.time as registerTime, a.activity as register, b.activity as buy
FROM (select * from tableA where dt="2013-05-17" and activity="register") a
JOIN (select * from tableB where dt="2013-05-17" and activity="buy") b
ON(a.uu_id = b.uu_id) limit 10;
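
To run the query from a file instead of the CLI prompt (assuming you saved it as JoinHive.sql):
$ hive -f JoinHive.sql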

Friday, May 17, 2013

Flume - Install Flume NG 1.3.1

■What's Changed?
  • There's no more logical or physical nodes. We call all physical nodes agents and agents can run zero or more sources and sinks.
  • There's no master and no ZooKeeper dependency anymore. At this time, Flume runs with a simple file-based configuration system.

■Install Flume NG 1.3.1
$ git clone https://git-wip-us.apache.org/repos/asf/flume.git flume
$ cd flume
$ git checkout trunk
OR
$ wget http://ftp.kddilabs.jp/infosystems/apache/flume/1.3.1/apache-flume-1.3.1-bin.tar.gz

■Configuration
$ cp conf/flume-conf.properties.template conf/flume.conf
$ cp conf/flume-env.sh.template conf/flume-env.sh

■Change the file (conf/flume.conf)
#=====================================================
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = logger

# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = log-sink1
#=====================================================

■Execute
$ bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent1
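
■Test
To check the agent end to end, you can send a file to the Avro source with the built-in avro-client (the file path is just a placeholder):
$ bin/flume-ng avro-client --conf ./conf/ -H localhost -p 41414 -F /path/to/somefile.log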

■Reference
https://cwiki.apache.org/FLUME/getting-started.html

Thursday, May 16, 2013

Linux - user commands

# Add a user
$ /usr/sbin/useradd -d /home/njoonk -m -g njoonk njoonk

# Set a password
$ /usr/bin/passwd njoonk

# Delete a user
$ userdel testuser # only the user
$ userdel -r testuser # the user together with the home directory

# Create a public key in Linux
$ ssh-keygen -t rsa
# Paste the public key into authorized_keys
$ vi authorized_keys
$ chmod 644 .ssh/authorized_keys
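
# If ssh-copy-id is available, it appends the key to the remote authorized_keys for you (remote-host is a placeholder)
$ ssh-copy-id -i ~/.ssh/id_rsa.pub njoonk@remote-host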

Wednesday, May 15, 2013

Java - GC

JAVA_OPTS="-server"
JAVA_OPTS="${JAVA_OPTS} -Xms1024m -Xmx1024m -Xmn768m -XX:SurvivorRatio=2 -XX:PermSize=64m -XX:MaxPermSize=256m"
JAVA_OPTS="${JAVA_OPTS} -XX:+PrintGCDetails -Xloggc:/usr/local/tomcat/logs/gc.log"

Reference URL
http://fly32.net/438

http://www.javaservice.com/~java/bbs/read.cgi?m=&b=weblogic&c=r_p&n=1221718848&p=6&s=t

Monday, May 13, 2013

Git - information

@Reset the local master to the remote state
git fetch origin
git reset --hard origin/master
 
@Download the new version from the remote (you need to run the fetch first)
git checkout HEAD

@Delete a tag on the remote
git push origin :tags/{tag name}

@Push to the remote
git push origin v1.5
git push origin --tags

@Delete a tag locally
git tag -d {tag name}

@Create a tag
git tag v1.0
@Upload a tag to the remote
git push --tags

@Show the commit id
git rev-parse [Tag Name]

@Delete Tag
git tag -d [Tag Name]

@Add, commit, and push
git add *
git commit -m "This is the first commit"
git push

@Download all branches from the remote
git fetch origin

@Set the upstream for master in the Git config
1.$ vim ./.git/config
2.[branch "master"]
        remote = origin
        merge = refs/heads/master
@It's a good idea to put the following into the config file
[alias]
        hist = log --pretty=format:'%h %ad | %s%d [%an]' --graph --date=short
[color]
        ui = true

How to get a thread dump in Linux

If a normal thread dump fails because the JVM is not responding, the -F flag forces one.

./jstack -l -F 22431 > /home/share/kim_joon/thread3.txt
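
To find the pid first, jps (bundled with the JDK) lists the running JVMs:
$ jps -lv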

Friday, May 10, 2013

Ruby

http://dimdim.tistory.com/56
http://www.jacopretorius.net/2012/01/ruby-map-collect-and-select.html
http://ruby-doc.org/core-2.0/Array.html

Thursday, May 9, 2013

Eclipse - c, c++

http://chanroid.tistory.com/6
http://paralaxer.com/cpp-vs-objective-c/

Wednesday, May 8, 2013