Tuesday, July 23, 2013

HBase - Important note


# When you run a client app on Tomcat and you face the following error,
→ Will not attempt to authenticate using SASL (unknown error)
# add the server host names to the hosts file. Here is a sample:
17x.2x.xxx.xx1   master01
17x.2x.xxx.xx2   slave02
17x.2x.xxx.xx3   slave03
17x.2x.xxx.xx4   slave04
# An HBase client app on Tomcat needs to resolve the host names of all the HBase/ZooKeeper servers, so every one of them must be listed in the hosts file.
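As a quick check, you can verify from the Tomcat machine that every quorum host name actually resolves. A minimal sketch (the host names are the sample entries above, not real ones):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {
    public static void main(String[] args) {
        // Sample host names from the hosts file above; replace with your own.
        String[] hosts = {"master01", "slave02", "slave03", "slave04"};
        for (String host : hosts) {
            try {
                InetAddress addr = InetAddress.getByName(host);
                System.out.println(host + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                // A name that fails here is what triggers the ZooKeeper/SASL connection error.
                System.out.println(host + " -> NOT RESOLVABLE");
            }
        }
    }
}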

Friday, July 19, 2013

Hadoop - Exclude a node from a running cluster

■dfs.hosts.exclude:
Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.

# Add the following to hdfs-site.xml
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/home/hadoop/hadoop/conf/excludes</value>
    </property>

■mapred.hosts.exclude
Names a file that contains the list of hosts that should be excluded by the jobtracker. If the value is empty, no hosts are excluded.

# Add the following to mapred-site.xml
    <property>
        <name>mapred.hosts.exclude</name>
        <value>/home/hadoop/hadoop/conf/excludes</value>
    </property>
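
# The excludes file itself is just a plain-text list of the hosts to decommission, one per line.
# A hypothetical example (the host names are placeholders):
$ cat /home/hadoop/hadoop/conf/excludes
slave03
slave04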

# Execute
$ bin/hadoop dfsadmin -refreshNodes
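
# Note: dfsadmin -refreshNodes only makes the namenode re-read the excludes file.
# Assuming Hadoop 1.x, the jobtracker has its own refresh command for mapred.hosts.exclude:
$ bin/hadoop mradmin -refreshNodes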

# Run the balancer to rebalance the data across the remaining nodes
$ bin/hadoop balancer
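
# The balancer also takes a threshold (maximum percentage deviation from the average
# disk usage, default 10); for a tighter balance, for example:
$ bin/hadoop balancer -threshold 5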

Thursday, July 18, 2013

HBase - Connection Pool


import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTablePool;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;

// Fully qualified to avoid clashing with the Hadoop and commons Configuration classes.
@org.springframework.context.annotation.Configuration
public class HBaseConfig {

    final Logger logger = LoggerFactory.getLogger(HBaseConfig.class);

    // Application settings (hbase.master, quorum, client port) loaded via commons-configuration.
    @Autowired
    private org.apache.commons.configuration.Configuration configuration;

    @Bean
    public Configuration defaultHBaseConfig() throws IOException {
        Configuration conf = null;
        try {
            conf = HBaseConfiguration.create();
            conf.set("hbase.master", configuration.getString("hbase.master"));
            conf.set("hbase.zookeeper.quorum", configuration.getString("hbase.zookeeper.quorum"));
            conf.set("hbase.zookeeper.property.clientPort", configuration.getString("hbase.zookeeper.property.clientPort"));
        } catch (Exception ex) {
            logger.error("Exception", ex);
        }
        return conf;
    }

    @Bean
    public HTablePool defaultHTablePool() throws IOException {
        // Pool up to 30 table references.
        return new HTablePool(defaultHBaseConfig(), 30);
    }
}
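
A minimal usage sketch of the pool bean (the table, column family, and qualifier names are placeholders; on HBase 0.94.x closing the table returns it to the pool, while older releases used tablePool.putTable()):

import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GameRepository {

    private final HTablePool tablePool;

    public GameRepository(HTablePool tablePool) {
        this.tablePool = tablePool;
    }

    public byte[] readValue(String rowKey) throws IOException {
        // Borrow a table from the pool; "game", "cf", "q" are placeholder names.
        HTableInterface table = tablePool.getTable("game");
        try {
            Result result = table.get(new Get(Bytes.toBytes(rowKey)));
            return result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        } finally {
            // Returns the table to the pool instead of really closing the connection.
            table.close();
        }
    }
}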
-------------------------------------------------------------------------
# The host names used below (e.g. server1) also need to be mapped to IPs in the hosts file.
    <hbase>
        <master>server1:6000</master>
        <zookeeper>
            <quorum>server1</quorum>
            <property>
                <clientPort>2181</clientPort>
            </property>
        </zookeeper>
    </hbase>
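
For reference, the org.apache.commons.configuration.Configuration bean that is @Autowired above could be defined roughly like this (a sketch; the file name hbase.xml and the class name AppConfig are assumptions):

import org.apache.commons.configuration.ConfigurationException;
import org.apache.commons.configuration.XMLConfiguration;
import org.springframework.context.annotation.Bean;

@org.springframework.context.annotation.Configuration
public class AppConfig {

    @Bean
    public org.apache.commons.configuration.Configuration configuration() throws ConfigurationException {
        // Loads the XML settings shown above. XMLConfiguration drops the file's root element
        // from the keys, so with the <hbase> block nested under that root the keys become
        // "hbase.master", "hbase.zookeeper.quorum" and "hbase.zookeeper.property.clientPort".
        return new XMLConfiguration("hbase.xml");   // hypothetical file name
    }
}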

Flume - flume-env.sh

JAVA_HOME=/usr/local/java

FLUME_CLASSPATH="/home/hadoop/flume/lib"

Wednesday, July 17, 2013

Git - Push to a remote and delete a branch

# Push the branch to origin, not to upstream
$ git push origin service_branch

@ Delete a local branch (run this from the master branch).
$ git branch -d branch_name

@ If you get the error below, switch to the master branch and run git branch -d branch_name again.
@ If you are sure the branch can be thrown away, force-delete it with -D:
@---------------------------------------------------
@error: The branch 'branch_name' is not fully merged.
@If you are sure you want to delete it, run 'git branch -D branch_name'.
@---------------------------------------------------
$ git branch -D dev

@ Delete a remote branch
$ git push origin --delete dev

@ Make a branch in Local
$ git checkout -b branch_name

@ Make a branch in Remote
$ git push origin branch_name

@ List the branches that are not fully merged; these are the ones -d refuses to delete
$ git branch --no-merged
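
@ Conversely, branches that are already merged (and safe to delete with -d) can be listed with:
$ git branch --merged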


Thursday, July 11, 2013

Troubleshooting - Hadoop

#   ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000.
# You'd better check your hosts file
http://gh0stsp1der.tistory.com/66
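# In that log the client is resolving "localhost", which usually means fs.default.name in
# core-site.xml points at localhost:9000 instead of the master's host name (or the master's
# name maps to 127.0.0.1 in the hosts file). A sketch of the fix in core-site.xml, using the
# sample host name from above:
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master01:9000</value>
    </property>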

# HBase
# Unable to find region for 99999999999999 after 10 tries
http://nosql.rishabhagrawal.com/2013/04/hbase-orgapachehadoophbaseclientnoserve.html

Monday, July 8, 2013

Spring - Connecting Spring 3 to MyBatis with mybatis-spring

// A part of ServiceImpl.java
List<HadoopGameModel> hadoopGameList = slaveAdminDao.getMapper(SlaveDao.class).selectGameList(mapSelectGameList);
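
In context, the fragment above might live in a service class like the sketch below (the class and method names other than SlaveDao and HadoopGameModel are assumptions; the injected SqlSessionTemplate is one of the beans from Dao.xml):

import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import jp.hadoop.admin.bean.model.HadoopGameModel;
import jp.hadoop.admin.dao.SlaveDao;

import org.mybatis.spring.SqlSessionTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class GameServiceImpl {   // hypothetical class name

    // SqlSessionTemplate bean wired from Dao.xml below.
    @Autowired
    private SqlSessionTemplate slaveAdminDao;

    public List<HadoopGameModel> getGameList() throws SQLException {
        Map<String, Object> mapSelectGameList = new HashMap<String, Object>();
        // getMapper() binds the SlaveDao interface to the statements in slaveSql.xml.
        return slaveAdminDao.getMapper(SlaveDao.class).selectGameList(mapSelectGameList);
    }
}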


<bean id="masterAdminDao" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionMasterFactory" />
</bean>
<!--
<bean id="slaveAdminDao" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionSlaveFactory" />
</bean>
-->
<bean id="slaveDao" class="org.mybatis.spring.SqlSessionTemplate">
<constructor-arg index="0" ref="sqlSessionSlaveFactory" />
</bean>
(Dao.xml)
<!-- Master DB-->
<bean id="master" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://${my.master.admin}:3306/database_name?useUnicode=true&amp;characterEncoding=utf8&amp;autoReconnect=true" />
<property name="username" value="${mysql.id}" />
<property name="password" value="${mysql.pwd}" />
<!-- Pool Setting -->
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="maxWait" value="10000" />
<property name="poolPreparedStatements" value="true"/>
<!-- Remove this before releasing to the production service -->
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="7200000"/>
</bean>
<!-- Slave DB 00 -->
<bean id="slave00" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://${my.slave.admin.00}:3306/dual_hadoop?useUnicode=true&amp;characterEncoding=utf8&amp;autoReconnect=true" />
<property name="username" value="${mysql.id}" />
<property name="password" value="${mysql.pwd}" />
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="maxWait" value="10000" />
<property name="validationQuery" value="select 1"/>
<property name="testWhileIdle" value="true"/>
<property name="timeBetweenEvictionRunsMillis" value="7200000"/>
</bean>
<!-- Master-->
<bean id="sqlSessionMasterFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="master" />
<property name="configLocation" value="classpath:masterAdminMap.xml"/>
<!-- <property name="mapperLocations" value="classpath:sqlMap/masterAdminSql.xml" /> -->
</bean>
<!-- Slave-->
<bean id="sqlSessionSlaveFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="sg2Slave00" />
<property name="configLocation" value="classpath:slaveAdminMap.xml"/>
<!-- <property name="mapperLocations" value="classpath:sqlMap/slaveAdminSql.xml" /> -->
</bean>
(Db.xml)
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
"HTTP://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
<!-- These settings control SqlMapClient configuration details, primarily to do with transaction
management. They are all optional (more detail later in this document). -->
<settings>
<setting name="defaultStatementTimeout" value="5000" />
</settings>
<mappers>
<mapper resource="sqlMap/slaveSql.xml" />
<!-- <mapper resource="sqlMap/slaveAdminSql.xml" /> -->
</mappers>
</configuration>
public interface SlaveDao {
public HadoopGameModel selectGame(Map<String, Object> map) throws SQLException;
public List<HadoopGameModel> selectGameList(Map<String, Object> map) throws SQLException;
}
(SlaveDao.java)
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="jp.hadoop.admin.dao.SlaveDao">
<select id="selectGame" parameterType="map"
resultType="jp.hadoop.admin.bean.model.HadoopGameModel">
SELECT
game_id AS gameId,
game_domain AS gameDomain,
game_title AS gameTitle,
game_explain AS gameExplain,
game_file AS gameFile,
game_status_flag AS gameStatusFlag,
insert_time AS insertTime,
update_time AS updateTime
FROM
dual_game
WHERE
game_id = #{gameId}
</select>
<select id="selectGameList" parameterType="map"
resultType="jp.hadoop.admin.bean.model.HadoopGameModel">
SELECT
game_id AS gameId,
game_domain AS gameDomain,
game_title AS gameTitle,
game_explain AS gameExplain,
game_file AS gameFile,
game_status_flag AS gameStatusFlag,
insert_time AS insertTime,
update_time AS updateTime
FROM
dual_game
</select>
</mapper>
(slaveSql.xml)

Git - Let's Fork the project

# Fork the "project-name" repository on GitHub, then clone your fork
$ git clone git@github.com:njoon/project-name.git
$ cd project-name/
$ git remote add upstream git@github.com:organizations/project-name.git
$ git fetch upstream

@ If you want to remove the upstream remote
$ git remote remove upstream
----------------------------------------------------------------------------------------------

# If you want to merge changes from the original repository (not your forked master)
$ git fetch upstream

# Then merge its changes into your local branch.
$ git branch -va
$ git checkout master
$ git merge upstream/master
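# (Optional) Push the synced master back to your fork on GitHub.
$ git push origin master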
# To contribute your changes back to the original repository, open a Pull Request.


https://help.github.com/articles/syncing-a-fork