How to move Hadoop from 2.6 to 2.7, and how to upgrade Python 2.6.1 to 2.6.7

Differences between Hadoop 1.x and Hadoop 2.x, and among 2.5, 2.6, and 2.7

Hadoop 2.0 refers to the Apache Hadoop 0.23.x and 2.x releases, or the CDH4 series. Its core consists of three systems: HDFS, MapReduce, and YARN. YARN is a resource-management system responsible for cluster resource management and scheduling, while MapReduce is an offline (batch) processing framework that runs on top of YARN; compared with the Ma... in Hadoop 1.0

Recommendation for upgrading Python 2.6 to 2.7: perform the upgrade as a non-root user.
The steps are as follows:

1. Download the Python 2.7.10 source package.
2. Open Modules/Setup and find the following line (around line 463):

#zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
Uncomment it so it reads:
zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
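The same edit can be scripted instead of done in an editor; a minimal sketch with GNU sed (shown against a temporary sample file — in the real source tree, point SETUP at Modules/Setup):

```shell
# Uncomment the zlib line in Modules/Setup without opening an editor.
# Demo uses a temporary copy; in the unpacked source set SETUP=Modules/Setup.
SETUP=$(mktemp)
printf '%s\n' '#zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz' > "$SETUP"
sed -i 's|^#zlib zlibmodule\.c|zlib zlibmodule.c|' "$SETUP"
grep '^zlib' "$SETUP"   # show that the line is now active
```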
3. Make the configure scripts executable:
chmod +x /app/hadoop/python/src/Python-2.7.10/configure

chmod +x /app/hadoop/python/src/Python-2.7.10/Modules/_ctypes/libffi/configure
chmod +x /app/hadoop/python/src/Python-2.7.10/Modules/zlib/configure

4. As root, install the build dependencies:

yum groupinstall "Development tools"
yum install MySQL-python zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel -y
yum install python-psycopg2 -y
yum install MySQL-python -y
yum install gcc python-devel -y
yum install python-memcached -y

5. Switch back to the normal user and run the classic three steps:
./configure --prefix=/app/hadoop/python/2.7
make
make install
Create a symbolic link:
ln -s /app/hadoop/python/2.7/bin/python2.7 /app/hadoop/python/2.7/bin/python

6. Update the environment variables in ~/.bash_profile:
export JAVA_HOME=/app/hadoop/java/jdk1.6.0_38
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/app/hadoop/hadoop/hadoop-2.5.2
export PATH=/usr/sbin:$PATH
export PYTHONPATH=/app/hadoop/python/2.7/bin
export PATH=$PYTHONPATH:$HADOOP_HOME/bin:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HOME/bin:$PATH
7. Run source ~/.bash_profile to apply the changes.
8. Download the pip installer and install it.

Rolling upgrade of a Hadoop HA cluster: nn1 is the active NameNode, nn2 is the standby; upgrade.sh is a batch-execution script.

First, download the Hadoop 2.7.2 source and build it, producing hadoop-2.7.2.tar.gz.

Install the new Hadoop version: distribute the new tarball from log-server and unpack it on every node:
cd /letv/setupHadoop
./upgrade.sh distribute cluster_nodes hadoop-2.7.2.tar.gz /letv/usr/local
./upgrade.sh common cluster_nodes "cd /letv/usr/local; tar -xzvf hadoop-2.7.2.tar.gz"

Replace everything under the new Hadoop's etc/hadoop with the configuration files from the same directory of the old installation:
./upgrade.sh common cluster_nodes "cd /letv/usr/local/hadoop-2.7.2/etc; rm -rf hadoop; cp -r /usr/local/hadoop/etc/hadoop /letv/usr/local/hadoop-2.7.2/etc"

I. Prepare Rolling Upgrade
1. Log in to nn2 as the hadoop user and run
hdfs dfsadmin -rollingUpgrade prepare

2. On nn2, run
hdfs dfsadmin -rollingUpgrade query
and wait until "Proceed with rolling upgrade" appears; if it does not, repeat the command.
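The query-until-ready wait in step 2 is easy to script. A sketch, assuming `hdfs` is on the PATH of the hadoop user on nn2 (the function name is mine, not a Hadoop tool):

```shell
# Poll the NameNode until it reports that the rollback image is ready.
wait_for_rolling_upgrade() {
    until hdfs dfsadmin -rollingUpgrade query 2>/dev/null \
            | grep -q 'Proceed with rolling upgrade'; do
        sleep 10    # re-query every 10s until the prepare step finishes
    done
    echo "rollback image ready"
}
# On nn2: wait_for_rolling_upgrade
```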

3. Once this completes, a banner appears at the top of the NameNode web UI (port 50070), indicating that the rollback image has been generated.

II. Upgrade the Active and Standby NameNodes and ZKFC
1. Stop the standby NameNode (nn2) and its ZKFC:
hadoop-daemon.sh stop namenode
hadoop-daemon.sh stop zkfc

2. Restart nn2 in rollingUpgrade mode.
Switch to root and repoint the hadoop symlink at the new release:
cd /usr/local
rm -rf hadoop
ln -s /letv/usr/local/hadoop-2.7.2 hadoop
chown -R hadoop:hadoop hadoop
Then restart the NameNode and ZKFC:
hadoop-daemon.sh start namenode -rollingUpgrade started
hadoop-daemon.sh start zkfc
Once started, nn2 comes up in the standby state.

3. Fail over so that nn1 becomes standby and nn2 becomes active (run on nn2):
hdfs haadmin -failover testnn1 testnn2

4. Repeat steps 1 and 2 on nn1.

5. Fail back so that nn1 and nn2 return to their original roles:
hdfs haadmin -failover testnn2 testnn1

III. Upgrade the JournalNodes
JournalNodes must be upgraded one at a time; never batch this step, or the cluster will go down.

1. Log in to one JournalNode. (In this cluster the ResourceManagers run on JournalNode hosts, so start with the host of the standby ResourceManager, then the active one; after that the order does not matter.)
ssh sdf-resourcemanager2

2. Stop the JournalNode service:
hadoop-daemon.sh stop journalnode
Also stop the ResourceManager (this host runs rm2, hence the extra step; skip it on JournalNodes with no rm process).

3. Install the new Hadoop version.
Switch to root and repoint the hadoop symlink at the new release:
cd /usr/local
rm -rf hadoop
ln -s /letv/usr/local/hadoop-2.7.2 hadoop
chown -R hadoop:hadoop hadoop

4. Start the new JournalNode (and the ResourceManager, where present):
hadoop-daemon.sh start journalnode
yarn-daemon.sh start resourcemanager

5. Repeat steps 1-4 on every JournalNode; skip the ResourceManager restart on hosts without an rm process.

Note: wait until a JournalNode has fully started before upgrading the next one. Check the logs, or compare the edit files under /data/hadoop/data2/journal_node/test-cluster/current and confirm that the restarted JournalNode is in sync with the same directory on the other JournalNodes. Only then continue.
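One way to automate the sync check: compare the newest finalized edits segment on the restarted JournalNode with a healthy peer. A sketch under two assumptions — passwordless ssh between nodes, and the journal directory quoted above; `latest_edits` and `check_sync` are illustrative helpers, not Hadoop commands:

```shell
JDIR=/data/hadoop/data2/journal_node/test-cluster/current

# Newest finalized edits_* segment on a host (in-progress segments excluded).
latest_edits() {    # $1 = journalnode hostname
    ssh "$1" "ls $JDIR" | grep '^edits_' | grep -v inprogress | sort | tail -n 1
}

# Safe to continue only when the restarted node matches a healthy peer.
check_sync() {      # $1 = restarted node, $2 = healthy peer
    [ "$(latest_edits "$1")" = "$(latest_edits "$2")" ] \
        && echo "in sync" || echo "NOT in sync - keep waiting"
}
```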

IV. Upgrade the DataNodes and NodeManagers
1. Pick any DataNode (this step can be batched rack by rack) and run:
hdfs dfsadmin -shutdownDatanode DATANODE_HOST:50020 upgrade
yarn-daemon.sh stop nodemanager
Afterwards the datanode and nodemanager processes are down.

2. Install the new Hadoop version.
Switch to root and repoint the hadoop symlink at the new release:
cd /usr/local
rm -rf hadoop
ln -s /letv/usr/local/hadoop-2.7.2 hadoop
chown -R hadoop:hadoop hadoop

3. Start the DataNode and NodeManager:
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager

4. Repeat steps 1-3 on every DataNode/NodeManager host.
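Steps 1-3 can be driven per host from a node list (for example, one file per rack). A sketch; `upgrade_worker` and the rack-file layout are illustrative, and the cluster above used its own upgrade.sh wrapper instead:

```shell
# Shut down one worker's datanode/nodemanager, swap the symlink, restart.
# The symlink swap assumes root (or sudo) on the worker, as in the steps above.
upgrade_worker() {   # $1 = worker hostname
    hdfs dfsadmin -shutdownDatanode "$1:50020" upgrade
    ssh "$1" "yarn-daemon.sh stop nodemanager;
        cd /usr/local && rm -rf hadoop &&
        ln -s /letv/usr/local/hadoop-2.7.2 hadoop &&
        chown -R hadoop:hadoop hadoop;
        hadoop-daemon.sh start datanode;
        yarn-daemon.sh start nodemanager"
}
# One rack at a time, e.g.:
#   while read -r host; do upgrade_worker "$host"; done < rack1_nodes.txt
```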

V. Finalize the Upgrade
Once the upgrade is confirmed good, run hdfs dfsadmin -rollingUpgrade finalize on nn1 and nn2 to finish. The banner on the NameNode's port-50070 page disappears. This step is irreversible: once run, the rollback fsimage becomes an ordinary fsimage and the cluster can no longer return to its pre-upgrade state.


