How to install Hadoop on Ubuntu


1. Create a Hadoop administrator account
Run the following command directly in a terminal:
sudo adduser hadoop
You will then be asked to set a password for the hadoop account. This command adds a standard account named hadoop; what we need is an administrator account.
You can grant the hadoop account administrator rights directly in the GUI: click the user icon at the top-right corner of the screen, choose "User Accounts" from the list that pops up, unlock the panel, and change the account type to Administrator.
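If you prefer to stay in the terminal, a minimal alternative (assuming Ubuntu's standard sudo group; not part of the original post) is to add the account to the sudo group:

sudo adduser hadoop sudo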

2. Install the SSH service
SSH provides remote login and management; consult Google/Baidu for the details.
Ubuntu does not install an SSH server by default; to connect to Ubuntu over SSH you have to install it yourself:
sudo apt-get install ssh openssh-server
3. Passwordless SSH login
Create an SSH key; here we use RSA:
ssh-keygen -t rsa -P ""
A block of ASCII art (the key's randomart image) is printed; you can ignore it.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
You can now log in without being asked for a password:
ssh localhost
To exit the session:
exit
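If ssh localhost still prompts for a password, a common cause (my note, not the original author's) is that sshd rejects keys with loose file permissions; tightening them usually helps:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys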
4. Unpack the Hadoop release tarball
In a terminal, go to the directory containing the Hadoop tarball and copy it to /home/hadoop:
cp hadoop-1.2.1.tar.gz /home/hadoop
Then unpack it:
tar -xzvf hadoop-1.2.1.tar.gz

5. Configure hadoop-env.sh, core-site.xml, mapred-site.xml and hdfs-site.xml under hadoop/conf
Edit hadoop-1.2.1/conf/hadoop-env.sh:
gedit /home/hadoop/hadoop-1.2.1/conf/hadoop-env.sh
Press Ctrl+F and search for JAVA_HOME.
Remove the leading # from that line, set it to this system's JDK path, then save and exit.
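For example, after editing, the line might read as follows (the OpenJDK path is an assumption; substitute your system's JDK location):

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64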
Edit hadoop-1.2.1/conf/core-site.xml:
gedit /home/hadoop/hadoop-1.2.1/conf/core-site.xml
First create a hadoop_tmp directory under the Hadoop install directory.
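A one-line sketch of that, with the path chosen to match the hadoop.tmp.dir value set below:

mkdir -p /home/hadoop/hadoop-1.2.1/hadoop_tmp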
Add the following between the <configuration> </configuration> tags (the full file is shown below), then save and exit:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-1.2.1/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>

</configuration>
Edit hadoop-1.2.1/conf/mapred-site.xml:
gedit /home/hadoop/hadoop-1.2.1/conf/mapred-site.xml
Add the following between the <configuration> </configuration> tags (the full file is shown below), then save and exit:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
Edit hadoop-1.2.1/conf/hdfs-site.xml:
gedit /home/hadoop/hadoop-1.2.1/conf/hdfs-site.xml
Add the following between the <configuration> </configuration> tags (the full file is shown below), then save and exit:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

</configuration>
At this point the Hadoop installation and configuration is complete; what follows is running Hadoop for the first time.
6. Format the HDFS filesystem
Change into the hadoop-1.2.1 directory.
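The path below assumes the tarball was unpacked under /home/hadoop as in step 4:

cd /home/hadoop/hadoop-1.2.1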
./bin/hadoop namenode -format

7. Start the Hadoop services
./bin/start-all.sh
Once startup finishes, check the running Java processes:
jps
jps lists the Java virtual machine processes running on the machine.
Not counting Jps itself, you should see five Hadoop-related processes (NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker). Congratulations: Hadoop is installed and configured successfully and running normally.
You can then shut Hadoop down, start it again the next time you need it, and import your data.
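To shut it down again later, the matching Hadoop 1.x stop script (run from the same directory) is:

./bin/stop-all.sh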

The installation steps on the official Hadoop website are too sketchy. In this post I will describe in detail how to install Hadoop on Ubuntu and handle some problems that may come up. The approach described here simulates multiple nodes on a single machine, and it has been tested in the following environment:
OS: Ubuntu 13.10
Hadoop: 2.2.0 (2.x.x)
I believe the procedure for installing Hadoop 2.x.x is essentially the same on other versions, so if you follow the steps below exactly you should not run into problems.

Prerequisites
Install the JDK and OpenSSH:
$ sudo apt-get install openjdk-7-jdk
$ java -version
java version "1.7.0_55"
OpenJDK Runtime Environment (IcedTea 2.4.7) (7u55-2.4.7-1ubuntu1~0.13.10.1)
OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)
$ sudo apt-get install openssh-server
The default OpenJDK path is /usr/lib/jvm/java-7-openjdk-amd64. If your default path differs from mine, substitute yours in the steps that follow.
Add a Hadoop group and user
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo
Then switch to the hduser account.
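One way to do the switch; logging out and back in as hduser also works:

$ su - hduser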
Configure SSH
You are now in the hduser account. Note that the '' in the command below is two single quotes, not a double quote.
$ ssh-keygen -t rsa -P ''
Append the public key to authorized_keys so that Hadoop does not need a password when it runs ssh:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Now let's test ssh:
$ ssh localhost
If you are asked whether to continue connecting, type yes. If you find you no longer need to enter a password, cool -- at least up to this point everything is correct. Otherwise, debug.
$ exit
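If the login did ask for a password, a verbose connection attempt often shows why (standard OpenSSH flag; a debugging suggestion rather than part of the original post):

$ ssh -v localhost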

Download Hadoop 2.2.0 (2.x.x)
$ cd ~
$ wget http://www.trieuvan.com/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
$ sudo tar -xzvf hadoop-2.2.0.tar.gz -C /usr/local
$ cd /usr/local
$ sudo mv hadoop-2.2.0 hadoop
$ sudo chown -R hduser:hadoop hadoop
Configure the Hadoop environment
$ cd ~
$ vim .bashrc
Copy the following into .bashrc:
#Hadoop variables
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
### end of paste
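For the variables to take effect in the current shell (new shells pick them up automatically):

$ source ~/.bashrc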

$ cd /usr/local/hadoop/etc/hadoop
$ vim hadoop-env.sh
Add the following three lines to hadoop-env.sh and delete the original "export JAVA_HOME" line:
# begin of paste
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
### end of paste


Configure Hadoop
$ cd /usr/local/hadoop/etc/hadoop
$ vim core-site.xml
Copy the following inside the <configuration> tags:

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

$ vim yarn-site.xml
Copy the following inside the <configuration> tags:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

$ mv mapred-site.xml.template mapred-site.xml
$ vim mapred-site.xml
Copy the following inside the <configuration> tags:

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

$ mkdir -p ~/mydata/hdfs/namenode
$ mkdir -p ~/mydata/hdfs/datanode
$ vim hdfs-site.xml
Copy the following inside the <configuration> tags:

<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>

Format the namenode
Before starting the Hadoop services for the first time, you must format the namenode:
$ hdfs namenode -format
Start the services
$ start-dfs.sh && start-yarn.sh
Check the services with jps:
$ jps
If all goes well, you will see:
17785 SecondaryNameNode
17436 NameNode
17591 DataNode
18096 NodeManager
17952 ResourceManager
23635 Jps
When you run start-dfs.sh, you may see WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable. Don't worry; Hadoop still works normally, and we will come back to this issue in the Trouble-shooting section.

Test and run the examples
$ cd /usr/local/hadoop
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -write -nrFiles 20 -fileSize 10
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar TestDFSIO -clean
$ hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5

Web interfaces
Cluster status: http://localhost:8088
HDFS status: http://localhost:50070
Secondary NameNode status: http://localhost:50090
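A quick sanity check from the shell that the web UIs are listening (curl is assumed to be installed; the ports come from the list above):

$ curl -s http://localhost:50070 | head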

Trouble-shooting
1. Unable to load native-hadoop library for your platform.
This is only a warning and basically does not affect using Hadoop, but we will still give a way to resolve it below. Generally this warning appears because you are on a 64-bit system while the Hadoop package was compiled for 32-bit machines. In that case, make sure you did not forget to add these lines to hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
Otherwise your Hadoop will not work properly. If your system matches the Hadoop package (32-bit), these two lines are unnecessary.
What if we don't want the warning at all? The fix is to recompile the source code yourself, which is actually quite simple:
Install maven
$ sudo apt-get install maven
Install protobuf 2.5.0 or later
$ curl -# -O https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
$ tar -xzvf protobuf-2.5.0.tar.gz
$ cd protobuf-2.5.0
$ ./configure --prefix=/usr
$ make
$ sudo make install
$ cd ..
Now compile the Hadoop source code; note that you need to patch the source before compiling:
$ wget http://www.eu.apache.org/dist/hadoop/common/stable/hadoop-2.2.0-src.tar.gz
$ tar -xzvf hadoop-2.2.0-src.tar.gz
$ cd hadoop-2.2.0-src
$ wget https://issues.apache.org/jira/secure/attachment/12614482/HADOOP-10110.patch
$ patch -p0 < HADOOP-10110.patch
$ mvn package -Pdist,native -DskipTests -Dtar
Now go to the hadoop-dist/target/ directory; there you will see hadoop-2.2.0.tar.gz and hadoop-2.2.0, which are the freshly compiled Hadoop packages. You can use your own build to install a 64-bit Hadoop following the earlier steps. If you have already installed the 32-bit Hadoop, you only need to replace the /usr/local/hadoop/lib/native directory and then remove these two lines from hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
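A sketch of that directory swap (not spelled out in the original; paths assume the build tree from the previous step and the install location /usr/local/hadoop):

$ sudo rm -rf /usr/local/hadoop/lib/native
$ sudo cp -r hadoop-dist/target/hadoop-2.2.0/lib/native /usr/local/hadoop/lib/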

2. The datanode fails to start
A common remedy is to delete the datanode's directory and try again; note that doing so may lose your data. Another approach is to check /usr/local/hadoop/logs/hadoop-hduser-datanode-*.log for the cause and fix it accordingly.
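A sketch of the first, destructive option, using the datanode path configured in hdfs-site.xml earlier (this wipes everything stored in HDFS):

$ stop-dfs.sh
$ rm -rf ~/mydata/hdfs/datanode/*
$ start-dfs.sh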

