Setting Up an Eclipse-Based Hadoop Test Environment
The environment on this machine is as follows:
Eclipse 3.6
Hadoop-0.20.2
Hive-0.5.0-dev
1. Install the hadoop-0.20.2-eclipse-plugin plugin. Note: the \hadoop-0.20.2\contrib\eclipse-plugin\hadoop-0.20.2-eclipse-plugin.jar shipped inside the Hadoop distribution has problems under Eclipse 3.6 and cannot run jobs on the Hadoop server; a working build can be downloaded from http://code.google.com/p/hadoop-eclipse-plugin/
2. Open the Map/Reduce perspective: Window -> Open Perspective -> Other... -> Map/Reduce
3. Add a DFS Location: in the Map/Reduce Locations view, click New Hadoop Location and fill in the corresponding host and port:
Map/Reduce Master:
Host: 10.10.xx.xx
Port: 9001
DFS Master:
Host: 10.10.xx.xx (or simply check "Use M/R Master host")
Port: 9000
User name: root
Under Advanced parameters, change hadoop.job.ugi from the default DrWho,Tardis to root,Tardis. If the option is not visible, restart Eclipse with eclipse -clean.
Otherwise you may get an error: org.apache.hadoop.security.AccessControlException. A small connectivity-check sketch follows this step.
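For reference, the location settings above correspond roughly to the following client-side properties in Hadoop 0.20. This is only a minimal sketch (the LocationCheck class is made up for illustration) that can be run from Eclipse to verify connectivity to the cluster before submitting a job; 10.10.xx.xx stands for the masked master address used above.

package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class LocationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://10.10.xx.xx:9000");  // DFS Master
        conf.set("mapred.job.tracker", "10.10.xx.xx:9001");      // Map/Reduce Master
        conf.set("hadoop.job.ugi", "root,Tardis");               // user running the cluster
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to: " + fs.getUri());      // should print the DFS Master URI
    }
}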
4. Configure the local hosts file (a small name-resolution check follows this step):
10.10.xx.xx zw-hadoop-master. zw-hadoop-master
# Note: the line must also contain zw-hadoop-master. (with the trailing dot); otherwise running a Map/Reduce job fails with:
java.lang.IllegalArgumentException: Wrong FS: hdfs://zw-hadoop-master:9000/user/root/oplog/out/_temporary/_attempt_201008051742_0135_m_000007_0, expected: hdfs://zw-hadoop-master.:9000
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:352)
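A quick sanity check for the hosts entry is to resolve both spellings of the master name from Java before running any job; the HostCheck class below is only an illustrative sketch, assuming the names from the /etc/hosts line above.

import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Both the dotted and undotted names should resolve to the master's IP.
        for (String name : new String[] { "zw-hadoop-master", "zw-hadoop-master." }) {
            System.out.println(name + " -> " + InetAddress.getByName(name).getHostAddress());
        }
    }
}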
5. Create a new Map/Reduce Project and add Mapper, Reducer, and Driver classes. Note that the auto-generated code targets the old Hadoop API, so rewrite it against the new org.apache.hadoop.mapreduce API yourself:
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapperTest extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // The third |-separated field of each log line is the user id.
        String userid = value.toString().split("[|]")[2];
        context.write(new Text(userid), one);
    }
}
package com.sohu.hadoop.test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReducerTest extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the counts emitted for each user id.
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
package com.sohu.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DriverTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DriverTest <in> <out>");
            System.exit(2);
        }

        // Compress the job output with gzip. These properties must be set on the
        // Configuration before the Job is created, because Job copies the Configuration.
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class, CompressionCodec.class);

        Job job = new Job(conf, "Driver Test");
        job.setJarByClass(DriverTest.class);
        job.setMapperClass(MapperTest.class);
        job.setCombinerClass(ReducerTest.class);
        job.setReducerClass(ReducerTest.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
6. Right-click DriverTest, choose Run As -> Run on Hadoop, and select the corresponding Hadoop Location. A small sketch for inspecting the compressed output follows this step.
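Because the job output is gzip-compressed, it cannot be inspected with a plain cat. The following is a minimal, assumed verification sketch (the OutputCheck class and the part-file argument are illustrative, not part of the original setup) that reads one compressed part file back from HDFS and prints the first few lines.

package com.sohu.hadoop.test;

import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class OutputCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path part = new Path(args[0]);  // e.g. the job's <out>/part-r-00000.gz
        // GzipCodec is Configurable, so instantiate it through ReflectionUtils.
        GzipCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(codec.createInputStream(fs.open(part))));
        String line;
        for (int i = 0; i < 10 && (line = in.readLine()) != null; i++) {
            System.out.println(line);  // print the first few "userid<TAB>count" records
        }
        in.close();
    }
}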
Comments:

The Eclipse plugin did not get configured successfully. Could it be an Eclipse version problem? Mine is 3.6.
Hello. I tried the fix for the AccessControlException you mention in point 3, but it did not solve my problem. My cluster has five machines, three of them running Cygwin, and the master is on Cygwin. My Eclipse program is written on my own machine (not the master), and the mapreduce plugin is configured to connect to the master. Clicking Run on Hadoop reports an error like the one in point 3; after applying the fix from point 3 it still fails. First it reported access=WRITE, inode="user"; I searched around and ran hadoop dfs -chmod 777 /user, after which it reported access=WRITE, inode="", and that I cannot resolve. Do you know how to fix this? Thank you.
The hadoop.job.ugi property must be set to the user that runs Hadoop: if the cluster runs as root, set it to root; if it runs as some other user, set it to that user name.
Hello, I am using the hadoop 0.20.2 plugin under MyEclipse. How do I deploy the server? Please take a look; my QQ is 109492927.
Hello!
I would like to ask about configuring a remote Hadoop development environment based on Eclipse. The situation is this:
I built a Hadoop cluster on several machines, and the namenode machine has two network cards. The cluster uses internal IPs, so fs.default.name and mapred.job.tracker both contain internal IP addresses, while the external address is used for outside communication. When defining the Hadoop Location in Eclipse, what exactly should I put in the Host field? Please advise, thank you.