Single-Table Join
The example provides a child-parent table and asks us to produce the corresponding grandchild-grandparent table.
file:

child parent
Tom Lucy
Tom Jack
Jone Lucy
Jone Jack
Lucy Mary
Lucy Ben
Jack Alice
Jack Jesse
Terry Alice
Terry Jesse
Philip Terry
Philip Alma
Mark Terry
Mark Alma
Design
MapReduce's shuffle phase brings all values with the same key together, so the column to be joined should become the map output key. Here we join the parent column of the table with the child column of the same table, i.e. the table is joined with itself. In the map phase, after splitting each input record into child and parent, we therefore emit the record twice: once with child as the key and parent as the value, and once with parent as the key and child as the value. To tell the two roles apart on the reduce side, a flag is attached to the value string: in the code below, the character 1 is appended to parent values and the character 2 is appended to child values. After the shuffle, each key's value-list then holds both that person's parents (flag 1) and that person's children (flag 2), so the "grandchild--grandparent" join is already done. The reducer parses each value-list, puts the flag-2 names (children of the key, i.e. grandchildren) into one array and the flag-1 names (parents of the key, i.e. grandparents) into another, and the Cartesian product of the two arrays is the final result.
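To make the data flow concrete, here is a trace for the key Lucy, derived from the sample file above (the order of values in the reduce input is not guaranteed, so the list may arrive in a different order):

Tom Lucy   → emits (Lucy, Tom2)    Tom is Lucy's child
Jone Lucy  → emits (Lucy, Jone2)   Jone is Lucy's child
Lucy Mary  → emits (Lucy, Mary1)   Mary is Lucy's parent
Lucy Ben   → emits (Lucy, Ben1)    Ben is Lucy's parent

reduce input: Lucy → [Tom2, Jone2, Mary1, Ben1]
child = [Tom, Jone], grand = [Mary, Ben]
Cartesian product → Tom-Mary, Tom-Ben, Jone-Mary, Jone-Ben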
Implementation
Mapper class
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {

    private Text k = new Text();
    private Text v = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String[] tmp = line.split(" +");
        // Skip the header row and any line that does not contain both columns.
        if (tmp.length < 2 || (tmp[0].equals("child") && tmp[1].equals("parent"))) {
            return;
        }
        // Key = child, value = parent tagged with "1":
        // the value is a parent of the key, i.e. a grandparent candidate.
        k.set(tmp[0]);
        v.set(tmp[1] + "1");
        context.write(k, v);
        // Key = parent, value = child tagged with "2":
        // the value is a child of the key, i.e. a grandchild candidate.
        k.set(tmp[1]);
        v.set(tmp[0] + "2");
        context.write(k, v);
    }
}
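For example, the data row Tom Lucy produces two map outputs: (Tom, Lucy1), marking Lucy as a parent of Tom, and (Lucy, Tom2), marking Tom as a child of Lucy.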
Reducer class
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyReducer extends Reducer<Text, Text, Text, Text> {

    private Text k = new Text();
    private Text v = new Text();

    @Override
    protected void setup(Context context)
            throws IOException, InterruptedException {
        // Write a header row once per reducer, before any keys are processed.
        context.write(new Text("grandChild"), new Text("grandParent"));
    }

    @Override
    protected void reduce(Text key, Iterable<Text> value, Context context)
            throws IOException, InterruptedException {
        List<String> child = new ArrayList<String>(); // children of the key: grandchildren
        List<String> grand = new ArrayList<String>(); // parents of the key: grandparents
        for (Text val : value) {
            String str = val.toString();
            String flagStr = str.substring(str.length() - 1); // trailing tag character
            String name = str.substring(0, str.length() - 1); // the name itself
            int flag = Integer.parseInt(flagStr);
            if (flag == 1) {
                grand.add(name); // tagged "1": a parent of the key
            } else if (flag == 2) {
                child.add(name); // tagged "2": a child of the key
            }
        }
        // The Cartesian product of the two lists yields all
        // grandchild-grandparent pairs for this key.
        for (int i = 0; i < child.size(); i++) {
            k.set(child.get(i));
            for (int j = 0; j < grand.size(); j++) {
                v.set(grand.get(j));
                context.write(k, v);
            }
        }
    }
}
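With the sample file and a single reducer, the job produces the twelve pairs below (TextOutputFormat separates key and value with a tab; the header row comes from setup(), and within each key the pair order depends on the order in which values arrive, so it may vary):

grandChild  grandParent
Tom         Alice
Tom         Jesse
Jone        Alice
Jone        Jesse
Tom         Mary
Tom         Ben
Jone        Mary
Jone        Ben
Philip      Alice
Philip      Jesse
Mark        Alice
Mark        Jesse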
Job driver class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FamilyShip {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        job.setJarByClass(FamilyShip.class);
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        // Input and output paths on HDFS.
        FileInputFormat.addInputPath(job, new Path("hdfs://localhost:9000/usr/qqx/familyinput"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/usr/qqx/familyoutput"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
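To run the job, the three classes are packaged into a jar and submitted with the hadoop command, for example (the jar name here is illustrative): hadoop jar familyship.jar FamilyShip. This assumes HDFS is reachable at localhost:9000 and the sample file has been uploaded to /usr/qqx/familyinput. Note that FileOutputFormat requires the output directory not to exist yet, so /usr/qqx/familyoutput must be deleted before re-running the job.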