Apache Hadoop Combiner Java Example

posted on Nov 20th, 2016

Apache Hadoop

Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models.

The Hadoop framework works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.

Prerequisites

1) A machine with the Ubuntu 14.04 LTS operating system installed.

2) Apache Hadoop 2.6.4 pre-installed (How to install Hadoop on Ubuntu 14.04)

Hadoop Combiner Example

The Combiner class is used between the Map class and the Reduce class to cut down the volume of data transferred between them. The output of a map task is usually large, so shipping every record across the network to the reduce task is expensive.

A Combiner, also known as a semi-reducer, is an optional class that accepts the output of the Map class and passes its own output key-value pairs on to the Reducer class. The main function of a Combiner is to summarize map output records that share the same key. The combiner's output (a key-value collection) is then sent over the network to the actual Reducer task as input.
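
To make that concrete, consider the sample input used later in this post: six identical lines of "hadoop java hello pig hive sqoop hadoop". If all six lines end up in a single map task, the mapper emits 42 (word, 1) pairs. Hadoop may run the combiner zero, one, or more times per map task, so exact numbers can vary, but in the ideal case those 42 records collapse to six before anything crosses the network:

Map output (42 records)     Combiner output (6 records)
(hadoop, 1) x 12            (hadoop, 12)
(java, 1)   x 6             (java, 6)
(hello, 1)  x 6             (hello, 6)
(pig, 1)    x 6             (pig, 6)
(hive, 1)   x 6             (hive, 6)
(sqoop, 1)  x 6             (sqoop, 6)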

Step 1 - Add the Hadoop jar files to your Java project's build path. Add the following jars.

/usr/local/hadoop/share/hadoop/common/*
/usr/local/hadoop/share/hadoop/common/lib/*
/usr/local/hadoop/share/hadoop/mapreduce/*
/usr/local/hadoop/share/hadoop/mapreduce/lib/*
/usr/local/hadoop/share/hadoop/yarn/*
/usr/local/hadoop/share/hadoop/yarn/lib/*

Combiner.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Combiner {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // emit (word, 1) for every whitespace-separated token in the line
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    // sum all counts for a key; this class serves as both combiner and reducer
    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(Combiner.class);
    job.setMapperClass(TokenizerMapper.class);
    // run IntSumReducer as a combiner on each map task's output
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("hdfs://localhost:9000/user/hduser/input"));
    FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost:9000/user/hduser/output"));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
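
Note that this job reuses IntSumReducer as the combiner via job.setCombinerClass(IntSumReducer.class). That only works because summing is associative and commutative, and because the reducer's input types (Text, IntWritable) match its output types, so Hadoop can apply it zero or more times on the map side without changing the final result. A combiner whose logic does not satisfy these conditions (for example, one computing an average) must be written as a separate class.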

Step 2 - Change the directory to /usr/local/hadoop/sbin

$ cd /usr/local/hadoop/sbin

Step 3 - Start all Hadoop daemons.

$ start-all.sh
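
To verify that the daemons are up, run the jps command. On a pseudo-distributed setup you should see NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager listed (process ids will differ):

$ jps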

Step 4 - Create an input.txt file. In my case, I have stored input.txt in the /home/hduser/Desktop/hadoop/ directory.

input.txt

Step 5 - Add the following lines to the input.txt file.

hadoop java hello pig hive sqoop hadoop
hadoop java hello pig hive sqoop hadoop
hadoop java hello pig hive sqoop hadoop
hadoop java hello pig hive sqoop hadoop
hadoop java hello pig hive sqoop hadoop
hadoop java hello pig hive sqoop hadoop

Step 6 - Make a new input directory in HDFS.

$ hdfs dfs -mkdir /user/hduser/input
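
If the parent directory /user/hduser does not exist yet, add the -p flag so that missing parent directories are created as well:

$ hdfs dfs -mkdir -p /user/hduser/input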

Step 7 - Copy input.txt from the local file system to HDFS.

$ hdfs dfs -copyFromLocal /home/hduser/Desktop/hadoop/input.txt /user/hduser/input
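
You can confirm that the file reached HDFS before running the job:

$ hdfs dfs -ls /user/hduser/input
$ hdfs dfs -cat /user/hduser/input/input.txt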

Step 8 - Run your Combiner program by submitting the project's jar file to Hadoop. Creating the jar file is left to you; a minimal sketch follows the command below.

$ hadoop jar /path/combiner.jar Combiner
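
If you are not exporting the jar from an IDE, a minimal command-line sketch looks like this, assuming Combiner.java is in your current directory (hadoop classpath prints the same jar list as Step 1):

$ javac -classpath "$(hadoop classpath)" Combiner.java
$ jar cf combiner.jar Combiner*.class

Also note that the job fails if the output directory already exists, so remove it between runs with hdfs dfs -rm -r /user/hduser/output.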

Step 9 - Now you can see the output files.

$ hdfs dfs -cat /user/hduser/output/part-r-00000

[Screenshots in the original post: job run output "With Combiner" and "Without Combiner".]
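
The final word counts are identical with and without the combiner; for the sample input above they work out to:

hadoop	12
hello	6
hive	6
java	6
pig	6
sqoop	6

Where the combiner makes itself visible is in the counters Hadoop prints when the job finishes: with job.setCombinerClass() set, the "Combine input records" and "Combine output records" counters are non-zero and "Reduce input records" drops accordingly; without it, both combine counters stay at 0.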

Step 10 - Don't forget to stop the Hadoop daemons.

$ stop-all.sh
