Hadoop Python MapReduce Tutorial for Beginners
This article originally accompanied my tutorial session at the Big Data Madison Meetup, November 2013.
The goal of this article is to:
- introduce you to the Hadoop Streaming library (the mechanism that lets us run non-JVM code on Hadoop)
- teach you how to write a simple MapReduce pipeline in Python (single input, single output)
- teach you how to write a more complex pipeline in Python (multiple inputs, single output)
There are other good resources online about Hadoop Streaming, so I'm going over old ground a little. Here are some good links:
- Hadoop Streaming official Documentation
- Michael Knoll’s Python Streaming Tutorial
- An Amazon EMR Python streaming tutorial
If you are new to Hadoop, you might want to check out my beginner's guide to Hadoop before digging into any code (it's a quick read, I promise!).
Setup
I’m going to use the Cloudera Quickstart VM to run these examples.
Once you're booted into the quickstart VM, we're going to get our dataset. I'm going to use the play-by-play NFL data compiled by Brian Burke. To start with, we're only going to use the data in his Git repository.
Once you're in the Cloudera VM, clone the repo:
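The URL below is a placeholder; substitute the actual address of Brian Burke's data repository:

```bash
# Clone the play-by-play NFL data (placeholder URL -- use the real one)
git clone <repository-url> nfldata
cd nfldata
```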
To start we're going to use stadiums.csv. However, this data was encoded on Windows (grr), so it has ^M line separators instead of newlines (\n). We need to fix the line endings before we can play with it:
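One way to do the conversion (a sketch; tr ships with the VM, and the path assumes you're inside the cloned repository):

```bash
# Replace carriage returns (^M) with Unix newlines
tr '\r' '\n' < stadiums.csv > stadiums-unix.csv
mv stadiums-unix.csv stadiums.csv
```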
Hadoop Streaming Intro
The way you ordinarily run a MapReduce job is to write a Java program with at least three parts:
- a main method, which configures the job and launches it
  - sets the number of reducers
  - sets the mapper and reducer classes
  - sets the partitioner
  - sets any other Hadoop configuration options
- a Mapper class
  - takes (K, V) inputs and writes (K, V) outputs
- a Reducer class
  - takes (K, Iterator[V]) inputs and writes (K, V) outputs
Hadoop Streaming is actually just a Java library that implements these things, but instead of doing the work itself, it pipes the data to scripts. In doing so, it provides an API for other languages:
- read from STDIN
- write to STDOUT
Streaming has some (configurable) conventions that allow it to understand the data returned. Most importantly, it assumes that keys and values are separated by a tab character (\t). This is important for the rest of the MapReduce pipeline to work properly (partitioning and sorting). To understand why, check out my intro to Hadoop, where I discuss the pipeline in detail.
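Concretely, "writing a key/value pair" from a Python script just means printing a tab-separated line. A minimal sketch:

```python
# Streaming convention: key and value on one line, separated by a tab
key, value = "artificial", 1
print("%s\t%s" % (key, value))  # emits: artificial<TAB>1
```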
Running a Basic Streaming Job
It's just like running a normal MapReduce job, except that you need to tell Hadoop which scripts you want to use.
Hadoop comes with the streaming jar in its lib directory, so just find that to use it. The job below counts the number of lines in our stadiums file. (This is really overkill, because there are only 32 records.)
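A sketch of the command. The jar path varies between Hadoop versions and distributions, and the HDFS paths here are illustrative; the identity mapper (cat) passes records through and wc -l counts them:

```bash
# Copy the data into HDFS first
hadoop fs -put stadiums.csv /user/cloudera/stadiums.csv

# Find your streaming jar with: find / -name "hadoop-streaming*.jar" 2>/dev/null
hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-*.jar \
  -input /user/cloudera/stadiums.csv \
  -output /user/cloudera/line-count \
  -mapper cat \
  -reducer "wc -l"
```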
A good way to make sure your job has run properly is to look at the JobTracker dashboard. In the quickstart VM there is a link in the bookmarks bar.
You should see your job in the running/completed sections; clicking on it brings up a bunch of information. The most useful data on this page is under the Map-Reduce Framework section. In particular, look for counters like:
- Map Input Records
- Map Output Records
- Reduce Output Records
In our example, the map input record count is 32 and the reduce output record count is 1.
A Simple Example in Python
Looking in columns.txt we can see the fields that the stadium file contains; the one we care about here is the playing-surface type.
Let's use MapReduce to find the number of stadiums with artificial and natural playing surfaces.
The pseudo-code looks something like this (emit and extract_surface below are just stand-ins, not real functions):
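```python
# map: for each stadium record, emit the surface type with a count of one
def map(record):
    emit(extract_surface(record), 1)

# reduce: sum the counts for each surface type
def reduce(surface_type, counts):
    emit(surface_type, sum(counts))
```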
You can find the finished code in my Hadoop framework examples repository.
Important Gotcha!
The reducer interface for streaming is actually different from the one in Java. Instead of receiving reduce(k, Iterator[V]), your script is sent one line per value, with the key included on every line.
So, for example, instead of receiving something like this (the counts are made up for illustration):
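```
reduce("artificial", [1, 1, 1])
reduce("natural", [1, 1])
```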
it will receive:
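```
artificial	1
artificial	1
artificial	1
natural	1
natural	1
```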
This means you have to do a little state tracking in your reducer. This will be demonstrated in the code below.
To follow along, check out my git repository (on the virtual machine):
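The URL is again a placeholder, standing in for the Hadoop framework examples repository mentioned above:

```bash
# Clone the finished example code (placeholder URL -- use the real one)
git clone <hadoop-framework-examples-url>
cd hadoop-framework-examples
```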
Mapper
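A minimal sketch of the mapper. The position of the surface field is an assumption here; check columns.txt and adjust SURFACE_INDEX to match the real layout (header rows and quoted commas are ignored for brevity):

```python
#!/usr/bin/env python
"""mapper.py: emit (surface_type, 1) for every stadium record."""
import sys

# Position of the playing-surface field in stadiums.csv.
# This index is an assumption -- check columns.txt and adjust it.
SURFACE_INDEX = 4

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    fields = line.split(",")
    surface = fields[SURFACE_INDEX]
    # Key and value separated by a tab, per the streaming convention
    print("%s\t%d" % (surface, 1))
```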
Reducer
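A sketch of the matching reducer. Because every line carries its key, we watch for the key to change and flush the running total when it does:

```python
#!/usr/bin/env python
"""reducer.py: sum the counts for each surface type."""
import sys

last_key = None
count = 0

for line in sys.stdin:
    key, value = line.strip().split("\t")
    if last_key is not None and key != last_key:
        # The key changed, so we've seen every value for last_key: emit it
        print("%s\t%d" % (last_key, count))
        count = 0
    last_key = key
    count += int(value)

# Don't forget the final key
if last_key is not None:
    print("%s\t%d" % (last_key, count))
```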
You might notice that the reducer is significantly more complex than the pseudocode. That is because the streaming interface is limited and cannot really provide a way to implement the standard API. As noted above, each line your script reads contains both the KEY and the VALUE, so it's up to the reducer to keep track of key changes and act accordingly.
Don’t forget to make your scripts executable:
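Assuming the file names used above:

```bash
chmod +x mapper.py reducer.py
```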
Testing
Because our example is so simple, we can actually test it without using Hadoop at all.
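A shell pipeline stands in for the whole job, with sort playing the role of Hadoop's shuffle phase:

```bash
# map -> sort (simulates the shuffle) -> reduce
cat stadiums.csv | ./mapper.py | sort | ./reducer.py
# prints one tab-separated total per surface type
```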
Looking good so far!
Running with Hadoop should produce the same output.
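A sketch of the invocation; the -file options ship our scripts out to the cluster, and the jar and HDFS paths are the same illustrative ones as before:

```bash
hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-*.jar \
  -input /user/cloudera/stadiums.csv \
  -output /user/cloudera/surface-counts \
  -mapper mapper.py \
  -reducer reducer.py \
  -file mapper.py \
  -file reducer.py
```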
A Complex Example in Python
Check out my advanced Python MapReduce guide to see how to join two datasets together using Python.
Python MapReduce Book
While there are no books specific to Python MapReduce development, the following book has some pretty good examples:
Mastering Python for Data Science
While not specific to MapReduce, this book gives some examples of using the Python 'HadoopPy' framework to write MapReduce code. It's also an excellent book in its own right.