Transcript and Presenter's Notes

Title: Large-scale Processing with MapReduce


1
Large-scale Processing with MapReduce
  • Based on the text by Jimmy Lin and Chris Dyer

2
Motivation
  • Systems are defined by large repositories of data
    they collect and process
  • Gathering, analyzing, monitoring, filtering, searching, and organizing web-scale data
  • Analysis of user behavior data encompasses
    data-warehousing, data-mining, and analytics
  • Business intelligence, knowledge discovery
  • Scientific experiments: the Large Hadron Collider; astronomy: the Sloan Digital Sky Survey; next-generation DNA sequencing technologies
  • Sentiment analysis and opinion mining
  • Question and answer systems
  • It is not all text processing: there is also huge network and link analysis of numerical data
  • Systems from impossibly small to enormously large

3
Solutions?
  • Traditional methods: ML, classification, and programming models (attacking the deep features)
  • The authors' view: the particular ML techniques don't matter; what matters is a large amount of data
  • This sentiment is echoed by Tom White in his text on Hadoop: the good news is that Big Data is here; the bad news is that we are struggling to store and analyze it.
  • Programming models?
  • Mark up data with semantic info (annotation)? That means even more data
  • Probability models, maximum likelihood estimation (MLE) based algorithms
  • Data behaves quite chaotically
  • It simply boils down to organizing data and designing a programming model to process this data
  • Google's solution: GFS and MapReduce
  • The reverse-engineered, open-source version of GFS: the Hadoop Distributed File System, or simply Hadoop
  • MapReduce (MR) is the programming model

4
Big idea behind MR
  • Scale out, not up: a large number of commodity servers as opposed to a small number of high-end specialized servers
  • Economies of scale: warehouse-scale computing
  • MR is designed to work with clusters of commodity
    servers
  • Research issues: read Barroso and Hölzle's work
  • Failures are the norm, i.e., common
  • With typical reliability, an MTBF of 1000 days (about 3 years), if you have a cluster of 1000 servers, the probability of at least one server failure at any time is nearly 100% (see the sketch below)
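A rough back-of-the-envelope check, assuming independent failures and an MTBF of 1000 days (so roughly a 1/1000 chance that any given server fails on a given day):

  $1 - \left(1 - \tfrac{1}{1000}\right)^{1000} \approx 1 - e^{-1} \approx 0.63$  (chance of at least one failure in the cluster on a given day)
  $1 - e^{-7} \approx 0.999$  (chance of at least one failure over a given week)

So over any reasonable window, some server in the cluster is essentially guaranteed to fail.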

5
MapReduce
6
Big idea behind MR (contd.)
  • Move processing to data: a distributed system is in charge of managing the data, and the processing is moved to the nodes where the data resides.
  • Sequential processing of data (within a given server) instead of random access and locking; MR is for batch processing
  • Write-once, read-many (WORM) data allows processing by many parallel servers
  • System-level details (monitoring of the status of data and processing) are handled by the framework
  • Seamless scalability: once the MR algorithm is designed it can work on a cluster of any size without any core code alteration (as opposed to GPGPU processing, which requires mapping to a specific architecture/size of the GPU).
  • Divide and conquer: not a new idea
  • Designed mainly for processing text data.

7
Issues to be addressed
  • How to break a large problem into smaller problems? Decomposition for parallel processing
  • How to assign tasks to workers distributed around
    the cluster?
  • How do the workers get the data?
  • How to synchronize among the workers?
  • How to share partial results among workers?
  • How to do all these in the presence of errors and
    hardware failures?
  • MR is supported by a distributed file system that
    addresses many of these aspects.

8
MapReduce Basics
  • Key-value pairs form the basic structure of MapReduce: <key, value>
  • The key can be anything from a simple data type (int, float, etc.) to a file name to a custom type.

9
MapReduce Example (fig.2.4)
10
MapReduce Design
  • You focus on the Map function, the Reduce function, and other related functions such as the combiner.
  • The Mapper and Reducer are designed as classes, with the function defined as a method.
  • Configure the MR job with the location of these functions, the location of input and output (paths within the local server), the scale or size of the cluster in terms of maps, reduces, etc., and run the job (a hedged driver sketch is shown below).
  • Thus a complete MapReduce job consists of code
    for the mapper, reducer, combiner, and
    partitioner, along with job configuration
    parameters. The execution framework handles
    everything else.
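A minimal driver sketch in Java follows, using the org.apache.hadoop.mapreduce API. It is one plausible way to wire up such a job, not the slides' own code; the WordCountMapper/WordCountReducer class names and the command-line paths are illustrative (a matching mapper/reducer sketch appears after the pseudocode on the next slide).

  // Sketch of a word-count job driver; class and path names are illustrative.
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCountDriver {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Job job = Job.getInstance(conf, "word count");
      job.setJarByClass(WordCountDriver.class);

      // Tell the framework which mapper, combiner, and reducer to run.
      job.setMapperClass(WordCountMapper.class);     // hypothetical mapper class
      job.setCombinerClass(WordCountReducer.class);  // reducer reused as combiner (sum is associative)
      job.setReducerClass(WordCountReducer.class);   // hypothetical reducer class

      job.setOutputKeyClass(Text.class);
      job.setOutputValueClass(IntWritable.class);

      // Input and output locations (paths) are taken from the command line.
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, new Path(args[1]));

      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }

The number of reduce tasks (and hence output partitions) could be set with job.setNumReduceTasks(n); the number of map tasks follows from the input splits.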

11
The code
  class Mapper
    method Map(docid a, doc d)
      for all term t in doc d do
        Emit(term t, count 1)

  class Reducer
    method Reduce(term t, counts [c1, c2, ...])
      sum <- 0
      for all count c in counts [c1, c2, ...] do
        sum <- sum + c
      Emit(term t, count sum)
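A Java rendering of this pseudocode is sketched below, assuming the org.apache.hadoop.mapreduce API; treating each input line as the "document" and tokenizing on whitespace are simplifying assumptions, not part of the original pseudocode.

  import java.io.IOException;
  import java.util.StringTokenizer;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;

  // Emits <term, 1> for every term in the input value (a line of text).
  class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Sums the counts received for each term and emits <term, sum>.
  class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable count : values) {
        sum += count.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

Because the reduction (summation) is commutative and associative, the same reducer class can also serve as the combiner on the map side.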

12
MapReduce Example: Mapper
  • This is a cat
  • Cat sits on a roof
  • <this 1> <is 1> <a <1,1>> <cat <1,1>> <sits 1> <on 1> <roof 1>
  • The roof is a tin roof
  • There is a tin can on the roof
  • <the <1,1>> <roof <1,1,1>> <is <1,1>> <a <1,1>> <tin <1,1>> <there 1> <can 1> <on 1>
  • Cat kicks the can
  • It rolls on the roof and falls on the next roof
  • <cat 1> <kicks 1> <the <1,1>> <can 1> <it 1> <rolls 1> <on <1,1>> <roof <1,1>> <and 1> <falls 1> <next 1>
  • The cat rolls too
  • It sits on the can
  • <the <1,1>> <cat 1> <rolls 1> <too 1> <it 1> <sits 1> <on 1> <can 1>

13
MapReduce Example: Combiner, Reducer, Shuffle, Sort
  • <this 1> <is 1> <a <1,1>> <cat <1,1>> <sits 1> <on 1> <roof 1>
  • <the <1,1>> <roof <1,1,1>> <is <1,1>> <a <1,1>> <tin <1,1>> <there 1> <can 1> <on 1>
  • <cat 1> <kicks 1> <the <1,1>> <can 1> <it 1> <rolls 1> <on <1,1>> <roof <1,1>> <and 1> <falls 1> <next 1>
  • <the <1,1>> <cat 1> <rolls 1> <too 1> <it 1> <sits 1> <on 1> <can 1>
  • Combine the counts of all the same words:
  • <cat <1,1,1,1>>
  • <roof <1,1,1,1,1,1>>
  • <can <1,1,1>>
  • Reduce (sum, in this case) the counts:
  • <cat 4>
  • <can 3>
  • <roof 6>

14
What is MapReduce?
  • MapReduce is a programming model Google has used successfully in processing its big-data sets (more than 20 petabytes per day, per the reference below)
  • A map function extracts some intelligence from raw data.
  • A reduce function aggregates the data output by the map according to some guide.
  • Users specify the computation in terms of a map
    and a reduce function,
  • Underlying runtime system automatically
    parallelizes the computation across large-scale
    clusters of machines, and
  • Underlying system also handles machine failures,
    efficient communications, and performance issues.
  • Reference: Dean, J. and Ghemawat, S. 2008. MapReduce: simplified data processing on large clusters. Communications of the ACM 51, 1 (Jan. 2008), 107-113.

15
Classes of problems that are mapreducable
  • Benchmark for comparing: Jim Gray's challenge on data-intensive computing. Example: sort
  • Google uses it for wordcount, AdWords, PageRank, indexing data.
  • Simple algorithms such as grep, text indexing, reverse indexing
  • Bayesian classification: data mining domain
  • Facebook uses it for various operations, e.g., demographics
  • Financial services use it for analytics
  • Astronomy: Gaussian analysis for locating extraterrestrial objects
  • Expected to play a critical role in the semantic web and Web 3.0

16
Large scale data splits
[Figure: large-scale data is split and fed to parallel Map tasks, each emitting <key, value> (here <key, 1>) pairs; a parse/hash step routes keys to the Reducers (here, Count), which write output partitions P-0000 (count1), P-0001 (count2), P-0002 (count3).]
17
More on MR
  • All Mappers work in parallel.
  • A barrier enforces the completion of all mappers before the reducers start.
  • Mappers and reducers typically execute on the same server
  • You can configure the job to have other combinations besides mapper/reducer, e.g., identity mappers/reducers for realizing sort (which happens to be a benchmark)
  • Mappers and reducers can have side effects; this allows for sharing information between iterations.

18
Storage
  • Google: GFS; the open-source equivalent is the Hadoop Distributed File System
  • Google's BigTable: "a sparse, distributed, persistent multidimensional sorted map"; HBase is the open-source equivalent
  • We will discuss the Hadoop framework (HDFS, HBase, Pig, etc.) in detail later
  • We will use these in the design of project 2
  • Where can you get more information?
  • http://hadoop.apache.org/mapreduce/
  • Tom White's Hadoop: The Definitive Guide

19
Distributed File System
  • Separation of computation and data
  • Google's GFS is a proprietary implementation of this storage
  • HDFS (Hadoop Distributed File System) is an open-source equivalent

20
HDFS
  • Divide user data into blocks and replicate those
    blocks across the local disks of nodes in the
    cluster
  • A master/slave architecture in which the master maintains the file namespace (metadata, directory structure, file-to-block mapping, location of blocks, and access permissions) and the slaves manage the actual data blocks: the namenode and the datanodes, respectively

21
Architecture
22
HDFS Architecture
[Figure: the Namenode holds the metadata (name, replicas, etc., e.g., /home/foo/data, 6, ...) and serves metadata ops from a client; clients perform block ops, reads, and writes directly against the Datanodes, which hold the replicated blocks spread across racks (Rack 1, Rack 2); block replication between Datanodes is coordinated by the Namenode.]
23
Hadoop Distributed File System
[Figure: an application uses the HDFS client alongside the local file system (small block size, e.g., 2K); the HDFS client talks to the HDFS server/master node (the name nodes); HDFS stores data in large, replicated blocks (block size 128M).]
24
Namenode and Datanodes
  • Master/slave architecture
  • HDFS cluster consists of a single Namenode, a
    master server that manages the file system
    namespace and regulates access to files by
    clients.
  • There are a number of DataNodes, usually one per node in the cluster.
  • The DataNodes manage storage attached to the nodes that they run on.
  • HDFS exposes a file system namespace and allows user data to be stored in files.
  • A file is split into one or more blocks, and the set of blocks is stored in DataNodes.
  • DataNodes serve read and write requests and perform block creation, deletion, and replication upon instruction from the Namenode.

25
File system Namespace
  • Hierarchical file system with directories and
    files
  • Create, remove, move, rename, etc.
  • The Namenode maintains the file system namespace.
  • Any metadata change to the file system is recorded by the Namenode.
  • An application can specify the number of replicas of a file needed: the replication factor of the file. This information is stored by the Namenode.

26
Data Replication
  • HDFS is designed to store very large files across
    machines in a large cluster.
  • Each file is a sequence of blocks.
  • All blocks in the file except the last are of the
    same size.
  • Blocks are replicated for fault tolerance.
  • Block size and the number of replicas are configurable per file (see the sketch after this list).
  • The Namenode receives a Heartbeat and a
    BlockReport from each DataNode in the cluster.
  • BlockReport contains all the blocks on a
    Datanode.
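As one hedged illustration (a sketch, not the slides' own example): replication can be set as a client-side default through the Configuration, or changed for an existing file through the FileSystem API. The property name and path below are assumptions to be checked against the deployed Hadoop version.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ReplicationExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Default replication factor for files created by this client (assumed property name).
      conf.set("dfs.replication", "3");

      FileSystem fs = FileSystem.get(conf);
      // Change the replication factor of an existing file to 2 (illustrative path).
      fs.setReplication(new Path("/user/example/data.txt"), (short) 2);
      fs.close();
    }
  }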

27
Replica Placement
  • The placement of the replicas is critical to HDFS
    reliability and performance.
  • Optimizing replica placement distinguishes HDFS
    from other distributed file systems.
  • Rack-aware replica placement
  • Goal: improve reliability, availability, and network bandwidth utilization
  • There are many racks; communication between racks goes through switches.
  • Network bandwidth between machines on the same rack is greater than between machines on different racks.
  • The Namenode determines the rack id for each DataNode.
  • Replicas are typically placed on unique racks
  • Simple but non-optimal: writes are expensive
  • With a replication factor of 3:
  • Replicas are placed one on a node in a local rack, one on a different node in the local rack, and one on a node in a different rack.
  • One third of the replicas are on one node, two thirds of the replicas are on one rack, and the other third are distributed evenly across the remaining racks.

28
Replica Selection
  • Replica selection for a READ operation: HDFS tries to minimize bandwidth consumption and latency.
  • If there is a replica on the reader's node, then that replica is preferred.
  • An HDFS cluster may span multiple data centers; a replica in the local data center is preferred over a remote one.

29
Safemode Startup
  • On startup, the Namenode enters Safemode.
  • Replication of data blocks does not occur in Safemode.
  • Each DataNode checks in with a Heartbeat and a BlockReport.
  • The Namenode verifies that each block has an acceptable number of replicas.
  • After a configurable percentage of safely replicated blocks check in with the Namenode, the Namenode exits Safemode.
  • It then makes the list of blocks that need to be
    replicated.
  • Namenode then proceeds to replicate these blocks
    to other Datanodes.

30
Filesystem Metadata
  • The HDFS namespace is stored by the Namenode.
  • The Namenode uses a transaction log called the EditLog to record every change that occurs to the filesystem metadata.
  • For example, creating a new file or changing the replication factor of a file.
  • The EditLog is stored in the Namenode's local filesystem.
  • The entire filesystem namespace, including the mapping of blocks to files and file system properties, is stored in a file called FsImage, also stored in the Namenode's local filesystem.

31
Namenode
  • Keeps an image of the entire file system namespace and the file Blockmap in memory.
  • 4 GB of local RAM is sufficient to support the above data structures, which represent the huge number of files and directories.
  • When the Namenode starts up, it gets the FsImage and EditLog from its local file system, updates the FsImage with the EditLog information, and then stores a copy of the FsImage on the filesystem as a checkpoint.
  • Periodic checkpointing is done so that the system can recover to the last checkpointed state in case of a crash.

32
Datanode
  • A Datanode stores data in files in its local file
    system.
  • Datanode has no knowledge about HDFS filesystem
  • It stores each block of HDFS data in a separate
    file.
  • Datanode does not create all files in the same
    directory.
  • It uses heuristics to determine the optimal number of files per directory and creates directories accordingly.
  • When the filesystem starts up, it generates a list of all HDFS blocks and sends this report to the Namenode: the Blockreport.

33
Protocol
34
The Communication Protocol
  • All HDFS communication protocols are layered on
    top of the TCP/IP protocol
  • A client establishes a connection to a
    configurable TCP port on the Namenode machine. It
    talks ClientProtocol with the Namenode.
  • The Datanodes talk to the Namenode using Datanode
    protocol.
  • An RPC abstraction wraps both the ClientProtocol and the Datanode protocol.
  • The Namenode is simply a server and never initiates a request; it only responds to RPC requests issued by DataNodes or clients.

35
Robustness
36
Possible Failures
  • Primary objective of HDFS is to store data
    reliably in the presence of failures.
  • Three common failures are Namenode failure,
    Datanode failure and network partition.

37
Re-replication
  • The necessity for re-replication may arise due
    to
  • A Datanode may become unavailable,
  • A replica may become corrupted,
  • A hard disk on a Datanode may fail, or
  • The replication factor on the block may be
    increased.

38
Cluster Rebalancing
  • HDFS architecture is compatible with data
    rebalancing schemes.
  • A scheme might move data from one Datanode to
    another if the free space on a Datanode falls
    below a certain threshold.
  • In the event of a sudden high demand for a
    particular file, a scheme might dynamically
    create additional replicas and rebalance other
    data in the cluster.
  • These types of data rebalancing are not yet implemented; this is a research issue.

39
Data Integrity
  • Consider a situation where a block of data fetched from a Datanode arrives corrupted.
  • This corruption may occur because of faults in a
    storage device, network faults, or buggy
    software.
  • An HDFS client computes a checksum of every block of its file and stores the checksums in hidden files in the HDFS namespace.
  • When a client retrieves the contents of a file, it verifies that the corresponding checksums match.
  • If they do not match, the client can retrieve the block from another replica (illustrated below).
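The idea can be illustrated with a generic checksum check: a simplified sketch using CRC32, not HDFS's actual checksum format or hidden-file layout.

  import java.util.zip.CRC32;

  public class BlockChecksum {
    // Compute a CRC32 checksum over a block of bytes.
    static long checksum(byte[] block) {
      CRC32 crc = new CRC32();
      crc.update(block, 0, block.length);
      return crc.getValue();
    }

    public static void main(String[] args) {
      byte[] block = "some block contents".getBytes();
      long stored = checksum(block);   // checksum recorded when the block was written

      byte[] fetched = block.clone();
      fetched[0] ^= 0x01;              // simulate corruption on disk or in transit

      // On read, the client recomputes the checksum and compares it with the stored one;
      // on a mismatch it would fall back to another replica.
      System.out.println("match = " + (checksum(fetched) == stored));
    }
  }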

40
Metadata Disk Failure
  • FsImage and EditLog are central data structures
    of HDFS.
  • A corruption of these files can cause an HDFS instance to be non-functional.
  • For this reason, a Namenode can be configured to
    maintain multiple copies of the FsImage and
    EditLog.
  • Multiple copies of the FsImage and EditLog files
    are updated synchronously.
  • Meta-data is not data-intensive.
  • The Namenode could be a single point of failure; automatic failover has recently been added with a backup Namenode.

41
Data Organization
42
Data Blocks
  • HDFS supports write-once-read-many semantics, with reads at streaming speeds.
  • A typical block size is 64 MB (or even 128 MB).
  • A file is chopped into 64 MB chunks and stored (see the example below).
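For example, a simple back-of-the-envelope calculation (assuming a 64 MB block size and a replication factor of 3): $\lceil 1024 / 64 \rceil = 16$ blocks for a 1 GB file, and $16 \times 3 = 48$ block replicas stored across the cluster.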

43
Staging
  • A client request to create a file does not reach the Namenode immediately.
  • The HDFS client caches the data into a temporary file. When the data reaches an HDFS block size, the client contacts the Namenode.
  • The Namenode inserts the filename into its hierarchy and allocates a data block for it.
  • The Namenode responds to the client with the identity of the Datanode and the destination of the replicas (Datanodes) for the block.
  • Then the client flushes the block from its local temporary file to the specified Datanode.

44
Staging (contd.)
  • The client sends a message that the file is
    closed.
  • Namenode proceeds to commit the file for creation
    operation into the persistent store.
  • If the Namenode dies before the file is closed, the file is lost.
  • This client-side caching is required to avoid network congestion; it also has precedent in AFS (the Andrew File System).

45
Replication Pipelining
  • When the client receives the response from the Namenode, it flushes its block in small pieces (4 KB) to the first replica, which in turn copies them to the next replica, and so on.
  • Thus data is pipelined from one Datanode to the next.

46
API (Accessibility)
47
Application Programming Interface
  • HDFS provides a Java API for applications to use (a minimal sketch follows below).
  • Python access is also used in many applications.
  • A C language wrapper for the Java API is also available.
  • An HTTP browser can be used to browse the files of an HDFS instance.
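A minimal sketch of the Java API, assuming a reachable HDFS configured as the client's default filesystem; the path is illustrative.

  import java.nio.charset.StandardCharsets;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IOUtils;

  public class HdfsApiExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
      FileSystem fs = FileSystem.get(conf);

      Path path = new Path("/user/example/hello.txt"); // illustrative path

      // Write a small file (overwrite if it exists).
      try (FSDataOutputStream out = fs.create(path, true)) {
        out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
      }

      // Read it back and copy the bytes to stdout.
      try (FSDataInputStream in = fs.open(path)) {
        IOUtils.copyBytes(in, System.out, 4096, false);
      }

      fs.close();
    }
  }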

48
FS Shell, Admin and Browser Interface
  • HDFS organizes its data in files and directories.
  • It provides a command line interface called the
    FS shell that lets the user interact with data in
    the HDFS.
  • The syntax of the commands is similar to bash and
    csh.
  • Example: to create a directory /foodir:
  • bin/hadoop dfs -mkdir /foodir
  • There is also a DFSAdmin interface available
  • A browser interface is also available to view the namespace.

49
Space Reclamation
  • When a file is deleted by a client, HDFS renames the file to a file in the /trash directory for a configurable amount of time.
  • A client can request an undelete within this allowed time.
  • After the specified time the file is deleted and
    the space is reclaimed.
  • When the replication factor is reduced, the
    Namenode selects excess replicas that can be
    deleted.
  • The next heartbeat transfers this information to the Datanode, which clears the blocks for reuse.

50
MapReduce Engine
  • MapReduce requires a distributed file system and
    an engine that can distribute, coordinate,
    monitor and gather the results.
  • Hadoop provides that engine through the distributed file system we discussed earlier and the JobTracker/TaskTracker system.
  • The JobTracker is simply a scheduler.
  • A TaskTracker is assigned Map or Reduce tasks (or other operations); Map or Reduce tasks run on a node alongside the TaskTracker, and each task runs in its own JVM on that node.

51
Job Tracker
  • The JobTracker is a service within the Hadoop system
  • It is like a scheduler
  • A client application is sent to the JobTracker
  • It talks to the Namenode, locates the TaskTracker
    near the data (remember the data has been
    populated already).
  • JobTracker moves the work to the chosen
    TaskTracker node.
  • The TaskTracker monitors the execution of the task and updates the JobTracker through heartbeats. Any failure of a task is detected through a missing heartbeat.
  • Intermediate merging on the nodes is also taken care of by the JobTracker

52
TaskTracker
  • It accepts tasks (Map, Reduce, Shuffle, etc.)
    from JobTracker
  • Each TaskTracker has a number of slots for the tasks; these are execution slots available on the machine, or on machines on the same rack
  • It spawns a separate JVM for the execution of each task
  • It indicates the number of available slots through the heartbeat message to the JobTracker

53
The Execution Framework
  • A MapReduce program, referred to as a job,
    consists of code for mappers, reducers and others
    packaged together with configuration parameters
    (such as IO locations).
  • The developer submits the job to the submission
    node of a cluster (in Hadoop, this is called the
    jobtracker).
  • The execution framework (sometimes called the "runtime") takes care of everything else: it transparently handles all other aspects of distributed code execution, on clusters ranging from a single node to a few thousand nodes.

54
Responsibilities of the Execution Framework
  • Scheduling
  • Each MapReduce job is divided into smaller units
    called tasks
  • Essentially, the key space is divided among the set of mappers
  • Maintain a queue in case tasks > available mappers, reducers, etc.
  • Coordination among multiple jobs and users.
  • Data/code co-location
  • Synchronization
  • Error and fault handling
  • Partitioners, Combiners (see the partitioner sketch below)
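As a hedged sketch of the partitioner hook (Hadoop's default partitioner hashes the key; the class name here is illustrative, not from the slides):

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Partitioner;

  // Routes each intermediate <term, count> pair to a reducer by hashing the term,
  // mirroring what the default hash partitioner does.
  public class TermPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
      return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
  }

A custom partitioner of this kind would be registered on the job with job.setPartitionerClass(TermPartitioner.class).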

55
Summary
  • We discussed the features of MapReduce and the distributed file system.
  • We discussed the architecture, protocol, API, etc.
  • Also the MapReduce engine and application architecture
  • References:
  • Apache Hadoop: http://hadoop.apache.org/
  • http://wiki.apache.org/hadoop/
  • Hadoop: The Definitive Guide, by Tom White, 2nd edition, O'Reilly, 2010
  • Dean, J. and Ghemawat, S. 2008. MapReduce: simplified data processing on large clusters. Communications of the ACM 51, 1 (Jan. 2008), 107-113.