Spark vs Hadoop

Big Data is about storing and processing huge volumes of data with the ultimate goal of generating insights from it. Data can be processed in real time or in batches depending on the solution requirements. Apache Hadoop and Apache Spark are the two big data frameworks most frequently used by Big Data professionals. But when it comes to selecting one framework for data processing, practitioners often face a dilemma. Let us understand the main differences between the two and try to find out which one is better.


What is Hadoop?

Hadoop is a framework that provides batch processing. It was the first big data framework to gain significant traction in the open-source world. Inspired by presentations and papers from Google about how it dealt with large amounts of data, Hadoop assembled a stack of components and algorithms to make processing large data sets easier.

Hadoop provided the ability to analyze large data sets. However, it relied heavily on disk storage rather than memory for computation, and was therefore very slow for calculations that require multiple passes over the same data. While this kept the hardware requirements for Hadoop operations cheap (hard disk space is far cheaper than RAM), it made accessing and processing data much slower.

Hadoop is an open-source distributed processing framework that manages data processing and storage for big data applications running in clustered systems. It comprises three main components:

  • Hadoop Distributed File System (HDFS): a distributed file system that provides high throughput access to application data by partitioning data across many machines.
  • YARN: a framework for job scheduling and cluster resource management (task coordination)
  • MapReduce: the YARN-based system for parallel processing of large data sets on multiple machines.
Hadoop Distributed File System (HDFS)

As the name suggests, HDFS is a distributed file system that uses a cluster of machines to coordinate the storage of replicated data, ensuring that data stays available despite possible host failures. It is used to store the input data, intermediate results, and final results. Hadoop's processing functionality comes from MapReduce, a technique that follows the map, shuffle, and reduce pattern using key-value pairs (a small local sketch follows the list below). The procedure involves:

  • Reading the data set from HDFS.
  • Dividing the data set into chunks that are distributed across the available nodes.
  • Applying the computation on each node to its subset of the data.
  • Redistributing the intermediate results so that they are grouped by key.
  • Reducing the values for each key by combining and summarizing the results calculated by the individual nodes.
  • Writing the final results back to HDFS.
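
The same flow can be illustrated with a short, purely local Python sketch of a word count. This is only a stand-in for the real thing: Hadoop would split the input across HDFS blocks and run the map and reduce steps on many nodes, whereas here everything happens in one process.

    from collections import defaultdict

    # Toy input standing in for data read from HDFS.
    documents = ["spark and hadoop", "hadoop stores data", "spark processes data"]

    # Map: emit a <word, 1> key-value pair for every word in every chunk.
    mapped = [(word, 1) for line in documents for word in line.split()]

    # Shuffle: group the intermediate pairs by key.
    grouped = defaultdict(list)
    for key, value in mapped:
        grouped[key].append(value)

    # Reduce: combine the values for each key; in Hadoop the result
    # would be written back to HDFS.
    counts = {key: sum(values) for key, values in grouped.items()}
    print(counts)
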
YARN

YARN stands for Yet Another Resource Negotiator and is the cluster-coordinating component of the Hadoop stack. It manages and coordinates the underlying resources and job scheduling. By acting as an interface to the cluster's resources, YARN makes it possible to run more diverse workloads on the cluster than earlier iterations of Hadoop allowed.

Map Reduce

MapReduce is a parallel programming paradigm that allows huge amounts of data to be processed by running processes on multiple machines. Defining a MapReduce job requires two stages, map and reduce (a hedged sketch follows the list below).

  • Map: defines the operation to be performed on each small portion of the data set in parallel; its output is a set of key-value pairs of the form <K, V>.
  • Reduce: combines the intermediate map results, grouped by key, into the final output.
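
As a hedged sketch of how these two stages might look in practice with Hadoop Streaming (the file names mapper.py and reducer.py are illustrative), the mapper reads lines from standard input and emits <word, 1> pairs:

    # mapper.py (illustrative): emit a "<word>\t1" pair for every word read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

The reducer receives the mapper output sorted by key, so it can sum the counts for each word by scanning consecutive lines:

    # reducer.py (illustrative): the framework delivers pairs sorted by key,
    # so consecutive lines with the same word can simply be summed.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.strip().split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")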

What is Spark?

Spark is a batch processing framework with stream processing capability. It is built on many of the same principles as MapReduce, but it focuses mainly on processing speed by performing its computation in memory.

Spark can be used standalone, or it can connect to a Hadoop cluster as a MapReduce alternative. It interacts with the storage layer only to load the initial data and to write the final results; all intermediate processing of the data happens in memory.
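
A minimal PySpark sketch of this pattern, assuming PySpark is installed and using a local master and an illustrative input path (on a real cluster the master would typically be YARN and the path an hdfs:// URI):

    from pyspark.sql import SparkSession

    # Local standalone mode for illustration only.
    spark = SparkSession.builder.master("local[*]").appName("WordCount").getOrCreate()

    # The only interaction with the storage layer: loading the input.
    lines = spark.sparkContext.textFile("input.txt")  # illustrative path

    # All intermediate steps stay in memory as chained transformations.
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # collect() is the action that finally produces the results.
    print(counts.collect())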

Since Spark is able to look at the entire task beforehand, it is faster even for disk-related work. It usually does this by building DAGs (directed acyclic graphs), which describe the operations that need to be done, the data to be worked on, and the relationships between them, giving the scheduler a better chance of coordinating the work. Stream processing is provided by Spark Streaming.
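
Continuing the sketch above, the DAG is built lazily from transformations and only executed when an action is called; the planned lineage can be inspected before anything runs:

    # A further transformation only extends the DAG; no job is launched yet.
    frequent = counts.filter(lambda kv: kv[1] > 1)

    # Inspect the lineage (the operations and dependencies Spark has planned).
    print(frequent.toDebugString().decode())

    # An action such as count() triggers execution of the whole DAG.
    print(frequent.count())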

Spark by itself is made for batch-oriented work. For streaming, it implements a design known as micro-batching, which treats a stream of data as a series of small batches that can be handled with the batch engine's semantics.
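
A hedged Structured Streaming sketch of the micro-batch idea, assuming text arriving on a local socket at port 9999 (for example via nc -lk 9999): each micro-batch is processed with ordinary batch-style operations and its result is written to the console.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.master("local[*]").appName("MicroBatches").getOrCreate()

    # The socket stream is consumed as a series of small batches.
    lines = (spark.readStream.format("socket")
                  .option("host", "localhost")
                  .option("port", 9999)
                  .load())

    # Ordinary batch-style transformations, applied to every micro-batch.
    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    word_counts = words.groupBy("word").count()

    # Each micro-batch updates the running counts printed to the console.
    query = word_counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()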

Spark is a cluster-computing framework, which means that it competes more with MapReduce than with the entire Hadoop ecosystem. For example, Spark doesn't have its own distributed file system but can use HDFS.

Spark uses memory and can also use disk for processing, whereas MapReduce is strictly disk-based. The primary difference between MapReduce and Spark is that MapReduce uses persistent storage while Spark uses Resilient Distributed Datasets (RDDs), which are covered in more detail in the Fault Tolerance section.
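
The practical effect shows up in multi-pass workloads: where MapReduce re-reads its input from disk for every job, an RDD can be cached in memory once and reused by several actions. A hedged sketch, reusing the local session from the earlier example and an illustrative file of numbers:

    # Parse the input once and keep it in memory for repeated passes.
    numbers = spark.sparkContext.textFile("numbers.txt").map(float).cache()

    # Each of these actions reuses the cached data instead of re-reading disk
    # (the file is only read on the first action that materializes the RDD).
    total = numbers.sum()
    count = numbers.count()
    print(total, count, total / count if count else 0.0)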

Conclusion:

Apache Spark and Apache Hadoop are without doubt two of the most important tools for processing Big Data, and each carries its own weight in the information technology domain. Based on project requirements and ease of use, a developer can choose between Apache Hadoop and Apache Spark.

As the comparison above shows, Spark was developed to be fast and to address Hadoop's shortcomings. Spark is not only faster thanks to in-memory processing, but it also has many libraries built on top of it for big data analytics and Machine Learning. Despite Hadoop's shortcomings, both Spark and Hadoop play major roles in big data analytics and are harnessed by big tech companies around the world to tailor user experiences to their customers and clients.


