IJIEPR



Department of CSE, GITE, Rajahmundry, A.P., India, sasikanth@giet.ac.in
Abstract:
In today's digital world, computations are tremendously demanding, and there is an essential requirement to efficiently process and store enormous datasets for a wide variety of applications. Much of this data is unstructured, is generated at high velocity beyond conventional limits, and roughly doubles in volume day by day. Over the last decade, many organizations have faced major problems in handling and processing massive chunks of data, which could not be processed efficiently owing to the limitations of existing, conventional technologies. This paper addresses how to overcome these problems efficiently using Hadoop, a powerful open-source data processing framework, and in particular one of its core components, MapReduce, which nevertheless has some performance issues. The main goal of this paper is to address and overcome the limitations and weaknesses of MapReduce with Apache Spark.
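The MapReduce programming model the abstract refers to can be sketched in plain Python. This is an illustrative single-machine sketch, not the paper's implementation: the function names and sample documents are assumptions, and in Hadoop each phase would run distributed across a cluster.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by their key (the word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each word's list of counts into a total.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big tools", "spark improves on mapreduce"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["big"])  # → 2
```

One of MapReduce's performance issues mentioned in the abstract is that each job writes intermediate results to disk between the phases above; Spark instead keeps intermediate datasets in memory across a chain of such operations, which is the basis of the speedups the paper pursues.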
Full-Text [PDF 737 kb]
Type of Study: Research | Subject: Logistics & Supply Chain
Received: 2020/05/3 | Accepted: 2020/05/3 | Published: 2020/05/3


© 2020 All Rights Reserved | International Journal of Industrial Engineering & Production Research
