MapReduce jobs

    976 mapreduce jobs found, prices in USD

    The project consists of developing a master-worker scheme, the most common processing pattern in distributed computing environments, similar to well-known models such as MapReduce.

    $87 (Avg Bid)
    $87 Average bid
    4 bids

    The project consists of developing a master-worker scheme, the most common processing pattern in distributed computing environments, similar to well-known models such as MapReduce.

    $51 (Avg Bid)
    $51 Average bid
    1 bid
    Data Engineer (Ended)

    We are looking for a professional to work as a freelance Data Engineer on a project in the mining industry. Required: more than 2 years of experience as a developer in Big Data on Azure, Hadoop, MapReduce, Spark, Hive, Synapse Analytics, Python (or SCADA).

    $21 / hr (Avg Bid)
    $21 / hr Average bid
    7 bids

    I would like a course made up of several classes (each class a different video) to learn how to use Hadoop and MapReduce, starting from scratch and increasing the level with each class, from the basics up to the most advanced levels, in order to fully learn and understand Hadoop and MapReduce for Big Data. The total duration of all classes combined should be at least 30 minutes. Very important: the language must be SPANISH, otherwise it is of no use to me.

    $8 - $31
    $8 - $31
    0 bids
    Big Data Architect (Ended)

    This is a project to build a Big Data strategy for Business Analytics. The phases we want to cover are the review and audit of the data sources (structured and unstructured) and technical support ... operating systems and networks. • Data center architecture and creation of Data Lakes (Cloudera, HortonWorks, MapR) • Familiarity with modern massive-scale (Big Data) and/or real-time processing environments: Hadoop/MapReduce, HBase, Scala/Spark, Dataflow, Storm, Flume. • Knowledge of the Salesforce environment • ...

    $15 / hr (Avg Bid)
    $15 / hr Average bid
    6 bids

    ...programming languages such as Java, Scala or Python. It is essential to know what the technologies are used for and why, in order to model the best possible architecture for a given business problem. We are looking for: graduates in Computer Science, Mathematics, Statistics, etc. Minimum requirements: experience with Git; programming languages: Java, Python, Scala, R...; experience developing MapReduce processes on Hadoop, Spark or Flink; data manipulation in different NoSQL databases such as Cassandra, Mongo or HBase. Desired requirements: experience developing real-time processes with Storm, Spark or Flink; experience with or knowledge of the tools of the Hadoop ecosystem; libraries and/or technologi...

    N/A
    N/A
    0 bids

    ...Professional with a degree in Computer Science, Economics, Actuarial Science or Exact Sciences, or a specialist in Data Mining or Big Data. Excellent communication skills, proactivity, organizational and planning ability. Knowledge of SQL. Ability to create complex analytical models and algorithms. Programming knowledge of Big Data tools and applications (Hadoop/HDFS, Spark, MapReduce, Hive, R and Python). Capacity for abstraction and creativity to solve complex problems. Good disposition for both individual and team work. Location: Saavedra, Capital. A challenging and demanding yet cheerful and fun work environment, so you can do what you like the most ...

    N/A
    N/A
    0 bids

    BluePatagon, a leading company in Business Intelligence & Business Analytics technologies, is looking for a Big Data Specialist (Hadoop - Hortonworks) for an important client in CABA. Experience: at least 1 year with Hadoop technologies: experience developing applications (MapReduce + HDFS). Big Data: familiarity with the ecosystem (Hive, Pig, HBase, etc.) and with concepts of scalability, real-time analysis, and distributed data processing. Linux: advanced use (OS service management, administration, shell scripting, security). Programming: OOP (preferably Java, also Python). Databases: RDBMS (Oracle, MySQL, PostgreSQL), NoSQL (HBase, Cassandra). Data Exchange and configurati...

    N/A
    N/A
    0 bids

    The backend would be built with the Python programming language, although we would also program in JavaScript using technologies such as NodeJS. The database would be relational (PostgreSQL) on one side, and on the other we would also use ... a non-relational one such as MongoDB. For the platform itself we would use frontend technologies such as AngularJS, HTML5, CSS3, etc. Basically: the programming language is PYTHON in the backend (on the servers running the data analysis processes) and JAVASCRIPT with the AngularJS framework on the client side. For data analysis we will use Big Data technologies such as Hadoop, MapReduce, ...

    N/A
    N/A
    0 bids

    ...experience in NoSQL (HBase, Cassandra or similar, Neo4j). Willingness to learn and implement new Big Data technologies as needed. Initiative and the ability to work both independently and in a team. Experience with Storm in real-time analytics solutions. Experience in parallel processing (MPI, OpenMP) is a competitive advantage for the position. Expert understanding of Hadoop HDFS and MapReduce. Creative, out-of-the-box thinking. Team management skills. InnoQuant has just been selected as one of the 10 most promising technology startups in Spain. We are an experienced team of IT professionals working on a real-time big data analytics platform for ...

    $244 (Avg Bid)
    $244 Average bid
    5 bids

    I'm in search of a professional proficient in AWS and MapReduce. My project involves: - Creation and execution of MapReduce jobs within the AWS infrastructure. - Specifically, these tasks will focus on processing a sizeable amount of text data. - The goal of this data processing is to perform an in-depth word frequency analysis, thereby extracting meaningful answers prompted by the data. The ideal freelancer for this job will have substantial experience handling data within these systems. Expertise in optimizing performance of MapReduce jobs is also greatly desirable. For anyone dabbling in AWS, MapReduce and data analytics, this project can provide a challenging and rewarding experience.
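    The job described here is essentially a word-frequency count. As a rough orientation, a minimal Hadoop Streaming sketch in Python (the script name, tokenization rule, and the way it would be submitted as an EMR step are assumptions, not part of the posting):

#!/usr/bin/env python3
# wordfreq.py - hypothetical Hadoop Streaming script for word frequencies.
# Mapper:  python3 wordfreq.py map      Reducer:  python3 wordfreq.py reduce
import re
import sys

def mapper():
    # Emit one "word<TAB>1" record per token in the input text.
    for line in sys.stdin:
        for word in re.findall(r"[a-z']+", line.lower()):
            print(f"{word}\t1")

def reducer():
    # Streaming sorts records by key, so counts can be accumulated per word
    # and flushed whenever the key changes.
    current, count = None, 0
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()

    On EMR this would typically run as a streaming step, e.g. with -mapper "python3 wordfreq.py map" and -reducer "python3 wordfreq.py reduce"; tuning the number of reducers and adding a combiner are the usual first performance levers.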

    $37 (Avg Bid)
    $37 Average bid
    6 bids

    I'm in search of an intermediate-level Java programmer well-versed in MapReduce. Your responsibility will be to implement the conceptual methods outlined in a given academic paper. What sets this task apart is that you're encouraged to positively augment the methodologies used: • Efficiency: Be creative with the paper's strategies and look for room for improvement in the program's efficiency. This could include enhancements to the program's capacity to process data, or to its speed. Ideal candidate should be seasoned in Java Programming, specifically MapReduce operations. Moreover, the ability to critically analyze and improve upon existing concepts will ensure success in this task. Don't hesitate to innovate, as long as you maintain the ...

    $139 (Avg Bid)
    $139 Average bid
    33 bids
    Hadoop administrator (Ended)

    ...administrator will be responsible for ensuring the smooth functioning of the Hadoop system and optimizing its performance. - The candidate should have a deep understanding of Hadoop architecture, configuration, and troubleshooting. - Experience in managing large-scale data processing and storage environments is required. - Strong knowledge of Hadoop ecosystem technologies such as HDFS, YARN, MapReduce, and Hive is essential. - The Hadoop administrator should be proficient in scripting languages like Python or Bash for automation and monitoring tasks. - Familiarity with cloud platforms and distributed computing frameworks is a plus. - Excellent communication skills and the ability to work collaboratively in a team environment are necessary. - The candidate should be proactive, det...

    $310 (Avg Bid)
    $310 Average bid
    3 bids
    Hadoop HDFS Setup (Ended)

    HDFS Setup Configuration: 1 NameNode 3 DataNodes 1 SecondaryNameNode Requirements: Assuming y...the Overview module, Startup Process module, DataNodes module, and Browse Directory module on the Web UI of HDFS. MapReduce Temperature Analysis You are given a collection of text documents containing temperature data. Your task is to implement a MapReduce program to find the maximum and minimum temperatures for each year. Data Format: Year: Second item in each line Minimum temperature: Fourth item in each line Maximum temperature: Fifth item in each line Submission Requirements: Submit the source code of your MapReduce program along with instructions for running it. Also, include a short document explaining your design choices and how you tested your solution. ...
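    For the temperature part of this posting, a minimal Hadoop Streaming sketch in Python, using the field positions stated in the brief (year = second item, minimum temperature = fourth, maximum = fifth); everything else, including the script name, is an assumption:

#!/usr/bin/env python3
# temps.py - hypothetical streaming script for per-year max/min temperatures.
import sys

def mapper():
    # Emit "year<TAB>min<TAB>max" using the field positions given in the brief.
    for line in sys.stdin:
        parts = line.split()
        if len(parts) < 5:
            continue  # skip malformed lines
        print(f"{parts[1]}\t{parts[3]}\t{parts[4]}")

def reducer():
    current, lo, hi = None, None, None
    for line in sys.stdin:
        year, tmin, tmax = line.rstrip("\n").split("\t")
        tmin, tmax = float(tmin), float(tmax)
        if year != current:
            if current is not None:
                print(f"{current}\tmin={lo}\tmax={hi}")
            current, lo, hi = year, tmin, tmax
        else:
            lo, hi = min(lo, tmin), max(hi, tmax)
    if current is not None:
        print(f"{current}\tmin={lo}\tmax={hi}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()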

    $15 (Avg Bid)
    $15 Average bid
    2 bids

    Looking for Hadoop Hive Experts I am seeking experienced Hadoop Hive experts for a personal project. Requirements: - Advanced level of expertise in Hadoop Hive - Strong understanding of big data processing and analysis - Proficient in Hive query language (HQL) - Experience with data warehousing and ETL processes - Familiarity with Apache Hadoop ecosystem tools (e.g., HDFS, MapReduce) - Ability to optimize and tune Hadoop Hive queries for performance If you have a deep understanding of Hadoop Hive and can effectively analyze and process big data, then this project is for you. Please provide examples of your previous work in Hadoop Hive and any relevant certifications or qualifications. I am flexible with the timeframe for completing the project, so please let me know your avail...

    $20 (Avg Bid)
    $20 Average bid
    2 bids
    Big Data Statistics (Ended)

    1: model and implement efficient big data solutions for various application areas using appropriately selected algorithms and data structures. 2: analyse methods and algorithms, to compare and evaluate them with respect to time and space requirements and make appropriate design choices when solving real-world problems. 3: motivate and explain trade-offs in big data processing technique design and analysis in written and oral form. 4: explain the Big Data fundamentals, including the evolution of Big Data, the characteristics of Big Data and the challenges introduced. 6: apply the novel architectures and platforms introduced for Big Data, i.e., Hadoop, MapReduce and Spark, to complex problems on the Hadoop execution platform....

    $129 (Avg Bid)
    $129 Average bid
    9 bids
    Big data processing (Ended)

    Write MapReduce programs that give you a chance to develop an understanding of principles when solving complex problems on the Hadoop execution platform.

    $25 (Avg Bid)
    $25 Average bid
    9 bids
    Python Expert (Ended)

    I am looking for a Python expert who can help me with a specific task of implementing a MapReducer. The ideal candidate should have the following skills and experience: - Proficient in Python programming language - Strong knowledge and experience in MapReduce framework - Familiarity with web scraping, data analysis, and machine learning would be a plus The specific library or framework that I have in mind for this project is [insert library/framework name]. I have a tight deadline for this task, and I prefer it to be completed in less than a week.
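    For a sense of scope, the map/shuffle/reduce pattern the posting refers to can be prototyped on a single machine in a few lines. This sketch is illustrative only; every name in it is an assumption and it does not distribute work across machines:

# Minimal single-machine sketch of the map/shuffle/reduce pattern.
from collections import defaultdict
from typing import Any, Callable, Iterable, Tuple

def map_reduce(records: Iterable[Any],
               mapper: Callable[[Any], Iterable[Tuple[Any, Any]]],
               reducer: Callable[[Any, list], Any]) -> dict:
    # Map phase: each record produces zero or more (key, value) pairs.
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)          # shuffle: group values by key
    # Reduce phase: combine the values collected for each key.
    return {key: reducer(key, values) for key, values in groups.items()}

if __name__ == "__main__":
    lines = ["to be or not to be", "to see or not to see"]
    result = map_reduce(
        lines,
        mapper=lambda line: ((w, 1) for w in line.split()),
        reducer=lambda key, values: sum(values),
    )
    print(result)   # e.g. {'to': 4, 'be': 2, ...}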

    $11 / hr (Avg Bid)
    $11 / hr Average bid
    52 bids
    MapReduce Program (Ended)

    I am looking for a freelancer to develop a Mapreduce program in Python for data processing. The ideal candidate should have experience in Python programming and a strong understanding of Mapreduce concepts. Requirements: - Proficiency in Python programming language - Knowledge of Mapreduce concepts and algorithms - Ability to handle large data sets efficiently - Experience with data processing and manipulation - Familiarity with data analysis and mining techniques The program should be flexible enough to handle any data set, but the client will provide specific data sets for the freelancer to work with. The freelancer should be able to process and analyze the provided data sets efficiently using the Mapreduce program.

    $108 (Avg Bid)
    $108 Average bid
    27 bids

    This is a Java Hadoop MapReduce task. The program should run on Windows. An algorithm must be devised and implemented that can recognize the language of a given text. Thank you.

    $33 (Avg Bid)
    $33 Average bid
    8 bids
    Hadoop Trainer (Ended)

    I am looking for an advanced Hadoop trainer for an online training program. I have some specific topics to be covered as part of the program, and it is essential that the trainer can provide in-depth knowledge and expertise in Hadoop. The topics to be discussed include Big Data technologies, Hadoop administration, Data warehousing, MapReduce, HDFS Architecture, Cluster Management, Real Time Processing, HBase, Apache Sqoop, and Flume. Of course, the trainer should also have good working knowledge about other Big Data topics and techniques. In addition to the topics mentioned, the successful candidate must also demonstrate the ability to tailor the course to meet the learner’s individual needs, making sure that the classes are engaging and fun. The trainer must also possess o...

    $14 / hr (Avg Bid)
    $14 / hr Average bid
    1 bid

    We are an expanding IT company seeking skilled and experienced data engineering professionals to support our existin...experience in a data engineering role. Desired (but not required) Skills: - Experience with other data processing technologies such as Apache Flink, Apache Beam, or Apache Nifi. - Knowledge of containerization technologies like Docker and Kubernetes. - Familiarity with data visualization tools such as Tableau, Power BI, or Looker. - Understanding of Big Data tools and technologies like Hadoop, MapReduce, etc. If you possess the necessary skills and experience, we invite you to reach out to us with your CV and relevant information. We are excited to collaborate with you and contribute to the continued success and innovation of our IT company in the field of data en...

    $25 / hr (Avg Bid)
    $25 / hr Average bid
    34 bids

    Need an exceptional freelancer with expertise in AWS CloudFormation and Python Boto3 scripting to create a CloudFormation template specifically for an EMR (Elastic MapReduce) cluster and develop a validation script. This project requires strong knowledge of AWS services, proficiency in Python scripting with Boto3, and the ability to meet a strict 5-day deadline, which can be changed based on project requirements. Requirements: - Extensive experience in AWS CloudFormation, specifically for EMR clusters - Proficiency in Python scripting with Boto3 - Solid understanding of IAM, S3, and EMR services - Previous experience in creating validation scripts or automated testing scripts - Familiarity with Spark and Adaptive Query Execution (AQE) is highly desirable Will tell exact requirements when
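    A hedged sketch of what the validation side could look like with Boto3 (the stack name and the ClusterId output key are assumptions; the real CloudFormation template would define its own resources and outputs):

# Hypothetical Boto3 validation script for an EMR cluster created by CloudFormation.
import sys
import boto3

STACK_NAME = "emr-cluster-stack"        # hypothetical stack name
CLUSTER_ID_OUTPUT = "ClusterId"         # hypothetical stack output key

def validate(stack_name: str) -> bool:
    cfn = boto3.client("cloudformation")
    emr = boto3.client("emr")

    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    if stack["StackStatus"] not in ("CREATE_COMPLETE", "UPDATE_COMPLETE"):
        print(f"Stack not ready: {stack['StackStatus']}")
        return False

    # Pull the EMR cluster id out of the stack outputs, then check its state.
    outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}
    cluster_id = outputs.get(CLUSTER_ID_OUTPUT)
    if not cluster_id:
        print("Stack has no ClusterId output")
        return False

    state = emr.describe_cluster(ClusterId=cluster_id)["Cluster"]["Status"]["State"]
    print(f"EMR cluster {cluster_id} state: {state}")
    return state in ("WAITING", "RUNNING")

if __name__ == "__main__":
    sys.exit(0 if validate(STACK_NAME) else 1)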

    $168 (Avg Bid)
    $168 Average bid
    9 bids

    Help to implement HDFS and MapReduce applications.

    $137 (Avg Bid)
    $137 Average bid
    14 bids

    ...appropriate visualisation/s and report the results of analysis. All the steps/Python code/results must be shared. (A) Data Analysis (75%) • On given datasets, identify the questions that you would like to answer through data analysis. • Given two datasets, use SQL queries to create a new dataset for analysis. • Perform data cleaning and pre-processing tasks on the new dataset. • Use HIVE, MapReduce (or Spark) and machine learning techniques to analyse data. • Perform visualization using Python and PowerBI and report the results. (B) Issues and Solution (25%)• Identify the current issues in the use of Big Data Analytics in the fashion retail industry. Based on the identified issues, propose an effective solution using various technologies. ...

    $7 / hr (Avg Bid)
    $7 / hr Average bid
    15 bids

    Need a Java expert with experience in Distributed Systems for Information Systems Management. The work involves the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; this requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat. Please place your bids.
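    The deliverable for Part 2 is a Java program (TempestAnalytics.java); purely for orientation, the same kind of calculation looks like this in PySpark, with the input path assumed:

# Illustrative PySpark sketch only; the posting itself asks for Java.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TempestAnalytics").getOrCreate()
lines = spark.sparkContext.textFile("the_tempest.txt")   # hypothetical path
words = lines.flatMap(lambda line: line.split())

print("lines:", lines.count())                            # lines in the play
print("words:", words.count())                            # total word count
print("distinct words:", words.map(str.lower).distinct().count())

spark.stop()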

    $665 (Avg Bid)
    $665 Average bid
    7 bids

    Need a Java expert with experience in Distributed Systems for Information Systems Management. The work involves the use of MapReduce and Spark, plus Linux and Unix commands. Part 1: Execute a MapReduce job on the cluster of machines; this requires use of Hadoop classes. Part 2: Write a Java program that uses Spark to read The Tempest and perform various calculations. The name of the program is TempestAnalytics.java. I will share full details in chat. Please place your bids.

    $884 (Avg Bid)
    $884 Average bid
    6 bids

    You are required to set up a multinode environment consisting of a master node and multiple worker nodes. You are also required to set up a client program that communicates with the nodes based on the types of operations requested by the user. The types of operations expected for this project are: WRITE: Given an input file, split it into multiple partitions and store it across multiple worker nodes. READ: Given a file name, read the different partitions from different workers and display it to the user. MAP-REDUCE: Given an input file, a mapper file and a reducer file, execute a MapReduce job on th...

    $7 - $18
    $7 - $18
    0 bids

    given a dataset and using only MapReduce framework and python, find the following: • The difference between the maximum and the minimum for each day in the month • The daily minimum • the daily mean and variance • the correlation matrix that describes the monthly correlation among set of columns Using Mahout and python, do the following: • Implement the K-Means clustering algorithm • Find the optimum number (K) of clusters for the K-mean clustering • Plot the elbow graph for K-mean clustering • Compare the different clusters you obtained with different distance measures
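    A minimal sketch of the per-day statistics as a Hadoop Streaming job in Python; the input layout (day in the first column, reading in the second) is an assumption, and the correlation-matrix and Mahout/K-means parts are not covered here:

#!/usr/bin/env python3
# dailystats.py - hypothetical streaming script: per-day range, min, mean, variance.
import sys

def mapper():
    for line in sys.stdin:
        parts = line.split()
        if len(parts) >= 2:
            print(f"{parts[0]}\t{parts[1]}")      # key = day, value = reading

def reducer():
    def emit(day, vals):
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        print(f"{day}\trange={max(vals) - min(vals)}\tmin={min(vals)}"
              f"\tmean={mean:.4f}\tvar={var:.4f}")
    day, vals = None, []
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != day and day is not None:
            emit(day, vals)
            vals = []
        day = key
        vals.append(float(value))
    if day is not None:
        emit(day, vals)

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()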

    $165 (Avg Bid)
    $165 Average bid
    8 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Auto bidders, please stay away. Thank you.

    $100 (Avg Bid)
    $100 Average bid
    4 bids
    MapReduce with Hadoop (Ended)

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Auto bidders, please stay away. Thank you.

    $105 (Avg Bid)
    $105 Average bid
    5 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Auto bidders, please stay away. Thank you.

    $123 (Avg Bid)
    $123 Average bid
    4 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Auto bidders, please stay away. Thank you.

    $140 (Avg Bid)
    $140 Average bid
    6 bids

    Hello All, The objective of this subject is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time. Auto bidders, please stay away. Thank you.

    $97 (Avg Bid)
    $97 Average bid
    3 bids
    MapReduce with Hadoop (Ended)

    The objective of this assignment is to learn how to design a distributed solution to a Big Data problem with the help of MapReduce and Hadoop. In fact, MapReduce is a software framework for spreading a single computing job across multiple computers. It is assumed that these jobs take too long to run on a single computer, so you run them on multiple computers to shorten the time.

    $120 (Avg Bid)
    $120 Average bid
    16 bids

    1. Implement the straggler solution using the approach below a) Develop a method to detect slow tasks (stragglers) in the Hadoop MapReduce framework using Progress Score (PS), Progress Rate (PR) and Remaining Time (RT) metrics b) Develop a method of selecting idle nodes to replicate detected slow tasks using the CPU time and Memory Status (MS) of the idle nodes. c) Develop a method for scheduling the slow tasks to appropriate idle nodes using CPU time and Memory Status of the idle nodes. 2. A good report on the implementation with graphics 3. A recorded execution process Use any certified data to test the efficiency of the methods
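    As a starting point for 1(a), the three metrics can be computed per task roughly as follows. This is a hedged sketch: the task fields, the 0.5 slow-factor threshold, and the flagging policy are assumptions, not part of the brief.

# Hypothetical straggler-detection step using Progress Score (PS),
# Progress Rate (PR = PS / elapsed time) and Remaining Time (RT = (1 - PS) / PR).
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    progress_score: float   # PS in [0, 1], as reported by the framework
    elapsed_sec: float      # time the task has been running

def detect_stragglers(tasks, slow_factor=0.5):
    # PR per task; the tiny floor avoids division by zero for brand-new tasks.
    rates = {t.task_id: max(t.progress_score, 1e-6) / t.elapsed_sec for t in tasks}
    remaining = {t.task_id: (1.0 - t.progress_score) / rates[t.task_id] for t in tasks}
    avg_rate = sum(rates.values()) / len(rates)
    # Flag tasks whose progress rate is well below the average rate.
    stragglers = [t for t in tasks if rates[t.task_id] < slow_factor * avg_rate]
    return stragglers, remaining

if __name__ == "__main__":
    tasks = [Task("m_01", 0.90, 60), Task("m_02", 0.85, 65), Task("m_03", 0.20, 70)]
    slow, rt = detect_stragglers(tasks)
    for t in slow:
        print(f"{t.task_id} looks slow, estimated remaining {rt[t.task_id]:.0f}s")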

    $186 (Avg Bid)
    Urgent
    $186 Average bid
    11 bids
    Big data project (Ended)

    Identify differences in implementations using Spark versus MapReduce, and understand LSH by implementing portions of the algorithm. Your task is to find hospitals with similar characteristics in the impact of COVID-19. Being able to quickly find similar hospitals can be useful for connecting hospitals experiencing difficulties and for finding the characteristics of hospitals that have dealt better with the pandemic.

    $189 (Avg Bid)
    $189 Average bid
    17 bids
    MapReduce with Python (Ended)

    I have an input text file and a mapper and reducer file which output the total count of each word in the text file. I would like the mapper and reducer files to output only the top 20 words (and their counts) with the highest count. The files use ... and I want to be able to run them in Hadoop.
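    One way to get the top-20 behaviour is to leave the mapper alone and have the reducer accumulate all counts and emit only the largest ones. A sketch under the assumption that the existing mapper emits "word<TAB>1" lines:

#!/usr/bin/env python3
# top20_reducer.py - hypothetical reducer that keeps only the 20 most frequent words.
import heapq
import sys
from collections import defaultdict

def reducer(top_n=20):
    counts = defaultdict(int)
    for line in sys.stdin:
        word, _, value = line.rstrip("\n").partition("\t")
        counts[word] += int(value)
    # A single reducer sees every word, so the global top-N can be taken here.
    for word, count in heapq.nlargest(top_n, counts.items(), key=lambda kv: kv[1]):
        print(f"{word}\t{count}")

if __name__ == "__main__":
    reducer()

    With one reducer this gives the global top 20 directly; with several reducers, each would emit its local top 20 and a small second job (or the driver) would merge them.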

    $138 (Avg Bid)
    $138 Average bid
    12 bids

    I want a MapReduce framework to be implemented in Scala.

    $210 (Avg Bid)
    $210 Average bid
    7 bids

    I will have a couple of simple questions regarding NLP, FSA, MapReduce, regular expressions, and N-grams. Please let me know if you have expertise in these topics.

    $158 (Avg Bid)
    $158 Average bid
    34 bids

    1) Describe how to implement the following queries in MapReduce: SELECT , , , , FROM Employee as emp, Agent as a WHERE = AND = ; SELECT lo_quantity, COUNT(lo_extendedprice) FROM lineorder, dwdate WHERE lo_orderdate = d_datekey AND d_yearmonth = 'Feb1995' AND lo_discount = 6 GROUP BY lo_quantity; SELECT d_month, AVG(d_year) FROM dwdate GROUP BY d_month ORDER BY AVG(d_year) Consider a Hadoop job that processes an input data file of size equal to 179 disk blocks (179 different blocks, not considering HDFS replication factor). The mapper in this job requires 1 minute to read and fully process a single block of data. Reducer requires 1 second (not minute) to produce an answer for one key worth of values and there are a total of
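    For the second query, one common shape is a map-side (broadcast) join plus a group-by: the dwdate rows matching 'Feb1995' are small enough to ship with the job, the mapper filters lineorder rows and emits lo_quantity keys, and the reducer counts them. A hedged Python streaming sketch; the pipe-delimited format, the column positions, and the side-file name are assumptions:

#!/usr/bin/env python3
# q2_groupby.py - hypothetical map-side join + GROUP BY lo_quantity.
import sys

def load_feb1995_datekeys(path="dwdate_feb1995.txt"):      # hypothetical side file
    # The file would be shipped with the job (e.g. via -files) and holds the
    # d_datekey values for d_yearmonth = 'Feb1995'.
    with open(path) as f:
        return {line.split("|")[0] for line in f}

def mapper():
    datekeys = load_feb1995_datekeys()
    for line in sys.stdin:
        cols = line.rstrip("\n").split("|")
        lo_orderdate, lo_quantity = cols[5], cols[8]        # assumed positions
        lo_extendedprice, lo_discount = cols[9], cols[11]
        if lo_discount == "6" and lo_orderdate in datekeys and lo_extendedprice:
            print(f"{lo_quantity}\t1")                      # GROUP BY lo_quantity

def reducer():
    current, count = None, 0
    for line in sys.stdin:
        key, value = line.split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = key, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()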

    $200 (Avg Bid)
    $200 Average bid
    1 bid

    I can successfully run the MapReduce job on the server. But when I try to submit this job as a remote YARN client with Java (via the YARN REST API), I get the following error. I want to submit this job successfully via the remote client (YARN REST API).

    $12 (Avg Bid)
    $12 Average bid
    3 bids

    Write a MapReduce program in Python to implement BFS. The required scripts and a shell script are needed, according to the detailed instructions in the uploaded file.
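    The usual MapReduce formulation of BFS is iterative: each job expands the current frontier by one hop, and the driver shell script reruns it until no GRAY nodes remain. A hedged Python streaming sketch of a single iteration; the record layout "node<TAB>neighbours(,)<TAB>distance<TAB>color" is an assumption:

#!/usr/bin/env python3
# bfs_step.py - hypothetical single BFS iteration as a streaming job.
import sys

INF = "INF"

def mapper():
    for line in sys.stdin:
        node, nbrs, dist, color = line.rstrip("\n").split("\t")
        if color == "GRAY":
            # Expand the frontier: neighbours become GRAY at distance + 1.
            for n in filter(None, nbrs.split(",")):
                print(f"{n}\t\t{int(dist) + 1}\tGRAY")
            color = "BLACK"                       # this node is now done
        print(f"{node}\t{nbrs}\t{dist}\t{color}")

def reducer():
    RANK = {"WHITE": 0, "GRAY": 1, "BLACK": 2}
    def emit(node, nbrs, dist, color):
        print(f"{node}\t{nbrs}\t{dist}\t{color}")
    cur = None
    for line in sys.stdin:
        node, nbrs, dist, color = line.rstrip("\n").split("\t")
        if node != cur:
            if cur is not None:
                emit(cur, best_nbrs, best_dist, best_color)
            cur, best_nbrs, best_dist, best_color = node, "", INF, "WHITE"
        best_nbrs = best_nbrs or nbrs             # keep the adjacency list
        if dist != INF and (best_dist == INF or int(dist) < int(best_dist)):
            best_dist = dist                      # keep the smallest distance
        if RANK[color] > RANK[best_color]:
            best_color = color                    # keep the darkest colour
    if cur is not None:
        emit(cur, best_nbrs, best_dist, best_color)

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()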

    $136 (Avg Bid)
    $136 Average bid
    24 bids

    ...to you how you pick necessary features and build the training that creates matching courses for job profiles. These are the suggested steps you should follow : Step 1: Setup a Hadoop cluster where the data sets should be stored on the set of Hadoop data nodes. Step 2: Implement a content based recommendation system using MapReduce, i.e. given a job description you should be able to suggest a set of applicable courses. Step 3: Execute the training step of your MapReduce program using the data set stored in the cluster. You can use a subset of the data depending on the system capacity of your Hadoop cluster. You have to use an appropriate subset of features in the data set for effective training. Step 4: Test your recommendation system using a set of requests that execute ...

    $20 (Avg Bid)
    $20 Average bid
    3 bids

    The write-up should include the main problem, which can be subdivided into 3 or 4 subproblems. If I'm satisfied, we will discuss the implementation further.

    $235 (Avg Bid)
    $235 Average bid
    4 bids

    Using MapReduce, recommend the best courses for up-skilling based on a given job description. You can use the data set to train the system and pick some job descriptions not in the training set to test. It is left up to you how you pick the necessary features and build the training that creates matching courses for job profiles. Project submission: 1. Code files with comments for your MapReduce implementation of the training and query steps. 2. Documentation of the design of your logic, including training, query and feature engineering. The data CSV is too big; it will be shared separately.
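    A hedged sketch of what the query step could look like as a streaming job: score each course against one job description by simple term overlap and keep the best matches. The CSV layout (course_id, title, description), the file names, and the scoring rule are assumptions; the real feature engineering is left to the implementer:

#!/usr/bin/env python3
# recommend.py - hypothetical query step: rank courses against one job description.
import csv
import heapq
import re
import sys

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def mapper(job_description_path="job_description.txt"):    # hypothetical side file
    job_terms = tokens(open(job_description_path).read())
    for course_id, title, description in csv.reader(sys.stdin):
        score = len(job_terms & tokens(title + " " + description))
        if score:
            print(f"{score:06d}\t{course_id}\t{title}")     # zero-padded sort key

def reducer(top_n=10):
    # A single reducer sees all scored courses; keep the highest-scoring ones.
    best = heapq.nlargest(top_n, sys.stdin, key=lambda l: int(l.split("\t")[0]))
    for line in best:
        score, course_id, title = line.rstrip("\n").split("\t")
        print(f"{course_id}\t{title}\tscore={int(score)}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()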

    $79 (Avg Bid)
    $79 Average bid
    2 bids

    Hi Sri Varadan Designers, I noticed your profile and would like to offer you my project. We can discuss any details over chat. I have a task to do in MapReduce in Hadoop.

    $16 (Avg Bid)
    $16 Average bid
    1 bid

    I want to run pouchdb-node on AWS Lambda. Source code: Detailed Requirements: - Deploy pouchdb-node to AWS Lambda. - Use EFS in storage layer. - Ok to limit concurrency to 1 to avoid race conditions. - Expose via Lambda HTTPS Endpoints (no API Gateway) - The basic PUT / GET functions, replication, and MapReduce must all work Project Deliverables: - Deployment script which packages pouchdb-node and deploys it to AWS using SAM or CloudFormation. Development Process: - I will not give access to my AWS Accounts. - You develop on your own environment and give me completed solution.

    $188 (Avg Bid)
    $188 Average bid
    7 bids
    Hadoop Assignment (Ended)

    ...to you how you pick necessary features and build the training that creates matching courses for job profiles. These are the suggested steps you should follow : Step 1: Setup a Hadoop cluster where the data sets should be stored on the set of Hadoop data nodes. Step 2: Implement a content based recommendation system using MapReduce, i.e. given a job description you should be able to suggest a set of applicable courses. Step 3: Execute the training step of your MapReduce program using the data set stored in the cluster. You can use a subset of the data depending on the system capacity of your Hadoop cluster. You have to use an appropriate subset of features in the data set for effective training. Step 4: Test your recommendation system using a set of requests that execute ...

    $130 (Avg Bid)
    $130 Average bid
    5 bids

    ...metrics to show which is a better method. OR ii) Improvement on the methodology used in (a) that will produce a better result. 2. Find a suitable paper on replication of data in the Hadoop MapReduce framework. a) Implement the methodology used in the paper. b) i) Write a program to split the identified intermediate results from (1 b(i)) appropriately into 64 MB/128 MB and compare with 2(a) using the same metrics to show which is a better method. OR ii) Improvement on the methodology used in 2(a) that will produce a better result. 3. Find a suitable paper on allocation strategy of data/tasks to nodes in the Hadoop MapReduce framework. a) Implement the methodology used in the paper. b) i) Write a program to reallocate the splits from (2(b)(i)) above to nodes by considering the capability ...

    $162 (Avg Bid)
    $162 Average bid
    4 bids
