Hadoop & ETL Developer
Job Summary
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.
Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.
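For illustration only, and not part of the formal requirements: a minimal sketch of the kind of orchestration work this role involves, assuming Airflow 2.x with the Apache Spark provider installed. The DAG name, script path, and connection ID are hypothetical placeholders.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

default_args = {
    "owner": "data-engineering",   # hypothetical team name
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_sales_etl",      # hypothetical pipeline name
    default_args=default_args,
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a PySpark job that reads raw data from HDFS, transforms it,
    # and writes the result to a Hive table (script path is a placeholder).
    transform_sales = SparkSubmitOperator(
        task_id="transform_sales",
        application="/opt/etl/jobs/transform_sales.py",
        conn_id="spark_default",
        application_args=["--run-date", "{{ ds }}"],
    )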
Required Skills & Experience
3+ years of experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.
Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.
If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!
Job Types: Full-time, Contractual / Temporary
Pay: ₹400,000.00 - ₹1,100,000.00 per year
Schedule:
Day shift
Monday to Friday
Morning shift
Application Question(s):
How many years of experience do you have in Big Data ETL?
How many years of experience do you have in Hadoop?
Are you willing to work on a contractual basis?
Are you comfortable working on third-party payroll?
Are you from Delhi?
What is your notice period at your current company?
Work Location: In person