DevOps Big Data Engineer


Job Description

Location: Gurugram



About Us




We empower enterprises globally through intelligent, creative, and insightful services for data integration, data analytics and data visualization.
Hoonartek is a leader in enterprise transformation and data engineering, and an acknowledged world-class Ab Initio delivery partner.
Using centuries of cumulative experience, research and leadership, we help our clients eliminate the complexities & risk of legacy modernization and safely deliver big data hubs, operational data integration, business intelligence, risk & compliance solutions and traditional data warehouses & marts.
At Hoonartek, we work to ensure that our customers, partners and employees all benefit from our unstinting commitment to delivery, quality and value. Hoonartek is increasingly the choice for customers seeking a trusted partner of vision, value and integrity.

How We Work



Define, Design and Deliver (D3) is our in-house delivery philosophy. It's culled from agile and rapid methodologies and focused on 'just enough design'. We embrace this philosophy in everything we do, leading to numerous client success stories and indeed to our own success.
We embrace change, empowering and trusting our people and building long and valuable relationships with our employees, our customers and our partners. We work flexibly, even adopting traditional/waterfall methods where circumstances demand it. At Hoonartek, the focus is always on delivery and value.




Responsibilities

1. Collaborate with cross-functional teams to design, implement, and maintain robust and scalable Big Data infrastructure.
2. Build and manage CI/CD pipelines for deploying and monitoring Big Data applications and services.
3. Implement automation scripts for provisioning, configuration, and orchestration of Big Data clusters using tools such as Ansible or Apache Ranger.
4. Ensure high availability, performance, and security of Big Data platforms.
5. Troubleshoot and resolve issues related to infrastructure, applications, and data pipelines.
6. Collaborate with data engineers and data scientists to optimize and streamline data processing workflows.
7. Implement and maintain monitoring and logging solutions for Big Data applications and infrastructure components (a minimal health-check sketch follows this list).
8. Evaluate and adopt new tools and technologies to enhance the efficiency of the DevOps processes.
9. Provide support to development teams in areas such as environment setup, debugging, and performance tuning.
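
To give a concrete flavour of the day-to-day automation behind items 4, 5 and 7, here is a minimal health-check sketch in Python. The hostnames and service list are placeholders, not Hoonartek's actual endpoints; a real setup would read them from configuration and feed results into the team's monitoring stack.

# health_check.py -- illustrative sketch only; endpoints below are placeholders.
import logging
import sys

import requests  # assumed to be available in the automation environment

SERVICES = {
    "yarn-resourcemanager": "http://rm.example.internal:8088/ws/v1/cluster/info",
    "trino-coordinator": "http://trino.example.internal:8080/v1/info",
    "airflow-webserver": "http://airflow.example.internal:8080/health",
}

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def check(name, url):
    """Return True if the service answers HTTP 200 within 5 seconds."""
    try:
        response = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        logging.error("%s unreachable: %s", name, exc)
        return False
    logging.info("%s -> HTTP %s", name, response.status_code)
    return response.status_code == 200

def main():
    failures = [name for name, url in SERVICES.items() if not check(name, url)]
    if failures:
        logging.error("unhealthy services: %s", ", ".join(failures))
        return 1  # non-zero exit lets a scheduler or CI job raise an alert
    return 0

if __name__ == "__main__":
    sys.exit(main())

A script like this would typically run as a scheduled job or behind an exporter, so that item 7's monitoring and alerting can act on the results.
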
Requirements

1. Bachelor's degree in Computer Science, Engineering, or a related field.
2. Proven experience (3-6 years) working as a DevOps Engineer with a focus on Big Data technologies.
3. Strong expertise in deploying and managing Big Data frameworks such as Hadoop, Spark, Kafka, Hue, Airflow, Trino, etc.
4. Experience with containerization and orchestration tools such as Docker and Kubernetes.
5. Proficiency in scripting languages (e.g., Python, Bash) for automation tasks (see the deployment sketch after this list).
6. Hands-on experience with configuration management tools like Ansible.
7. Strong hands-on experience with version control systems (e.g., Git) and CI/CD tools (e.g., Jenkins, GitLab CI).
8. Understanding of security best practices for Big Data environments.
9. Excellent problem-solving and communication skills.
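
As an illustration of the scripting, container and CI/CD skills listed in points 4-7, below is a hypothetical Python wrapper around docker and kubectl of the kind a deployment stage might call. The image name, registry and deployment are invented placeholders, not an actual Hoonartek pipeline.

# deploy.py -- hypothetical deployment step: build, push, and roll out an image.
import subprocess
import sys

IMAGE = "registry.example.internal/spark-etl:latest"  # placeholder registry/tag
DEPLOYMENT = "spark-etl"                               # placeholder Kubernetes deployment
CONTAINER = "spark-etl"                                # placeholder container name

def run(cmd):
    """Echo a command, run it, and abort the pipeline if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    run(["docker", "build", "-t", IMAGE, "."])
    run(["docker", "push", IMAGE])
    # Update the running deployment and wait for the rollout to complete.
    run(["kubectl", "set", "image", "deployment/" + DEPLOYMENT, CONTAINER + "=" + IMAGE])
    run(["kubectl", "rollout", "status", "deployment/" + DEPLOYMENT, "--timeout=300s"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)

In practice a CI tool such as Jenkins or GitLab CI would invoke a step like this after tests pass, which is the kind of pipeline ownership the role describes.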



Job Detail

  • Job Id: JD3655521
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: HR, IN, India
  • Education: Not mentioned
  • Experience: 3-6 years