Senior PySpark Engineer (1 Year Renewable Contract)

TS, IN, India

Job Description

Role: Senior PySpark Engineer

Experience Required: 8+ Years

Work Location: Hyderabad (5 Days Work from Office)
Job Type: Contract to Hire (1 Year, Renewable)
Notice Period: Immediate to 15 Days Max
Mode of Interview: Virtual
We are seeking a highly skilled PySpark Data Engineer to design, build, and optimize large-scale data pipelines and distributed systems. Beyond deep expertise in Apache Spark (PySpark) and automation, this role requires the ability to manage stakeholders, ensure timely delivery, and assess requirements. You will play a critical role in bridging business needs with technical execution, ensuring high-quality, scalable, and reliable data solutions. Cloudera PySpark experience is preferred.

KEY RESPONSIBILITIES:



  • Architect and guide the refactoring of legacy PySpark scripts into modular, reusable, and configuration-driven frameworks aligned with enterprise standards.
  • Lead migration efforts to Spark 3.3+ and Python 3.10+, ensuring compatibility, performance, and maintainability across distributed systems.
  • Drive modernization by replacing deprecated APIs (e.g., RDDs, legacy UDFs) with efficient DataFrame operations and Pandas UDFs, promoting best practices.
  • Establish and enforce structured logging, robust error handling, and proactive alerting mechanisms for operational resilience.
  • Oversee performance tuning, including partitioning strategies, broadcast joins, and predicate pushdown, to optimize Spark execution plans.
  • Ensure data integrity through schema enforcement, data type consistency, and accurate implementation of Slowly Changing Dimensions (SCD) logic.
  • Collaborate with DevOps and QA teams to integrate Spark workloads into CI/CD pipelines and automated testing frameworks.
  • Mentor team members and conduct code reviews, providing technical guidance and resolving complex findings to uphold code quality and team growth.
  • Lead performance benchmarking and regression testing initiatives to validate the scalability and reliability of Spark applications.
  • Coordinate deployment planning, runbook creation, and production handover, ensuring smooth transitions and operational readiness.
  • Engage with stakeholders to translate business requirements into scalable data processing solutions and contribute to data platform strategy.

Educational Qualification:



Graduate/Master's degree in Software Engineering, IT, Computer Science, or equivalent.

Technical Skills:



PySpark Development (5-7 Years)



Refactoring legacy scripts, using DataFrame APIs, avoiding .collect() or equivalent driver-side actions

Spark Optimization (3-5 Years)



Broadcast joins, partitioning strategy, predicate pushdown

PySpark Migration Activity (2 Years)



Prior experience with PySpark migration activities.

Testing Frameworks (1+ Years)



Pytest, Great Expectations, and Deequ for unit/integration/performance testing

Job Type: Contractual / Temporary
Contract length: 12 months

Pay: ₹600,000.00 - ₹2,700,000.00 per year

Work Location: In person



Job Detail

  • Job Id
    JD4776952
  • Total Positions
    1
  • Job Type
    Contract to Hire (1 Year, Renewable)
  • Job Location
    TS, IN, India
  • Experience
    8+ Years