Databricks + PySpark

Location: TS, IN, India

Job Description

Skills: Databricks + PySpark



Experience: 4 to 13 years



We are looking for a highly skilled Data Engineer with expertise in PySpark and Databricks to design, build, and optimize scalable data pipelines for processing massive datasets.



Key Responsibilities:



  • Build & Optimize Pipelines: Develop high-throughput ETL workflows using PySpark on Databricks.
  • Data Architecture & Engineering: Work on distributed computing solutions, optimize Spark jobs, and build efficient data models.
  • Performance & Cost Optimization: Fine-tune Spark configurations, optimize Databricks clusters, and reduce compute and storage costs.
  • Collaboration: Work closely with Data Scientists, Analysts, and DevOps teams to ensure data reliability.
  • ETL & Data Warehousing: Implement scalable ETL processes for structured and unstructured data.
  • Monitoring & Automation: Implement logging, monitoring, and alerting mechanisms for data pipeline health and fault tolerance.

Beware of fraud agents! Do not pay money to get a job.

MNCJobsIndia.com will not be responsible for any payment made to a third party. All Terms of Use are applicable.


Job Detail

  • Job Id
    JD4593653
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    TS, IN, India
  • Education
    Not mentioned
  • Experience
    4 to 13 years