Hiring for Databricks (Hyderabad)
 Design and build scalable data pipelines using Databricks, Apache Spark, and Delta Lake. 
Develop and optimize ETL/ELT processes to ingest data from multiple sources (structured, semi-structured, and unstructured).
 Write and maintain clean, reusable code in PySpark, SQL, and optionally Scala or Java. 
 Implement data quality checks, validation frameworks, and pipeline monitoring. 
 Collaborate with data scientists, analysts, and business stakeholders to understand data needs. 
 Integrate Databricks workflows with cloud platforms (AWS, Azure, or GCP). 
 Tune and optimize Spark jobs for performance and cost-efficiency. 
Maintain documentation of data flows, schema designs, and system architecture.