Design, develop, and maintain data pipelines using Apache Spark on Databricks (see the pipeline sketch after this list)
Implement ETL/ELT workflows across structured and semi-structured data sources
Optimize Spark jobs and Databricks clusters for performance and cost-efficiency (see the tuning sketch after this list)
Collaborate with cross-functional teams to gather requirements and deliver data solutions
Ensure data quality, integrity, and security across all stages of the pipeline (see the data-quality sketch after this list)
Integrate with cloud services such as Azure Data Factory, AWS Glue, or GCP Dataflow
Document technical processes and contribute to knowledge sharing
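The pipeline and ETL/ELT responsibilities above correspond to common Databricks patterns. As a rough illustration only, the following PySpark sketch reads semi-structured JSON, flattens a nested field, and writes a Delta table; the paths and column names (such as /mnt/raw/orders/ and customer.id) are hypothetical placeholders, not part of this role's actual stack.

# Minimal ETL sketch: ingest semi-structured JSON, flatten a nested field,
# and write the result as a partitioned Delta table.
# All paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in cloud storage (hypothetical path)
raw = spark.read.json("/mnt/raw/orders/")

# Transform: flatten a nested customer struct and derive a date column
orders = (
    raw
    .withColumn("customer_id", F.col("customer.id"))
    .withColumn("order_date", F.to_date("order_timestamp"))
    .drop("customer")
)

# Load: write to a Delta table partitioned by order date
(
    orders.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders/")
)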
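For the performance and cost item, one common approach is to enable Spark's adaptive query execution, broadcast small dimension tables, and control output partitioning. The sketch below assumes hypothetical sales_facts and product_dim tables and is not a prescribed tuning recipe for this role.

# Performance-tuning sketch: adaptive query execution, a broadcast join,
# and fewer output files before the write. Table names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Adaptive query execution lets Spark coalesce shuffle partitions
# and mitigate skewed joins at runtime
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")

facts = spark.table("sales_facts")   # hypothetical large fact table
dims = spark.table("product_dim")    # hypothetical small dimension table

# Broadcasting the small table avoids a full shuffle join
joined = facts.join(F.broadcast(dims), on="product_id", how="left")

# Coalesce partitions before writing to avoid many small files
(
    joined.coalesce(64)
    .write.format("delta")
    .mode("overwrite")
    .saveAsTable("sales_enriched")
)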
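For the data-quality item, a lightweight pre-publish validation might look like the following; the key columns (customer_id, order_id) and the Delta path are assumptions for illustration, and a production setup would typically use a dedicated framework or Delta Live Tables expectations instead of bare assertions.

# Data-quality sketch: simple null and duplicate checks before publishing.
# Checked columns and the input path are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/orders/")

# Fail fast if required keys are missing or duplicated
null_keys = df.filter(F.col("customer_id").isNull()).count()
dup_keys = (
    df.groupBy("order_id").count()
    .filter(F.col("count") > 1)
    .count()
)

assert null_keys == 0, f"{null_keys} rows with null customer_id"
assert dup_keys == 0, f"{dup_keys} duplicated order_id values"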