Hiring for Databricks Engineer (Bangalore)
Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake.
Build and optimize data pipelines for large-scale data ingestion, transformation, and processing.
Collaborate with data analysts, data scientists, and business teams to deliver reliable and high-performing datasets.
Develop data models, manage data lakehouse architectures, and ensure data quality, lineage, and governance.
Integrate data from multiple sources, including Azure Data Lake, SQL databases, APIs, and streaming systems.
Automate and orchestrate workflows using Databricks Workflows, Azure Data Factory (ADF), or Airflow.
Optimize Spark jobs for performance and cost efficiency.
Implement monitoring, logging, and alerting for data pipelines and jobs.
Ensure compliance with data security and governance policies.