Responsibilities:
Design and implement ETL/ELT pipelines using Databricks and PySpark.
Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
Develop high-performance SQL queries and optimize Spark jobs.
Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
Ensure data quality and compliance across all stages of the data lifecycle.
Implement best practices for data security and lineage within the Databricks ecosystem.
Apply CI/CD, version control, and testing practices to data pipelines.
Required Skills:
Proven experience with Databricks and Unity Catalog (data permissions, lineage, and auditing).
Strong hands-on skills with PySpark and Spark SQL.
Solid experience writing and optimizing complex SQL queries.
Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
Experience with cloud platforms such as Azure or AWS.
Understanding of data governance, RBAC, and data security standards.