Write optimized, scalable SQL queries for large datasets
Support dimensional modeling, data marts, and curated datasets for analytics use cases
Governance & Security
Implement Unity Catalog for access control, lineage, and data auditing
Work with Azure AD, Key Vault, and policies for securing access to data and secrets
DevOps & CI/CD Automation
Collaborate on CI/CD pipeline setup for Databricks notebooks, jobs, and ADF pipelines using Azure DevOps
Apply Git-based version control practices and deployment automation for data workloads
Contribute to infrastructure-as-code using Terraform or Bicep
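As a rough illustration of the CI/CD responsibilities above, an Azure DevOps pipeline deploying version-controlled Databricks notebooks might look like the sketch below (file paths, variable names, and the workspace target are illustrative assumptions, not a prescribed setup):

```yaml
# azure-pipelines.yml -- illustrative sketch only; paths and variable
# names are assumptions.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'

  # Install the Databricks CLI used to push notebooks into the workspace
  - script: pip install databricks-cli
    displayName: Install Databricks CLI

  # Import the repo's notebooks directory into a shared workspace folder
  # (host and token are supplied as secret pipeline variables)
  - script: databricks workspace import_dir notebooks /Shared/etl --overwrite
    env:
      DATABRICKS_HOST: $(databricksHost)
      DATABRICKS_TOKEN: $(databricksToken)
    displayName: Deploy notebooks
```

Keeping the deployment in a pipeline like this means every notebook change reaches the workspace only through a reviewed Git commit, which is the Git-based versioning practice the role calls for.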
Must-Have Qualifications
5+ years of hands-on experience in data engineering roles
Strong experience with Azure Databricks and Apache Spark (PySpark)
Strong proficiency in Python for data engineering and scripting tasks
Experience building pipelines in Azure Data Factory or Synapse Pipelines
Proficiency with SQL, Delta Lake, ADLS Gen2, and Databricks Notebooks
Solid understanding of medallion architecture and data lake principles
Experience working with structured and semi-structured data
Exposure to CI/CD tools (Azure DevOps, Git) and Infrastructure-as-Code
Strong problem-solving, debugging, and optimization skills
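The medallion architecture mentioned above (bronze raw, silver cleaned, gold curated) can be shown with a toy plain-Python sketch; in practice these layers would be PySpark jobs writing Delta Lake tables, and every field and function name here is hypothetical:

```python
# Toy illustration of medallion (bronze/silver/gold) layering.
# Real pipelines would use PySpark and Delta tables; this only shows
# the shape of the refinement steps. All names are hypothetical.

def bronze_ingest(raw_records):
    """Bronze: land raw records as-is, tagging each with its source."""
    return [{**r, "_source": "orders_api"} for r in raw_records]

def silver_clean(bronze):
    """Silver: drop malformed rows and normalise types."""
    cleaned = []
    for r in bronze:
        if r.get("order_id") is None:
            continue  # discard rows that fail basic validation
        cleaned.append({**r, "amount": float(r["amount"])})
    return cleaned

def gold_aggregate(silver):
    """Gold: curated, analytics-ready aggregate (revenue per customer)."""
    totals = {}
    for r in silver:
        totals[r["customer"]] = totals.get(r["customer"], 0.0) + r["amount"]
    return totals

raw = [
    {"order_id": 1, "customer": "acme", "amount": "10.5"},
    {"order_id": None, "customer": "acme", "amount": "3.0"},  # malformed
    {"order_id": 2, "customer": "acme", "amount": "4.5"},
]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))  # {'acme': 15.0}
```

The point of the layering is that each stage only refines the previous one, so raw data stays replayable while analytics consumers read only the gold layer.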
Nice-to-Have Skills
Experience with Unity Catalog, Delta Live Tables (DLT), or Delta Sharing
Familiarity with Azure Synapse SQL Pools (Dedicated/Serverless)
Understanding of network security (Private Link, Key Vault, AAD integration)
Working knowledge of Terraform or ARM templates
Experience in data governance and compliance frameworks
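For the Terraform point above, a minimal fragment declaring an ADLS Gen2 storage account gives a feel for the kind of infrastructure-as-code involved (resource names, region, and settings are illustrative assumptions):

```hcl
# Illustrative Terraform fragment -- names, region, and settings are assumptions.
resource "azurerm_resource_group" "data" {
  name     = "rg-dataplatform-dev"
  location = "eastus"
}

# ADLS Gen2 is a storage account with the hierarchical namespace enabled
resource "azurerm_storage_account" "lake" {
  name                     = "stdatalakedev001"
  resource_group_name      = azurerm_resource_group.data.name
  location                 = azurerm_resource_group.data.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  is_hns_enabled           = true
}
```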
Soft Skills
Proactive communication with remote/distributed teams
Ability to work under deadlines and adapt in a fast-paced environment
Strong documentation and collaboration practices