This role requires hands-on expertise in Databricks, Apache Spark, PySpark, and cloud data platforms (Azure/AWS). It focuses on designing and delivering scalable data engineering solutions, ETL pipelines, and enterprise data architectures for analytics and reporting. This is an immediate hiring requirement for a hybrid work model.
Location
Gurugram, Chennai, Hyderabad, Bengaluru
Hybrid Role - 3 Days Office / 2 Days Work From Home
Job Type
Contract to Hire (C2H) | Full-Time
UK Shift
Experience
10 - 18 Years
Key Responsibilities
Design and implement enterprise-scale data architecture using Databricks
Build, optimize, and maintain ETL / ELT pipelines using Apache Spark and PySpark
Develop and manage Delta Lake-based data platforms
Ensure data quality, performance, scalability, and reliability
Optimize SQL queries and Spark workloads for cost and performance
Implement data governance, security, and access controls
Collaborate with data engineers, data scientists, analysts, and business teams
Lead architectural decisions and mentor junior data engineers
Implement CI/CD pipelines for data engineering workflows
Work with cloud data platforms on Azure or AWS
Required Skills (Must Have)
Databricks (Hands-on, enterprise experience)
Apache Spark & PySpark
Data Engineering & Data Architecture
ETL / ELT Pipeline Development
Data Modeling & SQL
Preferred Skills
Azure Databricks or AWS Databricks
Delta Lake architecture
Data Warehousing (Snowflake, Redshift, or Synapse)
Azure Data Factory or Apache Airflow
Cloud-based analytics platforms
Nice to Have Skills
Machine Learning pipelines
CI/CD for data engineering
Terraform / Infrastructure as Code
Data governance, compliance, and security frameworks