Lead Assistant Manager

Noida, Uttar Pradesh, India

Job Description:

Job Title: Databricks Engineer
Location: NCR / Bengaluru
Job Type: Full-time
Experience Level: 4+ years in data engineering with a strong focus on Databricks
Domain: Healthcare
Job Summary:
We are seeking a highly skilled and motivated Databricks Engineer to join our data engineering team. The ideal candidate will have strong experience in designing, developing, and optimizing large-scale data pipelines and analytics solutions using the Databricks Unified Analytics Platform, Apache Spark, Delta Lake, Data Factory and modern data lake/lakehouse architectures.
You will work closely with data architects, data scientists, and business stakeholders to enable high-quality, scalable, and reliable data processing frameworks that support business intelligence, advanced analytics, and machine learning initiatives.
Key Responsibilities:

  • Design and implement batch and real-time ETL/ELT pipelines using Databricks and Apache Spark (see the illustrative sketch after this list).
  • Ingest, transform, and deliver structured and semi-structured data from diverse data sources (e.g., file systems, databases, APIs, event streams).
  • Develop reusable Databricks notebooks, jobs, and libraries for repeatable data workflows.
  • Implement and manage Delta Lake solutions to support ACID transactions, time-travel, and schema evolution.
  • Ensure data integrity through validation, profiling, and automated quality checks.
  • Apply data governance principles, including access control, encryption, and data lineage, using available tools (e.g., Unity Catalog, external metadata catalogs).
  • Work with data scientists and analysts to deliver clean, curated, and analysis-ready data.
  • Profile and optimize Spark jobs for performance, scalability, and cost.
  • Monitor, debug, and troubleshoot data pipelines and distributed processing issues.
  • Set up alerting and monitoring for long-running or failed jobs.
  • Participate in the CI/CD lifecycle using tools like Git, GitHub Actions, Jenkins, or Azure DevOps.
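
For illustration only (not a requirement of this posting), the sketch below shows the kind of batch ETL step these responsibilities describe, assuming a Databricks (PySpark) environment where Delta Lake is available; the paths, dataset, and column names are hypothetical placeholders.

  # Minimal illustrative batch ETL step: ingest semi-structured data,
  # apply a basic quality filter, and append the result to a Delta table.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

  # Ingest raw JSON events (hypothetical source path).
  raw = spark.read.json("/mnt/raw/patient_events/")

  # Basic cleansing plus an ingestion-date column for downstream partitioning.
  curated = (
      raw
      .filter(F.col("event_id").isNotNull())
      .withColumn("ingest_date", F.current_date())
  )

  # Append to a Delta table; mergeSchema permits additive schema evolution.
  (
      curated.write
      .format("delta")
      .mode("append")
      .option("mergeSchema", "true")
      .save("/mnt/curated/patient_events")
  )
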
Required Skills & Experience:
  • 4+ years of experience in data engineering.
  • Strong hands-on experience with Apache Spark (DataFrames, Spark SQL, RDDs, Structured Streaming).
  • Proficient in Python (PySpark) and SQL for data processing and transformation.
  • Understanding of cloud environments (Azure and AWS).
  • Solid understanding of Delta Lake, Data Factory, and Lakehouse architecture.
  • Experience working with various data formats such as Parquet, JSON, Avro, and CSV.
  • Familiarity with DevOps practices, version control (Git), and CI/CD pipelines for data workflows.
  • Experience with data modeling, dimensional modeling, and data warehouse concepts.

Job Detail

  • Job Id: JD3890038
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: Noida, Uttar Pradesh, India
  • Education: Not mentioned
  • Experience: Year