5-6 years of total experience in data engineering or big data development.
2-3 years of hands-on experience with Databricks and Apache Spark.
Proficient in AWS cloud services (S3, Glue, Lambda, EMR, Redshift, CloudWatch, IAM).
Strong programming skills in Python and PySpark; Scala is a plus.
Solid understanding of data lakes, lakehouses, and Delta Lake concepts.
Experience in SQL development and performance tuning.
Familiarity with Airflow, dbt, or similar orchestration tools is a plus.
Experience with CI/CD tools such as Jenkins, GitHub Actions, or AWS CodePipeline.
Knowledge of data security, governance, and compliance frameworks.