We are looking for a Data Engineer with 1-2 years of hands-on experience in building scalable data pipelines and working with cloud technologies. You'll be part of a collaborative delivery team focused on transforming raw data into clean, usable datasets that drive analytics and business insights.
Key Responsibilities:
Build and maintain ETL/ELT pipelines for structured and unstructured data
Work with cloud services like AWS (Glue, S3, Redshift) or similar platforms
Optimize data workflows using PySpark, SQL, or equivalent tools
Implement basic data security and governance best practices
Monitor and troubleshoot data jobs and ensure reliability
Collaborate with analytics and business teams to understand data needs
Document workflows, pipelines, and architecture clearly
Contribute to automation and process improvement efforts
Skills & Qualifications:
1-2 years of experience in data engineering or a similar role
Proficient in SQL and Python (or Scala)
Hands-on experience with PySpark, Spark, or other big data tools
Familiarity with AWS, Azure, or GCP (Glue, S3, Redshift, BigQuery, etc.)
Exposure to ETL orchestration tools like Airflow or Azure Data Factory
Basic knowledge of CI/CD tools and version control (e.g., Git)
Understanding of data quality, governance, and security practices
Bachelor's degree in Computer Science, Engineering, or related field
Cloud certifications (AWS/Azure/GCP) are a plus
Job Types: Full-time, Permanent
Pay: ₹250,000.00 - ₹300,000.00 per year
Benefits:
Health insurance
Provident Fund
Work Location: In person