We are seeking an experienced data engineer to lead complex data workflow migrations and build scalable data architecture. You will design, optimize, and maintain the data pipelines that power our business analytics and data-driven decision-making. This role demands deep knowledge of both NoSQL (MongoDB) and relational databases, along with experience streamlining CI/CD processes for data workloads.
Key Responsibilities
Lead the migration of Jenkins-based data pipelines to Apache Airflow, improving orchestration, scheduling, and reliability.
Design and implement robust, scalable data workflows using Airflow and Python, ensuring reliability, modularity, and fault tolerance.
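As a flavor of the fault-tolerance work involved, the sketch below shows a minimal retry decorator for pipeline tasks in plain Python. The names (`with_retries`, `max_attempts`) are illustrative, not from any specific codebase; in practice Airflow's own task-level `retries` setting would typically handle this.

```python
import time
from functools import wraps

def with_retries(max_attempts=3, delay_seconds=0.0):
    """Retry a task function up to max_attempts times before giving up."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # exhausted retries: surface the failure
                    time.sleep(delay_seconds)  # back off before retrying
        return wrapper
    return decorator
```

A transient extraction failure wrapped this way succeeds on a later attempt instead of failing the whole run.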
Architect and optimize data storage solutions using MongoDB (NoSQL) as well as MySQL / PostgreSQL (relational), balancing performance and cost.
Collaborate with cross-functional teams (data scientists, analysts, engineers) to build integrated data pipelines and delivery mechanisms.
Implement data validation and quality checks to ensure accuracy, consistency, and reliability across all processes.
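To illustrate the kind of validation meant here, a minimal quality-check sketch in plain Python follows. The function name and check set are hypothetical examples (required-field presence and key uniqueness); production pipelines would layer richer checks on top.

```python
def run_quality_checks(rows, required_fields, unique_key):
    """Collect validation errors: missing required fields and duplicate keys."""
    errors = []
    seen = set()
    for i, row in enumerate(rows):
        # Required fields must be present and non-empty.
        for field in required_fields:
            if row.get(field) in (None, ""):
                errors.append(f"row {i}: missing {field}")
        # The unique key must not repeat across rows.
        key = row.get(unique_key)
        if key in seen:
            errors.append(f"row {i}: duplicate {unique_key}={key}")
        seen.add(key)
    return errors
```

Running this over a batch before loading lets the pipeline fail fast on bad data rather than propagating it downstream.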
Monitor, profile, and tune data workflows for performance, throughput, and resource utilization.
Build and maintain CI/CD practices specifically tailored for data pipelines, enabling rapid delivery and stable deployments.
Document data workflows, architecture patterns, and processes to support maintainability and knowledge sharing.
Stay current on data engineering best practices, emerging tools, and orchestration frameworks.
Required Skills & Expertise
Strong experience with Apache Airflow and writing DAGs in Python.
Proficiency in Python for building data ingestion and transformation logic.
Experience migrating or managing CI/CD pipelines (e.g., Jenkins to Airflow).
Deep understanding of MongoDB (NoSQL) as well as MySQL / PostgreSQL or other relational databases.
Expertise in designing, building, and maintaining data pipelines and workflow orchestration systems.
Solid experience in ensuring data quality, reliability, and performance in production pipelines.
Strong problem-solving, debugging, and analytical skills.
Excellent communication and collaboration skills, with the ability to work across cross-functional teams.
Nice-to-Have
Experience with Docker, AWS, or GCP.
Familiarity with Spark for large-scale data processing.
Experience using Git and REST APIs in a data engineering context.
Understanding of data governance, data lineage, and metadata management.
Soft Skills
Strong analytical thinking and a problem-solving mindset.
Excellent ownership: the ability to lead and drive data architecture initiatives.
Good communication skills to articulate complex data ideas to technical and non-technical stakeholders.
Team player, able to collaborate effectively with engineers, data scientists, and product teams.
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,500,000.00 per year
Work Location: In person