IT / Data / AI / E-commerce / FinTech / Healthcare
Notice Period: Immediate
What We Are Looking For
Proven experience leading data engineering teams with strong ownership of web crawling systems and pipeline architecture.
Expertise in designing, building, and optimizing scalable data pipelines, preferably using workflow orchestration tools such as Airflow or Celery (an illustrative orchestration sketch follows this list).
Hands-on proficiency in Python and SQL for data extraction, transformation, processing, and storage.
Experience working with cloud platforms such as AWS, GCP, or Azure for data infrastructure, deployments, and pipeline operations.
Deep understanding of web crawling frameworks, proxy rotation, anti-bot strategies, session handling, and compliance with global data collection standards (GDPR/CCPA-safe crawling).
Strong expertise in AI-driven automation, including integrating AI agents or frameworks like Crawl4ai into scraping, validation, and pipeline workflows.
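For context on the pipeline-orchestration expectations above, below is a minimal, hypothetical Airflow sketch of a crawl-transform-load flow. The DAG id, schedule, and placeholder callables are illustrative assumptions only and do not describe this team's actual pipelines.

```python
# Minimal, illustrative Airflow DAG: crawl -> transform -> load.
# All task names and callables below are placeholder assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw pages from the crawler's output store.
    return ["raw-page-1", "raw-page-2"]


def transform(**context):
    # Placeholder: parse and normalize the crawled pages.
    raw = context["ti"].xcom_pull(task_ids="extract")
    return [page.upper() for page in raw]


def load(**context):
    # Placeholder: write the cleaned records to the warehouse.
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="crawl_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # assumes Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

The `>>` chaining only declares task ordering; in practice the extract step would typically fan out per site or per URL batch.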
Responsibilities
Lead and mentor data engineering and web crawling teams, ensuring high-quality delivery and adherence to best practices.
Architect, implement, and optimize scalable data pipelines that support high-volume data ingestion, transformation, and storage.
Build and maintain robust crawling systems using modern frameworks, handling IP rotation, throttling, and dynamic content extraction (an illustrative fetcher sketch follows this list).
Establish pipeline orchestration using Airflow, Celery, or similar distributed processing technologies.
Define and enforce data quality, validation, and security measures across all data flows and pipelines.
Collaborate with product, engineering, and analytics teams to translate data requirements into scalable technical solutions.
Develop monitoring, logging, and performance metrics to ensure high availability and reliability of data systems.
Oversee cloud-based deployments, cost optimization, and infrastructure improvements on AWS/GCP/Azure.
Integrate AI agents or LLM-based automation for tasks such as error resolution, data validation, enrichment, and adaptive crawling.
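As an illustration of the crawling concerns listed above (proxy rotation, throttling, retries), here is a minimal, hypothetical fetcher sketch using the `requests` library. The proxy endpoints, user agent, and backoff values are placeholder assumptions; a production crawler would also handle robots.txt, session reuse, and site-specific rate limits.

```python
# Illustrative only: a polite fetcher with rotating proxies, throttling,
# and retry/backoff. Proxy endpoints and delays are placeholder assumptions.
import itertools
import random
import time

import requests

PROXIES = [  # placeholder proxy endpoints
    "http://proxy-1.example.com:8080",
    "http://proxy-2.example.com:8080",
]
proxy_pool = itertools.cycle(PROXIES)


def fetch(url: str, max_retries: int = 3) -> str | None:
    """Fetch a URL through a rotating proxy with exponential backoff."""
    for attempt in range(max_retries):
        proxy = next(proxy_pool)
        try:
            resp = requests.get(
                url,
                proxies={"http": proxy, "https": proxy},
                headers={"User-Agent": "example-crawler/0.1"},
                timeout=10,
            )
            if resp.status_code == 200:
                return resp.text
        except requests.RequestException:
            pass
        # Exponential backoff with jitter before rotating to the next proxy.
        time.sleep(2 ** attempt + random.random())
    return None


if __name__ == "__main__":
    html = fetch("https://example.com")
    print("fetched" if html else "failed")
```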
Qualifications
Bachelor's or Master's degree in Engineering, Computer Science, or a related field.
7-12 years of relevant experience in data engineering, pipeline design, or large-scale web crawling systems.
Strong expertise in Python, SQL, and modern data processing practices.
Experience working with Airflow, Celery, or similar workflow automation tools.
Solid understanding of proxy systems, anti-bot techniques, and scalable crawler architecture.
Hands-on experience with cloud data platforms (AWS/GCP/Azure).
Experience with AI/LLM frameworks (Crawl4ai, LangChain, LlamaIndex, AutoGen, OpenAI, or similar); an illustrative validation sketch follows this list.
Strong analytical, architectural, and leadership skills.
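To illustrate the kind of LLM-assisted validation referenced above, here is a minimal, hypothetical sketch using the OpenAI Python client. The model name, prompt, and validation criterion are placeholder assumptions; Crawl4ai, LangChain, or a similar framework could fill the same role.

```python
# Hypothetical sketch: asking an LLM to sanity-check a scraped record.
# The model name, prompt, and record fields are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def validate_record(record: dict) -> bool:
    """Ask the model whether a scraped record looks internally consistent."""
    prompt = (
        "Does this scraped product record look valid and internally "
        "consistent? Answer only YES or NO.\n" + json.dumps(record)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")


if __name__ == "__main__":
    print(validate_record({"title": "Blue Kettle", "price": -12.0}))
```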
Job Type: Full-time
Pay: ₹1,000,000.24 - ₹2,000,000.04 per year
Benefits:
Health insurance
Provident Fund
Work from home
Application Question(s):
How many years of hands-on experience do you have in data engineering?
How many years of hands-on experience do you have building and delivering data engineering projects?
How many years of hands-on experience do you have in web crawling or scraping systems?
How many years have you worked hands-on with Python and SQL in production environments?
How many years of hands-on experience do you have with Airflow, Celery, or any pipeline orchestration tools?
How many years of hands-on experience do you have with Crawl4AI for automated scraping?
Notice period (days)
Current CTC (monthly)
Work Location: Remote