ASIGN is looking for an experienced Data Engineer with a strong grasp of ELT architecture to join their team and help build and maintain robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure.
The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making it accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving ASIGN's data architecture and ensuring that data flows efficiently and securely.
About ASIGN:
At ASIGN, we are revolutionizing the art sector with our innovative digital solutions. We are a passionate and dynamic startup dedicated to enhancing the art experience through technology. Join us in creating cutting-edge products that empower artists and art enthusiasts worldwide.
Please note:
The vetting process for this role comprises 2-3 rounds of interviews and may be followed by a brief assignment.
Festivals From India is hiring for this role on behalf of ASIGN.
This is an on-site, full-time position based in Chennai.
The salary band for this role is available upon request.
Essential Requirements:
Minimum 5 years of hands-on experience in data engineering.
Solid understanding and experience with ELT pipelines and modern data stack tools.
Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.).
Proficiency in Python and SQL.
Experience working with APIs and data integration from multiple sources.
Familiarity with one or more cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
Strong problem-solving and debugging skills.
Essential Qualifications:
Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field.
Proven experience (5+ years) in data engineering, data integration, and data management.
Hands-on experience with data sourcing tools and frameworks (e.g., Scrapy, BeautifulSoup, Selenium, Playwright).
Proficiency in Python and SQL for data manipulation and pipeline development.
Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g., Redshift, BigQuery, Snowflake).
Familiarity with workflow orchestration tools (e.g., Airflow, Prefect, Dagster).
Strong understanding of relational and non-relational databases (e.g., PostgreSQL, MongoDB).
Solid understanding of data modeling, ELT/ETL best practices, and data governance principles.
Working knowledge of containerization and hands-on experience with Docker.
Strong and creative problem-solving skills and the ability to think critically about data engineering solutions.
Effective communication and collaboration skills.
Ability to work independently and as part of a team in a fast-paced, dynamic environment.
Job Types: Full-time, Permanent