The role involves designing, building, and optimizing data pipelines as we transition multiple data sources and workloads to the Snowflake Cloud Data Platform. You will work with modern data engineering frameworks, contribute to data modelling efforts, and ensure high-quality, scalable, and well-documented data solutions.
Key Responsibilities
Design, develop, and maintain ETL/ELT pipelines for large-scale data ingestion and transformation.
Work extensively with Snowflake Cloud Data Warehouse, including schema design, performance optimization, and data governance.
Develop data processing scripts and automation workflows using Python (pandas/dask/vaex) and Airflow or similar orchestration tools.
Implement data modelling best practices (3NF, star schema, wide/tall tables) and metadata management processes.
Optimize SQL queries across different database engines and manage performance trade-offs.
Contribute to data quality, lineage, and governance integration (e.g., Collibra).
Collaborate with business stakeholders to gather requirements, translate them into technical specifications, and deliver end-to-end solutions.
Support agile ways of working, participate in ceremonies, and maintain relevant documentation and artifacts.
Work with source control (GitHub) and follow best practices for a shared codebase and CI/CD workflows.
Contribute to building pipelines that are robust, reliable, and support RBAC-based data access controls.
Required Skills & Experience
3+ years of experience in Data Engineering or a similar role.
Strong expertise with Snowflake, including schema design, warehouse configuration, and data product development.
Advanced SQL skills with experience writing optimized, high-performance queries.
Hands-on experience in Python for data processing, particularly with pandas or equivalent frameworks.
Experience with Airflow, DBT, or similar data orchestration/ELT frameworks.
Excellent understanding of ETL/ELT patterns, idempotency, and data engineering best practices.
Strong data modelling experience (3NF, dimensional modelling, semantic layers).
Familiarity with data governance and metadata cataloguing best practices.
Experience integrating data pipelines with enterprise access control / RBAC.
Working experience with GitHub or similar version control tools.
Ability to work with business stakeholders, gather requirements, and deliver scalable solutions.
Preferred / Nice to Have
Experience with AWS data services (S3, Glue, Lambda, IAM).
Knowledge of data virtualisation platforms, especially Denodo (cache management, query performance tuning).
Certifications in Snowflake, AWS, or Denodo.
Degree in Computer Science, Data Engineering, Mathematics, or related field (or equivalent professional experience).
Skills
Data Engineering, Airflow, Python, Snowflake
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.