This role involves re-engineering existing data logic, building efficient data pipelines, and optimizing query performance in Snowflake.
Key Responsibilities
Analyze and extract existing data logic, queries, and transformations from Redshift.
Rewrite and optimize SQL queries and data transformations in Snowflake.
Design and implement ETL/data pipelines to migrate and sync data (S3 to Snowflake using Snowpipe, bulk copy, etc.).
Ensure high performance through Snowflake-specific optimizations (clustering, caching, warehouse scaling).
Collaborate with cross-functional teams to validate data accuracy and business requirements.
Monitor, troubleshoot, and improve ongoing data workflows.
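For candidates unfamiliar with the bulk-copy path mentioned above, the core of S3-to-Snowflake ingestion is a COPY INTO statement run against an external stage. The sketch below builds such a statement in Python; the table name, stage name, and options are hypothetical, and in a real pipeline the statement would be executed through a Snowflake connection (e.g. snowflake-connector-python) or automated via Snowpipe.

```python
def build_copy_into(table: str, stage: str, file_format: str = "CSV") -> str:
    """Build a Snowflake COPY INTO statement for bulk-loading staged S3 files.

    `table` and `stage` are illustrative placeholders; the stage is assumed
    to already point at the S3 bucket holding the exported Redshift data.
    """
    return (
        f"COPY INTO {table} "
        f"FROM {stage} "
        f"FILE_FORMAT = (TYPE = {file_format} SKIP_HEADER = 1) "
        f"ON_ERROR = 'ABORT_STATEMENT'"
    )

# Example: load staged files into a hypothetical "events" table.
sql = build_copy_into("events", "@raw_stage")
print(sql)
```

Snowpipe automates the same COPY logic for continuous loading, triggering on new-file notifications instead of manual batch runs.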
Required Skills & Experience
5 - 8 years of experience in Data Engineering
Strong SQL expertise in both Redshift and Snowflake.
Proven experience in data migration projects, specifically Redshift to Snowflake.
Hands-on experience with ETL/data pipeline development (using Python, Airflow, Glue, dbt, or similar tools).
Solid understanding of the AWS ecosystem, particularly S3 to Snowflake ingestion.
Experience in performance tuning and optimization within Snowflake.
Strong problem-solving skills and ability to work independently.
Nice to Have
Experience with dbt, Airflow, AWS Glue, or other orchestration tools.
Knowledge of modern data architecture and best practices.
Work Mode:
Initially remote for 1-2 months, then onsite in Pune.
Job Types: Full-time, Permanent
Pay: Up to ₹1,800,000.00 per year
Benefits:
Health insurance
Provident Fund
Education:
Bachelor's (Required)
Experience:
Redshift: 2 years (Required)
Snowflake: 2 years (Required)