We are seeking an experienced Data Engineer to support analytics, reporting, and machine learning initiatives. You will play a key role in transforming raw data into structured, reliable datasets that power business decisions.
The ideal candidate has strong experience in data architecture, pipeline orchestration, and dimensional modeling, and is comfortable working in a fast-paced, data-driven environment.
Key Responsibilities:
1. Data Pipeline Development
Design, build, and maintain reliable, scalable, and efficient ETL/ELT data pipelines (see the illustrative sketch after this list).
Ingest data from various sources (APIs, databases, streaming platforms, etc.).
Monitor and troubleshoot data flows to ensure data integrity and timeliness.
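For illustration only, a minimal sketch of the kind of orchestrated pipeline this role covers, assuming Apache Airflow 2.4+ (Airflow is named under Qualifications below); the DAG, task, and dataset names are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from a source system (stubbed for the sketch).
    print("extracting from source API")


def load():
    # Land the raw records in the warehouse (stubbed for the sketch).
    print("loading into raw.orders")


def transform():
    # Build the analytics-ready table from the raw layer (stubbed).
    print("rebuilding analytics.orders")


with DAG(
    dag_id="orders_elt",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # run once per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Linear dependency chain: extract -> load -> transform.
    extract_task >> load_task >> transform_task
```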
2. Data Modeling
Design and implement dimensional data models (e.g., star/snowflake schema) for analytics and reporting (see the schema sketch after this list).
Build and maintain a high-quality data warehouse or data lake architecture.
Collaborate with analysts, data scientists, and product teams to define data requirements and deliver well-structured datasets.
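For illustration, a minimal star-schema sketch using Python's built-in sqlite3 module; the fact and dimension tables (fact_orders, dim_customer, dim_date) are hypothetical examples, not a schema prescribed by this posting:

```python
import sqlite3

DDL = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT NOT NULL,    -- natural key from the source system
    region       TEXT
);

CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY, -- e.g. 20240115
    full_date TEXT NOT NULL
);

CREATE TABLE fact_orders (
    order_key    INTEGER PRIMARY KEY,
    customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    order_total  REAL NOT NULL     -- additive measure
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print("tables:", [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])
```

In a star schema, each fact row carries foreign keys into the dimensions, so analysts can aggregate measures (here, order_total) by any dimension attribute with simple joins.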
3. Data Governance & Quality
Implement and maintain data validation, quality checks, and documentation (see the validation sketch after this list).
Establish and enforce best practices for data accuracy, consistency, and lineage.
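For illustration, a minimal sketch of row-level quality checks in plain Python; the rules shown (non-null key, unique id, non-negative total) are assumed examples of the kind of validation the role involves:

```python
def validate(rows):
    """Return a list of human-readable data-quality violations."""
    errors = []
    seen_ids = set()
    for i, row in enumerate(rows):
        order_id = row.get("order_id")
        if order_id is None:
            errors.append(f"row {i}: order_id is null")
        elif order_id in seen_ids:
            errors.append(f"row {i}: duplicate order_id {order_id}")
        else:
            seen_ids.add(order_id)
        if (row.get("order_total") or 0) < 0:
            errors.append(f"row {i}: negative order_total")
    return errors


# Usage example with two deliberately bad rows:
sample = [
    {"order_id": 1, "order_total": 10.0},
    {"order_id": 1, "order_total": 5.0},    # duplicate id
    {"order_id": None, "order_total": -2},  # null id, negative total
]
print(validate(sample))
```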
4. Collaboration & Support
Work closely with data analysts, data scientists, and software engineers to support data needs across the organization.
Provide technical guidance on data infrastructure and architecture decisions.
Document data models, data dictionaries, and pipeline workflows.
Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
5+ years of experience in data engineering, with hands-on work in data modeling and building pipelines.
Proficiency with SQL and at least one programming language (Python, Scala, or Java).
Experience with modern data stack tools (e.g., dbt, Airflow, Fivetran, Snowflake, BigQuery, Redshift, Databricks).
Strong understanding of relational and non-relational databases.
Experience with cloud platforms (AWS, GCP, or Azure).
Familiarity with CI/CD for data workflows and version control tools (Git).
Nice to Have:
Experience with streaming technologies (Kafka, Kinesis, Spark Streaming).
Knowledge of data security and compliance (GDPR, HIPAA, etc.).
Exposure to ML pipelines or MLOps workflows.
Familiarity with business intelligence tools (Looker, Tableau, Power BI).
Job Type: Full-time
Pay: ₹470,047.17 - ₹1,816,443.00 per year
Work Location: In person