Sr. Data Engineer

Remote, IN, India

Job Description

Role: Data Engineer


Location: Remote | Reports to: CEO

Type: Long-term contract

Compensation: Competitive salary + full benefits

Start Date: ASAP

Date Posted: TBD

About the Client

Our client is redefining how organizations grow by building fully bespoke Go-To-Market (GTM), Revenue Operations (RevOps), and Business Development systems. They combine automation, AI, and strategy to architect revenue machines that deliver measurable, scalable growth. They're growing fast, care deeply about people, and need a Data Engineer to help their clients become truly AI-native.

About the Role

As a Data Engineer, you'll be the backbone of every AI/ML-driven solution delivered. You'll design, build, and maintain the data foundations (pipelines, lakes, warehouses, and tooling) that enable predictive lead scoring, generative data augmentation, and real-time insights. You'll collaborate cross-functionally with data scientists, ML engineers, and stakeholder teams to turn raw data into revenue-fueling intelligence.

What You'll Own

Data Pipeline Development

Architect, develop, and maintain scalable ETL/ELT pipelines for structured, semi-structured, and unstructured data. Use orchestration tools like Apache Airflow, Prefect, or Dagster for reliable workflows.
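To make the extract/transform/load pattern above concrete, here is a stdlib-only sketch of a three-stage pipeline. All record fields, the sample data, and the in-memory SQLite "warehouse" are illustrative stand-ins; in production each stage would be an orchestrated task (Airflow, Prefect, or Dagster) with retries and alerting rather than a bare function.

```python
# Stdlib-only sketch of an ETL pipeline's three stages. The lead records,
# field names, and SQLite target are illustrative placeholders only.
import json
import sqlite3

def extract() -> list[dict]:
    # Stand-in for pulling semi-structured records from an API or object store.
    raw = '[{"lead_id": 1, "email": "A@X.COM"}, {"lead_id": 2, "email": "b@y.com"}]'
    return json.loads(raw)

def transform(rows: list[dict]) -> list[tuple]:
    # Normalize emails and project records onto the warehouse schema.
    return [(r["lead_id"], r["email"].lower()) for r in rows]

def load(rows: list[tuple]) -> int:
    # Stand-in for a warehouse write (Snowflake, BigQuery, Redshift).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE leads (lead_id INTEGER, email TEXT)")
    con.executemany("INSERT INTO leads VALUES (?, ?)", rows)
    return con.execute("SELECT COUNT(*) FROM leads").fetchone()[0]

print(load(transform(extract())))  # 2
```

An orchestrator adds what this sketch omits: scheduling, dependency ordering between the stages, retries, and observability.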

Data Cleaning & Preparation

Apply preprocessing techniques: de-duplication, normalization, anomaly detection, and feature engineering. Ensure ML-ready data quality for training and inference.
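Two of the techniques named above, de-duplication and normalization, can be sketched in a few lines of plain Python. The record shape (`lead_id`, `revenue`) is a hypothetical example, not taken from the posting; real pipelines would typically use pandas or Spark for this.

```python
# Sketch of two preprocessing steps: first-seen de-duplication and min-max
# normalization. All field names and sample values are illustrative.

def deduplicate(rows: list[dict], key: str) -> list[dict]:
    """Keep only the first record seen for each distinct key value."""
    seen, out = set(), []
    for r in rows:
        if r[key] not in seen:
            seen.add(r[key])
            out.append(r)
    return out

def min_max_normalize(values: list[float]) -> list[float]:
    """Scale numeric values into [0, 1]; a constant column maps to all zeros."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]

rows = [
    {"lead_id": 1, "revenue": 100.0},
    {"lead_id": 1, "revenue": 100.0},  # duplicate record
    {"lead_id": 2, "revenue": 300.0},
]
clean = deduplicate(rows, key="lead_id")
scaled = min_max_normalize([r["revenue"] for r in clean])
print(len(clean), scaled)  # 2 [0.0, 1.0]
```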

AI Tooling Integration

Integrate tools like DataRobot, Alteryx, or custom AI scripts to automate data cleansing, enrichment, and augmentation. Continuously test new AI tools to optimize workflows.

Data Modeling & Structuring

Design and manage data models and storage solutions (Delta Lake/Hudi, Snowflake, Redshift, BigQuery). Optimize for performance, cost-efficiency, and ML scalability.

Collaboration

Translate business requirements into data solutions in collaboration with business strategists, ML engineers, and client teams. Lead discovery sessions and provide clear handoffs.

Optimization & Automation

Monitor and fine-tune pipeline performance. Implement alerts and self-healing patterns to reduce manual intervention.
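One common "self-healing" pattern implied above is retry-with-exponential-backoff around a flaky pipeline step, so transient failures resolve without manual intervention. This is a minimal stdlib sketch; the failing step, attempt count, and delays are all hypothetical.

```python
# Sketch of a retry-with-backoff wrapper for a pipeline step. The flaky step
# below simulates a transient failure that succeeds on the third attempt.
import time

def with_retries(step, attempts=3, base_delay=0.01):
    """Run `step`, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error for alerting
            time.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky_step))  # ok
```

In an orchestrator this logic usually comes built in (e.g., per-task retry settings) rather than being hand-rolled.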

Data Governance & Quality

Follow best practices for data security, lineage, and compliance (GDPR, CCPA). Build auditability into every pipeline.

Monitoring & Maintenance

Create dashboards and alerts to track pipeline health. Troubleshoot and conduct post-mortems for recurring issues.

Innovation

Stay ahead of AI/data engineering trends and prototype new tools or architectures that elevate the client's offerings.

What You Bring

Experience

4-7+ years of hands-on data engineering experience, especially with AI/ML data pipelines. Comfortable working in fast-paced environments (startups, consultancies, high-growth teams).

Technical Skills

Languages: Python, SQL (Scala/Java is a plus)
Frameworks: Apache Spark, Flink, or Beam
Cloud: AWS, Azure, or GCP (e.g., Redshift, Data Factory, BigQuery)
Tools: DataRobot, Alteryx, etc.
Storage: Delta Lake, Hudi, Snowflake, Redshift, BigQuery
Orchestration: Airflow, Prefect, Dagster
Containers: Docker/Kubernetes (nice-to-have)

AI/ML Knowledge

Experience structuring data for modeling: feature engineering, handling data imbalance, and managing large datasets.

Soft Skills

Strong problem-solving skills and a bias for action. Excellent communicator with the ability to translate technical insights into business value. Experience in client-facing roles and managing expectations.

Education & Certifications

Bachelor's or Master's in CS, Data Science, Engineering, or equivalent. Preferred: AWS Certified Data Analytics, Google Professional Data Engineer, or similar.

Why Work With the Client?

Ownership & Autonomy

Take full ownership of client-facing solutions; your work is the solution. Results-focused culture with a flexible work setup and high accountability.

Modern Work Philosophy

100% remote: work from anywhere. Async-first communication to support deep work and flexibility.

Wellness & Support

Stipends for wellness, mental health, and creativity. Tech budgets to optimize your home workspace.

Culture

Anti-corporate: no politics, no unnecessary hierarchy. Work hard, play hard with global retreats in amazing locations (e.g., Costa Rica, Switzerland, Barcelona).

Interview Process

1. Submit a 3-5 minute Loom video introducing yourself, explaining why you want to work at Ombrik, and showcasing a project you are most proud of (personal or professional). You must show your face in the video.

2. Behavioral and technical interview (60-90 mins)

3. Project showcase / case study (60-120 mins)

The Client Cares About People, Not Pedigree

They value unorthodox journeys, curiosity, and passion over fancy degrees. Whether you're self-taught, a college dropout, or just relentlessly driven, they want to hear from you.

Job Type: Full-time

Pay: ₹1,200,000.00 - ₹2,000,000.00 per year

Application Question(s):

Have you built and maintained production ETL/ELT pipelines using frameworks like Apache Spark (or Flink/Beam) orchestrated by Airflow/Prefect/Dagster?

Which cloud data warehouses or lakes (Redshift, BigQuery, Snowflake, Delta Lake, etc.) have you worked with extensively?

Which data engineering languages are you most proficient in?

Work Location: Remote

Beware of fraud agents! Do not pay money to get a job.

MNCJobsIndia.com will not be responsible for any payment made to a third party. All Terms of Use are applicable.


Job Detail

  • Job Id: JD3740981
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Contract
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: Remote, IN, India
  • Education: Not mentioned
  • Experience: Year