Senior Software Engineer, Data

Job Description

Gurugram, Haryana, India

Department: Technology
Job posted on: Oct 28, 2025
Employment type: Full Time

About us:


MatchMove is a profitable Singapore-based fintech company and one of Asia's leading Banking-as-a-Service (BaaS) providers, enabling businesses to embed financial services directly into their digital ecosystems. Operating its proprietary, secure, and regulated Banking Wallet OS(TM) platform across Asia and beyond, MatchMove empowers enterprises to issue accounts, cards, payments, loans, and other financial products seamlessly within their own platforms.

The company is experiencing double-digit year-on-year growth and processes billions of transactions each year, underscoring its scale, resilience, and trust among partners and users. Recognized with multiple industry awards -- including Frost & Sullivan's 2025 Singapore Enabling Technology Leadership Recognition for Excellence in Embedded Finance Innovation -- MatchMove has been celebrated for driving innovation across a wide range of embedded finance use cases.

By partnering with leading local banks and ecosystem players, MatchMove bridges the gap between traditional banking and modern digital commerce. Its mission is to deliver innovative, secure, and inclusive financial technology solutions that drive digital transformation for businesses while empowering millions of end users across the region.

With a strong commitment to innovation, regulatory excellence, and sustainable growth, MatchMove continues to pioneer new approaches to embedded finance, redefining how businesses and consumers access and interact with financial services in Asia and beyond.

Skills:

#SeniorDataEngineer #AWSDataEngineer #DataPipelines #PySpark #ETL #AWS #DataLake

Are You The One?


As a Senior Data Engineer, you will design and develop data pipelines and workflows across streaming and batch layers. You'll work alongside fraud analysts and backend engineers to ensure that data flows are timely, accurate, and trustworthy. You'll be instrumental in shaping a secure, compliant, and API-consumable data lake that supports both operational and analytical use cases.

You Will Contribute To

  • Building and managing the data lake architecture using AWS S3, AWS Glue, Lake Formation, and Athena for scalable, schema-aware storage and querying.
  • Developing and optimizing ETL/ELT pipelines using AWS Glue, PySpark, or Airflow, with strong schema evolution and data partitioning logic.
  • Using AWS DMS (Database Migration Service) to replicate and aggregate operational data from transactional stores (MySQL, PostgreSQL) into the lake in near real time.
  • Enabling both real-time streaming (via Kinesis or Kafka) and batched data pipelines for downstream use cases in reconciliation, fraud scoring, and compliance, operations, or billing reporting.
  • Implementing data quality checks, observability metrics, lineage, and auditing to meet compliance and reporting standards for a regulated fintech environment.
  • Structuring data in open table formats such as Apache Iceberg or Delta Lake to support upserts, time travel, and incremental reads.
  • Supporting role-based access controls, encryption, and fine-grained policies using Lake Formation and IAM to enforce data governance.
  • Enabling downstream teams by building data marts, materialized views, and APIs for dashboards, machine learning models, and alerts.
  • Leveraging Generative AI tools to improve development velocity -- for example, auto-generating PySpark scaffolds, test suites, documentation, and DDL scripts -- while maintaining high engineering standards and traceability.
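The partitioning logic mentioned above typically follows Hive-style key=value paths so that Athena and Spark can prune partitions at query time. A minimal sketch, assuming a hypothetical bucket and prefix (names are illustrative, not from this posting):

```python
from datetime import datetime, timezone

def partition_key(event_time: datetime, base_prefix: str) -> str:
    """Build a Hive-style partition path (year=/month=/day=) so query
    engines such as Athena or Spark can prune partitions by date."""
    return (
        f"{base_prefix}/"
        f"year={event_time.year:04d}/"
        f"month={event_time.month:02d}/"
        f"day={event_time.day:02d}"
    )

# Example: a transaction event landing in a (hypothetical) raw zone.
ts = datetime(2025, 10, 28, 14, 30, tzinfo=timezone.utc)
print(partition_key(ts, "s3://example-lake/raw/transactions"))
# -> s3://example-lake/raw/transactions/year=2025/month=10/day=28
```

In practice a Glue or Spark writer emits these paths automatically via `partitionBy`; the sketch only shows the layout the downstream query engines rely on.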

Responsibilities

  • Design, implement, and manage scalable, cost-efficient data pipelines across streaming and batch paradigms using AWS-native services.
  • Write efficient, testable PySpark scripts, Glue jobs, and SQL transformations that support complex join, windowing, and aggregation logic.
  • Tune storage layout in S3 with proper file sizing, compression, partitioning, and table format (e.g., Iceberg or Hudi) for optimal performance.
  • Maintain metadata cataloging using the AWS Glue Data Catalog, including crawler configurations, schema validations, and tagging.
  • Use Athena, Redshift Spectrum, or EMR for large-scale querying and data validation jobs.
  • Integrate with fraud systems and reconciliation engines to ensure near real-time data availability and accuracy.
  • Contribute to CI/CD pipelines for data workflows, including automated testing, rollback strategies, and alerting.
  • Work closely with Data Governance, InfoSec, and Engineering teams to enforce data access control, encryption, and compliance mandates.
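The windowing and aggregation logic described above can be illustrated at small scale. The sketch below uses SQLite purely as a stand-in for Spark SQL -- the window-function syntax is the same pattern; the schema and values are made up:

```python
import sqlite3

# In-memory toy table standing in for a transactions dataset.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE txn (user_id TEXT, ts TEXT, amount REAL);
    INSERT INTO txn VALUES
        ('u1', '2025-10-01', 40.0),
        ('u1', '2025-10-02', 60.0),
        ('u2', '2025-10-01', 25.0);
""")

# Running total per user -- the kind of windowed aggregation used in
# reconciliation or fraud-scoring features.
rows = conn.execute("""
    SELECT user_id, ts, amount,
           SUM(amount) OVER (
               PARTITION BY user_id ORDER BY ts
           ) AS running_total
    FROM txn
    ORDER BY user_id, ts
""").fetchall()

for r in rows:
    print(r)
# ('u1', '2025-10-01', 40.0, 40.0)
# ('u1', '2025-10-02', 60.0, 100.0)
# ('u2', '2025-10-01', 25.0, 25.0)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` shape runs unchanged in Spark SQL or Athena over partitioned lake data.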

Requirements

  • At least 4 years of experience in data engineering, ideally within fintech or high-throughput transactional environments.
  • Strong hands-on experience with AWS Glue (Jobs and Crawlers), S3, Athena, Lake Formation, and Redshift.
  • Deep understanding of ETL/ELT patterns, especially with PySpark or Spark SQL, and orchestration tools (Airflow, Step Functions).
  • Experience in streaming data ingestion and transformation using Kinesis, Kafka, or AWS MSK.
  • Familiarity with DMS for continuous replication of RDS/Aurora data into staging zones.
  • Experience with open table formats (e.g., Apache Iceberg, Delta Lake, or Hudi) and their performance characteristics.
  • Proficiency with SQL and distributed query engines, and understanding of query optimization techniques.
  • Exposure to data observability tooling (e.g., Great Expectations, Monte Carlo) and debugging production data pipelines.
  • Experience with data security best practices, encryption at rest/in transit, and IAM-based access control models.
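The data-quality checks referenced above follow an "expectation" pattern: each check returns pass/fail plus the offending records, so failures can be quarantined or alerted on. A pared-down illustration of that pattern -- not the Great Expectations API, and the batch data is invented:

```python
def expect_non_null(records, column):
    """Return (passed, failures): records where `column` is None/missing."""
    failures = [r for r in records if r.get(column) is None]
    return (len(failures) == 0, failures)

def expect_values_between(records, column, low, high):
    """Return (passed, failures): records whose value falls outside [low, high]."""
    failures = [
        r for r in records
        if r.get(column) is None or not (low <= r[column] <= high)
    ]
    return (len(failures) == 0, failures)

# A toy batch with two deliberately bad rows.
batch = [
    {"txn_id": "t1", "amount": 120.5},
    {"txn_id": "t2", "amount": None},   # fails the null check
    {"txn_id": "t3", "amount": -5.0},   # fails the range check
]

ok_nulls, null_failures = expect_non_null(batch, "amount")
ok_range, range_failures = expect_values_between(batch, "amount", 0.0, 10_000.0)
print(ok_nulls, len(null_failures))   # False 1
print(ok_range, len(range_failures))  # False 2
```

In a production pipeline these checks would run per batch (or per micro-batch on a stream), with failure counts exported as observability metrics.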

Brownie Points

  • Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
  • Familiarity with ML feature stores, streaming aggregations, or fraud analytics tooling.
  • Exposure to BI/data visualization tools such as QuickSight, Metabase, or Looker.
  • Proficiency in version control, GitOps, or infrastructure-as-code (e.g., Terraform, AWS CDK) for managing data workflows.
  • Experience collaborating in cross-functional squads with fraud, finance, or compliance analysts.
  • Experience using GenAI to drive business or operational efficiency -- e.g., automating reconciliation, anomaly detection alerting, or cost analysis.

MatchMove Culture:

We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.

We are AI-first in our approach.

We embrace AI as a strategic tool that enhances decision-making, creativity, and productivity. Every team member is equipped and encouraged to integrate AI into their workflow, experiment with new tools, and contribute to our collective AI literacy.

We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership.

Embracing diversity, our team spans Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to impact people's lives meaningfully. Grow with us and shape the future of fintech. Join us and be part of something bigger!

Personal Data Protection Act:


By submitting your application for this job, you are authorizing MatchMove to:

  • collect and use your personal data, and disclose such data to any third party with whom MatchMove or any of its related corporations have service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and
  • retain your personal data for one year for consideration of future job opportunities (where applicable).

Beware of fraud agents! Do not pay money to get a job.

MNCJobsIndia.com will not be responsible for any payment made to a third party. All Terms of Use are applicable.


Job Detail

  • Job Id
    JD4571188
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    HR, IN, India
  • Education
    Not mentioned
  • Experience
    Not mentioned