We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.
Job Responsibilities:
Design and Develop Data Pipelines:
Develop and optimise scalable data pipelines within Microsoft Fabric, leveraging Fabric-based notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Build robust pipelines using both batch and real-time processing techniques, and integrate with Azure Data Factory or Fabric-native orchestration for seamless data movement (see the illustrative sketch after this list).
Microsoft Fabric Architecture:
Work with the Data Architecture team to implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Align models with business needs, promoting performance, security, and cost-efficiency.
Data Pipeline Optimisation:
Continuously monitor, enhance, and optimise Fabric pipelines, notebooks, and lakehouse artifacts for performance, reliability, and cost. Implement best practices for managing large-scale datasets and transformations in a Fabric-first ecosystem.
Collaboration with Cross-functional Teams:
Work closely with analysts, BI developers, and data scientists to gather requirements and deliver high-quality, consumable datasets. Enable self-service analytics via certified and reusable Power BI datasets connected to Fabric Lakehouses.
Documentation and Knowledge Sharing:
Maintain clear, up-to-date documentation for all data pipelines, semantic models, and data products. Share knowledge of Fabric best practices and mentor junior team members to support adoption across teams.
Microsoft Fabric Platform Expertise:
Use your expertise in Microsoft Fabric, including Lakehouses, Notebooks, Data Pipelines, and Direct Lake, to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.
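For illustration, here is a minimal PySpark sketch of the kind of batch ingestion step the first responsibility describes, as it might run in a Fabric notebook. The landing path, column names, and table name are all hypothetical, and a Fabric notebook ordinarily supplies a preconfigured Spark session.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Fabric notebooks provide a session; getOrCreate() also works standalone.
spark = SparkSession.builder.getOrCreate()

# Read raw CSV files landed in the Lakehouse Files area (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/landing/orders/")
)

# Light cleanup: parse timestamps, deduplicate on the business key,
# and stamp each row with its ingestion time.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Append to a managed Delta table in the Lakehouse, where Direct Lake
# and Power BI datasets can consume it without a separate import step.
clean.write.format("delta").mode("append").saveAsTable("bronze_orders")

A real-time variant would swap the batch read for Structured Streaming, and scheduling and retries would sit in a Fabric Data Pipeline or Azure Data Factory rather than in the notebook itself.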
Required Skills and Qualifications:
Experience in Microsoft Fabric / Azure Ecosystem: 7+ years working with the Azure ecosystem, with relevant experience in Microsoft Fabric, including its Lakehouse, OneLake, Data Engineering, and Data Pipelines components.
Proficiency in Azure Data Factory and/or Dataflows Gen2 within Fabric for building and orchestrating data pipelines.
Advanced Data Engineering Skills: Extensive experience in data ingestion, transformation, and ELT/ETL pipeline design. Ability to enforce data quality, testing, and monitoring standards in cloud platforms.
Cloud Architecture Design: Experience designing modern data platforms using Microsoft Fabric, OneLake, and Synapse or equivalent.
Strong, In-depth SQL and Data Modelling: Expertise in SQL and data modelling (e.g., star/snowflake schemas) for data integration/ETL, reporting, and analytics use cases (a star-schema sketch follows this list).
Collaboration and Communication: Proven ability to work across business and technical teams, translating business requirements into scalable data solutions.
Cost Optimisation: Experience tuning pipelines and cloud resources (Fabric, Databricks, ADF) for cost-performance balance.
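To make the modelling expectation concrete, here is an illustrative star-schema load in Spark SQL, run from PySpark for consistency with the earlier sketch. Every table and column name is hypothetical, including the staging_sales source.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A conformed dimension: one row per customer with a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING
    ) USING DELTA
""")

# The fact table stores measures keyed to dimensions (star schema).
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        sale_id      STRING,
        customer_key BIGINT,         -- foreign key to dim_customer
        date_key     INT,            -- foreign key to a date dimension
        amount       DECIMAL(18, 2)
    ) USING DELTA
""")

# Load facts by resolving surrogate keys against the dimension.
spark.sql("""
    INSERT INTO fact_sales
    SELECT s.sale_id, c.customer_key, s.date_key, s.amount
    FROM   staging_sales s
    JOIN   dim_customer c ON s.customer_id = c.customer_id
""")

A snowflake variant would further normalise dim_customer (for example, splitting segment into its own table) at the cost of extra joins at query time.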
Preferred Skills:
Deep understanding of Azure and the Microsoft Fabric ecosystem, including Power BI integration, Direct Lake, and Fabric-native security and governance.
Familiarity with OneLake, Delta Lake, and Lakehouse architecture as part of a modern data platform strategy.
Experience using Power BI with Fabric Lakehouses and DirectQuery/Direct Lake mode for enterprise reporting.
Working knowledge of PySpark, strong SQL, and Python scripting within Fabric or Databricks notebooks.
Understanding of Microsoft Purview, Unity Catalog, or Fabric-native governance tools for lineage, metadata, and access control.
Experience with DevOps practices for Fabric or Power BI, including version control, deployment pipelines, and workspace management.
Knowledge of Azure Databricks is an added advantage: familiarity with building and optimising Spark-based pipelines and Delta Lake models as part of a modern data platform (see the maintenance sketch after this list).
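As a final illustration of the Delta Lake tuning this role touches on, here is a routine maintenance step as it might run in a Fabric or Databricks notebook. The table name and retention window are hypothetical; the retention period should follow your own governance policy.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files so scans touch fewer objects (performance and cost).
spark.sql("OPTIMIZE fact_sales")

# Drop data files no longer referenced by the table's transaction log,
# keeping seven days (168 hours) of history for time travel.
spark.sql("VACUUM fact_sales RETAIN 168 HOURS")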