Develop and optimise scalable data pipelines using Microsoft Fabric, including Fabric Notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Work on both batch and real-time ingestion and transformation. Integrate with Azure Data Factory or Fabric-native orchestration for smooth data flow.
Fabric Data Platform Implementation:
Collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric (OneLake). Ensure data solutions are performant, reusable, and aligned with business needs and compliance standards.
Data Pipeline Optimisation:
Monitor and improve performance of data pipelines and notebooks in Microsoft Fabric. Apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains.
Collaboration with Cross-functional Teams:
Work closely with BI developers, analysts, and data scientists to gather requirements and build high-quality datasets. Support self-service BI initiatives by developing well-structured datasets and semantic models in Fabric.
Documentation and Reusability:
Document pipeline logic, lakehouse architecture, and semantic layers clearly. Follow development standards and contribute to internal best practices for Microsoft Fabric-based solutions.
Microsoft Fabric Platform Execution:
Use your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric to deliver reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services.
Required Skills and Qualifications:
5+ years of experience in data engineering within the Azure ecosystem, with relevant hands-on experience in Microsoft Fabric, including Lakehouse, Dataflows Gen2, and Data Pipelines.
Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2.
Solid experience with data ingestion, ELT/ETL development, and data transformation across structured and semi-structured sources.
Strong understanding of OneLake architecture and modern data lakehouse patterns.
Strong command of SQL, PySpark, and Python applied to both data integration and analytical workloads.
Ability to collaborate with cross-functional teams and translate data requirements into scalable engineering solutions.
Experience in optimising pipelines and managing compute resources for cost-effective data processing in Azure/Fabric.
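The ingestion and cost-optimisation skills above often come down to incremental (watermark-based) loads rather than full refreshes. Below is a minimal, tool-agnostic sketch of that pattern in plain Python; in Fabric this logic would typically live in a PySpark notebook reading from and writing to Lakehouse (Delta) tables, and all names here are illustrative, not part of any Fabric API.

```python
from datetime import datetime, timezone

# Illustrative watermark-based incremental load: only rows modified since the
# last successful run are ingested, so re-runs are cheap and idempotent.
def incremental_load(source_rows, target, last_watermark):
    """Upsert source rows newer than last_watermark into target (keyed by 'id').

    Returns the new watermark (max modified_at seen, or the old one if no rows).
    """
    new_watermark = last_watermark
    for row in source_rows:
        if row["modified_at"] > last_watermark:
            target[row["id"]] = row  # upsert semantics, analogous to MERGE
            new_watermark = max(new_watermark, row["modified_at"])
    return new_watermark

# First run ingests everything; the second run picks up only the changed row.
source = [
    {"id": 1, "value": "a", "modified_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "value": "b", "modified_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]
target = {}
wm = incremental_load(source, target, datetime(1970, 1, 1, tzinfo=timezone.utc))
source.append({"id": 1, "value": "a2", "modified_at": datetime(2024, 1, 3, tzinfo=timezone.utc)})
wm = incremental_load(source, target, wm)
```

The same shape maps directly onto a Delta Lake `MERGE` keyed on `id`, with the watermark persisted between pipeline runs.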
Preferred Skills:
Experience working in the Microsoft Fabric ecosystem, including Direct Lake, BI integration, and Fabric-native orchestration features.
Familiarity with OneLake, Delta Lake, and Lakehouse principles in the context of Microsoft's modern data platform.
Expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks.
Understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control.
Exposure to DevOps practices for Fabric and Power BI, including Git integration, deployment pipelines, and workspace governance.
Knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines is a plus.