Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Role Overview
An Azure Data Engineer specializing in Databricks is responsible for designing, building, and maintaining scalable data solutions on the Azure cloud platform, with a focus on leveraging Databricks and related big data technologies. The role involves close collaboration with data scientists, analysts, and software engineers to ensure efficient data processing, integration, and delivery for analytics and business intelligence needs.
Key Responsibilities
Design, develop, and maintain robust and scalable data pipelines using Azure Databricks, Azure Data Factory, and other Azure services.
Build and optimize data architectures to support large-scale data processing and analytics.
Collaborate with cross-functional teams to gather requirements and deliver data solutions tailored to business needs.
Ensure data quality, integrity, and security across various data sources and pipelines.
Implement data governance, compliance, and best practices for data security (e.g., encryption, RBAC).
Monitor, troubleshoot, and optimize data pipeline performance, ensuring reliability and scalability.
Document technical specifications, data pipeline processes, and architectural decisions.
Support and troubleshoot data workflows, ensuring consistent data delivery and availability for analytics and reporting.
Automate data tasks and deploy production-ready code using CI/CD practices.
Stay updated with the latest Azure and Databricks features, recommending improvements and adopting new tools as appropriate.
Required Skills and Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field
5+ years of experience in data engineering, with hands-on expertise in Azure and Databricks environments
Proficiency in Databricks, Apache Spark, and Spark SQL
Strong programming skills in Python and/or Scala
Advanced SQL skills and experience with relational and NoSQL databases
Experience with ETL processes, data warehousing concepts, and big data technologies (e.g., Hadoop, Kafka)
Familiarity with Azure services: Azure Data Lake Storage (ADLS), Azure Data Factory, Azure SQL Data Warehouse, Cosmos DB, Azure Stream Analytics, Azure Functions
Understanding of data modeling, schema design, and data integration best practices
Strong analytical, problem-solving, and troubleshooting abilities
Experience with source code control systems (e.g., Git) and technical documentation tools
Excellent communication and collaboration skills; ability to work both independently and as part of a team
Preferred Skills
Experience with automation, unit testing, and CI/CD pipelines
Certifications in Azure Data Engineering or Databricks are advantageous
Soft Skills
Flexible self-starter, proactive in learning and adopting new technologies
Ability to manage multiple priorities and work to tight deadlines
Strong stakeholder management and teamwork capabilities