Platform Architecture – Databricks Engineer V

KA, IN, India

Job Description

By clicking the "Apply" button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.


-------------------

The Future Begins Here



At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet.


Bengaluru, India's epicenter of innovation, has been selected as the home of Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda's ICC, we Unite in Diversity




Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity




As a Platform Engineer V - Databricks, you will design, implement, and manage Databricks platforms to ensure optimal performance and scalability. You will define best practices for the platform, evaluate new features, and implement security and governance requirements. In addition, you will monitor and analyse platform metrics and troubleshoot issues related to clusters, jobs, and configurations. You will work closely with engineering, DevOps, and product teams to improve platform capabilities and enhance data management across the organization.

Responsibilities



• Design, develop, and maintain data pipelines using Databricks, Apache Spark, and Delta Lake (a brief illustrative sketch follows this list).
• Collaborate with data scientists and analysts to deploy machine learning models and analytics workflows on the Databricks platform.
• Optimize Spark jobs for performance and scalability, including partitioning, caching, and tuning resource allocation.
• Implement both batch and real-time data processing pipelines using Spark Streaming and Databricks workflows (DLT).
• Develop and manage Databricks notebooks for interactive data analysis, prototyping, and documentation.
• Utilize SQL, Python (PySpark), or Scala to work with large datasets and optimize queries.
• Manage and monitor Databricks clusters, jobs, and workflows for operational efficiency and cost control.
• Assist in the development of data lakes using Delta Lake to ensure efficient storage and querying of large datasets.
• Ensure data pipelines meet data quality, reliability, and compliance standards.
• Write well-documented, efficient, and maintainable code in collaboration with the wider data engineering team.
• Troubleshoot and resolve issues in data workflows and machine learning models.
• Keep up with the latest advancements in big data technologies, Databricks, and Apache Spark.
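As rough context for the pipeline and optimization work listed above, here is a minimal PySpark/Delta Lake sketch of a batch aggregation job. It assumes a Databricks runtime (or a local Spark session with Delta Lake installed); the table and column names (raw.sales_events, curated.daily_sales, event_ts, amount, region, order_id) are hypothetical placeholders, not part of this posting.

```python
# Minimal batch-pipeline sketch: read a Delta table, cache a cleaned subset,
# aggregate, and write a partitioned Delta table. All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-sales-aggregation").getOrCreate()

# Read raw events from a (hypothetical) Delta table.
raw = spark.read.table("raw.sales_events")

# Cache the cleaned subset because it could feed several downstream aggregations.
cleaned = (
    raw.filter(F.col("amount") > 0)
       .withColumn("sale_date", F.to_date("event_ts"))
       .cache()
)

# Aggregate to daily revenue and order counts per region.
daily = (
    cleaned.groupBy("sale_date", "region")
           .agg(F.sum("amount").alias("revenue"),
                F.countDistinct("order_id").alias("orders"))
)

# Partition by date so date-bounded queries only scan the relevant files.
(
    daily.write.format("delta")
         .mode("overwrite")
         .partitionBy("sale_date")
         .saveAsTable("curated.daily_sales")
)
```

Caching the reused DataFrame and partitioning the output by date are examples of the routine tuning choices the "optimize Spark jobs" responsibility refers to.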

Skills and Qualifications



• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 10+ years of overall experience in Data Engineering, with at least 5 years of experience in Databricks.
• Experience as a developer on the Databricks platform, with a strong understanding of Lakehouse concepts and technologies.
• Hands-on experience with Databricks notebooks, jobs, clusters, and workflows.
• Proven experience in Apache Spark and Databricks development.
• Strong proficiency in Python (PySpark), SQL, or Scala.
• Experience with cloud platforms such as AWS (preferred), Azure, or Google Cloud is a plus.
• Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
• Detail-oriented mindset with a focus on data integrity, confidentiality, and compliance.
• Databricks and AWS related certifications.
• Experience with APIs and CI/CD tools such as GitHub (see the sketch after this list).
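As a loose illustration of the API experience mentioned above, the sketch below lists the jobs in a workspace through the Databricks Jobs REST API (version 2.1). DATABRICKS_HOST and DATABRICKS_TOKEN are assumed environment variables holding a workspace URL and a personal access token; in a CI/CD setup, a call like this would typically run from a GitHub Actions workflow.

```python
# Sketch: inventory the jobs defined in a Databricks workspace via the
# Jobs REST API 2.1. Credentials come from (assumed) environment variables.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.get(
    f"{host}/api/2.1/jobs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"limit": 25},
    timeout=30,
)
resp.raise_for_status()

# Print job id and name for each configured job.
for job in resp.json().get("jobs", []):
    print(job["job_id"], job.get("settings", {}).get("name"))
```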

BENEFITS:




It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are:

• Competitive Salary + Performance Annual Bonus
• Flexible work environment, including hybrid working
• Comprehensive Healthcare Insurance Plans for self, spouse, and children
• Group Term Life Insurance and Group Accident Insurance programs
• Health & Wellness programs, including annual health screening and weekly health sessions for employees
• Employee Assistance Program
• 3 days of leave every year for Voluntary Service, in addition to Humanitarian Leave
• Broad variety of learning platforms
• Diversity, Equity, and Inclusion programs
• Reimbursements: Home Internet & Mobile Phone
• Employee Referral Program
• Leaves: Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days)

ABOUT ICC IN TAKEDA:



Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations


-------------


IND - Bengaluru

Worker Type


---------------


Employee

Worker Sub-Type


--------------------


Regular

Time Type


-------------


Full time



Job Detail

  • Job Id
    JD4158869
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    KA, IN, India
  • Education
    Not mentioned
  • Experience
    Year