Data Platform Engineer (hybrid)

Bangalore, Karnataka, India

Job Description


This is where you save and sustain lives

At Baxter, we are deeply connected by our mission. No matter your role at Baxter, your work makes a positive impact on people around the world. You'll feel a sense of purpose throughout the organization, as we know our work improves outcomes for millions of patients. Baxter's products and therapies are found in almost every hospital worldwide, in clinics, and in the home. For over 85 years, we have pioneered significant medical innovations that transform healthcare. Together, we create a place where we are happy, successful, and inspire each other. This is where you can do your best work. Join us at the intersection of saving and sustaining lives, where your purpose accelerates our mission.

Summary:

Perform development work and technical support related to our data transformation and ETL jobs in support of a global data warehouse/data platform. Communicate results to internal customers. This role requires the ability to work independently, as well as in cooperation with a variety of customers and other technical professionals.

Essential Duties and Responsibilities:

The incumbent will perform other duties as assigned.

  • Develop, enhance, and support ETL/data transformation jobs and data pipelines using PySpark (see the sketch after this list)
  • Explain technical solutions and resolutions to internal customers and communicate feedback to the ETL team
  • Perform technical code reviews for peers moving code into production
  • Perform and review integration testing before production migrations
  • Provide a high level of technical support and perform root cause analysis for problems within the area of functional responsibility
  • Document technical specifications from business communications
  • Serve as subject matter expert (SME) for various AWS cloud technologies
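For illustration only, here is a minimal sketch of the kind of PySpark ETL job these duties describe. This is not Baxter's actual code; the bucket paths, table names, and column names are all invented:

    # Minimal PySpark ETL sketch: extract from a raw zone, apply business
    # rules with basic reject handling, and load a curated target.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

    # Extract: read raw source data (hypothetical location)
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Transform: apply rules and derive columns
    clean = (
        orders
        .filter(F.col("order_amount").isNotNull())        # reject rows with no amount
        .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
        .withColumn("amount_usd", F.round("order_amount", 2))
    )

    # Load: write to the curated zone, partitioned for downstream queries
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )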
Qualifications:

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
  • 3+ years of ETL, data platform, data engineering, and AWS experience
  • 5+ years of IT industry experience
  • Experience with core Python programming for data transformation
  • Intermediate-level PySpark skills: able to read, understand, and debug existing code and write simple PySpark code from scratch
  • Strong knowledge of SQL fundamentals, including subqueries; able to tune queries with execution hints to improve performance (see the example after this list)
  • Able to write SQL sufficient for most business requirements: pulling data from sources, applying rules to the data, and loading target data
  • Proven track record of troubleshooting data pipelines and addressing production issues such as performance tuning, reject handling, and ad-hoc reloads (a tuning sketch also follows this list)
  • Proficient in developing optimization strategies for data pipelines
  • Able to create clear, concise documentation and communications, and to document technical specifications from business communications
  • Ability to coordinate and aggressively follow up on incidents and problems, perform diagnosis, and provide resolution to minimize service interruption
  • Ability to prioritize and work on multiple tasks simultaneously
  • Effective in cross-functional and global environments, with strong communication skills for managing multiple concurrent tasks and assignments
  • A self-starter who works well independently and on team projects
  • Experienced in analyzing business requirements and defining data granularity, source-to-target mappings, and full technical specifications
  • Experienced working at the command line in various flavors of UNIX, with a basic understanding of shell scripting in bash and Korn shell
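As a similarly hypothetical illustration of the SQL bullets above, a subquery plus an execution hint in Spark SQL might look like the following. It assumes customers and orders have been registered as views; all names are invented:

    # Illustrative Spark SQL with a subquery and a broadcast join hint.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql_tuning_demo").getOrCreate()

    high_value = spark.sql("""
        SELECT /*+ BROADCAST(c) */
               c.customer_id,
               c.region,
               o.total_amount
        FROM customers c
        JOIN (
            -- subquery: pre-aggregate orders before joining
            SELECT customer_id, SUM(order_amount) AS total_amount
            FROM orders
            GROUP BY customer_id
        ) o ON o.customer_id = c.customer_id
        WHERE o.total_amount > 10000
    """)

    high_value.explain()  # check the physical plan to confirm the hint applied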
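And for the troubleshooting and optimization bullets, a few common PySpark tuning moves, again purely as a sketch with invented names rather than a prescribed approach:

    # Hypothetical remedies for slow or failing pipelines; paths are invented.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # 1. Inspect the physical plan before guessing at fixes
    events.groupBy("user_id").count().explain()

    # 2. Repartition to relieve skew or a bad partition count
    events = events.repartition(200, "user_id")

    # 3. Cache a DataFrame that several downstream steps reuse
    events.cache()

    # 4. Reject handling: route bad rows aside for ad-hoc reload
    rejects = events.filter(F.col("event_ts").isNull())
    rejects.write.mode("append").parquet("s3://example-bucket/rejects/events/")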
Education and/or Experience:
  • Bachelor of Science in Computer Science or equivalent
  • 3+ years of ETL and SQL experience
  • 3+ years of Python and PySpark experience
  • 3+ years of AWS and Unix experience
  • Preferred certifications:
      • Python and PySpark certifications
      • Snowflake Pro certification
At Baxter, we offer a dynamic, future-focused work environment with workplace flexibility, additional annual leave, and a strong values-driven culture.
Baxter is committed to supporting the need for flexibility in the workplace. We do so through our flexible workplace policy, which includes a minimum of 3 days a week onsite. This policy provides the benefits of connecting and collaborating in person in support of our Mission.

Reasonable Accommodations

Baxter is committed to working with and providing reasonable accommodations to individuals with disabilities globally. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the application or interview process, please let us know the nature of your request along with your contact information.

Recruitment Fraud Notice

Baxter has discovered incidents of employment scams, where fraudulent parties pose as Baxter employees, recruiters, or other agents, and engage with online job seekers in an attempt to steal personal and/or financial information.

Baxter

Job Detail

  • Job Id: JD3276685
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: Bangalore, Karnataka, India
  • Education: Not mentioned