Databricks Spark

TN, IN, India

Job Description

Job Summary




The Sr. Developer role is crucial for driving innovation and efficiency in our hybrid work model. With a focus on Kafka, Python, Databricks SQL, Databricks Workflows, and PySpark, the candidate will enhance our data processing capabilities. Experience in the Claims and Billing domains is advantageous. This position requires 8 to 9 years of experience and offers a day shift schedule with no travel requirements.





Responsibilities



• Develop and maintain scalable data processing solutions using Kafka, Python, and PySpark to enhance data flow and analytics capabilities.
• Collaborate with cross-functional teams to design and implement Databricks Workflows that streamline data operations and improve efficiency.
• Utilize Databricks SQL to perform complex data queries and generate actionable insights for business stakeholders.
• Ensure data integrity and quality by implementing robust data validation and error-handling mechanisms.
• Optimize existing data pipelines for performance and scalability, ensuring they meet the evolving needs of the organization.
• Provide technical guidance and support to junior developers, fostering a culture of continuous learning and improvement.
• Participate in code reviews and contribute to the development of best practices and coding standards.
• Work closely with product managers and business analysts to understand requirements and translate them into technical solutions.
• Monitor and troubleshoot data processing systems to ensure high availability and reliability.
• Stay updated with the latest industry trends and technologies to drive innovation and maintain a competitive edge.
• Document technical specifications and system designs to facilitate knowledge sharing and collaboration.
• Engage in regular team meetings to discuss project progress, challenges, and opportunities for improvement.
• Contribute to the company's purpose by developing solutions that enhance operational efficiency and deliver value to society.




Qualifications



• Possess strong expertise in Kafka, Python, Databricks SQL, Databricks Workflows, and PySpark, with a proven track record of successful implementations.
• Demonstrate proficiency in designing and optimizing data pipelines for large-scale data processing.
• Exhibit excellent problem-solving skills and the ability to work effectively in a hybrid work model.
• Have experience in the Claims and Billing domains, which is considered a plus.
• Show strong communication skills and the ability to collaborate with diverse teams.
• Display a commitment to continuous learning and staying abreast of emerging technologies.
• Hold a bachelor's degree in Computer Science, Information Technology, or a related field.




Certifications Required




• Certified Apache Kafka Developer
• Databricks Certified Data Engineer Associate



Job Detail

  • Job Id
    JD4423410
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    TN, IN, India
  • Education
    Not mentioned
  • Experience
    8 to 9 Years