Software Engineer III Senior Associate (Data Engineering)

KA, IN, India

Job Description

Job Summary:


As a Software Engineer III at JPMorgan Chase within the Corporate and Investment Bank, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.


Job Responsibilities:


  • Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics
  • Frequently utilize SQL and understand NoSQL databases and their niche in the marketplace
  • Collaborate closely with cross-functional teams to develop efficient data pipelines that support various data-driven initiatives
  • Implement best practices for data engineering, ensuring data quality, reliability, and performance
  • Contribute to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows
  • Perform data extraction and implement complex data transformation logic to meet business requirements
  • Leverage advanced analytical skills to improve data pipelines and ensure data delivery is consistent across projects
  • Monitor and execute data quality checks to proactively identify and address anomalies
  • Ensure data availability and accuracy for analytical purposes
  • Identify opportunities for process automation within data engineering workflows
  • Communicate technical concepts to both technical and non-technical stakeholders
  • Deploy and manage containerized applications using Kubernetes (EKS) and Amazon ECS
  • Implement data orchestration and workflow automation using AWS Step Functions and Amazon EventBridge
  • Use Terraform for infrastructure provisioning and management, ensuring a robust and scalable data infrastructure
Required qualifications, capabilities, and skills


  • Formal training or certification on Data Engineering concepts and 3+ years of applied experience
  • Experience across the data lifecycle
  • Advanced SQL skills (e.g., joins and aggregations)
  • Advanced knowledge of RDBMS such as Aurora
  • Experience building microservice-based components using ECS or EKS
  • Working understanding of NoSQL databases
  • 4+ years of data engineering experience building and optimizing data pipelines, architectures, and data sets (Glue or Databricks ETL)
  • Proficiency in object-oriented and functional scripting languages (e.g., Python)
  • Experience developing ETL processes and workflows for streaming data from heterogeneous data sources
  • Willingness and ability to learn and pick up new skill sets
  • Experience working with modern data lakes (e.g., Databricks)
  • Experience building pipelines on AWS using Terraform and CI/CD pipelines

Preferred qualifications, capabilities, and skills
  • Experience with data pipeline and workflow management tools (e.g., Airflow)
  • Strong analytical and problem-solving skills, with attention to detail
  • Ability to work independently and collaboratively in a team environment
  • Good communication skills, with the ability to convey technical concepts to non-technical stakeholders
  • A proactive approach to learning and adapting to new technologies and methodologies



Job Detail

  • Job Id
    JD4073184
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    KA, IN, India
  • Education
    Not mentioned
  • Experience
    Year