Software Engineer III MLOps

KA, IN, India

Job Description

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.


As a Software Engineer III at JPMorgan Chase within the Employee Platforms team, you serve as a seasoned Data Engineer with strong experience in AWS, Python, and MLOps, designing, building, and maintaining scalable data and machine learning infrastructure. You will collaborate with data scientists and software engineers to enable efficient data processing, model deployment, and monitoring in cloud environments.




Job responsibilities



  • Design, implement, and optimize ETL/ELT pipelines using Python and AWS services (e.g., Glue, Lambda, S3, Redshift); a brief illustrative sketch follows this list.
  • Support the deployment, monitoring, and maintenance of machine learning models in production, leveraging MLOps best practices and tools.
  • Build and manage scalable data architectures on AWS, ensuring reliability, security, and cost-effectiveness.
  • Collaborate closely with data scientists, ML engineers, and business stakeholders to understand requirements and deliver robust solutions.
  • Develop automated workflows for data ingestion, transformation, and model deployment using CI/CD pipelines.
  • Monitor data pipelines and ML models for performance, data drift, and system health; implement improvements as needed.
  • Document data processes, architectures, and model workflows; ensure compliance with internal and regulatory standards.
  • Optimize data workflows and architectures for efficiency and scalability.
  • Integrate new data sources and technologies into existing data infrastructure.
  • Troubleshoot and resolve issues in data pipelines and machine learning operations.
  • Ensure adherence to best practices in data engineering, security, and compliance.
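
Purely as an illustration of the kind of pipeline orchestration described above, the sketch below uses Python and boto3 to start an AWS Glue ETL job and poll it until it reaches a terminal state. The job name, bucket path, and argument key are hypothetical placeholders for this sketch, not references to any real JPMorgan Chase system.

    import time

    import boto3  # AWS SDK for Python

    # Hypothetical names; real resources would be defined by the team.
    GLUE_JOB_NAME = "daily-sales-etl"
    RAW_INPUT_PATH = "s3://example-raw-data-bucket/sales/"

    glue = boto3.client("glue")

    # Start the Glue ETL job, passing the input location as a job argument.
    run = glue.start_job_run(
        JobName=GLUE_JOB_NAME,
        Arguments={"--input_path": RAW_INPUT_PATH},
    )
    run_id = run["JobRunId"]

    # Poll until the run finishes, then report its final state.
    while True:
        status = glue.get_job_run(JobName=GLUE_JOB_NAME, RunId=run_id)
        state = status["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            print(f"Glue job {GLUE_JOB_NAME} finished with state {state}")
            break
        time.sleep(30)  # wait before checking again

In practice a workflow like this would typically be scheduled and monitored by an orchestrator such as Airflow rather than run from a standalone script.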

Required qualifications, capabilities, and skills



  • Formal training or certification on software engineering concepts and 3+ years applied experience.
  • Hands-on experience as a Data Engineer, MLOps Engineer, or in a closely related field.
  • Program proficiently in Python.
  • Utilize AWS cloud services (e.g., S3, EC2, Lambda, Glue, Redshift, SageMaker) in data engineering tasks.
  • Apply MLOps tools and frameworks (e.g., MLflow, Kubeflow, Airflow, Docker, Kubernetes) in production environments; a representative MLflow sketch follows this list.
  • Build and optimize data pipelines and workflows for large-scale data processing.
  • Implement CI/CD concepts and tools (e.g., Jenkins, GitLab CI) for automated deployments.
  • Model data effectively and apply warehousing and big data concepts.
  • Solve complex problems and communicate technical solutions clearly.
  • Adapt quickly to new technologies and evolving project requirements.
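
As a minimal, illustrative sketch of the MLOps tooling listed above (assuming MLflow and scikit-learn are installed), the example below trains a small model and logs its parameters, accuracy, and serialized artifact to an MLflow tracking run. The dataset, run name, and hyperparameters are assumptions chosen only for the demo.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Toy dataset purely for illustration.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    with mlflow.start_run(run_name="iris-logreg-demo"):
        params = {"C": 1.0, "max_iter": 200}
        model = LogisticRegression(**params).fit(X_train, y_train)

        accuracy = accuracy_score(y_test, model.predict(X_test))

        # Record parameters, the evaluation metric, and the model artifact.
        mlflow.log_params(params)
        mlflow.log_metric("accuracy", accuracy)
        mlflow.sklearn.log_model(model, artifact_path="model")

Logging runs this way gives a comparable record of experiments and a versioned model artifact that downstream deployment tooling can pick up.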

Preferred qualifications, capabilities, and skills



  • Leverage infrastructure-as-code tools (Terraform, CloudFormation) for cloud resource management.
  • Apply machine learning frameworks (TensorFlow, PyTorch, Scikit-learn) in model development and deployment.
  • Utilize data engineering concepts and tools (Spark, Kafka, etc.) for advanced data processing.
  • Implement model governance and explainability frameworks in ML workflows; a brief illustrative sketch follows this list.
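
As one possible illustration of the explainability work mentioned above, the sketch below uses scikit-learn's permutation_importance to rank features by how much shuffling each one degrades a trained model's score. The random-forest model and toy dataset are assumptions made only for this example.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy dataset and model purely for illustration.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.25, random_state=0
    )
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Permutation importance: the drop in score when each feature is shuffled.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Print the five most influential features with their mean importance.
    ranked = sorted(
        zip(data.feature_names, result.importances_mean),
        key=lambda kv: kv[1],
        reverse=True,
    )
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.4f}")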








Job Detail

  • Job Id: JD4778123
  • Industry: Not mentioned
  • Total Positions: 1
  • Job Type: Full Time
  • Salary: Not mentioned
  • Employment Status: Permanent
  • Job Location: KA, IN, India
  • Education: Not mentioned
  • Experience: 3+ years