We are seeking a skilled professional to design, develop, and deploy scalable machine learning solutions. The ideal candidate will have a strong foundation in data science and applied machine learning, with experience building agentic workflows using techniques such as RAG and NL-to-SQL. They will own the full lifecycle of machine learning solution development, ensuring enterprise security, privacy, and compliance standards are met. Strong communication and collaboration skills are required. The candidate will play a critical role in driving the organization's success by delivering machine learning solutions that meet business needs and produce measurable results.
Please apply if you have the passion, and the grounding in standard methodologies, to work in an environment where challenges are the norm, where individual brilliance is valued and goes hand in hand with team performance, and where being proactive is how we do things.
Key responsibilities
AI/ML
Design, train, and optimize machine learning models for real-world applications.
Build end-to-end ML pipelines including data preprocessing, feature engineering, model training, validation, and deployment.
Collaborate with data engineers and software developers to integrate ML models into production systems.
Monitor model performance, detect data drifts, and retrain models for continuous improvement.
GenAI
Agentic solution design and orchestration
+ Architect LLM-powered applications, including intent routing across tools/skills (a minimal routing sketch follows this block).
+ Implement agentic workflows using frameworks such as LangGraph or equivalents; decompose tasks, manage tool invocation, and ensure determinism/guardrails.
+ Integrate MCP-compatible tools and services to extend system capabilities.
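For illustration, here is a minimal, framework-agnostic sketch of the intent-routing idea above. The tool names, the keyword heuristic in `route_intent`, and the handler functions are hypothetical placeholders; in practice routing would typically be delegated to an LLM call or a framework such as LangGraph.

```python
from typing import Callable, Dict

# Hypothetical tool handlers; real ones would call retrieval, NL2SQL, or other services.
def answer_from_docs(query: str) -> str:
    return f"[RAG] retrieving passages for: {query}"

def answer_from_database(query: str) -> str:
    return f"[NL2SQL] generating a guarded SQL query for: {query}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "docs": answer_from_docs,
    "sql": answer_from_database,
}

def route_intent(query: str) -> str:
    """Toy keyword router; a production system would use an LLM or a trained classifier."""
    if any(k in query.lower() for k in ("how many", "average", "total", "count")):
        return "sql"
    return "docs"

def handle(query: str) -> str:
    return TOOLS[route_intent(query)](query)

if __name__ == "__main__":
    print(handle("How many orders were placed last month?"))
    print(handle("What does the refund policy say about digital goods?"))
```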
Retrieval and embeddings
+ Build effective RAG systems: chunking strategies, embedding model selection, vector indexing, reranking, and grounding to authoritative data.
+ Optimize vector stores and search (ANN, hybrid, filters, metadata schemas).
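A minimal sketch of the chunk / embed / retrieve flow described above. The hash-seeded embedder is a placeholder so the example runs without an external model; a real pipeline would use a proper embedding model, a vector store with ANN search, and a reranking step.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder embedder: a deterministic pseudo-random unit vector seeded from the text."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def chunk(document: str, size: int = 200) -> list[str]:
    """Naive fixed-size chunking; real systems use semantic or structure-aware chunking."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def top_k(query: str, chunks: list[str], k: int = 3) -> list[tuple[float, str]]:
    """Cosine similarity over unit vectors reduces to a dot product."""
    q = embed(query)
    index = np.stack([embed(c) for c in chunks])
    scores = index @ q
    return [(float(scores[i]), chunks[i]) for i in np.argsort(scores)[::-1][:k]]

if __name__ == "__main__":
    passages = chunk("Refunds are issued within 14 days. Digital goods are non-refundable. " * 10)
    for score, passage in top_k("refund policy for digital goods", passages):
        print(f"{score:.3f}  {passage[:60]}...")
```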
Prompting and model strategy
+ Develop robust prompting patterns and templates; structure prompts for tool use and function calling.
+ Compare generic vs. fine-tuned LLMs for intent routing; make data-driven choices on cost, latency, accuracy, and maintainability.
Data and integrations
+ Implement NL2SQL (and guarded SQL execution) patterns; connect to microservices and enterprise systems via secure APIs (a guarded-execution sketch follows this block).
+ Define and enforce data schemas, metadata, and lineage for reliable retrieval.
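One possible shape for the guarded SQL execution mentioned above: generated SQL is checked against a table allowlist and restricted to single read-only statements before it reaches the database. The schema, allowlist, and queries here are illustrative assumptions.

```python
import re
import sqlite3

ALLOWED_TABLES = {"orders", "customers"}  # illustrative allowlist

def is_safe(sql: str) -> bool:
    """Accept only a single SELECT statement over allowlisted tables."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                                   # no statement stacking
        return False
    if not re.match(r"(?is)^\s*select\b", stripped):      # read-only
        return False
    tables = re.findall(r"(?is)\b(?:from|join)\s+([a-z_][a-z0-9_]*)", stripped)
    return bool(tables) and {t.lower() for t in tables} <= ALLOWED_TABLES

def run_guarded(sql: str, conn: sqlite3.Connection):
    if not is_safe(sql):
        raise ValueError(f"Blocked by SQL guard: {sql!r}")
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    conn.execute("INSERT INTO orders VALUES (1, 42.0), (2, 13.5)")
    print(run_guarded("SELECT COUNT(*), SUM(amount) FROM orders", conn))  # allowed
    try:
        run_guarded("DROP TABLE orders", conn)                            # blocked
    except ValueError as err:
        print(err)
```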
Production readiness
+ Establish evaluation datasets and automated regressions for RAG and agents (see the sketch after this list).
+ Monitor quality (precision/recall, hallucination rate), latency, cost, and safety.
+ Apply guardrails, PII handling, access controls, and policy enforcement end-to-end.
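A tiny example of the kind of automated regression referred to in this list: a labeled evaluation set maps questions to the chunk IDs that should be retrieved, and retrieval precision/recall is computed per question. The evaluation data and retriever are placeholders; a real harness would also track hallucination rate, latency, and cost.

```python
# Placeholder eval set: question -> IDs of chunks that should be retrieved.
EVAL_SET = {
    "refund window": {"doc-1", "doc-4"},
    "shipping cost": {"doc-2"},
}

def fake_retriever(question: str) -> set[str]:
    """Stand-in for the RAG retriever under test."""
    return {"refund window": {"doc-1", "doc-3"}, "shipping cost": {"doc-2"}}[question]

def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

if __name__ == "__main__":
    for question, relevant in EVAL_SET.items():
        p, r = precision_recall(fake_retriever(question), relevant)
        print(f"{question!r}: precision={p:.2f} recall={r:.2f}")
```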
MLOps / LangOps
Version prompts, models, embeddings, and pipelines; manage A/B tests and rollout.
Instrument tracing/telemetry for agent steps and tool calls; implement fallback/timeout/retry policies.
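A minimal sketch of the retry-with-backoff and fallback part of those policies. The primary and fallback callables are hypothetical stand-ins for real model or tool calls; a hard timeout would normally be enforced by the HTTP client or async cancellation, and each attempt would be traced.

```python
import random
import time

def with_retry_and_fallback(primary, fallback, retries: int = 2, backoff: float = 0.5):
    """Try the primary callable with exponential backoff, then fall back."""
    def wrapped(*args, **kwargs):
        for attempt in range(retries + 1):
            try:
                return primary(*args, **kwargs)
            except Exception:
                if attempt < retries:
                    time.sleep(backoff * (2 ** attempt))
        return fallback(*args, **kwargs)
    return wrapped

# Hypothetical stand-ins for a flaky LLM call and a cheaper fallback.
def call_primary_model(prompt: str) -> str:
    if random.random() < 0.7:
        raise TimeoutError("primary model timed out")
    return f"primary answer to: {prompt}"

def call_fallback_model(prompt: str) -> str:
    return f"fallback answer to: {prompt}"

if __name__ == "__main__":
    ask = with_retry_and_fallback(call_primary_model, call_fallback_model)
    print(ask("summarize Q3 sales"))
```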
Core qualifications
Programming: Strong proficiency in Python (NumPy, Pandas, Scikit-learn), experience with ML frameworks (TensorFlow, PyTorch).
Machine Learning & Deep Learning: Hands-on experience with supervised, unsupervised, and reinforcement learning techniques.
Mathematics & Statistics: Solid foundation in linear algebra, probability, optimization, and statistical modeling.
Data Handling: Experience with SQL and NoSQL databases, data preprocessing, and feature engineering.
GenAI:
+ Strong understanding of vector embeddings and similarity search (cosine/IP/L2), chunking strategies, and reranking.
+ Hands?on experience building RAG pipelines (indexing, metadata, hybrid search, evaluators).
+ Practical prompt engineering for tool use, function calling, and agent planning.
+ Experience with agentic frameworks (e.g., LangGraph or similar) and orchestrating tools/services; familiarity with MCP and tool integration patterns.
+ Knowledge of NL2SQL techniques, SQL safety (schema constraints, query sandboxes), and microservice integration.
+ Ability to evaluate tradeoffs: generic/base LLMs vs. fine-tuned/task-specific models (accuracy, drift, data/ops burden, latency/cost).
+ Proficiency with Python and common LLM/RAG libraries; containerization and CI/CD.
+ Understanding of enterprise security, privacy, and compliance; RBAC/ABAC for data access; logging and auditability.
MLOps & Deployment: Familiarity with model deployment frameworks (MLflow, Kubeflow, SageMaker, Vertex AI), CI/CD pipelines, and containerization (Docker, Kubernetes).
Preferred experience
Hands-on experience with at least one major cloud provider (AWS, Azure, GCP, OCI).
Experience with large-scale distributed systems and big data frameworks (Spark, Hadoop).
Retrieval optimization (hybrid lexical+vector, metadata filtering, learned rerankers).
Model fine-tuning/adapter methods (LoRA, SFT, DPO) and evaluation.
Observability stacks for LLM apps (tracing, eval dashboards, cost/latency SLOs).
Document AI (OCR, layout parsing) and schema construction for unstructured data.
Caching, batching, and KV-cache considerations for throughput/cost.
Safe tool-use patterns: constrained decoding, JSON schemas, policy checks.
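To illustrate that last pattern, a small sketch that validates a model-proposed tool call against a JSON Schema before anything is executed. The schema and payloads are illustrative, and the example assumes the third-party `jsonschema` package is installed.

```python
import json
from jsonschema import ValidationError, validate  # third-party: pip install jsonschema

# Illustrative schema for a hypothetical "get_order_status" tool call.
TOOL_CALL_SCHEMA = {
    "type": "object",
    "properties": {
        "tool": {"const": "get_order_status"},
        "arguments": {
            "type": "object",
            "properties": {"order_id": {"type": "integer", "minimum": 1}},
            "required": ["order_id"],
            "additionalProperties": False,
        },
    },
    "required": ["tool", "arguments"],
    "additionalProperties": False,
}

def parse_and_validate(raw: str) -> dict:
    """Parse model output as JSON and reject anything that violates the schema."""
    payload = json.loads(raw)
    validate(instance=payload, schema=TOOL_CALL_SCHEMA)
    return payload

if __name__ == "__main__":
    ok = '{"tool": "get_order_status", "arguments": {"order_id": 42}}'
    bad = '{"tool": "get_order_status", "arguments": {"order_id": "42; DROP TABLE"}}'
    print(parse_and_validate(ok))
    try:
        parse_and_validate(bad)
    except ValidationError as err:
        print("rejected tool call:", err.message)
```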