Infrastructure · Remote, India · Full-time

MLOps Engineer

Build and maintain the infrastructure that powers our AI systems. We're looking for someone who can bridge the gap between ML models and production systems at scale.

₹7,00,000 - ₹13,00,000

What you'll do

  • Design and implement ML pipelines for training, evaluation, and deployment
  • Build and maintain inference infrastructure for low-latency, high-throughput serving
  • Implement monitoring and observability for ML systems — tracking model performance, data drift, and system health
  • Manage vector databases and embedding infrastructure for RAG systems
  • Optimize compute costs while maintaining performance requirements
  • Implement CI/CD pipelines for model deployment and rollback
  • Collaborate with ML engineers to productionize research code
  • Ensure security and compliance of ML infrastructure

What we're looking for

Essential

  • 4+ years of experience in DevOps, SRE, or infrastructure engineering
  • 2+ years working with ML systems in production
  • Strong experience with cloud platforms (AWS, GCP, or Azure)
  • Proficiency in containerization and orchestration (Docker, Kubernetes)
  • Experience with ML serving frameworks (TensorFlow Serving, Triton, vLLM)
  • Strong Python skills and familiarity with ML libraries
  • Experience with infrastructure-as-code (Terraform, Pulumi)
  • Understanding of ML lifecycle and model deployment patterns

Nice to have

  • Experience with LLM inference optimization and serving
  • Knowledge of vector databases (Pinecone, Weaviate, Qdrant)
  • Familiarity with ML experiment tracking (MLflow, Weights & Biases)
  • Experience with GPU infrastructure and optimization
  • Understanding of model monitoring and drift detection
  • Experience with data pipeline tools (Airflow, Dagster)
  • Knowledge of edge deployment and model optimization
  • Relevant certifications (AWS ML Specialty, GCP ML Engineer)

Benefits & perks

  • Competitive salary with equity options
  • Comprehensive health and accidental insurance
  • Flexible remote work arrangement
  • Provident Fund (PF) and gratuity benefits
  • Professional certification budget
  • Access to cloud resources and cutting-edge infrastructure
  • Generous leave policy
  • Conference and training opportunities
  • Collaborative team of ML and infrastructure experts
  • Opportunity to build ML infrastructure at scale

Join Our Infrastructure Team

Our Infrastructure team builds the foundation that makes AI work at scale. We're the team that takes ML models from notebooks to production, ensuring they run reliably, efficiently, and securely. You'll work with cutting-edge ML infrastructure — from LLM serving to vector databases to real-time inference pipelines. We value reliability, efficiency, and elegant solutions to hard problems.

Ready to Build ML Infrastructure?

Join us in building the infrastructure that powers production AI systems.
