Senior Data Scientist (GrabMaps)

  • Full-time

Company Description

About Grab and Our Workplace

Grab is Southeast Asia's leading superapp. From getting your favourite meals delivered to helping you manage your finances and getting around town hassle-free, we've got your back. At Grab, purpose gives us joy and habits build excellence, and we harness the power of technology and AI to drive Southeast Asia forward by economically empowering everyone, with heart, hunger, honour, and humility.

Job Description

Get to Know the Team

The Data Science (GrabMaps) team builds map intelligence that powers core Grab services like transport allocation, logistics, and pricing. You'll work on problems such as place search and recommendation, data curation, travel time estimation, traffic forecasting, routing, and positioning.

The team invests in deep research and scalable models, and you'll have room to explore new ideas that can shape how millions of users experience Grab's products.

Get to Know the Role

As an Applied Scientist for GrabMaps, you'll design, build, and ship machine learning and generative AI solutions that directly impact how Grab understands and uses map data. You'll work across the full lifecycle: framing problems with stakeholders, developing models (including LLMs and multi‑modal models), and deploying them into production.

You'll focus on using large models—LLMs, vision, and multi‑modal models—to improve search, recommendation, and content understanding around places and road networks.

The Critical Tasks You Will Perform

You will:

  • Translate business problems in mapping, search, and recommendation into clear machine learning problems, define success metrics, and explain your approach and results to both technical and non‑technical stakeholders.
  • Own end‑to‑end delivery of small to medium‑scope ML/LLM features or services, from data exploration and model design through training, evaluation, deployment, and post‑launch monitoring.
  • Develop and optimize deep learning models, including LLMs and generative and multi‑modal models, to solve use cases such as POI understanding, relevance ranking, content generation, and map data quality.
  • Fine‑tune, evaluate, and adapt state‑of‑the‑art LLMs (e.g., GPT, Llama, Qwen) and other foundation models using supervised fine‑tuning and RL‑based methods, including prompt and instruction design for downstream tasks.
  • Architect and implement agentic AI workflows (for example, with LangChain, LlamaIndex, or function‑calling APIs), including tool integration, workflow chaining, and multi‑agent coordination for real‑time or near real‑time applications.
  • Build and maintain scalable pipelines for data preprocessing, feature extraction, model training, fine‑tuning, automated evaluation, and model versioning, working with ML engineers and software engineers to run them in production.
  • Optimize model serving for latency, throughput, and cost using techniques such as model compression, quantization, GPU/TPU acceleration, and distributed inference, and integrate with serving frameworks like TorchServe, Triton, or Ray Serve.
  • Review relevant research in search/recommendation, NLP/LLMs, and computer vision, run targeted experiments, and bring promising ideas into production prototypes or features.

Qualifications

What Essential Skills You Will Need

  • Ph.D. in Computer Science, Electrical/Computer Engineering, Operations Research, or a related field, or a Master's degree with at least 2 years of equivalent practical experience applying advanced machine learning methods to real‑world problems.
  • Hands-on experience in deep learning and AI, with expertise in LLMs including fine-tuning, prompt engineering, and adapting foundation models for downstream tasks.
  • Demonstrated experience deploying LLMs and other large-scale AI models to production:
    • Experience serving LLMs and agentic systems in production environments (e.g., TorchServe, Triton, Ray Serve)
    • Knowledge of model compression, quantization, and techniques for optimizing inference latency and cost
    • Familiarity with GPU/TPU acceleration and distributed inference architectures
    • Experience implementing and maintaining scalable pipelines for data preprocessing, model training, fine-tuning, and automated evaluation
  • Proficiency in deep learning frameworks (TensorFlow, PyTorch) and deployment tools (e.g., ONNX, TensorFlow Serving, TorchServe, Triton Inference Server)
  • Solid software engineering skills in Python and Spark
  • Experience with model versioning, CI/CD for ML, containerization (e.g., Docker), and cloud-based deployment (AWS, GCP, Azure)

Additional Information

Life at Grab

We care about your well-being at Grab. Here are some of the global benefits we offer:

  • We have your back with Term Life Insurance and comprehensive Medical Insurance.
  • With GrabFlex, create a benefits package that suits your needs and aspirations.
  • Celebrate moments that matter in life with loved ones through Parental and Birthday leave, and give back to your communities through Love-all-Serve-all (LASA) volunteering leave.
  • We have a confidential Grabber Assistance Programme to guide and uplift you and your loved ones through life's challenges.
  • Balancing personal commitments and life's demands is made easier with our FlexWork arrangements, such as differentiated hours.

What We Stand For at Grab

We are committed to building an inclusive and equitable workplace that enables diverse Grabbers to grow and perform at their best. As an equal opportunity employer, we consider all candidates fairly and equally regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, family commitments, physical and mental impairments or disabilities, and other attributes that make them unique.

Privacy Policy