
AI Platform Engineer

The Portfolio Group
Posted 9 hours ago, valid for 8 days
Location

London, Greater London SW1A 2DX, England

Salary

£50,000 - £60,000 per year

Contract type

Full Time

By applying, a CV-Library account will be created for you. CV-Library's Terms & Conditions and Privacy Policy will apply.

SonicJobs' Terms & Conditions and Privacy Policy also apply.

Sonic Summary

  • Join an award-winning B2B consultancy in London as an AI Platform Engineer with an excellent salary and benefits.
  • You will design, build, and operate cloud-native platforms for conversational AI and generative AI products at scale.
  • The role requires strong experience in AWS cloud-native platforms, Databricks, and building production AI systems, with a focus on RAG pipelines and LLM-backed services.
  • Candidates should have proficiency in Python and experience with Kubernetes and Terraform, along with a solid understanding of distributed systems and API design.
  • This position offers deep technical ownership and long-term impact, requiring a minimum of 5 years of relevant experience.

AI Platform Engineer | London | Excellent Salary + Benefits

Join an award-winning, internationally recognised B2B consultancy as an AI Platform Engineer, owning the cloud-native platform that underpins conversational AI and generative AI products at scale.

Sitting at the core of AI delivery, you will design, build, and operate the runtime, infrastructure, and operational layers supporting RAG pipelines, LLM orchestration, vector search, and evaluation workflows across AWS and Databricks. Working closely with senior AI engineers and product teams, you'll ensure AI systems are scalable, observable, secure, and cost-efficient, turning experimental AI into reliable, production-grade capabilities. The wider scope of responsibilities is detailed below:

  • Own and evolve the AI platform powering conversational assistants and generative AI products.
  • Build, operate, and optimise RAG and LLM-backed services, improving latency, reliability, and cost.
  • Design and run cloud-native AI services across AWS and Databricks, including ingestion and embedding pipelines.
  • Scale and operate vector search infrastructure (Weaviate, OpenSearch, Algolia, AWS Bedrock Knowledge Bases).
  • Implement strong observability, CI/CD, security, and governance across AI workloads.
  • Enable future architectures such as multi-model orchestration and agentic workflows.
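To give a flavour of the retrieval step behind the RAG and vector-search responsibilities above, here is a minimal, self-contained sketch. The documents, hand-made 3-d embeddings, and ranking logic are purely illustrative assumptions; a production platform of the kind described would use a real embedding model and a managed vector store such as Weaviate, OpenSearch, or AWS Bedrock Knowledge Bases.

```python
import math

# Toy in-memory "vector store": illustrative stand-in for the
# retrieval layer of a RAG pipeline. Keys and vectors are invented
# for this sketch only.
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.1],
    "account security": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

def build_prompt(question, query_vec):
    """Assemble a grounded prompt: retrieved context plus the user question."""
    context = "\n".join(f"- {d}" for d in retrieve(query_vec))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A query embedding close to the "refund policy" vector ranks it first.
prompt = build_prompt("How do refunds work?", [0.85, 0.15, 0.05])
print(prompt)
```

In a real deployment the prompt built here would be sent to an LLM, and the latency, cost, and quality of exactly this retrieve-then-generate loop are what the role is asked to own.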

Required Skills & Experience

  • Strong experience designing and operating cloud-native platforms on AWS (Lambda, API Gateway, DynamoDB, S3, CloudWatch).
  • Hands-on experience with Databricks and large-scale data or embedding pipelines.
  • Proven experience building and operating production AI systems, including RAG pipelines, LLM-backed services, and vector search (Weaviate, OpenSearch, Algolia).
  • Proficiency in Python, with experience deploying containerised services on Kubernetes using Terraform.
  • Solid understanding of distributed systems, cloud architecture, and API design, with a focus on scalability and reliability.
  • Demonstrable ownership of observability, performance, cost efficiency, and operational robustness in production environments.

Why Join?

You'll own the foundational AI platform behind a growing suite of generative AI products, working with senior AI leaders on systems used by real customers at scale. This role offers deep technical ownership, long-term impact, and an excellent compensation package within a market-leading organisation.


Apply now in a few quick clicks