Minimum qualifications:
- Bachelor’s degree or equivalent practical experience.
- 2 years of experience with coding in C++, or 1 year of experience with an advanced degree.
- 1 year of experience with distributed processing.
- 1 year of experience with large-scale computing.
Preferred qualifications:
- Master's degree or PhD in Computer Science, or a related technical field.
- Experience building and scaling components of the Machine Learning life-cycle (e.g., feature stores, model serving systems, ML pipelines, training data generation).
- Experience developing, optimizing, and operating inference services or online serving systems at petascale.
- Experience with data processing internals or managing major systems like Flume, Beam, Spark, or Flink.
- Ability to drive projects from ideation to production deployment and long-term maintenance.
- Ability to collaborate effectively with researchers, applied scientists, and product teams to translate abstract requirements into production infrastructure.
About the job:
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
Responsibilities:
- Build FeatureML to streamline the user journey for Data Engineers and AI Practitioners, accelerating the creation, deployment, and management of features for production-scale ML models.
- Architect and build a highly available, elastic inference service designed to handle massive traffic demands.
- Develop a unified processing and storage platform, tailoring it specifically for DeepMind's demanding data needs.
