
Machine Learning Engineer: Perception

Bedrock Robotics Inc
Posted 5 days ago, valid for 23 days
Location

San Francisco, CA 94102, US

Salary

Competitive

Contract type

Full Time

By applying, a SonicJobs account will be created for you. SonicJobs' Privacy Policy and Terms & Conditions will apply.


Sonic Summary

  • Bedrock is seeking a Machine Learning Engineer specializing in perception to join their team focused on bringing autonomy to the construction industry.
  • The ideal candidate should have at least 3 years of production ML experience, particularly in deploying deep learning models using PyTorch.
  • Responsibilities include designing early fusion architectures, optimizing models for embedded hardware, and collaborating with various teams to enhance system performance.
  • Candidates should possess expertise in 3D geometry, sensor calibration, and modern object detection architectures, along with proficiency in Python and familiarity with C++ or Rust.
  • The salary for this position is competitive and commensurate with experience, reflecting the company's commitment to attracting top talent in the field.

Join the team bringing advanced autonomy to the built world

At Bedrock, we’re moving AI out of the lab and into the real world. Our team is composed of industry veterans who helped launch Waymo, scaled Segment to a $3.2B acquisition, and grew Uber Freight to $5B in revenue. Today, we’re deploying autonomous systems on heavy construction machinery across the country, accelerating schedules on billion-dollar infrastructure projects and improving safety on job sites. Backed by $350M in funding, we’re working quickly to close the gap between America's surging demand for housing, data centers, and manufacturing hubs and the construction industry's growing labor shortage.

This is where algorithms meet steel-toed boots. You’ll collaborate with construction veterans and world-class engineers to solve physical-world problems that simulations can’t touch. If you're ready to apply cutting-edge technology to solve meaningful problems alongside a talented team, we'd love to have you join us.

Machine Learning Engineer: Perception

 

Bedrock is bringing autonomy to the construction industry! We’re a group of veterans from the autonomous vehicle industry who are passionate about bringing the benefits of automation to areas of construction currently underserved by the market.

We are looking for engineers with expertise in shipping production 3D perception systems at scale. Successful candidates have architected systems and trained models from scratch, understand the full stack (clustering, detection, classification, and tracking), and have shipped at scale. We use both computer-vision- and lidar-based approaches, so knowledge of either or both is key. Models are just part of the system: you understand data and have good intuition about why models fail. You know how to evaluate corner cases, manage or build data pipelines, decide when (and when not) to use autolabels, and have a strong understanding of the statistical properties of these systems.

 

What You’ll Do:

  • Design Early Fusion Architectures: Develop and train state-of-the-art models (e.g., BEV-based transformers) that fuse raw lidar and camera data for object detection and semantic segmentation.

  • Tackle "Messy" Physics: Build perception systems robust enough to handle dynamic occlusion (seeing the robot’s own arm/bucket), particulates (dust, snow, rain), and high-vibration conditions.

  • Deploy to the Edge: Optimize models for inference on embedded hardware. You will debug system-level issues, such as sensor calibration drift and latency bottlenecks.

  • Collaborate Across Teams: Work with other teams to create state-of-the-art representations for downstream use cases.
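The early-fusion idea above can be made concrete with a PointPainting-style sketch: project each lidar point into the image and attach that pixel's semantic scores to the point before any 3D network sees it. This is a minimal illustration, not Bedrock's actual pipeline; the function name and shapes are hypothetical, and production systems typically fuse learned features rather than raw class scores.

```python
# Illustrative PointPainting-style early fusion (hypothetical sketch).
import numpy as np

def paint_points(points_cam, semantic_scores, K):
    """Decorate lidar points with per-pixel semantic scores.

    points_cam:      (N, 3) lidar points already in the camera frame
    semantic_scores: (H, W, C) per-pixel class scores from an image network
    K:               (3, 3) pinhole camera intrinsic matrix
    Returns (N, 3 + C): each point concatenated with its pixel's scores.
    """
    h, w, c = semantic_scores.shape
    # Perspective projection: pixel = (K @ p) / depth.
    uvw = (K @ points_cam.T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    painted = np.zeros((points_cam.shape[0], 3 + c))
    painted[:, :3] = points_cam
    # Only points that land inside the image (with positive depth) get painted;
    # points outside the camera frustum keep zero scores.
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < h) &
              (uvw[:, 2] > 0))
    painted[inside, 3:] = semantic_scores[uv[inside, 1], uv[inside, 0]]
    return painted
```

The point of fusing at this level, rather than merging final bounding boxes, is that the 3D detector can exploit image semantics for every point, not just for objects both sensors already detected independently.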

What we're looking for:

  • Production ML Experience: 3+ years of experience taking deep learning models from research to real-world production using PyTorch, TensorFlow, or JAX.

  • 3D Geometry & Calibration: You have a deep understanding of SE(3) transformations, homogeneous coordinates, and intrinsic/extrinsic sensor calibration. You understand the math required to project a 3D lidar point onto a 2D image pixel accurately.

  • Early Fusion Expertise: Practical experience with architectures that fuse modalities at the feature level (e.g., BEVFusion, TransFuser, PointPainting) rather than just fusing final bounding boxes.

  • SOTA Object Detection: Experience with modern transformer-based architectures (DETR, PETR, etc.), including their temporal variants (PETRv2, StreamPETR, etc.).

  • Systems Fluency: You are an expert in Python, but you are also comfortable reading and writing systems code in C++ or Rust. You understand memory management and real-time constraints.

  • Data Intuition: You understand that in robotics, better data alignment often beats a bigger model. You are willing to dig into the data infrastructure to ensure ground truth quality.
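The projection math named in the 3D Geometry & Calibration bullet fits in a few lines: apply the SE(3) extrinsic in homogeneous coordinates, then the pinhole intrinsics. This is a minimal sketch under simple assumptions (no lens distortion, a known 4x4 extrinsic); the function name and calibration values are hypothetical.

```python
# Hypothetical sketch of lidar-to-image projection via SE(3) + intrinsics.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project Nx3 lidar points into pixel coordinates.

    points_lidar:     (N, 3) points in the lidar frame
    T_cam_from_lidar: (4, 4) SE(3) extrinsic (lidar -> camera)
    K:                (3, 3) camera intrinsic matrix
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    n = points_lidar.shape[0]
    # Homogeneous coordinates: append a column of ones so the rigid
    # transform (rotation + translation) is a single matrix multiply.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0  # points behind the camera are invalid
    # Perspective projection: divide by depth after applying intrinsics.
    uvw = (K @ pts_cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, in_front
```

Getting this right end to end, including the frame conventions hidden in `T_cam_from_lidar` and the depth check, is exactly where calibration drift and fusion misalignment show up in practice.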

Ways to stand out:

  • Voxel/Occupancy Experience: Experience working with occupancy grids, NeRFs, or voxel-based representations for terrain mapping.

  • Top-Tier Research: Published work in conferences such as ICRA, IROS, CVPR, ECCV, ICCV, CoRL, or RSS.
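For a sense of the occupancy-grid representation mentioned above: at its simplest, it bins points into cells marked occupied or free. The sketch below is deliberately minimal and illustrative only (2D, boolean cells, made-up parameters); real terrain-mapping stacks use 3D voxels, ray casting, and probabilistic log-odds updates.

```python
# Minimal illustrative 2D occupancy grid (hypothetical parameters).
import numpy as np

def occupancy_grid(points_xy, cell_size=0.5, extent=10.0):
    """Bin 2D points into a boolean occupancy grid centered on the origin.

    points_xy: (N, 2) x/y coordinates in meters
    cell_size: edge length of one grid cell in meters
    extent:    half-width of the mapped area in meters
    """
    n_cells = int(2 * extent / cell_size)
    grid = np.zeros((n_cells, n_cells), dtype=bool)
    # Shift points so the grid origin is the bottom-left corner, then bin.
    idx = np.floor((points_xy + extent) / cell_size).astype(int)
    # Discard points that fall outside the mapped extent.
    valid = np.all((idx >= 0) & (idx < n_cells), axis=1)
    grid[idx[valid, 0], idx[valid, 1]] = True
    return grid
```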

Our roles are often flexible. If you don't fit all the criteria, or are based in another location (especially one where we have an office, like SF or NY), please apply anyway! We'd love to consider you.



