Machine Learning Engineer: Perception
Bedrock Robotics is bringing autonomy to the construction industry, leveraging expertise from the autonomous vehicle sector. The Machine Learning Engineer: Perception will design and develop advanced perception systems, optimizing models for real-world applications and collaborating with teams to enhance object detection and semantic segmentation.
Construction · Real Estate · Software
Responsibilities
Design Early Fusion Architectures: Develop and train state-of-the-art models (e.g., BEV-based transformers) that fuse raw Lidar and camera data for object detection and semantic segmentation
Tackle "Messy" Physics: Build perception systems robust enough to handle dynamic occlusion (seeing the robot’s own arm/bucket), particulates (dust, snow, rain), and high-vibration conditions
Deploy to the Edge: Optimize models for inference on embedded hardware. You will debug system-level issues, such as sensor calibration drift and latency bottlenecks
Collaborate Across Teams: Work with other teams to create state-of-the-art representations for downstream use cases
Qualifications
Required
Production ML Experience: Experience taking deep learning models from research to real-world production using PyTorch
3D Geometry & Calibration: You have a deep understanding of SE(3) transformations, homogeneous coordinates, and intrinsic/extrinsic sensor calibration. You understand the math required to project a 3D Lidar point onto a 2D image pixel accurately (see the projection sketch after this list)
Early Fusion Expertise: Practical experience with architectures that fuse modalities at the feature level (e.g., BEVFusion, TransFuser, PointPainting) rather than just fusing final bounding boxes
SOTA Object Detection: Experience with modern transformer-based architectures (e.g., DETR, PETR) and their temporal extensions (e.g., PETRv2, StreamPETR)
Systems Fluency: You are an expert in Python, but you are also comfortable reading and writing systems code in C++ or Rust. You understand memory management and real-time constraints
Data Intuition: You understand that in robotics, better data alignment often beats a bigger model. You are willing to dig into the data infrastructure to ensure ground truth quality
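
The 3D Geometry & Calibration requirement above references the math for projecting a 3D Lidar point onto a 2D image pixel. Below is a minimal sketch of that projection; the function name, frame conventions, and variable names are illustrative assumptions, not part of this posting. A homogeneous Lidar point is moved into the camera frame with the SE(3) extrinsic, then mapped to pixel coordinates with the pinhole intrinsics.

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
        """Project Nx3 Lidar points into pixel coordinates.

        points_lidar: (N, 3) points in the Lidar frame.
        T_cam_from_lidar: (4, 4) SE(3) extrinsic, Lidar frame -> camera frame.
        K: (3, 3) pinhole camera intrinsic matrix.
        Returns (M, 2) pixel coordinates and the mask of points in front of the camera.
        """
        n = points_lidar.shape[0]
        # Homogeneous coordinates: append a column of ones.
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])        # (N, 4)
        # Apply the extrinsic transform to move points into the camera frame.
        pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]           # (N, 3)
        # Keep only points with positive depth (in front of the camera).
        in_front = pts_cam[:, 2] > 1e-6
        # Apply the intrinsics, then divide by depth (perspective division).
        uv_h = (K @ pts_cam[in_front].T).T                        # (M, 3)
        uv = uv_h[:, :2] / uv_h[:, 2:3]
        return uv, in_front

In practice, the calibration drift mentioned under "Deploy to the Edge" shows up as errors in the extrinsic or intrinsic parameters above, misaligning the projected Lidar points with image features.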
Preferred
Voxel/Occupancy Experience: Experience working with occupancy grids, NeRFs, or voxel-based representations for terrain mapping
Top-Tier Research: Published work in conferences such as ICRA, IROS, CVPR, ECCV, ICCV, CoRL, or RSS