This role drives the robotics aspects of the Simulation Smart Agents software stack. The Smart Agents group is responsible for building the ML models and systems that simulate road users in a variety of situations and generate the scenarios used for testing and training AV driving policies. Our technology stack includes Generative AI models (GPT) and Reinforcement Learning (RL). The Smart Agents group works closely with the rest of the Simulation organization and with our partners in Behaviors, Perception, and Safety engineering.
The specific duties may include streamlining optimization and integration, creating ML infrastructure and tools, pipeline development, introspection, productionization, and designing for fast experimentation cycles.
What you’ll be doing:
Collaborating with various specialists to deploy and integrate algorithms and ML models into the simulation stack, with an eye toward optimizing and simplifying these processes.
Working on runtime optimization and architecting highly performant ML and system pipelines.
Creating and improving data pipelines that turn real-world observations into training and simulation data.
Helping define metrics and loss functions to evaluate the correctness and realism of simulated actors' behavior.
Spotting and collaboratively closing gaps in tooling and data introspection to accelerate engineering velocity within Simulation.
What you must have:
8+ years of experience in the field of robotics or latency-sensitive backend services
Proven experience in machine learning and classification, and familiarity with ML frameworks such as TensorFlow or PyTorch
Experience architecting highly performant ML and system pipelines
Strong understanding and experience with runtime optimization
Strong programming skills in modern C++ or Python
Experience with profiling CPU and/or GPU software, process scheduling, and prioritization
Passionate about self-driving car technology and its impact on the world
Expertise in defining architectures that are scalable, efficient, fault-tolerant, and easily extensible, allowing for change over time without major disruptions
Ability to design across multiple systems, diving deep into sophisticated areas while maintaining a good breadth of understanding of systems outside your domain
Ability to wear several hats, shifting between coding, design, technical strategy, and mentorship, with excellent judgment about when to switch contexts to meet the greatest need
Bonus points!
Experience with ROS, OpenCV, or Gazebo
Expertise with parallel training, active learning, and model deployment (e.g., TensorRT conversion)
Experience with build systems (Bazel, Buck, Blaze, or CMake)
Track record of deploying perception/prediction/AV models into real-world environments
Expertise working with various sensor technologies, including lidar, radar, and camera
Experience working with RL and sequence prediction (ML) models
Experience with CUDA
The salary range for this position is $183,600 - $270,000. Compensation will vary depending on location, job-related knowledge, skills, and experience. You may also be offered a bonus, long-term incentives, and benefits. These ranges are subject to change.