AI Resident - Learning From Videos (LFV)
Full-time
Job Overview
The Team
The Learning From Videos (LFV) team in the Robotics division develops foundation models that leverage large-scale multi-modal data (RGB, depth, flow, semantics, bounding boxes, tactile, audio, etc.) from multiple domains (driving, robotics, indoor, outdoor, etc.) to improve downstream task performance.
Our approach emphasizes training scalability: by learning from multiple modalities, models can develop useful data-driven priors about 3D geometry, physics, and dynamics for world understanding.
Our research interests include, but are not limited to:
- Video Generation
- World Models
- 4D Reconstruction
- Multi-Modal Models
- Multi-View Geometry
- Data Augmentation
- Video-Language-Action Models
The AI Resident
This year-long AI Residency is a research-focused position designed for early-career researchers and engineers who are excited to work on ambitious problems in embodied AI. The resident will be deeply integrated into the LFV team, contributing to both ongoing and new research efforts in areas including:
- 4D World Models
- Physical and Embodied Intelligence
- Multi-Modal Learning
As an AI Resident, you will collaborate closely with researchers and engineers at TRI on high-risk, high-reward research, pushing forward our understanding of spatio-temporal reasoning and zero-shot generalization. The position targets the development of methods and techniques that can solve real-world problems.
We welcome you to join a positive, friendly, and enthusiastic team of researchers, where you will contribute to helping people gain and maintain independence, access, and mobility. We work closely with other Toyota affiliates and actively collaborate on research publications and the productization of our technologies.
Responsibilities
- Develop, integrate, and deploy algorithms for Multi-Modal and 4D reasoning targeting physical applications.
- Handle the ingestion of large-scale datasets for training, including streaming, online, and continual learning.
- Contribute innovative solutions at the intersection of machine learning, computer vision, and robotics to improve real-world task performance.
- Work closely with robotics and machine learning researchers and engineers to understand theoretical and practical needs.
- Follow best practices to produce maintainable code, both for internal use and for open-sourcing to the scientific community.
- Contribute to research publications and technical reports.
Qualifications
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, Robotics, or a related technical field.
- Exceptional candidates with equivalent research experience (e.g., strong publication record, open-source contributions, or industry research experience) are encouraged to apply.
- Strong background in computer vision and its applications to robotics and embodied systems.
- Demonstrated research experience through publications, technical projects, or open-source contributions.
- Strong communication skills and a collaborative mindset, with the ability to learn quickly and contribute to team research efforts.
- Passion for assisting and amplifying the abilities of older adults and those in need through dexterous manipulation, human-robot collaboration, and innovation in physical assistance.
Bonus Qualifications
- Spatio-temporal (4D) computer vision, including multi-view geometry, 3D/4D reconstruction, video generation, self-supervised learning, occlusion reasoning, etc.
- Large-scale training of multi-modal deep learning methods, in terms of both dataset size and model complexity, including context-length extension, efficient attention, distributed computing, etc.
- Application of machine learning and computer vision to embodied systems.