Imitation Learning
Imitation learning trains robots to perform tasks by observing human behavior, without writing explicit instructions. A person demonstrates a task, and the robot learns to replicate it. Early systems relied on motion capture or physical guidance; newer models learn directly from video, speech, or sensor streams.

Recent work like Value-Implicit Pretraining (VIP) uses human videos to train goal-aware visual representations. These representations can guide robotic behavior without labeled actions or expert demonstrations, making imitation learning more scalable and data-efficient.
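At its simplest, "learning to replicate a demonstration" reduces to supervised learning on recorded observation-action pairs, often called behavioral cloning: the policy is trained to predict the demonstrator's action from the current observation. The sketch below illustrates the idea in PyTorch; the network sizes, dimensions, and random placeholder data are assumptions for illustration, not any particular system.

```python
# Minimal behavioral-cloning sketch. Observation/action sizes, network
# architecture, and the random "demonstration" data are illustrative
# assumptions only.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 7  # assumed observation and action dimensions

# Policy network: maps an observation to a predicted action.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Placeholder demonstration data; in practice these would come from
# human teleoperation, kinesthetic guidance, or motion capture.
demo_obs = torch.randn(1024, OBS_DIM)
demo_act = torch.randn(1024, ACT_DIM)

for epoch in range(10):
    pred = policy(demo_obs)
    loss = nn.functional.mse_loss(pred, demo_act)  # imitate the demonstrator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A goal-aware visual representation in the spirit of VIP can be used in a similar pipeline without any action labels: the distance between the embedding of the current camera frame and the embedding of a goal image serves as a dense reward or progress signal. The snippet below is a hedged sketch of that idea; `encode` is a hypothetical stand-in for any pretrained frame encoder, not VIP's actual API.

```python
def embedding_reward(encode, frame, goal_frame):
    """Dense reward: negative distance to the goal image in embedding space."""
    with torch.no_grad():
        return -torch.norm(encode(frame) - encode(goal_frame), dim=-1)
```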