Learning to Walk in the Real World with Minimal Human Effort. Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan. Conference on Robot Learning (CoRL). 2020.

We develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort, using a multi-task learning procedure and a safety-constrained RL framework.

[Paper     Video]

Learning Agile Locomotion via Adversarial Training. Yujin Tang, Jie Tan, Tatsuya Harada. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020.

We present a multi-agent learning system, in which a quadruped robot learns to chase another robot while the latter learns to escape, which encourages agile behaviors and alleviates the laborious environment design effort.

[Paper     Video]

Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning. Xingyou Song, Yuxiang Yang, Krzysztof Choromanski, Ken Caluwaerts, Wenbo Gao, Chelsea Finn, Jie Tan. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2020.

We present a meta-learning method that enables a quadruped robot to adapt its policy to changes in dynamics using less than 3 minutes of real data.

[Paper     Video    Code]

Learning Agile Robotic Locomotion Skills by Imitating Animals. Xue Bin Peng, Erwin Coumans, Tingnan Zhang, Tsang-Wei Lee, Jie Tan, Sergey Levine. Robotics: Science and Systems (RSS). 2020. Best Paper Award.

We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.

[Project    Paper     Video     Code]

Autonomous Control of a Tendon-driven Robotic Limb with Elastic Elements Reveals that Added Elasticity can Enhance Learning. Ali Marjaninejad, Jie Tan, Francisco Valero-Cuevas. International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2020.

We explore the efficacy of autonomous learning on a simulated tendon-driven leg and demonstrate that stiffer muscles take longer to train but perform more accurately.

[Project    Paper]

Zero-shot Imitation Learning from Demonstrations for Legged Robot Visual Navigation. Xinlei Pan, Tingnan Zhang, Brian Ichter, Aleksandra Faust, Jie Tan, Sehoon Ha. IEEE International Conference on Robotics and Automation (ICRA). 2020.

We design a feature disentanglement network (FDN) to extract viewpoint-invariant features so that robots can learn from humans.

[Project    Paper     Video]

Learning Fast Adaptation with Meta Strategy Optimization. Wenhao Yu, Jie Tan, Yunfei Bai, Erwin Coumans, Sehoon Ha. IEEE International Conference on Robotics and Automation (ICRA). 2020.

We introduce Meta Strategy Optimization, a meta-learning algorithm that can quickly adapt to new scenarios with a handful of trials in the target environment.

[Paper     Video]

Adaptive Power System Emergency Control using Deep Reinforcement Learning. Qiuhua Huang, Renke Huang, Weituo Hao, Jie Tan, Rui Fan, Zhenyu Huang. IEEE Transactions on Smart Grid. 2019.

We develop novel adaptive emergency control schemes using deep reinforcement learning (DRL) for complex power systems.

[Paper]

Data Efficient Reinforcement Learning for Legged Robots. Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani. Conference on Robot Learning (CoRL). 2019.

We present a model-based reinforcement learning system for robot locomotion that learns walking from scratch based on only 4.5 minutes of data collected on a quadruped robot.

[Paper    Video]

Learning to Walk via Deep Reinforcement Learning. Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, Sergey Levine. Robotics: Science and Systems (RSS). 2019.

We present a sample-efficient deep RL algorithm based on maximum entropy RL that requires minimal per-task tuning and only a modest number of trials to learn locomotion policies directly on a quadruped robot.

[Project    Paper    Video    Blog    Code]
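The maximum-entropy objective underlying the algorithm augments the expected return with a policy-entropy bonus; in standard notation (our own rendering, not copied from the paper):

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
         \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
```

Here the temperature α trades off reward maximization against exploration; adjusting α automatically during training is what removes most of the per-task tuning.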

No-Reward Meta Learning. Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn. International Conference on Autonomous Agents and Multiagent Systems (AAMAS). 2019.

We introduce a meta-learning algorithm that allows for fast adaptation of learned policies to dynamic changes of the environment.

[Project    Paper    Code]

Sim-to-Real: Learning Agile Locomotion For Quadruped Robots. Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, Vincent Vanhoucke. Robotics: Science and Systems (RSS). 2018.

We present a system to automate the locomotion controller design using deep reinforcement learning. We train the controllers in simulation and overcome the sim-to-real gap by improving the simulator and learning robust policies.

[Paper    Video    Code]

Policies Modulating Trajectory Generators. Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, Vincent Vanhoucke. Conference on Robot Learning (CoRL). 2018.

We propose a neural network architecture for learning complex controllable locomotion skills by having simple Policies Modulate Trajectory Generators.

[Paper    Video]
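As a toy illustration of the idea, the architecture can be sketched as a learned policy that outputs both modulation parameters for a simple open-loop trajectory generator and a residual action added to its output. The single-sine generator, the policy interface, and the parameter names (frequency, amplitude, residual) below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def pmtg_step(phase, obs, policy, dt):
    """One control step of a Policies-Modulating-Trajectory-Generators loop.

    The policy sees the observation and the generator's phase, and outputs
    how to modulate the generator (frequency, amplitude) plus a residual.
    """
    freq, amp, residual = policy(obs, phase)
    # Advance the trajectory generator's internal phase at the commanded frequency.
    phase = (phase + 2.0 * math.pi * freq * dt) % (2.0 * math.pi)
    # Open-loop generator output (a single sine here; a real TG encodes a gait prior).
    tg_action = amp * math.sin(phase)
    # Final action = modulated generator output plus the learned residual.
    return tg_action + residual, phase
```

Because the generator already produces a plausible periodic motion, the policy only has to learn a low-dimensional modulation and a small correction, which simplifies exploration.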

Optimizing Simulations with Noise-Tolerant Structured Exploration. Krzysztof Choromanski, Atil Iscen, Vikas Sindhwani, Jie Tan, Erwin Coumans. IEEE International Conference on Robotics and Automation (ICRA). 2018.

We propose a simple noise-tolerant replacement for the standard finite-difference procedure used in black-box optimization. By embedding structured exploration in L-BFGS, our robot learns agile walking and turning policies.

[Paper]

Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning. Alexander Clegg, Wenhao Yu, Jie Tan, Karen Liu, Greg Turk. ACM Transactions on Graphics 37(6), SIGGRAPH Asia. 2018.

We apply deep reinforcement learning to automatically discover dressing controllers represented by neural networks.

[Paper    Video]

Learning to Navigate Cloth using Haptics. Alexander Clegg, Wenhao Yu, Zackory Erickson, Jie Tan, Karen Liu, Greg Turk. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2017.

We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information.

[Paper]

Preparing for the Unknown: Learning a Universal Policy with Online System Identification. Wenhao Yu, Jie Tan, Karen Liu, Greg Turk. Robotics: Science and Systems (RSS). 2017.

We present a new method of learning control policies that successfully operate under unknown dynamic models. We create such policies by leveraging a large number of training examples that are generated using a physical simulator.

[Paper]

Haptic Simulation for Robot-Assisted Dressing. Wenhao Yu, Ariel Kapusta, Jie Tan, Charles C. Kemp, Greg Turk, Karen Liu. IEEE International Conference on Robotics and Automation (ICRA), 2017.

We focus on a representative dressing task: pulling the sleeve of a hospital gown onto a person's arm. We present a system that learns a haptic classifier for the outcome of the task from only a few (2-3) real-world trials with one person.

[Paper]

Large-Scale Evolution of Image Classifiers. Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V. Le, Alexey Kurakin. International Conference on Machine Learning (ICML), 2017.

Designing architectures for neural networks can be challenging. Our goal is to minimize human participation, so we employ evolutionary algorithms to discover such networks automatically.

[Paper]

Simulation-Based Design of Dynamic Controllers for Humanoid Balancing. Jie Tan, Zhaoming Xie, Byron Boots, Karen Liu. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2016.

We propose a complete system that automatically designs a humanoid robot controller that succeeds at tasks in the real world after a very small number of real-world experiments.

[Project    Paper    Video]

Animating Human Dressing. Alexander Clegg, Jie Tan, Greg Turk, Karen Liu. ACM Transactions on Graphics 34(4), SIGGRAPH 2015.

We present a technique to synthesize human dressing by controlling a human character to put on an article of simulated clothing.

[Project    Paper    Video]

Learning Bicycle Stunts. Jie Tan, Yuting Gu, Karen Liu, Greg Turk. ACM Transactions on Graphics 33(4), SIGGRAPH 2014.

We apply reinforcement learning to find optimal policies that allow a human character to perform bicycle stunts in a physically simulated environment.

[Project    Paper    Supplementary Doc    BibTeX    Video]

Soft Body Locomotion. Jie Tan, Greg Turk, Karen Liu. ACM Transactions on Graphics 31(4), SIGGRAPH 2012.

We present a physically-based system to simulate the locomotion of soft body characters without skeletons. To control the locomotion, we formulate and solve a quadratic program with complementarity constraints (QPCC) to plan the muscle contractions and the contact forces simultaneously.

[Project    Paper    Supplementary Doc    BibTeX    Video]

Articulated Swimming Creatures. Jie Tan, Yuting Gu, Greg Turk, Karen Liu. ACM Transactions on Graphics 30(4), SIGGRAPH 2011.

We present a general approach to creating realistic swimming behavior for a given articulated creature body. We simulate the simultaneous two-way coupling between the fluid and the creature and apply numerical optimization to find the most efficient swimming gait for the animal.

[Project    Paper    BibTeX    Video]

Stable Proportional-Derivative Controllers. Jie Tan, Karen Liu, Greg Turk. IEEE Computer Graphics and Applications, 31(4), 2011.

We reformulate the traditional PD controller by taking into account the character's positions and velocities in the next time step, which allows arbitrarily high gains, even at large time steps.

[Project    Paper    BibTeX    Video]
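In a one-dimensional, explicitly integrated form, the reformulation amounts to evaluating the proportional term at the predicted next-step position. The sketch below is a simplification (the paper solves for the next-step state implicitly through the equations of motion), with all variable names our own.

```python
def pd_torque(q, qd, q_target, kp, kd):
    # Conventional PD control: both terms use the current state, which
    # becomes unstable at high gains and large time steps.
    return -kp * (q - q_target) - kd * qd

def stable_pd_torque(q, qd, q_target, kp, kd, dt):
    # Stable PD: evaluate the proportional term at the position predicted
    # one time step ahead (q + dt * qd). This implicit flavor adds damping
    # and tolerates much stiffer gains at large time steps.
    return -kp * (q + dt * qd - q_target) - kd * qd
```

Whenever the joint is already moving toward the target, the predicted-position term shrinks the commanded torque relative to conventional PD, which is the source of the extra stability.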

Physically-based Fluid Animation: A Survey. Jie Tan, Xubo Yang. Science In China Series F: Information Science, 52(5), 2009.

We give a comprehensive survey of physically-based fluid animation research.

[Paper    BibTeX]

Fluid Animation with Multilayer Grids. Jie Tan, Xubo Yang, Xin Zhao, Zhanxin Yang. ACM SIGGRAPH/Eurographics Symposium on Computer Animation Poster, 2008.

We propose a multi-layer grid structure for numerically solving the Navier-Stokes equations, which combines the advantages of various discretizations, captures multi-scale behavior, and allocates computational resources efficiently.

[Paper    BibTeX    Video]