Yuan-Hong Liao (Andrew)
Undergraduate [at] NTHU
andrewliao11 [at]
In Deep.
I'm fascinated by the causality of reinforcement learning agents, safe RL, and the uncertainty of deep neural policies. My ultimate goal is to learn a powerful agent that is at the same time interpretable and safe for human beings.


Yuan-Hong Liao just graduated from NTHU, where he majored in Electrical Engineering, advised by Prof. Min Sun. His research spans Reinforcement Learning, Computer Vision, and Natural Language Processing. He will join Prof. Joseph J. Lim's lab at USC as a visiting student in late 2017. For details of his research background, please refer to here. For his curriculum vitae, please see here. If you're interested in his research, see his Google Scholar profile.


Oct. 2017
Visiting Student @ USC
Supervisor: Prof. Joseph Lim
Summer 2016
CV/ML intern @ UmboCV
Natural language object retrieval and object detection
Mar. 2016 - May 2017
Research Intern @ ITRI
Surveyed reinforcement learning and implemented basic RL agents
2013 - 2017
Undergraduate student @ NTHU
Electrical Engineering department
Supervisor: Prof. Min Sun


Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner

We propose a novel adversarial training procedure to leverage unpaired data in the target domain. Two critic networks are introduced to guide the captioner: a domain critic and a multi-modal critic. The domain critic assesses whether the generated sentences are indistinguishable from sentences in the target domain. The multi-modal critic assesses whether an image and its generated sentence form a valid pair. During training, the critics and the captioner act as adversaries: the captioner aims to generate indistinguishable sentences, whereas the critics aim to distinguish them. During inference, we further propose a novel critic-based planning method to select high-quality sentences without additional supervision (e.g., tags).
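The two-critic idea can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the critics here are hand-written scoring functions, and all names (`domain_critic`, `multimodal_critic`, `captioner_reward`) are placeholders for what would actually be learned networks.

```python
def domain_critic(sentence, target_vocab):
    # Toy stand-in for a learned critic: fraction of words that
    # look like target-domain language.
    words = sentence.split()
    return sum(w in target_vocab for w in words) / len(words)

def multimodal_critic(image_tags, sentence):
    # Toy stand-in: does the sentence mention the image's content?
    return 1.0 if any(tag in sentence for tag in image_tags) else 0.0

def captioner_reward(image_tags, sentence, target_vocab):
    # The captioner is trained to fool both critics at once, so its
    # reward combines the two critic scores.
    return domain_critic(sentence, target_vocab) * multimodal_critic(image_tags, sentence)

target_vocab = {"a", "dog", "runs", "on", "the", "beach"}
reward = captioner_reward({"dog"}, "a dog runs on the beach", target_vocab)
```

A sentence that reads like the target domain but ignores the image (or vice versa) scores low, which is exactly the pressure the two critics are meant to apply jointly.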

ICCV 2017 in Venice, Italy
Tactics for Adversarial Attack on Deep Reinforcement Learning Agents

We introduce two tactics for attacking agents trained by deep reinforcement learning algorithms using adversarial examples: (1) Strategically-timed attack: the adversary aims to minimize the agent's reward by attacking the agent at only a small subset of time steps in an episode. Limiting the attack activity to this subset helps prevent the agent from detecting the attack. We propose a novel method to determine when an adversarial example should be crafted and applied. (2) Enchanting attack: the adversary aims to lure the agent to a designated target state. This is achieved by combining a generative model and a planning algorithm: the generative model predicts future states, while the planning algorithm generates a preferred sequence of actions for luring the agent. A sequence of adversarial examples is then crafted to lure the agent into taking the preferred sequence of actions.
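The timing criterion behind the strategically-timed attack can be sketched as follows. This is an illustrative simplification, not the authors' code: it attacks only at steps where the policy strongly prefers one action, measured here by an assumed gap between the top two action probabilities with a made-up threshold.

```python
def should_attack(action_probs, threshold=0.5):
    # Attack only when the agent strongly prefers one action; those
    # steps matter most for the episode return, and skipping the rest
    # keeps the attack rare and harder to detect.
    sorted_p = sorted(action_probs, reverse=True)
    return (sorted_p[0] - sorted_p[1]) > threshold

steps = [
    [0.9, 0.05, 0.05],  # confident step  -> worth attacking
    [0.4, 0.35, 0.25],  # uncertain step  -> skip, stay stealthy
]
attack_mask = [should_attack(p) for p in steps]
```

The actual crafting of the adversarial perturbation at the selected steps is a separate component; only the when-to-attack decision is sketched here.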

IJCAI 2017, ICLR 2017 workshop in Melbourne, Australia
Leveraging Video Descriptions to Learn Video Question Answering

We propose a scalable approach to learn video-based question answering (QA): to answer a free-form natural language question about the contents of a video. Our approach automatically harvests a large number of videos and descriptions freely available online. Then, a large number of candidate QA pairs are automatically generated from descriptions rather than manually annotated.
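The automatic harvesting step can be illustrated with a toy template. The template and function below are invented for illustration only; the paper's generation pipeline is more involved.

```python
def description_to_qa(description):
    # Turn an "X verb-ing object" style description into a
    # what-question whose answer is the object phrase.
    subject, verb, *obj = description.split()
    question = f"What is {subject} {verb}?"
    answer = " ".join(obj)
    return question, answer

q, a = description_to_qa("someone riding a horse")
```

Because the QA pairs come from freely available descriptions rather than annotators, the approach scales to a large number of videos at essentially no labeling cost.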

AAAI 2017 in San Francisco, California USA