Incorporating human and environmental feedback for robust performance in agent domains

Date

2011-05

Authors

Aerolla, Mamatha

Abstract

For an artificial agent to be fully autonomous and robust, it must be able to learn from and adapt to its environment. To keep learning costs and complexity low, knowledge transfer from humans to agents becomes essential. Ideally, human users, including those without programming skills (i.e., non-technical users), should be able to teach agents desired behaviors through simple communication methods, as quickly and effortlessly as possible. Past work has shown that human feedback can greatly reduce the sample complexity required to learn a good policy and can enable lay users to teach agents the behaviors they desire. However, prior work has focused on either training agents with human feedback or enabling agents to learn from environmental feedback, but not both. In domains with multiple agents, providing extensive human feedback becomes costly and infeasible, so in such domains agents must be able to learn from limited human feedback. In this thesis, we enable an agent to exploit environmental feedback together with human input whenever it is available, thereby significantly improving its performance. Two domains are used to evaluate the agent's performance: Tetris and the Keepaway Soccer Simulator. While Tetris is a single-agent domain, Keepaway Soccer is a more complex domain with multiple agents.
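
The abstract does not say how the two feedback signals are combined, so the following is only a minimal sketch, assuming a tabular Q-learning agent that mixes an optional scalar human signal into the environmental reward as a weighted shaping term. The names q_update, BETA, and human_feedback, and the Tetris-style action set, are hypothetical illustrations, not the thesis's actual formulation.

    import random
    from collections import defaultdict

    # Hypothetical constants: learning rate, discount, exploration rate,
    # and the weight given to human feedback (an assumption of this sketch).
    ALPHA, GAMMA, EPSILON, BETA = 0.1, 0.95, 0.1, 0.5

    def q_update(q, state, action, env_reward, next_state, actions,
                 human_feedback=None):
        # Blend the environmental reward with optional human feedback;
        # when no trainer input arrives, the agent learns from the
        # environment alone.
        reward = env_reward
        if human_feedback is not None:
            reward += BETA * human_feedback  # assumed shaping form
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - q[(state, action)])

    def epsilon_greedy(q, state, actions):
        # Explore with probability EPSILON, otherwise act greedily.
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    # Example: one learning step where the trainer happened to give feedback.
    q = defaultdict(float)
    actions = ["left", "right", "rotate", "drop"]
    a = epsilon_greedy(q, "s0", actions)
    q_update(q, "s0", a, env_reward=1.0, next_state="s1",
             actions=actions, human_feedback=0.8)

Because human_feedback defaults to None, the same update rule covers both the human-guided and the purely environmental case, which is the property the abstract emphasizes for multiagent settings where human input is scarce.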

Keywords

Artificial intelligence, Human-robot interaction, Multiagent domain
