This paper addresses the problem of human-robot collaboration in scenarios where a robot assists a human by executing a complex motion involving the manipulation of an object. We focus on tasks whose success depends on reaching a target pose that is controlled by the human. We contribute a reinforcement learning-based approach that allows the robot to reason about its own ability to successfully complete the task given the current target pose and to indirectly adjust that pose by prompting the human user. Our approach allows the robot to trade off the benefit of adjusting the target pose against the cost of bothering the human user while, at the same time, adapting to each user's responses. The approach was tested in a real-world human-robot collaboration scenario involving the Baxter robot.