From Wochit Tech:
A team of roboticists at the University of California, Berkeley has developed a simulation indicating that giving robots a measure of self-doubt could help them integrate into society. The simulation depicts an interaction between a human and a robot that has a built-in off switch and an adjustable level of self-confidence. The robot is asked to perform a task, and the human is then given the option of letting it continue or hitting the off switch. The robot, however, can override its own off switch, and with it the wishes of the human.

As you might expect, robots with a lot of self-confidence turned themselves right back on. Robots given only a little confidence, by contrast, stayed off, even when they judged that they were doing a good job.

The analysis suggests that agents that are uncertain about their own utility function have an incentive to accept, or even seek out, human oversight. Systems with this kind of uncertainty are therefore a promising direction for research on the design of safe AI systems.
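The interaction described above can be sketched as a toy simulation. Everything below is an illustrative assumption rather than the Berkeley team's actual model: the Gaussian task utilities, the robot's noisy self-assessment, and the weighted rule that blends the robot's own estimate with the evidence carried by the human pressing the switch.

```python
import random

def run_game(confidence, trials=10_000, seed=0):
    """Toy off-switch game (illustrative sketch, not the original model).

    Returns the fraction of off-switch presses the robot respects at a
    given self-confidence level in (0, 1).
    """
    rng = random.Random(seed)
    pressed = respected = 0
    for _ in range(trials):
        true_u = rng.gauss(0.0, 1.0)             # how good the task really is
        estimate = true_u + rng.gauss(0.0, 1.0)  # robot's noisy self-assessment
        if true_u >= 0:
            continue                             # human lets the robot keep working
        pressed += 1
        # Blend the robot's own estimate with the evidence carried by the
        # human's action: pressing the switch suggests the task is harmful.
        judgment = confidence * estimate + (1.0 - confidence) * (-1.0)
        if judgment <= 0:
            respected += 1                       # robot accepts being switched off
    return respected / pressed

# Under this rule, a robot that trusts itself more overrides more presses.
print(run_game(confidence=0.95))
print(run_game(confidence=0.20))
```

Because the override threshold on the robot's estimate rises as confidence falls, a low-confidence robot respects at least as many off-switch presses as a high-confidence one, mirroring the behavior the article describes.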