A possible definition of robot safety.
The proposed concept of safe robot behaviour can be described as a certain kind of passivity.
First, a safe robot pursues only subgoals that cause predictable and explicitly permitted changes in the environment. Everything else is implicitly forbidden.
Additionally, a safe robot acts only towards those goals or changes in the environment that it has been ordered to achieve, or that are necessary subgoals of a given order.
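The two rules above can be sketched as a simple permission check. This is a minimal, hypothetical illustration; the names (`PERMITTED_CHANGES`, `Robot`, `may_attempt`) and the flat modelling of orders as a set are assumptions made for the sketch, not part of any existing framework.

```python
from dataclasses import dataclass, field

# Explicit whitelist of permitted changes: everything not listed here
# is implicitly forbidden (illustrative entries).
PERMITTED_CHANGES = {"move_own_arm", "pick_up_assigned_part"}

@dataclass
class Robot:
    # Changes the robot has explicitly been ordered to achieve.
    orders: set = field(default_factory=set)

    def may_attempt(self, change: str, serves_order: str) -> bool:
        # Rule 1: only explicitly permitted changes may be caused.
        if change not in PERMITTED_CHANGES:
            return False
        # Rule 2: the change must serve a goal the robot was actually
        # ordered to achieve (modelled here as simple membership).
        return serves_order in self.orders
```

A permitted change that serves no given order is rejected just as firmly as a forbidden one; both conditions must hold.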
A safe robot prevents only its own mistakes from happening; it does not try to prevent others from making mistakes. Consequently, the “first law” does not give a robot permission to take control over people, or even over random tools, in order to “save” someone (as happens in the stories of Isaac Asimov).
The permissions are specified at different levels of generality, and some of them may be very abstract; but each permission must be specified explicitly.
A safe robot must know which activities it is authorised to perform and, in some contexts, also who is allowed to grant authorisations. (When such a robot nevertheless does something wrong, some combination of the following has occurred: 1) the robot was given unnecessary permissions; 2) it had insufficient training for the task and its environment; or 3) it was given wrong or bad orders. All of these issues can be regarded as the responsibility of the robot’s maintainer or owner.)
In this context, passivity does not mean that the robot is necessarily purely reactive. It means that the robot distinguishes clearly between the orders it was given and the subgoals it has set for itself. As a consequence, the robot will not try to make things “better” unless ordered to do so, and it will refuse many actions even when they are possible subgoals of a given task. Most importantly, refusing actions is the “first law”, and following orders is only the “second law”.
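The priority ordering in this paragraph can be sketched as a decision function in which the refusal check (the “first law”) runs before order-following (the “second law”), and inaction is the default. All names here are illustrative assumptions, not an established API.

```python
def decide(action: str, permitted: set, ordered_goals: set,
           subgoals_of: dict) -> str:
    # "First law": refuse any action outside the explicit permissions,
    # even if it is a possible subgoal of a valid order.
    if action not in permitted:
        return "refuse"
    # "Second law": perform the action only if it serves a given order,
    # either directly or as a known subgoal of one.
    if action in ordered_goals or any(
        action in subgoals_of.get(goal, ()) for goal in ordered_goals
    ):
        return "execute"
    # Passivity: no self-initiated "improvements" beyond the given orders.
    return "stay idle"
```

Note that swapping the first two checks would let a sufficiently useful order override the permission system, which is exactly what the first-law/second-law ordering forbids.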
An important aspect of this definition of safety is that, to be applicable and sufficient, it requires neither complex cognitive abilities of the robot (not even proactivity) nor extensive training, and that it clearly places both the responsibility for mistakes and the control over them with the robot’s maintainer or owner, which is the goal of the safety system.
Addendum. A robot that is both safe and proactive could perhaps be called “friendly”. Even so, this does not mean that there is no longer anyone who can, and has to, take responsibility.
Addendum 2. An interesting consequence of this definition is that the most dangerous robots will potentially be rescue robots, because they are given both commands to take control over people (in some sense) and wide permissions, both of which are necessary in order to save people.
Keywords: safe robot vs safety of robot’s actions vs safety in general, goal, order, task, permission, training, program, owner, maintainer, responsibility, irreversible action, environment, friendly robot, first vs second law, generality of permissions, implicit vs explicit permissions, proactivity, subgoals, cognitive abilities.