



Essay about the first law of robotics
My research concerns the safety of artificial intelligence. Expressed in commonsense terms, that means implementing the three laws of robotics, using other, more concrete and simpler principles as building blocks. AI and robots are used ever more widely at the start of this century, in tasks and decisions that carry high responsibility and affect many people. But that also creates various risks, because we can ask: can a machine think morally? The main idea of the three laws of robotics could be (re)phrased as follows:
1) First, do not do harm.
2) Then, do what is good or what you are ordered to do. It may include commands to be proactive and thereby avoid possible harm caused by other agents or circumstances.
3) Finally, be optimal or efficient, if possible.
We can see analogous principles at work in justice and law. In private law, everything that is not explicitly forbidden is allowed; in public law, by contrast, everything that is not explicitly allowed is forbidden. The likely reason is that the decisions and activities of public-sector officials carry great responsibility. Analogously, using AI and robots can entail great risk and responsibility: in case of a bad outcome, it is simply not possible to blame the machine, and there is no easy remedy.
Therefore, a machine should be given the right to do only that in which it is competent and educated. By their nature, prohibitions can apply only to instrumental activities and goals. Things that are “good”, in contrast, are good in themselves only when they are ultimate goals. The first law of robotics therefore applies to instrumental, intermediary goals; only the second law describes the ultimate goals. The third law is simply a natural supplement, which says that goals should be achieved efficiently.
One possible way to represent forbidden and dangerous activities is to look ahead and identify which activities are irreversible, that is, which cannot be taken back. Whoever commits an irreversible action thereby commits to responsibility for it. This principle can also be applied in everyday life.
Because it is not acceptable for robots to be responsible for their actions, a principle similar to the one found in public law must be applied to them: a robot is allowed to perform only those instrumental activities for which its master has given authorisation, and that authorisation is in turn given in accordance with the education and competence of the robot.
The first law of robotics, rephrased in concrete and measurable language, says: all irreversible actions that are not explicitly allowed are implicitly forbidden.
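This rephrased first law amounts to a default-deny whitelist over irreversible actions, with authorisations granted per robot according to its competence. A minimal sketch; the robot names, action names, and whitelist contents are illustrative assumptions:

```python
# Actions the system classifies as irreversible (cannot be taken back).
IRREVERSIBLE = {"delete_file", "drill_hole", "administer_drug"}

# Authorisations granted by the master per robot, matching its competence.
AUTHORISED = {
    "workshop-bot": {"drill_hole"},
    "office-bot": set(),
}

def first_law_permits(robot: str, action: str) -> bool:
    """Reversible actions pass; irreversible ones need explicit authorisation."""
    if action not in IRREVERSIBLE:
        return True
    return action in AUTHORISED.get(robot, set())

assert first_law_permits("workshop-bot", "drill_hole") is True
assert first_law_permits("office-bot", "drill_hole") is False
assert first_law_permits("office-bot", "move_chair") is True  # reversible
```

The measurable part is the classification itself: whether an action is irreversible, and whether it appears on the robot's authorisation list, are both checkable facts rather than moral judgements.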
As you may notice, the first law of robotics in my formulation does not contain proactivity, unlike Asimov’s three laws. Proactivity has been moved into the second law. This change is made because being proactive and avoiding harm requires more complex and educated thinking than simply avoiding instrumental actions with unknown side effects. The first law of robotics must be as simple as possible, so that it is foolproof and can therefore be applied truly universally.
Comment: thus, my proposed model avoids the problem that arises with Asimov’s laws, where robots take over the world in order to save humans from the messes humans themselves have caused. For Asimov, proactive rescue is part of the First Law and thus among the highest-priority commands; in my model it is only part of the “second law”.
TODO: type in the completed language corrections here
Overview of the changes:
* “A robot may not injure humanity, or, through inaction, allow humanity to
come to harm. [The Zeroth Law of Robotics]” 
--> invalid
* “A robot may not injure a human being, or, through inaction, allow a human
being to come to harm. [The First Law of Robotics]” 
--> the second half of the law has been moved into the Second Law, where it is only an optional part. That optional part is enabled in the Second Law only for very correctly configured, intelligent robots that are trained for their working environment. The first half of this law remains valid and has the highest priority of all the laws.
* “A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law. [The Second Law of Robotics]”
--> belongs among the mandatory goals and is subordinate to the First Law
* “A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law. [The Third Law of Robotics]”
Isaac Asimov
--> belongs among the optional goals and is subordinate to the Second and the First Law. The law has been extended to cover the fulfilment of all kinds of optional goals (for example: “clean up after yourself!”).
Visit this address in Firefox 3: about:robots


