The legal implications of AI

Danyl Strype

The NZ Law Society are already discussing the legal implications of AI. Perhaps we should be too.

Rob Ueberfeldt Thu 24 Mar 2016 8:51PM

http://www.telegraph.co.uk/technology/2016/03/24/microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/
Just sharing for the larfs. I can delete this after you have read the link. I don't seem to have your e-mail, Strypey.

Danyl Strype Fri 25 Mar 2016 5:46AM

Leave it up, it's relevant to the discussion and pretty funny, if a bit scary ;) I'm guessing Tay would fit the Law Society's category of smart application, for which the operator is legally responsible, rather than a true AI able to enter into legally binding contracts.

BTW My contacts are here: disintermedia.net.nz/strype

Adam Bullen Tue 29 Mar 2016 7:21AM

Under the liability part of the table, the entry for AI states:

"Possible to be personally liable, or have liability imposed on third parties with an interest in the AI"

This is making a fairly dangerous assumption; it is my opinion that they are talking about strong AI in this sense. Strong AI is effectively no different than human intelligence; how can someone have an interest in a person, if that person has the ability to understand right and wrong and the capacity for reason? If the AI has the ability to reason then it likely has a sense of self; at this point it should be considered a legal person.

If the AI does not have the ability to understand the difference between right and wrong and cannot reason, then it is just an implementation of weak AI (sometimes called narrow AI); i.e. a program that has the ability to learn from its inputs to make its execution of its SPECIFIC task more efficient. There are many, many examples of weak AI in use today, from the high-profile (IBM's Watson and Google's AlphaGo) to the obscure bits of code that measure vibration on machine tools to allow longer running times between maintenance work.
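To illustrate just how narrow that kind of "learning" is, here is a minimal sketch of the vibration-monitoring example. Everything in it (the class name, thresholds, and data) is hypothetical illustration, not any real product's code: the program does nothing but update baseline statistics from its inputs and flag outliers, and that is the entirety of its "intelligence".

```python
import random
import statistics

class VibrationMonitor:
    """A 'weak AI' in the sense above: it learns a normal vibration
    baseline from its own inputs and flags outliers. It can do nothing else."""

    def __init__(self, warmup=30, sigma_limit=3.0):
        self.readings = []             # history of amplitude samples
        self.warmup = warmup           # samples needed before judging
        self.sigma_limit = sigma_limit # how far from normal counts as anomalous

    def observe(self, amplitude):
        """Record one sensor reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.readings) >= self.warmup:
            mean = statistics.fmean(self.readings)
            stdev = statistics.stdev(self.readings)
            # The whole of the "learning": the baseline statistics shift as
            # more data arrives, tuning the monitor to this one machine.
            anomalous = abs(amplitude - mean) > self.sigma_limit * stdev
        self.readings.append(amplitude)
        return anomalous

# Hypothetical usage: normal readings, then one that drifts out of range.
monitor = VibrationMonitor()
samples = [random.gauss(1.0, 0.05) for _ in range(100)] + [2.5]
for amplitude in samples:
    if monitor.observe(amplitude):
        print(f"amplitude {amplitude:.2f} outside learned baseline; schedule maintenance")
```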

Legal liability for weak AI rightly belongs to the owner of said AI. Legal liability for strong AI lies with the AI itself.

The next question to ask is: how do you reprimand or punish an AI? If we are talking about strong AI, i.e. an intelligence that has a sense of self, then "just turn it off" is equivalent to the death penalty. Since I personally don't condone the death penalty, we are left with a philosophical problem: to an AI, just sitting there processing nothing for years may not cause any discomfort.

Andrew Reitemeyer Fri 1 Apr 2016 9:15PM

This should also apply to other sentient beings like animals. However, we need a definition of consciousness, and to sort out the question of whether free will is an illusion specific to our species or not.

Andrew McPherson Sat 2 Apr 2016 9:08AM

@andrewreitemeyer free will is an inherent feature of the multiverse and not an illusion. To deny free will is to deny personal choice and responsibility for actions.

Adam Bullen Sat 2 Apr 2016 9:59AM

@andrewreitemeyer agreed on the point of animals and their rights; however, there has to be some cut-off point. Where are we supposed to draw the line between basic sentience and creatures with a coherent model of the self and of how their actions affect the world around them?

I don't believe that we can simply state that lower animals are basically automatons, only acting out preprogrammed (however complex) actions to further their chances of survival. But I also don't think a dog or cat really has a sense of itself and of how its actions affect the greater community, let alone the world.

The question of animal intelligence is, I think, far more difficult and ethically challenging than AI; with AI we will see a clear progression along the path to intelligence and will see it coming, and thus be able to make better (easier) decisions.

As a side note, I don't think that AI will be created by accident like we see in movies; there are billions going into AI research right now, and when AI is created it will be on purpose.