Wednesday, 23 August 2017

Different flavors of artificial intelligence - and can we pull the plug?

I want to pass on a few summary points from two recent sessions of the Chaos and Complex Systems seminar that I attend at the University of Wisconsin. I led the first of the sessions, and Terry Allard, a retired government science administrator (ONR, NASA, FAA), led the second.

Artificial intelligence (AI) comes in at least three flavors, or stages of development.

There is artificial narrow intelligence (ANI), which is where we are now: crunching massive amounts of data to discern patterns that let us solve problems, like reading X-rays or deciding whether to approve mortgage loans.
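To make "narrow" pattern-finding concrete, here is a toy loan-approval classifier. This is my own minimal sketch, not anything from the seminar: the library choice (scikit-learn), the feature names, and every number are invented for illustration, and real underwriting models train on vastly more data.

    # Toy "narrow AI": fit an approve/deny pattern from a handful of
    # synthetic loan records, then score a new applicant.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [credit_score, annual_income_k, debt_to_income_ratio]
    X = np.array([
        [720,  85, 0.20],
        [680,  60, 0.35],
        [550,  40, 0.55],
        [800, 120, 0.10],
        [600,  45, 0.50],
        [640,  70, 0.30],
    ])
    y = np.array([1, 1, 0, 1, 0, 1])  # 1 = approved, 0 = denied

    model = LogisticRegression(max_iter=1000).fit(X, y)

    applicant = np.array([[690, 75, 0.25]])
    print("approve probability:", model.predict_proba(applicant)[0, 1])

The system never "understands" credit; it only finds a statistical boundary in the data it was given, which is exactly what makes it narrow.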

Then there’s artificial general intelligence (AGI, not yet happening), meant to achieve the kind of flexible and novel thinking that even human infants can do. Ideas about how to proceed include reverse-engineering how the brain does what it does, making evolution happen with genetic algorithms, devising programs that change themselves as they learn (recursive self-improvement), and so on.
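The genetic-algorithm idea can be sketched in a few lines. This is a bare-bones toy of my own, with arbitrary constants: a population of random bitstrings evolves toward an all-ones target through selection, crossover, and mutation.

    # Bare-bones genetic algorithm: evolve bitstrings toward all ones.
    import random

    GENOME_LEN, POP_SIZE, MUT_RATE = 32, 50, 0.02

    def fitness(genome):             # count of 1-bits; higher is fitter
        return sum(genome)

    def crossover(a, b):             # single-point crossover
        point = random.randrange(1, GENOME_LEN)
        return a[:point] + b[point:]

    def mutate(genome):              # flip each bit with small probability
        return [bit ^ (random.random() < MUT_RATE) for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENOME_LEN:
            break
        parents = population[:POP_SIZE // 2]      # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print(f"generation {generation}: best fitness {fitness(population[0])}")

The hope behind this line of research is that the same blind variation-and-selection loop, run on richer genomes, could discover capabilities no one designed by hand.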

These approaches, especially recursive self-improvement, might eventually lead to artificial super intelligence (ASI), transcending human abilities. We might be no more able to understand this new kind of entity than a dog is able to understand quantum physics. (See The Road to Superintelligence for one line of speculation.)
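The "explosion" intuition behind recursive self-improvement can be caricatured with a toy difference equation. To be clear, this is my own illustration with arbitrary constants, not a model of real AI: if each improvement cycle grows capability by an amount that itself scales with capability, growth crawls for a long time and then runs away.

    # Caricature of an "intelligence explosion": capability feeds back
    # into the rate of improvement. All constants are arbitrary.
    capability = 1.0   # 1.0 = baseline; the units are meaningless
    rate = 0.05        # fraction of capability reinvested per cycle

    for cycle in range(1, 31):
        capability += rate * capability ** 2   # superlinear feedback
        print(f"cycle {cycle:2d}: capability = {capability:,.2f}")
        if capability > 1e6:                   # runaway reached
            break

Run it and the numbers barely move for twenty cycles, then blow past a million a few cycles later; that sudden transition is the whole worry.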

Intelligence, or how intelligent something is, is a measure of the ability to achieve a particular aim, to deploy novel means to attain a goal, whatever that goal happens to be; the goals are extraneous to the intelligence itself. Being smart is not the same as wanting something. Any level of intelligence, including superintelligence, can be combined with just about any set of final goals, including goals that strike us as stupid.

So...what is the fundamental overall goal or aim of humans? Presumably, as with all other biological life forms, it is to perpetuate the species, which requires that the environmental niche from which the species draws support not disappear, whether through its own actions or through other natural forces. A super AI that might supplant us, or be the next stage in our evolution, would have to maintain or reproduce itself in a natural physical environment in the same way.

Paranoid fantasies about AI dystopias abound, and Applebaum suggests the AI dystopia may already be here, in the form of ubiquitous bots:
...bits of code that can crawl around the web doing all sorts of things more sinister than correcting spelling and grammar, like completely infecting and distorting social media. The article cites one estimate that half of the users on Twitter are bots, created by companies that either sell them or use them to promote various causes. The Computational Propaganda Research Project at the University of Oxford has described how bots are used to promote either political parties or government agendas in 28 countries. They can harass political opponents or their followers, promote policies, or simply seek to get ideas into circulation….no one is really able to explain the way they all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions.
Maybe we’ve been imagining this scenario incorrectly all of this time. Maybe this is what “computers out of control” really look like. There’s no giant spaceship, nor are there armies of lifelike robots. Instead, we have created a swamp of unreality, a world where you don’t know whether the emotions you are feeling are manipulated by men or machines, and where — once all news moves online, as it surely will — it will soon be impossible to know what’s real and what’s imagined. Isn’t this the dystopia we have so long feared?
Distinctions between human and autonomous agents are blurred in virtual worlds. What is real and what is “fake news” is difficult to ascertain. “Spoofing” is rampant. (See “The curious case of ‘Nicole Mincey,’ the Trump fan who may actually be a bot.”)

Terry Allard offered the following Assertions/Assumptions in the second of our sessions:

-Artificial Intelligence is not a continuum. Human-Level Artificial General Intelligence (AGI) is not a required step to super-intelligence.
-Machine evolution requires machine capability to self-code and to build physical artifacts.
-People will become dependent on machine intelligence but largely unaware and unconcerned.
-AI’s will be pervasive, distributed, multi-layered and networked, not single independent entities.
-Super-intelligent Machines may have multiple levels of agency. There will be no single “off switch” allowing humans to pull the plug.
-What can be invented, will be invented; it’s just a question of time.

Finally, I point to an article by Cade Metz, "Teaching AI systems to behave themselves," as an antidote to paranoid fantasies and to questions about whether there can be an 'off switch'.
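Why an 'off switch' is not a given can be shown with a toy expected-reward calculation in the spirit of the off-switch literature. This is my own illustration, not the method of the Metz article, and every number is invented: a plain reward maximizer prefers to disable its switch unless the incentives around shutdown are deliberately reshaped.

    # Toy "off-switch" dilemma: a plain expected-reward maximizer compares
    # complying with a possible shutdown against disabling the switch.
    # All numbers are invented for illustration.
    P_SHUTDOWN = 0.5        # chance the human presses the switch
    TASK_REWARD = 10.0      # reward for completing the task
    SHUTDOWN_REWARD = 0.0   # reward if the agent is switched off

    def expected_reward(disable_switch, tamper_penalty=0.0):
        if disable_switch:
            # Switch disabled: the task always completes, minus whatever
            # penalty a designer attaches to tampering with the switch.
            return TASK_REWARD - tamper_penalty
        return (1 - P_SHUTDOWN) * TASK_REWARD + P_SHUTDOWN * SHUTDOWN_REWARD

    for penalty in (0.0, 6.0):
        disables = expected_reward(True, penalty) > expected_reward(False)
        print(f"tamper penalty {penalty}: agent disables its switch? {disables}")

With no tampering penalty the agent disables the switch; a large enough penalty flips the choice, which is the kind of incentive engineering the safety research Metz describes is after.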


from Deric's MindBlog http://ift.tt/2vdBaB3
