I think we can divide the space of possible AI minds into two reasonably distinct categories. One category comprises the “passive AI minds” that seemed to be the main focus of the Chalmers-Dennett exchange. These are driven by large data sets and optimize their performance relative to some externally imposed choice of “objective function” that specifies what we want them to do—win at Go, or improve paperclip manufacture. And Dennett and Chalmers are right—we do indeed need to be very careful about what we ask them to do, and about how much power they have to implement their own solutions to these pre-set puzzles.
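To make the idea of an externally imposed objective function concrete, here is a minimal, purely illustrative sketch in Python: a system that does nothing but climb a scoring function its designer handed it. The objective, the hill-climbing loop, and every name in it are invented for this example, not drawn from any real system.

```python
import random

# A "passive" AI in this sense: the designer supplies the objective
# function; the system merely searches for parameters that maximize it.
# Both designer_objective and hill_climb are hypothetical illustrations.

def designer_objective(params):
    """Externally imposed goal, e.g. a proxy for 'paperclips made'."""
    x, y = params
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2  # peak at (3, -1)

def hill_climb(objective, start, steps=10_000, noise=0.1):
    best, best_score = start, objective(start)
    for _ in range(steps):
        candidate = tuple(p + random.gauss(0, noise) for p in best)
        score = objective(candidate)
        if score > best_score:  # pursue the pre-set goal, nothing else
            best, best_score = candidate, score
    return best, best_score

params, score = hill_climb(designer_objective, start=(0.0, 0.0))
print(params, score)  # converges near (3, -1): the goal the designer chose
```

The point of the sketch is that the system’s “wants” live entirely inside designer_objective; change that one function and the machine’s whole agenda changes with it.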
The other category comprises active AIs with broad brush-stroke imperatives. These include Karl Friston’s Active Inference machines. AIs like these spawn their own goals and sub-goals through environmental immersion and selective action. Such artificial agents will pursue epistemic agendas and have an Umwelt of their own. These are the only kind of AIs that may, I believe, end up being conscious of themselves and their worlds—at least in any way remotely recognizable as such to us humans. They are the AIs who could be our friends, or who could (if that blunt general imperative were played out within certain kinds of environment) become genuine enemies. It is these radically embodied AIs I would worry about most. At the same time (and for the same reasons) I’d greatly like to see powerful AIs from that second category emerge. For they would be real explorations within the vast space of possible minds.
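For contrast, here is an equally hypothetical toy in the spirit of active inference (it is not Friston’s actual free-energy formalism): an agent that scores each action by pragmatic value plus an epistemic term rewarding the expected reduction of its own uncertainty, so exploratory sub-goals emerge from its beliefs rather than from a designer’s objective. All class names, weights, and numbers are invented for illustration.

```python
import random

# Toy agent loosely inspired by active inference: each action is scored by
# pragmatic value (how likely a preferred outcome is under current beliefs)
# PLUS epistemic value (how uncertain the agent still is about that action).
# The epistemic term is what generates exploratory sub-goals no designer
# wrote down. Illustrative sketch only, not Friston's formalism.

class BetaBelief:
    """Belief about P(success) for one action, as a Beta(a, b) distribution."""
    def __init__(self):
        self.a, self.b = 1.0, 1.0            # uniform prior

    def mean(self):
        return self.a / (self.a + self.b)

    def variance(self):                      # proxy for remaining uncertainty
        n = self.a + self.b
        return (self.a * self.b) / (n * n * (n + 1))

    def update(self, success):
        if success:
            self.a += 1
        else:
            self.b += 1

def action_value(belief, epistemic_weight=2.0):
    pragmatic = belief.mean()                # drive toward preferred outcomes
    epistemic = belief.variance()            # drive toward informative outcomes
    return pragmatic + epistemic_weight * epistemic

true_probs = [0.3, 0.6]                      # hidden from the agent
beliefs = [BetaBelief(), BetaBelief()]

for step in range(200):
    choice = max(range(2), key=lambda i: action_value(beliefs[i]))
    outcome = random.random() < true_probs[choice]
    beliefs[choice].update(outcome)

print([round(b.mean(), 2) for b in beliefs])  # beliefs approach [0.3, 0.6]
```

With epistemic_weight set to zero the agent collapses back into the first, passive pattern; the epistemic term is what gives it an agenda of its own.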
from Deric's MindBlog http://bit.ly/2ZJ2Hu5