Facebook has recently come to the forefront of Artificial Intelligence research with a series of projects that have drawn praise and raised suspicion at the same time. On the one hand, its effort to use artificial intelligence to stamp out extremist messages across its platform has been praised, while its project for decoding thoughts has raised questions on both moral and security grounds. Recently, though, Facebook AI Research (FAIR) published a study on teaching its chatbots to negotiate.
The results it came up with are fascinating.
Will artificial intelligence become 'too human'?
It is becoming quite evident that before long we will be conducting more and more business and similar interactions with chatbots and other forms of artificial intelligence. To envision what this might look like, a FAIR team carried out an extensive research project and published its results publicly through Cornell University's arXiv.
The project set out to give chatbots a basic social skill: negotiation. Facebook's 'negotiators' shared a standard pool of trading objects, but each agent was assigned its own set of values, giving a higher or lower value to different objects.
Since the communication was carried out solely through text, the chatbots faced the same problem we do when we negotiate: neither side knows what the other values, or how much.
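That setup, a shared pool of items with private, per-agent valuations, can be sketched in a few lines of code. The item names, counts, and value-normalization below are hypothetical illustrations, not details taken from the study:

```python
import random

# Hypothetical item pool: item name -> count available to split.
ITEMS = {"book": 2, "hat": 1, "ball": 3}

def random_values(items, total=10, seed=None):
    """Assign an agent a private integer value per item, summing to `total`.
    Values stay hidden from the other side, so text is the only channel
    through which an agent can infer what its opponent wants."""
    rng = random.Random(seed)
    names = list(items)
    # Cut [0, total] at random points to get nonnegative parts summing to total.
    cuts = sorted(rng.randint(0, total) for _ in range(len(names) - 1))
    parts = [b - a for a, b in zip([0] + cuts, cuts + [total])]
    return dict(zip(names, parts))

def score(values, allocation):
    """Utility an agent derives from the items it ends up with."""
    return sum(values[item] * count for item, count in allocation.items())

# Two agents value the same pool differently; a deal splits the pool.
alice = random_values(ITEMS, seed=1)
bob = random_values(ITEMS, seed=2)
deal = {"book": 2, "hat": 0, "ball": 1}            # items Alice takes
rest = {k: ITEMS[k] - v for k, v in deal.items()}  # items Bob takes
print("Alice:", score(alice, deal), "Bob:", score(bob, rest))
```

Because each agent only ever sees its own `values` dictionary, any estimate of the opponent's preferences has to be inferred from the dialogue itself, which is exactly the uncertainty the paragraph above describes.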
As the report says, the objective was for the chatbots to "learn from experience and plan." And so they did. Even more than expected! The artificial intelligence began developing distinctly human tactics.
First, they started to play hardball, holding out on a deal until the opposing side gave in. Then they started to bluff, feigning interest in one object while actually trying to acquire something completely different. Finally, even though they had been given a fixed set of sentences to use in these negotiations, they began composing their own.
Who will come out on top eventually, humans or chatbots?
The results of this project once again raise serious questions about what to expect when artificial intelligence reaches its full development. We will have to make up our minds about what kind of artificial intelligence we want to build and which human characteristics it should or shouldn't have. Do we want it to be rational and have common sense, or do we want it to win such negotiations at any cost? There are no easy answers to these questions, but judging by this Facebook study, we will need the answers sooner rather than later.