
Facebook is using artificial intelligence to combat terrorism

by Dimple Shah
Posted: Jun 16, 2017


Facebook said it has started using artificial intelligence (AI) to help combat terrorists' use of its platform.

The American company's announcement comes as it faces growing pressure from government leaders to identify and prevent the spread of content from terrorist groups on its massive social network.

Facebook officials said in a blog post on 15 June 2017 that the company uses AI to find and remove terrorist content immediately, before users see it. This is a departure from Facebook's usual practice of reviewing suspect content only after users report it.

They also say that when the company receives reports of potential "terrorism posts," it reviews those reports urgently. In addition, in the rare cases where it uncovers evidence of imminent harm, it promptly informs the authorities.

If you are bad at bargaining, don't worry: a Facebook chatbot, or dialogue agent, able to hold a meaningful conversation and negotiate with people may help you cut a better deal.

The bots were introduced by researchers at Facebook Artificial Intelligence Research (FAIR).

"Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it's possible for dialogue agents with differing goals to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes," said Facebook in a blog post.

Facebook trained the bots by showing them negotiation dialogues between real people and then having the bots imitate people's actions, a process called supervised learning.
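As a rough illustration of that supervised (imitation) step, the sketch below trains a tiny next-token model on a tokenized negotiation dialogue. The toy LSTM, vocabulary size, and random "dialogue" are placeholders for illustration only, not FAIR's actual architecture or dataset.

```python
# Minimal sketch of imitation (supervised) learning on negotiation dialogues.
# The tiny LSTM, vocabulary, and random "dialogue" are illustrative placeholders.
import torch

vocab_size = 100
embed = torch.nn.Embedding(vocab_size, 32)
lstm = torch.nn.LSTM(32, 32, batch_first=True)
head = torch.nn.Linear(32, vocab_size)
params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# A (fake) tokenized human dialogue; the model learns to predict each next token,
# i.e. to imitate what the human negotiators said.
dialogue = torch.randint(0, vocab_size, (1, 20))
inputs, targets = dialogue[:, :-1], dialogue[:, 1:]

hidden, _ = lstm(embed(inputs))               # (1, 19, 32)
logits = head(hidden)                         # (1, 19, vocab_size)
loss = torch.nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```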

In the training, the bots were shown a collection of items, each with a particular point value, and were instructed to divide them with another agent by negotiating a split that maximised their own points.
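To make that setup concrete, here is a minimal sketch of how such a split could be scored. The items, counts, and point values are made up for illustration and are not FAIR's actual data.

```python
# Minimal sketch of the negotiation game described above: each agent assigns its
# own point values to a shared pool of items and scores a split by what it keeps.
from typing import Dict

def score_split(my_values: Dict[str, int], my_share: Dict[str, int]) -> int:
    """Return the total points an agent earns from the items it keeps."""
    return sum(my_values[item] * count for item, count in my_share.items())

# Shared pool: 2 books, 1 hat, 3 balls (counts are illustrative).
pool = {"book": 2, "hat": 1, "ball": 3}

# Each agent has its own, differing valuation of the same items.
agent_a_values = {"book": 3, "hat": 2, "ball": 1}
agent_b_values = {"book": 1, "hat": 4, "ball": 2}

# One possible negotiated split of the pool.
a_share = {"book": 2, "hat": 0, "ball": 1}
b_share = {item: pool[item] - a_share[item] for item in pool}

print(score_split(agent_a_values, a_share))  # agent A's points: 3*2 + 1*1 = 7
print(score_split(agent_b_values, b_share))  # agent B's points: 4*1 + 2*2 = 8
```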

To go beyond simply imitating people, the FAIR researchers also allowed the chatbot to pursue the goals of the negotiation itself.

To train it to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the bot when it achieved a good outcome.
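The sketch below shows a generic REINFORCE-style update of the kind that paragraph describes: roll out a self-play episode, then nudge the policy toward the actions it took in proportion to how far the final score beat a baseline. The tiny policy, the random stand-in reward, and the fixed baseline are placeholders, not FAIR's actual model or training loop.

```python
# Rough sketch of reinforcement learning from self-play negotiations
# (REINFORCE-style policy-gradient update on a toy policy).
import torch

# Toy policy: scores over a small action vocabulary (e.g. dialogue tokens).
policy = torch.nn.Linear(8, 8)
optimizer = torch.optim.SGD(policy.parameters(), lr=0.01)

def self_play_episode():
    """Roll out one (fake) negotiation and return the log-probs of the chosen
    actions plus the final score the agent achieved."""
    log_probs = []
    state = torch.zeros(8)
    for _ in range(5):                           # five dialogue turns
        probs = torch.softmax(policy(state), dim=-1)
        action = torch.multinomial(probs, 1)
        log_probs.append(torch.log(probs[action]))
        state = torch.nn.functional.one_hot(action, 8).float().squeeze(0)
    reward = float(torch.randint(0, 10, (1,)))   # stand-in for negotiated points
    return log_probs, reward

baseline = 5.0                                   # a running average in practice
log_probs, reward = self_play_episode()

# Reward the bot when it beats the baseline: push up the probability of the
# actions it took, scaled by the advantage.
loss = -(reward - baseline) * torch.stack(log_probs).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```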

Facebook claims the bots became good enough negotiators that the humans they bargained with did not realise they were dealing with a machine.

"Interestingly, in the experiments, people did not realise they were talking to a bot and not another person, showing that the bots had learned to hold fluent conversations in English in this domain," Facebook said.

The bots even learned to bluff by initially feigning interest in a valueless item, only to later "compromise" by conceding it, it said.


About the Author

Hi, my name is Dimple Shah and this is the News Article Blog.
