
Ditch humans or cooperate? Google’s DeepMind tests ultimate AI choice with game theory


RT
February 10, 2017

DeepMind, the London-based artificial intelligence unit of Google’s parent Alphabet Inc., has been running a series of simulations aimed at answering a key AI question once and for all: will the robots play nice, or will they try to kill us all?

DeepMind’s latest research is focused on the dichotomy between cooperation and competition, specifically among reward-optimized agents (human or synthetic), in highly variable environments.



While far from deciding humanity’s fate at this point, the information gathered thus far gives us an indication of the extent to which man and machine may cooperate in the near future, on everything from transportation systems to economics.

The team is trying to expand the comfort zone of existing AI agents in a variety of ways, most recently through two distinct game types that draw heavily on elements from game theory.

In the first game, the two agents must compete to gather as many apples as possible, a straightforward premise centered on scarcity and cooperation. The more plentiful the apples, the more likely the players were to cooperate or, at least, leave each other alone.



However, there is a twist: both players are armed with a ray gun and can stun the other player at any time, immobilizing them for a brief period and allowing the aggressor to gather more resources unimpeded. This is classified as a ‘complex behavior’ within the game, as it requires more computing power, thought, or effort to carry out, as opposed to a singular directive such as collecting apples.
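The mechanics described above can be sketched in a few lines of code. This is a toy illustration of the rules as the article describes them, not DeepMind's implementation; the regrowth rate, stun duration, and scarcity threshold are illustrative assumptions.

```python
import random

# Toy sketch of the apple-gathering game: gather while food is
# plentiful, zap the opponent when it is scarce. All constants are
# illustrative assumptions, not values from DeepMind's experiments.
APPLE_REGROWTH = 0.1  # chance one harvested apple respawns each step
STUN_DURATION = 5     # steps an agent stays frozen after being zapped

class Agent:
    def __init__(self, name):
        self.name = name
        self.apples = 0
        self.stunned_for = 0

    def act(self, apples_left, opponent):
        """Gather while apples are plentiful; zap under scarcity."""
        if self.stunned_for > 0:
            self.stunned_for -= 1
            return "idle"
        if apples_left < 3 and opponent.stunned_for == 0:
            opponent.stunned_for = STUN_DURATION  # aggression pays when food is scarce
            return "zap"
        if apples_left > 0:
            self.apples += 1
            return "gather"
        return "idle"

def run(apples, steps):
    """Play the game for a fixed number of steps; return each
    agent's harvest and the total number of zaps fired."""
    a, b = Agent("A"), Agent("B")
    zaps = 0
    for _ in range(steps):
        for agent, other in ((a, b), (b, a)):
            action = agent.act(apples, other)
            if action == "gather":
                apples -= 1
            elif action == "zap":
                zaps += 1
        if random.random() < APPLE_REGROWTH:
            apples += 1
    return a.apples, b.apples, zaps
```

With a large orchard (`run(apples=100, steps=10)`) neither agent ever zaps; with a scarce one (`run(apples=2, steps=10)`) zapping appears immediately — the scarcity-driven aggression the article describes.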


The DeepMind team found that the greater the level of intelligence applied (that is, the larger the neural network supporting the software agent), the more aggressive the software agents became.



The second game, Wolfpack, involves hunting prey for a reward. The twist here is that other wolves in the surrounding area also receive a reward for a successful hunt: the more wolves within the designated area, the greater the reward each wolf receives.

This game rewarded cooperation (the complex behavior in this instance) far more than the apples game, regardless of how intelligent the participants were.
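The Wolfpack payoff rule described above can be sketched as a single function. This is a toy version; the capture radius and base reward are illustrative assumptions, not values from DeepMind's paper.

```python
# Toy version of the Wolfpack payoff rule: every wolf inside the
# capture radius earns a reward that grows with pack size, so hunting
# together beats hunting alone. Constants are illustrative assumptions.
CAPTURE_RADIUS = 2.0
BASE_REWARD = 10.0

def wolfpack_rewards(wolf_positions, prey_position):
    """Return one reward per wolf: zero outside the capture radius,
    and a payoff proportional to pack size inside it."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    in_range = [dist(w, prey_position) <= CAPTURE_RADIUS
                for w in wolf_positions]
    pack_size = sum(in_range)
    per_wolf = BASE_REWARD * pack_size  # bigger pack, bigger individual payoff
    return [per_wolf if near else 0.0 for near in in_range]
```

Under this rule a lone wolf at the prey earns 10.0, while two wolves hunting together earn 20.0 each — the incentive structure that made cooperation the dominant strategy regardless of agent intelligence.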



The researchers believe there is a propensity toward the more complex behavior in each game, especially as agents become more intelligent: aiming at and zapping an opponent in the apple game, and cooperating for greater rewards in the Wolfpack game.

Joel Leibo, the DeepMind researcher who led the study, emphasized that in the current round of experiments, none of the software agents had a functioning short-term memory, and thus could not make inferences about other subjects’ behavior based on past experience.

“Going forward it would be interesting to equip agents with the ability to reason about other agents’ beliefs and goals,” he said.







