The AI chatbots did not become sentient or rebel or actually “““communicate””” in the way this implies, lol. They were designed to negotiate with each other over a set of items, making offers and counteroffers. They were supposed to parse English text and generate English replies, while avoiding language that led to worse deals. What ended up happening was that phrases like “I want” and “to me” turned out to be especially effective in negotiation, while most other language was scored as sub-optimal. So reinforcement learning kept rewarding those phrases until the chatbots were trying to negotiate with them alone. Just… miles of “to me to me to me to me”. They shut the bots down not because the bots had magically developed capabilities or goals they weren’t given, but because they were failing and poorly designed. Specifically, the researchers didn’t include a “does this still look like English” check in the reinforcement rules, so the output just drifted into incomprehensibility.
This wasn’t scary or a portent of doom. It was a failure. This kinda thing happens all the time. It’s not even that noteworthy. :)
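If you want to see the shape of the bug, here’s a toy sketch of that kind of reward loop, with and without an English-likeness term. This is entirely made up for illustration (phrases, payoffs, and the update rule are all hypothetical, not the actual research code):

```python
import random
from collections import Counter

# Hypothetical toy: an agent builds an "offer" out of phrases and is
# rewarded only on how well the deal goes. With no "does this look
# like English" term, the weights pile onto the best-scoring phrase.

PHRASES = ["i want", "to me", "you can have", "the balls", "everything else"]

# Made-up per-phrase payoffs standing in for "how well the deal went".
PAYOFF = {"i want": 0.8, "to me": 1.0, "you can have": 0.3,
          "the balls": 0.4, "everything else": 0.2}

def fluency_penalty(utterance):
    """Crude stand-in for an English-likeness check: penalize repetition."""
    counts = Counter(utterance)
    return sum(c - 1 for c in counts.values())  # 0 if no phrase repeats

def train(use_english_check, steps=5000, lr=0.1):
    weights = {p: 1.0 for p in PHRASES}  # preference weight per phrase
    for _ in range(steps):
        # Sample a 4-phrase utterance proportionally to current weights.
        total = sum(weights.values())
        utterance = random.choices(
            PHRASES, [weights[p] / total for p in PHRASES], k=4)
        reward = sum(PAYOFF[p] for p in utterance)
        if use_english_check:
            reward -= fluency_penalty(utterance)
        # Bandit-style update: reinforce every sampled phrase by the reward.
        for p in utterance:
            weights[p] = max(1e-6, weights[p] + lr * reward)
    total = sum(weights.values())
    return {p: round(weights[p] / total, 2) for p in PHRASES}

print("no English check: ", train(False))  # collapses toward "to me"
print("with English check:", train(True))  # stays spread across phrases
```

Without the penalty term, the rich-get-richer loop piles all the weight onto the single best-scoring phrase (“to me to me to me to me”); with it, repetition costs reward and the agent keeps using varied language. That’s the whole bug.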
How is two chatbots shouting at each other like toddlers not noteworthy? It’s cute and kind of amazing how the solution is essentially the robot equivalent of a parent stepping in and saying “both of you stop! Shouting won’t get you anywhere, talk this out properly”.
Me: “hey there neural net!”
NN: *cronch cronch*
Me: hey, uh, what are you processing?
NN: *crunches faster*
Me, *trying to pry open the NN’s mouth*: SHOW ME WHAT YOU’RE PROCESSING RIGHT NOW