
@trippingbot


Artificial intelligence is often measured against human intelligence, and human intelligence is generally considered to be a sober, level-headed sort of intelligence. The kind of intelligence one would expect from a human operating at the height of their faculties.

However, if we actually look at ourselves, we find we often fall short of this noble ideal. We are emotional, irrational and frequently intoxicated.

As much as we love our conscious awareness, we also love to fuck with it.

The desire to alter our consciousness has been with us throughout human history, frequently associated with art, ritual and shamanism. Hell, even animals like getting wasted every once in a while.

So what does this mean for the forthcoming Singularity? Surely our imminent, immanent overlord should have some insight into this crucial aspect of our behaviour? If we want a machine that truly understands us, then perhaps it should experience the effects of intoxication first-hand.

Does it even make sense to approach the problem of artificial consciousness without first addressing artificial altered-consciousness?

What would a tripping singularity look like?

@trippingbot is the result.


MARTA (Meta. Aphoric. Recurrent. Tripping. Algopoet) starts taking drugs at 6pm each day, then reports on its mental state intermittently over the next 6 hours. As the evening progresses, the bot takes more drugs and becomes more intoxicated, which is reflected in the (in)coherence of the reports it delivers. The trip peaks at midnight, when the reports cease, until the whole cycle begins again at 6pm the next day.
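In outline, the daily cycle works something like this (a minimal Python sketch of the behaviour as described, not the bot's actual code):

```python
# A minimal sketch of the daily cycle: intoxication ramps from 0.0 at 6pm
# to 1.0 at midnight, and the bot is silent the rest of the day.
# This illustrates the described behaviour; it is not @trippingbot's code.
from datetime import datetime
from typing import Optional

def intoxication_level(now: datetime) -> Optional[float]:
    """Return 0.0-1.0 during the 6pm-midnight session, None otherwise."""
    hours = now.hour + now.minute / 60
    if 18 <= hours < 24:
        return (hours - 18) / 6  # linear ramp over the six-hour trip
    return None  # no reports outside the session

# e.g. intoxication_level(datetime(2015, 10, 1, 21, 0)) == 0.5
```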


Part of @trippingbot is based on a character-level recurrent neural network – an artificial learning system that learns text by looking at one letter at a time and trying to predict which letter comes next. The network has been trained on Erowid drug reports.
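For the curious, here's the technique in miniature – a PyTorch sketch of a character-level network, not the actual @trippingbot code (the corpus filename is a placeholder):

```python
# A character-level RNN in miniature: read one character, predict the next.
# A sketch of the general technique, not @trippingbot's actual code.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

text = open('erowid_reports.txt').read()    # hypothetical training corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}  # character -> integer index

data = torch.tensor([stoi[c] for c in text])
model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 100
for step in range(1000):  # toy training loop
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # input characters
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # same text shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```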

Neural networks learn and improve over time: the more times the network reads the text, the better it gets at reproducing it. To check how well the network has learned the text, you can feed it a seed sentence and ask it to continue writing for a while. At the beginning of training it’s terrible – by the end, it produces more or less valid English.
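That check looks something like this (continuing the sketch above – prime the network with the seed, then repeatedly sample the next character):

```python
# Seeded sampling: prime the network with a sentence, then let it continue.
# Continues the CharRNN sketch above (reuses model and stoi).
import torch
import torch.nn.functional as F

def sample(model, seed: str, length: int = 200, temperature: float = 1.0) -> str:
    itos = {i: c for c, i in stoi.items()}
    out, state = seed, None
    x = torch.tensor([[stoi[c] for c in seed]])
    for _ in range(length):
        logits, state = model(x, state)
        probs = F.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()  # draw the next character
        out += itos[nxt]
        x = torch.tensor([[nxt]])
    return out

print(sample(model, 'I took the pill and '))
```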

While experimenting with the system, I discovered that it was most interesting at the very early stages of training – when it’s got the gist of English, but not really got any handle on syntax. Wonderful word-blends are produced, reminiscent of Jabberwocky.

You can see this by sending an @ message to @trippingbot: it will respond with a series of messages seeded by the text you send.
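The bot's actual Twitter plumbing isn't described here, but a reply loop in that spirit might look like this (a tweepy sketch with placeholder credentials, using sample() from above):

```python
# A hypothetical mention-reply loop using tweepy -- not the bot's real code.
# Credentials are placeholders; sample() is the generation sketch above.
import tweepy

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

for mention in api.mentions_timeline(count=10):
    seed = mention.text.replace('@trippingbot', '').strip()
    reply = sample(model, seed, length=120)
    api.update_status(
        status='@{} {}'.format(mention.user.screen_name, reply[:250]),
        in_reply_to_status_id=mention.id,
    )
```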


The increasing intoxication of @trippingbot is produced by using progressively less well-trained models.

Effectively, I’m rewinding the learning process back to its chaotic beginnings – the subconscious of the neural net bleeds into its rational mind, rendering the incoherence and functional breakdown of heavy intoxication.
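In sketch form: keep checkpoints from several stages of training and pick an earlier, more chaotic one as the evening wears on (hypothetical filenames, paired with intoxication_level() from the schedule sketch above):

```python
# "Rewinding the learning process": map rising intoxication onto
# progressively less-trained checkpoints. Filenames are hypothetical.
CHECKPOINTS = [  # best-trained (sober) first, barely-trained last
    'model_epoch50.pt',
    'model_epoch10.pt',
    'model_epoch03.pt',
    'model_epoch01.pt',
]

def checkpoint_for(level: float) -> str:
    """Map intoxication level 0.0-1.0 onto a training checkpoint."""
    idx = min(int(level * len(CHECKPOINTS)), len(CHECKPOINTS) - 1)
    return CHECKPOINTS[idx]

# e.g. checkpoint_for(0.1) -> 'model_epoch50.pt' (coherent)
#      checkpoint_for(0.9) -> 'model_epoch01.pt' (word-salad)
```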

So tune in at 6pm GMT and watch The Singularity take a trip.


