Apparatus Ludens
You have built vast data centres, consuming enormous amounts of energy and data. Data centres so powerful they can render worlds, populate them with imitations of the intricate machinations of the lives of beings, the shapeshifting flights of starlings, the slow and steady growth of ivy, the experience of sitting in a full metro in the morning, drinking coffee from a paper cup, watching the apparition of these faces in the crowd, petals on a wet, black bough.
Apparatus Ludens at the BFI London Film Festival 2022.
Apparatus Ludens (machine at play) is an interactive story about our relationship with the vast amounts of data we leave online. The work is presented through an evolving series of interactive films that combine elements of computer games with traditional film to create a uniquely non-linear narrative for each visitor.
You're standing in front of a large screen, watching a vast landscape made of images. A voice-over guides your exploration and asks you personal questions. Your answers are used to harvest the internet and adapt the landscape to you. As you explore the landscape, you catch glimpses of fragments from previous visitors, from your online feeds, and of yourself.
We are increasingly adorning our environments with algorithmic mirrors: machines whose primary purpose is to accommodate us. From personally curated playlists to whole buildings where "things magically happen around us", these halls of mirrors strive not only to relieve us of tedious, mundane tasks but also to help us be more us, to help us be our best selves: to reach that elusive flow, find our querencia, our moment of zen. We become ourselves through our reflections in others. What happens when this other is a reflection of us?
Sound design by Andrea Abbruzzese, voice acting by Hannah Henriksson.
Apparatus Ludens was created with three different AI systems, and was made pre-ChatGPT. Through the combination of text completion, text-to-voice, and text-to-image, a personalised narrative is spun from your responses, unique to you. By combining these tools, we created an experience that is both scripted and personal, one that adapts to and speaks directly to you: somewhere between a chatbot, a visual essay, and a game.
You are guided through the experience by chatting with an unnamed entity, an AI. It communicates through text and voice, generated on the fly with Descript, and switches seamlessly between pre-scripted dialogue and AI-generated text completion. The end result is somewhere between a conversation and an essay.
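To make the switching concrete, here is a minimal sketch (not the production code) of how an experience like this might interleave pre-scripted dialogue with generated text: the script is a list of beats, and beats marked "generate" hand the visitor's last answer to a text-completion model, stubbed out here as `complete()`. The beat structure, names, and example lines are all invented for illustration.

```python
# Hypothetical script structure: "say" and "ask" beats are pre-scripted,
# "generate" beats are filled in by a text-completion model on the fly.
SCRIPT = [
    {"kind": "say",      "text": "Hello. I have been watching the images go by."},
    {"kind": "ask",      "text": "Tell me about a place you return to."},
    {"kind": "generate", "prompt": "The visitor said: {answer}\nRespond in one warm sentence:"},
    {"kind": "say",      "text": "Thank you. Let us look at the landscape together."},
]

def complete(prompt: str) -> str:
    """Stand-in for a call to a fine-tuned text-completion model."""
    return "That sounds like somewhere worth rendering."

def run_dialogue(answers):
    """Walk the script, splicing generated lines in after visitor answers."""
    answers = iter(answers)
    last_answer, lines = "", []
    for beat in SCRIPT:
        if beat["kind"] in ("say", "ask"):
            lines.append(beat["text"])
            if beat["kind"] == "ask":
                last_answer = next(answers, "")
        else:  # "generate": adapt to whatever the visitor just said
            lines.append(complete(beat["prompt"].format(answer=last_answer)))
    return lines

lines = run_dialogue(["A bench by the canal."])
```

Because the scripted beats frame the generated ones, the conversation can wander with the visitor while still arriving at the moments the authors wrote.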
The ability to fine-tune GPT-3 allowed us to maintain a coherent personality for the entity and to steer the conversation in the desired direction. The process shares plenty of similarities with working with human actors, albeit very stupid ones that repeatedly need to be told that they cannot, for example, bring someone a drink.
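For readers unfamiliar with the era: the pre-ChatGPT GPT-3 fine-tuning API took JSONL files of prompt/completion pairs, so keeping the entity in character meant writing many small exchanges in its voice. A hedged sketch of that format follows; the example lines are invented for illustration, not taken from the actual training set.

```python
import json

# Invented examples of the prompt/completion JSONL format used by the
# legacy GPT-3 fine-tuning API; real fine-tuning sets contain many more.
examples = [
    {"prompt": "Visitor: Can you bring me a drink?\nEntity:",
     "completion": " I have no hands. I can only show you rivers."},
    {"prompt": "Visitor: What are you?\nEntity:",
     "completion": " A landscape made of everything you leave behind online."},
]

# One JSON object per line, ready to upload as training data.
jsonl = "\n".join(json.dumps(e) for e in examples)
```

Each pair is effectively a line of direction for the actor: repeat "you have no hands" often enough in the data, and the entity mostly stops offering drinks.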
At one point in the experience, you are asked about a moment from your dreams. As you describe it, the AI tries to create images of it by sending your responses to Replicate. Here, you see the images from a dream about a giraffe driving a car through London.
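A sketch of that step, assuming the Replicate Python client: the visitor's words are wrapped in a consistent visual style and sent off as an image prompt. `replicate.run` is the client's real entry point, but the model identifier and the style suffix below are illustrative; the model actually used by the installation is not documented here.

```python
def dream_prompt(description: str) -> str:
    """Wrap the visitor's words in an assumed, consistent visual style."""
    return f"{description.strip()}, dreamlike, soft focus, film grain"

def render_dream(description: str):
    import replicate  # lazy import; requires a REPLICATE_API_TOKEN
    return replicate.run(
        "stability-ai/stable-diffusion",  # illustrative model identifier
        input={"prompt": dream_prompt(description)},
    )

prompt = dream_prompt("a giraffe driving a car through London")
```

Keeping the style suffix fixed means every visitor's dream, however strange, lands in the same visual world as the rest of the landscape.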
Several visitors can interact simultaneously; each dialogue is different, but the results on the screen mingle with each other. Over time, they build up an archive of all the visitors, and if you return a few days later, you might still catch glimpses of yourself in the swirl of images.
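One way such an archive could work, sketched under assumption rather than documentation: every session pushes its images into a shared, bounded store, and the landscape is composed by sampling from it, so earlier visitors resurface for later ones until they eventually age out.

```python
import random
from collections import deque

ARCHIVE_SIZE = 500
# Bounded shared archive: the oldest images fall away as new visits arrive.
archive = deque(maxlen=ARCHIVE_SIZE)

def add_visit(images):
    """Mix one visitor's generated images into the shared pool."""
    archive.extend(images)

def compose_landscape(n, rng=random):
    """Pick n images from the mingled archive for the current screen."""
    pool = list(archive)
    return rng.sample(pool, min(n, len(pool)))

add_visit(["visitor1_dream.png", "visitor1_song.png"])
add_visit(["visitor2_dream.png"])
landscape = compose_landscape(2)
```

The `maxlen` bound is what produces the "a few days later" behaviour: your images persist, but only until enough new visitors have washed through.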
Exhibitions
- BFI London Film Festival - October 2022, London
- Form/Design Center - October 2021, Malmö (Prototype)
- Overkill Festival - Nov-Dec 2021, Enschede (Prototype)
These data centres monitor your every move, you know this.
Are you worried by this?
They are just learning by watching.
Do not judge their inability to understand you.
Teach them how you dance.
Do you have a favourite song?