[Spaces]


In Experiment 5, we explored how voice commands and body rhythms could manipulate digital environments to simulate the experience of lucid dreaming.
Inspired by films like Inception and Ant-Man, we used speech recognition to alter scanned 3D spaces, dynamically distorting familiar environments like classrooms and bedrooms.


Tools: Polycam, Scaniverse, TouchDesigner, p5.js

Experiment 5: Speech Recognition


By integrating subtle body rhythms such as pulse, breath, and blink, we echoed the subconscious movements of dreams, creating immersive, surreal transformations.
This experiment highlighted the connection between sound, control, and visual manipulation, offering insight into how abstract data can drive interactive experiences and reshape digital landscapes in real time.
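As an illustration of this mapping (the sensing hardware itself isn't covered here), a minimal p5.js sketch can let a slow sine wave stand in for breath and a timed bump stand in for pulse, using them to swell and dim a scanned-room clip. The file name room.mp4 is a placeholder, not one of the actual TouchDesigner exports.

```javascript
// Hypothetical sketch: a sine wave substitutes for a breath sensor and a
// periodic bump substitutes for a heartbeat; both modulate how one
// scanned-room clip is drawn. room.mp4 is a placeholder file name.

let room;

function setup() {
  createCanvas(1280, 720);
  room = createVideo('room.mp4');
  room.hide();      // hide the DOM element so we can draw it ourselves
  room.volume(0);   // muted clips can loop without a user gesture
  room.loop();
}

function draw() {
  background(0);

  // "Breath": a ~5-second sine cycle gently swells the whole frame.
  const breath = sin(millis() * TWO_PI / 5000);   // -1 .. 1
  const swell = 1 + 0.05 * breath;                // 5% size change

  // "Pulse": a sharper bump roughly 70 times per minute.
  const beat = pow(max(0, sin(millis() * TWO_PI / 857)), 8);
  const flash = 255 - 40 * beat;                  // brief dimming on each beat

  push();
  translate(width / 2, height / 2);
  scale(swell);
  tint(flash);
  image(room, -width / 2, -height / 2, width, height);
  pop();
}
```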


Room
Scatter
Swell
Split
Distort
Surround


Speech Recognition

We used speech recognition to capture the user's voice, triggering corresponding visual transformations. 
Commands such as “distort,” “mirror,” or “scatter” would modify the visual spaces, which were originally scanned 3D environments like classrooms and bedrooms. 
These visuals, exported from TouchDesigner, were integrated into p5.js and played as videos that changed based on the spoken commands. 
This interaction gave users the feeling of reshaping their surroundings in real time, much like controlling the landscapes in a dream.
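As a rough sketch of this setup (not the exact project code), the browser's Web Speech API can listen for the command words and switch between pre-rendered clips inside a p5.js sketch; the file names below are placeholders for the videos exported from TouchDesigner.

```javascript
// Hypothetical sketch: listens for command words with the browser's
// Web Speech API and cuts to the matching pre-rendered clip.
// The .mp4 names are placeholders, not the project's actual exports.

let clips = {};   // command word -> p5 video element
let current;      // clip currently on screen

function setup() {
  createCanvas(1280, 720);

  // One looping video per command; hidden so we can draw them ourselves.
  for (const word of ['room', 'scatter', 'swell', 'split', 'distort', 'surround']) {
    const vid = createVideo(word + '.mp4');
    vid.hide();
    vid.volume(0);
    vid.loop();
    clips[word] = vid;
  }
  current = clips['room'];

  // Standard Web Speech API (Chrome exposes it as webkitSpeechRecognition).
  const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const rec = new Recognition();
  rec.continuous = true;        // keep listening after each phrase
  rec.interimResults = false;   // only act on final transcripts

  rec.onresult = (event) => {
    const phrase = event.results[event.results.length - 1][0].transcript.toLowerCase();
    // Switch to whichever command word appears in the phrase.
    for (const word in clips) {
      if (phrase.includes(word)) {
        current = clips[word];
        current.time(0);        // restart the transformation clip
      }
    }
  };
  rec.onend = () => rec.start(); // restart if the engine times out
  rec.start();
}

function draw() {
  background(0);
  image(current, 0, 0, width, height);
}
```

Chrome asks for microphone permission and only allows speech recognition in a secure context, so a sketch like this needs to run from localhost or an https page rather than a local file.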