TRACES
Motion & Light

Traces of Motion

Can we elevate human motion through technology? How does movement leave its mark on us and our environment? We use interactive light projection to visualize the hidden paths we walk, elevating human motion into an audiovisual piece.

Mixed Realities


We mix physical reality with computer-assisted real-time visualizations. The result is a new cross-boundary arena where real and digital coexist and interact.

Three Worlds


Through three distinct visual worlds and choreographies, we invite viewers to reimagine what augmented performance can be, extending our bodies with light, geometry, order, and chaos.

Technology


Our system tracks human motion with exceptional speed, dynamically adapting visual patterns based on velocity, distance, height, angle, and more. This creates an embodied language that artists can learn and exploit for self-expression.

WORLDS

FLOW

clouds, rain, river, aurora, air, ice, stars


CYCLE

sharp, circles, neon, contrast, sparks, sun


BEAT

fluid, jump, beat, night, chaos

ABOUT

The Vision

TRACES is an exploration of human motion within an interactive light projection, creating new realms between the digital and physical space. Our goal is to extend gestures into brush strokes, steps into waves, and moves into traces of light and sound that continue to echo over space and time.


The Project

Our work is an intensive collaboration between technology, choreography, and cinematography, building on the work of established digital artists. We combine real-time visualization and projection mapping to create an immersive mixed reality experience.


The Technology

We use state-of-the-art mixed reality technology, of the kind used in animation, robotics, and scientific research, to bring visual interactions into the physical world within an artistic context.


What is our Inspiration?

In our fast-paced world, moments of physical expression pass unnoticed. What if we could capture these fleeting moments? We drew inspiration from multiple sources: the trails of light in long-exposure photography, the mathematical beauty of motion trajectories in physics and robotics, and the ancient human practices of choreography and storytelling. TRACES emerged from a profound curiosity about the ephemeral nature of human movement. As we move through physical space, we leave echoes behind that fade over time: invisible, undetectable echoes of intermingled actions that shape our reality without revealing themselves. What if we could show the ripples we make in space as we move, with all the dexterity and speed that is so typical of human movement?

What is TRACES?

TRACES was born as a cross-disciplinary experiment to explore these questions. The project sits between science, technology, and choreography: we created a physical space in which motion can leave marks behind, all visible, proud, noticed, seen, supportive, and complementary. Traces of light, flashes, swirls, colors, and patterns aligned with physical motion. By learning and exploiting action and reaction within the system, anyone can create their own embodied language, resulting in dynamic art on the ground: a living, breathing canvas where the body guides the creation of the surrounding visual landscape, truly in real time. In this way we join virtual and physical motion in a mixed realm between our world and that of virtual creations. With our experiment, we want to show how interactive visual art can be augmented with state-of-the-art technology to make visible what was, until now, only part of our imagination.

Photo credits: Alexander Krivitskiy

Cross-disciplinarity

We are an independent interdisciplinary team collaborating at the intersection of visual arts, choreography, cinematography, engineering, and scientific exploration. Our combined expertise covers real-time computing, projection mapping and tracking technologies, experimental choreography, film documentation, and rapid prototyping. The goal of our collaboration within this project was to create a link between visual and motion arts using cutting-edge technology and creative solutions. The project involved intensive prototyping phases in which, although we speak different professional languages, we needed to find a common voice to guide the project in the right direction. The result is truly cross-disciplinary experimentation where science and technology complement motion and visuals in a sustainable and balanced way.

Workflow

Creating TRACES required months of intensive collaboration across disciplines. We developed a unique methodology in which movement phrases were created in response to visual possibilities, and visual algorithms were refined based on choreographic needs. This iterative process led to what we call an "embodied language", in which movement qualities that dancers can consciously manipulate translate directly into specific visual effects. Over time, performers develop an intuitive understanding of the system's responses, learning to "play" the technology like an instrument. This combination of technical precision and artistic intuition is what makes each performance of TRACES both technically unique and emotionally resonant.

Photo credits: Google DeepMind

What is mixed-reality?

Mixed reality (MR) is an extended reality (XR) technology in which virtual animations and real-world imagery are combined. In contrast to VR, however, MR does this in the physical world (using projectors) without the need for specialized goggles. We use four projectors attached to a 2 m tall metal frame around a 4 m by 4 m arena; the overlapping projections let us create an immersive experience without shadows. We update the projected images so quickly that there is effectively no visible delay between movement and visualization. This low latency is what makes our system feel like a natural extension of human motion.
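As a back-of-the-envelope sketch of why the delay stays invisible: the 300 Hz capture rate is stated on this page, while the 60 Hz projector refresh is an assumed, typical value.

```typescript
// Rough latency budget for the tracking-to-projection loop.
// captureRateHz comes from the project description; projectorHz is
// an assumption (a typical projector refresh rate), not a measured value.
const captureRateHz = 300; // OptiTrack capture frequency
const projectorHz = 60;    // assumed projector refresh rate

// Worst-case wait for a fresh tracking sample: one capture interval.
const captureLatencyMs = 1000 / captureRateHz; // ≈ 3.33 ms

// Worst-case wait for the next projected frame: one refresh interval.
const renderLatencyMs = 1000 / projectorHz;    // ≈ 16.67 ms

// Upper bound on motion-to-light delay, ignoring processing time.
const worstCaseMs = captureLatencyMs + renderLatencyMs; // 20 ms

console.log(worstCaseMs.toFixed(2) + " ms"); // prints "20.00 ms"
```

Even this pessimistic bound sits well below the roughly 100 ms threshold at which people start to perceive interactive lag.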

Tracking & Visualization

We update the visuals according to human motion. But how do we know exactly how a body moves in 3D space with such high precision? We use a cutting-edge tracking solution based on infrared light (by OptiTrack) to detect the 3D positions of human body parts at a remarkable 300 captures per second. From these coordinates we calculate posture, movement direction, velocity, and other features, and feed them into our visualization pipeline. Our visualizations build on the work of established digital artists and are written with WebGL for a fully real-time experience. Based on these WebGL shaders, we developed three distinct visual worlds with different colors, styles, and feels, allowing artists to develop unique choreographies within each.
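To illustrate the step from raw coordinates to motion features, here is a minimal sketch, not the project's actual code: positions are assumed to arrive at the stated 300 Hz, and velocity and direction are derived by finite differences between consecutive samples.

```typescript
// Hypothetical sketch of deriving motion features from 3D tracking
// samples. The 300 Hz rate is from the project description; all
// function names and types here are illustrative assumptions.
type Vec3 = { x: number; y: number; z: number };

const CAPTURE_HZ = 300;
const DT = 1 / CAPTURE_HZ; // time between samples, ≈ 3.33 ms

// Finite-difference velocity between two consecutive position samples.
function velocity(prev: Vec3, curr: Vec3): Vec3 {
  return {
    x: (curr.x - prev.x) / DT,
    y: (curr.y - prev.y) / DT,
    z: (curr.z - prev.z) / DT,
  };
}

// Scalar speed: magnitude of the velocity vector.
function speed(v: Vec3): number {
  return Math.hypot(v.x, v.y, v.z);
}

// Unit direction of travel; the zero vector when standing still.
function direction(v: Vec3): Vec3 {
  const s = speed(v);
  return s === 0 ? { x: 0, y: 0, z: 0 }
                 : { x: v.x / s, y: v.y / s, z: v.z / s };
}

// Example: a marker moving 1 cm along x between two samples
// corresponds to 0.01 m / (1/300 s) = 3 m/s.
const v = velocity({ x: 0, y: 1, z: 0 }, { x: 0.01, y: 1, z: 0 });
```

Features like these (speed, direction, marker height) are the kind of per-frame quantities a visualization pipeline can map onto shader parameters.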

MEDIA

YouTube

Explore all our content on YouTube.

To Channel

LinkedIn

Connect with us professionally and follow our project updates.

Connect

Instagram

See our related Instagram posts and follow us.

Follow Us

TEAM
Dr. David Mezey


Concept, technology, tracking, mixed-reality, artistic direction, website
"Merging natural and synthetic systems provides new ways of exploration in science, engineering, arts and education."
LinkedIn: @davidmezey
David James


Technology, projection/audio mapping, artistic direction
"Creating immersive experiences through the fusion of light, sound, and motion."
LinkedIn: @TBC
Web: TBC
Patricia Woltmann


Choreography, motion concept, performance
"Movement is the universal language that connects our physical and digital worlds."
LinkedIn: @TBC
Web: TBC
Emma Sullivan


Cinematography, post-editing
"Documenting the interface of movement and technology through the lens."
LinkedIn: @TBC
Web: TBC
IMPRESSUM

Legal Information

TRACES Mixed Reality Art Project

Responsible for content: Dr. David Mezey, David James
Contact: https://mezeydavid.com/contact

All content © 2024 TRACES Project. All rights reserved.
Software licenses and attributions available upon request. Visualization code contributions stated under each video respectively.

Special thanks to the Science of Intelligence Cluster Berlin (link).