Catherine Maglione

Portfolio

AI and Performance

Stable Diffusion, LCM Distillation, Python, YOLOv8 Segmentation, OpenCV, Isadora

Frames
Left: body mask. Right: generated form.

Overview

This piece was developed during an interdisciplinary course led by Melissa Blanco Borelli in the SLIPPAGE Lab at Northwestern University. The course invited us to ask:

What does it mean to collaborate with AI in the act of performance? How can we use AI to deepen questions about human/non-human communication?

My work used a live feedback system to transform bodies into mercury-like abstract visuals—layering choreography, machine vision, and generative AI to explore how systems see and shape us.

The Process

The process combined:

  • YOLOv8 instance segmentation to isolate a performer’s body in space
  • Stable Diffusion SDXL in image-to-image mode to transform the silhouette into abstract visuals

Instance segmentation is a computer vision task that both detects objects and delineates their exact pixel-wise boundaries, offering much finer spatial resolution than traditional object detection, which only outputs coarse bounding boxes. This isolated the performer’s full silhouette with precision—capturing the body’s shape, not just its presence.
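A minimal sketch of what this step might have looked like, using the ultralytics package and OpenCV. The model weights, camera index, and threshold here are assumptions, not the exact values from the piece:

```python
# Sketch: extract a binary body silhouette from a live camera feed
# using YOLOv8 instance segmentation (ultralytics) and OpenCV.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # segmentation variant; weights file is an assumption
cap = cv2.VideoCapture(0)       # camera index 0 is an assumption

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # classes=[0] restricts detection to the COCO "person" class.
    results = model.predict(frame, classes=[0], verbose=False)

    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    r = results[0]
    if r.masks is not None:
        # Take only the first detected person, mirroring the single-body
        # behavior the piece deliberately preserved. Each mask is a
        # low-resolution float array; resize to frame size and binarize.
        m = r.masks.data[0].cpu().numpy()
        m = cv2.resize(m, (frame.shape[1], frame.shape[0]))
        mask[m > 0.5] = 255

    cv2.imshow("silhouette", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```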

The resulting binary silhouette mask was then securely transferred to a remote machine, where I used Stable Diffusion XL (SDXL) in image-to-image mode. SDXL is a text-to-image diffusion model, but when used in image-to-image workflows, it can take a reference image and generate a new one that shares its structure.
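A sketch of what the image-to-image step could look like with Hugging Face diffusers. The prompt, strength, and step count are assumptions; the LCM LoRA reflects the LCM distillation named in the tools list, which makes few-step inference feasible for anything approaching live performance:

```python
# Sketch: SDXL image-to-image over a silhouette mask, with an LCM LoRA
# for few-step inference. Model IDs and parameters are assumptions.
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# LCM distillation lets the model converge in ~4 steps instead of ~30.
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

mask = Image.open("silhouette.png").convert("RGB").resize((1024, 1024))

out = pipe(
    prompt="liquid mercury form, chrome fluid, black background",  # assumed prompt
    image=mask,
    strength=0.6,           # how far to diffuse away from the silhouette
    num_inference_steps=4,  # few steps thanks to LCM
    guidance_scale=1.0,     # LCM works best with little or no CFG
).images[0]
out.save("generated.png")
```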

Performance Reflection

This piece explored the ambiguity of bodily boundaries by intentionally preserving a limitation in the segmentation pipeline. Although the YOLOv8 model used for instance segmentation was capable of detecting multiple bodies, the script remained configured to identify only a single person. This constraint informed the choreography: two dancers moved in close proximity—touching, folding, and intertwining—so their forms would be interpreted as one.
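The constraint described above can be as small as a single argument in the detection call. This is a hypothetical reconstruction, reusing the model and frame from the earlier sketch, not the piece's actual script:

```python
# Hypothetical reconstruction of the single-person constraint:
# max_det=1 caps detections at one instance, so two intertwined
# dancers collapse into a single silhouette.
results = model.predict(frame, classes=[0], max_det=1, verbose=False)
```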

Rather than correcting the segmentation constraint, the work embraced it as a generative condition: the model’s inability to distinguish between bodies became a site of meaning. The performance questioned where one body ends and another begins, and what is lost or revealed when computational systems collapse plurality into singularity. By becoming one in the eyes of the machine, the work asked what it means to share authorship with an algorithm—and how identity, intimacy, and presence shift when likeness is filtered, abstracted, and returned through machine vision.

Reflections

Throughout this course, I expanded my thinking around AI as a medium. I remain deeply resistant to the notion that AI can or should replace human creativity. The outputs it produces, while sometimes visually striking, are fundamentally devoid of meaning until we, as artists, decide to engage with them. And that process of meaning-making, of weaving intention through randomness, is profoundly human.

Working with non-human generative systems only reaffirmed my awe for the depth, breadth, and emotional intelligence of human creativity. Each group’s vignette in our final performance was built through collaboration, conflict, and care. Despite using advanced ML models, what resonated most was the very human labor of listening, feeling, and creating together.

This process also brought me into sharper alignment with the ethical and environmental weight of the tools we use. During the live performance, one performer poured water slowly into a bowl as a ritualistic reminder of the cost behind our medium. It was a quiet gesture—but one meant to center accountability amid the generative play.