Elliott Lupp

Elliott Lupp is a composer, improvisor, visual artist, and sound designer whose work often invokes images of the distorted, chaotic, visceral, and absurd. This aesthetic approach, applied to both acoustic and electroacoustic composition, has led to a body of work built on the manipulation of noise, extreme gesture, shifting timbre, and performer/computer improvisation as core elements.

Elliott has received a number of awards and honors for his work, including a 2019 SEAMUS/ASCAP Commission, the 2019 Franklin G. Fisk Composition Award for Chamber Music, and Departmental and All-University awards in Graduate Research and Creative Scholarship. His music has been performed at a variety of electroacoustic festivals including N_SEME, CHIMEfest, Electronic Music Midwest, MOXsonic, Fulcrumpoint New Music Project, SEAMUS, and Electroacoustic Barn Dance, and by such ensembles as the Dutch/American trio Sonic Hedgehog (flute, clarinet, and electric guitar), the Atar Piano Trio, Found Sound New Music Ensemble, various members of MOCREP, The Chicago Composer’s Orchestra, Fonema Consort, and Ensemble Dal Niente.

PhD: Northwestern University (currently pursuing) – Alex Mincek, Jay Alan Yim

MM: Western Michigan University – Christopher Biggs, Lisa R. Coons

BM: Columbia College Chicago – Eliza Brown, Kenn Kumpf

Program Note, erase-repeat (2019)

Commissioned by ASCAP/SEAMUS in 2019, erase-repeat is the first in a series of composed projects for the experimental electroacoustic duo MOUTHS. The work stems primarily from a love of electronic instrument building, noise music, laptop improvisation, and theatrics, and is constructed from overlapping sections that instruct the performers not only to listen closely, but to improvise within certain performative/sonic parameters. More specifically, the work asks the performers to focus heavily on the blending, shaping, and manipulation of localized sounds, generated live via amplified viola, electric guitar, live synthesis, and sample manipulation. These captured signals are then manipulated, broken down, and built upon in real time via each performer’s unique live processing setup as the piece progresses.

Currently, the work uses two independent flashcard-like ‘scores’ to feed each performer their own performative/sonic parameters to improvise with throughout the piece. This is achieved via a Max patch visible on both performers’ laptops.

Each performer in MOUTHS uses their own uniquely programmed live processing interface (built mostly in Max/MSP) that is controlled live via iPad, QuNeo, and Korg nanoKONTROL2. Finally, the primary Max patch outputs a track of fixed media that acts as glue throughout the structured improvisation, further assisting each performer’s improvisations throughout the piece.