Click here to view this piece with user controls
This experimental method of displaying a poem seeks to bring the features of speech more directly into the text by marking the rhythm of stressed and unstressed syllables in the display and by showing only one line of text at any given moment. The intention is to present the text in a way that more closely resembles listening to it, merging features of the two media and potentially highlighting aspects of the poem that might not have been perceptible before.
The interface ‘plays’ the selected poem, showing one line at a time and using circles of varying sizes, accompanied by loud and soft tapping noises, to represent the rhythm of the displayed text. In this version, the reader can adjust the speed of the presentation, stop it, and choose whether or not the text is displayed. The program, written in Python, uses the Pronouncing library, developed by Allison Parrish and drawing on the CMU Pronouncing Dictionary, to detect the stress pattern in each word. It then generates a code, a sequence of unstressed syllables, primary stresses, secondary stresses and pauses, which is rendered as shapes and sounds on screen.
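The stress-detection step described above can be sketched as follows. This is an assumed reconstruction, not the author's actual code: it uses the real `pronouncing` library (installable with `pip install pronouncing`) where available, with a tiny stub dictionary as a fallback so the sketch runs on its own. The function names (`stresses_for`, `line_to_code`) and the pause marker are illustrative choices.

```python
# Sketch of the stress-detection step (assumed structure, not the author's
# actual program). CMU stress digits: 0 = unstressed syllable,
# 1 = primary stress, 2 = secondary stress.
try:
    import pronouncing  # Allison Parrish's library over the CMU dictionary

    def stresses_for(word):
        phones = pronouncing.phones_for_word(word)
        # stresses() reduces an ARPAbet phone string to its stress digits,
        # e.g. "B ER1 N IH0 NG" -> "10"
        return pronouncing.stresses(phones[0]) if phones else None
except ImportError:
    # Fallback stub so the sketch is self-contained without the library.
    STUB = {"burning": "10", "bright": "1"}

    def stresses_for(word):
        return STUB.get(word)

def line_to_code(line, pause="|"):
    """Turn one line of text into a stress code, one digit per syllable,
    with a pause marker between words ("?" = word not in the dictionary)."""
    codes = []
    for raw in line.split():
        word = raw.strip(".,;:!?\"'").lower()
        s = stresses_for(word)
        codes.append(s if s else "?")
    return pause.join(codes)

print(line_to_code("burning bright"))  # "10|1"
```

A code string like `"10|1"` is then straightforward to render: each digit maps to a circle size and tap volume, and each pause marker to a gap.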
The interface is inspired partly by the work of artists who have sought to represent the verbal or linguistic features of poetry in visual art. These include Tom Schofield, digital artist and researcher at the School of Arts and Cultures, Newcastle University, whose computer-generated piece Unnamed Terrains (see image below) uses similar computational techniques to scan poetry manuscripts, identifying the stress patterns in the words and replacing them with black and white dots; and painter David Miller, whose Visual Sonnets take the form of painted horizontal lines that visually echo the shape of writing in verse and prompt the viewer to connect the changes in colour and thickness with linguistic meaning. (See image below.)
Another influence on this piece is the work of multimedia poets such as Heather Phillipson and Young-Hae Chang Heavy Industries, who are able to use animation to control how long the reader of a text is exposed to a particular word, line or phrase. The work of YHCHI, for instance, typically displays words in quick succession on the screen, putting the reader in a position closer to that of an audience member at a poetry reading, unable to re-read or re-hear the words once they have passed.
The Poem Rhythm-Tapper attempts to use similar techniques to those described above to explore the boundary between the visual and the auditory experience of a poem, simultaneously expressing some aural features of the poem visually and incorporating features of live performance into the presentation of a written text. To this end, I have also developed an alternative version of the interface that removes all controls and text from the display (viewable via the link below), so that the reader encounters only the rhythm of the words and, as in the work of YHCHI or a live reading, cannot stop or start the ‘performance’ of the text once it has begun.
Click here to view this piece without user controls