ADEPt
ANALYSIS

Studying movement, sound and text provides us with a better understanding of a fundamental aspect of human culture: expressive communication.

These are the components of our data analysis process:

Video-EASE

Encoding and Analysis of Sound and Embodiment

Video-EASE integrates a variety of data to provide an empirical understanding of complex cultural phenomena: from the cycles and discrepancies of polyrhythmic dance and drumming to the gestures that accompany speech, traditional poetry and contemporary hip-hop. Video-EASE motion encodings augment data from established sound annotation software (like ELAN and Sonic Visualiser), making it possible to analyze and visualize sound and movement together. Functions are currently implemented in MATLAB and are available for 2D motion analysis on GitHub. In 2018, we will expand Video-EASE to include scripts for 3D motion and make the suite available in Python.
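
As a concrete illustration, the MATLAB sketch below aligns a motion encoding with sound annotations on a shared timeline. The file names and column layouts are hypothetical stand-ins for an exported annotation and a Video-EASE-style motion file; this is not the actual Video-EASE API.

```matlab
% Minimal sketch (not the Video-EASE API): overlay sound onsets on a
% motion encoding. Assumes hypothetical files 'motion.csv' (time, x, y)
% and 'onsets.csv' (onset times in seconds) exported from ELAN or
% Sonic Visualiser.
motion = csvread('motion.csv');          % columns: time (s), x-pixels, y-pixels
onsets = csvread('onsets.csv');          % column:  onset time (s)

figure; hold on;
plot(motion(:,1), motion(:,3), 'b-');    % vertical position over time
for k = 1:numel(onsets)                  % mark each sound onset
    plot([onsets(k) onsets(k)], ylim, 'r--');
end
xlabel('Time (s)'); ylabel('y-pixels');
title('Motion encoding with sound onsets overlaid');
```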

Pattern-Finding

Finding patterns in sound and movement using Video-EASE contourESE and findPattern

Video-EASE's contourESE function discretizes continuous signals into strings of salient events. The output can then be used for pattern-finding with the findPattern function. This is a great way to study the rich improvisatory tradition of music and dance that pervades Africana cultures, on the continent and in the Diaspora. Check out the analysis of a trumpet solo improvisation by Dizzy Gillespie (at left, or below on mobile).
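
The toy MATLAB sketch below illustrates only the underlying idea, discretizing a contour into event symbols and then searching the resulting string; it is not the actual contourESE/findPattern implementation, and the pitch values are invented.

```matlab
% Minimal sketch of the idea behind contourESE and findPattern (not the
% actual Video-EASE code): turn a continuous contour into a string of
% salient events, then search the string for a motif.
f0 = [220 220 247 262 247 220 220 247 262 247];   % toy pitch track (Hz)

d = sign(diff(f0));                % direction of change between samples
events = repmat('s', 1, numel(d)); % 's' = steady
events(d > 0) = 'u';               % 'u' = rising
events(d < 0) = 'd';               % 'd' = falling

idx = strfind(events, 'uud');      % find a rise-rise-fall motif
fprintf('Motif found at event positions: %s\n', mat2str(idx));
```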

Movement

Video Motion Encoding and Analysis using Video-EASE motion2D

While there are many fantastic (and free) software options available for sound annotation, this is not the case for video motion analysis. Video-EASE scripts provide an easy way to encode motion from a variety of video formats. A detailed frame-by-frame encoding takes time but is immensely valuable, both for understanding the analysis object and as training data for developing automated motion analysis. Where frame-by-frame detail is not needed, one can select a frame interval (e.g., one in five frames) or refer to a sound segmentation to encode only the frames that coincide with onsets, offsets or peak energy. Encoded motion is saved as a three-column comma-separated values (.csv) file with timecode, x-pixels and y-pixels.
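
The MATLAB sketch below illustrates this workflow under stated assumptions: a hypothetical video file, a fixed one-in-five frame interval and one tracked point per frame. The actual motion2D function may differ.

```matlab
% Minimal sketch of interval-based motion encoding (not the actual
% Video-EASE motion2D function). Assumes a hypothetical 'dance.mp4'.
v = VideoReader('dance.mp4');
step = 5;                                 % encode one in five frames
t = []; xy = [];
for f = 1:step:floor(v.Duration * v.FrameRate)
    frame = read(v, f);                   % grab frame f
    image(frame); axis image off;
    title(sprintf('Frame %d: click the tracked point', f));
    [x, y] = ginput(1);                   % analyst clicks the point of interest
    t(end+1, 1)  = (f - 1) / v.FrameRate; % timecode in seconds
    xy(end+1, :) = [x, y];
end
csvwrite('motion.csv', [t, xy]);          % timecode, x-pixels, y-pixels
```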

Sound

Speech and Audio Signal Processing

One of ADEPt's ongoing objectives is to provide documentation for an empirical understanding of the special prosodic features of speech and vocal performance in African languages, such as the tonal contours of Yorùbá poetry (left). To encode the pitch and timing of an audio signal, we use Queen Mary University of London's Sonic Visualiser, the Max Planck Institute for Psycholinguistics' ELAN multimedia annotator and Celemony's Melodyne software. The video to the left shows an example of a transcription made by Melodyne's automated analysis and corrected by our analysts using the program's Direct Note Access feature.
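
For readers working from exported pitch tracks, the MATLAB sketch below converts frequency to semitones so tonal contours are easier to compare. It assumes a hypothetical two-column CSV export (time, frequency) of the kind Sonic Visualiser can produce.

```matlab
% Minimal sketch (assumed export format): plot a tonal contour in
% semitones from a hypothetical 'pitch_track.csv' with columns
% time (s) and F0 (Hz).
pt = csvread('pitch_track.csv');
ref = 110;                              % reference frequency (Hz), e.g. A2
semitones = 12 * log2(pt(:,2) / ref);   % distance from reference in semitones

plot(pt(:,1), semitones, '.');
xlabel('Time (s)'); ylabel(sprintf('Semitones re %d Hz', ref));
title('Tonal contour of the vocal line');
```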

Text with Diacritics

Tone-to-text applications and Text-mining

Many languages in Sub-Saharan Africa and creole languages within the Diaspora have unique tonal features. An example is the Yorùbá language of southwestern Nigeria, which has three tone levels that provide lexical contrast (see image at right). ADEPt personnel have developed the first tone recognition applications for West African tone languages, along with innovative ways to analyze text with diacritics. Transcriptions of our videos and audio are available as .srt files, compatible with ELAN language documentation software, Adobe Premiere, YouTube's Creator Studio and other platforms.
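
As a small illustration of text-mining with tone diacritics (not ADEPt's tone recognition software), the MATLAB sketch below counts high (acute) and low (grave) tone marks in a short Yorùbá string; unmarked vowels carry mid tone.

```matlab
% Minimal sketch: count tone marks in Yorùbá text. Assumes precomposed
% accented characters; real texts may also use combining marks and
% underdot vowels (ẹ, ọ, ṣ).
txt  = 'Èdè Yorùbá';                   % "the Yoruba language"
high = 'áéíóúÁÉÍÓÚ';                   % acute accent = high tone
low  = 'àèìòùÀÈÌÒÙ';                   % grave accent = low tone

nHigh = sum(ismember(txt, high));
nLow  = sum(ismember(txt, low));
fprintf('High tones: %d, low tones: %d\n', nHigh, nLow);
```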
