The five cases below highlight tools developed for the analysis of dance in the project ICI, which is part of LABEX Arts-H2H at University Paris 8.
Case 1 exemplifies how motion tracking of two freely improvising dancers can be reduced to a two-dimensional space that permits quantification of their relationship. Case 2 demonstrates the development of movement complexity measures based on acceleration data. Case 3 is a novel attempt to produce a quantitative analogue to Stern's (1984) concept of affective attunement (here between a dancer and a spectator). Cases 4 and 5 demonstrate our approach to the collection and quantification of first-person data and its combination with third-person data.
This video exemplifies how motion tracking of two freely improvising dancers can be reduced to a two-dimensional space that permits quantification of their relationship. The recorded motion (animated with the stick figures) was described numerically with features such as positions, velocities, and accelerations. The dimensionality of the feature set was then reduced with Principal Component Analysis (PCA) to produce the two components shown on the right in the video. The circles in the plot effectively represent the movement styles of the dancers. Therefore, when the dancers move similarly in similar poses (see time 3:40), the circles are close to each other. Different movement styles, such as at time 2:25, place the circles far apart.
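A minimal sketch of this kind of reduction is shown below, assuming per-frame feature matrices for the two dancers; the function name and the standardization step are illustrative and not the project's exact pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_to_2d(features_a, features_b):
    """Project both dancers' per-frame motion features into one shared 2-D space.

    features_a, features_b: arrays of shape (n_frames, n_features) holding
    positions, velocities, and accelerations for each dancer.
    """
    both = np.vstack([features_a, features_b])
    # Standardize so that positions, velocities, and accelerations
    # contribute on comparable scales before PCA.
    both = (both - both.mean(axis=0)) / (both.std(axis=0) + 1e-9)
    pca = PCA(n_components=2).fit(both)
    n = len(features_a)
    return pca.transform(both[:n]), pca.transform(both[n:])
```

Each dancer can then be drawn as a circle at their current two-dimensional coordinates, so that the distance between the circles reflects how different the movement styles are at that moment.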
The video below demonstrates an acceleration-based estimate of movement complexity. Here the complexity attempts to capture whether the movements are simple and predictable or complex and unpredictable. The estimate combines the variance of the acceleration signals with the size of the part of the signal left unexplained by PCA. The blue and black lines in the video are the complexity estimates for the dancers wearing the same colors. It can be seen from the video that when the dancers perform similar actions, such as standing still or walking around, they have similar levels of complexity. Unexpected events also cause peaks in the complexity, such as at time 01:03 when a dancer falls to the floor.
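A rough sketch of such an estimate for a short window of accelerometer data follows; the exact weighting of the two terms is not documented here, so a simple product is assumed for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

def complexity(acc_window, n_components=2):
    """Estimate movement complexity from one window of acceleration data.

    acc_window: array of shape (n_samples, n_channels) with accelerometer
    readings from one dancer. Combines the overall variance with the
    fraction of variance that a low-dimensional PCA model fails to explain.
    """
    variance = acc_window.var(axis=0).sum()
    pca = PCA(n_components=n_components).fit(acc_window)
    unexplained = 1.0 - pca.explained_variance_ratio_.sum()
    # The weighting of the two terms is an assumption made for illustration.
    return variance * unexplained
```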
This case is a novel attempt to produce a quantitative analogue to Stern's (1984) concept of affective attunement (here between a dancer and a spectator). We attempted to capture attunement with two measures based on recorded accelerations.
The first measure, shown in the upper graph, is a correspondence between acceleration histograms in a 5-second window. An example of a high value for the measure starts at time 1:35 in the video, where the dancer slows down, and thus both the dancer and the observer have low accelerations. While the behaviors have similarities in this case, we cannot be entirely sure whether this is a true indication of attunement or only a coincidence.
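The correspondence measure is not specified in detail here, so the sketch below assumes histogram intersection of normalized acceleration-magnitude histograms over the window; the bin count and value range are illustrative choices:

```python
import numpy as np

def histogram_correspondence(acc_dancer, acc_observer, bins=20, value_range=(0.0, 30.0)):
    """Compare acceleration distributions of dancer and observer in one window.

    acc_dancer, acc_observer: 1-D arrays of acceleration magnitudes covering
    the same 5-second window.
    """
    h1, _ = np.histogram(acc_dancer, bins=bins, range=value_range)
    h2, _ = np.histogram(acc_observer, bins=bins, range=value_range)
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum()  # 1.0 means identical distributions
```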
The second measure, shown in the lower graph, is a windowed correlation between accelerations in a 5-second window. An example of a high value can be found in the video starting at time 4:20, where both persons start an action from stillness with only a short delay between them. As both the dancer and the observer participate actively in the behavior, this is a more likely indication of attunement than the behavior of the first example.
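A minimal sketch of the windowed correlation is given below, assuming both signals are acceleration magnitudes sampled at the same rate (the sampling rate is an assumed value):

```python
import numpy as np

def windowed_correlation(acc_dancer, acc_observer, fs=100, window_s=5.0):
    """Pearson correlation of two acceleration signals in a sliding 5-second window.

    fs: sampling rate in Hz. The result is placed at the centre of each window.
    """
    w = int(fs * window_s)
    out = np.full(len(acc_dancer), np.nan)
    for i in range(len(acc_dancer) - w + 1):
        a = acc_dancer[i:i + w]
        b = acc_observer[i:i + w]
        if a.std() > 0 and b.std() > 0:  # skip still segments to avoid division by zero
            out[i + w // 2] = np.corrcoef(a, b)[0, 1]
    return out
```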
This video shows a duo dance and a case where responses from the audience can be predicted from recorded movement data. During the live performance, spectators were asked to indicate, using a tablet interface, when they felt that there was a start or an ending in the dance. The blue line in the video shows the annotations of the starts. For example, at time 00:12 there is a peak of 5 persons indicating a start within the next 5 seconds. During the performance, the movements of the dancers were recorded with accelerometers. The red line is a predictor for the starts that has high values when there is a rise in the level of overall acceleration. The same approach can also be used for predicting perceived endings by finding drops in the level of acceleration.
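A sketch of such a predictor, assuming a single signal of summed acceleration magnitudes from both dancers; the smoothing length and sampling rate are illustrative values, not the ones used in the project:

```python
import numpy as np

def start_predictor(acc_level, fs=100, smooth_s=2.0):
    """Predictor that rises when the overall acceleration level increases.

    acc_level: 1-D signal of summed acceleration magnitudes of both dancers,
    sampled at fs Hz.
    """
    w = int(fs * smooth_s)
    smoothed = np.convolve(acc_level, np.ones(w) / w, mode="same")
    rise = np.diff(smoothed, prepend=smoothed[0])  # change in level per sample
    return np.clip(rise, 0.0, None)  # keep only rises in the level

# Perceived endings can be predicted analogously from drops,
# i.e. np.clip(-rise, 0.0, None).
```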
This video shows a solo dance (at the top), a reconstruction of the annotations drawn by spectators with tablets (at the bottom left), and a heat map created from the annotations (at the bottom right). The task given to the spectators was to draw the targets of their attention over the background image. In the image, the blue and black figures stand for the dancers, the large figure at the bottom is meant for attention on oneself, and the smaller figures at the bottom represent the other persons in the audience. In the heat map, presses on single points have been expanded to small circles, drawn ellipses have been filled, and drawn lines have been made thicker. At time 00:30, the heat map lights up as 10 persons make annotations following a sharp motion from the dancer in black. The heat map enables precise analysis of the annotations, as regions corresponding, for example, to one dancer or one limb can be extracted from the map.
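A sketch of how point presses could be accumulated into such a heat map; the image size, circle radius, and coordinate convention are assumptions, and drawn ellipses and lines would be rasterized and summed in the same way:

```python
import numpy as np

def annotation_heat_map(point_presses, shape=(480, 640), radius=10):
    """Accumulate spectators' point presses into a heat map.

    point_presses: list of (row, col) coordinates of presses made in the
    current time window, one entry per annotation.
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    heat = np.zeros(shape)
    for r, c in point_presses:
        # Expand each press into a filled circle, as described above.
        heat += ((rows - r) ** 2 + (cols - c) ** 2) <= radius ** 2
    return heat  # each pixel counts how many annotations cover it
```

Regions of interest, such as the pixels covering one dancer or one limb in the background image, can then be summed over this map to extract annotation counts per target.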
Last update on 2017-02-14 by Klaus Förger.