Dance Dissected: Enhancing Labanotation Generation through Part-specific Attention with Transformers

Authors: Min Li, Jing Sang, and Lina Du
Conference: ICIC 2025 Posters, Ningbo, China, July 26-29, 2025
Pages: 1959-1971
Keywords: Labanotation generation, Part-Specific Attention, Transformer, limb movements, encoder-decoder.

Abstract

Labanotation is a widely used notation system for recording human dance movements. However, writing standard Labanotation scores requires extensive professional training. Automatically generating Labanotation scores from motion capture data can significantly reduce the manual effort of dance documentation and serve as a powerful tool for preserving folk dances as part of intangible cultural heritage protection. Despite this, existing methods for Labanotation generation struggle to capture the fluid and variable limb movements inherent in folk dance performances. In this paper, we introduce a novel Transformer-based model, the PSA-Transformer (Transformer with Part-Specific Attention), to achieve more accurate Labanotation generation. First, we develop a Part-Specific Attention (PSA) module that adheres to the body part division rules of Labanotation. This module extracts spatial attention features at the level of individual body parts, enhancing the precision of movement capture. Then, this attention mechanism is integrated into an encoder-decoder architecture, enabling the model to learn global temporal dependencies within the feature sequences produced by the PSA module. The decoder component of the PSA-Transformer then generates the corresponding Laban symbols sequentially. Extensive experiments on two real-world datasets demonstrate that our proposed model performs favorably against current state-of-the-art methods in automatic Labanotation generation.
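The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of the idea it describes: self-attention applied within per-body-part joint groups (an assumed Labanotation-style division), whose pooled features feed a standard Transformer encoder-decoder that emits Laban symbol tokens. All names, joint indices, groupings, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Assumed joint grouping loosely following Labanotation's body-part columns
# (arms, legs, torso, head); the paper's exact division rules may differ.
PART_JOINTS = {
    "left_arm":  [11, 12, 13],
    "right_arm": [14, 15, 16],
    "left_leg":  [1, 2, 3],
    "right_leg": [4, 5, 6],
    "torso":     [0, 7, 8],
    "head":      [9, 10],
}

class PartSpecificAttention(nn.Module):
    """Applies self-attention within each body part, then concatenates
    the pooled per-part features into one frame-level feature vector."""
    def __init__(self, joint_dim=3, d_model=64, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(joint_dim, d_model)
        self.attn = nn.ModuleDict({
            part: nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for part in PART_JOINTS
        })
        self.d_model = d_model

    def forward(self, joints):                       # joints: (B, T, J, 3)
        B, T, J, _ = joints.shape
        x = self.proj(joints)                        # (B, T, J, d)
        feats = []
        for part, idx in PART_JOINTS.items():
            p = x[:, :, idx].reshape(B * T, len(idx), self.d_model)
            out, _ = self.attn[part](p, p, p)        # attention within the part
            feats.append(out.mean(dim=1))            # pool joints -> (B*T, d)
        return torch.cat(feats, dim=-1).view(B, T, -1)  # (B, T, P*d)

class PSATransformer(nn.Module):
    """PSA features -> Transformer encoder-decoder -> Laban symbol logits."""
    def __init__(self, n_symbols, d_model=64, n_parts=len(PART_JOINTS)):
        super().__init__()
        self.psa = PartSpecificAttention(d_model=d_model)
        self.fuse = nn.Linear(n_parts * d_model, d_model)
        self.transformer = nn.Transformer(d_model, nhead=4,
                                          num_encoder_layers=2,
                                          num_decoder_layers=2,
                                          batch_first=True)
        self.embed = nn.Embedding(n_symbols, d_model)
        self.out = nn.Linear(d_model, n_symbols)

    def forward(self, joints, tgt_tokens):
        mem = self.fuse(self.psa(joints))            # (B, T, d) motion memory
        tgt = self.embed(tgt_tokens)                 # (B, L, d) symbol history
        mask = nn.Transformer.generate_square_subsequent_mask(
            tgt_tokens.size(1)).to(tgt.device)       # causal decoding mask
        dec = self.transformer(mem, tgt, tgt_mask=mask)
        return self.out(dec)                         # (B, L, n_symbols) logits

# Toy usage: 2 clips, 100 frames, 17 joints; teacher-forced symbol ids.
model = PSATransformer(n_symbols=32)
logits = model(torch.randn(2, 100, 17, 3),
               torch.zeros(2, 20, dtype=torch.long))  # -> (2, 20, 32)
```

At inference time such a decoder would be run autoregressively, feeding each predicted Laban symbol back as input for the next step, consistent with the sequential generation the abstract describes.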