Development of a combined ‘real-body’ and motion capture tool (MoveMe 3D)
PI: Dr Tracey Thornborrow – School of Psychology – Staff Profile
Co-investigators – Franky Mulloy – School of Sport and Exercise Science – Staff Profile | Prof. David Mullineaux – School of Sport and Exercise Science – Staff Profile | Prof. Martin Tovee – School of Psychology – Staff Profile | Prof. Luc Bidaut – School of Computer Science – Staff Profile
The aim of the project was to develop a methodological tool, called ‘MoveMe 3D’, by combining data generated from technologies used in sports biomechanics and body image research. Using infrared motion capture and 3D body scans, the MoveMe 3D method can create bespoke, lifelike animations based on ‘real’ identities rather than CGI avatars. In this way, participants/users can be presented with a fully mobile, three-dimensional version of their own body, or any other body, either on a computer screen or in a virtual environment. MoveMe 3D was conceived initially to advance cross-cultural research on body movement and attractiveness. However, it has many potential applications, including training programmes for individuals with body dysmorphia, tailored movement analysis and feedback to improve athlete performance, and patient rehabilitation.
Ethical approval for the study was granted by the School of Psychology Research Ethics Committee at the University of Lincoln (project number PSY1718543).
To provide the raw material required for developing and testing the method to a proof of concept level, we firstly collected movement and 3D image data from a small sample of participants. The second stage involved using various software to blend the data sets together to create an animated ‘real body’.
Participants and procedure – A total of 11 female participants (ages 18–56) consented to provide their body movement and body image data for use in the project. All participants took part individually, following the same procedure.
Motion capture data collection
Participants attended the motion capture laboratory individually for a single visit. Twelve infrared motion capture cameras (Cortex, Motion Analysis Corporation, Santa Rosa, California) were calibrated to cover a 4 m × 4 m × 2 m capture volume. A total of 45 retro-reflective markers were placed on anatomical bony landmarks to allow whole-body tracking of each participant. A marker-set (name) was created to ensure live tracking and identification during each trial, and to reduce post-processing time.
Participants performed three different dances to three music clips. They were instructed to dance as if they were out with friends in a club or similar social setting. Motion capture data were recorded at 150 Hz for the first 30 seconds of each music clip. Participants were then shown a video clip of Nicaraguan women dancing and asked to dance to the three music clips again in a similar style. Following the completion of each trial, post-processing was conducted to identify all visible markers.
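A common task in this post-processing step is filling short gaps where a marker was briefly occluded. The sketch below is illustrative rather than the lab's actual pipeline (Cortex handles this internally): it linearly interpolates missing samples in a single marker coordinate, with occluded frames marked as `None`.

```python
def fill_gaps(trajectory):
    """Linearly interpolate None gaps in a 1-D list of marker coordinates."""
    filled = list(trajectory)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            start = i - 1  # last visible frame before the gap
            end = i
            while end < len(filled) and filled[end] is None:
                end += 1  # first visible frame after the gap
            if start >= 0 and end < len(filled):
                step = (filled[end] - filled[start]) / (end - start)
                for j in range(i, end):
                    filled[j] = filled[start] + step * (j - start)
            i = end
        else:
            i += 1
    return filled

# A marker occluded for two frames between visible samples:
print(fill_gaps([1.0, None, None, 4.0]))  # → [1.0, 2.0, 3.0, 4.0]
```

Gaps at the very start or end of a trial have no neighbouring sample on one side, so this sketch leaves them unfilled rather than extrapolating.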
3D Body image data collection
A medical-grade 3dMD™ body scanner was used to collect participants’ body image data. The scanner consists of a bank of 18 cameras positioned to enable 360-degree full-body capture. The system automatically generates a continuous polygon ‘mesh’ and maps the colour information from the images onto the mesh. The participant stands on a designated plate at the centre of the suite of cameras. The body scanner captures a 20-second video (7 frames per second) of the participant, from which one frame is selected to generate the required 3D body image data.
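The textured mesh the scanner produces is, at its simplest, three lists: vertex positions, texture (UV) coordinates carrying the colour mapping, and faces that index into both. The sketch below is a hypothetical illustration of that structure, serialised in the common Wavefront OBJ format; a single triangle stands in for a full body mesh of many thousands of polygons.

```python
def mesh_to_obj(vertices, uvs, faces):
    """Serialise a textured triangle mesh to Wavefront OBJ (1-based indices)."""
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += [f"vt {u} {v}" for u, v in uvs]
    # Each face corner references a vertex index and its matching UV index.
    lines += ["f " + " ".join(f"{i + 1}/{i + 1}" for i in face) for face in faces]
    return "\n".join(lines)

obj = mesh_to_obj(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    uvs=[(0, 0), (1, 0), (0, 1)],
    faces=[(0, 1, 2)],
)
print(obj)
```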
Motion capture mapping and animation
Autodesk’s MotionBuilder was used to import the captured motion data (.c3d files) and animate a ‘rigged’ body scan model. The rotate tool was used to rotate the motion capture points until the front points faced positive along the x, y, and z axes. An ‘Actor’ was then imported into MotionBuilder and resized to the approximate height of the points representing the participant’s motion capture data.
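MotionBuilder’s rotate tool performs this reorientation interactively, but the underlying operation is a standard rigid rotation of the marker point cloud. As an illustrative sketch (assuming z is the vertical axis), rotating every marker about z by the same angle turns the whole participant without distorting the body:

```python
import math

def rotate_z(points, degrees):
    """Rotate (x, y, z) points about the vertical z axis by the given angle."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    # Standard 2-D rotation applied to the x-y plane; z is unchanged.
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

# A marker at (1, 0, 0) rotated 90 degrees ends up on the positive y axis:
rotated = rotate_z([(1.0, 0.0, 0.0)], 90)
```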
Once the Actor model had been placed and edited so that it sat on top of the motion capture points and matched them in terms of limb placement, the mapping part of the process began. To do this, we used MotionBuilder’s Marker Set functionality to drag the individual motion capture points onto the relevant section of the body. Once all points were added, the ‘Actor’ animated as per the motion capture points.
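Conceptually, the Marker Set is a table assigning each marker label to the Actor body segment it drives. The fragment below is a hypothetical illustration of that mapping; the labels are conventional gait-analysis names, not the lab’s actual 45-marker naming.

```python
# Hypothetical marker-to-segment assignments (illustrative labels only).
MARKER_MAP = {
    "LASI": "hips",      # left anterior superior iliac spine
    "RASI": "hips",      # right anterior superior iliac spine
    "LKNE": "left_leg",  # left lateral femoral epicondyle
    "LANK": "left_foot", # left lateral malleolus
    "LSHO": "left_arm",  # left acromion
    "LFHD": "head",      # left forehead
}

def markers_for_segment(mapping, segment):
    """Return all marker labels assigned to one body segment, sorted."""
    return sorted(label for label, seg in mapping.items() if seg == segment)

print(markers_for_segment(MARKER_MAP, "hips"))  # → ['LASI', 'RASI']
```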
This project succeeded in developing a method to blend 3D image data and movement data using technologies currently employed in body image research and sports biomechanics. In the first instance, this method will facilitate novel and exciting cross-cultural research into attractiveness judgements, using realistic – and thus ecologically valid – stimuli that can be adapted quickly and simply to suit any population or environment.
The method requires further refinement: we had several issues with accurately mapping the image data onto the movement data. There are also some elements of using a 3D image that remain problematic. For example, we have not yet made soft tissue ‘move’, only the joints and overall body shape, meaning that some movements ‘pull’ the image in a way that looks unnatural.
This was my first project as a PI, and it represented a steep but very valuable learning curve. Learning to manage a team of people and a small pool of resources will stand me in good stead for future challenges, including being more efficient and productive as a project manager and team member. I had a good team for this project, each member bringing their knowledge and expertise into play at the necessary times. Franky Mulloy, co-investigator, was key to the project’s success, contributing both his technological knowledge and project management expertise throughout.