An@tomy2020


Based on the observation that body movements can facilitate learning by enriching memory traces, An@tomy2020 is dedicated to the development of an innovative educational tool for learning functional anatomy.

Embodiment* appears particularly relevant to the learning of functional anatomy: by allowing learners to link their own bodily experiences to the knowledge to be acquired, it should support the construction of the learner's mental representations while reinforcing their visuo-spatial abilities.

An@tomy2020 aims to facilitate this link through a platform that integrates the animation of an anatomically realistic model of the learner driven by motion capture, anatomical knowledge bases linked to these actions, and new interaction modes.
The most recent work in modeling, computer graphics and human-computer interaction will be combined with advances in cognitive science and education to test concrete anatomy-learning scenarios; the project will in turn contribute to the advancement of these fields.

 

* Embodiment (or embodied cognition) is a concept from cognitive psychology which holds that thoughts (cognition), feelings (emotion) and behaviors (body) are grounded in our sensory experiences and body positions. In practice, it is used to reason about aspects of everyday life, such as how we move, speak and develop. (Source: Wikipedia)

 

 

 


Six partners contribute to this project, funded by the Agence Nationale de la Recherche (ANR-16-CE38-0011):
 


 

 
 

An@tomy2020 aims to develop an innovative educational tool for learning functional anatomy. The tool will integrate the most recent work in modeling, computer graphics and human-computer interaction with advances in cognitive science and education, and will make it possible to test anatomy-learning scenarios.

The approach is based on the idea that body movements can facilitate learning by enriching memory traces; this "embodiment" appears particularly relevant to the learning of functional anatomy, since the knowledge to be acquired can be linked to the learner's own bodily experiences.

An@tomy2020 aims to facilitate this link, a pedagogical objective that raises scientific and technical questions. The animation of an anatomically realistic model of the learner driven by motion capture, the use of knowledge linked to these actions, and the implementation of new interaction modes are all components intended to support the construction of the learner's representations by reinforcing their visuo-spatial abilities.

   >>>   ANR project page

 

 

 

The three main research axes


 

•  The first axis concerns the robust capture of the learner's movements and the transfer and animation of an anatomical model superimposed on the learner's body.

A first step of this work consisted in choosing RGB-D cameras capable of capturing the learner's movements. The choice was based on characterization tests of the accuracy and repeatability of commercial devices, carried out on a test bench with a calibration method using an object of known geometry. This work was published at an IEEE conference (ICARCV 2018). A further goal is to combine multiple views from possibly heterogeneous cameras; a method for calibrating these cameras against each other is being evaluated.
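
The calibration method itself is detailed in the ICARCV paper rather than on this page. Purely as an illustration of the kind of computation involved, the sketch below (Python/NumPy, with invented data) estimates the rigid transform between two cameras from corresponding 3D points of a reference object, using the standard SVD-based (Kabsch) registration; the function name and point values are assumptions, not the project's actual pipeline.

    import numpy as np

    def rigid_transform(src, dst):
        """Estimate R, t such that dst ~ R @ src + t (SVD-based Kabsch method).

        src, dst: (N, 3) arrays of corresponding 3D points of the reference
        object, expressed in the frames of two different cameras.
        """
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        H = src_c.T @ dst_c                          # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                           # proper rotation (det = +1)
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Hypothetical example: points of the known calibration object seen by
    # two RGB-D cameras (coordinates in metres).
    pts_cam_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],
                          [0.1, 0.1, 1.0], [0.0, 0.1, 1.2]])
    true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    pts_cam_b = (true_R @ pts_cam_a.T).T + np.array([0.5, -0.2, 0.0])

    # Extrinsics mapping camera B's frame onto camera A's frame.
    R, t = rigid_transform(pts_cam_b, pts_cam_a)
    print(np.allclose((R @ pts_cam_b.T).T + t, pts_cam_a))   # True

In a real multi-camera setup, the correspondences would come from detecting the reference object in each view, and the result would typically be refined, for instance with ICP.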

A second important component of the project consists in animating a complex model of the body (integrating bones, organs, muscles, etc.) from the learner's movements. Anatomy-transfer methods combining graphical animation and biomechanical simulation have been developed; the main challenge is achieving real-time performance. A prototype is already operational.
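
The actual anatomy-transfer and simulation pipeline is not detailed on this page. As a rough illustration of real-time skeleton-driven deformation, the sketch below implements linear blend skinning in Python/NumPy, a standard technique for posing a mesh from captured joint transforms; the vertices, weights and bone transforms are toy data, and a real anatomical model would add biomechanical simulation on top of such a skinning layer.

    import numpy as np

    def skin_vertices(rest_vertices, weights, bone_transforms):
        """Linear blend skinning: deform rest-pose vertices with per-bone
        rigid transforms, blended by per-vertex weights.

        rest_vertices:   (V, 3) rest-pose positions
        weights:         (V, B) skinning weights, each row sums to 1
        bone_transforms: (B, 4, 4) homogeneous transforms (rest -> posed)
        """
        V = rest_vertices.shape[0]
        homo = np.hstack([rest_vertices, np.ones((V, 1))])        # (V, 4)
        # Posed position of every vertex under every bone: (B, V, 4)
        per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
        # Blend the per-bone results with the skinning weights: (V, 3)
        return np.einsum('vb,bvi->vi', weights, per_bone)[:, :3]

    # Toy two-bone "arm": three vertices along the x axis, elbow at x = 1.
    rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])

    upper_arm = np.eye(4)                                 # stays fixed
    forearm = np.eye(4)
    forearm[:3, :3] = np.array([[0.0, -1.0, 0.0],
                                [1.0,  0.0, 0.0],
                                [0.0,  0.0, 1.0]])        # 90 deg rotation about z
    pivot = np.array([1.0, 0.0, 0.0])                     # elbow joint
    forearm[:3, 3] = pivot - forearm[:3, :3] @ pivot      # rotate about the pivot

    posed = skin_vertices(rest, w, np.stack([upper_arm, forearm]))
    print(posed)   # the forearm tip (2, 0, 0) swings up to (1, 1, 0)

In practice this per-frame computation typically runs on the GPU, with the bone transforms coming from the motion-capture stream.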

 

Characterization of the accuracy of a camera using a reference object moved on a high-precision translation table

Anatomical model

 

•  The second axis aims to develop precise and efficient interaction techniques in augmented reality. Evaluations will be conducted to highlight the impact of these interaction modes on memorization.

We are evaluating different forms of innovative human-computer interaction for presenting complex 3D objects (e.g. anatomical structures) to students. We are experimenting with standard display devices such as computer monitors and tablets, as well as with devices that allow a more realistic 3D rendering, such as virtual reality headsets or physical volumes in which we create the illusion of the 3D shape's presence.
Our experiments allow us to demonstrate and quantify the benefit of realistic rendering for learning.

 

Different devices allowing interaction

 

Learner's eye tracking


 

•  The third axis concerns the definition of the pedagogical content and the quantification of An@tomy2020's contribution to improved learning. The platform is being tested with medical and STAPS (sports science) students, laying the foundations for its future integration into the corresponding university curricula.

Within the framework of WP3, the pedagogical objectives of anatomy learning have been identified for STAPS and medical students.
In STAPS, the aim is to know how motor coordination is constructed, how to propose physical activity adapted to different audiences, and how to prevent the main corresponding injuries. At the end of this work, all the anatomical knowledge to be acquired in STAPS was listed (e.g. the insertions and trajectory of a given ligament) and then grouped into blocks of skills specific to functional anatomy (e.g. visualizing the trajectory of the ligament and its reaction during movements) and blocks of professional skills (e.g. technically correcting a movement in order to prevent injury to the ligament in question).
A similar approach was undertaken for medical students.
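
How this knowledge is actually stored in the platform is not specified here. Purely as an illustrative sketch (Python dataclasses, all names hypothetical), a knowledge item such as a ligament could be linked to both a functional-anatomy skill block and a professional skill block, following the grouping described above:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class KnowledgeItem:
        """A unit of anatomical knowledge to be acquired (e.g. a ligament)."""
        name: str
        facts: List[str] = field(default_factory=list)               # insertions, trajectory, ...
        functional_skills: List[str] = field(default_factory=list)   # functional-anatomy skill block
        professional_skills: List[str] = field(default_factory=list) # professional skill block

    # Hypothetical entry, in the spirit of the knee scenario described below.
    acl = KnowledgeItem(
        name="Anterior cruciate ligament",
        facts=["Insertions on the femur and tibia", "Trajectory across the knee joint"],
        functional_skills=["Visualize the ligament's trajectory and its reaction during movement"],
        professional_skills=["Technically correct a movement to prevent injury to the ligament"],
    )
    print(acl.name, "->", acl.professional_skills[0])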

 

In a second step, the pedagogical scenario of the first An@tomy2020 prototype, built around the knee joint complex, was provided to all the partners. This scenario now serves as a roadmap for the development of the first prototype, which includes all the functionalities necessary to learn the functional anatomy of the knee. It supports interaction between the learner and the tool, and the learning of anatomy through movement (the embodiment principle).

Finally, in parallel with this pedagogical work, and within the framework of training courses for STAPS students in Lyon run by one of the partners (LIBM), a first exploratory experimental protocol was conducted in an ecological situation on the effect of functional anatomy exercises (learning through movement) on the acquisition and recall of anatomical knowledge.
 

Preliminary results in favor of embodiment are presented in the figure below:

Preliminary results

 

 

 

 

 

Partners


   

 

Anatoscope specializes in anatomy transfer and real-time animation, with the design of customized, physically simulated anatomical models. It contributes to the modeling of the learner's body and coordinates the integration work.

 


 

Gipsa-Lab, Grenoble Images Parole Signal Automatique laboratory, through its "Speech and Cognition" department, studies behavioral and cognitive processes in communicative interactions. It will evaluate the cognitive processes of embodied learning with the new interaction devices.

 

 

LIBM, Laboratoire Interuniversitaire de Biologie de la Motricité, brings its knowledge of the cognitive processes involved in learning anatomy. It will coordinate the evaluations carried out with the platform.


 

LIG, Laboratoire d'Informatique de Grenoble, through its Human-Computer Interaction Engineering team, has a great deal of experience in the design, development and evaluation of interaction techniques. It will coordinate the tasks related to augmented reality interaction.

 

   

LJK, Laboratoire Jean Kuntzmann, will coordinate the tasks related to the formatting and accessibility of anatomical knowledge and educational content.

 

   
TIMC, Translational Innovation in Medicine and Complexity, a laboratory specializing in health technologies, and more precisely its CAMI team, is the project coordinator. TIMC is particularly interested in the modeling of the learner's body.

 

 

 

Publications


 

 

  • Ortega M and Stuerzlinger W. Pointing at 3D Wiggle Displays. IEEE VR 2018, Reutlingen, Germany.
     <see the article>

 

  • Anxionnat A, Voros S and Troccaz J. Comparative study of RGB-D sensors based on controlled displacements of a ground-truth model. IEEE ICARCV conference, Singapore, Nov. 2018.