My research interests lie at the intersection of human-robot interaction, artificial intelligence, education, and the social sciences. Specifically, my research focuses on improving human-robot interaction through machine learning to make robots more acceptable and usable in educational contexts. The distinguishing themes of my research in human-robot interaction (HRI) are: (1) developing new areas of interaction in educational contexts, (2) reasoning and machine learning for HRI, (3) designing haptic-enabled tangible swarm interactions for learners, and (4) non-verbal interactions in HRI.

Human-robot Interaction in Education

  • Funding: NCCR Robotics
  • Main Collaborators: P. Dillenbourg (EPFL), F. Mondada (EPFL)

With the growing number of engineering jobs, policy-makers have recently turned towards introducing engineering subjects at early stages of the curriculum. More and more countries have started to introduce programming (and even robotics) to young children. However, this constitutes a real challenge for teaching professionals, who are not trained for it and are often skeptical about using new technologies in the classroom. Hence, the challenge is to introduce robots as tools to be used not only in programming or robotics courses, but across the whole curriculum, in order to make them attractive to teachers as a pedagogical tool [1].

“Educational robotics” does not really constitute a research community per se. On the one hand, there are scholars working on “how to teach robotics” using robotic platforms such as Thymio and Mindstorms. Some scholars perform research in this domain (for example, measuring which learning activities produce faster learning), but they are few and typically meet in half-day workshops preceding educational technology conferences (CSCL, AI&Ed, etc.). On the other hand, one finds scholars who mainly do research on HRI and consider education an interesting setting for testing child-robot interactions.

In 2016, I launched a series of workshops to build a Robots for Learning (R4L) research community. The first event was a workshop held in conjunction with the RO-MAN 2016 conference, gathering 30 participants from all around the world. The second was a workshop alongside the HRI 2017 conference in Vienna, Austria, with 60 participants and 10 presentations [2]. This fall, we hosted the first stand-alone event, in Switzerland, to which we invited the main actors of research in HRI for learning. A new workshop is planned for HRI 2018. These workshops aim to bring together scientists from the field of robotics and those from digital education and learning technologies.

I have been an active member of my research community (see my CV for a detailed view of my academic service). My most significant duties include: lead guest editor for a Special Issue on Robots for Learning in the International Journal of Social Robotics (Springer), and reviewer for several international journals in robotics and educational technology (IEEE Robotics and Automation Magazine, IEEE Transactions on Learning Technologies, ...). I have served on the international program committees of the Human-Robot Interaction Conference (HRI 2018) and the International Conference on Social Robotics (ICSR 2017), and as session chair for the RO-MAN 2016 conference; these are widely regarded as the three primary conferences in human-robot interaction research.

The following sections provide further details on my research on improving robots for education and daily life.

1 Reasoning and AI

  • Funding: NCCR Robotics, ANR MoCA
  • Main Collaborators: S. Pesty (UGA), C. Adam (UGA), D. Pellier (UGA), H. Fiorino (UGA), A. Jacq (EPFL), T. Asselborn (EPFL), K. Zolna(Jagiellonian University), P. Dillenbourg (EPFL)

One of my interests is to make robots able to autonomously sustain interactions with users; to do so, they have to be able to reason about the users and their environment. During my PhD, I worked on a cognitive architecture able to reason about emotion, named CAIO (Cognitive and Affective Interaction-Oriented) architecture [3, 4]. This architecture, based on symbolic reasoning, showed promising results in modeling cognitive processes and specifically in allowing decision making based on emotions. As shown in the figure, the architecture works as a two-loop process, similar to dual-process theory: a deliberative loop generating intentions and a sensorimotor loop handling reflexes.
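To make the two-loop idea concrete, the following minimal sketch shows how a fast sensorimotor loop can take priority over a slower deliberative loop. It is purely illustrative: the class, the percept keys, and the appraisal rule are hypothetical and do not correspond to the actual CAIO implementation.

```python
# Illustrative sketch of a two-loop (dual-process) control scheme.
# All names and rules are hypothetical, not the actual CAIO code.

class TwoLoopAgent:
    def __init__(self):
        self.beliefs = {}      # symbolic beliefs about the user and the environment
        self.intentions = []   # intentions produced by the deliberative loop

    def sensorimotor_loop(self, percept):
        """Fast loop: map salient stimuli directly to reflex behaviors."""
        if percept.get("sudden_noise"):
            return "startle_expression"          # reflex, no deliberation
        return None

    def deliberative_loop(self, percept):
        """Slow loop: update beliefs, appraise the situation, form intentions."""
        self.beliefs.update(percept)
        if percept.get("user_smiling"):          # toy emotional appraisal
            self.intentions.append("share_enthusiasm")
        return self.intentions.pop(0) if self.intentions else "idle"

    def step(self, percept):
        # Reflexes take priority; otherwise act on deliberated intentions.
        return self.sensorimotor_loop(percept) or self.deliberative_loop(percept)

agent = TwoLoopAgent()
print(agent.step({"user_smiling": True}))   # -> 'share_enthusiasm'
print(agent.step({"sudden_noise": True}))   # -> 'startle_expression'
```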

Research Perspectives: More recently, we have been working on second-order reasoning in the context of the CoWriter project [5]. In CoWriter, the child teaches a Nao robot how to write; we use the learning-by-teaching paradigm to enhance motivation and engagement. In a collaborative learning task between a robot and a child, the idea is to model both the child’s understanding and the child’s beliefs about the understanding of the co-learner robot. This way, the robot could detect misunderstandings in order to correct them, or even create misunderstandings to enhance learning (by fostering questioning).
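As a toy illustration of this mutual-modelling idea, the sketch below contrasts the robot’s actual state of understanding with the child’s (assumed) beliefs about that understanding; any mismatch is a candidate misunderstanding the robot could act on. The dictionaries and keys are hypothetical, not the CoWriter code.

```python
# Hypothetical mutual-modelling sketch (not the CoWriter implementation).

# What the robot has actually "understood" (e.g. letter shapes it writes well).
robot_actual_understanding = {"letter_a": True, "letter_g": False}

# Second-order model: what the robot believes the child thinks the robot understood.
child_belief_about_robot = {"letter_a": True, "letter_g": True}

def candidate_misunderstandings(actual, believed):
    """Items where the child's view of the robot diverges from the robot's state."""
    return [k for k in actual if actual[k] != believed.get(k)]

print(candidate_misunderstandings(robot_actual_understanding, child_belief_about_robot))
# -> ['letter_g']: the robot could make this gap explicit to foster questioning.
```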

Since my arrival on the CoWriter project, we have initiated a project on the diagnosis of dysgraphia using data collected via a graphics tablet (Wacom). Our first results using recurrent neural networks (RNNs) are very promising (a patent and a journal paper have been submitted). This work will later be integrated into the CoWriter handwriting activities to adapt the learning path according to the diagnosis and the learner’s handwriting difficulties.
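The sketch below shows, in broad strokes, how pen-trace sequences from a tablet can be fed to a recurrent classifier. It is a minimal illustration only: the feature set, sequence length, architecture, and (random placeholder) data are assumptions and do not reflect the submitted work.

```python
# Minimal recurrent-classifier sketch over pen-trace sequences (illustrative only).
import numpy as np
import tensorflow as tf

# Each sample: a sequence of pen samples (x, y, pressure, tilt_x, tilt_y)
# captured by the graphics tablet, padded/truncated to a fixed length.
seq_len, n_features = 200, 5
X = np.random.rand(64, seq_len, n_features).astype("float32")  # placeholder data
y = np.random.randint(0, 2, size=(64,))                        # 1 = handwriting difficulty

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(seq_len, n_features)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2)
```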

2 Non-verbal Interaction

2.1 Tangible Swarm Robots

  • Funding: NCCR Robotics
  • Main Collaborators: A. Ozgur, F. Mondada and P. Dillenbourg (EPFL)

With the Cellulo project [6], part of the Transversal Educational Activities of the NCCR Robotics, we introduced a new robotic platform that is small and easily deployable. A dotted pattern printed on regular paper provides the Cellulo robots with absolute localization at a precision of 270 microns [7]. The robots also have a new locomotion system whose drive relies on a permanent magnet to actuate coated metal balls [8]. This drive design allows backdrivability, i.e. the robot can move and be moved without being damaged. With this system, we also implemented a haptic feedback modality, allowing the user to feel forces when grasping the robot [9].

The robots are connected via Bluetooth to a master (PC or tablet) that handles the logic and computation of the activity. The robots’ onboard PCB only handles localization (image capture and decoding of the pattern) and the actuation of the three wheels.

Over two years, we developed several learning activities using the robots. The figure, for example, shows the Feel the Wind activity, in which learners were taught that wind is formed by air moving from high- to low-pressure points.

2.2 Haptics for Learners

  • Funding: NCCR Robotics
  • Main Collaborators: A. Guneysu, A. Ozgur, F. Mondada and P. Dillenbourg (EPFL), Christophe Jouffrai (Toulouse University)

In the Cellulo project, we also started to explore the use of haptic feedback for learners. Haptic feedback enables us to render not only forces, but also borders, angles, or points. We developed a series of haptic capabilities and small interaction tasks that can be included in learning activities to inform the learner [9]. We tested the haptic feedback with children, for instance in the symmetry activity, in which the child can formulate hypotheses on the placement of the symmetrical shape and verify these claims by feeling the shape haptically on the paper (left figure). We also ran pilots with visually impaired children, who were able to explore a map of their classroom using the Cellulo robots.
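To give an idea of how a haptic border can be rendered, the sketch below commands a spring-like repulsive force once the robot’s position crosses a virtual wall on the paper. The stiffness value, border position, and function names are hypothetical; this is not the Cellulo firmware or API.

```python
# Hypothetical sketch: rendering a virtual border as a spring-like force.
STIFFNESS = 4.0          # force per mm of penetration (illustrative value)
BORDER_X = 120.0         # vertical border at x = 120 mm on the paper

def border_force(position_mm):
    """Return the (fx, fy) force to command when the robot crosses the border."""
    x, y = position_mm
    penetration = x - BORDER_X
    if penetration > 0:                          # the user pushed past the border
        return (-STIFFNESS * penetration, 0.0)   # push back along -x
    return (0.0, 0.0)                            # free space: no force

print(border_force((123.5, 40.0)))   # -> (-14.0, 0.0): the user feels a wall
```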

Research Perspectives for Tangible Swarm Interaction and Haptics for Learners: We are now exploring the dynamics of groups of learners manipulating the robots. Collaboration among learners is not always optimal, and a challenge is to use the swarm robots to analyze and regulate this collaboration. As these shared resources are intelligent agents, they could rearrange themselves according to the collaborative state of the group.

2.3 Perceiving intentions and gestures in HRI

  • Funding: French ANR MoCA, INRIA-PRAMAD
  • Main Collaborators: T. Asselborn (EPFL), A. Jacq (EPFL), K. Sharma (EPFL), D. Vaufreydaz (INRIA-Grenoble)

In the field of human-robot interaction (HRI), as in many other technical fields, an innumerable number of different metrics for analyzing the interaction can be found across studies. Even though many of these metrics are not comparable between studies, we observe that the research community in HRI, as in many other research domains, is starting to seek reproducibility [10]; a consensus is beginning to emerge concerning common measures that can be used across a wide range of studies. In social HRI, evaluating the quality of an interaction is complex because it is highly task-dependent. However, certain metrics, such as engagement, seem to reflect well the quality of interactions between a robotic agent and the human.

One aspect of the acceptability of a robot in the home environment is its ability to perceive when it will be solicited. The goal for the robot is not to disturb the user while still being able to predict that it is about to be solicited. This is something we do all the time as humans (we can see a street vendor approaching us and know they will talk to us). For humans, this process relies on proxemics (speed and angle of approach), but not only on that.

In my work on modeling engagement [11, 12], I used multi-modal data to train an SVM to detect engagement. We collected data from various sensors embedded in the Kompai robot and reduced the number of crucial features from 32 to 7. Interestingly, shoulder orientation and the position of the face in the image are among these crucial features. If we transpose these features to what humans do, they appear coherent with behavioral analysis.
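The sketch below illustrates the general pipeline of scaling multimodal features, reducing them to a small subset, and training an SVM. It is an assumption-laden illustration: the placeholder data, the recursive feature elimination step, and the kernel choices are not the exact procedure used with the Kompai data.

```python
# Illustrative feature-reduction + SVM pipeline (not the exact Kompai setup).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X = np.random.rand(500, 32)              # placeholder: 32 multimodal features
y = np.random.randint(0, 2, size=500)    # 1 = user about to engage

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", RFE(SVC(kernel="linear"), n_features_to_select=7)),  # keep 7 features
    ("clf", SVC(kernel="rbf")),
])
pipeline.fit(X, y)
print("selected feature mask:", pipeline.named_steps["select"].support_)
```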

The previous model aimed to predict engagement, but once the user is engaged, it is important to evaluate their attitude and mood. Using the CoST and HAART datasets, we trained a model to detect social touch. Together with colleagues, we won the Social Touch Challenge at ICMI 2015, improving gesture recognition accuracy from 60% to 70% by training a Random Forest model [13].
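A minimal sketch of such a touch-gesture classifier is given below. The feature dimensionality, the number of gesture classes, and the (random placeholder) data are assumptions; the challenge submission used features extracted from the CoST/HAART pressure-sensor recordings rather than the toy arrays shown here.

```python
# Illustrative Random Forest gesture classifier (placeholder data, not the challenge code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder: each sample is a fixed-length feature vector summarizing a touch
# sequence (e.g. mean/max pressure, contact area, duration).
X = np.random.rand(800, 54)
y = np.random.randint(0, 14, size=800)   # e.g. 14 gesture classes (pat, stroke, ...)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f" % scores.mean())
```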

2.4 Non-verbal Rendering

  • Funding: NCCR Robotics, French ANR MoCA
  • Main Collaborators: T. Asselborn (EPFL), S. Pesty (UGA), G. Calvary (UGA) and P. Dillenbourg (EPFL)

During my PhD, I developed a model of so-called behavioral styles. These styles act as a filter over communicative gestures to transpose a way of performing an action. We used multiple platforms (Nao, a humanoid, and Reeti, an expressive face) to test this rendering on facial and bodily communication [14]. We showed that these styles were perceptible and could influence the attitude of the child interacting with the robot [15, 16].

More recently, we showed that idle movements (movements that have no communicative intention), when displayed by a humanoid robot, increase the anthropomorphic perception of the robot by the user [17].

These findings help in designing more natural interactions with humanoid robots, making them more acceptable and socially intelligent.

Research Perspectives: We will continue research in this area within the ANIMATAS EU project (starting in January 2018), working on synchrony and on how alignment can keep learners engaged in a collaborative task with a robot.

3 External Engagement

I seek end-user applications of my research (I am involved in the creation of two start-ups with PhD students I supervised during my two years of postdoc), and I strongly believe that robots will be useful to society in many contexts: elderly care, education, etc. In the coming years, my goal is to work on the design of new robots.

I am also closely collaborating with the Mobsya company in Switzerland and with SoftBank Robotics in Paris (formerly Aldebaran Robotics) on various research projects (I involved them in the ANIMATAS EU project that will start in January 2018).


[1]    W. Johal and P. Dillenbourg, "What are robots doing in schools?" [Online]. Available:

[2]    W. Johal, P. Vogt, J. Kennedy, M. de Haas, A. Paiva, G. Castellano, S. Okita, F. Tanaka, T. Belpaeme, and P. Dillenbourg, "Workshop on robots for learning: R4L," in Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ser. HRI ’17. New York, NY, USA: ACM, 2017, pp. 423–424. [Online]. Available:

[3]    W. Johal, D. Pellier, C. Adam, H. Fiorino, and S. Pesty, “A cognitive and affective architecture for social human-robot interaction,” in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, ser. HRI’15 Extended Abstracts. New York, NY, USA: ACM, 2015, pp. 71–72. [Online]. Available:

[4]    C. Adam, W. Johal, D. Pellier, H. Fiorino, and S. Pesty, Social Human-Robot Interaction: A New Cognitive and Affective Interaction-Oriented Architecture. Cham: Springer International Publishing, 2016, pp. 253–263. [Online]. Available:

[5]    A. Jacq, W. Johal, P. Dillenbourg, and A. Paiva, “Cognitive architecture for mutual modelling,” arXiv preprint arXiv:1602.06703, 2016.

[6]    A. Ozgur, S. Lemaignan, W. Johal, M. Beltran, M. Briod, L. Pereyre, F. Mondada, and P. Dillenbourg, “Cellulo: Versatile handheld robots for education,” in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ser. HRI ’17. New York, NY, USA: ACM, 2017, pp. 119–127. [Online]. Available:

[7]    L. Hostettler, A. Ozgur, S. Lemaignan, P. Dillenbourg, and F. Mondada, “Real-time high-accuracy 2d localization with structured patterns.” IEEE, May 2016, pp. 4536–4543. [Online]. Available:

[8]    A. Ozgur, W. Johal, and P. Dillenbourg, “Permanent magnet-assisted omnidirectional ball drive.” IEEE, Oct. 2016, pp. 1061–1066. [Online]. Available:

[9]    A. Ozgur, W. Johal, F. Mondada, and P. Dillenbourg, “Haptic-Enabled Handheld Mobile Robots: Design and Analysis.” ACM Press, 2017, pp. 2449–2461. [Online]. Available:

[10]    D. Chrysostomou, P. Barattini, J. Kildal, Y. Wang, J. Fo, K. Dautenhahn, F. Ferland, A. Tapus, and G. S. Virk, “Rehri’17 – towards reproducible hri experiments: Scientific endeavors, benchmarking and standardization,” in Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ser. HRI ’17. New York, NY, USA: ACM, 2017, pp. 421–422. [Online]. Available:

[11]    W. Benkaouar (Johal) and D. Vaufreydaz, “Multi-Sensors Engagement Detection with a Robot Companion in a Home Environment,” in Workshop on Assistance and Service robotics in a human environment at IROS2012, Vilamoura, Algarve, Portugal, Oct. 2012, pp. 45–52. [Online]. Available:

[12]    D. Vaufreydaz, W. Johal, and C. Combe, “Starting engagement detection towards a companion robot using multimodal features,” Robotics and Autonomous Systems, vol. 75, pp. 4–16, 2016.

[13]    V.-C. Ta, W. Johal, M. Portaz, E. Castelli, and D. Vaufreydaz, "The Grenoble system for the social touch challenge at ICMI 2015," in Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ser. ICMI ’15. New York, NY, USA: ACM, 2015, pp. 391–398. [Online]. Available:

[14]    W. Johal, S. Pesty, and G. Calvary, “Towards companion robots behaving with style,” in Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on. IEEE, 2014, pp. 1063–1068.

[15]    W. Johal, G. Calvary, and S. Pesty, Non-verbal Signals in HRI: Interference in Human Perception. Cham: Springer International Publishing, 2015, pp. 275–284. [Online]. Available:

[16]    W. Johal, “Companion robots behaving with style, towards plasticity in social human-robot interaction.” [Online]. Available: CompanionRobotsBehavingwithStyle

[17]    T. L. C. Asselborn, W. Johal, and P. Dillenbourg, “Keep on moving! exploring anthropomorphic effects of motion during idle moments,” in 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), no. EPFL-CONF-231097, 2017.
