Context The School of Computer Science and Engineering at UNSW offers more than 100 courses. Although some streams are defined to help students choose courses, it can be hard to navigate them and to understand the links between courses (prerequisites, complementary content).
We propose to build a tool for students to navigate the curriculum visually and to recommend courses based on related content. Using the course handbooks and online material, we propose to compute a similarity metric (extracting topics from the text and comparing the lists of topics from different courses).
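One way such a similarity metric could look is sketched below, assuming topics have already been extracted from each course's handbook entry as a set of keywords. The topic sets (and the `topic_similarity` / `recommend` helpers) are illustrative assumptions, not part of an existing implementation; the course codes are real UNSW courses but their topic lists here are made up.

```python
def topic_similarity(topics_a, topics_b):
    """Jaccard similarity between two courses' topic sets (0.0 to 1.0)."""
    a, b = set(topics_a), set(topics_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative topic sets; in the proposed tool these would be
# extracted automatically from the handbook text.
courses = {
    "COMP1511": {"programming", "c", "arrays", "recursion"},
    "COMP2521": {"data structures", "recursion", "graphs", "sorting"},
    "COMP3311": {"databases", "sql", "relational model"},
}

def recommend(course, courses, top_n=2):
    """Rank the other courses by topic similarity to `course`."""
    others = [c for c in courses if c != course]
    ranked = sorted(
        others,
        key=lambda c: topic_similarity(courses[course], courses[c]),
        reverse=True,
    )
    return ranked[:top_n]
```

A set-overlap metric like Jaccard is only one candidate; the same interface would accommodate, e.g., TF-IDF cosine similarity over the full handbook text.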
Context Many children struggle to find their voice in social situations.
They may be shy, suffer from social anxiety, be new to a culture (e.g., migrants), or have a special need involving communication impairments (e.g., autism, speech, or hearing impairments).
Their voices often go unheard as they find it hard to contribute to conversation.
The Find your Voice (FyV, http://wafa.johal.org/project/fyv/) project was initiated to investigate how joke telling could help children to speak up and gain confidence.
Context The field of social human-robot interaction is growing. More and more openly available datasets feature social interactions between humans, as well as between humans and robots. Understanding the transferability of human-human communication to human-robot communication is crucial in building social human-robot interactions. Credits Severin Lemaignan - PinSoRo Dataset In this project, we propose to take a data-driven approach to build predictive models of social interaction for human-human (HH) and human-robot (HR) scenarios.
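To make the data-driven idea concrete, here is a minimal sketch of predicting an interaction label from simple per-frame features. Everything in it is an assumption for illustration: the feature names, the toy numbers, and the nearest-centroid classifier are not taken from the PinSoRo dataset or from any existing pipeline.

```python
import math

def centroid(rows):
    """Mean feature vector of a list of examples."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(examples):
    """examples: {label: [[feature, ...], ...]} -> {label: centroid}."""
    return {label: centroid(rows) for label, rows in examples.items()}

def predict(model, x):
    """Assign x the label of the nearest centroid."""
    return min(model, key=lambda lbl: math.dist(model[lbl], x))

# Toy features: (interpersonal distance in m, fraction of mutual gaze).
examples = {
    "cooperative": [[0.4, 0.8], [0.5, 0.7]],
    "solitary":    [[1.5, 0.1], [1.3, 0.0]],
}
model = train(examples)
```

The same train/predict interface would apply whether the examples come from HH or HR recordings, which is what would let the project compare transferability between the two.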
Context: Research in HRI has been investigating how robot design, and in particular humanlikeness, can influence the interaction. The uncanny valley illustrates, for instance, how robots’ appearance influences emotional responses. In this project, we aim to take this a step further and investigate how robots’ appearance can influence cognitive load. Following a similar protocol to , we will use robot pictures as distractors in a perception task. Past research has shown that distractors with human faces impaired task performance more strongly than other objects.
Context Social robots are foreseen to be encountered in our everyday life, playing assistant or companion roles. Recent studies have shown potential overtrust towards social robots, which can ultimately be harmful to the user, as robots may collect sensitive information.
Behavioural styles propose a way to vary how a robot expresses itself within the same context. Given a gesture, they allow manipulating its keyframes in order to generate variations of this gesture.
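The keyframe manipulation could be sketched as follows. This is a minimal illustration, not the project's actual representation or API: the gesture format (a list of `(timestamp, {joint: angle})` keyframes), the joint names, and the `amplitude`/`speed` parameters are all assumptions.

```python
def vary_gesture(keyframes, amplitude=1.0, speed=1.0):
    """Generate a variation of a gesture by scaling joint amplitudes
    around the starting pose and rescaling keyframe timings."""
    base = keyframes[0][1]  # starting pose, used as amplitude reference
    varied = []
    for t, pose in keyframes:
        new_pose = {
            joint: base[joint] + amplitude * (angle - base[joint])
            for joint, angle in pose.items()
        }
        varied.append((t / speed, new_pose))
    return varied

# Illustrative waving gesture: (time in s, joint angles in rad).
wave = [
    (0.0, {"shoulder_roll": 0.0, "elbow": 0.0}),
    (0.5, {"shoulder_roll": 1.2, "elbow": 0.8}),
    (1.0, {"shoulder_roll": 0.0, "elbow": 0.0}),
]

subtle_wave = vary_gesture(wave, amplitude=0.5, speed=2.0)  # smaller, faster
```

Different (amplitude, speed) pairs would then correspond to different behavioural styles (e.g., timid vs. enthusiastic) applied to the same underlying gesture.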
Context While digital tools are increasingly used in classrooms, teachers’ common practice remains to use photocopied paper documents to share learning exercises with their students and to collect them back. With the Tangible e-Ink Paper (TIP) system, we aim to explore the use of tangible manipulatives interacting with paper sheets as a bridge between digital and paper traces of learning. Featuring an e-Ink display, a paper-based localisation system and a wireless connection, TIPs are envisioned as a versatile tool across various curriculum activities.
Context: Visuo-motor coordination problems can impair children in their academic achievements and in their everyday life. Gross visuo-motor skills, in particular, are required in a range of social and educational activities that contribute to children’s physical and cognitive development, such as playing a musical instrument, ball-based sports or dancing. Children with visuo-motor coordination difficulties are typically diagnosed with developmental coordination disorder or cerebral palsy and need to undergo physical therapy. The therapy sessions are often not engaging for children and are conducted individually.
Context Natural language is an important part of communication, since it offers an intuitive and efficient way of conveying ideas to another individual. Enabling robots to use language efficiently is essential for human-robot collaboration. In this project, we aim to develop an interface between voice assistant engines (the Alexa SDK or Google Home Assistant) and ROS (Robot Operating System). By doing this, we will be able to use the powerful dialogue systems developed for voice assistants in human-robot interaction scenarios.
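The core of such a bridge is a mapping from parsed voice-assistant intents to robot commands. The sketch below shows only that mapping logic; the intent names, slot names, command strings and topic name are all invented for illustration. In a real node, the returned string would be published with rospy, e.g. `rospy.Publisher("/robot_command", String, queue_size=10).publish(intent_to_command(intent))`.

```python
def intent_to_command(intent):
    """Map a parsed voice-assistant intent (name + slots, as a dict)
    to a robot command string to be published on a ROS topic."""
    name = intent.get("name")
    slots = intent.get("slots", {})
    if name == "MoveIntent":
        # Direction comes from a slot filled by the dialogue engine.
        return "move {}".format(slots.get("direction", "forward"))
    if name == "GreetIntent":
        return "say hello"
    return "noop"  # unknown intents are ignored by the robot
```

Keeping the mapping in a plain function like this makes it testable without a running ROS master, with the assistant SDK and rospy wiring confined to thin adapters on either side.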