
Comparing Fun and Performance: A User Study on Children’s Gaming Experiences with Mid-Air Hand Gestures

Saba Fallah and I. Scott MacKenzie

Lassonde School of Engineering
York University, Toronto, ON, Canada
{sabaflh@yorku.ca, mack@yorku.ca}

Abstract. This study investigates the effects of two mid-air hand-gesture input methods on children's performance, fun, and preference, and compares them to a traditional mouse. The evaluation used a card-matching game on a laptop with 18 children aged five to seven. The trial completion time (s), the number of selected cards, and the children's perceptions regarding ease of use, likability, and willingness to play the game again with each input method were recorded and analyzed. Performance with the two mid-air hand-gesture input methods showed no significant difference, as evidenced by trial completion time and the number of selected cards. Similarly, there was no significant difference in fun between the two gesture methods, as indicated by the children's perceptions regarding ease of use, likability, and willingness to play the game again with each input method. In contrast, the mouse, included as a baseline condition rather than as a candidate method, exhibited better performance and more fun. This highlights the importance of considering user familiarity and the usability challenges associated with mid-air gesture-based input methods.

1. INTRODUCTION

This paper investigates children’s performance and experience while playing a game on a laptop using two mid-air hand-gesture input methods and compares them to a mouse. The game consists of matching two pairs of cards by selecting them one at a time.

Children are substantial users of digital devices, with use increasing during the COVID-19 pandemic (Ponti 2023). They use devices for different purposes, such as entertainment, education, and communication. Playing games is one of children’s primary use cases for digital devices. When designed well, games have the potential to offer an engaging and enjoyable experience that supports learning, cognitive growth, skill development, social interactions, physical activities, and healthy behaviours in young children (Lieberman et al. 2009).

Studies have shown that fun has been linked to increased engagement with learning technologies and learning activities (Iten and Petko 2016; Long 2007; Rambli et al. 2013; Vieira and da Silva 2017; Sim et al. 2006), and experiencing fun has been associated with positive effects on learning outcomes (Chan et al. 2019; Elton-Chalcraft and Mills 2015; Long 2007; Lucardie 2014; Rambli et al. 2013). Furthermore, neuroscience provides biochemical evidence supporting the beneficial impacts of fun within the learning environment and on the learning process (Willis 2007).

Various methods have been developed to measure children's fun while interacting with digital devices. One tool in the child-computer interaction (CCI) field is the Fun Toolkit, which consists of three tools – the Smileyometer, the Fun Sorter, and the Again Again Table – developed to measure children's fun and preference for technology (Read 2008). It is designed for children aged four and above. Gestures also play an integral role in human-computer interaction (HCI): many researchers have evaluated their use, and gestures continue to make their way into our daily lives with each new technology released.

Children often use gestures as a natural and intuitive way to interact with devices. Gestural interaction, whether through touch-based gestures on screens or mid-air gestures using motion-sensing technology, is increasingly common in applications designed for children.

This study aims to measure children's fun and performance when using gestures, particularly mid-air gestures, with technology.

2. RELATED WORK

In recent years, the exploration of mid-air hand gestures as input methods for children's interactions with digital devices has garnered significant interest. This section reviews prior work on children's interaction with digital devices via mid-air hand gestures, and on the suitability and impact of gesture-based interactions in CCI. Kauppinen et al. (2013) developed the Kinetic Stories application, utilizing full-body gestures to engage children with storybooks. Despite its promising aspects, the study highlighted usability challenges, such as gesture recognition failures and physical strain from prolonged use.

Similarly, Renzi et al. (2015) focused on enhancing children’s musical notation learning through the Touching Notes game, employing a motion-sensing camera for hand detection. While the study noted initial challenges with gestures, children showed improved performance and preferred gestures over the traditional mouse input.

Contrasting these findings, Liang et al. (2017) designed a gesture-based interactive puppetry system to enhance storytelling. Using a Leap Motion device, children performed single-hand gestures to control virtual puppets. The study found that children improved in performance and reduced errors over multiple trials, indicating that gesture-based interaction can enhance engagement and cognitive skills. However, it also underscored the need for adult supervision and guidance.

Alzubi et al. (2018) investigated the impact of gesture-interactive game-based learning (GIGL) on working memory and mathematical skills. The study found significant enhancements in both areas, suggesting that GIGL can effectively support cognitive development. Nonetheless, issues such as sensor calibration and game responsiveness occasionally disrupted gameplay, requiring adult intervention.

These studies emphasize the potential benefits of gesture-based interaction for children, yet they also reveal consistent usability challenges. Zaina et al. (2019) and Rahman et al. (2013) further explored the comparative aspects of gesture-based and touch-based inputs. Zaina et al. used the self-assessment manikin (Bradley and Lang 1994) to determine whether children had a positive or negative experience using gesture-based and touch-based inputs for a computational thinking game and found that children remained more engaged with gestures. However, touch-based interactions were more familiar, leading to quicker adaptation. Rahman et al. highlighted that the naturalness of gestures significantly influences children’s behaviour and satisfaction. Their study used a Likert scale Smileyometer (Read 2008) to gather feedback, finding that while touch-based gestures were generally satisfactory, mid-air gestures received more polarized responses, reflecting higher enjoyment and frustration due to gesture misinterpretation.

In Rahman et al.'s (2013) study, the impact of the naturalness of gestures on children's feelings and behaviour was observed and measured with the Smileyometer (Read 2008). However, Read and MacFarlane (2006) suggest that the Smileyometer is more useful with older children, as younger children's responses tend to skew toward very happy, leaving the data with little variability.

In light of these findings, this study implements and compares two sets of mid-air gestures on children’s performance and fun – an aspect that has not been sufficiently explored in previous studies. The evaluation uses a game (see section 3) and compares the gestures with traditional mouse input. As suggested by Read and MacFarlane (2006), there is no real point in using a Smileyometer and an Again Again table as both measure the same construct, and the Again Again table seems to have more validity with younger children. This paper uses the Fun Sorter and the Again Again table from the Fun Toolkit, with the Fun Sorter enquiring about the ease of use and the likability of the input methods.

3. THE GAME: ANIMAL MATCH-UP

The game designed for this study – Animal Match-up – presents children with a straightforward yet engaging task. In each game trial, four cards (two pairs) are shown face down to the player, with card positions randomized. See Figure 1a. Each pair contains the same image of a cartoon animal (the images were obtained from iStock¹). The player selects cards one at a time. Upon selection, a card flips to reveal the animal on its opposite side. The player then chooses another card to flip, hoping to match the first. If the images match, the pair is solved and remains face up on the screen. See Figure 1b. If the images do not match, the two selected cards turn face down again to hide the animal images. See Figure 1c. The game continues until both pairs are matched; see Figure 1d.

Figure 1: The Animal Match-up game: (a) The initial state, with two pairs of cards face down. (b) A matched pair remains face up. (c) The selected cards do not match and the game returns to the initial state. (d) The final state, with both pairs matched.

While this game requires concentration and memory skills, it does not overwhelm the children, especially when gestures are used to select the cards. The game was developed on the Educaplay² platform.
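Although the study's game was built on the Educaplay platform rather than coded from scratch, the matching rules are simple enough to express compactly. The following is a minimal, illustrative Python sketch of the Animal Match-up logic; the class and method names are assumptions, not part of the actual game:

```python
import random

class MatchUpGame:
    """Minimal sketch of the Animal Match-up rules (illustrative only)."""

    def __init__(self, animals=("lion", "panda")):
        # Two pairs of cards, shuffled into random positions.
        self.cards = list(animals) * 2
        random.shuffle(self.cards)
        self.face_up = [False] * 4      # which cards are currently revealed
        self.matched = [False] * 4      # which cards are permanently solved
        self.first_pick = None          # index of the first card of a pair attempt
        self.selections = 0             # total selections (the accuracy metric)

    def select(self, i):
        """Flip card i; resolve the pair once two cards are face up."""
        if self.matched[i] or self.face_up[i]:
            return                      # ignore re-selection of a visible card
        self.selections += 1
        self.face_up[i] = True
        if self.first_pick is None:
            self.first_pick = i
        else:
            j, self.first_pick = self.first_pick, None
            if self.cards[i] == self.cards[j]:
                self.matched[i] = self.matched[j] = True   # pair stays face up
            else:
                self.face_up[i] = self.face_up[j] = False  # flip back down

    @property
    def solved(self):
        return all(self.matched)
```

A perfect trial in this sketch ends with `selections == 4`, the baseline used for the accuracy metric in section 5.2.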

The game is played on a laptop with three input methods, two of which use gestures. The gesture methods are one-handed mid-air gestures detected by an Intel RealSense depth camera (D435). When designing the gestures, learnability and memorability were important factors, especially given the age group of the targeted players. It was also important that the gestures were easy to perform. We included two types of gesture-based input methods to study varying numbers of gestures (two and four) and the different levels of cognitive effort each requires. The three input methods are described below.

3.1. Air Mouse

The Air Mouse method uses mid-air hand movement to move the cursor on the screen. With the palm facing the camera, the movement of the hand is tracked and a cursor moves accordingly on the display. With up/down and left/right movements of their dominant hand, the user places the cursor on the card to select. Selection occurs when the player closes their hand (i.e., makes a fist). See Figure 2.

Figure 2: The Air Mouse: An open hand's mid-air movement moves the cursor, and a fist gesture selects the card the cursor is positioned on. (Images from iStock³)

This method relies on moving a cursor with hand movements familiar and intuitive to children. The fist gesture for card selection imitates the motion for grabbing items in natural settings, making the interaction intuitive; that is, easily remembered and learned.
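The paper does not include the detection code, but the Air Mouse behaviour can be approximated with the MediaPipe Hands solution and a system-cursor library such as pyautogui. The sketch below is a plausible reconstruction under stated assumptions: a webcam stands in for the RealSense colour stream, the wrist landmark drives the cursor, and a fist is inferred when every fingertip falls below its middle (PIP) joint in image coordinates.

```python
import cv2
import mediapipe as mp
import pyautogui

mp_hands = mp.solutions.hands
SCREEN_W, SCREEN_H = pyautogui.size()

# Fingertip / PIP landmark indices for the four fingers (MediaPipe numbering).
TIPS, PIPS = (8, 12, 16, 20), (6, 10, 14, 18)

def is_fist(lm):
    """Heuristic: a fist curls every fingertip below its PIP joint
    (image y grows downward), so tip.y > pip.y for all four fingers."""
    return all(lm[t].y > lm[p].y for t, p in zip(TIPS, PIPS))

cap = cv2.VideoCapture(0)  # stand-in for the RealSense colour stream
with mp_hands.Hands(max_num_hands=1) as hands:
    was_fist = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            fist = is_fist(lm)
            if fist and not was_fist:
                pyautogui.click()          # fist = select the card under the cursor
            elif not fist:
                # Open palm: map the wrist position to screen coordinates.
                pyautogui.moveTo(lm[0].x * SCREEN_W, lm[0].y * SCREEN_H)
            was_fist = fist
```

Note that a design of this kind leaves the cursor stationary while the fist is held, which matches the behaviour the children encountered in section 5.3.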

3.2. Finger Numbers

The Finger Numbers method introduces another set of one-handed mid-air gestures. In this method, there is no need to move a cursor onto a card to select it. Instead, the player shows the number one, two, three, or four with their hand to select the first, second, third, or fourth card from the left, respectively. See Figure 3.

Figure 3: The Finger Numbers: With no need to move a cursor, showing the number one, two, three, or four with a hand selects the first, second, third, or fourth card from the left, respectively. (Images from iStock⁴)

This method was intended to introduce less intuitive gestures that rely more on memory, recall, and cognitive effort. The number of gestures also increases from two to four between the Air Mouse and Finger Numbers. The upside is that, unlike with the Air Mouse, no cursor movement is required.
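Again, the detection logic is not published; one common heuristic is to count extended fingers from the same MediaPipe landmarks used in the Air Mouse sketch above. A minimal sketch, with all thresholds and names being assumptions:

```python
# Sketch: map a count of extended fingers (1-4) to a card index.
# `lm` is the same MediaPipe landmark list as in the Air Mouse sketch.
TIPS, PIPS = (8, 12, 16, 20), (6, 10, 14, 18)

def count_extended_fingers(lm):
    """A finger counts as extended when its tip sits above its PIP joint
    (smaller y in image coordinates). The thumb is ignored in this heuristic,
    which would make a raised thumb alone unrecognized as 'one'."""
    return sum(lm[t].y < lm[p].y for t, p in zip(TIPS, PIPS))

def card_for_gesture(lm):
    n = count_extended_fingers(lm)
    return n - 1 if 1 <= n <= 4 else None   # 0-based card index, else no action
```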

3.3. Mouse

The mouse input method is included as a baseline condition to compare the performance and fun of using gestures against a traditional and commonly used input method. In mouse trials, the player uses a conventional mouse to move the cursor on the screen and position it on a card. A left click selects the card, flipping it to show its image.

4. METHODOLOGY

This section outlines the assessment of the performance and fun in a game using the input methods described in section 3.

4.1. Participants

Eighteen children (nine girls and nine boys) aged 5 to 7 were recruited, with their parents' permission, from local public places such as McDonald's and IKEA food courts. The parents signed a consent form before the experiment began, and each child received $20 after finishing the experiment.

4.2. Apparatus

The Animal Match-up game ran on an HP Envy x360 laptop with a screen resolution of 1536 × 864, running Windows 11. An Intel RealSense D435 camera facilitated gesture capture. See Figure 4. A custom Python program leveraging the MediaPipe v0.7 library was employed to detect hand gestures.

Figure 4: The Animal Match-up game running on the laptop with the Intel RealSense camera mounted on it.
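The authors' capture pipeline is not published; the sketch below shows one plausible way to wire a RealSense D435 colour stream into MediaPipe Hands using the pyrealsense2 bindings. The stream parameters are assumptions:

```python
import numpy as np
import pyrealsense2 as rs
import mediapipe as mp

# Stream colour frames from a RealSense camera into MediaPipe Hands.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipeline.start(config)

with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    try:
        while True:
            frames = pipeline.wait_for_frames()
            color = frames.get_color_frame()
            if not color:
                continue
            rgb = np.asanyarray(color.get_data())   # already RGB8
            result = hands.process(rgb)
            if result.multi_hand_landmarks:
                pass  # hand landmarks would feed the gesture classifiers above
    finally:
        pipeline.stop()
```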

4.3. Procedure

This study received ethical approval from York University’s Research Ethics Committee, ensuring compliance with all relevant institutional and regulatory guidelines. Informed consent was obtained from parents or legal guardians, and assent was obtained from the children. Data protection measures were rigorously implemented, including anonymization of participant data and secure storage of all research materials.

The experiment was conducted in local public places such as McDonald's and IKEA food courts. The researcher approached the parents of children in these places and explained the experiment to them. Interested parents then discussed the experiment with their children, and children who showed interest were recruited to participate on the same day and at the same location. Tables in quieter areas of these places served as the experiment area. Participants were tested individually.

The researcher welcomed the children and invited them to sit comfortably in front of a laptop equipped with a RealSense camera. They then explained the game and provided a demonstration. The children were informed that they would use three different input methods during the experiment. Before starting the trials with each input method, the researcher described and demonstrated it to the children. To familiarize themselves, the children completed three practice trials with each new input method. Throughout the experiment, the children were allowed to take breaks whenever needed. Each participant performed five trials using each input method. Figure 5 shows the experiment setting.

Figure 5: A participant playing the Animal Match-up game using three input methods: (a) Mouse, (b) Air Mouse, and (c) Finger Numbers.

To counterbalance the order effects, the participants were divided into six groups of three, each performing the experiment in a unique order.
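With three input methods there are exactly 3! = 6 possible presentation orders, so six groups of three participants cover each order once. A short sketch of such an assignment (the participant labels are illustrative):

```python
from itertools import permutations

methods = ["Mouse", "Air Mouse", "Finger Numbers"]
orders = list(permutations(methods))          # 3! = 6 unique orders

# Assign participants P01..P18 round-robin: three participants per order.
for p in range(18):
    print(f"P{p + 1:02d}: {' -> '.join(orders[p % 6])}")
```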

Each trial began with four cards appearing on the laptop display. See Figure 4. Timing started at the beginning of each trial and stopped when both pairs were matched. The time to complete each trial and the number of cards selected to finish it were recorded. The researcher took notes on participants' expressed emotions (e.g., frustration, excitement) throughout the experiment. After the experiment, the Fun Sorter and the Again Again Table from the Fun Toolkit were used to gather information about the children's fun and preferences. In the Fun Sorter activity, the children were asked to sort the input methods by ease of use and likability. See Figure 6a. This activity involved placing cards with images of the input methods into slots on the table. In the Again Again Table, the children answered yes, no, or maybe to whether they would like to play the game again using each input method. See Figure 6b.

Figure 6: The Fun Toolkit tools used in this study: (a) The ease of use Fun Sorter on the top row and the likability Fun Sorter on the bottom row (filled randomly). (b) The Again Again Table.

4.4. Design

This experiment used a 3 × 5 within-subjects design with the following independent variables and levels:

Input method: Mouse, Air Mouse, Finger Numbers
Trial: 1, 2, 3, 4, 5

The dependent variables were trial completion time (s) and the number of cards selected to finish each trial. Qualitative data were obtained using the Fun Toolkit, as explained in section 4.3.

The total number of trials per participant was 15 (3 input methods × 5 trials) each requiring a minimum of four selections. The total testing time for each participant was approximately 15 minutes.

5. RESULTS AND DISCUSSION

5.1. Trial Completion Time

The grand mean for trial completion time was 20.0 s per trial. By input method, the means were 13.3 s for the mouse, 23.9 s for the Air Mouse, and 22.6 s for Finger Numbers. See Figure 7. An ANOVA showed that the effect of input method on trial completion time was statistically significant (F(2, 34) = 25.02, p < .0001). A Scheffé post hoc test showed a significant difference between the mouse and the other two input methods.

Figure 7: The average trial completion time (s) using the three input methods.

The effect of trial on trial completion time was not statistically significant (F(4, 68) = 2.33, p > .05). Similarly, the Input Method × Trial interaction effect was not statistically significant (F(8, 136) = 1.28, p > .05).
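Analyses of this shape can be reproduced with a standard repeated-measures ANOVA; the sketch below uses statsmodels' AnovaRM on a long-format table of logged trial times. The file name and column names are assumptions, not artefacts of the study, and the same model applies to the card-count analysis in section 5.2:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x method x trial, where 'time'
# holds the logged trial completion time (hypothetical log file).
df = pd.read_csv("trial_times.csv")   # columns: participant, method, trial, time

res = AnovaRM(df, depvar="time", subject="participant",
              within=["method", "trial"]).fit()
print(res)   # F and p for method, trial, and the method x trial interaction
```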

5.2. Number of Selected Cards

The grand mean for the number of selected cards per trial was 5.93. This metric relates to accuracy, with 4 (the number of cards) as the baseline for a perfect trial; the number rises above 4 as cards are incorrectly selected during a trial. By input method, the means were 5.73 for the mouse, 5.84 for the Air Mouse, and 6.22 for Finger Numbers. See Figure 8. Although interaction was most accurate with the mouse, the effect of input method on the number of selected cards was not statistically significant (F(2, 34) = 2.21, p > .05). Similarly, the effect of trial on the number of selected cards was not statistically significant (F(4, 68) = 0.48, ns), nor was the Input Method × Trial interaction effect (F(8, 136) = 0.49, ns).

Figure 8: Number of selected cards per trial by input method. Error bars show ±1 SD.

5.3. Fun and Preference

Figure 9 shows the results from the ease of use Fun Sorter. For the analysis, the results were converted to values 1, 2, and 3, with 1 being the easiest to use and 3 the hardest.

Figure 9: Results from the ease of use Fun Sorter: Number of children who selected each input method as easiest or hardest to use.

The mean, median, mode, and standard deviation (SD) were calculated to analyze the perceived ease of use. For the mouse, the mean score was 1.44, indicating that, on average, it was rated relatively easy to use. The median and mode were 1, suggesting that most children ranked the mouse as the easiest to use (SD = 0.7). For the Finger Numbers, the mean score was 2.22, indicating a perceived difficulty between the mouse and the Air Mouse. The median and mode were 2, showing that the Finger Numbers was most commonly ranked moderately easy to use (SD = 0.73). The Air Mouse had a mean score of 2.33, indicating a somewhat higher perceived difficulty than the other two input methods. The median was 2.5, and the mode was 3, suggesting that the Air Mouse was mainly ranked as the hardest to use (SD = 0.76). A Friedman test showed a significant difference in the ease of use of the input methods (χ² = 8.44, p < .05, df = 34) between the mouse and the mid-air gesture-based input methods.
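The Friedman test on the Fun Sorter ranks can be run directly with SciPy. In the sketch below each list holds the 18 per-child ranks for one input method; the values are placeholders, not the study's data:

```python
from scipy.stats import friedmanchisquare

# Each list: one child's ease-of-use rank per input method
# (1 = easiest, 3 = hardest); placeholder values for illustration.
mouse          = [1, 1, 1, 2, 1, 1, 3, 1, 1, 2, 1, 1, 2, 1, 3, 1, 2, 1]
air_mouse      = [3, 2, 3, 3, 2, 3, 1, 3, 2, 3, 3, 2, 1, 3, 2, 3, 3, 2]
finger_numbers = [2, 3, 2, 1, 3, 2, 2, 2, 3, 1, 2, 3, 3, 2, 1, 2, 1, 3]

stat, p = friedmanchisquare(mouse, air_mouse, finger_numbers)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```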

Figure 10 shows the results from the likability Fun Sorter. As with the ease of use Fun Sorter, the results were converted to values 1, 2, and 3, with 1 being liked the most and 3 liked the least. The mean, median, mode, and SD were calculated to analyze the likability of each input method. For the mouse, the mean score was 1.5, indicating that, on average, it was among the most liked. The median and mode were 1, suggesting that most children ranked the mouse as their most liked input method (SD = 0.7). For the Finger Numbers, the mean score was 2.05, placing its likability between the mouse and the Air Mouse. The median and mode were 2, showing that the Finger Numbers was most commonly ranked second (SD = 0.64).

Figure 10: Results from the likability Fun Sorter: Number of children who selected each input method as their most or least liked input method.

The Air Mouse had a mean score of 2.44, indicating it was liked less than the other two input methods. The median and the mode were 3, suggesting that the Air Mouse was mostly ranked as the least liked (SD = 0.85). A Friedman test showed a significant difference in the likability of the input methods (χ² = 8.11, p < .05, df = 34) between the mouse and the Air Mouse. Among the 18 participants, six had no prior experience with a mouse. This group struggled with clicking the mouse during the practice trials, and their hand position on the mouse had to be corrected. Some used the scroll wheel to click, which was corrected before the experiment trials. Interestingly, while two participants in this group selected the mouse as the easiest input method, none chose it as their most liked. Three chose the Air Mouse and three chose Finger Numbers as their most liked input method. When asked what they liked least, two participants in this group chose the mouse, three chose the Air Mouse, and one chose Finger Numbers.

Overall, seven participants did not select the mouse as their most liked input method; six of them had no previous experience using a mouse. Similarly, six participants, four of whom had no prior mouse experience, did not select the mouse as the easiest to use.

Figure 11 shows the results from the Again Again Table, in which the children answered whether they would like to play the game again using each input method. The mouse had the highest number of yes responses. Three participants, two of whom had no prior experience with the mouse, responded maybe, which could indicate the role of familiarity in the high number of positive responses.

Figure 11: Responses from the Again Again Table to the question, "Would you like to do it again?"

During the Air Mouse trials, children encountered problems with gesture detection. Sometimes the fist gesture that triggered a selection was not detected quickly, and children had to repeat it multiple times. Some children got frustrated and moved their fists vigorously toward the camera to force detection. At times, the children forgot to open their fists after making a selection, which led to unintended repeated selections of the same card. Additionally, the cursor did not move while they maintained the fist gesture, despite their hand movements, and they needed to be reminded to open their fists.

Similarly, there were instances where children remembered to open their fists, but only partially, resulting in a half-fist gesture instead of the fully open hand gesture needed to move the cursor. Consequently, the cursor remained stationary on the screen despite the children moving their hands. To address this issue, they often brought their hand closer to the camera for detection, occasionally getting too close for the camera to detect anything, leading to frustration. Their posture had to be corrected by the researcher, and they had to be reminded to open their fists fully after each selection.

It was also observed that some children moved their bodies and leaned on their sides when moving their hands in front of the camera. Consequently, the Air Mouse encountered more mixed reactions, with seven children responding that they did not want to play the game using it again and two showing uncertainty.

Finger Numbers received more yes responses and fewer no responses than the Air Mouse. However, there was also a notable number of maybe responses (five), suggesting some uncertainty or reservation about this input method. In the Finger Numbers trials, the gestures were sometimes not detected quickly. As in the Air Mouse trials, the children moved their hands toward the camera to force detection and seemed frustrated. Sometimes the gestures were misidentified, and a card the child did not intend to select was selected, leading to confusion. Some children verbalized their feelings in these instances with sentences such as "I didn't wanna do that." Sometimes the gestures were performed incorrectly; for example, children showed their thumb instead of the gesture for selecting the first card, or held up their middle, index, and pinky fingers instead of the gesture for selecting the third card. This showed that the children sometimes relied on counting up fingers rather than the gestures taught to them. In many instances, the children took time to think about the next gesture they wanted to perform.

Most children had to look at their hands while thinking of the next gesture. Some did so by bringing their hand down, outside the camera range; others held their hands in front of the camera while thinking, opening and closing their fingers, which resulted in wrong and unintentional selections. Some children used both hands interchangeably, making one gesture with one hand, bringing it down, and using the other hand for the next gesture.

A Chi-square test revealed no significant difference among the three input methods in the Again Again Table results (χ² = 11.66, p > .05, df = 4).
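This corresponds to a chi-square test of independence on the 3 × 3 table of yes/maybe/no counts. In the sketch below, the Mouse and Air Mouse rows follow the counts reported above (assuming the mouse drew no no responses), while the Finger Numbers row is an illustrative placeholder:

```python
from scipy.stats import chi2_contingency

# Rows: input methods; columns: yes / maybe / no counts from the
# Again Again Table (18 children per row).
table = [
    [15, 3, 0],   # Mouse: three 'maybe' as reported; remainder assumed 'yes'
    [ 9, 2, 7],   # Air Mouse: seven 'no', two 'maybe' (as reported)
    [10, 5, 3],   # Finger Numbers: five 'maybe' reported; split illustrative
]

stat, p, df, expected = chi2_contingency(table)
print(f"chi2 = {stat:.2f}, df = {df}, p = {p:.3f}")
```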

6. CONCLUSION

The mouse had a significantly lower trial completion time than the gesture-based input methods. Finger Numbers had a 5.75% lower trial completion time than the Air Mouse; it is important to note that Finger Numbers used four gestures while the Air Mouse relied on only two. Finger Numbers also had the highest number of cards selected to finish the trials, reflecting the higher occurrence of unintended or misclassified gestures with this input method. The mouse was perceived as the easiest and the most liked input method, and these differences were statistically significant. Finger Numbers was slightly more liked (by 19.02%) than the Air Mouse and was perceived as almost as easy to use.

However, it is notable that all participants without prior mouse experience chose one of the two mid-air gesture-based input methods as their most liked. The mid-air gesture-based input methods suffered from gesture detection delays, which sometimes frustrated and confused the children. Learning the gestures was also more challenging for participants than using a mouse, even for those with no prior mouse experience.

Nevertheless, the analysis of willingness to use the input methods again showed no significant difference among the three input methods, suggesting that, despite the challenges, participants were generally open to retrying the mid-air gesture-based input methods. Overall, while the mouse demonstrated the best performance, fun, and user preference, the study highlights the importance of considering user familiarity and the usability challenges associated with mid-air gesture-based input methods, like the Air Mouse and Finger Numbers, among younger users.

Further refinements in gesture detection and more extended user training may be necessary to improve the usability and acceptance of these alternative input methods in interactive systems designed for children.

REFERENCES

Alzubi, T., R. Fernández, J. Flores, M. Duran, and J. M. Cotos (2018). Improving the working memory during early childhood education through the use of an interactive gesture game-based learning approach. IEEE Access 6, 53998–54009.

Bradley, M. M. and P. J. Lang (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry 25(1), 49–59.

Chan, S. C., C. J. Wan, and S. Ko (2019). Interactivity, active collaborative learning, and learning performance: The moderating role of perceived fun by using personal response systems. The International Journal of Management Education 17 (1), 94–102.

Elton-Chalcraft, S. and K. Mills (2015). Measuring challenge, fun and sterility on a ‘phunometre’ scale: Evaluating creative teaching and learning with children and their student teachers in the primary school. Education 3-13 43(5), 482–497.

Iten, N. and D. Petko (2016). Learning with serious games: Is fun playing the game a predictor of learning success? British Journal of Educational Technology 47 (1), 151–163.

Kauppinen, S., S. Luojus, J. Tuomisto, and A. Ahlgren (2013). Utilizing gesture recognition technology in children’s interactive storybook. In Proceedings of International Conference on Making Sense of Converging Media – AcademicMindTrek ’13, New York, pp. 76–79. ACM.

Liang, H., J. Chang, I. K. Kazmi, J. J. Zhang, and P. Jiao (2017). Hand gesture-based interactive puppetry system to assist storytelling for children. The Visual Computer 33, 517–531.

Lieberman, D. A., M. C. Fisk, and E. Biely (2009). Digital games for young children ages three to six: From research to design. Computers in the Schools 26(4), 299–313.

Long, J. (2007). Just for fun: Using programming games in software programming training and education. Journal of Information Technology Education: Research 6(1), 279–290.

Lucardie, D. (2014). The impact of fun and enjoyment on adult’s learning. Procedia-Social and Behavioral Sciences 142, 439–446.

Ponti, M. (2023). Screen time and preschool children: Promoting health and development in a digital world. Paediatrics & Child Health 28(3), 184–192.

Rahman, M. S. A., N. M. Ali, and M. Mohd (2013). A study on the naturalness of gesture-based interaction for children. In Proceedings of the Third International Visual Informatics Conference - IVIC ’13 (LNCS 8237), Cham, pp. 718–728. Springer.

Rambli, D. R. A., W. Matcha, and S. Sulaiman (2013). Fun learning with AR alphabet book for preschool children. Procedia Computer Science 25, 211–219.

Read, J. C. (2008). Validating the Fun Toolkit: An instrument for measuring children's opinions of technology. Cognition, Technology & Work 10, 119–128.

Read, J. C. and S. MacFarlane (2006). Using the Fun Toolkit and other survey methods to gather opinions in child computer interaction. In Proceedings of the 2006 Conference on Interaction Design and Children - IDC ’06, New York, pp. 81–88. ACM.

Renzi, M., S. Vassos, T. Catarci, and S. Kimani (2015). Touching notes: A gesture-based game for teaching music to children. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction – TEI ’15, New York, pp. 603–606. ACM.

Sim, G., S. MacFarlane, and J. Read (2006). All work and no play: Measuring fun, usability, and learning in software for children. Computers & Education 46(3), 235–248.

Vieira, L. C. and F. S. C. da Silva (2017). Assessment of fun in interactive systems: A survey. Cognitive Systems Research 41, 130–143.

Willis, J. (2007). The neuroscience of joyful education. Educational Leadership 64(9), 1–5.

Zaina, L., E. Castro, S. Martinelli, and T. Sakata (2019). Educational games and the new forms of interactions: Exploring the use of hand gestures in a computational thinking game. Smart Learning Environments 6, 1–17.

-----
Footnotes:
¹ iStock.com/Turkan Rahimli
² https://www.educaplay.com/
³ iStock.com/spukkato
⁴ iStock.com/3283197d.273