Figure: The Weekly Structure of One Flipped Programming Course
Recently we have explored the application of the flipped classroom to teaching computer science, especially in the context of programming courses. The flipped classroom is a teaching method where students first study the theory by themselves as pre-assigned homework and then learn in the classroom by working on exercises. This is the opposite of the traditional “listen in class, then work alone at home” approach, hence the term “flipped”. The approach aims to maximize the usefulness of the time the teacher and the students spend together.
We have conducted a series of case studies with positive results. We first published a short poster paper and then a subsequent full paper. There is also a poster that proposes a process model for applying the method, along with a series of recommendations.
The flipped classroom teaching method, where theory is studied at home and exercises are done in the classroom, is gaining a foothold in teaching. The method has been used with different approaches and guidelines, yet a single unified process has not been described. In this work we compare the existing literature to our experiences in teaching. As our main result we outline a simple process description and guidelines for building a course structure with the flipped classroom. Flipping the classroom has been found to be more efficient than the traditional lecture-exercises model, and our findings support this. We therefore recommend that teachers explore the possibility of utilizing the method.
Figure: Games that were associated with each other in the survey results
The previous two articles, on Gamifying CSCL and Building Fair CS:GO Teams, are an excellent segue to today’s topic. A year back we conducted a survey on how motivated video game players would be to exercise for in-game rewards. Additionally, we performed a systematic mapping study on the current scientific literature covering the topic. We found that the subject has received little research attention, and that players were open to connecting video games and exercise through gamification.
According to our survey, casual video game players were willing to do physical exercise for in-game rewards (the survey mostly concerned Counter-Strike players).
Survey on Player Opinions of Using Physical Exercise as Tool to Earn In-Game Rewards
To gain an understanding of how players would feel about connecting physical activities with video games, we distributed a questionnaire to potential end users. The questionnaire was targeted at Counter-Strike players, and the responses showed that there is an audience for using in-game rewards to motivate users to exercise, especially among the segment that does not currently exercise much. The questionnaire received 47 answers, from 44 males and 3 females. The majority (60%) of respondents were 19-30 years old, 31% were 10-18, and 9% were 31-50 years old. Most respondents (42%) reported playing more than 3 hours per day, 30% play 2-3 hours per day, and 27% play less than 2 hours per day.
A weak inverse correlation was found between interest in doing a physical task to unlock a weapon and the amount of exercise the respondent was already doing (R=-0.31; p=0.04). This means that the respondents who exercise a lot are not willing to do extra tasks for in-game rewards, but conversely, respondents who exercise little could be motivated to exercise more with in-game rewards. People who currently exercise 1 to 7 hours per week would be most motivated by in-game rewards, and people who exercise 8 to 14 hours per week would be least motivated.
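To make the reported statistic concrete, here is how a Pearson correlation coefficient like the R=-0.31 above is computed. The numbers below are made up purely to illustrate the calculation; they are not our survey responses.

```python
# Pearson's r between weekly exercise hours and interest in an unlock task.
# The sample data here is hypothetical, chosen only to show the computation.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours = [0, 1, 2, 4, 8, 14]    # weekly exercise hours (hypothetical)
interest = [5, 5, 4, 4, 2, 1]  # interest on a 1-5 scale (hypothetical)
print(pearson_r(hours, interest))  # strongly negative for this toy data
```

A value near -1 means interest drops as exercise hours rise; our real data showed the same direction of effect, only weaker.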
This study presents a concept for linking individual physical activities and video games together to build an ecosystem that motivates players to exercise. We used a systematic mapping study to establish the current state of the art and a user questionnaire to understand how players feel about earning digital rewards from physical exercise. In addition, we implemented a prototype to demonstrate the applicability of the concept. The results suggest that combining games and physical activity trackers is technologically feasible, and that there is an audience willing to exercise in order to receive rewards in games.
This post deviates a little from the usual science publication fare and concerns applied machine learning techniques and how they can be used in the most unusual places. Let me start by illuminating the motivation. I am part of the organizing body of Snooze Ry., which arranges LAN party events for youth. As you might expect, there are computer game (eSports) competitions. However, one of the issues is that the skill level of the participants varies a lot, and we aim to maximize fun at the competitions. Many of the games played there are team-based, and we try to have the teams evenly matched to make the matches as exciting as possible. Recently we have manually assigned the best players into different teams, but this is not a long-term solution and involves a lot of manual work.
Stand Back, I am Going to Try Science
To address this issue in constructing teams for competitions, a good friend of mine set out to program a fairer tournament ladder constructor. For the first test case we chose a popular first-person team action game, Counter-Strike: Global Offensive. The game is connected to Steam, a popular gaming platform that also records statistics and creates public profiles for players, so we have a wealth of data available, from hours played to player performance per level and weapon of choice. This wealth of data also posed a problem: which statistics should we use to decide player skill? Win/loss ratio? What if the player just had good teammates? Total hours played? What if the player is just a slow learner?
I had the bright idea to bring machine learning into the mix. Why should we decide, when we could let a machine do it? That always ends well. We chose the naive Bayes classifier for this supervised machine learning experiment (i.e., we provide enough labeled samples to the machine and hope that it learns to classify new data from them).
Anyhow, once we were committed, the system was completed pretty fast. We surveyed our friends, asked them to name good and bad players, downloaded their statistics from the Steam API, and used them to show examples of good, mediocre and bad players to the machine learning algorithm. Now when the system builds teams, it performs the following steps:
Gets desired team sizes and player details as input
Gets the Steam username for each new player and downloads stats using the Steam API
Uses the trained classifier to label each player as skilled, mediocre or unskilled
Divides players so that each team has as even a mix of players as possible (the same average skill level for each team)
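The final division step above can be sketched as a greedy assignment: take the players already labeled by the classifier, map the labels to numeric skill scores, and always place the next-strongest player on the team with the lowest running total. This is an illustrative sketch, not our actual implementation; the names and score values are assumptions.

```python
# Hypothetical sketch of the team-balancing step. Players have already been
# labeled by the classifier; we map labels to scores and greedily assign each
# player to the open team with the lowest current skill total, so that team
# averages end up as even as possible.
SKILL = {"skilled": 3, "mediocre": 2, "unskilled": 1}

def build_teams(players, team_size):
    """players: list of (name, label) pairs; returns a list of teams."""
    n_teams = len(players) // team_size
    teams = [[] for _ in range(n_teams)]
    # Strongest players first, so they get spread across teams early.
    ordered = sorted(players, key=lambda p: SKILL[p[1]], reverse=True)
    for name, label in ordered:
        # Among teams that still have room, pick the lowest total skill.
        open_teams = [t for t in teams if len(t) < team_size]
        target = min(open_teams, key=lambda t: sum(SKILL[lbl] for _, lbl in t))
        target.append((name, label))
    return teams

players = [("a", "skilled"), ("b", "skilled"), ("c", "mediocre"),
           ("d", "mediocre"), ("e", "unskilled"), ("f", "unskilled")]
teams = build_teams(players, team_size=3)
print([sum(SKILL[lbl] for _, lbl in t) for t in teams])  # -> [6, 6]
```

With six players split into two teams of three, both teams come out with the same total skill, which is the "same average skill level" property we are after.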
Initial Results: Player Statistics That Correlate With Our Subjective Measure of Skill
In our rather limited dataset, three statistics were assigned the most importance by the Bayes classifier: total matches won on the Safehouse level, win rate, and total shots fired with the Tec-9 weapon. The relationship between these is visualized in the figure below. The more points awarded, the higher the chance that the player is “good”. In our dataset it appears that a player who wins 50% of Safehouse matches, has a 100% win rate, and fires most shots with the Tec-9 is an ideal “good” player. By our panel’s arbitrary definition of good, that is.
Figure: “Skill” Points Awarded by the Classifier for Three Most Important Player Statistics
Naive Bayes is a machine learning method in which an algorithm uses probabilistic analysis to categorize a collection of values (e.g. measurements from some object) into one of several predefined categories. Learn more on Wikipedia or from a Coursera course.
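As a toy illustration of the idea (not our actual model), the following sketch classifies a player as "good" or "bad" from a single numeric feature, assuming each class generates the feature from its own normal distribution. The training numbers are made up.

```python
import math

# Toy Gaussian naive Bayes: classify a player from one feature (win rate),
# assuming each class generates the feature from a normal distribution.
def gaussian(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Training data: win rates of players labeled by a panel (made-up numbers).
samples = {"good": [0.60, 0.65, 0.70], "bad": [0.35, 0.40, 0.45]}

def classify(win_rate):
    scores = {}
    for label, values in samples.items():
        mean = sum(values) / len(values)
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        prior = 0.5  # equal class priors for this toy example
        # Posterior score: prior times likelihood of the observed feature.
        scores[label] = prior * gaussian(win_rate, mean, std)
    return max(scores, key=scores.get)

print(classify(0.62))  # -> good
print(classify(0.38))  # -> bad
```

With more features, naive Bayes simply multiplies one such likelihood per feature, which is the "naive" independence assumption that makes it cheap to train on small datasets like ours.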
Also, Lappeenranta University of Technology is arranging a machine learning course under the course name CT10A7060 Advanced Topics in Software Engineering during the intensive week of May 16th – 20th, 2016. If you are a student at our university, sign up using the course management system WebOodi, or fire off a message to me on Twitter and I’ll connect you with the lecturer.
New! My systematic mapping study on Computer-Supported Collaborative Learning in Software Engineering Education has now been published. It cites and summarizes the results of over a hundred publications in the field of CSCL.
A systematic mapping study (SMS) is a secondary study that aims at the classification and thematic analysis of earlier research. According to Kitchenham and Charters, performing an SMS can be especially suitable when few literature reviews are available on the topic and there is a need to get a general overview of the field of interest. It can also be used to identify gaps in the current state of research.
Computer-supported collaborative learning (CSCL) has been a steady topic of research since the early 1990s, and the trend has continued to this date. The basic benefits of CSCL in the classroom have been established in many fields of education, especially for improving student motivation and critical thinking. In this paper we present a systematic mapping study on the state of research of computer-supported collaborative learning in software engineering education. The mapping study examines published articles from 2003 to 2013 to find out how this field of science has progressed. Ongoing research topics in CSCL in software engineering education concern wider learning communities and the effectiveness of different collaborative approaches. We found that while the research establishes the benefits of CSCL in several different environments, from local to global ones, these approaches are not always detailed and comparative enough to pinpoint which factors enabled their success.
For the first article in my blog, I’ll introduce the NAILS project. It is a collection of cloud-based tools for performing statistical and Social Network Analysis (SNA) on citation data. SNA offers researchers a new way to map large datasets and gain insights from new angles by analyzing the connections between articles. As the number of publications in any given field grows, automatic tools for this sort of analysis are becoming increasingly important before starting research in a new field. NAILS also provides useful data when performing systematic mapping studies of scientific literature.
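To give a flavor of the kind of analysis such tools perform, here is a minimal sketch: treat citations as a directed graph and rank papers by in-degree, i.e. by how often other papers in the set cite them. The paper IDs and citation links are entirely hypothetical; real analyses use richer centrality measures over much larger networks.

```python
# Minimal citation-network illustration with made-up data: rank papers by
# in-degree (how many other papers in the set cite them).
citations = {           # paper -> papers it cites (hypothetical IDs)
    "P1": ["P3"],
    "P2": ["P3", "P4"],
    "P3": ["P4"],
    "P4": [],
    "P5": ["P3", "P1"],
}

in_degree = {paper: 0 for paper in citations}
for refs in citations.values():
    for cited in refs:
        in_degree[cited] += 1

most_cited = max(in_degree, key=in_degree.get)
print(most_cited, in_degree[most_cited])  # -> P3 3
```

Even this crude measure surfaces the most influential paper in the set; the same graph also supports clustering and co-authorship views once the citation data is loaded.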
The basic design and bibliometric principles of the system have been published in a research article:
Antti Knutas, Arash Hajikhani, Juho Salminen, Jouni Ikonen, and Jari Porras. 2015. Cloud-Based Bibliometric Analysis Service for Systematic Mapping Studies. In Proceedings of the 16th International Conference on Computer Systems and Technologies (CompSysTech ‘15). DOI: 10.1145/2812428.2812442