Face the Score
Design and Idea
As VR technology has developed rapidly over the past decade, the fanciful scenes described in science fiction novels and movies have gradually become possible. However, there is still a problem with existing VR headsets: the inconsistency between the current input systems and the headset display causes motion sickness and breaks the immersive experience. Our initial idea was to explore a way to improve the current input system. We wanted to integrate a BCI system with the VR headset; the final goal was to allow people to interact with the VR scene using only their minds. The presentation below shows what we originally planned to do.
During the semester, we gradually found that our first idea was beyond the scope of the project, and we could not find a suitable device to build the integrated system. Our usability lab only provides a limited range of EEG capture devices, and most of them are very hard to set up and cannot export real-time data. Besides, devices like OpenBCI must be worn on the head (Figure 1), leaving no room for the VR headset. We needed an integrated device, so we tried Looxid, a VR cardboard that builds in an EEG signal capture device. However, there is no API available to transfer the captured data to a mobile device in real time, and the latency of processing the EEG data and using it as input would also be a difficult problem. As a result, we gave up our initial idea after these attempts.
Figure 1. Setting up the OpenBCI
Our second idea was to use an eye-tracking system to assist people in interacting with the VR world, for instance by helping them move the view or select targets in a game. However, the problem remained: no currently available interface integrates the two systems. We did try the Varjo VR headset, which has both a VR display and an eye-tracking system, but the captured data cannot be exported in real time and can only be accessed through iMotions.
Finally, we switched to making a game driven by facial expressions. We chose facial expressions mainly because the only device required to detect them is a camera, so we did not need to spend much time setting up hardware or figuring out how to build an API for it. We decided to build the game in Unity and downloaded the Affdex SDK (https://www.affectiva.com/product/emotion-sdk/) as our facial-expression detection plugin. The following slides illustrate our final idea.
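As a rough illustration of how the plugin fits in, the Affdex Unity SDK reports expression and emotion scores through a results listener attached to its Detector. Below is a minimal sketch of the kind of listener our game builds on; the class and method names follow the Affdex Unity plugin, but the exact field names and signatures should be treated as assumptions and checked against the SDK version you import.

```csharp
using System.Collections.Generic;
using Affdex;   // namespace from the Affectiva Unity plugin

// Receives per-frame results from the Affdex Detector and exposes the
// scores the game reads. The Expressions/Emotions field names below are
// assumptions; verify them against the imported SDK version.
public class ExpressionListener : ImageResultsListener
{
    // Affdex reports 0-100 confidence scores for each channel.
    public float mouthOpen, eyeClosure, noseWrinkle; // one per instrument track
    public float joy, sadness;                       // used later for tempo control

    public override void onFaceFound(float timestamp, int faceId) { }

    public override void onFaceLost(float timestamp, int faceId)
    {
        // No face in frame: clear all scores so the tracks fade out.
        mouthOpen = eyeClosure = noseWrinkle = joy = sadness = 0f;
    }

    public override void onImageResults(Dictionary<int, Face> faces)
    {
        foreach (Face face in faces.Values) // we expect a single player
        {
            mouthOpen   = face.Expressions.MouthOpen;
            eyeClosure  = face.Expressions.EyeClosure;
            noseWrinkle = face.Expressions.NoseWrinkle;
            joy         = face.Emotions.Joy;
            sadness     = face.Emotions.Sadness;
        }
    }
}
```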
Motivation
We decided to make a music game because, among the technologies we learned this semester, only a few could realistically be used to build a game. We chose facial detection because it requires only a camera rather than cumbersome devices, which means we can target more players. We then looked into existing games that use facial detection, both PC and mobile, and found no music game built on this technology. So we decided to build a game in which players use different facial expressions to play a classic music game.
Design Progress
The inspiration for the first version of our game came from the song “Billie Jean”. We made the score in Muse, experimented with many instrument settings to reconstruct the original song, and then exported three tracks: bass, drums, and organ. In this version, players could not see themselves on the screen, and there was no gameplay loop, so after showing three faces there was nothing left to do. Based on that, we decided to turn this version into a tutorial level and add playable elements to the game.
Figure 2. The first version of the game
Figure 3. The first score made using Muse
When discussing how to continue the project, we all agreed that we needed a song with the following properties: 1. first of all, it is tuneful; 2. the difference between the tracks is easy to hear; 3. the score is obtainable. At first we tried to use Cool Edit to merge the tracks of certain songs, but we found that they did not blend well. Then we tried to use Muse to add tracks to a song, which was also hard to do and required too much work. Finally, we found scores that could be downloaded and split into separate tracks.
Figure 4. The score downloaded for the second version
We also had many ideas about the game mechanics. To keep players concentrating on the game, it needs to make continuous demands of them. First, we planned to show faces on a timeline as hints for players to make certain expressions, but that approach needs direct feedback at precise moments so players know whether they succeeded, and the music we found was not suitable for it, so we gave up the idea. Then we decided to use a progress bar as visual feedback for each facial expression, as sketched below. Each facial expression represents one instrument, and the different tracks of the music are added as players show more expressions. When players make an expression, they hear the music change and see the progress bar change. The progress bar automatically drains, and when it reaches the bottom the corresponding track stops. To keep the music playing, players must keep making facial expressions.
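The following Unity C# sketch shows one way to implement this drain-and-refill loop. The detection score comes from the listener sketched earlier; the threshold, fill rate, and drain rate are illustrative assumptions, not the values tuned in the final build.

```csharp
using UnityEngine;

// Drives one instrument track: the meter refills while the matching
// expression is detected, drains otherwise, and mutes the track at zero.
public class TrackMeter : MonoBehaviour
{
    public AudioSource track;          // this instrument's audio track
    public ExpressionListener faces;   // listener from the earlier sketch
    public float threshold = 20f;      // Affdex score needed to count (assumed)
    public float fillRate = 60f;       // meter units per second while expressing
    public float drainRate = 25f;      // meter units per second otherwise

    private float meter = 100f;        // 0..100 progress bar value

    void Update()
    {
        bool expressing = faces.mouthOpen > threshold; // one expression per track
        meter += (expressing ? fillRate : -drainRate) * Time.deltaTime;
        meter = Mathf.Clamp(meter, 0f, 100f);
        track.mute = meter <= 0f;      // track falls silent when the bar empties
    }
}
```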
Outcome
The code is available at https://github.com/LeoFrank313/FaceMusicGame2
In the final version, we added a camera feed in the center of the screen as the main element of the game. There are four tracks of music: the main piano track keeps playing as background music, while the other three tracks correspond to three facial expressions, mouth open, eye closure, and nose wrinkle, and are played when the matching expression is detected. We tested several times to tune the detection sensitivity, and each progress bar changes at a different rate depending on how hard its expression is to detect. To make the game more playable, we also used happy and sad expressions to speed the music up and slow it down. We thought this suited the two emotions; for example, the music sounds sadder when it is slower.
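One simple way to realize the speed change in Unity is to scale AudioSource.pitch, which changes playback speed (and pitch along with it, which arguably reinforces the "sadder when slower" effect). The emotion threshold and speed factors below are illustrative assumptions:

```csharp
using UnityEngine;

// Maps the happy/sad emotion scores to playback speed for all tracks.
public class TempoController : MonoBehaviour
{
    public AudioSource[] tracks;       // all four music tracks
    public ExpressionListener faces;   // listener from the earlier sketch
    public float threshold = 50f;      // emotion score needed to trigger (assumed)

    void Update()
    {
        float speed = 1f;                                  // normal playback
        if (faces.joy > threshold)          speed = 1.25f; // happy: speed up
        else if (faces.sadness > threshold) speed = 0.8f;  // sad: slow down

        // AudioSource.pitch scales playback rate, so this also shifts pitch.
        foreach (AudioSource track in tracks)
            track.pitch = speed;
    }
}
```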
Figure 5. Final version
Testing and Analysis
We ran playtests for both versions of our game, but we did not record any biometric data, because we were not trying to take a scientific standpoint or prove a hypothesis; we just wanted to complete and improve our project. What we did do was give players a questionnaire after each playtest.
For the first playtest, we tested version 1, and here are the questions we asked:
1. Do you understand the mechanic of the game?
2. Can you tell the difference between the music?
3. Do you think the mechanic is fun?
4. Any suggestions?
The data showed that most of the volunteers understood our game and thought it was fun, since they had never played this kind of game before. However, they felt the changes in the music were not significant, and there was no direct visual feedback in the game. Also, after the three expressions there was nothing left for them to do. Besides being able to add sounds, they suggested it would be better if they could also mute them.
Based on the data we collected, we changed a lot in the game. First of all, we added visual feedback so that players get a direct hint of each change. Second, we changed the music, and the tracks are now clearer. Last but not least, we added new mechanics, and the game now runs in waves, so it can be called a real game.
For the second playtest, we asked players to play both versions of the game, and here are the questions we asked:
1. Do you understand the mechanic of the game?
2. Can you tell the difference between the music?
3. Which version do you think is better and why?
This time the results were much better. Players clearly all preferred the second version, and they could really play it. The music was also clearer; one track was perhaps still not distinct enough, but the others were easy to tell apart.
Summary
In this project, we tried many technologies and made several attempts before arriving at an interesting game. We found that it is important not to cling stubbornly to a single proposal. During the playtests, we listened to many suggestions and tried to meet the players' requirements. For future work, we think it is possible to make the project more interesting and more game-like, and we would focus on that as we improve it.