Today, I finished plotting the gestures, which gave me a clearer overview of the dataset. I used Python scripts to plot every gesture according to its position (x, y) and the subject's cognitive level. The original letters (about 4000 × 4000) are too large to display on screen, so I scaled them down so that all the gestures from one subject fit on a single screen (2056 × 1024). I arranged them on one screen to make it easier to find potential features by comparing all the gestures at the same time. Currently, the gestures are ordered by time.
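A minimal sketch of this kind of plot, assuming a hypothetical data layout (the function name, grid size, and synthetic demo strokes are all illustrative, not the actual scripts): each gesture is an (n_points, 2) array of (x, y) coordinates on a roughly 4000 × 4000 canvas, and one subject's gestures are drawn in time order on a grid sized to match a 2056 × 1024 screen.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

def plot_subject_gestures(gestures, cols=8, canvas=4000):
    """Draw all of one subject's gestures in a single figure.

    gestures: list of (n_points, 2) arrays, in time order.
    """
    rows = int(np.ceil(len(gestures) / cols))
    # 2056 x 1024 px at 100 dpi -> 20.56 x 10.24 inches
    fig, axes = plt.subplots(rows, cols, figsize=(20.56, 10.24), dpi=100)
    axes = np.ravel(axes)
    for ax, g in zip(axes, gestures):
        ax.plot(g[:, 0], g[:, 1], linewidth=0.5)
        ax.set_xlim(0, canvas)
        ax.set_ylim(canvas, 0)  # flip y so gestures appear as written
        ax.axis("off")
    for ax in axes[len(gestures):]:
        ax.axis("off")  # hide any unused grid cells
    return fig

# Demo with synthetic random-walk strokes (not real data):
rng = np.random.default_rng(0)
demo = [np.cumsum(rng.normal(0, 50, size=(200, 2)), axis=0) + 2000
        for _ in range(16)]
fig = plot_subject_gestures(demo)
fig.savefig("subject_gestures.png")
```

Scaling happens implicitly here: each axes keeps the original 4000-unit coordinate range but occupies only a small cell of the figure, so the full letters shrink to fit one screen.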
My next step is to look into those gestures and try to find interesting, valuable features to test. Another task to finish before the end of next week is to summarise the commonly used features for further reference.