Description of work
Prosthetic control is achieved by building a machine learning (ML) model of the user's intent to activate their (remaining) muscles, using biological signals gathered from the user's forearm or stump. Traditionally, a few surface electromyography (sEMG) sensors are used. The main problem of myocontrol is currently unreliability: although the ML system works fine in the lab, with the arm resting on a table, it fails miserably once it is used online to perform real-life tasks.
However, we already know that adding synthetic data to the dataset used to build the model seems to improve matters. Is this really the case in a realistic setting? Would synthetic data really make a myocontrol system more reliable when it is confronted with complex, bimanual tasks performed using real prostheses?
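As a starting point, the augmentation idea can be sketched as follows. This is a minimal, hypothetical illustration, not our actual pipeline: the feature dimensions, the intent labels, the Gaussian-jitter augmentation strategy, and the plain least-squares intent model are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for recorded sEMG features: 100 samples over
# 8 channels, each labelled with an intended activation level.
X_real = rng.random((100, 8))
y_real = X_real @ rng.random(8) / 8.0

def augment_with_synthetic(X, y, n_copies=3, noise_std=0.05, rng=None):
    """Create synthetic samples by jittering recorded ones with Gaussian
    noise (one simple strategy among many that could be tried)."""
    rng = rng or np.random.default_rng()
    X_syn = np.concatenate([X + rng.normal(0.0, noise_std, X.shape)
                            for _ in range(n_copies)])
    y_syn = np.tile(y, n_copies)  # jittered copies keep the original label
    return np.concatenate([X, X_syn]), np.concatenate([y, y_syn])

X_aug, y_aug = augment_with_synthetic(X_real, y_real, rng=rng)
print(X_aug.shape)  # (400, 8): 100 real samples plus 3 jittered copies each

# Fit a simple linear intent model by least squares on the augmented set.
w, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
print(w.shape)  # (8,): one weight per sEMG channel
```

A comparative experiment would then train one model on the real data alone and one on the augmented set, and compare their online reliability on the same tasks.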
Your task: implement this idea in our setup, then conceive and carry out a comparative experiment to check whether reliability improves or not. New ideas for designing more effective synthetic data are also welcome. How far can you go in visualising the input space in your own mind?
Work breakdown