How can this be done compactly, cheaply, and non-invasively? So far the only really successful approach is represented by surface electromyography. But clearly, muscle movements also cause deformations on the surface of the forearm as you press with your fingers.
What if we tried to reconstruct the intended fingertip forces by looking at the forearm while those forces are being applied?
Mounting a small standard camera on the subject's wrist is easy; calibrating it is more complex; and extracting meaningful features from the images is harder still. For one thing, the features must be as insensitive as possible to illumination. But if such features can be extracted, they can probably be associated with finger forces, and subsequently finger positions and forces can be predicted from the images.
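For illustration, here is a minimal Python sketch of one classical illumination-robust descriptor, uniform Local Binary Patterns (LBP), which encode local intensity ordering rather than absolute brightness. The grid size and LBP parameters are illustrative assumptions, not project choices.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_features(gray_image, points=8, radius=1, grid=(4, 4)):
    """Concatenate per-cell LBP histograms over a grid of image cells."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2  # number of distinct "uniform" LBP codes
    feats = []
    for row in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(row, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins))
            feats.append(hist / max(hist.sum(), 1))  # normalise per cell
    return np.concatenate(feats)
```

Because LBP compares each pixel only with its neighbours, a uniform change in lighting leaves the codes largely unchanged, which is exactly the property asked of the features above.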
We already have a set of machine learning algorithms in place that realise this scheme starting from electromyographic data, ultrasound images, and tactile traces. Your task will be to do the same using RGB images and visual features.
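As a rough sketch of how the visual variant could look, the snippet below maps per-frame visual features to fingertip forces as a multivariate regression problem. Ridge regression is an illustrative stand-in, and the file names and array shapes are assumptions; the existing in-house algorithms are not detailed here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one feature vector per camera frame; y: synchronised fingertip
# forces, one column per finger (hypothetical files and shapes).
X = np.load("features.npy")  # shape (n_frames, n_features), assumed file
y = np.load("forces.npy")    # shape (n_frames, n_fingers), assumed file

# Standardise the features, then fit a linear multi-output regressor.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X, y)
predicted_forces = model.predict(X)  # per-frame force estimates
```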
Offline tests on healthy subjects will be performed as we go along. An online implementation and tests on amputees are possible later on, if the approach proves feasible.
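One plausible offline evaluation protocol, sketched here under assumptions about the data layout, is leave-one-subject-out cross-validation, so that test frames never come from a subject seen during training:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical files: per-frame features, forces, and subject labels.
X = np.load("features.npy")            # (n_frames, n_features)
y = np.load("forces.npy")              # (n_frames, n_fingers)
subjects = np.load("subject_ids.npy")  # one subject id per frame

errors = []
for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    model.fit(X[train], y[train])
    errors.append(mean_absolute_error(y[test], model.predict(X[test])))
print(f"mean absolute force error across subjects: {np.mean(errors):.3f}")
```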