For many people with upper-limb disabilities, simple activities of daily living, such as drinking, opening a door, or pushing an elevator button, require the assistance of a caregiver. An assistive robotic system controlled via a brain-computer interface (BCI) could enable these people to perform such tasks autonomously again and thereby increase their independence. In this context, we investigate various methods to give disabled people control over the DLR Light-Weight Robot while supporting task execution with the capabilities of a torque-controlled robot.
In our research we follow two different interfacing techniques.
To improve the usability of such an assistive robotic system, e.g. when grasping objects, we integrate autonomous capabilities into the robot to compensate for inaccuracies in the BCI. Different schemes of shared autonomy can be incorporated, ranging from grasp-stability detection, which prevents objects from being dropped unintentionally, to fully autonomous reaching and grasping based on vision data.
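A common way to realize such a spectrum of shared autonomy is linear command arbitration, in which the user's (BCI-derived) command is blended with the output of an autonomous controller. The sketch below illustrates the idea only; the function name, the choice of Cartesian velocity commands, and the fixed blending factor are assumptions for illustration, not the system's actual control law.

```python
import numpy as np

def blend_commands(u_user, u_auto, alpha):
    """Linear arbitration between the user's command and an autonomous
    controller output. alpha = 1.0 gives full user control, alpha = 0.0
    full autonomy; intermediate values realize shared autonomy.
    Both commands are treated as Cartesian velocity vectors here."""
    u_user = np.asarray(u_user, dtype=float)
    u_auto = np.asarray(u_auto, dtype=float)
    return alpha * u_user + (1.0 - alpha) * u_auto

# Example: a noisy user command is pulled toward the autonomous
# grasp-approach direction with equal weighting.
u = blend_commands([0.10, 0.02, 0.00], [0.08, 0.00, -0.05], alpha=0.5)
```

In practice, the blending factor could itself vary, e.g. increasing autonomy as the gripper approaches a detected object.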
We focus on helping disabled people regain lost hand/arm functionality without the burden of surgery, drugs, or hospitalisation. For fine control of dexterous prostheses we use non-invasive human-computer interfaces. We aim to re-establish the sensorimotor loop with the missing or injured limb; this includes feed-forward control (detecting the intention of the patient) and sensory feedback (transducing digital sensor readings into sensations).
Our main target patients are those affected by amputation, neuropathic pain, stroke, spinal cord injury, and neuromuscular degenerative conditions. We study novel non-invasive human-machine interfaces (surface electromyography, ultrasound, tactile and optical sensing, etc.) for feed-forward control. We investigate innovative ways of delivering sensory feedback, such as impedance-controlled mechatronic devices and the application of force, vibration, or electrical stimulation to the patient's body.
We also work on algorithms and methods to achieve stable, reliable, long-term intention detection for the patient or amputee. To this end we study machine learning algorithms that can work online and incrementally, in a space- and time-bounded fashion.
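One minimal example of such a space- and time-bounded online learner is a nearest-centroid classifier with running-mean updates: memory stays fixed at one vector per intention class, and each update costs O(features). This is only an illustrative sketch of the *class* of algorithm described above, not the group's actual method; the class names and feature vectors are invented.

```python
import numpy as np

class IncrementalCentroidClassifier:
    """Space- and time-bounded online intent classifier: one running-mean
    centroid per class, so memory is O(classes x features) and each
    update is O(features), regardless of how many samples were seen."""

    def __init__(self):
        self.centroids = {}   # class label -> mean feature vector
        self.counts = {}      # class label -> number of samples seen

    def partial_fit(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.centroids:
            self.centroids[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # incremental mean update: m += (x - m) / n
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Toy usage with two hypothetical EMG-feature classes:
clf = IncrementalCentroidClassifier()
clf.partial_fit([0.9, 0.1], "open")
clf.partial_fit([0.1, 0.8], "close")
clf.partial_fit([0.8, 0.2], "open")
```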
[Figure: Prosthetic hand on the TORO robotic platform, teleoperated with EMG.]
We not only exploit bio-data, but also work on improving methods for bio-data acquisition. Surface EMG currently offers only a very coarse view of muscle activity, as it records a summation of the signals of many muscle fibers. We are working on a 3D reconstruction method for the generating potentials, to determine which muscles are active during a specific motion. By exploiting the crosstalk between the electrodes in large electrode arrays, we want to recover the potentials of "muscles beneath muscles", gaining a better understanding of action potentials in deeper muscular regions without invasive technology such as needle EMG. We call this method imaging electromyography, or iEMG for short.
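Recovering source potentials from surface measurements is an ill-posed linear inverse problem, conceptually similar to EEG source localization. The sketch below shows the generic structure of such a reconstruction using Tikhonov (ridge) regularization; the random lead-field matrix, dimensions, and regularization constant are placeholders, not the actual iEMG model.

```python
import numpy as np

# Hypothetical linear forward model: surface potentials y arise from
# source activities s through a lead-field matrix A, y = A s + noise.
rng = np.random.default_rng(0)
n_electrodes, n_sources = 64, 20
A = rng.standard_normal((n_electrodes, n_sources))  # placeholder lead field

s_true = np.zeros(n_sources)
s_true[[3, 11]] = [1.0, 0.5]            # two active (deeper) sources
y = A @ s_true + 0.01 * rng.standard_normal(n_electrodes)

# Tikhonov-regularized least squares:
#   s* = argmin ||A s - y||^2 + lam ||s||^2
#      = (A^T A + lam I)^{-1} A^T y
lam = 1e-2
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_sources), A.T @ y)
```

With enough electrodes relative to modeled sources, the dominant source activities are recovered despite the mixing, which is the core idea behind exploiting inter-electrode crosstalk.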
Each year, approximately 15 million people worldwide suffer a stroke for the first time. The neurological injury often leads to persistent paresis or paralysis. For these patients, regaining control over their own limbs, and thus maintaining independence from other people, is only possible through frequent and repetitive training.
Our goal is to provide self-adjusting robots for physiotherapy of the upper limb. Through impedance control we can adjust how strictly the robot regulates the patient's motion along the desired path. This lets us ensure "repetition without repetition" (Bernstein 1967), fostering neural plasticity and thus motor learning. Using different force patterns, e.g. assistive and resistive forces, the training can be adapted to the individual needs of the patient; possible adjustments range from the type of training exercise to the degree of robotic support.
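The adjustable strictness described above can be illustrated with the standard one-dimensional impedance law, in which the robot behaves like a virtual spring-damper pulling toward the desired path point. The function and gain values below are a minimal sketch of the principle, not the controller actually used on the robot.

```python
def impedance_force(x, x_dot, x_des, stiffness, damping):
    """1-D impedance control law: virtual spring-damper toward x_des.
    A high stiffness enforces the path strictly; a low stiffness
    tolerates the patient's own movement variability
    ('repetition without repetition')."""
    return stiffness * (x_des - x) + damping * (0.0 - x_dot)

# Strict guidance (e.g. for a weak patient) vs. loose guidance
# (allowing variability), same 5 cm position error and 0.1 m/s velocity:
f_strict = impedance_force(x=0.45, x_dot=0.1, x_des=0.50,
                           stiffness=400.0, damping=20.0)
f_loose = impedance_force(x=0.45, x_dot=0.1, x_des=0.50,
                          stiffness=50.0, damping=5.0)
```

Varying the stiffness along the trajectory, or between assistive and resistive modes, then implements the different force patterns mentioned above.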
By observing EMG signals on the impaired arm, we can detect intended motion before the patient regains enough strength to execute it. This lets us verify that the patient is actively participating in the training and perform quantitative analysis of their progress. Moreover, by analyzing the EMG we want to identify the areas of the workspace that need more training and adjust the exercises automatically, e.g. by focusing on specific regions of the human workspace.
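Detecting intended motion from EMG that is too weak to move the limb typically starts from an amplitude envelope of the raw signal. The sketch below uses a common generic pipeline (rectification, moving-average smoothing, thresholding); the window length, threshold, and synthetic signal are illustrative assumptions, not the group's actual processing chain.

```python
import numpy as np

def emg_envelope(signal, window=50):
    """Rectify the raw EMG and smooth it with a moving average to obtain
    an amplitude envelope (window length in samples)."""
    rectified = np.abs(np.asarray(signal, dtype=float))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def detect_intent(envelope, threshold):
    """Flag samples where the envelope exceeds a calibration threshold,
    indicating attempted motion even when it is too weak to move the limb."""
    return envelope > threshold

# Synthetic example: low-level noise at rest, a weak burst of muscle
# activity in the middle of the recording.
rng = np.random.default_rng(1)
sig = 0.02 * rng.standard_normal(1000)
sig[400:600] += 0.3 * np.sin(np.linspace(0.0, 40.0 * np.pi, 200))
active = detect_intent(emg_envelope(sig), threshold=0.1)
```

Logging which workspace regions show such activity over many sessions is one way the training focus described above could be adapted automatically.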
Furthermore, by observing the non-impaired arm, the robot can support the impaired limb in mirroring the other arm's motion, facilitating bilateral movement. This promotes cortical reorganization and gives patients more control over the training, allowing them to account for their own needs.
Vogel, J., Bayer, J. and van der Smagt, P. (2013): Continuous robot control using surface electromyography of atrophic muscles. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3-7 November 2013, pp. 845-850. IEEE.
Vogel, J., Haddadin, S., Simeral, J.D., Stavisky, S.D., Bacher, D., Hochberg, L.R., Donoghue, J.P. and van der Smagt, P. (2010): Continuous control of the DLR Light-Weight Robot III by a human with tetraplegia using the BrainGate2 Neural Interface System. In: 12th International Symposium on Experimental Robotics (ISER), Montreal. Springer-Verlag, Berlin Heidelberg.
Hochberg, L.R., Bacher, D., Jarosiewicz, B., Masse, N.Y., Simeral, J.D., Vogel, J., Haddadin, S., Liu, J., Cash, S.S., van der Smagt, P. and Donoghue, J.P. (2012): Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485 (7398), pp. 372-375. DOI: 10.1038/nature11076.
Urbanek, H. (2012): Method for computer-assisted processing of action potentials of the human or animal body measured with a plurality of electrodes (original title: "Verfahren zur rechnergestützten Verarbeitung von mit einer Mehrzahl von Elektroden gemessenen Aktionspotentialen des menschlichen oder tierischen Körpers"). Patent DE102012211799A1.