
If you try to move your hand (which is no longer there), the muscles of your forearm (which are) still twitch accordingly. Even better, these twitches are linked to the electrical potential recorded on the surface of the skin, so they are easy to read with an electromyogram (EMG).
To a machine learning expert, the problem then becomes simple: muscles twitch in accordance with thoughts. Therefore, if a machine can learn a mapping from muscle twitches to intended movements, the problem is solved! The question then is: if there is no remaining hand, how do I know the subject's exact intended hand position (called 'pose') in order to feed that to my algorithm?

At first, people tried something very simple. A single muscle, if it is activated, can twitch somewhere between 0% and 100% of its maximum capacity. Therefore, if one records the EMG over that exact muscle, the rest of the hand's pose is no longer necessary. This system of proportional control (the activation of the robot hand is proportional to the activation of discrete anatomical muscles, as measured by the EMG) was sufficiently reliable and effective to become the current standard for myoelectric prostheses. However, it comes with downsides. If the electrode shifts a little on the skin, the signal can quickly disappear. It also forces control to rely on the most accessible muscles, which are not necessarily the most intuitive or the most comfortable to activate. Lastly, the human hand is not controlled through isolated muscle contractions, making more complex control (more than two degrees of freedom of the hand) impossible.
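Proportional control is simple enough to sketch in code. What follows is a minimal illustration in Python, not any manufacturer's actual controller: the single 1 kHz EMG channel, the smoothing window, and the calibration values `rest_level` and `max_level` are all hypothetical stand-ins for a real fitting session.

```python
import numpy as np

def emg_envelope(raw_emg, fs=1000, window_ms=150):
    """Rectify the raw EMG and smooth it with a moving-average window."""
    window = max(1, int(fs * window_ms / 1000))
    kernel = np.ones(window) / window
    return np.convolve(np.abs(raw_emg), kernel, mode="same")

def proportional_command(envelope_value, rest_level, max_level):
    """Map the smoothed EMG linearly onto a 0-100% actuator command."""
    activation = (envelope_value - rest_level) / (max_level - rest_level)
    return float(np.clip(activation, 0.0, 1.0))

# Hypothetical use: one second of simulated EMG, then one grip command.
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 0.5, 1000)  # stand-in for a real recording
env = emg_envelope(raw)
command = proportional_command(env[-1], rest_level=0.1, max_level=1.2)
print(f"hand closes to {command:.0%} of full grip")
```

The fragility described above falls straight out of this scheme: if the electrode shifts, `rest_level` and `max_level` no longer match the incoming signal, and the command saturates or vanishes.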
To solve these problems, the field looked towards pattern recognition. It's still too hard to give my algorithm the exact imagined position of the hand in order to train it, the thinking goes, but if I ask the user to choose a small number of poses – say eight – that they can generate reliably, I don't need to be as precise in what I tell the algorithm. All I need to trust is that the user is capable of reproducing the same imagined pose reliably. While not quite that simple in practice, the idea proved to allow for more reliable control in prosthetics both within and outside the laboratory. Yet while more and more complicated machine learning has been attempted in the research community – deep learning, non-negative matrix factorization and other new methods – industry sticks resolutely with the simplest classification strategies. It is here that the difference in metrics comes into play.
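As a rough sketch of what such a pattern-recognition pipeline can look like, here is a toy example using linear discriminant analysis, a classifier of roughly the simplicity industry favours. Everything about it is an assumption for illustration – the RMS features, the eight poses, the channel count, and the synthetic calibration data standing in for a real recording session:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_poses, n_channels, n_windows = 8, 6, 40

def rms_features(emg_window):
    """Root-mean-square amplitude per channel: a simple, common EMG feature."""
    return np.sqrt(np.mean(emg_window ** 2, axis=0))

# Synthetic calibration data: for each pose, simulate EMG windows whose
# per-channel amplitudes follow a pose-specific muscle pattern.
X, y = [], []
for pose in range(n_poses):
    pattern = rng.uniform(0.2, 1.0, n_channels)
    for _ in range(n_windows):
        window = rng.normal(0.0, pattern, size=(200, n_channels))
        X.append(rms_features(window))
        y.append(pose)

clf = LinearDiscriminantAnalysis().fit(np.array(X), np.array(y))

# At runtime, each incoming EMG window is classified and the hand is
# driven into the corresponding pre-defined pose.
new_window = rng.normal(0.0, rng.uniform(0.2, 1.0, n_channels), size=(200, n_channels))
print("predicted pose:", clf.predict(rms_features(new_window).reshape(1, -1))[0])
```

The crucial simplification is visible here: the classifier never estimates a continuous hand position, only which of the eight rehearsed poses the current window most resembles.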
For a machine learner, task success is measured by decoding accuracy. If I can decode the correct pose from the EMG 95% of the time, then my classifier is better than one that can only do it 90% of the time. Intuitively, this is also quite reasonable – if a classifier is right more often, it is probably the one you want. However, this is not the metric that an amputee uses to rate the usability of his device, as no movement is ever done in isolation. A real arm is used to perform complex sequences of movements (opening doors, grasping and turning keys) in situations one could never exhaustively test in a lab (trying to open your front door when there's a grocery bag on your arm and your kid is hanging off your leg screaming for ice cream).
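To make the metric concrete: decoding accuracy is nothing more than the fraction of test windows whose pose label is predicted correctly. A toy calculation (the 200 windows, eight poses, and roughly 95% figure are invented for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 8, size=200)   # true pose of 200 held-out windows
wrong = rng.random(200) < 0.05          # corrupt roughly 5% of predictions
y_pred = np.where(wrong, (y_true + 1) % 8, y_true)

print(f"decoding accuracy: {accuracy_score(y_true, y_pred):.1%}")
# The score is computed one isolated window at a time, which is exactly
# why it can look excellent while saying nothing about chained movements
# or a grocery bag on your arm.
```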