
2017

Luís Filipe Martinho de Almeida

Brain Computer Interface



Interface Cérebro-Computador

“Knowing yourself is the beginning of all wisdom.” Aristotle



Dissertação apresentada à Universidade de Aveiro para cumprimento dos requisitos necessários à obtenção do grau de Mestre em Engenharia Eletrónica e Telecomunicações, realizada sob a orientação científica de Manuel Bernardo Salvador Cunha, Professor Auxiliar do Departamento de Eletrónica, Telecomunicações e Informática da Universidade de Aveiro, e José Nuno Panelas Nunes Lau, Professor Auxiliar do Departamento de Eletrónica, Telecomunicações e Informática da Universidade de Aveiro.

Thesis submitted to the University of Aveiro in fulfilment of the requirements for the degree of Master in Electronics and Telecommunications Engineering, carried out under the scientific supervision of Manuel Bernardo Salvador Cunha, Assistant Professor at the Department of Electronics, Telecommunications and Informatics of the University of Aveiro, and José Nuno Panelas Nunes Lau, Assistant Professor at the Department of Electronics, Telecommunications and Informatics of the University of Aveiro.


Professora Auxiliar da Universidade de Aveiro

vogais / examiners committee: Professora Doutora Brígida Mónica Teixeira de Faria

Professora Adjunta Convidada do Instituto Politécnico do Porto - Escola Superior de Saúde

Professor Doutor Manuel Bernardo Salvador Cunha


desejos e sonhos. Todos temos apenas uma vida para viver, e por essa razão que seja cheia de alegria e amor; ambos vivemos seguindo esse princípio, a procura da nossa felicidade e sonhos, pois o amor já o estamos a viver. Dedico também aos meus pais, irmão e cunhada Susana que me apoiaram sempre em todos os maus momentos e me ajudaram sempre que precisei. Amo-vos a todos do fundo do meu coração e sem vocês não seria a pessoa que sou hoje.

I dedicate this work to the love of my life, Fátima, who has always accompanied me and given me the strength to move forward and go beyond my hopes and dreams. We all have only one life to live; for that reason, let it be full of happiness and love. We both follow this principle, the pursuit of our happiness and dreams, for the love we have already found and are living. I also dedicate it to my parents, my brother and my sister-in-law Susana, who supported me through all the good and bad times and were always there to help me when I needed it.

I love you all with all my heart, and without you I would not be the person I am today.


A investigação e desenvolvimento de sistemas BCI, Brain Computer Interface, tem crescido de ano para ano, com resultados cada vez melhores. Uma das principais vertentes para a qual estes sistemas têm sido usados é a área da neuroprostética.

Desta forma, tem-se demonstrado em vários estudos e investigações a possibilidade de controlar membros robóticos completos ou parciais por nós, seres humanos, dando assim uma liberdade e conquista de movimentos perdidos a pessoas incapacitadas.

No entanto, uma grande parte dos melhores resultados obtidos envolve a utilização de BCI invasivos, que necessitam de ser implantados diretamente no cérebro humano, através de uma operação cirúrgica. Este é ainda um dos grandes inconvenientes desta abordagem, assim como o facto de uma grande parte destes estudos ainda estar na fase de testes.

Este trabalho teve como objetivo tentar comprovar que os BCI não invasivos também conseguem obter bons resultados, apesar das suas limitações e da pior aquisição de sinal devido à inclusão de ruído por parte do nosso crânio e cabelo, assim como que a inclusão dos Parâmetros de Hjorth proporciona melhores resultados na identificação das classes desejadas.

Dividiu-se o trabalho em duas partes, uma para a identificação das classes de “Piscar de Olho” e outra para a identificação das classes de “Ações Pensadas”. Os resultados foram todos obtidos tendo em conta apenas um utilizador.

Relativamente à deteção do “Piscar de Olho”, comprovou-se que é facilmente conseguida, com resultados quase perfeitos, com uma precisão de 99.98%. Relativamente à deteção de “Ações Pensadas”, não foi possível comprovar a sua deteção usando sessões de gravação diferentes; no entanto, verificou-se que a classificação das classes tendo em conta a mesma sessão de gravação obtém resultados muito bons, com valores acima dos 99% para o melhor método preditivo. A inclusão dos Parâmetros de Hjorth foi, em todos os casos de estudo, a opção em que os resultados foram sempre melhores, demonstrando assim que a sua inclusão é uma opção aconselhável, pois em alguns casos a precisão na deteção das classes aumentou para o dobro ou mais.

Os resultados são promissores e, apesar de não ter conseguido obter os melhores resultados para sessões de gravação independentes na classificação de “Ações Pensadas”, indico nas análises os passos necessários para a obtenção de melhores resultados e a possibilidade de generalização do processo para diversos utilizadores.


The research and development of BCI (Brain Computer Interface) systems has grown from year to year, with better and better results. One of the main areas in which these systems have been used is neuroprosthetics.

Several studies and investigations have shown the possibility of people controlling complete or partial robotic limbs, thus giving freedom and the recovery of lost movements to incapacitated persons.

However, a great part of the best results obtained involves the use of invasive BCIs, which need to be implanted directly into the human brain through a surgical operation. This is still one of the great drawbacks of this approach, along with the fact that a large part of these studies are still in the testing phase.

The aim of this study was to try to prove that non-invasive BCIs can also achieve good results, despite their limitations and the inferior quality of the acquired data due to the noise introduced by our skull and hair, and also that the inclusion of the Hjorth Parameters in the analysis provides better results in identifying the desired classes.

The work was split into two parts, one for the identification of “Eye Blinking” classes and the other for “Thought Actions” classes. All results were obtained with a single user.

Regarding the detection of “Eye Blinking”, it was found to be easily achieved, with near-perfect results and an accuracy of 99.98%. Regarding the detection of “Thought Actions”, it was not possible to verify detection across different recording sessions; however, classification within the same recording session obtained very good results, with values above 99% for the best predictive method. The inclusion of the Hjorth Parameters was, in all study cases, the option with the best results, demonstrating that their inclusion is advisable, since in some cases the class-detection accuracy doubled or more.

The results are promising and, although I was not able to obtain the best results for independent recording sessions in the classification of “Thought Actions”, I indicate in the analysis the steps necessary to obtain better results and the possibility of generalizing the process to several users.


Contents

Contents 1
List of Figures 3
List of Tables 5

1 Introduction 7
I Motivation . . . 7
II Problem in Question . . . 8
III Thesis Structure . . . 8

2 State of Art 9
I Types of BCI . . . 10
I.1 Invasive BCI . . . 10
I.2 Partially Invasive BCI . . . 11
I.3 Non-Invasive BCI . . . 11
II Classifiers and Supervised Learning Methods . . . 12
II.1 Hjorth Parameters . . . 12
II.2 Neural Net . . . 13
II.3 Naive Bayes . . . 14
II.4 Decision Tree . . . 15
II.5 Deep Learning . . . 15
III News and Studies Developed in the Area . . . 16

3 Objectives 23

4 Brain Computer Interface - BCI 25
I Equipment and Software Used . . . 25
I.1 Emotiv EPOC+ Headset . . . 25
I.2 Software . . . 26
I.3 Developed Software . . . 27
II Approach to Solve Problem . . . 31
II.1 Data Gathering . . . 31
II.2 Data Analysis and Filtering . . . 31

5 Results 35
I Blink State Detection . . . 35
I.1 Results . . . 36
II Thought Action State Detection . . . 42
II.1 Results . . . 42

6 Conclusion 49

Bibliography 53

Appendices 55

A Results - Detailed 56
I Blink Detection - Without Hjorth Parameters . . . 56
II Blink Detection - With Hjorth Parameters . . . 57
III Blink Detection - Custom Filter . . . 58
IV Thought Action Detection - Without Hjorth Parameters . . . 59
V Thought Action Detection - With Hjorth Parameters . . . 61
VI Thought Action Detection - Session Mix I - Without Hjorth Parameters . . . 63
VII Thought Action Detection - Session Mix II - Without Hjorth Parameters . . . 65
VIII Thought Action Detection - Session Mix I - With Hjorth Parameters . . . 67
IX Thought Action Detection - Session Mix II - With Hjorth Parameters . . . 69

B Low Pass Filter 71

C Rapidminer - Prediction Methods Configurations 73
I Neural Net . . . 73
II Naive Bayes . . . 73
III Decision Tree . . . 74


List of Figures

2.1 Neural Network Example . . . 13
2.2 Pplware News - “Brain controlled car drives in straight line” . . . 16
2.3 Design News - “Teen Invents Artificial Arm Controlled by Bluetooth-Powered Brain Waves” . . . 17
2.4 Seeker - “Robot Arm Follows Brainwave Instructions” . . . 17
2.5 Mail Online - “As scientists discover how to ’translate’ brainwaves into words... Could a machine read your innermost thoughts?” . . . 18
2.6 MIT Technology Review - “A Brain-Computer Interface That Works Wirelessly” . . . 19
2.7 Quartz - A patient using an early version of the robot arm. (DARPA) . . . 19
2.8 The New York Times - “Prosthetic Limbs, Controlled by Thought” . . . 20
2.9 Popular Science - “Brain-Controlled Bionic Legs Are Finally Here” . . . 21
4.1 Emotiv EPOC+ . . . 26
4.2 Sensor Map of EPOC+ . . . 26
4.3 Demonstration of ControllerBCI Visualizer Interface . . . 28
4.4 Demonstration of ControllerBCI Recorder Interface . . . 29
4.5 Original Data with Label and Spectrum Analysis - Sensor AF3 - Blinking Mix 1 Session . . . 31
4.6 AF3 - Frequency Spectrum - Close Up . . . 32
4.7 AF3 - Original vs. Filtered . . . 32
4.8 Rapidminer - General Process . . . 33
4.9 Rapidminer - Inner Process . . . 34
5.1 Blinking Mix 1 - Sensors AF3 and AF4 . . . 36
5.2 Blinking Mix 2 - Sensors AF3 and AF4 . . . 36
5.3 Blinking Mix 1 - Sensors F7 and F8 . . . 37
5.4 Blinking Mix 2 - Sensors F7 and F8 . . . 37
5.5 Blinking Mix 1 - Sensors F7 and F8 - Definition of Classes . . . 41
6.1 Demonstration of Oxidation Build Up in Sensors . . . 49
B.1 Filter - Magnitude Response . . . 72


List of Tables

5.1 Blink Detection - Overall Results - Without Hjorth Parameters . . . 38
5.2 Blink Detection - Method Accuracy Comparison . . . 38
5.3 Blink Detection - Method Accuracy Comparison - Minimum True Positives . . . 39
5.4 Blink Detection - Overall Results - With Hjorth Parameters . . . 39
5.5 Blink Detection - Method Accuracy Comparison (Hjorth) . . . 39
5.6 Blink Detection - Comparison Between Without/With Hjorth Analysis . . . 39
5.7 Blink Detection - Method Accuracy Comparison - Minimum True Positives (Hjorth) . . . 40
5.8 Blink Detection - Overall Results - Custom Filter . . . 41
5.9 Blink Detection - Method Accuracy Comparison (Custom Filter - Hjorth) . . . 41
5.10 Thought Action Detection - Overall Results - Without Hjorth Parameters . . . 43
5.11 Thought Action Detection - Method Accuracy Comparison . . . 43
5.12 Thought Action Detection - Overall Results - With Hjorth Parameters . . . 44
5.13 Thought Action Detection - Method Accuracy Comparison (Hjorth) . . . 44
5.14 Thought Action Detection - Overall Results - Mix I - Without Hjorth Parameters . . . 45
5.15 Thought Move Detection - Method Accuracy Comparison - Mix 1 . . . 45
5.16 Thought Action Detection - Overall Results - Mix II - Without Hjorth Parameters . . . 46
5.17 Thought Action Detection - Method Accuracy Comparison - Mix 2 . . . 46
5.18 Thought Action Detection - Overall Results - Mix I - With Hjorth Parameters . . . 46
5.19 Thought Action Detection - Method Accuracy Comparison - Mix 1 (Hjorth) . . . 47
5.20 Thought Action Detection - Overall Results - Mix II - With Hjorth Parameters . . . 47
5.21 Thought Action Detection - Method Accuracy Comparison - Mix 2 (Hjorth) . . . 48
A.1 Blink Detection - Neural Net . . . 56
A.2 Blink Detection - Naive Bayes . . . 56
A.3 Blink Detection - Decision Tree . . . 56
A.4 Blink Detection - Deep Learning . . . 56
A.5 Blink Detection - Neural Net (Hjorth) . . . 57
A.6 Blink Detection - Naive Bayes (Hjorth) . . . 57
A.7 Blink Detection - Decision Tree (Hjorth) . . . 57
A.8 Blink Detection - Deep Learning (Hjorth) . . . 57
A.9 Blink Detection - Neural Net - Custom Filter (Hjorth) . . . 58
A.10 Blink Detection - Naive Bayes - Custom Filter (Hjorth) . . . 58
A.11 Blink Detection - Decision Tree - Custom Filter (Hjorth) . . . 58
A.12 Blink Detection - Deep Learning - Custom Filter (Hjorth) . . . 58
A.13 Move Thought Detection - Neural Net . . . 59
A.14 Move Thought Detection - Naive Bayes . . . 59
A.15 Move Thought Detection - Decision Tree . . . 59
A.16 Move Thought Detection - Deep Learning . . . 60
A.17 Move Thought Detection - Neural Net (Hjorth) . . . 61
A.18 Move Thought Detection - Naive Bayes (Hjorth) . . . 61
A.19 Move Thought Detection - Decision Tree (Hjorth) . . . 61
A.20 Move Thought Detection - Deep Learning (Hjorth) . . . 62
A.21 Move Thought Detection - Neural Net - Mix 1 . . . 63
A.22 Move Thought Detection - Naive Bayes - Mix 1 . . . 63
A.23 Move Thought Detection - Decision Tree - Mix 1 . . . 63
A.24 Move Thought Detection - Deep Learning - Mix 1 . . . 64
A.25 Move Thought Detection - Neural Net - Mix 2 . . . 65
A.26 Move Thought Detection - Naive Bayes - Mix 2 . . . 65
A.27 Move Thought Detection - Decision Tree - Mix 2 . . . 65
A.28 Move Thought Detection - Deep Learning - Mix 2 . . . 66
A.29 Move Thought Detection - Neural Net - Mix 1 (Hjorth) . . . 67
A.30 Move Thought Detection - Naive Bayes - Mix 1 (Hjorth) . . . 67
A.31 Move Thought Detection - Decision Tree - Mix 1 (Hjorth) . . . 67
A.32 Move Thought Detection - Deep Learning - Mix 1 (Hjorth) . . . 68
A.33 Move Thought Detection - Neural Net - Mix 2 (Hjorth) . . . 69
A.34 Move Thought Detection - Naive Bayes - Mix 2 (Hjorth) . . . 69
A.35 Move Thought Detection - Decision Tree - Mix 2 (Hjorth) . . . 69


Chapter 1

Introduction

I Motivation

All of us have, one day, looked at a person with a physical disability, a lost limb, or a complete inability to move their limbs, and wondered what would become of us if we were in their situation. Most of our answers are “I don't want to think about it” or “it would be impossible for me...”, because we are all accustomed to being able to do everything by ourselves when it comes to taking care of ourselves, writing, dressing, eating, etc. For people missing limbs, or parts of limbs, the advances of medicine in the prosthetics area have returned much of the autonomy these people had lost; however, there are still limitations that make a prosthesis very different from a real limb.

With advances in existing technology it has become increasingly possible to develop fully robotic limbs, or parts of human limbs, able to perform all the movements and abilities of a real limb. However, there is still a gap to close: the proper control of such robotic limbs without any other commands or analog interfaces.

To fill this gap, scientists have performed many studies in the area of reading brainwaves, along with other types of interfaces; for the purpose of this thesis, only those regarding brainwaves are of interest. Major breakthroughs have been achieved, and some people already have robotic prosthetic limbs controlled through brainwaves, but with a reading method that is invasive.

Surgeons go inside the skull and place microchip circuits able to read the potential differences in our brain at specific points, from there allowing the data to be interpreted in order to determine our emotions and actions; in other words, an electroencephalogram taken directly from inside the skull.1,2

II Problem in Question

This work intends to prove and demonstrate that it is possible to overcome the barrier described above not only by invasive but also by non-invasive methods, in spite of the greater difficulties involved, since the EEG reading is made from outside the skull, where there is significantly more noise in the signal, caused by our own skin and hair.

Thus, I intend to prove that it is possible to obtain good results in the detection of small movements, such as the blink of an eye, and in the classification of thought actions.

III Thesis Structure

This thesis is structured in three essential parts, followed by the conclusions drawn from the analysis of the results.

The first part, Chapter 2, presents some studies carried out by various entities and researchers, relevant news on the development of Brain Computer Interfaces and the technology associated with them, as well as an explanation of the different existing types of Brain Computer Interfaces.

The second part, Chapter 4, describes the hardware and software used in this work, gives a theoretical explanation of the subjects covered and of the approach to the problem, presents what the Hjorth Parameters are, and lists the predictive methods used to create the models applied in the classification.

The third part, Chapter 5, presents the results, divided into two parts: the first refers to the detection of the classes “Blink Left”, “Blink Right” and “Blink Both”, related to the detection of eye blinking, and the second refers to the detection of the classes “Move Left”, “Move Right”, “Move Forward”, “Move Back” and “Stop”, referring to the thought actions.

1Quartz. This mind-controlled prosthetic robot arm lets you actually feel what it touches. 2015. url: http://qz.com/500572/this-mind-controlled-prosthetic-robot-arm-lets-you-actually-feel-what-it-touches/.

2The New York Times. Prosthetic Limbs, Controlled by Thought. 2015. url: http://www.nytimes.com/2015/05/21/technology/a-


Chapter 2

State of Art

This work is based on a great concept that the scientific community has been increasingly exploring and investigating, the Brain Computer Interface (BCI), which, by using Electroencephalography (EEG), can create what was previously considered fiction. The EEG is the measurement of brain waves: a test that can be done easily and quickly, and that enables us to see how the brain functions over time.

In 1924, Berger1 first recorded the EEG signal of a human brain with a very basic device, still experimental, which required silver wires to be inserted under the scalp of the patient. Through the analysis of EEG signals, Berger was able to identify oscillatory activity in the human brain, designated alpha waves (8-12 Hz), also known as Berger waves. Later, the use of a reading device developed by Siemens, able to measure voltages as small as a tenth of a thousandth of a volt, made Berger's experiment a success. From this point on, the EEG opened the door to a new world in the study of the human brain.

The EEG is usually used to show the types and locations of brain activity during a seizure, and to evaluate people who have brain disorders such as coma, confusion, tumors, loss of memory and reasoning, or simply the loss of control of a limb, such as paralysis caused by strokes.

In its most morbid, but essential, aspect, the EEG is used to declare brain death and, from that, to make the decision of turning off the life-support machines that keep a patient alive who, in reality, no longer has the possibility of living without their help.

The Brain Computer Interface, sometimes also known as Mind-Machine Interface (MMI) or Brain Machine Interface (BMI), is a direct communication link between the brain and an external element, and is typically used for research, mapping, assistance or repair of human cognitive or sensory functions through the brainwave readings of the EEG.

1Lingaraju.G.M Anupama.H.S N.K.Cauvery. “Brain Computer Interface and its Types - A Study”. In: International Journal of

Research in BCI began in the 1970s at the University of California, Los Angeles (UCLA), through a National Science Foundation grant, followed by a contract from DARPA.2 The papers that followed this research mark the first use of the expression brain-computer interface in the scientific literature.

As stated, a BCI's main purpose is to transmit human intentions from the real world to a virtual world where they can be analyzed and processed, and thus interact again with the real world through external devices or computer programs with specific objectives. Usually this kind of interface is intended for people with limited motor or speech abilities, who can thereby manage to move or express themselves, respectively.

With the emergence and the increasingly diverse study of BCI interfaces, it has been possible to overcome barriers that in the recent past were considered fictional. Research in BCI has largely focused on the field of neuroprosthetics, which aims to study solutions for the lack of limbs, hearing and vision. Below I will discuss some of the studies and results that have been achieved by research in this field of science.

I Types of BCI

There are three types of BCI: invasive, partially invasive and non-invasive. As stated earlier, all are intended to intercept the electrical signals that pass between our neurons and turn them into signals that can be processed and used by external devices.

I.1 Invasive BCI

Invasive BCIs are interfaces implanted directly into the human brain, i.e., within the cranium, through an operation, directly into the gray matter; hence the term invasive. Of the three types, they are the ones with the best results in terms of signal quality. However, they are subject to the signal becoming weaker over time, or simply disappearing, because of the accumulation of scar tissue caused by the reaction of the human body to a foreign object.

This type of BCI is commonly used in people with paralysis, so that they can operate external devices, and they are also being used to restore “vision” to blind people; not proper vision like that of a normally sighted person, but the possibility of starting to see some colors and blurs, which for those living in darkness is as if they were reborn.

Another major utility of this type of BCI is the ability to restore partial movement to people who have lost limbs, using robotic limbs as replacements.

I.2 Partially Invasive BCI

Partially invasive BCIs are also implanted within the skull, but, unlike invasive ones, they do not touch the gray matter, and therefore their signal strength is weaker in comparison with the invasive type. However, they are not as subject to the effects of scar tissue accumulation, and still show better signal readings than the non-invasive type.

Electrocorticography (ECoG) uses the same technology as non-invasive BCIs, but the electrodes are embedded in a small plastic film that is placed over the cortex, under the dura mater. Using this technology, Eric Leuthardt and Daniel Moran, in 2004 at Washington University in St. Louis, were able, in a trial, to make a teenager control a game called Space Invaders, and found that with this technology it would be impossible to control more than one dimension.3

Another possibility would be to use a Light Reactive Imaging BCI, but these are still under study. In summary, it consists of placing lasers inside the skull, trained to track a single neuron, with the laser's reflection measured by another sensor. When the neuron fires, the pattern of the laser light and its wavelength change slightly, thus enabling the monitoring of a single neuron with less contact and consequently less accumulation of scar tissue.

I.3 Non-Invasive BCI

Non-invasive BCIs are the ones with the poorest brainwave reading signal of the three types, because our skull, skin and hair distort the signal and introduce noise. However, this interface is the most secure, cheap and easy to use, as it does not require a medical operation to install the sensors or the interface itself in the brain: you can simply put it on like a helmet, on which the sensors are mounted, with readings made through contact.

Despite having the worst signal quality among the three types of BCI, a non-invasive BCI made it possible for a patient, through some muscle implants, to recover partial movements, thus revealing the potential of this type of non-invasive approach.

II Classifiers and Supervised Learning Methods

One of the purposes of this work is to validate the use of the Hjorth Parameters as a good way to improve the results of the detection of the required classes. They are a type of feature extraction usually used in time-domain signal processing; I will go into further detail below.

In order to detect the various classes by reading the signals from the EEG sensors, some type of prediction model was required. Since the program used to create and then test the prediction models was RapidMiner®, which will be discussed further down in Section I.2 Software of Chapter 4, the available prediction methods were chosen from the existing library inside the program.

Below is a brief summary of each of the methods applied. I will not go into detail on how they work, since that is not the focus of this work; the aim is rather to compare the results of each method, trying to analyze which one best fits the detection of the various possible classes.

II.1 Hjorth Parameters

The Hjorth Parameters4,5 are usually used in time-domain signal processing and in the analysis of electroencephalographic signals to perform feature extraction. There are three parameters, activity, mobility and complexity, which are indicators with statistical properties.

Activity is the signal power, the variance of a time function, and may indicate the surface of the power spectrum in the frequency domain. It is defined as follows:

$$\text{Activity} = \operatorname{var}(y(t)) \tag{2.1}$$

where $y(t)$ represents the signal.

4Wikipedia. Hjorth Parameters. 2016. url: https://en.wikipedia.org/wiki/Hjorth_parameters.

5Donnchadh Ó Donnabháin. Hjorth Parameters Matlab Function. 2016. url: https://github.com/donnchadh/biosig/blob/master/biosig/


Mobility is the mean frequency, or the proportion of the standard deviation of the power spectrum, and is defined by the following equation:

$$\text{Mobility} = \sqrt{\frac{\operatorname{var}\!\left(\frac{dy(t)}{dt}\right)}{\operatorname{var}(y(t))}} \tag{2.2}$$

where $\frac{dy(t)}{dt}$ represents the first derivative of the signal $y(t)$.

Complexity represents the change in frequency and compares the signal's similarity to a pure sine wave, its value converging to 1 the more similar the signal is. It can be calculated by the following equation:

$$\text{Complexity} = \frac{\text{Mobility}\!\left(\frac{dy(t)}{dt}\right)}{\text{Mobility}(y(t))} \tag{2.3}$$

Using these markers, unique characteristics can be extracted from each of the EEG sensor readings. I intended to demonstrate that, with their use, it is possible to obtain better results in the analysis of EEG signals than by just using the original signal, making the more complex prediction models more reliable, since they depend on more input attributes to determine the desired output.
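As an illustration of Equations 2.1-2.3, here is a minimal C++ sketch of the computation over one window of samples. This is not the thesis's ControllerBCI code (which is not listed in this document); the names are illustrative, and the derivative is approximated by the first difference of consecutive samples.

```cpp
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Population variance of a signal window.
static double variance(const std::vector<double>& x) {
    double mean = std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    double acc = 0.0;
    for (double v : x) acc += (v - mean) * (v - mean);
    return acc / x.size();
}

// First difference, a discrete approximation of the derivative dy/dt.
static std::vector<double> diff(const std::vector<double>& x) {
    std::vector<double> d(x.size() - 1);
    for (std::size_t i = 1; i < x.size(); ++i) d[i - 1] = x[i] - x[i - 1];
    return d;
}

struct Hjorth { double activity, mobility, complexity; };

// Computes the three Hjorth Parameters of one window (assumed to hold
// at least three samples), following Equations 2.1, 2.2 and 2.3.
Hjorth hjorthParameters(const std::vector<double>& y) {
    std::vector<double> dy  = diff(y);   // first derivative
    std::vector<double> ddy = diff(dy);  // second derivative
    double varY = variance(y), varDy = variance(dy), varDdy = variance(ddy);
    Hjorth h;
    h.activity   = varY;                                   // Eq. 2.1
    h.mobility   = std::sqrt(varDy / varY);                // Eq. 2.2
    h.complexity = std::sqrt(varDdy / varDy) / h.mobility; // Eq. 2.3
    return h;
}
```

In practice, the three values would be computed per sensor over a sliding window and appended to the original readings as extra input attributes for the prediction models.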

II.2 Neural Net

Figure 2.1: Neural Network Example

The Neural Net6 is a computational approach based on a large collection of neural units that loosely models the way the human brain solves problems.

Each neural unit is connected to many others, and the connections can be excitatory or inhibitory in their effect on the activation state of the connected units. Each of these units may have a summation function that combines the values of all its inputs, and a threshold function that must be exceeded before the signal propagates to other neurons.

A Neural Net is a self-learning, self-training system, rather than one that is explicitly programmed, and stands out in areas where the detection of features or solutions is difficult to express in a traditional computer program.


They are composed of several layers, or have a cube format, through which the signal passes from input to output, as seen in Figure 2.1. Backpropagation is where the stimulation of the forward pass is used to readjust the weights of the “front” neural units, and it can be done in combination with training where the correct result is known in advance.

Dynamic Neural Networks are the most advanced, as they can dynamically create new connections, and even new neural units, based on rules, while removing others. One of their best points is that, after due training, neural networks can become great problem solvers where other methods may not achieve the same results; however, in some cases this requires training them for several billion iteration cycles, which makes the training phase complex and lengthy.
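To make the summation-and-threshold description above concrete, here is a minimal sketch of a single neural unit in C++. This is illustrative only: the thesis used RapidMiner's Neural Net operator rather than hand-written code, and the sigmoid is just one common choice of activation function.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One neural unit: a weighted sum of its inputs plus a bias term,
// passed through a sigmoid activation (a smooth threshold function).
double neuralUnit(const std::vector<double>& inputs,
                  const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];    // summation function
    return 1.0 / (1.0 + std::exp(-sum));  // activation in (0, 1)
}
```

A network is then a set of such units arranged in layers, with each layer's outputs feeding the next layer's inputs.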

II.3 Naive Bayes

Naive Bayes7 is a simple technique for building classifiers: models that assign class labels to problem instances, represented as vectors of feature values, where the class labels are drawn from a finite set.

It is not a single algorithm for the construction of such classifiers, but a family of algorithms based on a common principle: all Naive Bayes classifiers assume that the value of a particular feature is independent of the value of any other feature, given the class variable.

For example, a fruit may be considered an apple if it is red, round and about 10 cm in diameter. A Naive Bayes classifier considers each of these features to contribute independently to the likelihood that the fruit is an apple, regardless of any possible correlations between the color, diameter and roundness features.
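This independence assumption can be written compactly. The following is the standard formulation (not taken from the thesis), where $x_1, \dots, x_n$ are the feature values of an instance and $c$ ranges over the possible class labels:

$$\hat{y} = \underset{c}{\arg\max} \; P(c) \prod_{i=1}^{n} P(x_i \mid c)$$

In the fruit example, the score for “apple” is simply the prior probability of apples multiplied by the individual probabilities of “red”, “round” and “about 10 cm in diameter”, each estimated independently.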

Naive Bayes classifiers can, in some cases, be trained very efficiently in a supervised learning environment. In many cases, the parameters of a Naive Bayes model are estimated with the maximum likelihood method; in other words, one can work with the Naive Bayes model without accepting Bayesian probability or using any Bayesian methods.

One of Naive Bayes's advantages is that it requires very little training data to estimate the parameters required for classification.


II.4 Decision Tree

A Decision Tree8 is a flow-chart-like structure where each node is a test, each branch is an outcome of the test, and each leaf node is the final decision corresponding to the final classification.

A decision tree is composed of three types of nodes:

• Decision nodes, usually represented by squares;
• Chance nodes, usually represented by circles;
• End nodes, usually represented by triangles.

Decision Trees are commonly used in research, management, transactions, and in cases where specific information can easily be separated into groups by consecutive tests. The applicability of a Decision Tree to the problem increases its ultimate success in the desired final classification.
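As a sketch of the flow-chart structure just described, a binary decision tree node and its classification walk could look as follows in C++ (types, fields and thresholds are hypothetical, not taken from the thesis or from RapidMiner's implementation):

```cpp
#include <memory>
#include <string>
#include <vector>

// A node is either a test on one attribute (inner node, both branches
// set) or a final class label (leaf), matching the flow chart above.
struct TreeNode {
    int attribute = -1;              // index of the attribute tested
    double threshold = 0.0;          // test: attribute value <= threshold?
    std::string label;               // class label (meaningful on leaves)
    std::unique_ptr<TreeNode> left;  // branch taken when the test passes
    std::unique_ptr<TreeNode> right; // branch taken when the test fails

    bool isLeaf() const { return !left && !right; }
};

// Walks from the root down to a leaf and returns the final decision.
std::string classify(const TreeNode& node, const std::vector<double>& sample) {
    if (node.isLeaf()) return node.label;
    const TreeNode& next =
        sample[node.attribute] <= node.threshold ? *node.left : *node.right;
    return classify(next, sample);
}
```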

II.5 Deep Learning

Deep Learning9 is a machine learning branch based on a set of algorithms that attempt to model high-level abstractions in data, using a deep graph with multiple processing layers composed of multiple linear and nonlinear transformations.

One of the best points of Deep Learning is that it replaces handcrafted features with efficient algorithms for unsupervised or semi-supervised training and for extracting hierarchies of features.

Deep Learning algorithms are based on distributed representations. The underlying assumption behind distributed representations is that the observed data are generated by the interaction of factors arranged in layers. It is also assumed that the layered factors correspond to levels of abstraction or composition, and that varying their number and size can provide different amounts of abstraction.

Deep Learning takes advantage of this idea of hierarchical factors, where the higher-level, more abstract concepts are learned starting from the lower-level ones. These hierarchies are commonly constructed with a greedy layer-by-layer method, which helps choose which features are useful for learning.

Many Deep Learning algorithms are applied to unsupervised learning tasks, which is a great benefit, because unlabeled data is usually more abundant than labeled data.

8Wikipedia. Decision Tree. 2016. url: https://en.wikipedia.org/wiki/Decision_tree.

9Wikipedia. Deep Learning. 2016. url: https://en.wikipedia.org/wiki/Deep_learning.


III News and Studies Developed in the Area

I could describe the different approaches using BCI, how they were achieved, and the forms and methods used to obtain them in depth; however, the fundamental objective of this work is not that, but the potential of using this type of equipment and technology to improve human well-being and to overcome barriers previously considered insurmountable.

So here are some of the various news items and results obtained in studies and research in the field of BCI, mostly oriented toward robotic prosthetics, which is the field that interests me the most, as previously stated in the motivation for this work.

Figure 2.2: Pplware News - “Brain controlled car drives in straight line”

One of the latest and most impactful news items in this area of development was published in Portugal by the site Pplware, entitled “Car controlled by the brain already drives in a straight line”,10 where a team of researchers from Nankai University, in the northeastern city of Tianjin, after 2 years of investigation, were able to make a driver, equipped with a non-invasive BCI (in this case the Emotiv EPOC+®, the same used in implementing this thesis), control a car.

The driver could only move the car forward; however, the team's ultimate goal is to achieve complete movements, such as moving forward, moving backward and turning around.

The project was initially born to help people with disabilities, who could not physically control a car, to drive without the use of hands or feet; but as the project progressed, the team saw that the technology could be used not only for this purpose but also by people without disabilities, thus freeing the driver's hands and feet and allowing the car to be controlled with the mind alone.

This news is somewhat in line with the objective of this thesis, which intends to use BCI in robotic applications. In the same way they moved the car forward, a BCI could also be used to enable a production line, move a wheelchair, control a robot, call the elevator, turn on the coffee machine, turn on the TV, change the channel, etc. I do not say that all these actions can currently be performed, but with the advances that have been achieved in the area, they will soon be within the reach of us all.

10Pplware. Carro controlado pelo cérebro já conduz em linha recta. 2015. url:


Figure 2.3: Design News - “Teen Invents Artificial Arm Controlled by Bluetooth-Powered Brain Waves”

Another very interesting news item was published by the site Design News in March 2014, entitled “Teen Invents Artificial Arm Controlled by Bluetooth-Powered Brain Waves”,11 where a boy of just 15 years old, Shiva Nathan, developed a robotic arm controlled by brainwaves.

He had initially set out to develop a game, but in the meantime he discovered that a cousin of his had lost both arms, found that his cousin's prostheses were very expensive and not very functional, and saw that he could do better by taking advantage of what he had already been developing for the game.

He developed his prosthesis based on Arduino and on the Mindwave Mobile12 headset, which is a relatively cheap non-invasive BCI.

The headset is capable of recognizing two states, attention and meditation, and converts them into digital values that are sent to the micro-controller, which moves the arm through interaction with servos, based on previously defined thresholds. At the time of the news, the arm could only flex its fingers and rotate at the elbow; however, he was already trying to obtain models for the movement of each finger.

Figure 2.4: Seeker - “Robot Arm Follows Brainwave Instructions”

In another article, released by the site Seeker in May 2012 and entitled “Robot Arm Follows Brainwave Instructions”,13 Hochberg and John Donoghue managed to develop a partially invasive BCI-based system capable of enabling test patients with paralysis to move a robotic arm.

The system uses small electrodes implanted directly into the primary motor cortex, the part of the brain that controls movement. Signals are routed through a tiny box on the scalp, which is connected by wire to a refrigerator-sized computer. Through an algorithm, the computer translates the brain's movement patterns into commands that are transmitted directly to the robotic arm.

11Design News. Teen Invents Artificial Arm Controlled by Bluetooth-Powered Brain Waves. 2014. url: http://www.designnews.com/author.asp?section_id=1386&doc_id=272470&itc=dn_analysis_element.

12MindWave. MindWave Mobile Headset. 2016. url: https://www.mindtecstore.com/en/mindwave-mobile-brainwave-starter-kit?gclid=CNSj0fy-_M8CFUI8GwodgEsPGg.


Both patients were able to move the robotic arm and grab a foam ball, but one of the situations that most moved the creators of the system was when one of the patients picked up a cup of coffee and managed to drink from it through a straw, something she had not been able to do for the previous 15 years, due to the accident that left her disabled.

Although the results were good, the success rate was about two thirds of the attempts, and the movements were not as fast and accurate as those of a normal human arm; even so, this experience brought hope back to many people with disabilities.

It should be noted that these results were obtained through a partially invasive BCI, which is not easily accessible, and potentially not everyone would undergo an operation to have one, although such systems help a lot to improve the research and investigation in the area.

Figure 2.5: Mail Online - “As scientists discover how to ’translate’ brainwaves into words... Could a machine read your innermost thoughts?”

Another curious news item was released by the site Mail Online in May 2012, entitled “As scientists discover how to ’translate’ brainwaves into words... Could a machine read your innermost thoughts?”,14 in which a team of scientists developed a technique that allows our minds to be read. The researchers managed to record the complex patterns of electrical activity of the brain of a volunteer as he heard someone talking.

By submitting these wave patterns to a computer program, they managed to turn them back into words, which corresponded exactly to the words that the volunteer had heard the other person speak. The scientists involved believe they can go further and even read the unspoken words and thoughts of a person. This advance is very important because, one day, when the technology is fully developed and tested, a doctor could, for example, speak through thought with a patient who suffered a stroke and is unable to speak naturally.

Another impressive news item was released by the site MIT Technology Review in January 2015, entitled “A Brain-Computer Interface That Works Wirelessly”:15 after a decade of work, researchers at Brown University, together with a company from Utah, Blackrock Microsystems, began to market a wireless device that can be connected to a person's skull and transmits via radio the thought commands picked up by an implant in the brain.

14Mail Online. As scientists discover how to ’translate’ brainwaves into words... Could a machine read your innermost thoughts? 2012. url: http://www.dailymail.co.uk/sciencetech/article-2095214/As-scientists-discover-translate-brainwaves-words--Could-machine-read-innermost-thoughts.html.

15MIT Technology Review. A Brain-Computer Interface That Works Wirelessly. 2015. url:

Figure 2.6: MIT Technology Review - “A Brain-Computer Interface That Works Wirelessly”

At the time of the news, the Utah company was in talks with the US Food and Drug Administration for the wireless mental device to be cleared for testing on volunteers.

The big limitation of this type of implant is the fact that volunteers could only use it in the presence of a team of scientists in a laboratory, which is a major detriment to the advancement of this technology based on invasive BCI systems.

Next comes a piece of news published by Quartz in September 2015, titled “This mind-controlled prosthetic robot arm lets you actually feel what it touches”,16 where the US government had been able to develop a robotic arm that allows the user to feel things. Earlier, at a conference in July, the US Defense Advanced Research Projects Agency (DARPA) had claimed to have built a robotic arm capable of being controlled by the user through their brain; as of the date of this news, it updated that claim to being not only able to control the arm, but also to feel things with it.

Figure 2.7: Quartz - A patient using an early version of the robot arm. (DARPA)

The system works by physically connecting wires to the user's motor and sensory cortex, which control the movement of muscles and the identification of tactile sensations when touching something, respectively. Thus, when the user moves the robotic arm through their motor cortex, the pressure sensors in the arm transmit the sensation back to the user's sensory cortex.

The level of touch perception is so good that when the researchers blindfolded one of the volunteers and touched one of the robotic fingers, he was able to identify which one was touched. They then tried two at the same time, and he jokingly asked if they were trying to trick him, but he was also able to identify both touched fingers, thus demonstrating an almost natural recognition of sensation.


This was the result of nine years of work on the DARPA project “Revolutionizing Prosthetics”, but they still keep some of the concrete results of the investigation secret, for example how sensitive the robotic arm is in identifying different textures.

The next news item comes from The New York Times, from May 20, 2015,17 where Johns Hopkins University Applied Physics Lab engineers developed a robotic arm prosthesis with 26 joints, capable of picking up approximately 20 kg, and controlled through the mind of the user.

Figure 2.8: The New York Times - “Prosthetic Limbs, Controlled by Thought”

In Figure 2.8 we see a man named Les Baugh, who lost both arms in an accident when he was a teenager. Now 59 years old, and after undergoing surgery to remap the nerves from the arms he lost so that he can send signals from his brain to the Modular Prosthetic Limb (M.P.L.), he can control both robotic arms seen in the figure using only his mind.

Of course, the movements are not as fluid as a normal person's, but even so he is able to grab small objects like balls, cubes and even bottles, as seen in the figure, where you can see him drinking from one through a straw.

The chief engineer, Mike McLoughlin, said that as Les Baugh's nerve remapping deepened, he would be able to feel sensations through the prostheses, which have more than 100 sensors each. Some of the other patients who took part in the development and testing of the M.P.L. were able to feel some textures through touch.

The robotic arm developed is modular, and can be modified in order to customize the limb to the needs and measurements of the user. However, this robotic arm is still in the research and development stage, and to be commercialized it would have to be approved by the Food and Drug Administration. Note that at the time of the news there were only 10 robotic arms, and each one cost $500,000, which demonstrates how expensive it is; construction costs will also have to be reduced to make the price of buying one more feasible.

Although the current approach uses an invasive method, the ultimate goal is to have a robotic prosthesis that is fully controllable in a non-invasive way, without the need for surgeries or implants.


(a) Össur's Robotic Prosthetic Foot Mounted
(b) Össur's Robotic Prosthetic Foot Close Up

Figure 2.9: Popular Science - “Brain-Controlled Bionic Legs Are Finally Here”

The last article chosen comes from the Popular Science website, entitled “Brain-Controlled Bionic Legs Are Finally Here”,18 from May 2015, where an Icelandic company called Össur, a prosthetics developer, created an implanted myoelectric sensor (IMES) which, implanted in the user, enables the control through the mind, in this case, of a prosthetic robotic foot, as can be seen in Figure 2.9a.

The surgery is relatively fast; according to an orthopedic surgeon and the R&D chief of Össur, it takes 15 minutes, and each sensor needs only a 1 cm incision, because the sensors are quite small, at 3 mm x 80 mm. These sensors are powered by a coil embedded in the encapsulation, which later connects to the prosthesis; as there are no batteries involved, there is no need to replace the sensors unless they suffer a malfunction for any reason. The prosthesis moves as the sensor, frontal or rear, picks up impulses in the local muscle tissue.

Another significant aspect of this sensor is that, since it does not need to be connected to any specific nerve, it is not necessary to perform surgeries to remap the nerves to the prosthesis area, as we saw in the previous news item.

The company Össur has not yet released the price of the IMES technology; however, they already have extensive testing on their integrated leg, knee and foot prostheses, where one of the test users had already been testing a robotic foot with the IMES technology for 14 months, with great results according to the company and the user.

These were some of the many news items regarding ongoing research in the field of BCI. The interfaces that offer the best results are undoubtedly the invasive or partially invasive ones but, as mentioned earlier, at the moment these solutions are mostly very complex, and most of them can only be used in the presence of a team of scientists, or are in early stages of testing.

18Popular Science. Brain-Controlled Bionic Legs Are Finally Here. 2015. url: http://www.popsci.com/brain-controlled-bionic-legs-


Non-invasive interfaces are the most popular: they are able to achieve relatively good results, at much more affordable prices than the others, and are easy to use and access. Of course, they have their limitations, signal quality being the main one, preventing them from decoding more complex brain functions as the invasive type can, thanks to its proximity to the brain. Even so, these limitations are outweighed by the cost of acquiring an invasive BCI and by not being able to use one outside the laboratory or unassisted.


Chapter 3

Objectives

Before discussing the process and all the materials involved in the development of this work, I will first present the original objectives that were set as its basis:

• Develop a program that enables the visualization of the data that is gathered by the EPOC+ headset;

• Develop a program that records data in a controlled manner, by creating pre-determined recording sessions;

• Analyze the data gathered to determine the most influential sensors in the requested action/thought;

• Apply filtering to the data to minimize the influence of noise;

• Calculate the Hjorth Parameters;

• Create and test the prediction models using RapidMiner®;

• Develop a program that tests the prediction model created, in real time, using the EPOC+ readings.

In the next chapter I will start by explaining all the hardware and software used and developed to accomplish these objectives, as well as the methodology followed in their execution.


Chapter 4

Brain Computer Interface - BCI

The development of a Brain Computer Interface (BCI) involves the study of brainwaves, as stated before, in order to be able to interpret the brain signals as concepts/classes perceivable by other external devices.

For the proper identification of these concepts/classes, it is necessary to acquire data that is as reliable as possible, with the lowest possible presence of noise. Furthermore, it is also necessary to make sure that, after the data acquisition, it is possible to identify the moments that correspond to the desired class; for example, the time when the test subject blinked the left eye, the right eye or both at the same time, or the time when they were asked to visualize a mental action.

Thus, it is necessary to ensure the synchronization of the recorded information with the identification of the moments to be studied.

What follows is a brief explanation of the hardware and software used, as well as of the process developed for EEG data acquisition and processing.

I Equipment and Software Used

I.1 Emotiv EPOC+ Headset

The main equipment used was the Emotiv EPOC+®1 helmet, which consists of 14 EEG channels designed for contextualized research and advanced Brain Computer Interface applications. The EPOC+ also has gyroscope sensors, a wireless connection via Bluetooth® Smart, and an internal battery lasting 12 hours.


Figure 4.1: Emotiv EPOC+

Figure 4.2: Sensor Map of EPOC+

With the EPOC+ it is possible to access the RAW information from the AF3, AF4, F7, F8, F3, F4, FC5, FC6, T7, T8, P7, P8, O1 and O2 sensors, corresponding to the positions that can be seen in Figure 4.2.

It is from these sensors that the non-filtered EEG information (RAW data) can be accessed at each of the points described above, to be subsequently processed and analyzed.

The information returned by each sensor is sampled at 128 Hz, with a resolution of 14 bits in which 1 LSB = 0.51 µV, and a dynamic range of 8400 µV.2
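As a quick sanity check of these figures (the raw count below is a hypothetical value, and the handling of the sensor's DC offset is device-specific and omitted), the 14-bit resolution and the 0.51 µV step are consistent with the quoted dynamic range:

```cpp
#include <cstdio>

int main() {
    const double lsbMicrovolts = 0.51;  // 1 LSB = 0.51 uV
    const int levels = 1 << 14;         // 14 bits -> 16384 levels

    // 16384 * 0.51 uV ~= 8356 uV, close to the quoted 8400 uV range.
    std::printf("full scale ~= %.0f uV\n", levels * lsbMicrovolts);

    // Converting one raw sample count to microvolts.
    int raw = 8421;                     // hypothetical raw count
    std::printf("sample ~= %.2f uV\n", raw * lsbMicrovolts);
    return 0;
}
```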

I.2 Software

The main programs used were: Matlab®, for the analysis and filtering of the RAW information from the EPOC+; RapidMiner®, for the analysis and creation of models able to detect the desired classes; and finally NetBeans®, for the development of additional programs designed for the capture, visualization and recording of data from the EPOC+ and for the detection of the desired classes, such as the blink of an eye.

In Matlab® I took advantage of its great ease in handling information in large formats, and also of its analytical and mathematical capabilities, to apply the filters required for the proper cleaning of the RAW information (a process explained in the next section) and, consequently, the proper application of the identification tag at the moments where the class(es) to be analyzed occurred.

RapidMiner® was used because of the simplicity and ease of use of its operators, previously created by the RapidMiner® development team, covering various data prediction models, among many other things; in this case, the Neural Net, Naive Bayes, Decision Tree and Deep Learning operators, for the proper training, testing and prediction of these models with regard to our identification tag class.

I.3 Developed Software

As mentioned earlier, it was necessary to develop some additional programs essential to this work. I must state that, although these programs are able to present the desired results, they have some problems that could be improved, but these are not essential for the analysis and accomplishment of this work.

Initially, in addition to the helmet access, there was no program available that would allow me to view the information read by the EPOC+ helmet, other than the Emotiv® “TestBench” program, which runs on an operating system other than the one intended for the development of this work, Linux. In a previous work, I had developed an interface in Linux that interacted through a computer with a wheelchair, where I virtualized the joystick control in a program developed in C++, thus making it possible to control the wheelchair through the computer instead of the chair's manual joystick.

So I needed to develop a program capable of showing the information read by the helmet in the form of a graph, as in an actual EEG. This way I could see the information from each of the sensors and thus begin a visual analysis of the information read from the helmet while performing any of the classes intended for classification.

The program developed was called “ControllerBCI Visualizer”, which, as its name says, allows the visualization of the information read by the helmet, in real time, in a graph format, as can be seen in Figure 4.3.

The program was developed in C++ together with QT5, used to design the window interface and the features of the program. It is possible to visualize the information from all the sensors of the EPOC+ helmet, as seen in the figure, or only from those we want to analyze with more attention, since it is possible to choose, with checkboxes, which sensor(s) are to be displayed.

One of the problems of this program is the refresh time of each of the graphs, which can take up to 50 ms, leading to the loss of a maximum of 7 signal readings, since the sampling rate is 128 Hz (0.050 s × 128 Hz ≈ 6.4 samples). Visually, no difference is noticeable in the drawing of each of the graphs, but when printing the information read to the Linux terminal it was possible to verify this situation, and I came to the conclusion that it was due to a function related to the refresh of each plot.


However, this problem does not prevent the accomplishment of this work, since the main purpose of the program is to visualize the information with the helmet on, perform some tests in real time, and try to identify changes in the sensor information visually.

Figure 4.3: Demonstration of ControllerBCI Visualizer Interface

After being able to visualize the information, it was necessary to find a way to save it, in order to process and analyze it later. Thus the second program, “ControllerBCI Recorder”, was created, also in C++ together with QT5, which enables the recording of the information read from the helmet to a predefined file, “recordData.txt”. The main menu of the program can be seen in Figure 4.4.

In order to test the program, the information read from the EPOC+ helmet during the recording process was also printed to the Linux terminal, and at the end the information printed on the terminal was compared with the information recorded in the file, guaranteeing that the information was the same and without corruption. This process was quite simple to implement and helped confirm that the program recorded all the information properly.

The program has several recording options. One of them is a free session, without any requests for the user to perform any class; it also has several pre-configured recording sessions for specific recordings, where the user performs classes in a given order or in a random order, to help in the creation of the prediction models.


Figure 4.4: Demonstration of ControllerBCI Recorder Interface

Among the several sessions available in the program are the four that were used for this thesis. “Blinking Mix 1” consists of 5 repetitions of the classes “Blink Left”, “Blink Right” and “Blink Both”, corresponding to blinking the left eye, then the right, and finally both eyes. The “Blinking Mix 2” session is composed of the same 15 class requests described above, but in a totally random order.

This process works as follows: after the user presses the record button, a black image fills the screen for 3 seconds; then the sequence begins, in which an image appears for 1 second asking the user to, for example, blink the left eye, followed again by the black image for 3 seconds, signalling the user to stop the requested action/thought.

When the user was asked to blink the left eye, the recording program labelled that second as “Blink Left”, and likewise for the other classes with their respective labels, thus marking in the recorded information the time windows in which the user blinked the left eye or performed any of the other classes. Every remaining time span, in which no class was requested, was tagged “Unknown”.
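
For illustration, this tagging rule can be reduced to a lookup of the sample’s timestamp in the list of cue windows produced by the session script. The sketch below is a hypothetical rendering of that rule, not the recorder’s actual code.

    // Hypothetical sketch of the tagging rule: a sample recorded at time t
    // receives the class requested by the cue active at t, or "Unknown"
    // when no cue is active (e.g. during the 3-second black screens).
    #include <string>
    #include <vector>

    struct Cue {
        double start, end;      // seconds since the session began
        std::string label;      // e.g. "Blink Left"
    };

    std::string tagForTime(double t, const std::vector<Cue>& cues) {
        for (const Cue& c : cues)
            if (t >= c.start && t < c.end)
                return c.label;
        return "Unknown";
    }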

The other two sessions used were “Move Mix 1”, composed of 2 sequences of “Move Left”, “Move Right”, “Move Forward”, “Move Back” and “Stop”, followed by two more requests for “Move Left” and “Move Right”. Similarly, the “Move Mix 2” session is composed of the same 12 class requests, but in a totally random order.

This process was performed in the same way as in the “Blinking Mix” sessions, the difference being that the cue image was held for 3 seconds instead of 1, so that the user could mentally visualize the requested action during those three seconds.

Note that the reason why the sessions were not recorded with more than 12 requests was that the version of RapidMiner used was v7.2.003 with the free license, which limits the program to 10,000 entry points; that is, each file may contain at most 10,000 lines of sensor readings, which at 128 Hz corresponds to approximately 1 minute and 18 seconds (10,000 / 128 ≈ 78 s), the maximum continuous recording time for each session. I consider this one of the key points of the interpretation at this stage of the analysis, and I will return to it in the findings.

Each stored entry contains the readings of all the sensors at that moment; the signal quality of each sensor (i.e. whether the sensor was properly placed and the signal could be read well), provided by the EPOC+ helmet itself; the time at which the signals were read; the tag corresponding to the class the user was asked to perform, if one was requested at that time; and the battery state of the helmet. A serial number was also added to help detect packet loss.
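
The exact line layout of “recordData.txt” is not reproduced here, so the following sketch only illustrates how such a record could be serialized; the field order is an assumption, and the 14 channels correspond to the EPOC+ sensors.

    // Illustrative only - the field order is an assumption, not the real
    // format. One line per sample: serial number, timestamp, 14 raw sensor
    // values, 14 per-sensor quality values, class tag and battery level.
    #include <cstdio>

    void writeRecord(std::FILE* f, long serial, double timestamp,
                     const float raw[14], const int quality[14],
                     const char* tag, int battery) {
        std::fprintf(f, "%ld %.3f", serial, timestamp);
        for (int i = 0; i < 14; ++i) std::fprintf(f, " %.2f", raw[i]);
        for (int i = 0; i < 14; ++i) std::fprintf(f, " %d", quality[i]);
        std::fprintf(f, " %s %d\n", tag, battery);
    }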

A limitation of this program is that changing the preconfigured sessions requires editing the source code and recompiling the program before it can be used again. This is one of the main points that could be improved, namely by allowing personalized sessions to be created, saved and run from the main menu.

The third program created was “ControllerBCI Decoder”. After analyzing the information and creating predictive models in RapidMiner, the last program to be developed was the one that implements one of those predictive models, in order to verify the proper functioning of the model in real time.

Thus, this program reuses much of the code of the other two programs for reading and writing information, adding only the methods necessary to compute the Hjorth Parameters and to apply the filters to the RAW information, more specifically the frequency filter and the moving average filter, a process discussed in more detail in Section: Data Gathering.

The implemented program runs the model that gave the best results for detecting the blink of an eye, the “Decision Tree”, as can be seen in the results obtained (Chapter 5), and whenever it detects a blink it writes to the Linux terminal the time of the detection and the class detected.


II Approach to Solve Problem

II.1 Data Gathering

Initially, the first approach was to view the various EEG signals in real time and try to interpret them. For that, the “ControllerBCI Visualizer” program was used, with which I was able to view all the sensor signals from the EPOC+ headset.

After that, a process was formulated to save all the sensor signals so they could be used to create the prediction models that would detect the desired classes. With the help of the developed application “ControllerBCI Recorder”, it became possible to save the information of a recording session to a text file for later review, with appropriate labels marking the areas where the user was asked to perform the desired action.

As previously described, this process used predefined recording sessions with various actions, in which the user, after putting the program in record mode, simply performed the actions requested by the images shown by the program.

II.2 Data Analysis and Filtering

With the data properly recorded comes the data analysis. The first step was an analysis in the frequency domain, to identify the regions carrying most of the information and remove most of the noise. The spectral analysis confirmed the presence of the dominant frequencies that can be seen in Figure 4.5.

Figure 4.5: Original Data with Label and Spectrum Analysis - Sensor AF3 - Blinking Mix 1 Session


As can be seen in Figure 4.5, there are two signals: one for the sensor data and another for the label, which marks the time zones where the user was asked to perform a given class. Analyzing the frequency spectrum, we can see that most of the information is concentrated below 5 Hz. A close-up of this region can be seen in Figure 4.6.
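
For reference, a magnitude spectrum of the kind shown in Figures 4.5 and 4.6 can be reproduced offline with a naive DFT; the sketch below is illustrative only, since the actual analysis was carried out in Matlab.

    // Naive DFT magnitude, O(n^2): enough to inspect offline where the
    // energy of one sensor's signal sits (bin k corresponds to k*fs/n Hz).
    #include <cmath>
    #include <cstddef>
    #include <vector>

    std::vector<double> magnitudeSpectrum(const std::vector<double>& x) {
        const std::size_t n = x.size();
        const double pi = std::acos(-1.0);
        std::vector<double> mag(n / 2);
        for (std::size_t k = 0; k < n / 2; ++k) {
            double re = 0.0, im = 0.0;
            for (std::size_t i = 0; i < n; ++i) {
                const double w = 2.0 * pi * (double)k * (double)i / (double)n;
                re += x[i] * std::cos(w);
                im -= x[i] * std::sin(w);
            }
            mag[k] = std::sqrt(re * re + im * im) / (double)n;
        }
        return mag;
    }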

Given this, a low-pass filter was designed to eliminate the information above 4 Hz. It was developed with the assistance of Matlab, using its built-in “Filter Design & Analysis” application. The filter code can be found in Appendix B: Low Pass Filter.

Figure 4.6: AF3 - Frequency Spectrum - Close Up

After the low-pass filter, a moving average was applied to smooth out abrupt short-term changes, giving the signal a cleaner visual appearance. The result of applying the two filters can be seen in Figure 4.7.

Figure 4.7: AF3 - Original vs. Filtered

As Figure 4.7 shows, the low-pass filter removed much of the irregularity in the original signal, and after applying the moving average filter, shown by the red line, the signal became even cleaner and centered around 0 µV. This was expected because, as can be observed, the original signal has a slight offset, which the moving average filtering stage removed.
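
As a rough illustration of the smoothing step, a trailing moving average can be written as below; the window length, and the use of a trailing rather than centered window, are assumptions and not the thesis implementation.

    // Minimal sketch (assumed window length n, not the thesis code):
    // trailing moving average used to smooth the low-pass-filtered samples.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    std::vector<double> movingAverage(const std::vector<double>& x,
                                      std::size_t n) {
        std::vector<double> y(x.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            sum += x[i];
            if (i >= n) sum -= x[i - n];   // drop sample leaving the window
            y[i] = sum / std::min<std::size_t>(i + 1, n);
        }
        return y;
    }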

With the signal clean comes the next phase, the determination of the Hjorth Parameters. Matlab was used for this calculation, applying the equations shown before.
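
For clarity, the standard Hjorth definitions translate directly into code; the C++ sketch below mirrors what the Matlab computation does and is provided for illustration only.

    // Hjorth parameters of a filtered EEG window (standard definitions):
    // Activity   = var(y)
    // Mobility   = sqrt(var(y') / var(y))
    // Complexity = Mobility(y') / Mobility(y)
    #include <cmath>
    #include <cstddef>
    #include <vector>

    static double variance(const std::vector<double>& x) {
        double mean = 0.0;
        for (double v : x) mean += v;
        mean /= (double)x.size();
        double var = 0.0;
        for (double v : x) var += (v - mean) * (v - mean);
        return var / (double)x.size();
    }

    static std::vector<double> diff(const std::vector<double>& x) {
        std::vector<double> d(x.size() - 1);
        for (std::size_t i = 1; i < x.size(); ++i) d[i - 1] = x[i] - x[i - 1];
        return d;
    }

    void hjorth(const std::vector<double>& y,
                double& activity, double& mobility, double& complexity) {
        const std::vector<double> dy  = diff(y);
        const std::vector<double> ddy = diff(dy);
        activity   = variance(y);
        mobility   = std::sqrt(variance(dy) / activity);
        complexity = std::sqrt(variance(ddy) / variance(dy)) / mobility;
    }

Activity reflects the signal power, Mobility its dominant frequency and Complexity the change in frequency content, which is why these three values condense each window into features suitable for the classifiers.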

With this, we conclude the cleaning and preparation of data.

II.3 Creation of Prediction Models

Next comes the determination and comparison of predictive models for class detection in RapidMiner. Using the procedures available in RapidMiner and assembling a test process, it became possible to create a model from one training session and then apply it to a different session, thus verifying the accuracy of the model created from the first session in detecting the desired classes. An example of the model creation and test process can be seen in Figure 4.8.

Figure 4.8: Rapidminer - General Process

Within each of the four blocks visible on the right is the process shown in Figure 4.9, where the only thing that differs between the four is the block selecting the learning method to be used.


Figure 4.9: Rapidminer - Inner Process

The steps above describe the general procedure for filtering, processing, model creation and class identification, starting from the reading of the EEG signals from the EPOC+ helmet.


Chapter 5

Results

First of all, I must point out that the values used in this study were obtained from readings performed on myself with the EPOC+ helmet. The “Mix 1” sessions were recorded early in the morning, after waking up, and the “Mix 2” sessions late at night, before going to sleep. This was done to include the mental fatigue accumulated after a day of work, in my case developing the software, filtering data and testing.

Turning now to the actual detection of classes, it is split into two parts. The first part covers the identification of the classes “Blink Left”, “Blink Right” and “Blink Both”, corresponding respectively to blinking the left eye, the right eye and both eyes.

The second part covers the identification of the classes “Move Left”, “Move Right”, “Move Forward”, “Move Back” and “Stop”, corresponding respectively to the thought of moving left, right, forward, backward and, finally, stopping.

I Blink State Detection

As stated earlier, this section presents the process and the results of identifying the classes “Blink Left”, “Blink Right” and “Blink Both”.

To do this, the program “ControllerBCI Recorder” was used to record the two independent sessions, called “Blinking Mix 1” and “Blinking Mix 2”, explained previously in the Section: Developed Software.

After recording the RAW information using the method explained in Section: Data Gathering, the following results were obtained.


I.1 Results

Analyzing the results of the filtering and of the Hjorth Parameters calculation, it was clear that the blinking of the eyes is easily visible in sensors AF3, AF4, F7 and F8, as shown in the following figures, which overlay the filtered signals of AF3 and AF4, and of F7 and F8, for each of the recorded sessions.

Note that, looking at Figure 5.3 and taking into account that it corresponds to session “Blinking Mix 1”, you can see the zones tagged by the label signal, where the first zone corresponds to the class “Blink Left”, the second to “Blink Right” and the third to “Blink Both”, as explained above.

Figure 5.1: Blinking Mix 1 - Sensors AF3 and AF4

Figure 5.2: Blinking Mix 2 - Sensors AF3 and AF4

Figure 5.3: Blinking Mix 1 - Sensors F7 and F8

Figure 5.4: Blinking Mix 2 - Sensors F7 and F8

Seeing that these are the sensors showing changes visible to the naked eye during a blink, it was decided to conduct the first analysis of the sessions “Blinking Mix 1” and “Blinking Mix 2” in RapidMiner, using the process explained above and considering only the data from sensors AF3, AF4, F7 and F8.

The information of session “Blinking Mix 1” was used to create the model and the information of session “Blinking Mix 2” was used as test data.


Here are the results obtained.

Table 5.1: Blink Detection - Overall Results - Without Hjorth Parameters

Model          Metric            true Unknown   true Blink Both   true Blink Left   true Blink Right
Neural Net     class precision   74.82%         49.82%            45.24%            44.76%
Neural Net     class recall      89.28%         21.92%            24.69%            27.65%
Naive Bayes    class precision   74.91%         34.57%            58.46%            36.90%
Naive Bayes    class recall      91.97%         17.92%            23.90%            15.80%
Decision Tree  class precision   74.87%         35.80%            81.88%            48.00%
Decision Tree  class recall      96.10%         19.36%            17.77%            9.48%
Deep Learning  class precision   73.85%         21.80%            45.85%            55.12%
Deep Learning  class recall      82.87%         22.88%            25.16%            28.91%

You can see the detailed data in Appendix A: section I - Blink Detection - Without Hjorth Parameters.

Next, the overall accuracy of each of the studied methods is shown.

Table 5.2: Blink Detection - Method Accuracy Comparison

            Neural Net   Naive Bayes   Decision Tree   Deep Learning
Accuracy    70.34%       70.62%        72.45%          66.08%

Before analyzing the data, there are two concepts to understand: class precision and class recall.

Class precision measures, out of all the times the model outputs a given class, for example “Blink Left”, how many of those predictions really were “Blink Left”.

Class recall measures, out of all the samples that truly belong to a given class, for example “Blink Left”, how many of them the model correctly identified as that class.
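
For reference, these correspond to the standard definitions where, for a class c, TP_c, FP_c and FN_c denote its true positives, false positives and false negatives:

    \mathrm{precision}_c = \frac{TP_c}{TP_c + FP_c}
    \qquad
    \mathrm{recall}_c = \frac{TP_c}{TP_c + FN_c}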

Now, regarding the results obtained: although Table 5.2 shows reasonably high overall accuracies, within each method the class with the highest recall is “Unknown”, never below 82.87%, while the other classes have much lower recall values, between 9.48% and 28.91%. This is not very good, because we want above all a good accuracy for the “Blink” classes, not for the “Unknown” class. Let us look at the true-positive detection accuracy of each class, recalling that the test carried out includes five repetitions of each class, with 128 samples on average per repetition.
