DESIGN AND ANALYSIS OF VIBRO-TACTILE HAPTIC DEVICE FOR HEARING IMPAIRED PERSON
A PROJECT REPORT
submitted by
J. Haripriya (118101005)
towards partial fulfillment of the requirements for the award of the degree
of
Master of Technology (Integ).

in
Instrumentation & Control

School of Electrical and Electronics Engineering
SASTRA DEEMED TO BE UNIVERSITY
(A University established under section 3 of the UGC Act, 1956)
Tirumalaisamudram
Thanjavur-613 401
MAY 2018
BONAFIDE CERTIFICATE
This is to certify that the project work entitled “DESIGN AND ANALYSIS OF VIBRO-TACTILE HAPTIC DEVICE FOR HEARING IMPAIRED PERSON” is a bonafide record of the work carried out by
J.Haripriya (118101005)
student of the final year M.Tech (Integ.) Instrumentation & Control, in partial fulfillment of the requirements for the award of the degree of M.Tech (Integ.) Instrumentation & Control of SASTRA DEEMED TO BE UNIVERSITY, Tirumalaisamudram, Thanjavur 613401, during the year 2017-2018.

NAME OF THE INTERNAL GUIDE : Dr. G. Balasubramanian
(SAP /SEEE)
SIGNATURE
Project Viva-voce held on _____________________________
Examiner -I Examiner-II

ACKNOWLEDGEMENTS
I would like to express my heartfelt thanks to Prof. R. Sethuraman, Vice Chancellor, SASTRA Deemed to be University, for providing all the necessary encouragement and facilities during the course of study.

I extend my sincere thanks to Dr. G. Bhalachandran, Registrar, SASTRA Deemed to be University for providing the opportunity to pursue this project.

I extend my deepest gratitude to Dr. S. Vaidhyasubramaniam, Dean-Planning & Development, SASTRA Deemed to be University and to Dr. S. Swaminathan, Director, CeNTAB, SASTRA Deemed to be University for giving this wonderful opportunity.

I extend sincere thanks to Dr. S. Jayalalitha, Associate Dean, EIE, SEEE, SASTRA Deemed to be University who motivated me during the course of study and helped me in every way during the curriculum and project work.

I am thankful to my guide Dr. G. Balasubramanian, Senior Assistant Professor, EIE who had been a source of constant encouragement and technical support throughout the project. Without his support, this project would not have been possible.

I thank Dr. S. Rakesh Kumar, Assistant Professor-III, EIE, who has been a milestone in my career by providing me an honorable project to work upon. His deep insight in the field and invaluable suggestions helped me make progress in my project work.


(HARIPRIYA J)
ABSTRACT
DESIGN AND ANALYSIS OF VIBRO-TACTILE HAPTIC DEVICE FOR HEARING IMPAIRED PERSON
KEY WORDS: Tactile sensation, Vibration motor, FIR filter, Assistive device, Four-channel array
A tactile sensory aid enables speech perception through the skin, unlike conventional hearing aids. In existing tactile aids, the skin is stimulated by electrodes placed on its surface. This approach has a serious drawback: without proper insulation, the electrodes may pass current through the skin and harm the wearer. Vibration motors are therefore used in place of electrodes to avoid such risks.

The main objective of this project is to design a vibro-tactile haptic device for the hearing impaired. The proposed device exploits the functional similarity between the ear and the skin: the place theory of hearing parallels the tactile sensation of the skin, making it possible to localize vibrations. This tactile sensation enables a person with mild to profound deafness to perceive speech, demonstrated here with standard vowel speech signals. Similar to a cochlear implant, the speech signal is decomposed into several frequency bands using FIR filters. A four-channel-array skin hearing aid is developed by segmenting the entire acquired speech into four FIR filter bands, each driving a vibration motor. The results obtained are analyzed using confusion matrices illustrating the working of the prototype.

TABLE OF CONTENTS
ACKNOWLEDGEMENTS(iii)
ABSTRACT(iv)
ABBREVIATIONS(vii)
NOTATION(vii)
LIST OF TABLES(viii)
LIST OF FIGURES(viii)
CHAPTER 1 INTRODUCTION(1)
1.1 Introduction(1)
1.2 Hearing loss(2)
1.3 Causes of hearing loss(3)
1.4 Types of hearing disorder(4)
1.4.1 Based on the nature of loss(4)
1.4.2 Based on the degree of loss(4)
1.5 Types of assistive devices(4)
CHAPTER 2 LITERATURE REVIEW(9)
CHAPTER 3 HEARING MECHANISM(11)
3.1 Anatomy of human ear(11)
3.2 Anatomy and function of cochlea(14)
CHAPTER 4 DESIGN OF THE PROPOSED HEARING DEVICE(17)

4.1 Introduction to tactile sensing(17)
4.2 Block Diagram of the device(18)
4.3 Digital Signal Processor(19)

4.3.1 Filter bank(20)
4.4 Vibration motor array(24)

4.5 Arduino(26)

4.6 Algorithm for speech processing(27)
CHAPTER 5 TESTING AND VALIDATION OF THE DEVICE(28)
5.1 Evaluation of vibration motor sensitivity(28)
5.2 Testing using standard speech signal(29)
5.3 Testing using actual speech signal(33)
5.4 Limitations(34)

CHAPTER 6 CONCLUSION(35)
REFERENCES(36)
APPENDIX(38)
A.1 Program to initialize Arduino(38)
A.2 Program for acquiring speech online(38)
A.3 Program for filter design(38)

ABBREVIATIONS
ALD Assistive Listening Device
SHL Sensorineural Hearing Loss
FIR Finite Impulse Response
DSP Digital Signal Processing
LPF Low Pass Filter
BPF Band Pass Filter
HPF High Pass Filter
MATLAB MATrix LABoratory
FFT Fast Fourier Transform
PWM Pulse Width Modulation
CI Cochlear Implantation
NOTATIONS
mm millimeters
g grams
rpm revolutions per minute
g acceleration due to gravity
V Voltage
Hz Hertz
LIST OF TABLES
TABLE NO. TABLE NAME Page No.

4.1 Filter bank 23
4.2 Specifications of vibration motor 26
LIST OF FIGURES
Figure No. Figure Name Page No.

1.1 Common cause for hearing loss 2
1.2 WHO statistics of global estimates 2
1.3 Hearing loss based on degree 5
1.4 Types of hearing aids 6
1.5 Cochlear Implantation 6
1.6 Sound flow in assistive devices 7
3.1 Anatomy of human ear 11
3.2 Anatomy of outer ear 12
3.3 Anatomy of middle ear 13
3.4 Anatomy of inner ear 14
3.5 Cochlea structure with hair cells 15
3.6 Cochlea frequency decomposition 16
4.1 Frequency distribution 18
4.2 Block Diagram representation of device 19
4.3 Motor placement on the skin 19
4.4 Digital Signal Processor applications 20
4.5 Components of DSP 21
4.6 FIR structure 22
4.7 Block Diagram of FIR filter design 24
4.8 Frequency Distribution 25
4.9 fdatool in MATLAB 25
4.10 Arduino board pin mapping 27
5.1 Prototype of assistive device 28
5.2 Confusion matrix of sensitivity 29

5.3 Continuous signal of vowel a, e, i 30
5.4 Single-sided spectrum of vowel a 30
5.5 Filter banks of two channels 31
5.6 Design of filter banks 32
5.7 Confusion matrix of vowel a, e, i 33
5.8 Confusion matrix for actual speech 33
CHAPTER 1
INTRODUCTION
1.1. Introduction
Hearing loss, also known as hearing impairment, is a partial or complete inability to hear. According to statistics provided by the World Health Organization (WHO), about 5% of the world's population, around 360 million people, suffer from hearing loss [1]. These figures, reported by WHO as of March 2015, comprise 91%, i.e. 328 million adults (183 million males and 145 million females), and 9%, i.e. 32 million children.

There are many causes of hearing loss, the predominant ones being heredity, aging, viral infections, injuries, side effects of drugs, and diseases affecting the ear. Furthermore, a major cause in the present era is prolonged exposure to noise from vehicles, industrial machines and explosions [2]. Such exposure leads to loss of hearing over certain frequency ranges.

It is estimated that by 2050 over 900 million people, i.e. one in every ten, will suffer from deafness. Over one-third of elderly people have hearing loss, showing that ageing is a major cause. The causes of hearing loss, with statistics, are illustrated in Figure 1.1.
The figure shows that noise causes more deafness than aging does. WHO's global data on hearing loss indicate that most people affected by age-related loss live in South Asia, Asia Pacific and Sub-Saharan Africa, as illustrated in Figure 1.2.

The number of hearing-impaired people in the world is growing at an alarming rate. It is therefore very important to prevent hearing loss and to treat it better with advancing technology.

Fig. 1.1 Common causes for hearing loss
The treatment depends on the severity of the hearing loss, which can occur irrespective of a person's age. It tends to affect the person's social life and mental health.

Fig. 1.2 WHO statistics global estimates of hearing loss
1.2. Hearing loss
Hearing loss is one of the most common disabilities affecting people; it has been reported as the third most common disability, with various causes. Adults are considered to have hearing loss if they can only hear sounds above 40 dB, and children if only above 30 dB. Hearing loss has many effects, leaving the person lonely, irritated and stressed. Early identification of hearing loss allows better treatment than late diagnosis. People with hearing impairment are affected both physically and mentally. One study shows that people with hearing loss have reduced memory and cognitive functioning, leading to rejection in society [3]. They are often rejected by companies, mainly in manufacturing industries, leaving them unemployed. Early diagnosis can prevent the severe effects of becoming completely deaf.
1.3 Causes of hearing loss
Hearing loss can result from damage to any part of the ear. The main causes are aging and severe noise exposure. A person can also become deaf genetically, i.e. through heredity from parents or anyone in the family tree. Diseases such as viral fever and Meniere's disease affect the ear drum. Hypertension and diabetes can lead to hearing loss too. Prolonged exposure to loud noises, such as explosions and industrial machines, deprives a person of hearing specific ranges of frequencies.

1.4 Types of hearing disorder
Hearing loss is classified in two ways: (i) based on the nature of the loss and (ii) based on the degree of the loss.

1.4.1 Based on the nature of loss
There are four types of hearing impairment based on the nature of the loss. The impairment can arise from problems in conducting sound within the ear (outer, middle or inner), or in conducting the signal from the ear to the brain through the nerves. They are: (i) conductive hearing impairment, a disorder in the mechanical conduction of sound through the ear canal, ear drum or ossicles; (ii) Sensorineural Hearing Loss (SHL), resulting from dysfunction of the auditory nerve connecting the inner ear to the brain, and including problems within the inner ear, mainly in the cochlea and its hair cells; (iii) mixed hearing loss, which is SHL accompanied by damage to the conductive component; and (iv) central hearing impairment, which occurs in the sound centers of the brain and is caused by head injury or disease affecting the nerves connected to that part of the brain.
1.4.2 Based on the degree of the loss
Based on the degree, or magnitude, of the impairment, the classification is: (i) mild, (ii) moderate, (iii) severe and (iv) profound. These are named in increasing order of the magnitude of hearing loss, which is measured in decibels and illustrated in Figure 1.3.

1.5 Types of assistive devices
There are many treatments to aid people with hearing loss. These include hearing aids, Cochlear Implantation (CI) and Assistive Listening Devices (ALDs). The choice of device depends on the range and type of hearing loss; each device has its own advantages and disadvantages.

Fig. 1.3 Hearing loss based on the degree
Hearing aids are wearable amplifying devices containing a microphone with an amplifier, as shown in Figure 1.4. Various filters are used in the signal processing to remove noise. These aids are mainly used by people with SHL or conductive hearing loss, for whom sound needs to be amplified. Their main advantages are that they are wearable, amplify sound attenuated by the ear, and provide more gain for soft sounds. Their disadvantages are that they only amplify, require accurate filtering, and cannot separate surrounding noise, which makes it difficult for the wearer to understand speech. The next type is the CI, which replaces the function of damaged hair cells by electrically stimulating the inner ear, as illustrated in Figure 1.5. A microphone collects sound, which is processed by a speech processor and delivered to an array of electrodes. This is advantageous for people with partial hearing loss and for post-lingually deafened patients, but it is limited to people with profound hearing loss.

Fig. 1.4 Types of hearing aids

Fig. 1.5 Cochlear Implantation
It is costly and requires surgery: a hole is drilled in the skull to place the electrodes, which is considered dangerous. To avoid this, many people opt for assistive devices. Assistive devices for the hearing impaired are used on their own or in combination with a hearing aid or CI, as shown in Figure 1.6.

These are preferred by people with hearing loss in particular frequency ranges. They are categorized into acoustic and alerting types. In one acoustic type, a transmitter and receiver are used, with the receiving end connected to a hearing aid. The alerting type converts the sound or speech signal into a visual representation, or into vibration sensed by touch on the skin. These devices are comfortable, as they reduce noise and are customizable. Although more useful than other types, they are not widely adopted, mainly because of low popularity and the limited technology involved. All the above methods may be supplemented by lip-reading and hand signals.

Fig. 1.6 Sound flow in ALDs
There are many treatments for hearing-impaired people, such as hearing aids, assistive devices and surgical implants. ALDs are more advantageous than other devices as they are portable and low cost. The ALD designed here is based on tactile sensing. A tactile sensor is an alternative technology to enable speech perception [4-6]. Unlike other assistive devices, this device uses the cutaneous sensory nerves instead of the auditory nerve of the ear.
The tactile vocoder mimics the 'place theory' of frequency discrimination of the cochlea [7-11]. It is more advantageous than the cochlear implant, as it requires no surgery and is cost-effective. Research on tactile vocoder devices for the profoundly deaf has been very successful, but the vocoder has the disadvantage of being too costly for practical use. Many new processors are now available that are cheap and easy to implement. Several devices use the place theory but are implemented with electrodes; these are dangerous in use, because a short circuit or current discharge may seriously harm the wearer. To avoid such problems, a different methodology is implemented in the present work using recent technologies.
CHAPTER 2
LITERATURE REVIEW
Gault (1924) described a set of experiments investigating vowel and consonant combinations differentiated by the sense of touch, and stated that if the apparatus is designed with a high degree of perfection it can be used for training and as a device to understand speech through touch.

Goff (1967) described how the frequency discrimination of the skin is similar to that of the ear. Through many experiments with various inputs applied to the skin, the areas where the skin is most sensitive and most responsive to vibrations were identified.

Beachler and Carney (1981) compared single-channel and multi-channel arrays of vibro-tactile sensors, and found that the output varied as the number of channels increased. Phonemes were mainly tested, and the error in each device varied with the sound.
Brooks PL and Frost BJ (1983), in their research work, used a tactile vocoder to identify words. The vocoder employs one-third-octave filter channels with center frequencies in the 200-8000 Hz range. The outputs, after logarithmic amplification, were transmitted through a 16-channel solenoid array on the forearm of the subject. Though words were at first poorly guessed, they were identified with an improved degree of perfection through practice.

Jianwen Li (2014) proposed a multi-channel-array skin hearing technology to solve sound discrimination problems using the principle of hair cells. Band-pass filtering is used to convert the received voice signals of different frequencies into electric impulses. To stimulate different regions of the skin, an array of electrodes is used to check the skin's response to electric signals. The experiment found that sensory nerves in the skin can help transfer such signals.

Mahalakshmi and Reddy M. R (2010) proposed an updated filter design for cochlear implantation, using band-pass filter banks implemented with a Kaiser window of length 877. This simplifies the speech processing needed to remove noise. The same design is used here for the filter bank driving the vibration motor array.

CHAPTER 3
HEARING MECHANISM
3.1. Anatomy of human ear
The ear is the sense organ that enables mammals to hear. Hearing is the perception of sound by the brain and central nervous system, and involves two tasks: distinguishing different sounds and identifying their source. The ear is generally divided into three parts, the outer ear, the middle ear and the inner ear, as shown in Figure 3.1. The inner ear is filled with a fluid called perilymph, which helps the sound receptors convert sound waves into action potentials (electrical signals). These action potentials enable the brain to sense the sound.

Fig. 3.1 Anatomy of the human ear
In addition to converting sound signals into action potentials, the inner ear is responsible for the sense of balance. The outer and middle ear act as a passage sending sound from the environment to the inner ear.
They compensate for losses in sound energy by amplifying the signal as the waves pass from one medium to another. The outer ear ends at the eardrum, which conducts vibrations into the ear, as illustrated in Figure 3.2. The outer ear also aids the localization of sound, which works in two ways. First, sound reaches one ear slightly sooner than the other. Second, the sound intensity is reduced by the time it reaches the farther ear, because the head partially obstructs the sound waves travelling to it. The brain uses both cues to determine the source of the sound.

Fig. 3.2 Anatomy of outer ear
The middle ear lies between the external ear and the inner ear, as illustrated in Figure 3.3. The eardrum, i.e. the tympanic membrane, separates the middle ear from the ear canal of the outer ear. The middle ear propagates the eardrum's vibrations to the fluid of the inner ear. A chain of movable bones, called the ossicles, makes this transfer of vibrations possible; they are held by the corresponding muscles of the middle ear.

The tympanic membrane is about 5 mm in radius. It curves slightly inward, vibrates in response to sound, and is very sensitive to pain. The auditory tube connects the tympanic cavity behind the membrane with the jaw and throat regions, as shown in Figure 3.4. Mouth actions such as chewing and swallowing open the auditory tube, which is normally closed. This opening allows the middle-ear air pressure to equalize with atmospheric pressure. If there is excess pressure, the tympanic membrane's vibrations are suppressed and the sense of hearing is affected.

Fig. 3.3 Anatomy of the middle ear
The inner ear is connected to the brain through the auditory nerve, for which the sound signal must be converted into an electrical signal. This signal is processed in the brain and a response is sent back through the nerve connections. The key part of the ear is the cochlea, whose hair cells are responsible for frequency decomposition. This forms the basis of the proposed device.

Fig. 3.4 Anatomy of the inner ear
3.2 Anatomy and function of cochlea
The cochlea, shown in Figure 3.5, is the organ of hearing, so called because it is the part of the ear that converts sound signals into the sense of hearing. It is shaped like a snail's spiral, so that its long tube fits inside a bounded space. The cochlear tube is divided into the scala vestibuli, the cochlear duct (scala media) and the scala tympani, coiled together like a spiral staircase. The scala vestibuli, acting as the upper chamber, begins at the oval window. The scala tympani is the lower chamber, with a basal aperture at the round window, which is closed by an elastic membrane. The cochlear duct separates the other two chambers. The start of the cochlea is called the basal end and the other end the apex. The helicotrema, situated at the apex of the cochlear duct, allows communication between the scala vestibuli and the scala tympani.

Perilymph fills both the scala vestibuli and the scala tympani, whereas the cochlear duct is filled with a fluid called endolymph, which has high potassium and low sodium concentrations.
The organ of Corti consists of hair cells that act as receptors. The hair cells near the larger (basal) end of the cochlea respond to high-pitched sounds, whereas those at the smaller end and the rest of the cochlea respond to low-pitched sounds. The nerves connecting these hair cells to the brain can be damaged for various reasons. As the tympanum pushes back and forth against the cochlea, it compresses the fluid and creates waves in the fluid-filled compartments. Specific nerve impulses are created depending on the wave characteristics: the hair cells convert the vibrations into neural signals. The resulting nerve impulses travel along the auditory nerve to the base of the brain, and in this way the sound message is passed to the brain.

Fig. 3.5 Cochlea structure with hair cells
The frequency distribution along the cochlea, together with the watery fluid inside it, is what makes transmission of the signal to the brain possible. The cochlea contains the basilar membrane, which carries the hair cells, surrounded by fluid. These hair cells are also responsible for amplification, as shown in Figure 3.6.

Fig. 3.6 Cochlea frequency decomposition
When a sound wave travels along the basilar membrane from the base to the apex, the pressure wave deforms the membrane at a specific place depending on the frequency of vibration. The resonant frequency is highest at the base (kHz range) and decreases toward the apex (Hz range).

Once particular places are excited, i.e. by place coding, the nerve endings are electrically stimulated, so the brain acquires the signal almost instantaneously. From this frequency distribution it is found that the ear conducts frequencies in the range 20 Hz - 20 kHz. The cations potassium and calcium in and near the hair cells are responsible for signal transfer: as the ions move toward and into a particular hair cell, the pressure near it changes, stimulating the nerve connection.

CHAPTER 4
DESIGN OF PROPOSED HEARING DEVICE
In previous designs of tactile hearing aids, the skin is stimulated by electrodes placed on its surface. This has a serious drawback: without proper insulation, the electrodes may pass current through the skin and harm the wearer. Vibration motors are therefore used instead of electrodes to avoid such risks. The device detects sounds picked up by a microphone, adapts them to the frequency sensitivity range of the skin using algorithms based on modulation, transposition or filtering principles, and renders the signal as vibrations.

A four-channel-array skin hearing aid is developed by segmenting the entire acquired speech into four bands with FIR (Finite Impulse Response) filters based on the frequency spectrum [11-20]. Each band drives one of four vibratory motors that stimulate the skin. This technology is more advantageous than other devices since it requires no operator, does not rely on intact hearing, is low cost and is comparatively safer to use.
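The four-channel routing can be sketched as follows. The report does not give the intensity computation, so the per-band RMS measure and the channel layout here are illustrative assumptions: each filter band's energy would set the strength of one motor.

```python
import math

def band_rms(channels):
    """RMS energy of each filter-bank channel.

    `channels` is a list of four sample lists, one per FIR band
    (LPF, BPF1, BPF2, HPF); each RMS value would drive one motor.
    """
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in channels]

# A full-scale alternating band yields RMS 1.0; a silent band yields 0.0,
# so its motor stays off.
print(band_rms([[1, -1, 1, -1], [0, 0, 0, 0]]))  # → [1.0, 0.0]
```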

4.1 Introduction to tactile sensing
A recent study found that the timing and frequency of the vibrations produced in the skin when the hands move over a surface play an important role in how we use our sense of touch. The sense of touch helps us gather information about the objects and surfaces around us. Receptors in the skin are spread in a grid of rows and columns; when we touch an object, this grid of receptors at various depths in the skin transmits a signal containing information about the surface to the brain.
The skin is also sensitive to vibrations of different magnitudes. These vibrations send information to the afferents, the nerves that carry signals from the receptors to the brain. The precise timing and frequency response of this neural system conveys specific information about the texture of a surface, just as the frequency of vibrations on the eardrum conveys information about a sound signal. This shows that the place theory, i.e. the frequency distribution along the cochlea shown in Figure 4.1, has a counterpart in the skin.

Fig. 4.1 Frequency distribution
4.2 Block Diagram of the device
The block diagram shown in Figure 4.2 depicts the basic structure of the tactile device. A microphone acquires sound from the surroundings. This speech signal is sent to the DSP (Digital Signal Processor) for further processing, where it is segmented into four channels by filters. These four channels map onto a four-channel array on the skin.

Microphone → DSP processor → Motor drive → Vibration motor array → Skin → Brain
Fig. 4.2 Block diagram representation of the proposed device
The motor drive interfaces the processor to the vibration motors. The four channels are connected to four vibration motors, forming the array of four on the skin. The motor intensity is scaled to match the detectable range of the skin. A separator is placed between the skin and the vibrators to increase the vibration effect and to keep each motor clear of any obstacle that would prevent it from rotating. The motors are spaced 3 cm apart to improve discriminability, as shown in Figure 4.3.

Fig. 4.3 Motor placement on the skin
4.3. Digital Signal Processor

Digital signal processing requires a processor that performs mathematical operations on the signal to produce the required output; a microprocessor is generally used for this purpose. It is one of the defining technologies of the 21st century, taking data such as images, audio and video and manipulating them with various algorithms, as shown in Figure 4.4.

Fig. 4.4 DSP diverse applications
First, the signal must be converted from analog to digital using an ADC (Analog to Digital Converter). The converted signal is then processed using algorithms already implemented as tasks in the processor, and finally converted back to the required analog form using a DAC (Digital to Analog Converter), as shown in Figure 4.5. The converters sit alongside the memory chip. The processor is loaded with the mathematical algorithms for the different tasks to be performed, the memory is sized according to the processing required, and both serial and parallel communications are available on the chip.
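The ADC step above can be illustrated with a minimal uniform quantizer. The 10-bit resolution and 5 V reference here are assumptions chosen for illustration (typical of small microcontroller boards), not values taken from the report.

```python
def adc(voltage, vref=5.0, bits=10):
    """Uniform ADC: clamp an analog voltage and map it to an n-bit code."""
    v = max(0.0, min(voltage, vref))
    return int(v / vref * ((1 << bits) - 1))

def dac(code, vref=5.0, bits=10):
    """Inverse mapping back to a voltage; the round-trip error is at most one LSB."""
    return code / ((1 << bits) - 1) * vref

lsb = 5.0 / 1023  # quantization step for 10 bits over 5 V
print(abs(dac(adc(2.5)) - 2.5) <= lsb)  # → True
```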

Fig. 4.5 Components of Digital Signal Processor
4.3.1. Filter bank
The proposed device processes the audio or speech signal provided through the microphone. The acquired signal is converted into digital form for further processing, which is done in MATLAB. The converted signal is then passed through the filter banks. Filtering removes noise or unwanted components from the speech signal; with these filters the signal can be restored or separated. One such filter is the FIR filter, whose impulse response is finite. FIR filters are generally preferred because they are more practical to implement than other digital filters. For a filter of order N, the output is a weighted sum of the most recent input values, each multiplied by its coefficient. The FIR filter has no feedback, which means the denominator of its transfer function is 1.
FIR filters are mainly used because of the following advantages:
They eliminate channel interactions.
They can meet a linear phase response requirement.
They are easily designed for most custom frequency-response requirements.
They require no feedback, which eliminates feedback errors.
They always have their poles at z = 0, since D(z) = 1, so stability is guaranteed.
FIR filters can implement all kinds of digital frequency responses. They are usually built from adders, multipliers and a series of delays, the output being a combination of delayed input samples.
The filter architecture is shown in Fig. 4.6. The values H0, H1, ..., HN are the multiplying coefficients: the output is the sum of all delayed samples, each multiplied by its coefficient, i.e. y[n] = H0 x[n] + H1 x[n-1] + ... + HN x[n-N].
Fig. 4.6 FIR architecture
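The weighted-sum structure of Fig. 4.6 can be written directly as code. This is a generic direct-form FIR sketch for illustration, not the project's DSP implementation; the 3-tap moving average at the end is an arbitrary example filter.

```python
def fir_filter(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k] (the Fig. 4.6 structure)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(h)):
            if n - k >= 0:  # the delay line holds zeros before the signal starts
                acc += h[k] * x[n - k]
        y.append(acc)
    return y

# 3-tap moving average: once the delay line fills, a constant input of 3.0
# passes through with unity gain, so the output settles to 3.0.
print(fir_filter([3.0, 3.0, 3.0, 3.0], [1/3, 1/3, 1/3]))
```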
Filter design is the process of choosing the length (order) and the coefficients of the filter. Every design approximates an ideal filter, so the resulting filter is a close approximation of the ideal characteristics; increasing the filter order improves the approximation but makes the design and its implementation more complex. The design starts from the specification of the required FIR filter characteristics, in particular the desired frequency response. The response is divided into three regions: pass band, stop band and transition band. The pass band contains the frequency components the filter passes essentially unchanged; the stop band contains the frequencies that are strongly attenuated; and the transition band covers the frequencies in between, which are somewhat reduced but not removed completely from the output. For the proposed device, the filters are designed for four channels based on the required cut-off frequencies; the frequency spectrum considered spans the human hearing range. The set of filters follows the cochlear implantation filter design: the first filter is a low pass filter with a cut-off band of 0-250 Hz, and the other filters are designed as shown in Table 4.1. The filter outputs are then given to the array of four vibrators through the motor drive, as shown in Figure 4.7.

S. No.	Filter	Cut-off frequency
1.	Low-pass filter (LPF)	0-250 Hz
2.	Band-pass filter 1 (BPF1)	250-350 Hz
3.	Band-pass filter 2 (BPF2)	350-450 Hz
4.	High-pass filter (HPF)	>450 Hz
Table 4.1 Filter Bank
The filters are designed in MATLAB using a Kaiser window with a window length of 877. The Kaiser window is preferred over other FIR window designs because it is better suited to the processing of audio signals.
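As a hedged illustration of the same design outside MATLAB, the four bands of Table 4.1 can be generated with SciPy's `firwin` using a Kaiser window. The tap count of 101 here is a small placeholder, not the report's window length of 877:

```python
# Sketch of the Table 4.1 filter bank with a Kaiser window (SciPy stands in
# for MATLAB's fdatool; numtaps is a placeholder value).
import numpy as np
from scipy.signal import firwin

fs = 7510         # sampling frequency used in the report
numtaps = 101     # placeholder order (the report uses a much longer window)
beta = 0.6        # Kaiser beta quoted in the report

lpf  = firwin(numtaps, 250,        window=('kaiser', beta), fs=fs)
bpf1 = firwin(numtaps, [250, 350], window=('kaiser', beta), pass_zero=False, fs=fs)
bpf2 = firwin(numtaps, [350, 450], window=('kaiser', beta), pass_zero=False, fs=fs)
hpf  = firwin(numtaps, 450,        window=('kaiser', beta), pass_zero=False, fs=fs)

print(len(lpf), round(sum(lpf), 3))   # the low-pass has unity gain at DC
```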

[Block diagram: the input speech signal is applied in parallel to the LPF, BPF-1, BPF-2 and HPF channels, whose outputs drive Motor 1, Motor 2, Motor 3 and Motor 4 respectively.]

Fig. 4.7 Block-Diagram of FIR filter design
The Kaiser window is implemented in MATLAB using the filter-design toolbox known as fdatool, which provides the analysis tools needed here, such as the frequency response and the magnitude and phase plots. Within fdatool, the window length, filter type, response type and frequency specifications can all be set. The design of the four filters is shown in Figure 4.8.

In fdatool the sampling frequency is chosen as 7510 Hz, as shown in Figure 4.9, and the beta value of the Kaiser window is chosen as 0.6. The audio signal is originally sampled at 44100 Hz and is down-sampled by a factor of 6 to reduce the number of samples.
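The down-sampling step can be sketched with plain decimation, as MATLAB's `downsample(x, 6, 0)` does; the audio here is placeholder noise, and note that 44100/6 works out to 7350 Hz:

```python
# Down-sample a 44.1 kHz signal by keeping every 6th sample (offset 0),
# mirroring MATLAB's downsample(x, 6, 0). Placeholder random "audio".
import numpy as np

fs_in = 44100
x = np.random.randn(fs_in)     # one second of stand-in audio
y = x[::6]                     # plain decimation by 6
fs_out = fs_in // 6

print(len(y), fs_out)          # 7350 samples at 7350 Hz
```

An anti-aliasing low-pass filter before decimation would be a common addition to this step.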

4.4 Vibration motor array
The output of each filter drives one motor in an array of vibratory motors. These motors are chosen because their vibration, within a particular amplitude range, can be sensed by the skin. The motor is selected for the best performance at the required size, as tabulated in Table 4.2.

Fig. 4.8 Frequency Distribution

Fig. 4.9 fdatool in MATLAB
The operating range of the vibratory motor is found to be between 0.15 and 1 duty ratio, its rated speed is 9000 rpm, and its resolution is 0.05 duty ratio.

Table 4.2 Specification of the vibration motor
Parameter	Specification	Condition
Body diameter	6 mm	Max. body diameter
Body length	12.3 mm	-
Unit weight	2.1 g	-
Rated vibration speed	9,000 rpm	At rated voltage using the inertial test load
Typical vibration amplitude	1 G	Peak-to-peak value at rated voltage using the inertial test load
Typical vibration efficiency	9.5 G/W	At rated voltage using the inertial test load
Max. start voltage	0.8 V	Measured at no load
4.5 Arduino
Arduino is an open-source hardware platform that is easy to program. In this device the Arduino Uno, a single-board microcontroller, is used for its compactness and user-friendly programming. It operates at 5 V and is programmed through MATLAB; since the Arduino add-on runs in MATLAB, signal processing and control can be done on a single platform. The vibratory motors are connected to the PWM (Pulse Width Modulation) pins of the Arduino, as illustrated in Figure 4.10. The Arduino PWM range of 0-255 is addressed from MATLAB as a duty cycle between 0 and 1. Four PWM pins of the board drive the four channels of the vibration motor array placed on the skin.
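How a filter-output sample might be mapped onto this duty-cycle range can be sketched as follows. The 0.15 floor reuses the motor's minimum operating duty ratio from Section 4.4, and the function name is illustrative, not taken from the project code:

```python
# Hypothetical mapping from a filter-envelope sample to a PWM duty cycle.
# MATLAB's writePWMDutyCycle takes a value in [0, 1], which the Arduino
# maps onto its 8-bit PWM range of 0-255.
def envelope_to_duty(sample, max_amp, floor=0.15):
    """Map |sample| in [0, max_amp] onto a duty cycle in [floor, 1],
    where floor is the minimum duty ratio at which the motor responds."""
    level = min(abs(sample) / max_amp, 1.0)   # normalize and clip to [0, 1]
    return floor + (1.0 - floor) * level

print(envelope_to_duty(0.5, 1.0))   # mid-level input maps between 0.15 and 1
```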

Fig. 4.10 Arduino board pin mapping
4.6 Algorithm for speech processing
Input: Speech signal (microphone/file)
Output: Voltage signal to the Arduino
Step 1: Acquire the speech signal and provide it to MATLAB as a wav file.
Step 2: Down-sample the audio signal by a factor of 6 to reduce the number of samples.
Step 3: Segment the speech signal into different frequency bands using the filter bank.
Step 4: Compute the FFT of each filter output and analyze the spectrum.
Step 5: Adjust the gain of each filter bank by plotting the amplitude.
Step 6: Command the vibration motors according to the speech segments.
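The six steps above can be sketched end to end. This is an illustrative Python/SciPy stand-in for the MATLAB implementation, with the channel gains taken from the appendix listing and a pure tone standing in for speech:

```python
# End-to-end sketch of the speech-processing algorithm (Steps 1-6).
import numpy as np
from scipy.signal import firwin, lfilter

fs = 44100
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t)      # Step 1: placeholder "speech"

x = speech[::6]                           # Step 2: down-sample by 6
fs = fs // 6

bands = [(None, 250), (250, 350), (350, 450), (450, None)]  # Step 3: filter bank
gains = [3.0, 2.0, 4.1, 2.4]              # Step 5: gains from the appendix

duties = []
for (lo, hi), g in zip(bands, gains):
    if lo is None:
        h = firwin(101, hi, fs=fs)                         # low-pass
    elif hi is None:
        h = firwin(101, lo, pass_zero=False, fs=fs)        # high-pass
    else:
        h = firwin(101, [lo, hi], pass_zero=False, fs=fs)  # band-pass
    y = lfilter(h, 1.0, x)
    # Step 4 would inspect np.fft.fft(y) here to guide the gain choice.
    duties.append(np.clip(np.abs(g * y), 0.0, 1.0))        # Step 6: motor commands

print(len(duties), duties[0].shape == x.shape)
```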
CHAPTER 5
TESTING AND VALIDATION
The prototype of the assistive device, connected to the Arduino Uno board, is shown in Figure 5.1(a). The code acquires the signal and drives the motors. The device is then placed around the hand so that all four vibratory motors are in contact with the skin, as shown in Figure 5.1(b).

(a) Prototype with Arduino (b) Placement of Prototype
Fig. 5.1 Prototype of the assistive device
5.1 Evaluation of vibration sensitivity
In general, the sensitivity of a device is the minimum magnitude of input signal required to produce a specified or detectable output. For the proposed device, the quantity of interest is the tactile sensitivity of the human skin. To test it, a vibration motor is placed in contact with the skin and different random duty cycles are applied to it through the Arduino. The subject reports what is felt, and a confusion matrix is computed from the resulting data in MATLAB, as shown in Figure 5.2.

The responses are classified into three classes: on, off and increment. Analysis of the confusion matrix gives the sensitivity of the motor: the obtained accuracy is 99%, i.e. the sensitivity is established with an error of 1%.

Fig. 5.2 Confusion matrix for sensitivity
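The confusion-matrix computation itself can be sketched as follows; the trial labels below are invented example data, not the report's measurements:

```python
# Sketch: confusion matrix over the three response classes, with the
# accuracy read off its diagonal. Example labels only.
import numpy as np

classes   = ["off", "on", "increment"]
actual    = ["off", "on", "on", "increment", "off", "increment", "on"]
perceived = ["off", "on", "on", "increment", "off", "on",        "on"]

cm = np.zeros((3, 3), dtype=int)
for a, p in zip(actual, perceived):
    cm[classes.index(a), classes.index(p)] += 1   # row = actual, col = perceived

accuracy = np.trace(cm) / cm.sum()
print(cm)
print(f"accuracy = {accuracy:.1%}")
```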
5.2 Validation using standard speech signal
The vowels are chosen for the purpose of testing. The analysis of the spectrum and envelope is done in MATLAB using various tools. The continuous speech signal for the vowel ‘a’ is shown in Figure 5.3.
The signal is recorded at a sampling frequency of 44100 Hz and down-sampled to 7510 Hz to reduce the number of samples. The single-sided frequency spectrum before down-sampling is shown in Figure 5.4(a) and after down-sampling in Figure 5.4(b). From the spectrum and the amplitude envelope, the frequencies carrying the peak signal energy can be identified. This information is used to adjust the filter gains: the gain of a band is increased so that its motor is driven at a higher voltage for the frequencies showing the maximum peak energy in the amplitude plot.
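The peak-energy identification can be sketched as below, a hypothetical numpy stand-in that computes the single-sided spectrum of a tone and reads off its peak frequency:

```python
# Sketch: single-sided amplitude spectrum and its peak frequency.
import numpy as np

fs = 7510                         # down-sampled rate quoted in the report
t = np.arange(fs) / fs            # one second of samples
x = np.sin(2 * np.pi * 300 * t)   # placeholder tone at 300 Hz

n = len(x)
spec = np.abs(np.fft.rfft(x)) / n     # one-sided magnitude spectrum
spec[1:-1] *= 2                       # fold in negative-frequency energy
freqs = np.fft.rfftfreq(n, d=1/fs)    # bin frequencies in Hz

peak_hz = freqs[np.argmax(spec)]
print(peak_hz)                        # the 300 Hz tone dominates
```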

Fig. 5.3 Continuous time signal of vowel ‘a’

(a) before down-sampling (b) after down-sampling
Fig. 5.4 Single-sided frequency spectrum of vowel ‘a’
The down-sampled signal is then passed through the filter bank consisting of four filters: low-pass, band-pass 1, band-pass 2 and high-pass.

The frequency range considered for each filter lies between 0 Hz and 20 kHz, and fdatool is used for the design. The coefficients of each filter are obtained and used in the implementation. The response of the low-pass filter (0-250 Hz) is shown in Figure 5.5(a), that of band-pass filter 1 (250-350 Hz) in Figure 5.5(b), that of band-pass filter 2 (350-450 Hz) in Figure 5.6(a), and that of the high-pass filter (above 450 Hz) in Figure 5.6(b).

(a) Low Pass Filter (b) Band-Pass Filter 1
Fig. 5.5 Filter banks for two channels
(a) Band-Pass Filter 2 (b) High Pass Filter
Fig. 5.6 Design of filter bank
A number of profoundly deaf subjects were chosen, and vowels were presented randomly as stimuli. Initial testing of the device used vowels acquired from the standard database of the International Phonetic Alphabet (IPA). After an extended training period on the vowels 'a', 'e' and 'i', the results were analyzed using a confusion matrix and found to be 98% accurate, as shown in Figure 5.7. Online voice recordings of the vowels were then provided to two subjects, who required more time for learning; this online data was tested for the vowels 'a' and 'e'.

Fig. 5.7 Confusion matrix for vowels 'a', 'e', 'i'
5.3 Validation using actual speech signal
For the actual (online) speech signals, the accuracy decreases to around 86.7% and shows considerable variation, as seen in Figure 5.8 for the vowels 'a' and 'e'.

Fig. 5.8 Confusion matrix for actual speech signal
The online speech signals are acquired through the microphone, passed through the filter banks, and tested over different training periods; the amount of error is found to increase compared with the standard recordings.

5.4 Limitations
During training it is found that the subjects have difficulty learning vowels beyond 'a', 'e' and 'i', and the learning period grows as the number of vowels increases. Vowels were chosen because they are periodic in nature; extending the training to other phonemes and words would bring greater difficulties and challenges in teaching. Nevertheless, the point of interest is that the user gains a sense of hearing while using the proposed device.

CHAPTER 6
CONCLUSION

According to the WHO, a large fraction of the world's population faces hearing loss, with causes including aging, exposure to loud noise and heredity; these cases are treated with a hearing aid, a cochlear implant (CI) or an assistive listening device (ALD). Unlike other assistive devices, the proposed device is based on the place theory of the ear, whose frequency mapping has a counterpart on the skin. The device therefore contains vibrators that stimulate the receptor grid of the skin. The filter bank for the acquired speech signal is designed in MATLAB with fdatool using a window length of 877, and the outputs drive an array of four vibration motors, forming a four-channel motor array.
The sensitivity and resolution of the vibration motor are found using the confusion matrix. The prototype is placed on the hand and various vowels are presented; the output, plotted as a confusion matrix, gives the recognition accuracy for the vowels 'a', 'e' and 'i' within the desired range. The frequency spectrum and amplitude envelope are used in the design of the filters. As future scope, the inclusion of lip-reading might help to increase the accuracy rate.
REFERENCES
WHO Fact sheet (2015) FS-300 Deafness and hearing loss.
Candice Manning, Timothy Mermagen and Angelique Scharine (2016) The effect of sensorineural hearing loss and tinnitus on speech recognition over air and bone conduction military communications headsets, In Press.

Uhlmann RF, Larson EB, Rees TS, Koespsell TD and Duckert LG (1989) Relationship of hearing impairment to dementia and cognitive dysfunction in older adults, JAMA, 261(13), 1916-1919.

Gault RH (1924) Progress in experiments on tactile interpretation of oral speech, J Abn Soc Psych 19:155-159.

Guelke RW, Huyssen RMJ (1959) Development of apparatus for the analysis of sound by the sense of touch. J Acoust Soc Am, 31:799-809.

Martin B.D (1972) Some aspects of visual and auditory information transfer by cutaneous stimulation, Queen’s University.

Kofmel L (1979) Perception of environmental and speech sounds through a vibrotactile device, Queen’s University, Kingston, Canada, 1979.

Brooks PL, Frost BJ (1983) Evaluation of a tactile vocoder for word recognition. J Acoust Soc Am 74:34-39.

Brooks PL (1984) Comprehension of speech by normal-hearing and profoundly deaf subjects using the Queen's University tactile vocoder, PhD diss., Queen's University, Ontario, Canada.

Gibson DM (1983), Tactile Vocoder User Manual, Queen’s University, Kingston, Canada, Dept Electrical Engineering.

Yeni-Komshian GH, Goldstein MH (1977) Identification of speech sounds displayed on a vibrotactile vocoder. J Acoust Soc Am, 62:194-198.

Eilers, R. E., Ozdamar, Oller, D. K., Miskiel, E. and Urban (1989) The effect of vocoder filter configuration on tactual perception of speech, J. Rehab. Res. Dev., 26, 51-64.

Jianwen Li (2014) Cutaneous sensory nerve as a substitute for auditory nerve in solving deaf-mutes’ hearing problem: an innovation in multi-channel- array skin-hearing technology, Neural Regeneration Research, 9, Issue 16, 1532-1540.

Beachler CA, Carney AE (1986) Vibrotactile perception of suprasegmental features of speech: comparison of single-channel and multi-channel instruments, Journal of the Acoustical Society of America, 79, 131.

Goff GD (1967) Differential discrimination of frequency of cutaneous mechanical vibration, Journal of Experimental Psychophysics, 74, 294-299.

Bolanowski, S. J., Jr., Gescheider, G. A., Verrillo, R. T. and Checkosky, C.M (1989) Four channels mediate the mechanical aspects of touch. Journal of the Acoustical Society of America, 84, 1680-1694.

Ahmed Ben Hamida (1999) An Adjustable Filter-Bank Based Algorithm for Hearing Aid systems, IEEE, 1187-1192.

Philipos C. Loizou (1997) Signal Processing for Cochlear Prosthesis: A Tutorial Review, IEEE, 881-885.

Philipos C. Loizou (1999) The Signal Processing Technique for the Cochlear Implants, IEEE Engg. in Medicine and Biology, 34-45.

Mahalakshmi P., Reddy M. R. (2010) Signal analysis by using FIR filter banks in cochlear implant prostheses, International Conference on Systems in Medicine and Biology, IEEE, 253-258, December 16-18.

APPENDIX
A.1 Program to initialize the Arduino

This program initializes the Arduino and the motor shield for the four motors.
a = arduino('COM3', 'Uno', 'Libraries', 'AdafruitMotorShieldV2');
addOnShield = addon(a, 'AdafruitMotorShieldV2');
A.2 Program for audio acquisition
This program records an online audio signal.
recObj = audiorecorder;
disp('Start speaking.')
recordblocking(recObj, 1);
disp('End of Recording.');
pause(2);
A.3 Program for Filter design
This program passes the data obtained from the microphone through the FIR filters.

[x, fs] = audioread('C:\Users\user\Desktop\evowel.wav'); % audio signal from microphone
TotalTime = length(x)./fs;
t = 0:TotalTime/length(x):TotalTime-TotalTime/length(x);
figure(1);
plot(t, x);
orig_fft = fft(x);
length1 = length(x);
f0 = (0:length1-1)*(fs/length1);
pow0 = abs(orig_fft).^2/length1;
figure(2);
plot(f0, pow0);
y = downsample(x, 6, 0); % down-sampling of the signal
tdown1 = fs/6;
total1 = length(y)./tdown1;
t11 = 0:total1/length(y):total1-total1/length(y);
y11 = fft(y);
length1 = length(y);
f1 = (0:length1-1)*(tdown1/length1);
pow1 = abs(y11).^2/length1;
figure(3);
plot(f1, pow1);
Num11 = [-5.49836645599313e-06, -7.66588300776852e-06, -9.64016558743512e-06, -1.13027945061986e-05, -1.25422697482289e-05]; % coefficients of LPF (list truncated here)
low = filter(Num11, 1, y);
f_low = fft(low);
n10 = length(low);
F2 = (0:n10-1)*(tdown1/n10);
pow2 = abs(f_low).^2/n10;
figure(4);
plot(F2, pow2);
low_new = low*3;        % channel gain
low_1 = abs(low_new);
Num12 = [-1.84603780799603e-06, -2.09491419898238e-06]; % coefficients of BPF-1 (list truncated here)
bp1 = filter(Num12, 1, y);
f_bp1 = fft(bp1);
n11 = length(bp1);
F3 = (0:n11-1)*(tdown1/n11);
pow3 = abs(f_bp1).^2/n11;
figure(5);
plot(F3, pow3);
bpnew = bp1*2;          % channel gain
bp_1 = abs(bpnew);
Num13 = [-1.51183845499752e-06]; % coefficients of BPF-2 (list truncated here)
bp2 = filter(Num13, 1, y);
fbp2 = fft(bp2);
n12 = length(bp2);
F4 = (0:n12-1)*(tdown1/n12);
pow4 = abs(fbp2).^2/n12;
figure(6);
plot(F4, pow4);
bpnew2 = bp2*4.1;       % channel gain
bp_2 = abs(bpnew2);
Num14 = [9.54656491058925e-06, 2.19734152608253e-05]; % coefficients of HPF (list truncated here)
high = filter(Num14, 1, y);
fhigh1 = fft(high);
n13 = length(high);
F5 = (0:n13-1)*(tdown1/n13);
pow5 = abs(fhigh1).^2/n13;
figure(7);
plot(F5, pow5);
bpnew3 = high*2.4;      % channel gain
bp_3 = abs(bpnew3);
for i = 1:100:length(bp_2)
    writePWMDutyCycle(a, 'D5', low_1(i));
    writePWMDutyCycle(a, 'D6', bp_1(i));
    writePWMDutyCycle(a, 'D9', bp_2(i));
    writePWMDutyCycle(a, 'D11', bp_3(i));
    pause(0.2);
end
writePWMDutyCycle(a, 'D5', 0);  % set all PWM outputs back to zero
writePWMDutyCycle(a, 'D6', 0);
writePWMDutyCycle(a, 'D9', 0);
writePWMDutyCycle(a, 'D11', 0);