Voice-Controlled Wheelchair
ECE 4180 Design Project

Controlling the chair

Original System
In the original system, the position of the joystick determined the voltages sent to the control module, which drove the appropriate motors through an H-bridge.

To control the chair, we hacked into the Joystick module (JSM) and bypassed the joystick. We replicated the joystick signals using a Phidget Analog Board.

The Phidget libraries were used to interface the analog board and the sensors to the x86 system. Voice recognition was performed in MATLAB, and the decoded command was passed to the C++ program.
Bypassing the joystick
The system supports the following commands:
1. Move
2. Back
3. Left
4. Right
5. Stop
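The word decoded by the recognizer has to be mapped onto one of these five commands before any voltages are set. A minimal sketch of that mapping is below; the enum and the `parseCommand` helper are illustrative names, not taken from the project's source.

```cpp
#include <string>
#include <optional>

// Hypothetical command set matching the five voice commands above.
enum class Command { Move, Back, Left, Right, Stop };

// Map the word decoded by the recognizer to a Command.
// Returns std::nullopt for anything outside the vocabulary.
std::optional<Command> parseCommand(const std::string& word) {
    if (word == "Move")  return Command::Move;
    if (word == "Back")  return Command::Back;
    if (word == "Left")  return Command::Left;
    if (word == "Right") return Command::Right;
    if (word == "Stop")  return Command::Stop;
    return std::nullopt;
}
```

Rejecting out-of-vocabulary words here keeps a misrecognition from ever reaching the motor-control path.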



The C++ code then sends the appropriate voltages to the 8-pin ribbon cable of the Joystick Module through the Analog Board.

Pin header of the ribbon cable that connects the joystick to the joystick module
Pin 1 receives a 5V supply from the JSM. Pin 8 is connected to ground. Pin 3 is not used for our setup. When the system is switched on, a reference voltage (2.5V) is supplied to pins 2, 4, 5, 6, 7. Pin 6 is the reference pin and a constant voltage of 2.5 V is maintained on it throughout.
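The pin roles above can be captured as a small set of constants. This is only a restatement of the description in code form; the identifier names are illustrative.

```cpp
// Pin roles on the 8-pin JSM ribbon cable, as described above.
// Pin numbers are 1-based to match the connector labeling.
constexpr int kSupplyPin    = 1;    // 5 V supply from the JSM
constexpr int kUnusedPin    = 3;    // not used in this setup
constexpr int kReferencePin = 6;    // held at 2.5 V throughout
constexpr int kGroundPin    = 8;    // ground
constexpr double kReferenceVolts = 2.5;

// Pins driven by the analog board; all start at the reference voltage.
constexpr int kDrivenPins[] = {2, 4, 5, 6, 7};

// True if the analog board supplies a voltage on this pin.
bool isDrivenPin(int pin) {
    for (int p : kDrivenPins)
        if (p == pin) return true;
    return false;
}
```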


Voltage profile for each command
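The exact voltages in the profile are specific to this JSM and were read off the figure; as a sketch of the idea, assume each command offsets one driven pin from the 2.5 V reference. The pin-to-direction mapping and the 1.0 V swing below are illustrative assumptions, not measured values.

```cpp
#include <map>
#include <string>

// Hypothetical voltage profile for the five commands. Every driven
// pin idles at the 2.5 V reference; a command offsets one pin by
// +/- 1.0 V. Pin assignments and swing are assumptions for
// illustration only.
std::map<int, double> voltageProfile(const std::string& cmd) {
    const double ref = 2.5, delta = 1.0;
    std::map<int, double> v = {{2, ref}, {4, ref}, {5, ref},
                               {6, ref}, {7, ref}};   // pin -> volts
    if      (cmd == "Move")  v[2] = ref + delta;  // assumed speed pin
    else if (cmd == "Back")  v[2] = ref - delta;
    else if (cmd == "Left")  v[4] = ref - delta;  // assumed turn pin
    else if (cmd == "Right") v[4] = ref + delta;
    // "Stop": every pin stays at the reference voltage
    return v;
}
```

Keeping pin 6 at the reference in every branch matches the constant 2.5 V the JSM expects on its reference pin.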
Sensors
The sensors are mounted at the front and back of the chair and detect obstacles within 40 cm. Once an obstacle is detected, the system executes the 'Stop' command.
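The obstacle override described above amounts to a guard between the recognizer and the motor path. A minimal sketch, assuming sensor readings in centimeters (the function and parameter names are hypothetical):

```cpp
#include <string>

// Distance below which the chair must stop (from the 40 cm spec above).
const double kStopThresholdCm = 40.0;

// Override the current command with "Stop" whenever either the
// front or the back sensor reports an obstacle inside the threshold.
std::string guardCommand(const std::string& cmd,
                         double frontCm, double backCm) {
    if (frontCm < kStopThresholdCm || backCm < kStopThresholdCm)
        return "Stop";
    return cmd;
}
```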

Voice recognition

Speech recognition, or automatic speech recognition (ASR), is the process of translating human speech into text using an algorithm executed on a machine. The main motivation for ASR in most applications is to accept a command from a human and have the machine provide the appropriate service. Examples include query-based information systems, railway reservation systems, and speech transcription.

Data Recording and Training:

For the speech recognition, we considered the speaker-dependent case. The vocabulary was a set of five words, the command set {"Forward", "Stop", "Left", "Right", "Back"}. Fifty utterances of each word were recorded by one of the members of the project and used for training. The audio was recorded on a laptop PC, and the fifty utterances of each word were recovered from the *.wav file using a program that extracts only the speech segments from the file. Throughout this project the sampling frequency was 16000 Hz.
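One common way to extract the individual utterances from a long recording is energy-based endpoint detection. The sketch below illustrates the idea on normalized samples; the frame length and energy threshold are illustrative choices, not the values used in the project's extraction program.

```cpp
#include <vector>
#include <utility>
#include <cstddef>

// Split a recording containing several utterances into
// (startSample, endSample) spans using per-frame energy.
// 400 samples = 25 ms at the 16 kHz sampling rate used here.
std::vector<std::pair<size_t, size_t>>
findUtterances(const std::vector<double>& samples,
               size_t frameLen = 400,
               double threshold = 0.01) {   // illustrative threshold
    std::vector<std::pair<size_t, size_t>> spans;
    bool inSpeech = false;
    size_t start = 0;
    for (size_t i = 0; i + frameLen <= samples.size(); i += frameLen) {
        double energy = 0.0;
        for (size_t j = i; j < i + frameLen; ++j)
            energy += samples[j] * samples[j];
        energy /= frameLen;                  // mean energy of the frame
        if (!inSpeech && energy > threshold) {
            inSpeech = true;
            start = i;                       // speech onset
        } else if (inSpeech && energy <= threshold) {
            inSpeech = false;
            spans.emplace_back(start, i);    // speech offset
        }
    }
    if (inSpeech)                            // utterance ran to the end
        spans.emplace_back(start, samples.size());
    return spans;
}
```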

Speech recognition is a special case of pattern recognition. A general block diagram representing such a system is shown below. Supervised pattern recognition has two stages, viz., training and testing. The extraction of features relevant for classification is common to both phases. During the training phase, the parameters of the classification model are estimated from a large number of class examples. During the testing (recognition) phase, the features of a test pattern are matched against the trained model of every class, and the test pattern is declared to belong to the class whose model matches it best.
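The train/test structure above can be illustrated with the simplest possible classifier: keep one template feature vector per class and assign a test pattern to the nearest template. Real recognizers use richer models, so treat this purely as a sketch of the matching step, with hypothetical names.

```cpp
#include <vector>
#include <cstddef>

// Nearest-template classification: return the index of the class
// whose template is closest (squared Euclidean distance) to the
// test pattern. Templates and the test pattern must have equal
// feature dimensions.
size_t classify(const std::vector<std::vector<double>>& templates,
                const std::vector<double>& pattern) {
    size_t best = 0;
    double bestDist = 1e300;
    for (size_t c = 0; c < templates.size(); ++c) {
        double d = 0.0;
        for (size_t i = 0; i < pattern.size(); ++i) {
            double diff = templates[c][i] - pattern[i];
            d += diff * diff;                // accumulate squared distance
        }
        if (d < bestDist) { bestDist = d; best = c; }
    }
    return best;
}
```

Training here is just computing each class's template (e.g. the mean feature vector of its fifty utterances); testing is the distance comparison.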

