Acquisition and Analysis of Robotic Environment Data Using Machine Learning Techniques

 Context of this Project

Machine Learning/Intelligence in Robotic Systems.   

 Prerequisite Knowledge 

It is recommended to read this documentation before proceeding further with this article.

Acknowledgement:

This research project was initiated in August 2013 as part of an academic major/minor project called "Designing an Experiment Based on Fire Bird V: AVR ATmega 2560" at the Amrita Robotic Research Center (ARRC), Amrita School of Engineering, Bangalore.


Abstract:

The objective of this project is to develop a system for analyzing and detecting objects in an experimental robotic environment. The initial work involved collecting time series data from three long-range IR distance sensors and five short-range IR distance sensors. The robot was programmed to record data at a fixed sampling rate while moving along a straight path. This data was then subjected to feature extraction and pre-processing, ultimately providing the distance of objects from the robot's path in the form of a time series stream.


From an implementation perspective, the system primarily consists of two modules. The first module focuses on the data recorded from the sensors, which include IR distance sensors, PIR thermal imaging sensors, and microphones. The spatial data from the IR distance sensors is obtained through the use of an ADC. The PIR thermal imaging sensor is connected to the robot via an I2C bus switch, utilizing the Two Wire Interface (TWI) of the robot's microcontroller. It is planned to employ the Omron D6T-44L thermal imaging sensor on both sides of the robot, offering a 4x4 resolution and a range of 2-3 meters (45° field of view), which meets our compatibility requirements. For the microphone, two electret microphones are connected to the ADC through an OPA344-based amplifier circuit. The audio data sampled via the ADC is processed with an FFT algorithm in C to obtain a time series stream of the frequency distribution of the audio signal.


The second module involves designing various experimental scenarios, incorporating heat sources, different dummy shapes, and sound sources placed along the robot's path. The robot is programmed to move along an even, straight path at a constant speed to record time series data at a rate of 2 samples per second per sensor. The time series data collected from these three sources undergoes different stages of data mining algorithms, including feature extraction, pre-processing, and clustering. Finally, a validation phase is conducted to assess the accuracy of the obtained outcomes.

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/Picture11.png?attredirects=0

Figure [1]: Sample Robotic Environment


As mentioned earlier, this project has the following sections:

a) Object Detection and Analysis in a Disaster-like Environment (Minor Project)

b) Acquisition and Analysis of Robotic Environment Data Using Machine Learning Techniques (Major Project)

Minor Project

Object Detection and Analysis in Disaster Like Environment

Acknowledgement: This module has been implemented using the Fire Bird V (AVR ATmega2560/ARM7) robot, which was jointly developed by IIT Bombay and Nex Robotics Pvt. Ltd.; the platform is their project.

Report: Please download a detailed academic report of this module here.

The block diagram of this module is shown below:

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/block1.jpg?attredirects=0

Figure[2]: Block Diagram of Module(a)

We have identified the following sub-modules to be used in order to record data:

a) Installation of Sensor (IR Sensor GP2D120)

b) Generation of PWM and direction control of Robot

c) ADC

d) UART

e) XBEE Installation

f) Design of Scenarios

The following experiments were conducted:

Experiments

Sensor Installation:

The installation process involved recording data from three long-range and five short-range IR sensors. The short-range sensors were positioned strategically to cover blind spots not covered by the long-range sensors. These sensors provide an analog signal that varies between zero and five volts. This signal is then fed into the ADC. The front sensor specifically detects the railing of the workbench. More information about these sensors can be found in the academic report on Page 24.
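The analog signal from these sensors is inversely related to distance, so the raw ADC counts must be mapped back to centimetres during preprocessing. Below is a minimal sketch of such a conversion for the GP2D120; the constants are illustrative placeholders, not the project's calibration (see the report for the actual characteristics):

```c
/* Sketch: convert an 8-bit ADC reading from a Sharp GP2D120 IR distance
 * sensor into centimetres. The sensor output is inversely related to
 * distance; the constant 1200.0 is illustrative and would need to be
 * calibrated against the real sensor. */
double gp2d120_distance_cm(unsigned char adc)
{
    /* Very small readings mean a far or absent object; clamp to the
     * edge of the usable range instead of dividing by a tiny value. */
    if (adc < 10)
        return 80.0;
    return 1200.0 / (double)adc;
}
```

With a model like this, each sample in the recorded time series can be turned into a distance value for the later feature-extraction stage.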


For the upcoming part (b) of the project, we have planned to interface the following sensors:

1) IR/Ultrasonic Sensors

2) Passive IR Sensors (for Thermal Imaging)

3) Microphone (for detecting Sound Sources)


PWM Generation:

To control the speed of the robot, a PWM signal is required. We maintain a steady speed of approximately 22 cm/s while collecting data. We use an eight-bit Timer/Counter module built into the microcontroller to generate this signal; the duty cycle chosen is 100%. Additionally, a DC motor driver IC (H-bridge and power amplifier) has been interfaced to control the motors' direction. Both motors can be controlled separately from the microcontroller by setting/clearing direction bits.
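On an 8-bit timer in Fast PWM mode the counter runs from 0 to 255, so a duty cycle maps linearly onto the compare register. The small helper below sketches that mapping (the helper name is ours, not from the report); the 100% duty used while recording corresponds to a compare value of 255:

```c
/* Map a PWM duty cycle (0-100 %) to an 8-bit timer compare value.
 * In Fast PWM mode the ATmega2560's 8-bit Timer/Counter counts 0..255
 * and clears the output at the OCRnx compare match, so duty maps
 * linearly onto 0..255 (100 % -> 255, i.e. motors at full speed). */
unsigned char duty_to_ocr(unsigned int duty_percent)
{
    if (duty_percent > 100)
        duty_percent = 100;
    return (unsigned char)((duty_percent * 255u) / 100u);
}
```

On the robot, the returned value would be written to the timer's output-compare register for each motor channel.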


The following video is a Proteus simulation showcasing the PWM setup. It has been designed based on the robot's manual and microcontroller data sheet configurations.

 

Configuring ADC

In this video (in Hindi), I explain how to configure the ADC for our purpose:

UART and XBEE Installation

Communication with the PC can be established either through wired or wireless means using the RS232 or XBEE module. The data collected from the robot is then transferred to the PC for offline analysis. This data will be fed into machine learning algorithms for further analysis.

Below is the code snippet for UART communication, which can also be found in Appendix-IV of the report:


UART Code

void usart_comm(void)
{
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00); // 8-bit character size, asynchronous mode
    UBRR0L = 0b01011111;                    // UBRR = 95 -> 9600 baud at 14.7456 MHz
    UBRR0H = 0x00;
    UCSR0B = (1 << TXEN0) | (1 << RXEN0);   // enable transmitter and receiver
}
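The UBRR value loaded above follows from the standard AVR baud-rate formula. Assuming the Fire Bird V's 14.7456 MHz crystal, the sketch below shows the calculation; the helper name is ours, for illustration:

```c
/* Compute the UBRR register value for the AVR USART in normal
 * (16x oversampling) asynchronous mode:
 *     UBRR = f_cpu / (16 * baud) - 1
 * With a 14.7456 MHz crystal and 9600 baud this gives 95
 * (0b01011111), matching the value loaded into UBRR0L. */
unsigned int usart_ubrr(unsigned long f_cpu, unsigned long baud)
{
    return (unsigned int)(f_cpu / (16UL * baud) - 1UL);
}
```

Because 14.7456 MHz divides evenly into the standard baud rates, this crystal gives exact baud clocks with zero framing error.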

    

      In this video, the robot collects data and sends it to the PC:

Scenario Design

      A setup similar to figures [3 & 4] has been followed. It consists of different shapes kept at different coordinates; these coordinates are chosen according to the scenario type.

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/scenario.png?attredirects=0

Figure[3]: Setup of Scenarios

      A set of four sample scenarios has been designed to record data:

 Scenario [1] 

            This is an equally dense scenario: the objects are kept with uniform density.

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/Picture1.png?attredirects=0

Figure[5]: Scenario [1]

 Scenario [2]

            This is a diverse scenario: objects are placed with uneven density in different parts of the workspace, so that some parts are highly dense while others contain very few objects. While following path a, the robot first records the less dense area 1 and then the highly dense area 2 on its right side; while following path b, it records them in the reverse order. The scenario is shown in figure [6].

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/Picture2.png?attredirects=0

Figure[6]: Scenario[2]

 Scenario[3]

           This scenario is based on object width (broad objects close to the path). The broad objects are kept closer to paths a and b, whereas the objects with lesser width are kept away from the paths. The width of the objects at the height of the sensor has been tabulated in Table I (objects available in our lab).

https://sites.google.com/site/shivendramishra91/my-announcements/tabl.png?attredirects=0

Table[I]: Width of Objects

                 

 The coordinates of the objects in this scenario are shown in figure [7], given below:

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/Picture3.png?attredirects=0

Figure[7]: Scenario[3]

 Scenario[4]

           This scenario is based on object placement (broad objects away from the path). The broad objects are kept away from paths a and b, whereas the objects with lesser width are kept closer to the paths. The objects are chosen according to the widths given in Table I. This scenario is drawn in figure [8].

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/Picture4.png?attredirects=0

Figure[8]: Scenario[4]

Result of Module

        The recorded data can be downloaded from the data drive here (.zip: hex files). This data can be preprocessed using the C program available here, and a distance file can be created as shown in figure [9].

 

https://sites.google.com/site/shivendramishra91/config/pagetemplates/acquisition-and-analysis-of-robotic-environment-data-using-machine-learning-techniques/ss.jpg?attredirects=0

Figure[9]: Data after Preprocessing

Major Project

Acquisition and Analysis of Robotic Environment Data Using Machine Learning Techniques

As mentioned above, it has been decided to use the following sensors for recording:

1) PIR Thermal Imaging Sensor (2 Nos.)

2) IR Proximity Sensor (5 Nos.)

3) Microphones  (2 Nos.)

PIR Thermal Imaging Sensor

This project uses the D6T-44L thermal imaging sensor, chosen based on our requirements and its compatibility with the existing hardware. The circuit diagram for interfacing this sensor with the robot's microcontroller is shown below:

https://sites.google.com/site/shivendramishra91/my-announcements/xx_001.jpg?attredirects=0

Figure[10]: Interfacing of D6T-44L with ATmega2560

 

    The circuit depicted in Figure [10] utilizes an I2C switch (TI PCA9545A) to connect two sensors on the I2C bus. This is necessary because the sensors have a hardcoded address (the same 7-bit address for all sensors), requiring isolation from each other on the bus. There are alternative methods to achieve this:


1) Software methods: I2C Bit-Banging (Connecting Sensors to GPIOs):

   It is possible to access these sensors by bit-banging the I2C protocol: by connecting the sensors to GPIO pins, a piece of code can emulate I2C functionality. However, based on the experiments conducted, this method is unreliable at bus speeds of 50 kHz or higher and necessitates calculating a checksum for each received data frame.


2) Hardware Methods: Hot Swapping Support:

   Another option is to utilize devices like the LTC4300-1 to enable hot-swapping support for multiple sensors on the bus. Additionally, the circuit includes 4.7 kΩ pull-up resistors on each line. The A0 and A1 pins of the PCA9545A are grounded to set the low address bits of its slave address to zero.
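With A0 and A1 grounded as in figure [10], the PCA9545A's 7-bit slave address (fixed bits 1110, then A1 A0) becomes 0x70, and routing the bus to one sensor is a single-byte write of a control value with the corresponding channel bit set. The helpers below sketch these two values; the function names are ours, and the actual TWI transfer (start, SLA+W, data, stop) would go through the I2C library mentioned below:

```c
/* Sketch of addressing the TI PCA9545A I2C switch with A0 = A1 = 0.
 * 7-bit slave address: 1 1 1 0 A1 A0 -> 0x70 when both pins are grounded. */
unsigned char pca9545a_address(void)
{
    return 0x70;
}

/* Control-register value selecting one downstream channel: bit n of the
 * control byte enables channel n (0..3), so writing (1 << channel)
 * routes the bus to that sensor while isolating the others. */
unsigned char pca9545a_ctrl(unsigned char channel)
{
    return (unsigned char)(1u << (channel & 0x03));
}
```

Before each D6T-44L read-out, the master would write pca9545a_ctrl(n) to address 0x70 to pick the left- or right-side sensor, which is what keeps the two identically addressed sensors isolated on the bus.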


The code used to access the sensor is a modified version of the Peter Fleury I2C library. The necessary files can be included in any AVR embedded C project.

Download

       

D6T-44L Code Checkout

$ svn co https://github.com/MishraShivendra/D6T_44L

or browse on GitHub.

This is the demo for the sensor; the experiment uses the XBee X-CTU terminal to view the data on a computer. Next, a C program computes the 4x4 thermal image from the received data:
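The decoding step can be sketched as follows. Per the D6T-44L datasheet, one read-out is 35 bytes: a 16-bit reference temperature (PTAT), sixteen 16-bit pixel temperatures, and a trailing PEC byte, all little-endian in units of 0.1 °C. This is an illustrative sketch (the PEC/CRC check is omitted), not the project's actual program:

```c
#include <stdint.h>

/* Sketch: decode a D6T-44L read-out buffer into a 4x4 thermal image.
 * buf holds 35 bytes: PTAT (2 bytes), 16 pixels (2 bytes each,
 * little-endian, 0.1 degC units), and a PEC byte. The PEC check is
 * omitted here; a real driver should verify it before trusting the data. */
void d6t44l_decode(const unsigned char buf[35], double temps_degc[16])
{
    for (int i = 0; i < 16; i++) {
        /* Skip the 2 PTAT bytes; cast to int16_t so sub-zero
         * temperatures decode correctly. */
        int16_t raw = (int16_t)(buf[2 + 2 * i] | (buf[3 + 2 * i] << 8));
        temps_degc[i] = raw / 10.0;
    }
}
```

Printing temps_degc as four rows of four values reproduces the 4x4 thermal image seen in the demo.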

 

Results and some parts of this project are not included here; please read the full version of this project on Springer or download it from ResearchGate.