
Demo Instructions and Remaining Problems

In this project, the smart car has two main functions, and we designed two demo scenarios to show each of them.

Demo 1: The emergency braking system

The first function is to let the user control the car while the system monitors the surroundings to make sure the car will not hit anything that might damage it. Essentially, the small car simulates an ordinary car equipped with an automatic emergency braking system. The car has three modes, each indicated by its own LED colour. The default is the green mode, in which the user controls the car from a computer. The red mode is entered when the Raspberry Pi on the car detects that it is about to hit an object and decides to stop; in this mode the LEDs turn red. The orange mode sits between the red and green modes: it appears after the car has stopped, while the car reverses for 15 centimetres.
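The sketch below outlines this three-mode logic. It is only an illustration, not the code we ran on the car: read_distance_cm, set_leds, drive, stop, reverse_for_cm and get_user_command are hypothetical stand-ins for the real Pi2go and sonar calls, and the 20 cm braking threshold is an assumed value.

```python
# Minimal sketch of the three-mode emergency braking logic (assumed helpers).
import time

GREEN, ORANGE, RED = "green", "orange", "red"
STOP_DISTANCE_CM = 20      # assumed braking threshold
BACKUP_DISTANCE_CM = 15    # reverse distance described above

def braking_loop(read_distance_cm, set_leds, drive, stop,
                 reverse_for_cm, get_user_command):
    mode = GREEN
    while True:
        distance = read_distance_cm()
        if mode == GREEN:
            set_leds(GREEN)
            if distance < STOP_DISTANCE_CM:
                stop()                         # emergency brake
                mode = RED
            else:
                drive(get_user_command())      # normal user control
        elif mode == RED:
            set_leds(RED)
            mode = ORANGE                      # car is stopped, prepare to back up
        elif mode == ORANGE:
            set_leds(ORANGE)
            reverse_for_cm(BACKUP_DISTANCE_CM) # back away from the obstacle
            mode = GREEN                       # return control to the user
        time.sleep(0.05)
```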


Demo 2: The car driven by visual identification

In this demo, the small car does not need to be controlled by the user. Instead, the car drives itself. The camera on the car identifies objects by their colours and shapes. Once an object is identified, the camera returns its name, position, length and width. Using these data, the Pi can estimate the distance to the object and perform the corresponding action.
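One common way to turn the returned width into a rough distance is the pinhole-camera relation distance = real_width × focal_length / pixel_width. The sketch below illustrates this idea; the Block fields mirror the data described above, while KNOWN_WIDTH_CM, FOCAL_LENGTH_PX and the action thresholds are illustrative values, not the calibration used in the project.

```python
# Illustrative sketch: estimating distance from a detected block and choosing
# an action. The constants below are assumed, not measured project values.
from collections import namedtuple

Block = namedtuple("Block", ["name", "x", "y", "width", "height"])

KNOWN_WIDTH_CM = 5.0        # assumed real-world width of the printed sign
FOCAL_LENGTH_PX = 300.0     # assumed camera focal length in pixels

def estimate_distance_cm(block):
    """Rough distance from the apparent width of the detected block."""
    if block.width <= 0:
        return None
    return KNOWN_WIDTH_CM * FOCAL_LENGTH_PX / block.width

def choose_action(block):
    """Map the detection to a driving action (thresholds are illustrative)."""
    distance = estimate_distance_cm(block)
    if distance is None or distance > 40:
        return "forward"
    if block.name == "stop_sign":
        return "stop"
    # steer away from the side of the frame where the object sits (160 = frame centre)
    return "turn_right" if block.x < 160 else "turn_left"

print(choose_action(Block("stop_sign", x=150, y=100, width=60, height=60)))
```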

The original design was that the car should be able to perform the actions above while automatically finding its way around a given barrier. However, due to limited time, this function has not been fully developed.

Problems remaining

This system still has several problems that have not been solved.

The first problem is that the camera on the car is not very sharp and can easily make mistakes when the light source changes. In the tests before the demo, light from the window could easily break its white balance and make the car blind. In some cases, the car was confused and recognized the signs wrongly.

The second problem is that the sonar distance sensor occasionally returns wrong values, which makes it easier for the Pi to make wrong decisions. Even though the research group has tried to avoid this kind of error at the software level, the bug still occurs sometimes.
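A typical software-level mitigation, along the lines of what we attempted, is to take several sonar samples and use the median so that a single spurious reading cannot trigger a brake. The sketch below shows this idea; raw_read_distance_cm and the valid-range limits are hypothetical placeholders, not the project's exact code.

```python
# Median-of-N filtering of sonar readings to suppress random wrong values.
import statistics
import time

def filtered_distance_cm(raw_read_distance_cm, samples=5, delay_s=0.01,
                         min_valid=2.0, max_valid=400.0):
    readings = []
    for _ in range(samples):
        value = raw_read_distance_cm()
        # discard values outside the sensor's plausible range
        if min_valid <= value <= max_valid:
            readings.append(value)
        time.sleep(delay_s)
    if not readings:
        return None          # every sample was invalid; caller should retry
    return statistics.median(readings)
```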

Also, the live camera view is not ready. We tried to let the user see the scene from the car's point of view together with the identification results. However, it turned out that the Pi's Wi-Fi bandwidth cannot handle the video transfer: the video on the computer was delayed by five to ten seconds, the video memory quickly filled up, and the program was killed.

Even though some problems remain unsolved, the car could still perform part of the planned functions during the demo.
