AI-Powered RC Car
We plan to build a remote-controlled car that can drive itself forwards and backwards. The car will be controlled with simple voice commands such as “Forward!”, “Backward!”, “Stop!”, “Left!”, and “Right!”. It will also include on-board image detection so it can avoid obstacles along its way.
For the base RC vehicle, we are using an off-the-shelf self-assembly frame kit to house the components. We will connect the motors that come with the frame to our Raspberry Pi, then mount the battery bank on the underside of the vehicle. On both sides of the vehicle, we’ll mount our Jetson Nanos. At the center of the vehicle, we will mount a low-power network switch to create our small internal network, and we will connect the Pi and both Jetson Nanos to it to facilitate communication between them.
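The Pi's job of driving the motors can be sketched as a simple mapping from commands to left/right motor duty cycles for a differential-drive frame. The duty values and the `duties_for` helper are our own assumptions for illustration; on the actual Pi these duties would be fed to PWM channels (e.g. via RPi.GPIO or gpiozero), which is not shown here.

```python
# Hypothetical mapping from voice commands to (left, right) motor duty
# cycles in the range -100..100, where negative means reverse.
COMMAND_DUTIES = {
    "forward":  (60, 60),
    "backward": (-60, -60),
    "left":     (-40, 40),   # spin in place: left motor reverses
    "right":    (40, -40),
    "stop":     (0, 0),
}

def duties_for(command):
    """Return (left, right) duty cycles for a command; default to stop."""
    return COMMAND_DUTIES.get(command.lower().strip("!"), (0, 0))
```

Defaulting unknown commands to a full stop is a deliberate safety choice: a misheard word should never keep the car moving.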
Our main sensor will be our IMU, which includes a GPS receiver, an accelerometer, a gyroscope, and a barometric sensor. The GPS is a bonus and will not necessarily be used. These extra tracking mechanisms let us gauge how fast the vehicle itself is moving, along with other useful data. We don’t know exactly how we will use this data yet, but once everything is implemented we are confident we will end up needing it one way or another. We will also need a GPIO expansion board to host all these various connections.
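As one example of how the IMU data could gauge the vehicle's speed, here is a minimal dead-reckoning sketch that integrates forward acceleration samples into a velocity estimate. The sampling interval and sample format are assumptions; real accelerometer data drifts, so in practice this would need bias calibration and periodic corrections.

```python
def integrate_velocity(accel_samples, dt):
    """Estimate 1-D velocity (m/s) by integrating forward-axis
    accelerometer samples (m/s^2) taken at a fixed interval dt (s).
    Simple rectangle-rule integration; drift is NOT corrected here."""
    velocity = 0.0
    for accel in accel_samples:
        velocity += accel * dt
    return velocity
```

For instance, ten samples of 1.0 m/s² at 0.1 s spacing yield an estimated 1.0 m/s.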
Sound will be the main means of controlling the vehicle. A microphone attached to one of the Jetson Nanos will capture the voice commands. A speaker will provide verbal feedback, so that when you issue a command, you know it was received. For example, when you say “Forward!”, it will reply “Moving forward.”
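The command-to-feedback step can be sketched as a lookup table from recognized words to spoken replies. The phrases and the `feedback_for` helper are illustrative assumptions; the actual reply text would be passed to whatever text-to-speech engine we pick, which is not shown.

```python
# Hypothetical spoken replies for each recognized command.
RESPONSES = {
    "forward":  "Moving forward",
    "backward": "Moving backward",
    "left":     "Turning left",
    "right":    "Turning right",
    "stop":     "Stopping",
}

def feedback_for(command):
    """Return the verbal confirmation for a command, normalizing
    case and trailing exclamation marks first."""
    word = command.lower().strip("!").strip()
    return RESPONSES.get(word, "Command not recognized")
```

Echoing back even unrecognized input ("Command not recognized") matters, since silence would leave the user unsure whether the microphone heard anything at all.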
For image detection, once the camera sensor detects an object, the car will steer clear of it. We will place unique objects along its path to see whether it can detect and identify them. Finally, as soon as it detects a human face, it should identify it and stop moving.
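The avoid-obstacles-but-stop-for-faces rule can be sketched as a small decision function. The detection format (label plus horizontal bounding-box extent) is an assumption about what the vision Nano would report, not the output of any specific detector.

```python
def decide(detections, frame_width=640):
    """Pick an action from assumed detections of (label, x_min, x_max).
    A face anywhere means stop; otherwise steer away from the first
    obstacle's side of the frame; with nothing detected, keep going."""
    for label, _x_min, _x_max in detections:
        if label == "face":
            return "stop"          # face detected: halt immediately
    for _label, x_min, x_max in detections:
        center = (x_min + x_max) / 2
        # Obstacle on the left half of the frame -> turn right, and
        # vice versa.
        return "right" if center < frame_width / 2 else "left"
    return "forward"
```

Checking for faces before obstacles encodes the priority stated above: stopping for a person always wins over avoidance maneuvers.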
The network switch facilitates communication between the two Jetson Nanos and the Raspberry Pi, so that each Jetson Nano is responsible for one part of the AI workload. The first Jetson Nano will run the image-detection model and be connected directly to the camera. The second Jetson Nano will process all the speech detection with the microphone attached to it. Both Jetson Nanos will send their results upstream to the Pi, which will process this information and adjust the motor speeds and other controls accordingly.
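One simple way the Nanos could send results upstream is newline-delimited JSON over TCP. The field names (`source`, `kind`, `payload`) are our own invented convention, shown here only as the encode/decode halves; the socket plumbing itself is omitted.

```python
import json

def encode_event(source, kind, payload):
    """Serialize one event from a Jetson Nano into a newline-delimited
    JSON message, ready to write to a TCP socket toward the Pi."""
    message = {"source": source, "kind": kind, "payload": payload}
    return (json.dumps(message) + "\n").encode("utf-8")

def decode_event(raw_line):
    """Parse one received line back into a dict on the Pi side."""
    return json.loads(raw_line.decode("utf-8"))
```

Newline framing keeps the receiving loop on the Pi trivial: read until `\n`, decode, react.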
Finally, to power the entire thing, we have a 40,000 mAh power bank. This should be enough to run the two Jetson Nanos as well as the Raspberry Pi. After heavy testing, we’ll provide a rated battery time and how long the car can keep moving around a house.
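Ahead of that testing, a back-of-envelope runtime estimate is possible. Every power figure below is an assumption, not a measurement: roughly 10 W per Jetson Nano, 5 W for the Pi, 5 W average for the motors, a 3.7 V cell voltage inside the bank, and 85% boost-converter efficiency.

```python
# Rough runtime estimate -- all power draws are ASSUMPTIONS.
BATTERY_MAH = 40_000
CELL_VOLTS = 3.7        # typical Li-ion cell voltage inside a power bank
EFFICIENCY = 0.85       # assumed losses converting up to 5 V output

battery_wh = BATTERY_MAH / 1000 * CELL_VOLTS * EFFICIENCY  # usable energy
load_watts = 2 * 10 + 5 + 5   # two Nanos + Pi + motors (assumed averages)
runtime_hours = battery_wh / load_watts
print(f"Estimated runtime: {runtime_hours:.1f} h")
```

Under these assumptions the car runs a bit over four hours; the real figure will come from the heavy testing mentioned above.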
To test the final product, we’ll have it drive around my house and make sure it avoids obstacles. We will use voice commands to steer it in a general direction, but we’ll also rely heavily on the object detection for it to steer itself clear of obstacles.