Hello! In this post I will outline my latest project, which is the reason this blog exists. Then I will lay out a roadmap for where I want to take it.
There is a noticeable gap between consumer-level robotics and the robotics platforms being produced in academic and scientific labs. Technology has gotten cheap and powerful enough that it is possible to achieve complex behavior with readily available hardware and software. My goal is to put together a concept demo of that idea. Basically, I want to design a low-cost autonomous vehicle that can complete basic navigation and path-finding tasks. Being a poor college student, I have great motivation for keeping the price low. The complexity of the project will grow as I gain experience.
Okay, so that is the overarching concept behind the AutoRC project. What do I mean to do with all of that? To begin with, I want to have a small vehicle (currently an RC car) which can navigate the outside world. This is not easy. There are a number of big challenges to overcome in a hostile outdoor environment. One of the most basic is obstacle avoidance. If the vehicle can't navigate around objects that lie in its path, then it doesn't matter what other features or abilities it has. Obstacle avoidance can be very simple, but when combined with path navigation and real environments, it becomes difficult and complex very quickly.

I have considered several kinds of obstacle detection methods. Professional vehicle platforms use LiDAR (Light Detection And Ranging) sensors, but LiDARs are incredibly expensive and produce a massive amount of data. It would be difficult to process a LiDAR sensor on-board in real time and still keep the platform small and mobile. Touch-based sensors are almost out of the question, for a number of reasons. First off, touch sensors require physical contact with an object. When a vehicle is navigating an environment, it needs to find an optimal path to its goal; by the time a robot has physically touched an obstacle, it is no longer on an optimal path, because it has to back up and reroute. Also, with the current drive platform, the robot cannot go in reverse. I have some experience with ultrasonic and IR distance sensors, and both are pretty good at detecting objects within the sensor's field of view. However, they do not provide a complete solution, especially when obstacle avoidance is paired with path navigation. When a robot is confronted with an avoidance decision (do I go left or right?), these sensors do not give a very good picture of which path might give better results. A sensor that does is a camera.
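To make the limitation concrete, here is a minimal sketch (in Python, with hypothetical function and threshold names of my own invention) of the left-or-right decision that a pair of rangefinder readings supports. Notice how little information the decision is actually based on: the sensors can say which side has more clearance, but nothing about which side leads toward the goal.

```python
# Minimal left-or-right avoidance decision from two distance readings.
# The threshold and function names here are hypothetical, for illustration.

SAFE_DISTANCE_CM = 60.0  # assumed clearance threshold


def avoidance_turn(left_cm: float, right_cm: float) -> str:
    """Pick a turn direction from two rangefinder readings.

    Returns 'straight' when both sides are clear, otherwise the
    direction with more measured clearance. The clearer side is not
    necessarily the side that leads toward the goal, which is exactly
    why rangefinders alone are not a complete solution.
    """
    if min(left_cm, right_cm) >= SAFE_DISTANCE_CM:
        return "straight"
    return "left" if left_cm > right_cm else "right"


print(avoidance_turn(120.0, 150.0))  # both sides clear -> straight
print(avoidance_turn(30.0, 90.0))    # obstacle on the left -> right
```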
Cameras give an enormous amount of information, and the difficulty lies in whittling that information down into something usable. Cameras also do not provide distance information, unless you have a camera with the ability to focus, or stereoscopic cameras positioned like human eyes. To maintain the low cost, I have decided that one camera combined with an IR rangefinder would give me the best results for the lowest cost. Another popular option is to use sensors and computing that are separate from the vehicle, such as the quadrotors at UPenn, but this does two things that I do not want. First, it requires an environment that supports the robot; if I have to optimize the environment for the robot anyway, what is the point of autonomy? Second, it takes the focus off of the robot and highlights the computing. For me, this is neither cost-efficient nor desirable.
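For the IR rangefinder side of that pairing, the sensor's analog output voltage falls off nonlinearly (roughly inversely) with distance, so a common approach is to fit a power law to calibration data. Here is a sketch of that conversion; the coefficients below are placeholders, not real calibration values, which would have to be measured for the specific sensor on the car.

```python
# Sketch: converting an IR rangefinder's analog voltage into an
# approximate distance via a power-law fit. The coefficients A and B
# are placeholder values; real ones come from calibrating the sensor
# against known distances.

A = 27.0   # assumed fit coefficient
B = -1.10  # assumed fit exponent (negative: voltage falls as distance grows)


def ir_distance_cm(voltage: float) -> float:
    """Approximate distance (cm) from sensor output voltage."""
    if voltage <= 0:
        raise ValueError("voltage must be positive")
    return A * voltage ** B
```

The useful property to verify after calibrating is simply that the fitted curve is monotonic over the sensor's rated range, so that a higher voltage always means a closer obstacle.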
Right now, obstacle avoidance is the most important requirement, so I will be focusing on that as the first phase of the AutoRC project. After that, I can begin adding functionality. The main add-ons I would like to see are path finding and waypoint management, for which the vehicle will need geolocation (GPS) and inertial measurement (a 3-axis accelerometer, gyro, and magnetometer). At the moment, the Arduino is the easiest way to control the RC car through its PWM pins, since both the drive and steering on the RC car are PWM-driven. To use a camera for obstacle avoidance I want to use OpenCV, an open-source computer vision library. The Arduino is not capable of handling that much information, so for higher-level processing I am going to use a Pandaboard ES to do the graphics processing, in an attempt to keep the software on-board.
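As a sketch of what the PWM control actually looks like: standard hobby-RC servos and ESCs expect pulses of roughly 1000 to 2000 microseconds, with 1500 as center, which an Arduino can emit with the Servo library's writeMicroseconds(). The mapping from a normalized steering command to a pulse width is pure arithmetic, so I can illustrate it in Python (the range constants are typical, but a real car needs per-servo calibration):

```python
# Sketch: mapping a steering command in [-1.0, 1.0] onto a hobby-RC
# servo pulse width in microseconds. 1000/1500/2000 us are the typical
# full-left/center/full-right values; the real endpoints should be
# calibrated per servo.

PWM_MIN_US = 1000
PWM_CENTER_US = 1500
PWM_MAX_US = 2000


def steering_to_pulse_us(command: float) -> int:
    """Convert a normalized steering command to a servo pulse width."""
    command = max(-1.0, min(1.0, command))  # clamp to the valid range
    return int(PWM_CENTER_US + command * (PWM_MAX_US - PWM_CENTER_US))


print(steering_to_pulse_us(0.0))   # center -> 1500
print(steering_to_pulse_us(-1.0))  # full left -> 1000
print(steering_to_pulse_us(1.0))   # full right -> 2000
```

The clamp matters in practice: a path-planning bug that emits an out-of-range command should saturate the servo, not drive the pulse width outside what the hardware tolerates.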
As I get further into the project, I will talk more about where I want to go with it. For now, I want to get a basic setup running and see where that takes me.
irc.freenode.net #swift funsized