LayerOne Badge Hack

•May 29, 2012 • 5 Comments

This past weekend I attended LayerOne, a security conference put on by the awesome people at NSL. It was a tremendously fun experience, and I learned quite a bit. I participated in two contests, the Tamper Evidence Contest and the Badge Hacking Contest. For the badge hacking I wanted to do something that would utilize both portions of the badge. Basically, the badge was an Arduino attached to an RF transmitter paired with a small RC car. The board had five control buttons, three of which were accessible from the Arduino: left, right, and turbo (forward fast). I wanted a control system that was semi-autonomous, but that also didn't run around willy-nilly like a simple obstacle-avoidance robot. So I decided on a two-part scheme that would use the badge as well as a second Arduino I had lying around.

The first part would be the badge. I happened to have an old 6DOF IMU in my box-o-parts, and I realized I could use it to control the car through motion. If I attached the IMU to the badge, it would be able to sense tilting: tilt the badge left and the car turns left, and the same idea for right and forward. I could have attached external wires to the reverse button to give myself full control, but with limited time I decided not to worry about it. I used an Arduino protoboard shield to house the IMU, and after some janky soldering and a few datasheets I had an IMU shield.


The completed L1 badge with my ghetto IMU shield

To sense tilt, the only sensors I needed were the accelerometers. I could have added the gyroscopes later, but that would have introduced several layers of difficulty: the sensors are analog, and the badge uses 3 of the 6 analog I/O pins to control the transmitter for the RC car. So to use more than 3 sensors I would probably have needed some sort of external A/D conversion so I could read all 6 sensors (3x accel, 3x gyro) digitally. In the interest of time I decided that sensing tilt was good enough.

Sensing tilt with a 3-axis accel is actually pretty simple. Gravity is a force, and thereby a constant acceleration downward on a mass. So with x, y, and z accels in the standard configuration, if the badge is perfectly level the x and y accels read 0 G and the z reads -1 G. If I tilt the badge 45 degrees to the left, the x will read about 0.707 G and the z about -0.707 G. So by taking the arctangent of y/z and x/z, I can tell what angle the badge is tilted left/right/forward.

The code I wrote is terrible, and so here is some pseudocode to explain what I did:

(inside the main loop)

    if (atan(xaccel / zaccel) > 0.785)   // atan(1) = 45 degrees ≈ 0.785 rad
        set buttonright true
    else if (atan(xaccel / zaccel) < -0.785)
        set buttonleft true
    else
        do nothing

    if (atan(yaccel / zaccel) > 0.785)
        set turbo true
    else
        do nothing

The code was very simplistic, but when I tested it on my spare Arduino it seemed to work fine.

I was unable to get the badge functioning before the end of the contest. When I tried to program it with the working code I had little success. Too late, I realized I had neglected to add the second crystal to the board, which left the Arduino running on its slow internal clock. Thanks to Arko for helping me fix my badge when I lifted the power pad on one of the ICs.

I also wanted the car itself to do more, but I ran out of time on the second day. I probably shouldn't have split my time between tamper evidence and this, but who needs sleep anyway? The initial idea was to have the second Arduino run an IR distance sensor so the car could stop and reverse when it detected an obstacle, and then add some blinky to give the car that extra snazz that would set it apart from the 20 other cars driving around. I had some EL wire lying around, so I wanted to attach it to the underside of the RC car for that lit-undercarriage look.

The end implementation was not nearly that cool. I wasn't able to study the car long enough to figure out how to control it locally from the second Arduino, so the hack became a purely passive system. I attached the IR sensor to the front of the car and set it up so that when anything came closer than 6″, the car would flash the EL wire. When I tried to mount the EL wire under the car, not only did it refuse to conform nicely, it rubbed against the ground. Next time I will just use LEDs.


Here is the car with the EL wire and the IR sensor. Also pictured is the badge with the IMU shield mounted.

In the end, not much about this hack was functional, but with two more hours I could have made it work nicely. Hopefully after finals I will get the chance to do something more, maybe even upgrade the driving platform.

Thanks to charliex for making the badge. This was an awesome design for a con, and I loved the opportunity to interface with external hardware outside of just the badge.


AutoRC Overview

•February 2, 2012 • 2 Comments

Hello! In this post I will outline my latest project, which is the point of this blog, and then lay out a roadmap for where I want to take it.

There is a noticeable gap between consumer-level robotics and the robotics platforms being produced in academic and scientific labs. Technology has gotten cheap and powerful enough that it is possible to achieve complex behavior with readily available hardware and software. My goal is to put together a concept demo of that idea: a low-cost autonomous vehicle that can complete basic navigation and path-finding tasks. Being a poor college student, I have great motivation for keeping the price low. The complexity of the project will grow as I gain experience.

Okay, so that is the overarching concept behind the AutoRC project. What do I mean to do with all of that? To begin with, I want a small vehicle (currently an RC car) that can navigate the outside world. This is not easy; there are a number of big challenges in a hostile outdoor environment. One of the most basic is obstacle avoidance: if the vehicle can't navigate around objects that lie in its path, it doesn't matter what other features or abilities it has. Obstacle avoidance can be very simple, but when combined with path navigation and real environments, it becomes difficult and complex very quickly.

I have considered several kinds of obstacle detection methods. Professional vehicle platforms use LiDAR (Light Detection And Ranging) sensors, but LiDARs are incredibly expensive and bring in a massive amount of data; it would be difficult to process a LiDAR on board in real time and still keep the platform small and mobile. Touch-based sensors are almost out of the question, for a number of reasons. First, touch sensors require physical contact with an object. When a vehicle is navigating an environment, it needs to find an optimal path to its goal; by the time a robot has physically touched an obstacle, it is no longer on an optimal path, because it has to back up and reroute. Also, with the current drive platform, the robot cannot go in reverse. I have some experience with ultrasonic and IR distance sensors, and both are pretty good at detecting objects within the sensor's field of view. However, they do not provide a complete solution, especially when obstacle avoidance is paired with path navigation. When a robot is confronted with an avoidance decision (do I go left or right?), these sensors do not give a very good picture of which path might give better results. A sensor that does is a camera.

Cameras give an enormous amount of information, and the difficulty lies in whittling it down into something usable. Cameras also do not directly provide distance information, unless you have a camera that can estimate depth from focus, or stereoscopic cameras positioned like human eyes. To keep costs down, I have decided that one camera combined with an IR rangefinder would give me the best results for the lowest price. Another popular option is to use sensors and computing that are separate from the vehicle, such as the quadrotors at UPenn, but that does two things I do not want. First, it requires an environment that supports the robot; if I have to optimize the environment for the robot anyway, what is the point of autonomy? Second, it takes the focus off the robot and highlights the computing. For me, that is neither cost-efficient nor desirable.

Right now, obstacle avoidance is the most important requirement, so I will focus on it as the first phase of the AutoRC project. After that, I can begin adding functionality. The main add-ons I would like to see are path finding and waypoint management, which means the vehicle will need geolocation (GPS) and inertial measurement, e.g. a 3-axis accelerometer, gyro, and magnetometer. At the moment, the Arduino is the easiest way to control the RC car through its PWM pins, as both the drive and steering on the car are PWM. For camera-based obstacle avoidance I want to use OpenCV, an open-source computer vision library. The Arduino is not capable of handling that much information, so for higher-level processing I am going to use a Pandaboard ES, in an attempt to keep all the software on board.

As I get further into the project, I will talk more about where I want to go with it. For now, I want to get a basic setup running and see where that takes me.

-Zach #swift funsized