The LitterBug project was born out of a Hackster contest aimed at running embedded AI on cheap, ARM-powered devices. Upon receiving a donkeycar kit for this contest, we found it to be an excellent platform for anyone to dive into autonomous vehicles and AI in general. We'll dedicate this post to some of our findings from the contest and our approach to building an autonomous trash rover.
Setting up the experiment
As mentioned, the donkeycar platform made it very easy to start training an RC car for autonomous driving. However, the platform was designed to train a model to recognize and drive within tracks. For our use case, we needed to see if our spin on the donkeycar could:
- Autopilot outdoors
- Control a new part (trash mechanism)
- Incorporate data from new part into a new model
- Autopilot controlling three outputs (steering, throttle, trash mechanism)
1. Autopilot outdoors
Donkeycars have traditionally been raced indoors on a clearly defined track. Our first experiment needed to determine whether we could take the donkeycar outdoors, with no track, and perform some basic obstacle avoidance. This model works by deciding the steering angle and throttle rate based on the output of a convolutional neural network that takes an image from the picamera feed as input. This end-to-end learning framework is an example of the behavior reflex approach to autonomous driving.
First, we angled the camera to point a little straighter so that the new model could have more context beyond the ground. We increased the complexity of the model by bumping the number of filters to 64 for each of the convolutional layers, and we changed the loss weights to 0.9 for the angle loss and 0.1 for the throttle loss. After these tweaks, we gathered a good first batch of training sessions, looping around the garden obstacle course consistently in various lighting conditions.
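To make the tweaks above concrete, here is a minimal Keras sketch of that kind of pilot model. This is not the exact donkeycar code: the layer count, kernel sizes, and dense-layer width are illustrative, but the 64-filter convolutions and the 0.9/0.1 loss weighting match what we described.

```python
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, Dropout
from tensorflow.keras.models import Model

def build_outdoor_pilot():
    # 120x160 RGB frames from the picamera feed
    img_in = Input(shape=(120, 160, 3), name="img_in")
    x = img_in
    # Every conv layer bumped up to 64 filters for the busier outdoor scenes
    for strides in (2, 2, 2, 1, 1):
        x = Conv2D(64, (3, 3), strides=strides, activation="relu")(x)
    x = Flatten()(x)
    x = Dense(100, activation="relu")(x)
    x = Dropout(0.1)(x)

    angle_out = Dense(1, name="angle_out")(x)
    throttle_out = Dense(1, name="throttle_out")(x)

    model = Model(inputs=img_in, outputs=[angle_out, throttle_out])
    # Weight steering 9:1 over throttle, as in our tweak
    model.compile(optimizer="adam",
                  loss="mse",
                  loss_weights={"angle_out": 0.9, "throttle_out": 0.1})
    return model
```

The heavier steering weight tells the optimizer that getting the angle right matters far more than matching our (fairly noisy) throttle inputs.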
Using 150,000 training samples, we managed to get the basic donkeycar build to drive around loops in our backyard. The model learned the very simple behavior of associating being close to fences with a hard right turn. We found that the LitterBug did really well on long stretches and when steering across a small dip in one of the corners.
LitterBug learned to turn at each end, but the timing was sometimes off, which caused it to crash mostly around the turns. Exploring the samples led us to believe it had learned to anticipate momentum carrying it around the bends, a habit picked up from our training at high speeds. This first experiment was a good sign that we could move on to the next one.
2. Control a new part
The donkeycar library makes it pretty easy to add a new part. We used a servo that was practically identical to the servo being used for steering. We replicated the code needed to move and log the steering servo and mapped it to the new "scoop" servo. We also mapped an extra button on the controller to move this servo while training.
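In donkeycar terms, a part is just a class with a `run()` method that the vehicle loop calls each cycle. A sketch of what our scoop part looked like follows; the pulse values and the `controller` object (standing in for the PCA9685 channel that drives the servo) are illustrative, and the details vary across donkeycar versions.

```python
class ScoopServo:
    """Donkeycar-style part: maps a -1..1 scoop command to a servo pulse."""

    def __init__(self, controller, left_pulse=290, right_pulse=490):
        # `controller` is anything with a set_pulse(pulse) method; on the
        # car this is the PWM channel driving the scoop servo.
        self.controller = controller
        self.left_pulse = left_pulse
        self.right_pulse = right_pulse

    def run(self, scoop):
        # `scoop` arrives from a controller button/axis in the range -1..1,
        # exactly like the value the stock steering servo part receives.
        scoop = max(-1.0, min(1.0, scoop))
        span = self.right_pulse - self.left_pulse
        pulse = self.left_pulse + (scoop + 1) / 2 * span
        self.controller.set_pulse(int(pulse))

    def shutdown(self):
        # Centre the scoop when the vehicle loop stops.
        self.run(0)
```

The part is then registered with the vehicle loop the same way as the steering servo, e.g. `V.add(ScoopServo(ctrl), inputs=['user/scoop'])`, so the scoop value gets logged alongside steering and throttle.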
3. Incorporate data from new part into a new model
Since the donkeycar library handled the scoop motion, we could keep relying on its framework to train a new model. We changed the architecture of the model slightly to take the scoop motion data as another input and to output scoop motion predictions, similar to how it predicts the steering angle.
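A sketch of that architecture change, again in Keras: the logged scoop position is fed back in as a second input, and a third output head predicts the next scoop command. As before, the exact layer sizes are illustrative rather than our final configuration.

```python
from tensorflow.keras.layers import (Input, Conv2D, Flatten, Dense,
                                     Concatenate)
from tensorflow.keras.models import Model

def build_scoop_pilot():
    # Same vision trunk as the outdoor pilot
    img_in = Input(shape=(120, 160, 3), name="img_in")
    x = img_in
    for strides in (2, 2, 2, 1, 1):
        x = Conv2D(64, (3, 3), strides=strides, activation="relu")(x)
    x = Flatten()(x)

    # The recorded scoop position becomes a second input
    scoop_in = Input(shape=(1,), name="scoop_in")
    x = Concatenate()([x, scoop_in])
    x = Dense(100, activation="relu")(x)

    # Three output heads: steering, throttle, and the new scoop command
    angle_out = Dense(1, name="angle_out")(x)
    throttle_out = Dense(1, name="throttle_out")(x)
    scoop_out = Dense(1, name="scoop_out")(x)

    model = Model(inputs=[img_in, scoop_in],
                  outputs=[angle_out, throttle_out, scoop_out])
    model.compile(optimizer="adam", loss="mse")
    return model
```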
From here, we collected more training samples with the new scoop. It took some practice to coordinate all three movements to actually pick up trash. To make sure we were feeding the model good examples, we went in and selected the best snippets to train on.
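Donkeycar logs each frame as a numbered JSON record (plus an image file) in a "tub" directory, so snippet selection amounts to copying a range of records into a fresh tub. A minimal sketch of that, assuming a v2-era tub layout with `record_N.json` files whose `cam/image_array` key names the frame; the helper name and keep range are our own:

```python
import json
import shutil
from pathlib import Path

def extract_snippet(tub_dir, out_dir, start, end):
    """Copy records [start, end) from one tub into a new, renumbered tub."""
    tub, out = Path(tub_dir), Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for new_idx, idx in enumerate(range(start, end)):
        record = json.loads((tub / f"record_{idx}.json").read_text())
        # Copy the camera frame the record points at
        img = record["cam/image_array"]
        shutil.copy(tub / img, out / img)
        # Write the record back out under its new index
        (out / f"record_{new_idx}.json").write_text(json.dumps(record))
```

Running this over the frame ranges where a pickup actually succeeded gives a smaller, cleaner tub to train the scoop model on.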
4. Autopilot controlling three outputs
After scrubbing the training data for the best examples, we could train a new model that incorporates the actuation of the scoop part. Scooping a piece of trash is a complex action: identifying the trash, navigating towards it, coordinating the scoop, steering, and throttle to pick it up, and carrying it to its final location.
With the architecture we used for just outdoor racing, we hypothesize that this kind of model could learn simple behaviors like keeping the scoop up when trash is close to the camera. Some of the more sequential behaviors, like maneuvering the scoop, steering, and throttle to move a piece of trash, or knowing when to dump the trash, could be trickier to learn. We are currently experimenting with model architectures and gathering large amounts of data.
Some hardware challenges
The original donkeycar platform uses 1/16-scale RC cars, ideally sized for racing, but not so much for our use case. We began by swapping the wheels for off-road wheels. This worked temporarily, but we soon burned out our little RC car after lots of lap training. Ultimately, we needed to scale up to a bigger 1/10-scale crawler. This improved the steering and gave us the torque required to scoop trash out of tricky places.
We also found that the crawler's ESC draws a lot of power: 40 amps consistently! There are solar charge controllers that can supply that much current, but they are very large and designed to be wall-mounted. Due to size limitations, we stuck to powering the Pi and sensors with solar.
For the immediate goal of this contest, we wanted to see if building an autonomous rover that could pick up trash was possible. However, this is just a starting point toward a fully autonomous trash rover. Aspects we would like to work on in the future include:
- GPS mapping to clean a specific location
- Selecting a designated area to dump collected trash
- Solar charging stations for rc car body