Autonomous Riding Lawnmower – Phase One Update

Jesse Brockmann is a senior software engineer with over 20 years of experience. He works for a large corporation designing real-time simulation software, started programming on an Apple IIe at the age of six, and has won several AVC events over the years. Jesse is also a SparkFun Ambassador. Make sure you read today's post to find out what he'll be up to next!

This is the second part of a multi-post series. If you would like to start from the beginning, click here to see part one.


I would first like to respond to some comments I received on this project. I support and encourage the right to repair, and I can understand why people are upset about the situation with newer equipment from John Deere and other companies. This mower was produced in a very different era and was very well documented in the owner's manual. For example, it included a complete circuit diagram, standard maintenance procedures, and ways to fix common issues that an owner may have.

I’ve been making good progress on the mower project and I thought it was time for an update. A motor mount was fabricated and a sprocket attached to the steering wheel. Size #25 chain connects the geared motor to the steering sprocket. The motor mount is notched so the chain tension can be adjusted.

Steering Sprocket

I also bought a linear actuator to control the brake/clutch mechanism. The linear actuator has a built-in potentiometer to provide position feedback, which will be used to set end points for the actuator so the brake pedal is not damaged. The actuator has six inches of travel and 55 lbs of force, which leaves a margin of safety. As you can see, the mounting worked out well: an existing hole was used, and a threaded rod was run through that hole to mount the end of the actuator on a custom fabricated bracket.

Brake/Clutch
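Here's a rough sketch of how that potentiometer feedback can protect the pedal. The pin numbers and limit readings are made up for illustration and would need to be calibrated on the actual actuator:

    // Hypothetical pins and limits -- calibrate on the real hardware
    const int FEEDBACK_PIN = A0;  // actuator's built-in potentiometer
    const int EXTEND_PIN   = 5;   // motor driver input: extend
    const int RETRACT_PIN  = 6;   // motor driver input: retract
    const int MIN_POS = 120;      // analog reading with brake fully released
    const int MAX_POS = 880;      // analog reading with brake fully engaged

    void driveBrake(bool engage) {
      int pos = analogRead(FEEDBACK_PIN);
      if (engage && pos < MAX_POS) {
        digitalWrite(EXTEND_PIN, HIGH);   // keep pushing toward the pedal
        digitalWrite(RETRACT_PIN, LOW);
      } else if (!engage && pos > MIN_POS) {
        digitalWrite(EXTEND_PIN, LOW);
        digitalWrite(RETRACT_PIN, HIGH);  // back off the pedal
      } else {
        digitalWrite(EXTEND_PIN, LOW);    // at an end point: stop driving
        digitalWrite(RETRACT_PIN, LOW);
      }
    }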

For the throttle, a servo was attached to the side of the tractor using a custom fabricated bracket. From the servo horn, a linkage was used to connect to the carburetor. This allows for full control of the throttle including the ability to kill the engine at very low throttle and choke for startup.

Throttle
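A servo throttle like this maps naturally onto the standard Arduino Servo library. Here's a minimal sketch, with made-up angles for the kill, idle, full and choke linkage positions:

    #include <Servo.h>

    Servo throttleServo;

    // Hypothetical linkage angles -- calibrate on the actual carburetor
    const int THROTTLE_KILL  = 5;    // below idle: engine dies
    const int THROTTLE_IDLE  = 30;
    const int THROTTLE_FULL  = 150;
    const int THROTTLE_CHOKE = 175;  // past full: choke for startup

    void setup() {
      throttleServo.attach(9);            // assumed servo signal pin
      throttleServo.write(THROTTLE_IDLE); // start at idle
    }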

Another custom mount was fabricated to shift the mower between second gear, neutral and reverse. This completes the four major control systems required for the mower.

Shifter

In addition, an extra safety system was added that allows the mower to be killed by shorting a wire that is part of the ignition system. This wire was routed up to the location where the controller will sit, and it attaches to a relay's normally closed contact. This means that when the relay is not energized, the engine cannot run. This is the last line of safety in case the controller fails or power is lost.

Ignition System
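Because the normally closed contact shorts the ignition whenever the relay coil is de-energized, the controller has to actively hold the relay on for the engine to run. A minimal sketch of that logic (the pin number is an assumption):

    const int KILL_RELAY_PIN = 7;  // hypothetical pin driving the relay coil

    void setup() {
      pinMode(KILL_RELAY_PIN, OUTPUT);
      digitalWrite(KILL_RELAY_PIN, LOW);   // relay off: ignition shorted, engine cannot run
    }

    void allowEngineRun() {
      digitalWrite(KILL_RELAY_PIN, HIGH);  // energize the coil to open the contact
    }

    void killEngine() {
      digitalWrite(KILL_RELAY_PIN, LOW);   // de-energizing (or losing power) kills the engine
    }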

A new circuit breaker was added to power the new control systems, and wires were run to the location of the controller. A fuse block provides power to the various circuits, and switches let the controller be powered while the motors stay disabled for testing.

Circuit Breaker

I decided to do some testing before all the systems were ready; here is the result of my testing late last fall.

The project sat over the winter, but I started again this spring. The various systems I described above were fabricated and tested manually without the controller. I then started to work on the code for the overall system. The system will have the following modes: INIT, START, STOP, RUNNING, FAILSAFE, MANUAL and KILL.

Here is a table with the various modes and sub-systems. Any software that controls hardware should have well-defined modes and rigid rules for moving between them, to avoid catastrophic issues. This is no time to slack on your code!

Mode       Brake       Throttle     Shifter       Steering     Kill Relay
INIT       ENGAGED     KILL         DISABLED      DISABLED     OFF
STOP       ENGAGED     Idle         Allow Change  DISABLED     ON
RUNNING    Controlled  Controlled   CONSTANT      Controlled   ON
START      ENGAGED     Controlled   DISABLED      DISABLED     ON
FAILSAFE   ENGAGED     Idle         CONSTANT      DISABLED     ON
KILL       NA          KILL         NA            NA           OFF
MANUAL     Controlled  Controlled   Controlled    Controlled   ON/OFF
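In code, this table maps naturally onto an enum plus a lookup table, so each mode's settings live in one place. Here's a sketch of one way to encode it; the names are mine, not necessarily what the project uses:

    enum Mode { INIT, START, STOP, RUNNING, FAILSAFE, MANUAL, KILL };
    enum Actuation { ENGAGED, IDLE, KILLED, DISABLED, CONTROLLED, ALLOW_CHANGE, CONSTANT, NA };

    struct ModeConfig {
      Actuation brake, throttle, shifter, steering;
      bool killRelayOn;
    };

    // One row per mode, in enum order; mirrors the table above
    const ModeConfig MODE_TABLE[] = {
      /* INIT     */ { ENGAGED,    KILLED,     DISABLED,     DISABLED,   false },
      /* START    */ { ENGAGED,    CONTROLLED, DISABLED,     DISABLED,   true  },
      /* STOP     */ { ENGAGED,    IDLE,       ALLOW_CHANGE, DISABLED,   true  },
      /* RUNNING  */ { CONTROLLED, CONTROLLED, CONSTANT,     CONTROLLED, true  },
      /* FAILSAFE */ { ENGAGED,    IDLE,       CONSTANT,     DISABLED,   true  },
      /* MANUAL   */ { CONTROLLED, CONTROLLED, CONTROLLED,   CONTROLLED, true  },  // relay ON/OFF per operator
      /* KILL     */ { NA,         KILLED,     NA,           NA,         false },
    };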

The FAILSAFE and KILL modes are used if some failure occurs. Depending on the nature of the failure, a determination is made whether it's a FAILSAFE or a KILL failure. Loss of radio contact is a FAILSAFE failure, but any communication failure with the controllers is a KILL failure. Other failure modes will be added as needed, such as low input voltage, over-temperature on a sub-system, overcurrent on the steering motor controller, etc.

I think most of these modes are self-explanatory. MANUAL mode exists for testing, and allows testing in ways the other modes do not. My end customer for this mower will not have access to this mode.

From each of the above modes, only certain other modes can be reached intentionally; KILL is the exception and can be entered from any mode. Here is what switching between modes looks like.

INIT -> STOP (Automatic)
STOP -> MANUAL/START/RUNNING
RUNNING -> STOP
MANUAL -> STOP
FAILSAFE -> STOP
KILL -> NONE

FAILSAFE mode is possible from any mode other than FAILSAFE or KILL, but the controller decides when it is activated. If the FAILSAFE condition is resolved, then switching to STOP is allowed.
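Checking those rules in a single guard function makes it hard for a stray transition to sneak in. Here's a sketch, building on the hypothetical Mode enum above (the post doesn't list transitions out of START, so that case is my guess):

    // Returns true only for the transitions allowed above
    bool canSwitch(Mode from, Mode to) {
      if (to == KILL)     return from != KILL;                      // reachable from anywhere
      if (to == FAILSAFE) return from != FAILSAFE && from != KILL;  // controller-initiated
      switch (from) {
        case INIT:     return to == STOP;   // automatic
        case STOP:     return to == MANUAL || to == START || to == RUNNING;
        case START:    return to == RUNNING || to == STOP;  // assumption; not listed above
        case RUNNING:  return to == STOP;
        case MANUAL:   return to == STOP;
        case FAILSAFE: return to == STOP;   // only once the condition is resolved
        default:       return false;        // KILL: no way out
      }
    }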

At this point, all that is left is to test the above logic in standalone mode off the mower, and make 100 percent sure the logic table is followed and no crashes or odd behaviors occur. I've already found and fixed issues, such as the brake not being engaged when switching from RUNNING to STOP; that would have meant I could not stop the mower without switching back to RUNNING mode, which could lead to a runaway mower.

Once standalone mode is working, I will start testing the various control systems in isolation and then finally test the entire system on the mower. Please stay tuned for the final write-up on this project (hopefully in a month or two), where you'll also hear details about my next big project: a remote control/autonomous electric go-kart!


As a thank you for reading this far, I would like to let you know I have a special promo code you can use to get 10% off any SparkFun Original product. Just use ORIGINALRED2020 during checkout. This code is good through the end of 2020, but can only be used once per customer. Thanks for reading - I hope I can start attending STEM shows next year and show off my hard work on this and other projects I have been working on.



A Brooklyn House With Abolitionist Ties Will Be Considered For Landmark Status #History #IndependenceDay


Great news for the preservation of Black history after a 10-year zoning battle.

Via the Gothamist:

For more than a decade, 227 Duffield Street has been a building in peril, the contentious battleground of Black history, preservationist, and development interests. Back in 2007, the city sought to seize the property by eminent domain to create a park, but ultimately relinquished after expert testimony about its origins and public outcry.

But following a vote on Tuesday by the Landmarks Preservation Commission, the mid-nineteenth century house in Downtown Brooklyn associated with New York City’s abolitionist movement will finally be considered for landmark status.

Learn more!


Meet Avra!

Howdy SparkFans! My name is Avra Saslow, and I’m so stoked to be joining you as the new Technical Content Creator. I’ve worked on catalog curation at SparkFun for about a year as an intern, but have now transitioned into a role in which I get to explore and showcase what’s possible with SparkFun products.

Meet Avra!

I initially got my start in the maker community in high school, where I explored the intersection of metalworking and electronics. I built a pair of glass speakers and a steampunk MIDI controller with a few surface transducers and an Arduino.

Metal Working

Fast forward to college - I found myself still hooked on electronics when my brother and I would ride our bikes on the Thursday Night Cruiser Ride here in Boulder, with a hundred other people who jury-rigged speakers and LEDs to their bikes for each themed ride.

I pursued that curiosity and studied Computer Science with an emphasis in design, geographic information systems and mathematics at CU Boulder (Sko Buffs!). If you can’t tell from what I studied, I’m really captivated by the intersection of multiple disciplines, and I’m hoping to showcase how various technical frameworks and ecosystems can build on each other.

RedBull Hack the Hits

My degree allowed me to not only study the fundamental technical aspects of CS, but also to be really creative in projects. I was selected to compete at a hackathon hosted by Red Bull called “Hack the Hits,” in which my team and I created an interactive DJ set inspired by the motions of physics.

I’ve collaborated with Specialized Bicycles to create an app that encourages kids to get out and ride their bikes. I’ve also done quite a bit of web development, ranging from a website that finds suitable sites to build solar farms using Machine Learning, to dynamic visualizations of the effects of urban surfaces (concrete and asphalt) on overall Earth albedo, to an interface that couples with an OBD-II to display all of your car’s information in a meaningful and elegant way.

Outside my current areas of specialty, I’m looking forward to integrating GPS/GNSS/IoT/ML technologies to improve our reporting capabilities on environmental applications (think fire tracking, precise weather data collection...what are your ideas for applications?).

I’m also hoping to provide you with a framework that more clearly connects the dots between hardware and software, so you can iterate on your hardware projects to make them more dynamic and agile (incorporating IoT, building Machine Learning models, creating mobile and web apps, and any other ideas you want explored!).

My job will be the most fulfilling for you and me if we work together, so let me know what kind of technologies and projects you all would like to see! What kinds of projects have you gotten stuck on? Why did you get stuck? What technologies interest you? Comment below and we can start working together!

In the meantime, as I’ll be producing content for SparkFun, you can also find me building swing bikes and e-bikes, reading about space and chaotic dynamics, or just riding my bike, kayaking, skating and adventuring throughout Colorado. I can’t wait to start working with you!



Machine vision with low-cost camera modules

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyro, magnetometer, light, proximity, temperature, humidity and color — but realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We'll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It's not hard, but you need to take care to connect the right cables at either end. You can use tape to secure the wires once everything is working, so none come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for Arduino Create tools. Once you install and open your environment, the camera library is available in the library manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the OV767 library
  • Press the Install button

Now, we will use the example sketch to test the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment (remove the //) from line 48 to display a test pattern
Camera.testPattern();
  • Compile and upload to your board

Your Arduino is now outputting raw image binary over serial. To view this as an image we’ve included a special application to view the image output from the camera using Processing.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds):

If all goes well, you should see the striped test pattern above!

Next we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera in the Processing viewer: 

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should now have a live image:

Considerations for TinyML

The full VGA (640×480 resolution) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting detection with MNIST, which uses 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96, which is more than enough. Even state-of-the-art ‘Big ML’ applications often use only 320×320 images (see the TinyML book). Also consider that an 8-bit grayscale VGA image occupies 300KB uncompressed, while the Nano 33 BLE Sense has only 256KB of RAM. We have to do something to reduce the image size!

Camera format options

The OV7670 module supports lower resolutions through configuration options. The options modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 288
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start as it reduces the amount of time it takes to send an image from the camera to the Arduino. It reduces the size of the image data array required in your Arduino sketch as well. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1)
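For example, at QVGA in a 2-byte-per-pixel color format, the frame buffer works out to 320 × 240 × 2 = 153,600 bytes:

    byte data[320 * 240 * 2];  // QVGA frame, 2 bytes per pixel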

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format which has 5 bits for red, 6 bits for green, and 5 bits for blue:

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert from RGB565 to 24-bit RGB

    uint16_t pixel = (high << 8) | low;  // combine the two bytes from the camera

    int red   = ((pixel >> 11) & 0x1f) << 3;  // top 5 bits, scaled to 0-255
    int green = ((pixel >> 5)  & 0x3f) << 2;  // middle 6 bits, scaled to 0-255
    int blue  = ((pixel >> 0)  & 0x1f) << 3;  // bottom 5 bits, scaled to 0-255

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

The techniques used to resample images are an interesting topic in themselves. We found this downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output.
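If you'd rather roll your own, the simplest smooth option is a box filter that averages each 2×2 block into one output pixel. Here's a minimal sketch for an 8-bit grayscale buffer (my illustration, not the Eloquent Arduino code):

    // Halve width and height by averaging each 2x2 block of pixels
    void downsample2x(const uint8_t* in, int w, int h, uint8_t* out) {
      for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
          int sum = in[(2 * y) * w + (2 * x)]
                  + in[(2 * y) * w + (2 * x + 1)]
                  + in[(2 * y + 1) * w + (2 * x)]
                  + in[(2 * y + 1) * w + (2 * x + 1)];
          out[y * (w / 2) + x] = sum / 4;  // average of the four input pixels
        }
      }
    }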

Applications like the TensorFlow Lite Micro person detection example that use CNN-based models on Arduino for machine vision may not need any further preprocessing of the image, other than averaging the RGB values of each pixel down to a single 8-bit grayscale value.
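Continuing from the RGB565 conversion above, that grayscale step can be as simple as a per-pixel average (a weighted luma formula is another common choice):

    // Collapse the three channels into one 8-bit grayscale value
    byte gray = (red + green + blue) / 3;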

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
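Putting those pieces together over a whole grayscale buffer might look like the following; the gray, w, and h names are placeholders, not part of the example sketch:

    // First pass: find the darkest and brightest input pixels
    int lower = 255, upper = 0;
    for (int i = 0; i < w * h; i++) {
      lower = min(lower, (int)gray[i]);
      upper = max(upper, (int)gray[i]);
    }

    // Second pass: stretch that range to 0-255 (skip flat images)
    if (upper > lower) {
      for (int i = 0; i < w * h; i++) {
        gray[i] = map(gray[i], lower, upper, 0, 255);
      }
    }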

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!
