A Brooklyn House With Abolitionist Ties Will Be Considered For Landmark Status #History #IndependenceDay


Great news for the preservation of Black history after a 10-year zoning battle.

Via the Gothamist:

For more than a decade, 227 Duffield Street has been a building in peril, a contentious battleground over Black history, pitting preservationist against development interests. Back in 2007, the city sought to seize the property by eminent domain to create a park, but ultimately relinquished the effort after expert testimony about the building's origins and public outcry.

But following a vote on Tuesday by the Landmarks Preservation Commission, the mid-nineteenth century house in Downtown Brooklyn associated with New York City’s abolitionist movement will finally be considered for landmark status.


Meet Avra!

Howdy SparkFans! My name is Avra Saslow, and I’m so stoked to be joining you as the new Technical Content Creator. I’ve worked on catalog curation at SparkFun for about a year as an intern, but have now transitioned into a role in which I get to explore and showcase what’s possible with SparkFun products.


I initially got my start in the maker community in high school, where I explored the intersection of metalworking and electronics. I built a pair of glass speakers and a steampunk MIDI controller with a few surface transducers and an Arduino.

Metalworking

Fast forward to college - I found myself still hooked on electronics when my brother and I would ride our bikes on the Thursday Night Cruiser Ride here in Boulder, with a hundred other people who jury-rigged speakers and LEDs to their bikes for each themed ride.

I pursued that curiosity and studied Computer Science with an emphasis in design, geographic information systems and mathematics at CU Boulder (Sko Buffs!). If you can’t tell from what I studied, I’m really captivated by the intersection of multiple disciplines, and I’m hoping to showcase how various technical frameworks and ecosystems can build on each other.

Red Bull Hack the Hits

My degree allowed me to not only study the fundamental technical aspects of CS, but also to be really creative in projects. I was selected to compete at a hackathon hosted by Red Bull called “Hack the Hits,” in which my team and I created an interactive DJ set inspired by the motions of physics.

I’ve collaborated with Specialized Bicycles to create an app that encourages kids to get out and ride their bikes. I’ve also done quite a bit of web development, ranging from a website that finds suitable sites to build solar farms using Machine Learning, to dynamic visualizations of the effects of urban surfaces (concrete and asphalt) on overall Earth albedo, to an interface that couples with an OBD-II to display all of your car’s information in a meaningful and elegant way.

Outside my current areas of specialty, I’m looking forward to integrating GPS/GNSS/IoT/ML technologies to improve our reporting capabilities on environmental applications (think fire tracking, precise weather data collection...what are your ideas for applications?).

I’m also hoping to provide you with a framework that more clearly connects the dots between hardware and software, so you can iterate on your hardware projects to make them more dynamic and agile (incorporating IoT, building Machine Learning models, creating mobile and web apps, and any other ideas you want explored!).

My job will be the most fulfilling for you and me if we work together, so let me know what kind of technologies and projects you all would like to see! What kinds of projects have you gotten stuck on? Why did you get stuck? What technologies interest you? Comment below and we can start working together!

In the meantime, as I’ll be producing content for SparkFun, you can also find me building swing bikes and e-bikes, reading about space and chaotic dynamics, or just riding my bike, kayaking, skating and adventuring throughout Colorado. I can’t wait to start working with you!


Machine vision with low-cost camera modules

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyroscope, magnetometer, light, proximity, temperature, humidity and color — but you’ll have realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We’ll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It’s not hard, but you need to take care to connect the right cables at either end. You can use tape to secure the wires once everything is working, lest one come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for Arduino Create tools. Once you install and open your environment, the camera library is available in the library manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the Arduino_OV767X library
  • Press the Install button

Now, we will use the example sketch to test the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment (remove the //) from line 48 to display a test pattern
  • Compile and upload to your board

Your Arduino is now outputting raw image binary over serial. To view this as an image we’ve included a special application to view the image output from the camera using Processing.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds):

If all goes well, you should see the striped test pattern above!

Next we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera in the Processing viewer: 

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should now have a live image:

Considerations for TinyML

The full VGA (640×480 resolution) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting detection with MNIST using 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96, which is more than enough. Even state-of-the-art ‘Big ML’ applications often use only 320×320 images (see the TinyML book). Also consider that an 8-bit grayscale VGA image occupies about 300KB uncompressed, while the Nano 33 BLE Sense has only 256KB of RAM. We have to do something to reduce the image size!
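The arithmetic behind that constraint is easy to check. A quick sketch in plain C++ (nothing Arduino-specific, just frame sizes in bytes):

```cpp
#include <cstdint>

// Uncompressed frame size in bytes: width * height * bytes per pixel.
constexpr uint32_t frameBytes(uint32_t w, uint32_t h, uint32_t bytesPerPixel) {
    return w * h * bytesPerPixel;
}

// 8-bit grayscale VGA: 640 * 480 * 1 = 307200 bytes (300KB) -- more than
// the Nano 33 BLE Sense's 256KB of RAM.
static_assert(frameBytes(640, 480, 1) == 307200, "VGA grayscale");

// A 96x96 grayscale frame, as used by the person detection example,
// needs only 9216 bytes.
static_assert(frameBytes(96, 96, 1) == 9216, "96x96 grayscale");
```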

Camera format options

The OV7670 module supports lower resolutions through configuration options that modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 288
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start as it reduces the amount of time it takes to send an image from the camera to the Arduino. It reduces the size of the image data array required in your Arduino sketch as well. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1);

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format which has 5 bits for red, 6 bits for green, and 5 bits for blue:

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert from RGB565 to 24-bit RGB

    uint16_t pixel = (high << 8) | low;

    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5) & 0x3f) << 2;
    int blue  = ((pixel >> 0) & 0x1f) << 3;
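Going the other direction — packing 8-bit red, green and blue back into a 2-byte RGB565 pixel — is the mirror image of the snippet above. A small plain-C++ sketch:

```cpp
#include <cstdint>

// Pack 8-bit-per-channel RGB into a 16-bit RGB565 pixel by keeping the
// top 5 bits of red, the top 6 bits of green, and the top 5 bits of blue.
uint16_t packRGB565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

For example, pure red (255, 0, 0) packs to 0xF800 — the top five bits set.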

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.
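As a concrete sketch of the idea (plain C++, independent of the camera library, and simpler than a full interpolating resampler): halving a grayscale image by averaging each 2×2 block of pixels already avoids the worst of the jaggedness.

```cpp
#include <cstdint>
#include <vector>

// Halve a grayscale image by averaging each 2x2 block of source pixels.
// src is a w*h row-major buffer; the result is (w/2) x (h/2).
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src, int w, int h) {
    std::vector<uint8_t> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; y++) {
        for (int x = 0; x < w / 2; x++) {
            int sum = src[(2 * y) * w + 2 * x]
                    + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = (uint8_t)(sum / 4); // average of the block
        }
    }
    return dst;
}
```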

The techniques used to resample images are an interesting topic in themselves. We found that this downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output (see animated GIF above).

Applications like the TensorFlow Lite Micro person detection example, which use CNN-based models on Arduino for machine vision, may not need any further preprocessing of the image — other than averaging the RGB values to produce 8-bit grayscale data per pixel.

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
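In plain C++ the whole pass looks roughly like this; `arduinoMap` mirrors the integer arithmetic of Arduino's `map()`:

```cpp
#include <cstdint>

// Same integer scaling as Arduino's map().
long arduinoMap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Find the min/max pixel values, then stretch the range to 0-255.
void normalize(uint8_t* px, int n) {
    int lower = 255, upper = 0;
    for (int i = 0; i < n; i++) {          // one pass for the bounds
        if (px[i] < lower) lower = px[i];
        if (px[i] > upper) upper = px[i];
    }
    if (upper == lower) return;            // flat image: nothing to stretch
    for (int i = 0; i < n; i++)
        px[i] = (uint8_t)arduinoMap(px[i], lower, upper, 0, 255);
}
```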


This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!



Drive WS2812B LED strips from a PMOD device, a lightdriver project @ anfractuosity.com:

I created a PMOD module PCB using KiCad that enables connecting WS2812B lighting strips to an FPGA board with a PMOD interface. The board was assembled by JLCPCB.
This is my first project using an FPGA. I plan to soon implement an SPI interface on the FPGA, to accept colour pixels via SPI from a Raspberry Pi and then drive the LEDs appropriately. I am making use of the original Zybo board, which uses a Zynq FPGA, although I’m not using the ARM portion of the chip as I want to learn VHDL.

Project files are available on GitHub.


Reflow Toaster Oven – a Qwiic Hack!

While working from home during the past few months, I have greatly missed being able to throw some prototypes through the super nice SparkFun production reflow oven.

zoom in of solder paste reflowing in a toaster oven

Ahhh, so glad I didn't have to hand solder each of those blobs.

I really wanted a reflow oven in my garage, so after doing some research online, I decided to buy a cheap toaster oven and give this hack a try. Ultimately, I decided to use a standard hobby servo to automate the movement of the temperature knob like so:


Just like a push-pull linkage in a model airplane!

Although slightly less elegant than some of the other methods out there, this servo control method works quite well. Additionally, it didn't require any modifications to the wiring of my toaster oven and was pretty darn cheap and simple to set up.


I came across many blogs, websites and products dedicated to hacking toaster ovens into reflow ovens:

The most important things I learned were the following:

  1. Get a toaster oven with these features (I purchased the Black & Decker TO3000G, which was $59 new):

    • Analog dial control style
    • Quartz elements (two above and two below if you can)
    • A convection fan
    • Small or medium-sized
  2. A single thermocouple "floating" in the air near the PCB works fine. Note, a lot of the pro-end models have multiple temp sensors (at multiple points in the chamber and one touching the PCB), but I haven't found the need yet. The SparkFun Qwiic Thermocouple and braided thermocouple work great!

  3. Opening the door seems to be the only way to get a good and fast cool down. Even the ~$4,000 pro machines have the drawer open automatically to achieve good "free air" cooling. I have yet to automate this step, but so far I love watching the solder flow. When I see my reflow time is up, I have been manually opening the door. If I find the need to automate this in the future, I'll most likely get another servo or stepper motor involved.

  4. Most toaster ovens can't achieve an ideal ramp-up rate of 1.5C/second, but that's okay. You can add a slower ramp up during your soak phase - a slow climb from 150 to 180C. More on this later as we discuss the profile and the code, and show some results.

Parts List

Manual testing

Before rigging up a servo to move the control knob, I first gathered parts and got my thermocouple readings working. Then the plan was to simply move the control knob manually (with my hand), and see if I could get a decent profile while watching the data plotter.


My very first attempt manually adjusting the knob by hand.

Following the profile suggestions in the CompuPhase article and reading what Wikipedia had, I decided to try for a profile represented by the blue line in the graphic above. The red line represents the live readings from the thermocouple. Note: in all of my serial plotter graphics, the blue line is plotted only as a reference; my controller is not analyzing the blue line in any way.

And so I tried again:


Second attempt adjusting temp knob by hand.

As you can see, I started taking notes to keep track of what was working and what was not. For my second attempt, I actually set the oven to "warm" for ten seconds before starting my plotter. The adjustments noted in the graphic were done by me physically moving the dial with my hand.

By my fourth manual test, I was pretty pleased with the results - not perfect by any means, but starting to look a bit like an actual profile:


Fourth attempt by hand, starting to look decent.

Servo Hack

Now that I knew a decent profile could be achieved by simply moving the temp dial, it was time to slap a servo on this thing and automate the movements!

First things first, I ripped off that control knob. A flathead screwdriver helped greatly to get it started. Next, I mounted the servo horn to the knob with a servo mount screw in the center hole of the servo horn.

Temp knob removed. Mount with center point screw first.

Then, I drilled out two more holes for a good secure mount. Note: make sure to align the positioning mark (that little silver thing) pointing straight down. This will ensure that the servo linkage and rotation can achieve the full 180 degrees of motion.

Mounted servo horn with center screw. Horn mounting complete.

Next was actually mounting the servo to the oven. I had a scrap piece of wood that fit the job exactly, but anything that can extend the servo out past the face of the oven should work. I opted to use dual-lock hook and loop, but servo tape or double-sided foam tape would have worked too. Make sure to align properly both along the center point of the servo horns and the depth away from the oven.

Center point alignment. Depth alignment.

Finally, the last step was to link the two control horns. There are many ways to do this with various linkage hardware, but the simplest is to use bare control rods and add "Z-bends" at specific points. I first tried this with a single control rod, but it was much smoother with two!

Z-bend Dual control rod hooked up

The Code

The source code for the project can be downloaded from the GitHub repository here:

Using Example 1 from the Qwiic Thermocouple Arduino Library and a basic servo control example, I was able to piece together a sketch that accomplishes a pretty nice reflow profile.

Before diving into the code, I'd like to highlight the fact that this project does not use PID control to try and follow a set profile. I first proved out that manually moving the temp knob by hand could create a decent reflow profile. By watching the temperature and keeping track of time, my Arduino can determine the zone (preheat, soak, ramp up, reflow, cooldown). Once you know the zone, you can simply set the servo to predetermined positions (or toggle on/off, as in soak) to hold a close-enough temperature or ramp up.

My plan was:

  1. Set full power
  2. Wait to see 140C, kill power, enter soak zone
  3. During soak, toggle power on/off to climb from 150C to 180C
  4. Once soak time is complete, turn on full power (enter ramp-up zone)
  5. During ramp-up, wait to see reflow temp (220C) (enter reflow zone)
  6. During reflow, toggle servo on/off to hold temp
  7. Once reflow time is complete, turn servo to "off," indicating user should open door

My main loop simply uses a "for loop" to get through the zones, then when it's done it stops with a "while(1)". It is updateServoPos() that actually determines the zone and updates a global variable.

void loop()
{
    for (int i = 0 ; i <= 4 ; i++) // cycle through all zones - 0 through 4.
    {
        int tempDelta = abs(profileTemp[i + 1] - profileTemp[i]);
        int timeDelta = abs(profileTimeStamp[i + 1] - profileTimeStamp[i]);
        float rampIncrementVal = float(tempDelta) / float(timeDelta);
        currentTargetTemp = profileTemp[i];

        // Print data to serial for good plotter view in real time.
        // Print target (for visual reference only), currentTemp, ovenSetting (aka servo pos), zone
        for (int j = 0 ; j < (timeDelta - 1); j++)
        {
            currentTargetTemp += rampIncrementVal;
            Serial.print(currentTargetTemp, 2);

            if (tempSensor.available())
                currentTemp = tempSensor.getThermocoupleTemp();
            else
                currentTemp = 0; // error

            updateServoPos(); // Note, this will increment the zone when specific times or temps are reached.

            // (prints of currentTemp and servo position elided in this excerpt)
            Serial.println(zone * 20);
        }
    }

    // monitor actual temps during cooldown
    for (int i = 0 ; i < COOL_DOWN_SECONDS ; i++)
    {
        currentTargetTemp = 0;
        Serial.print(currentTargetTemp, 2);
        if (tempSensor.available())
            currentTemp = tempSensor.getThermocoupleTemp();
        else
            currentTemp = 0; // error
        Serial.println(zone * 20);
    }

    while (1); // end
}

The following function, updateServoPos(), is where the temperature and totalTime is checked to determine the zone we are currently in, and ultimately what servo positions are desired.

void updateServoPos()
{
    if ((zone == ZONE_preheat) && (currentTemp > 140)) // done with preheat, move onto soak
        zone = ZONE_soak;
    else if ((zone == ZONE_soak) && (soakSeconds > SOAK_TIME)) // done with soak, move onto ramp
        zone = ZONE_ramp;
    else if ((zone == ZONE_ramp) && (currentTemp > 215)) // done with ramp, move onto reflow
        zone = ZONE_reflow;
    else if ((zone == ZONE_reflow) && (reflowSeconds > 30))
        zone = ZONE_cool;

    // The SERVO_POS_* values below are placeholders; see the GitHub repo for
    // the positions tuned to this particular oven, knob and linkage.
    switch (zone)
    {
        case ZONE_preheat:
            myServo.write(SERVO_POS_FULL); // full power until soak temp is reached
            break;
        case ZONE_soak:
            if ((soakSeconds % 15) == 0) // every 15 seconds toggle
                soakState = !soakState;
            myServo.write(soakState ? SERVO_POS_FULL : SERVO_POS_OFF);
            break;
        case ZONE_ramp:
            myServo.write(SERVO_POS_FULL); // full power to reach reflow temp
            break;
        case ZONE_reflow:
            if ((reflowSeconds > 5) && (reflowSeconds < 10)) // from 5-10 seconds drop to off
                myServo.write(SERVO_POS_OFF);
            else
                myServo.write(SERVO_POS_FULL);
            break;
        case ZONE_cool:
            myServo.write(SERVO_POS_OFF); // "off" cues the user to open the door
            break;
    }
}
Some of these settings would surely need to be tweaked when using a different oven, thermocouple and servo setup. Slight variations in the servo hardware hookups will most likely require different servo position settings to get the desired position on the temp knob.

Also, the position of the thermocouple can affect its readings, so each setup will most likely require different temperature trigger values (i.e. in the updateServoPos() function, you may need to adjust the if statements that determine the zone).

Last, paste and component size must be considered. After reflowing a few panels, I saw that the SAC305 paste I was using actually started reflowing closer to 220C, so I made that my target max reflow temp. Also, huge components can act as heatsinks and may require a longer soak and/or a higher peak reflow temp.



My last profile cooking actual panels and looking good!

The above screenshot is from my latest achieved profile using servo control. The different colored lines represent the following:

  • Blue: Generic reference profile
  • Red: Temp readings showing actual profile in the oven
  • Yellow: Zones, stepping up through preheat, soak, ramp-up, reflow, cooldown
  • Green: Servo position, represented by 0-100 percent power

Although this last profile screenshot is the result of almost 20 test runs and 10 actual board panels reflowing, I would like to mention that even with some slight variation between profiles, all of my panels came out beautifully. A good paste job and proper placement of parts can greatly help your initial yield, but I even had a few sloppy paste jobs and they came out fine!


My complete setup in the garage. Panel in the oven.

If you're interested in reflowing some electronics using a toaster oven, it can be done using a simple Arduino, servo and thermocouple. It does take a little time to do some test runs and make slight adjustments to the provided Arduino sketch, but the results are pretty great! I currently have to watch the oven to manually open the door when it's done, but this could be automated. Plus, while one panel was in the oven, I simply started hand placing parts on the next panel and so no time was really lost.

If you would like to give this a try and have any questions, please reach out in the comments below or at the GitHub project repository.

Also, if you have any experience with solder reflow profiles, I'd love to hear your thoughts in the comments below.

Thanks for reading and cheers to some good DIY electronics SMD assembly!



A full-duplex tiny AVR software UART

Nerd Ralph writes:

I’ve written a few software UARTs for AVR MCUs. All of them have bit-banged the output, using cycle-counted assembler busy loops to time the output of each bit. The code requires interrupts to be disabled to ensure accurate timing between bits. This makes it impossible to receive data at the same time as it is being transmitted, and therefore the bit-banged implementations have been half-duplex. By using the waveform generator of the timer/counter in many AVR MCUs, I’ve found a way to implement a full-duplex UART, which can simultaneously send and receive at up to 115kbps when the MCU is clocked at 8MHz.

More details on Nerd Ralph blog.


Simple machine learning with Arduino KNN

Machine learning (ML) algorithms come in all shapes and sizes, each with its own trade-offs. We continue our exploration of TinyML on Arduino with a look at the Arduino KNN library.

In addition to powerful deep learning frameworks like TensorFlow for Arduino, there are also classical ML approaches suitable for smaller data sets on embedded devices that are useful and easy to understand — one of the simplest is KNN.

One advantage of KNN is that once the Arduino has some example data, it is instantly ready to classify! We’ve released a new Arduino library so you can include KNN in your sketches quickly and easily, with no off-device training or additional tools required.

In this article, we’ll take a look at KNN using the color classifier example. We’ve shown the same application with deep learning before — KNN is a faster and lighter-weight approach by comparison, but it won’t scale as well to larger, more complex datasets.

Color classification example sketch

In this tutorial, we’ll run through how to classify objects by color using the Arduino_KNN library on the Arduino Nano 33 BLE Sense.

To set up, you will need the following:

  • Arduino Nano 33 BLE Sense board
  • Micro USB cable
  • Open the Arduino IDE or Arduino Create
  • Install the Arduino_KNN library 
  • Select ColorClassifier from File > Examples > Arduino_KNN 
  • Compile this sketch and upload to your Arduino board

The Arduino_KNN library

The example sketch makes use of the Arduino_KNN library.  The library provides a simple interface to make use of KNN in your own sketches:

#include <Arduino_KNN.h>

// Create a new KNNClassifier
KNNClassifier myKNN(INPUTS);

In our example INPUTS=3 – for the red, green and blue values from the color sensor.

Sampling object colors

When you open the Serial Monitor you should see the following message:

Arduino KNN color classifier
Show me an example Apple

The Arduino board is ready to sample an object’s color. If you don’t have an Apple, Pear and Orange to hand, you might want to edit the sketch to use different labels. Keep in mind that the color sensor works best in a well-lit room on matte, non-shiny objects, and each class needs to have distinct colors! (The color sensor isn’t ideal for distinguishing between an orange and a tangerine — but it could detect how ripe an orange is. If you want to classify objects by shape, you can always use a camera.)

When you put the Arduino board close to the object, it samples the color and adds it to the KNN examples along with a number labelling the class the object belongs to (i.e. the numbers 0, 1 or 2 representing Apple, Orange or Pear). ML techniques where you provide labelled example data are also called supervised learning.

The code in the sketch to add the example data to the KNN function is as follows:


// Add example color to the KNN model
myKNN.addExample(color, currentClass);

The red, green and blue levels of the color sample are also output over serial:

The sketch takes 30 color samples for each object class. You can show it one object and it will sample the color 30 times — you don’t need 30 apples for this tutorial! (Although a broader dataset would make the model more generalized.)


With the example samples acquired the sketch will now ask to guess your object! The example reads the color sensor using the same function as it uses when it acquired training data — only this time it calls the classify function which will guess an object class when you show it a color:


 // Classify the object
 classification = myKNN.classify(color, K);

You can try showing it an object and see how it does:

Let me guess your object
You showed me an Apple

Note: It will not be 100% accurate especially if the surface of the object varies or the lighting conditions change. You can experiment with different numbers of examples, values for k and different objects and environments to see how this affects results.

How does KNN work?

Although the Arduino_KNN library does the math for you, it’s useful to understand how ML algorithms work when choosing one for your application. In a nutshell, the KNN algorithm classifies objects by comparing how close they are to previously seen examples. Here’s an example chart with average daily temperature and humidity data points. Each example is labelled with a season:

To classify a new object (the “?” on the chart) the KNN classifier looks for the most similar previous example(s) it has seen.  As there are two inputs in our example the algorithm does this by calculating the distance between the new object and each of the previous examples. You can see the closest example above is labelled “Winter”.

The k in KNN is just the number of closest examples the algorithm considers. With k=3 it counts the three closest examples. In the chart above the algorithm would give two votes for Spring and one for Winter — so the result would change to Spring. 

One disadvantage of KNN is that the more training example data there is, the longer the KNN algorithm needs to spend checking distances each time it classifies an object. This makes KNN less feasible for large datasets and is a major difference between KNN and a deep learning based approach.

Classifying objects by color

In our color classifier example there are three inputs from the color sensor. The example colors from each object can be thought of as points in three dimensional space positioned on red, green and blue axes. As usual the KNN algorithm guesses objects by checking how close the inputs are to previously seen examples, but because there are three inputs this time it has to calculate the distances in three dimensional space. The more dimensions the data has the more work it is to compute the classification result.
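To make the distance-and-vote idea concrete, here is a bare-bones three-input KNN in plain C++. This is an illustrative sketch of the algorithm as described above, not the Arduino_KNN implementation (labels are assumed to be small non-negative integers):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// One labelled training example: three inputs (e.g. R, G, B) and a class.
struct Example { float in[3]; int label; };

int knnClassify(const std::vector<Example>& examples, const float q[3], int k) {
    // Euclidean distance from the query point to every stored example.
    std::vector<std::pair<float, int>> dist;
    for (const auto& e : examples) {
        float d = 0;
        for (int i = 0; i < 3; i++) d += (e.in[i] - q[i]) * (e.in[i] - q[i]);
        dist.push_back({std::sqrt(d), e.label});
    }
    // The k nearest examples vote; the majority label wins.
    std::sort(dist.begin(), dist.end());
    int votes[16] = {0};  // labels assumed to be in 0..15 for this sketch
    for (int i = 0; i < k && i < (int)dist.size(); i++) votes[dist[i].second]++;
    return (int)(std::max_element(votes, votes + 16) - votes);
}
```

With k=1 only the single closest example decides; with k=3 the three closest examples vote, which is how a query can flip class as described in the chart discussion above.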

Further thoughts

This is just a quick taste of what’s possible with KNN. You’ll find an example for board orientation in the library examples, as well as a simple example for you to build on. You can use any sensor on the BLE Sense board as an input, and even combine KNN with other ML techniques.

Of course, there are other machine learning resources available for Arduino, including TensorFlow Lite tutorials as well as support from professional tools such as Edge Impulse and Qeexo. We’ll be inviting more experts to explore machine learning on Arduino in the coming weeks.


Keep an OpenMV Mind

Welcome, welcome, welcome! We have a load of new products to showcase today and it all starts with five new supporting products for the popular OpenMV H7 Camera, including WiFi and LCD shields, and three unique lens and module options. One of those options includes the ability to add a FLIR Lepton module, so we are now offering the sensor on its own! Rounding out the day we also have a new version of the :MOVE mini buggy for micro:bit. Now, let's take a closer look!

Customize your OpenMV H7 Cam!

OpenMV WiFi Shield



The WiFi Shield gives your OpenMV Cam the ability to connect to the Internet wirelessly. This shield features an ATWINC1500 FCC Certified WiFi module that can transmit data at up to 48 Mbps, making it perfect for streaming video from the OpenMV Camera. Your OpenMV Cam's firmware already has built-in support for controlling the WiFi Shield using the network module.

OpenMV LCD Shield



The LCD Shield gives your OpenMV Camera the ability to display what it sees on-the-go while not connected to your computer. This shield features a 1.8" 128x160 16-bpp (RGB565) TFT LCD display with a controllable backlight. Your OpenMV Cam's firmware already has built-in support for controlling the LCD Shield using the LCD module.

OpenMV Global Shutter Module



The Global Shutter Camera Module allows your OpenMV Cam to capture high quality grayscale images not affected by motion blur. The module is built around the MT9V034 global shutter sensor, capable of taking snapshot pictures on demand and able to run at 80 FPS in QVGA mode, 200 FPS in QQVGA mode, and 400 FPS in QQQVGA mode.

OpenMV FLIR Lepton Adapter Module


FLIR Lepton 2.5 - Thermal Imaging Module



The FLIR® Lepton® Adapter Module allows your OpenMV Camera to interface with the FLIR® Lepton® (version 1, 2 or 3) thermal imaging sensors for thermal vision applications. Combining machine vision with thermal imaging allows you to better pinpoint or identify the objects you wish to measure the temperature of, with astounding accuracy.

In order to help support this module, we are now offering the FLIR Lepton 2.5 Thermal Imaging Module on its own as well. Please be aware that we currently have a limit of one module per order.

OpenMV Ultra Wide Angle Lens



The OpenMV Ultra Wide Angle Lens gives your OpenMV Camera the ability to see a wider field of view (FOV). This lens can easily be screwed into your existing module and has about a 100° FOV. The standard lens that ships with your OpenMV Camera has about a 70° FOV, which is good but not ideal for security applications.

Kitronik :MOVE mini MK2 buggy kit



The Kitronik :MOVE mini MK2 buggy kit for the BBC micro:bit is the latest version of the ever-popular :MOVE mini, providing a fun introduction to robotics. The :MOVE mini is a two-wheeled robot suitable for autonomous operation, remote control projects via a Bluetooth application, or being controlled using a second BBC micro:bit via the micro:bit's radio functionality.

That's it for this week! As always, we can't wait to see what you make! Shoot us a tweet @sparkfun, or let us know on Instagram or Facebook. We’d love to see what projects you’ve made!

