Welcome

Welcome to 3d Electronic Circuits .com.

We will be bringing some new concepts to the world of electronics here, and will also be covering more general electronic theories and circuits.

My basic theory for building true 3d electronic circuits can be found on “The Theory” page.

I am also hoping to be able to provide you with some electronic designing and project kits.

Keep checking back: this site is brand new, and it takes some time to get content up.

Soofa Signs use solar power and electronic paper – Giant EInk … wow! @mysoofa


Soofa Signs use solar power and electronic paper. By that I mean a giant 42″ E Ink display: I’ve never seen one this big, so I took a photo to look it up later (now). It appears to be this one. Here’s a bit about the company…

Soofa is a female founded company, launched out of MIT and Harvard in 2014. Soofa is for people with a shared stake in a special place. We create the neighborhood news feed that connects a community with screens everyone can see and anyone can use. Our Soofa Signs provide a platform for everyone in the community to have a voice with Soofa Talk.

Read more.


With a Raid on Javier Smaldone, Argentinian Authorities Have Restarted Their Harassment of E-Voting Critics

Javier Smaldone is a well-known figure in the Argentinian infosec community. As a security researcher, he’s worked to highlight the flaws in electronic voting in Argentina, despite the country’s local and federal attempts to move ahead with insecure software and electoral procedures.

The Argentinian authorities have a reputation for responding poorly to such criticism: in 2016, when Joaquín Sorianello warned an e-voting company about vulnerabilities in its e-voting software, his home was raided by the Buenos Aires police. Another technologist, Ivan Barrera Oro, was raided in 2017 shortly after demonstrating voting vulnerabilities in current software. The cases against Sorianello and Oro were both subsequently dismissed.

Now, it seems, it’s Smaldone’s turn to fend off a questionable criminal investigation. In early October, his home in Buenos Aires was raided by federal police, his phone and computers seized, and he was detained for questioning. The warrant for the search was in connection with a highly publicized leak of data, exfiltrated in late July from the federal police themselves. The 700GB of data was hosted anonymously, and caused political embarrassment both to law enforcement and to the Argentinian politicians mentioned in the leaks.

The surprising raid on Smaldone was one of a series across the country against technologists by law enforcement investigating the leak. However, the material submitted by the police to the courts to obtain a warrant mostly points to perfectly lawful acts of free expression that would be entirely expected from an outspoken security researcher—not to any suspicious acts by Smaldone.

Police cited as incriminating Smaldone’s public discussion on Twitter of the high-profile politicians whose data was in the leaks, and his own subsequent analyses (on his blog and in the media) of how the attacks were carried out. It’s not surprising – much less incriminating – that Smaldone, who has testified before the Argentinian Senate on cyber-security, might have political opinions, or might express his expert opinion on the attacks. The police also claim in the request that Smaldone’s Twitter account “constantly expresses aversion to the police,” and that this “aversion” sometimes goes “beyond mockery.” But, again, this is not evidence of a crime.

Additional technical evidence for Smaldone’s involvement is weak, based on vague correlations between the geotracked location of Smaldone’s phone and activity related to the attack. The police even submitted as evidence that the leakers’ Tor onion service used the same version of the Nginx web server software as Smaldone’s blog – despite the fact that their shared version was the latest, stable update of what is currently the most popular web server application in the world, and was therefore also installed on millions of other Nginx servers at the time. At least based on the evidence that has been publicly disclosed, the raid on Smaldone appears unjustified.

Smaldone’s case is the latest, not just in a pattern of persecution against e-voting critics in Argentina, but in an accelerating trend of misunderstanding and scapegoating technologists in the wider region – one which we described in detail in our 2018 whitepaper, “Protecting Security Researchers’ Rights in the Americas.” Latin American technologists are increasingly caught up in unrelated, politicized cyber-security investigations, conducted with little evidence, under overly broad laws, by poorly advised justices.

EFF has been fighting against such prosecutions since its founding in 1990. Argentina’s Javier Smaldone joins Ecuador’s Ola Bini as independent computer experts who are still being treated as dangerous suspects, for no more than practicing their lawful work, and using their inalienable right of free expression. We have joined with Access Now, and digital rights groups across the Americas in a letter to the judge and justice minister involved in the case, calling for Smaldone’s rights to be respected.

Join Us On November 21 For a LIVE Tour Of Co Lab Community Makerspace

Coming up on November 21, we’ll be taking a live tour of Co.Lab Community Makers, thanks to our sponsors at Digi-Key. This is an incredible space located in Austin, Texas that is free to use! We did a Makerspace Spotlight article where you can learn more about the space. It looks […]

Read more on MAKE

The post Join Us On November 21 For a LIVE Tour Of Co Lab Community Makerspace appeared first on Make: DIY Projects and Ideas for Makers.

Friday Product Post: Pro Skills Pay the Bills

Hello everyone and happy Friday! We’re kicking off the week with a brand new development kit to make you a pro with the Qwiic Connect System. Following the kit, we have a super small Serial Flash Memory IC that will be perfect with the Qwiic Micro Development Board we released a few weeks ago, as well as a new 2.5Ah lithium-ion battery pack. Let’s dive in!

Be a Qwiic Pro with the Qwiic Pro Kit!


SparkFun Qwiic Pro Kit


33 available


KIT-15349



$59.95

Looking to get started with the SparkFun Qwiic system? The Qwiic Pro Kit provides you with a RedBoard Turbo, two sensors, two accessory boards and all the cables you need to start utilizing Qwiic and I2C easily. This kit was designed to allow users to get started with Arduino without the need for soldering or breadboarding. Hooking up a handful of inputs and outputs to a RedBoard has never been so easy – use the joystick, accelerometer or proximity sensor, and one small display for outputting text, graphics or even a microPong game!



Low Current Lithium Ion Battery Pack - 2.5Ah (USB)


In stock


TOL-15204



$9.95

We’ve taken the classic, portable, rechargeable lithium ion battery pack and tweaked the design to make it applicable to low-current applications. While similar power banks are typically designed to power off automatically at lower currents, this battery pack will continue to operate if your device is drawing a mere 20mA or more. Just connect your device to the USB-A port on the battery pack and you’re good to go! To recharge the battery pack, just plug it into your computer or phone charger using the included USB micro-B cable.
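As a back-of-the-envelope illustration (assuming the full rated capacity is usable and the load is constant, which real packs and loads are not), runtime is just capacity divided by current draw:

```python
def runtime_hours(capacity_mah: float, load_ma: float) -> float:
    """Idealized runtime estimate: capacity divided by constant current draw."""
    return capacity_mah / load_ma

# A 2.5Ah (2500mAh) pack feeding a 20mA load, the minimum draw that
# keeps this pack awake, would last about 125 hours in this ideal model.
print(runtime_hours(2500, 20))  # 125.0
```

Real-world runtime will be lower once converter losses and battery aging are factored in.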



Serial Flash Memory - W25Q32FV (32Mb, 104MHz, SOIC-8)


28 available


COM-15809



$0.95

The W25Q32FV (32M-bit) Serial Flash memory provides a storage solution for systems with limited space, pins and power. This small SMD IC series offers flexibility and performance well beyond ordinary Serial Flash devices. They are ideal for code shadowing to RAM, executing code directly from Dual/Quad SPI (XIP), and storing voice, text and data.
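A note on units: the part is specified in megabits, not megabytes. A quick sketch of the conversion (using binary megabits, 1 Mbit = 2^20 bits):

```python
def megabits_to_bytes(mbit: int) -> int:
    """Convert a capacity quoted in (binary) megabits to bytes."""
    return mbit * (2 ** 20) // 8

# The W25Q32FV's 32 Mbit works out to 4 MiB of storage.
total = megabits_to_bytes(32)
print(total, total // 2 ** 20)  # 4194304 4
```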


That’s it for this week! As always, we can’t wait to see what you make! Shoot us a tweet @sparkfun, or let us know on Instagram or Facebook. We’d love to see what projects you’ve made!


Fruit identification using Arduino and TensorFlow

By Dominic Pajak and Sandeep Mistry

Arduino is on a mission to make machine learning easy enough for anyone to use. The other week we announced the availability of TensorFlow Lite Micro in the Arduino Library Manager. With it come some cool ready-made ML examples, such as speech recognition, simple machine vision, and even an end-to-end gesture recognition training tutorial. For a comprehensive background, we recommend you take a look at that article.

In this article we are going to walk through an even simpler end-to-end tutorial using the TensorFlow Lite Micro library and the Arduino Nano 33 BLE Sense’s colorimeter and proximity sensor to classify objects. To do this, we will be running a small neural network on the board itself. 

Arduino BLE 33 Nano Sense running TensorFlow Lite Micro

The philosophy of TinyML is doing more on the device with fewer resources: smaller form factors, less energy, and lower-cost silicon. Running inference on the same board as the sensors has benefits in terms of privacy and battery life, and means it can be done independently of a network connection.

The fact that we have the proximity sensor on the board means we get an instant depth reading of an object in front of the board – instead of using a camera and having to determine if an object is of interest through machine vision. 

In this tutorial, when the object is close enough we sample its color – the onboard RGB sensor can be viewed as a one-pixel color camera. While this method has limitations, it provides us a quick way of classifying objects using only a small amount of resources. Note that you could indeed run a complete CNN-based vision model on-device. As this particular Arduino board includes an onboard colorimeter, we thought it’d be fun and instructive to start with this approach.
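To make the idea concrete, here is a minimal stand-in for that approach: a nearest-centroid classifier over normalized RGB readings. (This is not the neural network the tutorial trains later, and the fruit colors below are hypothetical values for illustration.)

```python
import math

def centroid(samples):
    """Average a list of (r, g, b) samples into one representative color."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def classify(rgb, centroids):
    """Return the label whose centroid is nearest (Euclidean) to rgb."""
    return min(centroids, key=lambda label: math.dist(rgb, centroids[label]))

# Hypothetical normalized readings for two fruits:
centroids = {
    "apple":  centroid([(0.7, 0.2, 0.1), (0.6, 0.3, 0.1)]),
    "banana": centroid([(0.5, 0.5, 0.0), (0.4, 0.5, 0.1)]),
}
print(classify((0.65, 0.25, 0.1), centroids))  # apple
```

A learned model generalizes better than fixed centroids, but the input and output are the same: a color reading in, a fruit label out.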

We’ll show that a simple but complete end-to-end TinyML application can be achieved quickly and without a deep background in ML or embedded development. What we cover here is data capture, training, and classifier deployment. This is intended to be a demo, but there is scope to improve and build on it should you decide to connect an external camera down the road. We want to give you an idea of what is possible and a starting point with the tools available.

What you’ll need

  • Arduino Nano 33 BLE Sense
  • A micro USB cable
  • A desktop/laptop machine with a web browser 
  • Some objects of different colors 

About the Arduino board

The Arduino Nano 33 BLE Sense board we’re using here has an Arm Cortex-M4 microcontroller running mbedOS and a ton of onboard sensors – digital microphone, accelerometer, gyroscope, temperature, humidity, pressure, light, color and proximity. 

While tiny by cloud or mobile standards, the microcontroller is powerful enough to run TensorFlow Lite Micro models and classify sensor data from the onboard sensors.

Setting up the Arduino Create Web Editor

In this tutorial we’ll be using the Arduino Create Web Editor – a cloud-based tool for programming Arduino boards. To use it, you have to sign up for a free account and install a plugin that allows the browser to communicate with your Arduino board over a USB cable.

You can get set up quickly by following the getting started instructions which will guide you through the following:

  • Download and install the plugin
  • Sign in or sign up for a free account

(NOTE: If you prefer, you can also use the Arduino IDE desktop application; its setup is described in the previous tutorial.)

Capturing training data

Now we will capture data to train our model in TensorFlow. First, choose a few differently colored objects. We’ll use fruit, but you can use whatever you prefer.

Setting up the Arduino for data capture

Next we’ll use Arduino Create to program the Arduino board with an application object_color_capture.ino that samples color data from objects you place near it. The board sends the color data as a CSV log to your desktop machine over the USB cable.

To load the object_color_capture.ino application onto your Arduino board:

  • Connect your board to your laptop or PC with a USB cable
    • The Arduino board takes a male micro USB
  • Open object_color_capture.ino in Arduino Create by clicking this link

Your browser will open the Arduino Create web application (see GIF above).

  • Press OPEN IN WEB EDITOR
    • For existing users this button will be labeled ADD TO MY SKETCHBOOK
  • Press Upload & Save
    • This will take a minute
    • You will see the yellow light on the board flash as it is programmed
  • Open the serial Monitor
    • This opens the Monitor panel on the left-hand side of the web application
    • You will now see color data in CSV format here when objects are near the top of the board

Capturing data in CSV files for each object

For each object we want to classify we will capture some color data. By doing a quick capture with only one example per class we will not train a generalized model, but we can still get a quick proof of concept working with the objects you have to hand! 

Say, for example, we are sampling an apple:

  • Reset the board using the small white button on top.
    • Keep your finger away from the sensor, unless you want to sample it!
    • The Monitor in Arduino Create will say ‘Serial Port Unavailable’ for a minute
  • You should then see Red,Green,Blue appear at the top of the serial monitor
  • Put the front of the board against the apple. 
    • The board will only sample when it detects an object is close to the sensor and is sufficiently illuminated (turn the lights on or be near a window)
  • Move the board around the surface of the object to capture color variations
  • You will see the RGB color values appear in the serial monitor as comma separated data. 
  • Capture a few seconds of samples from the object
  • Copy and paste this log data from the Monitor to a text editor
    • Tip: untick AUTOSCROLL check box at the bottom to stop the text moving
  • Save your file as apple.csv
  • Reset the board using the small white button on top.

Do this a few more times, capturing other objects (e.g. banana.csv, orange.csv). 

NOTE: The first line of each of the .csv files should read:

Red,Green,Blue

If you don’t see it at the top, you can just copy and paste in the line above. 
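Once saved, each .csv file is just the Red,Green,Blue header followed by raw sensor rows. As a sketch of what the training notebook then does with them (the exact preprocessing in the Colab may differ), here is one way to parse a capture log and normalize each sample so the channels sum to 1:

```python
import csv
import io

def load_normalized(csv_text):
    """Parse a Red,Green,Blue capture log; normalize each row so r+g+b == 1."""
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        r, g, b = (float(rec[k]) for k in ("Red", "Green", "Blue"))
        total = (r + g + b) or 1.0  # avoid division by zero on all-dark samples
        rows.append((r / total, g / total, b / total))
    return rows

sample = "Red,Green,Blue\n120,40,20\n90,30,15\n"
print(load_normalized(sample))
```

Normalizing makes the classifier sensitive to the ratio of the channels rather than overall brightness, which varies with room lighting.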

Training the model

We will now use Google Colab to train an ML model using the data you captured in the previous section.

  • First, open the FruitToEmoji Jupyter Notebook in Colab
  • Follow the instructions in the Colab
    • You will be uploading your *.csv files 
    • Parsing and preparing the data
    • Training a model using Keras
    • Outputting a TensorFlow Lite Micro model
    • Downloading it to run the classifier on the Arduino 

With that done you will have downloaded model.h to run on your Arduino board to classify objects!

The Colab will guide you to drop your .csv files into the file window, with the result shown above
Normalized color samples captured by the Arduino board are graphed in the Colab
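The notebook trains a small neural network with Keras, but the underlying principle fits in a few lines of plain Python. As an illustrative stand-in (this is not the Colab’s code, and the samples and labels below are hypothetical), here is a two-class logistic regression trained by gradient descent on normalized RGB features:

```python
import math

def train_logistic(data, labels, epochs=500, lr=0.5):
    """Tiny two-class logistic regression over (r, g, b) features,
    trained by plain stochastic gradient descent."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                      # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(x, w, b):
    """Class 1 if the linear score is positive, else class 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical normalized samples: 0 = red-heavy "apple", 1 = red+green "banana".
data = [(0.7, 0.2, 0.1), (0.6, 0.3, 0.1), (0.45, 0.45, 0.1), (0.4, 0.5, 0.1)]
labels = [0, 0, 1, 1]
w, b = train_logistic(data, labels)
print(predict((0.65, 0.25, 0.1), w, b))  # 0 (apple-like)
print(predict((0.42, 0.48, 0.1), w, b))  # 1 (banana-like)
```

The Keras model does the same thing with more capacity; the exported TensorFlow Lite Micro model is just these learned weights packaged for the microcontroller.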

Program TensorFlow Lite Micro model to the Arduino board

Finally, we will take the model we trained in the previous stage and compile and upload it to our Arduino board using Arduino Create. 

Your browser will open the Arduino Create web application:

  • Press the OPEN IN WEB EDITOR button
  • Import the model.h you downloaded from Colab using Import File to Sketch: 
Import the model.h you downloaded from colab
The model.h tab should now look like this
  • Compile and upload the application to your Arduino board 
    • This will take a minute
    • When it’s done you’ll see this message in the Monitor:
  • Put your Arduino’s RGB sensor near the objects you trained it with
  • You will see the classification output in the Monitor:
Classifier output in the Arduino Create Monitor

You can also edit the object_color_classifier.ino sketch to output emojis instead (we’ve left the Unicode in the comments in the code!). To view them in a Mac OS X or Linux terminal, close the browser tab with Arduino Create in it, reset your board, and type cat /dev/cu.usbmodem[n]. 

Output from Arduino serial to Linux terminal using ANSI highlighting and unicode emojis

Learning more

The resources around TinyML are still emerging, but there’s a great opportunity to get a head start and meet experts on 2-3 December 2019 in Mountain View, California, at the Arm IoT Dev Summit. This includes workshops from Sandeep Mistry, Arduino’s technical lead for on-device ML, and from Google’s Pete Warden and Daniel Situnayake, who literally wrote the book on TinyML. You’ll be able to hang out with these experts and more at the TinyML community sessions there too. We hope to see you there!

Conclusion

We’ve seen a quick end-to-end demo of machine learning running on Arduino. The same framework can be used to sample different sensors and train more complex models. For our object by color classification we could do more, by sampling more examples in more conditions to help the model generalize. In future work, we may also explore how to run an on-device CNN. In the meantime, we hope this will be a fun and exciting project for you. Have fun!

Securely tailor your TV viewing with BBC Box and Raspberry Pi

Thanks to BBC Box, you might be able to enjoy personalised services without giving up all your data. Sean McManus reports:

One day, you could watch TV shows that are tailored to your interests, thanks to BBC Box. It pulls together personal data from different sources in a household device, and gives you control over which apps may access it.

“If we were to create a device like BBC Box and put it out there, it would allow us to create personalised services without holding personal data,” says Max Leonard.

TV shows could be edited on the device to match the user’s interests, without those interests being disclosed to the BBC. One user might see more tech news and less sport news, for example.

BBC Box was partly inspired by a change in the law that gives us all the right to reuse data that companies hold on us. “You can pull out data dumps, but it’s difficult to do anything with them unless you’re a data scientist,” explains Max. “We’re trying to create technologies to enable people to do interesting things with their data, and allow organisations to create services based on that data on your behalf.”

Building the box

BBC Box is based on Raspberry Pi 3B+, the most powerful model available when this project began. “Raspberry Pi is an amazing prototyping platform,” says Max. “Relatively powerful, inexpensive, with GPIO, and able to run a proper OS. Most importantly, it can fit inside a small box!”

That prototype box is a thing of beauty, a hexagonal tube made of cedar wood. “We created a set of principles for experience and interaction with BBC Box and themes of strength, protection, and ownership came out very strongly,” says Jasmine Cox. “We looked at shapes in nature and architecture that were evocative of these themes (beehives, castles, triangles) and played with how they could be a housing for Raspberry Pi.”

The core software for collating and managing access to data is called Databox. Alpine Linux was chosen as the operating system because it’s “lightweight, speedy but most importantly secure”, in Max’s words. To get around problems making GPIO access work on Alpine Linux, an Arduino Nano is used to control the LEDs. Storage is a 64GB microSD card, and apps run inside Docker containers, which helps isolate them from each other.

Combining data securely

The BBC has piloted two apps based on BBC Box. One collects your preferred type of TV programme from BBC iPlayer and your preferred music genre from Spotify. That unique combination of data can be used to recommend events you might like from Skiddle’s database.

Another application helps two users to plan a holiday together. It takes their individual preferences and shows them the destinations they both want to visit, with information about them brought in from government and commercial sources. The app protects user privacy, because neither user has to reveal places they’d rather not visit to the other user, or the reason why.

The team is now testing these concepts with users and exploring future technology options for BBC Box.

The MagPi magazine

This article was lovingly yoinked from the latest issue of The MagPi magazine. You can read issue 87 today, for free, right now, by visiting The MagPi website.

You can also purchase issue 87 from the Raspberry Pi Press website with free worldwide delivery, from the Raspberry Pi Store, Cambridge, and from newsagents and supermarkets across the UK.

 

The post Securely tailor your TV viewing with BBC Box and Raspberry Pi appeared first on Raspberry Pi.

Every jack-o’-lantern needs a pair of animatronic eyes

If you’d like your jack-o’-lantern to stand out, a pair of animatronic eyes should do the trick. While there are numerous ways that you can go about this, few (if any) look as good as the set made by Will Cogley in the first video below.

The incredibly realistic 3D-printed eyeballs are installed into the hollowed-out pumpkin using skewers as supports and glance in all directions, while orange eyelids open and close for an even more human(ish) appearance. 

The second clip delves deeper into the eyeballs themselves, which come in several forms. Control is via a Wii Nunchuk-esque joystick interface, with the help of an Arduino.

Bugs Begone: Chameleon Tongue Inspires Fast-Acting Robots With Flash-Like Reflexes #Biomimicry


Via IFLScience

A team of researchers from Purdue University is taking cues from nature to inspire fast-acting robotics with chameleon-like reflexes capable of grabbing and maneuvering items with astonishing speed. With stretchable polymers, they say these soft robots could inform efficiency in future robot manufacturing.

High-powered and high-speed, the robots get their power from their elastic energy, or capability to expand to various degrees in order to move quickly. Internal pneumatic channels expand with pressure to snap and grab, yet are able to release their hold on an object by contracting. It all comes from biomimicry: the hyper-elastic tendons in woodpeckers, the snapping speed of Venus flytraps, and of course the quick-firing tongues of chameleons. One of the robots is capable of expanding up to five times its own length and can catch and retrieve a live flying beetle in just 120 milliseconds. (Tell that to the fruit flies haunting your kitchen.)

Learn more!

Surf Window is an interactive beach diorama that displays surf conditions

While some of us live directly beside the beach, others—the vast majority, in fact—reside inland where we can’t see the waves on a day-to-day basis. As a solution to this issue, surfer-maker Luke Clifford came up with his own “Surf Window,” an interactive diorama that shows real-time surf conditions at a glance.

The Arduino Mega-controlled device pulls beach info from the Magicseaweed API, then adjusts the laser-cut wooden stage to match. Indicators include starfish that light up depending on how good the surf conditions are overall, a physical wave model that moves up and down to represent height, a rotating seagull to reveal wind direction, and more. 
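As a sketch of the software side, the device periodically fetches a forecast and maps its fields onto the physical indicators. The field names below (“solidRating”, “wind.direction”) are assumptions based on Magicseaweed’s public forecast API, and the sample record is hypothetical; check the API documentation before relying on them:

```python
import json

def indicators(forecast_json: str):
    """Map the soonest forecast record to the diorama's displays:
    lit starfish from the 0-5 surf rating, seagull angle from wind direction."""
    rec = json.loads(forecast_json)[0]      # first entry = soonest forecast
    stars = rec["solidRating"]              # assumed 0-5 rating field
    seagull_deg = rec["wind"]["direction"]  # assumed wind bearing in degrees
    return stars, seagull_deg

# Hypothetical response shape:
sample = '[{"solidRating": 3, "wind": {"direction": 270}}]'
print(indicators(sample))  # (3, 270)
```

On the real device, the returned values would drive the starfish LEDs and the servo that rotates the seagull.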

Whether you’re a landlocked surfer, or just someone who wants to know more about the environment, this looks like a really interesting gadget. The build is currently wrapping up a Kickstarter campaign if you’d like to have your own!