Book review: Hacker States #Books #Hacking @wmmna reviews Hacker States, by political sociologist Luca Follis and cultural anthropologist, documentary video producer and interdisciplinary scholar Adam Fish.

The Publisher writes:

Luca Follis and Adam Fish examine the entanglements between hackers and the state, showing how hackers and hacking moved from being a target of state law enforcement to a key resource for the expression and deployment of state power. Follis and Fish trace government efforts to control the power of the internet; the prosecution of hackers and leakers (including such well-known cases as Chelsea Manning, Edward Snowden, and Anonymous); and the eventual rehabilitation of hackers who undertake “ethical hacking” for the state. 

And WMMNA states:

The protagonists in the book are more or less famous (when they’ve been identified that is.) Chelsea Manning, Edward Snowden, Anonymous, Lauri Love, Julian Assange. There’s the hacktivists, the whistleblowers and other courageous actors of civil disobedience. Then come the Twitter bots, the cybercriminals, the fabricated Americans, the cyber mercenaries, WannaCry and the trolls. But also the FBI, the NSA, GCHQ, the police, the banks, the corporations, the lawyers and the courts. In the middle of all that jolly crowd and trying to make sense of it, there’s the press and of course, there’s you and I who live in what the authors of this book rightly call “high breach societies.”

The motivations of the hackers and the states might differ widely, their morality and beliefs might be at opposite ends of the ethical spectrum but they end up mingling more or less willingly. The authors explore the concept of “boundary” and how states redraw and expand borders and boundaries to enfold or remove hackers. States unleash law enforcement crackdown on hackers but they also attempt to neutralise, co-opt and exploit their power.

The review sums up: “The book is very U.S.-centred, with mentions of the UK, China, Russia, a couple of nods to North Korea, Latin America and the Islamic State in Syria (ISIS.) As for Europe, Iran and Israel, they are reduced to footnotes. Either we need other volumes of Hacker States or the message is that if it doesn’t happen within the elastic boundaries of the U.S., hacking simply doesn’t matter. I’d go for the first option.”

See the entire review here.

Plan C Live: The Montana Mask – Thursday May 7th

PLAN C LIVE is an online conversation with Dorothy Jones-Davis of Nation of Makers, Dale Dougherty of Make: Community, and a panel of makers about the civic response to COVID-19. Thank you to those who have been participating as panelists and those of you who attended through Zoom. We […]

Read more on MAKE

The post Plan C Live: The Montana Mask – Thursday May 7th appeared first on Make: DIY Projects and Ideas for Makers.

Code Robotron: 2084’s twin-stick action | Wireframe #38

News flash! Before we get into our Robotron: 2084 code, we have some important news to share about Wireframe: as of issue 39, the magazine will be going monthly.

The new 116-page issue will be packed with more in-depth features, more previews and reviews, and more of the guides to game development that make the magazine what it is. The change means we’ll be able to bring you new subscription offers, and generally make the magazine more sustainable in a challenging global climate.

As for existing subscribers, we’ll be emailing you all to let you know how your subscription is changing, and we’ll have some special free issues on offer as a thank you for your support.

The first monthly issue will be out on 4 June, and subsequent editions will be published on the first Thursday of every month after that. You’ll be able to order a copy online, or you’ll find it in selected supermarkets and newsagents if you’re out shopping for essentials.

We now return you to our usual programming…

Move in one direction and fire in another with this Python and Pygame re-creation of an arcade classic. Raspberry Pi’s own Mac Bowley has the code.

Robotron: 2084 is often listed on ‘best game of all time’ lists, and has been remade and re-released for numerous systems over the years.

Robotron: 2084

Released back in 1982, Robotron: 2084 popularised the concept of the twin-stick shooter. It gave players two joysticks which allowed them to move in one direction while also shooting at enemies in another. Here, I’ll show you how to recreate those controls using Python and Pygame. We don’t have access to any sticks, only a keyboard, so we’ll be using the arrow keys for movement and WASD to control the direction of fire.

The movement controls use a global variable, a few if statements, and two built-in Pygame Zero functions: on_key_down and on_key_up. The on_key_down function is called when a key on the keyboard is pressed, so when the player presses the right arrow key, for example, I add a positive 1 to the player’s x direction. Rather than setting the movement to 1, I add 1 to the direction. The on_key_up function is called when a key is released. A key being released means the player doesn’t want to travel in that direction anymore, so we do the opposite of what we did earlier: we take away the 1 or -1 we applied in the on_key_down function.

We repeat this process for each arrow key. Moving the player in the update() function is the last part of my movement; I apply a move speed and then use a playArea rect to clamp the player’s position.
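As a rough illustration of that key-handling scheme (my sketch, not Mac's actual listing; in a Pygame Zero script the two helper functions below would be the bodies of the built-in on_key_down(key) and on_key_up(key) hooks, and the key names would come from the keys constants):

```python
# Sketch of the movement controls described above. Each arrow key
# contributes +1 or -1 to an axis while held: we add the amount on
# key down and subtract the same amount on key up, so opposite keys
# held together cancel out to zero.
move_dir = {"x": 0, "y": 0}

# Hypothetical key names standing in for Pygame Zero's keys enum.
MOVE_KEYS = {
    "right": ("x", 1),
    "left":  ("x", -1),
    "down":  ("y", 1),
    "up":    ("y", -1),
}

def press(key):
    """Body of on_key_down: add the key's contribution."""
    if key in MOVE_KEYS:
        axis, amount = MOVE_KEYS[key]
        move_dir[axis] += amount

def release(key):
    """Body of on_key_up: undo the contribution added on press."""
    if key in MOVE_KEYS:
        axis, amount = MOVE_KEYS[key]
        move_dir[axis] -= amount
```

In update(), the player's position would then move by move_dir scaled by a move speed, with the result clamped to the play area rect.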

The arena background and tank sprites were created in Piskel. Separate sprites for the tank allow the turret to rotate separately from the tracks.

Turn and fire

Now for the aiming and rotating. When my player aims, I want them to set the direction the bullets will fire, which functions like the movement. The difference this time is that when a player hits an aiming key, I set the direction directly rather than adjusting the values. If my player aims up, and then releases that key, the shooting will stop. Our next challenge is changing this direction into a rotation for the turret.

Actors in Pygame can be rotated in degrees, so I have to find a way of turning a pair of x and y directions into a rotation. To do this, I use the math module’s atan2 function to find the arc tangent of two points. The function returns a result in radians, so it needs to be converted. (You’ll also notice I had to adjust mine by 90 degrees. If you want to avoid having to do this, create a sprite that faces right by default.)
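That conversion might look something like this (my sketch, assuming an up-facing sprite, hence the 90-degree adjustment mentioned above):

```python
import math

def turret_angle(dx, dy):
    """Turn an (x, y) aim direction into a rotation in degrees.

    math.atan2 returns radians, so the result is converted with
    math.degrees. The -90 offset is the adjustment described in the
    text; a sprite that faces right by default would not need it.
    """
    if dx == 0 and dy == 0:
        return None  # not aiming, leave the turret where it is
    return math.degrees(math.atan2(dy, dx)) - 90
```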

To fire bullets, I’m using a flag called ‘shooting’ which, when set to True, causes my turret to turn and fire. My bullets are dictionaries; I could have used a class, but the only thing I need to keep track of is an actor and the bullet’s direction.
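A dictionary-based bullet along those lines might look like this (illustrative only; in the real snippet the actor would be a Pygame Zero Actor rather than a bare position):

```python
# Each bullet is a plain dictionary: an "actor" (here just a position
# list, standing in for a Pygame Zero Actor) plus a fixed direction.
BULLET_SPEED = 8  # pixels per frame, an assumed value

def make_bullet(x, y, dx, dy):
    """Spawn a bullet at (x, y) travelling in direction (dx, dy)."""
    return {"pos": [x, y], "dir": (dx, dy)}

def move_bullet(bullet):
    """Advance the bullet along its stored direction each frame."""
    bullet["pos"][0] += bullet["dir"][0] * BULLET_SPEED
    bullet["pos"][1] += bullet["dir"][1] * BULLET_SPEED
```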

Here’s Mac’s code snippet, which creates a simple twin-stick shooting mechanic in Python. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, go here.

You can look at the update function and see how I’ve implemented a fire rate for the turret as well. You can edit the update function to take a single parameter, dt, which stores the time since the last frame. By adding these up, you can trigger a bullet at precise intervals and then reset the timer.
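The timer idea works out to accumulating dt until an interval has elapsed, for example (a sketch with an assumed interval, not the magazine's exact values):

```python
FIRE_INTERVAL = 0.2  # seconds between shots (assumed value)
fire_state = {"timer": 0.0}

def update_fire(dt, shooting):
    """Accumulate frame time; return True when a shot should fire.

    dt is the seconds since the last frame, as Pygame Zero passes
    to update(dt). The timer only resets when a shot actually fires.
    """
    fire_state["timer"] += dt
    if shooting and fire_state["timer"] >= FIRE_INTERVAL:
        fire_state["timer"] = 0.0
        return True
    return False
```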

This code is just a start – you could add enemies and maybe other player weapons to make a complete shooting experience.

Get your copy of Wireframe issue 38

You can read more features like this one in Wireframe issue 38, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 38 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code Robotron: 2084’s twin-stick action | Wireframe #38 appeared first on Raspberry Pi.

Linkdump: March 2020

Dial Indicator Clock

Simple Wireless Notifier Project

With many families working and schooling from home, letting everyone know when you’re in a video meeting, on a Zoom conference, or doing anything that might require a little quiet time has become a new reality (it certainly is in my home).

I knew I could have spent a little time in Gimp or Photoshop and printed out a sign to simply hang on the door when I was shooting a video. I could have gone really old school and just hung a tie on the doorknob every time I went into a meeting. But both of those would require me to walk all the way up the stairs, and neither would allow me to work on a sweet electronics project. So I dug through my parts drawer to see what I had available, and came up with this notifier.

Rogue video shot by me in my workshop

It’s simple and straightforward. As long as I’m not in a meeting, the LED on the receiving unit remains green, and a small wheel shows Madeline Kahn being very welcoming. When it’s time for me to head into a video meeting I press the red button, and the LED upstairs goes to red, while the graphics wheel spins 180° to show John Candy letting everyone know that they shouldn’t come downstairs.

Everything You Need

Chances are, you may have some or even all of these parts, so pick and choose what you need. Or if you’re like me, get the entire parts list because even if you already have some of them, you can never have too many components in your workshop.

Didn’t bother attaching the blue line from the RGB LED since I’m only using red & green

Both circuits are pretty straightforward. I did do a little soldering to wire up the RGB LED on the receiver, but I made everything else plug and play with jumper wires. I kept the code simple and straightforward, too.

Transmitter Sketch

/*
 * I'm In A Meeting Notifier
 * by Rob Reynolds
 * March 27, 2020
 * I slapped this together during the Covid-19 pandemic as
 * a response to the Shelter In Place rules. I found myself WFH
 * down in my workshop, while my wife was WFH upstairs, and
 * our kids were online schooling at their desks. I put a small
 * transmitter downstairs with me, with a receiver
 * upstairs that would alert everyone, using LEDs and a graphic
 * controlled by a servo, to let them know when I was in an
 * online meeting.
 * Want to help support open source? Consider purchasing these
 * parts from SparkFun.
 * This code is free, released under the beerware license. If you
 * find it useful, and see me (or any SparkFun employees) at the
 * local, you buy us a round.
 */

// We use Software Serial to communicate with the XBee
#include <SoftwareSerial.h>

// For ATmega328P-based boards:
// XBee's DOUT (TX) is connected to pin 2 (Arduino's Software RX)
// XBee's DIN (RX) is connected to pin 3 (Arduino's Software TX)
SoftwareSerial XBee(2, 3); // RX, TX

// Define pinouts for buttons and LEDs
const int greenButtonPin = 6;
const int redButtonPin = 7;

const int greenLED = 8;
const int redLED = 10;

int greenButtonState, redButtonState;  // variables for reading the pushbuttons

void setup()
{
  XBee.begin(9600);  // start talking to the XBee

  pinMode(greenButtonPin, INPUT);
  pinMode(redButtonPin, INPUT);

  pinMode(greenLED, OUTPUT);
  pinMode(redLED, OUTPUT);

  // Start in the "free to visit" state
  digitalWrite(greenLED, HIGH);
}

void loop()
{
  // read the state of the pushbuttons
  greenButtonState = digitalRead(greenButtonPin);
  redButtonState = digitalRead(redButtonPin);

  // Check if a pushbutton is pressed. The buttons are wired
  // active-LOW, so a pressed button reads LOW.
  if (greenButtonState == LOW)
  {
    digitalWrite(greenLED, HIGH);
    digitalWrite(redLED, LOW);
    XBee.print('O');  // tell the receiver I'm available ("open")
  }
  else if (redButtonState == LOW)
  {
    digitalWrite(greenLED, LOW);
    digitalWrite(redLED, HIGH);
    XBee.print('C');  // tell the receiver I'm in a meeting ("closed")
  }

  delay(50);  // simple debounce
}

Receiver Sketch

/*
 * I'm In A Meeting Notifier
 * by Rob Reynolds
 * March 27, 2020
 * I slapped this together during the Covid-19 pandemic as
 * a response to the Shelter In Place rules. I found myself WFH
 * down in my workshop, while my wife was WFH upstairs, and
 * our kids were online schooling at their desks. I had a small
 * transmitter downstairs with me, while there was a receiver
 * upstairs that would alert everyone, using LEDs and a graphic
 * controlled by a servo, to let them know when I was in an
 * online meeting.
 * Want to help support open source? Consider purchasing these
 * parts from SparkFun.
 * This code is free, released under the beerware license. If you
 * find it useful, and see me (or any SparkFun employees) at the
 * local, you buy us a round.
 */

// We use Software Serial to communicate with the XBee
#include <SoftwareSerial.h>

#include <Servo.h>
Servo myservo;  // create servo object to control a servo
// twelve servo objects can be created on most boards
int pos = 0;    // variable to store the servo position

// For ATmega328P-based boards:
// XBee's DOUT (TX) is connected to pin 2 (Arduino's Software RX)
// XBee's DIN (RX) is connected to pin 3 (Arduino's Software TX)
SoftwareSerial XBee(2, 3); // RX, TX

//const int greenButtonPin = 6; // Used only on Transmitter
//const int redButtonPin = 7;   // Used only on Transmitter

const int greenLED = 8;
const int redLED = 10;

char msg;

void setup()
{
  XBee.begin(9600);    // listen for the transmitter's XBee
  Serial.begin(9600);  // also accept commands over USB for testing

  pinMode(greenLED, OUTPUT);
  pinMode(redLED, OUTPUT);

  myservo.attach(9);  // attaches the servo on pin 9 to the servo object

  // Start in the "free to visit" state
  digitalWrite(greenLED, HIGH);
}

void loop()
{
  if (XBee.available())
  {
    msg = XBee.read();
  }
  else if (Serial.available())
  {
    msg = Serial.read();
  }

  if (msg == 'C')  // meeting in progress
  {
    for (pos = 0; pos <= 180; pos += 1)  // goes from 0 degrees to 180 degrees
    {                                    // in steps of 1 degree
      myservo.write(pos);  // tell servo to go to position in variable 'pos'
      delay(20);           // waits 20ms for the servo to reach the position
    }
    digitalWrite(greenLED, LOW);   // green off...
    digitalWrite(redLED, HIGH);    // ...red on: quiet, please
    delay(5000);                   // hold for five seconds
    msg = 0;                       // clear so the servo only sweeps once
  }
  else if (msg == 'O')  // meeting over
  {
    for (pos = 180; pos >= 0; pos -= 1)  // goes from 180 degrees to 0 degrees
    {                                    // in steps of 1 degree
      myservo.write(pos);  // tell servo to go to position in variable 'pos'
      delay(20);           // waits 20ms for the servo to reach the position
    }
    digitalWrite(greenLED, HIGH);  // green on: safe to come downstairs
    digitalWrite(redLED, LOW);     // red off
    delay(5000);                   // hold for five seconds
    msg = 0;                       // clear so the servo only sweeps once
  }
}


Putting It All Together

I didn’t create any kind of housing for the transmitter. It just sits on my workbench, and I plug it in with a simple power supply. For the receiver, which sits upstairs in plain view of people who aren’t me, I put in a little more effort.

I didn’t go crazy, especially with the housing (note the fine, high quality enclosure!), but I did put in a bit of time on the graphics wheel. I knew straight away that I would be using Madeline Kahn and John Candy, but of course I had to find just the right still and font, and decide which style of drop shadow to use and, well, you get the idea. I used a battery pack for the receiver so I could set it up anywhere without proximity to an outlet. Be aware that a battery pack running into the barrel jack won’t give you the power you need, as the barrel jack wants 7-12 volts. However, the 6 volts from four AA batteries will work just fine if you run it directly into Vin on your RedBoard.

Notifier bouncing back and forth
I could have simply used an LED, but John Candy and Madeline Kahn make everything better!

There are, of course, other ways to build a similar unit, depending on what you have in your parts drawer. Perhaps you have a SparkFun ESP32 Thing, or a couple of SparkFun Bluetooth Mates, or maybe even a couple of micro:bits on your workbench. Have you built a project to let others in your home know when you’re busy? We want to see it! Tell us your idea or drop us a link in the comments. In the meantime, stay safe, be kind to each other, and Happy Hacking!


Breadboarding distortion circuits #Audio #Synth

A to Synth breadboards two versions of the distortion circuit, both with input attenuation and output amplification to keep a 10Vpp signal almost untouched.

Pre-distortion amplification

First I did the version with pre-distortion amplification. With distortion CV at 0V the distortion circuit sees around 30mVpp. This is heavily amplified to distort.

At the other end, a second OTA amplifies the signal. The output has unity gain for a 10Vpp input when CV is around 2.5V.

OTA and parallel resistor in feedback loop

Then I tried breadboarding the OTA-in-feedback version. I had lots of trouble and could not get it working as expected at all. After much experimentation I ended up with a well-functioning circuit. I then started documenting the changes, and realised what was going on: I had inadvertently put a 33k resistor in the feedback of the distortion op amp. This works in tandem with the signal fed back through the OTA, so it completely changes the amount fed back.

See the circuits and full treatment on A to Synth.

Plan C: Our Panel To Learn About The Maker Response To Covid-19 In Spain

Join Dale Dougherty, founder of Make: Magazine and Maker Faire, and Karim Asry, co-founder of Espacio Open and co-producer of Maker Faire Bilbao. They’ll be talking with some of the central activators, innovators, and organizers of the open source and maker response across Spain to produce supplies and meet the […]

Read more on MAKE

The post Plan C: Our Panel To Learn About The Maker Response To Covid-19 In Spain appeared first on Make: DIY Projects and Ideas for Makers.

Get Creative In This World Wide Digital Build-off: The Deconstruction

The Deconstruction is back! Watching the fun little promotional video, you may find yourself still wondering what the Deconstruction is. Well, it is pretty simple: The Deconstruction is a creative collaboration event held online – and in real life. Before the event, a topic is released for our participants (you!) […]

Read more on MAKE

The post Get Creative In This World Wide Digital Build-off: The Deconstruction appeared first on Make: DIY Projects and Ideas for Makers.

Autonomous Driving Claims Send BS Meter to 11

As a skeptic covering the automotive industry, anytime I hear a prediction for something happening in the next two or three years, my thoughts turn to the movie This is Spinal Tap and my BS meter goes to 11.

The timescale is beguiling: Near enough to get the kudos now, but sufficiently distant so that everyone has forgotten about the prediction in three years. Some celebrity CEOs have played this game for years.

This might work for a tech company pitching a vision for consumer electronics at CES — where frankly anything goes — but in automotive there is a major problem: A testing and validation process lasting two to three years. Listening to a CEO boast that their widget is going into mass-market series-production vehicles in the next two or three years, my first thought is always: Show me the working prototype now, or stop wasting my time.

CES in Las Vegas is probably the most high-profile consumer electronics show on earth. Which is awesome for your PR, but any CEO should always remember that if they make a prediction in a CES Keynote, they create an unerasable digital footprint of that prediction. Everyone wants a CES Keynote, but if you are going to use it to talk about the future of automotive, don’t ever fall into the two-to-three-year trap.

Here is Nvidia CEO Jensen Huang making his keynote presentation at CES in 2017.

Watch the whole thing if you like — he’s a great showman and it sure makes a refreshing change from a Netflix boxset — but the interesting viewing starts at about an hour in (at 01:01:40) with Huang’s bromance with Audi’s Scott Keogh. Keogh says: Highly automated vehicle[s], in numerous situations, by 2020. This will be in production, Level 4 automation. This is huge. Really huge.

Warning: three-year prediction alert; BS meter at 11. Just do the math: 2020 minus three years equals 2017. If the prediction had been factual at the time, that Level 4 Audi would already have been developed and locked into the test-and-validation stage when those words were spoken. Now fast-forward to the present day and we find that not only has Audi not achieved Level 4 functionality by 2020, it has also abandoned Level 3.

Machine-to-human handover
On the SAE automation taxonomy (J3016), Level 3 is called conditional automation. Activating conditional automation — when the machine driver takes over from the human — is not the issue. The issue is navigating the dreaded machine-to-human handover, when the machine driver requests to hand back control to the human.

Since it is irresponsible for the machine driver to simply signal to the human Here, you drive, it is evident there must be a period of time following the handover request for the human driver to regain proper situational awareness.

Neither Jensen Huang nor Scott Keogh has probably ever heard of Professor Natasha Merat, nor is it likely they could accurately pinpoint the city of Leeds on a map of the UK. But in a research paper* published in 2014, Prof. Merat and her team concluded: It took drivers around 35–40 seconds to stabilize their lateral control of the vehicle.

Academic research thus shows us that the handover period for Level 3 is in the realm of 35-40 seconds. Just consider a 5,000-pound SUV traveling at highway speed for more than 30 seconds waiting for the human to regain situational awareness and we can see why Audi has backed away from the liability associated with automated driving at Level 3. Other responsible automakers, such as BMW, Daimler and Nissan, are highly likely to follow Audi’s lead.
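To put that handover window in concrete terms, here is the back-of-the-envelope arithmetic (an assumed 70 mph highway speed, not a figure from the paper):

```python
MPH_TO_MS = 0.44704  # metres per second in one mile per hour

def handover_distance_m(speed_mph, handover_s):
    """Distance covered during a machine-to-human handover."""
    return speed_mph * MPH_TO_MS * handover_s

# At 70 mph, a 35-second handover covers roughly 1.1 km of highway.
```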

And then there was 2
With the handover problem at Level 3, Level 4 unproven, and Level 5 an unrealistic proposition for many decades, the development effort at the automakers is now almost certainly focused on plain old Level 2.

The target is the implementation of driver monitoring and assistance technology, with the much less ambitious and less headline-grabbing goal of simply making human drivers into safer drivers. With more than 1.3 million deaths occurring globally on our roads and highways each year, a lot of safety advances can be made just by mandating this technology on all vehicles with four or more wheels.

As car and truck sales collapse in global lockdown, perhaps Covid-19 just killed the race to autonomy and with it focused the attention of automakers, regulators and advisory bodies squarely onto the life-saving potential of driver monitoring and assistance technology.

Here is a novel thought: Has anyone considered how Covid-19 might be the catalyst for changes which save lives?

*Merat, N. et al., 2014. Transition to manual: driver behaviour when resuming control from a highly automated vehicle.

— Colin Barnden is principal analyst at Semicast Research.

The post Autonomous Driving Claims Send BS Meter to 11 appeared first on EETimes.

The Dangers of COVID-19 Surveillance Proposals to the Future of Protest

Many of the new surveillance powers now sought by the government to address the COVID-19 crisis would harm our First Amendment rights for years to come. People will be chilled and deterred from speaking out, protesting in public places, and associating with like-minded advocates if they fear scrutiny from cameras, drones, face recognition, thermal imaging, and location trackers. It is all too easy for governments to redeploy the infrastructure of surveillance from pandemic containment to political spying. It won’t be easy to get the government to suspend its newly acquired tech and surveillance powers.

When this wave of the public health emergency is over and it becomes safe for most people to leave their homes, they may find a world with even more political debate than when they left it. A likely global recession, a new election season, and re-energized social movements will provide an overwhelming incentive for record numbers of people to speak out, to demonstrate in public places, and to demand concessions of their governments. The pent-up urge to take to the streets may bring mass protests like we have not seen in years. And what impact would new surveillance tools, adopted in the name of public health, have on this new era of marches, demonstrations, and strikes?

The collection and sharing of phone location data that was sold and deployed in order to trace the spread of the virus could be used by a reigning administration to crack down on dissent. The government and vendors have yet to make a convincing argument for how this measure would contribute to the public health effort. Indeed, they cannot, because GPS data and cell site location information are not sufficiently granular to show whether two people were close enough together to transmit the virus (six feet). But this data is sufficiently precise to show whether a person attended a protest in a park, picketed in front of a factory, or traveled at night to the block where a dissident lives.

Many other technologies that should never be deployed to prevent the spread of the virus would also harm free speech. Vendors are seeking to sell face recognition cameras to the government to alert authorities if someone in mandatory quarantine went grocery shopping. They could just as easily be used to identify picketers opposing government initiatives or journalists meeting with confidential sources. For example, the infamous face surveillance company Clearview AI is in talks with the government to create a system that would use face recognition in public places to identify unknown people who may have been infected by a known carrier. This proposal would create a massive surveillance infrastructure, linked to billions of social media images, that could allow the government to readily identify people in public spaces, including protesters, by scanning footage of them against images found online. Likewise, thermal imaging cameras in public places will not be an effective means of finding people with a fever, given the high error rate when calculating a person’s temperature at a distance. But police might be able to use such cameras to find protesters who have fled on foot from police engaged in excessive force against peaceful gatherings.

The U.S. government is not known for its inclination to give back surveillance powers seized during extraordinary moments. Once used in acute circumstances, a tool stays in the toolbox until it is taken away.  The government did not relinquish the power to tear gas protesters after the National Guard was called in to break up the Bonus Marchers assembled in the capitol during the Great Depression. Only after decades of clandestine use did the American people learn about the ways the FBI misused the threat of Communism to justify the wholesale harassment, surveillance, and sabotage of civil rights leaders and anti-war protesters. The revelation of these activities resulted in Sen. Frank Church’s investigations into U.S. surveillance in the mid-1970s, the type of forceful oversight of intelligence agencies we need more of today. And the massive surveillance apparatus created by the PATRIOT Act after 9/11 remains mostly intact and operational even after revelations of its overreach, law-breaking, and large-scale data collection on U.S. persons.

Even more proportionate technologies could be converted to less benign purposes than COVID-19 containment. Bluetooth-based proximity tracking apps are being used to trace the distance between two people’s phones in an attempt to follow potential transmission of the virus. Done with privacy as a priority, these apps may be able to conceal the identities of people who come into contact with each other. Done wrong, these apps could be used to crack down on political expression. If police know that Alice was at a protest planning meeting, and police learn from the proximity app that Alice was near Bob that day, then police could infer that Bob was also at the meeting. Some versions of these apps also collect identifiers or geolocations, which could further be used to identify and track participants in protest planning meetings.
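The inference described above is trivial to automate. A deliberately simplified sketch (not any real app's data model) shows how suspected attendance spreads through a proximity log:

```python
# Hypothetical proximity log: pairs of identifiers whose phones were
# near each other on the day of the meeting.
proximity_log = [("alice", "bob"), ("bob", "carol"), ("dave", "erin")]
known_attendees = {"alice"}  # e.g. Alice was seen at the meeting

def infer_attendees(log, known):
    """Propagate suspected attendance through the contact graph."""
    inferred = set(known)
    changed = True
    while changed:
        changed = False
        for a, b in log:
            if a in inferred and b not in inferred:
                inferred.add(b)
                changed = True
            elif b in inferred and a not in inferred:
                inferred.add(a)
                changed = True
    return inferred
```

Two hops of proximity are enough to sweep Bob, and then Carol, into the inferred set, which is exactly the chilling effect the paragraph describes.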

Done without collecting identifying information and minimizing storage, measures like aggregate geolocation tracking might assist public health response and be difficult to weaponize against protestors. But done with deliberate intention to survey demonstrations, aggregate location data might be disaggregated, merged with other data, and used to identify individual people. For example, police could single out individual protestors in a public plaza, track them to their respective homes and workplaces once the demonstration is over, and thereby identify them.

Free speech and political participation are chilled when governments put protests, protestors, activists, and organizers under surveillance. Studies have found that when people are aware of surveillance, they’re less likely to engage in political speech or debate the important issues of the day. The First Amendment also protects the right of association for purposes of collective expression. This right is threatened if people are worried that they will be put under surveillance for joining or meeting with specific people or groups. Suddenly a person’s movements, correspondence, or personal relationships are scrutinized by strangers within the government. At a moment when our society is desperate to find innovative solutions to daunting political problems, we should loudly condemn any surveillance efforts which might chill our ability to freely discuss and associate about pressing issues.

EFF has clear guidelines for how we evaluate a piece of surveillance technology proposed as a tool of public health: Would it work? Is it too invasive? Are there sufficient safeguards? One of the biggest concerns is that new powers introduced at this moment will long outstay their necessity, experience mission creep, and be redeployed for other purposes. Now, more than ever, we must stay vigilant about any new surveillance powers, technologies, and public-private relationships.