ESP32: a new standard of speed and connectivity

Developed by Espressif, the ESP32 brings a new standard to low-cost WiFi boards. After porting one project to this new board I must say that it's very similar to the ESP8266, and most code, except for some updated libraries, should work with minimal effort.

“ESP32 is a series of low-cost, low-power system on a chip microcontrollers with integrated Wi-Fi and dual-mode Bluetooth. The ESP32 series employs a Tensilica Xtensa LX6 microprocessor in both dual-core and single-core variations and includes in-built antenna switches, RF balun, power amplifier, low-noise receive amplifier, filters, and power-management modules.” – Wikipedia description

What is still pending on my side is to compare power consumption and other details, such as WiFi range (with and without antenna), between the ESP8266 and this new ESP32. But overall I'm really excited to build things on top of this.

Combined with @tablatronix's amazing WiFi Manager library, it gives creators the possibility to have an independent IoT device with custom configuration, autoconnect and full info about the device.

It's still very much in development, so there are many things that are worse at the beginning (the SPI Flash File System, for example, is a hell of a lot slower and at the moment only usable for small configuration files for me). But that's normal: things will get fixed over time, and hey, it's open source, so let's give these guys a hand and report new findings so they can fix them as soon as possible.

Yesterday I found the time to refactor my digital camera project to use the ESP32 (Makerfocus Heltec board). If you are interested in seeing the code updates, please check my commits in the board/esp32-oled branch.


Since I'm still running my tests I won't expand too much on this one. Some pointers and links for further research below.

Arduino Core
https://github.com/espressif/arduino-esp32

SPIFFS (ESP8266 code should be refactored to reflect library update)
https://github.com/espressif/arduino-esp32/blob/master/libraries/SPIFFS/examples/SPIFFS_Test/SPIFFS_Test.ino

PINS
https://github.com/makeitlabs/ratt/wiki/ESP32-Pin-Mapping

Using the OLED built into the Makerfocus ESP32

The ESP32 directory, when installed using Arduino on Ubuntu, is:
/home/martin/.arduino15/packages/esp32 -> I find it weird that Arduino keeps libraries in 3 different folders, making it very confusing to browse library source code.


Uploading your pictures directly to the cloud

Yesterday I found some time to make an example of pushing an image to the cloud using Seafile storage. The idea is that instead of making a regular file upload that needs some kind of backend gallery to preview the pictures, we can take a different approach and push the picture directly to your cloud storage.

WiFi Camera (C++) > API Endpoint (PHP) > Seafile cloud (Python/C)

So the first thing I tried was to browse the Seafile Web API manual and reproduce the curl commands on the command line. Then I found on Packagist this great library that wraps all the commands so you can implement them easily in your script.

For sure it would also be possible to do this without the PHP middleware, directly in C++ on the Espressif SoC, but I would like to have proper error handling and also to save a copy of the image in case the cloud push fails for any reason. It can also be a fairly complicated task, considering there are at least 3 API calls using the bearer auth token.
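For reference, the chain of calls the middleware makes can be sketched like this. This is a hedged sketch, not the actual SeaUP implementation: the HTTP helpers are hypothetical stubs, and the endpoint path follows the Seafile Web API manual (ask the repository for an upload link, then POST the file to it, both authorized with the token obtained earlier):

```cpp
#include <string>

// Stand-in for a real HTTP client; a real call would send the
// header "Authorization: Token <token>".
std::string httpGetWithToken(const std::string& url, const std::string& token) {
    return "\"https://yourseafileserver.com/seafhttp/upload-api/abc123\"";
}

// Stand-in for a multipart/form-data POST with fields "file" and "parent_dir".
std::string httpPostMultipart(const std::string& uploadUrl, const std::string& token,
                              const std::string& filePath, const std::string& parentDir) {
    return "uploaded-file-id";
}

std::string uploadPicture(const std::string& host, const std::string& token,
                          const std::string& repoId, const std::string& filePath) {
    // 1) Ask the repository for a one-time upload link
    std::string link = httpGetWithToken(host + "/api2/repos/" + repoId + "/upload-link/", token);
    link = link.substr(1, link.size() - 2); // the API returns the URL wrapped in quotes
    // 2) POST the picture to that link
    return httpPostMultipart(link, token, filePath, "/");
}
```

Counting the initial auth-token request, that makes the three API calls mentioned above.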

SeaUP

Short for Seafile upload, this is a code example that makes a PHP file upload using the Seafile API and a simple JSON configuration.

Repository lives here: https://github.com/martinberlin/seaup

Configuration is easy:

1 – You need to obtain a token:

curl -d "username=username@example.com&password=123456" https://yourseafileserver.com/api2/auth-token

{"token": "14fd3c026886e3121b2ca630805ed425c272Xxxx"}

2 – Edit sea-config.json and add the token along with your Seafile server and repository ID settings:

{
  "seafile_host": "https://storage.luckycloud.de",
  "token": "14fd3c026886e3121b2ca630805ed425c272Xxxx",
  "repository_id": "Is on the last part of Url when opening a library: #my-libs/lib/REPOSITORY_ID"
}

3 – That's it! You are ready to open the test provided in the GitHub repository and see if the image appears in the cloud.

FS2 camera from Blender to production

In this entry I wanted to document the process of making one of these cameras, from the 3D model to the end product that you turn on and that connects to WiFi, ready to take pictures.

After removing the support material and sanding the round columns, the first thing is to connect the front and back case together and see that they fit correctly. Usually they do, but PET is a tricky plastic to print and the surface finish is a bit rougher than with PLA, so it requires some post-production work. As an advantage, this plastic is stronger than PLA and will withstand a crash much better, since it's more elastic and resistant. I would say the best finish-and-strength balance would be to print this in ABS, but I dislike the fumes and the fact that it is also very difficult to print at home.

When this step is ready, it's time to heat up the soldering iron and prepare the ON/off switch and the shutter button. Then there are 8 more cables that go from the ArduCam (2 or 5 megapixel version) to the Wemos D1, which is responsible for uploading the picture to the cloud. This is a prototype for myself, so it looks a bit messy, but it shows how it is at this stage:

3 pairs of cables, from left to right: battery, ON/off and shutter button

Then comes the reality-shock moment: connecting the Wemos ESP8266 through USB to the computer and uploading the program that does the magic of receiving the JPEG image from the ArduCam and uploading it to a PHP API endpoint. Usually at this point there is something that needs to be corrected; either nothing works or all is fine and dandy. I open the mobile hotspot and turn on the camera. See if it connects, try to take a picture, preview it on the PHP-gallery. Test timelapse mode, see that it works, and that's pretty much it. A new camera is ready to be delivered.

FS2 WiFi camera

Connecting to SSL from Espressif 8266 SoC

There are 2 ways that I know of to validate an SSL certificate:

1 – Generate a root certificate in DER format (somewhat tricky)

2 – Copy the SHA1 hash from the browser certificate details (easier)

… and as we are mostly lazy developers we will go for number 2.

DER format into hex

In this example we will generate the code to validate https://api.github.com

In the address bar, to the left of the URL, click on the circled ‘i’ icon for more information.

Click on the ‘>’ icon.

Click on “More Information” button at the bottom of the window.

In the new window titled Page Info – https://api.github.com, click on the “View Certificate” button. In the “Certificate Viewer” window, click on the “Details” tab.

In the “Certificate Hierarchy” top window pane click on “DigiCert High Assurance EV Root CA” so it is highlighted.

Click on the “Export…” button at the bottom of the page.

At the bottom of the page select “X.509 Certificate (DER)” format then click on Save.

Use your favorite program to convert the binary DER format to ASCII. Here is what I do in Ubuntu:

$ xxd -i DigiCertHighAssuranceEVRootCA.crt.der >cacert.h

Edit cacert.h to add PROGMEM and const keywords like in this ESP8266 example

Using a SHA1 hash

Well, this one is the easiest and the one I use. Repeating the first steps of the previous option, the first window showing the certificate information has the SHA1 hash at the bottom.

So the trick is to simply copy it; for api.github.com it is:
5F:F1:60:31:09:04:3E:F2:90:D2:B0:8A:50:38:04:E8:37:9F:BC:76

Open it in any text editor and replace the ":" characters with spaces, then paste it into the fingerprint variable declaration:

const char* fingerprint = "5F F1 60 31 09 04 3E F2 90 D2 B0 8A 50 38 04 E8 37 9F BC 76";
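If you don't want to do the replacement by hand, the colon-to-space conversion is a one-liner. A small sketch in plain C++ (the function name is mine):

```cpp
#include <algorithm>
#include <string>

// Turn the colon-separated SHA1 shown by the browser into the
// space-separated form used in the fingerprint variable.
std::string toFingerprint(std::string sha1) {
    std::replace(sha1.begin(), sha1.end(), ':', ' ');
    return sha1;
}
```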

Here you can see an example that is ready to compile and test

And since I'm really too lazy to delete this Gutenberg demo of the new WordPress editor, I will leave the rest here. In summary, in this new editor pressing (+) you can add anything you want in the world. And it would be even cooler if they added a magical (-) button so you can remove content with the same ease as adding it.

The rest comes from the new editor's bells & whistles and is not of my writing.

Imagine everything that WordPress can do is available to you quickly and in the same place on the interface. No need to figure out HTML tags, classes, or remember complicated shortcode syntax. That’s the spirit behind the inserter—the (+) button you’ll see around the editor—which allows you to browse all available content blocks and add them into your post. Plugins and themes are able to register their own, opening up all sort of possibilities for rich editing and publishing.

Go give it a try, you may discover things WordPress can already add into your posts that you didn’t know about. Here’s a short list of what you can currently find there:

  • Text & Headings
  • Images & Videos
  • Galleries
  • Embeds, like YouTube, Tweets, or other WordPress posts.
  • Layout blocks, like Buttons, Hero Images, Separators, etc.
  • And Lists like this one of course :)

Visual Editing

A huge benefit of blocks is that you can edit them in place and manipulate your content directly. Instead of having fields for editing things like the source of a quote, or the text of a button, you can directly change the content. Try editing the following quote:

The editor will endeavor to create a new page and post building experience that makes writing rich posts effortless, and has “blocks” to make it easy what today might take shortcodes, custom HTML, or “mystery meat” embed discovery.

Matt Mullenweg, 2017

The information corresponding to the source of the quote is a separate text field, similar to captions under images, so the structure of the quote is protected even if you select, modify, or remove the source. It’s always easy to add it back.

Blocks can be anything you need. For instance, you may want to add a subdued quote as part of the composition of your text, or you may prefer to display a giant stylized one. All of these options are available in the inserter.

You can change the amount of columns in your galleries by dragging a slider in the block inspector in the sidebar.

Media Rich

If you combine the new wide and full-wide alignments with galleries, you can create a very media rich layout, very quickly:

Accessibility is important — don’t forget image alt attribute

Sure, the full-wide image can be pretty big. But sometimes the image is worth it.

The above is a gallery with just two images. It’s an easier way to create visually appealing layouts, without having to deal with floats. You can also easily convert the gallery back to individual images again, by using the block switcher.

Any block can opt into these alignments. The embed block has them also, and is responsive out of the box:

You can build any block you like, static or dynamic, decorative or plain. Here’s a pullquote block:

Code is Poetry

The WordPress community

If you want to learn more about how to build additional blocks, or if you are interested in helping with the project, head over to the GitHub repository.


Thanks for testing Gutenberg!

👋

Digital Camera FS2

Days after publishing this post about the ArduCam and ESP8266 I got some good feedback and 2 friends asked me for a camera. At the same time my new 3D printer, a Prusa MK3, arrived, so I decided to remake the case and release a new small low-resolution, instant-WiFi-upload camera. What I'm trying to achieve here is a digital Polaroid: press the shutter button and the JPEG is uploaded to a digital gallery within the next 4 seconds. So it's a pure WiFi camera, without a memory card, and you need to be online to use it.
And that's very easy nowadays, right? You just need to create a mobile hotspot on your phone if you are away from home. And if the camera does not detect a known WiFi, it creates an access point called:
CAM-autoconnect

Then you have to connect to it with the phone and browse to 192.168.4.1, where a "WiFi manager" will greet you so you can select a WiFi network and enter the credentials to make a connection. After that you are all set: just enable the hotspot and the camera will reset and connect to it automatically.

It can take a picture with the shutter button, or do a video stream or photo shoot via the web UI (cam.local).

 

3D Renderings made with Blender

I decided to make a small release of 5 FS2 digital WiFi instant upload cameras at the price of 70 € each.

Materials and costs if anyone is interested to make one are the following:

  • 1S 3.7V Li-Polymer  (Got one in eBay)
    Battery size: 6 x 41 x 68 mm, 2000 mAh
    8.50 €
  • Charger: Adafruit Micro Lipo w/MicroUSB Jack  (eBay Not in the picture since this is just my personal prototype)
    8 €
  • Arducam Multi Camera Adapter Board 2MP (eBay)
    25 €
  • Wemos D1 mini
    6 € but can be purchased for less
  • Various connectors and cables (8 Pin SPI white)
    2.5 €

That gives a total of 50 € in hardware costs; for printing the case and testing that everything works together I'm adding an additional 20 €. And you get 1 year of API use for free. Then you can move it to your own server, or you will get part of my Amazon AWS invoice ;)

Fully charged, the battery should keep it online for about 18 hours. It has an On/Off switch that is not yet in the picture.

The photos, now that all the cables are soldered, are better than before, although I liked some of the strange effects when the cables were loose. The idea is to make a camera that you shoot blindly, without looking at the frame. That gives interesting results, for me, and I had a lot of fun with it. Combine that with the picture being online in about 4 seconds and it's really cool. If you are interested in getting one, just contact me through this website. Shipping costs are not included.

Picture previews:

1280x960

Photo shooter has 3 options:

  1. ONE CLICK shoots only one picture
  2. LONG CLICK enters time-lapse mode (a picture every 5 min. by default, configurable on request; minimum time should be 5 seconds though)
  3. DOUBLE CLICK disables time-lapse mode
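The three gestures can be classified from press/release timestamps. Below is a hypothetical sketch of that logic, not the camera's actual firmware; the thresholds and names are my assumptions:

```cpp
#include <cstdint>
#include <string>

// Assumed thresholds, not the values used in the real firmware.
const uint32_t LONG_CLICK_MS   = 1000; // held longer -> enter time-lapse mode
const uint32_t DOUBLE_CLICK_MS = 400;  // second press within -> disable time-lapse

// Classify one shutter-button gesture from timestamps in milliseconds.
std::string classifyClick(uint32_t pressMs, uint32_t releaseMs,
                          uint32_t previousReleaseMs) {
    if (releaseMs - pressMs >= LONG_CLICK_MS)
        return "long";   // enter time-lapse mode
    if (pressMs - previousReleaseMs <= DOUBLE_CLICK_MS)
        return "double"; // disable time-lapse mode
    return "single";     // take one picture
}
```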

The picture is uploaded to my API, but the code, called PHP-gallery, is in a public repository on GitHub, so it can be hosted anywhere that supports PHP 5+ and ImageMagick (for thumbnail generation).

ArduCAM plus Espressif equals WiFiCAM

I've been having some fun taking pictures last weekend with a self-built camera. The camera costs about 15 € on eBay, and adding the 6 € of the ESP8266 Wemos D1 it makes a total hardware cost of 21 euros. Let's say 35 if you add a Li-ion battery and a USB charger to it.

The most basic layout looks like this.

Using here the 2 megapixel ArduCam; there is also a 5 megapixel version. The pin definition of the camera is like any other SPI device, plus the two I2C lines used for configuration:

Wiring with Wemos D1
D0 CS
D7 Mosi
D6 Miso
D5 Clk
D2 Sda
D1 Scl

ArduCam put together an instructions kit where you can find the previous table plus a demo example here.

I went a bit further and made a simple PHP Gallery plus upload-receiver hook and packed it together in a git repository here:

Php bootstrap4 image gallery

I got some criticism saying that this is of course already invented and that I'm not creating anything new here. But of course the idea is not to make a professional WiFi camera; if I wanted that, I would simply buy one! The idea was to build one from scratch.

And the beauty of it is that it does not take perfect pictures. Precisely because it's not soldered yet, it sometimes produces interesting noise and imperfections in the pictures. The idea is to take this and make of it whatever you want. For example, a security cam that takes a series of pictures, or even records a stream when it detects movement. Or get the more advanced 5 megapixel camera and pack it in a 3D-printed case, adding a shutter button and an e-ink display to preview the picture and change settings.

This is an example of a picture with noise:

And this is another one, with the right amount of light and the cables properly connected:

For more pictures and to see how the Php gallery script works check this link.

Postman self documenting tests and runner import

I've spent the last days doing extensive API testing and needed to find an easy way to document my tests. Postman already offers something very useful that uploads your test documentation to their website. But sometimes we need just a simple HTML file that can be privately delivered to the client.

That’s where this project was born:

https://github.com/martinberlin/postman-reporter

The intention of this simple PHP script is to generate a standalone HTML page for your Postman tests that you can send to the client without the need to upload all the tests to the open internet.

It serves to achieve two things:

  1. Make a standalone HTML document from your Postman Collections
  2. Import the test run-results into a mysql database

With the second one, only the importing is done; it's then up to you how to present this data. It populates two tables, resume and detail, the first one with the performance result and the detailed one with a line per test. Much more information can be extracted from the runner JSON; this is just a kickstart idea. Have fun with it!

If it catches your interest, please read more details in the GitHub repository.

3D Printer upgrade

Last year, when I started 3D printing, I bought my first printer just to see if I would find my way into it.

Anycubic Delta "Kossel Plus"

Since then I found out that I could still remember my university days studying graphic design, and that I can actually design pretty well. But from that to product design there are a million light-years. Anyway, I like Blender a lot as 3D modeling software and I'm starting to get quite advanced doing my own models.

That, combined with my passion for soldering electronic stuff and creating new devices, has found its way. So it's time for a more professional upgrade.

Last month I purchased a new Prusa MK3.

Photo by the real Tiggy Flickr: https://www.flickr.com/photos/mjtmail/

I'll be really happy to post my results with the new machine and to share with all of you the experience of working and creating new stuff with it.

Reading an image bitmap file from the web using ESP8266 and C++

There are a couple of different ways to do it, but I wanted to work from a simple image example, to understand a bit better how to read a stream from the web, get down to the pixel information, and send it to a display. As a reference I started with the GxEPD library:

There are a couple of basic things to understand when dealing with streams of information. One is that the information comes in chunks, especially with a large file, so buffering whatever comes in is essential if you want to read from it. The first examples I did without buffering just filled the 7.5" e-ink display with separated lines that only resembled part of the web screenshot I was sending it.

So how does an image you request from the web arrive, then?

First of all like any other web content there is a request made to an endpoint to whatever script or API that delivers the image. This part of the code is well reflected here:

String request;
request  = "GET " + image + " HTTP/1.1\r\n";
request += "Accept: */*\r\n";
request += "Host: " + host + "\r\n";
request += "Connection: close\r\n";
request += "\r\n";
Serial.println(request);

if (!client.connect(host, 80)) {
  Serial.println("connection failed");
  client.stop();
  return;
}
client.print(request); // send the HTTP request to the server
client.flush();

In this case it is a GET request. Then, for a Windows BMP image, one of the easiest formats to read, the first thing is to check the starting bytes: a .bmp file begins with the two bytes 'B' 'M' (0x42 0x4D), which read as a little-endian 16-bit value is 0x4D42.

But before that: when you send a request and the server replies with a response, the headers come first. For example, it looks something like this:

HTTP/1.1 200 OK
Host:display.local
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0)
Accept: text/html

(And some more that I will spare you here, ending with an empty line terminated by "\r\n", carriage return plus line feed.)

Then after this the image should start. So there are two choices:

1 – Loop over the first lines, discarding the headers, and then attempt to read the 2 starting bytes of the image

2 – Read from the start, headers included, and scan 2 bytes at a time until we find 0x4D42, which marks the start of the image
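Option 1 can be sketched off-device in plain C++ like this, with std::istream standing in for the WiFiClient (the function name is mine):

```cpp
#include <sstream>
#include <string>

// Consume header lines from the stream until the empty line that ends
// the HTTP headers; whatever follows is the image body.
void skipHttpHeaders(std::istream& in) {
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty() && line.back() == '\r')
            line.pop_back(); // drop the carriage return
        if (line.empty())
            break; // blank line: headers are done
    }
}
```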

Between the two I prefer the first, since it looks cleaner. If we were to take the second one, for this image it would look like this (note it is a 4-bit BMP):

5448 5054 312F 312E 3220 3030 4F20 D4B 440A 7461 3A65 5720 6465 202C 3130 4120
6775 3220 3130 2038 3131 343A 3A38 3334 4720 544D A0D 6553 7672 7265 203A 7041
6361 6568 322F 342E 312E 2036 4128 616D 6F7A 296E 4F20 6570 536E 4C53 312F 302E
312E 2D65 6966 7370 5020 5048 372F 302E 332E D30 580A 502D 776F 7265 6465 422D
3A79 5020 5048 372F 302E 332E D30 430A 6E6F 656E 7463 6F69 3A6E 6320 6F6C 6573
A0D 7254 6E61 6673 7265 452D 636E 646F 6E69 3A67 6320 7568 6B6E 6465 A0D 6F43
746E 6E65 2D74 7954 6570 203A 6D69 6761 2F65 6D62 D70 D0A 310A 3065 3637 A0D
4D42 ->BMP starts here. File size: 122998
Image Offset: 118
Header size: 40
Width * Height: 640 x 384 / Bit Depth: 4
Planes: 1
Format: 0
Bytes read:122912
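Option 2, scanning for the signature, can be sketched like this, with a vector standing in for the byte stream. Note that it advances one byte at a time, which also catches a signature starting at an odd offset (scanning two bytes at a time would miss those):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Scan a byte stream for the BMP signature 'B''M' (little-endian 0x4D42)
// and return its offset, or -1 if it is not found.
long findBmpStart(const std::vector<uint8_t>& stream) {
    for (size_t i = 0; i + 1 < stream.size(); i++) {
        if (stream[i] == 0x42 && stream[i + 1] == 0x4D)
            return static_cast<long>(i);
    }
    return -1;
}
```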

Then, as we can see in this example, right after the signature come the image headers themselves, which are read with this part of the code:

// BMP signature
if (bmp == 0x4D42)
{
    uint32_t fileSize = read32();
    uint32_t creatorBytes = read32();
    uint32_t imageOffset = read32(); // Start of image data
    uint32_t headerSize = read32();
    uint32_t width  = read32();
    uint32_t height = read32();
    uint16_t planes = read16();
    uint16_t depth = read16(); // bits per pixel
    uint32_t format = read32();
}
uint16_t read16()
{
  // Reads 2 bytes and returns them
  uint16_t result;
  ((uint8_t *)&result)[0] = client.read(); // LSB
  ((uint8_t *)&result)[1] = client.read(); // MSB
  return result;
}

uint32_t read32()
{
  // Reads 4 bytes and returns them
  uint32_t result;
  ((uint8_t *)&result)[0] = client.read(); // LSB
  ((uint8_t *)&result)[1] = client.read();
  ((uint8_t *)&result)[2] = client.read();
  ((uint8_t *)&result)[3] = client.read(); // MSB
  return result;
}

In there comes a very important piece of information without which it is impossible (or I just couldn't find out how) to read the pixels, and that's Image Offset: 118, which means that the image data starts at byte 118. Also Depth, which tells how many bits represent one single pixel. So with 1 bit we can store a black-and-white image, and if we want full RGB then we need 24 bits per pixel, one byte for each color (red, green and blue).
Our dear Wikipedia says about this:

For an uncompressed, packed within rows, bitmap, such as is stored in Microsoft BMP file format, a lower bound on storage size for a n-bit-per-pixel (2n colors) bitmap, in bytes, can be calculated as:

size = width • height • n/8, where height and width are given in pixels.
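Checked against the example above, the formula works out. Here is a quick plain-C++ sketch of it, which also adds the real BMP row size: rows are padded to a multiple of 4 bytes, so the Wikipedia formula is a lower bound that ignores padding (function names are mine):

```cpp
#include <cstdint>

// Lower bound from the formula: width * height * n / 8 bytes.
uint32_t lowerBoundBytes(uint32_t width, uint32_t height, uint32_t bitsPerPixel) {
    return width * height * bitsPerPixel / 8;
}

// Actual BMP row size: each row is padded up to a multiple of 4 bytes.
uint32_t bmpRowBytes(uint32_t width, uint32_t bitsPerPixel) {
    return ((bitsPerPixel * width + 31) / 32) * 4;
}
```

For the 640 x 384, 4-bit image above, the lower bound is 640 · 384 · 4/8 = 122880 bytes, which is exactly the reported file size of 122998 minus the image offset of 118; a 640-pixel 4-bit row is 320 bytes, so here padding adds nothing.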

So there we have the Image Offset: 118; but to read these headers we already consumed 32 bytes from the client. Then we need to take the difference and start reading the image:

// Attempt to move pointer where image starts
client.readBytes(buffer, imageOffset-bytesRead);

That should be it: then we need to read every row up to the reported width, in our example 640, inside a height loop of 384 pixels, and read each pixel taking the pixel depth into account. In the code example this looks a bit rough around the edges:

    if ((planes == 1) && (format == 0 || format == 3)) { // uncompressed is handled
      // Attempt to move pointer where image starts
      client.readBytes(buffer, imageOffset-bytesRead);
      size_t buffidx = sizeof(buffer); // force buffer load

      for (uint16_t row = 0; row < height; row++) // for each line
      {
        uint8_t bits;
        for (uint16_t col = 0; col < width; col++) // for each pixel
        {
          if (buffidx >= sizeof(buffer)) // buffer consumed: refill it
          {
            client.readBytes(buffer, sizeof(buffer));
            buffidx = 0; // Set index to beginning
          }
          switch (depth)
          {
            case 1: // one bit per pixel b/w format
              {
                if (0 == col % 8) // a new byte holds the next 8 pixels
                {
                  bits = buffer[buffidx++];
                  bytesRead++;
                }
                uint16_t bw_color = bits & 0x80 ? GxEPD_BLACK : GxEPD_WHITE;
                display.drawPixel(col, displayHeight-row, bw_color);
                bits <<= 1;
              }
              break;

            case 4: // four bits per pixel, two pixels per byte (was a hard one to get here)
              {
                if (0 == col % 2) // a new byte holds the next 2 pixels
                {
                  bits = buffer[buffidx++];
                  bytesRead++;
                }
                // high nibble first: the top bit of the current nibble decides b/w
                uint16_t bw_color = bits & 0x80 ? GxEPD_BLACK : GxEPD_WHITE;
                display.drawPixel(col, displayHeight-row, bw_color);
                bits <<= 4;
              }
              break;
          }
        } // end pixel
      } // end line
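Extracted from the pixel loop, the 4-bit nibble unpacking can be tested in isolation off-device. A sketch (whether a high nibble really means black depends on the palette; this just mirrors the rough mapping used in the loop, and the function name is mine):

```cpp
#include <cstdint>
#include <vector>

// Unpack one row of 4-bit pixels (two per byte, high nibble first) into a
// rough black/white decision: nibble >= 8 is treated as black.
std::vector<bool> unpack4bitRow(const std::vector<uint8_t>& rowBytes, uint16_t width) {
    std::vector<bool> isBlack;
    for (uint16_t col = 0; col < width; col++) {
        uint8_t byte = rowBytes[col / 2];
        uint8_t nibble = (col % 2 == 0) ? (byte >> 4) : (byte & 0x0F);
        isBlack.push_back(nibble >= 8);
    }
    return isBlack;
}
```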

And I still have an issue whose cause I haven't found. This code works well and I can see 4-bit and 24-bit images, but it hangs on 1-bit images.
It's something about the headers: using approach 1 described before, discarding the headers, the 1-bit image works, but then not the other depths (4/24).
Maybe it's some basic thing about how the byte stream arrives that I'm not getting, or I'm simply missing something stupid enough not to get around it.
There are other, better examples in ZinggJM's repositories that deal much better with buffering and other aspects, where the BMP reading truly works. But sometimes I like to understand the stuff and fight with it before implementing something, since that's the only way to learn how things work.
Have you ever thought about how far we've come, that every OS and every browser has the resources to read almost any existing image or video format? How many megabytes of software is that? ;)
That's what I love about coding simple examples in C++ on the Espressif chips: you need to go deep; there is no such thing as a ready-made json_decode or do-whatever library as in PHP. You need to read it from the bits. But the cool thing is that if you get around it, you have a grasp of what needs to be done to read, in this case, a very simple bitmap image format. I cannot imagine how to read a compressed JPG or PNG; for that, yes, I will put my head down and use a library.
UPDATE: after about a 4-hour fight I found out the reason: I was reading the bytes in chunks of 2. Reading them one by one, and including lastByte in the comparison, makes it work for both 1- and 4-bit images. I can post the solution here if someone is interested, but if not, I will keep it as is to avoid making this a boring long read.