Wednesday, January 25, 2012


The nRFCam is an open-source 2.4GHz wireless camera built using the TCM8230MD camera, an ARM Cortex-M3 LPC176x MCU, and a Nordic nRF24L01+ chip for the radio link.

On reset, the Nordic chip is initialized, a PWM channel is configured to generate the clock signal for EXTCLK, and fast GPIO is used for the data bus and the other sync signals. Once the camera is configured over the I2C interface, it starts sending frames over the 8-bit-wide data bus as usual, synchronized by the HSYNC, VSYNC and DCLK signals (for more details on this see the TCM8230MD-Breakout post).
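As a rough sketch of the clock setup, the PWM output frequency is the peripheral clock divided by the match (period) value. The helper below is a hypothetical illustration, not code from the project; the 100MHz peripheral clock and the function name are assumptions:

```c
#include <stdint.h>

/* Hypothetical helper: pick an integer divider so that
 * PCLK / divider approximates the desired EXTCLK frequency.
 * On the LPC176x, a PWM match register would be loaded with a
 * value derived from this divider (peripheral details omitted). */
uint32_t extclk_divider(uint32_t pclk_hz, uint32_t target_hz)
{
    uint32_t div = pclk_hz / target_hz;   /* integer divider */
    return div ? div : 1;                 /* never divide by zero */
}

/* With an assumed 100MHz PCLK and a 6MHz target, the divider is 16,
 * giving an actual EXTCLK of 6.25MHz - close to the 6MHz the post
 * mentions, since an exact 6MHz is not reachable from 100MHz. */
```

The takeaway is that EXTCLK is only approximate: the PWM can only divide the peripheral clock by an integer.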

The VSYNC interrupt handler is written entirely in assembly; unfortunately this makes it harder to port, but it is much better optimized. Each scanline is read in a loop, converted from RGB565 to grayscale, and then stored in the frame buffer.

Once a complete frame has been read, it is compressed using lzf and then sent out over SPI to the Nordic radio chip. A simple header precedes the frame data, consisting of a start-of-frame marker (0xAA), used to synchronize the frames, followed by the length of the frame, and then the frame data itself.
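The framing described above can be sketched as follows. This is an illustration only: the width and byte order of the length field, and the helper name, are assumptions the post doesn't specify:

```c
#include <stdint.h>
#include <string.h>

#define SOF_MARKER 0xAAu  /* start-of-frame marker from the post */

/* Hypothetical packet layout: [0xAA][len lo][len hi][payload...].
 * Returns the total number of bytes to clock out over SPI. */
size_t frame_packet(uint8_t *out, const uint8_t *data, uint16_t len)
{
    out[0] = SOF_MARKER;
    out[1] = (uint8_t)(len & 0xFF);   /* length, little-endian (assumed) */
    out[2] = (uint8_t)(len >> 8);
    memcpy(&out[3], data, len);
    return (size_t)len + 3;
}
```

The receiver scans for 0xAA to resynchronize after a dropped packet, then reads the length to know where the frame ends.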

The datasheet does not mention how the data bus clock (DCLK) frequency relates to the external clock (EXTCLK) frequency. However, the maximum DCLK frequency can be inferred from the timing characteristics and from the following diagram: given that the minimum setup time TSU and hold time THD are 10ns each, if the camera is running at the maximum EXTCLK frequency of 25MHz, then the maximum DCLK frequency is 1s/20ns = 50MHz.
If you sample the data at the rising edge of DCLK, you have only 10ns to read DOUT; if the MCU is running at 100MHz, that's a single clock cycle to read DOUT, which is not possible. However, since DCLK is a function of EXTCLK, lowering EXTCLK lowers DCLK and increases the time window we have to read the data. The nRFCam generates a 6MHz EXTCLK for the camera, which gives us a few cycles to read each sample.
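The cycle budget above can be checked with a small helper. Assuming DCLK = 2 x EXTCLK, as implied by the 25MHz EXTCLK to 50MHz DCLK inference in the previous paragraph:

```c
#include <stdint.h>

/* Back-of-the-envelope check: MCU cycles available per DCLK period.
 * Assumes DCLK runs at twice EXTCLK, as inferred from the datasheet
 * timing (25MHz EXTCLK -> 50MHz max DCLK). */
uint32_t cycles_per_dclk(uint32_t cpu_hz, uint32_t extclk_hz)
{
    uint32_t dclk_hz = 2 * extclk_hz;
    return cpu_hz / dclk_hz;
}

/* cycles_per_dclk(100000000, 25000000) -> 2  (far too tight)
 * cycles_per_dclk(100000000,  6000000) -> 8  (a few cycles to work with) */
```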

The Nordic chip has an on-air data rate of 2Mbps, or 250KB/s (ignoring any protocol overhead). This means that even with the camera set to output the smallest possible frame, 24KB (128x96x2), the maximum we can send is about 10 FPS. In other words, there's no point in trying to keep up with a higher frame rate when all you can send is 10 FPS.
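That estimate can be reproduced directly:

```c
#include <stdint.h>

/* Throughput ceiling: on-air bit rate (ignoring protocol overhead)
 * divided by the raw frame size in bytes. */
uint32_t max_fps(uint32_t datarate_bps, uint32_t frame_bytes)
{
    return (datarate_bps / 8) / frame_bytes;
}

/* 128 x 96 pixels at 2 bytes/pixel (RGB565) = 24576 bytes,
 * so max_fps(2000000, 128 * 96 * 2) -> 10 */
```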

Converting the frame from RGB565 to grayscale reduces the frame size by half (to 12KB). Since we don't have enough time between DCLK edges, the whole scanline is read first and then converted before the next scanline starts (before the next HSYNC edge).
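A C sketch of the per-pixel conversion step follows. The project's version is hand-written assembly and its exact weights aren't given in the post, so standard BT.601 integer luma weights are assumed here:

```c
#include <stdint.h>

/* Sketch of RGB565 -> 8-bit grayscale, assuming BT.601 integer
 * weights (77, 150, 29; they sum to 256 so the >> 8 normalizes). */
uint8_t rgb565_to_gray(uint16_t px)
{
    uint8_t r = (px >> 11) & 0x1F;   /* 5 bits */
    uint8_t g = (px >> 5)  & 0x3F;   /* 6 bits */
    uint8_t b =  px        & 0x1F;   /* 5 bits */
    /* Expand each component to 8 bits by replicating the top bits. */
    uint8_t r8 = (uint8_t)((r << 3) | (r >> 2));
    uint8_t g8 = (uint8_t)((g << 2) | (g >> 4));
    uint8_t b8 = (uint8_t)((b << 3) | (b >> 2));
    return (uint8_t)((77 * r8 + 150 * g8 + 29 * b8) >> 8);
}
```

In the real interrupt handler this loop runs over a whole scanline in the gap before the next HSYNC edge, which is why it was worth writing in assembly.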

After the whole frame has been read, it is compressed using the lzf compression algorithm. There's no particular reason for selecting lzf other than that it was the easiest to port and configure for a low memory footprint; maybe there's an algorithm that can better exploit the data. At this point the frame size is reduced to 4KB-6KB (depending on the frame).

On the other side, the frame can be received by any compatible Nordic chip and decompressed, and the grayscale level of each pixel is repeated in the RGB components of the new pixel. Note that since the green component has 6 bits in RGB565, you will need to multiply the grayscale value by two, or left-shift it by one, for the green component.
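That expansion can be sketched as follows, assuming the grayscale value is stored as 8 bits per pixel:

```c
#include <stdint.h>

/* Rebuild an RGB565 pixel from an 8-bit grayscale level: the same
 * level goes into all three components, but green is 6 bits wide,
 * so it keeps one extra bit - twice the 5-bit value. */
uint16_t gray_to_rgb565(uint8_t gray)
{
    uint16_t r = gray >> 3;   /* 5 bits */
    uint16_t g = gray >> 2;   /* 6 bits: (gray >> 3) << 1 plus the extra bit */
    uint16_t b = gray >> 3;   /* 5 bits */
    return (uint16_t)((r << 11) | (g << 5) | b);
}
```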

The Prototype
I haven't tested the new board yet, so it's not in the repository. There is, however, a new breakout, the one used in the prototype. It's basically the same as the old one, except that it's meant to be connected as a module rather than plugged into a breadboard, and I've removed the crystal oscillator and instead exposed the EXTCLK pin so it can be driven by a PWM. This is the prototype in action:
Eagle files and source code
hg clone


  1. Hi again! I'm attempting to follow a little in your footsteps with the Toshiba camera. I've compiled the source for an LPCXpresso using the Code Red toolchain.

    However, sleep() appears not to be implemented. Any ideas? Perhaps there's a separate library I need? Or your toolchain provides it for you or something? Will try to track down a suitable delay function.

    I'm hoping to be able to do some basic blob detection with the camera... we'll see...

    1. Hi Michael, I use the SysTick timer to implement sleep; check out my other post about the ARM Cortex for the code. I'm working on something similar: I made an SPI cam widget, you just plug it in and it sends the frames, and now I'm trying to implement the Viola-Jones face detection algorithm on the camera. If you would like to help, please contact me at i.abdalkader (at) gmail (dot) com

  2. Wow, AWESOME! I just stumbled on this blog from your YouTube video. I've been working on the exact same thing for some time now. I'm trying to get JPEG compression to work on my (similar, Aptina) camera. It reduces the size to about 7KB in full color (same size as yours), so your frame rate or size could be much larger.

    You might want to try the TCM8240MD, I think it has JPEG. I'm building a robot if you want to check it out, or contact me, I would be very open to collaborating

  3. Hello,
    I can't find the source code for this project anymore.
    Is there any chance you could link me the src ?
    I'm mainly interested in the NRF module lib for ARM (mine is stm32f103) and I can't seem to find any good example on the net.

    1. Hi, the code is still there, I've just checked.

      There's also an AVR library here

      And another AVR example here too