January 25

The team is working on finishing up first designs for all things prototyping. January has already flown by and we are working away.

Mechanical Update

The mechanical progress so far this term has focused on CAD design of the modules and collaborating with the electrical team to meet system specifications. The front module of the bike system was designed first; this module mounts and protects the front stereo camera pair and the front illumination LED. The components sit behind an acrylic cover in individual sections to prevent light bleed, and the LED is tilted slightly downward to optimize light coverage in front of the cyclist for the selected lens angle.


Front Module CAD Design

Work is underway on the CAD for the main module, which sits in the center of the bike frame and houses the majority of the system's electronics. Since last term, the compute module has changed and an additional camera splitting board has been added, so the main module layout was adapted to accommodate these changes. Main module CAD development also included the design of side light and piezo mounting modules, battery mounting, and the heat sink. The heat sink connects the hot internal electrical components to the metal side plates using a manually milled aluminum block and thermal pads. This passive heat dissipation design requires no fan and keeps the electronics sealed for simple waterproofing. Finally, the purple body in the CAD is a placeholder for the custom Falcon board being developed by the electrical team.


Main and Front Module on Bicycle

Electrical Update

The electrical schematic design for the main and auxiliary PCBs has been completed. The main PCB manages charging and balancing of the lithium ion battery pack, power distribution to all other components of the system (Raspberry Pi boards, lighting elements), and drives the piezo buzzers and high power front light unit. The main PCB is equipped with a microcontroller, several DC/DC converters, and switching circuitry to accomplish this. The auxiliary PCBs have independent LED drivers and LED arrays to provide side and rear lighting.

The next step is to complete the PCB layout, keeping compatibility with the mechanical design in mind. The PCB is being designed so that it can be assembled in-house, allowing us to source components in parallel with PCB manufacturing. This tightens the design cycle and reduces the sourcing risk associated with the ongoing chip shortage.

Schematics for Connectors + Microcontroller

Schematics for Battery Charging + Balancing

Schematics for High Power Regulation

Schematics for Low Power Regulation

Software Update

Earlier this month, we received our first set of stereo cameras, the Arducam 12MP MINI IMX477 Synchronized Stereo Camera Bundle Kit. The kit consists of a stereo camera HAT and two 12MP IMX477 camera modules, synchronized at the hardware level. Hardware synchronization eliminates the challenge of synchronizing pairs of cameras in software and allows our team to run two cameras through the Raspberry Pi's single high-speed MIPI CSI-2 port.

A static housing was designed for testing the performance of the Raspberry Pi 4 Model B and the Arducam stereo cameras. The housing aligns the two cameras in a stereo configuration with a baseline of 18.5cm.


Static Stereo Housing

Our team verified that frames could be read at the advertised 30 fps at 2028 x 1522 pixel resolution using the official Raspberry Pi drivers. The full resolution of 8112 x 3040 can only be achieved at 6 fps with a third-party Arducam driver. Since our constraints require a frame rate above 10 fps, our team elected to run the cameras at the lower resolution.

With the cameras functional, object detection benchmarking was performed using the EfficientNet-Lite0 model. This Google-designed model is optimized for mobile CPUs and has the lowest latency of the EfficientNet-Lite family. With a 320 x 320 pixel input, we recorded an average inference time of 134.54 milliseconds. This performance is impressive considering the Raspberry Pi lacks a co-processor for ML acceleration, but it fails to meet our 10 Hz (FPS) specification.
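For reference, converting the measured latency into throughput shows just how far the CPU-only setup falls short of the target; a quick sanity check of the arithmetic:

```python
# Convert the measured average inference latency into throughput
# and compare it against the 10 Hz specification.
AVG_INFERENCE_MS = 134.54   # measured average latency (milliseconds)
TARGET_HZ = 10.0            # required inference rate

achieved_hz = 1000.0 / AVG_INFERENCE_MS   # inferences per second
required_ms = 1000.0 / TARGET_HZ          # per-frame latency budget (ms)

print(f"Achieved: {achieved_hz:.2f} Hz (budget is {required_ms:.0f} ms/frame)")
# Achieved: 7.43 Hz (budget is 100 ms/frame)
```

At roughly 7.4 Hz, the model would need to shave about 35 ms off each inference to fit the 100 ms frame budget.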

Inference times may be improved by selecting a different model or adding a hardware accelerator. Extensive benchmarking by Alasdair Allan [1] suggests that both can improve performance considerably. As shown below, the best results may be achieved using MobileNet v1 with a Coral USB Accelerator.


Inference Times in Milliseconds

Our team will benchmark MobileNet v1 using TensorFlow Lite on the Raspberry Pi CPU. Unfortunately, it is challenging to source a reasonably priced Coral USB Accelerator due to supply shortages. The alternative Intel Neural Compute Stick 2 provides less substantial improvements but is in stock and may be considered.

In the last few weeks, the team received a pair of Raspberry Pi 4 Model B single-board computers (SBCs). These will be used to perform the image capturing from the stereo camera kits, run the computer vision & trajectory planning models, and communicate the safety critical information with the mobile application.

Due to the complexity of communicating between a mobile application and two SBCs simultaneously, the plan was for the two Raspberry Pis to communicate with each other over a wired connection, with only one of them communicating with the mobile app. With the pair of Raspberry Pis obtained earlier this month, the team was able to put this plan to the test.

The team verified the communication between the two SBCs over a Gigabit Ethernet connection. As the information exchanged between the two SBCs is safety critical, TCP was selected for the transport layer of the communication architecture. Compared with UDP, TCP has a longer header, but it is more reliable and guarantees that the data is received by the destination host. This is accomplished through the acknowledgement segments and handshaking built into the transport layer.
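As a rough sketch of such a link (the port number and message contents here are hypothetical placeholders, not the team's actual configuration), a minimal TCP exchange in Python might look like this, with a loopback server standing in for the second Pi:

```python
import json
import socket
import threading

PORT = 50007  # hypothetical port for the inter-Pi link
ready = threading.Event()
received = {}

def serve_one():
    """Accept a single connection and record the bytes received."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))  # one Pi listens; the other connects
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        with conn:
            received["data"] = conn.recv(4096)

server = threading.Thread(target=serve_one)
server.start()
ready.wait()

# The sending Pi opens a TCP connection; the transport layer handles
# acknowledgement segments and retransmission under the hood.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))
    cli.sendall(json.dumps({"status": "ok"}).encode())

server.join()
print(received["data"])  # b'{"status": "ok"}'
```

On the real hardware the loopback address would simply be replaced by the other Pi's address on the Ethernet link.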

To minimize programming time while ensuring the team meets the specifications, the JSON data interchange format was selected for templating the information communicated between the SBCs. JSON eases debugging for the software sub-team because its objects store and transmit data as human-readable text. Additionally, the text structure of JSON objects lends itself to lossless compression before transmission, enabling fast communication. The figure below illustrates the JSON schema that will be used to communicate the information between the two SBCs.


JSON schema for communication between the two Raspberry Pis
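As an illustration of the build-compress-recover cycle (the field names below are hypothetical placeholders, not the schema shown above), a message can be losslessly compressed and recovered exactly:

```python
import json
import zlib

# Hypothetical message fields -- the real schema is shown in the figure.
message = {
    "timestamp_ms": 1643120000000,
    "objects": [{"class": "car", "distance_m": 12.4}],
    "alert": False,
}

# JSON's text structure compresses losslessly before transmission...
wire = zlib.compress(json.dumps(message).encode())

# ...and the receiving Pi recovers the exact original object.
recovered = json.loads(zlib.decompress(wire).decode())
print(recovered == message)  # True
```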

Using the JSON schema presented above, with the two Raspberry Pi 4 Model Bs connected via a Gigabit Ethernet cable, communication between the two devices was tested. In the test, 100,000 JSON objects were sent between the devices. The one-way communication times obtained from this test are illustrated in the figure below.


One-way communication time over TCP/IP

This verifies that the average communication latency between the two Raspberry Pis is 560 nanoseconds, leaving a significant amount of time for other computationally challenging tasks. Looking ahead, the team will wrap up this test and integrate it into the final software stack.
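The measurement itself can be sketched as follows; this is a minimal single-machine version using a socket pair in place of the real Ethernet link, and a smaller message count than the 100,000 used in the actual test:

```python
import json
import socket
import statistics
import time

N = 1000  # the real test sent 100,000 JSON objects
payload = json.dumps({"alert": False}).encode() + b"\n"

# socketpair stands in for the TCP link between the two Pis.
sender, receiver = socket.socketpair()
latencies = []
for _ in range(N):
    start = time.perf_counter()
    sender.sendall(payload)
    receiver.recv(4096)          # message arrives at the "other Pi"
    latencies.append(time.perf_counter() - start)
sender.close()
receiver.close()

print(f"mean send-to-receive time: {statistics.mean(latencies) * 1e9:.0f} ns")
```

Note that measuring true one-way latency across two physical machines also requires synchronized clocks (or halving a round-trip time); the loopback sketch sidesteps that by timing both ends in one process.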


[1] A. Allan, “The Big Benchmarking Roundup,” Hackster.io, Jan-2020. [Online]. Available: https://www.hackster.io/news/the-big-benchmarking-roundup-a561fbfe8719. [Accessed: 25-Jan-2022]. 

So build away, we do.

Come back and check out our blog for regular updates.

University of Waterloo
  • Group 9
  • Falcon Safety