Femto Mega Documentation

Synchronize Multiple Femto Mega Devices

Each Femto Mega device has an 8-pin GPIO sync port that can be used to interconnect multiple devices. Once connected, the devices can be coordinated via software to trigger captures at synchronized times.

This document covers how to connect and synchronize multiple devices.

Benefits of Using Multiple Femto Mega Devices

There are several reasons to use multiple Femto Mega devices, including:

  • Filling occlusion zones. Although the Femto Mega data transform generates a single image, the two cameras (depth and RGB) are spaced a small distance apart. This offset can produce occlusion. Occlusion refers to situations where a foreground object blocks part of the background scene from one of the cameras on the device. In the generated color image, the foreground object appears to cast a shadow over the background object.

For example, in the image below, the left camera can see the gray pixel “P2”. But the white foreground object blocks the IR view from the right camera. The right camera cannot acquire data for “P2”. Adding synchronized devices can provide data for the occlusion.

  • Scanning 3D objects.
  • Increasing the effective frame rate above 30 frames per second (fps).
  • Capturing multiple 4K color images of the same scene with exposures aligned within 100 microseconds (us).
  • Increasing spatial coverage of the cameras.

Planning Multi-Device Configuration

Be sure to review the Hardware Specifications and Depth Camera sections before beginning.

Selecting a Device Configuration

Devices can be connected using either of the following configurations:

  • Star configuration – One master device synced with up to eight subordinate devices through the professional sync hub, which connects devices over network cables.

  • Daisy chain configuration – One master device synced with up to eight subordinate devices through the developer sync hub, which connects devices directly with sync cables.

Introduction to the Camera Sync Interface and Multi-camera Sync Cable

The camera uses an 8-pin GPIO port to implement synchronization. The pins of this interface are described in the following table:

| Pin | Definition | Signal Type | Function |
| --- | --- | --- | --- |
| 1 | SYNC_VCC | Input; continuous level | I/O level setting signal. The I/O level of the 8-pin sync interface defaults to 1.8V. If 3.3V or 5V is supplied on SYNC_VCC, the I/O level is adjusted to match, improving the stability and transmission distance of the sync signal. |
| 2 | GPIO_OUT | Output; pulse | Sync driving signal: IR exposure sync signal. A typical application is driving an external supplementary light. |
| 3 | VSYNC_OUT | Output; pulse | Sync trigger signal: triggers synchronized data acquisition by the downstream device. |
| 4 | TIMER_SYNC_OUT | Output; pulse | Hardware timestamp reset signal: resets the hardware timestamp of the downstream device. |
| 5 | RESET_IN | Input; pulse | Camera hard reset signal: triggers the camera to power down and then automatically power back on. |
| 6 | VSYNC_IN | Input; pulse | Sync trigger signal: the acquisition trigger sent by the upstream device. |
| 7 | TIMER_SYNC_IN | Input; pulse | Hardware timestamp reset signal: the timestamp reset instruction sent by the upstream device. |
| 8 | GND | Input/Output; continuous level | Ground: the reference for the sync signals. |

The Multi-camera Sync Cable is shown in the figure below. On the professional version of the cable, the 8-pin connector on the left attaches to the camera, and the RJ45 connector on the right attaches to a network cable.

Using External Trigger

In both configurations, the master device provides the trigger signal to subordinates. But you can use a custom external source as the sync trigger instead. For example, this option can enable syncing captures with other equipment. The external trigger source connects to the VSYNC_IN port on devices whether in star or daisy chain configurations.

The external trigger source must operate the same as the master device. It must provide a sync signal with:

  • Rising edge trigger
  • High pulse width: >1ms
  • Trigger level: 1.8V/3.3V/5V
  • Frequency support: 30 FPS, 25 FPS, 15 FPS, and 5 FPS (matching the frequency of the master camera’s VSYNC_IN signal)

The trigger source must use a Multi-camera Sync Cable to relay the signal to the sync signal input port on subordinates.
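As a sanity check, the trigger timing constraints above can be expressed in a short Python sketch. The function names and structure are illustrative only, not part of any SDK:

```python
# Supported sync frequencies for the external trigger, per the list above.
SUPPORTED_FPS = (30, 25, 15, 5)

def trigger_period_us(fps):
    """Return the trigger period in microseconds for a supported frame rate."""
    if fps not in SUPPORTED_FPS:
        raise ValueError(f"unsupported sync frequency: {fps} FPS")
    return 1_000_000 // fps

def pulse_width_ok(width_us, fps):
    """The high pulse must exceed 1 ms but still fit inside one trigger period."""
    return 1_000 < width_us < trigger_period_us(fps)

print(trigger_period_us(30))      # 33333 us between rising edges at 30 FPS
print(pulse_width_ok(2_000, 30))  # True: a 2 ms high pulse is valid
```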

Multi-camera synchronization of Primary connected to the Multi-camera Sync hub Pro diagram

Multi-camera synchronization of Secondary connected to the Multi-camera Sync hub Pro diagram

Planning Camera Settings and Software Configurations

See Orbbec SDK and Orbbec SDK K4A Wrapper for info on setting up the software to control cameras and use image data.

This section discusses factors that affect the synchronized devices as a group, rather than individually. Account for these factors in your software.

Exposure Considerations

To precisely align device exposure timing, we recommend using manual exposure. With auto exposure, each color camera may dynamically change the actual exposure value. Since the exposure duration affects timing, such changes impact sync effectiveness.

Avoid re-applying the same exposure settings inside the image capture loop unless necessary; if possible, call the exposure API only once, before capture begins.
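The pattern can be sketched as follows. `Camera` here is a hypothetical stand-in for an SDK device handle, used only to illustrate setting the exposure once outside the loop; it is not an Orbbec API:

```python
class Camera:
    """Hypothetical stand-in for an SDK device handle (illustration only)."""
    def __init__(self):
        self.exposure_api_calls = 0

    def set_manual_exposure(self, exposure_us):
        # Each call risks disturbing the aligned exposure timing across devices.
        self.exposure_api_calls += 1

    def capture(self):
        return {}  # placeholder frame

cam = Camera()
cam.set_manual_exposure(8_000)  # set once, before the capture loop
frames = [cam.capture() for _ in range(100)]
print(cam.exposure_api_calls)   # 1: exposure was not re-applied per frame
```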

 

Avoiding Interference Between Depth Cameras

If multiple depth cameras image overlapping fields of view, each must image using its own associated laser. To prevent interference between lasers, the camera captures should be offset from each other by 160us or more.

For each depth camera capture, the laser fires 9 times and is active for only 125us each time. Between pulses, the laser idles for 1450us or 2390us, depending on the mode. This behavior means the offset calculation starts from the 125us pulse width.

In addition, the difference between the camera clock and device firmware clock increases the minimum offset to 160us. To calculate a more precise offset for your configuration, note the depth mode used and reference the Depth Sensor Raw Timing table. Using the data in this table, the exposure time for each camera can be computed as:

Exposure time = (Infrared pulses × Pulse width) + (Idle cycles × Idle time)

Where:

  • Infrared pulse = Number of laser pulses per exposure, which is 9
  • Pulse width = Duration laser is active per pulse
  • Idle cycle = Number of idle periods per exposure, which is 8
  • Idle time = Duration of each idle period

With a 160us offset, up to 9 additional depth cameras can be configured so that when one laser fires, the others remain idle.

In software, use depth_delay_off_color_usec or subordinate_delay_off_master_usec to ensure each infrared laser fires within its own 160us window, or ensure the cameras have non-overlapping fields of view.

Note: The actual pulse width is 125us, but we state 160us to provide some margin. As an example with NFOV UNBINNED, after each 125us pulse there is a 1450us idle period. Adding those up – (9 x 125) + (8 x 1450) – gives an exposure time of approximately 12.7ms. The trick to interleaving exposures across two devices is to have the first pulse of the second camera land in the first idle period of the first camera. The delay between the first and second camera can be as little as 125us (the pulse width), but we recommend a margin, so we state 160us. Given a 160us offset, up to 10 cameras can interleave their exposure times.
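The arithmetic above can be captured in a short Python sketch, using the NFOV UNBINNED values from the note. The function names are illustrative, not SDK calls:

```python
def exposure_time_us(pulses=9, pulse_width_us=125, idle_cycles=8, idle_time_us=1450):
    """Exposure time = (pulses x pulse width) + (idle cycles x idle time)."""
    return pulses * pulse_width_us + idle_cycles * idle_time_us

def max_interleaved_cameras(idle_time_us=1450, offset_us=160):
    """One camera fires first; the others fit their pulses into its idle period."""
    return 1 + idle_time_us // offset_us

# Candidate subordinate_delay_off_master_usec values, one per interleaved camera.
delays_us = [i * 160 for i in range(max_interleaved_cameras())]

print(exposure_time_us())         # 12725 us, about 12.7 ms
print(max_interleaved_cameras())  # 10 cameras can interleave at a 160 us offset
print(delays_us)                  # one 160 us step per camera
```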

 

 

Preparing Devices and Other Hardware

In addition to configuring multiple Femto Mega devices, other hosts and hardware may be needed to support the configuration you intend to build. Use the info in this section to ensure all devices and hardware are ready before starting setup.

Femto Mega Devices

For each Femto Mega device to be synced:

  • Ensure the latest firmware is installed on the device. See Update Femto Mega Firmware for details on updating firmware.
  • Note the serial number of each device. This will be needed later during setup.

Host PCs

Each Femto Mega is typically connected to its own dedicated host PC. Dedicated host controllers may also be required, depending on how the devices are used and the amount of data transferred over USB.

Ensure Orbbec SDK or Orbbec SDK K4A Wrapper is installed on the hosts. See Download Orbbec SDK and K4A Wrapper for details on installing the SDK.

Linux Machines: USB Memory on Ubuntu

By default, Linux-based hosts allocate only 16MB of kernel memory per controller for USB transfers. This is often sufficient for a single Femto Mega device, but more memory per controller is needed to support multiple devices. To increase the memory:

  1. Edit /etc/default/grub.
  2. Find the line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
  3. Change it to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.usbfs_memory_mb=128"
  4. Run sudo update-grub.
  5. Reboot the machine.

Note: These commands set the USB memory to 128MB. This example is 8X the default setting. Much larger values can be set as appropriate for the solution.
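After rebooting, the active value can be verified by reading the kernel's sysfs parameter. This sketch returns None when the parameter is unavailable (for example, on non-Linux hosts):

```python
from pathlib import Path

USBFS_PARAM = Path("/sys/module/usbcore/parameters/usbfs_memory_mb")

def usbfs_memory_mb():
    """Return the kernel's USB transfer memory limit in MB, or None if unknown."""
    try:
        return int(USBFS_PARAM.read_text().strip())
    except (OSError, ValueError):
        return None

mb = usbfs_memory_mb()
if mb is not None and mb < 128:
    print(f"usbfs_memory_mb is {mb} MB; multiple devices may need 128 MB or more")
```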

 

Cables

To interconnect devices and connect to hosts, Multi-camera Sync Cables, network cables, and a sync hub must be used. Network cable lengths should be less than 10m.

The number of cables needed depends on the number of devices and the specific device configuration. Sync cables and hub are not included with Femto Mega cameras. Sync cables, hub, and network cables must be purchased separately.

Connecting Devices in Star Configuration

  1. Connect each Femto Mega device to power.
  2. Connect each device to its own host computer.
  3. Choose one device to be the master and connect its sync cable.
  4. Connect the other end of that sync cable to a straight-through (T568B to T568B) network cable, and plug the network cable into the Primary In port on the professional star sync hub.
  5. To connect a subordinate device, attach a sync cable to its Femto Mega sync port and connect it via network cable to the hub’s first subordinate port, Secondary Cam 1. Connect the second subordinate to Secondary Cam 2, and so on.
  6. Repeat for all devices.

Connecting Devices in Daisy Chain Configuration

  1. Connect each Femto Mega device to power.
  2. Connect each device to its own host computer.
  3. Choose one device to be the master and connect its sync cable.
  4. Connect the other end of that sync cable to the Primary In port on the daisy chain sync hub.
  5. To connect another device, attach its sync cable to the Femto Mega sync port and connect the other end to the hub’s first subordinate port, Secondary Cam 1. Connect the second subordinate to Secondary Cam 2, and so on.
  6. Repeat for all devices.

Note: The daisy chain configuration uses the developer version of the sync hub and cables, which, unlike the professional star version of the hub, does not support network cable connections. See the Multi-Camera Sync Hub Specs for details.

Verifying Devices Are Connected and Communicating

To verify devices are connected properly, use the K4A Viewer. Repeat this process as needed to test each subordinate device paired with the master.

Important: During testing, you must know the serial number of each Femto Mega.

  1. Open two instances of the K4A Viewer.
  2. Under “Open Device”, select the serial number of the subordinate device to test.
  3. Under External Sync options, select Sub.
  4. Select Start.

Note: Since this is a subordinate device, the K4A Viewer will not show images after starting the device. Images are not displayed until the subordinate receives the sync signal from the master.

  5. After starting the subordinate, use the other instance of the K4A Viewer to open the master device.
  6. Under External Sync options, select Master.
  7. Select Start.

Once the Femto Mega master starts, both instances of the K4A Viewer should display images.

Note: In the K4A Viewer, Master corresponds to the primary device, and Sub corresponds to a secondary (subordinate) device.

Calibrating Synchronized Multi-Device Setups

Once the devices are verified to be communicating, they can be calibrated to generate images in a unified domain.

Within a single device, the depth and RGB cameras are factory calibrated to work together. But when using multiple devices together, they must be calibrated to determine how to transform images from the camera domain they were captured in to the camera domain used for processing the images.

Several options exist for cross-calibrating devices. A GitHub green screen code sample using OpenCV methods is available; the readme accompanying the code sample provides more details and guidance on calibrating devices.

See Calibration Functions for more on calibration.
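For illustration, cross calibration produces extrinsics (a rotation R and translation t) that map points from a secondary camera’s frame into the primary camera’s frame. The following pure-Python sketch shows how such extrinsics are applied; the function name and example values are illustrative, not SDK calls:

```python
def to_primary(point, R, t):
    """Apply extrinsics: p_primary = R @ p_secondary + t (coordinates in meters)."""
    x, y, z = point
    return tuple(
        R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
        for i in range(3)
    )

# Identity rotation plus a 10 cm baseline along X, as a stand-in for real
# calibration output (real R and t come from the cross-calibration step).
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = (0.10, 0.0, 0.0)

print(to_primary((0.0, 0.0, 2.0), R, t))  # point 2 m ahead, shifted 10 cm in X
```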

 
