
šŸš€ FlowSentry-Wake | Stage 2 Update

  • January 29, 2026
  • 3 replies
  • 39 views

EdgeFlowNet Optical Flow Successfully Deployed On-Board

1ļøāƒ£ EdgeFlowNet Is Now Running on the Board

We’ve successfully compiled and deployed EdgeFlowNet, a modern encoder–decoder optical flow model, directly on the Orange Pi 5 Plus + Metis M.2 platform.

2ļøāƒ£ Calibration for Optical Flow Is Not ā€œPlug-and-Playā€

Unlike single-image models, optical flow operates on consecutive frame pairs.

To make this work inside the Voyager SDK calibration flow, we implemented a custom DataAdapter dedicated to optical flow:

  • Takes two consecutive RGB frames
  • Concatenates them into a 6-channel tensor
  • Ensures layout, normalization, and ordering match model expectations

This adapter is used only for calibration, but it is critical for correct quantization and deployment.
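For reference, here is a minimal sketch of what such a frame-pair adapter can look like. The class name, constructor arguments, and iteration protocol are illustrative assumptions for this post, not the actual Voyager SDK DataAdapter interface, and the normalization constants must match whatever EdgeFlowNet saw at training time.

import numpy as np

class FlowPairCalibrationAdapter:
    """Calibration-only adapter: turns consecutive RGB frames into the
    6-channel pairs an optical flow model expects (illustrative sketch)."""

    def __init__(self, frames, scale=1.0 / 255.0, channels_first=True):
        # frames: sequence of HxWx3 uint8 RGB frames, in temporal order
        self.frames = frames
        self.scale = scale
        self.channels_first = channels_first

    def __iter__(self):
        for prev, curr in zip(self.frames[:-1], self.frames[1:]):
            # Concatenate frame t and frame t+1 along the channel axis -> HxWx6
            pair = np.concatenate([prev, curr], axis=-1).astype(np.float32)
            # Normalization must mirror the model's training-time preprocessing
            pair *= self.scale
            if self.channels_first:
                pair = pair.transpose(2, 0, 1)   # HWC -> CHW
            yield np.expand_dims(pair, axis=0)   # add batch dimension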

3ļøāƒ£ A Mysterious Shape Mismatch — and How We Found It

During deployment, we encountered a confusing Relay type error:

Error: The Relay type checker is unable to show the following types match:
Tensor[(192,260,4,64),int8]
Tensor[(192,256,1,64),int8]

Despite repeatedly verifying the model’s shapes, we could not trace this dimensional mismatch back to the model itself.

In the end, we found the right way to debug this in compiler_configs_full.md.

To locate the issue, we enabled two debug options in the model’s YAML file.

compilation_config:
  save_error_artifact: true
  trace_tvm_passes: true

This exposed the real culprit:

šŸ‘‰ ICR was enabled by default.

The fix was explicit and clean:

compilation_config:
  enable_icr: false
  enable_swicr: false

Once disabled, the mismatch disappeared.

4ļøāƒ£ Non-Symmetric Padding in Transposed Convolutions

EdgeFlowNet’s decoder contains several stride-2 transposed convolutions with odd kernel sizes.

This leads to non-symmetric padding, for example:

  • Padding: [0, 0, 1, 1]

However, AIPU requires symmetric padding.

Retraining the model was not an option, since we wanted to preserve the original weights.

So we applied a targeted transformation strategy:

  • Detect non-symmetric padding
  • Convert it into:
    • Symmetric padding
    • Explicit output padding compensation

Example:

Before the change:

  • Padding: [0, 0, 1, 1]
  • Output padding: [0, 0]
  • AutoPad: SAME_UPPER

After the change:

  • Padding: [1, 1, 1, 1]
  • Output padding: [1, 1]
  • AutoPad: NOTSET
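As an illustration of the rewrite, the sketch below applies the same transformation to an exported ONNX graph: each axis takes the larger of its two pad values symmetrically, and the difference is pushed into output_padding so the output size is unchanged (for begin/end pads b and e, the compensation is 2*max(b, e) - b - e). Function and file names are placeholders, not our exact tooling.

import copy
import onnx
from onnx import helper

def symmetrize_convtranspose_padding(model):
    # Rewrite asymmetric ConvTranspose padding as symmetric padding plus
    # output_padding compensation; the weights are left untouched.
    for node in model.graph.node:
        if node.op_type != "ConvTranspose":
            continue
        attrs = {a.name: a for a in node.attribute}
        if "pads" not in attrs:
            continue
        pads = list(attrs["pads"].ints)               # [begin..., end...]
        half = len(pads) // 2
        begin, end = pads[:half], pads[half:]
        if begin == end:
            continue                                   # already symmetric
        sym = [max(b, e) for b, e in zip(begin, end)]
        # Output size stays identical: out_pad = 2*sym - begin - end
        out_pad = [2 * s - b - e for s, b, e in zip(sym, begin, end)]
        kept = [copy.deepcopy(a) for a in node.attribute
                if a.name not in ("pads", "output_padding", "auto_pad")]
        del node.attribute[:]
        node.attribute.extend(kept)
        node.attribute.extend([
            helper.make_attribute("pads", sym + sym),
            helper.make_attribute("output_padding", out_pad),
            helper.make_attribute("auto_pad", "NOTSET"),
        ])
    return model

model = symmetrize_convtranspose_padding(onnx.load("edgeflownet.onnx"))
onnx.save(model, "edgeflownet_sym_pad.onnx")

Running a pass like this once on the exported model, before calibration and compilation, is enough.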

This satisfies the hardware constraints without changing the final output behavior or performance. A pure deployment-side fix.

5ļøāƒ£ Where We Are Now

At this point, we have confirmed:

  • EdgeFlowNet compiles successfully
  • It runs fully on-board
  • It uses a custom calibration pipeline
  • It meets the hardware constraints without retraining

This is a solid foundation.

šŸ”œ Next Stage: Making It Useful

Coming up next:

  • Optical flow visualization
  • Runtime cost analysis
  • Integration into the FlowSentry-Wake multi-stage perception framework
  • Coordinated operation of:
    • Object detection
    • Motion estimation
    • Power-aware stage switching

This is where the system-level design really starts to shine.

Stay tuned.


3 replies

  • Cadet
  • January 29, 2026

This is great news! I was wondering if the compiled model has been made open source? Would be of great help.

PS: What is the FPS?


  • Author
  • Cadet
  • January 29, 2026

Thanks! The full code will be open-sourced around the end of February when the project wraps up.
FPS results will be shared in the next stage, once the optical flow visualization pipeline is ready.


Spanner
  • Axelera Team
  • January 29, 2026

Awesome work! I’m not sure if I’ve heard about anyone getting optical flow running on Metis yet - nicely done! Can’t wait to see how it performs šŸ‘