
Real-time single-cell tracking for unattended research microscopy — powered by Axelera Metis M.2

  • April 11, 2026
  • 2 replies
  • 27 views

Hey Axelera community! 👋

We're a small optical microscopy engineering company, and we've been exploring whether the Metis M.2 could bring edge AI inference into real research lab environments — not just benchmarks, but actual unattended scientific workflows. Here's what we found.

🧫 What we built

A complete pipeline for real-time single-cell tracking of live Paramecium microorganisms, combining a motorized XY microscopy stage with YOLOv8-seg inference — and ultimately running it on Axelera Metis M.2 + Radxa Rock 5 ITX+.

Real-time motorized tracking with YOLOv8-seg

The core system: YOLOv8-seg detects and segments live Paramecium specimens in real time, driving a motorized XY stage (Nikon/Märzhäuser) to keep individual microorganisms continuously centered in the field of view.

https://youtu.be/3-1CXPTxqB8

Long-duration trajectory reconstruction

What the system actually captures: a real individual trajectory reconstructed from an extended recording session. We've tracked single specimens continuously for up to 5 hours 49 minutes. This kind of long-duration, unattended data collection is exactly what research labs need for quantitative behaviour experiments.

https://youtu.be/B9h5Wr4lQZc
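To give a feel for what the trajectory file enables, here is a minimal Python sketch that turns a list of logged (x, y) stage centroids into path length and duration. The log format (plain micron coordinates at a fixed sampling interval) is an assumption for illustration; the actual recording format may differ.

```python
import math

def trajectory_stats(points, dt):
    """Compute total path length (µm) and duration (s) from a list of
    (x, y) stage positions in microns, sampled every dt seconds.

    Assumed log format: one (x, y) tuple per sample, fixed interval.
    """
    length = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        length += math.hypot(x1 - x0, y1 - y0)
    duration = (len(points) - 1) * dt
    return length, duration

# Example: four samples at 0.1 s spacing, each step 10 µm long
length, duration = trajectory_stats([(0, 0), (10, 0), (10, 10), (20, 10)], 0.1)
# length -> 30.0 µm over ~0.3 s
```

From here, per-segment speeds or turning angles for behavioural analysis are straightforward extensions of the same loop.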

Metis M.2 proof of concept on Radxa Rock 5 ITX+

This is the edge deployment piece — and where things got interesting. Getting Metis running on the Radxa Rock 5 ITX+ (RK3588) wasn't straightforward, so here are the key steps and gotchas in case it helps anyone:

https://youtu.be/s5maD4ojorg

Steps that actually worked:

  • OS: Ubuntu 24.04 Joshua-Riek build (not stock Ubuntu)
  • Python: 3.12 required in our setup
  • Model compatibility: YOLOv8n-seg ✅ — YOLOv11-seg ❌ (failed due to unsupported attention operators at the time we tried)
  • The critical fix — DTB patch: the Radxa Rock 5 ITX+ PCIe MEM window for fe150000 is only 14 MB by default, while Metis needs at least 512 MB. We patched the DTB to expand it to 512 MB in 64-bit address space (0x900000000). Without this patch, Metis simply won't initialize. If you're hitting mysterious PCIe failures on ARM SBCs, this is likely your culprit.

Results:

  • 88.6 FPS with YOLOv8n-seg
  • 🖥️ 16.1% CPU usage
  • 🌡️ 38°C — thermally stable for 24/7 operation

Why this matters for research labs

The combination of Metis M.2 + an affordable SBC like the Radxa Rock 5 ITX+ opens up a genuinely useful deployment scenario: unattended AI inference in research laboratory conditions. No GPU workstation, no active cooling concerns, no high power draw. Just a compact, stable platform running continuously alongside a motorized microscope.

For frontier science applications — long-duration behavioral studies, automated screening, continuous environmental monitoring — this kind of edge AI node could be a game changer.

Happy to share more technical details on the DTB patch, the YOLOv8-seg training pipeline, or the stage control integration. Just ask! 🙌

 

Regards,

FxGc

2 replies

Spanner
Axelera Team
  • April 14, 2026

This is brilliant! Real unattended science workflows are exactly the kind of use case where edge AI properly earns its keep over a GPU workstation!

I'm curious about the closed-loop stage control. At 88.6 FPS you're getting a new segmentation mask roughly every 11 ms. How are you translating that into XY stage commands? Is the stage keeping up at that rate, or are you only sending movement corrections every [N] frames?

And by the way, those numbers on the Rock 5 ITX+ (88.6 FPS at 38°C) are well impressive! Very tidy for 24/7 operation 👍 Will this be used in actual lab conditions and science projects?


  • Author
  • April 14, 2026

Hi,

The 88.6 FPS is inference-only on pre-recorded video — the camera (Lumenera Infinity X) delivers ~15 FPS in practice. That said, minimizing inference latency still matters a lot: Paramecium doesn't wait for us! 😄

The closed-loop control works like this:

  • Every frame, YOLOv8-seg returns the segmentation mask
  • We calculate the centroid from the mask (straightforward point list analysis)
  • If the centroid deviates from the XY frame center, we compute the real position in microns and send a velocity vector to the stage — not just a target position. We keep sending velocity corrections until the Paramecium is re-centered and has actually stopped moving
  • The Märzhäuser stage and our USB controller have ~30–60 ms to process each new velocity command, but the stage itself can reach 8,000 µm/s — German engineering at its finest!
  • End-to-end, the effective tracking rate is 10–15 FPS at the output trajectory file level.
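The loop above can be sketched in a few lines of Python. The proportional gain, deadband, and pixel-to-micron scale below are illustrative placeholders, not the actual tuning, and the mask is simplified to a list of pixel coordinates.

```python
def mask_centroid(points):
    """Centroid of a segmentation mask given as a list of (x, y) pixels."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def velocity_command(centroid_px, frame_center_px, um_per_px,
                     gain=2.0, max_speed=8000.0, deadband_um=5.0):
    """Map a mask centroid to an XY stage velocity vector (µm/s).

    Proportional control: velocity scales with the centroid's offset from
    frame center, clamped to the stage's 8,000 µm/s limit. Inside the
    deadband the stage is commanded to stop (specimen re-centered).
    Gain/deadband values are illustrative assumptions.
    """
    dx_um = (centroid_px[0] - frame_center_px[0]) * um_per_px
    dy_um = (centroid_px[1] - frame_center_px[1]) * um_per_px
    if (dx_um**2 + dy_um**2) ** 0.5 < deadband_um:
        return (0.0, 0.0)  # close enough to center: stop the stage
    vx = max(-max_speed, min(max_speed, gain * dx_um))
    vy = max(-max_speed, min(max_speed, gain * dy_um))
    return (vx, vy)
```

Sending a velocity vector rather than a target position is the key design choice here: it lets corrections stream in at frame rate without waiting for each positioning move to complete.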

And yes — real lab conditions, absolutely! We only need a table for the motorized microscope (and Paramecium is very, very small 😄). We're running long-duration behavioural studies unattended in collaboration with Dr. Humbert Salvadó's research group at the Universitat de Barcelona (UB).

FxGc — Winkoms Open Microscopy