Question

Run a custom model with end-to-end C/C++ pipeline

  • November 17, 2025
  • 4 replies
  • 66 views


Hi,
How can I implement the following preprocessing to run the end-to-end C/C++ pipeline?
I've already implemented the build_gst function of the custom decoder with C++.

import cv2
import numpy as np
import PIL.Image
import torch
from skimage.restoration import estimate_sigma

def override_preprocess(self, img: PIL.Image.Image | np.ndarray) -> torch.Tensor:
    # Image conversion: ensure a 3-channel RGB array
    img = np.array(img)
    if img.ndim == 2:
        img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
    else:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = np.float32(img / 255.0)

    # Noise estimation, capped at 0.08
    sigma_est = estimate_sigma(img, channel_axis=-1, average_sigmas=True)
    sigma_est = np.sqrt(sigma_est)
    sigma_est = min(sigma_est, 0.08)

    # Resize to 640x480, convert HWC -> CHW, and append a constant noise channel
    img = cv2.resize(img, (640, 480), interpolation=cv2.INTER_LINEAR)
    img_tensor = torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
    noise_tensor = torch.full_like(img_tensor[:1, :, :], fill_value=sigma_est, dtype=torch.float32)
    final_tensor = torch.cat([img_tensor, noise_tensor], dim=0).to(torch.float32)
    return final_tensor
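For reference, the per-pixel math of this preprocessing can be sketched in plain C++ without OpenCV or any Voyager API. This is not the Voyager operator interface, just the transformation itself on a raw interleaved RGB byte buffer; the wavelet-based `estimate_sigma` step is out of scope here, so the (already square-rooted) sigma is taken as a caller-supplied value, and all helper names are hypothetical:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Bilinear resize of one planar float channel (src: srcH x srcW) to dstH x dstW,
// using OpenCV-style half-pixel centre alignment.
static std::vector<float> resize_bilinear(const std::vector<float>& src,
                                          int srcH, int srcW, int dstH, int dstW) {
    std::vector<float> dst(static_cast<size_t>(dstH) * dstW);
    const float sy = static_cast<float>(srcH) / dstH;
    const float sx = static_cast<float>(srcW) / dstW;
    for (int y = 0; y < dstH; ++y) {
        float fy = (y + 0.5f) * sy - 0.5f;
        int y0 = std::clamp(static_cast<int>(std::floor(fy)), 0, srcH - 1);
        int y1 = std::min(y0 + 1, srcH - 1);
        float wy = std::clamp(fy - y0, 0.0f, 1.0f);
        for (int x = 0; x < dstW; ++x) {
            float fx = (x + 0.5f) * sx - 0.5f;
            int x0 = std::clamp(static_cast<int>(std::floor(fx)), 0, srcW - 1);
            int x1 = std::min(x0 + 1, srcW - 1);
            float wx = std::clamp(fx - x0, 0.0f, 1.0f);
            float top = src[y0 * srcW + x0] * (1 - wx) + src[y0 * srcW + x1] * wx;
            float bot = src[y1 * srcW + x0] * (1 - wx) + src[y1 * srcW + x1] * wx;
            dst[static_cast<size_t>(y) * dstW + x] = top * (1 - wy) + bot * wy;
        }
    }
    return dst;
}

// Mirrors override_preprocess: interleaved uint8 RGB (HWC) -> planar float CHW
// buffer of 3 resized colour planes plus one constant noise plane (4 x 480 x 640).
// sigma_est must already be estimated and square-rooted by the caller; it is
// capped at 0.08 exactly as in the Python code.
static std::vector<float> preprocess(const std::vector<uint8_t>& rgb,
                                     int srcH, int srcW, float sigma_est) {
    const int dstW = 640, dstH = 480;
    sigma_est = std::min(sigma_est, 0.08f);
    std::vector<float> out;
    out.reserve(static_cast<size_t>(4) * dstH * dstW);
    for (int c = 0; c < 3; ++c) {
        std::vector<float> plane(static_cast<size_t>(srcH) * srcW);
        for (int i = 0; i < srcH * srcW; ++i)           // HWC -> planar, /255
            plane[i] = rgb[static_cast<size_t>(i) * 3 + c] / 255.0f;
        auto resized = resize_bilinear(plane, srcH, srcW, dstH, dstW);
        out.insert(out.end(), resized.begin(), resized.end());
    }
    // Constant noise channel, as torch.full_like + torch.cat do in Python
    out.insert(out.end(), static_cast<size_t>(dstH) * dstW, sigma_est);
    return out;
}
```

A body like this could then be wrapped in whatever operator or plugin interface the pipeline expects; only the buffer layout (4 planes of 480x640 float32) has to match what the compiled model wants.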

4 replies

Spanner
  • Axelera Team
  • November 17, 2025

Hi @Giodst! Just to clarify a bit - are you aiming to run the entire pipeline in C++, or are you looking to implement just the preprocessing step in C++ and integrate it into a Voyager pipeline?

If it’s the former, Voyager SDK doesn’t really support fully end-to-end pipelines written and executed in C++ as far as I know...


  • Author
  • Ensign
  • November 17, 2025

I'm asking if I can also implement pre-processing in C++, as I did with post-processing through the decoder:

def build_gst(self, gst: gst_builder.Builder, stream_idx: str):
    gst.decode_muxer(lib='libgenerative_decoder.so', options=f'meta_key:{str(self.task_name)};')

The reason is that I can't implement the preprocessing of this particular model via the YAML preprocess operators (https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.3/docs/reference/yaml_operators.md#preprocess).

So I need an alternative method to build end-to-end GStreamer pipelines (https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.3/ax_models/tutorials/general/tutorials.md#tutorial-5-building-end-to-end-gstreamer-pipelines).


Habib
  • Axelera Team
  • November 19, 2025

Hi @Giodst,

Have you tried looking into the operator sources here: https://github.com/axelera-ai-hub/voyager-sdk/tree/release/v1.4/operators? This contains almost all of the pre- and post-processing GStreamer operators used by the pipeline that inference.py builds. You can also define your own preprocessing operator and then add it to the CMake here: https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.4/operators/CMakeLists.txt#L160

You will have to rebuild the operators with make operators, and also make sure that your preprocessing operator is being used by the pipeline by modifying the low-level GStreamer YAML file. You can generate this file with the --save-compiled-gst flag and then pass the modified YAML back with the --ax-precompiled-gst flag of inference.py.

Alternatively, you can try out your custom pre- and post-processing with AxInferenceNet directly, as shown in the examples here: https://github.com/axelera-ai-hub/voyager-sdk/tree/release/v1.4/examples/axinferencenet

Please try it out, and let us know if you have more questions or queries.
Thanks!


  • Author
  • Ensign
  • November 19, 2025

Hi @Habib,
I'll try, thank you very much!