
Using the Aetina Eval kit that came with the project challenge.

Trying to run some of the standard examples, I get this error message:

(venv) aetina@aetina:~/voyager-sdk$ ../ll-tests/ll-application.py

WARNING : This model is restricted to deploy for single-core (but can be run using multiple cores).

INFO : Deploying model yolov8n-coco-onnx for 1 core. This may take a while...

|████████████████████████████████████████| 10:18.7

INFO : Deploying model yolov8s-coco-onnx for 1 core. This may take a while...

|████████████████████████████████████████| 12:44.9

arm_release_ver: g13p0-01eac0, rk_so_ver: 9

WARNING : pyglet could not access the display, OpenGL is not available: No standard config is available.

libtriton_linux.c:629] DMA_GET_XFER_SYNC_STATUS failed: Connection timed out

AxeleraDmaBuf.cpp:271] DMABUF_METIS_WAIT failed: Connection timed out

[ERROR][waitForAsyncKernelExecOrMemTransfer]: Wait host to dev failed

terminate called after throwing an instance of 'std::runtime_error'

what(): axr_run_model failed with Error at zeCommandQueueExecuteCommandLists(cmdqueue, n_cmdlists, cmdlists, nullptr): cmdqueue_run_cmdlists: 319: Exit with error code: 0x7FF00003 : ZE_RESULT_EXP_ERROR_REMOTE_DEVICE

Aborted


Here are the contents of ll-application.py. Am I doing something wrong?

#!/usr/bin/env python
from axelera.app import config, display
from axelera.app.stream import create_inference_stream

stream = create_inference_stream(
    network="yolov8spose-yolov8n",
    sources=[
        str(config.env.framework / "media/dancing.mp4"),
    ],
)


def main(window, stream):
    window.options(0, title="Dancing 1")
    # VEHICLE = ('car', 'truck', 'motorcycle')
    # center = lambda box: ((box[0] + box[2]) // 2, (box[1] + box[3]) // 2)
    for frame_result in stream:
        window.show(frame_result.image, frame_result.meta, frame_result.stream_id)
        # for veh in frame_result.pedestrian_and_vehicle_tracker:
        #     print(
        #         f"{veh.label.name} {veh.track_id}: {center(veh.history[0])} → {center(veh.history[-1])} @ stream {frame_result.stream_id}"
        #     )


with display.App(visible=True) as app:
    wnd = app.create_window("Business logic demo", (900, 600))
    app.start_thread(main, (wnd, stream), name='InferenceThread')
    app.run()
stream.stop()


@Jonas can you help?


Hi @llrds,

Can you try running axdevice --refresh and share the output with us?

Best,

Victor


When I tried today, I had a different problem: "No device found".

Running axdevice fixed that and allowed the example to work. Thanks for the help!
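For anyone else who lands here: the axdevice and axdevice --refresh commands come straight from this thread, but the snippet below is only an illustrative sketch, not part of the SDK examples. It shells out to axdevice before the stream is created, retries once with --refresh, and assumes (this is a guess about the listing format) that a healthy listing mentions "metis" somewhere in its output.

#!/usr/bin/env python
# Hypothetical pre-flight check, not part of the Voyager SDK examples:
# verify a Metis device is visible before deploying models, refreshing
# once as suggested above. The "metis" substring test is an assumption
# about what axdevice prints.
import subprocess
import sys


def metis_device_visible() -> bool:
    # Capture the axdevice listing; a missing binary, timeout, or
    # non-zero exit all count as "no device".
    try:
        result = subprocess.run(
            ["axdevice"], capture_output=True, text=True, timeout=30
        )
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "metis" in result.stdout.lower()


if not metis_device_visible():
    # One refresh attempt, mirroring the advice in this thread.
    subprocess.run(["axdevice", "--refresh"], timeout=120)
    if not metis_device_visible():
        sys.exit("No Metis device visible; check the PCIe link and driver "
                 "before running the example.")

Something like this at the top of ll-application.py would turn a missing or unresponsive device into a readable message before the long model deployment starts.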

