
Hello,

I am currently working with a lane detection model named culane_res18.onnx, and I would like to know how I can run this model on the Metis M2 device.

Here are some details about the setup:

  • The model is culane_res18.onnx, which is a lane detection model.
  • The model was trained on the CULane dataset; the link to the dataset is here.
  • The model file can be found here.

Could you kindly guide me on how to run this lane detection model on Metis M2? I would appreciate any step-by-step instructions or basic guidance on how to deploy this model, considering that I have both the ONNX model and the dataset.

Looking forward to your response!

Best regards,
Chinmay Mulgund

This sounds awesome! Great project to be building. How far along is it?

I don’t think it’s in the current Axelera model zoo, but you can absolutely experiment with it on Metis M.2 with the Voyager SDK.

You’ll just need to:

  1. Clone and install the SDK from GitHub.

  2. Copy a zoo YAML (like a ResNet or ONNX-based one) and swap in your model and dataset paths.

  3. Deploy with ./deploy.py and test with ./inference.py.

Even though it’s not officially in the zoo, ResNet-18-based models like culane_res18.onnx are pretty lightweight, so there’s a good chance it’ll work. Would be great to see this running - lane detection is a perfect fit for edge AI!

Links to help you get started:

Keep us posted, and drop a reply in here if you need any extra help or ideas!


Hey,

Thanks for all the pointers! Here’s what I’ve done so far and where I’m stuck:

  1. SDK setup

    • Cloned and installed the Voyager SDK v1.2.5 from GitHub

    • Verified I can run ./deploy.py and ./inference.py on the ResNet-50 example successfully

  2. Custom model YAMLs

    • Created customers/myculane/culane_res18-lane-onnx.yaml by copying the ResNet-50 template and swapping in my ONNX model and CULane dataset paths

    • Modified the torch-imagenet.yaml in the pipeline-template folder so that its input/output definitions line up with my lane-detection model

  3. ONNX export

    • Started from the pretrained culane_res18.pth from the Ultra-Fast-Lane-Detection-v2 repo

    • Used their pt2onnx.py script (modified to explicitly set opset_version=14) to export culane_res18_opset14.onnx

    • Confirmed with onnx.load that the new model’s opset is indeed version 14 (the original was already opset 14); see the quick check at the end of this post

  4. Deployment error
    When I run:

    ./deploy.py customers/myculane/culane_res18-lane-onnx.yaml

    I get the quantization failure:

    (venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/myculane/culane_res18-lane-onnx.yaml 
    INFO : Using device metis-0:1:0
    INFO : Detected Metis type as pcie
    INFO : Compiling network culane_res18-lane-onnx /home/aravind/Desktop/voyager-sdk/customers/myculane/culane_res18-lane-onnx.yaml
    INFO : Compile model: culane_res18-lane-onnx
    INFO : Imported DataAdapter TorchvisionDataAdapter from /home/aravind/Desktop/voyager-sdk/ax_datasets/torchvision.py
    INFO : Assuming it's a custom dataset with ImageFolder format.
    INFO : Using representative images from /home/aravind/Desktop/voyager-sdk/data/CULANE/calib with backend ImageReader.PIL,pipeline input color format ColorFormat.RGB
    INFO : Prequantizing c-culane_res18-lane-onnx: culane_res18-lane-onnx
    INFO : Using device metis-0:1:0
    INFO : Quantizing network culane_res18-lane-onnx /home/aravind/Desktop/voyager-sdk/customers/myculane/culane_res18-lane-onnx.yaml culane_res18-lane-onnx
    INFO : Compile model: culane_res18-lane-onnx
    INFO : Imported DataAdapter TorchvisionDataAdapter from /home/aravind/Desktop/voyager-sdk/ax_datasets/torchvision.py
    INFO : Assuming it's a custom dataset with ImageFolder format.
    INFO : Using representative images from /home/aravind/Desktop/voyager-sdk/data/CULANE/calib with backend ImageReader.PIL,pipeline input color format ColorFormat.RGB
    Calibrating... ✨✨✨✨✨✨✨✨✨✨✨✨✨✨✨✨✨✨✨✨ | 100% | 4.06it/s | 200
    ERROR : Traceback (most recent call last):
    ERROR : File "/home/aravind/Desktop/voyager-sdk/axelera/app/compile.py", line 429, in compile
    ERROR : the_manifest = top_level.compile(model, compilation_cfg, output_path)
    ERROR : File "<frozen compiler.top_level>", line 833, in compile
    ERROR : File "<frozen compiler.top_level>", line 550, in quantize
    ERROR : File "<frozen qtools_tvm_interface.graph_exporter_v2.graph_exporter>", line 120, in __init__
    ERROR : RuntimeError: External op reshape_branching_point_to_cls_cls_dot_0_reduce_mean_dre found in the model (<class 'qtoolsv2.rewriter.operators.fqelements.Dequantize'> op). QTools may have issues quantizing this model.
    ERROR : External op reshape_branching_point_to_cls_cls_dot_0_reduce_mean_dre found in the model (<class 'qtoolsv2.rewriter.operators.fqelements.Dequantize'> op). QTools may have issues quantizing this model.
    INFO : Quantizing c-culane_res18-lane-onnx: culane_res18-lane-onnx took 114.248 seconds
    ERROR : Failed to deploy network
    ERROR : Failed to prequantize c-culane_res18-lane-onnx: culane_res18-lane-onnx
    INFO : Compiling c-culane_res18-lane-onnx took 118.637 seconds

    I’ve attached both YAMLs for reference:

  • culane_res18-lane-onnx.yaml - link
  • torch-imagenet.yaml - link
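
For reference, here’s the opset check from step 3 - a minimal sketch using the standard onnx package (file name as exported above):

import onnx

# Print the opset version for each domain the model imports.
model = onnx.load("culane_res18_opset14.onnx")
for opset in model.opset_import:
    # An empty domain string means the default ai.onnx domain.
    print(opset.domain or "ai.onnx", opset.version)  # expect 14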

Great work so far @ChinmayMulgund, and thanks for the detailed response.

I wonder if this is the key issue - I feel like it’s suggesting that there’s an ONNX operation that’s not supported:

External op reshape_branching_point_to_cls_cls_dot_0_reduce_mean_dre found in the model (<class 'qtoolsv2.rewriter.operators.fqelements.Dequantize'> op). QTools may have issues quantizing this model.

As a test, maybe it’s worth trying to skip QTools to see if it allows us to test the pipeline and decoder logic?

I think that means adding the following to your culane_res18-lane-onnx.yaml YAML:

quantization:
  skip: true

And then we’ll have a better idea if the issue is strictly with the quantisation step, or if there’s something deeper going on in the model or pipeline setup. Let me know how it goes! 👍


I tried adding the quantization: skip: true block as suggested, but I’m still hitting the same QTools error during deploy:

(venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/myculane/culane_res18-lane-onnx.yaml 
INFO : Using device metis-0:1:0

INFO : Prequantizing c-culane_res18-lane-onnx: culane_res18-lane-onnx
INFO : Quantizing network culane_res18-lane-onnx /home/aravind/Desktop/voyager-sdk/customers/myculane/culane_res18-lane-onnx.yaml culane_res18-lane-onnx

ERROR : RuntimeError: External op reshape_branching_point_to_cls_cls_dot_0_reduce_mean_dre found in the model (<class 'qtoolsv2.rewriter.operators.fqelements.Dequantize'> op). QTools may have issues quantizing this model.

ERROR : Failed to prequantize c-culane_res18-lane-onnx: culane_res18-lane-onnx

It looks like the quantization skip flag isn’t taking effect—Voyager is still invoking QTools. I’ve attached my full YAML (culane_res18-lane-onnx.yaml) below so you can verify placement:

axelera-model-format: 1.0.0

name: culane_res18-lane-onnx
description: Ultrafast lane detector (Res18 → ONNX) on CULane

pipeline:
  - culane_res18-lane-onnx:
      input:
        type: image
      template_path: $AXELERA_FRAMEWORK/pipeline-template/torch-imagenet.yaml
      postprocess: []

models:
  culane_res18-lane-onnx:
    class: AxONNXModel
    class_path: $AXELERA_FRAMEWORK/ax_models/base_onnx.py
    task_category: Classification
    weight_path: $AXELERA_FRAMEWORK/customers/myculane/culane_res18.onnx
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 320, 1600]
    input_color_format: RGB
    num_classes: 4
    dataset: CULANE
    extra_kwargs:
      max_compiler_cores: 4

# --------------------
quantization:
  skip: true
# --------------------

datasets:
  CULANE:
    class: TorchvisionDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/torchvision.py
    data_dir_name: CULANE
    images_dir: .
    masks_dir: labels/laneseg_label_w16
    # 200–400 representative images for quantization:
    repr_imgs_dir_path: $AXELERA_FRAMEWORK/data/CULANE/calib
    # Full train+GT list for calibration (image + mask + existence flags):
    cal_data: $AXELERA_FRAMEWORK/data/CULANE/list/train_gt.txt
    # Validation list for post-deployment accuracy measurement:
    val_data: $AXELERA_FRAMEWORK/data/CULANE/list/val_gt.txt
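
In case it helps anyone following along: I populated the calib folder above with a random sample from the CULane train list, roughly like this (paths assume the standard CULane layout - adjust to your checkout):

import random
import shutil
from pathlib import Path

# Copy ~200 random training images into the calibration folder.
# Each line of train_gt.txt starts with the image path, followed by the
# mask path and four lane-existence flags.
culane_root = Path("data/CULANE")
calib_dir = culane_root / "calib"
calib_dir.mkdir(parents=True, exist_ok=True)

lines = (culane_root / "list" / "train_gt.txt").read_text().splitlines()
for line in random.sample(lines, min(200, len(lines))):
    img_rel = line.split()[0].lstrip("/")
    src = culane_root / img_rel
    dst = calib_dir / img_rel.replace("/", "_")  # flatten into one folder
    shutil.copy(src, dst)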

 


Hmm…

Maybe try with the quantization skip inside the models block? Like…

models:
  culane_res18-lane-onnx:
    class: AxONNXModel
    class_path: $AXELERA_FRAMEWORK/ax_models/base_onnx.py
    task_category: Classification
    weight_path: $AXELERA_FRAMEWORK/customers/myculane/culane_res18.onnx
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 320, 1600]
    input_color_format: RGB
    num_classes: 4
    dataset: CULANE
    extra_kwargs:
      max_compiler_cores: 4
    quantization:
      skip: true

 


(venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/myculane/culane_res18-lane-onnx.yaml 
INFO : Using device metis-0:1:0
INFO : Compiling c-culane_res18-lane-onnx took 226.936 mseconds
ERROR : quantization is not a valid key for model culane_res18-lane-onnx

Also tried it in the pipeline block:
 

(venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/myculane/culane_res18-lane-onnx.yaml 
INFO : Using device metis-0:1:0
INFO : Compiling c-culane_res18-lane-onnx took 239.255 mseconds
ERROR : quantization is not a valid property of a Task

 


Ah, my mistake - looks like the “quantization” key isn’t recognised at all, no matter where it goes. Better to strip that back out. Let me ask if anyone has any ideas on this 👍


Okay, unfortunately it sounds like it could be because the model includes a reduce_mean op, which the Voyager SDK doesn’t support right now. We touched on that error earlier, but I didn’t click that it’s because that particular operation isn’t supported.
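
If you want to double-check that on your side, a quick scan with the onnx package should show every op type the export contains (file name assumed to be your model):

import onnx
from collections import Counter

model = onnx.load("culane_res18.onnx")

# Tally every op type in the graph, then list the ReduceMean nodes explicitly.
print(Counter(node.op_type for node in model.graph.node))
print("ReduceMean nodes:",
      [node.name for node in model.graph.node if node.op_type == "ReduceMean"])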


Thanks for clarifying that the root cause is the reduce_mean operation.

Given that, what are my options for moving forward?
Since this operation blocks my current ResNet-18-based lane detector, do you have any recommendations for a lane detection model or architecture that’s already known to work with Voyager SDK on Metis? 

I’d love to try that instead to get lane detection up and running.

 


I wonder if something like YOLOv5 could do it, if there was a suitable dataset?

That's already in the model zoo, so it'd run really well - just needs training. 

Possibly there are some existing datasets that would work, or that could be adapted?


I’ll experiment with a YOLOv5-based lane detector and a suitable dataset, and I’ll keep you posted on how it goes. Appreciate your help!


I had a quick look around online, and the BDD100K dataset might be a decent starting point. It looks like it’s got vehicle and lane annotations, and could be adapted for YOLOv5 to spot lane changes. You’d likely need to tweak the labels and fine-tune the model, but with some format conversion it might be worth a look!
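
No guarantees this matches the current BDD100K file layout, but a label conversion to YOLO format might look roughly like this - the JSON structure, paths, and 1280×720 frame size are assumptions to verify, and note that the lane markings themselves are polylines, so they’d need separate handling:

import json
from pathlib import Path

W, H = 1280, 720  # assumed BDD100K frame size
CLASSES = {"car": 0, "truck": 1, "bus": 2}  # map the categories you keep

# Assumed layout: a list of {"name", "labels": [{"category", "box2d": ...}]}.
items = json.loads(Path("bdd100k/labels/det_train.json").read_text())
out_dir = Path("bdd100k/yolo_labels")
out_dir.mkdir(parents=True, exist_ok=True)

for item in items:
    rows = []
    for label in item.get("labels", []):
        box = label.get("box2d")
        if label.get("category") not in CLASSES or box is None:
            continue
        # YOLO format: class x_center y_center width height, all normalized.
        cx = (box["x1"] + box["x2"]) / 2 / W
        cy = (box["y1"] + box["y2"]) / 2 / H
        bw = (box["x2"] - box["x1"]) / W
        bh = (box["y2"] - box["y1"]) / H
        rows.append(f"{CLASSES[label['category']]} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    (out_dir / (Path(item["name"]).stem + ".txt")).write_text("\n".join(rows))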


Sounds great—thanks for pointing me to BDD100K! I’ll explore its lane annotations and look into YOLOv5.


Hello,

I’m working on deploying a custom-trained lane detection model (based on CULane) using the Voyager SDK and Metis. I’ve exported the model to ONNX format and built the deployment YAML (culane_res18-lane-onnx.yaml). This model does not use any of Axelera’s unsupported operations, like ReduceMean or Slice, and the input shape is [1, 3, 288, 800].

Despite this, I’m running into deployment issues with both torch-imagenet.yaml and torch-lane.yaml templates:

1. With torch-imagenet.yaml:

(venv) aravind@aravind-H610M-H-V2:~/Desktop/voyager-sdk$ ./deploy.py customers/myculane/culane_res18-lane-onnx.yaml 
INFO : Using device metis-0:1:0
INFO : Detected Metis type as pcie
INFO : Compiling network culane_res18-lane-onnx /home/aravind/Desktop/voyager-sdk/customers/myculane/culane_res18-lane-onnx.yaml
INFO : Compile model: culane_res18-lane-onnx
INFO : Imported DataAdapter TorchvisionDataAdapter from /home/aravind/Desktop/voyager-sdk/ax_datasets/torchvision.py
INFO : Assuming it's a custom dataset with ImageFolder format.
INFO : Using representative images from /home/aravind/Desktop/voyager-sdk/data/CULANE/calib with backend ImageReader.PIL,pipeline input color format ColorFormat.RGB
ERROR : Traceback (most recent call last):
ERROR : File "<frozen compiler.top_level>", line 714, in lower
ERROR : File "<frozen compiler.build>", line 214, in lower_relay
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/ir/transform.py", line 160, in __call__
ERROR : return _ffi_transform_api.RunPass(self, mod)
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
ERROR : raise get_last_ffi_error()
ERROR : tvm.error.InternalError: Traceback (most recent call last):
ERROR : 5: TVMFuncCall
ERROR : 4: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}>(tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::TVMRetValue)
ERROR : 3: tvm::transform::Pass::operator()(tvm::IRModule) const
ERROR : 2: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 1: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 0: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) [clone .cold]
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 82, in cfun
ERROR : rv = local_pyfunc(*pyargs)
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/ir/transform.py", line 229, in _pass_func
ERROR : return inst.transform_module(mod, ctx)
ERROR : File "<frozen compiler.pipeline.frontend>", line 106, in transform_module
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/ir/transform.py", line 160, in __call__
ERROR : return _ffi_transform_api.RunPass(self, mod)
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
ERROR : raise get_last_ffi_error()
ERROR : 5: TVMFuncCall
ERROR : 4: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}>(tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::TVMRetValue)
ERROR : 3: tvm::transform::Pass::operator()(tvm::IRModule) const
ERROR : 2: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 1: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 0: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<TVMFuncCreateFromCFunc::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#2}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) [clone .cold]
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 82, in cfun
ERROR : rv = local_pyfunc(*pyargs)
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/ir/transform.py", line 229, in _pass_func
ERROR : return inst.transform_module(mod, ctx)
ERROR : File "<frozen compiler.utils.utils>", line 118, in transform_module
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/ir/transform.py", line 160, in __call__
ERROR : return _ffi_transform_api.RunPass(self, mod)
ERROR : File "/home/aravind/.cache/axelera/venvs/93f45ae3/lib/python3.10/site-packages/tvm/_ffi/_ctypes/packed_func.py", line 238, in __call__
ERROR : raise get_last_ffi_error()
ERROR : 8: TVMFuncCall
ERROR : 7: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}>(tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::TVMRetValue)
ERROR : 6: tvm::transform::Pass::operator()(tvm::IRModule) const
ERROR : 5: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 4: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 3: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
ERROR : 2: tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}::operator()(tvm::IRModule, tvm::transform::PassContext const&) const [clone .constprop.0]
ERROR : 1: tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)
ERROR : 0: tvm::relay::TypeSolver::Solve() [clone .cold]
ERROR : 11: TVMFuncCall
ERROR : 10: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::transform::Pass, tvm::IRModule)>::AssignTypedLambda<tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}>(tvm::transform::__mk_TVM9::{lambda(tvm::transform::Pass, tvm::IRModule)#1}, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::TVMRetValue)
ERROR : 9: tvm::transform::Pass::operator()(tvm::IRModule) const
ERROR : 8: tvm::transform::Pass::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 7: tvm::transform::ModulePassNode::operator()(tvm::IRModule, tvm::transform::PassContext const&) const
ERROR : 6: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
ERROR : 5: tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}::operator()(tvm::IRModule, tvm::transform::PassContext const&) const [clone .constprop.0]
ERROR : 4: tvm::relay::TypeInferencer::Infer(tvm::GlobalVar, tvm::relay::Function)
ERROR : 3: tvm::relay::TypeSolver::Solve()
ERROR : 2: tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<bool (tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
ERROR : 1: tvm::relay::ReshapeRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)
ERROR : 0: tvm::relay::InferNewShape(tvm::runtime::Array<tvm::PrimExpr, void> const&, tvm::Attrs const&, bool)
ERROR : File "/runner/_work/software-platform/software-platform/host/tvm/tvm-src/src/relay/analysis/type_solver.cc", line 643
ERROR : InternalError: Check failed: (false) is false: [17:33:18] /runner/_work/software-platform/software-platform/host/tvm/tvm-src/src/relay/op/tensor/transform.cc:679: InternalError: Check failed: i + 2 < newshape.size() (2 vs. 2) :
ERROR :
ERROR : The above exception was the direct cause of the following exception:
ERROR :
ERROR : Traceback (most recent call last):
ERROR : File "/home/aravind/Desktop/voyager-sdk/./axelera/app/compile.py", line 347, in _compile_quantized
ERROR : the_manifest = top_level.compile(quant_model, compilation_cfg, output_path)
ERROR : File "<frozen compiler.top_level>", line 840, in compile
ERROR : File "<frozen compiler.top_level>", line 733, in lower
ERROR : axelera.compiler.exceptions.CompileError: Lowering failed.
|████████████████████████████████████████| 0.4s
ERROR : Lowering failed.
INFO : Compiling c-culane_res18-lane-onnx took 3.466 seconds
ERROR : Failed to deploy network
  2. With torch-lane.yaml:
    ERROR: crop: Unsupported preprocess operator



    How do I resolve this issue and deploy my model on Metis using the appropriate pipeline and preprocessing configuration?

    Link to YAML file: culane_res18-lane-onnx.yaml


On the second point: as the error says, crop isn’t supported, but could it potentially be replaced by resize, centercrop, or normalize?


Modified torch-lane.yaml to replace the unsupported crop with centercrop:

input:
  type: image

preprocess:
  - resize:
      width: 1600
      height: 533
  - centercrop:
      width: 1600
      height: 320
  - torch-totensor:
  - normalize:
      mean: [0.485, 0.456, 0.406]
      std: [0.229, 0.224, 0.225]

postprocess: []
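
As an offline sanity check, the same preprocess can be reproduced with torchvision (a sketch - the SDK’s own resize/crop semantics may differ slightly, and the image path is a placeholder):

from PIL import Image
import torchvision.transforms as T

# Mirror the YAML: resize to 1600×533, centre-crop to 1600×320,
# then convert to a normalized float tensor.
preprocess = T.Compose([
    T.Resize((533, 1600)),      # (height, width)
    T.CenterCrop((320, 1600)),  # (height, width)
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("some_culane_frame.jpg").convert("RGB")).unsqueeze(0)
print(x.shape)  # expect torch.Size([1, 3, 320, 1600])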


I’m still hitting the same lowering failure during the TVM “reshape” pass, with this internal check:

InternalError: Check failed: i + 2 < newshape.size() (2 vs. 2)

 


Looks like you’re making excellent progress, despite this speedbump! Nice work.

I’ve just been doing a bit of research, and saw that someone used this onnxsim tool to simplify their ONNX model, which also corrected a reshape issue they were having - might be worth investigating as the next step?
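
Usage is pretty minimal if you want to give it a go - something like this after pip install onnxsim (file names are just examples):

import onnx
from onnxsim import simplify

# onnxsim folds constants and often turns dynamically computed Reshape
# targets into static shapes, which is what the TVM reshape pass wants.
model = onnx.load("culane_18_inference.onnx")
model_simplified, ok = simplify(model)
assert ok, "simplified model failed the ONNX checker"
onnx.save(model_simplified, "culane_18_inference_simplified.onnx")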


I ran the ONNX simplifier on my culane_18_inference.onnx and swapped in the culane_18_inference_simplified.onnx weight file, but I’m still seeing the same TVM reshape error :(

 


Model link: https://drive.google.com/file/d/1-LM8Qd9iBkmzoLKgc-st5l9OkM7nrYSm/view?usp=sharing


@ChinmayMulgund 

Thank you for your continued efforts in working through this. Based on everything we've explored so far, it seems the most reliable path forward may involve adjusting the structure of your model slightly to align with the requirements of the compiler.

To proceed with compilation, our suggestion would be to remove the last two Reshape layers and compile the model only up to the last Conv layer — the end of your model should look like [1]. This should help resolve the compilation issue.

You can use the compiled model with AxRuntime for the inference step, as shown in [2], and use OnnxRuntime for the rest of the graph with the multiple Reshapes [3], to get the final output tensor of shape [1, 201, 18, 4].

Please feel free to reach out if you’d like any assistance with the inference, or if you have any further queries regarding the compilation process.

Thanks again!

---

[1]



[2]
https://github.com/axelera-ai-hub/voyager-sdk/blob/release/v1.2.5/examples/axruntime/axruntime_example.py

[3]
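
For the truncation itself, onnx.utils.extract_model can split the graph without touching the weights - a sketch, with the tensor names as placeholders (check the real names in Netron or via model.graph):

import onnx
import onnx.utils

# Head: everything up to the final Conv output, for compilation on Metis.
# "input" and "last_conv_out" are placeholders - substitute the actual
# tensor names from your graph.
onnx.utils.extract_model(
    "culane_res18.onnx", "culane_truncated.onnx",
    input_names=["input"],
    output_names=["last_conv_out"],
)

# Tail: the remaining Reshape-only section, to run on the host with
# OnnxRuntime and recover the [1, 201, 18, 4] output.
onnx.utils.extract_model(
    "culane_res18.onnx", "culane_tail.onnx",
    input_names=["last_conv_out"],
    output_names=["output"],  # placeholder for the original graph output
)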

 


Thanks for the suggestion—I’ll try removing the last two Reshape nodes and compiling only up to the final Conv. I’ll report back once I’ve worked through it.


I’ve followed the suggestion to remove the last two Reshape layers and only compile up to the final Conv. I’ve attached my truncated ONNX model (culane_truncated.onnx), similar to [1]. Below is the YAML I used for deployment:
 

axelera-model-format: 1.0.0

name: culane_res18-lane-onnx
description: Ultrafast lane detector (Res18 → ONNX) on CULane

pipeline:
  - culane_res18-lane-onnx:
      input:
        type: image
      # template_path: <removed completely>
      preprocess:
        # 1) Resize so we can bottom-crop to 288px at width 800px
        - resize:
            width: 800
            height: 480
        # 2) Center-crop to exactly 800×288
        - centercrop:
            width: 800
            height: 288
        # 3) Turn the PIL image into a [1,C,H,W] float32 tensor
        - torch-totensor: {}
        # 4) Normalize channels
        - normalize:
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
      postprocess: []

models:
  culane_res18-lane-onnx:
    class: AxONNXModel
    class_path: $AXELERA_FRAMEWORK/ax_models/base_onnx.py
    task_category: Classification
    weight_path: $AXELERA_FRAMEWORK/customers/myculane/culane_truncated.onnx
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 288, 800]
    input_color_format: RGB
    num_classes: 4
    dataset: CULANE
    extra_kwargs:
      max_compiler_cores: 4

datasets:
  CULANE:
    class: TorchvisionDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/torchvision.py
    data_dir_name: CULANE
    images_dir: .
    masks_dir: labels/laneseg_label_w16
    repr_imgs_dir_path: $AXELERA_FRAMEWORK/data/CULANE/calib
    cal_data: $AXELERA_FRAMEWORK/data/CULANE/list/train_gt.txt
    val_data: $AXELERA_FRAMEWORK/data/CULANE/list/val_gt.txt

However, when I run:

./deploy.py customers/myculane/culane_res18-lane-onnx.yaml

I still get the same TVM reshape error:

ERROR   :   1: tvm::relay::ReshapeRel(tvm::runtime::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)
ERROR : 0: tvm::relay::InferNewShape(tvm::runtime::Array<tvm::PrimExpr, void> const&, tvm::Attrs const&, bool)
ERROR : File "/runner/_work/software-platform/software-platform/host/tvm/tvm-src/src/relay/analysis/type_solver.cc", line 643
ERROR : InternalError: Check failed: (false) is false: [12:08:22] /runner/_work/software-platform/software-platform/host/tvm/tvm-src/src/relay/op/tensor/transform.cc:679: InternalError: Check failed: i + 2 < newshape.size() (2 vs. 2) :
INFO : Compiling c-culane_res18-lane-onnx took 3.402 seconds
ERROR : Failed to deploy network


culane_truncated.onnx Model Link: https://drive.google.com/file/d/1QRtX0FlCtBlbh0ncM3Y23z3DcfsPNrXz/view?usp=sharing


Looks like you’ve done a great job of really cleaning and trimming the model! Weird that it’s still throwing the same reshape error…

Could it be that there’s still a Reshape op somewhere earlier in the model? Maybe one that’s fed a shape tensor, or some kind of dynamic shape logic?
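
One way to check is to dump every Reshape node and whether its target shape is a constant or computed at runtime (a sketch with the onnx package; the file name is your truncated export):

import onnx

model = onnx.load("culane_truncated.onnx")

# Static shape inputs come from initializers or Constant nodes; anything
# else (e.g. a Shape -> Gather -> Concat chain) is computed dynamically.
static_sources = {init.name for init in model.graph.initializer}
static_sources |= {n.output[0] for n in model.graph.node if n.op_type == "Constant"}

for node in model.graph.node:
    if node.op_type == "Reshape":
        shape_src = node.input[1]
        print(f"{node.name}: shape from '{shape_src}', static={shape_src in static_sources}")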


Hi @ChinmayMulgund,

Thank you once again for your continued patience and for sharing both the YAML file and the truncated model with us. I really appreciate the effort you’ve put into this.

Interestingly, with the latest YAML file you provided, everything works as expected on my end (with slight changes; please see the YAML file below). This suggests the issue might be related to something specific in your environment. It could be worth double-checking the setup, or perhaps ./deploy.py is still pointing at the model with the Reshape layers. Please try a clean run if you haven’t already, and let me know if there’s anything we can help verify on your side!

Can you please try to compile with the following YAML file:

axelera-model-format: 1.0.0

name: culane_res18-lane-onnx
description: Ultrafast lane detector (Res18 → ONNX) on CULane

pipeline:
  - culane_res18-lane-onnx:
      input:
        type: image
      # template_path: <removed completely>
      preprocess:
        # 1) Resize so we can bottom-crop to 288px at width 800px
        - resize:
            width: 800
            height: 480
            # width: ${{input_width}}
            # height: ${{input_height}}
        # 2) Center-crop to exactly 800×288
        - centercrop:
            width: 800
            height: 288
        # 3) Turn the PIL image into a [1,C,H,W] float32 tensor
        - torch-totensor: {}
        # 4) Normalize channels
        - normalize:
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
      postprocess: []

models:
  culane_res18-lane-onnx:
    class: AxONNXModel
    class_path: $AXELERA_FRAMEWORK/ax_models/base_onnx.py
    task_category: Classification
    # weight_path: $AXELERA_FRAMEWORK/customers/myculane/culane_truncated.onnx
    weight_path: $AXELERA_FRAMEWORK/weights/culane_truncated_v0/culane_truncated.onnx
    input_tensor_layout: NCHW
    input_tensor_shape: [1, 3, 288, 800]
    input_color_format: RGB
    num_classes: 4
    # dataset: CULANE
    dataset: ImageNet-1K
    extra_kwargs:
      max_compiler_cores: 1
      aipu_cores: 1

# datasets:
#   CULANE:
#     class: TorchvisionDataAdapter
#     class_path: $AXELERA_FRAMEWORK/ax_datasets/torchvision.py
#     data_dir_name: CULANE
#     images_dir: .
#     masks_dir: labels/laneseg_label_w16
#     repr_imgs_dir_path: $AXELERA_FRAMEWORK/data/CULANE/calib
#     cal_data: $AXELERA_FRAMEWORK/data/CULANE/list/train_gt.txt
#     val_data: $AXELERA_FRAMEWORK/data/CULANE/list/val_gt.txt

datasets:
  ImageNet-1K:
    class: TorchvisionDataAdapter
    class_path: $AXELERA_FRAMEWORK/ax_datasets/torchvision.py
    data_dir_name: ImageNet
    labels_path: $AXELERA_FRAMEWORK/ax_datasets/labels/imagenet1000_clsidx_to_labels.txt
    # Use COCO as representative images due to ImageNet's redistribution restrictions and large dataset size.
    # Suggest selecting 100-400 images from the ImageNet training dataset for representative images and
    # replacing the following representative_coco dataset with the selected images.
    repr_imgs_dir_path: $AXELERA_FRAMEWORK/data/coco2017_400_b680128
    repr_imgs_url: https://media.axelera.ai/artifacts/data/coco/coco2017_repr400.zip
    repr_imgs_md5: b680128512392586e3c86b670886d9fa
    # cal_data: /path/to/the/cal/dir
    # val_data: /path/to/the/val/dir


If everything goes well, you should see the following output in the console:

INFO    : Using device metis-0:1:0
INFO : ## Quantizing network culane_res18-lane-onnx ax_models/custom/culane_res18-lane-onnx.yaml culane_res18-lane-onnx
INFO : Compile model: culane_res18-lane-onnx
INFO : Imported DataAdapter TorchvisionDataAdapter from /home/ubuntu/v130rc20g0f59b6632/v125/voyager-sdk/ax_datasets/torchvision.py
INFO : Using representative images from /home/ubuntu/v130rc20g0f59b6632/v125/voyager-sdk/data/coco2017_400_b680128 with backend ImageReader.PIL,pipeline input color format ColorFormat.RGB
Calibrating... ##################-- | 90% | 21.82it/s | 10it |
<frozen qtools_tvm_interface.simplified_operators.conv_module>:90: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant
INFO : ## Finished quantizing network culane_res18-lane-onnx: model 'culane_res18-lane-onnx'
INFO : Quantizing culane_res18-lane-onnx: culane_res18-lane-onnx took 9.655 seconds
INFO : Using device metis-0:1:0
INFO : Detected Metis type as pcie
INFO : ## Compiling network culane_res18-lane-onnx ax_models/custom/culane_res18-lane-onnx.yaml
INFO : Compile model: culane_res18-lane-onnx
INFO : Imported DataAdapter TorchvisionDataAdapter from /home/ubuntu/v130rc20g0f59b6632/v125/voyager-sdk/ax_datasets/torchvision.py
INFO : Using representative images from /home/ubuntu/v130rc20g0f59b6632/v125/voyager-sdk/data/coco2017_400_b680128 with backend ImageReader.PIL,pipeline input color format ColorFormat.RGB
INFO : Prequantizing culane_res18-lane-onnx: culane_res18-lane-onnx
INFO : Successfully prequantized culane_res18-lane-onnx: culane_res18-lane-onnx
INFO : Using representative images from /home/ubuntu/v130rc20g0f59b6632/v125/voyager-sdk/data/coco2017_400_b680128 with backend ImageReader.PIL,pipeline input color format ColorFormat.RGB
|████████████████████████████████████████| 34.3s
INFO : Compile culane_res18-lane-onnx.yaml:pipeline
INFO : Compiling culane_res18-lane-onnx took 47.548 seconds
INFO : Successfully deployed network
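
And for the host-side tail mentioned earlier, OnnxRuntime can run the extracted Reshape section - a sketch with placeholder file and tensor names, feeding zeros where the AxRuntime output would go:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("culane_tail.onnx")
inp = sess.get_inputs()[0]

# Placeholder input standing in for the compiled head's output on Metis;
# unknown (symbolic) dimensions default to 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

(out,) = sess.run(None, {inp.name: dummy})
print(out.shape)  # expect (1, 201, 18, 4)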


Please feel free to let us know if you have any more questions, comments, or suggestions.
Thanks again!

