Solved

IndexError with Gemm and Tanh

  • December 16, 2025
  • 6 replies
  • 45 views

Similar to https://community.axelera.ai/voyager-sdk-2/run-simple-gemm-model-1127, I got an IndexError when converting a Gemm-only model via “compile -i model.onnx -o model --overwrite”. However, I don’t have a Flatten layer. Could you help check what’s going wrong?

 

The model I use and the logs are attached below, and here is my system information:

Best answer by Spanner


6 replies

  • Author
  • Cadet
  • December 16, 2025

Oops, there’s a typo in the title. The error originally occurred with Gemm+Tanh, and I still get it with Gemm+ReLU, as seen in the attached model.


Spanner
  • Axelera Team
  • December 17, 2025

Yo @yan12125! Yeah, as you say, this sounds really similar to the issue @MArio was having.

I just took a quick glance at the log, and before we dig any deeper, I spotted something that might help us out here - the compiler expects 4D tensors, but I think the model is feeding a 2D tensor directly into the Gemm. As I recall, it was a similar problem in the other post.

Reshaping the input into 4D ((1, 10, 1, 1) instead of (1, 10), perhaps?) could be worth a shot 👍
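Something along these lines at export time, maybe (just an untested sketch - the 10-feature size and ReLU are taken from the attached model, and the output file name is only for illustration):

import torch
import torch.onnx
from torch import nn

# Untested sketch: export with a 4D dummy input so the ONNX graph input is (1, 10, 1, 1)
class Gemm2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(10, 10, bias=False)  # exported as Gemm
        self.act = nn.ReLU()

    def forward(self, x):
        x = x.reshape(1, 10)  # collapse the 4D input back to 2D for the Gemm
        return self.act(self.linear(x))

torch.onnx.export(Gemm2D(), (torch.rand(1, 10, 1, 1),), 'model-4d-input.onnx')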

Let me know!


  • Author
  • Cadet
  • December 18, 2025

Thanks. I tried two ways to use a 4D input, but neither works:

  • Use (1, 10, 1, 1) as the input and put a Reshape layer before the Gemm to turn the 4D data back into 2D. The error looks the same as with the original model - IndexError in axelera.compiler.frontend.passes.pass_legalize_flatten_linear. The tested model is model-with-reshape.onnx.
  • Use (1, 1, 1, 10) as the input with a 4D PyTorch linear layer. The model is generated from the following script, and the Axelera compiler fails with another error - AssertionError in axelera.compiler.frontend.passes.pass_rewrite_dense_to_conv2d. Detailed logs and the tested model (model-4d-linear) are attached below.

from torch import nn
import torch
import torch.onnx

class Model(nn.Module):
    def __init__(self, input_dim, hidden_size, output_dim):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(input_dim, hidden_size, bias=False),
            nn.ReLU(),
        )

    def forward(self, x):
        x = x.view(1, 10)  # flatten the (1, 1, 1, 10) input back to 2D for the linear layer
        return self.layers(x)

model = Model(input_dim=10, hidden_size=10, output_dim=10)
torch.onnx.export(model, (torch.rand(1, 1, 1, 10),), 'model.onnx')

 


Spanner
  • Axelera Team
  • Answer
  • December 18, 2025

Thanks for the additional files @yan12125 👍

The 4D input looks right. But as far as I understand it, model-with-reshape.onnx needs an Identity Conv as the first layer, rather than a Reshape. So it goes more like Identity Conv → Flatten → Gemm → Activation.

Also, in the other onnx file you shared, MatMul isn’t supported, so that’d need to use Gemm.

So I think the next step to try is:

  1. 4D input (this already looks right)
  2. Identity Conv as first layer (not Reshape)
  3. Use Gemm (not MatMul)
  4. Maybe flatten between Conv and Gemm?

The original_with_conv.zip that @Habib provided to @MArio has this as a working structure, which might be a useful reference/example?
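In PyTorch terms, that structure would be roughly this (just a sketch with assumed sizes and file name, not something I’ve run through the compiler myself):

import torch
import torch.onnx
from torch import nn

# Rough sketch of Identity Conv -> Flatten -> Gemm -> Activation with a (1, 10, 1, 1) input.
class GemmWithConv(nn.Module):
    def __init__(self, dim=10, hidden=10):
        super().__init__()
        # 1x1 conv initialised to the identity, so it passes values through unchanged
        self.identity_conv = nn.Conv2d(dim, dim, kernel_size=1, bias=False)
        with torch.no_grad():
            self.identity_conv.weight.copy_(torch.eye(dim).reshape(dim, dim, 1, 1))
        self.flatten = nn.Flatten()                       # (1, dim, 1, 1) -> (1, dim)
        self.linear = nn.Linear(dim, hidden, bias=False)  # exported as Gemm
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.linear(self.flatten(self.identity_conv(x))))

torch.onnx.export(GemmWithConv(), (torch.rand(1, 10, 1, 1),), 'model-with-conv.onnx')

As far as I understand it, the identity Conv doesn’t change the values at all; it just gives the compiler the 4D Conv entry point it expects before the Flatten and Gemm.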

Let me know if this helps!


  • Author
  • Cadet
  • December 23, 2025

Thanks! Indeed prepending Identity Conv & Flatten layers to the ONNX model allows compilation to work.
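For anyone who finds this later, the prepending can also be done directly on the exported ONNX file with the onnx helper API. Roughly like this (a simplified sketch - the 10-feature size, the new node/initializer names, and the file names are hard-coded for illustration rather than taken from the real model):

import numpy as np
import onnx
from onnx import helper, numpy_helper, TensorProto

model = onnx.load('model.onnx')            # the original Gemm-only model
graph = model.graph

dim = 10
old_input = graph.input[0].name            # the 2D input currently fed to the Gemm

# New 4D graph input and a 1x1 identity Conv weight
new_input = helper.make_tensor_value_info('x_4d', TensorProto.FLOAT, [1, dim, 1, 1])
identity_w = numpy_helper.from_array(
    np.eye(dim, dtype=np.float32).reshape(dim, dim, 1, 1), name='identity_w')

# Conv -> Flatten; the Flatten writes to the old input name, so the existing Gemm is untouched
conv = helper.make_node('Conv', ['x_4d', 'identity_w'], ['conv_out'], kernel_shape=[1, 1])
flatten = helper.make_node('Flatten', ['conv_out'], [old_input], axis=1)

graph.initializer.append(identity_w)
graph.node.insert(0, flatten)
graph.node.insert(0, conv)
del graph.input[:]
graph.input.append(new_input)

onnx.checker.check_model(model)
onnx.save(model, 'model-with-conv.onnx')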


Spanner
  • Axelera Team
  • December 23, 2025

Excellent news, and good to know at this end too, so thanks for the update! That could be useful info if anyone else runs into this 👍