Question

Creating a local AI resource from scratch

  • November 19, 2025
  • 3 replies
  • 46 views

 I am considering installing a local AI system at my company.
Could you please point me to useful documentation for doing this?
Please bear in mind that I only have experience with C++ and Linux servers.
A brief project description follows.
Project objectives
Create an autonomous AI platform installed on our own premises.
Manage and monitor systems installed at customer sites (lights, sensors, cycles, errors, maintenance).
Integrate Antholux3 and Topiaria2 into a single management architecture.
Eliminate cloud dependencies → everything local: secure, scalable, proprietary.
Create a technology hub for support, diagnostics, updates and remote assistance.

General system architecture
2.1 Main components
Local AI server (mini-PC, NUC or workstation)
Internal database
Communication broker
Rules engine + local AI
Unified dashboard
OTA module for updating remote systems
Logging and diagnostics system

2.2 Technologies used

Server: Linux Debian/Ubuntu
Database: PostgreSQL
Broker: MQTT (Mosquitto)
UI: React or Angular + REST API
Local AI: Compact LLM + specific models (internal post-training)
Remote communication: VPN + WireGuard encrypted tunnels

Thank you in advance for your time

3 replies

Spanner
  • Axelera Team
  • November 20, 2025

Hi there @bpaolo, welcome to Axelera! 😃

Looking at where Axelera might fit into your architecture, your "Local AI server" component could use either the Metis PCIe card or the Metis M.2 card. Either of these is great for accelerating AI inference locally - particularly computer vision tasks like monitoring lights/sensors/cycles through camera feeds.

We've got experimental LLM support too, but that's early days and only works on the PCIe cards due to the amount of available memory.

In terms of integration with your stack:

  • Voyager SDK runs on Ubuntu/Debian, either natively on Ubuntu 22.04 (native 24.04 support is imminent!) or deployable via Docker
  • You’d handle the MQTT/PostgreSQL/dashboard integration yourself - that isn’t something Axelera’s gear contributes to directly. Metis just provides fast AI inference results that your code/stack can use however you need

But certainly the edge AI side of things is where Axelera could help! That’s what we’re all about 👍


  • Author
  • Cadet
  • November 20, 2025

First of all, thank you for your prompt reply. At this point, I would like to raise the issue of the system's sparse documentation. Could you please provide me with some references and examples of image processing applications? What I need is to interpret the state of plant leaves. Thank you in advance; I look forward to hearing from you.


Steven Hunsche
Axelera Team

Hi Bpaolo,

Which part of our documentation do you find lacking? We’re always happy to hear feedback on how to improve our usability!

I recommend looking at our quick start guide to get going with our products. After that, since you mentioned using C++, our axinferencenet guide can help you set up your pipeline using the C++ APIs.

As for setting up your own pipeline to interpret the state of plant leaves, do you already have a target network in mind? Here is how you deploy your custom model, or if you simply have custom weights for a model from our zoo, try our custom weights tutorial.