hi there,
some friends and i are currently writing a grant application on "ai & art", due tomorrow.
we want to do music-reactive visuals on the edge with multimodal inputs, using the Metis for live inference.
i think visuals are an interesting playground since they let you construct the workloads for the chip very freely.
if you are interested in collaborating, please feel free to reach out! no particular expertise is required.
ideas alone are also very welcome
(as are pointers to additional funding sources or sponsors).
hope this catches your interest,
best, carlo