
hi there,

i am currently writing up an application with friends for a grant on “ai&art”, which is due tomorrow.
we want to do music-reactive visuals on the edge with multimodal inputs, using the Metis for live inference.

i think visuals are an interesting playground since they leave a lot of freedom in how the workloads for the chip are constructed.

if you are interested in collaborating, please feel free to reach out! no particular expertise is required.
ideas alone are also very welcome.
(as are pointers to additional funding sources or sponsors)

hope this catches your interest
best, carlo

Hope you manage to get some interest for this @carlo - sounds like a very cool project!

It actually put me in mind of a music track from when I was a kid, called Stakker Humanoid! I loved that track back in… what? Like, 1988, I think it was? 😅

But the reason the music was made in the first place was that the computer graphics company (called Stakker) needed some audio to go with a digital demo reel they’d made. That reel became the video for the track in the end. Very early days of generated visuals/music.

It’s a long way from today’s AI, but it feels like it might be a distant ancestor to the project you’re now working on, and it popped into my mind so I thought I’d share 😁

