Offline audio sync?

Newbie here. From what I've read, it doesn't sound like real-time audio-driven graphics are possible. If I have a set of sound files, can effects be synchronized with the audio and rendered out?


Not that I'm a visual programming wizard, but depending on what you're trying to do, quite a bit can actually be done to synchronize audio with patch ops. I had a look through the op documentation, and there are a couple of operators worth looking into.

There is also a decently sized MIDI suite of ops worth exploring, but the main op I personally end up using is the AudioAnalyzer, using its array data to power various effects and patch parameters. You could sync things to a specific volume threshold (say, the kick being a main part of a song: watch how loud the track generally gets when the kick or some other element comes in, and trigger off that).
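To make the threshold idea concrete, here is a rough sketch of that logic in plain JavaScript. This is not a cables op, just an illustration: the frame layout, the `rms` helper, and the `0.3` threshold are all assumptions you would tune against your own track.

```javascript
// Per-frame loudness as RMS (root mean square) of the samples.
function rms(frame) {
  let sum = 0;
  for (const s of frame) sum += s * s;
  return Math.sqrt(sum / frame.length);
}

// Return the indices of frames where loudness crosses the threshold
// upward. Triggering on the rising edge means one kick fires one
// trigger, instead of one trigger per loud frame.
function detectOnsets(frames, threshold = 0.3) {
  const onsets = [];
  let above = false;
  frames.forEach((frame, i) => {
    const loud = rms(frame) >= threshold;
    if (loud && !above) onsets.push(i); // rising edge: fire the effect here
    above = loud;
  });
  return onsets;
}
```

In a patch you would do the same thing visually: compare the analyzer's volume output against a number and use the rising edge as a trigger.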

Also, depending on how comfortable you are with sending and receiving OSC messages, that in and of itself opens up a lot: you can interface with an outside DAW or other software and send data back and forth between it and cables. It's not too terrible to get set up either, as long as you can get around the command line and install/use Node.js a little. Happy to point you at a tutorial if you're interested in that sort of thing.
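If you're curious what's on the wire, an OSC message is simple enough to build by hand in Node.js with no dependencies: a NUL-padded address string, a type-tag string, then the arguments. The function names below (`encodeOsc`, `decodeOsc`) are illustrative, not a real library API; in practice you'd likely use an existing OSC package instead.

```javascript
// Pad a string with NULs to a multiple of 4 bytes, as OSC 1.0 requires.
function oscString(str) {
  const len = Buffer.byteLength(str) + 1;       // +1 for terminating NUL
  const buf = Buffer.alloc(Math.ceil(len / 4) * 4); // zero-filled padding
  buf.write(str, 0, "ascii");
  return buf;
}

// Encode an OSC message carrying float32 arguments, e.g. /fader 0.5
function encodeOsc(address, floats) {
  const typeTags = "," + "f".repeat(floats.length);
  const args = Buffer.alloc(4 * floats.length);
  floats.forEach((f, i) => args.writeFloatBE(f, i * 4)); // big-endian floats
  return Buffer.concat([oscString(address), oscString(typeTags), args]);
}

// Decode such a message back into { address, args }.
function decodeOsc(buf) {
  const readString = (offset) => {
    const end = buf.indexOf(0, offset);         // find the terminating NUL
    const str = buf.toString("ascii", offset, end);
    const next = offset + Math.ceil((end + 1 - offset) / 4) * 4;
    return [str, next];
  };
  const [address, tagsOffset] = readString(0);
  const [tags, argsOffset] = readString(tagsOffset);
  const args = [];
  let pos = argsOffset;
  for (const t of tags.slice(1)) {              // skip the leading ","
    if (t === "f") { args.push(buf.readFloatBE(pos)); pos += 4; }
  }
  return { address, args };
}
```

You'd then ship these buffers over UDP with Node's built-in `dgram` module (for example to a DAW listening on some port you've configured on its end).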

Also, as a side note about the analyzer op: the various arrays and data it outputs are just that, data. There's nothing stopping you from using the FFT or waveform array as a regular array to generate a texture, or to power some other array-based thing in a patch. Quite useful in many cases, actually :slight_smile:
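As a sketch of that array-to-texture idea, here is how you might pack an FFT-style magnitude array into RGBA pixel bytes suitable for uploading as a one-row texture (e.g. via WebGL's `texImage2D`). The assumed input range of 0..255 is just that, an assumption; the analyzer's actual output scaling may differ.

```javascript
// Convert an array of FFT bin magnitudes into RGBA pixel data,
// one pixel per bin, magnitude stored as grayscale.
function fftToRgba(fft, max = 255) {
  const pixels = new Uint8Array(fft.length * 4);  // 4 bytes (RGBA) per bin
  for (let i = 0; i < fft.length; i++) {
    // Normalize to 0..255 and clamp, so out-of-range bins don't wrap.
    const v = Math.max(0, Math.min(255, Math.round((fft[i] / max) * 255)));
    pixels[i * 4 + 0] = v;    // R
    pixels[i * 4 + 1] = v;    // G (grayscale; a shader can remap to a palette)
    pixels[i * 4 + 2] = v;    // B
    pixels[i * 4 + 3] = 255;  // A: fully opaque
  }
  return pixels;
}
```

A shader sampling that texture can then displace geometry, drive colors, and so on, which is essentially what the array-based ops let you do inside a patch.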