Writer advanced

yo!
i’m not satisfied with the result that i got from the video i captured with my dv cam…
the picture is not as sharp as i want it to be…

so i have to do it some other way. my problem is how to make a sound-responsive video using a non-realtime method…
my first thought was to create a patch with a Writer (File) node that would analyse a song and write the FFT value for each frame. and this is where i got stuck. can someone help and tell me how to write this data to a *.txt file? it should look like this:

00334
00456
00754
etc.

i know how to read it back, using a RegExpr, but i don’t know how to write it.
i think it would be great to have a non-real-time renderer that would deal with both the video and the sound.
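to make the requested file format concrete: outside of vvvv, the same kind of per-frame value list can be produced offline. the following is a stdlib-only Python sketch, not the Writer (File) node itself, and it uses a simple RMS loudness per frame instead of a real FFT; all function names, the sample rate, and the fps are assumptions for illustration.

```python
# Offline sketch: collapse raw audio samples into one zero-padded
# loudness value per video frame, in the 00334 / 00456 / ... shape
# described above. RMS stands in for the FFT here (an assumption).
import math

def frame_levels(samples, sample_rate=44100, fps=25):
    """Return one integer loudness value per video frame."""
    per_frame = sample_rate // fps            # samples covered by one frame
    levels = []
    for start in range(0, len(samples), per_frame):
        chunk = samples[start:start + per_frame]
        if not chunk:
            break
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        levels.append(min(int(rms), 99999))   # clamp to 5 digits
    return levels

def write_levels(levels, path="levels.txt"):
    with open(path, "w") as f:
        for v in levels:
            f.write("%05d\n" % v)             # 00334-style lines
```

the zero-padding just keeps every line the same width, which makes the later RegExpr parse trivial.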

and one question for sanch:
how do you record your clips that are sound responsive?

and another for joreg:
how did you make that triko video, which i like very much by the way? i read here that you were using midi to create interaction. how do you do that? can you write midi data the same way as with the Writer (File) node? can you do it in vvvv?

i hope that’s not too much to ask

kind regards, vedran

This is actually touching on a topic I wanted to elaborate on for a while. It would basically be a musical markup format that allows you not only to implement a score and karaoke titles, but basically any freely defined markup inside a music file.

It seems like the perfect principle for VJing and any other situation where you’d want to embed events in a time-based manner. For example you could

  • place markers inside a piece of music where changes of voice, mood, pitch etc. take place (either manually or automatically, based on heuristics/analysis)
  • embed the midi score to go along with the rest
  • place text markers (which is obviously possible with MP3 already, but not parsed by most media players) like subtitles, a karaoke track, etc.
  • store abstract commands for music visualisation modules
  • embed SMIL style media layout/animation info
  • embed time-based Flash movies (uärx)

etc.etc.

I’d say the way to go should be to embed this inside the header of an MP3 (/ogg /WMV or whatever) file.
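to make the marker-track idea a bit more concrete, here is a hypothetical plain-text shape for such an event stream — this is not an existing format, only a stdlib Python sketch of what could be stuffed into a text/comment field of an MP3/Ogg header; the tab-separated layout and field names are invented for illustration.

```python
# Hypothetical marker track: one "time<TAB>kind<TAB>payload" line per
# event, sorted by time. dump/parse round-trip only -- no real
# container embedding is done here.
def dump_markers(markers):
    """markers: list of (seconds, kind, payload) tuples."""
    return "\n".join("%08.3f\t%s\t%s" % m for m in sorted(markers))

def parse_markers(text):
    out = []
    for line in text.splitlines():
        if not line.strip():
            continue
        t, kind, payload = line.split("\t", 2)
        out.append((float(t), kind, payload))
    return out
```

any of the bullet points above (mood changes, karaoke text, abstract visualisation commands) would just be different `kind` values; the hard part, as noted, is the embedding and keeping players from choking on it.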

just an idea.

halo.

glad you liked triko. i did it the manual way:

  • opened my favourite midi-sequencer
  • loaded the triko.mp3 into the waveform view
  • placed midi-notes along the timeline manually where i heard events
  • played back the midifile in vvvv on a midi-loopback device
  • received triggers via MidiNote

this could also work in nonrealtime. i’ve not tested it.
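a non-realtime variant of the steps above could be sketched as follows: take the note-on times placed in the sequencer and turn them into a frame-indexed trigger table that a patch can read while rendering frame by frame. this is a stdlib Python illustration of the idea, not joreg's actual setup; the fps value and the 0/1 "bang" encoding are assumptions.

```python
# Convert manually placed event times (seconds) into a per-frame
# trigger table for non-real-time rendering: table[f] == 1 means
# "bang on frame f".
def triggers_by_frame(note_times, fps=25, total_frames=None):
    frames = sorted(int(round(t * fps)) for t in note_times)
    if total_frames is None:
        total_frames = (frames[-1] + 1) if frames else 0
    table = [0] * total_frames
    for f in frames:
        if f < total_frames:
            table[f] = 1
    return table
```

the advantage over a midi-loopback device is that nothing depends on real-time timing any more: the renderer just asks "is there a trigger on this frame?", so audio jitter cannot desync the events.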

and maxi:
what i actually tried is merging the midi file and the mp3 file into one .avi file (since, as we all know, the .avi format is just a container for streams of any type). but then strange things happened: the two streams went out of sync, somehow. i didn’t take the time to investigate this further.

but. having an eventstream alongside the audio/video track is a qool thing. could also try different media container formats like quicktime… and then there was this other thing i never exactly knew whether it should interest me: matroska

Have you tried increasing the resolution of the renderer? ie 1024 x 512 and unticking auto backbuffer?
I capture to DV with fairly good results depending on the patch; 2 DX renderers seem to cause problems though!

alo!
maybe i was not clear enough, sorry. but your explanation broadens my view on this topic…
what i want is to simply analyse the stream in vvvv and save the result to a txt file, as i described in the first post. then i’ll read it back when playing the animation, instead of playing the stream. all that because when i render in non-real-time with the NRT patch, i get jitter in the audio stream, and that is no good for the animation.
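the read-back side of that plan can be sketched in a few lines — a stdlib Python illustration only, not a vvvv patch; the file name and the one-zero-padded-integer-per-line format are taken from the first post, and the clamping behaviour is an assumption.

```python
# Read-back sketch for the non-real-time render: load the pre-analysed
# values once, then index them by frame number instead of touching the
# audio stream while rendering.
def load_levels(path="levels.txt"):
    with open(path) as f:
        return [int(line) for line in f if line.strip()]

def level_for_frame(levels, frame):
    # clamp so frames past the end of the song hold the last value
    return levels[min(max(frame, 0), len(levels) - 1)]
```

since the values are fixed in the file, every render pass sees exactly the same numbers per frame, which is what removes the jitter problem.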
so, joreg, your method might help. i didn’t try it yet, but i will, and i hope it won’t create jitter.
and thnx sanch! i managed to get a better result with my cam, but when i create the video the non-real-time way, the picture is far better than with the cam…

vedran