Hey,
I'm new to vvvv, but I'm working toward one specific project: an interactive art installation. It's a university project; I'm studying communication design in Hamburg, Germany.
The idea is to create a kind of interactive "painting" (or rather a projection) that changes its appearance based on the behaviour of the visitors. The changes should be very subtle, so that visitors don't realize at first glance that they control the installation. We hope it will seem as if the installation has a life of its own, and that people will try to understand its reactions, or at least get that strange feeling, the apprehension, that the visuals are not randomly generated but a consequence of their presence. Aesthetically we're aiming for something amorphous, organic, maybe fluid-like.

We get the data about the visitors' behaviour from a Kinect. Maybe "behaviour" is too big a word, at least for my skills; it would also be fine to say: if there is something directly in front of the Kinect, make the colors more vivid, or, on the other hand, if there is just something/someone in the back, make the colors more bluish. People shouldn't have to wave in front of it or do jumping jacks; the idea is that their mere presence is sufficient to change the installation. It's more about changing the atmosphere of the room, and with it the feelings of the people who enter. It could even be that, when the room is full of people, the installation starts radiating a bright red color, like a pulse; the whole appearance of the room would change and the feeling of being cornered would grow stronger.
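To make that mapping concrete, here is a rough conceptual sketch, in plain Python rather than a vvvv patch. The names `nearness` and `occupancy` are my own hypothetical inputs (how close the nearest visitor is, and how full the room is, each normalized to 0..1), and the three colors are just placeholders for the bluish/vivid/red moods described above:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b, t in 0..1."""
    return a + (b - a) * t

def presence_to_color(nearness, occupancy):
    """Blend a calm bluish base toward a vivid tone as someone comes
    close, then push everything toward red as the room fills up.
    Inputs are hypothetical presence metrics in the range 0..1."""
    bluish = (0.2, 0.3, 0.6)   # distant / empty room
    vivid  = (0.9, 0.6, 0.2)   # someone directly in front
    red    = (1.0, 0.1, 0.1)   # crowded room, "pulse" color

    base = tuple(lerp(b, v, nearness) for b, v in zip(bluish, vivid))
    return tuple(lerp(c, r, occupancy) for c, r in zip(base, red))
```

In vvvv the same idea would just be a couple of lerp/blend nodes between color presets, driven by whatever scalars you derive from the Kinect.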
So far so good, that's the idea behind it. Of course it's not an easy project, which is why I need some help. What would be the best way to build a prototype of this idea? The visuals are not that important at this stage; making it look good would be one of the last steps. I'd be very happy if at least a rudimentary version, working only with colors or shapes, came into existence.
I've already set up the Kinect and I have access to the depth information (via a point-cloud shader), but now I'm stuck at the part where I convert that data into useful information, so that I can tell vvvv: if something is very close, make this or that brighter, slower, higher, and so on.
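The conversion step is essentially a reduction: collapsing a whole depth frame into one or two control scalars. Here is a sketch of that reduction in Python/NumPy (not vvvv; in a patch you'd do the equivalent with texture-sampling/downsampling nodes). It assumes Kinect depth in millimetres with 0 meaning "no reading", and the near/far range is an arbitrary guess:

```python
import numpy as np

def depth_to_controls(depth_mm, near=800, far=4000):
    """Collapse a depth image into simple presence metrics.
    Assumes depth in millimetres, 0 = invalid pixel; near/far are
    hypothetical bounds for the interaction zone."""
    valid = depth_mm[depth_mm > 0]
    if valid.size == 0:
        # nobody in view: neutral values
        return {"nearness": 0.0, "occupancy": 0.0}

    # nearness: 1.0 when something touches the near plane, 0.0 at far
    closest = float(valid.min())
    nearness = float(np.clip((far - closest) / (far - near), 0.0, 1.0))

    # occupancy: fraction of pixels that see anything within range
    in_range = np.count_nonzero((depth_mm > 0) & (depth_mm < far))
    occupancy = in_range / depth_mm.size

    return {"nearness": nearness, "occupancy": occupancy}
```

Once you have scalars like these, everything downstream is ordinary animation: smooth them over time (e.g. a damper/filter node) and feed them into the parameters of your visuals.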
In the end, the visuals should be based on the Kinect information, not a representation of the Kinect data itself.
Any ideas? Is it a good approach to use the point-cloud transformation data for this purpose, or is there an easier way?
I would really appreciate any help! If there are any Germans here: the vernissage is at the beginning of July in the Gängeviertel.
I hope you can understand me.
Thank you very much!