Improved Sorting

Fast multidimensional sorting would be great.

but what do you mean by ‘multidimensional sorting’? what is the sorting criterion? sorting itself is one-dimensional by definition, because you have one sorting criterion…

The sort node should be able to sort not only a simple one-dimensional list of values, but also two-, three- or higher-dimensional vectors stored in interleaved (chunky) order, like xyz,xyz,…
This would require only two additional pins named “block size” (or something similar…) and “order by every n-th entry”.
So if one sets the block size to three values and chooses “3” for “order by every n-th entry”, the whole list would be divided into segments of three values each, and these segments would be sorted by their z-value.
Got what I mean? Probably sounds much more complicated than it actually is.
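To make it concrete, here is a rough C++ sketch of such a block sort, assuming the spread comes in as a flat std::vector<double> of interleaved values; blockSort, blockSize and keyOffset are just illustrative names, not actual vvvv pins:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sort a flat, interleaved spread (x,y,z, x,y,z, ...) block-wise:
// each block of `blockSize` values stays together, and the blocks are
// ordered by the value at `keyOffset` inside each block (2 = z).
void blockSort(std::vector<double>& data, std::size_t blockSize, std::size_t keyOffset)
{
    const std::size_t blockCount = data.size() / blockSize;

    // sort block indices by their key value
    std::vector<std::size_t> order(blockCount);
    for (std::size_t i = 0; i < blockCount; ++i)
        order[i] = i;

    std::sort(order.begin(), order.end(),
        [&](std::size_t a, std::size_t b) {
            return data[a * blockSize + keyOffset] < data[b * blockSize + keyOffset];
        });

    // rebuild the spread in the new block order
    std::vector<double> sorted;
    sorted.reserve(data.size());
    for (std::size_t i : order)
        sorted.insert(sorted.end(),
                      data.begin() + i * blockSize,
                      data.begin() + (i + 1) * blockSize);

    data.swap(sorted);
}
```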

Btw. did you check out Herf’s approach to radix sorting? What do you think?

Surely you could patch that?
something like that?

Maybe some reverse nodes might be useful too…

vectorZ sort.v4p (3.7 kB)

Thanks Cat,
I already patched a module for block-sorting some time ago. The point is: vvvv’s sorting algorithm is way too slow for most applications. And things get even slower when using a module for tasks like this.
So my only wish is: Please make the sort-node faster and more flexible, that’s all.

"> Fast multidimensional sorting would be great. " i agree with that ,

that radix sort algorithm looks very good. but i am not sure if the speed of the sort algorithm is really the problem. i’ve made some tests against std::sort from the c++ STL: when sorting 1000 numbers the radix sort is slower, at 2000 it becomes faster. and in vvvv a spread count of 1000 is already quite large.
a convenient Sort node should be easy to implement. but i don’t know if it’s the best solution for the z-sort problem.
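for reference, here is a rough c++ sketch of the float radix idea (Herf’s sign-flip trick plus byte-wise LSD passes). this is only an illustration of the approach, not the histogram-fused version from Herf’s article, and it can be benchmarked against std::sort in the same way:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Remap the bits of an IEEE 754 float so that the resulting unsigned
// integers sort in the same order as the floats (Herf's trick).
static inline uint32_t floatFlip(uint32_t f)
{
    uint32_t mask = uint32_t(-int32_t(f >> 31)) | 0x80000000u;
    return f ^ mask;
}

static inline uint32_t floatUnflip(uint32_t f)
{
    uint32_t mask = ((f >> 31) - 1u) | 0x80000000u;
    return f ^ mask;
}

// Byte-wise LSD radix sort on floats: 4 passes, 256 buckets each.
void radixSortFloats(std::vector<float>& values)
{
    const std::size_t n = values.size();
    std::vector<uint32_t> keys(n), tmp(n);

    // convert floats to order-preserving unsigned keys
    for (std::size_t i = 0; i < n; ++i) {
        uint32_t bits;
        std::memcpy(&bits, &values[i], sizeof bits);
        keys[i] = floatFlip(bits);
    }

    // one counting-sort pass per byte, least significant byte first
    for (int shift = 0; shift < 32; shift += 8) {
        std::size_t count[257] = {0};
        for (std::size_t i = 0; i < n; ++i)
            ++count[((keys[i] >> shift) & 0xFF) + 1];
        for (int b = 0; b < 256; ++b)
            count[b + 1] += count[b];          // prefix sums -> bucket offsets
        for (std::size_t i = 0; i < n; ++i)
            tmp[count[(keys[i] >> shift) & 0xFF]++] = keys[i];
        keys.swap(tmp);
    }

    // convert the sorted keys back to floats
    for (std::size_t i = 0; i < n; ++i) {
        uint32_t bits = floatUnflip(keys[i]);
        std::memcpy(&values[i], &bits, sizeof bits);
    }
}
```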

gregsn, joreg, what do you think about this?

I think that large spread counts turn into a problem as soon as they pass through multiple nodes. Moving and possibly resampling spreads from one node to another is a known bottleneck in vvvv. That’s why I basically suggested doing the clustering and sorting within one single node.
For dealing with different spread counts I think it would be best to provide different algorithms (like quicksort, Herf’s radix or whatever) and let the user decide. That would be the most flexible solution. Viewing it from the user’s side: I personally would really like to have a choice here.

my problem with this is that cat’s solution fulfills all criteria of a “beautiful” vvvv patch.

so before adding somewhat arcane features to the Sort node (which, with even more reason, would then also have to be added to the likes of GetSlice, SetSlice, Bounds, + etc.): i admit sven’s solution is simple and clever, but i’d rather propose

(1) doing the reshuffling of the slices with a GetSlice on the vertex buffer level (which might have additional benefits on the directx level)

and fine-tuning some of the more basic nodes, to make standard solutions need fewer nodes. So for example

(2) GetSlice with a BinSize property (handles vectors of slices as one), so you can select contiguous slices of a vector without resampling your Index spread (see the sketch below)

(3) adding a stepsize property on the I node

but there might be more little generalisations which would make simple patches even simpler.
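just to illustrate (2), here is a small c++ sketch of how a binned GetSlice could behave. the function name, the index wrapping and the flat double spread are assumptions for illustration, not the actual node:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical binned GetSlice: each index selects a whole bin of
// `binSize` consecutive slices, so an xyz vector spread can be
// addressed with plain per-vector indices.
std::vector<double> getSliceBinned(const std::vector<double>& spread,
                                   const std::vector<std::size_t>& indices,
                                   std::size_t binSize)
{
    std::vector<double> out;
    if (binSize == 0 || spread.size() < binSize)
        return out;

    const std::size_t binCount = spread.size() / binSize;
    out.reserve(indices.size() * binSize);

    for (std::size_t idx : indices) {
        std::size_t bin = idx % binCount;   // vvvv-style index wrapping
        for (std::size_t k = 0; k < binSize; ++k)
            out.push_back(spread[bin * binSize + k]);
    }
    return out;
}
```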

about big spreads being a well known bottleneck in vvvv -
yes, memory in general is a well known bottleneck on modern cpus - while cpus followed moore’s law quite well, the speed of ram didn’t.
i’d say the spread handling in general is quite efficient. but it helps to have a cpu with huge cache memory on board.

this is also the reason why i am not completely enthusiastic about the radix sort, which trades cpu utilisation for ram utilisation. but there surely are special cases where an alternative sorting algorithm would be useful.

Well, first of all I think offering alternative algorithms is not really an “arcane” feature… ;)
Instead I would consider it a professional enhancement. Concerning the radix method: Usage of RAM is not a mentionable issue nowadays, especially not in this context, as we are not talking about gigabytes. Taking computing power off the cpu isn’t bad either…

However, please do not reduce the application possibilities of an enhanced sorting function to sorting vertices. I’m thinking about multidimensional lists in general: whether lists of particle coordinates, lists of colour values, whatsoever…

I sure know that any additional functionality for basic nodes like “sort” might be implemented by building a module around it, and you know that my personal attitude concerning the complexity of vvvv’s nodes is not too far from yours. But in this particular case - dealing with huge lists of values - the loss of cpu power is significant. So I really would ask for a “hardwired” solution here.

Usage of RAM is not a mentionable issue

sorry, no. it really really is. not size but speed.

Eh… I’m afraid I can’t follow your statement…
If radix sorting consumes more memory, but is - at larger numbers of values - obviously faster, what’s the problem?
Or do you refer to a vvvv-internal problem in preparing larger arrays of memory?

no. vvvv has no more internal problems working with large amounts of data than any modern x86 cpu. i was just not too enthusiastic about the speed of the radix sort in real-world applications these days. but it might have its uses.