I've started benchmarking the changes to the minimisation code that
remove the apply function. Here are some preliminary results, based on 5
executions of the multimodel sample script on a moderately sized data
set (a timing-loop sketch follows the table):
run       1.3 code        apply branch code
1         5m30.059s       6m14.447s
2         5m38.187s       5m38.949s
3         5m38.408s       5m35.776s
4         5m1.070s        8m46.338s
5         8m8.980s        5m20.290s
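For anyone wanting to reproduce these timings, a loop along the lines of
the sketch below should do; the script name here is just a placeholder
for the actual multimodel sample script:

    import subprocess
    import time

    # Hypothetical timing harness; "multimodel.py" is a placeholder for
    # the actual multimodel sample script and its data-set arguments.
    for run in range(1, 6):
        start = time.perf_counter()
        subprocess.run(["python", "multimodel.py"], check=True)
        print(f"run {run}: {time.perf_counter() - start:.3f}s")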
It appears from this that the apply changes have no appreciable effect
on the minimisation performance, and that the very preliminary tests I
had done previously simply reflected chance, given the huge variation in
execution times.
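To put rough numbers on that variation, here is a quick Python sketch
summarising the timings from the table above, converted to seconds:

    from statistics import mean, stdev

    # Timings from the table above, in seconds
    v13 = [330.059, 338.187, 338.408, 301.070, 488.980]           # 1.3 code
    apply_branch = [374.447, 338.949, 335.776, 526.338, 320.290]  # apply branch

    for label, times in (("1.3 code", v13), ("apply branch", apply_branch)):
        print(f"{label}: mean {mean(times):.1f}s, sample stdev {stdev(times):.1f}s")

The means come out around 359s and 379s, with sample standard deviations
of roughly 74s and 85s, so the 20s difference in means is well inside
the noise.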
The origin of these variations is a bit of a mystery to me. I ran all of
this overnight, when my machine should have been essentially idle, and
indeed each process got >99% of the processor time. It is not an issue
of differences in the optimisation - the logs of each run are identical,
both within and between the two code bases.
Any ideas why these identical processes should have such vastly
different performance?
Chris