Hi Chris,

This sounds, a priori, quite weird... However, one possible cause would be cron jobs which run overnight, or once a day or week, and which happened to fall on the night you ran the benchmarks. On Linux, such jobs could include the disk-intensive updatedb run that rebuilds the slocate database, an rdiff-backup job, etc. Those jobs can use a lot more disk than CPU, and so would barely show up in 'top'.

Cheers,
Seb

Chris MacRaild wrote:
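To put rough numbers on the scatter Chris describes, here is a quick back-of-the-envelope check in Python using the timings from his table below. It shows that the difference between the two means is well within one standard deviation of the run-to-run variation, so the benchmarks cannot distinguish the code bases:

```python
from statistics import mean, stdev

# Wall-clock times in seconds, converted from the 5 runs in Chris's table.
old = [5*60+30.059, 5*60+38.187, 5*60+38.408, 5*60+1.070, 8*60+8.980]   # 1.3 code
new = [6*60+14.447, 5*60+38.949, 5*60+35.776, 8*60+46.338, 5*60+20.290] # apply branch

for label, t in (("1.3 code", old), ("apply branch", new)):
    print(f"{label:12s}: mean {mean(t):.1f} s, stdev {stdev(t):.1f} s")
# 1.3 code    : mean 359.3 s, stdev 74.1 s
# apply branch: mean 379.2 s, stdev 84.6 s
```

The means differ by about 20 s while the per-branch scatter is 74-85 s, consistent with Chris's conclusion that the apply changes have no appreciable effect.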
I've started benchmarking the changes to the minimisation code which remove the apply function. Here are some preliminary results, based on 5 executions of the multimodel sample script on a moderately sized data set:

Run   1.3 code    apply branch code
 1    5m30.059s   6m14.447s
 2    5m38.187s   5m38.949s
 3    5m38.408s   5m35.776s
 4    5m1.070s    8m46.338s
 5    8m8.980s    5m20.290s

It appears from this that the apply changes have no appreciable effect on the minimisation performance, and that the very preliminary tests I had done previously were simply chance, due to the huge variation in execution times.

The origin of these variations is a bit of a mystery to me. I ran all of this overnight, when my machine should have been essentially idle, and indeed each process had >99% of processor time. It is not an issue of differences in the optimisation: the logs of each run are identical, both within and between the two code bases.

Any ideas why these identical processes should have such vastly different performance?

Chris

_______________________________________________
relax (http://nmr-relax.com)

This is the relax-devel mailing list
relax-devel@xxxxxxx

To unsubscribe from this list, get a password reminder, or change your subscription options, visit the list information page at https://mail.gna.org/listinfo/relax-devel