Hi,
Yes, the data I got from Remco are the ones on ClpP. The fact that the dw values were obtained by other means (low T) is probably not a fundamental problem, and I think we can nevertheless check the algorithm with these data.
It will be a very useful test. He just used the dw values as starting points though, in the same way you would normally use the result of a grid search. We can do that in the test too. But this parameter was not fixed during the optimisation; he optimised it.
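Just to illustrate the idea of substituting known values for the grid search result, here is a minimal sketch (the grid_search helper and its signature are my invention, not relax's actual grid search code):

```python
import itertools

def grid_search(chi2, grid_axes, known_start=None):
    """Pick a starting point for optimisation from a coarse grid.

    chi2        -- function mapping a parameter tuple to a chi-squared value.
    grid_axes   -- one list of candidate values per parameter.
    known_start -- if given, skip the search and use this point directly
                   (e.g. dw values measured independently at low temperature);
                   the parameters are still free during the optimisation
                   that follows.

    Hypothetical helper for illustration only.
    """
    if known_start is not None:
        return tuple(known_start)
    # Evaluate chi2 at every grid node and return the best point.
    return min(itertools.product(*grid_axes), key=chi2)
```

The point is only that the independently measured dw values replace the grid search result as the starting point; they are then optimised like any other parameter.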
It may actually be a useful feature if relax allowed some values (such as dw) to be set to fixed values that are not fitted. We talked about this at some point and, as I understood you, you did not much favour such a solution because you were afraid it would allow users to make unreasonable assumptions.
This is technically difficult, but it could be done. The modifications would need to happen not in relax but in the minfx library (https://gna.org/projects/minfx/). Minfx used to be part of relax but was spun off into its own project, and the entire library of optimisation algorithms would need to be modified.

The best approach would be to modify the generic_minimise() function, which is a simplified interface to all of the various algorithms, to take an array of Boolean flags - a 'fixed' argument. Then, for the parameter vector, the gradient array, and the full Hessian (the matrix of second partial derivatives), the dimension corresponding to each fixed parameter would need to be stripped out of all three structures. For the dispersion analysis in relax, no gradients or Hessians have currently been derived by hand or numerically approximated, but fixed parameters in these first and second partial derivative structures will nevertheless have to be handled by minfx. Since the gradient and Hessian are passed in to minfx as functions, minfx will need to provide wrapper functions which perform the stripping. All 20 algorithms accessed via generic_minimise(), as well as the other algorithms not accessed by it, will need to be modified to handle this. Note that the overhead of fixing one or more parameters will be large and will lengthen the optimisation times.

So, you can probably see that there are a good few weeks of work involved. This is the sole reason I am hesitant.
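As a rough illustration of the stripping I mean, here is a sketch of the kind of wrapper functions minfx would need - the strip_fixed name and the signatures are purely my invention, not the actual minfx API:

```python
import numpy as np

def strip_fixed(func, dfunc, d2func, x0, fixed):
    """Wrap a target function, gradient, and Hessian so that parameters
    flagged True in the boolean 'fixed' array are held at their x0 values
    and removed from the optimisation space.

    Hypothetical sketch for illustration; minfx does not provide this.
    """
    fixed = np.asarray(fixed, dtype=bool)
    free = ~fixed
    x_full = np.asarray(x0, dtype=float)

    def embed(x_free):
        # Rebuild the full parameter vector from the reduced one.
        x = x_full.copy()
        x[free] = x_free
        return x

    def func_wrap(x_free, *args):
        return func(embed(x_free), *args)

    def dfunc_wrap(x_free, *args):
        # Strip the fixed dimensions out of the gradient.
        return dfunc(embed(x_free), *args)[free]

    def d2func_wrap(x_free, *args):
        # Strip the fixed rows and columns out of the Hessian.
        return d2func(embed(x_free), *args)[np.ix_(free, free)]

    return func_wrap, dfunc_wrap, d2func_wrap, x_full[free]
```

The optimisation algorithm would then only ever see the reduced parameter vector, gradient, and Hessian, while the target function is always evaluated with the full vector. The rebuilding and stripping on every function call is also where the extra overhead I mentioned comes from.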
But I agree, if we had the synthetic data from Korzhnev, it might be a good additional test, and would allow us to see whether the algorithm finds the dw. I know you don't favour such a solution, but in my opinion we could also make such synthetic data ourselves, make sure they match the Korzhnev data (as well as we can tell by comparing to the Korzhnev graphs), and fit them.
Ah, I had not thought of this. That is a perfect solution - comparing the data to the published figures will help us be sure that the SQ + MQ data combination is independent and that the optimisation is finding the correct results (or at least the published results). Cheers, Edward