[rkward-devel] Status update

meik michalke Meik.Michalke at uni-duesseldorf.de
Thu Jul 23 14:58:35 UTC 2009


On Thursday 23 July 2009 13:05:15 Thomas Friedrichsmeier wrote:
> For the 3PL_parameter_estimation, I get a warning
> "Hessian matrix at convergence is not positive definite; unstable
> solution." and noticeable differences in the coefficients. Could you try to
> find something more "stable"?

hm, i'd like to correct that, but i can't reproduce it. i always get the same 
stable solution:

 > tpm(data = LSAT)

 Call:
 tpm(data = LSAT)

 Coefficients:
         Gussng  Dffclt  Dscrmn
 Item 1   0.037  -3.296   0.829
 Item 2   0.078  -1.145   0.760
 Item 3   0.012  -0.249   0.902
 Item 4   0.035  -1.766   0.701
 Item 5   0.053  -2.990   0.666

 Log.Lik: -2466.66

i have installed R version 2.9.1, eRm version 0.10-2 and ltm version 0.9-1. 
what does your output look like?
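for comparing our environments, something like the following might help. it's just a minimal sketch (it only reports versions for packages that are actually installed; the package names are the ones discussed here):

```r
# print the R version and the versions of the packages under discussion,
# so diverging results can be traced back to diverging environments
cat(R.version.string, "\n")
for (pkg in c("ltm", "eRm")) {
  if (requireNamespace(pkg, quietly = TRUE)) {
    cat(pkg, as.character(packageVersion(pkg)), "\n")
  } else {
    cat(pkg, "not installed\n")
  }
}
```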

> For several of the other tests I got slight differences in the last one or
> two digits.

yes, i think this is to be expected when estimating those parameters. 
actually, when i try to picture what happens under the hood when several 
unknown values of a formula are estimated at once over hundreds of 
iterations, usually with random start values, i'm always amazed that the final 
results of separate runs are comparable *at all* :-D

however, i've re-run the tests several times, of course, and i always get 
exactly the same results, down to the last digit. do your results differ only 
slightly from mine, or from run to run as well? i'd suspect the former, and 
would blame the floating point accuracy of different cpus, or something like 
that.
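as a small base-R illustration of why the same computation can differ in the 
last digits between machines: floating-point addition is not associative, so a 
different summation order (different cpu, BLAS, or compiler optimization) can 
change the final bits:

```r
# floating-point addition is not associative: the same three numbers
# summed in a different order differ in the last digit
a <- (0.1 + 0.2) + 0.3   # 0.6000000000000001
b <- 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)            # FALSE
print(a - b)             # about 1.1e-16
```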

since i would assume that no sane person really cares about more than the 
second digit of these estimates, for practical purposes five digits are more 
than enough in this case.

> The idea was to make sure tests are as modular as possible, so, if test
> number 7 fails, you know the cause is in that particular test, and not - say
> - something got changed in test number 2 that causes this failure.

i can see that, and it makes perfect sense. there could be some kind of check 
function to see whether a prior test that later tests depend on has already 
failed. so instead of just using its results, you'd first check whether the 
test that produced those results actually passed, and if it hasn't, emit an 
informative warning that the depending test is expected to fail as well. at 
least you wouldn't waste time searching for bugs in the wrong place.
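such a check could be sketched roughly like this. note that this is purely 
hypothetical: `run_test` and `test_status` are made-up names for illustration, 
not part of RKWard's actual test framework:

```r
# record each test's outcome, and skip (with a clear message) any test
# whose prerequisite has already failed or was never run
test_status <- new.env()

run_test <- function(name, fun, depends_on = character(0)) {
  dep_ok <- vapply(depends_on, function(d)
    exists(d, envir = test_status, inherits = FALSE) &&
      isTRUE(get(d, envir = test_status)), logical(1))
  if (!all(dep_ok)) {
    message(sprintf("skipping '%s': prerequisite(s) %s did not pass",
                    name, paste(depends_on[!dep_ok], collapse = ", ")))
    assign(name, FALSE, envir = test_status)
    return(invisible(FALSE))
  }
  ok <- isTRUE(tryCatch(fun(), error = function(e) FALSE))
  assign(name, ok, envir = test_status)
  invisible(ok)
}

run_test("test2", function() TRUE)                                # passes
run_test("test7", function() stop("boom"), depends_on = "test2")  # runs, fails
run_test("test8", function() TRUE, depends_on = "test7")          # skipped
```

this way a failure in test 7 immediately tells you that test 8's failure is 
only a consequence, not an independent bug.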

> For now, there are two solutions:

ok, thanks. i'll try the second solution and get back to you on that later :-)


best regards :: m.eik

-- 
  dipl. psych. meik michalke
  dept. of diagnostics and differential psychology
  institute of experimental psychology
  heinrich-heine-universität düsseldorf

