NN code: further progress
Richard A. O'Keefe
ok@atlas.otago.ac.nz
Thu, 11 Mar 1999 16:08:02 +1300 (NZDT)
It was suggested to me that it would be of interest to convert my NN code
from Clean to Haskell. This I did, but ran into a problem. The Clean
compiler has a hard job getting a large constant data structure down its
throat, but it *did* manage it. Hugs quickly chokes to death. More
surprisingly, GHC 3.02 *also* choked on it, even with a 60 MB heap,
after about half an hour of CPU time.
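For concreteness, the troublesome style is a single top-level constant
holding the whole training set. Here is a tiny hypothetical fragment of
what such a definition looks like (the real structure had 20 pairs of
233-element lists, roughly 9300 numeric literals; the values below are
made up):

```haskell
-- Hypothetical fragment of the kind of definition that chokes the
-- compilers: the entire training set as one compiled-in constant.
-- The real data had 20 pairs of 233-element lists; only two tiny
-- pairs with invented values are shown here.
module Main where

patterns :: [([Double], [Double])]
patterns =
  [ ([0.0, 0.1, 0.2], [0.9, 0.8, 0.7])   -- (stimulus, response)
  , ([0.3, 0.4, 0.5], [0.6, 0.5, 0.4])
  ]

main :: IO ()
main = print (length patterns)
```

At full size, the compiler has to parse, type-check, and generate code
for every one of those literals, which is where the time and heap go.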
That made it urgent for me to get the constant data out of the program.
(I have a list of 20 stimulus-response pairs, where the stimulus is a
list of 233 numbers, and so is the response.)
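Reading such data from a file might look like the sketch below. The
file format assumed here (stimulus and response as alternating
whitespace-separated lines of numbers) is my own invention for the
example; the post does not specify the actual format:

```haskell
-- A minimal sketch of loading the training data from a file instead
-- of compiling it in.  The alternating-lines format is an assumption.
module Main where

-- Parse alternating stimulus/response lines into pairs.
readPairs :: String -> [([Double], [Double])]
readPairs = pair . map (map read . words) . lines
  where
    pair (s:r:rest) = (s, r) : pair rest
    pair _          = []

main :: IO ()
main = do
  -- In the real program this would be: txt <- readFile "patterns.dat"
  let txt = "0.1 0.2 0.3\n0.9 0.8 0.7\n"   -- tiny stand-in sample
  print (readPairs txt)
```

The parse is plain list processing, which is presumably why the read
itself costs well under a second.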
Recall that the previous version of the program, _with_ the data in the
program, took
453 sec (user) + 13 sec (GC) = 466 sec (total).
Well, just reading the data from a file takes
0.8 + 0.0 = 0.8 seconds,
and reading the data and solving the problem takes
399 sec (user) + 12 sec (GC) = 411 sec (total).
We're now down to 4 times slower than optimised C, which is darned good.
The extremely interesting thing here is that moving the large constant
data structure into a file and reading it back *saved* quite a lot of
time, contrary to my expectation: 466 seconds with the data compiled
in, versus about 412 seconds including the 0.8-second read.
Conclusion: large constant data structures are bad news,
and not just for Clean either.