Disclaimer: the only thing I know about Perl is the language name.
Raspbian on a Raspberry Pi (512 MB RAM).
I have a Perl script that I want to use. And it works - sort of.
The problem is that it works fine for small data sets but fails for large ones. When it works, it runs for a while and then spits out the result. For known errors (like a wrong file name) it displays an error message (I don't know Perl, but I can see these are coded into the script). When it doesn't work, it runs for a while and then just ends, without any message.
My first idea was that it is limited by memory. For a large data set I can see it (with top) using 98% of the CPU and allocating more and more memory, but when it stops it is at 3% RAM, so only about 15 MB - not that much.
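I understand top only shows the current footprint, so the 3% I saw at the end may not be the peak. Something like this, pieced together from examples I found (it assumes Linux's /proc layout; you pass the PID shown by top), should print the peak as well as the current usage:

```perl
#!/usr/bin/perl
# Print memory figures for a running process from /proc
# (assumes Linux; pass the process's PID, e.g. from top).
use strict;
use warnings;

my $pid = shift or die "usage: $0 PID\n";
open my $fh, '<', "/proc/$pid/status"
    or die "cannot read /proc/$pid/status: $!";
while (<$fh>) {
    print if /^Vm(Peak|Size|RSS):/;   # peak and current virtual size, resident set
}
```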
Any ideas what I can do? Is there a way of checking why it crashes? Could it be related to some per-process memory limit? Or is there some limit on the amount of memory available to Perl?
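From the docs I did manage to find, a wrapper like this should at least show whether the script exits on its own or is killed by a signal ("myscript.pl" and the file name are placeholders for my actual invocation). A silent death from signal 9 would point at the kernel's OOM killer, which I gather also logs to the kernel log (dmesg):

```perl
#!/usr/bin/perl
# Run the script and decode how it ended (a sketch; "myscript.pl"
# and its argument are placeholders for the real invocation).
use strict;
use warnings;

my $status = system("perl", "myscript.pl", "large-data-set.txt");
if ($status == -1) {
    print "failed to start: $!\n";
} elsif ($status & 127) {
    # SIGKILL (signal 9) cannot be caught, so the script could not
    # print anything; the kernel OOM killer logs such kills to dmesg.
    printf "died from signal %d\n", $status & 127;
} else {
    printf "exited with code %d\n", $status >> 8;
}
```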
I did some blind googling, but everything I found suggests memory should not be a problem here.