[rkward-tracker] [ rkward-Bugs-2866476 ] FEEDBACK: Memory leak when using mclapply()

SourceForge.net noreply at sourceforge.net
Fri Sep 25 14:45:52 UTC 2009

Bugs item #2866476, was opened at 2009-09-25 13:02
Message generated for change (Settings changed) made by tfry
You can respond by visiting: 

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: CRASH
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Nobody/Anonymous (nobody)
Assigned to: Nobody/Anonymous (nobody)
>Summary: FEEDBACK: Memory leak when using mclapply()

Initial Comment:
There appears to be a memory leak in RKWard when using the function mclapply() from the multicore package. This eventually crashes RKWard and the session. The mclapply() function does not leak memory outside of RKWard. mclapply() is a parallel version of lapply() for running computations on multiple CPU cores.
Ubuntu 9.04 'Jaunty'
rkward version 0.5.0d
KDE version 4.2.2
R version 2.9.2
multicore 0.1-3


>Comment By: Thomas Friedrichsmeier (tfry)
Date: 2009-09-25 16:45

Thanks for reporting. I hope I can still reach you, as it would be nice if
you could provide some more info and do some more testing.

I was not able to reproduce this exactly using the current SVN version
(but read on, below). Can you provide a reproducible example script that
triggers the crash?

You say this is a memory leak. Do you have any specific evidence for that?

Read on below only if you're curious.


**What I can reproduce, and some thoughts on that**

The following script will hang after a few iterations in RKWard (not in
plain R), on a single-core machine. gcinfo() does not show growing memory
consumption; instead, simply nothing happens after a while, with the main
rkward.bin process hogging most of the CPU.

library (multicore)

gcinfo (TRUE)
print (gc ())

for (iteration in 1:100) {
        print (iteration)
        mclapply (1:1000, rnorm, mc.cores=2)
}

print (gc ())
gcinfo (FALSE)

One symptom around this is that a number of not-quite-dead child processes
is left behind. Those are mostly sleeping, however. The backtrace inside
those processes is:

#0  0xb8070430 in __kernel_vsyscall ()                                    
#1  0xb5b58292 in pthread_cond_timedwait@@GLIBC_2.3.2 () from
#2  0xb5c588b4 in pthread_cond_timedwait () from /lib/i686/cmov/libc.so.6 
#3  0xb69caf8e in thread_sleep (ti=0x9a1e7d0) at
#4  0xb69cb0bb in QThread::msleep (msecs=<value optimized out>) at
#5  0x0813abe9 in RThread::handleStandardCallback (this=0x99da3a0,
args=0x9a1e8f4) at
#6  0x08148b92 in RReadConsole (prompt=0xb71aef7a "Selection: ",
buf=0xb720d200 "", buflen=4096, hist=0)                                    
    at /home/thomas/develop/rkward4/rkward/rbackend/rembedinternal.cpp:202
#7  0xb7141f05 in R_ReadConsole () from /usr/lib/R/lib/libR.so            
#8  0xb7071a1c in ?? () from /usr/lib/R/lib/libR.so                       
#9  <signal handler called>                                               
#10 0xb6bf2046 in QDBusAdaptorConnector::relaySlot (this=0xa8ba170,
argv=0xb3c5fefc) at qdbusabstractadaptor.cpp:268                           
#11 0xb6bf29d8 in QDBusAdaptorConnector::qt_metacall (this=0xa8ba170,
_c=QMetaObject::InvokeMetaMethod, _id=0, _a=0xb3c5fefc)                    
    at qdbusabstractadaptor.cpp:364                                       
#12 0xb6ad1b33 in QMetaObject::activate (sender=0xabb4580,
from_signal_index=0, to_signal_index=1, argv=0xb3c5fefc) at
#13 0xb6ad1f60 in QMetaObject::activate (sender=0xabb4580, m=0x8182540,
from_local_signal_index=0, to_local_signal_index=1, argv=0xb3c5fefc)       
    at kernel/qobject.cpp:3206                                            
#14 0xb6ad1feb in QObject::destroyed (this=0xabb4580, _t1=0xabb4580) at
#15 0xb6ad2df9 in ~QObject (this=0xabb4580, __in_chrg=<value optimized
out>) at kernel/qobject.cpp:757                                            
#16 0xb6d9de3d in ~Scheduler (this=0xabb4580, __in_chrg=<value optimized
out>) at ../../kio/kio/scheduler.cpp:259                                   
#17 0xb6da1131 in ~SchedulerPrivate () at ../../kio/kio/scheduler.cpp:102 
#18 destroy () at ../../kio/kio/scheduler.cpp:209                         
#19 0xb6cd68db in ~KCleanUpGlobalStatic (this=0xb6e9f1f4, __in_chrg=<value
optimized out>) at ../../kdecore/kernel/kglobal.h:62                       
#20 0xb5b98589 in exit () from /lib/i686/cmov/libc.so.6                   
#21 0xb28d9e9e in mc_exit (sRes=0xa190538) at fork.c:492                  
#22 0xb701c689 in ?? () from /usr/lib/R/lib/libR.so                       
#23 0xb7045b6a in Rf_eval () from /usr/lib/R/lib/libR.so                  
[... frames #24-#47 omitted ...]
#48 0xb7045895 in Rf_eval () from /usr/lib/R/lib/libR.so                  
#49 0xb7048628 in ?? () from /usr/lib/R/lib/libR.so
#50 0xb7045895 in Rf_eval () from /usr/lib/R/lib/libR.so
#51 0xb7072b33 in R_ReplDLLdo1 () from /usr/lib/R/lib/libR.so
#52 0x08142aed in runUserCommandInternal () at
#53 0xb6ff2f19 in R_ToplevelExec () from /usr/lib/R/lib/libR.so
#54 0x0814406a in REmbedInternal::runCommandInternal (this=0x99da3a8,
command_qstring=..., Rf_error=0xb3c622d8, print_result=true)
#55 0x0813c616 in RThread::doCommand (this=0x99da3a0, command=0xaad53c8)
at /home/thomas/develop/rkward4/rkward/rbackend/rthread.cpp:189
#56 0x0813d755 in RThread::run (this=0x99da3a0) at
#57 0xb69cb582 in QThreadPrivate::start (arg=0x99da3a0) at
#58 0xb5b544b5 in start_thread () from /lib/i686/cmov/libpthread.so.0
#59 0xb5c49a5e in clone () from /lib/i686/cmov/libc.so.6

So what we appear to have here is that the main thread has already exited
via mc_exit(), and the R thread is somehow left waiting (for a response from
the main thread; likely it got a SIGSEGV, though, and is asking for info on
that). I'm not sure whether these zombie processes are really the problem,
though, or just a symptom.

Are Qt / GUI programs expected to be fork()able at all?

