julia-users › cleaning up objects in parallel processes?
9 posts by 2 authors

Seth (9/5/15)

I've finally made some progress in parallelizing my code. However, at the end of the run I have my answer in my main process (the REPL), and each worker process has about 1 GB of memory held. Is there a way to tell the worker processes to free that memory? @everywhere gc() didn't seem to do it, and I don't really know what the memory is from, since the only thing that was done on the worker processes was

    @sync @parallel for s in i
        state = dijkstra_shortest_paths_sparse(spmx, s, distmx, true)
        if endpoints
            _parallel_accumulate_endpoints!(betweenness, state, s)
        else
            _parallel_accumulate_basic!(betweenness, state, s)
        end
    end

Every large structure I'm passing to the remote workers (spmx, distmx, betweenness) is some form of shared array. The answer I need is in the betweenness shared array. Any ideas? Thank you.

Nils Gudat (9/6/15)

Not entirely sure about this, but wouldn't you have to first re-allocate those large arrays before gc() can free up the memory? That's how I tend to do it, based on what it says in the manual: "For example, if A is a gigabyte-sized array that you no longer need, you can free the memory with A = 0. The memory will be released the next time the garbage collector runs; you can force this to happen with gc()."

So would

    @everywhere begin
        large_array = 0
        gc()
    end

do the trick?

Seth (9/6/15)

The thing is, there's no large array allocated anywhere. Everything's shared memory.

Nils Gudat (9/7/15)

Aren't you locally creating state on each of the worker processes?

Seth (9/8/15)

Yes, but it's small: it's a type with a couple of vectors whose lengths won't exceed the number of vertices in the graph.

Seth (9/8/15)

Following up: how do I even begin to determine what's eating up memory on remote processes? Is there something out there I can use to get a report?

Nils Gudat (9/9/15)

I think whos() should help you; it'll give a list of all objects defined, as well as their size in memory. If you run it as remotecall_fetch(2, whos), where 2 is the number of the worker process (you could of course pick any number returned by procs()), you should be able to figure out what's taking up the space in memory. Let me know if this works!

Seth (9/9/15)

    julia> remotecall_fetch(2, whos)
            From worker 2:  ArrayViews           137 KB      Module : ArrayViews
            From worker 2:  AutoHashEquals       5345 bytes  Module : AutoHashEquals
            From worker 2:  Base                 20321 KB    Module : Base
            From worker 2:  Compat               25 KB       Module : Compat
            From worker 2:  Core                 2741 KB     Module : Core
            From worker 2:  FactCheck            34 KB       Module : FactCheck
            From worker 2:  GZip                 250 KB      Module : GZip
            From worker 2:  GraphMatrices        294 KB      Module : GraphMatrices
            From worker 2:  LightGraphs          296 KB      Module : LightGraphs
            From worker 2:  LightXML             35 KB       Module : LightXML
            From worker 2:  Main                 25038 KB    Module : Main
            From worker 2:  ParserCombinator     206 KB      Module : ParserCombinator
            From worker 2:  StatsBase            342 KB      Module : StatsBase
            From worker 2:  StatsFuns            348 KB      Module : StatsFuns

Nothing here accounts for the 10.30 GB that Activity Monitor is showing for this instance.
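For readers following along, here is a minimal sketch of the reassign-then-collect pattern Nils suggests, written against the Julia 0.4-era API used throughout this thread (later versions move the parallel functions into the Distributed stdlib and rename gc() to GC.gc()). The global name large_array is a placeholder, not something from the original code:

    # Drop worker-local references and force a collection on every process.
    # `large_array` stands in for whatever big global a worker has bound.
    @everywhere begin
        large_array = 0   # release the only reference to the data
        gc()              # ask this process's collector to reclaim it now
    end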
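To get the same report for every worker rather than just worker 2, one can loop over the worker ids; again a sketch in 0.4-era form, where whos() (renamed varinfo() in later Julia) prints on the worker and that output is relayed to the master's terminal with the "From worker N:" prefix seen above:

    # Print each worker's global bindings and their sizes.
    for p in workers()
        remotecall_fetch(p, whos)   # 0.4 signature: remotecall_fetch(id, f, args...)
    end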
Seth (9/9/15)

Not sure if this is significant, but rmprocs(2) returns immediately with :ok and frees up the memory again, according to Activity Monitor.
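Seth's observation suggests a blunt but effective workaround: tear the workers down to reclaim their memory, then start fresh ones for the next run. A sketch under the same 0.4-era API (modern Julia needs `using Distributed`); the function name reclaim_worker_memory is made up for illustration, and it assumes nothing on the old workers needs to survive:

    # Kill all current workers (freeing their resident memory) and
    # replace them with the same number of fresh processes.
    function reclaim_worker_memory()
        n = nworkers()
        rmprocs(workers()...)   # returns :ok once the processes are gone
        addprocs(n)             # new workers start with a clean slate
    end

Note that anything the old workers had set up (packages loaded via @everywhere using, globals, shared-array mappings) has to be re-established on the replacement processes.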