julia-users › Parallel computation of Array of user-defined types
4 posts by 2 authors

ami...@gmail.com   Sep 9

Hello,

What would be the best option to parallelize this code?

    type T
        a
    end

    f(i) = T(i)

    v = map(f, collect(1:1:100))

This example may sound trivial, but the point is that I have a function `f` that returns a rather complicated user-defined type `T`, and I need to store a lot of these objects in an `Array`. I've read a bit about Julia's parallel programming model, and I honestly wouldn't know how to do such a thing. Any hint?

Many thanks!

Germán Aquino   Sep 9

I usually use a combination of @spawn and fetch. @spawn fires a lightweight task that can return a value of any type, which the main process can then collect with fetch:

    refs = Vector{RemoteRef}(N)
    for i in 1:N
        refs[i] = @spawn begin
            doSomethingAndReturnValueOfTypeT()
        end
    end
    for i in 1:N
        v[i] = fetch(refs[i])
    end

I'm not aware of a better way; as far as I know, SharedArrays don't work with user-defined types. You need worker processes for this to be of any use, either by calling addprocs() or by starting Julia with the -p flag, for instance `$ julia -p 4`.

I hope that helps,
Germán.

ami...@gmail.com   Sep 9

Thanks Germán, indeed I have seen that SharedArrays can't be used with user-defined types... I tried to compare the approach you suggested with the serial one, and the results are so awfully slow with the parallel version that I might have done something wrong. Here it is:

    @everywhere type T
        a
    end

    N = Int(1e5)
    v = Array(T, N)
    refs = Vector{RemoteRef}(N)
    for i = 1:N
        refs[i] = @spawn begin
            T(i)
        end
    end
    for i = 1:N
        v[i] = fetch(refs[i])
    end

called with `time julia -p 4 parallel.jl`, it takes 0m31.706s, whereas

    type T
        a
    end

    N = Int(1e5)
    v = Array(T, N)
    for i = 1:N
        v[i] = T(i)
    end

called with `time julia serial.jl`, takes only 0m0.293s...

Germán Aquino   Sep 9

Hello, I think that if N is large and the computational load of each @spawn is small, then the overhead of creating and dispatching the tasks outweighs the gain from parallelization. You can try @parallel, which distributes the loop iterations across the workers:

    f(i) = T(i)

    @parallel for i in 1:N
        v[i] = T(i)
    end

Other options include using pmap instead of map, or @async, but I didn't see any improvement in this case.

Regards,
Germán.
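For reference, a minimal, untested sketch of the pmap option mentioned in the last reply, assuming Julia is started with worker processes (e.g. `julia -p 4`); both the type and the function have to be defined on every worker with @everywhere so the workers can construct and return T values:

    @everywhere type T
        a
    end

    # Function run on the workers; stands in for the real, expensive computation.
    @everywhere f(i) = T(i)

    N = Int(1e5)
    # pmap sends the calls to f out to the workers and collects
    # the returned T values into an array on the master process.
    v = pmap(f, 1:N)

As with @spawn/fetch, each call still pays serialization and scheduling overhead, so this only pays off when each f(i) does a substantial amount of work.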