julia-users › Parallel computing and type-stability
6 posts by 3 authors

Sleort (Sep 9)

I've been playing around with the parallel functionality of Julia (0.4.6) in order to get a better understanding of it. Just for the sake of testing, I made a "silly" nworkers() implementation:

    julia> function silly_nworkers()
               @parallel (+) for p in workers()
                   1
               end
           end

which, as expected, returns the number of workers, just like nworkers(). Now, running with julia -p 2 (or any other number of processes),

    julia> @code_warntype silly_nworkers()

gives the result

    Variables:

    Body:
      begin  # none, line 2:
          return (AST(:($(Expr(:lambda, Any[], Any[Any[Any[symbol("#8#therange"),Array{Int64,1},19]],Any[],Any[Function],Any[]], :(begin  # expr.jl, line 113:
              NewvarNode(symbol("#8#therange"))  # multi.jl, line 1587:
              #8#therange = (Main.workers)()::Array{Int64,1}  # multi.jl, line 1587:  # multi.jl, line 1542:
              GenSym(0) = AST(:($(Expr(:lambda, Any[:(#9#lo::(top(getfield))(Base,:Int)::Any::Any),:(#10#hi::(top(getfield))(Base,:Int)::Any::Any)], Any[Any[Any[symbol("#9#lo"),Any,18],Any[symbol("#10#hi"),Any,18],Any[:p,Any,2],Any[symbol("#11#ac"),Any,2],Any[symbol("#s7"),Any,2]],Any[Any[symbol("#8#therange"),Array{Int64,1},19]],4,Any[]], :(begin
                  #9#lo = (top(typeassert))(#9#lo,(top(getfield))(Base,:Int)::Any)::Any
                  #10#hi = (top(typeassert))(#10#hi,(top(getfield))(Base,:Int)::Any)::Any  # multi.jl, line 1543:
                  p = (Main.getindex)(#8#therange,#9#lo)::Any  # multi.jl, line 1544:  # none, line 3:
                  #11#ac = 1  # multi.jl, line 1545:
                  unless ((top(getfield))(Base,:(!=))::Any)(#9#lo,#10#hi)::Any goto 0  # multi.jl, line 1546:
                  GenSym(1) = (Main.getindex)(#8#therange,(Main.colon)(((top(getfield))(Base,:+)::Any)(#9#lo,1)::Any,#10#hi)::Any)::Any
                  #s7 = (top(start))(GenSym(1))::Any
                  unless (top(!))((top(done))(GenSym(1),#s7)::Any)::Any goto 1
                  2:
                  GenSym(2) = (top(next))(GenSym(1),#s7)::Any
                  p = (top(getfield))(GenSym(2),1)::Any
                  #s7 = (top(getfield))(GenSym(2),2)::Any  # multi.jl, line 1547:  # none, line 3:
                  GenSym(3) = 1
                  #11#ac = #11#ac + GenSym(3)::Any
                  3:
                  unless (top(!))((top(!))((top(done))(GenSym(1),#s7)::Any)::Any)::Any goto 2
                  1:
                  0:  # multi.jl, line 1550:
                  return #11#ac
              end::Any)))))
              return ((top(getfield))(Base,:preduce)::F)(Main.+,GenSym(0),(Base.arraylen)(#8#therange::Array{Int64,1})::Int64)::Any
          end::Any))))))()::Any
      end::Any

where there is a large number of undetermined types/Anys in the code. (That aside, I must admit I don't understand all the details of the output itself.) Conventional Julia optimization wisdom tells me that I should have concrete types in order to get fast code. What's going on here, and is there a way to "fix"/improve this situation? silly_nworkers() itself seems rather straightforward to me, so I'm not sure why the compiler cannot generate more type-specific code...

Yichao Yu (Sep 9)

Inter-process communication has a lot of overhead, which means:

1. You should put more work in the loop body; otherwise it's not going to speed things up.
2. The type instability of the multiprocessing infrastructure doesn't really matter compared to the communication overhead.
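To illustrate the first point, here is a minimal sketch (not from the thread; the function name and workload are illustrative) of a @parallel reduction in which each iteration does enough local work to amortize the fixed scheduling and communication overhead. It assumes Julia was started with worker processes, e.g. julia -p 2.

    # Illustrative sketch: a reduction where each iteration does real local work,
    # so the fixed per-task overhead is amortized (Monte Carlo estimate of pi).
    function estimate_pi(n::Int)
        hits = @parallel (+) for i in 1:n
            # each worker evaluates its chunk of iterations locally
            Int(rand()^2 + rand()^2 <= 1.0)
        end
        return 4 * hits / n
    end

    estimate_pi(10_000_000)   # ≈ 3.14 for large n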
Richard Dennis (Sep 9)

When I run this code in Julia 0.5.0-rc4, the output is much shorter and most of the red indicating type instability is gone. Cheers.

Sleort (Sep 9)

@Yichao Yu: Sure. I'm aware that there is a lot of overhead in inter-process communication. This was just a minimal test case, not something I expect to run fast(er). (A silly nworkers() implementation indeed ;-)) I was just curious whether it is possible to reduce the type instability that shows up in parallel programming situations like this one (and others). One reason is that such type instability seems to "mask" @time results (the number of allocations and the memory usage are typically much larger for type-unstable code, right?), making it harder to get a quick idea of the performance/type stability of the parts of the code that really matter...

@Richard Dennis: That's interesting! Do you (or anyone else) have an idea why? What was updated/changed in 0.5.0-rc4 compared to 0.4.6?

Yichao Yu (Sep 10)
Re: [julia-users] Re: Parallel computing and type-stability

Inlining and printing. No type instability has changed, and it doesn't matter.

Sleort (Sep 10)
Re: [julia-users] Re: Parallel computing and type-stability

I see. Thanks for the information!
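A common way to keep the plumbing's instability from masking measurements of the code that matters, not discussed further in the thread but standard Julia practice, is a function barrier: factor the per-iteration work into its own type-stable function and run @code_warntype/@time on that function directly. A minimal sketch, with illustrative names and a stand-in workload:

    # Sketch of a function barrier: the hot kernel lives in its own function,
    # so inspecting/timing it is unaffected by the @parallel machinery.
    # @everywhere makes the kernel available on all worker processes.
    @everywhere function kernel(x::Float64)
        s = 0.0
        for k in 1:1000
            s += sin(x + k)   # stand-in for the real, type-stable inner work
        end
        return s
    end

    function driver(xs::Vector{Float64})
        # whatever instability the parallel infrastructure introduces stays out here
        @parallel (+) for i in 1:length(xs)
            kernel(xs[i])
        end
    end

    # Check and time the part that matters in isolation:
    # @code_warntype kernel(0.5)
    # @time kernel(0.5)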