
Parallel version of MICRESS

Posted: Fri Feb 20, 2015 7:54 pm
by deepumaj1
Can MICRESS run in parallel?

Re: Parallel version of MICRESS

Posted: Fri Feb 20, 2015 11:22 pm
by Bernd
Hi deepumaj1,

MICRESS has been parallel since version 6.1 (the current version is 6.2). But in general, one should not expect too much: scaling is good only in certain cases.

The difficulty in making MICRESS perform well in parallel is that we have strongly optimized it for serial use. This is good because it makes MICRESS much faster than other phase-field software, so it can be used on "normal" computers. The drawback is that parallelisation is really difficult. We are working on it, but a lot remains to be done...

Presently, a good speedup can be expected in the following cases:

1.) Simulations with a rather high grid resolution, where the diffusion solver is the bottleneck (typically more scientific problems). The diffusion solver is parallel and scales quite well.

2.) Simulations with coupling to the stress solver. The stress solver is very time-consuming and scales very well, so a good speedup can be expected.

3.) Simulations of pure grain growth, as the phase-field solver is parallel.

Bernd

Re: Parallel version of MICRESS

Posted: Thu May 21, 2015 2:11 am
by Raina
Hi Bernd,

We cannot get MICRESS_par_noTQ_x64 (6.200) to use multiple threads on our Linux server.
We used the driving file "Grain_Growth_3D.in" from the examples. The number of threads was set to 4, but the CPU usage stayed at <=100%, which means only one CPU was used. The same happened with my own simulation.
I wrote an OpenMP C++ code to test our system and found that my parallel code does run with multiple threads, e.g. a CPU usage of 400% when I set OMP_NUM_THREADS=4.
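
For anyone who wants to reproduce such a check, a minimal OpenMP test along these lines (a sketch, not my actual code; the dummy workload and file name are arbitrary) could look like:

#include <cmath>
#include <cstdio>
#include <omp.h>

int main() {
    double sum = 0.0;
    // Each thread does independent dummy work; with OMP_NUM_THREADS=4 this
    // should drive the CPU usage shown by `top` to roughly 400%.
    #pragma omp parallel reduction(+:sum)
    {
        #pragma omp single
        printf("OpenMP is using %d threads\n", omp_get_num_threads());

        for (long i = 0; i < 200000000L; ++i)
            sum += std::sin((double)i);  // busy work that is hard to optimize away
    }
    printf("checksum: %f\n", sum);       // use the result so the loop is not removed
    return 0;
}

Compiled and run with, e.g., g++ -fopenmp omp_test.cpp -o omp_test and then OMP_NUM_THREADS=4 ./omp_test, it shows ~400% CPU usage on our machine while the loop runs.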

Has anyone reported similar problems to you?

Ben

Re: Parallel version of MICRESS

Posted: Thu May 21, 2015 9:57 am
by ralph
Sorry, that is because point 3 of Bernd's post is wrong.

The diffusion, stress and temperature solvers are parallelized, and, as of version 6.200, additionally the flow solver. See the MICRESS 6.2 manual, volume 2 (chapter 3.2.21), for a description of how to estimate the performance that can be expected for test cases. A quick answer is to consult the TabP output and see how much time is spent in the different solvers.

We are working on the parallelization of the phase-field solver, but still have to solve some problems for the reason Bernd mentioned above: this part of the code and the data structures it uses are highly optimized for serial execution.

Ralph