Dear Sean,
Good to hear from you. Your simulation results look very nice!
Let me start with your first question about the anisotropy coefficients. Unfortunately, it is difficult to obtain physical values for them, and there is no general strategy for what to do in that case.
But on top of that problem, I think there is some numerical mixing between the static and kinetic coefficients which comes from the low spatial resolution that we are forced to use for performance reasons. In MICRESS, (close to) correct interface kinetics are achieved by using "mob_corr". But this mobility correction method leads to an artificial reduction of the kinetics in order to compensate for the "tunneling" effect, which is caused by a numerical interface thickness that is too large in comparison to the diffusion length. As a consequence, the kinetic part of the front undercooling becomes relevant (while in reality the kinetic coefficient and the kinetic anisotropy are typically not relevant). In my understanding, this means that some part of the static anisotropy should be shifted to the kinetic anisotropy in order to achieve more realistic results. Therefore, I usually use a relatively high kinetic anisotropy coefficient (like 0.2-0.3), although this is probably far above the physical value, while choosing a smaller one for the static coefficient (e.g. 0.1).
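Just to illustrate, in the phase interaction data of the driving file this could look roughly as follows (the comment lines are only my approximate reminders, not the exact prompts, and the values are just the examples from above, so please compare with your own .dri file):

    # Is the interaction anisotropic?  [isotropic / anisotropic]
    anisotropic
    # coefficient for the static anisotropy (of the interface stiffness)
    0.10
    # coefficient for the kinetic anisotropy
    0.30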
But there is another issue when simulating in 3D: when cubic symmetry is selected, MICRESS by default uses a four-fold anisotropy description which is fine for 2D, but not correct for 3D. To invoke the 3D equivalent, you should add "harmonic_expansion" after the "anisotropic" keyword. The further changes that are necessary are to replace the static coefficient of stiffness by a static coefficient of energy (i.e. to divide the value of the coefficient by 16), and to add a second coefficient with the value "0" to the static and kinetic coefficients (in the same line). You will see that this makes a difference!
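Applied to the example values from above, the anisotropy input would then change roughly like this (comments again only approximate):

    # Is the interaction anisotropic?  [isotropic / anisotropic]
    anisotropic harmonic_expansion
    # static anisotropy: now a coefficient of the interface energy,
    # i.e. the stiffness coefficient 0.10 divided by 16, plus a second coefficient 0
    0.00625 0.0
    # kinetic anisotropy, also with a second coefficient 0
    0.30 0.0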
Your second question was about performance. If the time spent for diffusion (in .TabP) is already a big part of the total time, there is probably not much left to gain. Of course, the diffusion solver runs in parallel, so using parallel computing should help significantly. Please make sure that you really benefit from the parallelisation by binding all threads to the same CPU (see the small example below). But you should also keep in mind that the diffusion time includes a serial part which covers all the redistribution work. This part behaves similarly to the list time and the PF time and could still profit from a larger time step, if that is possible (see the discussion about the minimum time step).
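Assuming you run the OpenMP-parallel MICRESS binary on Linux, the thread binding could be set up like this before starting the simulation (the thread count, binary name and driving-file name are of course only placeholders):

    export OMP_NUM_THREADS=8     # number of parallel threads
    export OMP_PLACES=cores      # one thread per physical core
    export OMP_PROC_BIND=close   # keep the threads together on one CPU/socket
    ./MICRESS my_simulation.dri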
Finally, some further comments on your input file:
- interface energy: You use an unusually high value for all interface energies, at least 10 times higher than normal. This should lead to coarser dendrite structures.
- boundary conditions: You use symmetric conditions for all sides. Remember that symmetric boundary conditions assume mirror symmetry, with the symmetry plane lying inside the domain (1/2 grid cell from the border). This requires the dendrites either to grow exactly along the boundary or far away from it; otherwise strange effects will occur, like a liquid layer between the dendrite and the boundary. Furthermore, at the top boundary we should always use a fixed condition for the concentration when doing directional solidification with moving_frame, in order to fix the far-field concentrations. Thus, I propose to use ppppii for the phase field and ppppif for the concentration (you will then have to enter the initial composition again as fixed values at another place in the input file); see the sketch after this list.
- diffusion data: You use "diagonal g" for all elements and phases in your simulation. I think this makes sense only in the case of dilute alloys, where the off-diagonal terms would be small anyway. Ni-superalloys are highly alloyed (almost "high-entropy") and show strong cross-diffusion effects. Thus, the best option here is to use "multi" instead. If this has to be avoided because of performance (liquid phase) or potential problems with miscibility gaps (γ/γ'-phase), the second-best option is "diagonal_dilute g", which evaluates an "effective" diagonal value that in addition cannot become negative (see the sketch below).
- interface thickness: You use 2.5 cells, which is legitimate (and what I also did for a long time). However, increasing it slightly to 2.84 (or above) gives a significant boost in phase-field accuracy, because then the diagonal neighbours also come into play when evaluating the Laplacian and the curvature (also shown in the sketch below).
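To make the last three points concrete, the corresponding entries in the driving file could look roughly like this. This is only a sketch stitched together from different sections of a .dri file; the comment lines and the exact positions of the entries are approximate, so please compare with your own input file:

    # boundary conditions for the phase field (one character per side, the last one is the top)
    ppppii
    # boundary conditions for the concentration field, fixed at the top
    ppppif
    # (re-enter the initial alloy composition as the fixed value at the top)

    # diffusion data, e.g. for one element in the liquid phase
    multi
    # or, if "multi" must be avoided for this phase:
    # diagonal_dilute g

    # interface thickness in cells
    2.84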
Best wishes and good luck!
Bernd