In Part I of “Turbulence Modeling Best Practice Guidelines: Standard EVMs” I presented some motivation, along with two of what I find to be the most immediate and pressing issues regarding turbulence modeling in general: first, that before choosing and following any particular best-practice guideline one should at least know the basics of each turbulence model and the most significant differences between them; and second, the meshing process, which is closely coupled with our success in reducing the modeling error.
In Part II I delve into topics of no less importance: V&V and near-wall treatment.
Tip No. 3: Always V&V
Some would claim that this should have been the first tip on the list, and I am not sure I disagree. It is better not to rely on CFD at all than to rely on CFD without performing proper Verification and Validation.
Verification and validation are processes that collect evidence of a model’s correctness or accuracy for a specific scenario. The latter statement means that V&V cannot prove that a model is correct and accurate for all possible conditions and applications, rather, it can provide evidence that a model is sufficiently accurate. Therefore, the V&V process is completed when sufficiency is reached.
Verification is the process of determining that a model implementation accurately represents the developer’s conceptual description of the model and the solution to the model.
Simply put, verification is about:
Solving the equations right.
Many verification procedures were presented in previous blog posts, and they are an awesome topic for a future post in and of themselves, so there is no intention of presenting an exhaustive review here. Instead, I will focus on the most essential procedure, one that I practice with each and every simulation I conduct: the grid convergence study.
The primary goal of a grid convergence study is to provide evidence that a sufficiently accurate solution is being computed. Assuming our inputs to the simulation are good enough, insufficient grid refinement is typically the largest contributor to error.
A proper grid convergence study is conducted by systematically refining the mesh and/or the time step to achieve a monotonic reduction in discretization error over at least three successive levels of refinement.
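As a minimal sketch of the mechanics, the observed order of convergence and Roache's Grid Convergence Index (GCI) can be estimated from a single quantity of interest computed on three such grids. The function name and the sample drag values below are illustrative only, not from any particular case:

```python
import math

def grid_convergence(f_fine, f_medium, f_coarse, r=2.0, fs=1.25):
    """Estimate the observed order of accuracy and the Grid Convergence
    Index (GCI) from a quantity of interest computed on three grids with a
    constant refinement ratio r (assumes monotonic convergence)."""
    # Observed order of convergence from the three solutions
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    # Richardson-extrapolated ("grid independent") estimate
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # GCI on the fine grid: banded relative error with safety factor fs
    e_fine = abs((f_medium - f_fine) / f_fine)
    gci_fine = fs * e_fine / (r**p - 1.0)
    return p, f_exact, gci_fine

# Illustrative drag coefficient on coarse, medium, fine grids (made-up numbers)
p, f_ex, gci = grid_convergence(f_fine=0.3210, f_medium=0.3250, f_coarse=0.3330, r=2.0)
```

If the observed order p lands far from the formal order of the scheme, that in itself is a warning sign that the asymptotic range has not been reached.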
A few considerations are of particular importance concerning EVMs (we shall see later that applying such guidelines to LES is a whole different story):
- The choice of turbulence model may impact the effectiveness of the procedure. For example, it is incorrect to refine the mesh for a high-Reynolds, wall-sensitive model such as the standard k-ε to the point where the first grid point away from the wall penetrates the viscous sublayer.
- Generally speaking, conducting a grid convergence study for EVMs is more straightforward, since we are only interested in comparing either integral quantities (such as drag or lift) or the reproduction of first-order statistics (such as the mean velocity profile).
- Whether shear flows or wall-bounded flows are concerned, it is most important to assess the impact of mesh refinement in regions of high gradients.
- Remember that a grid convergence procedure is necessary but certainly not sufficient to ensure the validity of the results. Validity is the outcome of validation, while this procedure is part of the verification process. This is always true, yet it is especially important for EVMs, which are a very crude and limited representation of a very complex phenomenon.
Validation assessment is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. The goal of validation is to quantify confidence in the predictive capability of the model by comparison with experimental data.
Simply put, validation is about:
Solving the right equations.
There is a lot to be said about validation processes, but I especially want to emphasize that although, in essence, the purpose of validation assessment is to measure the level of agreement between our CFD prediction and experimental data from a validation experiment, the fact that the latter always contains an inherent level of uncertainty is often overlooked…
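One way to put that inherent uncertainty front and center, in the spirit of the ASME V&V 20 approach, is to compare the simulation-to-experiment comparison error against the combined numerical, input, and experimental uncertainties. This is only a sketch; the variable names and sample numbers are mine:

```python
import math

def validation_comparison(sim_value, exp_value, u_num, u_input, u_exp):
    """Comparison error E and validation uncertainty u_val, combining the
    numerical, input-parameter, and experimental uncertainty estimates."""
    E = sim_value - exp_value                           # comparison error
    u_val = math.sqrt(u_num**2 + u_input**2 + u_exp**2) # combined uncertainty
    return E, u_val

# Illustrative numbers only
E, u_val = validation_comparison(sim_value=1.05, exp_value=1.00,
                                 u_num=0.01, u_input=0.02, u_exp=0.03)
# If |E| <= u_val, any modeling error is buried in the noise floor of the
# combined uncertainties; if |E| >> u_val, modeling error dominates.
```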
A few considerations are (again) of particular importance concerning EVMs (due to their very limited formal range of validity), some of which are important in general:
- There are many ways to conduct scientific experiments, and all have their merits: they improve fundamental understanding of physical behavior, they are valuable in the construction of mathematical models, helpful in estimating values of model parameters, great at assessing component or system performance, etc. Validation experiments are a whole different animal though! A validation experiment is performed with the aim of generating very specific, high-quality data for the sole purpose of assessing the model we wish to validate. This requires highly accurate measuring devices and very accurate prescription of the boundary and initial conditions, along with all the other input parameters of the model. Such an experiment should also be thoroughly documented. The collected experimental data then becomes the standard to which predictions are compared. This is a science in and of itself. I am very skeptical when I hear practitioners claim: “I have validated the simulation through many experiments…”. If what a validation experiment means is not clearly understood and guidelines for it are not enforced, it is not a validation, now is it?…
- “Standing on the shoulders of giants…” – let’s be realistic here: due to knowledge gaps, time overhead, and the limited resources available for conducting validation experiments, many times they will either not be conducted at all, or will not match an evolving application (again, do not forget the difference between “an experiment” and a validation experiment).
This is why you should always search for validated benchmarks, from which much insight may be gained, sometimes even validating specific aspects of the model’s performance.
- Remember that as far as EVMs are concerned, the data compared between the standard (based on the data collected from the validation experiment) and the model prediction mostly comprises either integral quantities (such as drag or lift) or first-order statistics (such as the mean velocity profile). In that sense, we could be led astray into gaining confidence by matching compared values over some limited range, while in actuality this is only a construct of crude assessment and of compounding errors “fortunately” canceling one another… We should be very wary of our prediction being “not even wrong”, in the sense that the application itself, and specifically the output parameters, lie far outside the range of validity (formal and practical) of the turbulence model at hand. A beautiful example of such a case concerning EVMs is the “Stanford Diffuser” test case. This diffuser opens up in two directions, and also slightly in a third side wall, which creates a very asymmetric flow field. The flow is found to be very sensitive to separation from one corner, which, if not taken into account (a problem evident with all standard EVMs), results in a completely flawed flow topology when compared to validation experiments:
Tip No. 4: Near-Wall Behavior
Walls are the main source of vorticity in turbulence. We wouldn’t have any turbulence if there were no wall generating it. At the same time, near-wall modeling also happens to be the most problematic area in turbulence modeling. Dealing with near-wall modeling means focusing on the turbulent boundary layer.
In Part I we distinguished between cases in which the viscous sublayer is resolved and cases in which it is not. Furthermore, upon shortly describing the basic features of the most popular standard EVMs, I also stressed the important distinction between being wall-insensitive (the k-ω family of turbulence models, or the Spalart-Allmaras turbulence model) and wall-sensitive (the k-ε family of turbulence models). It is the latter I would like to focus on now.
My standpoint on the matter is that the fact that a family of models is wall-sensitive doesn’t mean we should stop using them altogether. It might have been so if the k-ω family of turbulence models offered, for every standard-EVM-related application, a choice at least as good as one of the k-ε variants along with some near-wall treatment. But this is simply not so for most flow applications.
There are many ways to treat the near-wall region to achieve the desirable modeling resolution for the k-ε variants, and I will not cover all of them. Nonetheless, knowing the different possibilities, and the fact that many more may be constructed and tweaked on a “per-application” basis, is very valuable.
Wall function approaches:
- Standard wall functions:
This approach works reasonably well for a broad range of high-Reynolds wall-bounded flows where the local equilibrium assumption holds (meaning that the flow application does not incorporate physical phenomena such as flows just before separation, flows just after reattachment, impinging flows, or flows in which the mean flow is subjected to adverse pressure gradients and rapid changes).
For the approach to be appropriate, we should also make sure that y+ > 30 (note that in some codes the log law is extrapolated to y+ < 30, inside what is formally defined as the buffer layer; in that case, the smallest value for which the log profile holds becomes our criterion for y+), and that there are no locations in the domain where the first cell away from the wall lies closer to the wall than where the log profile holds.
- Scalable wall functions:
To address the problem described above: engineering applications often contain complex geometries, which make it difficult to consistently place the first grid point away from the wall at a y+ matching the smallest value for which the log profile holds throughout the entire domain. To avoid the deterioration of the solution by standard wall functions in situations where it is unavoidable for the first grid point to be located at a y+ smaller than where the log profile holds, some codes force the solver to use the smallest value for which the log profile holds as a minimal y+ whenever arbitrary grid refinement is encountered.
- Non-equilibrium wall functions:
As noted above, standard wall functions will not work well for cases where the local equilibrium assumption doesn’t hold. For such non-equilibrium cases (flows just before separation, just after reattachment, impinging flows, flows in which the mean flow is subjected to adverse pressure gradients and rapid changes), a beneficial approach sensitizes the log law for mean velocity to pressure-gradient effects and uses the two-layer concept to compute the reciprocal relations between turbulence kinetic energy production and its dissipation by non-equilibrium means (see exactly how this is done in “Turbulence Modeling – Near Wall Treatment”).
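In practice, hitting a y+ target such as the y+ > 30 discussed above starts at the meshing stage. A back-of-the-envelope estimate of the required first-cell height can be made from a flat-plate skin-friction correlation; the 1/7th-power Cf fit below is just one common correlation, and on real geometries y+ must of course be checked a posteriori:

```python
import math

def first_cell_height(u_inf, length, nu, rho, y_plus_target):
    """Estimate the wall-normal height of the first cell needed to reach a
    target y+, using a turbulent flat-plate skin-friction correlation."""
    re_x = u_inf * length / nu              # plate Reynolds number
    cf = 0.026 / re_x**(1.0 / 7.0)          # one common flat-plate Cf fit
    tau_w = 0.5 * cf * rho * u_inf**2       # wall shear stress
    u_tau = math.sqrt(tau_w / rho)          # friction velocity
    return y_plus_target * nu / u_tau       # y = y+ * nu / u_tau

# Air-like properties over a 1 m plate at 10 m/s, aiming at y+ = 30
h = first_cell_height(u_inf=10.0, length=1.0, nu=1.5e-5, rho=1.2,
                      y_plus_target=30.0)
```

For a low-Reynolds mesh the same estimate applies with y_plus_target=1, which makes the thirty-fold difference in near-wall resolution requirements immediately tangible.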
Near-Wall Model Approaches:
- Low Reynolds turbulence models:
As previously noted in Turbulence Modeling Best Practice Guidelines: Standard EVMs – PART I, the family of k-ε models is essentially a family of high-Reynolds models. Adaptations are typically required to obtain accuracy across the viscosity-affected near-wall layer. These often include near-wall damping terms in the turbulent viscosity and other model coefficients (mostly dependent on wall distance or turbulent Reynolds number). An example is the Launder-Sharma low-Reynolds k-ε turbulence model:
More examples (albeit a little bit too detailed sometimes to be clearly communicated…) may be found here.
The main issue is that for such a choice of model to work well, y+ < 1 must be maintained throughout the entire domain.
- Two-layer zonal models:
Sometimes termed Enhanced Wall Treatment methods, these come in many variants, but essentially they work under the premise that the ideal is to have a near-wall treatment suitable for viscous-sublayer integration when the mesh is at y+ < 1, and to use a wall-function formulation when the first grid point away from the wall falls where the log profile holds and the viscous-sublayer solution is of minor consequence. Moreover, such a scheme should prevent excessive error on intermediate meshes, where the first near-wall node is placed neither in the fully turbulent region, where wall functions are suitable, nor in the direct vicinity of the wall at y+ < 1, where the low-Reynolds approach is adequate.
Different commercial codes employ different ways to achieve this goal. FloEFD, for example, uses the k-ε model as its only turbulence model, such that if the first grid point lies within the viscous sublayer, a low-Reynolds formulation is chosen; otherwise, if the first grid point is located outside it, a wall-function formulation is applied. FloEFD does this without user intervention, solely by applying an inherent switching function that decides upon the location of the first grid point and the subsequent adoption of the low-Reynolds or wall-function formulation.
Fluent has two different options for this. The first, termed Enhanced Wall Treatment, separates the two regions via a wall-distance-based turbulent Reynolds number: in regions where this turbulent Reynolds number is above 200 the original standard k-ε is employed; otherwise, when it is below 200, a one-equation model for the transport of turbulence kinetic energy (Wolfstein’s k-equation) is employed, along with an algebraically computed dissipation ε.
The second option is based on the Menter-Lechner ε-equation, and proposes a remedy for a deficiency of the former, namely its performance in cases where regions with small turbulence kinetic energy may also have a turbulent Reynolds number below 200, and are therefore treated with a near-wall formulation even though they are actually away from the wall. An in-depth review of such methodologies is found in “Turbulence Modeling – Near Wall Treatment”.
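To make the switching logic above concrete, here is a sketch of the wall-distance-based turbulent Reynolds number criterion with a Wolfstein-type one-equation eddy viscosity in the inner region. The constants follow commonly documented values (A_mu = 70, C_l = κ C_mu^(-3/4)); actual implementations blend the two regions smoothly rather than switching abruptly as done here:

```python
import math

def two_layer_viscosity(k, y, nu, c_mu=0.09, kappa=0.41):
    """Sketch of a two-layer near-wall treatment: the wall-distance-based
    turbulent Reynolds number Re_y decides whether the one-equation
    (Wolfstein-type) inner model or the full k-epsilon model applies."""
    re_y = math.sqrt(k) * y / nu            # turbulent Reynolds number
    if re_y >= 200.0:
        return None, re_y                   # outer region: full k-epsilon
    c_l = kappa * c_mu**(-0.75)             # length-scale constant
    l_mu = c_l * y * (1.0 - math.exp(-re_y / 70.0))  # damped length scale
    nu_t = c_mu * l_mu * math.sqrt(k)       # one-equation eddy viscosity
    return nu_t, re_y

# Illustrative values: a point 1 mm from the wall with k = 0.01 m^2/s^2
nu_t, re_y = two_layer_viscosity(k=0.01, y=0.001, nu=1.5e-5)
```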
The main issue to note for these kinds of near-wall modeling approaches is that, in essence, they impose wall-insensitivity.
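A flavor of how such formulations achieve that wall-insensitivity: a single velocity profile valid at any y+ can be built by blending the viscous-sublayer and log-law profiles with Kader's (1981) blending exponent. This is a sketch of the general idea, not any particular code's exact formulation:

```python
import math

def u_plus_blended(y_plus, kappa=0.41, E=9.8):
    """All-y+ velocity profile blending the viscous-sublayer law u+ = y+
    with the log law via Kader's exponent; valid for y_plus > 0."""
    gamma = -0.01 * y_plus**4 / (1.0 + 5.0 * y_plus)  # Kader blending exponent
    u_lam = y_plus                                     # viscous sublayer
    u_log = math.log(E * y_plus) / kappa               # log law
    return math.exp(gamma) * u_lam + math.exp(1.0 / gamma) * u_log

# Near the wall the laminar branch dominates; far from it, the log branch
u_near, u_far = u_plus_blended(1.0), u_plus_blended(100.0)
```

The payoff is that the wall boundary condition no longer depends on where the first cell lands, which is exactly the behavior the zonal approaches above strive for.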
In my experience, Enhanced Wall Treatment methods perform in a way that far exceeds the expectations set by their formal range of validity. Some would rather avoid the difficulty presented by the subtleties of wall-sensitive turbulence models by using the k-ω SST as a default. In my opinion, the choice of turbulence model should be based on comparing the performance of the models (assuming best-practice guidelines are followed and maintained), not on time overhead or a lack of knowledge in applying them; indeed, in what follows I shall present features of the k-ω SST which make this model highly non-optimal for some specific flow applications.
This concludes Part II of “Turbulence Modeling Best Practice Guidelines: Standard EVMs”, where the topics of V&V and near-wall treatment were covered. The main reason for dividing into a few separate posts is to keep each post’s length at bay… 🤓
In “Turbulence Modeling Best Practice Guidelines: Standard EVMs – PART III” I will cover model-specific best-practice guidelines, such as how and when to change the model constants, and specific corrections to popular models aimed at achieving specific, flow-application-dependent goals: sensitivity to adverse pressure gradients, inclusion of rotation and streamline-curvature effects, compressibility corrections, viscous heating, the favorable effects of changing certain limiters, etc… And there is more to come!… So stay tuned!!!