A Dual-Mesh Simulation Strategy for Improved AV-8B Empennage Buffet Load Prediction
Nathan Hariharan and James Hunt
US Naval Air Systems Command (NAVAIR), Patuxent River, MD
nathan.hariharan.ctr@hpcmo.hpc.mil, james.p.hunt@navy.mil

Andy Wissink and Venke Sankaran
US Army/AFDD, Ames Research Center, Moffett Field, CA
andrew.m.wissink@us.army.mil
Abstract
This paper describes efforts to model the full three-dimensional unsteady loads due to aerodynamic and engine exhaust effects for an AV-8B aircraft model. The study utilizes USM3D for baseline comparison, and uses a Computational Research and Engineering Acquisition Tools and Environments-Air Vehicles (CREATE-AV) product, Helios, to transition into an engineering prediction tool. The comparative advantages of the different aerodynamic load prediction codes involved, and the capability gaps to routinely predict engineering loads for buffet performance, are also reported.
1. Introduction
Unsteady aero-loads impinging on the empennage and vertical/horizontal tail assembly of the AV-8B aircraft are a leading cause of premature fatigue cracking of the underlying structure. Current practices for modeling the aft fuselage structure use industry-standard design methods that lead to conservative designs. Moreover, such designs, which often do not fully account for the variety of unsteady aerodynamic loading possibilities, have fatigue issues that are not addressed until the aircraft is in service, resulting in costly retro-analysis and design fixes. This paper focuses on efforts to model the full three-dimensional (3D) aerodynamic unsteady loads and engine exhaust effects. The unsteady analysis required for buffet load predictions leads to further complexities. Specifically, one is interested in:

- Origins of shed vortices, and their flow regimes
- Location of flow-vortex impingement on the aft-tails
- Location of engine-exhaust impingement on the aft-tails
- Computing and convecting unsteady shear-layer structures that can excite structural modes on the aft-tails
- Addressing possible mechanisms for mitigating the unsteady aerodynamic loads

The geometric complexities of a full-up aircraft with all the associated pylons, missiles, and pods require flexible and efficient computational fluid dynamics (CFD) simulation tools that utilize unstructured meshes. Traditionally, codes such as USM3D and Cobalt have been used for steady-state simulations. For the analysis of fatigue problems, however, the unsteady aerodynamic load drivers that feed structural analysis are of interest. In this study, we first investigate the use of USM3D, and determine the framework necessary to compute unsteady solutions with fidelity sufficient to meaningfully couple with structural models. Apart from unsteady aerodynamic loads, structural vibration modes of the empennage can also be triggered by engine exhaust plume impingement. Capturing the necessary physics of engine plume shear-layers poses its own computational challenges, including dissipation issues associated with second-order codes. The current strategy is to counter these with localized solution-adapted refinement, and to use large-scale computing power to ensure that unsteady force-triggers on the aft-tail surfaces are captured. This study utilizes USM3D for baseline comparison, and uses a Computational Research and Engineering Acquisition Tools and Environments-Air Vehicles (CREATE-AV) product, Helios, to transition into an engineering prediction tool. The comparative advantages of the different aerodynamic load prediction codes involved, and the capability gaps to routinely predict engineering loads for buffet performance, are also analyzed.

978-0-7695-4392-5 © 2010 U.S. Government Work Not Protected by U.S. Copyright. DOI 10.1109/HPCMP-UGC.2010.34
The paper is organized as follows. Section 2 describes the process of grid-generation adopted. Section 3 analyzes the results from USM3D simulations and Section 4 describes the results from Helios simulations using static meshes and Automated Mesh Refinement (AMR).
2. Grid Generation
The AV-8B grid model was inherited from an earlier study of abrupt wing-stall by Chung et al.[1] at the US Naval Air Systems Command (NAVAIR). Figure 1 shows the AV-8B aircraft. The available geometry had a wing with flow-through ducts for the engine inlet/outlet ducts. Figure 2 shows the surface pressure distribution from a typical high-AoA computation, and Figure 3 shows the span-wise sectional lift distribution as a function of steady-state angle-of-attack (from Reference 1). In the study by Chung et al.[1], VGRID was used to generate the volume grid with carefully selected source clustering. VGRID is traditionally known to produce excellent-quality grids if the source functions are correctly placed. For relatively large grids, VGRID[2] is known to take anywhere from 4–24 hours to generate the volume grid. In this effort, the focus is on understanding the source mechanisms that contribute to fatigue loads, and hence each computation is more exploratory. Therefore, the grid generation process needs to be less set-up intensive, and the turn-around time for generating volume grids faster. One of the charters of CREATE-AV is to propagate the use of Computational-Based Engineering (CBE) to non-CFD experts, and therefore the process used to arrive at the solution cannot be too dependent on the fine-tuned expertise of the user. A fast and efficient process to arrive at a good-quality engineering solution is just as important, and is a concurrent focus of this effort. Two different approaches to generating the surface and volume grids were explored. First, an existing (cleaned-up) surface geometry made up of popularly used PLOT3D patches was taken as the starting point. Modifications were made to the surface definition in order to enforce engine inlet/outlet boundary conditions downstream at the flow-solver level, using the commercial grid generation package Gridgen[3].
Amongst the large-scale applied CFD group at NAVAIR, Gridgen is widely employed as an effective means of fixing surface geometric issues. Once the surface geometry was defined, the surface grid was generated using Gridgen's automatic surface triangulation capabilities. This capability is fairly robust if the surface definition is watertight and devoid of tolerance issues. However, surface grid clustering needs to be fixed by fine-tuning the grid point distribution on individual patches. It takes a considerable amount of man-hours to get this process right in order to provide the volume grid solvers with a good surface grid definition. Once a satisfactory surface grid is generated, it is used as the basis from which a volume grid is generated. The second approach used previously-generated volume grids of similar configurations as a starting point. In such instances it is possible to take the volume grid and strip the surface grid out using packages such as Ensight[4]. Such surface grids are already well-clustered, and require fewer tweaks with packages such as Gridgen to arrive at the requisite final surface grid. In this study, both of these options were available; in the final analysis, the second option provided the better surface grid, and was utilized. Figure 4 shows the surface grid for the AV-8B geometry that was used in the studies reported in this paper. Once a satisfactory surface grid distribution was obtained, AFLR3[5] was used to generate the volume grid. This is a fairly straightforward process, and after a few passes at getting the boundary-layer distribution correct, volume grids of different densities were generated (9 million and 15 million grid points). If the surface grid definition is clean, AFLR3 works without any problems and generates volume grids within 20–30 minutes on a standard Linux workstation. Figure 5 shows two different sectional views of the tetrahedral volume grids with embedded boundary grids generated by AFLR3.
that enforces the correct mass balance between the engine inlet and outlet. However, enforcing this boundary condition consistently resulted in divergence of the flow solution. Some possible explanations and fixes to make the 1D engine model work were discerned, but not explored further, as the main focus of this effort was to try to capture unsteady excitations at the tail. Therefore, the engine was modeled purely as an exhaust condition. Figure 6 shows iso-surface contours of x-momentum, colored by vorticity magnitude, from a steady-state simulation showing the effect of the engine wake at zero angle-of-attack flight conditions. The edge of the engine exhaust shear-layer hits the horizontal tail surface. Figure 7 shows the same iso-surface contours from different viewpoints for a closer look at the likely impingement location on the horizontal tail. The challenges of accurately simulating shear-layer instabilities are well known and documented[9]. In this simulation, the magnitude of any shear-layer-associated perturbation is largely damped out within the first diameter of the jet exhaust. Figure 8 shows a grid cross-section plotted on top of the iso-momentum surface. Clearly, the grid density drops off away from body surfaces and contributes to the dissipation of the unsteady content. Moreover, the spatial accuracy of typical tetrahedral unstructured solvers, including USM3D, is second-order; achieving sufficient unsteady fidelity would take a large number of targeted grid refinements across the shear layer, and remains a challenge. Figure 9 shows a close-up view of the engine-exhaust shear-layer x-momentum contours. The shape of the AV-8B exhaust results in a plume with a tendency to fold over, possibly leading to further instability. The engine-exhaust simulation was repeated at a high angle-of-attack of 20 degrees to see if the region of the aft empennage painted by the engine exhaust differs significantly over typical flight maneuvers.
Figure 10 shows different views of similar momentum iso-surface contours; the exhaust is seen to impinge more directly on the tail surface. The vertical tail remains unaffected by the engine exhaust, and any vertical tail excitation can be attributed to aerodynamic vortex impingement.
they reach the tail structures. Inadequate grid density distributions and the lower spatial order-of-accuracy of the computations contribute to the dissipation. Typically, a second-order-accurate flow solver requires 20–30 points across the core of a vortex to numerically convect it without dissipation. In this context, there are several impediments to using a computational strategy such as this as a means for engineering predictions of buffet drivers over a large range of flight conditions.

1. Global refinement over the entire aircraft, providing 20–30 grid points across all the vortices produced over the aircraft, would result in an untenable number of grid points for engineering computations.
2. Targeted refinement to vorticity and gradients (USM3D does not have such a capability) may work if automated, but has historically proven difficult to achieve without a good deal of expertise in controlling the refinements. Moreover, even with targeted refinements, it is impossible to provide that many grid points along the path of the evolving vortices using second-order-accurate computations.
3. Further, in most instances of engineering use, as in the case of the AV-8B, it may be known that fatigue occurs, but the exact conditions that contribute to maximum fatigue are not known a priori. A whole suite of flight conditions has to be investigated, and hence any computational strategy to arrive at unsteady tail loads without numerical dissipation necessarily needs to incorporate automatic grid refinement that is robust and does not require expert user interference.

With these challenges in mind, in the next section we investigate the use of CREATE-AV's Helios code as a possible tool to capture unsteady tail buffet loads.
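The first impediment above can be made concrete with a back-of-the-envelope count. The sketch below is purely illustrative: the domain span and vortex-core diameter are hypothetical numbers chosen for the example (the source gives only the 20–30 points-per-core requirement), and show why uniform global refinement becomes untenable.

```python
# Illustrative estimate: grid count if the whole domain were refined to the
# spacing needed to resolve a vortex core with a second-order scheme.
# The 15 m span and 0.1 m core diameter are ASSUMED example values.

def points_per_axis(length_m, core_diameter_m, points_across_core):
    """Points along one axis at a uniform spacing that places
    `points_across_core` points across a vortex core."""
    spacing = core_diameter_m / points_across_core
    return round(length_m / spacing)  # round() avoids float truncation

n_axis = points_per_axis(length_m=15.0, core_diameter_m=0.1,
                         points_across_core=25)  # mid-range of 20-30
n_total = n_axis ** 3  # uniform refinement in all three directions
print(n_axis, n_total)  # 3750 points per axis -> ~5e10 cells
```

Even with these modest assumed dimensions, the uniform-spacing count lands in the tens of billions of cells, far beyond routine engineering computations, which motivates the adaptive dual-mesh strategy of the next section.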
infrastructure, and employs an efficient fifth-order spatially-accurate solver, SAMARC[12]. The use of high-order spatially-accurate methods has been demonstrated to compute vortex-laden flowfields efficiently[13]. The Helios platform deploys overset hole cutting using an implicit methodology[14] that does not require any user interference. The overset connectivity is handled by PUNDIT[15], and the entire process supports parallel/distributed computation. Figure 13 illustrates the various components of the Helios platform. Further, the SAMRAI infrastructure supports the ability to automatically perform Cartesian refinement and adapt to geometric and flow features. The CREATE-AV release of Helios slated for early 2010 (named Whitney) does not support the automated refinement functionality, but the 2011 release (Shasta) will officially support the feature. In this work, three levels of high-AoA simulations under the Helios infrastructure are reported: i) simulation using a purely mixed-element unstructured, second-order spatially-accurate solver (NSU3D); ii) a dual-grid simulation with a mixed-element, second-order near-body solver (NSU3D) and a fifth-order Cartesian off-body solver (SAMARC); and iii) a dual-grid simulation with adaptive Cartesian meshing. The Helios simulations were conducted only for analyzing aerodynamic effects, and engine exhaust conditions were not included.
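The division of labor per time step in the dual-mesh paradigm can be sketched schematically. This is NOT the Helios API: every class and function name below is hypothetical, chosen only to illustrate the roles described above (overset data exchange in the PUNDIT role, a near-body unstructured solve in the NSU3D role, an off-body Cartesian solve in the SAMARC role).

```python
# Schematic sketch of one coupled dual-mesh time step; all names hypothetical.
log = []  # records the order of operations for illustration

class StubSolver:
    def __init__(self, name):
        self.name = name
    def advance(self, dt):
        log.append(f"{self.name}.advance")  # stand-in for a real flow solve

class StubConnectivity:
    def exchange(self, near, off):
        # Overset donor/receiver interpolation would fill fringe points of
        # each mesh from the other; here we only record that it happens.
        log.append("connectivity.exchange")

def dual_mesh_step(near, off, conn, dt):
    """One coupled step: exchange overset data, then advance both solvers."""
    conn.exchange(near, off)
    near.advance(dt)
    off.advance(dt)

near = StubSolver("near_body")   # unstructured, second-order (NSU3D role)
off = StubSolver("off_body")     # Cartesian, fifth-order (SAMARC role)
dual_mesh_step(near, off, StubConnectivity(), dt=1.0e-3)
print(log)  # ['connectivity.exchange', 'near_body.advance', 'off_body.advance']
```

The point of the sketch is the separation of concerns: the near-body solver owns the boundary layer and geometry, the off-body solver owns the wake, and the connectivity module is the only place the two meet.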
accurate simulation results in weak unsteady aerodynamic perturbation near the tail area.
that avoids the need for user tuning is currently being tested, and is planned to be part of the Shasta release of Helios. The Automatic Mesh Refinement (AMR) case used the same near-body mesh used in the fixed-refined case (5.55M nodes). The off-body mesh used up to 8 levels of refinement (each level in the Cartesian grid framework doubles the number of grid points along each of the axes compared to the previous level), one level finer than the fixed-refined case shown earlier. The case was first run with refinement applied to the problem geometry (i.e., no solution refinement) in order to dissipate non-physical startup transients. It has been found from past calculations that the AMR scheme attempts to track and preserve the startup transients if it is turned on at initialization, and it is helpful to first converge a solution on the geometry-refined mesh before turning on solution-based refinement. The geometry-refined off-body mesh system contained 12.7M off-body nodes; the time-per-step for the dual-mesh calculation was 3.61 sec on 64 processors, with 75% of the total time spent in the near-body solver (NSU3D) and 25% in the off-body solver (SAMARC). Figure 19 shows a cross-section of the near-body and off-body grids when Helios is allowed to adapt the off-body Cartesian grids based on the geometric features and near-body grid resolution. The difference between the off-body Cartesian grid structures of the current simulation and the simulation described in Section 4.2 can be seen by comparing Figure 19 with Figures 16a/b. In Figure 19, the finest-level mesh is adapted to the geometry and is not uniformly fine over a region covering the entire aircraft as in Figures 16a/b. Figure 20 shows the evolving vorticity field with only the geometry-based adaption active. Initially, the geometry-refined solution was converged. Then the case was run further with the vorticity-based solution refinement turned on.
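The level structure described above (each level doubling the points per axis, i.e., halving the spacing) can be written out directly. A minimal sketch, assuming only the doubling rule stated in the text; the coarsest spacing of 1.0 is an arbitrary illustrative unit.

```python
# Cell spacing at each Cartesian refinement level: every level halves the
# spacing (doubles the points per axis) relative to the previous one.

def level_spacing(coarse_spacing, level):
    """Spacing of Cartesian level `level`, with level 0 the coarsest."""
    return coarse_spacing / (2 ** level)

coarse = 1.0  # hypothetical coarsest off-body spacing (arbitrary units)
spacings = [level_spacing(coarse, k) for k in range(8)]  # 8 levels, as in the text
print(spacings)  # finest level (7) is 2**7 = 128x finer than the coarsest
```

This geometric progression is what makes localized adaptation affordable: the finest cells are reserved for vortex paths and geometry, while most of the domain stays at coarse levels.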
Mesh adaptation takes place every 250 steps, totaling 40 adapt cycles over the simulation. The final mesh system at the end of the simulation contained 20.0M off-body nodes; the time-per-step was 4.18 sec on 64 processors, with 68% of the time spent in the near-body solver (NSU3D) and 32% in the off-body solver (SAMARC). Figure 20 shows several streamwise sections of the refined Cartesian grid along the length of the aircraft. The grid refinement is seen to track the approximate evolution of the vortices generated. Figure 21 shows vorticity contours overlaid with the refined grid at a spanwise section along the wing. The vorticity transported from the near-body solution near the wing is convected in the off-body grid all the way to the empennage, with no noticeable loss, due to the localized refinement of the Cartesian grid. Figure 22 shows iso-vorticity contours colored by w-velocity over the entire aircraft, similar to Figure 18. The
richness of the vorticity field in the vicinity of the empennage sections is remarkably well captured without any noticeable dissipation, due to a combination of the fifth-order spatially-accurate algorithm and the adaptive provision of enough Cartesian grid points across the path of the vortex structures. A central advantage in this approach is the ability of the method to automatically adapt the background Cartesian grid with changing flight conditions without having to go back and regenerate the near-body unstructured grid.
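The run accounting quoted above is simple arithmetic on the reported numbers, worked out below as a sanity check; every figure comes from the text, and the per-solver split is just the stated fractions of the total step time.

```python
# Back-of-the-envelope accounting for the AMR run described above.

adapt_interval = 250   # steps between mesh adaptations (from the text)
adapt_cycles = 40      # adapt cycles over the simulation (from the text)
total_steps = adapt_interval * adapt_cycles  # implied number of time steps

time_per_step = 4.18   # seconds per step on 64 processors, adapted mesh
near_body = 0.68 * time_per_step  # NSU3D share of each step (68%)
off_body = 0.32 * time_per_step   # SAMARC share of each step (32%)
print(total_steps, round(near_body, 2), round(off_body, 2))
# 10000 steps; ~2.84 s/step in NSU3D, ~1.34 s/step in SAMARC
```

Note that adaptation shifts the balance toward the off-body solver (32% versus 25% on the geometry-refined mesh), consistent with the off-body node count growing from 12.7M to 20.0M.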
accurate convection of the vortex structures hitting the empennage (in an aerodynamic setting, convecting engine shear-layer unsteady features was not explored at this point). A central advantage in this approach is the ability of the method to automatically adapt the background Cartesian grid with changing flight conditions without having to go back and regenerate the near-body unstructured grid. Further detailed studies are recommended to validate the Helios methodology by incorporating engine exhaust effects, and coupling the tail unsteady loads to a structural model.
Acknowledgements
Material presented in this paper is a product of the CREATE-AV Element of the Computational Research and Engineering for Acquisition Tools and Environments (CREATE) Program sponsored by the US Department of Defense High Performance Computing Modernization Program Office. Dr. Robert Meakin is the Program Manager for CREATE-AV. The first author would like to thank Mr. Joseph Laiosa (CREATE-AV, NAVAIR 4.3.2.1) for his guidance in the grid-generation process, and Dr. David Findlay (NAVAIR 4.3.2.1) for his support.
References

1. Chung, J. and P. Parikh, "A Computational Study of the Abrupt Wing-Stall (AWS) Characteristics for Various Fighter Jets: Part II: AV-8B and F/A-18C." AIAA 2003-0747, Reno, NV, January 2003.
2. VGRID Unstructured Grid, http://tetruss.larc.nasa.gov/vgrid.
3. Gridgen, http://www.pointwise.com.
4. Ensight, http://www.ensight.com.
5. AFLR3, http://www.simcenter.msstate.edu/docs/solidmesh.
6. T/AV-8B Fatigue Analysis Report, Boeing Company, prepared for Naval Air Systems Command, May 1981.
7. Parikh, P., "Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver." AIAA-2001-2584, June 2001.
8. USM3D online manual, http://tetruss.larc.nasa.gov/usm3d/index.html.
9. Arienti, M., C. Eckett, and N. Hariharan, "Allstar Hybrid-LES Validation." UTRC Report #R2005-2.105.0027-4, United Technologies Research Center, East Hartford, CT, March 2005.
10. Sitaraman, J., et al., "Application of Helios Computational Platform to Rotorcraft Flowfields." 48th AIAA Aerospace Sciences Meeting, Orlando, FL, January 2010.
11. Mavriplis, D.J. and V. Venkatakrishnan, "A Unified Multigrid Solver for the Navier-Stokes Equations on Mixed Element Meshes." International Journal for Computational Fluid Dynamics, Vol. 8, pp. 247–263, 1997.
12. Wissink, A.M., J. Sitaraman, V. Sankaran, D.J. Mavriplis, and T.H. Pulliam, "A Multi-Code Python-Based Infrastructure for Overset CFD with Adaptive Cartesian Grids." AIAA-2008-0927, 46th AIAA Aerospace Sciences Meeting, Reno, NV, January 2008.
13. Hariharan, N., "Rotary-Wing Wake-Capturing: High-Order Schemes Towards Minimizing Numerical Vortex Dissipation." Journal of Aircraft, Vol. 39, No. 5, pp. 822–830, September–October 2002.
14. Lee, Y. and J. Baeder, "Implicit Hole Cutting: A New Approach to Overset Connectivity." AIAA-2003-4128, 16th AIAA CFD Conference, Orlando, FL, June 2003.
15. Sitaraman, J., M. Floros, A.M. Wissink, and M. Potsdam, "Parallel Unsteady Overset Mesh Methodology for a Multi-Solver Paradigm with Adaptive Cartesian Grids." AIAA-2008-7117, 26th AIAA Applied Aerodynamics Conference, Honolulu, HI, 2008.
16. Kamkar, S., et al., "Automated Grid-Refinement Using Feature Detection." AIAA 2009-1496, 47th AIAA Aerospace Sciences Meeting, Orlando, FL, January 2009.
Figure 7. x-momentum iso-surface contours colored by vorticity magnitude, showing the jet exhaust at zero angle-of-attack conditions. (a) View from behind, and (b) view from top showing the interaction point with the horizontal tail.
Figure 8. x-momentum iso-surface contours colored by vorticity magnitude, showing the jet exhaust at zero angle-of-attack conditions. Grid plotted over the jet iso-surface.
Figure 5. Side and front views of the volume grid with embedded boundary-layer grid, generated using AFLR3.
Figure 9. Close-up view of the jet x-momentum iso-surface showing fold-over due to the exhaust shape. Grid plotted over the iso-surface contours.
Figure 6. x-momentum iso-surface contours colored by vorticity magnitude, showing the jet exhaust. Half-plane model solved using USM3D. Jet modeled as an exhaust boundary condition.
Figure 12. Overall view of vorticity contours at several streamwise locations across the aircraft. Twenty-degree angle-of-attack conditions.

Figure 10. x-momentum iso-surface contours colored by vorticity magnitude, showing the jet exhaust at twenty-degree angle-of-attack conditions, top and bottom views. Only the horizontal tail is directly impinged.
Figure 13. Dual-mesh paradigm used in the Helios platform with unstructured near-body grids to capture geometric features and boundary-layer near the body surface, and block-structured Cartesian grids to capture far-field flow features.
Figure 11. Directional vorticity contours at several streamwise locations across the aircraft. Twenty-degree angle-of-attack conditions. Grid density plotted along with the vorticity.
Figure 14. Directional vorticity contours at several streamwise locations across the aircraft. Twenty-degree angle-of-attack conditions. Computations using single-grid NSU3D under the Helios environment.
Figure 16c. Front view of a sectional cut of a coupled near-body (NSU3D) and off-body (SAMARC) computation using Helios. Cartesian off-body grids get progressively finer near the body.
Figure 15. Vorticity iso-surface over the entire aircraft colored by z-velocity. Twenty degree angle-of-attack conditions. Computations using single-grid NSU3D under the Helios environment.
Figure 16a. Side view of a sectional cut of a coupled near-body (NSU3D) and off-body (SAMARC) computation using Helios.
Figure 17. Directional vorticity contours at several streamwise locations across the aircraft. Twenty-degree angle-of-attack conditions. Computations from composite near-body (NSU3D) and off-body (SAMARC) overset computations under the Helios environment. Off-body solutions are 5th-order-accurate in spatial resolution.
Figure 16b. Top view of a sectional cut of a coupled near-body (NSU3D) and off-body (SAMARC) computation using Helios. Cartesian off-body grids get progressively finer near the body.
Figure 18. Vorticity iso-surface over the entire aircraft colored by z-velocity. Twenty-degree angle-of-attack conditions. Computations from composite near-body (NSU3D) and off-body (SAMARC) overset computations under the Helios environment. Off-body solutions are 5th-order-accurate in spatial resolution.
Figure 19. (a) Off-body Cartesian grids adapted to geometry by the Helios platform. (b) Vorticity iso-surface in the evolving flowfield colored by z-velocity from a first-pass solution prior to flow-based Cartesian adaptation.
Figure 21. Spanwise sectional view of vorticity contours overlaid with the adapted off-body Cartesian mesh.
Figure 20. Streamwise sectional views of the off-body Cartesian grids refined to track vorticity.
Figure 22. Vorticity iso-surface over the entire aircraft colored by z-velocity. Twenty-degree angle-of-attack conditions. Helios computation employing off-body Cartesian grid refinement (5th-order accuracy).