Aerodynamic Coefficients Prediction via Cross-Attention Fusion and Physical-Informed Training

Authors: Yueqing Wang, Peng Zhang, Yushuang Liu, Jianing Zhao, Jie Lin, Yi Chen

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental validation demonstrates that our proposed method performs excellently in multiple aerodynamic prediction tasks. This achievement brings a new technological breakthrough to the field of aerodynamic prediction and provides robust support for the design optimization of complex systems such as aircraft and vehicles. Experiments: To comprehensively demonstrate the capabilities of the proposed rapid aerodynamic prediction model, we conduct tests on 3D point cloud datasets from both automotive and aircraft domains.
Researcher Affiliation | Academia | (1) State Key Laboratory of Aerodynamics, Sichuan, China; (2) Computational Aerodynamics Institute, China Aerodynamics Research and Development Center, Sichuan, China; (3) Sichuan Tianfu Fluid Big Data Research Center, Chengdu, China; (4) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; (5) College of Computer Science and Technology, National University of Defense Technology, Changsha, China
Pseudocode | No | The paper describes the architecture, components (Shape Encoder, Flow Condition Encoder, Cross-Attention Fusion, Prediction Head), and learning algorithm in detail but does not provide structured pseudocode blocks or algorithms with numbered steps.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | DrivAerNet++: DrivAerNet++ comprises 8000 diverse car designs modeled with high-fidelity CFD simulations (Elrefaie et al. 2024), and is integrated into NVIDIA Modulus. The dataset includes diverse car configurations.
Dataset Splits | Yes | The training and testing set comprises shapes 1 to 900, encompassing samples under flow conditions within a refined range, the middle 80% of the values for Ma and AoA. The validation sets are further categorized into three types: the shape validation set utilizes shapes 901 to 1000, totaling 4524 samples under the same flow conditions; the condition validation set employs shapes 1 to 900 but under extreme flow conditions, i.e., the extreme 10% of the data for both Ma and AoA, containing 2425 samples; finally, the shape and condition generalization validation set also utilizes shapes 901 to 1000 but under these extreme flow conditions, with 291 samples. This split methodology comprehensively evaluates the model's generalization capabilities under various conditions.
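The four-way split described above (seen/unseen shapes crossed with middle-80%/extreme-10% flow conditions) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the `shape_id`, `ma`, and `aoa` field names and the percentile-based thresholding are assumptions about how the ranges were computed.

```python
def percentile(values, q):
    """Linear-interpolation percentile (q in [0, 100]); stdlib-only sketch."""
    s = sorted(values)
    k = (len(s) - 1) * q / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def split_samples(samples):
    """samples: list of dicts with hypothetical keys 'shape_id', 'ma', 'aoa'.

    Train on shapes 1-900 under the middle 80% of Ma/AoA values; hold out
    shapes 901-1000 and the extreme 10% tails of each condition for the
    three validation sets described in the paper.
    """
    ma_vals = [s["ma"] for s in samples]
    aoa_vals = [s["aoa"] for s in samples]
    ma_lo, ma_hi = percentile(ma_vals, 10), percentile(ma_vals, 90)
    aoa_lo, aoa_hi = percentile(aoa_vals, 10), percentile(aoa_vals, 90)

    def mid_range(s):  # inside the middle 80% of BOTH Ma and AoA
        return ma_lo <= s["ma"] <= ma_hi and aoa_lo <= s["aoa"] <= aoa_hi

    splits = {"train": [], "val_shape": [], "val_cond": [], "val_both": []}
    for s in samples:
        seen_shape = s["shape_id"] <= 900
        if seen_shape and mid_range(s):
            splits["train"].append(s)
        elif not seen_shape and mid_range(s):
            splits["val_shape"].append(s)   # new shapes, seen conditions
        elif seen_shape:
            splits["val_cond"].append(s)    # seen shapes, extreme conditions
        else:
            splits["val_both"].append(s)    # new shapes, extreme conditions
    return splits
```

The four buckets partition the sample set, which is what makes the three validation sets disjoint tests of shape, condition, and joint generalization.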
Hardware Specification | Yes | The work is carried out on an Nvidia A100 GPU featuring 40GB of memory capacity, leveraging the PyTorch 1.10.1 framework.
Software Dependencies | Yes | The work is carried out on an Nvidia A100 GPU featuring 40GB of memory capacity, leveraging the PyTorch 1.10.1 framework.
Experiment Setup | Yes | We utilize AdamW as the optimizer and employ the cosine learning rate strategy for learning rate scheduling, where the learning rate gradually increases from a minimum of 10^-7 to a maximum of 10^-4, repeating every 60 epochs. To mitigate overfitting, we set the weight decay coefficient to 5×10^-4. Furthermore, to smoothly initiate the training process, we include a warm-up period of 60 epochs and continually adjust and optimize the model performance over the total 300 epochs. Additionally, we use MRE Loss to complete the network training, ensuring that our models minimize the average relative difference between predicted and actual aerodynamic values. In the shape encoder and cross-attention fusion, we adopt the same network structure as PointGPT, allowing for direct fine-tuning of the pre-trained network. For other components, to reduce the impact of specialized network architectures and to highlight the cross-attention fusion and physical-informed training approaches presented in this work, we utilize simple MLP structures. Specifically, in the flow condition encoder section, we employ an MLP with dimensions {fin, 64, 128, 512}, where fin represents the flow condition dimension. Each prediction head utilizes an MLP with dimensions {256, 256, 1}.
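The schedule and loss described above can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: it assumes a linear warm-up from 10^-7 to 10^-4 over the first 60 epochs followed by cosine cycles restarting every 60 epochs (the same shape PyTorch's CosineAnnealingWarmRestarts produces), and it assumes MRE Loss means a mean relative error over predicted coefficients.

```python
import math

LR_MIN, LR_MAX = 1e-7, 1e-4          # minimum / maximum learning rate
WARMUP_EPOCHS, CYCLE_EPOCHS = 60, 60  # warm-up length and restart period

def learning_rate(epoch):
    """LR at a given epoch: linear warm-up, then restarting cosine decay."""
    if epoch < WARMUP_EPOCHS:
        # linear ramp from LR_MIN up to LR_MAX
        return LR_MIN + (LR_MAX - LR_MIN) * epoch / WARMUP_EPOCHS
    t = (epoch - WARMUP_EPOCHS) % CYCLE_EPOCHS  # position within the cycle
    cos = 0.5 * (1 + math.cos(math.pi * t / CYCLE_EPOCHS))
    return LR_MIN + (LR_MAX - LR_MIN) * cos

def mre_loss(pred, target, eps=1e-8):
    """Mean relative error between predicted and true aerodynamic values."""
    return sum(abs(p - t) / (abs(t) + eps)
               for p, t in zip(pred, target)) / len(pred)
```

Restarting the cosine cycle every 60 epochs means the 300-epoch run sees the peak learning rate several times, which is the usual motivation for warm restarts: periodically escaping sharp minima without hand-tuned step drops.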