The Eclipsing Binaries via Artificial Intelligence. II. Need for Speed in PHOEBE Forward Models

Bibliographic Details
Main Authors: Marcin Wrona, Andrej Prša
Format: Article
Language: English
Published: IOP Publishing, 2025-01-01
Series: The Astrophysical Journal Supplement Series
Online Access: https://doi.org/10.3847/1538-4365/ada4ae
Description
Summary: In modern astronomy, the quantity of data collected has vastly exceeded the capacity for manual analysis, necessitating the use of advanced artificial intelligence (AI) techniques to assist scientists with the most labor-intensive tasks. AI can optimize simulation codes where computational bottlenecks arise from the time required to generate forward models. One such example is PHOEBE, a modeling code for eclipsing binaries (EBs), where simulating individual systems is feasible, but analyzing observables for extensive parameter combinations is highly time-consuming. To address this, we present a fully connected feedforward artificial neural network (ANN) trained on a data set of over one million synthetic light curves generated with PHOEBE. Optimization of the ANN architecture yielded a model with six hidden layers, each containing 512 nodes, providing an optimized balance between accuracy and computational complexity. Extensive testing enabled us to establish the ANN's applicability limits and to quantify the systematic and statistical errors associated with using such networks for EB analysis. Our findings demonstrate the critical role of dilution effects in parameter estimation for EBs, and we outline methods to incorporate these effects in AI-based models. The proposed ANN framework enables a speedup of over 4 orders of magnitude compared to traditional methods, with systematic errors not exceeding 1%, and often as low as 0.01%, across the entire parameter space.
ISSN: 0067-0049
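
The abstract describes the emulator as a fully connected feedforward ANN with six hidden layers of 512 nodes each, mapping EB parameters to a PHOEBE-like light curve. The following is a minimal sketch of such an architecture, not the authors' code; the input size (number of EB parameters) and output size (number of light-curve phase points) are illustrative assumptions.

```python
# Minimal sketch of a six-hidden-layer, 512-node feedforward emulator
# (illustrative only; dimensions below are hypothetical, not from the paper).
import torch
import torch.nn as nn

N_PARAMS = 7      # hypothetical number of input EB parameters
N_PHASES = 201    # hypothetical number of output phase points in the light curve

layers = []
in_features = N_PARAMS
for _ in range(6):                                   # six hidden layers, 512 nodes each
    layers += [nn.Linear(in_features, 512), nn.ReLU()]
    in_features = 512
layers.append(nn.Linear(in_features, N_PHASES))      # output layer: flux at each phase point

emulator = nn.Sequential(*layers)

# Usage: a batch of parameter vectors in, a batch of synthetic light curves out.
params = torch.rand(32, N_PARAMS)                    # 32 random parameter sets
fluxes = emulator(params)                            # shape: (32, N_PHASES)
```

Because a single forward pass through such a network replaces a full PHOEBE forward model, batched evaluation of many parameter combinations is what yields the multi-order-of-magnitude speedup reported in the abstract.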