Neural networks have shown significant potential for solving partial differential equations (PDEs). While deep networks can approximate complex functions, direct one-shot training often suffers from limited accuracy and computational efficiency. To address these challenges, we propose Galerkin and collocation adaptive methods that use neural networks to construct basis functions guided by the equation residual. The approximate solution is computed within the space spanned by these basis functions. As the approximation space gradually expands, the solution is iteratively refined; meanwhile, the progressive improvements serve as reliable a posteriori error indicators that guide the termination of the sequential updates. Additionally, we introduce adaptive strategies for collocation point selection and parameter initialization to enhance robustness and improve the expressiveness of the neural networks. We also derive an approximation error estimate and validate the proposed methods with numerical experiments on several challenging PDEs, demonstrating both their high accuracy and their robustness.
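To make the adaptive loop described above concrete, the following is a minimal sketch, not the paper's implementation: it replaces trained network basis functions with single tanh neurons chosen greedily by their correlation with the current residual, solves a least-squares collocation system for the coefficients on a 1D Poisson model problem, refines the collocation points where the residual is largest, and stops when the improvement stalls. All function names, candidate ranges, and tolerances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model problem: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x) used only to report the error.
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

def phi(x, w, b):        # a single tanh "neuron" standing in for a network basis function
    return np.tanh(w * x + b)

def phi_xx(x, w, b):     # its second derivative in closed form
    t = np.tanh(w * x + b)
    return w**2 * (-2.0 * t * (1.0 - t**2))

def solve_in_span(ws, bs, x_col):
    """Least-squares collocation solve for the coefficients in span{phi_j}."""
    A_int = np.stack([-phi_xx(x_col, w, b) for w, b in zip(ws, bs)], axis=1)
    A_bc = np.stack([phi(np.array([0.0, 1.0]), w, b) for w, b in zip(ws, bs)], axis=1)
    A = np.vstack([A_int, A_bc])
    rhs = np.concatenate([f(x_col), np.zeros(2)])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return c, A_int @ c - f(x_col)   # coefficients and interior PDE residual

x_col = np.linspace(0.0, 1.0, 64)    # initial collocation points
ws, bs = [], []
res = -f(x_col)                      # residual of the zero initial guess
prev_err = np.inf

for k in range(30):
    # Residual-guided initialization: among random candidates, keep the neuron
    # whose image under the operator correlates best with the current residual.
    cand_w = rng.uniform(-10.0, 10.0, 200)
    cand_b = rng.uniform(-10.0, 10.0, 200)
    scores = []
    for w, b in zip(cand_w, cand_b):
        col = -phi_xx(x_col, w, b)
        scores.append(abs(col @ res) / (np.linalg.norm(col) + 1e-12))
    j = int(np.argmax(scores))
    ws.append(cand_w[j])
    bs.append(cand_b[j])

    # Solve in the enlarged approximation space.
    c, res = solve_in_span(ws, bs, x_col)

    # Progressive improvement as an a posteriori stopping indicator.
    err = np.sqrt(np.mean(res**2))
    if prev_err - err < 1e-12:
        break
    prev_err = err

    # Adaptive collocation: insert a point next to the largest residual entry.
    i = int(np.argmax(np.abs(res)))
    x_new = 0.5 * (x_col[i] + x_col[min(i + 1, len(x_col) - 1)])
    x_col = np.sort(np.append(x_col, x_new))
    _, res = solve_in_span(ws, bs, x_col)   # residual on the refined point set

x_test = np.linspace(0.0, 1.0, 500)
u_h = sum(ci * phi(x_test, w, b) for ci, w, b in zip(c, ws, bs))
print(f"{len(ws)} basis functions, max error {np.max(np.abs(u_h - u_exact(x_test))):.2e}")
```

In the methods proposed in the paper, each new basis function is itself a neural network trained against the residual (in either a Galerkin or a collocation formulation) rather than a randomly sampled neuron, but the structure of the loop, growing the space, re-solving in its span, and stopping via the observed improvement, is the same.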