Traditional numerical methods face numerous challenges, including high-dimensional problems, the meshing of complex domains, and the error accumulation caused by time iteration. Meanwhile, neural network methods based on optimization-driven training suffer from insufficient accuracy, slow training, and uncontrollable errors owing to the lack of efficient optimization algorithms. Randomized neural network methods have been proposed to combine the advantages of these two approaches while overcoming their shortcomings: they leverage the strong approximation capabilities of neural networks to circumvent the limitations of classical numerical methods, and at the same time avoid the accuracy and training-efficiency issues of fully trained networks. In this talk, we propose Adaptive Growing Randomized Neural Networks for solving PDEs, which incorporate a posteriori error estimation as feedback to adaptively generate the network structure, significantly improving the approximation capability.
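To make the setup concrete, the following is a minimal sketch, not the method presented in the talk, of the underlying idea: a randomized neural network whose hidden weights are drawn at random and frozen, so that only the output weights are fit by linear least squares on PDE collocation residuals, together with a simple growing loop that appends new random neurons whenever a residual-based a posteriori indicator stays above tolerance. The model problem (1D Poisson), the weight-sampling range, the batch size, and the tolerance are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model problem: -u''(x) = f(x) on (0,1), u(0) = u(1) = 0,
# with exact solution u(x) = sin(pi x), hence f(x) = pi^2 sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

def features(x, W, b):
    """tanh random features phi(x) = tanh(w x + b) and their second x-derivatives."""
    t = np.tanh(np.outer(x, W) + b)
    t_xx = -2.0 * t * (1.0 - t**2) * W**2   # d^2/dx^2 tanh(w x + b) by the chain rule
    return t, t_xx

def solve(W, b, x_int, x_bc, bc_weight=100.0):
    """Fit the output weights c by linear least squares on PDE and boundary residuals."""
    phi_int, phi_int_xx = features(x_int, W, b)
    phi_bc, _ = features(x_bc, W, b)
    A = np.vstack([-phi_int_xx, bc_weight * phi_bc])
    rhs = np.concatenate([f(x_int), np.zeros(len(x_bc))])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return c

x_int = np.linspace(0.0, 1.0, 200)[1:-1]    # interior collocation points
x_bc = np.array([0.0, 1.0])                 # boundary points
x_test = np.linspace(0.0, 1.0, 1000)        # points for the error indicator

# Grow the network until the residual-based indicator is small enough
# (an assumed stand-in for the a posteriori error estimator).
W = np.empty(0)
b = np.empty(0)
for step in range(10):
    # Growing step: append a batch of new random neurons, then refit.
    W = np.concatenate([W, rng.uniform(-5.0, 5.0, 20)])
    b = np.concatenate([b, rng.uniform(-5.0, 5.0, 20)])
    c = solve(W, b, x_int, x_bc)
    _, phi_xx = features(x_test, W, b)
    residual = np.max(np.abs(-phi_xx @ c - f(x_test)))
    print(f"neurons={len(W):4d}  max PDE residual={residual:.2e}")
    if residual < 1e-6:
        break

phi, _ = features(x_test, W, b)
print("max error vs exact solution:", np.max(np.abs(phi @ c - u_exact(x_test))))
```

Since the hidden layer is fixed, each refit reduces to one linear least-squares solve, which is what makes this family of methods fast and its errors controllable; the adaptive version in the talk refines how and where the network grows.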