In various binary regression learning problems, classification accuracies have been persistently improving with new methods. How do we know whether we are reaching the limit or there is still room for improvement? Such goodness-of-fit (GOF) assessment questions are important in data science and applications. In this talk, we will review classical approaches to GOF assessment in binary regression and develop new methods to overcome their major drawbacks. In particular, we advocate a data-splitting approach with multiple splitting ratios to determine whether a general classification algorithm (parametric or not) has a lack of fit for the data at hand. Simulation and real data examples show the advantages of the proposed methods. This talk is based on joint work with Jie Ding, Chunling Lu and Jiawei Zhang.
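The abstract does not spell out the procedure, but the general flavor of held-out GOF assessment can be illustrated with a small sketch. The code below is a hypothetical example, not the authors' method: it fits a (deliberately misspecified) logistic regression on a training split, then computes a Hosmer-Lemeshow-style Pearson statistic on the held-out split, repeating over several splitting ratios. All function names and modeling choices here are illustrative assumptions.

```python
# Hypothetical sketch of held-out goodness-of-fit checking for a binary
# classifier; NOT the specific procedure proposed in the talk.
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Fit logistic regression by Newton's method (X includes an intercept column)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                      # score vector
        H = X.T @ (X * (p * (1 - p))[:, None])    # observed information
        beta += np.linalg.solve(H, grad)
    return beta

def holdout_gof_stat(probs, y, n_groups=5):
    """Pearson-type statistic on held-out data: bin points by predicted
    probability, then compare observed vs expected event counts per bin."""
    order = np.argsort(probs)
    stat = 0.0
    for idx in np.array_split(order, n_groups):
        expected = probs[idx].sum()
        observed = y[idx].sum()
        var = (probs[idx] * (1 - probs[idx])).sum()
        if var > 0:
            stat += (observed - expected) ** 2 / var
    return stat

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
true_p = 1.0 / (1.0 + np.exp(-(x + 0.5 * x**2)))  # truth is nonlinear in x
y = rng.binomial(1, true_p).astype(float)

# Repeat the check over multiple data-splitting ratios.
for ratio in (0.25, 0.5, 0.75):
    n_train = int(ratio * n)
    X_train = np.column_stack([np.ones(n_train), x[:n_train]])   # misspecified: linear in x only
    beta = fit_logistic(X_train, y[:n_train])
    X_test = np.column_stack([np.ones(n - n_train), x[n_train:]])
    probs_test = 1.0 / (1.0 + np.exp(-X_test @ beta))
    print(f"split ratio {ratio}: GOF statistic = {holdout_gof_stat(probs_test, y[n_train:]):.2f}")
```

A large statistic relative to a chi-square reference would suggest lack of fit of the working model; examining how the statistic behaves across splitting ratios is one plausible reading of the "multiple splitting ratios" idea, though the actual test in the talk may differ.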