Today there are several ways to find the interatomic potential energy of materials, each with its own advantages. Quantum-mechanical methods are highly accurate but very time-consuming, and they become tedious when the number of structures to be studied is large. Some substances in nature can, owing to their atomic structure, exist as a combination of hundreds of distinct configurations; such crystalline compounds are called disordered crystalline compounds. Computing the energy of these compounds with conventional quantum-mechanical methods is therefore not economical. For example, $NaCaNi_2F_7$, $MnFe_2O_4$, $Ca_{8.63}Sb_{10}Sr_{2.37}$ and $Co_2Ni_2Nb_2O_9$ contain, due to this form of disorder, 97, 1337, 318 and 644 independent structures, respectively. $Co_2Ni_2Nb_2O_9$ can exist in both a ferrimagnetic and an antiferromagnetic state and therefore yields two different datasets.

Advances in computer science, statistics, and data analysis have led to the application of machine learning in many areas of condensed matter physics, enabling calculations with the required accuracy and efficiency. Machine learning has already been used to construct interatomic-potential models by fitting to reference data. The main challenge in this research is to represent atomic structures appropriately and then find the interatomic potential energy using machine-learning methods. To use atomic configurations as input to machine learning, an atomic system must be converted into a numerical set of vectors or matrices by a specific set of functions. This set of functions is called a descriptor. To serve as suitable machine-learning input, a descriptor must be invariant under rotation and translation of the atomic system and under permutation of identical atoms. In this work we have used the Ewald matrix, sine matrix, MBTR, SOAP and ACSF descriptors.
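As a concrete illustration of such a descriptor, the following sketch computes a sine matrix for a periodic structure and reduces it to permutation-invariant features via its sorted eigenvalues. This is a minimal NumPy implementation of the standard sine-matrix formula, not the code used in this work; function names and the example cell are illustrative.

```python
import numpy as np

def sine_matrix(Z, R, B):
    """Sine matrix of a periodic structure (illustrative sketch).

    Z : (N,) atomic numbers
    R : (N, 3) Cartesian positions
    B : (3, 3) lattice vectors as rows
    """
    N = len(Z)
    B_inv = np.linalg.inv(B)
    M = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                # same diagonal convention as the Coulomb matrix
                M[i, j] = 0.5 * Z[i] ** 2.4
            else:
                # fractional displacement, folded through sin^2 to
                # make the off-diagonal term lattice-periodic
                frac = (R[i] - R[j]) @ B_inv
                vec = np.sin(np.pi * frac) ** 2 @ B
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(vec)
    return M

def invariant_features(M):
    # sorted eigenvalues are invariant under permutations of atoms
    return np.sort(np.linalg.eigvalsh(M))[::-1]

# Toy rock-salt-like cell with two atoms (Na, Cl)
Z = np.array([11, 17])
R = np.array([[0.0, 0.0, 0.0], [2.82, 0.0, 0.0]])
B = 5.64 * np.eye(3)
features = invariant_features(sine_matrix(Z, R, B))
```

Because only interatomic displacements enter the formula, the matrix is translation-invariant by construction, and sorting the eigenvalues removes the dependence on atom ordering.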
Recently, neural networks and kernel ridge regression have been used to construct interatomic potentials, and we employ both methods in this work. The two methods reach nearly the same accuracy, but the neural-network computations take longer. Among the kernel ridge regression models, the one based on the sine matrix is the least accurate in this work, with a mean error of $0.0034Ha$, while the best is obtained with MBTR, with a mean error of $0.0003144Ha$. Neural networks also yield suitable models, with a mean error of $0.0004369Ha$.
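The kernel-ridge-regression step above can be sketched with scikit-learn. The data here are synthetic stand-ins: in the actual workflow each row would be a descriptor vector (e.g. flattened MBTR output) and each target a reference total energy in Hartree; the kernel and hyperparameter values below are illustrative assumptions, not the ones used in this work.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 200 "structures" with 50 descriptor features
# and fake "energies" on a millihartree scale.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X @ rng.normal(size=50)) * 1e-3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)

# Gaussian (RBF) kernels are a common choice for MBTR/SOAP inputs;
# alpha and gamma would normally be tuned by cross-validation.
model = KernelRidge(kernel="rbf", alpha=1e-8, gamma=1e-2)
model.fit(X_tr, y_tr)

# mean absolute error on held-out structures, analogous to the
# mean errors quoted in the text
mae = np.mean(np.abs(model.predict(X_te) - y_te))
print(f"test MAE: {mae:.6f} Ha")
```

In practice the hyperparameters, the kernel choice, and the train/test split all affect the reported mean error, so a grid search over `alpha` and `gamma` is the usual next step.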