Tracking moving objects is a long-standing problem in computer vision. Despite more than three decades of research, it continues to attract attention, mainly because of its many applications and challenges. Although the history of object tracking goes back to military applications, today it is widely used in industry and commerce for motion activity detection, surveillance, human-computer interaction, and more. Tracking can be defined as estimating the state of an object and following its variations across successive frames of a video. Tracking in unconstrained environments remains challenging because the appearance of the target object changes due to intrinsic factors (such as pose and shape variations) or extrinsic factors (such as illumination changes, camera motion and blur, clutter, noise, and occlusion). Moreover, in many video sequences, parts of the target object are not visible in some frames. This phenomenon, called partial occlusion, has emerged in recent years as a key issue in tracking research. The main goal of this study is to develop a tracking algorithm that is robust to partial occlusion, performs well against other tracking challenges, and operates in real time. To this end, we propose a two-stage tracker based on coarse and fine sparse representation. First, the appearance of the target object is modeled by a dictionary consisting of PCA basis vectors and trivial templates, and the resulting L1 minimization is solved with the APG-L1 method. To reduce the computational load, we decrease the number of trivial templates, using one trivial template for every a-by-a block of pixels. Then, the best candidates from the first stage are evaluated by the PCA-L1 method to select the final candidate. We evaluate the proposed tracker using standard performance criteria.
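The appearance model above — a PCA subspace plus a sparse error term playing the role of trivial templates — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a simple alternating soft-thresholding solver in place of APG-L1, and the function names, regularization weight, and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft-thresholding, the proximal operator of lam*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pca_l1(y, U, lam=0.5, n_iter=100):
    """Sketch of a PCA-plus-sparse-error appearance model:
        min_{z,e}  0.5 * ||y - U z - e||^2 + lam * ||e||_1
    where U holds orthonormal PCA basis vectors and e is the sparse
    error absorbing partial occlusion (the trivial-template part).
    Solved here by alternating closed-form updates (illustrative only)."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        z = U.T @ (y - e)                    # subspace coefficients (U orthonormal)
        e = soft_threshold(y - U @ z, lam)   # sparse occlusion term
    return z, e
```

In a tracker, the reconstruction error of each candidate patch under this model would score how well it matches the learned target appearance, with `e` capturing occluded pixels so they do not dominate the score.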
The results show that, in addition to good robustness to partial occlusion and other challenges, the tracker requires significantly less processing time. We compare the proposed tracker with 5 popular algorithms and 7 recent sparse trackers on 12 datasets. Simulations show that it outperforms these algorithms in both accuracy and speed: at 14.2 frames per second, with a 10.2-pixel center error and a 0.7475 average overlap rate, it performs better than its nearest competitor, which achieves 2.7 frames per second, a 12.9-pixel center error, and a 0.7458 average overlap rate.

Keywords: visual tracking, sparse representation, principal component analysis (PCA), partial occlusion, appearance modeling