Nowadays, factors such as traffic congestion, high mortality from accidents, and the importance of occupant safety have led large companies to take a sharply growing interest in self-driving cars. People who are unable to drive, such as children and the elderly, also have a strong need for such vehicles. Consequently, many companies are looking for ways to hand driving over from humans to cars with minimal intervention, but this requires complex models that can perceive and respond to the conditions of the environment using sensors such as cameras and GPS. One approach that has recently emerged to meet this need is the use of models based on deep learning. These models are far more accurate than previous ones, but alongside their many advantages they also have disadvantages, such as high processing cost and low interpretability. The most important problem, however, is their vulnerability, which is the main focus of this dissertation: despite their high capability, these models can be deceived into returning incorrect output, so the security of these networks is of great importance.

In this study, attacks on self-driving cars and their deception are examined, with convolutional neural networks (a subset of deep learning) assumed to serve as the visual and guidance system of these vehicles. Because evaluating the performance of a self-driving car in a real environment is very costly, and designing attacks in the real world is almost impossible, all attacks and experiments were performed on simulator platforms. Two methods are proposed to deceive this class of cars. In the first method, a plate is placed at the side of the street and its colour is changed, introducing an error in the steering angle of the car so that it deviates from its original path. The colour used for the deception is obtained through an optimization process and then applied to the plate to deflect the vehicle.
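The first method can be sketched as a black-box optimization loop over the plate's RGB colour. The sketch below is illustrative only: the simulator query is replaced by a hypothetical surrogate function (`steering_deviation`), since the real loop would render the plate in the simulator and read back the driving network's steering output; the "adversarial" target colour is invented for the example.

```python
# Minimal sketch of the colour-optimization attack (assumed details):
# the plate's RGB colour is a 3-dimensional search variable, and a
# black-box optimizer searches for the colour that maximizes the
# deviation of the network's predicted steering angle.
import numpy as np
from scipy.optimize import differential_evolution

def steering_deviation(rgb):
    """Hypothetical stand-in for one simulator query.

    In the real attack this would (1) paint the roadside plate with
    `rgb`, (2) run the driving network on the rendered camera frame,
    and (3) return the negated deviation from the clean steering angle
    (negated because the optimizer minimizes). Here a toy function
    simply peaks at a fixed, made-up 'adversarial' colour.
    """
    target = np.array([255.0, 40.0, 40.0])  # invented colour for the demo
    return -np.exp(-np.sum((np.asarray(rgb) - target) ** 2) / 5000.0)

# Differential evolution needs only function values (no gradients),
# matching the black-box assumption of no access to network weights.
result = differential_evolution(
    steering_deviation,
    bounds=[(0, 255)] * 3,  # one bound per colour channel
    seed=0,
    maxiter=50,
)
adversarial_colour = result.x  # colour to paint on the plate
```

The same loop structure applies when differential evolution is swapped for a sample-efficient Bayesian optimizer; only the proposal strategy for the next colour to evaluate changes.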
For this purpose, two simulators and two different optimization methods are used. In the first step, the Udacity simulator is used, which has a simple track and a small driving network, with differential evolution as the optimizer. In the next step, drawing on the lessons of the earlier experiments, the CARLA simulator is used, which has better graphics. The more sophisticated DDPG model steers the car, and Bayesian optimization is used for the attack, which increases its effectiveness and requires fewer samples for evaluation. In the second method, the object recognition system is deceived by a plate installed on the back of the car ahead, reducing its recognition rate. In all attacks, it is assumed that there is no access to the structure or weights of the network and that only its output is available. Finally, in all the reported attacks, the proposed method is able to deceive the network and disturb the vehicle's decision making.

Key words: Convolutional neural network, Self-driving car, Deep learning, Adversarial attacks, Deceiving autonomous systems