The Internet is growing rapidly, and access to it is easier than ever before. Easier access and wider coverage mean that more and more devices are online. Alongside this massive number of connected devices, new applications have emerged in areas such as smart cities, smart transportation, healthcare, and emergency response. As a result, both the velocity and the volume of generated data are increasing. This enormous amount of data requires a comprehensive infrastructure for processing and storage, and in the Big Data era many useful decisions can be made from it. Terminal devices, however, cannot provide enough resources to process this data and make real-time decisions. Cloud Computing was introduced to supply the required processing and storage capabilities. Although cloud servers provide high-performance resources, the communication cost of sending tasks to the cloud is a significant drawback. Instead of moving data to the cloud, it can be beneficial to place resources closer to the terminal devices. Fog Computing is a computing paradigm introduced to address this issue: it adds an extra layer between the cloud and the terminal layer, in which fog nodes provide resources to the terminal devices. Although fog nodes perform better than the IoT devices in the terminal layer, their capacity is still limited by deployment costs, so they cannot serve all received tasks and must offload some of them. Offloading tasks to the cloud incurs high delay and degrades the Quality of Service, and some applications are delay-sensitive and cannot tolerate this delay. An optimized mechanism for offloading tasks is therefore necessary. In this thesis, we investigate existing offloading methods in fog computing and propose a novel approach based on reinforcement learning. The proposed method uses Q-Learning to decide which tasks should be offloaded while aiming to minimize delay: when a fog node wants to offload a task, it consults its Q-table and makes the decision based on the Q-values. Finally, the experimental results show an improvement in delay, the parameter the method aims to minimize. A minimal sketch of this kind of Q-Learning offloading decision is given after the keywords below.

Keywords: Internet of Things, Cloud Computing, Fog Computing, Workload Offloading, Machine Learning, Reinforcement Learning
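The sketch below illustrates, in general terms, how a fog node could use a tabular Q-table to choose between processing a task locally, offloading it to a neighboring fog node, or offloading it to the cloud, with the reward defined as the negative observed delay. The state discretization (queue level), the delay model, the action names, and the parameter values are illustrative assumptions for exposition only, not the actual implementation evaluated in the thesis.

```python
# Hedged sketch: tabular Q-Learning for a fog node's offloading decision.
# All names and numbers here (actions, queue-level states, delay model,
# ALPHA/GAMMA/EPSILON) are assumptions, not the thesis implementation.
import random
from collections import defaultdict

ACTIONS = ["process_locally", "offload_to_neighbor_fog", "offload_to_cloud"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = defaultdict(float)  # Q-table: (state, action) -> estimated long-term value


def simulated_delay(queue_level, action):
    """Toy delay model (assumption): local processing gets slower as the
    local queue grows, while the cloud adds a large communication delay."""
    if action == "process_locally":
        return 1.0 + 2.0 * queue_level
    if action == "offload_to_neighbor_fog":
        return 2.5 + random.random()
    return 6.0 + random.random()  # offload_to_cloud


def choose_action(state):
    """Epsilon-greedy selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def q_update(state, action, delay, next_state):
    """Standard Q-Learning update; the reward is the negative observed
    delay, so maximizing return corresponds to minimizing delay."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (-delay + GAMMA * best_next - Q[(state, action)])


# Training loop over a discretized queue-length state (0 = idle .. 3 = busy).
for episode in range(5000):
    state = random.randint(0, 3)
    for _ in range(20):
        action = choose_action(state)
        delay = simulated_delay(state, action)
        next_state = random.randint(0, 3)  # queue evolves stochastically here
        q_update(state, action, delay, next_state)
        state = next_state

# After training, the fog node picks the action with the highest Q-value.
for s in range(4):
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(f"queue level {s}: {best}")
```

Under these assumed delays, the learned policy tends to keep tasks local while the queue is short and to offload when the local queue grows, which is the qualitative behavior the abstract describes.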