Recently, the use of robots in industry has grown dramatically, and in the not-too-distant future we will see their widespread deployment in household and medical applications. In the meantime, safety is vital because of the robot's interaction with humans. Impedance control, as one of the compliant control methods against external forces, can prevent damage during a collision with an environmental obstacle. In this method, the robot is modeled as a mass-spring-damper system. In impedance control, force and position are not controlled separately; instead, the system's error dynamics, i.e., the relationship between the external forces and the tracking error, should follow a desired dynamic. In the first phase, an impedance controller is designed for the SCARA robot as the nominal controller that achieves the main control objective while remaining compliant against external forces. Invariance control is employed as one of the control methods for enforcing constraints in dynamic environments. This controller prevents violation of the constraints, or of a predetermined safety margin around a vulnerable obstacle, by switching between nominal and corrective control. Hence, in the next phase, an invariance controller is designed to correct the system's behavior and to generate the optimal secondary path. The Q-learning algorithm is one way of making control systems intelligent. This algorithm, which is a subset of reinforcement learning algorithms, attempts to learn optimal behavior by trial and error under varying environmental conditions. Since the state and action spaces of the algorithm are discrete, in the third phase these two spaces are discretized for the SCARA robot and the gains of the PD controllers are tuned with an online incremental Q-learning algorithm, so that the robot is controlled effectively.

Keywords: Manipulators, Impedance Control, Invariance Control, Incremental Q-Learning
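
For concreteness, the desired relationship between external force and tracking error mentioned above can be sketched in the standard impedance-control form below. The symbols M_d, B_d, K_d (desired inertia, damping, and stiffness matrices), the desired trajectory x_d, and the contact force F_ext follow conventional notation and are assumed here, since the abstract does not fix a notation:

    M_d(\ddot{x} - \ddot{x}_d) + B_d(\dot{x} - \dot{x}_d) + K_d(x - x_d) = F_{\mathrm{ext}}

With F_ext = 0 this reduces to asymptotic tracking of x_d; under contact, the error responds like a mass-spring-damper, which is the compliant behavior the abstract describes.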
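The switching behavior attributed to the invariance controller can likewise be sketched with a generic constraint (invariance) function \Phi(x) \le 0 describing the admissible set, e.g. a minimum distance to the vulnerable obstacle; this notation is an illustrative assumption, not the thesis' exact formulation:

    u = \begin{cases} u_{\mathrm{nom}}(x), & \Phi(x) < 0 \quad \text{(constraint inactive: the nominal impedance controller acts)} \\ u_{\mathrm{corr}}(x), & \Phi(x) \ge 0 \quad \text{(corrective control keeps the state inside the admissible set)} \end{cases}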
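Finally, the third phase can be illustrated with a minimal sketch of an online incremental tabular Q-learning update in which the discrete actions nudge the PD gains. The state discretization, action set, gain steps, and reward shown here are illustrative assumptions, not the implementation used in the thesis:

# Minimal sketch: incremental Q-learning that tunes PD gains (assumed setup)
import numpy as np

N_STATES = 100      # discretized tracking-error / error-rate bins (assumed)
N_ACTIONS = 4       # e.g. {Kp up, Kp down, Kd up, Kd down} (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(s):
    # epsilon-greedy exploration over the discrete action set
    if np.random.rand() < EPS:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[s]))

def apply_action(a, Kp, Kd, step=1.0):
    # each discrete action incrementally adjusts one PD gain
    if a == 0:
        Kp += step
    elif a == 1:
        Kp = max(Kp - step, 0.0)
    elif a == 2:
        Kd += 0.1 * step
    else:
        Kd = max(Kd - 0.1 * step, 0.0)
    return Kp, Kd

def q_update(s, a, r, s_next):
    # incremental (online) Q-learning update: Q <- Q + alpha * (TD target - Q)
    td_target = r + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (td_target - Q[s, a])

# One learning step; state indices, reward, and next state would come from the
# robot or its simulator (placeholder values here):
s = 0
a = choose_action(s)
Kp, Kd = apply_action(a, 50.0, 5.0)
r, s_next = -1.0, 1
q_update(s, a, r, s_next)

In an online setting, each control cycle measures the tracking error, maps it to a discrete state, applies one incremental gain adjustment, and updates the Q-table from the observed reward, so learning and control proceed simultaneously.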