Multi-focus image fusion plays an important role in image processing and machine vision applications. In many cases, captured images are not focused throughout, because the optical lenses commonly used for imaging have a limited depth of field. Consequently, only the objects near the focal plane of the camera are sharp, while other parts are blurred. One solution is to capture several images with different focal settings and combine them into a single image that is in focus everywhere. To identify focused regions, existing implementations of this approach operate in either the spatial domain or a transform domain; these methods usually suffer from artifacts such as blockiness or ringing. In this thesis, we define an energy term and categorize each region's energy into low, medium, and high levels. Based on its energy level, each pixel is then classified as either focused or unfocused. The output fused image is constructed from the focused pixels of the two source images. Experimental results show the superiority of our method over comparable algorithms.

Keywords: Multi-focus, energy map, fusion, image reconstruction
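The abstract describes selecting, per pixel, the source image with the higher local energy. As a minimal illustrative sketch (the thesis's exact energy term and its low/medium/high thresholds are not specified here), the example below uses the windowed sum of squared Laplacian responses as the energy map and a simple pixel-wise maximum-energy selection rule; the function names and window size are assumptions for illustration only.

```python
import numpy as np

def local_energy(img, win=7):
    """Local energy map: windowed sum of squared Laplacian responses.
    (Illustrative choice; the thesis's actual energy term may differ.)"""
    # Discrete Laplacian as a simple high-pass focus response
    lap = np.abs(
        4 * img
        - np.roll(img, 1, 0) - np.roll(img, -1, 0)
        - np.roll(img, 1, 1) - np.roll(img, -1, 1)
    )
    # Box-sum over a win x win neighborhood via an integral image
    pad = win // 2
    padded = np.pad(lap ** 2, pad, mode="edge")
    c = padded.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

def fuse(img_a, img_b, win=7):
    """Per pixel, keep the source whose local energy is higher
    (higher energy serves as a proxy for being in focus)."""
    ea = local_energy(img_a, win)
    eb = local_energy(img_b, win)
    return np.where(ea >= eb, img_a, img_b)
```

Given two grayscale arrays of the same shape, `fuse(a, b)` returns an array in which textured (sharp) regions from either source dominate over their blurred counterparts; a practical implementation would additionally smooth the decision mask to avoid seams at focus boundaries.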