In this thesis, a method is proposed for acquiring horizontal and vertical light fields with a hand-held camera and extracting a disparity map of the scene, and several experiments enabled by this approach are described. Initially, an image sequence captured by a camera translating along an approximately plus-shaped path, with limited camera rotation, is taken as input. Such a sequence can easily be acquired in a few seconds by moving a hand-held camera. The input is then resampled into two regular 3D light fields (one vertical and one horizontal) by aligning the frames in the spatio-temporal domain. Two techniques are proposed: first, filtering the match scores to reduce noise and extract a clean disparity map estimate from the light fields; second, applying an edge-preserving smoothing filter to the disparity map to suppress noise while restoring edges. To reduce memory usage and improve performance, down-sampled and original images are combined without degrading disparity map quality. Finally, to compensate for the respective weaknesses of the vertical and horizontal light fields in uniform regions, the two disparity maps are merged based on a confidence measure. Experiments show that our approach recovers finer disparity map detail than other approaches, such as those of Kim, Wanner, Hosseini and Bigdeli.

Keywords: Light field technology, Computational Photography, Disparity map
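As a minimal illustration of the final merging step, the sketch below selects, per pixel, the disparity value whose confidence is higher. The function name and the simple winner-takes-all rule are assumptions for illustration, not the implementation used in the thesis.

```python
import numpy as np

def merge_disparity(d_h, d_v, conf_h, conf_v):
    """Merge horizontal and vertical disparity maps per pixel
    by keeping the estimate with the higher confidence.
    (Hypothetical sketch; the thesis may use a different rule.)"""
    d_h = np.asarray(d_h, dtype=float)
    d_v = np.asarray(d_v, dtype=float)
    # Where the horizontal estimate is at least as confident, take it;
    # otherwise fall back to the vertical estimate.
    return np.where(np.asarray(conf_h) >= np.asarray(conf_v), d_h, d_v)
```

A weighted blend of the two maps (weights proportional to confidence) would be a natural alternative in regions where both estimates are reliable.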