In a communication system, we try to convey information from a point called the information source to a given destination, very often in a noisy environment. Before the information is transmitted, the sequence of symbols generated by the source is usually processed by a block called the source encoder, whose purpose is to compress the information. Data compression saves valuable resources such as transmission bandwidth and storage memory. Hence, it is desirable to encode a source with a code of minimum possible average codeword length. Furthermore, in some applications such as video compression, decoding must be carried out instantaneously. This requirement necessitates the use of prefix-free codes, in which no codeword is a prefix of any other. It is well known that the codeword lengths of any binary prefix-free code satisfy the Kraft inequality. This constraint on the codeword lengths makes the entropy a lower bound on the average codeword length. Therefore, the performance of a code is usually evaluated by its redundancy, defined as the difference between the average codeword length and the entropy. The optimal prefix-free code, which has the minimum average codeword length (or, equivalently, the minimum redundancy) for an information source, is called the Huffman code and can be obtained by the well-known Huffman algorithm. Regardless of the probability distribution of the source, the redundancy of the Huffman code is less than 1 bit per symbol. A special class of prefix-free codes is the class of fix-free codes, in which no codeword is a prefix or a suffix of any other codeword.

In this thesis, we evaluate the difference between the average codeword length of the optimal fix-free code and that of the Huffman code. Recently, Savari has proposed an algorithm that generates an optimal fix-free code for a given information source. However, deriving a relation between the codeword lengths generated by this algorithm and those of the Huffman code seems rather difficult.

Keywords: prefix-free code, fix-free code, Kraft sum, Huffman code, redundancy, average codeword length
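As a brief recap of the quantities used above (standard definitions, not specific to this thesis): for a binary code assigning codeword lengths l_1, ..., l_n to source symbols with probabilities p_1, ..., p_n, the Kraft inequality and the redundancy read

\[
\sum_{i=1}^{n} 2^{-l_i} \le 1,
\qquad
R = \bar{L} - H(X) = \sum_{i=1}^{n} p_i l_i + \sum_{i=1}^{n} p_i \log_2 p_i \ge 0,
\]

and the bound stated above for the Huffman code is 0 \le R < 1 bit per symbol.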
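As an illustrative sketch only (the function name and the example distribution below are hypothetical, not taken from the thesis), the Huffman codeword lengths, the average codeword length, and the redundancy can be computed as follows:

import heapq
from math import log2

def huffman_lengths(probs):
    """Return Huffman codeword lengths via the classic two-smallest-merge rule."""
    if len(probs) == 1:
        return [1]
    # Heap entries: (probability, unique tie-breaker id, original symbol indices).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    next_id = len(probs)
    while len(heap) > 1:
        p1, _, ids1 = heapq.heappop(heap)
        p2, _, ids2 = heapq.heappop(heap)
        # Every symbol under the merged node moves one level deeper in the tree.
        for i in ids1 + ids2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next_id, ids1 + ids2))
        next_id += 1
    return lengths

probs = [0.4, 0.3, 0.2, 0.1]                      # hypothetical example distribution
lengths = huffman_lengths(probs)                  # -> [1, 2, 3, 3]
avg_len = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
print(lengths, avg_len, avg_len - entropy)        # redundancy lies in [0, 1)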