Nature comprises highly advanced systems capable of performing complex computation, adaptation, and learning using analog components. Although digital systems have far surpassed analog systems in precision, speed, and general-purpose mathematical computation, they cannot match analog systems in power efficiency. This thesis presents analog VLSI circuits for performing arithmetic functions and for implementing neural networks. These circuits exploit the ability of analog building blocks to perform low-power, parallel computation. Circuits for squaring, square root, and multiplication/division are presented. A vector-normalization circuit, built by cascading the preceding circuits, demonstrates the ease with which simpler circuits can be combined to realize more complex functions.

Two feedforward neural network implementations are also presented. The first uses analog synapses and neurons with a digital serial weight bus; the network is trained in a chip-in-the-loop configuration, with a host computer performing control and weight updates. In the second network, the weights are likewise stored digitally, and counters are used to update them. A parallel perturbative weight-update algorithm is employed, in which multiple pseudorandom bit streams perturb all of the weights simultaneously.

Conventional architectures often lack the characteristics required for high-speed operation and demand considerable chip area. This thesis therefore proposes modifications to the basic building blocks of the designed neural networks that reduce chip area, increase learning speed, and lower cost. In addition, a new mechanism based on two LFSRs and an XOR network is presented; it combines outputs from different taps to obtain uncorrelated noise streams. Simulations show that both networks successfully learn digital functions such as AND and XOR.
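To make the cascade concrete, the sketch below models the normalization y_i = x_i / sqrt(sum_j x_j^2) in software. It is only an illustration of how the primitives compose; the function names (square, square_root, multiply_divide) are hypothetical stand-ins for the analog blocks, not descriptions of the circuits themselves.

```python
import math

# Hypothetical software stand-ins for the analog building blocks.
def square(x):
    return x * x

def square_root(x):
    return math.sqrt(x)

def multiply_divide(x, y, z):
    # Models a multiplier/divider block computing x * y / z.
    return x * y / z

def normalize(vector, scale=1.0):
    """Cascade the primitives: y_i = scale * x_i / sqrt(sum_j x_j^2)."""
    norm = square_root(sum(square(x) for x in vector))
    return [multiply_divide(x, scale, norm) for x in vector]

print(normalize([3.0, 4.0]))  # -> [0.6, 0.8]
```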
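The parallel perturbative algorithm can likewise be sketched in software. The step below perturbs every weight at once with a pseudorandom +/-delta (the role played in hardware by the pseudorandom bit streams), measures the resulting change in a global error, and moves each weight against the estimated gradient. The function and parameter names, the error function, and the constants are illustrative assumptions, not the thesis's actual values.

```python
import random

def perturbative_step(weights, error, delta=0.01, eta=0.1, rng=random):
    """One parallel perturbative update over all weights at once."""
    signs = [1 if rng.random() < 0.5 else -1 for _ in weights]  # one pseudorandom bit per weight
    perturbed = [w + s * delta for w, s in zip(weights, signs)]
    dE = error(perturbed) - error(weights)  # single global error measurement
    # (dE / delta) * s_i estimates the partial derivative dE/dw_i.
    return [w - eta * (dE / delta) * s for w, s in zip(weights, signs)]

# Illustrative use: fit a 2-input linear unit (plus bias) to the AND truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def error(ws):
    return sum((ws[0] * a + ws[1] * b + ws[2] - t) ** 2 for (a, b), t in data)

w = [0.0, 0.0, 0.0]
for _ in range(2000):
    w = perturbative_step(w, error)
```

Note that each step needs only two error evaluations regardless of the number of weights, which is what makes this update style attractive for parallel analog hardware.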
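The noise-generation mechanism can also be illustrated in software: two free-running maximal-length LFSRs are clocked together, and an XOR network combines bits drawn from different taps so that each weight receives its own perturbation stream. The register lengths, tap polynomials, and seeds below are illustrative choices, not the ones used in the thesis.

```python
def step_lfsr(state, taps, nbits):
    """Advance a Fibonacci LFSR one clock; taps are 1-indexed bit positions."""
    fb = 0
    for t in taps:
        fb ^= (state >> (t - 1)) & 1
    return (state >> 1) | (fb << (nbits - 1))

def noise_streams(n_streams, n_steps):
    """XOR different taps of two free-running LFSRs to produce
    n_streams pseudorandom bit streams with low cross-correlation."""
    a, b = 0xB4, 0x6D                       # nonzero seeds (illustrative)
    streams = [[] for _ in range(n_streams)]
    for _ in range(n_steps):
        a = step_lfsr(a, (8, 6, 5, 4), 8)   # x^8 + x^6 + x^5 + x^4 + 1 (maximal length)
        b = step_lfsr(b, (7, 6), 7)         # x^7 + x^6 + 1 (maximal length)
        for k in range(n_streams):
            bit_a = (a >> (k % 8)) & 1          # tap k of LFSR A
            bit_b = (b >> ((k + 1) % 7)) & 1    # a different tap of LFSR B
            streams[k].append(bit_a ^ bit_b)    # XOR network output
    return streams

perturbation_bits = noise_streams(n_streams=4, n_steps=16)
```

Because the two register periods here (255 and 127) are coprime, each combined stream has a long period, and distinct tap pairings yield streams with low mutual correlation.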