While Convolutional Neural Networks (CNNs) have proven their competence in various signal and image processing tasks, a theoretical understanding of the CNN forward pass is still lacking. In parallel, recent advances in sparse coding have attracted much attention to Convolutional Sparse Coding (CSC). By providing a global sparse model, CSC can overcome several limitations of the patch-based sparse model. Multi-Layer Convolutional Sparse Coding (ML-CSC) emerged from the cascade of CSC layers, demonstrating the close connection between the CNN forward pass and sparse coding. This connection offers a fresh view of CNNs under simple local sparsity conditions. This study proposes a new ML-CSC network structure, along with a new adaptive approach to designing a global structural dictionary. The approach uses a multi-dictionary learning model in which the dictionary optimization algorithm learns in wavelet sub-bands. Compared with a single-dictionary structure, this algorithm improves the adaptability and complexity of the atoms in the trained set of dictionaries. Furthermore, because the pooling operation discards some location information, we propose a deep network structure without pooling layers and demonstrate the advantage of the proposed algorithm for image denoising in terms of performance and convergence.

Keywords: Sparse Coding, Dictionary Learning, Wavelet, Deep Convolutional Neural Network.