Graphs and graph databases are applicable to a wide range of applications, such as text mining. Using graphs to represent relationships between entities has enriched such models. Natural language processing algorithms use graphs to model the structural relationships within texts efficiently, resulting in improved performance. However, increasing the accuracy of graph construction and weight allocation remains an important challenge, and some existing methods are inefficient and lack scalability for large graphs. In this study, we propose a novel graph-based method for modeling texts and running queries to evaluate the similarity of text segments. In this method, the graph corresponding to a text is first created by modeling its words and named entities with the state-of-the-art pre-trained BERT model. Graph nodes are then weighted in two stages. In the first stage, nodes with greater generality obtain higher weights. The second weighting stage is driven by the graph obtained from the query text: nodes are considered important if they are specifically related to the query. After determining the important nodes in the graph, the semantic similarity between the query text and the texts in the database is measured. The whole framework runs as a natural language processing pipeline on the scalable Apache Spark platform. The efficiency of the model was evaluated in both distributed and non-distributed configurations, and its scalability was assessed on a Spark cluster. Evaluation of accuracy using the Pearson correlation coefficient shows that the proposed method performs considerably better than its competitors.

Keywords: Graph Database, Semantic Similarity, Selective Weight, Apache Spark, Unsupervised Learning, BERT, Distributed Algorithm
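
To make the high-level description above more concrete, the sketch below illustrates only the general idea of BERT-based node representations combined with query-driven weighting and cosine similarity; it is not the authors' implementation, omits the explicit graph construction and the Spark pipeline, and all function and variable names (e.g. `weighted_text_vector`) are hypothetical.

```python
# Minimal illustrative sketch (assumed setup, not the paper's pipeline):
# embed tokens with a pre-trained BERT model, weight tokens by their
# similarity to the query representation, and compare weighted mean
# vectors with cosine similarity.
from typing import Optional

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()


def token_embeddings(text: str) -> torch.Tensor:
    """Return contextual BERT embeddings for the tokens of `text`."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.squeeze(0)  # (num_tokens, hidden_size)


def weighted_text_vector(text: str, query_vec: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Aggregate token embeddings into a single vector, up-weighting tokens
    that are close to the query vector (a stand-in for the query-driven
    node weighting described in the abstract)."""
    emb = token_embeddings(text)
    if query_vec is None:
        weights = torch.ones(emb.size(0))
    else:
        weights = torch.nn.functional.cosine_similarity(
            emb, query_vec.unsqueeze(0), dim=1
        ).clamp(min=0)
    weights = weights / weights.sum().clamp(min=1e-9)
    return (weights.unsqueeze(1) * emb).sum(dim=0)


# Toy usage: rank two documents against a query by cosine similarity.
query = "graph databases for text similarity"
docs = [
    "Graph models capture relations between entities in text.",
    "Convolutional networks are widely used for image classification.",
]

q_vec = weighted_text_vector(query)
for doc in docs:
    d_vec = weighted_text_vector(doc, query_vec=q_vec)
    sim = torch.nn.functional.cosine_similarity(q_vec, d_vec, dim=0).item()
    print(f"{sim:.3f}  {doc}")
```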