IMPROVING USER EXPERIENCE IN THE IMPLEMENTATION OF THE PUBLIC PROCUREMENT LAW IN THE REPUBLIC OF SERBIA THROUGH AN INTERACTIVE CHATBOT BASED ON AI TECHNOLOGY

Authors

  • Dejan Dodić, EDUKOM D.O.O., Vranje, Serbia

Keywords:

Chatbot, neural networks, Python, Public Procurement Law (ZJN), hyperparameters, modeling, BM25Okapi, GPT-2, NLP, wandb, result analysis, model training, virtual consultant, revolution in consulting services, dataset

Abstract

This paper presents the development of a neural network model to be applied in a chatbot web application built on a proprietary dataset. The aim of the chatbot is to provide virtual consulting support for the implementation of the Public Procurement Law in the Republic of Serbia. This AI tool is of crucial importance for both procurers and bidders, given the need to understand and comply with the law. Using the full body of texts on the law's implementation available on the internet, the model's neural networks are trained to provide answers to a wide range of questions.
The chatbot will be capable of answering definitional questions about the Public Procurement Law ("What is...?"), procedural questions ("What should be done in the following case...?"), and questions about overcoming obstacles in implementing the law ("How can the problem... be overcome?"). By tracking the conversation context and retrieving relevant passages from the source texts, the model provides meaningful responses.
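The passage-retrieval step named in the keywords (BM25Okapi) can be illustrated with a minimal from-scratch sketch. This is not the paper's implementation: the corpus below consists of invented example passages (in English for readability, standing in for articles of the law), and a production system would more likely use a library such as rank_bm25 over the actual Serbian-language texts.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Okapi BM25: score every tokenized document in `corpus` against `query`."""
    n_docs = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n_docs
    df = Counter()                       # document frequency of each term
    for doc in corpus:
        df.update(set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)                # term frequency within this document
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

# Hypothetical passages standing in for articles of the law
corpus = [
    "the contracting authority publishes the public procurement notice".split(),
    "a bidder submits an offer before the deadline expires".split(),
    "the commission protects the rights of bidders in the procedure".split(),
]
query = "how does a bidder submit an offer".split()
scores = bm25_scores(query, corpus)
best = max(range(len(corpus)), key=scores.__getitem__)
# the offer-submission passage (index 1) ranks first
```

The top-ranked passages would then be passed, together with the conversation context, to the generative model (GPT-2 in the keywords) to produce the final answer.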
This project has the potential to revolutionize the implementation of the Public Procurement Law in Serbia, given the large number of officials involved and the shortage of consultants who can provide accurate answers and solutions in real time. Numerous decisions of the Republic Commission for the Protection of Rights document the practice of implementing the law, and the chatbot would make this information accessible, helping to ensure the correctness and transparency of public procurement procedures.
Moreover, the model can be expanded and adapted for application to other laws, provided that suitable textual material is available for model training and tokenization; enlarging the training material in this way would further improve the accuracy of the answers.
It should be noted that, as the originator of this concept, I have personally submitted a request to the Institute for Artificial Intelligence - AI Institute Novi Sad, Serbia, to collaborate on realizing this idea.

References

Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009). Curriculum learning. Proceedings of the 26th Annual International Conference on Machine Learning (ICML), 41-48. https://doi.org/10.1145/1553374.1553380

Bergstra, J., & Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb), 281-305. http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf

Brown, T. B., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (NeurIPS 2020). arXiv preprint arXiv:2005.14165.

Dai, Z., et al. (2019). Transformer-XL: Attentive language models beyond a fixed-length context. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).

Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. https://arxiv.org/abs/1412.6980

Liu, L., et al. (2020). On the variance of the adaptive learning rate and beyond. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020).

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8). https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf

Raffel, C., et al. (2020). Exploring the limits of transfer learning with a unified text-to-text Transformer. arXiv preprint arXiv:1910.10683.

Sasaki, Y. (2007). The truth of the F-measure. The University of Tokyo. http://acl.ldc.upenn.edu/W/W04/W04-pdf

Vaswani, A., et al. (2017). Attention is all you need. Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017).

Wolf, T., et al. (2020). Transformers: State-of-the-art natural language processing. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020).

Published

2023-06-01

How to Cite

Dodić, D. (2023). IMPROVING USER EXPERIENCE IN THE IMPLEMENTATION OF THE PUBLIC PROCUREMENT LAW IN THE REPUBLIC OF SERBIA THROUGH AN INTERACTIVE CHATBOT BASED ON AI TECHNOLOGY. KNOWLEDGE - International Journal, 58(3), 473-480. Retrieved from https://ikm.mk/ojs/index.php/kij/article/view/6133