This article is a summary of a seminar paper written as part of the Master's in Software Engineering program, dated July 10, 2023.
This study focuses on improving the requirements elicitation phase of software development by proposing a solution that applies Artificial Intelligence (AI) through a chatbot connected to ChatGPT. The chatbot engages stakeholders in text conversations, posing thought‑provoking questions to gather information about the software they need and the outcomes they expect. It then processes and organizes the gathered information into comprehensive requirements, compiling them into a developer‑friendly file. The proposed framework aims to make requirements gathering more efficient and effective by minimizing the conflicts and incompleteness often encountered in traditional methods. To evaluate its effectiveness, an experiment compared a traditional interview with the chatbot experience, with participants requesting similar appointment‑booking applications. The results demonstrate the potential of the ChatGPT‑based chatbot to make requirements elicitation quicker, more cost‑effective, and more efficient. By adopting this solution, software professionals can improve productivity and communication while delivering projects that meet or exceed stakeholder expectations.
Keywords: software requirements, requirement elicitation techniques, chatbot in software requirements, ChatGPT.
The software development life cycle (SDLC) is a series of stages that a software project goes through, from idea and planning to evaluation and maintenance. The initial stage is Planning, where the software development team determines the necessary features and project operations. Next is the Requirements stage, where client requirements are understood and documented. This is followed by the Design and Development stages, where engineers design and build the software. The Testing stage involves thorough testing by software testers to ensure quality. The software is then deployed to end‑users in the Deployment stage, followed by the Maintenance stage [1,2]. Figure 1 illustrates the general structure of a software project.
Requirements gathering is a crucial stage that influences the final product and stakeholders' expectations [3]. Various definitions describe requirements elicitation as the process of identifying needs and classifying them within constraints [3,20]. Collecting requirements is challenging as it requires complete documentation of program details and future software visualization [3]. There are different methods for eliciting requirements, and this study explores the use of chatbots for this purpose.
Chatbots, like virtual personal assistants, utilize artificial intelligence to engage in human‑like conversations [3]. The first chatbot, Eliza, was developed in 1966 and acted as a psychotherapist [4]. Later chatbots included Parry (1972) and, more recently, virtual assistants such as Apple Siri [5], Microsoft Cortana [6], Amazon Alexa [7], Google Assistant [8], IBM Watson [9], and ChatGPT [10]. People appreciate chatbots because they respond quickly and answer specific questions, leading companies to use them to increase productivity. Despite their advantages, chatbots still lack emotional interaction [11].
Software engineers use ChatGPT, a large language model (LLM), to automate tasks including code generation, completion, and bug fixing [11]. It can also support requirements gathering and system design by producing textual descriptions of software elements or system architectures. Traditional methods for eliciting requirements face problems such as conflicting requirements, incompleteness, lack of knowledge among developers or stakeholders, meeting difficulties, missed deadlines, and implementation issues. Several studies have proposed chatbots for software requirements elicitation [24–29].
Several experiments and studies have examined software requirements elicitation methods in both educational and industrial environments, analysing required time, cost, effort, skills, and product quality. Because chatbots are a relatively new application, we identified only seventeen studies (2019–2023) that proposed using chatbots for requirements elicitation, of which seven were relevant.
Study [24] proposed an intelligent conversational chatbot to automate requirements elicitation and classification into functional and non‑functional categories. While effective, the authors noted that feasibility prediction, cost estimation, and scheduling required improvement.
In [25], a chatbot trained novice requirements engineers by using context‑free questions and summarising interviews. It successfully identified incompleteness but required further work to behave like a real assistant.
Study [26] introduced an Android chatbot for requirements elicitation, achieving over 80 % on user‑acceptance criteria. Future work will address advanced NLP for Bahasa Indonesia and developer dashboards.
The Reqbot developed in [27] asked questions, detected ambiguity, and initiated disambiguation. While it improved requirement quality, usability gains were limited.
Rietz [28] designed a conversational system for novice end‑users, focusing on scalable evaluation of collected data. Future iterations will allow users to edit previous answers.
Wolfinger et al. [29] built a chatbot to elicit contextual information from user feedback, providing structured reports.
White et al. [11] catalogued thirteen prompt patterns for ChatGPT, including the Requirements Simulator, Specification Disambiguation, and Change Request Simulator, highlighting their potential for requirements engineering.
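For a sense of how such a pattern might be applied in practice, the snippet below shows an illustrative system instruction in the spirit of the Requirements Simulator pattern; the wording is a paraphrase introduced here for illustration, not the pattern text from [11].

```python
# Illustrative paraphrase of a Requirements Simulator-style instruction;
# the exact pattern wording in White et al. [11] differs.
REQUIREMENTS_SIMULATOR_PROMPT = (
    "Act as the system described by the requirements I provide. "
    "When I ask you to perform a task, respond as the running system would, "
    "and point out any requirement that is missing or too ambiguous to act on."
)
```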
The research proposes a chatbot based on ChatGPT for requirements elicitation and evaluates its feasibility against traditional interviews.
A qualitative approach documents chatbot conversations and traditional interviews. Surveys collect participant feedback. Analysis focuses on missing requirements, completeness, effort, time, comfort, and trustworthiness.
Two dentists requiring similar appointment‑booking applications participated: one took part in a traditional interview conducted in Arabic; the other used the chatbot in English.
The chatbot was built in Python with a Tkinter text interface and integrated ChatGPT. It summarises elicited requirements into an IEEE‑STD‑830‑1998 compliant Software Requirements Specification (SRS) file. ChatGPT provides multilingual support; conversation accuracy is manually validated during this prototype stage.
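As an illustration of this architecture, the sketch below wires a Tkinter text interface to the ChatGPT API and adds a simple export step for an SRS draft. It is a minimal approximation under stated assumptions, not the study's implementation: the model name, prompts, and helper names (`ask_chatgpt`, `export_srs`) are introduced here, and it assumes the OpenAI Python client (openai>=1.0) with an `OPENAI_API_KEY` set in the environment.

```python
# Minimal sketch of a ChatGPT-backed elicitation chatbot with a Tkinter text UI.
# Hypothetical names and prompts; not the prototype described in the study.
import tkinter as tk
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a requirements-elicitation assistant. Ask the stakeholder "
    "thought-provoking questions about the software they need, one at a time, "
    "and keep track of every requirement they state."
)
history = [{"role": "system", "content": SYSTEM_PROMPT}]


def ask_chatgpt(user_text: str) -> str:
    """Append the user's message, query ChatGPT, and return its reply."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


def export_srs(path: str = "srs_draft.txt") -> None:
    """Ask ChatGPT to condense the conversation into an IEEE STD 830-1998 style SRS draft."""
    summary = ask_chatgpt(
        "Summarise all requirements gathered so far as a Software Requirements "
        "Specification following the IEEE STD 830-1998 outline."
    )
    with open(path, "w", encoding="utf-8") as f:
        f.write(summary)


# --- Tkinter text interface ---
root = tk.Tk()
root.title("Requirements Elicitation Chatbot")
transcript = tk.Text(root, width=80, height=25, state="disabled")
transcript.pack()
entry = tk.Entry(root, width=80)
entry.pack()


def send(_event=None):
    """Send the typed message to ChatGPT and show both sides in the transcript."""
    user_text = entry.get().strip()
    if not user_text:
        return
    entry.delete(0, tk.END)
    reply = ask_chatgpt(user_text)
    transcript.configure(state="normal")
    transcript.insert(tk.END, f"You: {user_text}\nBot: {reply}\n\n")
    transcript.configure(state="disabled")


entry.bind("<Return>", send)
tk.Button(root, text="Send", command=send).pack()
tk.Button(root, text="Export SRS", command=export_srs).pack()
root.mainloop()
```

In a sketch like this, the full conversation history is replayed on every request so the model keeps the elicitation context, and the same mechanism is reused to produce the SRS summary at the end of the session.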
Requirements completeness, effort, time, comfort, and reliability are compared between methods.
Metrics include completeness of elicited requirements, effort/time consumption, user satisfaction, and chatbot result reliability.
Participants were informed of conversation recording; privacy and data‑protection standards were followed.
Text‑based chatbot interactions may reduce comfort compared with face‑to‑face interviews and affect duration.
The document produced from the traditional interview captured comprehensive requirements, whereas the chatbot‑generated document was more concise and omitted security requirements and UML elements.
The traditional interview lasted ~30 minutes; the chatbot interview ~10 minutes.
| Question (scale 1–5) | Traditional Interview | Chatbot |
|---|---|---|
| Overall satisfaction with the requirement‑elicitation process | 4 | 5 |
| Clarity and comprehensibility of questions | 4 | 4 |
| Coverage of important aspects | 3 | 4 |
| Ease of expressing requirements | 5 | 4 |
| Effectiveness in capturing requirements | 4 | 5 |
| Level of engagement and interaction | 5 | 4 |
| Comfort sharing thoughts and ideas | 3 | 5 |
| Interviewer/chatbot understanding of needs | 4 | 5 |
| Probing questions for deeper understanding | 3 | 5 |
| Confidence in accuracy and completeness | 5 | 4 |
| Total score | 44 / 50 | 45 / 50 |
| Question | Traditional Interview | Chatbot |
|---|---|---|
| Perceived strengths of the method | Explanation of the idea through a scenario; face‑to‑face interview. | Quick interview; direct questions; prompted to consider new points. |
| Perceived limitations/areas for improvement | Scheduling required; interview lengthy; occasional misunderstanding. | Chatbot did not ask about unmentioned details; typing on keyboard uncomfortable. |
Both approaches provided positive user experiences; the chatbot reduced effort and time. Although the chatbot omitted some requirement details, these issues can be resolved by enhancing its prompts and adding voice interaction. Survey scores were comparable, indicating the chatbot's usability even as a prototype.
This study proposed a chatbot‑based approach to software requirements elicitation. The chatbot captured most essential requirements while reducing time and effort compared with traditional interviews. Although some specificity was lacking, the results demonstrate the feasibility and potential advantages of chatbot‑based elicitation. Future work will address completeness, add voice interaction, and further evaluate reliability.