Abstract
Purpose - The emergence of AI-driven chatbots like ChatGPT has revolutionized how students and educators interact with technology in higher education. While ChatGPT’s ability to provide on-demand, conversational, and resource-rich assistance offers significant learning opportunities, it simultaneously raises pressing concerns about academic integrity, dependency, and the erosion of critical thinking skills. This study explores the perceptions of students and lecturers in Hong Kong universities, evaluating both benefits and ethical risks to provide actionable insights for policymakers and educators on managing ChatGPT’s integration responsibly.
Methodology - A mixed-methods approach was employed. Quantitative data were collected from 200 students and 30 lecturers across eight Hong Kong universities using Likert-scale questions and structured prompts on ChatGPT’s perceived benefits, ethical concerns, and technological resources. Qualitative data were gathered through open-ended responses, particularly from lecturers, to capture narratives about ChatGPT’s impact on academic practices. Data were analyzed using SPSS to identify quantitative patterns, while thematic coding was applied to qualitative responses to surface recurring themes and deeper insights.
Findings - The findings highlight a polarized perception of ChatGPT among stakeholders. Students with lower grade point averages (GPAs) valued its ability to provide accessible research support and simplify complex concepts, while students with higher GPAs and lecturers emphasized concerns about dependency, plagiarism, and diminished critical thinking. Notably, 19.6% of students believed that paraphrasing chatbot-generated content was not plagiarism, a view categorically rejected by lecturers, underscoring a significant ethical divide. Peer influence was identified as a primary driver of chatbot use among students, especially in collaborative assignments and difficult subjects. Lecturers, however, prioritized fostering independent thought and warned against ChatGPT’s potential to replace traditional learning methods. Both groups acknowledged ChatGPT’s value when used responsibly but emphasized the need for clear guidelines, detection tools, and training to mitigate ethical risks. The findings also suggest that universities should adopt advanced tools, such as AI watermarking and plagiarism-detection systems, to monitor misuse effectively.
Implications - This study contributes to the growing discourse on AI's role in higher education by exploring its dual nature as a tool for empowerment and a source of ethical concern. Focused on the context of Hong Kong universities, it underscores the need for localized, policy-specific interventions. The research advocates a holistic approach to AI integration, encompassing robust institutional policies, advanced detection technologies, and initiatives to foster a culture of academic integrity. Recommendations include developing student training programs, merit-based recognition systems, and collaborations with AI developers to enhance transparency and accountability.
Original language | English |
---|---|
Publication status | Published - 9 Jul 2025 |
Event | 2025 International Conference on Open and Innovative Education - Hong Kong Metropolitan University, Hong Kong, China. Duration: 9 Jul 2025 → 11 Jul 2025. https://www.hkmu.edu.hk/icoie/ |
Conference
Conference | 2025 International Conference on Open and Innovative Education |
---|---|
Abbreviated title | ICOIE 2025 |
Country/Territory | China |
City | Hong Kong |
Period | 9/07/25 → 11/07/25 |
Internet address | https://www.hkmu.edu.hk/icoie/ |
Keywords
- online learning
- quality assurance
- student engagement
- student satisfaction