Saifullah Razali, University of Hertfordshire, Singapore
Figurative language (metaphor, simile, idiom, hyperbole, sarcasm, and irony) encodes meaning that often departs from literal interpretation and is crucial across literature, social media, and educational content. Detecting such language remains challenging for NLP systems because it requires pragmatic, cultural, and world knowledge. This paper presents a thorough study of figurative language detection using pretrained language models (PLMs). We review linguistic foundations, describe architectures and training strategies using PLMs (BERT, RoBERTa, and GPT-style models), present an experimental framework, and report results drawing on recent benchmark datasets and shared tasks.
Figurative Language, Metaphor Detection, Sarcasm Detection, Pretrained Language Models, BERT, RoBERTa, GPT
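To make the detection setup above concrete, the sketch below shows one common input encoding used in PLM-based metaphor detection: the candidate span is wrapped in marker tokens so the classifier can attend to it explicitly. The marker names, example sentence, and character offsets are illustrative assumptions, not details taken from the paper.

```python
def mark_target(sentence: str, start: int, end: int,
                open_tok: str = "[TGT]", close_tok: str = "[/TGT]") -> str:
    """Wrap the candidate span in marker tokens so a PLM fine-tuned for
    binary figurative/literal classification can locate it explicitly."""
    return (sentence[:start] + open_tok + " "
            + sentence[start:end] + " " + close_tok + sentence[end:])

# "devoured" occupies characters 3..11 of the sentence below.
encoded = mark_target("He devoured the book", 3, 11)
print(encoded)  # He [TGT] devoured [/TGT] the book
```

The marked string would then be tokenized and paired with a figurative/literal label for standard sequence-classification fine-tuning.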
Zinia Rahman1, Wang Zheng1, Refat Khan Pathan2
1School of Automation, Department of Control Science and Engineering, Southeast University, Nanjing, China
2School of Computing and Artificial Intelligence, Faculty of Engineering and Technology, Sunway University, Malaysia
Automatic interpretation of poetry presents significant challenges for natural language processing due to figurative language, cultural symbolism, and subtle emotional cues. This study proposes a comparative computational framework for extracting themes and emotions from English and Bangla poems using TF-IDF features and multiple supervised algorithms. Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Decision Trees (DT), Random Forests (RF), and a Convolutional Neural Network (CNN) were evaluated for both thematic and emotional categorization. For English poetry, ensemble and margin-based models achieved the highest performance, with SVM and Random Forest attaining up to 88.7% accuracy for emotion and 85.5% for theme classification. In Bangla poetry, emotion classification reached perfect accuracy across all models, while theme classification remained highly discriminative, with Random Forest achieving 94% accuracy. The study demonstrates the effectiveness of traditional machine learning approaches for bilingual poetic analysis in low-resource literary domains.
Poetry Analysis, Emotion and Theme Classification, Deep Learning, CNN, ML
Alexander Chang1, David T. Garcia2
1Troy High School, 2200 Dorothy Ln, Fullerton, CA 92831
2University of California, Los Angeles, CA 90095
Large language models are increasingly integrated into educational and professional environments; however, effective interaction with these systems requires prompt engineering skills that most users have not formally developed. This paper presents an intelligent mobile application designed to teach prompt engineering through structured instruction, iterative practice, and AI-powered real-time feedback. The system integrates user authentication, a scaffolded educational framework, and live interaction with large language models through the OpenAI API. Key challenges addressed include feedback consistency, learner engagement, and operational cost management. Experimental evaluations examined the reliability of AI-generated feedback and the sustainability of API usage under simulated user loads. Results demonstrated strong correlation between AI evaluations and expert assessments, as well as significant efficiency gains through response caching. By combining meta-prompting, adaptive learning design, and mobile accessibility, the proposed application enables continuous skill development and offers a scalable solution for improving effective communication with large language models.
Prompt Engineering, Mobile Application, AI, Real-time Feedback
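The response-caching idea behind the reported efficiency gains can be sketched as follows. The class name, cache key scheme, and stub backend are assumptions for illustration, not the application's actual implementation; the lambda stands in for a real LLM call.

```python
import hashlib

class CachedFeedback:
    """Cache model feedback keyed by a hash of the (prompt, rubric) pair,
    so repeated learner submissions do not trigger repeated API calls."""

    def __init__(self, backend):
        self.backend = backend  # callable: (prompt, rubric) -> feedback text
        self.cache = {}
        self.api_calls = 0      # counts actual backend invocations

    def evaluate(self, prompt: str, rubric: str) -> str:
        key = hashlib.sha256(f"{rubric}\x00{prompt}".encode()).hexdigest()
        if key not in self.cache:
            self.api_calls += 1
            self.cache[key] = self.backend(prompt, rubric)
        return self.cache[key]

# Stub backend standing in for the real LLM endpoint.
fb = CachedFeedback(lambda p, r: f"feedback on: {p}")
fb.evaluate("Summarize this article", "clarity")
fb.evaluate("Summarize this article", "clarity")  # served from cache
print(fb.api_calls)  # 1
```

Identical submissions hit the cache, so API cost grows with distinct prompts rather than total attempts.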
Graeme Heald, Australia
The paper addresses the issue of hallucinations in Large Language Models (LLMs), which arise from the limitations of classical binary logic: forcing a True/False output leads to stochastic guessing when data is missing or contradictory. It introduces U4 Logic, a four-valued non-classical framework with True (T), False (F), Uncertain (U), and Null states. U4 Logic incorporates Uncertainty (U) as a valid truth value and Null as a non-designated state that absorbs contradictions and prevents their propagation. By dismantling the Principle of Explosion, U4 ensures that contradictions collapse into a non-actionable Null state, blocking hallucinatory cascades. Strict-vacuous mapping rules, such as U → T = F and F → T = F, prevent the AI from deriving confident conclusions from uncertainty or falsehoods. U4 transforms LLMs into Logic-Gated Reasoners, offering a robust framework for trustworthy AI in high-stakes environments by prioritizing inference integrity over probabilistic guessing.
Hallucinations, LLM, Softmax, U4 logic
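A minimal sketch of U4-style truth values in Python: only the strict-vacuous rules U → T = F and F → T = F come from the abstract. Null absorption in both connectives and the remaining table entries are illustrative assumptions filled in to make the sketch runnable.

```python
from enum import Enum

class V(Enum):
    T = "True"
    F = "False"
    U = "Uncertain"
    N = "Null"

def implies(a: V, b: V) -> V:
    """Strict-vacuous implication under assumed Null absorption."""
    if V.N in (a, b):
        return V.N   # Null absorbs: contradictions stay non-actionable
    if a in (V.U, V.F):
        return V.F   # U -> T = F and F -> T = F: no confident conclusions
    return b         # a is T: pass the consequent through

def conj(a: V, b: V) -> V:
    """Conjunction with assumed ordering N < F < U < T."""
    if V.N in (a, b):
        return V.N
    if V.F in (a, b):
        return V.F
    if V.U in (a, b):
        return V.U
    return V.T

print(implies(V.U, V.T).name, implies(V.F, V.T).name)  # F F
```

Under this sketch no chain of inferences starting from an Uncertain or Null premise can terminate in a designated True, which is the hallucination-blocking property the abstract describes.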