With the exponential growth of social media usage, the rapid spread of misinformation has become a critical global challenge. Recent advances in large language models (LLMs) have shown promising potential for automated misinformation detection. This survey provides a comprehensive review of LLM-based approaches for detecting misinformation in textual data on social media platforms. We analyze more than 70 recent papers to examine the evolution, implementation, and effectiveness of various LLM architectures in this domain. Our analysis reveals that BERT-based models dominate the field, appearing in approximately 85% of studies, with domain-specific variants such as CT-BERT demonstrating superior performance in specialized contexts such as COVID-19 misinformation detection. We provide detailed comparisons of model architectures, implementation strategies, and performance metrics across different domains. Additionally, we analyze seven major datasets commonly used in this field, examining their characteristics, limitations, and suitability for different detection tasks. The survey also addresses key challenges, including linguistic nuances, model interpretability, and ethical considerations. Our findings indicate that while LLM-based approaches achieve impressive accuracy, significant challenges remain in cross-domain generalization and real-time detection. We conclude by identifying promising research directions and providing recommendations for robust model evaluation frameworks.