Please use this identifier to cite or link to this item: http://dspace.azjhpc.org/xmlui/handle/123456789/510
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ismayilov, Elviz | -
dc.date.accessioned | 2026-01-02T20:20:28Z | -
dc.date.available | 2026-01-02T20:20:28Z | -
dc.date.issued | 2025-11-07 | -
dc.identifier.issn | 2616-6127; 2617-4383 | -
dc.identifier.uri | http://dspace.azjhpc.org/xmlui/handle/123456789/510 | -
dc.description.abstract | High-Performance Computing (HPC) is a cornerstone of scientific and engineering advancements, enabling complex computations in areas such as climate modeling, genomics, and artificial intelligence. Concurrently, Large Language Models (LLMs) have emerged as powerful AI-driven tools capable of code optimization, automation, and scientific reasoning. The integration of LLMs into HPC systems presents significant opportunities, including enhanced code generation, improved workload management, and efficient parallel execution. However, this convergence also introduces several challenges, such as high computational costs, scalability issues, memory constraints, security risks, and interpretability concerns. This paper explores the role of LLMs in HPC, discusses existing research and industrial applications, and highlights key challenges and potential solutions. Furthermore, it provides insights into recent advances in AI-powered HPC solutions and presents case studies showcasing real-world implementations. The paper concludes with future research directions, focusing on efficient LLM architectures, integration with emerging HPC technologies, and ethical considerations. The findings emphasize the need for continued innovation to make LLMs more efficient, scalable, and reliable for HPC applications. | en_US
dc.language.iso | en_US | en_US
dc.publisher | Azerbaijan Journal of High Performance Computing | en_US
dc.subject | High-Performance Computing | en_US
dc.subject | Large Language Models | en_US
dc.subject | AI-Driven Optimization | en_US
dc.subject | Parallel Computing | en_US
dc.subject | Scientific Computing | en_US
dc.subject | Machine Learning | en_US
dc.subject | Code Optimization | en_US
dc.subject | Federated Learning | en_US
dc.subject | AI Ethics | en_US
dc.title | HARNESSING LARGE LANGUAGE MODELS FOR HIGH-PERFORMANCE COMPUTING: OPPORTUNITIES AND CHALLENGES | en_US
dc.type | Article | en_US
dc.source.volume | Volume 7 | en_US
dc.source.issue | e2025.04 | en_US
dc.source.beginpage | 1 | en_US
dc.source.endpage | 7 | en_US
dc.source.numberofpages | 7 | en_US
Appears in Collections:Azerbaijan Journal of High Performance Computing

Files in This Item:
File | Description | Size | Format
doi.org.10.32010.26166127.2025.04.pdf | - | 204.61 kB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.