        HEMA: A Hippocampus-Inspired Extended Memory Architecture for Long-Context AI Conversations
- Publisher: 한국인공지능학회
- Authors: Kwangseob AHN, Yongjoo SONG
- Publication: 『인공지능연구』 Vol. 13, No. 2, pp. 1-7 (7 pages)
- Subject classification: Interdisciplinary Studies > Science and Technology Studies
- Date of publication: 2025.06.30
                
Abstract
Large language models (LLMs) maintain coherence over a few thousand tokens but degrade sharply in multi-hundred-turn conversations. We present a hippocampus-inspired dual-memory architecture that separates dialogue context into (1) Compact Memory, a continuously updated one-sentence summary that preserves the global narrative, and (2) Vector Memory, an episodic store of chunk embeddings queried via cosine similarity. Integrated with an off-the-shelf 6B-parameter transformer, the system sustains >300-turn dialogues while keeping the prompt under 3.5K tokens. On long-form QA and story-continuation benchmarks, Compact + Vector Memory elevates factual recall accuracy from 41% to 87% and human-rated coherence from 2.7 to 4.3. Precision-recall analysis shows that, with 10K indexed chunks, Vector Memory achieves P@5 ≥ 0.80 and R@50 ≥ 0.74, doubling the area under the PR curve relative to a summarisation-only baseline. Ablation experiments reveal that (i) semantic forgetting (age-weighted pruning of low-salience chunks) cuts retrieval latency by 34% with <2 pp recall loss, and (ii) a two-level summary-of-summaries eliminates cascade errors that otherwise emerge after 1,000 turns. By reconciling verbatim recall with semantic continuity, our architecture offers a practical path toward scalable, privacy-aware conversational AI capable of engaging in months-long dialogue without retraining the underlying model.
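The abstract names two mechanisms that are easy to make concrete: cosine-similarity retrieval over chunk embeddings (Vector Memory) and age-weighted pruning of low-salience chunks (semantic forgetting). The sketch below is a minimal illustration of those two ideas only; the class name, fields, and the linear age-decay rule are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

class VectorMemory:
    """Illustrative episodic store: chunk embeddings queried by cosine similarity.

    All names and the decay constant are assumptions made for illustration;
    this does not reproduce the paper's implementation.
    """

    def __init__(self, decay: float = 0.01):
        self.decay = decay          # assumed per-turn age penalty on salience
        self.embeddings = []        # unit-norm chunk vectors
        self.chunks = []            # raw text chunks
        self.salience = []          # base salience score per chunk
        self.turn_added = []        # turn index at insertion time

    def add(self, chunk: str, vector: np.ndarray, salience: float, turn: int):
        v = vector / np.linalg.norm(vector)   # normalise so dot product = cosine
        self.embeddings.append(v)
        self.chunks.append(chunk)
        self.salience.append(salience)
        self.turn_added.append(turn)

    def query(self, vector: np.ndarray, k: int = 5):
        """Return the k chunks most cosine-similar to the query vector."""
        q = vector / np.linalg.norm(vector)
        sims = np.array([v @ q for v in self.embeddings])
        top = np.argsort(-sims)[:k]
        return [(self.chunks[i], float(sims[i])) for i in top]

    def prune(self, current_turn: int, threshold: float):
        """Semantic forgetting: drop chunks whose age-discounted salience
        falls below a threshold (the abstract's age-weighted pruning)."""
        keep = [
            i for i in range(len(self.chunks))
            if self.salience[i] - self.decay * (current_turn - self.turn_added[i]) >= threshold
        ]
        self.embeddings = [self.embeddings[i] for i in keep]
        self.chunks = [self.chunks[i] for i in keep]
        self.salience = [self.salience[i] for i in keep]
        self.turn_added = [self.turn_added[i] for i in keep]
```

Under these assumptions, a caller would add one embedded chunk per dialogue turn, query the top-k chunks to assemble the prompt alongside the Compact Memory summary, and prune periodically to bound index size and retrieval latency, consistent with the latency/recall trade-off the abstract reports.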
Table of Contents
1. Introduction
2. Literature Review
3. Methodology
4. Results
5. Discussion
6. Conclusion
References
Papers in This Issue
- HEMA: A Hippocampus-Inspired Extended Memory Architecture for Long-Context AI Conversations
- Performance Comparisons of Bio-Inspired Optimization Algorithms for Grid Synchronization
- Adaptive Movement and Formation Coordinated Control for Flying Ad-Hoc Networks (FANETs) in Dynamic Environments
- A Study on Learning Method for Korean Speech Data Using Limited Computing Resource
- Keypoint-based Distortion Correction and Data Augmentation for High-angle License Plate Recognition