
Academic Events

Seminars

ICIM Research Exchange Seminar (Fri., Jan. 26)

Date posted: 2024-01-15

https://icim.nims.re.kr/post/event/1057

  • Speaker: Dr. Sanghyun Seo (LG Management Development Institute)
  • Date & Time: 2024-01-26, 10:00-12:00
  • Venue: Innovation Center for Industrial Mathematics, National Institute for Mathematical Sciences (Pangyo)
  1. Date & Time: Friday, January 26, 2024, 10:00-12:00

  2. Venue: Seminar room, Innovation Center for Industrial Mathematics, Pangyo Techno Valley

    • National Institute for Mathematical Sciences, Room 231, Enterprise Support Hub, 815 Daewangpangyo-ro, Sujeong-gu, Seongnam-si, Gyeonggi-do
    • Free parking is available for up to two hours.
  3. Speaker: Dr. Sanghyun Seo (LG Management Development Institute)

  4. Topic: Instruction Tuning and Retrieval-Augmented Generation for Large Language Models

In this seminar, I will present research trends on Large Language Models (LLMs), focusing on instruction tuning and Retrieval-Augmented Generation (RAG). Recently, LLMs built by pretraining Transformer models on large-scale corpora have shown strong capabilities in solving various natural language processing (NLP) tasks. Instruction tuning is a crucial technique for enhancing the capabilities and controllability of LLMs: it bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. On the other hand, LLMs, despite their significant capabilities, face challenges such as hallucination, outdated knowledge, and nontransparent, untraceable reasoning processes. RAG has emerged as a promising solution by incorporating knowledge from external databases. This presentation will help you understand the core technologies and applications of the latest LLMs.

*The seminar will also be streamed on YouTube.
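To make the instruction-tuning idea in the abstract concrete, here is a minimal sketch of one supervised fine-tuning step, assuming a Hugging Face-style causal LM. The model name ("gpt2"), the prompt template, and the toy instruction-response pair are illustrative placeholders, not the speaker's actual setup; the key point is that the next-token loss is computed only on the response tokens, which is how the pretraining objective is redirected toward following instructions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM works here; "gpt2" is just a small placeholder.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One toy (instruction, response) pair in a hypothetical prompt template.
instruction = "Summarize: The cat sat on the mat."
response = "A cat sat on a mat."
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
full_text = prompt + response + tok.eos_token

enc = tok(full_text, return_tensors="pt")
labels = enc["input_ids"].clone()

# Mask the prompt tokens with -100 so the cross-entropy loss is taken
# only over the response: the model is trained to *follow* the
# instruction, not to reproduce it.
prompt_len = len(tok(prompt)["input_ids"])
labels[:, :prompt_len] = -100

out = model(**enc, labels=labels)
out.loss.backward()  # one supervised fine-tuning step on this pair
```

In practice this loop would run over a large dataset of instruction-response pairs with an optimizer; the sketch shows only the loss construction that distinguishes instruction tuning from plain next-word pretraining.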

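The abstract's second theme, RAG, can likewise be sketched in a few lines: retrieve the documents most similar to the user's query from an external store, then prepend them to the prompt so the generation step is grounded in that context. Everything below is a self-contained toy: the `documents` list stands in for a real vector database, and `embed` is a hypothetical bag-of-words hash, not a learned text encoder.

```python
import numpy as np

# Toy external knowledge base (stand-in for a vector database).
documents = [
    "NIMS is the National Institute for Mathematical Sciences in Korea.",
    "RAG augments an LLM prompt with retrieved external documents.",
    "Instruction tuning fine-tunes an LLM on instruction-response pairs.",
]

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding: a normalized bag-of-words hash vector.
    A real RAG system would use a learned text encoder instead."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [float(q @ embed(d)) for d in documents]
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def build_rag_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM's answer is grounded in the
    external store, mitigating hallucination and outdated knowledge."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_rag_prompt("What does RAG do for LLMs?"))
# The resulting prompt would then be passed to an LLM for generation.
```

The design point the abstract makes is visible here: the model's parametric knowledge is never updated; freshness and traceability come from swapping or extending the document store.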
