Personal Intelligent Assistant Based on Large Language Model: Personalized Knowledge Extraction and Query Answering Using Local Data and Large Language Model
Qiu, Ke (2024)
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2024052917785
Abstract
As large language models (LLMs) become increasingly capable of understanding and generating natural language, their potential application in the field of personal intelligent assistants is attracting growing attention. This thesis proposes a new approach to building personal intelligent assistants that combines the advantages of local data sources with LLMs to provide personalized knowledge extraction and query answering services.
The system integrates retrieval-augmented generation (RAG) and an LLM using LangChain, with local files on the user's personal computer serving as the knowledge source. First, RAG is used to retrieve file chunks related to the user's question. These snippets are then fed into an LLM together with the original query, and the model composes a fluent, context-aware answer.
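The thesis does not reproduce implementation code here, but the pipeline it describes can be sketched with standard LangChain components. The following is a minimal, illustrative sketch only: the directory name local_docs, the chunking parameters, the FAISS vector store, and the OpenAI embedding and chat models are assumptions, not the thesis author's actual choices.

```python
# Minimal RAG-over-local-files sketch using classic LangChain components.
# Assumes the langchain, langchain-community, langchain-openai, and faiss-cpu
# packages are installed and an OpenAI API key is configured.
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain.chains import RetrievalQA

# 1. Load local files as the knowledge source ("local_docs" is a hypothetical folder).
docs = DirectoryLoader("local_docs", glob="**/*.txt", loader_cls=TextLoader).load()

# 2. Split the files into overlapping chunks small enough for retrieval
#    and for the LLM's context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 3. Embed the chunks and build a local vector index.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 4. Retrieve the chunks most relevant to the question and let the LLM
#    compose a fluent answer from them plus the original query.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo"),
    retriever=index.as_retriever(search_kwargs={"k": 4}),
)
answer = qa.invoke({"query": "Summarize my notes on project deadlines."})
print(answer["result"])
```

In this sketch the vector store and index live entirely on the user's machine; only the retrieved snippets and the query are sent to the LLM, which matches the division of labor described above.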
This approach has three main advantages: (1) it enables personalized knowledge extraction based on the user's own data sources; (2) it leverages the strong language understanding and generation capabilities of LLMs to provide high-quality responses; and (3) it helps users acquire knowledge quickly in their daily work and study.
In addition to studying how to combine large language models with local data so that users can obtain personalized information efficiently, the thesis also considers future directions for extending and improving the system. We believe this approach offers a new perspective on the future development of intelligent assistants.