https://www.deeplearning.ai/short-courses/building-evaluating-advanced-rag/
## Course highlights
https://www.trulens.org/
* sentence window retrieval
* auto-merging retrieval
* evaluation metrics
    * context relevance
    * groundedness
    * answer relevance
> If you only count the theory, there aren't actually that many knowledge points; most of the time is spent on LlamaIndex API details.
## Advanced RAG Pipeline
1. Build a basic RAG pipeline with LlamaIndex, then run a TruLens evaluation (a minimal sketch follows this list)
2. Switch to Sentence Window retrieval, then run a TruLens evaluation
3. Switch to Auto-merging retrieval, then run a TruLens evaluation
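A minimal sketch of step 1, assuming the legacy `llama_index` (0.9.x) API that `utils.py` below also uses, an `OPENAI_API_KEY` in the environment, and `get_prebuilt_trulens_recorder` from that `utils.py`; the PDF path and question are placeholders:

```python
from llama_index import SimpleDirectoryReader, Document, VectorStoreIndex, ServiceContext
from llama_index.llms import OpenAI
from trulens_eval import Tru
from utils import get_prebuilt_trulens_recorder

# Load the source document and flatten it into a single Document (placeholder path)
docs = SimpleDirectoryReader(input_files=["./eBook-How-to-Build-a-Career-in-AI.pdf"]).load_data()
document = Document(text="\n\n".join(d.text for d in docs))

# Basic RAG: one vector index, default retrieval, no window/merging tricks
llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
service_context = ServiceContext.from_defaults(
    llm=llm, embed_model="local:BAAI/bge-small-en-v1.5"
)
index = VectorStoreIndex.from_documents([document], service_context=service_context)
query_engine = index.as_query_engine()

# Record answer relevance / context relevance / groundedness while answering questions
tru = Tru()
tru_recorder = get_prebuilt_trulens_recorder(query_engine, app_id="Direct Query Engine")
with tru_recorder as recording:
    query_engine.query("What are the keys to building a career in AI?")
print(tru.get_leaderboard(app_ids=[]))
```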
![[Pasted image 20240112171438.png]]
![[Pasted image 20240112171947.png]]
> This TruLens evaluation seems simpler than Ragas.
![[Pasted image 20240112173158.png]]
![[Pasted image 20240112173729.png]]
![[Pasted image 20240112173955.png]]
## RAG Triad of metrics
![[Pasted image 20240112220857.png]]
![[Pasted image 20240112220947.png]]
![[Pasted image 20240112221047.png]]
![[Pasted image 20240112221359.png]]
> This doesn't score against a reference answer; it only scores whether the answer is relevant to the question (so it can't be used as an end-to-end final score).
![[Pasted image 20240112222123.png]]
![[Pasted image 20240112222210.png]]
![[Pasted image 20240112222353.png]]
![[Pasted image 20240112222847.png]]
![[Pasted image 20240112225113.png]]
![[Pasted image 20240112225149.png]]
The evaluations provided by TruLens include:
![[Pasted image 20240112225611.png]]
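The three feedback functions themselves (Answer Relevance, Context Relevance, Groundedness) are defined in `utils.py` below; a sketch of how the recorded scores can be inspected afterwards, assuming runs were already recorded as above:

```python
from trulens_eval import Tru

tru = Tru()  # connects to the default local database of recorded runs

# Per-record scores for the three feedback functions
records, feedback_names = tru.get_records_and_feedback(app_ids=[])
print(feedback_names)  # e.g. Answer Relevance, Context Relevance, Groundedness
print(records[["input", "output"] + feedback_names].head())

# Aggregated scores per app_id, plus the local Streamlit dashboard
print(tru.get_leaderboard(app_ids=[]))
tru.run_dashboard()
```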
## Sentence-window retrieval
Uses SentenceWindowNodeParser.
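A quick look at what this parser produces, using the same defaults as `utils.py` below (toy text made up): each node keeps a single sentence as its text and stores the surrounding sentences in a "window" metadata field.

```python
from llama_index import Document
from llama_index.node_parser import SentenceWindowNodeParser

node_parser = SentenceWindowNodeParser.from_defaults(
    window_size=3,
    window_metadata_key="window",
    original_text_metadata_key="original_text",
)

nodes = node_parser.get_nodes_from_documents(
    [Document(text="hello. how are you? I am fine! This is a test. Bye.")]
)
print([n.text for n in nodes])      # one sentence per node
print(nodes[2].metadata["window"])  # that sentence plus its neighbours
```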
![[Pasted image 20240113004137.png]]
![[Pasted image 20240113004148.png]]
![[Pasted image 20240113004203.png]]
![[Pasted image 20240113004847.png]]
Uses postprocessors (a standalone sketch follows this list):
* MetadataReplacementPostProcessor replaces each retrieved sentence with its larger window chunk
* SentenceTransformerRerank re-ranks the retrieved nodes
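Both postprocessors can also be exercised on their own; a sketch with made-up toy nodes (the real wiring into a query engine is `get_sentence_window_query_engine` in `utils.py` below):

```python
from llama_index import QueryBundle
from llama_index.schema import TextNode, NodeWithScore
from llama_index.indices.postprocessor import (
    MetadataReplacementPostProcessor,
    SentenceTransformerRerank,
)

# MetadataReplacementPostProcessor: swap the node text for the larger window
# that SentenceWindowNodeParser stored in metadata
node = TextNode(
    text="I am fine!",
    metadata={"window": "hello. how are you? I am fine! This is a test."},
)
postproc = MetadataReplacementPostProcessor(target_metadata_key="window")
print(postproc.postprocess_nodes([NodeWithScore(node=node, score=1.0)])[0].node.text)

# SentenceTransformerRerank: re-order candidates by cross-encoder relevance to the query
rerank = SentenceTransformerRerank(top_n=2, model="BAAI/bge-reranker-base")
candidates = [
    NodeWithScore(node=TextNode(text="This is a cat"), score=0.6),
    NodeWithScore(node=TextNode(text="This is a dog"), score=0.4),
]
for n in rerank.postprocess_nodes(candidates, query_bundle=QueryBundle("I want a dog")):
    print(n.score, n.node.text)
```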
![[Pasted image 20240113130025.png]]
![[Pasted image 20240113130042.png]]
Note the trade-off with token usage and cost. As the window size grows, token usage and cost rise, and in many cases context relevance does too. Increasing the window size at first is expected to improve context relevance and therefore indirectly improve groundedness as well. One reason: when the retrieval step does not return sufficiently relevant context, the LLM in the completion step tends to fill the gaps with knowledge from its pre-training phase instead of relying on the retrieved context pieces, and that choice lowers the groundedness score, since groundedness means the components of the final response should be traceable back to retrieved pieces of context. So the expectation is that as the sentence window size keeps increasing, context relevance rises up to a point and groundedness rises with it; beyond that point context relevance either flattens or drops, and groundedness is likely to follow a similar pattern.

In practice there is also an interesting relationship between context relevance and groundedness. When context relevance is low, groundedness tends to be low as well, because the LLM usually tries to fill gaps in the retrieved context with its pre-training knowledge, which lowers groundedness even if the answer is actually quite relevant. As context relevance increases, groundedness usually increases too, up to a point; but if the context becomes too large, groundedness can drop even with high context relevance, because the LLM gets overwhelmed by an overly large context and falls back to its pre-existing knowledge from training.
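In the course this trade-off is checked empirically by rebuilding the index with window sizes 1, 3, and 5 and re-running the same questions under TruLens. A sketch, assuming `document`, `llm`, and an `eval_questions` list already exist, and reusing the helpers from `utils.py` below (the appendix version of `build_sentence_window_index` hardcodes `window_size=3`, so the index build is inlined here):

```python
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.node_parser import SentenceWindowNodeParser
from utils import get_sentence_window_query_engine, get_prebuilt_trulens_recorder

for window_size in [1, 3, 5]:
    node_parser = SentenceWindowNodeParser.from_defaults(
        window_size=window_size,
        window_metadata_key="window",
        original_text_metadata_key="original_text",
    )
    ctx = ServiceContext.from_defaults(
        llm=llm, embed_model="local:BAAI/bge-small-en-v1.5", node_parser=node_parser
    )
    index = VectorStoreIndex.from_documents([document], service_context=ctx)
    engine = get_sentence_window_query_engine(index)

    recorder = get_prebuilt_trulens_recorder(
        engine, app_id=f"sentence window engine {window_size}"
    )
    with recorder as recording:
        for q in eval_questions:
            engine.query(q)
# compare the three app_ids on the TruLens leaderboard / dashboard afterwards
```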
![[Pasted image 20240113130430.png]]
![[Pasted image 20240113195328.png]]
In the end, window size 3 measured better than 1.
But 5 was not better than 3: cost went up and groundedness actually dropped.
![[Pasted image 20240113201603.png]]
## Auto-merging retrieval
Uses HierarchicalNodeParser.
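A quick look at what this parser produces, assuming the merged `document` from earlier: each chunk is split into progressively smaller children, the whole hierarchy goes into the docstore, and only the leaf nodes get embedded.

```python
from llama_index.node_parser import HierarchicalNodeParser, get_leaf_nodes

# Three layers: 2048-token chunks -> 512-token children -> 128-token leaves
node_parser = HierarchicalNodeParser.from_defaults(chunk_sizes=[2048, 512, 128])
nodes = node_parser.get_nodes_from_documents([document])
leaf_nodes = get_leaf_nodes(nodes)

print(len(nodes), len(leaf_nodes))  # all levels vs. leaves only
print(leaf_nodes[0].parent_node)    # link back up the hierarchy used by auto-merging
```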
![[Pasted image 20240113222627.png]]
![[Pasted image 20240113223059.png]]
Uses AutoMergingRetriever and RetrieverQueryEngine
> The previous one used sentence_index.as_query_engine, which internally also builds a RetrieverQueryEngine
Evaluated 2 layers (2048, 512) against 3 layers (2048, 512, 128).
3 layers cut token usage roughly in half, and the scores were also better (sketch of the comparison below).
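A sketch of that comparison using the helpers from `utils.py` below, assuming `documents`, `llm`, and `eval_questions` already exist (app ids and save dirs are made up):

```python
from utils import (
    build_automerging_index,
    get_automerging_query_engine,
    get_prebuilt_trulens_recorder,
)

configs = [
    ("app_2layer", [2048, 512], "merging_index_2layer"),
    ("app_3layer", [2048, 512, 128], "merging_index_3layer"),
]

for app_id, chunk_sizes, save_dir in configs:
    index = build_automerging_index(
        documents, llm=llm, save_dir=save_dir, chunk_sizes=chunk_sizes
    )
    engine = get_automerging_query_engine(index, similarity_top_k=12, rerank_top_n=2)

    recorder = get_prebuilt_trulens_recorder(engine, app_id=app_id)
    with recorder as recording:
        for q in eval_questions:
            engine.query(q)
# the TruLens leaderboard then shows scores and token cost side by side
```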
![[Pasted image 20240114010102.png]]
e.g. if a question needs both node 1 and node 4, sentence-window retrieval may fail to pull them in because they are not contiguous, whereas auto-merging retrieval can fetch the parent node one layer up and end up with better context.
## Notes: utils.py
Oddly, this file just sits in the Jupyter working directory instead of being shown directly in the course notebook... :|
```python
#!pip install python-dotenv
import os
from dotenv import load_dotenv, find_dotenv
import numpy as np
from trulens_eval import (
    Feedback,
    TruLlama,
    OpenAI
)
from trulens_eval.feedback import Groundedness
import nest_asyncio

nest_asyncio.apply()


def get_openai_api_key():
    _ = load_dotenv(find_dotenv())
    return os.getenv("OPENAI_API_KEY")


def get_hf_api_key():
    _ = load_dotenv(find_dotenv())
    return os.getenv("HUGGINGFACE_API_KEY")


# Feedback functions for the RAG triad, scored by an OpenAI provider
openai = OpenAI()

# Answer Relevance: is the final answer relevant to the question?
qa_relevance = (
    Feedback(openai.relevance_with_cot_reasons, name="Answer Relevance")
    .on_input_output()
)

# Context Relevance: is each retrieved chunk relevant to the question? (averaged)
qs_relevance = (
    Feedback(openai.relevance_with_cot_reasons, name="Context Relevance")
    .on_input()
    .on(TruLlama.select_source_nodes().node.text)
    .aggregate(np.mean)
)

# Groundedness: is the answer supported by the retrieved chunks?
#grounded = Groundedness(groundedness_provider=openai, summarize_provider=openai)
grounded = Groundedness(groundedness_provider=openai)
groundedness = (
    Feedback(grounded.groundedness_measure_with_cot_reasons, name="Groundedness")
    .on(TruLlama.select_source_nodes().node.text)
    .on_output()
    .aggregate(grounded.grounded_statements_aggregator)
)

feedbacks = [qa_relevance, qs_relevance, groundedness]


def get_trulens_recorder(query_engine, feedbacks, app_id):
    tru_recorder = TruLlama(
        query_engine,
        app_id=app_id,
        feedbacks=feedbacks
    )
    return tru_recorder


def get_prebuilt_trulens_recorder(query_engine, app_id):
    tru_recorder = TruLlama(
        query_engine,
        app_id=app_id,
        feedbacks=feedbacks
    )
    return tru_recorder


from llama_index import ServiceContext, VectorStoreIndex, StorageContext
from llama_index.node_parser import SentenceWindowNodeParser
from llama_index.indices.postprocessor import MetadataReplacementPostProcessor
from llama_index.indices.postprocessor import SentenceTransformerRerank
from llama_index import load_index_from_storage
import os


def build_sentence_window_index(
    document, llm, embed_model="local:BAAI/bge-small-en-v1.5", save_dir="sentence_index"
):
    # create the sentence window node parser w/ default settings
    node_parser = SentenceWindowNodeParser.from_defaults(
        window_size=3,
        window_metadata_key="window",
        original_text_metadata_key="original_text",
    )
    sentence_context = ServiceContext.from_defaults(
        llm=llm,
        embed_model=embed_model,
        node_parser=node_parser,
    )
    # build the index on first run, otherwise reload it from disk
    if not os.path.exists(save_dir):
        sentence_index = VectorStoreIndex.from_documents(
            [document], service_context=sentence_context
        )
        sentence_index.storage_context.persist(persist_dir=save_dir)
    else:
        sentence_index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=save_dir),
            service_context=sentence_context,
        )
    return sentence_index


def get_sentence_window_query_engine(
    sentence_index,
    similarity_top_k=6,
    rerank_top_n=2,
):
    # define postprocessors: replace each sentence with its window, then rerank
    postproc = MetadataReplacementPostProcessor(target_metadata_key="window")
    rerank = SentenceTransformerRerank(
        top_n=rerank_top_n, model="BAAI/bge-reranker-base"
    )
    sentence_window_engine = sentence_index.as_query_engine(
        similarity_top_k=similarity_top_k, node_postprocessors=[postproc, rerank]
    )
    return sentence_window_engine


from llama_index.node_parser import HierarchicalNodeParser
from llama_index.node_parser import get_leaf_nodes
from llama_index import StorageContext
from llama_index.retrievers import AutoMergingRetriever
from llama_index.indices.postprocessor import SentenceTransformerRerank
from llama_index.query_engine import RetrieverQueryEngine


def build_automerging_index(
    documents,
    llm,
    embed_model="local:BAAI/bge-small-en-v1.5",
    save_dir="merging_index",
    chunk_sizes=None,
):
    chunk_sizes = chunk_sizes or [2048, 512, 128]
    node_parser = HierarchicalNodeParser.from_defaults(chunk_sizes=chunk_sizes)
    nodes = node_parser.get_nodes_from_documents(documents)
    leaf_nodes = get_leaf_nodes(nodes)
    merging_context = ServiceContext.from_defaults(
        llm=llm,
        embed_model=embed_model,
    )
    # keep every hierarchy level in the docstore; only leaf nodes get embedded
    storage_context = StorageContext.from_defaults()
    storage_context.docstore.add_documents(nodes)
    if not os.path.exists(save_dir):
        automerging_index = VectorStoreIndex(
            leaf_nodes, storage_context=storage_context, service_context=merging_context
        )
        automerging_index.storage_context.persist(persist_dir=save_dir)
    else:
        automerging_index = load_index_from_storage(
            StorageContext.from_defaults(persist_dir=save_dir),
            service_context=merging_context,
        )
    return automerging_index


def get_automerging_query_engine(
    automerging_index,
    similarity_top_k=12,
    rerank_top_n=2,
):
    # merge leaf hits into their parents before reranking and answering
    base_retriever = automerging_index.as_retriever(similarity_top_k=similarity_top_k)
    retriever = AutoMergingRetriever(
        base_retriever, automerging_index.storage_context, verbose=True
    )
    rerank = SentenceTransformerRerank(
        top_n=rerank_top_n, model="BAAI/bge-reranker-base"
    )
    auto_merging_engine = RetrieverQueryEngine.from_args(
        retriever, node_postprocessors=[rerank]
    )
    return auto_merging_engine
```