LangChain provides a callback system that lets you hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
from typing import Any, Dict, List, Union

from langchain.schema import AgentAction, AgentFinish, BaseMessage, LLMResult


class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""

    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any
    ) -> Any:
        """Run when Chat Model starts running."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        """Run on new LLM token. Only available when streaming is enabled."""

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        """Run when chain starts running."""

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
        """Run when chain ends running."""

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when chain errors."""

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""

    def on_tool_end(self, output: str, **kwargs: Any) -> Any:
        """Run when tool ends running."""

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when tool errors."""

    def on_text(self, text: str, **kwargs: Any) -> Any:
        """Run on arbitrary text."""

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run on agent end."""
StdOutCallbackHandler logs every event to standard output, printing it to the terminal.
Note: when the verbose argument is set to True, StdOutCallbackHandler is enabled by default; that is why you see the full run log printed in the terminal window.
An example:
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
# Constructor callback: first, explicitly set the StdOutCallbackHandler when initializing the chain
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.invoke({"number": 2})

# Verbose flag: next, use the `verbose` flag to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
chain.invoke({"number": 2})

# Request callbacks: finally, pass `callbacks` at request time to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt)
chain.invoke({"number": 2}, {"callbacks": [handler]})
Output:
Explanation of the code and results:
As the output shows, the three runs produce identical results. Looking back at the code, StdOutCallbackHandler is set up in three different ways:
The first: chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler]), where the callback handler is passed in directly through the chain's callbacks argument.
The second: verbose=True, where the chain uses StdOutCallbackHandler even without callbacks being declared explicitly.
The third: chain.invoke({"number": 2}, {"callbacks": [handler]}), where the callbacks are passed in at invoke time.
To implement your own callback handler, inherit from BaseCallbackHandler and override the callback methods you need.
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import HumanMessage
from langchain_openai import ChatOpenAI


class MyCustomHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"My custom handler, token: {token}")


# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])
chat.invoke([HumanMessage(content="Tell me a joke")])
Output:
Sometimes a callback does heavy data processing and can be slow. A regular (synchronous) callback would then block the main thread, which is where async callbacks become useful.
To implement your own async callback handler, inherit from AsyncCallbackHandler and override the callback methods you need.
from langchain.callbacks.base import AsyncCallbackHandler


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""
    ...... override the relevant callback methods ......
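To see why async callbacks help, here is a small self-contained sketch (no LangChain, no network): the handler's hooks are coroutines that `await` instead of blocking, so a second task keeps running while tokens are processed. The `fake_stream` and `heartbeat` functions are invented for this illustration.

```python
import asyncio
from typing import Any, List


class MyCustomAsyncHandler:
    """Mimics an AsyncCallbackHandler: the hooks are coroutines."""

    def __init__(self) -> None:
        self.tokens: List[str] = []

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Simulate slow per-token processing without blocking the event loop.
        await asyncio.sleep(0.01)
        self.tokens.append(token)


async def fake_stream(handler: MyCustomAsyncHandler) -> None:
    """Invented stand-in for a streaming LLM call: fires the async hook per token."""
    for token in ["Why", "did", "the", "chicken"]:
        await handler.on_llm_new_token(token)


async def heartbeat(beats: List[str]) -> None:
    """A second task that should keep running while tokens are processed."""
    for _ in range(4):
        await asyncio.sleep(0.005)
        beats.append("tick")


async def main() -> None:
    handler = MyCustomAsyncHandler()
    beats: List[str] = []
    # Both tasks make progress concurrently because the handler awaits
    # instead of blocking the loop with slow synchronous work.
    await asyncio.gather(fake_stream(handler), heartbeat(beats))
    print(handler.tokens)  # ['Why', 'did', 'the', 'chicken']
    print(len(beats))      # 4


asyncio.run(main())
```

If `on_llm_new_token` used a blocking `time.sleep` instead, the heartbeat task would stall until all tokens were processed; that is the blocking behavior the text above warns about.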
During development, logging is one of the most important debugging tools. In a production project, we cannot simply dump logs to the terminal, because that way they cannot be forwarded or persisted.
from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import OpenAI
logfile = "output.log"
handler = FileCallbackHandler(logfile)
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
# this chain will both print to stdout (because verbose=True) and write to 'output.log'
# if verbose=False, the FileCallbackHandler will still write to 'output.log'
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler], verbose=True)
answer = chain.invoke({"number": 2})
Output:
A side note: the log file above looks garbled when opened, because it contains ANSI color codes. It can be parsed and rendered with the following approach:
pip install --upgrade ansi2html
pip install ipython

from ansi2html import Ansi2HTMLConverter
from IPython.display import HTML, display

with open("output.log", "r") as f:
    content = f.read()

conv = Ansi2HTMLConverter()
html = conv.convert(content, full=True)
display(HTML(html))
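If you only need plain text rather than HTML, the ANSI escape sequences can also be stripped with a small regex. This is a generic sketch, not part of LangChain:

```python
import re

# Matches SGR escape sequences such as "\x1b[1m" (bold) and "\x1b[0m" (reset).
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")


def strip_ansi(text: str) -> str:
    """Remove ANSI color/style codes, leaving plain log text."""
    return ANSI_ESCAPE.sub("", text)


colored = "\x1b[1m> Entering new LLMChain chain...\x1b[0m"
print(strip_ansi(colored))  # > Entering new LLMChain chain...
```

Applying `strip_ansi` to the contents of `output.log` before saving or shipping it gives a clean, reader-friendly log.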
Tokens are money, so knowing how many tokens your program consumes is also very important. Use get_openai_callback to get the token usage.
from langchain.callbacks import get_openai_callback
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)

with get_openai_callback() as cb:
    llm.invoke("What is the square root of 4?")
    total_tokens = cb.total_tokens

print("total_tokens: ", total_tokens)
## Output: total_tokens: 20
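Besides total_tokens, the callback also tracks prompt_tokens, completion_tokens, and total_cost. The arithmetic behind the cost is simply per-token pricing; a sketch with illustrative prices (not current ones, check your provider's pricing page):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  prompt_price_per_1k: float, completion_price_per_1k: float) -> float:
    """Estimate request cost from token counts and per-1K-token prices."""
    return (prompt_tokens / 1000) * prompt_price_per_1k \
        + (completion_tokens / 1000) * completion_price_per_1k


# Illustrative prices only, chosen for this example.
cost = estimate_cost(prompt_tokens=12, completion_tokens=8,
                     prompt_price_per_1k=0.0015, completion_price_per_1k=0.002)
print(f"${cost:.6f}")  # $0.000034
```

This is the same calculation get_openai_callback performs internally against its built-in model price table when it reports total_cost.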
In this article we studied LangChain's Callbacks module, practiced the various callback usages, and learned how to use LangChain to write log files, count tokens, and more. This matters a great deal for debugging and for monitoring every stage of a program.
If you found this article helpful, a like and a follow would be much appreciated ~~~