Error in self._fastapi_stream2generator when requesting data from the LLM #57

Open

Aseisman opened this issue Oct 23, 2024 · 2 comments

@Aseisman

```
2024-10-23 11:54:32,830 - _client.py[line:1038] - INFO: HTTP Request: GET http://127.0.0.1:7862/sdfiles/download?filename=&save_filename= "HTTP/1.1 200 OK"
2024-10-23 11:54:32.832 | DEBUG    | webui.dialogue:dialogue_page:306 - prompt: hallo
2024-10-23 11:54:32.865 | INFO     | webui.utils:_fastapi_stream2generator:244 - <starlette.responses.StreamingResponse object at 0x7fad88ab3ac0>
2024-10-23 11:54:32.872 | WARNING  | muagent.llm_models.openai_model:__init__:54 - There is no zdatafront, you just do as openai config
/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/muagent/chat/llm_chat.py:30: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use RunnableSequence, e.g., `prompt | llm` instead.
  chain = LLMChain(prompt=chat_prompt, llm=model)
/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/muagent/chat/llm_chat.py:31: LangChainDeprecationWarning: The method `Chain.__call__` was deprecated in langchain 0.1.0 and will be removed in 0.3.0. Use invoke instead.
  content = chain({"input": query})
2024-10-23 11:54:32,965 - _base_client.py[line:1047] - INFO: Retrying request to /chat/completions in 0.988324 seconds
2024-10-23 11:54:33,956 - _base_client.py[line:1047] - INFO: Retrying request to /chat/completions in 1.587208 seconds
2024-10-23 11:54:35.558 | ERROR    | webui.utils:_fastapi_stream2generator:252 - Traceback (most recent call last):
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_transports/default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_backends/sync.py", line 213, in connect_tcp
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 952, in _request
    response = self._client.send(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_client.py", line 926, in send
    response = self._send_handling_auth(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_client.py", line 954, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_client.py", line 1027, in _send_single_request
    response = transport.handle_request(request)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_transports/default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/codefuse-chatbot/examples/webui/utils.py", line 246, in _fastapi_stream2generator
    for chunk in  iter_over_async(response.body_iterator, loop):
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/muagent/utils/server_utils.py", line 120, in iter_over_async
    done, obj = loop.run_until_complete(get_next())
  File "/opt/conda/envs/devopsgpt/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/muagent/utils/server_utils.py", line 115, in get_next
    obj = await ait.__anext__()
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/starlette/concurrency.py", line 63, in iterate_in_threadpool
    yield await anyio.to_thread.run_sync(_next, iterator)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/starlette/concurrency.py", line 53, in _next
    return next(iterator)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/muagent/chat/base_chat.py", line 83, in chat_iterator
    result, content = self.create_task(query, history, model, llm_config, embed_config, **kargs)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/muagent/chat/llm_chat.py", line 31, in create_task
    content = chain({"input": query})
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_core/_api/deprecation.py", line 180, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain/chains/base.py", line 383, in __call__
    return self.invoke(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain/chains/base.py", line 166, in invoke
    raise e
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain/chains/llm.py", line 126, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain/chains/llm.py", line 138, in generate
    return self.llm.generate_prompt(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
    raise e
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
    self._generate_with_cache(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
    result = self._generate(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_openai/chat_models/base.py", line 534, in _generate
    return generate_from_stream(stream_iter)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 88, in generate_from_stream
    generation = next(stream, None)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/langchain_openai/chat_models/base.py", line 482, in _stream
    with self.client.create(messages=message_dicts, **params) as response:
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/resources/chat/completions.py", line 606, in create
    return self._post(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 976, in _request
    return self._retry_request(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 1053, in _retry_request
    return self._request(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 976, in _request
    return self._retry_request(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 1053, in _retry_request
    return self._request(
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/openai/_base_client.py", line 986, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

2024-10-23 11:54:35.561 Uncaught app exception
Traceback (most recent call last):
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 85, in exec_func_with_error_handling
    result = func()
  File "/opt/conda/envs/devopsgpt/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 576, in code_to_exec
    exec(code, module.__dict__)
  File "/home/codefuse-chatbot/examples/webui.py", line 92, in <module>
    pages[selected_page]["func"](api)
  File "/home/codefuse-chatbot/examples/webui/dialogue.py", line 523, in dialogue_page
    st.experimental_rerun()
AttributeError: module 'streamlit' has no attribute 'experimental_rerun'
```
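Although the error surfaces in `webui.utils:_fastapi_stream2generator`, the root cause in the chained traceback is `httpcore.ConnectError: [Errno 111] Connection refused` on the `/chat/completions` request: the OpenAI client could not reach the local LLM API server at the configured base URL, retried twice, and then raised `APIConnectionError`.

The `LangChainDeprecationWarning` lines are noise rather than the cause, but the migration they ask for is small. A minimal sketch, assuming `chat_prompt`, `model`, and `query` are the objects already used in `muagent/chat/llm_chat.py` (note that the return value changes from `LLMChain`'s dict to a message object):

```python
# Sketch of the post-0.1.17 LangChain style suggested by the warnings.
# Assumes chat_prompt is a ChatPromptTemplate and model is a chat model,
# as already constructed in muagent/chat/llm_chat.py.
chain = chat_prompt | model               # replaces: LLMChain(prompt=chat_prompt, llm=model)
content = chain.invoke({"input": query})  # replaces: chain({"input": query})
# content is now a message object; read content.content where the old
# dict's "text" field was consumed.
```

The trailing `AttributeError` is a separate compatibility issue: recent Streamlit releases removed `st.experimental_rerun`, and `st.rerun()` is the documented replacement for the call at `examples/webui/dialogue.py:523`:

```python
st.rerun()  # replaces the removed st.experimental_rerun()
```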
@Aseisman (Author)

The local model chatglm-6b was placed in llm_models and configured as documented. After starting up and sending "hallo", the error above is returned. How can I verify that the local model has started correctly?
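One way to verify is to call the LLM API directly, bypassing the web UI. A minimal sketch, assuming the service exposes an OpenAI-compatible API; `BASE_URL` here is a placeholder for whatever `api_base_url` / `openai_api_base` your model config points the failing `/chat/completions` call at:

```python
import requests

# BASE_URL is an assumption -- substitute the base URL from your
# model_config.py (the one the web UI's /chat/completions call targets).
BASE_URL = "http://127.0.0.1:8888/v1"

# The model list should return 200 and include 'chatglm-6b'.
models = requests.get(f"{BASE_URL}/models", timeout=5)
print(models.status_code, models.text)

# A minimal chat completion should return a reply, not 'Connection refused'.
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "chatglm-6b",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=60,
)
print(chat.status_code, chat.text)
```

If these calls also fail with `Connection refused`, the model service never came up, and the web UI traceback above is only a symptom.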

@Aseisman (Author)

Output when running `python examples/llm_api.py`:

```
 model_config.py[line:33] - ERROR: No module named 'zdatafront'

.....................


Currently started LLM models: ['chatglm-6b'] @ cpu

.......................

 ERROR: config: {'model_path': 'THUDM/chatglm-6b', 'device': 'cpu'}, chatglm-6b, dict_keys(['gpt-3.5-turbo'])

.............................................

Traceback (most recent call last):
  File "/home/codefuse-chatbot/examples/llm_api.py", line 830, in start_main_server
    controller_started.wait() # wait for the controller to finish starting
  File "/opt/conda/envs/devopsgpt/lib/python3.9/multiprocessing/managers.py", line 1085, in wait
    return self._callmethod('wait', (timeout,))
  File "/opt/conda/envs/devopsgpt/lib/python3.9/multiprocessing/managers.py", line 810, in _callmethod
    kind, result = conn.recv()
  File "/opt/conda/envs/devopsgpt/lib/python3.9/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/opt/conda/envs/devopsgpt/lib/python3.9/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/opt/conda/envs/devopsgpt/lib/python3.9/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
  File "/home/codefuse-chatbot/examples/llm_api.py", line 735, in f
    raise KeyboardInterrupt(f"{signalname} received")
KeyboardInterrupt: SIGINT received
```
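The `KeyboardInterrupt: SIGINT received` only marks where the process was blocked when interrupted: `start_main_server` was still waiting on `controller_started.wait()`, meaning the controller never signalled that it had started. A quick sketch for checking which service ports are actually listening (the port numbers are assumptions; substitute the controller / API ports from your own server config):

```python
import socket

# Hypothetical port map -- replace with the controller, model worker,
# and OpenAI-compatible API ports configured in your setup.
PORTS = {"controller": 20001, "openai_api": 8888}

for name, port in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2)
        status = "listening" if sock.connect_ex(("127.0.0.1", port)) == 0 else "closed"
        print(f"{name:12s} 127.0.0.1:{port} -> {status}")
```

The earlier `ERROR: config: ..., chatglm-6b, dict_keys(['gpt-3.5-turbo'])` line is also worth a look: it suggests the running config only registered `gpt-3.5-turbo`, not `chatglm-6b`.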
