Ollama + DeepSeek
Doing nothing else, just pip install ollama — the install location is then the following:
(torch_gpu) rkqq@oem:~/tradegen/llm$ locate ollama
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/__init__.py
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/__pycache__
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/_client.py
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/_types.py
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/_utils.py
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/py.typed
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/__pycache__/__init__.cpython-39.pyc
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/__pycache__/_client.cpython-39.pyc
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/__pycache__/_types.cpython-39.pyc
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama/__pycache__/_utils.cpython-39.pyc
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info/INSTALLER
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info/LICENSE
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info/METADATA
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info/RECORD
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info/REQUESTED
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info/WHEEL
/home/rkqq/tradegen/test_ollama.py
With this, import ollama works in Python code, but actually running a model still needs a client-server (C-S) connection over the network, and most of the slowness is a network problem. After pip uninstall ollama, locate still shows (most likely stale entries in locate's database):
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama
/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/ollama-0.4.7.dist-info
/home/rkqq/tradegen/test_ollama.py
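To make the client-server point concrete: the pip package is only a thin HTTP client. A minimal sketch, assuming the package is installed and a server is listening on Ollama's default endpoint http://localhost:11434:

import ollama

# The pip package is just an HTTP client; point it at a server explicitly.
# The host below is Ollama's default endpoint, assumed here unless
# OLLAMA_HOST says otherwise.
client = ollama.Client(host='http://localhost:11434')

# Without a reachable server this raises a connection error out of httpx,
# which is exactly the C-S behavior described above.
print(client.list())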
The real installation is the one below. The so-called install on the ollama website essentially still downloads a runtime over the network; it is not an everything-local server + local client out of the box. The command line counts as local use, but there is no Python-level way without also installing the client package.
STEP 1
Method 1: curl -fsSL https://ollama.com/install.sh | sh
Method 2:
curl -C - -fsSL https://ollama.com/install.sh -o install.sh
Revised: curl -C - -v -fsSL "https://ollama.com/install.sh" -o install.sh 2>&1 | tee curl_download.log
-C - : lets curl resume a partially downloaded file.
-v : enables verbose output with more detail about the download process.
-fsSL : -f fails silently on server errors, -s hides the progress meter, -S still shows errors, and -L follows redirects.
-o install.sh : saves the output to the file install.sh.
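For intuition, -C - works by asking the server for only the missing byte range. Below is a rough Python sketch of the same resume logic using httpx; it assumes the server honors Range requests, and it illustrates the mechanism rather than what install.sh itself does:

import os
import httpx

def resume_download(url: str, dest: str) -> None:
    # Mimic `curl -C - -o dest URL`: continue from the bytes already on disk.
    pos = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {'Range': f'bytes={pos}-'} if pos else {}
    with httpx.stream('GET', url, headers=headers,
                      follow_redirects=True, timeout=30.0) as resp:
        resp.raise_for_status()
        # 206 Partial Content means the server resumed; otherwise start over.
        mode = 'ab' if resp.status_code == 206 else 'wb'
        with open(dest, mode) as f:
            for chunk in resp.iter_bytes():
                f.write(chunk)

resume_download('https://ollama.com/install.sh', 'install.sh')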
This way install.sh is saved as a local file, so an interrupted download can be resumed, and the script is then run from the local copy: (torch_gpu) rkqq@oem:~/tradegen$ sh install.sh
Whichever method, Method 1 is the one that finally succeeded (second attempt, on a new hard drive, still Method 1, but on a slow connection).
(torch_gpu) rkqq@oem:~/tradegen$ ollama -v
ollama version is 0.5.7
(torch_gpu) rkqq@oem:~/tradegen$ ollama list
No models yet. Browse https://ollama.com/search to find one: a 7b-parameter model file is about 4 GB, and the largest exceed 400 GB. Download one and start it running locally.
STEP 2
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b (running it directly pulls the model first automatically)
Run ollama list and it now shows up. The likely location on disk:
sudo find / -name "*deepseek*"
/usr/share/ollama/.ollama/models/manifests/registry.ollama.ai/library/deepseek-r1
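The pull can also be driven from Python once the client package from earlier is installed. A sketch; the progress field names below (status, completed, total) are those of the 0.4.x Python client and should be treated as an assumption:

import ollama

# Stream pull progress, roughly what `ollama pull deepseek-r1:7b` prints.
for update in ollama.pull('deepseek-r1:7b', stream=True):
    if update.total and update.completed is not None:
        pct = 100 * update.completed / update.total
        print(f'{update.status}: {pct:.1f}%')
    else:
        print(update.status)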
(torch_gpu) rkqq@oem:~$ ollama run deepseek-r1:7b
Run this way on the command line, it presents an interactive interface: it waits for text input, you press Enter, and it answers the question.
>>>
It hints: Use Ctrl + d or /bye to exit. Ctrl+C does not work.
STEP 3
Running it from code:
Still pip install ollama, because otherwise Ollama offers no Python interface, only the command line.
The result below is normal, but the call still goes over HTTP (by default to the local server at http://localhost:11434) and is a bit slow.
from ollama import chat
from ollama import ChatResponse

# Ask the locally served deepseek-r1:7b model a question.
response: ChatResponse = chat(model='deepseek-r1:7b', messages=[
    {
        'role': 'user',
        'content': 'Why is the sky blue?',
    },
])
print(response.message.content)
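The interactive loop from STEP 2 is easy to reproduce on top of this using the client's streaming mode. A sketch, assuming the server is running and deepseek-r1:7b is already pulled:

import ollama

# A tiny REPL mimicking `ollama run deepseek-r1:7b`.
while True:
    prompt = input('>>> ')
    if prompt.strip() in ('/bye', ''):
        break
    # stream=True yields partial responses as they are generated.
    for part in ollama.chat(model='deepseek-r1:7b',
                            messages=[{'role': 'user', 'content': prompt}],
                            stream=True):
        print(part.message.content, end='', flush=True)
    print()

By default chat() talks to http://localhost:11434; the OLLAMA_HOST environment variable, or ollama.Client(host=...), points it at a different server.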
In between there is a C-S exchange: even the most minimal network environment (loopback alone) is enough, but it is not truly offline work. When the request cannot complete, the failure surfaces deep inside httpx, for example:
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpx/_client.py", line 1014, in _send_single_request
response = transport.handle_request(request)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpx/_transports/default.py", line 250, in handle_request
resp = self._pool.handle_request(req)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
raise exc
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
response = connection.handle_request(request)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
return self._connection.handle_request(request)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/http11.py", line 133, in handle_request
raise exc
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/http11.py", line 111, in handle_request
) = self._receive_response_headers(**kwargs)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/http11.py", line 176, in _receive_response_headers
event = self._receive_event(timeout=timeout)
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_sync/http11.py", line 212, in _receive_event
data = self._network_stream.read(
File "/home/rkqq/anaconda3/envs/torch_gpu/lib/python3.9/site-packages/httpcore/_backends/sync.py", line 126, in read