
Notes on Reproducing the 3D Generative Model 3DTopia/LGM


xformers version does not match the torch version

While setting up the environment as described in README.md, after running the pip install commands in order, the first attempt to run printed a warning that the installed torch 2.1.0 did not match the version xformers requires.

The installed xformers was 0.0.26, so I decided to install an older version of xformers:

pip install xformers==0.0.23 -i https://download.pytorch.org/whl/cu118

This would also pull in torch 2.1.1, so I added the --no-deps option (skip installing dependencies):

pip install xformers==0.0.23 --no-deps -i https://download.pytorch.org/whl/cu118

After that, running again no longer produced the version-mismatch warning.
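
As a quick sanity check (a minimal sketch, not part of the repo), the installed combination can be confirmed from Python:

import torch
import xformers

# After the steps above this should report torch 2.1.0 and xformers 0.0.23
print("torch:", torch.__version__)
print("xformers:", xformers.__version__)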

安装xformers后,提示torchaudio torchvision的版本不兼容了 · Issue #24 · 3DTopia/LGM (github.com)

Loading Hugging Face models locally

Following README.md, I tried running app.py, but it failed with the following error:

Couldn't connect to the Hub: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/ashawkey/mvdream-sd2.1-diffusers (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd50fac6880>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 7ccf0ec8-57c5-4094-8283-902e242d5b1b)').
Will try to load from local cache.

...

Traceback (most recent call last):
  File "/home/yjliao/.local/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py", line 1205, in download
    info = model_info(pretrained_model_name, token=token, revision=revision)
  ...
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/ashawkey/mvdream-sd2.1-diffusers (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd50fac6880>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 7ccf0ec8-57c5-4094-8283-902e242d5b1b)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "app.py", line 61, in <module>
    pipe_text = MVDreamPipeline.from_pretrained(
  ...
OSError: Cannot load model ashawkey/mvdream-sd2.1-diffusers: model is not cached locally and an error occurred while trying to fetch metadata from the Hub. Please check out the root cause in the stacktrace above.

The remote server evidently could not reach huggingface.co, so the models could not be downloaded there; I had to download them on my own machine and then upload them to the remote server.

Model repository addresses:


Once the upload was done, I looked into how to use the models offline.

Roughly, the steps are: first set the local_files_only parameter to True (just uncomment the corresponding lines in app.py),

app.py
# load dreams
pipe_text = MVDreamPipeline.from_pretrained(
    'ashawkey/mvdream-sd2.1-diffusers', # remote weights
    torch_dtype=torch.float16,
    trust_remote_code=True,
    # local_files_only=True,
)
pipe_text = pipe_text.to(device)

pipe_image = MVDreamPipeline.from_pretrained(
    "ashawkey/imagedream-ipmv-diffusers", # remote weights
    torch_dtype=torch.float16,
    trust_remote_code=True,
    # local_files_only=True,
)
pipe_image = pipe_image.to(device)

and then point the cache_dir parameter at a custom location for the (Hugging Face model download) cache.

My first few attempts all failed with a message that the corresponding path could not be found:

huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.

and I could not figure out the right way to set cache_dir.

Debugging the code on the remote server with VS Code

I wanted to see exactly which path was being looked up when the final error was raised (so I could then lay out that path).

Since app.py needs command-line arguments and at first I did not know how to pass them to the debugger, I dumped the parsed argument object to a binary file during a normal run and loaded it back while debugging:

import pickle

# During a normal run: parse the CLI arguments and dump them to a file
opt = tyro.cli(AllConfigs)
with open("opt.bin", "wb") as f:
    pickle.dump(opt, f)

# While debugging: load the saved arguments instead of parsing the CLI
with open("opt.bin", "rb") as f:
    opt = pickle.load(f)

Later I found that the debug settings can be configured in launch.json, e.g. whether to prompt for command-line arguments (args) and whether stepping should stay within your own code only (justMyCode).

(image: vscode_python_debugger)

.vscode/launch.json
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python 调试程序: 包含参数的当前文件",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            // "args": "${command:pickArgs}",
            "justMyCode": false,
        }
    ]
}

Stepping through with the debugger, this line of code is where the final path gets built:

huggingface_hub/_snapshot_download.py
storage_folder = os.path.join(cache_dir, repo_folder_name(repo_id=repo_id, repo_type=repo_type))

Taking 'ashawkey/mvdream-sd2.1-diffusers' with cache_dir="/home/yjliao/huggingface" as an example, the resulting storage_folder is

'/home/yjliao/huggingface/models--ashawkey--mvdream-sd2.1-diffusers'
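
In other words, the cache folder name is just the pluralized repo type plus the repo id with '/' replaced by '--'. A small sketch for precomputing the path yourself (same repo id and cache_dir as above):

import os

repo_id = "ashawkey/mvdream-sd2.1-diffusers"
cache_dir = "/home/yjliao/huggingface"

# Same result as repo_folder_name(repo_id=repo_id, repo_type="model")
folder = f"models--{repo_id.replace('/', '--')}"
print(os.path.join(cache_dir, folder))
# /home/yjliao/huggingface/models--ashawkey--mvdream-sd2.1-diffusers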

However, the layout under this path differs slightly from a plain clone of the Hugging Face repo. I ended up running the model-downloading code from app.py on my own machine, which produced the expected directory structure:

/home/yjliao/huggingface/models--ashawkey--mvdream-sd2.1-diffusers
├─refs
│ └─main
└─snapshots
  └─73a034178e748421506492e91790cc62d6aefef5

refs/main is a text file whose content is simply the name of the folder under snapshots, e.g. 73a034178e748421506492e91790cc62d6aefef5.

snapshots/73a034178e748421506492e91790cc62d6aefef5 holds the contents of the cloned repo, so the clone can be dropped straight in there, as sketched below.
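
Here is a rough sketch of assembling that layout by hand; the clone location ~/mvdream-sd2.1-diffusers is a hypothetical path, and the commit hash is the one shown above:

import os
import shutil

cache_dir = "/home/yjliao/huggingface"
storage_folder = os.path.join(cache_dir, "models--ashawkey--mvdream-sd2.1-diffusers")
commit_hash = "73a034178e748421506492e91790cc62d6aefef5"
cloned_repo = os.path.expanduser("~/mvdream-sd2.1-diffusers")  # hypothetical clone location

# refs/main is a text file holding the snapshot folder name
os.makedirs(os.path.join(storage_folder, "refs"), exist_ok=True)
with open(os.path.join(storage_folder, "refs", "main"), "w") as f:
    f.write(commit_hash)

# snapshots/<hash> holds the cloned repo contents
shutil.copytree(cloned_repo, os.path.join(storage_folder, "snapshots", commit_hash))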

With that in place, running app.py again worked normally.
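
For reference, the loading call in app.py then ends up looking roughly like this (the cache_dir is my own path; adjust it to yours):

pipe_text = MVDreamPipeline.from_pretrained(
    'ashawkey/mvdream-sd2.1-diffusers',
    torch_dtype=torch.float16,
    trust_remote_code=True,
    local_files_only=True,
    cache_dir="/home/yjliao/huggingface",
)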

mcubes

When running this command:

python3 convert.py big --test_path workspace_test/saved.ply

it reported that the mcubes package was not installed, yet a plain pip install mcubes only produced

ERROR: Could not find a version that satisfies the requirement mcubes (from versions: none)
ERROR: No matching distribution found for mcubes

A Google search revealed that the package is actually called PyMCubes, so the correct command is

pip install PyMCubes
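
Note that PyMCubes is still imported as mcubes. A toy example (not from the repo) to confirm the install works:

import numpy as np
import mcubes  # installed via "pip install PyMCubes"

# Extract the 0-isosurface of a small sphere as a sanity check
x, y, z = np.mgrid[-1:1:32j, -1:1:32j, -1:1:32j]
u = x**2 + y**2 + z**2 - 0.5
vertices, triangles = mcubes.marching_cubes(u, 0)
print(vertices.shape, triangles.shape)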

Ninja RuntimeError

With the missing package installed, I tried running convert.py again, but it failed with

RuntimeError: Ninja is required to load C++ extensions

Searching for this error led to

python - Ninja is required to load C++ extensions - Stack Overflow

where it turned out that a simple pip install ninja fixes it.
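
To double-check that torch's extension builder can now find ninja, torch ships a small helper (is_ninja_available) that can be used like this:

from torch.utils.cpp_extension import is_ninja_available

# Should print True once ninja is installed and on PATH
print(is_ninja_available())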


Last updated: 2024-05-09
Created: 2024-05-09
