Overview
If you are using the official GGUF files, apply the changes below. Two files are modified in total; the edits only add compatibility and error handling and do not change any logic.
Modify: D:\Soft\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\loader.py
At line 13, add mistral3 to the text-encoder architecture list:
    TXT_ARCH_LIST = {"t5", "t5encoder", "llama", "qwen2vl", "qwen3", "qwen3vl", "gemma3", "mistral3"}

At line 482, add mistral3 to the architecture check:
    elif arch in {"llama", "qwen2vl", "qwen3", "qwen3vl", "gemma3", "mistral3"}:

Below line 503, add the following handling:
    if arch == "mistral3":
        if "tekken_model" in sd:
            sd["tekken_model"] = sd["tekken_model"].to(torch.uint8)
        elif "spiece_model" in sd:
            sd["spiece_model"] = sd["spiece_model"].to(torch.uint8)
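The branch above casts whichever embedded tokenizer blob is present (Tekken or SentencePiece) to uint8. A minimal runnable sketch of that dispatch pattern is shown below; FakeTensor is a hypothetical stand-in for torch.Tensor so the example runs without torch, and in the real loader.py the values in sd are torch tensors and .to(torch.uint8) performs the dtype cast:

```python
# Stand-in for torch.Tensor so this sketch runs without torch installed.
class FakeTensor:
    def __init__(self, data, dtype="int32"):
        self.data = list(data)
        self.dtype = dtype

    def to(self, dtype):
        # Mimics torch.Tensor.to(dtype): returns a new tensor with the target dtype.
        return FakeTensor(self.data, dtype)


UINT8 = "uint8"  # stand-in for torch.uint8


def normalize_tokenizer_blob(arch, sd):
    # Mirrors the mistral3 branch added to loader.py: cast whichever
    # tokenizer blob key exists to uint8 so downstream code can
    # reinterpret it as raw bytes.
    if arch == "mistral3":
        if "tekken_model" in sd:
            sd["tekken_model"] = sd["tekken_model"].to(UINT8)
        elif "spiece_model" in sd:
            sd["spiece_model"] = sd["spiece_model"].to(UINT8)
    return sd


sd = {"tekken_model": FakeTensor([1, 2, 3])}
sd = normalize_tokenizer_blob("mistral3", sd)
print(sd["tekken_model"].dtype)  # uint8
```

For any other arch value the state dict passes through untouched, which is why the patch is safe to apply unconditionally.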
Modify: D:\Soft\ComfyUI_windows_portable\ComfyUI\comfy\text_encoders\flux.py
At line 75, inside the load_mistral_tokenizer(data) function, add a None guard at the top:

    def load_mistral_tokenizer(data):
        if data is None:
            return {"tokenizer_object": None, "legacy": False}
