
Testing Meta's Open-Source Model Meta-Llama-3.1

Test environment

GPU: A100-SXM4-80GB

Preparation

Due to resource constraints, this test covers only meta-llama/Meta-Llama-3.1-8B-Instruct.
Official model card:
https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
The actual test run, however, hit the following error:

: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/

I therefore switched to the copy hosted by the ModelScope community:
https://modelscope.cn/models/LLM-Research/Meta-Llama-3.1-8B-Instruct
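
For reference, the 401 just means the repository is gated on Hugging Face: you have to request access on the model page and then authenticate with a personal access token. A minimal sketch, assuming access has already been granted (the token value below is a placeholder):

from huggingface_hub import login

# Authenticate with a Hugging Face access token (placeholder value);
# downloads from the gated meta-llama repo then go through as usual.
login(token="hf_xxx")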

Inference test with transformers

import transformers
import torch
from modelscope import snapshot_download

# Download the checkpoint from ModelScope and get its local snapshot path
model_id = snapshot_download("LLM-Research/Meta-Llama-3.1-8B-Instruct")

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},  # bf16 halves memory vs fp32
    device_map="auto",  # place the model on the available GPU(s) automatically
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
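
With a chat-style input, generated_text holds the whole conversation (the input messages with the model's answer appended), so the [-1] index picks out the assistant's reply as a dict with role and content keys.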

The first run failed with the following error:

    pipeline = transformers.pipeline(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 805, in pipeline
    config = AutoConfig.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 989, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/configuration_utils.py", line 772, in from_dict
    config = cls(**config_dict)
             ^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/configuration_llama.py", line 161, in __init__
    self._rope_scaling_validation()
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/configuration_llama.py", line 182, in _rope_scaling_validation
    raise ValueError(
ValueError: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
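
The cause: Llama 3.1 introduced a new llama3 RoPE-scaling scheme, while transformers releases before 4.43 validate rope_scaling against the legacy {type, factor} schema, hence the ValueError. The new schema is visible in the downloaded checkpoint's config.json; a quick inspection sketch, reusing the model_id snapshot path from the script above:

import json
import os

# model_id is the local snapshot directory returned by snapshot_download
with open(os.path.join(model_id, "config.json")) as f:
    config = json.load(f)
print(config["rope_scaling"])
# {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0,
#  'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}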

Upgrading transformers to 4.43 produced a different error:

    pipeline = transformers.pipeline(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 895, in pipeline
    framework, model = infer_framework_load_model(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/pipelines/base.py", line 283, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3775, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 1066, in __init__
    self.model = LlamaModel(config)
                 ^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 845, in __init__
    [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 845, in <listcomp>
    [LlamaDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 632, in __init__
    self.self_attn = LLAMA_ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 306, in __init__
    self.rotary_emb = LlamaRotaryEmbedding(config=self.config)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/anaconda3/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 110, in __init__
    self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling["type"])
                                                          ~~~~~~~~~~~~~~~~~~~^^^^^^^^
KeyError: 'type'
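
This one appears to be a bug in the 4.43.0 release itself rather than a version mismatch: dict.get evaluates its default argument eagerly, so the fallback lookup config.rope_scaling["type"] runs (and raises) even though "rope_type" is present in the dict. A minimal reproduction:

# dict.get evaluates its default eagerly, so the legacy "type" lookup
# raises even though "rope_type" exists in the dict.
rope_scaling = {"rope_type": "llama3", "factor": 8.0}
try:
    rope_scaling.get("rope_type", rope_scaling["type"])
except KeyError as e:
    print(f"KeyError: {e}")  # KeyError: 'type'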

Following this discussion:
https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct/discussions/7
upgrading transformers to 4.43.1 resolved the issue:

pip install transformers==4.43.1
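
To confirm the environment actually picked up the patched version:

import transformers
print(transformers.__version__)  # expect 4.43.1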