
Notes on Running SAM 2

Paper: SAM 2: Segment Anything in Images and Videos

1. Environment Setup

1.1. Setup for command-line use only

Create the environment

conda create -n sam2 python=3.10

Activate the environment

conda activate sam2

Install PyTorch (these are the CUDA 12.1 wheels; pick the index URL that matches your CUDA version)

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

Install SAM 2 in editable mode; run this from the root of the cloned repository (git clone https://github.com/facebookresearch/sam2.git)

pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
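
Download the model checkpoints. The repository ships a helper script for this; the snippet below assumes you are still at the repository root, and the weights land under ./checkpoints/:

cd checkpoints
./download_ckpts.sh
cd ..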

1.2. Setup for the remote interactive demo

On top of the steps above, install the notebook and demo extras (installing yarn via npm assumes Node.js is already available):

pip install -e ".[notebooks]" -i https://pypi.tuna.tsinghua.edu.cn/simple
npm install -g yarn
pip install -e '.[interactive-demo]' -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install imagesize -i https://pypi.tuna.tsinghua.edu.cn/simple

2. Running Locally
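
Below is a minimal single-image prediction sketch adapted from the repository README. The checkpoint path, config name, example image, and click coordinates are assumptions; adjust them to the model size you actually downloaded.

import numpy as np
import torch
from PIL import Image

from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed paths: the base_plus checkpoint fetched by download_ckpts.sh.
checkpoint = "./checkpoints/sam2.1_hiera_base_plus.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_b+.yaml"

# build_sam2 defaults to device="cuda", so this assumes a CUDA GPU.
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# Any RGB image; "example.jpg" is a placeholder.
image = np.array(Image.open("example.jpg").convert("RGB"))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # One positive click (label 1) at pixel (x=500, y=375).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
    )

print(masks.shape)  # (num_masks, H, W) candidate masks, ranked by scores

For videos, the README exposes an analogous workflow via build_sam2_video_predictor, init_state, add_new_points_or_box, and propagate_in_video.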

3. Running the Remote Interactive Demo

3.1. Run the backend

From the repository root, switch into the backend directory and launch gunicorn with the demo's environment variables. MODEL_SIZE selects the checkpoint size (tiny, small, base_plus, or large), and PYTORCH_ENABLE_MPS_FALLBACK=1 only matters when running on Apple silicon.

cd demo/backend/server/
PYTORCH_ENABLE_MPS_FALLBACK=1 \
APP_ROOT="$(pwd)/../../../" \
API_URL=http://localhost:7263 \
MODEL_SIZE=base_plus \
DATA_PATH="$(pwd)/../../data" \
DEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 \
gunicorn \
    --worker-class gthread app:app \
    --workers 1 \
    --threads 2 \
    --bind 0.0.0.0:7263 \
    --timeout 60
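
The backend listens on port 7263, matching the API_URL set above; keep this process running while the frontend is in use.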

3.2. Run the frontend

cd demo/frontend
yarn install
yarn dev --port 7262
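
With both processes running, open http://localhost:7262 in a browser to use the demo.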

References

facebookresearch/sam2 (GitHub): code for running inference with the Meta Segment Anything Model 2 (SAM 2), trained model checkpoints, and example notebooks. https://github.com/facebookresearch/sam2
