MinkowskiEngine is an open-source auto-differentiation library designed for high-dimensional sparse tensors. It provides convolution, pooling, and fully connected operations for deep learning, and targets sparse data in high-dimensional spaces, such as the data commonly encountered in point cloud processing and image analysis.
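As a quick illustration of the API, here is a minimal sketch (assuming MinkowskiEngine 0.5.x is already installed via one of the methods below; the coordinates and feature values are arbitrary):

import torch
import MinkowskiEngine as ME

# Four points in 3D with 4-channel features; coordinates are integers and the
# first column is the batch index (all zeros here, i.e. a single sample).
coords = torch.IntTensor([[0, 0, 0, 0], [0, 0, 0, 1], [0, 1, 0, 0], [0, 1, 1, 1]])
feats = torch.rand(4, 4)

x = ME.SparseTensor(features=feats, coordinates=coords)
conv = ME.MinkowskiConvolution(in_channels=4, out_channels=8, kernel_size=3, dimension=3)
y = conv(x)
print(y.F.shape)  # features of the output sparse tensor, here (4, 8)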
Previously, the CPU build of the Minkowski Engine could not be compiled on Windows because the Visual Studio compiler lacked support for some of the OpenMP directives it uses (for example #pragma omp atomic). Starting with Visual Studio 17.4 (November 2022), support for many of these directives was added, so the CPU version of the Minkowski Engine now compiles cleanly once the setup.py file is modified.
Method 1 (the more involved route)
Requirements:
• Visual Studio 2022 (17.4 or later)
• Python 3.8 or later
• PyTorch 1.13.1 with CUDA 11.7
The first step of the installation is to compile and install OpenBLAS for Windows using Visual Studio:
• https://www.openblas.net/
Then copy openblas.lib into the MinkowskiEngine folder.
The next step is to replace the setup.py file with the modified version provided at the end of this post. In addition, ninja and open3d must be installed (open3d is only needed for the segmentation examples).
The complete sequence of steps is therefore:
1. On Windows, compile and install OpenBLAS with Visual Studio (a detailed walkthrough is on my public account; a sketch of the build commands is also given right after this list).
2. Open an Anaconda prompt.
3. git clone https://github.com/NVIDIA/MinkowskiEngine.git
4. cd MinkowskiEngine
5. Copy the openblas.lib file produced by the OpenBLAS build into the MinkowskiEngine folder.
6. Replace the code in setup.py with the code listed at the end of this post.
7. conda create -n env_name python=3.10
8. conda activate env_name
9. conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
10. pip install ninja open3d
11. python setup.py install bdist_wheel --blas_include_dirs="include_blas_dir" --blas=openblas --cpu_only
    (replace include_blas_dir with the directory that contains the OpenBLAS header files)
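For reference, here is a minimal sketch of the OpenBLAS build in step 1 using the standard CMake workflow (the source folder name and the C:\opt\OpenBLAS install prefix are arbitrary examples; a plain MSVC build reportedly produces only the generic C kernels, which is sufficient for linking the CPU build):

cd OpenBLAS
cmake -S . -B build -G "Visual Studio 17 2022" -A x64
cmake --build build --config Release
cmake --install build --prefix C:\opt\OpenBLAS

The generated openblas.lib (typically under build\lib\Release) is the file to copy in step 5, and the installed include directory under the prefix is what --blas_include_dirs in step 11 should point to.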
Method 2: install the MinkowskiEngine library from a prebuilt wheel
Requirements:
• Python 3.10
• PyTorch 1.13.1 with CUDA 11.7 (CUDA 11.6 also works; my install with 11.6 succeeded)
• Visual Studio 2022 (17.4 or later)
The steps are:
1. conda create -n py3-mink-cu117 python=3.10
2. conda activate py3-mink-cu117
3. conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
   (for the CUDA 11.6 variant, see the note after this list)
4. Download the prebuilt package (if the link will not download directly in a browser, use a download manager such as Thunder):
   https://github.com/NVIDIA/MinkowskiEngine/files/10931944/MinkowskiEngine-0.5.4-py3.10-win-amd64.zip
5. The download is a zip archive; extract it into a folder.
6. In the Anaconda prompt, cd into the extracted directory and run
   pip install MinkowskiEngine-0.5.4-cp310-cp310-win_amd64.whl
   which completes the installation.
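Since the requirements note that CUDA 11.6 also works, the corresponding conda command should simply swap pytorch-cuda=11.7 for pytorch-cuda=11.6 (an assumption based on the conda packages published for PyTorch 1.13.1):

conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

Either way, a quick smoke test of the finished installation is to import the package and print its version; a clean import that prints 0.5.4 means the wheel is installed correctly:

python -c "import MinkowskiEngine as ME; print(ME.__version__)"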
The code for the setup.py file is as follows:
r"""
Parse additional arguments along with the setup.py arguments such as install, build, distribute, sdist, etc.
Usage:
python setup.py install <additional_flags>..<additional_flags> <additional_arg>=<value>..<additional_arg>=<value>
export CC=<C++ compiler>; python setup.py install <additional_flags>..<additional_flags> <additional_arg>=<value>..<additional_arg>=<value>
Examples:
python setup.py install --force_cuda --cuda_home=/usr/local/cuda
export CC=g++7; python setup.py install --force_cuda --cuda_home=/usr/local/cuda
Additional flags:
--cpu_only: Force building only a CPU version. However, if
torch.cuda.is_available() is False, it will default to CPU_ONLY.
--force_cuda: If torch.cuda.is_available() is false, but you have a working
nvcc, compile cuda files. --force_cuda will supercede --cpu_only.
Additional arguments:
--blas=<value> : type of blas library to use for CPU matrix multiplications.
Options: [openblas, mkl, atlas, blas]. By default, it will use the first
numpy blas library it finds.
--cuda_home=<value> : a directory that contains <value>/bin/nvcc and
<value>/lib64/libcudart.so. By default, use
`torch.utils.cpp_extension._find_cuda_home()`.
--blas_include_dirs=<comma_separated_values> : additional include dirs. Only
activated when --blas=<value> is set.
--blas_library_dirs=<comma_separated_values> : additional library dirs. Only
activated when --blas=<value> is set.
"""
import sys
if sys.version_info < (3, 6):
    sys.stdout.write(
        "Minkowski Engine requires Python 3.6 or higher. Please use anaconda https://www.anaconda.com/distribution/ for an isolated python environment.\n"
    )
    sys.exit(1)
try:
    import torch
except ImportError:
    raise ImportError("Pytorch not found. Please install pytorch first.")
import codecs
import os
import re
import subprocess
import shutil
import warnings
from pathlib import Path
from sys import argv, platform
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension
here = os.path.abspath(os.path.dirname(__file__))
def read(*parts):
    with codecs.open(os.path.join(here, *parts), "r") as fp:
        return fp.read()


def find_version(*file_paths):
    version_file = read(*file_paths)
    version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
    if version_match:
        return version_match.group(1)
    raise RuntimeError("Unable to find version string.")


def run_command(*args):
    try:
        subprocess.run(args, check=True)
    except subprocess.CalledProcessError as e:
        print(f"Error: {e}")
        exit(1)


def remove_build_dir():
    try:
        shutil.rmtree("build")
    except FileNotFoundError:
        pass


def uninstall_package(package_name):
    run_command("pip", "uninstall", package_name, "-y")


def _argparse(pattern, argv, is_flag=True, is_list=False):
    if is_flag:
        found = pattern in argv
        if found:
            argv.remove(pattern)
        return found, argv
    else:
        arr = [arg for arg in argv if pattern == arg.split("=")[0]]
        if is_list:
            if len(arr) == 0:  # not found
                return False, argv
            else:
                assert "=" in arr[0], f"{arr[0]} requires a value."
                argv.remove(arr[0])
                val = arr[0].split("=")[1]
                if "," in val:
                    return val.split(","), argv
                else:
                    return [val], argv
        else:
            if len(arr) == 0:  # not found
                return False, argv
            else:
                assert "=" in arr[0], f"{arr[0]} requires a value."
                argv.remove(arr[0])
                return arr[0].split("=")[1], argv
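# Example of how the helper above is used: running
#   python setup.py install --cpu_only --blas=openblas
# yields CPU_ONLY=True and BLAS="openblas" below, and both arguments are
# removed from argv before setuptools parses the remaining command line.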
remove_build_dir()
uninstall_package("MinkowskiEngine")
# For cpu only build
CPU_ONLY, argv = _argparse("--cpu_only", argv)
FORCE_CUDA, argv = _argparse("--force_cuda", argv)
if not torch.cuda.is_available() and not FORCE_CUDA:
    warnings.warn(
        "torch.cuda.is_available() is False. MinkowskiEngine will compile with CPU_ONLY. Please use `--force_cuda` to compile with CUDA."
    )
CPU_ONLY = CPU_ONLY or not torch.cuda.is_available()
if FORCE_CUDA:
    CPU_ONLY = False
# args with return value
CUDA_HOME, argv = _argparse("--cuda_home", argv, False)
BLAS, argv = _argparse("--blas", argv, False)
BLAS_INCLUDE_DIRS, argv = _argparse("--blas_include_dirs", argv, False, is_list=True)
BLAS_LIBRARY_DIRS, argv = _argparse("--blas_library_dirs", argv, False, is_list=True)
MAX_COMPILATION_THREADS = 12
Extension = CUDAExtension
extra_link_args = []
include_dirs = []
libraries = []
CC_FLAGS = []
NVCC_FLAGS = []
if CPU_ONLY:
    print("--------------------------------")
    print("| WARNING: CPU_ONLY build set  |")
    print("--------------------------------")
    Extension = CppExtension
else:
    print("--------------------------------")
    print("| CUDA compilation set         |")
    print("--------------------------------")
    # system python installation
    libraries.append("cusparse")
if not (CUDA_HOME is False):  # False when not set, str otherwise
    print(f"Using CUDA_HOME={CUDA_HOME}")
if sys.platform == "win32":
    vc_version = os.getenv("VCToolsVersion", "")
    if vc_version.startswith("14.16."):
        CC_FLAGS += ["/sdl"]
    else:
        CC_FLAGS += ["/permissive-", "/openmp:llvm", "/std:c++17"]
else:
    CC_FLAGS += ["-fopenmp"]
if "darwin" in platform:
    CC_FLAGS += ["-stdlib=libc++", "-std=c++17"]
NVCC_FLAGS += ["--expt-relaxed-constexpr", "--expt-extended-lambda"]
FAST_MATH, argv = _argparse("--fast_math", argv)
if FAST_MATH:
    NVCC_FLAGS.append("--use_fast_math")
BLAS_LIST = ["flexiblas", "openblas", "mkl", "atlas", "blas"]
if not (BLAS is False):  # False only when not set, str otherwise
    assert BLAS in BLAS_LIST, f"Blas option {BLAS} not in valid options {BLAS_LIST}"
    if BLAS == "mkl":
        libraries.append("mkl_rt")
        CC_FLAGS.append("-DUSE_MKL")
        NVCC_FLAGS.append("-DUSE_MKL")
    else:
        libraries.append(BLAS)
    if not (BLAS_INCLUDE_DIRS is False):
        include_dirs += BLAS_INCLUDE_DIRS
    if not (BLAS_LIBRARY_DIRS is False):
        extra_link_args += [f"-Wl,-rpath,{BLAS_LIBRARY_DIRS}"]
else:
    # find the default BLAS library
    import numpy.distutils.system_info as sysinfo

    # Search blas in this order
    for blas in BLAS_LIST:
        if "libraries" in sysinfo.get_info(blas):
            BLAS = blas
            libraries += sysinfo.get_info(blas)["libraries"]
            break
    else:
        # BLAS not found
        raise ImportError(
            ' \
\nBLAS not found from numpy.distutils.system_info.get_info. \
\nPlease specify BLAS with: python setup.py install --blas=openblas" \
\nfor more information, please visit https://github.com/NVIDIA/MinkowskiEngine/wiki/Installation'
        )
# The Ninja cannot compile the files that have the same name with different
# extensions correctly and uses the nvcc/CC based on the extension. Import a
# .cpp file to the corresponding .cu file to force the nvcc compilation.
SOURCE_SETS = {
    "cpu": [
        CppExtension,
        [
            "math_functions_cpu.cpp",
            "coordinate_map_manager.cpp",
            "convolution_cpu.cpp",
            "convolution_transpose_cpu.cpp",
            "local_pooling_cpu.cpp",
            "local_pooling_transpose_cpu.cpp",
            "global_pooling_cpu.cpp",
            "broadcast_cpu.cpp",
            "pruning_cpu.cpp",
            "interpolation_cpu.cpp",
            "quantization.cpp",
            "direct_max_pool.cpp",
        ],
        ["pybind/minkowski.cpp"],
        ["-DCPU_ONLY"],
    ],
    "gpu": [
        CUDAExtension,
        [
            "math_functions_cpu.cpp",
            "math_functions_gpu.cu",
            "coordinate_map_manager.cu",
            "coordinate_map_gpu.cu",
            "convolution_kernel.cu",
            "convolution_gpu.cu",
            "convolution_transpose_gpu.cu",
            "pooling_avg_kernel.cu",
            "pooling_max_kernel.cu",
            "local_pooling_gpu.cu",
            "local_pooling_transpose_gpu.cu",
            "global_pooling_gpu.cu",
            "broadcast_kernel.cu",
            "broadcast_gpu.cu",
            "pruning_gpu.cu",
            "interpolation_gpu.cu",
            "spmm.cu",
            "gpu.cu",
            "quantization.cpp",
            "direct_max_pool.cpp",
        ],
        ["pybind/minkowski.cu"],
        [],
    ],
}
debug, argv = _argparse("--debug", argv)
HERE = Path(os.path.dirname(__file__)).absolute()
SRC_PATH = HERE / "src"
if "CC" in os.environ or "CXX" in os.environ:
# distutils only checks CC not CXX
if "CXX" in os.environ:
os.environ["CC"] = os.environ["CXX"]
CC = os.environ["CXX"]
else:
CC = os.environ["CC"]
print(f"Using {CC} for c++ compilation")
if torch.__version__ < "1.7.0":
NVCC_FLAGS += [f"-ccbin={CC}"]
else:
print("Using the default compiler")
if debug:
CC_FLAGS += ["-g", "-DDEBUG"]
NVCC_FLAGS += ["-g", "-DDEBUG"]
else:
CC_FLAGS += []
NVCC_FLAGS += []
if "MAX_JOBS" not in os.environ and os.cpu_count() > MAX_COMPILATION_THREADS:
# Clip the num compilation thread to 8
os.environ["MAX_JOBS"] = str(MAX_COMPILATION_THREADS)
target = "cpu" if CPU_ONLY else "gpu"
Extension = SOURCE_SETS[target][0]
SRC_FILES = SOURCE_SETS[target][1]
BIND_FILES = SOURCE_SETS[target][2]
ARGS = SOURCE_SETS[target][3]
CC_FLAGS += ARGS
NVCC_FLAGS += ARGS
ext_modules = [
    Extension(
        name="MinkowskiEngineBackend._C",
        sources=[*[str(SRC_PATH / src_file) for src_file in SRC_FILES], *BIND_FILES],
        extra_compile_args={"cxx": CC_FLAGS, "nvcc": NVCC_FLAGS},
        libraries=libraries,
    ),
]
# Python interface
setup(
    name="MinkowskiEngine",
    version=find_version("MinkowskiEngine", "__init__.py"),
    install_requires=["torch", "numpy"],
    packages=["MinkowskiEngine", "MinkowskiEngine.utils", "MinkowskiEngine.modules"],
    package_dir={"MinkowskiEngine": "./MinkowskiEngine"},
    ext_modules=ext_modules,
    include_dirs=[str(SRC_PATH), str(SRC_PATH / "3rdparty"), *include_dirs],
    cmdclass={"build_ext": BuildExtension.with_options(use_ninja=True)},
    author="Christopher Choy",
    author_email="[email protected]",
    description="a convolutional neural network library for sparse tensors",
    long_description=read("README.md"),
    long_description_content_type="text/markdown",
    url="https://github.com/NVIDIA/MinkowskiEngine",
    keywords=[
        "pytorch",
        "Minkowski Engine",
        "Sparse Tensor",
        "Convolutional Neural Networks",
        "3D Vision",
        "Deep Learning",
    ],
    zip_safe=False,
    classifiers=[
        # https://pypi.org/classifiers/
        "Environment :: Console",
        "Development Status :: 3 - Alpha",
        "Intended Audience :: Developers",
        "Intended Audience :: Other Audience",
        "Intended Audience :: Science/Research",
        "License :: OSI Approved :: MIT License",
        "Natural Language :: English",
        "Programming Language :: C++",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Topic :: Multimedia :: Graphics",
        "Topic :: Scientific/Engineering",
        "Topic :: Scientific/Engineering :: Artificial Intelligence",
        "Topic :: Scientific/Engineering :: Mathematics",
        "Topic :: Scientific/Engineering :: Physics",
        "Topic :: Scientific/Engineering :: Visualization",
    ],
    python_requires=">=3.6",
)
Finally, a quick plug for my WeChat official account; I would appreciate a follow.
WeChat official account: 可乐加冰有点凉