# AIStudio Image Hub Dockerfile Collection

This page collects Dockerfile examples for use with the AIStudio Image Hub. You can use them as references to quickly build image environments for AI development, deployment, and experimentation. Each Dockerfile includes common dependencies and configuration, so it can be used directly on the AIStudio platform or customized as needed.
## Xinference

Build an Xinference image that contains only the Transformers engine with the following Dockerfile:
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install --no-cache-dir "xinference[transformers]" sentence-transformers
```
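After the build, you can smoke-test the image by starting a local Xinference server. A minimal sketch, assuming the image is tagged `xinference:transformers` (the tag and GPU flags are illustrative; on AIStudio the platform may manage GPU access for you):

```bash
# Build the image from the Dockerfile above (the tag is an example)
docker build -t xinference:transformers .

# Start a local Xinference server; 9997 is its default port
docker run --gpus all -p 9997:9997 xinference:transformers \
    xinference-local --host 0.0.0.0 --port 9997
```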
To serve models with vLLM, a high-performance inference engine with high-concurrency support, build the image with the following Dockerfile (it also installs the flashinfer kernels built for CUDA 12.1 / torch 2.4):
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
# Install necessary Python packages with no cache to reduce the image size.
RUN python3 -m pip install --no-cache-dir "xinference[vllm]" \
    && python3 -m pip install --no-cache-dir flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
```
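Once the Xinference server is running inside a container built from this image, a model can be launched on the vLLM engine. A hedged sketch (the model name and flags are illustrative and follow recent Xinference releases, so they may vary):

```bash
# Start the server in the background, then launch a model on vLLM
xinference-local --host 0.0.0.0 --port 9997 &
xinference launch --model-name qwen2-instruct \
    --model-engine vllm --size-in-billions 7
```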
## LLaMA Factory

To build your own LLaMA Factory image, use the following example Dockerfile:
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
# Set environment variables
# DEBIAN_FRONTEND=noninteractive suppresses interactive prompts during installation
# PATH: add the CUDA executable directories
# LD_LIBRARY_PATH: add the CUDA library directories
# CUDA_HOME: set the CUDA root directory
# USE_MODELSCOPE_HUB: download models from the ModelScope hub
ENV DEBIAN_FRONTEND=noninteractive \
PATH=/usr/local/cuda/bin:/usr/local/cuda-12.2/bin${PATH:+:${PATH}} \
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} \
CUDA_HOME=/usr/local/cuda \
USE_MODELSCOPE_HUB=1
# Install CUDA and configure the system environment
# 1. Update apt and install the required tools
# 2. Clean the apt cache to reduce image size
# 3. Download the CUDA installer
# 4. Silently install the CUDA toolkit (do NOT install the driver)
# 5. Add the library directories to the dynamic linker search path
# 6. Refresh the dynamic linker cache
# 7. Remove the installer
RUN apt-get update && apt-get install -y wget build-essential \
&& rm -rf /var/lib/apt/lists/* \
&& wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run \
&& sh cuda_12.2.2_535.104.05_linux.run --toolkit --silent --override \
&& echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf \
&& echo '/usr/local/cuda-12.2/lib64' >> /etc/ld.so.conf \
&& ldconfig \
&& rm cuda_12.2.2_535.104.05_linux.run
# Install PyTorch, LLaMA-Factory, ModelScope, Tensorboard, etc.
# 1. Install the pinned PyTorch version
# 2. Install Tensorboard
# 3. Clone the LLaMA-Factory repository (--depth 1 fetches only the latest revision to keep it small)
# 4. Change into the LLaMA-Factory directory
# 5. Install LLaMA-Factory in editable mode with the torch, metrics, and modelscope extras
RUN pip install --no-cache-dir torch==2.4.0 \
&& pip install --no-cache-dir tensorboard \
&& git clone --depth 1 https://gh-proxy.com/https://github.com/hiyouga/LLaMA-Factory.git \
&& cd LLaMA-Factory \
    && pip install --no-cache-dir -e ".[torch,metrics,modelscope]"
```
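To verify the build, the LLaMA-Factory CLI can be invoked directly. A minimal sketch, assuming the image is tagged `llama-factory:latest` (the tag and port mapping are illustrative):

```bash
docker build -t llama-factory:latest .

# Print the installed LLaMA-Factory version
docker run --rm llama-factory:latest llamafactory-cli version

# Start the web UI (Gradio listens on port 7860 by default)
docker run --gpus all -p 7860:7860 llama-factory:latest \
    llamafactory-cli webui
```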
## SGLang

SGLang requires a system-level CUDA installation in the environment. The Dockerfile below installs the CUDA 12.4.1 toolkit and cuDNN on top of the Ubuntu 22.04 base image; SGLang itself can then be installed with pip (see the sketch after the Dockerfile).
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
# Set environment variables
# PATH: add the CUDA executable directories
# LD_LIBRARY_PATH: add the CUDA and system library directories
# CUDA_HOME: set the CUDA root directory
# CPATH / LIBRARY_PATH: add the system include and library directories
# (DEBIAN_FRONTEND=noninteractive is set inline in the RUN below to suppress interactive prompts)
ENV PATH=/usr/local/cuda/bin:/usr/local/cuda-12.4/bin${PATH:+:${PATH}} \
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-12.4/lib64:/usr/lib/x86_64-linux-gnu${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} \
CUDA_HOME=/usr/local/cuda \
CPATH=/usr/include${CPATH:+:${CPATH}} \
LIBRARY_PATH=/usr/lib/x86_64-linux-gnu${LIBRARY_PATH:+:${LIBRARY_PATH}}
# Install CUDA and configure the system environment
# 1. Update apt and install the required tools
# 2. Clean the apt cache to reduce image size
# 3. Download the CUDA installer
# 4. Silently install the CUDA toolkit (do NOT install the driver)
# 5. Add the library directories to the dynamic linker search path
# 6. Refresh the dynamic linker cache
# 7. Remove the installer
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y wget build-essential \
&& rm -rf /var/lib/apt/lists/* \
&& wget https://developer.download.nvidia.com/compute/cuda/12.4.1/local_installers/cuda_12.4.1_550.54.15_linux.run \
&& sh cuda_12.4.1_550.54.15_linux.run --toolkit --silent --override \
&& echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf \
&& echo '/usr/local/cuda-12.4/lib64' >> /etc/ld.so.conf \
&& ldconfig \
&& rm cuda_12.4.1_550.54.15_linux.run
# Install cuDNN and configure the environment
# 1. Download and install the NVIDIA CUDA repository keyring
# 2. Update the apt package index
# 3. Install cuDNN
# 4. Add the extra library directory to the linker search path
# 5. Refresh the dynamic linker cache
# 6. Remove the keyring package and clean the apt cache
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& apt-get update \
&& apt-get install -y cudnn-cuda-12 \
&& echo '/usr/lib/x86_64-linux-gnu' >> /etc/ld.so.conf \
&& ldconfig \
&& rm cuda-keyring_1.1-1_all.deb \
    && rm -rf /var/lib/apt/lists/*
```
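The Dockerfile above only prepares the system-level CUDA and cuDNN environment; SGLang itself still needs to be installed. A hedged sketch of installing and launching SGLang inside the resulting image (the extras, module path, and flags follow the upstream SGLang docs and may change between releases):

```bash
# Install SGLang with its optional dependencies
pip install --no-cache-dir "sglang[all]"

# Launch the SGLang server (the model path is an example)
python3 -m sglang.launch_server \
    --model-path Qwen/Qwen2-7B-Instruct --port 30000
```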
## One API

The following multi-stage Dockerfile builds one-api from source:
```dockerfile
FROM cr.infini-ai.com/te-b905754427352261/node:22 AS builder
# Clone the one-api repository
WORKDIR /app
RUN git clone https://ghfast.top/https://github.com/songquanpeng/one-api.git .
RUN echo "registry=https://registry.npmmirror.com" > .npmrc
WORKDIR /app/web
WORKDIR /app/web/default
RUN npm install
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ../../VERSION) npm run build
WORKDIR /app/web/berry
RUN npm install
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ../../VERSION) npm run build
WORKDIR /app/web/air
RUN npm install
RUN DISABLE_ESLINT_PLUGIN='true' REACT_APP_VERSION=$(cat ../../VERSION) npm run build
FROM cr.infini-ai.com/te-b905754427352261/golang:alpine AS builder2
RUN apk add --no-cache g++ git
ENV GO111MODULE=on \
CGO_ENABLED=1 \
GOOS=linux \
GOPROXY=https://goproxy.cn,direct
WORKDIR /build
# Copy the build artifacts from the previous stage
COPY --from=builder /app/go.mod /app/go.sum ./
RUN go mod download
COPY --from=builder /app/ .
COPY --from=builder /app/web/build ./web/build
RUN go build -trimpath -ldflags "-s -w -X 'github.com/songquanpeng/one-api/common.Version=$(cat VERSION)' -extldflags '-static'" -o one-api
FROM cr.infini-ai.com/te-b905754427352261/alpine:latest
RUN apk update \
&& apk upgrade \
&& apk add --no-cache ca-certificates tzdata \
&& update-ca-certificates 2>/dev/null || true
COPY --from=builder2 /build/one-api /
EXPOSE 3000
WORKDIR /data
CMD ["/bin/sh"]
```
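Note that the final stage ends with `CMD ["/bin/sh"]` instead of starting the service, so the one-api binary must be launched explicitly. A minimal run sketch (the tag and data path are illustrative; one-api keeps its SQLite database under /data by default):

```bash
docker build -t one-api:latest .

# Start one-api on port 3000 and persist its data directory
docker run -d -p 3000:3000 -v $(pwd)/data:/data \
    one-api:latest /one-api --port 3000
```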
## CUDA 12.2.2

The following example Dockerfile builds a CUDA 12.2.2 image on top of the Ubuntu 22.04 base image.
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
# Set environment variables
# PATH: add the CUDA executable directories
# LD_LIBRARY_PATH: add the CUDA library directories
# CUDA_HOME: set the CUDA root directory
# (DEBIAN_FRONTEND=noninteractive is set inline in the RUN below to suppress interactive prompts)
ENV PATH=/usr/local/cuda/bin:/usr/local/cuda-12.2/bin${PATH:+:${PATH}} \
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} \
CUDA_HOME=/usr/local/cuda
# Install CUDA and configure the system environment
# 1. Update apt and install the required tools
# 2. Clean the apt cache to reduce image size
# 3. Download the CUDA installer
# 4. Silently install the CUDA toolkit (do NOT install the driver)
# 5. Add the library directories to the dynamic linker search path
# 6. Refresh the dynamic linker cache
# 7. Remove the installer
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y wget build-essential \
&& rm -rf /var/lib/apt/lists/* \
&& wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run \
&& sh cuda_12.2.2_535.104.05_linux.run --toolkit --silent --override \
&& echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf \
&& echo '/usr/local/cuda-12.2/lib64' >> /etc/ld.so.conf \
&& ldconfig \
    && rm cuda_12.2.2_535.104.05_linux.run
```
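A quick sanity check after building: `nvcc` should resolve through the PATH configured above and report the 12.2 toolkit (the tag is illustrative):

```bash
docker build -t cuda:12.2.2 .
docker run --rm cuda:12.2.2 nvcc --version
# Expect output ending in something like:
# Cuda compilation tools, release 12.2, V12.2.140
```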
## CUDA 12.2.2 + cuDNN 9.x

The following example Dockerfile installs CUDA 12.2.2 and the latest cuDNN release on top of the Ubuntu 22.04 base image.
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
# Set environment variables
# PATH: add the CUDA executable directories
# LD_LIBRARY_PATH: add the CUDA and system library directories
# CUDA_HOME: set the CUDA root directory
# CPATH / LIBRARY_PATH: add the system include and library directories
# (DEBIAN_FRONTEND=noninteractive is set inline in the RUN below to suppress interactive prompts)
ENV PATH=/usr/local/cuda/bin:/usr/local/cuda-12.2/bin${PATH:+:${PATH}} \
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-12.2/lib64:/usr/lib/x86_64-linux-gnu${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} \
CUDA_HOME=/usr/local/cuda \
CPATH=/usr/include${CPATH:+:${CPATH}} \
LIBRARY_PATH=/usr/lib/x86_64-linux-gnu${LIBRARY_PATH:+:${LIBRARY_PATH}}
# Install CUDA and configure the system environment
# 1. Update apt and install the required tools
# 2. Clean the apt cache to reduce image size
# 3. Download the CUDA installer
# 4. Silently install the CUDA toolkit (do NOT install the driver)
# 5. Add the library directories to the dynamic linker search path
# 6. Refresh the dynamic linker cache
# 7. Remove the installer
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y wget build-essential \
&& rm -rf /var/lib/apt/lists/* \
&& wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run \
&& sh cuda_12.2.2_535.104.05_linux.run --toolkit --silent --override \
&& echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf \
&& echo '/usr/local/cuda-12.2/lib64' >> /etc/ld.so.conf \
&& ldconfig \
&& rm cuda_12.2.2_535.104.05_linux.run
# Install cuDNN and configure the environment
# 1. Download and install the NVIDIA CUDA repository keyring
# 2. Update the apt package index
# 3. Install cuDNN
# 4. Add the extra library directory to the linker search path
# 5. Refresh the dynamic linker cache
# 6. Remove the keyring package and clean the apt cache
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& apt-get update \
&& apt-get install -y cudnn-cuda-12 \
&& echo '/usr/lib/x86_64-linux-gnu' >> /etc/ld.so.conf \
&& ldconfig \
&& rm cuda-keyring_1.1-1_all.deb \
    && rm -rf /var/lib/apt/lists/*
```
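To see which cuDNN release the `cudnn-cuda-12` meta-package pulled in, query dpkg and the linker cache inside the image (the tag is illustrative):

```bash
docker build -t cuda:12.2.2-cudnn9 .
docker run --rm cuda:12.2.2-cudnn9 sh -c \
    'dpkg -l | grep -i cudnn && ldconfig -p | grep libcudnn'
```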
## CUDA 12.2.2 + cuDNN 8.x

The following example Dockerfile installs CUDA 12.2.2 and cuDNN 8.9.7.29 on top of the Ubuntu 22.04 base image.
```dockerfile
FROM cr.infini-ai.com/infini-ai/ubuntu:22.04-20240429
# Set environment variables
# PATH: add the CUDA executable directories
# LD_LIBRARY_PATH: add the CUDA and system library directories
# CUDA_HOME: set the CUDA root directory
# CPATH / LIBRARY_PATH: add the system include and library directories
# (DEBIAN_FRONTEND=noninteractive is set inline in the RUN below to suppress interactive prompts)
ENV PATH=/usr/local/cuda/bin:/usr/local/cuda-12.2/bin${PATH:+:${PATH}} \
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda-12.2/lib64:/usr/lib/x86_64-linux-gnu${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} \
CUDA_HOME=/usr/local/cuda \
CPATH=/usr/include${CPATH:+:${CPATH}} \
LIBRARY_PATH=/usr/lib/x86_64-linux-gnu${LIBRARY_PATH:+:${LIBRARY_PATH}}
# Install CUDA and configure the system environment
# 1. Update apt and install the required tools
# 2. Clean the apt cache to reduce image size
# 3. Download the CUDA installer
# 4. Silently install the CUDA toolkit (do NOT install the driver)
# 5. Add the library directories to the dynamic linker search path
# 6. Refresh the dynamic linker cache
# 7. Remove the installer
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y wget build-essential \
&& rm -rf /var/lib/apt/lists/* \
&& wget https://developer.download.nvidia.com/compute/cuda/12.2.2/local_installers/cuda_12.2.2_535.104.05_linux.run \
&& sh cuda_12.2.2_535.104.05_linux.run --toolkit --silent --override \
&& echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf \
&& echo '/usr/local/cuda-12.2/lib64' >> /etc/ld.so.conf \
&& ldconfig \
&& rm cuda_12.2.2_535.104.05_linux.run
# Install cuDNN via Tar File Installation; see https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-897/install-guide/index.html#installlinux
# 1. Clone NVIDIA's official redistrib JSON parsing tool
# 2. Download the historical cuDNN release (8.9.7.29)
# 3. Copy the cuDNN headers and libraries into the CUDA tree
# 4. Refresh the dynamic linker cache
# 5. Clean up the downloaded files
RUN git clone https://ghfast.top/https://github.com/NVIDIA/build-system-archive-import-examples.git \
&& echo "Git clone completed successfully" \
&& cd build-system-archive-import-examples \
&& python3 -u ./parse_redist.py --product cudnn --label 8.9.7.29 --os linux --arch x86_64 \
&& ls -la flat/linux-x86_64/cuda12* \
&& test -d flat/linux-x86_64/cuda12/include && test -d flat/linux-x86_64/cuda12/lib \
&& cp flat/linux-x86_64/cuda12/include/cudnn*.h /usr/local/cuda/include \
&& echo "Copied header files" \
&& cp -P flat/linux-x86_64/cuda12/lib/libcudnn* /usr/local/cuda/lib64 \
&& echo "Copied library files" \
&& chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn* \
&& ldconfig \
&& cd / \
    && rm -rf build-system-archive-import-examples
```
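Since this method copies the files straight into /usr/local/cuda, the installed version can be read back from cudnn_version.h. A quick check inside the image (the tag is illustrative):

```bash
docker build -t cuda:12.2.2-cudnn8 .
docker run --rm cuda:12.2.2-cudnn8 \
    grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn_version.h
# Expect CUDNN_MAJOR 8, CUDNN_MINOR 9, CUDNN_PATCHLEVEL 7
```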
> **Note**: Downloading cuDNN 1.x-8.x releases from the cuDNN Archive requires an NVIDIA developer account, and the archive only provides download links that require authentication, which is awkward to handle inside a Dockerfile. If you need cuDNN 8.x or earlier, you can follow the Dockerfile above and install cuDNN via the Tar File Installation method. For other versions, see the cuDNN redist JSON.