May 10, 2026 · 3 min read · ~665 words

Large File Upload on the Frontend: Chunking, Resumable Upload, and Concurrency Control

#frontend #large-file-upload #chunking #resumable-upload

Chunking large files and resuming interrupted uploads on the frontend

// 5 MB per chunk
const CHUNK_SIZE = 5 * 1024 * 1024

function createChunks(file: File) {
    // File.prototype.slice clamps the end index, so the
    // last chunk is simply whatever bytes remain
    const chunks: Blob[] = []

    let cur = 0

    while (cur < file.size) {
        chunks.push(
            file.slice(cur, cur + CHUNK_SIZE)
        )

        cur += CHUNK_SIZE
    }

    return chunks
}
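For example, a 12 MB file splits into three slices; `Blob.slice` clamps an end index that runs past the file size, so the last chunk is simply the 2 MB remainder (a self-contained sketch using a Blob in place of a user-selected File):

```typescript
const CHUNK_SIZE = 5 * 1024 * 1024

// A 12 MB Blob stands in for a user-selected File (File extends Blob)
const fakeFile = new Blob([new Uint8Array(12 * 1024 * 1024)])

// slice() clamps an end offset past the file size instead of throwing
const lastChunk = fakeFile.slice(2 * CHUNK_SIZE, 3 * CHUNK_SIZE)

console.log(Math.ceil(fakeFile.size / CHUNK_SIZE)) // 3
console.log(lastChunk.size)                        // 2097152 (the 2 MB remainder)
```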

Why do we need a hash/MD5?

  1. To verify file integrity after the chunks are merged
  2. To avoid uploading a file the server already has

How does the frontend compute the hash?

npm i spark-md5
import SparkMD5 from 'spark-md5'

async function calculateHash(file: File): Promise<string> {
  const spark = new SparkMD5.ArrayBuffer()

  // Feed the hasher one slice at a time so a large file
  // never has to sit in memory as a single ArrayBuffer
  let cur = 0

  while (cur < file.size) {
    const buffer = await file.slice(cur, cur + CHUNK_SIZE).arrayBuffer()

    spark.append(buffer)

    cur += CHUNK_SIZE
  }

  return spark.end()
}

Uploading a chunk

import axios from 'axios'

// one request per chunk; the hash and index let the
// server group the pieces and merge them in order
const formData = new FormData()

formData.append('chunk', chunk)
formData.append('hash', fileHash)
formData.append('index', index.toString())

await axios.post('/upload-chunk', formData)

A simple concurrency pool

async function asyncPool<T>(limit: number, tasks: (() => Promise<T>)[]): Promise<T[]> {
  const pool: Promise<void>[] = []   // tasks currently in flight
  const results: Promise<T>[] = []   // all tasks, in input order

  for (const task of tasks) {
    const p = task()

    results.push(p)

    // only throttle when there are more tasks than the limit
    if (limit <= tasks.length) {
      // e removes itself from the pool once its task settles
      const e: Promise<void> = p.then(() => {
        pool.splice(pool.indexOf(e), 1)
      })

      pool.push(e)

      if (pool.length >= limit) {
        // wait for the fastest in-flight task before starting the next
        await Promise.race(pool)
      }
    }
  }

  return Promise.all(results)
}
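A quick sanity check of the pool (asyncPool is repeated here so the snippet runs on its own; the timer tasks are stand-ins for real chunk uploads): with limit = 2, no more than two tasks are ever in flight, and results still come back in input order.

```typescript
async function asyncPool<T>(limit: number, tasks: (() => Promise<T>)[]): Promise<T[]> {
  const pool: Promise<void>[] = []
  const results: Promise<T>[] = []
  for (const task of tasks) {
    const p = task()
    results.push(p)
    if (limit <= tasks.length) {
      const e: Promise<void> = p.then(() => {
        pool.splice(pool.indexOf(e), 1)
      })
      pool.push(e)
      if (pool.length >= limit) await Promise.race(pool)
    }
  }
  return Promise.all(results)
}

// Count how many tasks are in flight to verify the limit holds
let running = 0
let peak = 0

const tasks = Array.from({ length: 6 }, (_, i) => () => {
  running += 1
  peak = Math.max(peak, running)
  return new Promise<number>((resolve) =>
    setTimeout(() => {
      running -= 1
      resolve(i)
    }, 10)
  )
})

const done = asyncPool(2, tasks)

done.then((results) => {
  console.log(results) // [0, 1, 2, 3, 4, 5]: input order is preserved
  console.log(peak)    // 2: never more than `limit` tasks at once
})
```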

The core of resumable upload

POST /check-file

Send: the file hash

The server responds:

{
  "uploadedList": [0, 1, 2, 3]
}

Frontend: skip the chunks that have already been uploaded
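The skip step can be written as a pure function (`pendingChunks` is a name I'm introducing; the `uploadedList` shape matches the response above). Each pending chunk must keep its original index so the server can still merge the file in order:

```typescript
// Keep only the chunks the server does not have yet; each pending
// chunk carries its ORIGINAL index so the server can merge in order
function pendingChunks<T>(
  chunks: T[],
  uploadedList: number[]
): { chunk: T; index: number }[] {
  const uploaded = new Set(uploadedList)
  return chunks
    .map((chunk, index) => ({ chunk, index }))
    .filter(({ index }) => !uploaded.has(index))
}

// 6 chunks total, the server already has 0-3
const todo = pendingChunks(['c0', 'c1', 'c2', 'c3', 'c4', 'c5'], [0, 1, 2, 3])

console.log(todo.map((t) => t.index)) // [ 4, 5 ]
```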

How is upload progress calculated?

Read loaded and total from axios's onUploadProgress callback
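For a chunked upload, each onUploadProgress callback only describes one chunk, so overall progress has to aggregate across chunks. A minimal sketch (`createProgressTracker` is a hypothetical helper, not a library API): store the latest loaded byte count per chunk and sum against the total file size.

```typescript
// Overall progress across all chunks: each onUploadProgress callback
// reports loaded bytes for ONE chunk; keep the latest value per chunk
function createProgressTracker(fileSize: number) {
  const loadedPerChunk = new Map<number, number>()
  return {
    // call from the chunk's onUploadProgress handler
    update(index: number, loaded: number) {
      loadedPerChunk.set(index, loaded)
    },
    // overall progress, 0-100, rounded
    percent(): number {
      let loaded = 0
      for (const v of loadedPerChunk.values()) loaded += v
      return Math.min(100, Math.round((loaded / fileSize) * 100))
    },
  }
}

const tracker = createProgressTracker(10 * 1024 * 1024) // 10 MB file
tracker.update(0, 5 * 1024 * 1024)   // chunk 0 fully sent
tracker.update(1, 2.5 * 1024 * 1024) // chunk 1 halfway

console.log(tracker.percent()) // 75
```

In real code, `update` would be wired into each chunk's request as `onUploadProgress: (e) => tracker.update(index, e.loaded)`.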

Which libraries are used in modern projects?

simple-uploader.js / ali-oss

import OSS from 'ali-oss'

const client = new OSS({
  region: 'oss-cn-hangzhou',
  accessKeyId: 'xxx',      // use STS temporary credentials in real frontends,
  accessKeySecret: 'xxx',  // never ship long-lived keys to the browser
  bucket: 'test',
})

// multipartUpload handles chunking and concurrency internally
await client.multipartUpload(
  file.name,
  file,
  {
    parallel: 4,            // concurrent part uploads
    partSize: 1024 * 1024,  // 1 MB per part
    progress(p) {
      console.log(p)        // overall progress, 0 to 1
    },
  }
)
