Simple multipart downloading and uploading suite in Node.js.
For learning purposes only. Don't use it in a production environment, although it contains several unit-test cases.
- middlewares
  - express
  - koa
- client library
  - base impl
  - uploading lifecycle hooks
- unit-tests
  - express-middleware
  - koa-middleware
  - sync/async store
  - client
- uploading
  - serial uploading
  - parallel uploading
  - second pass
- downloading
  - break-point resume
- examples
The module exports two route factory functions. Use the returned router instance as a standard Express or Koa router. The module also provides a simple client library.
Create an Express or Koa router instance with the following options:
- `store`: persistence layer of chunk meta info
- `chunkSize`: the limit of chunk size
- `chunkFolder`: the target folder for chunks
- `assetsFolder`: the target folder for assets
- `secondPass`: whether the server supports the secondPass feature
- `onUploaded`: event hook fired when an asset was uploaded successfully
You can also pass any other options accepted by `express.Router()` and `koa-router`.
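For illustration, here is a minimal sketch of mounting the Express middleware. The export names `expressDnuRouter` and `MemoryStore`, the package name `dnu`, and the `onUploaded` callback signature are assumptions; check the module's actual exports before copying.

```ts
import express from 'express'
// hypothetical export and package names -- adjust to the real ones
import { expressDnuRouter, MemoryStore } from 'dnu'

const app = express()

app.use('/dnu', expressDnuRouter({
  store: new MemoryStore(),       // persistence layer of chunk meta info
  chunkSize: 1024 * 1024,         // limit each chunk to 1 MB
  chunkFolder: './tmp/chunks',    // where uploaded chunks are written
  assetsFolder: './tmp/assets',   // where assembled assets are written
  secondPass: true,               // enable the secondPass feature
  onUploaded: () => console.log('asset uploaded')
}))

app.listen(3000)
```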
Create a client instance with the following options:
- `chunkSize`: the chunk size to split the asset into
- `fetch`: the request instance, default is `window.fetch`
- `uuid`: the uuid factory for identifying the asset and its chunks; normally it should be the `md5` of the asset content
- `host`: the server host
- `prefix`: the API endpoint prefix, default is `/dnu`
- `onSecondPass`: event hook fired when the asset was second-passed
- `onChunkUploaded`: event hook fired when a chunk was uploaded
- `onStart`: lifecycle hook fired when the uploading task starts
- `onSuccess`: lifecycle hook fired when the uploading task succeeds
- `onError`: lifecycle hook fired when the uploading task stops due to an error
- `onEnd`: lifecycle hook fired when the uploading task ends (whether it succeeded or errored)
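As a sketch only, assuming a constructor named `DnuClient` exported from a package named `dnu` (and `spark-md5` as the uuid factory), client creation could look like this; the hook signatures shown are illustrative:

```ts
import SparkMD5 from 'spark-md5'
// hypothetical export and package names -- adjust to the real ones
import { DnuClient } from 'dnu'

const client = new DnuClient({
  chunkSize: 1024 * 1024,                                    // split the asset into 1 MB chunks
  fetch: window.fetch.bind(window),                          // request instance (the default)
  uuid: (ab: ArrayBuffer) => SparkMD5.ArrayBuffer.hash(ab),  // md5 of the asset content
  host: 'http://localhost:3000',
  prefix: '/dnu',                                            // API endpoint prefix (the default)
  onSecondPass: () => console.log('asset already on server, upload skipped'),
  onChunkUploaded: () => console.log('chunk uploaded'),
  onStart: () => console.log('upload started'),
  onSuccess: () => console.log('upload succeeded'),
  onError: (err: Error) => console.error('upload failed', err),
  onEnd: () => console.log('upload ended')
})
```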
For working example code, please refer to the `/examples` folder.
- `filename`: the name of the asset
- `ab`: the `ArrayBuffer` of the asset
- `options`
  - `override`: whether to reset an existing uploading task with the same uuid
  - `concurrency`: the max number of uploading tasks in parallel mode
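Continuing the hypothetical client sketch above, an upload call with these parameters might look like the following; the method name `upload` is an assumption based on the documented parameters:

```ts
async function uploadFromInput (input: HTMLInputElement) {
  const file = input.files![0]
  const ab = await file.arrayBuffer()  // the ArrayBuffer of the asset

  await client.upload(file.name, ab, {
    override: false,   // keep an existing uploading task with the same uuid
    concurrency: 3     // at most 3 chunk uploads in flight in parallel mode
  })
}
```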
The abstract class for the chunk meta info persistence layer is as follows:
```ts
export abstract class DnuStore<M = ChunkMeta> {
  abstract get (uuid: string): M | undefined | PromiseLike<M | undefined>
  abstract set (uuid: string, meta: M): any
  abstract delete (uuid: string): boolean | PromiseLike<boolean>
  abstract exist (uuid: string): boolean | PromiseLike<boolean>

  isChunkMeta (meta: any): boolean | PromiseLike<boolean> {
    return meta ? 'cur' in meta && 'total' in meta : false
  }
}
```
The default is the memory store. It is easy to implement an async store; refer to the json store for an example, or use any other store module as long as it conforms to the `DnuStore` abstract class.
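For example, a custom async store could look like the following sketch, which keeps metadata in a `Map` but resolves everything through Promises, the same shape a Redis- or file-backed store would take (the import path is hypothetical):

```ts
import { DnuStore, ChunkMeta } from 'dnu'  // hypothetical import path

class AsyncMapStore extends DnuStore<ChunkMeta> {
  private map = new Map<string, ChunkMeta>()

  async get (uuid: string): Promise<ChunkMeta | undefined> {
    return this.map.get(uuid)
  }

  async set (uuid: string, meta: ChunkMeta): Promise<void> {
    this.map.set(uuid, meta)
  }

  async delete (uuid: string): Promise<boolean> {
    return this.map.delete(uuid)
  }

  async exist (uuid: string): Promise<boolean> {
    return this.map.has(uuid)
  }
}
```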
The meta info of a chunk:
- `cur`: in serial mode, the index of the current uploading task; in parallel mode, the total number of tasks already done
- `total`: the total number of chunks
- `filename`: the filename of the asset
- `done`: whether the current asset has been uploaded
- `serial`: the upload mode; `true` means serial mode, `false` means parallel mode
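Put together, the meta object has roughly this shape (a sketch based on the fields above; the exact exported type may differ):

```ts
interface ChunkMeta {
  cur: number       // serial mode: index of the current chunk; parallel mode: count of finished chunks
  total: number     // total number of chunks
  filename: string  // filename of the asset
  done: boolean     // whether the asset has been fully uploaded
  serial: boolean   // true = serial mode, false = parallel mode
}
```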
Refer to `maidfile.md` or use `maid help`.