nodejs / undici
- Thursday, February 18, 2021 at 00:28:34
JavaScript
An HTTP/1.1 client, written from scratch for Node.js.
Undici means eleven in Italian. 1.1 -> 11 -> Eleven -> Undici. It is also a Stranger Things reference.
npm i undici
Machine: AMD EPYC 7502P
Node 15
http - keepalive x 12,028 ops/sec ±2.60% (265 runs sampled)
undici - pipeline x 31,321 ops/sec ±0.77% (276 runs sampled)
undici - request x 36,612 ops/sec ±0.71% (277 runs sampled)
undici - stream x 41,291 ops/sec ±0.90% (268 runs sampled)
undici - dispatch x 47,319 ops/sec ±1.17% (263 runs sampled)
The benchmark is a simple hello world example using a
single unix socket with pipelining.
import { request } from 'undici'
const {
statusCode,
headers,
trailers,
body
} = await request('http://localhost:3000/foo')
console.log('response received', statusCode)
console.log('headers', headers)
for await (const data of body) {
console.log('data', data)
}
console.log('trailers', trailers)
new undici.Client(url, opts)
A basic HTTP/1.1 client, mapped on top of a single TCP/TLS connection. Pipelining is disabled by default.
Requests are not guaranteed to be dispatched in order of invocation.
url can be a string or a URL object.
It should only include the protocol, hostname, and port.
Options:
socketPath: String|Null, an IPC endpoint, either Unix domain socket or Windows named pipe.
Default: null.
keepAliveTimeout: Number, the timeout after which a socket without active requests
will time out. Monitors time between activity on a connected socket.
This value may be overridden by keep-alive hints from the server.
Default: 4e3 milliseconds (4s).
keepAliveMaxTimeout: Number, the maximum allowed keepAliveTimeout when overridden by
keep-alive hints from the server.
Default: 600e3 milliseconds (10min).
keepAliveTimeoutThreshold: Number, a number subtracted from server keep-alive hints
when overriding keepAliveTimeout to account for timing inaccuracies caused by e.g.
transport latency.
Default: 1e3 milliseconds (1s).
headersTimeout: Number, the timeout after which a request will time out, in
milliseconds. Monitors time between receiving complete headers.
Use 0 to disable it entirely. Default: 30e3 milliseconds (30s).
bodyTimeout: Number, the timeout after which a request will time out, in
milliseconds. Monitors time between receiving body data.
Use 0 to disable it entirely. Default: 30e3 milliseconds (30s).
pipelining: Number, the amount of concurrent requests to be sent over the
single TCP/TLS connection according to RFC7230.
Carefully consider your workload and environment before enabling concurrent requests
as pipelining may reduce performance if used incorrectly. Pipelining is sensitive
to network stack settings as well as head of line blocking caused by e.g. long running requests.
Set to 0 to disable keep-alive connections.
Default: 1.
tls: Object|Null, an options object which in the case of https will be passed to
tls.connect.
Default: null.
maxHeaderSize: Number, the maximum length of request headers in bytes.
Default: 16384 (16KiB).
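None of the later examples show these options in use, so here is a minimal sketch of constructing a client with a few of them; the origin and every value below are placeholders, not recommendations:
const { Client } = require('undici')

// All values are illustrative; tune them for your own workload.
const client = new Client('http://localhost:3000', {
  pipelining: 10,          // allow up to 10 pipelined requests per connection
  keepAliveTimeout: 10e3,  // keep an idle socket open for 10s
  headersTimeout: 30e3,    // abort if complete headers are not received within 30s
  bodyTimeout: 30e3        // abort if no body data is received within 30s
})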
client.request(opts[, callback(err, data)]): Promise|Void
Performs an HTTP request.
Options:
path: String
method: String
opaque: Any
body: String|Buffer|Uint8Array|stream.Readable|Null
Default: null.
headers: Object|Array|Null, an object with header-value pairs, or an array with header-value pairs bi-indexed (['header1', 'value1', 'header2', 'value2']).
Default: null.
signal: AbortSignal|EventEmitter|Null
Default: null.
idempotent: Boolean, whether the requests can be safely retried or not. If false the request won't be sent until all preceding requests in the pipeline have completed.
Default: true if method is HEAD or GET.
Headers are represented by an object like this:
{
'content-length': '123',
'content-type': 'text/plain',
connection: 'keep-alive',
host: 'mysite.com',
accept: '*/*'
}
Or an array like this:
[
'content-length', '123',
'content-type', 'text/plain',
'connection', 'keep-alive',
'host', 'mysite.com',
'accept', '*/*'
]
Keys are lowercased. Values are not modified.
If you don't specify a host header, it will be derived from the url of the client instance.
The data parameter in callback is defined as follows:
statusCode: Number
opaque: Any
headers: Object, an object where all keys have been lowercased.
trailers: Object, an object where all keys have been lowercased. This object starts out empty and will be mutated to contain trailers after body has emitted 'end'.
body: stream.Readable, the response payload. A user must either fully consume or destroy the body unless there is an error, or no further requests will be processed.
Returns a promise if no callback is provided.
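Because a promise is returned when the callback is omitted, the request can also be awaited; a minimal sketch assuming a server on localhost:3000:
const { Client } = require('undici')
const client = new Client('http://localhost:3000')

async function run () {
  const { statusCode, headers, body } = await client.request({
    path: '/',
    method: 'GET'
  })
  console.log('response received', statusCode)
  console.log('headers', headers)
  // Fully consume the body so further requests can be processed.
  for await (const chunk of body) {
    console.log('data', chunk.toString())
  }
  await client.close()
}

run().catch(console.error)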
Example:
const { Client } = require('undici')
const client = new Client(`http://localhost:3000`)
client.request({
path: '/',
method: 'GET'
}, function (err, data) {
if (err) {
// handle this in some way!
return
}
const {
statusCode,
headers,
trailers,
body
} = data
console.log('response received', statusCode)
console.log('headers', headers)
body.setEncoding('utf8')
body.on('data', console.log)
body.on('end', () => {
console.log('trailers', trailers)
})
client.close()
})
Non-idempotent requests will not be pipelined in order to avoid indirect failures.
Idempotent requests will be automatically retried if they fail due to indirect failure from the request at the head of the pipeline. This does not apply to idempotent requests with a stream request body.
A request can be aborted using either an AbortController or an EventEmitter.
To use AbortController in Node.js versions earlier than 15, you will need to
install a shim - npm i abort-controller.
const { Client } = require('undici')
const client = new Client('http://localhost:3000')
const abortController = new AbortController()
client.request({
path: '/',
method: 'GET',
signal: abortController.signal
}, function (err, data) {
console.log(err) // RequestAbortedError
client.close()
})
abortController.abort()
Alternatively, any EventEmitter that emits an 'abort' event may be used as an abort controller:
const EventEmitter = require('events')
const { Client } = require('undici')
const client = new Client('http://localhost:3000')
const ee = new EventEmitter()
client.request({
path: '/',
method: 'GET',
signal: ee
}, function (err, data) {
console.log(err) // RequestAbortedError
client.close()
})
ee.emit('abort')
Destroying the request or response body will have the same effect.
client.stream(opts, factory(data)[, callback(err)]): Promise|Void
A faster version of request.
Unlike request this method expects factory
to return a Writable which the response will be
written to. This improves performance by avoiding
creating an intermediate Readable when the user
expects to directly pipe the response body to a
Writable.
Options:
Same as client.request(opts[, callback]).
The data parameter in factory is defined as follows:
statusCode: Number
headers: Object, an object where all keys have been lowercased.
opaque: Any
The data parameter in callback is defined as follows:
opaque: Any
trailers: Object, an object where all keys have been lowercased.
Returns a promise if no callback is provided.
const { Client } = require('undici')
const client = new Client(`http://localhost:3000`)
const fs = require('fs')
const filename = 'response.raw' // placeholder destination; `filename` was assumed to exist
client.stream({
path: '/',
method: 'GET',
opaque: filename
}, ({ statusCode, headers, opaque: filename }) => {
console.log('response received', statusCode)
console.log('headers', headers)
return fs.createWriteStream(filename)
}, (err) => {
if (err) {
console.error('failure', err)
} else {
console.log('success')
}
})
opaque makes it possible to avoid creating a closure
for the factory method:
function (req, res) {
return client.stream({ ...opts, opaque: res }, proxy)
}
Instead of:
function (req, res) {
return client.stream(opts, (data) => {
// Creates closure to capture `res`.
proxy({ ...data, opaque: res })
  })
}
client.pipeline(opts, handler(data)): Duplex
For easy use with stream.pipeline.
Options:
Same as client.request(opts[, callback]).
objectMode: Boolean, true if the handler will return an object stream.
Default: false.
The data parameter in handler is defined as follows:
statusCode: Number
headers: Object, an object where all keys have been lowercased.
opaque: Any
body: stream.Readable, the response payload. A user must either fully consume or destroy the body unless there is an error, or no further requests will be processed.
handler should return a Readable from which the result will be
read. Usually it should just return the body argument unless
some kind of transformation needs to be performed based on e.g.
headers or statusCode.
The handler should validate the response and save any
required state. If there is an error it should be thrown.
Returns a Duplex which writes to the request and reads from
the response.
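As a hedged sketch of the objectMode option, the handler below buffers a JSON response and the resulting Duplex emits a single parsed object; the endpoint and payload are assumptions:
const { Client } = require('undici')
const { pipeline, Readable, Writable } = require('stream')
const client = new Client('http://localhost:3000')

pipeline(
  Readable.from(['{"hello":"world"}']), // assumed request payload
  client.pipeline({
    path: '/echo',      // hypothetical endpoint that answers with JSON
    method: 'POST',
    objectMode: true    // the read side of the returned Duplex emits objects
  }, ({ statusCode, body }) => {
    if (statusCode !== 200) {
      throw new Error('invalid response')
    }
    // Collect the payload and emit one parsed object.
    return Readable.from((async function * () {
      let text = ''
      for await (const chunk of body) {
        text += chunk
      }
      yield JSON.parse(text)
    })())
  }),
  new Writable({
    objectMode: true,
    write (obj, _encoding, callback) {
      console.log('parsed object', obj)
      callback()
    }
  }),
  (err) => {
    if (err) {
      console.error('failed', err)
    }
    client.close()
  }
)
The next example pipes a file through a PUT request and unzips the response when needed: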
const { Client } = require('undici')
const client = new Client(`http://localhost:3000`)
const fs = require('fs')
const stream = require('stream')
stream.pipeline(
fs.createReadStream('source.raw'),
client.pipeline({
path: '/',
method: 'PUT',
}, ({ statusCode, headers, body }) => {
if (statusCode !== 201) {
throw new Error('invalid response')
}
if (isZipped(headers)) {
return stream.pipeline(body, unzip(), () => {})
}
return body
}),
fs.createWriteStream('response.raw'),
(err) => {
if (err) {
console.error('failed')
} else {
console.log('succeeded')
}
}
)
client.upgrade(opts[, callback(err, data)]): Promise|Void
Upgrade to a different protocol.
Options:
path: String
opaque: Any
method: String
Default: GET.
headers: Object|Null, an object with header-value pairs.
Default: null.
signal: AbortSignal|EventEmitter|Null.
Default: null.
protocol: String, a string of comma-separated protocols, in descending preference order.
Default: Websocket.
The data parameter in callback is defined as follows:
headers: Object, an object where all keys have been lowercased.
socket: Duplex
opaque: Any
Returns a promise if no callback is provided.
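A minimal hedged sketch of an upgrade request; the server, path, and whatever is spoken over the socket afterwards are all assumptions:
const { Client } = require('undici')
const client = new Client('http://localhost:3000')

client.upgrade({
  path: '/',
  method: 'GET'
}, (err, data) => {
  if (err) {
    console.error('upgrade failed', err)
    return
  }
  const { headers, socket } = data
  console.log('upgraded', headers)
  // socket is a Duplex; the upgraded protocol is spoken over it directly.
  socket.end()
  client.close()
})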
client.connect(opts[, callback(err, data)]): Promise|Void
Starts two-way communications with the requested resource.
Options:
path: String
opaque: Any
headers: Object|Null, an object with header-value pairs.
Default: null.
signal: AbortSignal|EventEmitter|Null.
Default: null.
The data parameter in callback is defined as follows:
statusCode: Number
headers: Object, an object where all keys have been lowercased.
socket: Duplex
opaque: Any
Returns a promise if no callback is provided.
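Similarly, a hedged sketch of connect establishing a two-way byte stream; the target and the bytes written are placeholders:
const { Client } = require('undici')
const client = new Client('http://localhost:3000')

client.connect({
  path: '/'
}, (err, data) => {
  if (err) {
    console.error('connect failed', err)
    return
  }
  const { statusCode, socket } = data
  console.log('tunnel established', statusCode)
  socket.on('data', (chunk) => console.log('received', chunk.toString()))
  socket.write('ping') // placeholder payload
  socket.end()
})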
client.dispatch(opts, handler): Void
This is the low-level API which all the preceding APIs are implemented on top of.
This API is expected to evolve through semver-major versions and is less stable than the preceding higher level APIs. It is primarily intended for library developers who implement higher level APIs on top of this.
Multiple handler methods may be invoked in the same tick.
Options:
path: String
method: String
body: String|Buffer|Uint8Array|stream.Readable|Null
Default: null.
headers: Object|Null, an object with header-value pairs.
Default: null.
idempotent: Boolean, whether the requests can be safely retried or not. If false the request won't be sent until all preceding requests in the pipeline have completed.
Default: true if method is HEAD or GET.
The handler parameter is defined as follows:
onConnect(abort), invoked before request is dispatched on socket. May be invoked multiple times when a request is retried after the request at the head of the pipeline fails.
abort(): Void, abort request.
onUpgrade(statusCode, headers, socket): Void, invoked when request is upgraded either due to an Upgrade header or CONNECT method.
statusCode: Number
headers: Array|Null
socket: Duplex
onHeaders(statusCode, headers, resume): Boolean, invoked when statusCode and headers have been received. May be invoked multiple times due to 1xx informational headers.
statusCode: Number
headers: Array|Null, an array of key-value pairs. Keys are not automatically lowercased.
resume(): Void, resume onData after returning false.
onData(chunk): Boolean, invoked when response payload data is received.
chunk: Buffer
onComplete(trailers): Void, invoked when response payload and trailers have been received and the request has completed.
trailers: Array|Null
onError(err): Void, invoked when an error has occurred.
err: Error
The caller is responsible for handling the body argument, in terms of 'error' events and destroy()ing, up until
the onConnect handler has been invoked.
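To make the handler shape concrete, a minimal hedged sketch that buffers the response body; error handling is intentionally thin:
const { Client } = require('undici')
const client = new Client('http://localhost:3000')
const chunks = []

client.dispatch({
  path: '/',
  method: 'GET',
  headers: { accept: '*/*' }
}, {
  onConnect (abort) {
    // Calling abort() here would cancel the request before it is dispatched.
  },
  onHeaders (statusCode, headers, resume) {
    console.log('status', statusCode)
    return true // keep data flowing; returning false pauses until resume()
  },
  onData (chunk) {
    chunks.push(chunk)
    return true
  },
  onComplete (trailers) {
    console.log('body', Buffer.concat(chunks).toString())
    client.close(() => {})
  },
  onError (err) {
    console.error('request failed', err)
  }
})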
client.close([callback]): Promise|Void
Closes the client and gracefully waits for enqueued requests to complete before invoking the callback.
Returns a promise if no callback is provided.
client.destroy([err][, callback]): Promise|Void
Destroys the client abruptly with the given err. All the pending and running
requests will be asynchronously aborted and will error. Waits until the socket is closed
before invoking the callback. Since this operation is asynchronously dispatched
there might still be some progress on dispatched requests.
Returns a promise if no callback is provided.
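A short hedged sketch contrasting the two shutdown paths:
const { Client } = require('undici')

async function shutdown () {
  const graceful = new Client('http://localhost:3000')
  // close(): resolves once already enqueued requests have completed.
  await graceful.close()

  const abrupt = new Client('http://localhost:3000')
  // destroy(): pending and running requests are aborted and error with this reason.
  await abrupt.destroy(new Error('shutting down'))
}

shutdown().catch(console.error)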
client.url: URL
Returns the url passed to the constructor.
client.pipelining: Number
Property to get and set the pipelining factor.
client.pending: Number
Number of queued requests.
client.running: Number
Number of inflight requests.
client.size: Number
Number of pending and running requests.
client.connected: Number
Truthy if the client has an active connection. The client will lazily
create a connection when it receives a request and will destroy it
if there is no activity for the duration of the timeout value.
client.busy: Boolean
True if pipeline is saturated or blocked. Indicates whether dispatching further requests is meaningful.
client.closed: Boolean
True after client.close() has been called.
client.destroyed: Boolean
True after client.destroy() has been called or client.close() has been
called and the client shutdown has completed.
Events:
'drain', emitted when pipeline is no longer fully
saturated.
'connect', emitted when a socket has been created and
connected. The first argument is the Client instance.
The client will connect once client.size > 0.
'disconnect', emitted when socket has disconnected. The
first argument of the event is the error which caused the
socket to disconnect. The second argument is the
Client instance. The client will reconnect if or once
client.size > 0.
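A hedged sketch of observing the connection lifecycle through these events; the origin is a placeholder:
const { Client } = require('undici')
const client = new Client('http://localhost:3000')

client.on('connect', (clientInstance) => {
  // First argument is the Client instance.
  console.log('socket connected for', clientInstance.url.origin)
})

client.on('disconnect', (err, clientInstance) => {
  // First argument is the error that caused the disconnect, second the Client.
  console.log('socket disconnected', err && err.message)
})

client.on('drain', () => {
  console.log('pipeline no longer saturated; safe to dispatch more requests')
})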
new undici.Pool(url, opts)
A pool of Client instances connected to the same upstream target.
Implements the same API as Client.
Requests are not guaranteed to be dispatched in order of invocation.
Options:
Same as Client options.
connections, the number of clients to create.
Default: 10.
new undici.Agent(opts)
opts: undici.Pool options - options passed through to the Pool constructor.
Returns: Agent
Returns a new Agent instance for use with pool-based requests or the following top-level methods request, pipeline, and stream.
agent.get(origin): Pool
origin: string - a pool origin to be retrieved from the Agent.
This method retrieves Pool instances from the Agent. If the pool does not exist it is automatically added. You do not need to manually close these pools as they are automatically removed using a WeakCache based on WeakRef and FinalizationRegistry.
The following methods request, pipeline, and stream utilize this feature.
agent.close(): Promise
Returns a Promise.all operation closing all of the pool instances in the Agent instance. This calls pool.close under the hood.
agent.destroy(): Promise
Returns a Promise.all operation destroying all of the pool instances in the Agent instance. This calls pool.destroy under the hood.
undici.setGlobalAgent(agent)
agent: Agent
Sets the global agent used by the request, pipeline, and stream methods.
The default global agent creates undici.Pools with no max number of
connections.
The agent must only implement the Agent API; it does not need to extend from it.
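A hedged sketch of installing a custom agent; the connection count is only a placeholder:
const { Agent, setGlobalAgent, request } = require('undici')

// Limit each origin's pool to 5 connections instead of the unbounded default.
setGlobalAgent(new Agent({ connections: 5 }))

// Subsequent top-level calls (request, pipeline, stream) go through this agent.
request('http://localhost:3000/')
  .then(({ statusCode, body }) => {
    console.log('status', statusCode)
    body.resume() // drain the body so the connection can be reused
  })
  .catch(console.error)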
undici.request(url[, opts]): Promise
url: string | URL | object
opts: { agent: Agent } & client.request.opts
url may contain a path. opts may not contain a path. opts.method is GET by default.
Calls pool.request(opts) on the pool returned from either the globalAgent (see setGlobalAgent) or the agent passed to the opts argument.
Returns a promise with the result of the request method.
undici.stream(url, opts, factory): Promise
url: string | URL | object
opts: { agent: Agent } & client.stream.opts
factory: client.stream.factory
url may contain a path. opts may not contain a path.
See client.stream for details on the opts and factory arguments.
Calls pool.stream(opts, factory) on the pool returned from either the globalAgent (see setGlobalAgent) or the agent passed to the opts argument.
Result is returned in the factory function. See client.stream for more details.
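A hedged sketch of the top-level stream helper writing a response straight to disk; the URL and file name are placeholders:
const { stream } = require('undici')
const fs = require('fs')

stream('http://localhost:3000/foo', {
  method: 'GET',
  opaque: { filename: 'response.raw' } // hypothetical destination file
}, ({ statusCode, opaque: { filename } }) => {
  console.log('status', statusCode)
  return fs.createWriteStream(filename)
}).then(() => {
  console.log('response written')
}).catch(console.error)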
undici.pipeline(url, opts, handler): Duplex
url: string | URL | object
opts: { agent: Agent } & client.pipeline.opts
handler: client.pipeline.handler
url may contain a path. opts may not contain a path.
See client.pipeline for details on the opts and handler arguments.
Calls pool.pipeline(opts, factory) on the pool returned from either the globalAgent (see setGlobalAgent) or the agent passed to the opts argument.
See client.pipeline for more details.
undici.upgrade(url, opts[, callback(err, data)]): Promise|Void
url: string | URL | object
opts: { agent: Agent } & client.upgrade.opts
url may contain a path. opts may not contain a path.
undici.connect(url, opts[, callback(err, data)]): Promise|Void
url: string | URL | object
opts: { agent: Agent } & client.connect.opts
url may contain a path. opts may not contain a path.
undici.errors
Undici exposes a variety of error objects that you can use to enhance your error handling.
You can find all the error objects inside the errors key.
const { errors } = require('undici')
| Error | Error Codes | Description |
|---|---|---|
| InvalidArgumentError | UND_ERR_INVALID_ARG | passed an invalid argument. |
| InvalidReturnValueError | UND_ERR_INVALID_RETURN_VALUE | returned an invalid value. |
| RequestAbortedError | UND_ERR_ABORTED | the request has been aborted by the user |
| ClientDestroyedError | UND_ERR_DESTROYED | trying to use a destroyed client. |
| ClientClosedError | UND_ERR_CLOSED | trying to use a closed client. |
| SocketError | UND_ERR_SOCKET | there is an error with the socket. |
| NotSupportedError | UND_ERR_NOT_SUPPORTED | encountered unsupported functionality. |
| ContentLengthMismatchError | UND_ERR_CONTENT_LENGTH_MISMATCH | body does not match content-length header |
| InformationalError | UND_ERR_INFO | expected error with reason |
| TrailerMismatchError | UND_ERR_TRAILER_MISMATCH | trailers did not match specification |
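A hedged sketch of branching on these error classes and codes:
const { Client, errors } = require('undici')
const client = new Client('http://localhost:3000')

client.request({ path: '/', method: 'GET' }, (err, data) => {
  if (err) {
    if (err instanceof errors.RequestAbortedError) {
      // UND_ERR_ABORTED: the request was aborted by the user.
      console.log('request aborted')
    } else if (err.code === 'UND_ERR_SOCKET') {
      console.log('socket level failure', err)
    } else {
      console.error('unexpected error', err)
    }
    return
  }
  data.body.resume() // drain the body so further requests can be processed
})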
This section documents parts of the HTTP/1.1 specification which Undici does not support or does not fully implement.
Undici does not support the Expect request header field. The request
body is always immediately sent and the 100 Continue response will be
ignored.
Refs: https://tools.ietf.org/html/rfc7231#section-5.1.1
Undici will only use pipelining if configured with a pipelining factor
greater than 1.
Undici always assumes that connections are persistent and will immediately pipeline requests, without checking whether the connection is persistent. Hence, automatic fallback to HTTP/1.0 or HTTP/1.1 without pipelining is not supported.
Undici will immediately pipeline when retrying requests after a failed connection. However, Undici will not retry the first remaining requests in the prior pipeline and instead error the corresponding callback/promise/stream.
Refs: https://tools.ietf.org/html/rfc2616#section-8.1.2.2
Refs: https://tools.ietf.org/html/rfc7230#section-6.3.2
MIT