Last commit July 5th

This commit is contained in:
2024-07-05 13:46:23 +02:00
parent dad0d86e8c
commit b0e4dfbb76
24982 changed files with 2621219 additions and 413 deletions

20
spa/node_modules/@npmcli/arborist/LICENSE.md generated vendored Normal file

@@ -0,0 +1,20 @@
<!-- This file is automatically added by @npmcli/template-oss. Do not edit. -->
ISC License
Copyright npm, Inc.
Permission to use, copy, modify, and/or distribute this
software for any purpose with or without fee is hereby
granted, provided that the above copyright notice and this
permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND NPM DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO
EVENT SHALL NPM BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.

349
spa/node_modules/@npmcli/arborist/README.md generated vendored Normal file

@@ -0,0 +1,349 @@
# @npmcli/arborist
[![npm version](https://img.shields.io/npm/v/@npmcli/arborist.svg)](https://npm.im/@npmcli/arborist)
[![license](https://img.shields.io/npm/l/@npmcli/arborist.svg)](https://npm.im/@npmcli/arborist)
[![CI - @npmcli/arborist](https://github.com/npm/cli/actions/workflows/ci-npmcli-arborist.yml/badge.svg)](https://github.com/npm/cli/actions/workflows/ci-npmcli-arborist.yml)
Inspect and manage `node_modules` trees.
![a tree with the word ARBORIST superimposed on it](https://raw.githubusercontent.com/npm/arborist/main/docs/logo.svg?sanitize=true)
There's more documentation [in the docs
folder](https://github.com/npm/cli/tree/latest/workspaces/arborist/docs).
## USAGE
```js
const Arborist = require('@npmcli/arborist')
const arb = new Arborist({
// options object
// where we're doing stuff. defaults to cwd.
path: '/path/to/package/root',
// url to the default registry. defaults to npm's default registry
registry: 'https://registry.npmjs.org',
// scopes can be mapped to a different registry
'@foo:registry': 'https://registry.foo.com/',
// Auth can be provided in a couple of different ways. If none are
// provided, then requests are anonymous, and private packages will 404.
// Arborist doesn't do anything with these, it just passes them down
// the chain to pacote and npm-registry-fetch.
// Safest: a bearer token provided by a registry:
// 1. an npm auth token, used with the default registry
token: 'deadbeefcafebad',
// 2. an alias for the same thing:
_authToken: 'deadbeefcafebad',
// insecure options:
// 3. basic auth, username:password, base64 encoded
auth: 'aXNhYWNzOm5vdCBteSByZWFsIHBhc3N3b3Jk',
// 4. username and base64 encoded password
username: 'isaacs',
password: 'bm90IG15IHJlYWwgcGFzc3dvcmQ=',
// auth configs can also be scoped to a given registry with this
// rather unusual pattern:
'//registry.foo.com:token': 'blahblahblah',
'//basic.auth.only.foo.com:_auth': 'aXNhYWNzOm5vdCBteSByZWFsIHBhc3N3b3Jk',
'//registry.foo.com:always-auth': true,
})
// READING
// returns a promise. reads the actual contents of node_modules
arb.loadActual().then(tree => {
// tree is also stored at arb.actualTree
})
// read just what the package-lock.json/npm-shrinkwrap says
// This *also* loads the yarn.lock file, but that's only relevant
// when building the ideal tree.
arb.loadVirtual().then(tree => {
// tree is also stored at arb.virtualTree
// this fails if there's no package-lock.json or package.json in the folder
// note that loading this way should only be done if there's no
// node_modules folder
})
// OPTIMIZING AND DESIGNING
// build an ideal tree from the package.json and various lockfiles.
arb.buildIdealTree(options).then(() => {
// next step is to reify that ideal tree onto disk.
// options can be:
// rm: array of package names to remove at top level
// add: Array of package specifiers to add at the top level. Each of
// these will be resolved with pacote.manifest if the name can't be
// determined from the spec. (Eg, `github:foo/bar` vs `foo@somespec`.)
// The dep will be saved in the location where it already exists
// (or pkg.dependencies), unless a different saveType is specified.
// saveType: Save added packages in a specific dependency set.
// - null (default) Wherever they exist already, or 'dependencies'
// - prod: definitely in 'dependencies'
// - optional: in 'optionalDependencies'
// - dev: devDependencies
// - peer: save in peerDependencies, and remove any optional flag from
// peerDependenciesMeta if one exists
// - peerOptional: save in peerDependencies, and add a
// peerDependenciesMeta[name].optional flag
// saveBundle: add newly added deps to the bundleDependencies list
// update: Either `true` to just go ahead and update everything, or an
// object with any or all of the following fields:
// - all: boolean. set to true to just update everything
// - names: names of packages to update (like `npm update foo`)
// prune: boolean, default true. Prune extraneous nodes from the tree.
// preferDedupe: prefer to deduplicate packages if possible, rather than
// choosing a newer version of a dependency. Defaults to false, ie,
// always try to get the latest and greatest deps.
// legacyBundling: Nest every dep under the node requiring it, npm v2 style.
// No unnecessary deduplication. Default false.
// At the end of this process, arb.idealTree is set.
})
// WRITING
// Make the idealTree be the thing that's on disk
arb.reify({
// write the lockfile(s) back to disk, and package.json with any updates
// defaults to 'true'
save: true,
}).then(() => {
// node_modules has been written to match the idealTree
})
```
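The reading, designing, and writing steps compose. Here is a minimal end-to-end sketch (using the illustrative specifier `abbrev@^1.0.0`) that adds one dev dependency and reifies the result:
```js
const Arborist = require('@npmcli/arborist')
const arb = new Arborist({ path: '/path/to/package/root' })
// design an ideal tree with one added dev dependency, then write it to disk
arb.buildIdealTree({ add: ['abbrev@^1.0.0'], saveType: 'dev' })
  .then(() => arb.reify({ save: true }))
  .then(() => {
    console.log('node_modules now matches arb.idealTree')
  })
```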
## DATA STRUCTURES
A `node_modules` tree is a logical graph of dependencies overlaid on a
physical tree of folders.
A `Node` represents a package folder on disk, either at the root of the
package, or within a `node_modules` folder. The physical structure of the
folder tree is represented by the `node.parent` reference to the containing
folder, and `node.children` map of nodes within its `node_modules`
folder, where the key in the map is the name of the folder in
`node_modules`, and the value is the child node.
A node without a parent is the top of its tree.
A `Link` represents a symbolic link to a package on disk. This can be a
symbolic link to a package folder within the current tree, or elsewhere on
disk. The `link.target` is a reference to the actual node. Links differ
from Nodes in that dependencies are resolved from the _target_ location,
rather than from the link location.
An `Edge` represents a dependency relationship. Each node has an `edgesIn`
set, and an `edgesOut` map. Each edge has a `type` which specifies what
kind of dependency it represents: `'prod'` for regular dependencies,
`'peer'` for peerDependencies, `'dev'` for devDependencies, and
`'optional'` for optionalDependencies. `edge.from` is a reference to the
node that has the dependency, and `edge.to` is a reference to the node
that fulfills it.
As nodes are moved around in the tree, the graph edges are automatically
updated to point at the new module resolution targets. In other words,
`edge.from`, `edge.name`, and `edge.spec` are immutable; `edge.to` is
updated automatically when a node's parent changes.
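A short sketch tying these structures together: load the actual tree and print each of the root node's outgoing edges (a minimal illustration, assuming the current directory is a package root):
```js
const Arborist = require('@npmcli/arborist')
new Arborist({ path: '.' }).loadActual().then(tree => {
  // edgesOut is a Map keyed by dependency name
  for (const [name, edge] of tree.edgesOut) {
    // edge.to is the node that fulfills the dep; null when unresolved
    const where = edge.to ? edge.to.location : `(${edge.error})`
    console.log(`${edge.type} ${name}@${edge.spec} -> ${where}`)
  }
})
```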
### class Node
All arborist trees are `Node` objects. A `Node` refers
to a package folder, which may have children in `node_modules`.
* `node.name` The name of this node's folder in `node_modules`.
* `node.parent` Physical parent node in the tree. The package in whose
`node_modules` folder this package lives. Null if node is top of tree.
Setting `node.parent` will automatically update `node.location` and all
graph edges affected by the move.
* `node.meta` A `Shrinkwrap` object which looks up `resolved` and
`integrity` values for all modules in this tree. Only relevant on `root`
nodes.
* `node.children` Map of packages located in the node's `node_modules`
folder.
* `node.package` The contents of this node's `package.json` file.
* `node.path` File path to this package. If the node is a link, then this
is the path to the link, not to the link target. If the node is _not_ a
link, then this matches `node.realpath`.
* `node.realpath` The full real filepath on disk where this node lives.
* `node.location` A slash-normalized relative path from the root node to
this node's path.
* `node.isLink` Whether this represents a symlink. Always `false` for Node
objects, always `true` for Link objects.
* `node.isRoot` True if this node is a root node. (Ie, if `node.root ===
node`.)
* `node.root` The root node where we are working. If not assigned to some
other value, resolves to the node itself. (Ie, the root node's `root`
property refers to itself.)
* `node.isTop` True if this node is the top of its tree (ie, has no
  `parent`), false otherwise.
* `node.top` The top node in this node's tree. This will be equal to
`node.root` for simple trees, but link targets will frequently be outside
of (or nested somewhere within) a `node_modules` hierarchy, and so will
have a different `top`.
* `node.dev`, `node.optional`, `node.devOptional`, `node.peer`, Indicators
as to whether this node is a dev, optional, and/or peer dependency.
These flags are relevant when pruning dependencies out of the tree or
deciding what to reify. See **Package Dependency Flags** below for
explanations.
* `node.edgesOut` Edges in the dependency graph indicating nodes that this
node depends on, which resolve its dependencies.
* `node.edgesIn` Edges in the dependency graph indicating nodes that depend
on this node.
* `node.extraneous` True if this package is not required by any other for
  any reason. False for top of tree.
* `node.resolve(name)` Identify the node that will be returned when code
  in this package runs `require(name)`. (See the sketch after this list.)
* `node.errors` Array of errors encountered while parsing package.json or
version specifiers.
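A brief sketch of `node.resolve` in action, assuming `tree` was obtained from `arb.loadActual()` as in the usage section above; `abbrev` is just an illustrative dependency name:
```js
// find the node that require('abbrev') would load from the root package
const dep = tree.resolve('abbrev')
if (dep) {
  console.log(dep.name, dep.package.version, dep.location)
}
```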
### class Link
Link objects represent a symbolic link within the `node_modules` folder.
They have most of the same properties and methods as `Node` objects, with a
few differences.
* `link.target` A Node object representing the package that the link
references. If this is a Node already present within the tree, then it
will be the same object. If it's outside of the tree, then it will be
treated as the top of its own tree.
* `link.isLink` Always true.
* `link.children` This is always an empty map, since links don't have their
own children directly.
### class Edge
Edge objects represent a dependency relationship from a package node to the
point in the tree where the dependency will be loaded. As nodes are moved
within the tree, Edges automatically update to point to the appropriate
location.
* `new Edge({ from, type, name, spec })` Creates a new edge with the
specified fields. After instantiation, none of the fields can be
changed directly.
* `edge.from` The node that has the dependency.
* `edge.type` The type of dependency. One of `'prod'`, `'dev'`, `'peer'`,
or `'optional'`.
* `edge.name` The name of the dependency. Ie, the key in the
relevant `package.json` dependencies object.
* `edge.spec` The specifier that is required. This can be a version,
range, tag name, git url, or tarball URL. Any specifier allowed by npm
is supported.
* `edge.to` Automatically set to the node in the tree that matches the
`name` field.
* `edge.valid` True if `edge.to` satisfies the specifier.
* `edge.error` A string indicating the type of error if there is a problem,
or `null` if it's valid. Values, in order of precedence:
* `DETACHED` Indicates that the edge has been detached from its
`edge.from` node, typically because a new edge was created when a
dependency specifier was modified.
* `MISSING` Indicates that the dependency is unmet. Note that this is
_not_ set for unmet dependencies of the `optional` type.
* `PEER LOCAL` Indicates that a `peerDependency` is found in the
node's local `node_modules` folder, and the node is not the top of
the tree. This violates the `peerDependency` contract, because it
means that the dependency is not a peer.
* `INVALID` Indicates that the dependency does not satisfy `edge.spec`.
* `edge.reload()` Re-resolve to find the appropriate value for `edge.to`.
Called automatically from the `Node` class when the tree is mutated.
### Package Dependency Flags
The dependency type of a node can be determined efficiently by looking at
the `dev`, `optional`, and `devOptional` flags on the node object. These
are updated by arborist when necessary whenever the tree is modified in
such a way that the dependency graph can change, and are relevant when
pruning nodes from the tree.
```
| extraneous | peer | dev | optional | devOptional | meaning | prune? |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | | | | | production dep | never |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| X | N/A | N/A | N/A | N/A | nothing depends on | always |
| | | | | | this, it is trash | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | | X | | X | devDependency, or | if pruning dev |
| | | | | not in lock | only depended upon | |
| | | | | | by devDependencies | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | | | X | X | optionalDependency, | if pruning |
| | | | | not in lock | or only depended on | optional |
| | | | | | by optionalDeps | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | | X | X | X | Optional dependency | if pruning EITHER |
| | | | | not in lock | of dep(s) in the | dev OR optional |
| | | | | | dev hierarchy | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | | | | X | BOTH a non-optional | if pruning BOTH |
| | | | | in lock | dep within the dev | dev AND optional |
| | | | | | hierarchy, AND a | |
| | | | | | dep within the | |
| | | | | | optional hierarchy | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | X | | | | peer dependency, or | if pruning peers |
| | | | | | only depended on by | |
| | | | | | peer dependencies | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | X | X | | X | peer dependency of | if pruning peer |
| | | | | not in lock | dev node hierarchy | OR dev deps |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | X | | X | X | peer dependency of | if pruning peer |
| | | | | not in lock | optional nodes, or | OR optional deps |
| | | | | | peerOptional dep | |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | X | X | X | X | peer optional deps | if pruning peer |
| | | | | not in lock | of the dev dep | OR optional OR |
| | | | | | hierarchy | dev |
|------------+------+-----+----------+-------------+---------------------+-------------------|
| | X | | | X | BOTH a non-optional | if pruning peers |
| | | | | in lock | peer dep within the | OR: |
| | | | | | dev hierarchy, AND | BOTH optional |
| | | | | | a peer optional dep | AND dev deps |
+------------+------+-----+----------+-------------+---------------------+-------------------+
```
* If none of these flags are set, then the node is required by the
dependency and/or peerDependency hierarchy. It should not be pruned.
* If _both_ `node.dev` and `node.optional` are set, then the node is an
optional dependency of one of the packages in the devDependency
hierarchy. It should be pruned if _either_ dev or optional deps are
being removed.
* If `node.dev` is set, but `node.optional` is not, then the node is
required in the devDependency hierarchy. It should be pruned if dev
dependencies are being removed.
* If `node.optional` is set, but `node.dev` is not, then the node is
required in the optionalDependency hierarchy. It should be pruned if
optional dependencies are being removed.
* If `node.devOptional` is set, then the node is a (non-optional)
dependency within the devDependency hierarchy, _and_ a dependency
within the `optionalDependency` hierarchy. It should be pruned if
_both_ dev and optional dependencies are being removed.
* If `node.peer` is set, then all the same semantics apply as above, except
that the dep is brought in by a peer dep at some point, rather than a
normal non-peer dependency.
Note: `devOptional` is only set in the shrinkwrap/package-lock file if
_neither_ `dev` nor `optional` are set, as it would be redundant.
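The bullet rules above translate directly into a small predicate. The helper below is an illustrative transcription of those rules (peer handling omitted), not arborist's internal pruning code:
```js
// decide whether a node should be pruned, per the flag semantics above
const shouldPrune = (node, { pruneDev = false, pruneOptional = false } = {}) => {
  if (node.extraneous) return true                 // nothing depends on it
  if (node.dev && node.optional) return pruneDev || pruneOptional
  if (node.dev) return pruneDev
  if (node.optional) return pruneOptional
  if (node.devOptional) return pruneDev && pruneOptional
  return false                                     // production dep: never prune
}
```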
## BIN
Arborist ships with a CLI that can be used to run arborist-specific commands outside of the context of the npm CLI. This script is currently not part of the public API and is subject to breaking changes outside of major version bumps.
To see the usage run:
```
npx @npmcli/arborist --help
```

19
spa/node_modules/@npmcli/arborist/bin/actual.js generated vendored Normal file

@@ -0,0 +1,19 @@
const Arborist = require('../')
const printTree = require('./lib/print-tree.js')
module.exports = (options, time) => new Arborist(options)
.loadActual(options)
.then(time)
.then(async ({ timing, result: tree }) => {
printTree(tree)
if (options.save) {
await tree.meta.save()
}
if (options.saveHidden) {
tree.meta.hiddenLockfile = true
tree.meta.filename = options.path + '/node_modules/.package-lock.json'
await tree.meta.save()
}
return `read ${tree.inventory.size} deps in ${timing.ms}`
})

51
spa/node_modules/@npmcli/arborist/bin/audit.js generated vendored Normal file

@@ -0,0 +1,51 @@
const Arborist = require('../')
const printTree = require('./lib/print-tree.js')
const log = require('./lib/logging.js')
const Vuln = require('../lib/vuln.js')
const printReport = report => {
for (const vuln of report.values()) {
log.info(printVuln(vuln))
}
if (report.topVulns.size) {
log.info('\n# top-level vulnerabilities')
for (const vuln of report.topVulns.values()) {
log.info(printVuln(vuln))
}
}
}
const printVuln = vuln => {
return {
__proto__: { constructor: Vuln },
name: vuln.name,
issues: [...vuln.advisories].map(a => printAdvisory(a)),
range: vuln.simpleRange,
nodes: [...vuln.nodes].map(node => `${node.name} ${node.location || '#ROOT'}`),
...(vuln.topNodes.size === 0 ? {} : {
topNodes: [...vuln.topNodes].map(node => `${node.location || '#ROOT'}`),
}),
}
}
const printAdvisory = a => `${a.title}${a.url ? ' ' + a.url : ''}`
module.exports = (options, time) => {
const arb = new Arborist(options)
return arb
.audit(options)
.then(time)
.then(async ({ timing, result: tree }) => {
if (options.fix) {
printTree(tree)
}
printReport(arb.auditReport)
if (tree.meta && options.save) {
await tree.meta.save()
}
return options.fix
? `resolved ${tree.inventory.size} deps in ${timing.seconds}`
: `done in ${timing.seconds}`
})
}

38
spa/node_modules/@npmcli/arborist/bin/funding.js generated vendored Normal file

@@ -0,0 +1,38 @@
const Arborist = require('../')
const log = require('./lib/logging.js')
module.exports = (options, time) => {
const query = options._.shift()
const a = new Arborist(options)
return a
.loadVirtual()
.then(tree => {
// only load the actual tree if the virtual one doesn't have modern metadata
if (!tree.meta || !(tree.meta.originalLockfileVersion >= 2)) {
log.error('old metadata, load actual')
throw 'load actual'
} else {
log.error('meta ok, return virtual tree')
return tree
}
})
.catch(() => a.loadActual())
.then(time)
.then(({ timing, result: tree }) => {
if (!query) {
for (const node of tree.inventory.values()) {
if (node.package.funding) {
log.info(node.name, node.location, node.package.funding)
}
}
} else {
for (const node of tree.inventory.query('name', query)) {
if (node.package.funding) {
log.info(node.name, node.location, node.package.funding)
}
}
}
return `read ${tree.inventory.size} deps in ${timing.ms}`
})
}

14
spa/node_modules/@npmcli/arborist/bin/ideal.js generated vendored Normal file

@@ -0,0 +1,14 @@
const Arborist = require('../')
const printTree = require('./lib/print-tree.js')
module.exports = (options, time) => new Arborist(options)
.buildIdealTree(options)
.then(time)
.then(async ({ timing, result: tree }) => {
printTree(tree)
if (tree.meta && options.save) {
await tree.meta.save()
}
return `resolved ${tree.inventory.size} deps in ${timing.seconds}`
})

111
spa/node_modules/@npmcli/arborist/bin/index.js generated vendored Executable file

@@ -0,0 +1,111 @@
#!/usr/bin/env node
const fs = require('fs')
const path = require('path')
const { bin, arb: options } = require('./lib/options')
const version = require('../package.json').version
const usage = (message = '') => `Arborist - the npm tree doctor
Version: ${version}
${message && '\n' + message + '\n'}
# USAGE
arborist <cmd> [path] [options...]
# COMMANDS
* reify: reify ideal tree to node_modules (install, update, rm, ...)
* prune: prune the ideal tree and reify (like npm prune)
* ideal: generate and print the ideal tree
* actual: read and print the actual tree in node_modules
* virtual: read and print the virtual tree in the local shrinkwrap file
* shrinkwrap: load a local shrinkwrap and print its data
* audit: perform a security audit on project dependencies
* funding: query funding information in the local package tree. A second
positional argument after the path name can limit to a package name.
* license: query license information in the local package tree. A second
positional argument after the path name can limit to a license type.
* help: print this text
* version: print the version
# OPTIONS
Most npm options are supported, but in camelCase rather than css-case. For
example, instead of '--dry-run', use '--dryRun'.
Additionally:
* --loglevel=warn|--quiet will suppress the printing of package trees
* --logfile <file|bool> will output logs to a file
* --timing will show timing information
* Instead of 'npm install <pkg>', use 'arborist reify --add=<pkg>'.
The '--add=<pkg>' option can be specified multiple times.
* Instead of 'npm rm <pkg>', use 'arborist reify --rm=<pkg>'.
The '--rm=<pkg>' option can be specified multiple times.
* Instead of 'npm update', use 'arborist reify --update-all'.
* 'npm audit fix' is 'arborist audit --fix'
`
const commands = {
version: () => console.log(version),
help: () => console.log(usage()),
exit: () => {
process.exitCode = 1
console.error(
usage(`Error: command '${bin.command}' does not exist.`)
)
},
}
const commandFiles = fs.readdirSync(__dirname).filter((f) => path.extname(f) === '.js' && f !== __filename)
for (const file of commandFiles) {
const command = require(`./${file}`)
const name = path.basename(file, '.js')
const totalTime = `bin:${name}:init`
const scriptTime = `bin:${name}:script`
commands[name] = () => {
const timers = require('./lib/timers')
const log = require('./lib/logging')
log.info(name, options)
process.emit('time', totalTime)
process.emit('time', scriptTime)
return command(options, (result) => {
process.emit('timeEnd', scriptTime)
return {
result,
timing: {
seconds: `${timers.get(scriptTime) / 1e9}s`,
ms: `${timers.get(scriptTime) / 1e6}ms`,
},
}
})
.then((result) => {
log.info(result)
return result
})
.catch((err) => {
process.exitCode = 1
log.error(err)
return err
})
.then((r) => {
process.emit('timeEnd', totalTime)
if (bin.loglevel !== 'silent') {
console[process.exitCode ? 'error' : 'log'](r)
}
return r
})
}
}
if (commands[bin.command]) {
commands[bin.command]()
} else {
commands.exit()
}

77
spa/node_modules/@npmcli/arborist/bin/lib/logging.js generated vendored Normal file

@@ -0,0 +1,77 @@
const log = require('proc-log')
const fs = require('fs')
const { dirname } = require('path')
const os = require('os')
const { inspect, format } = require('util')
const { bin: options } = require('./options.js')
// add a meta method to proc-log for passing optional
// metadata through to log handlers
const META = Symbol('meta')
const parseArgs = (...args) => {
const { [META]: isMeta } = args[args.length - 1] || {}
return isMeta
? [args[args.length - 1], ...args.slice(0, args.length - 1)]
: [{}, ...args]
}
log.meta = (meta = {}) => ({ [META]: true, ...meta })
const levels = new Map([
'silly',
'verbose',
'info',
'http',
'notice',
'warn',
'error',
'silent',
].map((level, index) => [level, index]))
const addLogListener = (write, { eol = os.EOL, loglevel = 'silly', colors = false } = {}) => {
const levelIndex = levels.get(loglevel)
const magenta = m => colors ? `\x1B[35m${m}\x1B[39m` : m
const dim = m => colors ? `\x1B[2m${m}\x1B[22m` : m
const red = m => colors ? `\x1B[31m${m}\x1B[39m` : m
const formatter = (level, ...args) => {
const depth = level === 'error' && args[0] && args[0].code === 'ERESOLVE' ? Infinity : 10
if (level === 'info' && args[0] === 'timeEnd') {
args[1] = dim(args[1])
} else if (level === 'error' && args[0] === 'timeError') {
args[1] = red(args[1])
}
const messages = args.map(a => typeof a === 'string' ? a : inspect(a, { depth, colors }))
const pref = `${process.pid} ${magenta(level)} `
return pref + format(...messages).trim().split('\n').join(`${eol}${pref}`) + eol
}
process.on('log', (...args) => {
const [meta, level, ...logArgs] = parseArgs(...args)
if (levelIndex <= levels.get(level) || meta.force) {
write(formatter(level, ...logArgs))
}
})
}
if (options.loglevel !== 'silent') {
addLogListener((v) => process.stderr.write(v), {
eol: '\n',
colors: options.colors,
loglevel: options.loglevel,
})
}
if (options.logfile) {
log.silly('logfile', options.logfile)
fs.mkdirSync(dirname(options.logfile), { recursive: true })
const fd = fs.openSync(options.logfile, 'a')
addLogListener((str) => fs.writeSync(fd, str))
}
module.exports = log
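The `log.meta` affordance defined above is consumed by passing the metadata object as a trailing argument; a small sketch mirroring its use in `bin/lib/timers.js` below:
```js
const log = require('./logging.js')
// ordinary entry, filtered by --loglevel as usual
log.info('reify', 'starting')
// force: true makes the listener print the entry even when the
// level would normally be filtered out
log.info('timeEnd', 'reify 1.2s', log.meta({ force: true }))
```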

123
spa/node_modules/@npmcli/arborist/bin/lib/options.js generated vendored Normal file

@@ -0,0 +1,123 @@
const nopt = require('nopt')
const path = require('path')
const has = (o, k) => Object.prototype.hasOwnProperty.call(o, k)
const cleanPath = (val) => {
const k = Symbol('key')
const data = {}
nopt.typeDefs.path.validate(data, k, val)
return data[k]
}
const parse = (...noptArgs) => {
const binOnlyOpts = {
command: String,
loglevel: String,
colors: Boolean,
timing: ['always', Boolean],
logfile: String,
}
const arbOpts = {
add: Array,
rm: Array,
omit: Array,
update: Array,
workspaces: Array,
global: Boolean,
force: Boolean,
'global-style': Boolean,
'prefer-dedupe': Boolean,
'legacy-peer-deps': Boolean,
'update-all': Boolean,
before: Date,
path: path,
cache: path,
...binOnlyOpts,
}
const short = {
quiet: ['--loglevel', 'warn'],
logs: ['--logfile', 'true'],
w: '--workspaces',
g: '--global',
f: '--force',
}
const defaults = {
// key order is important for command and path
// since they shift positional args
// command is 1st, path is 2nd
command: (o) => o.argv.remain.shift(),
path: (o) => cleanPath(o.argv.remain.shift() || '.'),
colors: has(process.env, 'NO_COLOR') ? false : !!process.stderr.isTTY,
loglevel: 'silly',
timing: (o) => o.loglevel === 'silly',
cache: `${process.env.HOME}/.npm/_cacache`,
}
const derived = [
// making update either `all` or an array of names but not both
({ updateAll: all, update: names, ...o }) => {
if (all || names) {
o.update = all != null ? { all } : { names }
}
return o
},
({ logfile, ...o }) => {
// logfile is parsed as a string, so if it's true or set but empty
// then set the default logfile
if (logfile === 'true' || logfile === '') {
logfile = `arb-log-${new Date().toISOString().replace(/[.:]/g, '_')}.log`
}
// then parse it the same as nopt parses other paths
if (logfile) {
o.logfile = cleanPath(logfile)
}
return o
},
]
const transforms = [
// Camelcase all top level keys
(o) => {
const entries = Object.entries(o).map(([k, v]) => [
k.replace(/-./g, s => s[1].toUpperCase()),
v,
])
return Object.fromEntries(entries)
},
// Set defaults on unset keys
(o) => {
for (const [k, v] of Object.entries(defaults)) {
if (!has(o, k)) {
o[k] = typeof v === 'function' ? v(o) : v
}
}
return o
},
// Set/unset derived values
...derived.map((derive) => (o) => derive(o) || o),
// Separate bin and arborist options
({ argv: { remain: _ }, ...o }) => {
const bin = { _ }
for (const k of Object.keys(binOnlyOpts)) {
if (has(o, k)) {
bin[k] = o[k]
delete o[k]
}
}
return { bin, arb: o }
},
]
let options = nopt(arbOpts, short, ...noptArgs)
for (const t of transforms) {
options = t(options)
}
return options
}
module.exports = parse()
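The module exports the parsed result pre-split into two buckets; a sketch of how the rest of `bin/` consumes it (matching the destructuring in `index.js` above):
```js
// `bin` carries the bin-only options (command, loglevel, colors, timing,
// logfile) plus remaining positionals on bin._; `arb` carries everything
// else and is what gets handed to the Arborist constructor
const { bin, arb } = require('./lib/options.js')
console.log(bin.command, arb.path)
```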

4
spa/node_modules/@npmcli/arborist/bin/lib/print-tree.js generated vendored Normal file

@@ -0,0 +1,4 @@
const { inspect } = require('util')
const log = require('./logging.js')
module.exports = tree => log.info(inspect(tree.toJSON(), { depth: Infinity }))

33
spa/node_modules/@npmcli/arborist/bin/lib/timers.js generated vendored Normal file

@@ -0,0 +1,33 @@
const { bin: options } = require('./options.js')
const log = require('./logging.js')
const timers = new Map()
const finished = new Map()
process.on('time', name => {
if (timers.has(name)) {
throw new Error('conflicting timer! ' + name)
}
timers.set(name, process.hrtime.bigint())
})
process.on('timeEnd', name => {
if (!timers.has(name)) {
throw new Error('timer not started! ' + name)
}
const elapsed = Number(process.hrtime.bigint() - timers.get(name))
timers.delete(name)
finished.set(name, elapsed)
if (options.timing) {
log.info('timeEnd', `${name} ${elapsed / 1e9}s`, log.meta({ force: options.timing === 'always' }))
}
})
process.on('exit', () => {
for (const name of timers.keys()) {
log.error('timeError', 'Dangling timer:', name)
process.exitCode = 1
}
})
module.exports = finished
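The timer protocol is event-based, so any code in the process can drive it; this is how `bin/index.js` above brackets each command:
```js
const finished = require('./timers.js') // Map of name -> elapsed nanoseconds
process.emit('time', 'bin:example:script')    // start a named timer
// ... do some work ...
process.emit('timeEnd', 'bin:example:script') // stop it; logs if --timing
console.log(finished.get('bin:example:script') / 1e9, 'seconds')
```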

48
spa/node_modules/@npmcli/arborist/bin/license.js generated vendored Normal file

@@ -0,0 +1,48 @@
const localeCompare = require('@isaacs/string-locale-compare')('en')
const Arborist = require('../')
const log = require('./lib/logging.js')
module.exports = (options, time) => {
const query = options._.shift()
const a = new Arborist(options)
return a
.loadVirtual()
.then(tree => {
// only load the actual tree if the virtual one doesn't have modern metadata
if (!tree.meta || !(tree.meta.originalLockfileVersion >= 2)) {
throw 'load actual'
} else {
return tree
}
}).catch((er) => {
log.error('loading actual tree', er)
return a.loadActual()
})
.then(time)
.then(({ result: tree }) => {
const output = []
if (!query) {
const set = []
for (const license of tree.inventory.query('license')) {
set.push([tree.inventory.query('license', license).size, license])
}
for (const [count, license] of set.sort((a, b) =>
a[1] && b[1] ? b[0] - a[0] || localeCompare(a[1], b[1])
: a[1] ? -1
: b[1] ? 1
: 0)) {
output.push(`${count} ${license}`)
log.info(count, license)
}
} else {
for (const node of tree.inventory.query('license', query === 'undefined' ? undefined : query)) {
const msg = `${node.name} ${node.location} ${node.package.description || ''}`
output.push(msg)
log.info(msg)
}
}
return output.join('\n')
})
}

48
spa/node_modules/@npmcli/arborist/bin/prune.js generated vendored Normal file

@@ -0,0 +1,48 @@
const Arborist = require('../')
const printTree = require('./lib/print-tree.js')
const log = require('./lib/logging.js')
const printDiff = diff => {
const { depth } = require('treeverse')
depth({
tree: diff,
visit: d => {
if (d.location === '') {
return
}
switch (d.action) {
case 'REMOVE':
log.info('REMOVE', d.actual.location)
break
case 'ADD':
log.info('ADD', d.ideal.location, d.ideal.resolved)
break
case 'CHANGE':
log.info('CHANGE', d.actual.location, {
from: d.actual.resolved,
to: d.ideal.resolved,
})
break
}
},
getChildren: d => d.children,
})
}
module.exports = (options, time) => {
const arb = new Arborist(options)
return arb
.prune(options)
.then(time)
.then(async ({ timing, result: tree }) => {
printTree(tree)
if (options.dryRun) {
printDiff(arb.diff)
}
if (tree.meta && options.save) {
await tree.meta.save()
}
return `resolved ${tree.inventory.size} deps in ${timing.seconds}`
})
}

48
spa/node_modules/@npmcli/arborist/bin/reify.js generated vendored Normal file

@@ -0,0 +1,48 @@
const Arborist = require('../')
const printTree = require('./lib/print-tree.js')
const log = require('./lib/logging.js')
const printDiff = diff => {
const { depth } = require('treeverse')
depth({
tree: diff,
visit: d => {
if (d.location === '') {
return
}
switch (d.action) {
case 'REMOVE':
log.info('REMOVE', d.actual.location)
break
case 'ADD':
log.info('ADD', d.ideal.location, d.ideal.resolved)
break
case 'CHANGE':
log.info('CHANGE', d.actual.location, {
from: d.actual.resolved,
to: d.ideal.resolved,
})
break
}
},
getChildren: d => d.children,
})
}
module.exports = (options, time) => {
const arb = new Arborist(options)
return arb
.reify(options)
.then(time)
.then(async ({ timing, result: tree }) => {
printTree(tree)
if (options.dryRun) {
printDiff(arb.diff)
}
if (tree.meta && options.save) {
await tree.meta.save()
}
return `resolved ${tree.inventory.size} deps in ${timing.seconds}`
})
}

7
spa/node_modules/@npmcli/arborist/bin/shrinkwrap.js generated vendored Normal file

@@ -0,0 +1,7 @@
const Shrinkwrap = require('../lib/shrinkwrap.js')
module.exports = (options, time) => Shrinkwrap
.load(options)
.then((s) => s.commit())
.then(time)
.then(({ result: s }) => JSON.stringify(s, 0, 2))

14
spa/node_modules/@npmcli/arborist/bin/virtual.js generated vendored Normal file

@@ -0,0 +1,14 @@
const Arborist = require('../')
const printTree = require('./lib/print-tree.js')
module.exports = (options, time) => new Arborist(options)
.loadVirtual()
.then(time)
.then(async ({ timing, result: tree }) => {
printTree(tree)
if (options.save) {
await tree.meta.save()
}
return `read ${tree.inventory.size} deps in ${timing.ms}`
})

143
spa/node_modules/@npmcli/arborist/lib/add-rm-pkg-deps.js generated vendored Normal file

@@ -0,0 +1,143 @@
// add and remove dependency specs to/from pkg manifest
const log = require('proc-log')
const localeCompare = require('@isaacs/string-locale-compare')('en')
const add = ({ pkg, add, saveBundle, saveType }) => {
for (const { name, rawSpec } of add) {
let addSaveType = saveType
// if the user does not give us a type, we infer which type(s)
// to keep based on the same order of priority we do when
// building the tree as defined in the _loadDeps method of
// the node class.
if (!addSaveType) {
addSaveType = inferSaveType(pkg, name)
}
if (addSaveType === 'prod') {
// a production dependency can only exist as production (rpj ensures it
// doesn't coexist w/ optional)
deleteSubKey(pkg, 'devDependencies', name, 'dependencies')
deleteSubKey(pkg, 'peerDependencies', name, 'dependencies')
} else if (addSaveType === 'dev') {
// a dev dependency may co-exist as peer, or optional, but not production
deleteSubKey(pkg, 'dependencies', name, 'devDependencies')
} else if (addSaveType === 'optional') {
// an optional dependency may co-exist as dev (rpj ensures it doesn't
// coexist w/ prod)
deleteSubKey(pkg, 'peerDependencies', name, 'optionalDependencies')
} else { // peer or peerOptional is all that's left
// a peer dependency may coexist as dev
deleteSubKey(pkg, 'dependencies', name, 'peerDependencies')
deleteSubKey(pkg, 'optionalDependencies', name, 'peerDependencies')
}
const depType = saveTypeMap.get(addSaveType)
pkg[depType] = pkg[depType] || {}
if (rawSpec !== '*' || pkg[depType][name] === undefined) {
pkg[depType][name] = rawSpec
}
if (addSaveType === 'optional') {
// Affordance for previous npm versions that require this behaviour
pkg.dependencies = pkg.dependencies || {}
pkg.dependencies[name] = pkg.optionalDependencies[name]
}
if (addSaveType === 'peer' || addSaveType === 'peerOptional') {
const pdm = pkg.peerDependenciesMeta || {}
if (addSaveType === 'peer' && pdm[name] && pdm[name].optional) {
pdm[name].optional = false
} else if (addSaveType === 'peerOptional') {
pdm[name] = pdm[name] || {}
pdm[name].optional = true
pkg.peerDependenciesMeta = pdm
}
// peerDeps are often also a devDep, so that they can be tested when
// using package managers that don't auto-install peer deps
if (pkg.devDependencies && pkg.devDependencies[name] !== undefined) {
pkg.devDependencies[name] = pkg.peerDependencies[name]
}
}
if (saveBundle && addSaveType !== 'peer' && addSaveType !== 'peerOptional') {
// keep it sorted, keep it unique
const bd = new Set(pkg.bundleDependencies || [])
bd.add(name)
pkg.bundleDependencies = [...bd].sort(localeCompare)
}
}
return pkg
}
// Canonical source of both the map between saveType and where it correlates to
// in the package, and the names of all our dependencies attributes
const saveTypeMap = new Map([
['dev', 'devDependencies'],
['optional', 'optionalDependencies'],
['prod', 'dependencies'],
['peerOptional', 'peerDependencies'],
['peer', 'peerDependencies'],
])
// Finds where the package is already in the spec and infers saveType from that
const inferSaveType = (pkg, name) => {
for (const saveType of saveTypeMap.keys()) {
if (hasSubKey(pkg, saveTypeMap.get(saveType), name)) {
if (
saveType === 'peerOptional' &&
(!hasSubKey(pkg, 'peerDependenciesMeta', name) ||
!pkg.peerDependenciesMeta[name].optional)
) {
return 'peer'
}
return saveType
}
}
return 'prod'
}
const hasSubKey = (pkg, depType, name) => {
return pkg[depType] && Object.prototype.hasOwnProperty.call(pkg[depType], name)
}
// Removes a subkey and warns about it if it's being replaced
const deleteSubKey = (pkg, depType, name, replacedBy) => {
if (hasSubKey(pkg, depType, name)) {
if (replacedBy) {
log.warn('idealTree', `Removing ${depType}.${name} in favor of ${replacedBy}.${name}`)
}
delete pkg[depType][name]
// clean up peerDepsMeta if we are removing something from peerDependencies
if (depType === 'peerDependencies' && pkg.peerDependenciesMeta) {
delete pkg.peerDependenciesMeta[name]
if (!Object.keys(pkg.peerDependenciesMeta).length) {
delete pkg.peerDependenciesMeta
}
}
if (!Object.keys(pkg[depType]).length) {
delete pkg[depType]
}
}
}
const rm = (pkg, rm) => {
for (const depType of new Set(saveTypeMap.values())) {
for (const name of rm) {
deleteSubKey(pkg, depType, name)
}
}
if (pkg.bundleDependencies) {
pkg.bundleDependencies = pkg.bundleDependencies
.filter(name => !rm.includes(name))
if (!pkg.bundleDependencies.length) {
delete pkg.bundleDependencies
}
}
return pkg
}
module.exports = { add, rm, saveTypeMap, hasSubKey }
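A usage sketch for the two exported helpers, with illustrative package names:
```js
const { add, rm } = require('./add-rm-pkg-deps.js')
const pkg = { dependencies: { foo: '^1.0.0' } }
// save bar as a devDependency; add entries carry a name and rawSpec
add({ pkg, add: [{ name: 'bar', rawSpec: '^2.0.0' }], saveType: 'dev' })
// pkg.devDependencies is now { bar: '^2.0.0' }
// remove foo from every dependency set it appears in
rm(pkg, ['foo'])
```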

51
spa/node_modules/@npmcli/arborist/lib/arborist/audit.js generated vendored Normal file

@@ -0,0 +1,51 @@
// mixin implementing the audit method
const AuditReport = require('../audit-report.js')
// shared with reify
const _global = Symbol.for('global')
const _workspaces = Symbol.for('workspaces')
const _includeWorkspaceRoot = Symbol.for('includeWorkspaceRoot')
module.exports = cls => class Auditor extends cls {
async audit (options = {}) {
this.addTracker('audit')
if (this[_global]) {
throw Object.assign(
new Error('`npm audit` does not support testing globals'),
{ code: 'EAUDITGLOBAL' }
)
}
// allow the user to set options on the ctor as well.
// XXX: deprecate separate method options objects.
options = { ...this.options, ...options }
process.emit('time', 'audit')
let tree
if (options.packageLock === false) {
// build ideal tree
await this.loadActual(options)
await this.buildIdealTree()
tree = this.idealTree
} else {
tree = await this.loadVirtual()
}
if (this[_workspaces] && this[_workspaces].length) {
options.filterSet = this.workspaceDependencySet(
tree,
this[_workspaces],
this[_includeWorkspaceRoot]
)
}
if (!options.workspacesEnabled) {
options.filterSet =
this.excludeWorkspacesDependencySet(tree)
}
this.auditReport = await AuditReport.load(tree, options)
const ret = options.fix ? this.reify(options) : this.auditReport
process.emit('timeEnd', 'audit')
this.finishTracker('audit')
return ret
}
}
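Once mixed in, this surfaces as `arb.audit()` on the assembled Arborist class; a minimal sketch of calling it (mirroring `bin/audit.js` above):
```js
const Arborist = require('@npmcli/arborist')
const arb = new Arborist({ path: '.' })
arb.audit().then(report => {
  // without the fix option, the resolved value is the AuditReport,
  // which is also stored at arb.auditReport
  console.log(report === arb.auditReport)
})
```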

spa/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js generated vendored Normal file
File diff suppressed because it is too large

19
spa/node_modules/@npmcli/arborist/lib/arborist/deduper.js generated vendored Normal file

@@ -0,0 +1,19 @@
module.exports = cls => class Deduper extends cls {
async dedupe (options = {}) {
// allow the user to set options on the ctor as well.
// XXX: deprecate separate method options objects.
options = { ...this.options, ...options }
const tree = await this.loadVirtual().catch(() => this.loadActual())
const names = []
for (const name of tree.inventory.query('name')) {
if (tree.inventory.query('name', name).size > 1) {
names.push(name)
}
}
return this.reify({
...options,
preferDedupe: true,
update: { names },
})
}
}
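Usage sketch: the mixin adds a one-shot `dedupe()` that finds every duplicated name in the loaded tree and reifies with `preferDedupe`:
```js
const Arborist = require('@npmcli/arborist')
// reify with preferDedupe and a targeted update of each duplicated name
new Arborist({ path: '.' }).dedupe().then(() => {
  console.log('tree reified with duplicates collapsed where possible')
})
```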

167
spa/node_modules/@npmcli/arborist/lib/arborist/index.js generated vendored Normal file

@@ -0,0 +1,167 @@
// The arborist manages three trees:
// - actual
// - virtual
// - ideal
//
// The actual tree is what's present on disk in the node_modules tree
// and elsewhere that links may extend.
//
// The virtual tree is loaded from metadata (package.json and lock files).
//
// The ideal tree is what we WANT the actual tree to become. This starts
// with the virtual tree, and then applies the options requesting
// add/remove/update actions.
//
// To reify a tree, we calculate a diff between the ideal and actual trees,
// and then turn the actual tree into the ideal tree by taking the actions
// required. At the end of the reification process, the actualTree is
// updated to reflect the changes.
//
// Each tree has an Inventory at the root. Shrinkwrap is tracked by Arborist
// instance. It always refers to the actual tree, but is updated (and written
// to disk) on reification.
// Each of the mixin "classes" adds functionality, but are not dependent on
// constructor call order. So, we just load them in an array, and build up
// the base class, so that the overall voltron class is easier to test and
// cover, and separation of concerns can be maintained.
const { resolve } = require('path')
const { homedir } = require('os')
const { depth } = require('treeverse')
const { saveTypeMap } = require('../add-rm-pkg-deps.js')
const mixins = [
require('../tracker.js'),
require('./pruner.js'),
require('./deduper.js'),
require('./audit.js'),
require('./build-ideal-tree.js'),
require('./set-workspaces.js'),
require('./load-actual.js'),
require('./load-virtual.js'),
require('./rebuild.js'),
require('./reify.js'),
require('./isolated-reifier.js'),
]
const _workspacesEnabled = Symbol.for('workspacesEnabled')
const Base = mixins.reduce((a, b) => b(a), require('events'))
const getWorkspaceNodes = require('../get-workspace-nodes.js')
// if it's 1, 2, or 3, set it explicitly to that.
// if undefined or null, set it null
// otherwise, throw.
const lockfileVersion = lfv => {
if (lfv === 1 || lfv === 2 || lfv === 3) {
return lfv
}
if (lfv === undefined || lfv === null) {
return null
}
throw new TypeError('Invalid lockfileVersion config: ' + lfv)
}
class Arborist extends Base {
constructor (options = {}) {
process.emit('time', 'arborist:ctor')
super(options)
this.options = {
nodeVersion: process.version,
...options,
Arborist: this.constructor,
path: options.path || '.',
cache: options.cache || `${homedir()}/.npm/_cacache`,
packumentCache: options.packumentCache || new Map(),
workspacesEnabled: options.workspacesEnabled !== false,
replaceRegistryHost: options.replaceRegistryHost,
lockfileVersion: lockfileVersion(options.lockfileVersion),
installStrategy: options.global ? 'shallow' : (options.installStrategy ? options.installStrategy : 'hoisted'),
}
this.replaceRegistryHost = this.options.replaceRegistryHost =
(!this.options.replaceRegistryHost || this.options.replaceRegistryHost === 'npmjs') ?
'registry.npmjs.org' : this.options.replaceRegistryHost
this[_workspacesEnabled] = this.options.workspacesEnabled
if (options.saveType && !saveTypeMap.get(options.saveType)) {
throw new Error(`Invalid saveType ${options.saveType}`)
}
this.cache = resolve(this.options.cache)
this.path = resolve(this.options.path)
process.emit('timeEnd', 'arborist:ctor')
}
// TODO: We should change these to static functions instead
// of methods for the next major version
// returns an array of the actual nodes for all the workspaces
workspaceNodes (tree, workspaces) {
return getWorkspaceNodes(tree, workspaces)
}
// returns a set of workspace nodes and all their deps
workspaceDependencySet (tree, workspaces, includeWorkspaceRoot) {
const wsNodes = this.workspaceNodes(tree, workspaces)
if (includeWorkspaceRoot) {
for (const edge of tree.edgesOut.values()) {
if (edge.type !== 'workspace' && edge.to) {
wsNodes.push(edge.to)
}
}
}
const wsDepSet = new Set(wsNodes)
const extraneous = new Set()
for (const node of wsDepSet) {
for (const edge of node.edgesOut.values()) {
const dep = edge.to
if (dep) {
wsDepSet.add(dep)
if (dep.isLink) {
wsDepSet.add(dep.target)
}
}
}
for (const child of node.children.values()) {
if (child.extraneous) {
extraneous.add(child)
}
}
}
for (const extra of extraneous) {
wsDepSet.add(extra)
}
return wsDepSet
}
// returns a set of root dependencies, excluding dependencies that are
// exclusively workspace dependencies
excludeWorkspacesDependencySet (tree) {
const rootDepSet = new Set()
depth({
tree,
visit: node => {
for (const { to } of node.edgesOut.values()) {
if (!to || to.isWorkspace) {
continue
}
for (const edgeIn of to.edgesIn.values()) {
if (edgeIn.from.isRoot || rootDepSet.has(edgeIn.from)) {
rootDepSet.add(to)
}
}
}
return node
},
filter: node => node,
getChildren: (node, tree) =>
[...tree.edgesOut.values()].map(edge => edge.to),
})
return rootDepSet
}
}
module.exports = Arborist

453
spa/node_modules/@npmcli/arborist/lib/arborist/isolated-reifier.js generated vendored Normal file

@@ -0,0 +1,453 @@
const _makeIdealGraph = Symbol('makeIdealGraph')
const _createIsolatedTree = Symbol.for('createIsolatedTree')
const _createBundledTree = Symbol('createBundledTree')
const fs = require('fs')
const pacote = require('pacote')
const { join } = require('path')
const { depth } = require('treeverse')
const crypto = require('crypto')
// cache complicated function results
const memoize = (fn) => {
const memo = new Map()
return async function (arg) {
const key = arg
if (memo.has(key)) {
return memo.get(key)
}
const result = {}
memo.set(key, result)
await fn(result, arg)
return result
}
}
module.exports = cls => class IsolatedReifier extends cls {
/**
* Create an ideal graph.
*
* An implementation of npm RFC-0042
* https://github.com/npm/rfcs/blob/main/accepted/0042-isolated-mode.md
*
* This entire file should be considered technical debt that will be resolved
* with an Arborist refactor or rewrite. Embedded logic in Nodes and Links,
* and the incremental state of building trees and reifying contains too many
* assumptions to do a linked mode properly.
*
* Instead, this approach takes a tree built from build-ideal-tree, and
* returns a new tree-like structure without the embedded logic of Node and
* Link classes.
*
* Since the RFC requires leaving the package-lock in place, this approach
* temporarily replaces the tree state for a couple of steps of reifying.
*
**/
async [_makeIdealGraph] (options) {
/* Make sure that the ideal tree is built, as the rest of
 * the algorithm depends on it.
*/
const bitOpt = {
...options,
complete: false,
}
await this.buildIdealTree(bitOpt)
const idealTree = this.idealTree
this.rootNode = {}
const root = this.rootNode
this.counter = 0
// memoize to cache generating proxy Nodes
this.externalProxyMemo = memoize(this.externalProxy.bind(this))
this.workspaceProxyMemo = memoize(this.workspaceProxy.bind(this))
root.external = []
root.isProjectRoot = true
root.localLocation = idealTree.location
root.localPath = idealTree.path
root.workspaces = await Promise.all(
Array.from(idealTree.fsChildren.values(), this.workspaceProxyMemo))
const processed = new Set()
const queue = [idealTree, ...idealTree.fsChildren]
while (queue.length !== 0) {
const next = queue.pop()
if (processed.has(next.location)) {
continue
}
processed.add(next.location)
next.edgesOut.forEach(e => {
if (!e.to || (next.package.bundleDependencies || next.package.bundledDependencies || []).includes(e.to.name)) {
return
}
queue.push(e.to)
})
if (!next.isProjectRoot && !next.isWorkspace) {
root.external.push(await this.externalProxyMemo(next))
}
}
await this.assignCommonProperties(idealTree, root)
this.idealGraph = root
}
async workspaceProxy (result, node) {
result.localLocation = node.location
result.localPath = node.path
result.isWorkspace = true
result.resolved = node.resolved
await this.assignCommonProperties(node, result)
}
async externalProxy (result, node) {
await this.assignCommonProperties(node, result)
if (node.hasShrinkwrap) {
const dir = join(
node.root.path,
'node_modules',
'.store',
`${node.name}@${node.version}`
)
fs.mkdirSync(dir, { recursive: true })
// TODO this approach feels wrong
// and shouldn't be necessary for shrinkwraps
await pacote.extract(node.resolved, dir, {
...this.options,
resolved: node.resolved,
integrity: node.integrity,
})
const Arborist = this.constructor
const arb = new Arborist({ ...this.options, path: dir })
await arb[_makeIdealGraph]({ dev: false })
this.rootNode.external.push(...arb.idealGraph.external)
arb.idealGraph.external.forEach(e => {
e.root = this.rootNode
e.id = `${node.id}=>${e.id}`
})
result.localDependencies = []
result.externalDependencies = arb.idealGraph.externalDependencies
result.externalOptionalDependencies = arb.idealGraph.externalOptionalDependencies
result.dependencies = [
...result.externalDependencies,
...result.localDependencies,
...result.externalOptionalDependencies,
]
}
result.optional = node.optional
result.resolved = node.resolved
result.version = node.version
}
async assignCommonProperties (node, result) {
function validEdgesOut (node) {
return [...node.edgesOut.values()].filter(e => e.to && e.to.target && !(node.package.bundledDependencies || node.package.bundleDependencies || []).includes(e.to.name))
}
const edges = validEdgesOut(node)
const optionalDeps = edges.filter(e => e.optional).map(e => e.to.target)
const nonOptionalDeps = edges.filter(e => !e.optional).map(e => e.to.target)
result.localDependencies = await Promise.all(nonOptionalDeps.filter(n => n.isWorkspace).map(this.workspaceProxyMemo))
result.externalDependencies = await Promise.all(nonOptionalDeps.filter(n => !n.isWorkspace).map(this.externalProxyMemo))
result.externalOptionalDependencies = await Promise.all(optionalDeps.map(this.externalProxyMemo))
result.dependencies = [
...result.externalDependencies,
...result.localDependencies,
...result.externalOptionalDependencies,
]
result.root = this.rootNode
result.id = this.counter++
result.name = node.name
result.package = { ...node.package }
result.package.bundleDependencies = undefined
result.hasInstallScript = node.hasInstallScript
}
async [_createBundledTree] () {
// TODO: make sure that idealTree object exists
const idealTree = this.idealTree
// TODO: test workspaces having bundled deps
const queue = []
for (const [, edge] of idealTree.edgesOut) {
if (edge.to && (idealTree.package.bundleDependencies || idealTree.package.bundledDependencies || []).includes(edge.to.name)) {
queue.push({ from: idealTree, to: edge.to })
}
}
for (const child of idealTree.fsChildren) {
for (const [, edge] of child.edgesOut) {
if (edge.to && (child.package.bundleDependencies || child.package.bundledDependencies || []).includes(edge.to.name)) {
queue.push({ from: child, to: edge.to })
}
}
}
const processed = new Set()
const nodes = new Map()
const edges = []
while (queue.length !== 0) {
const nextEdge = queue.pop()
const key = `${nextEdge.from.location}=>${nextEdge.to.location}`
// should be impossible, unless bundled is duped
/* istanbul ignore next */
if (processed.has(key)) {
continue
}
processed.add(key)
const from = nextEdge.from
if (!from.isRoot && !from.isWorkspace) {
nodes.set(from.location, { location: from.location, resolved: from.resolved, name: from.name, optional: from.optional, pkg: { ...from.package, bundleDependencies: undefined } })
}
const to = nextEdge.to
nodes.set(to.location, { location: to.location, resolved: to.resolved, name: to.name, optional: to.optional, pkg: { ...to.package, bundleDependencies: undefined } })
edges.push({ from: from.isRoot ? 'root' : from.location, to: to.location })
to.edgesOut.forEach(e => {
// an edge out should always have a to
/* istanbul ignore else */
if (e.to) {
queue.push({ from: e.from, to: e.to })
}
})
}
return { edges, nodes }
}
async [_createIsolatedTree] (idealTree) {
await this[_makeIdealGraph](this.options)
const proxiedIdealTree = this.idealGraph
const bundledTree = await this[_createBundledTree]()
const treeHash = (startNode) => {
// generate short hash based on the dependency tree
// starting at this node
const deps = []
const branch = []
depth({
tree: startNode,
getChildren: node => node.dependencies,
filter: node => node,
visit: node => {
branch.push(`${node.name}@${node.version}`)
deps.push(`${branch.join('->')}::${node.resolved}`)
},
leave: () => {
branch.pop()
},
})
deps.sort()
return crypto.createHash('shake256', { outputLength: 16 })
.update(deps.join(','))
.digest('base64')
// Node v14 doesn't support base64url
.replace(/\+/g, '-')
.replace(/\//g, '_')
.replace(/=+$/m, '')
}
const getKey = (idealTreeNode) => {
return `${idealTreeNode.name}@${idealTreeNode.version}-${treeHash(idealTreeNode)}`
}
const root = {
fsChildren: [],
integrity: null,
inventory: new Map(),
isLink: false,
isRoot: true,
binPaths: [],
edgesIn: new Set(),
edgesOut: new Map(),
hasShrinkwrap: false,
parent: null,
// TODO: we should probably not reference this.idealTree
resolved: this.idealTree.resolved,
isTop: true,
path: proxiedIdealTree.root.localPath,
realpath: proxiedIdealTree.root.localPath,
package: proxiedIdealTree.root.package,
meta: { loadedFromDisk: false },
global: false,
isProjectRoot: true,
children: [],
}
// root.inventory.set('', t)
// root.meta = this.idealTree.meta
// TODO: we should mock the inventory object better, because it is used by audit-report.js ... maybe
root.inventory.query = () => {
return []
}
const processed = new Set()
proxiedIdealTree.workspaces.forEach(c => {
const workspace = {
edgesIn: new Set(),
edgesOut: new Map(),
children: [],
hasInstallScript: c.hasInstallScript,
binPaths: [],
package: c.package,
location: c.localLocation,
path: c.localPath,
realpath: c.localPath,
resolved: c.resolved,
}
root.fsChildren.push(workspace)
root.inventory.set(workspace.location, workspace)
})
const generateChild = (node, location, pkg, inStore) => {
const newChild = {
global: false,
globalTop: false,
isProjectRoot: false,
isTop: false,
location,
name: node.name,
optional: node.optional,
top: { path: proxiedIdealTree.root.localPath },
children: [],
edgesIn: new Set(),
edgesOut: new Map(),
binPaths: [],
fsChildren: [],
/* istanbul ignore next -- emulate Node */
getBundler () {
return null
},
hasShrinkwrap: false,
inDepBundle: false,
integrity: null,
isLink: false,
isRoot: false,
isInStore: inStore,
path: join(proxiedIdealTree.root.localPath, location),
realpath: join(proxiedIdealTree.root.localPath, location),
resolved: node.resolved,
version: pkg.version,
package: pkg,
}
newChild.target = newChild
root.children.push(newChild)
root.inventory.set(newChild.location, newChild)
}
proxiedIdealTree.external.forEach(c => {
const key = getKey(c)
if (processed.has(key)) {
return
}
processed.add(key)
const location = join('node_modules', '.store', key, 'node_modules', c.name)
generateChild(c, location, c.package, true)
})
bundledTree.nodes.forEach(node => {
generateChild(node, node.location, node.pkg, false)
})
bundledTree.edges.forEach(e => {
const from = e.from === 'root' ? root : root.inventory.get(e.from)
const to = root.inventory.get(e.to)
// Maybe optional should be propagated from the original edge
const edge = { optional: false, from, to }
from.edgesOut.set(to.name, edge)
to.edgesIn.add(edge)
})
const memo = new Set()
function processEdges (node, externalEdge) {
externalEdge = !!externalEdge
const key = getKey(node)
if (memo.has(key)) {
return
}
memo.add(key)
let from, nmFolder
if (externalEdge) {
const fromLocation = join('node_modules', '.store', key, 'node_modules', node.name)
from = root.children.find(c => c.location === fromLocation)
nmFolder = join('node_modules', '.store', key, 'node_modules')
} else {
from = node.isProjectRoot ? root : root.fsChildren.find(c => c.location === node.localLocation)
nmFolder = join(node.localLocation, 'node_modules')
}
const processDeps = (dep, optional, external) => {
optional = !!optional
external = !!external
const location = join(nmFolder, dep.name)
const binNames = dep.package.bin && Object.keys(dep.package.bin) || []
const toKey = getKey(dep)
let target
if (external) {
const toLocation = join('node_modules', '.store', toKey, 'node_modules', dep.name)
target = root.children.find(c => c.location === toLocation)
} else {
target = root.fsChildren.find(c => c.location === dep.localLocation)
}
// TODO: we should no-op if an edge has already been created with the same fromKey and toKey
binNames.forEach(bn => {
target.binPaths.push(join(from.realpath, 'node_modules', '.bin', bn))
})
const link = {
global: false,
globalTop: false,
isProjectRoot: false,
edgesIn: new Set(),
edgesOut: new Map(),
binPaths: [],
isTop: false,
optional,
location: location,
path: join(dep.root.localPath, nmFolder, dep.name),
realpath: target.path,
name: toKey,
resolved: dep.resolved,
top: { path: dep.root.localPath },
children: [],
fsChildren: [],
isLink: true,
isStoreLink: true,
isRoot: false,
package: { _id: 'abc', bundleDependencies: undefined, deprecated: undefined, bin: target.package.bin, scripts: dep.package.scripts },
target,
}
const newEdge1 = { optional, from, to: link }
from.edgesOut.set(dep.name, newEdge1)
link.edgesIn.add(newEdge1)
const newEdge2 = { optional: false, from: link, to: target }
link.edgesOut.set(dep.name, newEdge2)
target.edgesIn.add(newEdge2)
root.children.push(link)
}
for (const dep of node.localDependencies) {
processEdges(dep, false)
// nonOptional, local
processDeps(dep, false, false)
}
for (const dep of node.externalDependencies) {
processEdges(dep, true)
// nonOptional, external
processDeps(dep, false, true)
}
for (const dep of node.externalOptionalDependencies) {
processEdges(dep, true)
// optional, external
processDeps(dep, true, true)
}
}
processEdges(proxiedIdealTree, false)
for (const node of proxiedIdealTree.workspaces) {
processEdges(node, false)
}
root.children.forEach(c => c.parent = root)
root.children.forEach(c => c.root = root)
root.root = root
root.target = root
return root
}
}
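
A minimal sketch of the isolated store layout the code above emulates. The getKey helper below is a hypothetical stand-in for the real key derivation, which is defined earlier in this file and not shown in this hunk; the path shape itself is taken from the location strings built above.

const { join } = require('path')
// hypothetical stand-in for the real getKey(); assume name@version is unique
const getKey = (node) => `${node.name}@${node.version}`
// where the files of an external dep live in the isolated store
const storeHome = (node) =>
  join('node_modules', '.store', getKey(node), 'node_modules', node.name)
console.log(storeHome({ name: 'abbrev', version: '1.1.1' }))
// prints node_modules/.store/abbrev@1.1.1/node_modules/abbrev (POSIX separators)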


@@ -0,0 +1,440 @@
// mix-in implementing the loadActual method
const { relative, dirname, resolve, join, normalize } = require('path')
const rpj = require('read-package-json-fast')
const { readdirScoped } = require('@npmcli/fs')
const { walkUp } = require('walk-up-path')
const ancestorPath = require('common-ancestor-path')
const treeCheck = require('../tree-check.js')
const Shrinkwrap = require('../shrinkwrap.js')
const calcDepFlags = require('../calc-dep-flags.js')
const Node = require('../node.js')
const Link = require('../link.js')
const realpath = require('../realpath.js')
// public symbols
const _changePath = Symbol.for('_changePath')
const _global = Symbol.for('global')
const _setWorkspaces = Symbol.for('setWorkspaces')
const _rpcache = Symbol.for('realpathCache')
const _stcache = Symbol.for('statCache')
module.exports = cls => class ActualLoader extends cls {
#actualTree
// ensure when walking the tree that we don't call #loadFSTree on the
// same actual node more than one time.
#actualTreeLoaded = new Set()
#actualTreePromise
// cache of nodes when loading the actualTree, so that we avoid loading
// the same node multiple times when symlinks attack.
#cache = new Map()
#filter
// cache of link targets for setting fsParent links
// We don't do fsParent as a magic getter/setter, because it'd be too costly
// to keep up to date along the walk.
// And, we know that it can ONLY be relevant when the node is a target of a
// link, otherwise it'd be in a node_modules folder, so take advantage of
// that to limit the scans later.
#topNodes = new Set()
#transplantFilter
constructor (options) {
super(options)
this[_global] = !!options.global
// the tree of nodes on disk
this.actualTree = options.actualTree
// caches for cached realpath calls
const cwd = process.cwd()
// assume that the cwd is real enough for our purposes
this[_rpcache] = new Map([[cwd, cwd]])
this[_stcache] = new Map()
}
// public method
async loadActual (options = {}) {
// In the past this.actualTree was set to a promise that eventually
// resolved and then overwrote this.actualTree with the resolved value.
// This was a problem because virtually no other code expects
// this.actualTree to be a promise. Instead we only set it once resolved,
// and also return it from the promise so that it is what's returned from
// this function when awaited.
if (this.actualTree) {
return this.actualTree
}
if (!this.#actualTreePromise) {
// allow the user to set options on the ctor as well.
// XXX: deprecate separate method options objects.
options = { ...this.options, ...options }
this.#actualTreePromise = this.#loadActual(options)
.then(tree => {
// reset all deps to extraneous prior to recalc
if (!options.root) {
for (const node of tree.inventory.values()) {
node.extraneous = true
}
}
// only reset root flags if we're not re-rooting,
// otherwise leave as-is
calcDepFlags(tree, !options.root)
this.actualTree = treeCheck(tree)
return this.actualTree
})
}
return this.#actualTreePromise
}
// return the promise so that we don't ever have more than one going at the
// same time. This is so that buildIdealTree can default to the actualTree
// if no shrinkwrap present, but reify() can still call buildIdealTree and
// loadActual in parallel safely.
async #loadActual (options) {
// the realpath call below is mostly here to throw if the root doesn't exist
const {
global = false,
filter = () => true,
root = null,
transplantFilter = () => true,
ignoreMissing = false,
forceActual = false,
} = options
this.#filter = filter
this.#transplantFilter = transplantFilter
if (global) {
const real = await realpath(this.path, this[_rpcache], this[_stcache])
const params = {
path: this.path,
realpath: real,
pkg: {},
global,
loadOverrides: true,
}
if (this.path === real) {
this.#actualTree = this.#newNode(params)
} else {
this.#actualTree = await this.#newLink(params)
}
} else {
// not in global mode, hidden lockfile is allowed, load root pkg too
this.#actualTree = await this.#loadFSNode({
path: this.path,
real: await realpath(this.path, this[_rpcache], this[_stcache]),
loadOverrides: true,
})
this.#actualTree.assertRootOverrides()
// if forceActual is set, don't even try the hidden lockfile
if (!forceActual) {
// Note: hidden lockfile will be rejected if it's not the latest thing
// in the folder, or if any of the entries in the hidden lockfile are
// missing.
const meta = await Shrinkwrap.load({
path: this.#actualTree.path,
hiddenLockfile: true,
resolveOptions: this.options,
})
if (meta.loadedFromDisk) {
this.#actualTree.meta = meta
// have to load on a new Arborist object, so we don't assign
// the virtualTree on this one! Also, the weird reference is because
// we can't easily get a ref to Arborist in this module, without
// creating a circular reference, since this class is a mixin used
// to build up the Arborist class itself.
await new this.constructor({ ...this.options }).loadVirtual({
root: this.#actualTree,
})
await this[_setWorkspaces](this.#actualTree)
this.#transplant(root)
return this.#actualTree
}
}
const meta = await Shrinkwrap.load({
path: this.#actualTree.path,
lockfileVersion: this.options.lockfileVersion,
resolveOptions: this.options,
})
this.#actualTree.meta = meta
}
await this.#loadFSTree(this.#actualTree)
await this[_setWorkspaces](this.#actualTree)
// if there are workspace targets without Link nodes created, load
// the targets, so that we know what they are.
if (this.#actualTree.workspaces && this.#actualTree.workspaces.size) {
const promises = []
for (const path of this.#actualTree.workspaces.values()) {
if (!this.#cache.has(path)) {
// workspace overrides use the root overrides
const p = this.#loadFSNode({ path, root: this.#actualTree, useRootOverrides: true })
.then(node => this.#loadFSTree(node))
promises.push(p)
}
}
await Promise.all(promises)
}
if (!ignoreMissing) {
await this.#findMissingEdges()
}
// try to find a node that is the parent in a fs tree sense, but not a
// node_modules tree sense, of any link targets. this allows us to
// resolve deps that node will find, but a legacy npm view of the
// world would not have noticed.
for (const path of this.#topNodes) {
const node = this.#cache.get(path)
if (node && !node.parent && !node.fsParent) {
for (const p of walkUp(dirname(path))) {
if (this.#cache.has(p)) {
node.fsParent = this.#cache.get(p)
break
}
}
}
}
this.#transplant(root)
if (global) {
// need to depend on the children, or else all of them
// will end up being flagged as extraneous, since the
// global root isn't a "real" project
const tree = this.#actualTree
const actualRoot = tree.isLink ? tree.target : tree
const { dependencies = {} } = actualRoot.package
for (const [name, kid] of actualRoot.children.entries()) {
const def = kid.isLink ? `file:${kid.realpath.replace(/#/g, '%23')}` : '*'
dependencies[name] = dependencies[name] || def
}
actualRoot.package = { ...actualRoot.package, dependencies }
}
return this.#actualTree
}
#transplant (root) {
if (!root || root === this.#actualTree) {
return
}
this.#actualTree[_changePath](root.path)
for (const node of this.#actualTree.children.values()) {
if (!this.#transplantFilter(node)) {
node.root = null
}
}
root.replace(this.#actualTree)
for (const node of this.#actualTree.fsChildren) {
node.root = this.#transplantFilter(node) ? root : null
}
this.#actualTree = root
}
async #loadFSNode ({ path, parent, real, root, loadOverrides, useRootOverrides }) {
if (!real) {
try {
real = await realpath(path, this[_rpcache], this[_stcache])
} catch (error) {
// if realpath fails, just provide a dummy error node
return new Node({
error,
path,
realpath: path,
parent,
root,
loadOverrides,
})
}
}
const cached = this.#cache.get(path)
let node
// dummy nodes (created for missing edges) must be re-loaded; a cached
// real node just gets the parent assigned and is returned as-is
if (cached && !cached.dummy) {
cached.parent = parent
return cached
} else {
const params = {
installLinks: this.installLinks,
legacyPeerDeps: this.legacyPeerDeps,
path,
realpath: real,
parent,
root,
loadOverrides,
}
try {
const pkg = await rpj(join(real, 'package.json'))
params.pkg = pkg
if (useRootOverrides && root.overrides) {
params.overrides = root.overrides.getNodeRule({ name: pkg.name, version: pkg.version })
}
} catch (err) {
params.error = err
}
// soldier on if read-package-json raises an error, passing it to the
// Node which will attach it to its errors array (Link passes it along to
// its target node)
if (normalize(path) === real) {
node = this.#newNode(params)
} else {
node = await this.#newLink(params)
}
}
this.#cache.set(path, node)
return node
}
#newNode (options) {
// check it for an fsParent if it's a tree top. there's a decent chance
// it'll get parented later, making the fsParent scan a no-op, but better
// safe than sorry, since it's cheap.
const { parent, realpath } = options
if (!parent) {
this.#topNodes.add(realpath)
}
return new Node(options)
}
async #newLink (options) {
const { realpath } = options
this.#topNodes.add(realpath)
const target = this.#cache.get(realpath)
const link = new Link({ ...options, target })
if (!target) {
// the Link constructor set its own target in this case
this.#cache.set(realpath, link.target)
// if a link target points at a node outside of the root tree's
// node_modules hierarchy, then load that node as well.
await this.#loadFSTree(link.target)
}
return link
}
async #loadFSTree (node) {
const did = this.#actualTreeLoaded
if (!did.has(node.target.realpath)) {
did.add(node.target.realpath)
await this.#loadFSChildren(node.target)
return Promise.all(
[...node.target.children.entries()]
.filter(([name, kid]) => !did.has(kid.realpath))
.map(([name, kid]) => this.#loadFSTree(kid))
)
}
}
// create child nodes for all the entries in node_modules,
// attaching this node to each of them as their parent
async #loadFSChildren (node) {
const nm = resolve(node.realpath, 'node_modules')
try {
const kids = await readdirScoped(nm).then(paths => paths.map(p => p.replace(/\\/g, '/')))
return Promise.all(
// ignore . dirs and retired scoped package folders
kids.filter(kid => !/^(@[^/]+\/)?\./.test(kid))
.filter(kid => this.#filter(node, kid))
.map(kid => this.#loadFSNode({
parent: node,
path: resolve(nm, kid),
})))
} catch {
// error in the readdir is not fatal, just means no kids
}
}
async #findMissingEdges () {
// try to resolve any missing edges by walking up the directory tree,
// checking for the package in each node_modules folder. stop at the
// root directory.
// The tricky move here is that we load a "dummy" node for the folder
// containing the node_modules folder, so that it can be assigned as
// the fsParent. It's a bad idea to *actually* load that full node,
// because people sometimes develop in ~/projects/node_modules/...
// so we'd end up loading a massive tree with lots of unrelated junk.
const nmContents = new Map()
const tree = this.#actualTree
for (const node of tree.inventory.values()) {
const ancestor = ancestorPath(node.realpath, this.path)
const depPromises = []
for (const [name, edge] of node.edgesOut.entries()) {
const notMissing = !edge.missing &&
!(edge.to && (edge.to.dummy || edge.to.parent !== node))
if (notMissing) {
continue
}
// start the walk from the dirname, because we would have found
// the dep in the loadFSTree step already if it was local.
for (const p of walkUp(dirname(node.realpath))) {
// only walk as far as the nearest ancestor
// this keeps us from going into completely unrelated
// places when a project is just missing something, but
// allows for finding the transitive deps of link targets.
// ie, if it has to go up and back out to get to the path
// from the nearest common ancestor, we've gone too far.
if (ancestor && /^\.\.(?:[\\/]|$)/.test(relative(ancestor, p))) {
break
}
let entries
if (!nmContents.has(p)) {
entries = await readdirScoped(p + '/node_modules')
.catch(() => []).then(paths => paths.map(p => p.replace(/\\/g, '/')))
nmContents.set(p, entries)
} else {
entries = nmContents.get(p)
}
if (!entries.includes(name)) {
continue
}
let d
if (!this.#cache.has(p)) {
d = new Node({ path: p, root: node.root, dummy: true })
this.#cache.set(p, d)
} else {
d = this.#cache.get(p)
}
if (d.dummy) {
// it's a placeholder, so likely would not have loaded this dep,
// unless another dep in the tree also needs it.
const depPath = normalize(`${p}/node_modules/${name}`)
const cached = this.#cache.get(depPath)
if (!cached || cached.dummy) {
depPromises.push(this.#loadFSNode({
path: depPath,
root: node.root,
parent: d,
}).then(node => this.#loadFSTree(node)))
}
}
break
}
}
await Promise.all(depPromises)
}
}
}
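
A hedged usage sketch for this mixin as surfaced on the Arborist class. The option names (filter, forceActual) are taken from the destructuring in #loadActual above; the node properties printed are assumptions about the Node shape, not confirmed by this file.

const Arborist = require('@npmcli/arborist')
const arb = new Arborist({ path: '/path/to/project' })
arb.loadActual({
  // skip descending into any folder named "examples" (filter receives the
  // parent node and the child folder name, per #loadFSChildren above)
  filter: (parent, childName) => childName !== 'examples',
  // ignore the hidden lockfile and read node_modules directly
  forceActual: true,
}).then(tree => {
  for (const node of tree.inventory.values()) {
    console.log(node.location, node.package.version)
  }
})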


@@ -0,0 +1,303 @@
// mixin providing the loadVirtual method
const mapWorkspaces = require('@npmcli/map-workspaces')
const { resolve } = require('path')
const nameFromFolder = require('@npmcli/name-from-folder')
const consistentResolve = require('../consistent-resolve.js')
const Shrinkwrap = require('../shrinkwrap.js')
const Node = require('../node.js')
const Link = require('../link.js')
const relpath = require('../relpath.js')
const calcDepFlags = require('../calc-dep-flags.js')
const rpj = require('read-package-json-fast')
const treeCheck = require('../tree-check.js')
const flagsSuspect = Symbol.for('flagsSuspect')
const setWorkspaces = Symbol.for('setWorkspaces')
module.exports = cls => class VirtualLoader extends cls {
#rootOptionProvided
constructor (options) {
super(options)
// the virtual tree we load from a shrinkwrap
this.virtualTree = options.virtualTree
this[flagsSuspect] = false
}
// public method
async loadVirtual (options = {}) {
if (this.virtualTree) {
return this.virtualTree
}
// allow the user to set reify options on the ctor as well.
// XXX: deprecate separate reify() options object.
options = { ...this.options, ...options }
if (options.root && options.root.meta) {
await this.#loadFromShrinkwrap(options.root.meta, options.root)
return treeCheck(this.virtualTree)
}
const s = await Shrinkwrap.load({
path: this.path,
lockfileVersion: this.options.lockfileVersion,
resolveOptions: this.options,
})
if (!s.loadedFromDisk && !options.root) {
const er = new Error('loadVirtual requires existing shrinkwrap file')
throw Object.assign(er, { code: 'ENOLOCK' })
}
// when building the ideal tree, we pass in a root node to this function
// otherwise, load it from the root package json or the lockfile
const {
root = await this.#loadRoot(s),
} = options
this.#rootOptionProvided = options.root
await this.#loadFromShrinkwrap(s, root)
root.assertRootOverrides()
return treeCheck(this.virtualTree)
}
async #loadRoot (s) {
const pj = this.path + '/package.json'
const pkg = await rpj(pj).catch(() => s.data.packages['']) || {}
return this[setWorkspaces](this.#loadNode('', pkg, true))
}
async #loadFromShrinkwrap (s, root) {
if (!this.#rootOptionProvided) {
// root is never any of these things, but might be a brand new
// baby Node object that never had its dep flags calculated.
root.extraneous = false
root.dev = false
root.optional = false
root.devOptional = false
root.peer = false
} else {
this[flagsSuspect] = true
}
this.#checkRootEdges(s, root)
root.meta = s
this.virtualTree = root
const { links, nodes } = this.#resolveNodes(s, root)
await this.#resolveLinks(links, nodes)
if (!(s.originalLockfileVersion >= 2)) {
this.#assignBundles(nodes)
}
if (this[flagsSuspect]) {
// reset all dep flags
// can't use inventory here, because virtualTree might not be root
for (const node of nodes.values()) {
if (node.isRoot || node === this.#rootOptionProvided) {
continue
}
node.extraneous = true
node.dev = true
node.optional = true
node.devOptional = true
node.peer = true
}
calcDepFlags(this.virtualTree, !this.#rootOptionProvided)
}
return root
}
// check the lockfile deps, and see if they match. if they do not
// then we have to reset dep flags at the end. for example, if the
// user manually edits their package.json file, then we need to know
// that the idealTree is no longer entirely trustworthy.
#checkRootEdges (s, root) {
// loaded virtually from tree, no chance of being out of sync
// ancient lockfiles are critically damaged by this process,
// so we need to just hope for the best in those cases.
if (!s.loadedFromDisk || s.ancientLockfile) {
return
}
const lock = s.get('')
const prod = lock.dependencies || {}
const dev = lock.devDependencies || {}
const optional = lock.optionalDependencies || {}
const peer = lock.peerDependencies || {}
const peerOptional = {}
if (lock.peerDependenciesMeta) {
for (const [name, meta] of Object.entries(lock.peerDependenciesMeta)) {
if (meta.optional && peer[name] !== undefined) {
peerOptional[name] = peer[name]
delete peer[name]
}
}
}
for (const name of Object.keys(optional)) {
delete prod[name]
}
const lockWS = {}
const workspaces = mapWorkspaces.virtual({
cwd: this.path,
lockfile: s.data,
})
for (const [name, path] of workspaces.entries()) {
lockWS[name] = `file:${path.replace(/#/g, '%23')}`
}
// Should rootNames exclude optional?
const rootNames = new Set(root.edgesOut.keys())
const lockByType = ({ dev, optional, peer, peerOptional, prod, workspace: lockWS })
// Find anything in shrinkwrap deps that doesn't match root's type or spec
for (const type in lockByType) {
const deps = lockByType[type]
for (const name in deps) {
const edge = root.edgesOut.get(name)
if (!edge || edge.type !== type || edge.spec !== deps[name]) {
return this[flagsSuspect] = true
}
rootNames.delete(name)
}
}
// Something was in root that's not accounted for in shrinkwrap
if (rootNames.size) {
return this[flagsSuspect] = true
}
}
// separate out link metadatas, and create Node objects for nodes
#resolveNodes (s, root) {
const links = new Map()
const nodes = new Map([['', root]])
for (const [location, meta] of Object.entries(s.data.packages)) {
// skip the root because we already got it
if (!location) {
continue
}
if (meta.link) {
links.set(location, meta)
} else {
nodes.set(location, this.#loadNode(location, meta))
}
}
return { links, nodes }
}
// links is the set of metadata, and nodes is the map of non-Link nodes
// Set the targets to nodes in the set, if we have them (we might not)
async #resolveLinks (links, nodes) {
for (const [location, meta] of links.entries()) {
const targetPath = resolve(this.path, meta.resolved)
const targetLoc = relpath(this.path, targetPath)
const target = nodes.get(targetLoc)
const link = this.#loadLink(location, targetLoc, target, meta)
nodes.set(location, link)
nodes.set(targetLoc, link.target)
// we always need to read the package.json for link targets
// outside node_modules because they can be changed by the local user
if (!link.target.parent) {
const pj = link.realpath + '/package.json'
const pkg = await rpj(pj).catch(() => null)
if (pkg) {
link.target.package = pkg
}
}
}
}
#assignBundles (nodes) {
for (const [location, node] of nodes) {
// Skip assignment of parentage for the root package
if (!location || node.isLink && !node.target.location) {
continue
}
const { name, parent, package: { inBundle } } = node
if (!parent) {
continue
}
// read inBundle from package because 'package' here is
// actually a v2 lockfile metadata entry.
// If the *parent* is also bundled, though, or if the parent has
// no dependency on it, then we assume that it's being pulled in
// just by virtue of its parent or a transitive dep being bundled.
const { package: ppkg } = parent
const { inBundle: parentBundled } = ppkg
if (inBundle && !parentBundled && parent.edgesOut.has(node.name)) {
if (!ppkg.bundleDependencies) {
ppkg.bundleDependencies = [name]
} else {
ppkg.bundleDependencies.push(name)
}
}
}
}
#loadNode (location, sw, loadOverrides) {
const p = this.virtualTree ? this.virtualTree.realpath : this.path
const path = resolve(p, location)
// shrinkwrap doesn't include package name unless necessary
if (!sw.name) {
sw.name = nameFromFolder(path)
}
const dev = sw.dev
const optional = sw.optional
const devOptional = dev || optional || sw.devOptional
const peer = sw.peer
const node = new Node({
installLinks: this.installLinks,
legacyPeerDeps: this.legacyPeerDeps,
root: this.virtualTree,
path,
realpath: path,
integrity: sw.integrity,
resolved: consistentResolve(sw.resolved, this.path, path),
pkg: sw,
hasShrinkwrap: sw.hasShrinkwrap,
dev,
optional,
devOptional,
peer,
loadOverrides,
})
// cast to boolean because they're undefined in the lock file when false
node.extraneous = !!sw.extraneous
node.devOptional = !!(sw.devOptional || sw.dev || sw.optional)
node.peer = !!sw.peer
node.optional = !!sw.optional
node.dev = !!sw.dev
return node
}
#loadLink (location, targetLoc, target, meta) {
const path = resolve(this.path, location)
const link = new Link({
installLinks: this.installLinks,
legacyPeerDeps: this.legacyPeerDeps,
path,
realpath: resolve(this.path, targetLoc),
target,
pkg: target && target.package,
})
link.extraneous = target.extraneous
link.devOptional = target.devOptional
link.peer = target.peer
link.optional = target.optional
link.dev = target.dev
return link
}
}
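
A sketch of calling loadVirtual and falling back when no lockfile exists; the ENOLOCK code is the one thrown above, everything else is illustrative.

const Arborist = require('@npmcli/arborist')
const loadTree = async (path) => {
  const arb = new Arborist({ path })
  try {
    // throws ENOLOCK when neither package-lock.json nor
    // npm-shrinkwrap.json is on disk
    return await arb.loadVirtual()
  } catch (er) {
    if (er.code !== 'ENOLOCK') {
      throw er
    }
    // no lockfile; read the actual node_modules tree instead
    return arb.loadActual()
  }
}
loadTree('/path/to/project').then(tree => console.log(tree.children.size))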


@@ -0,0 +1,30 @@
const _idealTreePrune = Symbol.for('idealTreePrune')
const _workspacesEnabled = Symbol.for('workspacesEnabled')
const _addNodeToTrashList = Symbol.for('addNodeToTrashList')
module.exports = cls => class Pruner extends cls {
async prune (options = {}) {
// allow the user to set options on the ctor as well.
// XXX: deprecate separate method options objects.
options = { ...this.options, ...options }
await this.buildIdealTree(options)
this[_idealTreePrune]()
if (!this[_workspacesEnabled]) {
const excludeNodes = this.excludeWorkspacesDependencySet(this.idealTree)
for (const node of this.idealTree.inventory.values()) {
if (
node.parent !== null
&& !node.isProjectRoot
&& !excludeNodes.has(node)
) {
this[_addNodeToTrashList](node)
}
}
}
return this.reify(options)
}
}
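
Pruning is just buildIdealTree plus trash-listing anything extraneous, followed by a reify. A minimal invocation might look like this; the omit option is a general Arborist option assumed here rather than shown in this file, and the resolved value is whatever reify resolves to.

const Arborist = require('@npmcli/arborist')
new Arborist({ path: '/path/to/project' })
  .prune({ omit: ['dev'] })
  .then(tree => console.log('pruned down to', tree.inventory.size, 'nodes'))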


@@ -0,0 +1,433 @@
// Arborist.rebuild({path = this.path}) will do all the binlinks and
// bundle building needed. Called by reify, and by `npm rebuild`.
const localeCompare = require('@isaacs/string-locale-compare')('en')
const { depth: dfwalk } = require('treeverse')
const promiseAllRejectLate = require('promise-all-reject-late')
const rpj = require('read-package-json-fast')
const binLinks = require('bin-links')
const runScript = require('@npmcli/run-script')
const promiseCallLimit = require('promise-call-limit')
const { resolve } = require('path')
const {
isNodeGypPackage,
defaultGypInstallScript,
} = require('@npmcli/node-gyp')
const log = require('proc-log')
const boolEnv = b => b ? '1' : ''
const sortNodes = (a, b) =>
(a.depth - b.depth) || localeCompare(a.path, b.path)
const _workspaces = Symbol.for('workspaces')
const _build = Symbol('build')
const _loadDefaultNodes = Symbol('loadDefaultNodes')
const _retrieveNodesByType = Symbol('retrieveNodesByType')
const _resetQueues = Symbol('resetQueues')
const _rebuildBundle = Symbol('rebuildBundle')
const _ignoreScripts = Symbol('ignoreScripts')
const _binLinks = Symbol('binLinks')
const _oldMeta = Symbol('oldMeta')
const _createBinLinks = Symbol('createBinLinks')
const _doHandleOptionalFailure = Symbol('doHandleOptionalFailure')
const _linkAllBins = Symbol('linkAllBins')
const _runScripts = Symbol('runScripts')
const _buildQueues = Symbol('buildQueues')
const _addToBuildSet = Symbol('addToBuildSet')
const _checkBins = Symbol.for('checkBins')
const _queues = Symbol('queues')
const _scriptShell = Symbol('scriptShell')
const _includeWorkspaceRoot = Symbol.for('includeWorkspaceRoot')
const _workspacesEnabled = Symbol.for('workspacesEnabled')
const _force = Symbol.for('force')
const _global = Symbol.for('global')
// defined by reify mixin
const _handleOptionalFailure = Symbol.for('handleOptionalFailure')
const _trashList = Symbol.for('trashList')
module.exports = cls => class Builder extends cls {
constructor (options) {
super(options)
const {
ignoreScripts = false,
scriptShell,
binLinks = true,
rebuildBundle = true,
} = options
this.scriptsRun = new Set()
this[_binLinks] = binLinks
this[_ignoreScripts] = !!ignoreScripts
this[_scriptShell] = scriptShell
this[_rebuildBundle] = !!rebuildBundle
this[_resetQueues]()
this[_oldMeta] = null
}
async rebuild ({ nodes, handleOptionalFailure = false } = {}) {
// nothing to do if we're not building anything!
if (this[_ignoreScripts] && !this[_binLinks]) {
return
}
// when building for the first time, as part of reify, we ignore
// failures in optional nodes, and just delete them. however, when
// running JUST a rebuild, we treat optional failures as real fails
this[_doHandleOptionalFailure] = handleOptionalFailure
if (!nodes) {
nodes = await this[_loadDefaultNodes]()
}
// separate out link nodes so that we can run
// prepare scripts and link bins in the expected order
process.emit('time', 'build')
const {
depNodes,
linkNodes,
} = this[_retrieveNodesByType](nodes)
// build regular deps
await this[_build](depNodes, {})
// build link deps
if (linkNodes.size) {
this[_resetQueues]()
await this[_build](linkNodes, { type: 'links' })
}
process.emit('timeEnd', 'build')
}
// if we don't have a set of nodes, then just rebuild
// the actual tree on disk.
async [_loadDefaultNodes] () {
let nodes
const tree = await this.loadActual()
let filterSet
if (!this[_workspacesEnabled]) {
filterSet = this.excludeWorkspacesDependencySet(tree)
nodes = tree.inventory.filter(node =>
filterSet.has(node) || node.isProjectRoot
)
} else if (this[_workspaces] && this[_workspaces].length) {
filterSet = this.workspaceDependencySet(
tree,
this[_workspaces],
this[_includeWorkspaceRoot]
)
nodes = tree.inventory.filter(node => filterSet.has(node))
} else {
nodes = tree.inventory.values()
}
return nodes
}
[_retrieveNodesByType] (nodes) {
const depNodes = new Set()
const linkNodes = new Set()
const storeNodes = new Set()
for (const node of nodes) {
if (node.isStoreLink) {
storeNodes.add(node)
} else if (node.isLink) {
linkNodes.add(node)
} else {
depNodes.add(node)
}
}
// Make sure that store linked nodes are processed last.
// We can't process store links separately or else lifecycle scripts on
// standard nodes might not have bin links yet.
for (const node of storeNodes) {
depNodes.add(node)
}
// deduplicates link nodes and their targets, avoids
// calling lifecycle scripts twice when running `npm rebuild`
// ref: https://github.com/npm/cli/issues/2905
//
// we avoid doing so if global=true since `bin-links` relies
// on having the target nodes available in global mode.
if (!this[_global]) {
for (const node of linkNodes) {
depNodes.delete(node.target)
}
}
return {
depNodes,
linkNodes,
}
}
[_resetQueues] () {
this[_queues] = {
preinstall: [],
install: [],
postinstall: [],
prepare: [],
bin: [],
}
}
async [_build] (nodes, { type = 'deps' }) {
process.emit('time', `build:${type}`)
await this[_buildQueues](nodes)
if (!this[_ignoreScripts]) {
await this[_runScripts]('preinstall')
}
// links should run prepare scripts and only link bins after that
if (type === 'links') {
await this[_runScripts]('prepare')
}
if (this[_binLinks]) {
await this[_linkAllBins]()
}
if (!this[_ignoreScripts]) {
await this[_runScripts]('install')
await this[_runScripts]('postinstall')
}
process.emit('timeEnd', `build:${type}`)
}
async [_buildQueues] (nodes) {
process.emit('time', 'build:queue')
const set = new Set()
const promises = []
for (const node of nodes) {
promises.push(this[_addToBuildSet](node, set))
// if it has bundle deps, add those too, if rebuildBundle
if (this[_rebuildBundle] !== false) {
const bd = node.package.bundleDependencies
if (bd && bd.length) {
dfwalk({
tree: node,
leave: node => promises.push(this[_addToBuildSet](node, set)),
getChildren: node => [...node.children.values()],
filter: node => node.inBundle,
})
}
}
}
await promiseAllRejectLate(promises)
// now sort into the queues for the things we have to do:
// the four lifecycle script events, plus bin linking.
// run in the same predictable order that buildIdealTree uses;
// there's no particular reason for doing it in this order rather
// than another, but sorting *somehow* makes it consistent.
const queue = [...set].sort(sortNodes)
for (const node of queue) {
const { package: { bin, scripts = {} } } = node.target
const { preinstall, install, postinstall, prepare } = scripts
const tests = { bin, preinstall, install, postinstall, prepare }
for (const [key, has] of Object.entries(tests)) {
if (has) {
this[_queues][key].push(node)
}
}
}
process.emit('timeEnd', 'build:queue')
}
async [_checkBins] (node) {
// if the node is a global top, and we're not in force mode, then
// any existing bins need to either be missing, or a symlink into
// the node path. Otherwise a package can have a preinstall script
// that unlinks something, to allow them to silently overwrite system
// binaries, which is unsafe and insecure.
if (!node.globalTop || this[_force]) {
return
}
const { path, package: pkg } = node
await binLinks.checkBins({ pkg, path, top: true, global: true })
}
async [_addToBuildSet] (node, set, refreshed = false) {
if (set.has(node)) {
return
}
if (this[_oldMeta] === null) {
const { root: { meta } } = node
this[_oldMeta] = meta && meta.loadedFromDisk &&
!(meta.originalLockfileVersion >= 2)
}
const { package: pkg, hasInstallScript } = node.target
const { gypfile, bin, scripts = {} } = pkg
const { preinstall, install, postinstall, prepare } = scripts
const anyScript = preinstall || install || postinstall || prepare
if (!refreshed && !anyScript && (hasInstallScript || this[_oldMeta])) {
// we either have an old metadata (and thus might have scripts)
// or we have an indication that there's install scripts (but
// don't yet know what they are) so we have to load the package.json
// from disk to see what the deal is. Failure here just means
// no scripts to add, probably borked package.json.
// add to the set then remove while we're reading the pj, so we
// don't accidentally hit it multiple times.
set.add(node)
const pkg = await rpj(node.path + '/package.json').catch(() => ({}))
set.delete(node)
const { scripts = {} } = pkg
node.package.scripts = scripts
return this[_addToBuildSet](node, set, true)
}
// Rebuild node-gyp dependencies lacking an install or preinstall script
// note that 'scripts' might be missing entirely, and the package may
// set gypfile:false to avoid this automatic detection.
const isGyp = gypfile !== false &&
!install &&
!preinstall &&
await isNodeGypPackage(node.path)
if (bin || preinstall || install || postinstall || prepare || isGyp) {
if (bin) {
await this[_checkBins](node)
}
if (isGyp) {
scripts.install = defaultGypInstallScript
node.package.scripts = scripts
}
set.add(node)
}
}
async [_runScripts] (event) {
const queue = this[_queues][event]
if (!queue.length) {
return
}
process.emit('time', `build:run:${event}`)
const stdio = this.options.foregroundScripts ? 'inherit' : 'pipe'
const limit = this.options.foregroundScripts ? 1 : undefined
await promiseCallLimit(queue.map(node => async () => {
const {
path,
integrity,
resolved,
optional,
peer,
dev,
devOptional,
package: pkg,
location,
isStoreLink,
} = node.target
// skip any that we know we'll be deleting
// or storeLinks
if (this[_trashList].has(path) || isStoreLink) {
return
}
const timer = `build:run:${event}:${location}`
process.emit('time', timer)
log.info('run', pkg._id, event, location, pkg.scripts[event])
const env = {
npm_package_resolved: resolved,
npm_package_integrity: integrity,
npm_package_json: resolve(path, 'package.json'),
npm_package_optional: boolEnv(optional),
npm_package_dev: boolEnv(dev),
npm_package_peer: boolEnv(peer),
npm_package_dev_optional:
boolEnv(devOptional && !dev && !optional),
}
const runOpts = {
event,
path,
pkg,
stdio,
env,
scriptShell: this[_scriptShell],
}
const p = runScript(runOpts).catch(er => {
const { code, signal } = er
log.info('run', pkg._id, event, { code, signal })
throw er
}).then(({ args, code, signal, stdout, stderr }) => {
this.scriptsRun.add({
pkg,
path,
event,
// I do not know why this needs to be on THIS line but refactoring
// this function would be quite a process
// eslint-disable-next-line promise/always-return
cmd: args && args[args.length - 1],
env,
code,
signal,
stdout,
stderr,
})
log.info('run', pkg._id, event, { code, signal })
})
await (this[_doHandleOptionalFailure]
? this[_handleOptionalFailure](node, p)
: p)
process.emit('timeEnd', timer)
}), limit)
process.emit('timeEnd', `build:run:${event}`)
}
async [_linkAllBins] () {
const queue = this[_queues].bin
if (!queue.length) {
return
}
process.emit('time', 'build:link')
const promises = []
// sort the queue by node path, so that the module-local collision
// detector in bin-links will always resolve the same way.
for (const node of queue.sort(sortNodes)) {
promises.push(this[_createBinLinks](node))
}
await promiseAllRejectLate(promises)
process.emit('timeEnd', 'build:link')
}
async [_createBinLinks] (node) {
if (this[_trashList].has(node.path)) {
return
}
process.emit('time', `build:link:${node.location}`)
const p = binLinks({
pkg: node.package,
path: node.path,
top: !!(node.isTop || node.globalTop),
force: this[_force],
global: !!node.globalTop,
})
await (this[_doHandleOptionalFailure]
? this[_handleOptionalFailure](node, p)
: p)
process.emit('timeEnd', `build:link:${node.location}`)
}
}
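
A sketch of driving rebuild directly and inspecting scriptsRun, which [_runScripts] above populates with one entry per executed lifecycle script:

const Arborist = require('@npmcli/arborist')
const arb = new Arborist({ path: '/path/to/project' })
arb.rebuild().then(() => {
  for (const { pkg, event, code, signal } of arb.scriptsRun) {
    console.log(pkg._id, event, 'exited with', code, signal)
  }
})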

1593
spa/node_modules/@npmcli/arborist/lib/arborist/reify.js generated vendored Normal file

File diff suppressed because it is too large


@@ -0,0 +1,19 @@
const mapWorkspaces = require('@npmcli/map-workspaces')
// shared ref used by other mixins/Arborist
const _setWorkspaces = Symbol.for('setWorkspaces')
module.exports = cls => class MapWorkspaces extends cls {
async [_setWorkspaces] (node) {
const workspaces = await mapWorkspaces({
cwd: node.path,
pkg: node.package,
})
if (node && workspaces.size) {
node.workspaces = workspaces
}
return node
}
}
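
For reference, mapWorkspaces resolves the "workspaces" globs in a package.json to a Map of workspace name to absolute folder path. A standalone sketch, with illustrative paths:

const mapWorkspaces = require('@npmcli/map-workspaces')
mapWorkspaces({
  cwd: '/path/to/project',
  pkg: { workspaces: ['packages/*'] },
}).then(map => {
  // e.g. Map { 'pkg-a' => '/path/to/project/packages/a', ... }
  for (const [name, folder] of map) {
    console.log(name, '=>', folder)
  }
})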

414
spa/node_modules/@npmcli/arborist/lib/audit-report.js generated vendored Normal file

@@ -0,0 +1,414 @@
// an object representing the set of vulnerabilities in a tree
/* eslint camelcase: "off" */
const localeCompare = require('@isaacs/string-locale-compare')('en')
const npa = require('npm-package-arg')
const pickManifest = require('npm-pick-manifest')
const Vuln = require('./vuln.js')
const Calculator = require('@npmcli/metavuln-calculator')
const _getReport = Symbol('getReport')
const _fixAvailable = Symbol('fixAvailable')
const _checkTopNode = Symbol('checkTopNode')
const _init = Symbol('init')
const _omit = Symbol('omit')
const log = require('proc-log')
const fetch = require('npm-registry-fetch')
class AuditReport extends Map {
static load (tree, opts) {
return new AuditReport(tree, opts).run()
}
get auditReportVersion () {
return 2
}
toJSON () {
const obj = {
auditReportVersion: this.auditReportVersion,
vulnerabilities: {},
metadata: {
vulnerabilities: {
info: 0,
low: 0,
moderate: 0,
high: 0,
critical: 0,
total: this.size,
},
dependencies: {
prod: 0,
dev: 0,
optional: 0,
peer: 0,
peerOptional: 0,
total: this.tree.inventory.size - 1,
},
},
}
for (const node of this.tree.inventory.values()) {
const { dependencies } = obj.metadata
let prod = true
for (const type of [
'dev',
'optional',
'peer',
'peerOptional',
]) {
if (node[type]) {
dependencies[type]++
prod = false
}
}
if (prod) {
dependencies.prod++
}
}
// if it doesn't have any topVulns, then it's fixable with audit fix
// for each topVuln, figure out if it's fixable with audit fix --force,
// or if we have to just delete the thing, and if the fix --force will
// require a semver major update.
const vulnerabilities = []
for (const [name, vuln] of this.entries()) {
vulnerabilities.push([name, vuln.toJSON()])
obj.metadata.vulnerabilities[vuln.severity]++
}
obj.vulnerabilities = vulnerabilities
.sort(([a], [b]) => localeCompare(a, b))
.reduce((set, [name, vuln]) => {
set[name] = vuln
return set
}, {})
return obj
}
constructor (tree, opts = {}) {
super()
const { omit } = opts
this[_omit] = new Set(omit || [])
this.topVulns = new Map()
this.calculator = new Calculator(opts)
this.error = null
this.options = opts
this.tree = tree
this.filterSet = opts.filterSet
}
async run () {
this.report = await this[_getReport]()
log.silly('audit report', this.report)
if (this.report) {
await this[_init]()
}
return this
}
isVulnerable (node) {
const vuln = this.get(node.packageName)
return !!(vuln && vuln.isVulnerable(node))
}
async [_init] () {
process.emit('time', 'auditReport:init')
const promises = []
for (const [name, advisories] of Object.entries(this.report)) {
for (const advisory of advisories) {
promises.push(this.calculator.calculate(name, advisory))
}
}
// now the advisories are calculated with a set of versions
// and the packument. turn them into our style of vuln objects
// which also have the affected nodes, and also create entries
// for all the metavulns that we find from dependents.
const advisories = new Set(await Promise.all(promises))
const seen = new Set()
for (const advisory of advisories) {
const { name, range } = advisory
const k = `${name}@${range}`
const vuln = this.get(name) || new Vuln({ name, advisory })
if (this.has(name)) {
vuln.addAdvisory(advisory)
}
super.set(name, vuln)
// don't flag the exact same name/range more than once
// adding multiple advisories with the same range is fine, but no
// need to search for nodes we already would have added.
if (!seen.has(k)) {
const p = []
for (const node of this.tree.inventory.query('packageName', name)) {
if (!shouldAudit(node, this[_omit], this.filterSet)) {
continue
}
// if not vulnerable by this advisory, keep searching
if (!advisory.testVersion(node.version)) {
continue
}
// we will have loaded the source already if this is a metavuln
if (advisory.type === 'metavuln') {
vuln.addVia(this.get(advisory.dependency))
}
// already marked this one, no need to do it again
if (vuln.nodes.has(node)) {
continue
}
// haven't marked this one yet. get its dependents.
vuln.nodes.add(node)
for (const { from: dep, spec } of node.edgesIn) {
if (dep.isTop && !vuln.topNodes.has(dep)) {
this[_checkTopNode](dep, vuln, spec)
} else {
// calculate a metavuln, if necessary
const calc = this.calculator.calculate(dep.packageName, advisory)
// eslint-disable-next-line promise/always-return
p.push(calc.then(meta => {
// eslint-disable-next-line promise/always-return
if (meta.testVersion(dep.version, spec)) {
advisories.add(meta)
}
}))
}
}
}
await Promise.all(p)
seen.add(k)
}
// make sure we actually got something. if not, remove it
// this can happen if you are loading from a lockfile created by
// npm v5, since it lists the current version of all deps,
// rather than the range that is actually depended upon,
// or if using --omit with the older audit endpoint.
if (this.get(name).nodes.size === 0) {
this.delete(name)
continue
}
// if the vuln is valid, but THIS advisory doesn't apply to any of
// the nodes it references, then remove it from the advisory list.
// happens when using omit with old audit endpoint.
for (const advisory of vuln.advisories) {
const relevant = [...vuln.nodes]
.some(n => advisory.testVersion(n.version))
if (!relevant) {
vuln.deleteAdvisory(advisory)
}
}
}
process.emit('timeEnd', 'auditReport:init')
}
[_checkTopNode] (topNode, vuln, spec) {
vuln.fixAvailable = this[_fixAvailable](topNode, vuln, spec)
if (vuln.fixAvailable !== true) {
// now we know the top node is vulnerable, and cannot be
// upgraded out of the bad place without --force. But, there's
// no need to add it to the actual vulns list, because nothing
// depends on root.
this.topVulns.set(vuln.name, vuln)
vuln.topNodes.add(topNode)
}
}
// check whether the top node is vulnerable.
// check whether we can get out of the bad place with --force, and if
// so, whether that update is SemVer Major
[_fixAvailable] (topNode, vuln, spec) {
// this will always be set to at least {name, versions:{}}
const paku = vuln.packument
if (!vuln.testSpec(spec)) {
return true
}
// similarly, even if we HAVE a packument, but we're looking for it
// somewhere other than the registry, and we got something vulnerable,
// then we're stuck with it.
const specObj = npa(spec)
if (!specObj.registry) {
return false
}
if (specObj.subSpec) {
spec = specObj.subSpec.rawSpec
}
// We don't provide fixes for top nodes other than root, but we
// still check to see if the node is fixable with a different version,
// and if that is a semver major bump.
try {
const {
_isSemVerMajor: isSemVerMajor,
version,
name,
} = pickManifest(paku, spec, {
...this.options,
before: null,
avoid: vuln.range,
avoidStrict: true,
})
return { name, version, isSemVerMajor }
} catch (er) {
return false
}
}
set () {
throw new Error('do not call AuditReport.set() directly')
}
// convert a quick-audit into a bulk advisory listing
static auditToBulk (report) {
if (!report.advisories) {
// tack on the report json where the response body would go
throw Object.assign(new Error('Invalid advisory report'), {
body: JSON.stringify(report),
})
}
const bulk = {}
const { advisories } = report
for (const advisory of Object.values(advisories)) {
const {
id,
url,
title,
severity = 'high',
vulnerable_versions = '*',
module_name: name,
} = advisory
bulk[name] = bulk[name] || []
bulk[name].push({ id, url, title, severity, vulnerable_versions })
}
return bulk
}
async [_getReport] () {
// if we're not auditing, just return false
if (this.options.audit === false || this.options.offline === true || this.tree.inventory.size === 1) {
return null
}
process.emit('time', 'auditReport:getReport')
try {
try {
// first try the super fast bulk advisory listing
const body = prepareBulkData(this.tree, this[_omit], this.filterSet)
log.silly('audit', 'bulk request', body)
// no sense asking if we don't have anything to audit,
// we know it'll be empty
if (!Object.keys(body).length) {
return null
}
const res = await fetch('/-/npm/v1/security/advisories/bulk', {
...this.options,
registry: this.options.auditRegistry || this.options.registry,
method: 'POST',
gzip: true,
body,
})
return await res.json()
} catch (er) {
log.silly('audit', 'bulk request failed', String(er.body))
// that failed, try the quick audit endpoint
const body = prepareData(this.tree, this.options)
const res = await fetch('/-/npm/v1/security/audits/quick', {
...this.options,
registry: this.options.auditRegistry || this.options.registry,
method: 'POST',
gzip: true,
body,
})
return AuditReport.auditToBulk(await res.json())
}
} catch (er) {
log.verbose('audit error', er)
log.silly('audit error', String(er.body))
this.error = er
return null
} finally {
process.emit('timeEnd', 'auditReport:getReport')
}
}
}
// return true if we should audit this one
const shouldAudit = (node, omit, filterSet) =>
!node.version ? false
: node.isRoot ? false
: filterSet && filterSet.size !== 0 && !filterSet.has(node) ? false
: omit.size === 0 ? true
: !( // otherwise, just ensure we're not omitting this one
node.dev && omit.has('dev') ||
node.optional && omit.has('optional') ||
node.devOptional && omit.has('dev') && omit.has('optional') ||
node.peer && omit.has('peer')
)
const prepareBulkData = (tree, omit, filterSet) => {
const payload = {}
for (const name of tree.inventory.query('packageName')) {
const set = new Set()
for (const node of tree.inventory.query('packageName', name)) {
if (!shouldAudit(node, omit, filterSet)) {
continue
}
set.add(node.version)
}
if (set.size) {
payload[name] = [...set]
}
}
return payload
}
const prepareData = (tree, opts) => {
const { npmVersion: npm_version } = opts
const node_version = process.version
const { platform, arch } = process
const { NODE_ENV: node_env } = process.env
const data = tree.meta.commit()
// the legacy audit endpoint doesn't support any kind of pre-filtering
// we just have to get the advisories and skip over them in the report
return {
name: data.name,
version: data.version,
requires: {
...(tree.package.devDependencies || {}),
...(tree.package.peerDependencies || {}),
...(tree.package.optionalDependencies || {}),
...(tree.package.dependencies || {}),
},
dependencies: data.dependencies,
metadata: {
node_version,
npm_version,
platform,
arch,
node_env,
},
}
}
module.exports = AuditReport
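
Since auditToBulk is a pure static method, the quick-audit to bulk conversion is easy to see in isolation. A sketch, assuming this file is requireable directly and arborist's own dependencies are installed; the advisory values are illustrative, not a real report:

const AuditReport = require('./audit-report.js')
const quick = {
  advisories: {
    118: {
      id: 118,
      url: 'https://npmjs.com/advisories/118',
      title: 'Regular Expression Denial of Service',
      severity: 'moderate',
      vulnerable_versions: '<4.17.3',
      module_name: 'minimatch',
    },
  },
}
console.log(AuditReport.auditToBulk(quick))
// { minimatch: [ { id, url, title, severity, vulnerable_versions } ] }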

119
spa/node_modules/@npmcli/arborist/lib/calc-dep-flags.js generated vendored Normal file

@@ -0,0 +1,119 @@
const { depth } = require('treeverse')
const calcDepFlags = (tree, resetRoot = true) => {
if (resetRoot) {
tree.dev = false
tree.optional = false
tree.devOptional = false
tree.peer = false
}
const ret = depth({
tree,
visit: node => calcDepFlagsStep(node),
filter: node => node,
getChildren: (node, tree) =>
[...tree.edgesOut.values()].map(edge => edge.to),
})
return ret
}
const calcDepFlagsStep = (node) => {
// This rewalk is necessary to handle cases where devDep and optional
// or normal dependency graphs overlap deep in the dep graph.
// Since we're only walking through deps that are not already flagged
// as non-dev/non-optional, it's typically a very shallow traversal
node.extraneous = false
resetParents(node, 'extraneous')
resetParents(node, 'dev')
resetParents(node, 'peer')
resetParents(node, 'devOptional')
resetParents(node, 'optional')
// for links, map their hierarchy appropriately
if (node.isLink) {
node.target.dev = node.dev
node.target.optional = node.optional
node.target.devOptional = node.devOptional
node.target.peer = node.peer
return calcDepFlagsStep(node.target)
}
node.edgesOut.forEach(({ peer, optional, dev, to }) => {
// if the dep is missing, then its flags are already maximally unset
if (!to) {
return
}
// everything with any kind of edge into it is not extraneous
to.extraneous = false
// devOptional is the *overlap* of the dev and optional tree.
// however, for convenience and to save an extra rewalk, we leave
// it set when we are in *either* tree, and then omit it from the
// package-lock if either dev or optional are set.
const unsetDevOpt = !node.devOptional && !node.dev && !node.optional && !dev && !optional
// if we are not in the devOpt tree, then we're also not in
// either the dev or opt trees
const unsetDev = unsetDevOpt || !node.dev && !dev
const unsetOpt = unsetDevOpt || !node.optional && !optional
const unsetPeer = !node.peer && !peer
if (unsetPeer) {
unsetFlag(to, 'peer')
}
if (unsetDevOpt) {
unsetFlag(to, 'devOptional')
}
if (unsetDev) {
unsetFlag(to, 'dev')
}
if (unsetOpt) {
unsetFlag(to, 'optional')
}
})
return node
}
const resetParents = (node, flag) => {
if (node[flag]) {
return
}
for (let p = node; p && (p === node || p[flag]); p = p.resolveParent) {
p[flag] = false
}
}
// typically a short walk, since it only traverses deps that have the flag set.
const unsetFlag = (node, flag) => {
if (node[flag]) {
node[flag] = false
depth({
tree: node,
visit: node => {
node.extraneous = node[flag] = false
if (node.isLink) {
node.target.extraneous = node.target[flag] = false
}
},
getChildren: node => {
const children = []
for (const edge of node.target.edgesOut.values()) {
if (edge.to && edge.to[flag] &&
(flag !== 'peer' && edge.type === 'peer' || edge.type === 'prod')
) {
children.push(edge.to)
}
}
return children
},
})
}
}
module.exports = calcDepFlags
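
The flag semantics can be exercised with plain objects shaped just enough like Nodes (edgesOut, isLink, target, resolveParent). A sketch, assuming it is run from a script next to this file; note that devOptional stays set because it marks membership in either the dev or optional tree:

const calcDepFlags = require('./calc-dep-flags.js')
// one dep, reached from the root only through a dev edge
const dep = {
  dev: true, optional: true, devOptional: true, peer: true,
  extraneous: true, isLink: false, edgesOut: new Map(),
}
dep.target = dep
const root = {
  isLink: false, resolveParent: null,
  edgesOut: new Map([['a', { to: dep, dev: true, optional: false, peer: false }]]),
}
root.target = root
dep.resolveParent = root
calcDepFlags(root)
console.log(dep.dev, dep.optional, dep.devOptional, dep.peer, dep.extraneous)
// true false true false false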

436
spa/node_modules/@npmcli/arborist/lib/can-place-dep.js generated vendored Normal file

@@ -0,0 +1,436 @@
// Internal methods used by buildIdealTree.
// Answer the question: "can I put this dep here?"
//
// IMPORTANT: *nothing* in this class should *ever* modify or mutate the tree
// at all. The contract here is strictly limited to read operations. We call
// this in the process of walking through the ideal tree checking many
// different potential placement targets for a given node. If a change is made
// to the tree along the way, that can cause serious problems!
//
// In order to enforce this restriction, in debug mode, canPlaceDep() will
// snapshot the tree at the start of the process, and then at the end, will
// verify that it still matches the snapshot, and throw an error if any changes
// occurred.
//
// The algorithm is roughly like this:
// - check the node itself:
// - if there is no version present, and no conflicting edges from target,
// OK, provided all peers can be placed at or above the target.
// - if the current version matches, KEEP
// - if there is an older version present, which can be replaced, then
// - if satisfying and preferDedupe? KEEP
// - else: REPLACE
// - if there is a newer version present, and preferDedupe, REPLACE
// - if the version present satisfies the edge, KEEP
// - else: CONFLICT
// - if the node is not in conflict, check each of its peers:
// - if the peer can be placed in the target, continue
// - else if the peer can be placed in a parent, and there is no other
// conflicting version shadowing it, continue
// - else CONFLICT
// - If the peers are not in conflict, return the original node's value
//
// An exception to this logic is that if the target is the deepest location
// that a node can be placed, and the conflicting node can be placed deeper,
// then we will return REPLACE rather than CONFLICT, and Arborist will queue
// the replaced node for resolution elsewhere.
const localeCompare = require('@isaacs/string-locale-compare')('en')
const semver = require('semver')
const debug = require('./debug.js')
const peerEntrySets = require('./peer-entry-sets.js')
const deepestNestingTarget = require('./deepest-nesting-target.js')
const CONFLICT = Symbol('CONFLICT')
const OK = Symbol('OK')
const REPLACE = Symbol('REPLACE')
const KEEP = Symbol('KEEP')
class CanPlaceDep {
// dep is a dep that we're trying to place. it should already live in
// a virtual tree where its peer set is loaded as children of the root.
// target is the actual place where we're trying to place this dep
// in a node_modules folder.
// edge is the edge that we're trying to satisfy with this placement.
// parent is the CanPlaceDep object of the entry node when placing a peer.
constructor (options) {
const {
dep,
target,
edge,
preferDedupe,
parent = null,
peerPath = [],
explicitRequest = false,
} = options
debug(() => {
if (!dep) {
throw new Error('no dep provided to CanPlaceDep')
}
if (!target) {
throw new Error('no target provided to CanPlaceDep')
}
if (!edge) {
throw new Error('no edge provided to CanPlaceDep')
}
this._treeSnapshot = JSON.stringify([...target.root.inventory.entries()]
.map(([loc, { packageName, version, resolved }]) => {
return [loc, packageName, version, resolved]
}).sort(([a], [b]) => localeCompare(a, b)))
})
// the result of whether we can place it or not
this.canPlace = null
// if peers conflict, but this one doesn't, then that is useful info
this.canPlaceSelf = null
this.dep = dep
this.target = target
this.edge = edge
this.explicitRequest = explicitRequest
// preventing cycles when we check peer sets
this.peerPath = peerPath
// we always prefer to dedupe peers, because they are trying
// a bit harder to be singletons.
this.preferDedupe = !!preferDedupe || edge.peer
this.parent = parent
this.children = []
this.isSource = target === this.peerSetSource
this.name = edge.name
this.current = target.children.get(this.name)
this.targetEdge = target.edgesOut.get(this.name)
this.conflicts = new Map()
// check if this dep was already subject to a peerDep override while
// building the peerSet.
this.edgeOverride = !dep.satisfies(edge)
this.canPlace = this.checkCanPlace()
if (!this.canPlaceSelf) {
this.canPlaceSelf = this.canPlace
}
debug(() => {
const treeSnapshot = JSON.stringify([...target.root.inventory.entries()]
.map(([loc, { packageName, version, resolved }]) => {
return [loc, packageName, version, resolved]
}).sort(([a], [b]) => localeCompare(a, b)))
/* istanbul ignore if */
if (this._treeSnapshot !== treeSnapshot) {
throw Object.assign(new Error('tree changed in CanPlaceDep'), {
expect: this._treeSnapshot,
actual: treeSnapshot,
})
}
})
}
checkCanPlace () {
const { target, targetEdge, current, dep } = this
// if the dep failed to load, we're going to fail the build or
// prune it out anyway, so just move forward placing/replacing it.
if (dep.errors.length) {
return current ? REPLACE : OK
}
// cannot place peers inside their dependents, except for tops
if (targetEdge && targetEdge.peer && !target.isTop) {
return CONFLICT
}
// skip this test if there's a current node, because we might be able
// to dedupe against it anyway
if (!current &&
targetEdge &&
!dep.satisfies(targetEdge) &&
targetEdge !== this.edge) {
return CONFLICT
}
return current ? this.checkCanPlaceCurrent() : this.checkCanPlaceNoCurrent()
}
// we know that the target has a dep by this name in its node_modules
// already. Can return KEEP, REPLACE, or CONFLICT.
checkCanPlaceCurrent () {
const { preferDedupe, explicitRequest, current, target, edge, dep } = this
if (dep.matches(current)) {
if (current.satisfies(edge) || this.edgeOverride) {
return explicitRequest ? REPLACE : KEEP
}
}
const { version: curVer } = current
const { version: newVer } = dep
const tryReplace = curVer && newVer && semver.gte(newVer, curVer)
if (tryReplace && dep.canReplace(current)) {
// It's extremely rare that a replaceable node would be a conflict, if
// the current one wasn't a conflict, but it is theoretically possible
// if peer deps are pinned. In that case we treat it like any other
// conflict, and keep trying.
const cpp = this.canPlacePeers(REPLACE)
if (cpp !== CONFLICT) {
return cpp
}
}
// ok, can't replace the current with new one, but maybe current is ok?
if (current.satisfies(edge) && (!explicitRequest || preferDedupe)) {
return KEEP
}
// if we prefer deduping, then try replacing newer with older
if (preferDedupe && !tryReplace && dep.canReplace(current)) {
const cpp = this.canPlacePeers(REPLACE)
if (cpp !== CONFLICT) {
return cpp
}
}
// Check for interesting cases!
// First, is this the deepest place that this thing can go, and NOT the
// deepest place where the conflicting dep can go? If so, replace it,
// and let it re-resolve deeper in the tree.
const myDeepest = this.deepestNestingTarget
// ok, i COULD be placed deeper, so leave the current one alone.
if (target !== myDeepest) {
return CONFLICT
}
// if we are not checking a peerDep, then we MUST place it here, in the
// target that has a non-peer dep on it.
if (!edge.peer && target === edge.from) {
return this.canPlacePeers(REPLACE)
}
// if we aren't placing a peer in a set, then we're done here.
// This is ignored because it SHOULD be redundant, as far as I can tell,
// with the deepest target and target===edge.from tests. But until we
// can prove that isn't possible, this condition is here for safety.
/* istanbul ignore if - allegedly impossible */
if (!this.parent && !edge.peer) {
return CONFLICT
}
// check the deps in the peer group for each edge into that peer group
// if ALL of them can be pushed deeper, or if it's ok to replace its
// members with the contents of the new peer group, then we're good.
let canReplace = true
for (const [entryEdge, currentPeers] of peerEntrySets(current)) {
if (entryEdge === this.edge || entryEdge === this.peerEntryEdge) {
continue
}
// First, see if it's ok to just replace the peerSet entirely.
// we do this by walking out from the entryEdge, because in a case like
// this:
//
// v -> PEER(a@1||2)
// a@1 -> PEER(b@1)
// a@2 -> PEER(b@2)
// b@1 -> PEER(a@1)
// b@2 -> PEER(a@2)
//
// root
// +-- v
// +-- a@2
// +-- b@2
//
// Trying to place a peer group of (a@1, b@1) would fail to note that
// they can be replaced, if we did it by looping 1 by 1. If we are
// replacing something, we don't have to check its peer deps, because
// the peerDeps in the placed peerSet will presumably satisfy.
const entryNode = entryEdge.to
const entryRep = dep.parent.children.get(entryNode.name)
if (entryRep) {
if (entryRep.canReplace(entryNode, dep.parent.children.keys())) {
continue
}
}
let canClobber = !entryRep
if (!entryRep) {
const peerReplacementWalk = new Set([entryNode])
OUTER: for (const currentPeer of peerReplacementWalk) {
for (const edge of currentPeer.edgesOut.values()) {
if (!edge.peer || !edge.valid) {
continue
}
const rep = dep.parent.children.get(edge.name)
if (!rep) {
if (edge.to) {
peerReplacementWalk.add(edge.to)
}
continue
}
if (!rep.satisfies(edge)) {
canClobber = false
break OUTER
}
}
}
}
if (canClobber) {
continue
}
// ok, we can't replace, but maybe we can nest the current set deeper?
let canNestCurrent = true
for (const currentPeer of currentPeers) {
if (!canNestCurrent) {
break
}
// still possible to nest this peerSet
const curDeep = deepestNestingTarget(entryEdge.from, currentPeer.name)
if (curDeep === target || target.isDescendantOf(curDeep)) {
canNestCurrent = false
canReplace = false
}
if (canNestCurrent) {
continue
}
}
}
// if we can nest or replace all the current peer groups, we can replace.
if (canReplace) {
return this.canPlacePeers(REPLACE)
}
return CONFLICT
}
checkCanPlaceNoCurrent () {
const { target, peerEntryEdge, dep, name } = this
// check to see what that name resolves to here, and who may depend on
// being able to reach it by crawling up past the parent. we know
// that it's not the target's direct child node, and if it was a direct
// dep of the target, we would have conflicted earlier.
const current = target !== peerEntryEdge.from && target.resolve(name)
if (current) {
for (const edge of current.edgesIn.values()) {
if (edge.from.isDescendantOf(target) && edge.valid) {
if (!dep.satisfies(edge)) {
return CONFLICT
}
}
}
}
// no objections, so this is fine as long as peers are ok here.
return this.canPlacePeers(OK)
}
get deepestNestingTarget () {
const start = this.parent ? this.parent.deepestNestingTarget
: this.edge.from
return deepestNestingTarget(start, this.name)
}
get conflictChildren () {
return this.allChildren.filter(c => c.canPlace === CONFLICT)
}
get allChildren () {
const set = new Set(this.children)
for (const child of set) {
for (const grandchild of child.children) {
set.add(grandchild)
}
}
return [...set]
}
get top () {
return this.parent ? this.parent.top : this
}
// check if peers can go here. returns state or CONFLICT
canPlacePeers (state) {
this.canPlaceSelf = state
if (this._canPlacePeers) {
return this._canPlacePeers
}
// TODO: represent peerPath in ERESOLVE error somehow?
const peerPath = [...this.peerPath, this.dep]
let sawConflict = false
for (const peerEdge of this.dep.edgesOut.values()) {
if (!peerEdge.peer || !peerEdge.to || peerPath.includes(peerEdge.to)) {
continue
}
const peer = peerEdge.to
// it may be the case that the *initial* dep can be nested, but a peer
// of that dep needs to be placed shallower, because the target has
// a peer dep on the peer as well.
const target = deepestNestingTarget(this.target, peer.name)
const cpp = new CanPlaceDep({
dep: peer,
target,
parent: this,
edge: peerEdge,
peerPath,
// always place peers in preferDedupe mode
preferDedupe: true,
})
/* istanbul ignore next */
debug(() => {
if (this.children.some(c => c.dep === cpp.dep)) {
throw new Error('checking same dep repeatedly')
}
})
this.children.push(cpp)
if (cpp.canPlace === CONFLICT) {
sawConflict = true
}
}
this._canPlacePeers = sawConflict ? CONFLICT : state
return this._canPlacePeers
}
// what is the node that is causing this peerSet to be placed?
get peerSetSource () {
return this.parent ? this.parent.peerSetSource : this.edge.from
}
get peerEntryEdge () {
return this.top.edge
}
static get CONFLICT () {
return CONFLICT
}
static get OK () {
return OK
}
static get REPLACE () {
return REPLACE
}
static get KEEP () {
return KEEP
}
get description () {
const { canPlace } = this
return canPlace && canPlace.description ||
/* istanbul ignore next - old node affordance */ canPlace
}
}
module.exports = CanPlaceDep
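
For orientation, a minimal sketch of how a caller might act on the four placement states; `dep`, `edge`, and `target` are assumed to be real nodes/edges from an ideal-tree build (in practice this class is driven by PlaceDep, further below):

```js
const CanPlaceDep = require('./can-place-dep.js')
const { OK, REPLACE, KEEP, CONFLICT } = CanPlaceDep

// hypothetical helper: summarize the check for one candidate target
const describePlacement = (dep, edge, target) => {
  const cpd = new CanPlaceDep({ dep, edge, target })
  switch (cpd.canPlace) {
    case OK: // nothing of this name is here yet; dep and its peers fit
      return `add ${dep.name} at ${target.location}`
    case REPLACE: // something is here, but the new dep supersedes it
      return `replace at ${target.location}`
    case KEEP: // what's already here satisfies the edge; place nothing
      return 'keep existing node'
    case CONFLICT: // caller tries a shallower target, or fails ERESOLVE
      return 'conflict'
  }
}
```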

50
spa/node_modules/@npmcli/arborist/lib/case-insensitive-map.js generated vendored Normal file
View File

@@ -0,0 +1,50 @@
// package children are represented with a Map object, but many file systems
// are case-insensitive and unicode-normalizing, so we need to treat
// node.children.get('FOO') and node.children.get('foo') as the same thing.
const _keys = Symbol('keys')
const _normKey = Symbol('normKey')
const normalize = s => s.normalize('NFKD').toLowerCase()
const OGMap = Map
module.exports = class Map extends OGMap {
constructor (items = []) {
super()
this[_keys] = new OGMap()
for (const [key, val] of items) {
this.set(key, val)
}
}
[_normKey] (key) {
return typeof key === 'string' ? normalize(key) : key
}
get (key) {
const normKey = this[_normKey](key)
return this[_keys].has(normKey) ? super.get(this[_keys].get(normKey))
: undefined
}
set (key, val) {
const normKey = this[_normKey](key)
if (this[_keys].has(normKey)) {
super.delete(this[_keys].get(normKey))
}
this[_keys].set(normKey, key)
return super.set(key, val)
}
delete (key) {
const normKey = this[_normKey](key)
if (this[_keys].has(normKey)) {
const prevKey = this[_keys].get(normKey)
this[_keys].delete(normKey)
return super.delete(prevKey)
}
}
has (key) {
const normKey = this[_normKey](key)
return this[_keys].has(normKey) && super.has(this[_keys].get(normKey))
}
}
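
A quick demonstration of the normalizing behavior (the `./case-insensitive-map.js` require path assumes the arborist lib layout):

```js
const CIMap = require('./case-insensitive-map.js')

const map = new CIMap([['FOO', 1]])
map.get('foo') // 1 -- 'FOO' and 'foo' normalize to the same key
map.has('Foo') // true
map.set('foo', 2) // replaces the 'FOO' entry rather than adding a second
map.size // 1
console.log([...map.keys()]) // ['foo'] -- the latest spelling of the key wins
```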

45
spa/node_modules/@npmcli/arborist/lib/consistent-resolve.js generated vendored Normal file
View File

@@ -0,0 +1,45 @@
// take a path and a resolved value, and turn it into a resolution from
// the given new path. This is used with converting a package.json's
// relative file: path into one suitable for a lockfile, or between
// lockfiles, and for converting hosted git repos to a consistent url type.
const npa = require('npm-package-arg')
const relpath = require('./relpath.js')
const consistentResolve = (resolved, fromPath, toPath, relPaths = false) => {
if (!resolved) {
return null
}
try {
const hostedOpt = { noCommittish: false }
const {
fetchSpec,
saveSpec,
type,
hosted,
rawSpec,
raw,
} = npa(resolved, fromPath)
if (type === 'file' || type === 'directory') {
const cleanFetchSpec = fetchSpec.replace(/#/g, '%23')
if (relPaths && toPath) {
return `file:${relpath(toPath, cleanFetchSpec)}`
}
return `file:${cleanFetchSpec}`
}
if (hosted) {
return `git+${hosted.auth ? hosted.https(hostedOpt) : hosted.sshurl(hostedOpt)}`
}
if (type === 'git') {
return saveSpec
}
if (rawSpec === '*') {
return raw
}
return rawSpec
} catch (_) {
// whatever we passed in was not acceptable to npa.
// leave it 100% untouched.
return resolved
}
}
module.exports = consistentResolve
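
For example (paths hypothetical), a relative `file:` spec is re-expressed from a new lockfile location, while registry tarball URLs pass through unchanged:

```js
const consistentResolve = require('./consistent-resolve.js')

// resolved as file:../shared/foo relative to /work/app, re-expressed
// relative to /work instead:
consistentResolve('file:../shared/foo', '/work/app', '/work', true)
// -> 'file:shared/foo'

// remote specs are neither file nor git, so they come back as-is:
consistentResolve('https://registry.npmjs.org/foo/-/foo-1.0.0.tgz', '/work/app', '/work')
// -> 'https://registry.npmjs.org/foo/-/foo-1.0.0.tgz'
```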

31
spa/node_modules/@npmcli/arborist/lib/debug.js generated vendored Normal file
View File

@@ -0,0 +1,31 @@
// certain assertions we should do only when testing arborist itself, because
// they are too expensive or aggressive and would break user programs if we
// miss a situation where they are actually valid.
//
// call like this:
//
// /* istanbul ignore next - debug check */
// debug(() => {
// if (someExpensiveCheck)
// throw new Error('expensive check should have returned false')
// })
// run in debug mode if explicitly requested, running arborist tests,
// or working in the arborist project directory.
const debug = process.env.ARBORIST_DEBUG !== '0' && (
process.env.ARBORIST_DEBUG === '1' ||
/\barborist\b/.test(process.env.NODE_DEBUG || '') ||
process.env.npm_package_name === '@npmcli/arborist' &&
['test', 'snap'].includes(process.env.npm_lifecycle_event) ||
process.cwd() === require('path').resolve(__dirname, '..')
)
module.exports = debug ? fn => fn() : () => {}
const red = process.stderr.isTTY ? msg => `\x1B[31m${msg}\x1B[39m` : m => m
module.exports.log = (...msg) => module.exports(() => {
const { format } = require('util')
const prefix = `\n${process.pid} ${red(format(msg.shift()))} `
msg = (prefix + format(...msg).trim().split('\n').join(prefix)).trim()
console.error(msg)
})

18
spa/node_modules/@npmcli/arborist/lib/deepest-nesting-target.js generated vendored Normal file
View File

@@ -0,0 +1,18 @@
// given a starting node, what is the *deepest* target where name could go?
// This is not on the Node class for the simple reason that we sometimes
// need to check the deepest *potential* target for a Node that is not yet
// added to the tree where we are checking.
const deepestNestingTarget = (start, name) => {
for (const target of start.ancestry()) {
// note: this will skip past the first target if edge is peer
if (target.isProjectRoot || !target.resolveParent || target.globalTop) {
return target
}
const targetEdge = target.edgesOut.get(name)
if (!targetEdge || !targetEdge.peer) {
return target
}
}
}
module.exports = deepestNestingTarget
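
A self-contained sketch with hypothetical mini-nodes, shaped just enough like the real Node class (`ancestry()`, `edgesOut`, `resolveParent`) to show the peer-edge skip:

```js
const deepestNestingTarget = require('./deepest-nesting-target.js')

const mkNode = (parent, edges = {}) => {
  const node = {
    isProjectRoot: !parent,
    globalTop: false,
    resolveParent: parent,
    edgesOut: new Map(Object.entries(edges)),
    * ancestry () {
      for (let n = node; n; n = n.resolveParent) {
        yield n
      }
    },
  }
  return node
}

const root = mkNode(null)
const mid = mkNode(root, { foo: { peer: true } })

// mid has a peer dep on foo, so foo has to land above it, at the root
deepestNestingTarget(mid, 'foo') === root // true
// nothing blocks bar, so mid itself is the deepest target
deepestNestingTarget(mid, 'bar') === mid // true
```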

150
spa/node_modules/@npmcli/arborist/lib/dep-valid.js generated vendored Normal file
View File

@@ -0,0 +1,150 @@
// Do not rely on package._fields, so that we don't throw
// false failures if a tree is generated by other clients.
// Only relies on child.resolved, which MAY come from
// client-specific package.json meta _fields, but most of
// the time will be pulled out of a lockfile
const semver = require('semver')
const npa = require('npm-package-arg')
const { relative } = require('path')
const fromPath = require('./from-path.js')
const depValid = (child, requested, requestor) => {
// NB: we don't do much to verify 'tag' type requests.
// Just verify that we got a remote resolution. Presumably, it
// came from a registry and was tagged at some point.
if (typeof requested === 'string') {
try {
// tarball/dir must have resolved to the same tgz on disk, but for
// file: deps that depend on other files/dirs, we must resolve the
// location based on the *requestor* file/dir, not where it ends up.
// '' is equivalent to '*'
requested = npa.resolve(child.name, requested || '*', fromPath(requestor, requestor.edgesOut.get(child.name)))
} catch (er) {
// Not invalid because the child doesn't match, but because
// the spec itself is not supported. Nothing would match,
// so the edge is definitely not valid and never can be.
er.dependency = child.name
er.requested = requested
requestor.errors.push(er)
return false
}
}
// if the lockfile is super old, or hand-modified,
// then it's possible to hit this state.
if (!requested) {
const er = new Error('Invalid dependency specifier')
er.dependency = child.name
er.requested = requested
requestor.errors.push(er)
return false
}
switch (requested.type) {
case 'range':
if (requested.fetchSpec === '*') {
return true
}
// fallthrough
case 'version':
// if it's a version or a range other than '*', semver it
return semver.satisfies(child.version, requested.fetchSpec, true)
case 'directory':
return linkValid(child, requested, requestor)
case 'file':
return tarballValid(child, requested, requestor)
case 'alias':
// check that the alias target is valid
return depValid(child, requested.subSpec, requestor)
case 'tag':
// if it's a tag, we just verify that it has a tarball resolution
// presumably, it came from the registry and was tagged at some point
return child.resolved && npa(child.resolved).type === 'remote'
case 'remote':
// verify that we got it from the desired location
return child.resolved === requested.fetchSpec
case 'git': {
// if it's a git type, verify that they're the same repo
//
// if it specifies a definite commit, then it must have the
// same commit to be considered the same repo
//
// if it has a #semver:<range> specifier, verify that the
// version in the package is in the semver range
const resRepo = npa(child.resolved || '')
const resHost = resRepo.hosted
const reqHost = requested.hosted
const reqCommit = /^[a-fA-F0-9]{40}$/.test(requested.gitCommittish || '')
const nc = { noCommittish: !reqCommit }
if (!resHost) {
if (resRepo.fetchSpec !== requested.fetchSpec) {
return false
}
} else {
if (reqHost?.ssh(nc) !== resHost.ssh(nc)) {
return false
}
}
if (!requested.gitRange) {
return true
}
return semver.satisfies(child.package.version, requested.gitRange, {
loose: true,
})
}
default: // unpossible, just being cautious
break
}
const er = new Error('Unsupported dependency type')
er.dependency = child.name
er.requested = requested
requestor.errors.push(er)
return false
}
const linkValid = (child, requested, requestor) => {
const isLink = !!child.isLink
// if we're installing links and the node is a link, then it's invalid because we want
// a real node to be there. Except for workspaces. They are always links.
if (requestor.installLinks && !child.isWorkspace) {
return !isLink
}
// directory must be a link to the specified folder
return isLink && relative(child.realpath, requested.fetchSpec) === ''
}
const tarballValid = (child, requested, requestor) => {
if (child.isLink) {
return false
}
if (child.resolved) {
return child.resolved.replace(/\\/g, '/') === `file:${requested.fetchSpec.replace(/\\/g, '/')}`
}
// if we have a legacy mutated package.json file, we can't be 100%
// sure that it resolved to the same file, but if it was the same
// request, that's a pretty good indicator of sameness.
if (child.package._requested) {
return child.package._requested.saveSpec === requested.saveSpec
}
// ok, we're probably dealing with some legacy cruft here, not much
// we can do at this point unfortunately.
return false
}
module.exports = (child, requested, accept, requestor) =>
depValid(child, requested, requestor) ||
(typeof accept === 'string' ? depValid(child, accept, requestor) : false)
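
A minimal sketch of the exported check for the common registry-range case; the requestor stub carries only the fields this code path actually touches:

```js
const npa = require('npm-package-arg')
const depValid = require('./dep-valid.js')

const requestor = { errors: [] } // collects spec errors on failure paths
const child = { name: 'foo', version: '1.2.3' }

// range specs are tested with semver against the child's version
depValid(child, npa.resolve('foo', '^1.0.0'), null, requestor) // true
depValid(child, npa.resolve('foo', '^2.0.0'), null, requestor) // false
```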

306
spa/node_modules/@npmcli/arborist/lib/diff.js generated vendored Normal file
View File

@@ -0,0 +1,306 @@
// a tree representing the difference between two trees
// A Diff node's parent is not necessarily the parent of
// the node location it refers to, but rather the highest level
// node that needs to be either changed or removed.
// Thus, the root Diff node is the shallowest change required
// for a given branch of the tree being mutated.
const { depth } = require('treeverse')
const { existsSync } = require('fs')
const ssri = require('ssri')
class Diff {
constructor ({ actual, ideal, filterSet, shrinkwrapInflated }) {
this.filterSet = filterSet
this.shrinkwrapInflated = shrinkwrapInflated
this.children = []
this.actual = actual
this.ideal = ideal
if (this.ideal) {
this.resolved = this.ideal.resolved
this.integrity = this.ideal.integrity
}
this.action = getAction(this)
this.parent = null
// the set of leaf nodes that we rake up to the top level
this.leaves = []
// the set of nodes that don't change in this branch of the tree
this.unchanged = []
// the set of nodes that will be removed in this branch of the tree
this.removed = []
}
static calculate ({
actual,
ideal,
filterNodes = [],
shrinkwrapInflated = new Set(),
}) {
// if there's a filterNode, then:
// - get the path from the root to the filterNode. The root or
// root.target should have an edge either to the filterNode or
// a link to the filterNode. If not, abort. Add the path to the
// filterSet.
// - Add set of Nodes depended on by the filterNode to filterSet.
// - Anything outside of that set should be ignored by getChildren
const filterSet = new Set()
const extraneous = new Set()
for (const filterNode of filterNodes) {
const { root } = filterNode
if (root !== ideal && root !== actual) {
throw new Error('invalid filterNode: outside idealTree/actualTree')
}
const rootTarget = root.target
const edge = [...rootTarget.edgesOut.values()].filter(e => {
return e.to && (e.to === filterNode || e.to.target === filterNode)
})[0]
filterSet.add(root)
filterSet.add(rootTarget)
filterSet.add(ideal)
filterSet.add(actual)
if (edge && edge.to) {
filterSet.add(edge.to)
filterSet.add(edge.to.target)
}
filterSet.add(filterNode)
depth({
tree: filterNode,
visit: node => filterSet.add(node),
getChildren: node => {
node = node.target
const loc = node.location
const idealNode = ideal.inventory.get(loc)
const ideals = !idealNode ? []
: [...idealNode.edgesOut.values()].filter(e => e.to).map(e => e.to)
const actualNode = actual.inventory.get(loc)
const actuals = !actualNode ? []
: [...actualNode.edgesOut.values()].filter(e => e.to).map(e => e.to)
if (actualNode) {
for (const child of actualNode.children.values()) {
if (child.extraneous) {
extraneous.add(child)
}
}
}
return ideals.concat(actuals)
},
})
}
for (const extra of extraneous) {
filterSet.add(extra)
}
return depth({
tree: new Diff({ actual, ideal, filterSet, shrinkwrapInflated }),
getChildren,
leave,
})
}
}
const getAction = ({ actual, ideal }) => {
if (!ideal) {
return 'REMOVE'
}
// bundled meta-deps are copied over to the ideal tree when we visit it,
// so they'll appear to be missing here. There's no need to handle them
// in the diff, though, because they'll be replaced at reify time anyway.
// Otherwise, add the missing node.
if (!actual) {
return ideal.inDepBundle ? null : 'ADD'
}
// always ignore the root node
if (ideal.isRoot && actual.isRoot) {
return null
}
// if the versions don't match, it's a change no matter what
if (ideal.version !== actual.version) {
return 'CHANGE'
}
const binsExist = ideal.binPaths.every((path) => existsSync(path))
// top nodes, links, and git deps won't have integrity, but do have resolved
// if neither node has integrity, the bins exist, and either (a) neither
// node has a resolved value or (b) they both do and match, then we can
// leave this one alone since we already know the versions match due to
// the condition above. The "neither has resolved" case (a) cannot be
// treated as a 'mark CHANGE and refetch', because shrinkwraps, bundles,
// and link deps may lack this information, and we don't want to try to
// go to the registry for something that isn't there.
const noIntegrity = !ideal.integrity && !actual.integrity
const noResolved = !ideal.resolved && !actual.resolved
const resolvedMatch = ideal.resolved && ideal.resolved === actual.resolved
if (noIntegrity && binsExist && (resolvedMatch || noResolved)) {
return null
}
// otherwise, verify that it's the same bits
// note that if ideal has integrity, and resolved doesn't, we treat
// that as a 'change', so that it gets re-fetched and locked down.
const integrityMismatch = !ideal.integrity || !actual.integrity ||
!ssri.parse(ideal.integrity).match(actual.integrity)
if (integrityMismatch || !binsExist) {
return 'CHANGE'
}
return null
}
const allChildren = node => {
if (!node) {
return new Map()
}
// if the node is root, and also a link, then what we really
// want is to traverse the target's children
if (node.isRoot && node.isLink) {
return allChildren(node.target)
}
const kids = new Map()
for (const n of [node, ...node.fsChildren]) {
for (const kid of n.children.values()) {
kids.set(kid.path, kid)
}
}
return kids
}
// functions for the walk options when we traverse the trees
// to create the diff tree
const getChildren = diff => {
const children = []
const {
actual,
ideal,
unchanged,
removed,
filterSet,
shrinkwrapInflated,
} = diff
// Note: we DON'T diff fsChildren themselves, because they are either
// included in the package contents, or part of some other project, and
// will never appear in legacy shrinkwraps anyway. but we _do_ include the
// child nodes of fsChildren, because those are nodes that we are typically
// responsible for installing.
const actualKids = allChildren(actual)
const idealKids = allChildren(ideal)
if (ideal && ideal.hasShrinkwrap && !shrinkwrapInflated.has(ideal)) {
// Guaranteed to get a diff.leaves here, because we will always
// be called with a proper Diff object when ideal has a shrinkwrap
// that has not been inflated.
diff.leaves.push(diff)
return children
}
const paths = new Set([...actualKids.keys(), ...idealKids.keys()])
for (const path of paths) {
const actual = actualKids.get(path)
const ideal = idealKids.get(path)
diffNode({
actual,
ideal,
children,
unchanged,
removed,
filterSet,
shrinkwrapInflated,
})
}
if (diff.leaves && !children.length) {
diff.leaves.push(diff)
}
return children
}
const diffNode = ({
actual,
ideal,
children,
unchanged,
removed,
filterSet,
shrinkwrapInflated,
}) => {
if (filterSet.size && !(filterSet.has(ideal) || filterSet.has(actual))) {
return
}
const action = getAction({ actual, ideal })
// if it's a match, then get its children
// otherwise, this is the child diff node
if (action || (!shrinkwrapInflated.has(ideal) && ideal.hasShrinkwrap)) {
if (action === 'REMOVE') {
removed.push(actual)
}
children.push(new Diff({ actual, ideal, filterSet, shrinkwrapInflated }))
} else {
unchanged.push(ideal)
// !*! Weird dirty hack warning !*!
//
// Bundled deps aren't loaded in the ideal tree, because we don't know
// what they are going to be without unpacking. Swap them over now if
// the bundling node isn't changing, so we don't prune them later.
//
// It's a little bit dirty to be doing this here, since it means that
// diffing trees can mutate them, but otherwise we have to walk over
// all unchanging bundlers and correct the diff later, so it's more
// efficient to just fix it while we're passing through already.
//
// Note that moving over a bundled dep will break the links to other
// deps under this parent, which may have been transitively bundled.
// Breaking those links means that we'll no longer see the transitive
// dependency, meaning that it won't appear as bundled any longer!
// In order to not end up dropping transitively bundled deps, we have
// to get the list of nodes to move, then move them all at once, rather
// than moving them one at a time in the first loop.
const bd = ideal.package.bundleDependencies
if (actual && bd && bd.length) {
const bundledChildren = []
for (const node of actual.children.values()) {
if (node.inBundle) {
bundledChildren.push(node)
}
}
for (const node of bundledChildren) {
node.parent = ideal
}
}
children.push(...getChildren({
actual,
ideal,
unchanged,
removed,
filterSet,
shrinkwrapInflated,
}))
}
}
// set the parentage in the leave step so that we aren't attaching
// child nodes only to remove them later. also bubble up the unchanged
// nodes so that we can move them out of staging in the reification step.
const leave = (diff, children) => {
children.forEach(kid => {
kid.parent = diff
diff.leaves.push(...kid.leaves)
diff.unchanged.push(...kid.unchanged)
diff.removed.push(...kid.removed)
})
diff.children = children
return diff
}
module.exports = Diff
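
A sketch of driving this from an Arborist instance. Requiring `lib/diff.js` directly is an assumption (the package index does not re-export it), and the project path is hypothetical:

```js
const Arborist = require('@npmcli/arborist')
const Diff = require('@npmcli/arborist/lib/diff.js')

const arb = new Arborist({ path: '/some/project' })
Promise.all([arb.loadActual(), arb.buildIdealTree()])
  .then(([actual, ideal]) => {
    const diff = Diff.calculate({ actual, ideal })
    // each top-level child is the shallowest change on its branch
    for (const child of diff.children) {
      const node = child.ideal || child.actual
      console.log(child.action, node.location)
    }
  })
```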

301
spa/node_modules/@npmcli/arborist/lib/edge.js generated vendored Normal file
View File

@@ -0,0 +1,301 @@
// An edge in the dependency graph
// Represents a dependency relationship of some kind
const util = require('util')
const npa = require('npm-package-arg')
const depValid = require('./dep-valid.js')
class ArboristEdge {
constructor (edge) {
this.name = edge.name
this.spec = edge.spec
this.type = edge.type
const edgeFrom = edge.from?.location
const edgeTo = edge.to?.location
const override = edge.overrides?.value
if (edgeFrom != null) {
this.from = edgeFrom
}
if (edgeTo) {
this.to = edgeTo
}
if (edge.error) {
this.error = edge.error
}
if (edge.peerConflicted) {
this.peerConflicted = true
}
if (override) {
this.overridden = override
}
}
}
class Edge {
#accept
#error
#explanation
#from
#name
#spec
#to
#type
static types = Object.freeze([
'prod',
'dev',
'optional',
'peer',
'peerOptional',
'workspace',
])
// XXX where is this used?
static errors = Object.freeze([
'DETACHED',
'MISSING',
'PEER LOCAL',
'INVALID',
])
constructor (options) {
const { type, name, spec, accept, from, overrides } = options
// XXX are all of these error states even possible?
if (typeof spec !== 'string') {
throw new TypeError('must provide string spec')
}
if (!Edge.types.includes(type)) {
throw new TypeError(`invalid type: ${type}\n(valid types are: ${Edge.types.join(', ')})`)
}
if (type === 'workspace' && npa(spec).type !== 'directory') {
throw new TypeError('workspace edges must be a symlink')
}
if (typeof name !== 'string') {
throw new TypeError('must provide dependency name')
}
if (!from) {
throw new TypeError('must provide "from" node')
}
if (accept !== undefined) {
if (typeof accept !== 'string') {
throw new TypeError('accept field must be a string if provided')
}
this.#accept = accept || '*'
}
if (overrides !== undefined) {
this.overrides = overrides
}
this.#name = name
this.#type = type
this.#spec = spec
this.#explanation = null
this.#from = from
from.edgesOut.get(this.#name)?.detach()
from.addEdgeOut(this)
this.reload(true)
this.peerConflicted = false
}
satisfiedBy (node) {
if (node.name !== this.#name) {
return false
}
// NOTE: this condition means we explicitly do not support overriding
// bundled or shrinkwrapped dependencies
if (node.hasShrinkwrap || node.inShrinkwrap || node.inBundle) {
return depValid(node, this.rawSpec, this.#accept, this.#from)
}
return depValid(node, this.spec, this.#accept, this.#from)
}
// return the edge data, and an explanation of how that edge came to be here
explain (seen = []) {
if (!this.#explanation) {
const explanation = {
type: this.#type,
name: this.#name,
spec: this.spec,
}
if (this.rawSpec !== this.spec) {
explanation.rawSpec = this.rawSpec
explanation.overridden = true
}
if (this.bundled) {
explanation.bundled = this.bundled
}
if (this.error) {
explanation.error = this.error
}
if (this.#from) {
explanation.from = this.#from.explain(null, seen)
}
this.#explanation = explanation
}
return this.#explanation
}
get bundled () {
return !!this.#from?.package?.bundleDependencies?.includes(this.#name)
}
get workspace () {
return this.#type === 'workspace'
}
get prod () {
return this.#type === 'prod'
}
get dev () {
return this.#type === 'dev'
}
get optional () {
return this.#type === 'optional' || this.#type === 'peerOptional'
}
get peer () {
return this.#type === 'peer' || this.#type === 'peerOptional'
}
get type () {
return this.#type
}
get name () {
return this.#name
}
get rawSpec () {
return this.#spec
}
get spec () {
if (this.overrides?.value && this.overrides.value !== '*' && this.overrides.name === this.#name) {
if (this.overrides.value.startsWith('$')) {
const ref = this.overrides.value.slice(1)
// we may be a virtual root; if we are, we want to resolve reference
// overrides from the real root, not the virtual one
const pkg = this.#from.sourceReference
? this.#from.sourceReference.root.package
: this.#from.root.package
if (pkg.devDependencies?.[ref]) {
return pkg.devDependencies[ref]
}
if (pkg.optionalDependencies?.[ref]) {
return pkg.optionalDependencies[ref]
}
if (pkg.dependencies?.[ref]) {
return pkg.dependencies[ref]
}
if (pkg.peerDependencies?.[ref]) {
return pkg.peerDependencies[ref]
}
throw new Error(`Unable to resolve reference ${this.overrides.value}`)
}
return this.overrides.value
}
return this.#spec
}
get accept () {
return this.#accept
}
get valid () {
return !this.error
}
get missing () {
return this.error === 'MISSING'
}
get invalid () {
return this.error === 'INVALID'
}
get peerLocal () {
return this.error === 'PEER LOCAL'
}
get error () {
if (!this.#error) {
if (!this.#to) {
if (this.optional) {
this.#error = null
} else {
this.#error = 'MISSING'
}
} else if (this.peer && this.#from === this.#to.parent && !this.#from.isTop) {
this.#error = 'PEER LOCAL'
} else if (!this.satisfiedBy(this.#to)) {
this.#error = 'INVALID'
} else {
this.#error = 'OK'
}
}
if (this.#error === 'OK') {
return null
}
return this.#error
}
reload (hard = false) {
this.#explanation = null
if (this.#from.overrides) {
this.overrides = this.#from.overrides.getEdgeRule(this)
} else {
delete this.overrides
}
const newTo = this.#from.resolve(this.#name)
if (newTo !== this.#to) {
if (this.#to) {
this.#to.edgesIn.delete(this)
}
this.#to = newTo
this.#error = null
if (this.#to) {
this.#to.addEdgeIn(this)
}
} else if (hard) {
this.#error = null
}
}
detach () {
this.#explanation = null
if (this.#to) {
this.#to.edgesIn.delete(this)
}
this.#from.edgesOut.delete(this.#name)
this.#to = null
this.#error = 'DETACHED'
this.#from = null
}
get from () {
return this.#from
}
get to () {
return this.#to
}
toJSON () {
return new ArboristEdge(this)
}
[util.inspect.custom] () {
return this.toJSON()
}
}
module.exports = Edge
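
Edges are created by nodes as they load, so the usual way to observe them is off a loaded tree (project path hypothetical):

```js
const Arborist = require('@npmcli/arborist')

new Arborist({ path: '/some/project' }).loadActual().then(tree => {
  for (const edge of tree.edgesOut.values()) {
    // edge.error is null when valid, otherwise one of
    // MISSING / INVALID / PEER LOCAL (or DETACHED after detach())
    console.log(`${edge.type} ${edge.name}@${edge.spec}:`, edge.error || 'ok')
  }
})
```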

30
spa/node_modules/@npmcli/arborist/lib/from-path.js generated vendored Normal file
View File

@@ -0,0 +1,30 @@
// file dependencies need their dependencies resolved based on the location
// where the tarball was found, not the location where they end up getting
// installed. directory (ie, symlink) deps also need to be resolved based on
// their targets, but that's what realpath is
const { dirname } = require('path')
const npa = require('npm-package-arg')
const fromPath = (node, edge) => {
if (edge && edge.overrides && edge.overrides.name === edge.name && edge.overrides.value) {
// fromPath could be called with a node that has a virtual root, if that
// happens we want to make sure we get the real root node when overrides
// are in use. this is to allow things like overriding a dependency with a
// tarball file that's a relative path from the project root
if (node.sourceReference) {
return node.sourceReference.root.realpath
}
return node.root.realpath
}
if (node.resolved) {
const spec = npa(node.resolved)
if (spec?.type === 'file') {
return dirname(spec.fetchSpec)
}
}
return node.realpath
}
module.exports = fromPath
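
For example (paths hypothetical), a node unpacked from a local tarball resolves its own `file:` deps relative to the tarball's directory, not its install location:

```js
const fromPath = require('./from-path.js')

fromPath({
  resolved: 'file:/work/vendor/foo-1.0.0.tgz',
  realpath: '/work/app/node_modules/foo',
})
// -> '/work/vendor'

fromPath({ resolved: null, realpath: '/work/app/node_modules/bar' })
// -> '/work/app/node_modules/bar' (falls back to the realpath)
```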

43
spa/node_modules/@npmcli/arborist/lib/gather-dep-set.js generated vendored Normal file
View File

@@ -0,0 +1,43 @@
// Given a set of nodes in a tree, and a filter function to test the
// incoming edges into the dep set (edges that fail the filter are
// ignored), find the set of deps that are only depended upon by nodes
// in the set, by their dependencies, or via ignored edges.
//
//
// Used when figuring out what to prune when replacing a node with a newer
// version, or when an optional dep fails to install.
const gatherDepSet = (set, edgeFilter) => {
const deps = new Set(set)
// add the full set of dependencies. note that this loop will continue
// as the deps set increases in size.
for (const node of deps) {
for (const edge of node.edgesOut.values()) {
if (edge.to && edgeFilter(edge)) {
deps.add(edge.to)
}
}
}
// now remove all nodes in the set that have a dependant outside the set
// if any change is made, then re-check
// continue until no changes made, or deps set evaporates fully.
let changed = true
while (changed === true && deps.size > 0) {
changed = false
for (const dep of deps) {
for (const edge of dep.edgesIn) {
if (!deps.has(edge.from) && edgeFilter(edge)) {
changed = true
deps.delete(dep)
break
}
}
}
}
return deps
}
module.exports = gatherDepSet
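
A self-contained sketch with hypothetical mini-nodes (`a -> b -> c`, plus an outside dependent on `c`) showing why the fixed-point pruning loop matters:

```js
const gatherDepSet = require('./gather-dep-set.js')

const mkNode = () => ({ edgesOut: new Map(), edgesIn: new Set() })
const link = (from, to, name) => {
  const edge = { from, to, valid: true }
  from.edgesOut.set(name, edge)
  to.edgesIn.add(edge)
}

const a = mkNode()
const b = mkNode()
const c = mkNode()
const outside = mkNode()
link(a, b, 'b')
link(b, c, 'c')
link(outside, c, 'c')

const deps = gatherDepSet([a], edge => edge.valid)
deps.has(a) && deps.has(b) // true
deps.has(c) // false -- something outside the set still depends on c
```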

36
spa/node_modules/@npmcli/arborist/lib/get-workspace-nodes.js generated vendored Normal file
View File

@@ -0,0 +1,36 @@
// Get the actual nodes corresponding to a root node's child workspaces,
// given a list of workspace names.
const log = require('proc-log')
const relpath = require('./relpath.js')
const getWorkspaceNodes = (tree, workspaces) => {
const wsMap = tree.workspaces
if (!wsMap) {
log.warn('workspaces', 'filter set, but no workspaces present')
return []
}
const nodes = []
for (const name of workspaces) {
const path = wsMap.get(name)
if (!path) {
log.warn('workspaces', `${name} in filter set, but not in workspaces`)
continue
}
const loc = relpath(tree.realpath, path)
const node = tree.inventory.get(loc)
if (!node) {
log.warn('workspaces', `${name} in filter set, but no workspace folder present`)
continue
}
nodes.push(node)
}
return nodes
}
module.exports = getWorkspaceNodes
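
A usage sketch, assuming `tree` was loaded from a workspaces project; the workspace names are hypothetical:

```js
const getWorkspaceNodes = require('./get-workspace-nodes.js')

const nodes = getWorkspaceNodes(tree, ['pkg-a', 'pkg-b'])
for (const node of nodes) {
  console.log(node.name, '->', node.location)
}
```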

8
spa/node_modules/@npmcli/arborist/lib/index.js generated vendored Normal file
View File

@@ -0,0 +1,8 @@
module.exports = require('./arborist/index.js')
module.exports.Arborist = module.exports
module.exports.Node = require('./node.js')
module.exports.Link = require('./link.js')
module.exports.Edge = require('./edge.js')
module.exports.Shrinkwrap = require('./shrinkwrap.js')
// XXX export the other classes, too. shrinkwrap, diff, etc.
// they're handy!

138
spa/node_modules/@npmcli/arborist/lib/inventory.js generated vendored Normal file
View File

@@ -0,0 +1,138 @@
// a class to manage an inventory and set of indexes of a set of objects based
// on specific fields.
const { hasOwnProperty } = Object.prototype
const debug = require('./debug.js')
const keys = ['name', 'license', 'funding', 'realpath', 'packageName']
class Inventory extends Map {
#index
constructor () {
super()
this.#index = new Map()
for (const key of keys) {
this.#index.set(key, new Map())
}
}
// XXX where is this used?
get primaryKey () {
return 'location'
}
// XXX where is this used?
get indexes () {
return [...keys]
}
* filter (fn) {
for (const node of this.values()) {
if (fn(node)) {
yield node
}
}
}
add (node) {
const root = super.get('')
if (root && node.root !== root && node.root !== root.root) {
debug(() => {
throw Object.assign(new Error('adding external node to inventory'), {
root: root.path,
node: node.path,
nodeRoot: node.root.path,
})
})
return
}
const current = super.get(node.location)
if (current) {
if (current === node) {
return
}
this.delete(current)
}
super.set(node.location, node)
for (const [key, map] of this.#index.entries()) {
let val
if (hasOwnProperty.call(node, key)) {
// if the node has the value, use it even if it's false
val = node[key]
} else if (key === 'license' && node.package) {
// handling for the outdated "licenses" array, just pick the first one
// also support the alternative spelling "licence"
if (node.package.license) {
val = node.package.license
} else if (node.package.licence) {
val = node.package.licence
} else if (Array.isArray(node.package.licenses)) {
val = node.package.licenses[0]
} else if (Array.isArray(node.package.licences)) {
val = node.package.licences[0]
}
} else if (node[key]) {
val = node[key]
} else {
val = node.package?.[key]
}
if (val && typeof val === 'object') {
// We currently only use license and funding
/* istanbul ignore next - not used */
if (key === 'license') {
val = val.type
} else if (key === 'funding') {
val = val.url
}
}
if (!map.has(val)) {
map.set(val, new Set())
}
map.get(val).add(node)
}
}
delete (node) {
if (!this.has(node)) {
return
}
super.delete(node.location)
for (const [key, map] of this.#index.entries()) {
let val
if (node[key] !== undefined) {
val = node[key]
} else {
val = node.package?.[key]
}
const set = map.get(val)
if (set) {
set.delete(node)
if (set.size === 0) {
map.delete(val)
}
}
}
}
query (key, val) {
const map = this.#index.get(key)
if (arguments.length === 2) {
if (map.has(val)) {
return map.get(val)
}
return new Set()
}
return map.keys()
}
has (node) {
return super.get(node.location) === node
}
set (k, v) {
throw new Error('direct set() not supported, use inventory.add(node)')
}
}
module.exports = Inventory
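
The indexes above make reverse lookups cheap. For example, on a loaded tree (the package name is hypothetical):

```js
// all nodes with a given name, anywhere in the tree
for (const node of tree.inventory.query('name', 'react')) {
  console.log(node.location, node.version)
}
// with a single argument, query() yields the distinct indexed values
const licenses = [...tree.inventory.query('license')]
```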

126
spa/node_modules/@npmcli/arborist/lib/link.js generated vendored Normal file
View File

@@ -0,0 +1,126 @@
const relpath = require('./relpath.js')
const Node = require('./node.js')
const _loadDeps = Symbol.for('Arborist.Node._loadDeps')
const _target = Symbol.for('_target')
const { dirname } = require('path')
// defined by Node class
const _delistFromMeta = Symbol.for('_delistFromMeta')
const _refreshLocation = Symbol.for('_refreshLocation')
class Link extends Node {
constructor (options) {
const { root, realpath, target, parent, fsParent, isStoreLink } = options
if (!realpath && !(target && target.path)) {
throw new TypeError('must provide realpath for Link node')
}
super({
...options,
realpath: realpath || target.path,
root: root || (parent ? parent.root
: fsParent ? fsParent.root
: target ? target.root
: null),
})
this.isStoreLink = isStoreLink || false
if (target) {
this.target = target
} else if (this.realpath === this.root.path) {
this.target = this.root
} else {
this.target = new Node({
...options,
path: realpath,
parent: null,
fsParent: null,
root: this.root,
})
}
}
get version () {
return this.target ? this.target.version : this.package.version || ''
}
get target () {
return this[_target]
}
set target (target) {
const current = this[_target]
if (target === current) {
return
}
if (!target) {
if (current && current.linksIn) {
current.linksIn.delete(this)
}
if (this.path) {
this[_delistFromMeta]()
this[_target] = null
this.package = {}
this[_refreshLocation]()
} else {
this[_target] = null
}
return
}
if (!this.path) {
// temp node pending assignment to a tree
// we know it's not in the inventory yet, because no path.
if (target.path) {
this.realpath = target.path
} else {
target.path = target.realpath = this.realpath
}
target.root = this.root
this[_target] = target
target.linksIn.add(this)
this.package = target.package
return
}
// have to refresh metadata, because either realpath or package
// is very likely changing.
this[_delistFromMeta]()
this.package = target.package
this.realpath = target.path
this[_refreshLocation]()
target.root = this.root
}
// a link always resolves to the relative path to its target
get resolved () {
// the path/realpath guard is there for the benefit of setting
// these things in the "wrong" order
return this.path && this.realpath
? `file:${relpath(dirname(this.path), this.realpath).replace(/#/g, '%23')}`
: null
}
set resolved (r) {}
// deps are resolved on the target, not the Link
// so this is a no-op
[_loadDeps] () {}
// links can't have children, only their targets can
// fix it to an empty list so that we can still call
// things that iterate over them, just as a no-op
get children () {
return new Map()
}
set children (c) {}
get isLink () {
return true
}
}
module.exports = Link

1473
spa/node_modules/@npmcli/arborist/lib/node.js generated vendored Normal file

File diff suppressed because it is too large

38
spa/node_modules/@npmcli/arborist/lib/optional-set.js generated vendored Normal file
View File

@@ -0,0 +1,38 @@
// when an optional dep fails to install, we need to remove the branch of the
// graph up to the first optionalDependencies, as well as any nodes that are
// only required by other nodes in the set.
//
// This function finds the set of nodes that will need to be removed in that
// case.
//
// Note that this is *only* going to work with trees where calcDepFlags
// has been called, because we rely on the node.optional flag.
const gatherDepSet = require('./gather-dep-set.js')
const optionalSet = node => {
if (!node.optional) {
return new Set()
}
// start with the node, then walk up the dependency graph until we
// get to the boundaries that define the optional set. since the
// node is optional, we know that all paths INTO this area of the
// graph are optional, but there may be non-optional dependencies
// WITHIN the area.
const set = new Set([node])
for (const node of set) {
for (const edge of node.edgesIn) {
if (!edge.optional) {
set.add(edge.from)
}
}
}
// now that we've hit the boundary, gather the rest of the nodes in
// the optional section. that's the set of dependencies that are only
// depended upon by other nodes within the set, or optional dependencies
// from outside the set.
return gatherDepSet(set, edge => !edge.optional)
}
module.exports = optionalSet
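
A sketch of the intended use, roughly what arborist does when an optional dep fails to install (the handler name is hypothetical):

```js
const optionalSet = require('./optional-set.js')

const pruneFailedOptional = failedNode => {
  for (const node of optionalSet(failedNode)) {
    node.root = null // detaching from the root prunes the node
  }
}
```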

11
spa/node_modules/@npmcli/arborist/lib/override-resolves.js generated vendored Normal file
View File

@@ -0,0 +1,11 @@
function overrideResolves (resolved, opts) {
const { omitLockfileRegistryResolved = false } = opts
if (omitLockfileRegistryResolved) {
return undefined
}
return resolved
}
module.exports = { overrideResolves }

147
spa/node_modules/@npmcli/arborist/lib/override-set.js generated vendored Normal file
View File

@@ -0,0 +1,147 @@
const npa = require('npm-package-arg')
const semver = require('semver')
class OverrideSet {
constructor ({ overrides, key, parent }) {
this.parent = parent
this.children = new Map()
if (typeof overrides === 'string') {
overrides = { '.': overrides }
}
// change a literal empty string to * so we can use truthiness checks on
// the value property later
if (overrides['.'] === '') {
overrides['.'] = '*'
}
if (parent) {
const spec = npa(key)
if (!spec.name) {
throw new Error(`Override without name: ${key}`)
}
this.name = spec.name
spec.name = ''
this.key = key
this.keySpec = spec.toString()
this.value = overrides['.'] || this.keySpec
}
for (const [key, childOverrides] of Object.entries(overrides)) {
if (key === '.') {
continue
}
const child = new OverrideSet({
parent: this,
key,
overrides: childOverrides,
})
this.children.set(child.key, child)
}
}
getEdgeRule (edge) {
for (const rule of this.ruleset.values()) {
if (rule.name !== edge.name) {
continue
}
// if keySpec is * we found our override
if (rule.keySpec === '*') {
return rule
}
let spec = npa(`${edge.name}@${edge.spec}`)
if (spec.type === 'alias') {
spec = spec.subSpec
}
if (spec.type === 'git') {
if (spec.gitRange && semver.intersects(spec.gitRange, rule.keySpec)) {
return rule
}
continue
}
if (spec.type === 'range' || spec.type === 'version') {
if (semver.intersects(spec.fetchSpec, rule.keySpec)) {
return rule
}
continue
}
// if we got this far, the spec type is one of tag, directory or file
// which means we have no real way to make version comparisons, so we
// just accept the override
return rule
}
return this
}
getNodeRule (node) {
for (const rule of this.ruleset.values()) {
if (rule.name !== node.name) {
continue
}
if (semver.satisfies(node.version, rule.keySpec) ||
semver.satisfies(node.version, rule.value)) {
return rule
}
}
return this
}
getMatchingRule (node) {
for (const rule of this.ruleset.values()) {
if (rule.name !== node.name) {
continue
}
if (semver.satisfies(node.version, rule.keySpec) ||
semver.satisfies(node.version, rule.value)) {
return rule
}
}
return null
}
* ancestry () {
for (let ancestor = this; ancestor; ancestor = ancestor.parent) {
yield ancestor
}
}
get isRoot () {
return !this.parent
}
get ruleset () {
const ruleset = new Map()
for (const override of this.ancestry()) {
for (const kid of override.children.values()) {
if (!ruleset.has(kid.key)) {
ruleset.set(kid.key, kid)
}
}
if (!override.isRoot && !ruleset.has(override.key)) {
ruleset.set(override.key, override)
}
}
return ruleset
}
}
module.exports = OverrideSet
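
For example, building a rule set from a package.json-style `overrides` object (names and versions hypothetical):

```js
const OverrideSet = require('./override-set.js')

const overrides = new OverrideSet({
  overrides: {
    foo: '1.0.0', // pin foo everywhere
    'bar@^2.0.0': { // only for bar versions matching ^2.0.0...
      baz: '2.2.0', // ...pin its baz dependency
    },
  },
})

for (const [key, rule] of overrides.ruleset) {
  console.log(key, '->', rule.value)
}
// foo -> 1.0.0
// bar@^2.0.0 -> ^2.0.0  (no '.' entry, so value falls back to the key spec)
```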

77
spa/node_modules/@npmcli/arborist/lib/peer-entry-sets.js generated vendored Normal file
View File

@@ -0,0 +1,77 @@
// Given a node in a tree, return all of the peer dependency sets that
// it is a part of, with the entry (top or non-peer) edges into the sets
// identified.
//
// With this information, we can determine whether it is appropriate to
// replace the entire peer set with another (and remove the old one),
// push the set deeper into the tree, and so on.
//
// Returns a Map of { edge => Set(peerNodes) },
const peerEntrySets = node => {
// this is the union of all peer groups that the node is a part of
// later, we identify all of the entry edges, and create a set of
// 1 or more overlapping sets that this node is a part of.
const unionSet = new Set([node])
for (const node of unionSet) {
for (const edge of node.edgesOut.values()) {
if (edge.valid && edge.peer && edge.to) {
unionSet.add(edge.to)
}
}
for (const edge of node.edgesIn) {
if (edge.valid && edge.peer) {
unionSet.add(edge.from)
}
}
}
const entrySets = new Map()
for (const peer of unionSet) {
for (const edge of peer.edgesIn) {
// if not valid, it doesn't matter anyway. either it's been previously
// peerConflicted, or it's the thing we're interested in replacing.
if (!edge.valid) {
continue
}
// this is the entry point into the peer set
if (!edge.peer || edge.from.isTop) {
// get the subset of peer brought in by this peer entry edge
const sub = new Set([peer])
for (const peer of sub) {
for (const edge of peer.edgesOut.values()) {
if (edge.valid && edge.peer && edge.to) {
sub.add(edge.to)
}
}
}
// if this subset does not include the node we are focused on,
// then it is not relevant for our purposes. Example:
//
// a -> (b, c, d)
// b -> PEER(d) b -> d -> e -> f <-> g
// c -> PEER(f, h) c -> (f <-> g, h -> g)
// d -> PEER(e) d -> e -> f <-> g
// e -> PEER(f)
// f -> PEER(g)
// g -> PEER(f)
// h -> PEER(g)
//
// The unionSet(e) will include c, but we don't actually care about
// it. We only expanded to the edge of the peer nodes in order to
// find the entry edges that caused the inclusion of peer sets
// including (e), so we want:
// Map{
// Edge(a->b) => Set(b, d, e, f, g)
// Edge(a->d) => Set(d, e, f, g)
// }
if (sub.has(node)) {
entrySets.set(edge, sub)
}
}
}
}
return entrySets
}
module.exports = peerEntrySets

569
spa/node_modules/@npmcli/arborist/lib/place-dep.js generated vendored Normal file
View File

@@ -0,0 +1,569 @@
// Given a dep, a node that depends on it, and the edge representing that
// dependency, place the dep somewhere in the node's tree, and all of its
// peer dependencies.
//
// Handles all of the tree updating needed to place the dep, including
// removing replaced nodes, pruning now-extraneous or invalidated nodes,
// and saves a set of what was placed and what needs re-evaluation as
// a result.
const localeCompare = require('@isaacs/string-locale-compare')('en')
const log = require('proc-log')
const { cleanUrl } = require('npm-registry-fetch')
const deepestNestingTarget = require('./deepest-nesting-target.js')
const CanPlaceDep = require('./can-place-dep.js')
const {
KEEP,
CONFLICT,
} = CanPlaceDep
const debug = require('./debug.js')
const Link = require('./link.js')
const gatherDepSet = require('./gather-dep-set.js')
const peerEntrySets = require('./peer-entry-sets.js')
class PlaceDep {
constructor (options) {
this.auditReport = options.auditReport
this.dep = options.dep
this.edge = options.edge
this.explicitRequest = options.explicitRequest
this.force = options.force
this.installLinks = options.installLinks
this.installStrategy = options.installStrategy
this.legacyPeerDeps = options.legacyPeerDeps
this.parent = options.parent || null
this.preferDedupe = options.preferDedupe
this.strictPeerDeps = options.strictPeerDeps
this.updateNames = options.updateNames
this.canPlace = null
this.canPlaceSelf = null
// XXX this only appears to be used by tests
this.checks = new Map()
this.children = []
this.needEvaluation = new Set()
this.peerConflict = null
this.placed = null
this.target = null
this.current = this.edge.to
this.name = this.edge.name
this.top = this.parent?.top || this
// nothing to do if the edge is fine as it is
if (this.edge.to &&
!this.edge.error &&
!this.explicitRequest &&
!this.updateNames.includes(this.edge.name) &&
!this.auditReport?.isVulnerable(this.edge.to)) {
return
}
// walk up the tree until we hit either a top/root node, or a place
// where the dep is not a peer dep.
const start = this.getStartNode()
for (const target of start.ancestry()) {
// if the current location has a peerDep on it, then we can't place here
// this is pretty rare to hit, since we always prefer deduping peers,
// and the getStartNode will start us out above any peers from the
// thing that depends on it. but we could hit it with something like:
//
// a -> (b@1, c@1)
// +-- c@1
// +-- b -> PEEROPTIONAL(v) (c@2)
// +-- c@2 -> (v)
//
// So we check if we can place v under c@2, that's fine.
// Then we check under b, and can't, because of the optional peer dep.
// but we CAN place it under a, so the correct thing to do is keep
// walking up the tree.
const targetEdge = target.edgesOut.get(this.edge.name)
if (!target.isTop && targetEdge && targetEdge.peer) {
continue
}
const cpd = new CanPlaceDep({
dep: this.dep,
edge: this.edge,
// note: this sets the parent's canPlace as the parent of this
// canPlace, but it does NOT add this canPlace to the parent's
// children. This way, we can know that it's a peer dep, and
// get the top edge easily, while still maintaining the
// tree of checks that factored into the original decision.
parent: this.parent && this.parent.canPlace,
target,
preferDedupe: this.preferDedupe,
explicitRequest: this.explicitRequest,
})
this.checks.set(target, cpd)
// It's possible that a "conflict" is a conflict among the *peers* of
// a given node we're trying to place, but there actually is no current
// node. Eg,
// root -> (a, b)
// a -> PEER(c)
// b -> PEER(d)
// d -> PEER(c@2)
// We place (a), and get a peer of (c) along with it.
// then we try to place (b), and get CONFLICT in the check, because
// of the conflicting peer from (b)->(d)->(c@2). In that case, we
// should treat (b) and (d) as OK, and place them in the last place
// where they did not themselves conflict, and skip c@2 if conflict
// is ok by virtue of being forced or not ours and not strict.
if (cpd.canPlaceSelf !== CONFLICT) {
this.canPlaceSelf = cpd
}
// we found a place this can go, along with all its peer friends.
// we break when we get the first conflict
if (cpd.canPlace !== CONFLICT) {
this.canPlace = cpd
} else {
break
}
// if it's a load failure, just plop it in the first place attempted,
// since we're going to crash the build or prune it out anyway.
// but, this will frequently NOT be a successful canPlace, because
// it'll have no version or other information.
if (this.dep.errors.length) {
break
}
// nest packages like npm v1 and v2
// very disk-inefficient
if (this.installStrategy === 'nested') {
break
}
// when installing globally, or just in global style, we never place
// deps above the first level.
if (this.installStrategy === 'shallow') {
const rp = target.resolveParent
if (rp && rp.isProjectRoot) {
break
}
}
}
// if we can't find a target, that means that the last place checked,
// and all the places before it, had a conflict.
if (!this.canPlace) {
// if not forced, and it's our dep, or strictPeerDeps is set, then
// this is an ERESOLVE error.
if (!this.force && (this.isMine || this.strictPeerDeps)) {
return this.failPeerConflict()
}
// ok! we're gonna allow the conflict, but we should still warn
// if we have a current, then we treat CONFLICT as a KEEP.
// otherwise, we just skip it. Only warn on the one that actually
// could not be placed somewhere.
if (!this.canPlaceSelf) {
this.warnPeerConflict()
return
}
this.canPlace = this.canPlaceSelf
}
// now we have a target, a tree of CanPlaceDep results for the peer group,
// and we are ready to go
/* istanbul ignore next */
if (!this.canPlace) {
debug(() => {
throw new Error('canPlace not set, but trying to place in tree')
})
return
}
const { target } = this.canPlace
log.silly(
'placeDep',
target.location || 'ROOT',
`${this.dep.name}@${this.dep.version}`,
this.canPlace.description,
`for: ${this.edge.from.package._id || this.edge.from.location}`,
`want: ${cleanUrl(this.edge.spec || '*')}`
)
const placementType = this.canPlace.canPlace === CONFLICT
? this.canPlace.canPlaceSelf
: this.canPlace.canPlace
// if we're placing in the tree with --force, we can get here even though
// it's a conflict. Treat it as a KEEP, but warn and move on.
if (placementType === KEEP) {
// this was a peerConflicted peer dep
if (this.edge.peer && !this.edge.valid) {
this.warnPeerConflict()
}
// if we get a KEEP in a update scenario, then we MAY have something
// already duplicating this unnecessarily! For example:
// ```
// root (dep: y@1)
// +-- x (dep: y@1.1)
// | +-- y@1.1.0 (replacing with 1.1.2, got KEEP at the root)
// +-- y@1.1.2 (updated already from 1.0.0)
// ```
// Now say we do `reify({update:['y']})`, and the latest version is
// 1.1.2, which we now have in the root. We'll try to place y@1.1.2
// first in x, then in the root, ending with KEEP, because we already
// have it. In that case, we ought to REMOVE the nm/x/nm/y node, because
// it is an unnecessary duplicate.
this.pruneDedupable(target)
return
}
// we were told to place it here in the target, so either it does not
// already exist in the tree, OR it's shadowed.
// handle otherwise unresolvable dependency nesting loops by
// creating a symbolic link
// a1 -> b1 -> a2 -> b2 -> a1 -> ...
// instead of nesting forever, when the loop occurs, create
// a symbolic link to the earlier instance
for (let p = target; p; p = p.resolveParent) {
if (p.matches(this.dep) && !p.isTop) {
this.placed = new Link({ parent: target, target: p })
return
}
}
// XXX if we are replacing SOME of a peer entry group, we will need to
// remove any that are not being replaced and will now be invalid, and
// re-evaluate them deeper into the tree.
const virtualRoot = this.dep.parent
this.placed = new this.dep.constructor({
name: this.dep.name,
pkg: this.dep.package,
resolved: this.dep.resolved,
integrity: this.dep.integrity,
installLinks: this.installLinks,
legacyPeerDeps: this.legacyPeerDeps,
error: this.dep.errors[0],
...(this.dep.overrides ? { overrides: this.dep.overrides } : {}),
...(this.dep.isLink ? { target: this.dep.target, realpath: this.dep.realpath } : {}),
})
this.oldDep = target.children.get(this.name)
if (this.oldDep) {
this.replaceOldDep()
} else {
this.placed.parent = target
}
// if it's a peerConflicted peer dep, warn about it
if (this.edge.peer && !this.placed.satisfies(this.edge)) {
this.warnPeerConflict()
}
// If the edge is not an error, then we're updating something, and
// MAY end up putting a better/identical node further up the tree in
// a way that causes an unnecessary duplication. If so, remove the
// now-unnecessary node.
if (this.edge.valid && this.edge.to && this.edge.to !== this.placed) {
this.pruneDedupable(this.edge.to, false)
}
// in case we just made some duplicates that can be removed,
// prune anything deeper in the tree that can be replaced by this
for (const node of target.root.inventory.query('name', this.name)) {
if (node.isDescendantOf(target) && !node.isTop) {
this.pruneDedupable(node, false)
// only walk the direct children of the ones we kept
if (node.root === target.root) {
for (const kid of node.children.values()) {
this.pruneDedupable(kid, false)
}
}
}
}
// also place its unmet or invalid peer deps at this location
// loop through any peer deps from the thing we just placed, and place
// those ones as well. it's safe to do this with the virtual nodes,
// because we're copying rather than moving them out of the virtual root,
// otherwise they'd be gone and the peer set would change throughout
// this loop.
for (const peerEdge of this.placed.edgesOut.values()) {
if (peerEdge.valid || !peerEdge.peer || peerEdge.peerConflicted) {
continue
}
const peer = virtualRoot.children.get(peerEdge.name)
// Note: if the virtualRoot *doesn't* have the peer, then that means
// it's an optional peer dep. If it's not being properly met (ie,
// peerEdge.valid is false), then this is likely heading for an
// ERESOLVE error, unless it can walk further up the tree.
if (!peer) {
continue
}
// peerConflicted peerEdge, just accept what's there already
if (!peer.satisfies(peerEdge)) {
continue
}
this.children.push(new PlaceDep({
auditReport: this.auditReport,
explicitRequest: this.explicitRequest,
force: this.force,
installLinks: this.installLinks,
installStrategy: this.installStrategy,
legacyPeerDeps: this.legacyPeerDeps,
preferDedupe: this.preferDedupe,
strictPeerDeps: this.strictPeerDeps,
updateNames: this.updateNames,
parent: this,
dep: peer,
node: this.placed,
edge: peerEdge,
}))
}
}
replaceOldDep () {
const target = this.oldDep.parent
// XXX handle replacing an entire peer group?
// what about cases where we need to push some other peer groups deeper
// into the tree? all the tree updating should be done here, and track
// all the things that we add and remove, so that we can know what
// to re-evaluate.
// if we're replacing, we should also remove any nodes for edges that
// are now invalid, and where this (or its deps) is the only dependent,
// and also recurse on that pruning. Otherwise leaving that dep node
// around can result in spurious conflicts pushing nodes deeper into
// the tree than needed in the case of cycles that will be removed
// later anyway.
const oldDeps = []
for (const [name, edge] of this.oldDep.edgesOut.entries()) {
if (!this.placed.edgesOut.has(name) && edge.to) {
oldDeps.push(...gatherDepSet([edge.to], e => e.to !== edge.to))
}
}
// gather all peer edgesIn which are at this level, and will not be
// satisfied by the new dependency. Those are the peer sets that need
// to be either warned about (if they cannot go deeper), or removed and
// re-placed (if they can).
const prunePeerSets = []
for (const edge of this.oldDep.edgesIn) {
if (this.placed.satisfies(edge) ||
!edge.peer ||
edge.from.parent !== target ||
edge.peerConflicted) {
// not a peer dep, not invalid, or not from this level, so it's fine
// to just let it re-evaluate as a problemEdge later, or let it be
// satisfied by the new dep being placed.
continue
}
for (const entryEdge of peerEntrySets(edge.from).keys()) {
// either this one needs to be pruned and re-evaluated, or marked
// as peerConflicted and warned about. If the entryEdge comes in from
// the root or a workspace, then we have to leave it alone, and in that
// case, it will have already warned or crashed by getting to this point
const entryNode = entryEdge.to
const deepestTarget = deepestNestingTarget(entryNode)
if (deepestTarget !== target &&
!(entryEdge.from.isProjectRoot || entryEdge.from.isWorkspace)) {
prunePeerSets.push(...gatherDepSet([entryNode], e => {
return e.to !== entryNode && !e.peerConflicted
}))
} else {
this.warnPeerConflict(edge, this.dep)
}
}
}
this.placed.replace(this.oldDep)
this.pruneForReplacement(this.placed, oldDeps)
for (const dep of prunePeerSets) {
for (const edge of dep.edgesIn) {
this.needEvaluation.add(edge.from)
}
dep.root = null
}
}
pruneForReplacement (node, oldDeps) {
// gather up all the now-invalid/extraneous edgesOut, as long as they are
// only depended upon by the old node/deps
const invalidDeps = new Set([...node.edgesOut.values()]
.filter(e => e.to && !e.valid).map(e => e.to))
for (const dep of oldDeps) {
const set = gatherDepSet([dep], e => e.to !== dep && e.valid)
for (const dep of set) {
invalidDeps.add(dep)
}
}
// ignore dependency edges from the node being replaced, but
// otherwise filter the set down to just the set with no
// dependencies from outside the set, except the node in question.
const deps = gatherDepSet(invalidDeps, edge =>
edge.from !== node && edge.to !== node && edge.valid)
// now just delete whatever's left, because it's junk
for (const dep of deps) {
dep.root = null
}
}
// prune all the nodes in a branch of the tree that can be safely removed
// This is only the most basic duplication detection; it finds if there
// is another satisfying node further up the tree, and if so, dedupes.
// Even when installStrategy is nested, we do this amount of deduplication.
pruneDedupable (node, descend = true) {
if (node.canDedupe(this.preferDedupe)) {
// gather up all deps that have no valid edges in from outside
// the dep set, except for this node we're deduping, so that we
// also prune deps that would be made extraneous.
const deps = gatherDepSet([node], e => e.to !== node && e.valid)
for (const node of deps) {
node.root = null
}
return
}
if (descend) {
// sort these so that they're deterministically ordered; otherwise, the
// resulting tree shape is dependent on the order in which they
// happened to be resolved.
const nodeSort = (a, b) => localeCompare(a.location, b.location)
const children = [...node.children.values()].sort(nodeSort)
for (const child of children) {
this.pruneDedupable(child)
}
const fsChildren = [...node.fsChildren].sort(nodeSort)
for (const topNode of fsChildren) {
const children = [...topNode.children.values()].sort(nodeSort)
for (const child of children) {
this.pruneDedupable(child)
}
}
}
}
get isMine () {
const { edge } = this.top
const { from: node } = edge
if (node.isWorkspace || node.isProjectRoot) {
return true
}
if (!edge.peer) {
return false
}
// re-entry case. check if any non-peer edges come from the project,
// or any entryEdges on peer groups are from the root.
let hasPeerEdges = false
for (const edge of node.edgesIn) {
if (edge.peer) {
hasPeerEdges = true
continue
}
if (edge.from.isWorkspace || edge.from.isProjectRoot) {
return true
}
}
if (hasPeerEdges) {
for (const edge of peerEntrySets(node).keys()) {
if (edge.from.isWorkspace || edge.from.isProjectRoot) {
return true
}
}
}
return false
}
warnPeerConflict (edge, dep) {
edge = edge || this.edge
dep = dep || this.dep
edge.peerConflicted = true
const expl = this.explainPeerConflict(edge, dep)
log.warn('ERESOLVE', 'overriding peer dependency', expl)
}
failPeerConflict (edge, dep) {
edge = edge || this.top.edge
dep = dep || this.top.dep
const expl = this.explainPeerConflict(edge, dep)
throw Object.assign(new Error('could not resolve'), expl)
}
explainPeerConflict (edge, dep) {
const { from: node } = edge
const curNode = node.resolve(edge.name)
// XXX decorate more with this.canPlace and this.canPlaceSelf,
// this.checks, this.children, walk over conflicted peers, etc.
const expl = {
code: 'ERESOLVE',
edge: edge.explain(),
dep: dep.explain(edge),
force: this.force,
isMine: this.isMine,
strictPeerDeps: this.strictPeerDeps,
}
if (this.parent) {
// this is the conflicted peer
expl.current = curNode && curNode.explain(edge)
expl.peerConflict = this.current && this.current.explain(this.edge)
} else {
expl.current = curNode && curNode.explain()
if (this.canPlaceSelf && this.canPlaceSelf.canPlaceSelf !== CONFLICT) {
// failed while checking for a child dep
const cps = this.canPlaceSelf
for (const peer of cps.conflictChildren) {
if (peer.current) {
expl.peerConflict = {
current: peer.current.explain(),
peer: peer.dep.explain(peer.edge),
}
break
}
}
} else {
expl.peerConflict = {
current: this.current && this.current.explain(),
peer: this.dep.explain(this.edge),
}
}
}
return expl
}
getStartNode () {
// if we are a peer, then we MUST be at least as shallow as the peer
// dependent
const from = this.parent?.getStartNode() || this.edge.from
return deepestNestingTarget(from, this.name)
}
// XXX this only appears to be used by tests
get allChildren () {
const set = new Set(this.children)
for (const child of set) {
for (const grandchild of child.children) {
set.add(grandchild)
}
}
return [...set]
}
}
module.exports = PlaceDep
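
A rough usage sketch (a hypothetical caller, not the exact build-ideal-tree call site): a PlaceDep is constructed per unresolved edge, and the caller then inspects what was placed and which nodes need re-evaluation, matching the `placed` and `needEvaluation` properties used above.

```js
const PlaceDep = require('./place-dep.js')

// hypothetical helper: place `dep` to satisfy `edge`, report the outcome
const place = (edge, dep, opts = {}) => {
  const pd = new PlaceDep({ edge, dep, ...opts })
  return {
    placed: pd.placed, // the node inserted into the tree, if any
    dirty: [...pd.needEvaluation], // nodes whose edges must be re-checked
  }
}
```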

198
spa/node_modules/@npmcli/arborist/lib/printable.js generated vendored Normal file
View File

@@ -0,0 +1,198 @@
// helper function to output a clearer visualization
// of the current node and its descendants
const localeCompare = require('@isaacs/string-locale-compare')('en')
const util = require('util')
const relpath = require('./relpath.js')
class ArboristNode {
constructor (tree, path) {
this.name = tree.name
if (tree.packageName && tree.packageName !== this.name) {
this.packageName = tree.packageName
}
if (tree.version) {
this.version = tree.version
}
this.location = tree.location
this.path = tree.path
if (tree.realpath !== this.path) {
this.realpath = tree.realpath
}
if (tree.resolved !== null) {
this.resolved = tree.resolved
}
if (tree.extraneous) {
this.extraneous = true
}
if (tree.dev) {
this.dev = true
}
if (tree.optional) {
this.optional = true
}
if (tree.devOptional && !tree.dev && !tree.optional) {
this.devOptional = true
}
if (tree.peer) {
this.peer = true
}
if (tree.inBundle) {
this.bundled = true
}
if (tree.inDepBundle) {
this.bundler = tree.getBundler().location
}
if (tree.isProjectRoot) {
this.isProjectRoot = true
}
if (tree.isWorkspace) {
this.isWorkspace = true
}
const bd = tree.package && tree.package.bundleDependencies
if (bd && bd.length) {
this.bundleDependencies = bd
}
if (tree.inShrinkwrap) {
this.inShrinkwrap = true
} else if (tree.hasShrinkwrap) {
this.hasShrinkwrap = true
}
if (tree.error) {
this.error = treeError(tree.error)
}
if (tree.errors && tree.errors.length) {
this.errors = tree.errors.map(treeError)
}
if (tree.overrides) {
this.overrides = new Map([...tree.overrides.ruleset.values()]
.map((override) => [override.key, override.value]))
}
// edgesOut sorted by name
if (tree.edgesOut.size) {
this.edgesOut = new Map([...tree.edgesOut.entries()]
.sort(([a], [b]) => localeCompare(a, b))
.map(([name, edge]) => [name, new EdgeOut(edge)]))
}
// edgesIn sorted by location
if (tree.edgesIn.size) {
this.edgesIn = new Set([...tree.edgesIn]
.sort((a, b) => localeCompare(a.from.location, b.from.location))
.map(edge => new EdgeIn(edge)))
}
if (tree.workspaces && tree.workspaces.size) {
this.workspaces = new Map([...tree.workspaces.entries()]
.map(([name, path]) => [name, relpath(tree.root.realpath, path)]))
}
// fsChildren sorted by path
if (tree.fsChildren.size) {
this.fsChildren = new Set([...tree.fsChildren]
.sort(({ path: a }, { path: b }) => localeCompare(a, b))
.map(tree => printableTree(tree, path)))
}
// children sorted by name
if (tree.children.size) {
this.children = new Map([...tree.children.entries()]
.sort(([a], [b]) => localeCompare(a, b))
.map(([name, tree]) => [name, printableTree(tree, path)]))
}
}
}
class ArboristVirtualNode extends ArboristNode {
constructor (tree, path) {
super(tree, path)
this.sourceReference = printableTree(tree.sourceReference, path)
}
}
class ArboristLink extends ArboristNode {
constructor (tree, path) {
super(tree, path)
this.target = printableTree(tree.target, path)
}
}
const treeError = ({ code, path }) => ({
code,
...(path ? { path } : {}),
})
// print out edges without dumping the full node all over again
// this base class will toJSON as a plain old object, but the
// util.inspect() output will be a bit cleaner
class Edge {
constructor (edge) {
this.type = edge.type
this.name = edge.name
this.spec = edge.rawSpec || '*'
if (edge.rawSpec !== edge.spec) {
this.override = edge.spec
}
if (edge.error) {
this.error = edge.error
}
if (edge.peerConflicted) {
this.peerConflicted = edge.peerConflicted
}
}
}
// don't care about 'from' for edges out
class EdgeOut extends Edge {
constructor (edge) {
super(edge)
this.to = edge.to && edge.to.location
}
[util.inspect.custom] () {
return `{ ${this.type} ${this.name}@${this.spec}${
this.override ? ` overridden:${this.override}` : ''
}${
this.to ? ' -> ' + this.to : ''
}${
this.error ? ' ' + this.error : ''
}${
this.peerConflicted ? ' peerConflicted' : ''
} }`
}
}
// don't care about 'to' for edges in
class EdgeIn extends Edge {
constructor (edge) {
super(edge)
this.from = edge.from && edge.from.location
}
[util.inspect.custom] () {
return `{ ${this.from || '""'} ${this.type} ${this.name}@${this.spec}${
this.error ? ' ' + this.error : ''
}${
this.peerConflicted ? ' peerConflicted' : ''
} }`
}
}
const printableTree = (tree, path = []) => {
if (!tree) {
return tree
}
const Cls = tree.isLink ? ArboristLink
: tree.sourceReference ? ArboristVirtualNode
: ArboristNode
if (path.includes(tree)) {
const obj = Object.create(Cls.prototype)
return Object.assign(obj, { location: tree.location })
}
path.push(tree)
return new Cls(tree, path)
}
module.exports = printableTree
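
Since printableTree is a debugging aid, here is a minimal sketch of how it might be combined with util.inspect (relative require assumed, as from a sibling module in lib/):

```js
const util = require('util')
const printable = require('./printable.js')

// dump a loaded Arborist tree in the compact form defined above;
// EdgeOut/EdgeIn supply util.inspect.custom, keeping edges one-line
const dumpTree = tree =>
  console.log(util.inspect(printable(tree), { depth: 6, colors: true }))
```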

858
spa/node_modules/@npmcli/arborist/lib/query-selector-all.js generated vendored Normal file
View File

@@ -0,0 +1,858 @@
'use strict'
const { resolve } = require('path')
const { parser, arrayDelimiter } = require('@npmcli/query')
const localeCompare = require('@isaacs/string-locale-compare')('en')
const log = require('proc-log')
const { minimatch } = require('minimatch')
const npa = require('npm-package-arg')
const pacote = require('pacote')
const semver = require('semver')
// handle results for parsed query asts; results are stored in a map whose
// keys point to each ast selector node, with the resulting array of
// arborist nodes as the value. That is essential to how we handle multiple
// query selectors, e.g: `#a, #b, #c` <- 3 diff ast selector nodes
class Results {
#currentAstSelector
#initialItems
#inventory
#outdatedCache = new Map()
#pendingCombinator
#results = new Map()
#targetNode
constructor (opts) {
this.#currentAstSelector = opts.rootAstNode.nodes[0]
this.#inventory = opts.inventory
this.#initialItems = opts.initialItems
this.#targetNode = opts.targetNode
this.currentResults = this.#initialItems
// We get this when first called and need to pass it to pacote
this.flatOptions = opts.flatOptions || {}
// reset by rootAstNode walker
this.currentAstNode = opts.rootAstNode
}
get currentResults () {
return this.#results.get(this.#currentAstSelector)
}
set currentResults (value) {
this.#results.set(this.#currentAstSelector, value)
}
// retrieves the initial items from which to start the filtering / matching
// for most of the different types of recognized ast nodes, e.g: class (aka
// depType), id, *, etc. In different contexts we need to start with the
// current list of filtered results; for example, a query for `.workspace`
// actually means the same as `*.workspace`, so we want to start with the
// full inventory if that's the first ast node we're reading, but if it
// appears in the middle of a query it should respect the previous filtered
// results. Combinators are a special case in which we always want to have
// the complete inventory list, in order to use the left-hand side ast node
// as a filter combined with the element on its right-hand side
get initialItems () {
const firstParsed =
(this.currentAstNode.parent.nodes[0] === this.currentAstNode) &&
(this.currentAstNode.parent.parent.type === 'root')
if (firstParsed) {
return this.#initialItems
}
if (this.currentAstNode.prev().type === 'combinator') {
return this.#inventory
}
return this.currentResults
}
// combinators need information about previously filtered items along
// with info on the items parsed / retrieved from the selector right
// past the combinator; for this reason, combinators are stored and
// only run as the last part of each selector logic
processPendingCombinator (nextResults) {
if (this.#pendingCombinator) {
const res = this.#pendingCombinator(this.currentResults, nextResults)
this.#pendingCombinator = null
this.currentResults = res
} else {
this.currentResults = nextResults
}
}
// when collecting results for a root astNode, we traverse the list of
// child selector nodes and collect all of their resulting arborist nodes
// into a single/flat Set of items; this ensures we also deduplicate items
collect (rootAstNode) {
return new Set(rootAstNode.nodes.flatMap(n => this.#results.get(n)))
}
// selector types map to the '.type' property of the ast nodes via `${astNode.type}Type`
//
// attribute selector [name=value], etc
attributeType () {
const nextResults = this.initialItems.filter(node =>
attributeMatch(this.currentAstNode, node.package)
)
this.processPendingCombinator(nextResults)
}
// dependency type selector (i.e. .prod, .dev, etc)
// css calls this a class; we interpret it as a dependency type
classType () {
const depTypeFn = depTypes[String(this.currentAstNode)]
if (!depTypeFn) {
throw Object.assign(
new Error(`\`${String(this.currentAstNode)}\` is not a supported dependency type.`),
{ code: 'EQUERYNODEPTYPE' }
)
}
const nextResults = depTypeFn(this.initialItems)
this.processPendingCombinator(nextResults)
}
// combinators (i.e. '>', ' ', '~')
combinatorType () {
this.#pendingCombinator = combinators[String(this.currentAstNode)]
}
// name selectors (i.e. #foo)
// css calls this id, we interpret it as name
idType () {
const name = this.currentAstNode.value
const nextResults = this.initialItems.filter(node =>
(name === node.name) || (name === node.package.name)
)
this.processPendingCombinator(nextResults)
}
// pseudo selectors (prefixed with :)
async pseudoType () {
const pseudoFn = `${this.currentAstNode.value.slice(1)}Pseudo`
if (!this[pseudoFn]) {
throw Object.assign(
new Error(`\`${this.currentAstNode.value
}\` is not a supported pseudo selector.`),
{ code: 'EQUERYNOPSEUDO' }
)
}
const nextResults = await this[pseudoFn]()
this.processPendingCombinator(nextResults)
}
selectorType () {
this.#currentAstSelector = this.currentAstNode
// starts a new array in which resulting items
// can be stored for each given ast selector
if (!this.currentResults) {
this.currentResults = []
}
}
universalType () {
this.processPendingCombinator(this.initialItems)
}
// pseudo selectors map to the 'value' property of the pseudo selectors in the ast nodes
// via selectors via `${value.slice(1)}Pseudo`
attrPseudo () {
const { lookupProperties, attributeMatcher } = this.currentAstNode
return this.initialItems.filter(node => {
let objs = [node.package]
for (const prop of lookupProperties) {
// if an isArray symbol is found, that means we'll need to iterate
// over the previously found array to make sure we traverse all of
// its indexes, testing for possible objects that may eventually
// hold more keys specified in a selector
if (prop === arrayDelimiter) {
objs = objs.flat()
continue
}
// otherwise just maps all currently found objs
// to the next prop from the lookup properties list,
// filters out any empty key lookup
objs = objs.flatMap(obj => obj[prop] || [])
// in case there's no property found in the lookup
// just filters that item out
const noAttr = objs.every(obj => !obj)
if (noAttr) {
return false
}
}
// if any of the potential object matches
// that item should be in the final result
return objs.some(obj => attributeMatch(attributeMatcher, obj))
})
}
emptyPseudo () {
return this.initialItems.filter(node => node.edgesOut.size === 0)
}
extraneousPseudo () {
return this.initialItems.filter(node => node.extraneous)
}
async hasPseudo () {
const found = []
for (const item of this.initialItems) {
// This is the one time initialItems differs from inventory
const res = await retrieveNodesFromParsedAst({
flatOptions: this.flatOptions,
initialItems: [item],
inventory: this.#inventory,
rootAstNode: this.currentAstNode.nestedNode,
targetNode: item,
})
if (res.size > 0) {
found.push(item)
}
}
return found
}
invalidPseudo () {
const found = []
for (const node of this.initialItems) {
for (const edge of node.edgesIn) {
if (edge.invalid) {
found.push(node)
break
}
}
}
return found
}
async isPseudo () {
const res = await retrieveNodesFromParsedAst({
flatOptions: this.flatOptions,
initialItems: this.initialItems,
inventory: this.#inventory,
rootAstNode: this.currentAstNode.nestedNode,
targetNode: this.currentAstNode,
})
return [...res]
}
linkPseudo () {
return this.initialItems.filter(node => node.isLink || (node.isTop && !node.isRoot))
}
missingPseudo () {
return this.#inventory.reduce((res, node) => {
for (const edge of node.edgesOut.values()) {
if (edge.missing) {
const pkg = { name: edge.name, version: edge.spec }
res.push(new this.#targetNode.constructor({ pkg }))
}
}
return res
}, [])
}
async notPseudo () {
const res = await retrieveNodesFromParsedAst({
flatOptions: this.flatOptions,
initialItems: this.initialItems,
inventory: this.#inventory,
rootAstNode: this.currentAstNode.nestedNode,
targetNode: this.currentAstNode,
})
const internalSelector = new Set(res)
return this.initialItems.filter(node =>
!internalSelector.has(node))
}
overriddenPseudo () {
return this.initialItems.filter(node => node.overridden)
}
pathPseudo () {
return this.initialItems.filter(node => {
if (!this.currentAstNode.pathValue) {
return true
}
return minimatch(
node.realpath.replace(/\\+/g, '/'),
resolve(node.root.realpath, this.currentAstNode.pathValue).replace(/\\+/g, '/')
)
})
}
privatePseudo () {
return this.initialItems.filter(node => node.package.private)
}
rootPseudo () {
return this.initialItems.filter(node => node === this.#targetNode.root)
}
scopePseudo () {
return this.initialItems.filter(node => node === this.#targetNode)
}
semverPseudo () {
const {
attributeMatcher,
lookupProperties,
semverFunc = 'infer',
semverValue,
} = this.currentAstNode
const { qualifiedAttribute } = attributeMatcher
if (!semverValue) {
// DEPRECATED: remove this warning and throw an error as part of @npmcli/arborist@6
log.warn('query', 'usage of :semver() with no parameters is deprecated')
return this.initialItems
}
if (!semver.valid(semverValue) && !semver.validRange(semverValue)) {
throw Object.assign(
new Error(`\`${semverValue}\` is not a valid semver version or range`),
{ code: 'EQUERYINVALIDSEMVER' })
}
const valueIsVersion = !!semver.valid(semverValue)
const nodeMatches = (node, obj) => {
// if we already have an operator, the user provided some test as part of the selector
// we evaluate that first because if it fails we don't want this node anyway
if (attributeMatcher.operator) {
if (!attributeMatch(attributeMatcher, obj)) {
// if the initial operator doesn't match, we're done
return false
}
}
const attrValue = obj[qualifiedAttribute]
// both valid and validRange return null for undefined, so this will skip both nodes that
// do not have the attribute defined as well as those where the attribute value is invalid
// and those where the value from the package.json is not a string
if ((!semver.valid(attrValue) && !semver.validRange(attrValue)) ||
typeof attrValue !== 'string') {
return false
}
const attrIsVersion = !!semver.valid(attrValue)
let actualFunc = semverFunc
// if we're asked to infer, we examine outputs to make a best guess
if (actualFunc === 'infer') {
if (valueIsVersion && attrIsVersion) {
// two versions -> semver.eq
actualFunc = 'eq'
} else if (!valueIsVersion && !attrIsVersion) {
// two ranges -> semver.intersects
actualFunc = 'intersects'
} else {
// anything else -> semver.satisfies
actualFunc = 'satisfies'
}
}
if (['eq', 'neq', 'gt', 'gte', 'lt', 'lte'].includes(actualFunc)) {
// both sides must be versions, but one is not
if (!valueIsVersion || !attrIsVersion) {
return false
}
return semver[actualFunc](attrValue, semverValue)
} else if (['gtr', 'ltr', 'satisfies'].includes(actualFunc)) {
// at least one side must be a version, but neither is
if (!valueIsVersion && !attrIsVersion) {
return false
}
return valueIsVersion
? semver[actualFunc](semverValue, attrValue)
: semver[actualFunc](attrValue, semverValue)
} else if (['intersects', 'subset'].includes(actualFunc)) {
// these accept two ranges and since a version is also a range, anything goes
return semver[actualFunc](attrValue, semverValue)
} else {
// user provided a function we don't know about, throw an error
throw Object.assign(new Error(`\`semver.${actualFunc}\` is not a supported operator.`),
{ code: 'EQUERYINVALIDOPERATOR' })
}
}
return this.initialItems.filter((node) => {
// no lookupProperties just means it's a top level property, see if it matches
if (!lookupProperties.length) {
return nodeMatches(node, node.package)
}
// this code is mostly duplicated from attrPseudo to traverse into the package until we get
// to our deepest requested object
let objs = [node.package]
for (const prop of lookupProperties) {
if (prop === arrayDelimiter) {
objs = objs.flat()
continue
}
objs = objs.flatMap(obj => obj[prop] || [])
const noAttr = objs.every(obj => !obj)
if (noAttr) {
return false
}
}
return objs.some(obj => nodeMatches(node, obj))
})
}
typePseudo () {
if (!this.currentAstNode.typeValue) {
return this.initialItems
}
return this.initialItems
.flatMap(node => {
const found = []
for (const edge of node.edgesIn) {
if (npa(`${edge.name}@${edge.spec}`).type === this.currentAstNode.typeValue) {
found.push(edge.to)
}
}
return found
})
}
dedupedPseudo () {
return this.initialItems.filter(node => node.target.edgesIn.size > 1)
}
async outdatedPseudo () {
const { outdatedKind = 'any' } = this.currentAstNode
// filter the initialItems
// NOTE: this uses a Promise.all around a map without in-line concurrency handling
// since the only async action taken is retrieving the packument, which is limited
// based on the max-sockets config in make-fetch-happen
const initialResults = await Promise.all(this.initialItems.map(async (node) => {
// the root can't be outdated, skip it
if (node.isProjectRoot) {
return false
}
// we cache the promise representing the full versions list; this helps
// reduce the number of requests we send by keeping population of the
// cache in a single tick, making it less likely that multiple requests
// for the same package will be in flight
if (!this.#outdatedCache.has(node.name)) {
this.#outdatedCache.set(node.name, getPackageVersions(node.name, this.flatOptions))
}
const availableVersions = await this.#outdatedCache.get(node.name)
// we attach _all_ versions to the queryContext to allow consumers to do their own
// filtering and comparisons
node.queryContext.versions = availableVersions
// next we further reduce the set to versions that are greater than the current one
const greaterVersions = availableVersions.filter((available) => {
return semver.gt(available, node.version)
})
// no newer versions than the current one, drop this node from the result set
if (!greaterVersions.length) {
return false
}
// if we got here, we know that newer versions exist, if the kind is 'any' we're done
if (outdatedKind === 'any') {
return node
}
// look for newer versions that differ from current by a specific part of the semver version
if (['major', 'minor', 'patch'].includes(outdatedKind)) {
// filter the versions greater than our current one based on semver.diff
const filteredVersions = greaterVersions.filter((version) => {
return semver.diff(node.version, version) === outdatedKind
})
// no available versions are of the correct diff type
if (!filteredVersions.length) {
return false
}
return node
}
// look for newer versions that satisfy at least one edgeIn to this node
if (outdatedKind === 'in-range') {
const inRangeContext = []
for (const edge of node.edgesIn) {
const inRangeVersions = greaterVersions.filter((version) => {
return semver.satisfies(version, edge.spec)
})
// this edge has no in-range candidates, just move on
if (!inRangeVersions.length) {
continue
}
inRangeContext.push({
from: edge.from.location,
versions: inRangeVersions,
})
}
// if we didn't find at least one match, drop this node
if (!inRangeContext.length) {
return false
}
// now add to the context each version that is in-range for each edgeIn
node.queryContext.outdated = {
...node.queryContext.outdated,
inRange: inRangeContext,
}
return node
}
// look for newer versions that _do not_ satisfy at least one edgeIn
if (outdatedKind === 'out-of-range') {
const outOfRangeContext = []
for (const edge of node.edgesIn) {
const outOfRangeVersions = greaterVersions.filter((version) => {
return !semver.satisfies(version, edge.spec)
})
// this edge has no out-of-range candidates, skip it
if (!outOfRangeVersions.length) {
continue
}
outOfRangeContext.push({
from: edge.from.location,
versions: outOfRangeVersions,
})
}
// if we didn't add at least one thing to the context, this node is not a match
if (!outOfRangeContext.length) {
return false
}
// attach the out-of-range context to the node
node.queryContext.outdated = {
...node.queryContext.outdated,
outOfRange: outOfRangeContext,
}
return node
}
// any other outdatedKind is unknown and will never match
return false
}))
// return an array with the holes for non-matching nodes removed
return initialResults.filter(Boolean)
}
}
// operators for attribute selectors
const attributeOperators = {
// attribute value is equivalent
'=' ({ attr, value, insensitive }) {
return attr === value
},
// attribute value contains word
'~=' ({ attr, value, insensitive }) {
return (attr.match(/\w+/g) || []).includes(value)
},
// attribute value contains string
'*=' ({ attr, value, insensitive }) {
return attr.includes(value)
},
// attribute value is equal or starts with
'|=' ({ attr, value, insensitive }) {
return attr.startsWith(`${value}-`)
},
// attribute value starts with
'^=' ({ attr, value, insensitive }) {
return attr.startsWith(value)
},
// attribute value ends with
'$=' ({ attr, value, insensitive }) {
return attr.endsWith(value)
},
}
const attributeOperator = ({ attr, value, insensitive, operator }) => {
if (typeof attr === 'number') {
attr = String(attr)
}
if (typeof attr !== 'string') {
// It's an object or an array, bail
return false
}
if (insensitive) {
attr = attr.toLowerCase()
}
return attributeOperators[operator]({
attr,
insensitive,
value,
})
}
const attributeMatch = (matcher, obj) => {
const insensitive = !!matcher.insensitive
const operator = matcher.operator || ''
const attribute = matcher.qualifiedAttribute
let value = matcher.value || ''
// return early if checking existence
if (operator === '') {
return Boolean(obj[attribute])
}
if (insensitive) {
value = value.toLowerCase()
}
// in case the current object is an array
// then we try to match every item in the array
if (Array.isArray(obj[attribute])) {
return obj[attribute].find((i, index) => {
const attr = obj[attribute][index] || ''
return attributeOperator({ attr, value, insensitive, operator })
})
} else {
const attr = obj[attribute] || ''
return attributeOperator({ attr, value, insensitive, operator })
}
}
const edgeIsType = (node, type, seen = new Set()) => {
for (const edgeIn of node.edgesIn) {
// TODO Need a test with an infinite loop
if (seen.has(edgeIn)) {
continue
}
seen.add(edgeIn)
if (edgeIn.type === type || edgeIn.from[type] || edgeIsType(edgeIn.from, type, seen)) {
return true
}
}
return false
}
const filterByType = (nodes, type) => {
const found = []
for (const node of nodes) {
if (node[type] || edgeIsType(node, type)) {
found.push(node)
}
}
return found
}
const depTypes = {
// dependency
'.prod' (prevResults) {
const found = []
for (const node of prevResults) {
if (!node.dev) {
found.push(node)
}
}
return found
},
// devDependency
'.dev' (prevResults) {
return filterByType(prevResults, 'dev')
},
// optionalDependency
'.optional' (prevResults) {
return filterByType(prevResults, 'optional')
},
// peerDependency
'.peer' (prevResults) {
return filterByType(prevResults, 'peer')
},
// workspace
'.workspace' (prevResults) {
return prevResults.filter(node => node.isWorkspace)
},
// bundledDependency
'.bundled' (prevResults) {
return prevResults.filter(node => node.inBundle)
},
}
// checks if a given node has a direct parent in any of the nodes provided in
// the compare nodes array
const hasParent = (node, compareNodes) => {
// All it takes is one so we loop and return on the first hit
for (const compareNode of compareNodes) {
// follows logical parent for link ancestors
if (node.isTop && (node.resolveParent === compareNode)) {
return true
}
// follows edges-in to check if they match a possible parent
for (const edge of node.edgesIn) {
if (edge && edge.from === compareNode) {
return true
}
}
}
return false
}
// checks if a given node is a descendant of any of the nodes provided in the
// compareNodes array
const hasAscendant = (node, compareNodes, seen = new Set()) => {
// TODO (future) loop over ancestry property
if (hasParent(node, compareNodes)) {
return true
}
if (node.isTop && node.resolveParent) {
/* istanbul ignore if - investigate if linksIn check obviates need for this */
if (hasAscendant(node.resolveParent, compareNodes)) {
return true
}
}
for (const edge of node.edgesIn) {
// TODO Need a test with an infinite loop
if (seen.has(edge)) {
continue
}
seen.add(edge)
if (edge && edge.from && hasAscendant(edge.from, compareNodes, seen)) {
return true
}
}
for (const linkNode of node.linksIn) {
if (hasAscendant(linkNode, compareNodes, seen)) {
return true
}
}
return false
}
const combinators = {
// direct descendant
'>' (prevResults, nextResults) {
return nextResults.filter(node => hasParent(node, prevResults))
},
// any descendant
' ' (prevResults, nextResults) {
return nextResults.filter(node => hasAscendant(node, prevResults))
},
// sibling
'~' (prevResults, nextResults) {
// Return any node in nextResults that is a sibling of (aka shares a
// parent with) a node in prevResults
const parentNodes = new Set() // Parents of everything in prevResults
for (const node of prevResults) {
for (const edge of node.edgesIn) {
// edge.from always exists because it's from another node's edgesIn
parentNodes.add(edge.from)
}
}
return nextResults.filter(node =>
!prevResults.includes(node) && hasParent(node, [...parentNodes])
)
},
}
// get a list of available versions of a package filtered to respect --before
// NOTE: this runs over each node and should not throw
const getPackageVersions = async (name, opts) => {
let packument
try {
packument = await pacote.packument(name, {
...opts,
fullMetadata: false, // we only need the corgi
})
} catch (err) {
// if the fetch fails, log a warning and pretend there are no versions
log.warn('query', `could not retrieve packument for ${name}: ${err.message}`)
return []
}
// start with a sorted list of all versions (lowest first)
let candidates = Object.keys(packument.versions).sort(semver.compare)
// if the packument has a time property, and the user passed a before flag, then
// we filter this list down to only those versions that existed before the specified date
if (packument.time && opts.before) {
candidates = candidates.filter((version) => {
// this version isn't found in the times at all, drop it
if (!packument.time[version]) {
return false
}
return Date.parse(packument.time[version]) <= opts.before
})
}
return candidates
}
const retrieveNodesFromParsedAst = async (opts) => {
// when we first call this it's the parsed query. all other times it's
// results.currentAstNode.nestedNode
const rootAstNode = opts.rootAstNode
if (!rootAstNode.nodes) {
return new Set()
}
const results = new Results(opts)
const astNodeQueue = new Set()
// walk is sync, so we have to build up our async functions and then await them later
rootAstNode.walk((nextAstNode) => {
astNodeQueue.add(nextAstNode)
})
for (const nextAstNode of astNodeQueue) {
// This is the only place we reset currentAstNode
results.currentAstNode = nextAstNode
const updateFn = `${results.currentAstNode.type}Type`
if (typeof results[updateFn] !== 'function') {
throw Object.assign(
new Error(`\`${results.currentAstNode.type}\` is not a supported selector.`),
{ code: 'EQUERYNOSELECTOR' }
)
}
await results[updateFn]()
}
return results.collect(rootAstNode)
}
// We are keeping this async so that, in the event that we do add async
// operators, we won't have to make a breaking change to this function
// signature.
const querySelectorAll = async (targetNode, query, flatOptions) => {
// This never changes; we just pass it around. But we can't scope it to
// this whole file if we ever want to support concurrent calls to this
// function.
const inventory = [...targetNode.root.inventory.values()]
// res is a Set of items returned for each parsed css ast selector
const res = await retrieveNodesFromParsedAst({
initialItems: inventory,
inventory,
flatOptions,
rootAstNode: parser(query),
targetNode,
})
// returns nodes ordered by realpath
return [...res].sort((a, b) => localeCompare(a.location, b.location))
}
module.exports = querySelectorAll
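
A usage sketch of the exported function, matching the signature above (`targetNode`, query string, flatOptions passed through to pacote):

```js
const querySelectorAll = require('./query-selector-all.js')

// given a loaded tree, find direct dev deps of the root, and any
// registry dep with a newer major version published
const example = async (tree, flatOptions = {}) => {
  const devDeps = await querySelectorAll(tree, ':root > .dev', flatOptions)
  const outdatedMajor = await querySelectorAll(tree, ':outdated(major)', flatOptions)
  return { devDeps, outdatedMajor }
}
```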

95
spa/node_modules/@npmcli/arborist/lib/realpath.js generated vendored Normal file
View File

@@ -0,0 +1,95 @@
// look up the realpath, but cache stats to minimize overhead
// If the parent folder is in the realpath cache, then we just
// lstat the child, since there's no need to do a full realpath
// This is not a separate module, and is much simpler than Node's
// built-in fs.realpath, because we only care about symbolic links,
// so we can handle many fewer edge cases.
const { lstat, readlink } = require('fs/promises')
const { resolve, basename, dirname } = require('path')
const realpathCached = (path, rpcache, stcache, depth) => {
// just a safety against extremely deep eloops
/* istanbul ignore next */
if (depth > 2000) {
throw eloop(path)
}
path = resolve(path)
if (rpcache.has(path)) {
return Promise.resolve(rpcache.get(path))
}
const dir = dirname(path)
const base = basename(path)
if (base && rpcache.has(dir)) {
return realpathChild(dir, base, rpcache, stcache, depth)
}
// if it's the root, then we know it's real
if (!base) {
rpcache.set(dir, dir)
return Promise.resolve(dir)
}
// the parent, what is that?
// find out, and then come back.
return realpathCached(dir, rpcache, stcache, depth + 1).then(() =>
realpathCached(path, rpcache, stcache, depth + 1))
}
const lstatCached = (path, stcache) => {
if (stcache.has(path)) {
return Promise.resolve(stcache.get(path))
}
const p = lstat(path).then(st => {
stcache.set(path, st)
return st
})
stcache.set(path, p)
return p
}
// This is a slight fib, as it doesn't actually occur during a stat syscall.
// But file systems are giant piles of lies, so whatever.
const eloop = path =>
Object.assign(new Error(
`ELOOP: too many symbolic links encountered, stat '${path}'`), {
errno: -62,
syscall: 'stat',
code: 'ELOOP',
path: path,
})
const realpathChild = (dir, base, rpcache, stcache, depth) => {
const realdir = rpcache.get(dir)
// that should be impossible
/* istanbul ignore next */
if (typeof realdir === 'undefined') {
throw new Error('in realpathChild without parent being in realpath cache')
}
const realish = resolve(realdir, base)
return lstatCached(realish, stcache).then(st => {
if (!st.isSymbolicLink()) {
rpcache.set(resolve(dir, base), realish)
return realish
}
return readlink(realish).then(target => {
const resolved = resolve(realdir, target)
if (realish === resolved) {
throw eloop(realish)
}
return realpathCached(resolved, rpcache, stcache, depth + 1)
}).then(real => {
rpcache.set(resolve(dir, base), real)
return real
})
})
}
module.exports = realpathCached
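
The caller owns both caches, so repeated lookups during a single tree load share results. A minimal sketch:

```js
const realpath = require('./realpath.js')

const rpcache = new Map() // resolved path -> realpath
const stcache = new Map() // path -> lstat result (or pending promise)

// resolve a possibly-symlinked install location, starting at depth 0
const resolveReal = path => realpath(path, rpcache, stcache, 0)

resolveReal('/app/node_modules/linked-pkg').then(real => console.log(real))
```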

3
spa/node_modules/@npmcli/arborist/lib/relpath.js generated vendored Normal file
View File

@@ -0,0 +1,3 @@
const { relative } = require('path')
const relpath = (from, to) => relative(from, to).replace(/\\/g, '/')
module.exports = relpath
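
A quick example of the normalization this provides (forward slashes on all platforms, which keeps node locations stable in lockfiles):

```js
const relpath = require('./relpath.js')

relpath('/app', '/app/node_modules/foo') // => 'node_modules/foo'
// on Windows, backslashes are normalized:
relpath('C:\\app', 'C:\\app\\node_modules\\foo') // => 'node_modules/foo'
```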

15
spa/node_modules/@npmcli/arborist/lib/reset-dep-flags.js generated vendored Normal file
View File

@@ -0,0 +1,15 @@
// Sometimes we need to actually do a walk from the root, because you can
// have a cycle of deps that all depend on each other, but no path from root.
// Also, since the ideal tree is loaded from the shrinkwrap, it has
// extraneous flags set to false on nodes that might now actually be
// extraneous, and dev/optional flags that are also now incorrect. This
// method sets all flags to true, so we can find the set that is actually
// extraneous.
module.exports = tree => {
for (const node of tree.inventory.values()) {
node.extraneous = true
node.dev = true
node.devOptional = true
node.peer = true
node.optional = true
}
}
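
A sketch of the intended flow (the recalculation step is Arborist's own flag-calculation walk; `calcDepFlags` here is just an assumed name for it):

```js
const resetDepFlags = require('./reset-dep-flags.js')

const recomputeFlags = (tree, calcDepFlags) => {
  resetDepFlags(tree) // pessimistically mark every node
  calcDepFlags(tree) // walk from the root, clearing flags on reachable nodes
  // anything still marked extraneous is truly unreachable from the root
  return [...tree.inventory.values()].filter(node => node.extraneous)
}
```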

19
spa/node_modules/@npmcli/arborist/lib/retire-path.js generated vendored Normal file
View File

@@ -0,0 +1,19 @@
const crypto = require('crypto')
const { dirname, basename, resolve } = require('path')
// use sha1 because it's faster, and collisions extremely unlikely anyway
const pathSafeHash = s =>
crypto.createHash('sha1')
.update(s)
.digest('base64')
.replace(/[^a-zA-Z0-9]+/g, '')
.slice(0, 8)
const retirePath = from => {
const d = dirname(from)
const b = basename(from)
const hash = pathSafeHash(from)
return resolve(d, `.${b}-${hash}`)
}
module.exports = retirePath
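
For example (hash shown as a placeholder; it is the first 8 base64 characters of the sha1 of the full path):

```js
const retirePath = require('./retire-path.js')

retirePath('/app/node_modules/foo')
// => '/app/node_modules/.foo-XXXXXXXX' where XXXXXXXX is derived from the path
```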

1150
spa/node_modules/@npmcli/arborist/lib/shrinkwrap.js generated vendored Normal file

File diff suppressed because it is too large Load Diff

74
spa/node_modules/@npmcli/arborist/lib/signal-handling.js generated vendored Normal file
View File

@@ -0,0 +1,74 @@
const signals = require('./signals.js')
// for testing, expose the process being used
module.exports = Object.assign(fn => setup(fn), { process })
// do all of this in a setup function so that we can call it
// multiple times for multiple reifies that might be going on.
// Otherwise, Arborist.reify() is a global action, which is a
// new constraint we'd be adding with this behavior.
const setup = fn => {
const { process } = module.exports
const sigListeners = { loaded: false }
const unload = () => {
if (!sigListeners.loaded) {
return
}
for (const sig of signals) {
try {
process.removeListener(sig, sigListeners[sig])
} catch {
// ignore errors
}
}
process.removeListener('beforeExit', onBeforeExit)
sigListeners.loaded = false
}
const onBeforeExit = () => {
// this trick ensures that we exit with the same signal we caught
// Ie, if you press ^C and npm gets a SIGINT, we'll do the rollback
// and then exit with a SIGINT signal once we've removed the handler.
// The timeout is there because signals are asynchronous, so we need
// the process to NOT exit on its own, which means we have to have
// something keeping the event loop looping. Hence this hack.
unload()
process.kill(process.pid, signalReceived)
setTimeout(() => {}, 500)
}
let signalReceived = null
const listener = (sig, fn) => () => {
signalReceived = sig
// if we exit normally, but caught a signal which would have been fatal,
// then re-send it once we're done with whatever cleanup we have to do.
unload()
if (process.listeners(sig).length < 1) {
process.once('beforeExit', onBeforeExit)
}
fn({ signal: sig })
}
// do the actual loading here
for (const sig of signals) {
sigListeners[sig] = listener(sig, fn)
const max = process.getMaxListeners()
try {
// if we call this a bunch of times, avoid triggering the warning
const { length } = process.listeners(sig)
if (length >= max) {
process.setMaxListeners(length + 1)
}
process.on(sig, sigListeners[sig])
} catch {
// ignore errors
}
}
sigListeners.loaded = true
return unload
}
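
A usage sketch: install a handler around a risky operation, then remove it with the `unload` function that setup returns:

```js
const onSignal = require('./signal-handling.js')

const guarded = async (work, rollback) => {
  const unload = onSignal(({ signal }) => rollback(signal))
  try {
    return await work()
  } finally {
    unload() // remove our listeners; prior signal behavior resumes
  }
}
```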

58
spa/node_modules/@npmcli/arborist/lib/signals.js generated vendored Normal file
View File

@@ -0,0 +1,58 @@
// copied from signal-exit
// This is not the set of all possible signals.
//
// It IS, however, the set of all signals that trigger
// an exit on either Linux or BSD systems. Linux is a
// superset of the signal names supported on BSD, and
// the unknown signals just fail to register, so we can
// catch that easily enough.
//
// Don't bother with SIGKILL. It's uncatchable, which
// means that we can't fire any callbacks anyway.
//
// If a user does happen to register a handler on a non-
// fatal signal like SIGWINCH or something, and then
// exit, it'll end up firing `process.emit('exit')`, so
// the handler will be fired anyway.
//
// SIGBUS, SIGFPE, SIGSEGV and SIGILL, when not raised
// artificially, inherently leave the process in a
// state from which it is not safe to try and enter JS
// listeners.
const platform = global.__ARBORIST_FAKE_PLATFORM__ || process.platform
module.exports = [
'SIGABRT',
'SIGALRM',
'SIGHUP',
'SIGINT',
'SIGTERM',
]
if (platform !== 'win32') {
module.exports.push(
'SIGVTALRM',
'SIGXCPU',
'SIGXFSZ',
'SIGUSR2',
'SIGTRAP',
'SIGSYS',
'SIGQUIT',
'SIGIOT'
// should detect profiler and enable/disable accordingly.
// see #21
// 'SIGPROF'
)
}
if (platform === 'linux') {
module.exports.push(
'SIGIO',
'SIGPOLL',
'SIGPWR',
'SIGSTKFLT',
'SIGUNUSED'
)
}

34
spa/node_modules/@npmcli/arborist/lib/spec-from-lock.js generated vendored Normal file
View File

@@ -0,0 +1,34 @@
const npa = require('npm-package-arg')
// extracted from npm v6 lib/install/realize-shrinkwrap-specifier.js
const specFromLock = (name, lock, where) => {
try {
if (lock.version) {
const spec = npa.resolve(name, lock.version, where)
if (lock.integrity || spec.type === 'git') {
return spec
}
}
if (lock.from) {
// legacy metadata includes "from", but not integrity
const spec = npa.resolve(name, lock.from, where)
if (spec.registry && lock.version) {
return npa.resolve(name, lock.version, where)
} else if (!lock.resolved) {
return spec
}
}
if (lock.resolved) {
return npa.resolve(name, lock.resolved, where)
}
} catch {
// ignore errors
}
try {
return npa.resolve(name, lock.version, where)
} catch {
return {}
}
}
module.exports = specFromLock
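
An example of the lookup order above: version plus integrity wins, then legacy `from`, then `resolved` (integrity value abbreviated for illustration):

```js
const specFromLock = require('./spec-from-lock.js')

const lock = { version: '1.2.3', integrity: 'sha512-abc123' }
const spec = specFromLock('foo', lock, '/project')
// equivalent to npa.resolve('foo', '1.2.3', '/project')
```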

102
spa/node_modules/@npmcli/arborist/lib/tracker.js generated vendored Normal file
View File

@@ -0,0 +1,102 @@
const _progress = Symbol('_progress')
const _onError = Symbol('_onError')
const _setProgress = Symbol('_setProgress')
const npmlog = require('npmlog')
module.exports = cls => class Tracker extends cls {
constructor (options = {}) {
super(options)
this[_setProgress] = !!options.progress
this[_progress] = new Map()
}
addTracker (section, subsection = null, key = null) {
if (section === null || section === undefined) {
this[_onError](`Tracker can't be null or undefined`)
}
if (key === null) {
key = subsection
}
const hasTracker = this[_progress].has(section)
const hasSubtracker = this[_progress].has(`${section}:${key}`)
if (hasTracker && subsection === null) {
// 0. existing tracker, no subsection
this[_onError](`Tracker "${section}" already exists`)
} else if (!hasTracker && subsection === null) {
// 1. no existing tracker, no subsection
// Create a new tracker from npmlog
// starts progress bar
if (this[_setProgress] && this[_progress].size === 0) {
npmlog.enableProgress()
}
this[_progress].set(section, npmlog.newGroup(section))
} else if (!hasTracker && subsection !== null) {
// 2. no parent tracker and subsection
this[_onError](`Parent tracker "${section}" does not exist`)
} else if (!hasTracker || !hasSubtracker) {
// 3. existing parent tracker, no subsection tracker
// Create a new subtracker in this[_progress] from parent tracker
this[_progress].set(`${section}:${key}`,
this[_progress].get(section).newGroup(`${section}:${subsection}`)
)
}
// 4. existing parent tracker, existing subsection tracker
// skip it
}
finishTracker (section, subsection = null, key = null) {
if (section === null || section === undefined) {
this[_onError](`Tracker can't be null or undefined`)
}
if (key === null) {
key = subsection
}
const hasTracker = this[_progress].has(section)
const hasSubtracker = this[_progress].has(`${section}:${key}`)
// 0. parent tracker exists, no subsection
// Finish parent tracker and remove from this[_progress]
if (hasTracker && subsection === null) {
// check if parent tracker does
// not have any remaining children
const keys = this[_progress].keys()
for (const key of keys) {
if (key.match(new RegExp(section + ':'))) {
this.finishTracker(section, key)
}
}
// remove parent tracker
this[_progress].get(section).finish()
this[_progress].delete(section)
// remove progress bar if all
// trackers are finished
if (this[_setProgress] && this[_progress].size === 0) {
npmlog.disableProgress()
}
} else if (!hasTracker && subsection === null) {
// 1. no existing parent tracker, no subsection
this[_onError](`Tracker "${section}" does not exist`)
} else if (!hasTracker || hasSubtracker) {
// 2. subtracker exists
// Finish subtracker and remove from this[_progress]
this[_progress].get(`${section}:${key}`).finish()
this[_progress].delete(`${section}:${key}`)
}
// 3. existing parent tracker, no subsection
}
[_onError] (msg) {
if (this[_setProgress]) {
npmlog.disableProgress()
}
throw new Error(msg)
}
}
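
Tracker is a mixin, so a usage sketch looks like the following (Base is a stand-in for whatever class Arborist composes it onto):

```js
const Tracker = require('./tracker.js')

class Base {}
class Reifier extends Tracker(Base) {}

const r = new Reifier({ progress: true })
r.addTracker('reify') // parent group; enables the progress bar
r.addTracker('reify', 'unpack', 'foo@1.0.0') // child: "reify:foo@1.0.0"
r.finishTracker('reify', 'unpack', 'foo@1.0.0')
r.finishTracker('reify') // finishes any stragglers, disables progress
```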

155
spa/node_modules/@npmcli/arborist/lib/tree-check.js generated vendored Normal file
View File

@@ -0,0 +1,155 @@
const debug = require('./debug.js')
const checkTree = (tree, checkUnreachable = true) => {
const log = [['START TREE CHECK', tree.path]]
// this can only happen in tests where we have a "tree" object
// that isn't actually a tree.
if (!tree.root || !tree.root.inventory) {
return tree
}
const { inventory } = tree.root
const seen = new Set()
const check = (node, via = tree, viaType = 'self') => {
log.push([
'CHECK',
node && node.location,
via && via.location,
viaType,
'seen=' + seen.has(node),
'promise=' + !!(node && node.then),
'root=' + !!(node && node.isRoot),
])
if (!node || seen.has(node) || node.then) {
return
}
seen.add(node)
if (node.isRoot && node !== tree.root) {
throw Object.assign(new Error('double root'), {
node: node.path,
realpath: node.realpath,
tree: tree.path,
root: tree.root.path,
via: via.path,
viaType,
log,
})
}
if (node.root !== tree.root) {
throw Object.assign(new Error('node from other root in tree'), {
node: node.path,
realpath: node.realpath,
tree: tree.path,
root: tree.root.path,
via: via.path,
viaType,
otherRoot: node.root && node.root.path,
log,
})
}
if (!node.isRoot && node.inventory.size !== 0) {
throw Object.assign(new Error('non-root has non-zero inventory'), {
node: node.path,
tree: tree.path,
root: tree.root.path,
via: via.path,
viaType,
inventory: [...node.inventory.values()].map(node =>
[node.path, node.location]),
log,
})
}
if (!node.isRoot && !inventory.has(node) && !node.dummy) {
throw Object.assign(new Error('not in inventory'), {
node: node.path,
tree: tree.path,
root: tree.root.path,
via: via.path,
viaType,
log,
})
}
const devEdges = [...node.edgesOut.values()].filter(e => e.dev)
if (!node.isTop && devEdges.length) {
throw Object.assign(new Error('dev edges on non-top node'), {
node: node.path,
tree: tree.path,
root: tree.root.path,
via: via.path,
viaType,
devEdges: devEdges.map(e => [e.type, e.name, e.spec, e.error]),
log,
})
}
if (node.path === tree.root.path && node !== tree.root) {
throw Object.assign(new Error('node with same path as root'), {
node: node.path,
tree: tree.path,
root: tree.root.path,
via: via.path,
viaType,
log,
})
}
if (!node.isLink && node.path !== node.realpath) {
throw Object.assign(new Error('non-link with mismatched path/realpath'), {
node: node.path,
tree: tree.path,
realpath: node.realpath,
root: tree.root.path,
via: via.path,
viaType,
log,
})
}
const { parent, fsParent, target } = node
check(parent, node, 'parent')
check(fsParent, node, 'fsParent')
check(target, node, 'target')
log.push(['CHILDREN', node.location, ...node.children.keys()])
for (const kid of node.children.values()) {
check(kid, node, 'children')
}
for (const kid of node.fsChildren) {
check(kid, node, 'fsChildren')
}
for (const link of node.linksIn) {
check(link, node, 'linksIn')
}
for (const top of node.tops) {
check(top, node, 'tops')
}
log.push(['DONE', node.location])
}
check(tree)
if (checkUnreachable) {
for (const node of inventory.values()) {
if (!seen.has(node) && node !== tree.root) {
throw Object.assign(new Error('unreachable in inventory'), {
node: node.path,
realpath: node.realpath,
location: node.location,
root: tree.root.path,
tree: tree.path,
log,
})
}
}
}
return tree
}
// should only ever run this check in debug mode
module.exports = tree => tree
debug(() => module.exports = checkTree)
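
Because the export is an identity function outside debug mode, callers can wrap mutations unconditionally; a sketch:

```js
const treeCheck = require('./tree-check.js')

const mutate = (tree, fn) => {
  fn(tree)
  // in debug builds this throws (with the accumulated log) on any
  // violated invariant; in production it just returns the tree
  return treeCheck(tree)
}
```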

48
spa/node_modules/@npmcli/arborist/lib/version-from-tgz.js generated vendored Normal file
View File

@@ -0,0 +1,48 @@
/* eslint node/no-deprecated-api: "off" */
const semver = require('semver')
const { basename } = require('path')
const { parse } = require('url')
module.exports = (name, tgz) => {
const base = basename(tgz)
if (!base.endsWith('.tgz')) {
return null
}
const u = parse(tgz)
if (/^https?:/.test(u.protocol)) {
// registry url? check for most likely pattern.
// either /@foo/bar/-/bar-1.2.3.tgz or
// /foo/-/foo-1.2.3.tgz, and fall through to
// basename checking. Note that registries can
// be mounted below the root url, so /a/b/-/x/y/foo/-/foo-1.2.3.tgz
// is a potential option.
const tfsplit = u.path.slice(1).split('/-/')
if (tfsplit.length > 1) {
const afterTF = tfsplit.pop()
if (afterTF === base) {
const pre = tfsplit.pop()
const preSplit = pre.split(/\/|%2f/i)
const project = preSplit.pop()
const scope = preSplit.pop()
return versionFromBaseScopeName(base, scope, project)
}
}
}
const split = name.split(/\/|%2f/i)
const project = split.pop()
const scope = split.pop()
return versionFromBaseScopeName(base, scope, project)
}
const versionFromBaseScopeName = (base, scope, name) => {
if (!base.startsWith(name + '-')) {
return null
}
const parsed = semver.parse(base.substring(name.length + 1, base.length - 4))
return parsed ? {
name: scope && scope.charAt(0) === '@' ? `${scope}/${name}` : name,
version: parsed.version,
} : null
}
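
Examples of what the parser recovers, and when it gives up:

```js
const versionFromTgz = require('./version-from-tgz.js')

versionFromTgz('foo', 'https://registry.npmjs.org/foo/-/foo-1.2.3.tgz')
// => { name: 'foo', version: '1.2.3' }

versionFromTgz('@scope/bar', '/some/dir/bar-2.0.0-rc.1.tgz')
// => { name: '@scope/bar', version: '2.0.0-rc.1' }

versionFromTgz('foo', 'foo.tar.gz')
// => null (basename must end in .tgz)
```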

217
spa/node_modules/@npmcli/arborist/lib/vuln.js generated vendored Normal file
View File

@@ -0,0 +1,217 @@
// An object representing a vulnerability either as the result of an
// advisory or due to the package in question depending exclusively on
// vulnerable versions of a dep.
//
// - name: package name
// - range: Set of vulnerable versions
// - nodes: Set of nodes affected
// - effects: Set of vulns triggered by this one
// - advisories: Set of advisories (including metavulns) causing this vuln.
// All of the entries in via are vulnerability objects returned by
// @npmcli/metavuln-calculator
// - via: dependency vulns which cause this one
const { satisfies, simplifyRange } = require('semver')
const semverOpt = { loose: true, includePrerelease: true }
const localeCompare = require('@isaacs/string-locale-compare')('en')
const npa = require('npm-package-arg')
const _range = Symbol('_range')
const _simpleRange = Symbol('_simpleRange')
const _fixAvailable = Symbol('_fixAvailable')
const severities = new Map([
['info', 0],
['low', 1],
['moderate', 2],
['high', 3],
['critical', 4],
[null, -1],
])
for (const [name, val] of severities.entries()) {
severities.set(val, name)
}
class Vuln {
constructor ({ name, advisory }) {
this.name = name
this.via = new Set()
this.advisories = new Set()
this.severity = null
this.effects = new Set()
this.topNodes = new Set()
this[_range] = null
this[_simpleRange] = null
this.nodes = new Set()
// assume a fix is available unless it hits a top node
// that locks it in place, setting this false or {isSemVerMajor, version}.
this[_fixAvailable] = true
this.addAdvisory(advisory)
this.packument = advisory.packument
this.versions = advisory.versions
}
get fixAvailable () {
return this[_fixAvailable]
}
set fixAvailable (f) {
this[_fixAvailable] = f
// if there's a fix available for this at the top level, it means that
// it will also fix the vulns that led to it being there. to get there,
// we set the vias to the most "strict" of fix availables.
// - false: no fix is available
// - {name, version, isSemVerMajor} fix requires -f, is semver major
// - {name, version} fix requires -f, not semver major
// - true: fix does not require -f
// TODO: duped entries may require different fixes but the current
// structure does not support this, so the case where a top level fix
// corrects a duped entry may mean you have to run fix more than once
for (const v of this.via) {
// don't blow up on loops
if (v.fixAvailable === f) {
continue
}
if (f === false) {
v.fixAvailable = f
} else if (v.fixAvailable === true) {
v.fixAvailable = f
} else if (typeof f === 'object' && (
typeof v.fixAvailable !== 'object' || !v.fixAvailable.isSemVerMajor)) {
v.fixAvailable = f
}
}
}
get isDirect () {
for (const node of this.nodes.values()) {
for (const edge of node.edgesIn) {
if (edge.from.isProjectRoot || edge.from.isWorkspace) {
return true
}
}
}
return false
}
testSpec (spec) {
const specObj = npa(spec)
if (!specObj.registry) {
return true
}
if (specObj.subSpec) {
spec = specObj.subSpec.rawSpec
}
for (const v of this.versions) {
if (satisfies(v, spec) && !satisfies(v, this.range, semverOpt)) {
return false
}
}
return true
}
toJSON () {
return {
name: this.name,
severity: this.severity,
isDirect: this.isDirect,
// just loop over the advisories, since via is only Vuln objects,
// and calculated advisories have all the info we need
via: [...this.advisories].map(v => v.type === 'metavuln' ? v.dependency : {
...v,
versions: undefined,
vulnerableVersions: undefined,
id: undefined,
}).sort((a, b) =>
localeCompare(String(a.source || a), String(b.source || b))),
effects: [...this.effects].map(v => v.name).sort(localeCompare),
range: this.simpleRange,
nodes: [...this.nodes].map(n => n.location).sort(localeCompare),
fixAvailable: this[_fixAvailable],
}
}
addVia (v) {
this.via.add(v)
v.effects.add(this)
// call the setter since we might add vias _after_ setting fixAvailable
this.fixAvailable = this.fixAvailable
}
deleteVia (v) {
this.via.delete(v)
v.effects.delete(this)
}
deleteAdvisory (advisory) {
this.advisories.delete(advisory)
// make sure we have the max severity of all the vulns causing this one
this.severity = null
this[_range] = null
this[_simpleRange] = null
// refresh severity
for (const advisory of this.advisories) {
this.addAdvisory(advisory)
}
// remove any effects that are no longer relevant
const vias = new Set([...this.advisories].map(a => a.dependency))
for (const via of this.via) {
if (!vias.has(via.name)) {
this.deleteVia(via)
}
}
}
addAdvisory (advisory) {
this.advisories.add(advisory)
const sev = severities.get(advisory.severity)
this[_range] = null
this[_simpleRange] = null
if (sev > severities.get(this.severity)) {
this.severity = advisory.severity
}
}
get range () {
return this[_range] ||
(this[_range] = [...this.advisories].map(v => v.range).join(' || '))
}
get simpleRange () {
if (this[_simpleRange] && this[_simpleRange] === this[_range]) {
return this[_simpleRange]
}
const versions = [...this.advisories][0].versions
const range = this.range
const simple = simplifyRange(versions, range, semverOpt)
return this[_simpleRange] = this[_range] = simple
}
isVulnerable (node) {
if (this.nodes.has(node)) {
return true
}
const { version } = node.package
if (!version) {
return false
}
for (const v of this.advisories) {
if (v.testVersion(version)) {
this.nodes.add(node)
return true
}
}
return false
}
}
module.exports = Vuln
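
A sketch of how audit code might use this class (advisory objects come from @npmcli/metavuln-calculator, per the header comment):

```js
const Vuln = require('./vuln.js')

const assess = (name, advisory) => {
  const vuln = new Vuln({ name, advisory })
  // testSpec(spec) is true when every version matching the spec falls
  // inside the vulnerable range, i.e. the spec cannot avoid the vuln
  const unavoidable = vuln.testSpec('^1.0.0')
  return { severity: vuln.severity, range: vuln.simpleRange, unavoidable }
}
```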

377
spa/node_modules/@npmcli/arborist/lib/yarn-lock.js generated vendored Normal file
View File

@@ -0,0 +1,377 @@
// parse a yarn lock file
// basic format
//
// <request spec>[, <request spec> ...]:
// <key> <value>
// <subkey>:
// <key> <value>
//
// Assume that any key or value might be quoted, though that's only done
// in practice if certain chars are in the string. When writing back, we follow
// Yarn's rules for quoting, to cause minimal friction.
//
// The data format would support nested objects, but at this time, it
// appears that yarn does not use that for anything, so in the interest
// of a simpler parser algorithm, this implementation only supports a
// single layer of sub objects.
//
// This doesn't deterministically define the shape of the tree, and so
// cannot be used (on its own) for Arborist.loadVirtual.
// But it can give us resolved, integrity, and version, which is useful
// for Arborist.loadActual and for building the ideal tree.
//
// At the very least, when a yarn.lock file is present, we update it
// along the way, and save it back in Shrinkwrap.save()
//
// NIHing this rather than using @yarnpkg/lockfile because that module
// is an impenetrable 10kloc of webpack flow output, which is overkill
// for something relatively simple and tailored to Arborist's use case.
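// For illustration, a plausible entry in the format sketched above
// (values abbreviated; quoting follows the quoteIfNeeded rules below):
//
//   lodash@^4.17.0, lodash@4.x:
//     version "4.17.21"
//     resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.21.tgz"
//     integrity sha512-abc123
//     dependencies:
//       some-dep "^1.0.0"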
const localeCompare = require('@isaacs/string-locale-compare')('en')
const consistentResolve = require('./consistent-resolve.js')
const { dirname } = require('path')
const { breadth } = require('treeverse')
// Sort Yarn entries respecting the yarn.lock sort order
const yarnEntryPriorities = {
name: 1,
version: 2,
uid: 3,
resolved: 4,
integrity: 5,
registry: 6,
dependencies: 7,
}
const priorityThenLocaleCompare = (a, b) => {
if (!yarnEntryPriorities[a] && !yarnEntryPriorities[b]) {
return localeCompare(a, b)
}
/* istanbul ignore next */
return (yarnEntryPriorities[a] || 100) > (yarnEntryPriorities[b] || 100) ? 1 : -1
}
const quoteIfNeeded = val => {
if (
typeof val === 'boolean' ||
typeof val === 'number' ||
val.startsWith('true') ||
val.startsWith('false') ||
/[:\s\n\\",[\]]/g.test(val) ||
!/^[a-zA-Z]/g.test(val)
) {
return JSON.stringify(val)
}
return val
}
// sort a key/value object into a string of JSON stringified keys and vals
const sortKV = obj => Object.keys(obj)
.sort(localeCompare)
.map(k => ` ${quoteIfNeeded(k)} ${quoteIfNeeded(obj[k])}`)
.join('\n')
// for checking against previous entries
const match = (p, n) =>
p.integrity && n.integrity ? p.integrity === n.integrity
: p.resolved && n.resolved ? p.resolved === n.resolved
: p.version && n.version ? p.version === n.version
: true
const prefix =
`# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
`
const nullSymbol = Symbol('null')
class YarnLock {
static parse (data) {
return new YarnLock().parse(data)
}
static fromTree (tree) {
return new YarnLock().fromTree(tree)
}
constructor () {
this.entries = null
this.endCurrent()
}
endCurrent () {
this.current = null
this.subkey = nullSymbol
}
parse (data) {
const ENTRY_START = /^[^\s].*:$/
const SUBKEY = /^ {2}[^\s]+:$/
const SUBVAL = /^ {4}[^\s]+ .+$/
const METADATA = /^ {2}[^\s]+ .+$/
this.entries = new Map()
this.current = null
const linere = /([^\r\n]*)\r?\n/gm
let match
let lineNum = 0
if (!/\n$/.test(data)) {
data += '\n'
}
while (match = linere.exec(data)) {
const line = match[1]
lineNum++
if (line.charAt(0) === '#') {
continue
}
if (line === '') {
this.endCurrent()
continue
}
if (ENTRY_START.test(line)) {
this.endCurrent()
const specs = this.splitQuoted(line.slice(0, -1), /, */)
this.current = new YarnLockEntry(specs)
specs.forEach(spec => this.entries.set(spec, this.current))
continue
}
if (SUBKEY.test(line)) {
this.subkey = line.slice(2, -1)
this.current[this.subkey] = {}
continue
}
if (SUBVAL.test(line) && this.current && this.current[this.subkey]) {
const subval = this.splitQuoted(line.trimStart(), ' ')
if (subval.length === 2) {
this.current[this.subkey][subval[0]] = subval[1]
continue
}
}
// any other metadata
if (METADATA.test(line) && this.current) {
const metadata = this.splitQuoted(line.trimStart(), ' ')
if (metadata.length === 2) {
// strip off the legacy shasum hashes
if (metadata[0] === 'resolved') {
metadata[1] = metadata[1].replace(/#.*/, '')
}
this.current[metadata[0]] = metadata[1]
continue
}
}
throw Object.assign(new Error('invalid or corrupted yarn.lock file'), {
position: match.index,
content: match[0],
line: lineNum,
})
}
this.endCurrent()
return this
}
splitQuoted (str, delim) {
// a,"b,c",d"e,f => ['a','"b','c"','d"e','f'] => ['a','b,c','d"e','f']
const split = str.split(delim)
const out = []
let o = 0
for (let i = 0; i < split.length; i++) {
const chunk = split[i]
if (/^".*"$/.test(chunk)) {
out[o++] = chunk.trim().slice(1, -1)
} else if (/^"/.test(chunk)) {
let collect = chunk.trimStart().slice(1)
while (++i < split.length) {
const n = split[i]
// something that is not a backslash, followed by an even number
// of backslashes, then a " then end => ending on an unescaped "
if (/[^\\](\\\\)*"$/.test(n)) {
collect += n.trimEnd().slice(0, -1)
break
} else {
collect += n
}
}
out[o++] = collect
} else {
out[o++] = chunk.trim()
}
}
return out
}
toString () {
return prefix + [...new Set([...this.entries.values()])]
.map(e => e.toString())
.sort((a, b) => localeCompare(a.replace(/"/g, ''), b.replace(/"/g, ''))).join('\n\n') + '\n'
}
fromTree (tree) {
this.entries = new Map()
// walk the tree in a deterministic order, breadth-first, alphabetical
breadth({
tree,
visit: node => this.addEntryFromNode(node),
getChildren: node => [...node.children.values(), ...node.fsChildren]
.sort((a, b) => a.depth - b.depth || localeCompare(a.name, b.name)),
})
return this
}
addEntryFromNode (node) {
const specs = [...node.edgesIn]
.map(e => `${node.name}@${e.spec}`)
.sort(localeCompare)
// Note:
// yarn will do excessive duplication in a case like this:
// root -> (x@1.x, y@1.x, z@1.x)
// y@1.x -> (x@1.1, z@2.x)
// z@1.x -> ()
// z@2.x -> (x@1.x)
//
// where x@1.2 exists, because the "x@1.x" spec will *always* resolve
// to x@1.2, which doesn't work for y's dep on x@1.1, so you'll get this:
//
// root
// +-- x@1.2.0
// +-- y
// | +-- x@1.1.0
// | +-- z@2
// | +-- x@1.2.0
// +-- z@1
//
// instead of this more deduped tree that arborist builds by default:
//
// root
// +-- x@1.2.0 (dep is x@1.x, from root)
// +-- y
// | +-- x@1.1.0
// | +-- z@2 (dep on x@1.x deduped to x@1.1.0 under y)
// +-- z@1
//
// In order to not create an invalid yarn.lock file with conflicting
// entries, AND not tell yarn to create an invalid tree, we need to
// ignore the x@1.x spec coming from z, since it's already in the entries.
//
// So, if the integrity and resolved don't match a previous entry, skip it.
// We call this method on shallower nodes first, so this is fine.
const n = this.entryDataFromNode(node)
let priorEntry = null
const newSpecs = []
for (const s of specs) {
const prev = this.entries.get(s)
// no previous entry for this spec at all, so it's new
if (!prev) {
// if we saw a match already, then assign this spec to it as well
if (priorEntry) {
priorEntry.addSpec(s)
} else {
newSpecs.push(s)
}
continue
}
const m = match(prev, n)
// there was a prior entry, but a different thing. skip this one
if (!m) {
continue
}
// previous matches, but first time seeing it, so already has this spec.
// go ahead and add all the previously unseen specs, though
if (!priorEntry) {
priorEntry = prev
for (const s of newSpecs) {
priorEntry.addSpec(s)
this.entries.set(s, priorEntry)
}
newSpecs.length = 0
continue
}
// have a prior entry matching n, and matching the prev we just saw
// add the spec to it
priorEntry.addSpec(s)
this.entries.set(s, priorEntry)
}
// if we never found a matching prior, then this is a whole new thing
if (!priorEntry) {
const entry = Object.assign(new YarnLockEntry(newSpecs), n)
for (const s of newSpecs) {
this.entries.set(s, entry)
}
} else {
// pick up any new info that we got for this node, so that we can
// decorate with integrity/resolved/etc.
Object.assign(priorEntry, n)
}
}
entryDataFromNode (node) {
const n = {}
if (node.package.dependencies) {
n.dependencies = node.package.dependencies
}
if (node.package.optionalDependencies) {
n.optionalDependencies = node.package.optionalDependencies
}
if (node.version) {
n.version = node.version
}
if (node.resolved) {
n.resolved = consistentResolve(
node.resolved,
node.isLink ? dirname(node.path) : node.path,
node.root.path,
true
)
}
if (node.integrity) {
n.integrity = node.integrity
}
return n
}
static get Entry () {
return YarnLockEntry
}
}
const _specs = Symbol('_specs')
class YarnLockEntry {
constructor (specs) {
this[_specs] = new Set(specs)
this.resolved = null
this.version = null
this.integrity = null
this.dependencies = null
this.optionalDependencies = null
}
toString () {
// sort objects to the bottom, then alphabetical
return ([...this[_specs]]
.sort(localeCompare)
.map(quoteIfNeeded).join(', ') +
':\n' +
Object.getOwnPropertyNames(this)
.filter(prop => this[prop] !== null)
.sort(priorityThenLocaleCompare)
.map(prop =>
typeof this[prop] !== 'object'
? ` ${prop} ${prop === 'integrity' ? this[prop] : JSON.stringify(this[prop])}\n`
: Object.keys(this[prop]).length === 0 ? ''
: ` ${prop}:\n` + sortKV(this[prop]) + '\n')
.join('')).trim()
}
addSpec (spec) {
this[_specs].add(spec)
}
}
module.exports = YarnLock

View File

@@ -0,0 +1 @@
../glob/dist/esm/bin.mjs

View File

@@ -0,0 +1 @@
../nopt/bin/nopt.js

View File

@@ -0,0 +1 @@
../semver/bin/semver.js

View File

@@ -0,0 +1,20 @@
<!-- This file is automatically added by @npmcli/template-oss. Do not edit. -->
ISC License
Copyright npm, Inc.
Permission to use, copy, modify, and/or distribute this
software for any purpose with or without fee is hereby
granted, provided that the above copyright notice and this
permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND NPM DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO
EVENT SHALL NPM BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -0,0 +1,97 @@
# @npmcli/fs
Polyfills and extensions for the core `fs` module.
## Features
- `fs.cp` polyfill for node < 16.7.0
- `fs.withTempDir` added
- `fs.readdirScoped` added
- `fs.moveFile` added
## `fs.withTempDir(root, fn, options) -> Promise`
### Parameters
- `root`: the directory in which to create the temporary directory
- `fn`: a function that will be called with the path to the temporary directory
- `options`
- `tmpPrefix`: a prefix to be used in the generated directory name
### Usage
The `withTempDir` function creates a temporary directory, runs the provided
function (`fn`), then removes the temporary directory and resolves or rejects
based on the result of `fn`.
```js
const fs = require('@npmcli/fs')
const os = require('os')
// this function will be called with the full path to the temporary directory
// it is called with `await` behind the scenes, so can be async if desired.
const myFunction = async (tempPath) => {
return 'done!'
}
const main = async () => {
const result = await fs.withTempDir(os.tmpdir(), myFunction)
// result === 'done!'
}
main()
```
## `fs.readdirScoped(root) -> Promise`
### Parameters
- `root`: the directory to read
### Usage
Like `fs.readdir`, but handles `@org/module` dirs as if they were
a single entry.
```javascript
const { readdirScoped } = require('@npmcli/fs')
const entries = await readdirScoped('node_modules')
// entries will be something like: ['a', '@org/foo', '@org/bar']
```
## `fs.moveFile(source, dest, options) -> Promise`
A fork of [move-file](https://github.com/sindresorhus/move-file) with
support for CommonJS.
### Highlights
- Promise API.
- Supports moving a file across partitions and devices.
- Optionally prevent overwriting an existing file.
- Creates non-existent destination directories for you.
- Automatically recurses when source is a directory.
### Parameters
- `source`: File, or directory, you want to move.
- `dest`: Where you want the file or directory moved.
- `options`
- `overwrite` (`boolean`, default: `true`): Overwrite existing destination file(s).
### Usage
The built-in
[`fs.rename()`](https://nodejs.org/api/fs.html#fs_fs_rename_oldpath_newpath_callback)
is just a JavaScript wrapper for the C `rename(2)` function, which doesn't
support moving files across partitions or devices. This module is what you
would have expected `fs.rename()` to be.
```js
const { moveFile } = require('@npmcli/fs');
(async () => {
await moveFile('source/unicorn.png', 'destination/unicorn.png');
console.log('The file has been moved');
})();
```

View File

@@ -0,0 +1,20 @@
// given an input that may or may not be an object, return an object that has
// a copy of every defined property listed in 'copy'. if the input is not an
// object, assign it to the property named by 'wrap'
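//
// e.g. getOptions({ a: 1, b: 2 }, { copy: ['a'] })       -> { a: 1 }
//      getOptions('banana', { copy: [], wrap: 'input' }) -> { input: 'banana' }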
const getOptions = (input, { copy, wrap }) => {
const result = {}
if (input && typeof input === 'object') {
for (const prop of copy) {
if (input[prop] !== undefined) {
result[prop] = input[prop]
}
}
} else {
result[wrap] = input
}
return result
}
module.exports = getOptions

View File

@@ -0,0 +1,9 @@
const semver = require('semver')
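// check whether the currently running node satisfies a semver range,
// counting prerelease versions as eligible matches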
const satisfies = (range) => {
return semver.satisfies(process.version, range, { includePrerelease: true })
}
module.exports = {
satisfies,
}

View File

@@ -0,0 +1,15 @@
(The MIT License)
Copyright (c) 2011-2017 JP Richardson
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files
(the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@@ -0,0 +1,129 @@
'use strict'
const { inspect } = require('util')
// adapted from node's internal/errors
// https://github.com/nodejs/node/blob/c8a04049/lib/internal/errors.js
// close copy of node's internal SystemError class.
class SystemError {
constructor (code, prefix, context) {
// XXX context.code is undefined in all constructors used in cp/polyfill
// that may be a bug copied from node, maybe the constructor should use
// `code` not `errno`? nodejs/node#41104
let message = `${prefix}: ${context.syscall} returned ` +
`${context.code} (${context.message})`
if (context.path !== undefined) {
message += ` ${context.path}`
}
if (context.dest !== undefined) {
message += ` => ${context.dest}`
}
this.code = code
Object.defineProperties(this, {
name: {
value: 'SystemError',
enumerable: false,
writable: true,
configurable: true,
},
message: {
value: message,
enumerable: false,
writable: true,
configurable: true,
},
info: {
value: context,
enumerable: true,
configurable: true,
writable: false,
},
errno: {
get () {
return context.errno
},
set (value) {
context.errno = value
},
enumerable: true,
configurable: true,
},
syscall: {
get () {
return context.syscall
},
set (value) {
context.syscall = value
},
enumerable: true,
configurable: true,
},
})
if (context.path !== undefined) {
Object.defineProperty(this, 'path', {
get () {
return context.path
},
set (value) {
context.path = value
},
enumerable: true,
configurable: true,
})
}
if (context.dest !== undefined) {
Object.defineProperty(this, 'dest', {
get () {
return context.dest
},
set (value) {
context.dest = value
},
enumerable: true,
configurable: true,
})
}
}
toString () {
return `${this.name} [${this.code}]: ${this.message}`
}
[Symbol.for('nodejs.util.inspect.custom')] (_recurseTimes, ctx) {
return inspect(this, {
...ctx,
getters: true,
customInspect: false,
})
}
}
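// define and export a SystemError subclass for each known error code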
function E (code, message) {
module.exports[code] = class NodeError extends SystemError {
constructor (ctx) {
super(code, message, ctx)
}
}
}
E('ERR_FS_CP_DIR_TO_NON_DIR', 'Cannot overwrite directory with non-directory')
E('ERR_FS_CP_EEXIST', 'Target already exists')
E('ERR_FS_CP_EINVAL', 'Invalid src or dest')
E('ERR_FS_CP_FIFO_PIPE', 'Cannot copy a FIFO pipe')
E('ERR_FS_CP_NON_DIR_TO_DIR', 'Cannot overwrite non-directory with directory')
E('ERR_FS_CP_SOCKET', 'Cannot copy a socket file')
E('ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY', 'Cannot overwrite symlink in subdirectory of self')
E('ERR_FS_CP_UNKNOWN', 'Cannot copy an unknown file type')
E('ERR_FS_EISDIR', 'Path is a directory')
module.exports.ERR_INVALID_ARG_TYPE = class ERR_INVALID_ARG_TYPE extends Error {
constructor (name, expected, actual) {
super()
this.code = 'ERR_INVALID_ARG_TYPE'
this.message = `The ${name} argument must be ${expected}. Received ${typeof actual}`
}
}

View File

@@ -0,0 +1,22 @@
const fs = require('fs/promises')
const getOptions = require('../common/get-options.js')
const node = require('../common/node.js')
const polyfill = require('./polyfill.js')
// node 16.7.0 added fs.cp
const useNative = node.satisfies('>=16.7.0')
const cp = async (src, dest, opts) => {
const options = getOptions(opts, {
copy: ['dereference', 'errorOnExist', 'filter', 'force', 'preserveTimestamps', 'recursive'],
})
// the polyfill is tested separately from this module, no need to hack
// process.version to try to trigger it just for coverage
// istanbul ignore next
return useNative
? fs.cp(src, dest, options)
: polyfill(src, dest, options)
}
module.exports = cp

View File

@@ -0,0 +1,428 @@
// this file is a modified version of the code in node 17.2.0
// which is, in turn, a modified version of the fs-extra module on npm
// node core changes:
// - Use of the assert module has been replaced with core's error system.
// - All code related to the glob dependency has been removed.
// - Bring your own custom fs module is not currently supported.
// - Some basic code cleanup.
// changes here:
// - remove all callback related code
// - drop sync support
// - change assertions back to non-internal methods (see options.js)
// - throws ENOTDIR when rmdir gets an ENOENT for a path that exists in Windows
'use strict'
const {
ERR_FS_CP_DIR_TO_NON_DIR,
ERR_FS_CP_EEXIST,
ERR_FS_CP_EINVAL,
ERR_FS_CP_FIFO_PIPE,
ERR_FS_CP_NON_DIR_TO_DIR,
ERR_FS_CP_SOCKET,
ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY,
ERR_FS_CP_UNKNOWN,
ERR_FS_EISDIR,
ERR_INVALID_ARG_TYPE,
} = require('./errors.js')
const {
constants: {
errno: {
EEXIST,
EISDIR,
EINVAL,
ENOTDIR,
},
},
} = require('os')
const {
chmod,
copyFile,
lstat,
mkdir,
readdir,
readlink,
stat,
symlink,
unlink,
utimes,
} = require('fs/promises')
const {
dirname,
isAbsolute,
join,
parse,
resolve,
sep,
toNamespacedPath,
} = require('path')
const { fileURLToPath } = require('url')
const defaultOptions = {
dereference: false,
errorOnExist: false,
filter: undefined,
force: true,
preserveTimestamps: false,
recursive: false,
}
async function cp (src, dest, opts) {
if (opts != null && typeof opts !== 'object') {
throw new ERR_INVALID_ARG_TYPE('options', ['Object'], opts)
}
return cpFn(
toNamespacedPath(getValidatedPath(src)),
toNamespacedPath(getValidatedPath(dest)),
{ ...defaultOptions, ...opts })
}
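// accept either a path string or a WHATWG file URL and return a path string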
function getValidatedPath (fileURLOrPath) {
const path = fileURLOrPath != null && fileURLOrPath.href
&& fileURLOrPath.origin
? fileURLToPath(fileURLOrPath)
: fileURLOrPath
return path
}
async function cpFn (src, dest, opts) {
// Warn about using preserveTimestamps on 32-bit node
// istanbul ignore next
if (opts.preserveTimestamps && process.arch === 'ia32') {
const warning = 'Using the preserveTimestamps option in 32-bit ' +
'node is not recommended'
process.emitWarning(warning, 'TimestampPrecisionWarning')
}
const stats = await checkPaths(src, dest, opts)
const { srcStat, destStat } = stats
await checkParentPaths(src, srcStat, dest)
if (opts.filter) {
return handleFilter(checkParentDir, destStat, src, dest, opts)
}
return checkParentDir(destStat, src, dest, opts)
}
async function checkPaths (src, dest, opts) {
const { 0: srcStat, 1: destStat } = await getStats(src, dest, opts)
if (destStat) {
if (areIdentical(srcStat, destStat)) {
throw new ERR_FS_CP_EINVAL({
message: 'src and dest cannot be the same',
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
if (srcStat.isDirectory() && !destStat.isDirectory()) {
throw new ERR_FS_CP_DIR_TO_NON_DIR({
message: `cannot overwrite directory ${src} ` +
`with non-directory ${dest}`,
path: dest,
syscall: 'cp',
errno: EISDIR,
})
}
if (!srcStat.isDirectory() && destStat.isDirectory()) {
throw new ERR_FS_CP_NON_DIR_TO_DIR({
message: `cannot overwrite non-directory ${src} ` +
`with directory ${dest}`,
path: dest,
syscall: 'cp',
errno: ENOTDIR,
})
}
}
if (srcStat.isDirectory() && isSrcSubdir(src, dest)) {
throw new ERR_FS_CP_EINVAL({
message: `cannot copy ${src} to a subdirectory of self ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
return { srcStat, destStat }
}
function areIdentical (srcStat, destStat) {
return destStat.ino && destStat.dev && destStat.ino === srcStat.ino &&
destStat.dev === srcStat.dev
}
function getStats (src, dest, opts) {
const statFunc = opts.dereference ?
(file) => stat(file, { bigint: true }) :
(file) => lstat(file, { bigint: true })
return Promise.all([
statFunc(src),
statFunc(dest).catch((err) => {
// istanbul ignore next: unsure how to cover.
if (err.code === 'ENOENT') {
return null
}
// istanbul ignore next: unsure how to cover.
throw err
}),
])
}
async function checkParentDir (destStat, src, dest, opts) {
const destParent = dirname(dest)
const dirExists = await pathExists(destParent)
if (dirExists) {
return getStatsForCopy(destStat, src, dest, opts)
}
await mkdir(destParent, { recursive: true })
return getStatsForCopy(destStat, src, dest, opts)
}
function pathExists (dest) {
return stat(dest).then(
() => true,
// istanbul ignore next: not sure when this would occur
(err) => (err.code === 'ENOENT' ? false : Promise.reject(err)))
}
// Recursively check if dest parent is a subdirectory of src.
// It works for all file types including symlinks since it
// checks the src and dest inodes. It starts from the deepest
// parent and stops once it reaches the src parent or the root path.
async function checkParentPaths (src, srcStat, dest) {
const srcParent = resolve(dirname(src))
const destParent = resolve(dirname(dest))
if (destParent === srcParent || destParent === parse(destParent).root) {
return
}
let destStat
try {
destStat = await stat(destParent, { bigint: true })
} catch (err) {
// istanbul ignore else: not sure when this would occur
if (err.code === 'ENOENT') {
return
}
// istanbul ignore next: not sure when this would occur
throw err
}
if (areIdentical(srcStat, destStat)) {
throw new ERR_FS_CP_EINVAL({
message: `cannot copy ${src} to a subdirectory of self ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
return checkParentPaths(src, srcStat, destParent)
}
const normalizePathToArray = (path) =>
resolve(path).split(sep).filter(Boolean)
// Return true if dest is a subdir of src, otherwise false.
// It only checks the path strings.
function isSrcSubdir (src, dest) {
const srcArr = normalizePathToArray(src)
const destArr = normalizePathToArray(dest)
return srcArr.every((cur, i) => destArr[i] === cur)
}
async function handleFilter (onInclude, destStat, src, dest, opts, cb) {
const include = await opts.filter(src, dest)
if (include) {
return onInclude(destStat, src, dest, opts, cb)
}
}
function startCopy (destStat, src, dest, opts) {
if (opts.filter) {
return handleFilter(getStatsForCopy, destStat, src, dest, opts)
}
return getStatsForCopy(destStat, src, dest, opts)
}
async function getStatsForCopy (destStat, src, dest, opts) {
const statFn = opts.dereference ? stat : lstat
const srcStat = await statFn(src)
// istanbul ignore else: can't portably test FIFO
if (srcStat.isDirectory() && opts.recursive) {
return onDir(srcStat, destStat, src, dest, opts)
} else if (srcStat.isDirectory()) {
throw new ERR_FS_EISDIR({
message: `${src} is a directory (not copied)`,
path: src,
syscall: 'cp',
errno: EINVAL,
})
} else if (srcStat.isFile() ||
srcStat.isCharacterDevice() ||
srcStat.isBlockDevice()) {
return onFile(srcStat, destStat, src, dest, opts)
} else if (srcStat.isSymbolicLink()) {
return onLink(destStat, src, dest)
} else if (srcStat.isSocket()) {
throw new ERR_FS_CP_SOCKET({
message: `cannot copy a socket file: ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
} else if (srcStat.isFIFO()) {
throw new ERR_FS_CP_FIFO_PIPE({
message: `cannot copy a FIFO pipe: ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
// istanbul ignore next: should be unreachable
throw new ERR_FS_CP_UNKNOWN({
message: `cannot copy an unknown file type: ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
function onFile (srcStat, destStat, src, dest, opts) {
if (!destStat) {
return _copyFile(srcStat, src, dest, opts)
}
return mayCopyFile(srcStat, src, dest, opts)
}
async function mayCopyFile (srcStat, src, dest, opts) {
if (opts.force) {
await unlink(dest)
return _copyFile(srcStat, src, dest, opts)
} else if (opts.errorOnExist) {
throw new ERR_FS_CP_EEXIST({
message: `${dest} already exists`,
path: dest,
syscall: 'cp',
errno: EEXIST,
})
}
}
async function _copyFile (srcStat, src, dest, opts) {
await copyFile(src, dest)
if (opts.preserveTimestamps) {
return handleTimestampsAndMode(srcStat.mode, src, dest)
}
return setDestMode(dest, srcStat.mode)
}
async function handleTimestampsAndMode (srcMode, src, dest) {
// Make sure the file is writable before setting the timestamp
// otherwise open fails with EPERM when invoked with 'r+'
// (through utimes call)
if (fileIsNotWritable(srcMode)) {
await makeFileWritable(dest, srcMode)
return setDestTimestampsAndMode(srcMode, src, dest)
}
return setDestTimestampsAndMode(srcMode, src, dest)
}
function fileIsNotWritable (srcMode) {
return (srcMode & 0o200) === 0
}
function makeFileWritable (dest, srcMode) {
return setDestMode(dest, srcMode | 0o200)
}
async function setDestTimestampsAndMode (srcMode, src, dest) {
await setDestTimestamps(src, dest)
return setDestMode(dest, srcMode)
}
function setDestMode (dest, srcMode) {
return chmod(dest, srcMode)
}
async function setDestTimestamps (src, dest) {
// The initial srcStat.atime cannot be trusted
// because it is modified by the read(2) system call
// (See https://nodejs.org/api/fs.html#fs_stat_time_values)
const updatedSrcStat = await stat(src)
return utimes(dest, updatedSrcStat.atime, updatedSrcStat.mtime)
}
function onDir (srcStat, destStat, src, dest, opts) {
if (!destStat) {
return mkDirAndCopy(srcStat.mode, src, dest, opts)
}
return copyDir(src, dest, opts)
}
async function mkDirAndCopy (srcMode, src, dest, opts) {
await mkdir(dest)
await copyDir(src, dest, opts)
return setDestMode(dest, srcMode)
}
async function copyDir (src, dest, opts) {
const dir = await readdir(src)
for (let i = 0; i < dir.length; i++) {
const item = dir[i]
const srcItem = join(src, item)
const destItem = join(dest, item)
const { destStat } = await checkPaths(srcItem, destItem, opts)
await startCopy(destStat, srcItem, destItem, opts)
}
}
async function onLink (destStat, src, dest) {
let resolvedSrc = await readlink(src)
if (!isAbsolute(resolvedSrc)) {
resolvedSrc = resolve(dirname(src), resolvedSrc)
}
if (!destStat) {
return symlink(resolvedSrc, dest)
}
let resolvedDest
try {
resolvedDest = await readlink(dest)
} catch (err) {
// Dest exists and is a regular file or directory,
// Windows may throw UNKNOWN error. If dest already exists,
// fs throws error anyway, so no need to guard against it here.
// istanbul ignore next: can only test on windows
if (err.code === 'EINVAL' || err.code === 'UNKNOWN') {
return symlink(resolvedSrc, dest)
}
// istanbul ignore next: should not be possible
throw err
}
if (!isAbsolute(resolvedDest)) {
resolvedDest = resolve(dirname(dest), resolvedDest)
}
if (isSrcSubdir(resolvedSrc, resolvedDest)) {
throw new ERR_FS_CP_EINVAL({
message: `cannot copy ${resolvedSrc} to a subdirectory of self ` +
`${resolvedDest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
// Do not copy if src is a subdir of dest since unlinking
// dest in this case would result in removing src contents
// and therefore a broken symlink would be created.
const srcStat = await stat(src)
if (srcStat.isDirectory() && isSrcSubdir(resolvedDest, resolvedSrc)) {
throw new ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY({
message: `cannot overwrite ${resolvedDest} with ${resolvedSrc}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
return copyLink(resolvedSrc, dest)
}
async function copyLink (resolvedSrc, dest) {
await unlink(dest)
return symlink(resolvedSrc, dest)
}
module.exports = cp

View File

@@ -0,0 +1,13 @@
'use strict'
const cp = require('./cp/index.js')
const withTempDir = require('./with-temp-dir.js')
const readdirScoped = require('./readdir-scoped.js')
const moveFile = require('./move-file.js')
module.exports = {
cp,
withTempDir,
readdirScoped,
moveFile,
}

View File

@@ -0,0 +1,78 @@
const { dirname, join, resolve, relative, isAbsolute } = require('path')
const fs = require('fs/promises')
const pathExists = async path => {
try {
await fs.access(path)
return true
} catch (er) {
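// any failure other than ENOENT (e.g. EACCES) still means the path exists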
return er.code !== 'ENOENT'
}
}
const moveFile = async (source, destination, options = {}, root = true, symlinks = []) => {
if (!source || !destination) {
throw new TypeError('`source` and `destination` file required')
}
options = {
overwrite: true,
...options,
}
if (!options.overwrite && await pathExists(destination)) {
throw new Error(`The destination file exists: ${destination}`)
}
await fs.mkdir(dirname(destination), { recursive: true })
try {
await fs.rename(source, destination)
} catch (error) {
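// rename(2) can't cross devices/partitions (EXDEV) and may be blocked on
// Windows (EPERM), so fall back to copying and then deleting the source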
if (error.code === 'EXDEV' || error.code === 'EPERM') {
const sourceStat = await fs.lstat(source)
if (sourceStat.isDirectory()) {
const files = await fs.readdir(source)
await Promise.all(files.map((file) =>
moveFile(join(source, file), join(destination, file), options, false, symlinks)
))
} else if (sourceStat.isSymbolicLink()) {
symlinks.push({ source, destination })
} else {
await fs.copyFile(source, destination)
}
} else {
throw error
}
}
if (root) {
await Promise.all(symlinks.map(async ({ source: symSource, destination: symDestination }) => {
let target = await fs.readlink(symSource)
// junction symlinks in windows will be absolute paths, so we need to
// make sure they point to the symlink destination
if (isAbsolute(target)) {
target = resolve(symDestination, relative(symSource, target))
}
// try to determine what the actual file is so we can create the correct
// type of symlink in windows
let targetStat = 'file'
try {
targetStat = await fs.stat(resolve(dirname(symSource), target))
if (targetStat.isDirectory()) {
targetStat = 'junction'
}
} catch {
// targetStat remains 'file'
}
await fs.symlink(
target,
symDestination,
targetStat
)
}))
await fs.rm(source, { recursive: true, force: true })
}
}
module.exports = moveFile

View File

@@ -0,0 +1,20 @@
const { readdir } = require('fs/promises')
const { join } = require('path')
const readdirScoped = async (dir) => {
const results = []
for (const item of await readdir(dir)) {
if (item.startsWith('@')) {
for (const scopedItem of await readdir(join(dir, item))) {
results.push(join(item, scopedItem))
}
} else {
results.push(item)
}
}
return results
}
module.exports = readdirScoped

View File

@@ -0,0 +1,39 @@
const { join, sep } = require('path')
const getOptions = require('./common/get-options.js')
const { mkdir, mkdtemp, rm } = require('fs/promises')
// create a temp directory, ensure its permissions match its parent, then call
// the supplied function passing it the path to the directory. clean up after
// the function finishes, whether it throws or not
const withTempDir = async (root, fn, opts) => {
const options = getOptions(opts, {
copy: ['tmpPrefix'],
})
// create the directory
await mkdir(root, { recursive: true })
const target = await mkdtemp(join(`${root}${sep}`, options.tmpPrefix || ''))
let err
let result
try {
result = await fn(target)
} catch (_err) {
err = _err
}
try {
await rm(target, { force: true, recursive: true })
} catch {
// ignore errors
}
if (err) {
throw err
}
return result
}
module.exports = withTempDir

View File

@@ -0,0 +1,52 @@
{
"name": "@npmcli/fs",
"version": "3.1.1",
"description": "filesystem utilities for the npm cli",
"main": "lib/index.js",
"files": [
"bin/",
"lib/"
],
"scripts": {
"snap": "tap",
"test": "tap",
"npmclilint": "npmcli-lint",
"lint": "eslint \"**/*.{js,cjs,ts,mjs,jsx,tsx}\"",
"lintfix": "npm run lint -- --fix",
"posttest": "npm run lint",
"postsnap": "npm run lintfix --",
"postlint": "template-oss-check",
"template-oss-apply": "template-oss-apply --force"
},
"repository": {
"type": "git",
"url": "git+https://github.com/npm/fs.git"
},
"keywords": [
"npm",
"oss"
],
"author": "GitHub Inc.",
"license": "ISC",
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.22.0",
"tap": "^16.0.1"
},
"dependencies": {
"semver": "^7.3.5"
},
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.22.0"
},
"tap": {
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
}
}

View File

@@ -0,0 +1,46 @@
This software is dual-licensed under the ISC and MIT licenses.
You may use this software under EITHER of the following licenses.
----------
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
----------
Copyright Isaac Z. Schlueter and Contributors
All rights reserved.
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

View File

@@ -0,0 +1,23 @@
# abbrev-js
Just like [Ruby's Abbrev](http://apidock.com/ruby/Abbrev).

Usage:

```js
var abbrev = require("abbrev");
abbrev("foo", "fool", "folding", "flop");

// returns:
{ fl: 'flop'
, flo: 'flop'
, flop: 'flop'
, fol: 'folding'
, fold: 'folding'
, foldi: 'folding'
, foldin: 'folding'
, folding: 'folding'
, foo: 'foo'
, fool: 'fool'
}
```
This is handy for command-line scripts, or other cases where you want to be able to accept shorthands.

View File

@@ -0,0 +1,50 @@
module.exports = abbrev
function abbrev (...args) {
let list = args.length === 1 || Array.isArray(args[0]) ? args[0] : args
for (let i = 0, l = list.length; i < l; i++) {
list[i] = typeof list[i] === 'string' ? list[i] : String(list[i])
}
// sort them lexicographically, so that they're next to their nearest kin
list = list.sort(lexSort)
// walk through each, seeing how much it has in common with the next and previous
const abbrevs = {}
let prev = ''
for (let ii = 0, ll = list.length; ii < ll; ii++) {
const current = list[ii]
const next = list[ii + 1] || ''
let nextMatches = true
let prevMatches = true
if (current === next) {
continue
}
let j = 0
const cl = current.length
for (; j < cl; j++) {
const curChar = current.charAt(j)
nextMatches = nextMatches && curChar === next.charAt(j)
prevMatches = prevMatches && curChar === prev.charAt(j)
if (!nextMatches && !prevMatches) {
j++
break
}
}
prev = current
if (j === cl) {
abbrevs[current] = current
continue
}
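// record every abbreviation of current, from the shortest unambiguous
// prefix up to the full string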
for (let a = current.slice(0, j); j <= cl; j++) {
abbrevs[a] = current
a += current.charAt(j)
}
}
return abbrevs
}
function lexSort (a, b) {
return a === b ? 0 : a > b ? 1 : -1
}

View File

@@ -0,0 +1,43 @@
{
"name": "abbrev",
"version": "2.0.0",
"description": "Like ruby's abbrev module, but in js",
"author": "GitHub Inc.",
"main": "lib/index.js",
"scripts": {
"test": "tap",
"lint": "eslint \"**/*.js\"",
"postlint": "template-oss-check",
"template-oss-apply": "template-oss-apply --force",
"lintfix": "npm run lint -- --fix",
"snap": "tap",
"posttest": "npm run lint"
},
"repository": {
"type": "git",
"url": "https://github.com/npm/abbrev-js.git"
},
"license": "ISC",
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.8.0",
"tap": "^16.3.0"
},
"tap": {
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
},
"files": [
"bin/",
"lib/"
],
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.8.0"
}
}

View File

@@ -0,0 +1,18 @@
ISC License
Copyright npm, Inc.
Permission to use, copy, modify, and/or distribute this
software for any purpose with or without fee is hereby
granted, provided that the above copyright notice and this
permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND NPM DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO
EVENT SHALL NPM BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -0,0 +1,208 @@
are-we-there-yet
----------------
Track complex hierarchies of asynchronous task completion statuses. This is
intended to give you a way of recording and reporting the progress of the big
recursive fan-out-and-gather workflows that are so common in async code.
What you do with this completion data is up to you, but the most common use case is to
feed it to one of the many progress bar modules.
Most progress bar modules include a rudimentary version of this, but my
needs were more complex.
Usage
=====
```javascript
var fs = require("fs")
var TrackerGroup = require("are-we-there-yet").TrackerGroup
var top = new TrackerGroup("program")
var single = top.newItem("one thing", 100)
single.completeWork(20)
console.log(top.completed()) // 0.2
fs.stat("file", function(er, stat) {
if (er) throw er
var stream = top.newStream("file", stat.size)
console.log(top.completed()) // now 0.1 as single is 50% of the job and is 20% complete
// and 50% * 20% == 10%
fs.createReadStream("file").pipe(stream).on("data", function (chunk) {
// do stuff with chunk
})
top.on("change", function (name) {
// called each time a chunk is read from "file"
// top.completed() will start at 0.1 and fill up to 0.6 as the file is read
})
})
```
Shared Methods
==============
* var completed = tracker.completed()
Implemented in: `Tracker`, `TrackerGroup`, `TrackerStream`
Returns the ratio of completed work to work to be done. Range of 0 to 1.
* tracker.finish()
Implemented in: `Tracker`, `TrackerGroup`
Marks the tracker as completed. With a TrackerGroup this marks all of its
components as completed, which in turn means that `tracker.completed()` for
this will now be 1.
This will result in one or more `change` events being emitted.
Events
======
All tracker objects emit `change` events with the following arguments:
```
function (name, completed, tracker)
```
`name` is the name of the tracker that originally emitted the event,
or if it didn't have one, the first containing tracker group that had one.
`completed` is the percent complete (as returned by the `tracker.completed()` method).
`tracker` is the tracker object that you are listening for events on.
TrackerGroup
============
* var tracker = new TrackerGroup(**name**)
* **name** *(optional)* - The name of this tracker group, used in change
notifications if the component updating didn't have a name. Defaults to undefined.
Creates a new empty tracker aggregation group. These are trackers whose
completion status is determined by the completion status of other trackers added to this aggregation group.
Ex.
```javascript
var tracker = new TrackerGroup("parent")
var foo = tracker.newItem("firstChild", 100)
var bar = tracker.newItem("secondChild", 100)
foo.finish()
console.log(tracker.completed()) // 0.5
bar.finish()
console.log(tracker.completed()) // 1
```
* tracker.addUnit(**otherTracker**, **weight**)
* **otherTracker** - Any of the other are-we-there-yet tracker objects
* **weight** *(optional)* - The weight to give the tracker, defaults to 1.
Adds the **otherTracker** to this aggregation group. The weight determines
how long you expect this tracker to take to complete in proportion to other
units. So for instance, if you add one tracker with a weight of 1 and
another with a weight of 2, you're saying the second will take twice as long
to complete as the first. As such, the first will account for 33% of the
completion of this tracker and the second will account for the other 67%.
Returns **otherTracker**.
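For example, a minimal sketch (with hypothetical tracker names) showing how
weights combine:
```javascript
var awty = require("are-we-there-yet")
var group = new awty.TrackerGroup("weighted")
var light = group.addUnit(new awty.Tracker("light", 10), 1)
var heavy = group.addUnit(new awty.Tracker("heavy", 10), 2)
light.finish()                 // light is 100% done, but only 1/3 of the weight
heavy.completeWork(5)          // heavy is 50% done, and 2/3 of the weight
console.log(group.completed()) // 1/3 * 1 + 2/3 * 0.5 = 0.666...
```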
* var subGroup = tracker.newGroup(**name**, **weight**)
The above is exactly equivalent to:
```javascript
var subGroup = tracker.addUnit(new TrackerGroup(name), weight)
```
* var subItem = tracker.newItem(**name**, **todo**, **weight**)
The above is exactly equivalent to:
```javascript
var subItem = tracker.addUnit(new Tracker(name, todo), weight)
```
* var subStream = tracker.newStream(**name**, **todo**, **weight**)
The above is exactly equivalent to:
```javascript
var subStream = tracker.addUnit(new TrackerStream(name, todo), weight)
```
* console.log( tracker.debug() )
Returns a tree showing the completion of this tracker group and all of its
children, including recursively entering all of the children.
Tracker
=======
* var tracker = new Tracker(**name**, **todo**)
* **name** *(optional)* The name of this counter to report in change
events. Defaults to undefined.
* **todo** *(optional)* The amount of work to be done (a number). Defaults to 0.
Ordinarily these are constructed as a part of a tracker group (via
`newItem`).
* var completed = tracker.completed()
Returns the ratio of completed work to work to be done. Range of 0 to 1. If
total work to be done is 0 then it will return 0.
* tracker.addWork(**todo**)
* **todo** A number to add to the amount of work to be done.
Increases the amount of work to be done, thus decreasing the completion
percentage. Triggers a `change` event.
* tracker.completeWork(**completed**)
* **completed** A number to add to the work complete
Increase the amount of work complete, thus increasing the completion percentage.
Will never increase the work completed past the amount of work todo. That is,
percentages > 100% are not allowed. Triggers a `change` event.
* tracker.finish()
Marks this tracker as finished, tracker.completed() will now be 1. Triggers
a `change` event.
TrackerStream
=============
* var tracker = new TrackerStream(**name**, **size**, **options**)
* **name** *(optional)* The name of this counter to report in change
events. Defaults to undefined.
* **size** *(optional)* The number of bytes being sent through this stream.
* **options** *(optional)* A hash of stream options
The tracker stream object is a pass-through stream that updates an internal
tracker object each time a block passes through. It's intended to track
downloads, file extraction, and other related activities. You use it by piping
your data source into it and then using it as your data source.
If your data has a length attribute then that's used as the amount of work
completed when the chunk is passed through. If it does not (e.g., object
streams) then each chunk counts as completing 1 unit of work, so your size
should be the total number of objects being streamed.
* tracker.addWork(**todo**)
* **todo** Increase the expected overall size by **todo** bytes.
Increases the amount of work to be done, thus decreasing the completion
percentage. Triggers a `change` event.

View File

@@ -0,0 +1,4 @@
'use strict'
exports.TrackerGroup = require('./tracker-group.js')
exports.Tracker = require('./tracker.js')
exports.TrackerStream = require('./tracker-stream.js')

View File

@@ -0,0 +1,13 @@
'use strict'
const EventEmitter = require('events')
let trackerId = 0
class TrackerBase extends EventEmitter {
constructor (name) {
super()
this.id = ++trackerId
this.name = name
}
}
module.exports = TrackerBase

View File

@@ -0,0 +1,112 @@
'use strict'
const TrackerBase = require('./tracker-base.js')
const Tracker = require('./tracker.js')
const TrackerStream = require('./tracker-stream.js')
class TrackerGroup extends TrackerBase {
parentGroup = null
trackers = []
completion = {}
weight = {}
totalWeight = 0
finished = false
bubbleChange = bubbleChange(this)
nameInTree () {
var names = []
var from = this
while (from) {
names.unshift(from.name)
from = from.parentGroup
}
return names.join('/')
}
addUnit (unit, weight) {
if (unit.addUnit) {
var toTest = this
while (toTest) {
if (unit === toTest) {
throw new Error(
'Attempted to add tracker group ' +
unit.name + ' to tree that already includes it ' +
this.nameInTree(this))
}
toTest = toTest.parentGroup
}
unit.parentGroup = this
}
this.weight[unit.id] = weight || 1
this.totalWeight += this.weight[unit.id]
this.trackers.push(unit)
this.completion[unit.id] = unit.completed()
unit.on('change', this.bubbleChange)
if (!this.finished) {
this.emit('change', unit.name, this.completion[unit.id], unit)
}
return unit
}
completed () {
if (this.trackers.length === 0) {
return 0
}
var valPerWeight = 1 / this.totalWeight
var completed = 0
for (var ii = 0; ii < this.trackers.length; ii++) {
var trackerId = this.trackers[ii].id
completed +=
valPerWeight * this.weight[trackerId] * this.completion[trackerId]
}
return completed
}
newGroup (name, weight) {
return this.addUnit(new TrackerGroup(name), weight)
}
newItem (name, todo, weight) {
return this.addUnit(new Tracker(name, todo), weight)
}
newStream (name, todo, weight) {
return this.addUnit(new TrackerStream(name, todo), weight)
}
finish () {
this.finished = true
if (!this.trackers.length) {
this.addUnit(new Tracker(), 1, true)
}
for (var ii = 0; ii < this.trackers.length; ii++) {
var tracker = this.trackers[ii]
tracker.finish()
tracker.removeListener('change', this.bubbleChange)
}
this.emit('change', this.name, 1, this)
}
debug (depth = 0) {
const indent = ' '.repeat(depth)
let output = `${indent}${this.name || 'top'}: ${this.completed()}\n`
this.trackers.forEach(function (tracker) {
output += tracker instanceof TrackerGroup
? tracker.debug(depth + 1)
: `${indent} ${tracker.name}: ${tracker.completed()}\n`
})
return output
}
}
function bubbleChange (trackerGroup) {
return function (name, completed, tracker) {
trackerGroup.completion[tracker.id] = completed
if (trackerGroup.finished) {
return
}
trackerGroup.emit('change', name || trackerGroup.name, trackerGroup.completed(), trackerGroup)
}
}
module.exports = TrackerGroup

View File

@@ -0,0 +1,42 @@
'use strict'
const stream = require('stream')
const Tracker = require('./tracker.js')
class TrackerStream extends stream.Transform {
constructor (name, size, options) {
super(options)
this.tracker = new Tracker(name, size)
this.name = name
this.id = this.tracker.id
this.tracker.on('change', this.trackerChange.bind(this))
}
trackerChange (name, completion) {
this.emit('change', name, completion, this)
}
_transform (data, encoding, cb) {
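// byte chunks report their length; object-mode chunks count as 1 unit each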
this.tracker.completeWork(data.length ? data.length : 1)
this.push(data)
cb()
}
_flush (cb) {
this.tracker.finish()
cb()
}
completed () {
return this.tracker.completed()
}
addWork (work) {
return this.tracker.addWork(work)
}
finish () {
return this.tracker.finish()
}
}
module.exports = TrackerStream

View File

@@ -0,0 +1,34 @@
'use strict'
const TrackerBase = require('./tracker-base.js')
class Tracker extends TrackerBase {
constructor (name, todo) {
super(name)
this.workDone = 0
this.workTodo = todo || 0
}
completed () {
return this.workTodo === 0 ? 0 : this.workDone / this.workTodo
}
addWork (work) {
this.workTodo += work
this.emit('change', this.name, this.completed(), this)
}
completeWork (work) {
this.workDone += work
if (this.workDone > this.workTodo) {
this.workDone = this.workTodo
}
this.emit('change', this.name, this.completed(), this)
}
finish () {
this.workTodo = this.workDone = 1
this.emit('change', this.name, 1, this)
}
}
module.exports = Tracker

View File

@@ -0,0 +1,53 @@
{
"name": "are-we-there-yet",
"version": "4.0.2",
"description": "Keep track of the overall completion of many disparate processes",
"main": "lib/index.js",
"scripts": {
"test": "tap",
"lint": "eslint \"**/*.{js,cjs,ts,mjs,jsx,tsx}\"",
"lintfix": "npm run lint -- --fix",
"posttest": "npm run lint",
"postsnap": "npm run lintfix --",
"snap": "tap",
"postlint": "template-oss-check",
"template-oss-apply": "template-oss-apply --force"
},
"repository": {
"type": "git",
"url": "https://github.com/npm/are-we-there-yet.git"
},
"author": "GitHub Inc.",
"license": "ISC",
"bugs": {
"url": "https://github.com/npm/are-we-there-yet/issues"
},
"homepage": "https://github.com/npm/are-we-there-yet",
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.21.3",
"tap": "^16.0.1"
},
"files": [
"bin/",
"lib/"
],
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"tap": {
"branches": 68,
"statements": 92,
"functions": 86,
"lines": 92,
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.21.3",
"publish": true
}
}

View File

@@ -0,0 +1,2 @@
tidelift: "npm/brace-expansion"
patreon: juliangruber

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2013 Julian Gruber <julian@juliangruber.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,135 @@
# brace-expansion
[Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html),
as known from sh/bash, in JavaScript.
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion)
[![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion)
[![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/)
[![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion)
## Example
```js
var expand = require('brace-expansion');
expand('file-{a,b,c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
expand('-v{,,}')
// => ['-v', '-v', '-v']
expand('file{0..2}.jpg')
// => ['file0.jpg', 'file1.jpg', 'file2.jpg']
expand('file-{a..c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
expand('file{2..0}.jpg')
// => ['file2.jpg', 'file1.jpg', 'file0.jpg']
expand('file{0..4..2}.jpg')
// => ['file0.jpg', 'file2.jpg', 'file4.jpg']
expand('file-{a..e..2}.jpg')
// => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg']
expand('file{00..10..5}.jpg')
// => ['file00.jpg', 'file05.jpg', 'file10.jpg']
expand('{{A..C},{a..c}}')
// => ['A', 'B', 'C', 'a', 'b', 'c']
expand('ppp{,config,oe{,conf}}')
// => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf']
```
## API
```js
var expand = require('brace-expansion');
```
### var expanded = expand(str)
Return an array of all possible and valid expansions of `str`. If none are
found, `[str]` is returned.
Valid expansions are:
```js
/^(.*,)+(.+)?$/
// {a,b,...}
```
A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`.
```js
/^-?\d+\.\.-?\d+(\.\.-?\d+)?$/
// {x..y[..incr]}
```
A numeric sequence from `x` to `y` inclusive, with optional increment.
If `x` or `y` starts with a leading `0`, all the numbers will be padded
to have equal length. Negative numbers and backwards iteration work too.
```js
/^[a-zA-Z]\.\.[a-zA-Z](\.\.-?\d+)?$/
// {x..y[..incr]}
```
An alphabetic sequence from `x` to `y` inclusive, with optional increment.
`x` and `y` must be exactly one character, and if given, `incr` must be a
number.
For compatibility reasons, the string `${` is not eligible for brace expansion.
## Installation
With [npm](https://npmjs.org) do:
```bash
npm install brace-expansion
```
## Contributors
- [Julian Gruber](https://github.com/juliangruber)
- [Isaac Z. Schlueter](https://github.com/isaacs)
## Sponsors
This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)!
Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)!
## Security contact information
To report a security vulnerability, please use the
[Tidelift security contact](https://tidelift.com/security).
Tidelift will coordinate the fix and disclosure.
## License
(MIT)
Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt;
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,203 @@
var balanced = require('balanced-match');
module.exports = expandTop;
var escSlash = '\0SLASH'+Math.random()+'\0';
var escOpen = '\0OPEN'+Math.random()+'\0';
var escClose = '\0CLOSE'+Math.random()+'\0';
var escComma = '\0COMMA'+Math.random()+'\0';
var escPeriod = '\0PERIOD'+Math.random()+'\0';
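// treat a range endpoint as a number when it parses cleanly as one,
// otherwise fall back to its character code (for alphabetic sequences)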
function numeric(str) {
return parseInt(str, 10) == str
? parseInt(str, 10)
: str.charCodeAt(0);
}
function escapeBraces(str) {
return str.split('\\\\').join(escSlash)
.split('\\{').join(escOpen)
.split('\\}').join(escClose)
.split('\\,').join(escComma)
.split('\\.').join(escPeriod);
}
function unescapeBraces(str) {
return str.split(escSlash).join('\\')
.split(escOpen).join('{')
.split(escClose).join('}')
.split(escComma).join(',')
.split(escPeriod).join('.');
}
// Basically just str.split(","), but handling cases
// where we have nested braced sections, which should be
// treated as individual members, like {a,{b,c},d}
function parseCommaParts(str) {
if (!str)
return [''];
var parts = [];
var m = balanced('{', '}', str);
if (!m)
return str.split(',');
var pre = m.pre;
var body = m.body;
var post = m.post;
var p = pre.split(',');
p[p.length-1] += '{' + body + '}';
var postParts = parseCommaParts(post);
if (post.length) {
p[p.length-1] += postParts.shift();
p.push.apply(p, postParts);
}
parts.push.apply(parts, p);
return parts;
}
function expandTop(str) {
if (!str)
return [];
// I don't know why Bash 4.3 does this, but it does.
// Anything starting with {} will have the first two bytes preserved
// but *only* at the top level, so {},a}b will not expand to anything,
// but a{},b}c will be expanded to [a}c,abc].
// One could argue that this is a bug in Bash, but since the goal of
// this module is to match Bash's rules, we escape a leading {}
if (str.substr(0, 2) === '{}') {
str = '\\{\\}' + str.substr(2);
}
return expand(escapeBraces(str), true).map(unescapeBraces);
}
function embrace(str) {
return '{' + str + '}';
}
function isPadded(el) {
return /^-?0\d/.test(el);
}
function lte(i, y) {
return i <= y;
}
function gte(i, y) {
return i >= y;
}
function expand(str, isTop) {
var expansions = [];
var m = balanced('{', '}', str);
if (!m) return [str];
// no need to expand pre, since it is guaranteed to be free of brace-sets
var pre = m.pre;
var post = m.post.length
? expand(m.post, false)
: [''];
if (/\$$/.test(m.pre)) {
for (var k = 0; k < post.length; k++) {
var expansion = pre+ '{' + m.body + '}' + post[k];
expansions.push(expansion);
}
} else {
var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body);
var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body);
var isSequence = isNumericSequence || isAlphaSequence;
var isOptions = m.body.indexOf(',') >= 0;
if (!isSequence && !isOptions) {
// {a},b}
if (m.post.match(/,.*\}/)) {
str = m.pre + '{' + m.body + escClose + m.post;
return expand(str);
}
return [str];
}
var n;
if (isSequence) {
n = m.body.split(/\.\./);
} else {
n = parseCommaParts(m.body);
if (n.length === 1) {
// x{{a,b}}y ==> x{a}y x{b}y
n = expand(n[0], false).map(embrace);
if (n.length === 1) {
return post.map(function(p) {
return m.pre + n[0] + p;
});
}
}
}
// at this point, n is the parts, and we know it's not a comma set
// with a single entry.
var N;
if (isSequence) {
var x = numeric(n[0]);
var y = numeric(n[1]);
var width = Math.max(n[0].length, n[1].length);
var incr = n.length == 3
? Math.abs(numeric(n[2]))
: 1;
var test = lte;
var reverse = y < x;
if (reverse) {
incr *= -1;
test = gte;
}
var pad = n.some(isPadded);
N = [];
for (var i = x; test(i, y); i += incr) {
var c;
if (isAlphaSequence) {
c = String.fromCharCode(i);
if (c === '\\')
c = '';
} else {
c = String(i);
if (pad) {
var need = width - c.length;
if (need > 0) {
var z = new Array(need + 1).join('0');
if (i < 0)
c = '-' + z + c.slice(1);
else
c = z + c;
}
}
}
N.push(c);
}
} else {
N = [];
for (var j = 0; j < n.length; j++) {
N.push.apply(N, expand(n[j], false));
}
}
for (var j = 0; j < N.length; j++) {
for (var k = 0; k < post.length; k++) {
var expansion = pre + N[j] + post[k];
if (!isTop || isSequence || expansion)
expansions.push(expansion);
}
}
}
return expansions;
}
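
// A usage sketch (added for illustration; not part of the upstream file).
// Assuming the usual `module.exports = expandTop` wiring earlier in this
// file, so that `var expand = require('brace-expansion')`:
//
//   expand('file-{a,b,c}.jpg')  // => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
//   expand('{1..3}')            // => ['1', '2', '3']
//   expand('{05..07}')          // => ['05', '06', '07']   (zero-padded, see isPadded)
//   expand('a{b,c{d,e}}f')      // => ['abf', 'acdf', 'acef'] (nested sets recurse)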


@@ -0,0 +1,46 @@
{
  "name": "brace-expansion",
  "description": "Brace expansion as known from sh/bash",
  "version": "2.0.1",
  "repository": {
    "type": "git",
    "url": "git://github.com/juliangruber/brace-expansion.git"
  },
  "homepage": "https://github.com/juliangruber/brace-expansion",
  "main": "index.js",
  "scripts": {
    "test": "tape test/*.js",
    "gentest": "bash test/generate.sh",
    "bench": "matcha test/perf/bench.js"
  },
  "dependencies": {
    "balanced-match": "^1.0.0"
  },
  "devDependencies": {
    "@c4312/matcha": "^1.3.1",
    "tape": "^4.6.0"
  },
  "keywords": [],
  "author": {
    "name": "Julian Gruber",
    "email": "mail@juliangruber.com",
    "url": "http://juliangruber.com"
  },
  "license": "MIT",
  "testling": {
    "files": "test/*.js",
    "browsers": [
      "ie/8..latest",
      "firefox/20..latest",
      "firefox/nightly",
      "chrome/25..latest",
      "chrome/canary",
      "opera/12..latest",
      "opera/next",
      "safari/5.1..latest",
      "ipad/6.0..latest",
      "iphone/6.0..latest",
      "android-browser/4.2..latest"
    ]
  }
}


@@ -0,0 +1,16 @@
ISC License

Copyright (c) npm, Inc.

Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE COPYRIGHT HOLDER DISCLAIMS
ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.


@@ -0,0 +1,716 @@
# cacache [![npm version](https://img.shields.io/npm/v/cacache.svg)](https://npm.im/cacache) [![license](https://img.shields.io/npm/l/cacache.svg)](https://npm.im/cacache) [![Travis](https://img.shields.io/travis/npm/cacache.svg)](https://travis-ci.org/npm/cacache) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/cacache?svg=true)](https://ci.appveyor.com/project/npm/cacache) [![Coverage Status](https://coveralls.io/repos/github/npm/cacache/badge.svg?branch=latest)](https://coveralls.io/github/npm/cacache?branch=latest)
[`cacache`](https://github.com/npm/cacache) is a Node.js library for managing
local key and content address caches. It's really fast, really good at
concurrency, and it will never give you corrupted data, even if cache files
get corrupted or manipulated.
On systems that support user and group settings on files, cacache will
match the `uid` and `gid` values to the folder where the cache lives, even
when running as `root`.
It was written to be used as [npm](https://npm.im)'s local cache, but can
just as easily be used on its own.
## Install
`$ npm install --save cacache`
## Table of Contents
* [Example](#example)
* [Features](#features)
* [Contributing](#contributing)
* [API](#api)
  * [Using localized APIs](#localized-api)
  * Reading
    * [`ls`](#ls)
    * [`ls.stream`](#ls-stream)
    * [`get`](#get-data)
    * [`get.stream`](#get-stream)
    * [`get.info`](#get-info)
    * [`get.hasContent`](#get-hasContent)
  * Writing
    * [`put`](#put-data)
    * [`put.stream`](#put-stream)
    * [`rm.all`](#rm-all)
    * [`rm.entry`](#rm-entry)
    * [`rm.content`](#rm-content)
    * [`index.compact`](#index-compact)
    * [`index.insert`](#index-insert)
  * Utilities
    * [`clearMemoized`](#clear-memoized)
    * [`tmp.mkdir`](#tmp-mkdir)
    * [`tmp.withTmp`](#with-tmp)
  * Integrity
    * [Subresource Integrity](#integrity)
    * [`verify`](#verify)
    * [`verify.lastRun`](#verify-last-run)
### Example
```javascript
const cacache = require('cacache')
const fs = require('fs')
const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'
// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
console.log(`Saved content to ${cachePath}.`)
})
const destination = '/tmp/mytar.tgz'
// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
cachePath, key
).pipe(
fs.createWriteStream(destination)
).on('finish', () => {
console.log('done extracting!')
})
// The same thing, but skip the key index.
cacache.get.byDigest(cachePath, integrityHash).then(data => {
fs.writeFile(destination, data, err => {
console.log('tarball data fetched based on its sha512sum and written out!')
})
})
```
### Features
* Extraction by key or by content address (shasum, etc)
* [Subresource Integrity](#integrity) web standard support
* Multi-hash support - safely host sha1, sha512, etc, in a single cache
* Automatic content deduplication
* Fault tolerance (immune to corruption, partial writes, process races, etc)
* Consistency guarantees on read and write (full data verification)
* Lockless, high-concurrency cache access
* Streaming support
* Promise support
* Fast -- sub-millisecond reads and writes including verification
* Arbitrary metadata storage
* Garbage collection and additional offline verification
* Thorough test coverage
* There's probably a bloom filter in there somewhere. Those are cool, right? 🤔
### Contributing
The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.
All participants and maintainers in this project are expected to follow the [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other.
Please refer to the [Changelog](CHANGELOG.md) for project history details, too.
Happy hacking!
### API
#### <a name="ls"></a> `> cacache.ls(cache) -> Promise<Object>`
Lists info for all entries currently in the cache as a single large object. Each
entry in the object will be keyed by the unique index key, with corresponding
[`get.info`](#get-info) objects as the values.
##### Example
```javascript
cacache.ls(cachePath).then(console.log)
// Output
{
'my-thing': {
key: 'my-thing',
integrity: 'sha512-BaSe64/EnCoDED+HAsh=='
path: '.testcache/content/deadbeef', // joined with `cachePath`
time: 12345698490,
size: 4023948,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
},
'other-thing': {
key: 'other-thing',
integrity: 'sha1-ANothER+hasH=',
path: '.testcache/content/bada55',
time: 11992309289,
size: 111112
}
}
```
#### <a name="ls-stream"></a> `> cacache.ls.stream(cache) -> Readable`
Lists info for all entries currently in the cache as a single large object.
This works just like [`ls`](#ls), except [`get.info`](#get-info) entries are
returned as `'data'` events on the returned stream.
##### Example
```javascript
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
key: 'my-thing',
integrity: 'sha512-BaSe64HaSh',
path: '.testcache/content/deadbeef', // joined with `cachePath`
time: 12345698490,
size: 13423,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
}
{
key: 'other-thing',
integrity: 'whirlpool-WoWSoMuchSupport',
path: '.testcache/content/bada55',
time: 11992309289,
size: 498023984029
}
{
...
}
```
#### <a name="get-data"></a> `> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})`
Returns an object with the cached data, digest, and metadata identified by
`key`. The `data` property of this object will be a `Buffer` instance that
presumably holds some data that means something to you. I'm sure you know what
to do with it! cacache just won't care.
`integrity` is a [Subresource
Integrity](#integrity)
string. That is, a string that can be used to verify `data`, which looks like
`<hash-algorithm>-<base64-integrity-hash>`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, the promise will be rejected.
A sub-function, `get.byDigest`, may be used for identical behavior, except lookup
will happen by integrity hash, bypassing the index entirely. This version of the
function *only* returns `data` itself, without any wrapper.
See: [options](#get-options)
##### Note
This function loads the entire cache entry into memory before returning it. If
you're dealing with Very Large data, consider using [`get.stream`](#get-stream)
instead.
##### Example
```javascript
// Look up by key
cache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
metadata: {
thingName: 'my'
},
integrity: 'sha512-BaSe64HaSh',
data: Buffer#<deadbeef>,
size: 9320
}
// Look up by digest
cache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>
```
#### <a name="get-stream"></a> `> cacache.get.stream(cache, key, [opts]) -> Readable`
Returns a [Readable Stream](https://nodejs.org/api/stream.html#stream_readable_streams) of the cached data identified by `key`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, an error will be emitted.
`metadata` and `integrity` events will be emitted before the stream closes, if
you need to collect that extra data about the cached entry.
A sub-function, `get.stream.byDigest`, may be used for identical behavior,
except lookup will happen by integrity hash, bypassing the index entirely. This
version does not emit the `metadata` and `integrity` events at all.
See: [options](#get-options)
##### Example
```javascript
// Look up by key
cache.get.stream(
cachePath, 'my-thing'
).on('metadata', metadata => {
console.log('metadata:', metadata)
}).on('integrity', integrity => {
console.log('integrity:', integrity)
}).pipe(
fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='
// Look up by digest
cache.get.stream.byDigest(
cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
fs.createWriteStream('./x.tgz')
)
```
#### <a name="get-info"></a> `> cacache.get.info(cache, key) -> Promise`
Looks up `key` in the cache index, returning information about the entry if
one exists.
##### Fields
* `key` - Key the entry was looked up under. Matches the `key` argument.
* `integrity` - [Subresource Integrity hash](#integrity) for the content this entry refers to.
* `path` - Filesystem path where content is stored, joined with `cache` argument.
* `time` - Timestamp the entry was first added on.
* `metadata` - User-assigned metadata associated with the entry/content.
##### Example
```javascript
cacache.get.info(cachePath, 'my-thing').then(console.log)
// Output
{
key: 'my-thing',
integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
path: '.testcache/content/deadbeef',
time: 12345698490,
size: 849234,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
}
```
#### <a name="get-hasContent"></a> `> cacache.get.hasContent(cache, integrity) -> Promise`
Looks up a [Subresource Integrity hash](#integrity) in the cache. If content
exists for this `integrity`, it will return an object containing the specific single
integrity hash that was found, under the `sri` key, and the size of the found content as `size`. If no content exists for this integrity, it will return `false`.
##### Example
```javascript
cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)
// Output
{
sri: {
source: 'sha256-MUSTVERIFY+ALL/THINGS==',
algorithm: 'sha256',
digest: 'MUSTVERIFY+ALL/THINGS==',
options: []
},
size: 9001
}
cacache.get.hasContent(cachePath, 'sha512-NOT+IN/CACHE==').then(console.log)
// Output
false
```
##### <a name="get-options"></a> Options
##### `opts.integrity`
If present, the pre-calculated digest for the inserted content. If this option
is provided and does not match the post-insertion digest, insertion will fail
with an `EINTEGRITY` error.
##### `opts.memoize`
Default: null
If explicitly truthy, cacache will read from memory and memoize data on bulk read. If `false`, cacache will bypass memoization and read from disk. By default, reader functions consult the in-memory cache first.
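For example, a minimal sketch (reusing `cachePath` from the examples above) of forcing a disk read:
```javascript
// Skip the in-memory layer entirely and read straight from disk.
cacache.get(cachePath, 'my-thing', { memoize: false }).then(res => {
  console.log('read from disk:', res.data.length, 'bytes')
})
```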
##### `opts.size`
If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, insertion will fail
with an `EBADSIZE` error.
#### <a name="put-data"></a> `> cacache.put(cache, key, data, [opts]) -> Promise`
Inserts data passed to it into the cache. The returned Promise resolves with a
digest (generated according to [`opts.algorithms`](#optsalgorithms)) after the
cache entry has been successfully written.
See: [options](#put-options)
##### Example
```javascript
fetch(
'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(integrity => {
console.log('integrity hash is', integrity)
})
```
#### <a name="put-stream"></a> `> cacache.put.stream(cache, key, [opts]) -> Writable`
Returns a [Writable
Stream](https://nodejs.org/api/stream.html#stream_writable_streams) that inserts
data written to it into the cache. Emits an `integrity` event with the digest of
written contents when it succeeds.
See: [options](#put-options)
##### Example
```javascript
request.get(
'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
cacache.put.stream(
cachePath, 'registry.npmjs.org|cacache@1.0.0'
).on('integrity', d => console.log(`integrity digest is ${d}`))
)
```
##### <a name="put-options"></a> Options
##### `opts.metadata`
Arbitrary metadata to be attached to the inserted key.
##### `opts.size`
If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, insertion will fail
with an `EBADSIZE` error.
##### `opts.integrity`
If present, the pre-calculated digest for the inserted content. If this option
is provided and does not match the post-insertion digest, insertion will fail
with an `EINTEGRITY` error.
`algorithms` has no effect if this option is present.
##### `opts.integrityEmitter`
*Streaming only* If present, uses the provided event emitter as a source of
truth for both integrity and size. This allows use cases where integrity is
already being calculated outside of cacache to reuse that data instead of
calculating it a second time.
The emitter must emit both the `'integrity'` and `'size'` events.
NOTE: If this option is provided, you must verify that you receive the correct
integrity value yourself and emit an `'error'` event if there is a mismatch.
[ssri Integrity Streams](https://github.com/npm/ssri#integrity-stream) do this for you when given an expected integrity.
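A minimal sketch of what this might look like with an ssri integrity stream; `source` and `expectedIntegrity` are assumed placeholders:
```javascript
const ssri = require('ssri')
// The integrity stream verifies data as it flows through, emits the
// 'integrity' and 'size' events cacache needs, and errors on mismatch
// when given an expected integrity.
const verifier = ssri.integrityStream({ integrity: expectedIntegrity })
source.pipe(verifier).pipe(
  cacache.put.stream(cachePath, 'my-key', { integrityEmitter: verifier })
)
```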
##### `opts.algorithms`
Default: ['sha512']
Hashing algorithms to use when calculating the [subresource integrity
digest](#integrity)
for inserted data. Can use any algorithm listed in `crypto.getHashes()` or
`'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You
may also use any anagram of `'modnar'` to use this feature.
Currently only supports one algorithm at a time (i.e., an array length of
exactly `1`). Has no effect if `opts.integrity` is present.
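For instance, a sketch of inserting with a non-default algorithm (`data` is assumed):
```javascript
cacache.put(cachePath, 'my-sha1-thing', data, { algorithms: ['sha1'] })
  .then(integrity => console.log('stored under', integrity)) // e.g. 'sha1-...'
```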
##### `opts.memoize`
Default: null
If provided, cacache will memoize the given cache insertion in memory, bypassing
any filesystem checks for that key or digest in future cache fetches. Nothing
will be written to the in-memory cache unless this option is explicitly truthy.
If `opts.memoize` is an object or a `Map`-like (that is, an object with `get`
and `set` methods), it will be written to instead of the global memoization
cache.
Reading from disk data can be forced by explicitly passing `memoize: false` to
the reader functions, but their default will be to read from memory.
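As a sketch, a private `Map` can serve as the memoization target; passing the same object to the reader is an assumption based on the note above:
```javascript
const memo = new Map()
cacache.put(cachePath, 'my-thing', data, { memoize: memo }).then(() => {
  // Later reads against the same Map-like can skip filesystem checks.
  return cacache.get(cachePath, 'my-thing', { memoize: memo })
})
```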
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
#### <a name="rm-all"></a> `> cacache.rm.all(cache) -> Promise`
Clears the entire cache, mainly by blowing away the cache directory itself.
##### Example
```javascript
cacache.rm.all(cachePath).then(() => {
console.log('THE APOCALYPSE IS UPON US 😱')
})
```
#### <a name="rm-entry"></a> `> cacache.rm.entry(cache, key, [opts]) -> Promise`
Alias: `cacache.rm`
Removes the index entry for `key`. Content will still be accessible if
requested directly by content address ([`get.stream.byDigest`](#get-stream)).
By default, this appends a new entry to the index with an integrity of `null`.
If `opts.removeFully` is set to `true` then the index file itself will be
physically deleted rather than appending a `null`.
To remove the content itself (which might still be used by other entries), use
[`rm.content`](#rm-content). Or, to safely vacuum any unused content, use
[`verify`](#verify).
##### Example
```javascript
cacache.rm.entry(cachePath, 'my-thing').then(() => {
console.log('I did not like it anyway')
})
```
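And a sketch of the `removeFully` variant described above:
```javascript
// Physically delete the index file rather than appending a null entry.
cacache.rm.entry(cachePath, 'my-thing', { removeFully: true }).then(() => {
  console.log('index entry deleted outright')
})
```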
#### <a name="rm-content"></a> `> cacache.rm.content(cache, integrity) -> Promise`
Removes the content identified by `integrity`. Any index entries referring to it
will not be usable again until the content is re-added to the cache with an
identical digest.
##### Example
```javascript
cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
console.log('data for my-thing is gone!')
})
```
#### <a name="index-compact"></a> `> cacache.index.compact(cache, key, matchFn, [opts]) -> Promise`
Uses `matchFn`, which must be a synchronous function that accepts two entries
and returns a boolean indicating whether or not the two entries match, to
deduplicate all entries in the cache for the given `key`.
If `opts.validateEntry` is provided, it will be called with a single index entry
as its only parameter. The function must return a boolean: if it returns `true`,
the entry is considered valid and will be kept in the index; if it returns
`false`, the entry will be removed from the index.
If `opts.validateEntry` is not provided, however, every entry in the index will
be deduplicated and kept until the first `null` integrity is reached, removing
all entries that were written before the `null`.
The deduplicated list of entries is both written to the index, replacing the
existing content, and returned in the Promise.
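No example is given above, so here is a minimal sketch, assuming entries should be treated as duplicates when their `integrity` values match:
```javascript
cacache.index.compact(cachePath, 'my-key', (a, b) => a.integrity === b.integrity)
  .then(entries => console.log(entries.length, 'deduplicated entries remain'))
```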
#### <a name="index-insert"></a> `> cacache.index.insert(cache, key, integrity, opts) -> Promise`
Writes an index entry to the cache for the given `key` without writing content.
It is assumed that, if you are using this method, you have already stored the content
some other way and you only wish to add a new index to that content. The `metadata`
and `size` properties are read from `opts` and used as part of the index entry.
Returns a Promise resolving to the newly added entry.
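A sketch, assuming the content behind `integrity` was already written by some other means:
```javascript
cacache.index.insert(cachePath, 'alias-key', integrity, {
  size: 1024, // size of the already-stored content (assumed here)
  metadata: { note: 'second key pointing at existing content' },
}).then(entry => console.log('new index entry for', entry.key))
```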
#### <a name="clear-memoized"></a> `> cacache.clearMemoized()`
Completely resets the in-memory entry cache.
#### <a name="tmp-mkdir"></a> `> tmp.mkdir(cache, opts) -> Promise<Path>`
Returns a unique temporary directory inside the cache's `tmp` dir. This
directory will use the same safe user assignment as the rest of the cache.
Once the directory is made, it's the user's responsibility to ensure that all files
within are given the appropriate `gid`/`uid` ownership settings to match
the rest of the cache. If not, you can ask cacache to do it for you by
calling [`tmp.fix()`](#tmp-fix), which will fix all tmp directory
permissions.
If you want automatic cleanup of this directory, use
[`tmp.withTmp()`](#with-tmp).
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})
```
#### <a name="tmp-fix"></a> `> tmp.fix(cache) -> Promise`
Sets the `uid` and `gid` properties on all files and folders within the tmp
folder to match the rest of the cache.
Use this after manually writing files into [`tmp.mkdir`](#tmp-mkdir) or
[`tmp.withTmp`](#with-tmp).
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
writeFile(path.join(dir, 'file'), someData).then(() => {
// make sure we didn't just put a root-owned file in the cache
cacache.tmp.fix().then(() => {
// all uids and gids match now
})
})
})
```
#### <a name="with-tmp"></a> `> tmp.withTmp(cache, opts, cb) -> Promise`
Creates a temporary directory with [`tmp.mkdir()`](#tmp-mkdir) and calls `cb`
with it. The created temporary directory will be removed when the return value
of `cb()` resolves: if you return a promise from `cb`, the tmp directory will
be automatically deleted once that promise completes.
The same caveats apply when it comes to managing permissions for the tmp dir's
contents.
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.withTmp(cache, dir => {
return fs.writeFile(path.join(dir, 'blablabla'), 'blabla contents', { encoding: 'utf8' })
}).then(() => {
// `dir` no longer exists
})
```
##### <a name="tmp-options"></a> Options
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
#### <a name="integrity"></a> Subresource Integrity Digests
For content verification and addressing, cacache uses strings following the
[Subresource
Integrity spec](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity).
That is, any time cacache expects an `integrity` argument or option, it
should be in the format `<hashAlgorithm>-<base64-hash>`.
One deviation from the current spec is that cacache will support any hash
algorithms supported by the underlying Node.js process. You can use
`crypto.getHashes()` to see which ones you can use.
##### Generating Digests Yourself
If you have an existing content shasum, it is generally formatted as a
hexadecimal string (that is, a sha1 would look like:
`5f5513f8822fdbe5145af33b64d8d970dcf95c6e`). In order to be compatible with
cacache, you'll need to convert this to an equivalent subresource integrity
string. For this example, the corresponding hash would be:
`sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=`.
If you want to generate an integrity string yourself for existing data, you can
use something like this:
```javascript
const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'
const integrity = (
hashAlgorithm +
'-' +
crypto.createHash(hashAlgorithm).update(data).digest('base64')
)
```
You can also use [`ssri`](https://npm.im/ssri) to have a richer set of functionality
around SRI strings, including generation, parsing, and translating from existing
hex-formatted strings.
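For example, `ssri.fromHex` (part of ssri's documented API) can perform the hex-to-SRI conversion shown above:
```javascript
const ssri = require('ssri')
// Convert a hex-formatted sha1 digest into a subresource integrity string.
const sri = ssri.fromHex('5f5513f8822fdbe5145af33b64d8d970dcf95c6e', 'sha1')
console.log(sri.toString()) // 'sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4='
```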
#### <a name="verify"></a> `> cacache.verify(cache, opts) -> Promise`
Checks out and fixes up your cache:
* Cleans up corrupted or invalid index entries.
* Custom entry filtering options.
* Garbage collects any content entries not referenced by the index.
* Checks integrity for all content entries and removes invalid content.
* Fixes cache ownership.
* Removes the `tmp` directory in the cache and all its contents.
When it's done, it'll return an object with various stats about the verification
process, including amount of storage reclaimed, number of valid entries, number
of entries removed, etc.
##### <a name="verify-options"></a> Options
##### `opts.concurrency`
Default: 20
Number of files to read concurrently from the filesystem while doing cleanup.
##### `opts.filter`
Receives a formatted entry. Return false to remove it.
Note: might be called more than once on the same entry.
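A sketch of such a filter (the key prefix here is purely illustrative):
```javascript
cacache.verify(cachePath, {
  // Keep only entries whose keys carry this (hypothetical) prefix.
  filter: entry => entry.key.startsWith('registry.npmjs.org|'),
}).then(stats => console.log('verify stats:', stats))
```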
##### `opts.log`
Custom logger function:
```
log: { silly () {} }
// later, called as:
log.silly('verify', 'verifying cache at', cache)
```
##### Example
```sh
echo somegarbage >> $CACHEPATH/content/deadbeef
```
```javascript
cacache.verify(cachePath).then(stats => {
// deadbeef collected, because of invalid checksum.
console.log('cache is much nicer now! stats:', stats)
})
```
#### <a name="verify-last-run"></a> `> cacache.verify.lastRun(cache) -> Promise`
Returns a `Date` representing the last time `cacache.verify` was run on `cache`.
##### Example
```javascript
cacache.verify(cachePath).then(() => {
cacache.verify.lastRun(cachePath).then(lastTime => {
console.log('cacache.verify was last called on ' + lastTime)
})
})
```

Some files were not shown because too many files have changed in this diff.