Last commit July 5th

2024-07-05 13:46:23 +02:00
parent dad0d86e8c
commit b0e4dfbb76
24982 changed files with 2621219 additions and 413 deletions

spa/node_modules/@npmcli/metavuln-calculator/LICENSE generated vendored Normal file

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) npm, Inc.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

spa/node_modules/@npmcli/metavuln-calculator/README.md generated vendored Normal file

@@ -0,0 +1,289 @@
# @npmcli/metavuln-calculator
Calculate meta-vulnerabilities from package security advisories
This is a pretty low-level package to abstract out the parts of
[@npmcli/arborist](http://npm.im/@npmcli/arborist) that calculate
metavulnerabilities from security advisories. If you just want an audit
for a package tree, you probably want `arborist.audit()` instead.
## USAGE
```js
const Calculator = require('@npmcli/metavuln-calculator')
// pass in any options for cacache and pacote
// see those modules for option descriptions
const calculator = new Calculator(options)
// get an advisory somehow, typically by POSTing a JSON payload like:
// {"pkgname":["1.2.3","4.3.5", ...versions], ...packages}
// to /-/npm/v1/security/advisories/bulk
// to get a payload response like:
// {
// "semver": [
// {
// "id": 31,
// "url": "https://npmjs.com/advisories/31",
// "title": "Regular Expression Denial of Service",
// "severity": "moderate",
// "vulnerable_versions": "<4.3.2"
// }
// ],
// ...advisories
// }
const Arborist = require('@npmcli/arborist')
const arb = new Arborist(options)
const tree = await arb.loadActual()
const advisories = await getBulkAdvisoryReportSomehow(tree)
// then to get a comprehensive set of advisories including metavulns:
const set = new Set()
for (const [name, advisory] of Object.entries(advisories)) {
// make sure we have the advisories loaded with latest version lists
set.add(await calculator.calculate(name, {advisory}))
}
for (const vuln of set) {
for (const node of tree.inventory.query('name', vuln.name)) {
// not vulnerable, just keep looking
if (!vuln.testVersion(node.version))
continue
for (const { from: dep, spec } of node.edgesIn) {
const metaAdvisory = await calculator.calculate(dep.name, vuln)
if (metaAdvisory.testVersion(dep.version, spec)) {
set.add(metaAdvisory)
}
}
}
}
```
## API
### Class: Advisory
The `Calculator.calculate` method returns a Promise that resolves to an
`Advisory` object, filled in from the cache and updated if necessary with
the available advisory data.
Do not instantiate `Advisory` objects directly. Use the `calculate()`
method to get one with appropriate data filled in.
Do not mutate `Advisory` objects. Use the supplied methods only.
#### Fields
- `name` The name of the package that this vulnerability is about
- `id` The unique cache key for this vuln or metavuln. (See **Cache Keys**
below.)
- `dependency` For metavulns, the dependency that causes this package to
  have a vulnerability. For advisories, the same as `name`.
- `type` Either `'advisory'` or `'metavuln'`, depending on the type of
vulnerability that this object represents.
- `url` The url for the advisory (`null` for metavulns)
- `title` The text title of the advisory or metavuln
- `severity` The severity level: `info`, `low`, `medium`, `high`, or `critical`
- `range` The range that is vulnerable
- `versions` The set of available versions of the package
- `vulnerableVersions` The set of versions that are vulnerable
- `source` The numeric ID of the advisory, or the cache key of the
vulnerability that causes this metavuln
- `updated` Boolean indicating whether this vulnerability was updated since
being read from cache.
- `packument` The packument object for the package that this vulnerability
is about.
#### `vuln.testVersion(version, [dependencySpecifier]) -> Boolean`
Check to see if a given version is vulnerable. Returns `true` if the
version is vulnerable, and should be avoided.
For metavulns, `dependencySpecifier` indicates the version range of the
vulnerable dependency that the module depends on. If not provided, it is
read from the packument; if it cannot be determined from the packument
either, `true` is returned, indicating that the (not installable) package
version should be avoided.
#### Cache Keys
The cache keys are calculated by hashing together the `source` and `name`
fields, prefixing with the string `'security-advisory:'` and the name of
the dependency that is vulnerable.
So, a third-level metavulnerability might have a key like:
```
'security-advisory:foo:' + hash(['foo', hash(['bar', hash(['baz', 123])])])
```
Thus, the cached entry with this key would reflect the version of `foo`
that is vulnerable by virtue of depending exclusively on versions of
`bar` which are vulnerable by virtue of depending exclusively on versions
of `baz` which are vulnerable by virtue of advisory ID `123`.
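Concretely, the derivation can be sketched like this (a minimal sketch
mirroring the package's `lib/hash.js`; the advisory ID and names are the
illustrative ones from above):
```js
const { createHash } = require('crypto')

// sha512 over the JSON-encoded [name, source] pair, base64-encoded
const hash = ({ name, source }) => createHash('sha512')
  .update(JSON.stringify([name, source]))
  .digest('base64')

// key for the direct advisory against baz (advisory ID 123):
const bazKey = 'security-advisory:baz:' + hash({ name: 'baz', source: 123 })

// key for the second-level metavuln in bar, sourced from baz's advisory:
const barKey = 'security-advisory:bar:' +
  hash({ name: 'bar', source: hash({ name: 'baz', source: 123 }) })
```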
Loading advisory data entirely from cache without hitting an npm registry
security advisory endpoint is not supported at this time, but technically
possible, and likely to come in a future version of this library.
### `calculator = new Calculator(options)`
Options object is used for `cacache` and `pacote` calls.
### `calculator.calculate(name, source)`
- `name` The name of the package that the advisory is about
- `source` Advisory object from the npm security endpoint, or an `Advisory`
object returned by a previous call to the `calculate()` method.
"Advisory" objects need to have:
- `id` id of the advisory or Advisory object
- `vulnerable_versions` range of versions affected
- `url`
- `title`
- `severity`
Fetches the packument and returns a Promise that resolves to an
`Advisory` object as described above.
Performs the I/O required to fetch package metadata from the registry and
read from the cache. Updated advisory information is written back to the
cache.
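For example (a minimal sketch reusing the advisory shape from the USAGE
section; values are illustrative):
```js
const Calculator = require('@npmcli/metavuln-calculator')
const calculator = new Calculator()

const advisory = await calculator.calculate('semver', {
  id: 31,
  url: 'https://npmjs.com/advisories/31',
  title: 'Regular Expression Denial of Service',
  severity: 'moderate',
  vulnerable_versions: '<4.3.2',
})

// the resulting Advisory can in turn seed a metavuln calculation for a
// package that depends on semver (name is hypothetical):
const meta = await calculator.calculate('some-dependent', advisory)
```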
## Dependent Version Sampling
Typically, dependency ranges don't change very frequently, and the most
recent version published on a given release line is most likely to contain
the fix for a given vulnerability.
So, we see things like this:
```
3.0.4 - not vulnerable
3.0.3 - vulnerable
3.0.2 - vulnerable
3.0.1 - vulnerable
3.0.0 - vulnerable
2.3.107 - not vulnerable
2.3.106 - not vulnerable
2.3.105 - vulnerable
... 523 more vulnerable versions ...
2.0.0 - vulnerable
1.1.102 - not vulnerable
1.1.101 - vulnerable
... 387 more vulnerable versions ...
0.0.0 - vulnerable
```
In order to determine which versions of a package are affected by a
vulnerability in a dependency, this module uses the following algorithm to
minimize the number of tests required by performing a binary search on each
version set, and presuming that versions _between_ vulnerable versions
within a given set are also vulnerable.
1. Sort list of available versions by SemVer precedence
2. Group versions into sets based on MAJOR/MINOR versions.
3.0.0 - 3.0.4
2.3.0 - 2.3.107
2.2.0 - 2.2.43
2.1.0 - 2.1.432
2.0.0 - 2.0.102
1.1.0 - 1.1.102
1.0.0 - 1.0.157
0.1.0 - 0.1.123
0.0.0 - 0.0.57
3. Test the highest and lowest in each MAJOR/MINOR set, and mark highest
and lowest with known-vulnerable status. (`(s)` means "safe" and `(v)`
means "vulnerable".)
3.0.0(v) - 3.0.4(s)
2.3.0(v) - 2.3.107(s)
2.2.0(v) - 2.2.43(v)
2.1.0(v) - 2.1.432(v)
2.0.0(v) - 2.0.102(v)
1.1.0(v) - 1.1.102(s)
1.0.0(v) - 1.0.157(v)
0.1.0(v) - 0.1.123(v)
0.0.0(v) - 0.0.57(v)
4. For each set of package versions:
1. If highest and lowest are both vulnerable, assume the entire set is
vulnerable, and continue to the next set. That is, in the example, throw out
the following version sets:
2.2.0(v) - 2.2.43(v)
2.1.0(v) - 2.1.432(v)
2.0.0(v) - 2.0.102(v)
1.0.0(v) - 1.0.157(v)
0.1.0(v) - 0.1.123(v)
0.0.0(v) - 0.0.57(v)
2. Test middle version MID in set, splitting into two sets.
3.0.0(v) - 3.0.2(v) - 3.0.4(s)
2.3.0(v) - 2.3.54(v) - 2.3.107(s)
1.1.0(v) - 1.1.51(v) - 1.1.102(s)
3. If any untested versions in Set(mid..highest) or Set(lowest..mid),
add to list of sets to test.
3.0.0(v) - 3.0.2(v) <-- thrown out on next iteration
3.0.2(v) - 3.0.4(s)
2.3.0(v) - 2.3.54(v) <-- thrown out on next iteration
2.3.54(v) - 2.3.107(s)
1.1.0(v) - 1.1.51(v) <-- thrown out on next iteration
1.1.51(v) - 1.1.102(s)
When the process finishes, all versions are either confirmed safe, or
confirmed/assumed vulnerable, and we avoid checking large sets of versions
where vulnerabilities went unfixed.
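A simplified sketch of this procedure (not the package's exact code, which
also memoizes test results and special-cases prerelease versions;
`isVulnerable` is a hypothetical stand-in for the expensive per-version
check):
```js
const semver = require('semver')

const sampleVulnerable = (versions, isVulnerable) => {
  const vulnerable = new Set()
  // steps 1-2: sort by SemVer precedence, group into MAJOR.MINOR sets
  const groups = new Map()
  for (const v of semver.sort([...versions])) {
    const key = `${semver.major(v)}.${semver.minor(v)}`
    if (!groups.has(key)) {
      groups.set(key, [])
    }
    groups.get(key).push(v)
  }
  const queue = [...groups.values()]
  for (const set of queue) {
    // step 3: test the endpoints of the set
    const headVuln = isVulnerable(set[0])
    const tailVuln = isVulnerable(set[set.length - 1])
    // step 4.1: both endpoints vulnerable, assume the whole set is
    if (headVuln && tailVuln) {
      for (const v of set) {
        vulnerable.add(v)
      }
      continue
    }
    if (headVuln) {
      vulnerable.add(set[0])
    }
    if (tailVuln) {
      vulnerable.add(set[set.length - 1])
    }
    // steps 4.2-4.3: split at the midpoint and re-queue both halves
    // (the halves share the midpoint, so it gets tested next round)
    if (set.length > 2) {
      const mid = Math.floor(set.length / 2)
      queue.push(set.slice(0, mid + 1), set.slice(mid))
    }
  }
  return vulnerable
}
```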
### Testing Version for MetaVuln Status
When the dependency is in `bundleDependencies`, we treat any dependent
version that _may_ be vulnerable as a vulnerability. If the dependency is
not in `bundleDependencies`, then we treat the dependent module as a
vulnerability if it can _only_ resolve to dependency versions that are
vulnerable.
This relies on the reasonable assumption that the version of a bundled
dependency will be within the stated dependency range, and accounts for the
fact that we can't know ahead of time which version of a dependency may be
bundled. So, we avoid versions that _may_ bundle a vulnerable dependency.
For example:
Package `foo` depends on package `bar` at the following version ranges:
```
foo version bar version range
1.0.0 ^1.2.3
1.0.1 ^1.2.4
1.0.2 ^1.2.5
1.1.0 ^1.3.1
1.1.1 ^1.3.2
1.1.2 ^1.3.3
2.0.0 ^2.0.0
2.0.1 ^2.0.1
2.0.2 ^2.0.2
```
There is an advisory for `bar@1.2.4 - 1.3.2`. So:
```
foo version vulnerable?
1.0.0 if bundled (can use 1.2.3, which is not vulnerable)
1.0.1 yes (must use ^1.2.4, entirely contained in vuln range)
1.0.2 yes (must use ^1.2.5, entirely contained in vuln range)
1.1.0 if bundled (can use 1.3.3, which is not vulnerable)
1.1.1 if bundled (can use 1.3.3, which is not vulnerable)
1.1.2 no (dep is outside of vuln range)
2.0.0 no (dep is outside of vuln range)
2.0.1 no (dep is outside of vuln range)
2.0.2 no (dep is outside of vuln range)
```
To test a package version for metaVulnerable status, we attempt to load the
manifest of the dependency, using the vulnerable version set as the `avoid`
versions. If we end up selecting a version that should be avoided, then
that means that the package is vulnerable by virtue of its dependency.
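A hedged sketch of that decision (not the package's exact code; `spec` is
the dependent's range on the vulnerable package, `avoid` its vulnerable
range, and `published` its list of published versions):
```js
const semver = require('semver')

const isMetaVulnerable = (spec, avoid, published, bundled) => {
  // bundled: any overlap with the vulnerable range may ship a bad copy
  if (bundled) {
    return semver.intersects(spec, avoid)
  }
  // not bundled: vulnerable only if every resolvable version is bad; an
  // empty result (nothing satisfies the spec) is likewise best avoided
  const resolvable = published.filter(v => semver.satisfies(v, spec))
  return resolvable.every(v => semver.satisfies(v, avoid))
}
```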


@@ -0,0 +1,435 @@
const hash = require('./hash.js')
const semver = require('semver')
const semverOpt = { includePrerelease: true, loose: true }
const getDepSpec = require('./get-dep-spec.js')
// any fields that we don't want in the cache need to be hidden
const _source = Symbol('source')
const _packument = Symbol('packument')
const _versionVulnMemo = Symbol('versionVulnMemo')
const _updated = Symbol('updated')
const _options = Symbol('options')
const _specVulnMemo = Symbol('specVulnMemo')
const _testVersion = Symbol('testVersion')
const _testVersions = Symbol('testVersions')
const _calculateRange = Symbol('calculateRange')
const _markVulnerable = Symbol('markVulnerable')
const _testSpec = Symbol('testSpec')
class Advisory {
constructor (name, source, options = {}) {
this.source = source.id
this[_source] = source
this[_options] = options
this.name = name
if (!source.name) {
source.name = name
}
this.dependency = source.name
if (this.type === 'advisory') {
this.title = source.title
this.url = source.url
} else {
this.title = `Depends on vulnerable versions of ${source.name}`
this.url = null
}
this.severity = source.severity || 'high'
this.versions = []
this.vulnerableVersions = []
this.cwe = source.cwe
this.cvss = source.cvss
// advisories have the range, metavulns do not
// if an advisory doesn't specify range, assume all are vulnerable
this.range = this.type === 'advisory' ? source.vulnerable_versions || '*'
: null
this.id = hash(this)
this[_packument] = null
// memoized list of which versions are vulnerable
this[_versionVulnMemo] = new Map()
// memoized list of which dependency specs are vulnerable
this[_specVulnMemo] = new Map()
this[_updated] = false
}
// true if we updated from what we had in cache
get updated () {
return this[_updated]
}
get type () {
return this.dependency === this.name ? 'advisory' : 'metavuln'
}
get packument () {
return this[_packument]
}
// load up the data from a cache entry and a fetched packument
load (cached, packument) {
// basic data integrity gutcheck
if (!cached || typeof cached !== 'object') {
throw new TypeError('invalid cached data, expected object')
}
if (!packument || typeof packument !== 'object') {
throw new TypeError('invalid packument data, expected object')
}
if (cached.id && cached.id !== this.id) {
throw Object.assign(new Error('loading from incorrect cache entry'), {
expected: this.id,
actual: cached.id,
})
}
if (packument.name !== this.name) {
throw Object.assign(new Error('loading from incorrect packument'), {
expected: this.name,
actual: packument.name,
})
}
if (this[_packument]) {
throw new Error('advisory object already loaded')
}
// if we have a range from the initialization, and the cached
// data has a *different* range, then we know we have to recalc.
// just don't use the cached data, so we will definitely not match later
if (!this.range || cached.range && cached.range === this.range) {
Object.assign(this, cached)
}
this[_packument] = packument
const pakuVersions = Object.keys(packument.versions || {})
const allVersions = new Set([...pakuVersions, ...this.versions])
const versionsAdded = []
const versionsRemoved = []
for (const v of allVersions) {
if (!this.versions.includes(v)) {
versionsAdded.push(v)
this.versions.push(v)
} else if (!pakuVersions.includes(v)) {
versionsRemoved.push(v)
}
}
// strip out any removed versions from our lists, and sort by semver
this.versions = semver.sort(this.versions.filter(v =>
!versionsRemoved.includes(v)), semverOpt)
// if no changes, then just return what we got from cache
// versions added or removed always means we changed
// otherwise, advisories change if the range changes, and
// metavulns change if the source was updated
const unchanged = this.type === 'advisory'
? this.range && this.range === cached.range
: !this[_source].updated
// if the underlying source changed, by an advisory updating the
// range, or a source advisory being updated, then we have to re-check
// otherwise, only recheck the new ones.
this.vulnerableVersions = !unchanged ? []
: semver.sort(this.vulnerableVersions.filter(v =>
!versionsRemoved.includes(v)), semverOpt)
if (unchanged && !versionsAdded.length && !versionsRemoved.length) {
// nothing added or removed, nothing to do here. use the cached copy.
return this
}
this[_updated] = true
// test any versions newly added
if (!unchanged || versionsAdded.length) {
this[_testVersions](unchanged ? versionsAdded : this.versions)
}
this.vulnerableVersions = semver.sort(this.vulnerableVersions, semverOpt)
// metavulns have to calculate their range, since cache is invalidated
// advisories just get their range from the advisory above
if (this.type === 'metavuln') {
this[_calculateRange]()
}
return this
}
[_calculateRange] () {
// calling semver.simplifyRange with a massive list of versions, and those
// versions all concatenated with `||` is a geometric CPU explosion!
// we can try to be a *little* smarter up front by doing x-y for all
// contiguous version sets in the list
const ranges = []
this.versions = semver.sort(this.versions, semverOpt)
this.vulnerableVersions = semver.sort(this.vulnerableVersions, semverOpt)
for (let v = 0, vulnVer = 0; v < this.versions.length; v++) {
// figure out the vulnerable subrange
const vr = [this.versions[v]]
while (v < this.versions.length) {
if (this.versions[v] !== this.vulnerableVersions[vulnVer]) {
// we don't test prerelease versions, so just skip past it
if (/-/.test(this.versions[v])) {
v++
continue
}
break
}
if (vr.length > 1) {
vr[1] = this.versions[v]
} else {
vr.push(this.versions[v])
}
v++
vulnVer++
}
// it'll either be just the first version, which means no overlap,
// or the start and end versions, which might be the same version
if (vr.length > 1) {
const tail = this.versions[this.versions.length - 1]
ranges.push(vr[1] === tail ? `>=${vr[0]}`
: vr[0] === vr[1] ? vr[0]
: vr.join(' - '))
}
}
const metavuln = ranges.join(' || ').trim()
this.range = !metavuln ? '<0.0.0-0'
: semver.simplifyRange(this.versions, metavuln, semverOpt)
}
// returns true if marked as vulnerable, false if ok
// spec is a dependency specifier, for metavuln cases
// where the version might not be in the packument. if
// we have the packument and spec is not provided, then
// we use the dependency version from the manifest.
testVersion (version, spec = null) {
const sv = String(version)
if (this[_versionVulnMemo].has(sv)) {
return this[_versionVulnMemo].get(sv)
}
const result = this[_testVersion](version, spec)
if (result) {
this[_markVulnerable](version)
}
this[_versionVulnMemo].set(sv, !!result)
return result
}
[_markVulnerable] (version) {
const sv = String(version)
if (!this.vulnerableVersions.includes(sv)) {
this.vulnerableVersions.push(sv)
}
}
[_testVersion] (version, spec) {
const sv = String(version)
if (this.vulnerableVersions.includes(sv)) {
return true
}
if (this.type === 'advisory') {
// advisory, just test range
return semver.satisfies(version, this.range, semverOpt)
}
// check the dependency of this version on the vulnerable dep
// if we got a version that's not in the packument, fall back on
// the spec provided, if possible.
const mani = this[_packument]?.versions?.[version] || {
dependencies: {
[this.dependency]: spec,
},
}
if (!spec) {
spec = getDepSpec(mani, this.dependency)
}
// no dep, no vuln
if (spec === null) {
return false
}
if (!semver.validRange(spec, semverOpt)) {
// not a semver range, nothing we can hope to do about it
return true
}
const bd = mani.bundleDependencies
const bundled = bd && bd.includes(this[_source].name)
// XXX if bundled, then semver.intersects() means vulnerable
// else, pick a manifest and see if it can't be avoided
// try to pick a version of the dep that isn't vulnerable
const avoid = this[_source].range
if (bundled) {
return semver.intersects(spec, avoid, semverOpt)
}
return this[_source].testSpec(spec)
}
testSpec (spec) {
// testing all the versions is a bit costly, and the spec tends to stay
// consistent across multiple versions, so memoize this as well, in case
// we're testing lots of versions.
const memo = this[_specVulnMemo]
if (memo.has(spec)) {
return memo.get(spec)
}
const res = this[_testSpec](spec)
memo.set(spec, res)
return res
}
[_testSpec] (spec) {
for (const v of this.versions) {
const satisfies = semver.satisfies(v, spec)
if (!satisfies) {
continue
}
if (!this.testVersion(v)) {
return false
}
}
// either vulnerable, or not installable because nothing satisfied
// either way, best avoided.
return true
}
[_testVersions] (versions) {
if (!versions.length) {
return
}
// set of lists of versions
const versionSets = new Set()
versions = semver.sort(versions.map(v => semver.parse(v, semverOpt)))
// start out with the versions grouped by major and minor
let last = versions[0].major + '.' + versions[0].minor
let list = []
versionSets.add(list)
for (const v of versions) {
const k = v.major + '.' + v.minor
if (k !== last) {
last = k
list = []
versionSets.add(list)
}
list.push(v)
}
for (const set of versionSets) {
// it's common to have version lists like:
// 1.0.0
// 1.0.1-alpha.0
// 1.0.1-alpha.1
// ...
// 1.0.1-alpha.999
// 1.0.1
// 1.0.2-alpha.0
// ...
// 1.0.2-alpha.99
// 1.0.2
// with a huge number of prerelease versions that are not installable
// anyway.
// If mid has a prerelease tag, and set[0] does not, then walk it
// back until we hit a non-prerelease version
// If mid has a prerelease tag, and set[set.length-1] does not,
// then walk it forward until we hit a version without a prerelease tag
// Similarly, if the head/tail is a prerelease, but there is a non-pr
// version in the set, then start there instead.
let h = 0
const origHeadVuln = this.testVersion(set[h])
while (h < set.length && /-/.test(String(set[h]))) {
h++
}
// don't filter out the whole list! they might all be pr's
if (h === set.length) {
h = 0
} else if (origHeadVuln) {
// if the original was vulnerable, assume so are all of these
for (let hh = 0; hh < h; hh++) {
this[_markVulnerable](set[hh])
}
}
let t = set.length - 1
const origTailVuln = this.testVersion(set[t])
while (t > h && /-/.test(String(set[t]))) {
t--
}
// don't filter out the whole list! might all be pr's
if (t === h) {
t = set.length - 1
} else if (origTailVuln) {
// if original tail was vulnerable, assume these are as well
for (let tt = set.length - 1; tt > t; tt--) {
this[_markVulnerable](set[tt])
}
}
const headVuln = h === 0 ? origHeadVuln
: this.testVersion(set[h])
const tailVuln = t === set.length - 1 ? origTailVuln
: this.testVersion(set[t])
// if head and tail both vulnerable, whole list is thrown out
if (headVuln && tailVuln) {
for (let v = h; v < t; v++) {
this[_markVulnerable](set[v])
}
continue
}
// if length is 2 or 1, then we marked them all already
if (t < h + 2) {
continue
}
const mid = Math.floor(set.length / 2)
const pre = set.slice(0, mid)
const post = set.slice(mid)
// if the parent list wasn't prereleases, then drop pr tags
// from end of the pre list, and beginning of the post list,
// marking as vulnerable if the midpoint item we picked is.
if (!/-/.test(String(pre[0]))) {
const midVuln = this.testVersion(pre[pre.length - 1])
while (/-/.test(String(pre[pre.length - 1]))) {
const v = pre.pop()
if (midVuln) {
this[_markVulnerable](v)
}
}
}
if (!/-/.test(String(post[post.length - 1]))) {
const midVuln = this.testVersion(post[0])
while (/-/.test(String(post[0]))) {
const v = post.shift()
if (midVuln) {
this[_markVulnerable](v)
}
}
}
versionSets.add(pre)
versionSets.add(post)
}
}
}
module.exports = Advisory


@@ -0,0 +1,15 @@
module.exports = (mani, name) => {
// skip dev because that only matters at the root,
// where we aren't fetching a manifest from the registry
// with multiple versions anyway.
const {
dependencies: deps = {},
optionalDependencies: optDeps = {},
peerDependencies: peerDeps = {},
} = mani
return deps && typeof deps[name] === 'string' ? deps[name]
: optDeps && typeof optDeps[name] === 'string' ? optDeps[name]
: peerDeps && typeof peerDeps[name] === 'string' ? peerDeps[name]
: null
}
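For illustration, a hypothetical manifest run through this helper (regular
dependencies win over optional and peer ones):
```js
const getDepSpec = require('./get-dep-spec.js')

const mani = {
  dependencies: { semver: '^7.3.5' },
  peerDependencies: { semver: '*' },
}
getDepSpec(mani, 'semver') // => '^7.3.5'
getDepSpec(mani, 'lodash') // => null (no dependency at all)
```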


@@ -0,0 +1,5 @@
const { createHash } = require('crypto')
module.exports = ({ name, source }) => createHash('sha512')
.update(JSON.stringify([name, source]))
.digest('base64')


@@ -0,0 +1,128 @@
// this is the public class that is used by consumers.
// the Advisory class handles all the calculation, and this
// class handles all the IO with the registry and cache.
const pacote = require('pacote')
const cacache = require('cacache')
const Advisory = require('./advisory.js')
const { homedir } = require('os')
const jsonParse = require('json-parse-even-better-errors')
const _packument = Symbol('packument')
const _cachePut = Symbol('cachePut')
const _cacheGet = Symbol('cacheGet')
const _cacheData = Symbol('cacheData')
const _packuments = Symbol('packuments')
const _cache = Symbol('cache')
const _options = Symbol('options')
const _advisories = Symbol('advisories')
const _calculate = Symbol('calculate')
class Calculator {
constructor (options = {}) {
this[_options] = { ...options }
this[_cache] = this[_options].cache || (homedir() + '/.npm/_cacache')
this[_options].cache = this[_cache]
this[_packuments] = new Map()
this[_cacheData] = new Map()
this[_advisories] = new Map()
}
get cache () {
return this[_cache]
}
get options () {
return { ...this[_options] }
}
async calculate (name, source) {
const k = `security-advisory:${name}:${source.id}`
if (this[_advisories].has(k)) {
return this[_advisories].get(k)
}
const p = this[_calculate](name, source)
this[_advisories].set(k, p)
return p
}
async [_calculate] (name, source) {
const k = `security-advisory:${name}:${source.id}`
const t = `metavuln:calculate:${k}`
process.emit('time', t)
const advisory = new Advisory(name, source, this[_options])
// load packument and cached advisory
const [cached, packument] = await Promise.all([
this[_cacheGet](advisory),
this[_packument](name),
])
process.emit('time', `metavuln:load:${k}`)
advisory.load(cached, packument)
process.emit('timeEnd', `metavuln:load:${k}`)
if (advisory.updated) {
await this[_cachePut](advisory)
}
this[_advisories].set(k, advisory)
process.emit('timeEnd', t)
return advisory
}
async [_cachePut] (advisory) {
const { name, id } = advisory
const key = `security-advisory:${name}:${id}`
process.emit('time', `metavuln:cache:put:${key}`)
const data = JSON.stringify(advisory)
const options = { ...this[_options] }
this[_cacheData].set(key, jsonParse(data))
await cacache.put(this[_cache], key, data, options).catch(() => {})
process.emit('timeEnd', `metavuln:cache:put:${key}`)
}
async [_cacheGet] (advisory) {
const { name, id } = advisory
const key = `security-advisory:${name}:${id}`
/* istanbul ignore if - should be impossible, since we memoize the
* advisory object itself using the same key, just being cautious */
if (this[_cacheData].has(key)) {
return this[_cacheData].get(key)
}
process.emit('time', `metavuln:cache:get:${key}`)
const p = cacache.get(this[_cache], key, { ...this[_options] })
.catch(() => ({ data: '{}' }))
.then(({ data }) => {
data = jsonParse(data)
process.emit('timeEnd', `metavuln:cache:get:${key}`)
this[_cacheData].set(key, data)
return data
})
this[_cacheData].set(key, p)
return p
}
async [_packument] (name) {
if (this[_packuments].has(name)) {
return this[_packuments].get(name)
}
process.emit('time', `metavuln:packument:${name}`)
const p = pacote.packument(name, { ...this[_options] })
.catch((er) => {
// presumably not something from the registry.
// an empty packument will have an effective range of *
return {
name,
versions: {},
}
})
.then(paku => {
process.emit('timeEnd', `metavuln:packument:${name}`)
this[_packuments].set(name, paku)
return paku
})
this[_packuments].set(name, p)
return p
}
}
module.exports = Calculator


@@ -0,0 +1 @@
../glob/dist/esm/bin.mjs


@@ -0,0 +1 @@
../semver/bin/semver.js


@@ -0,0 +1,20 @@
<!-- This file is automatically added by @npmcli/template-oss. Do not edit. -->
ISC License
Copyright npm, Inc.
Permission to use, copy, modify, and/or distribute this
software for any purpose with or without fee is hereby
granted, provided that the above copyright notice and this
permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND NPM DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO
EVENT SHALL NPM BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.


@@ -0,0 +1,97 @@
# @npmcli/fs
Polyfills and extensions of the core `fs` module.
## Features
- `fs.cp` polyfill for node < 16.7.0
- `fs.withTempDir` added
- `fs.readdirScoped` added
- `fs.moveFile` added
## `fs.withTempDir(root, fn, options) -> Promise`
### Parameters
- `root`: the directory in which to create the temporary directory
- `fn`: a function that will be called with the path to the temporary directory
- `options`
- `tmpPrefix`: a prefix to be used in the generated directory name
### Usage
The `withTempDir` function creates a temporary directory, runs the provided
function (`fn`), then removes the temporary directory and resolves or rejects
based on the result of `fn`.
```js
const fs = require('@npmcli/fs')
const os = require('os')
// this function will be called with the full path to the temporary directory
// it is called with `await` behind the scenes, so can be async if desired.
const myFunction = async (tempPath) => {
return 'done!'
}
const main = async () => {
const result = await fs.withTempDir(os.tmpdir(), myFunction)
// result === 'done!'
}
main()
```
## `fs.readdirScoped(root) -> Promise`
### Parameters
- `root`: the directory to read
### Usage
Like `fs.readdir` but handling `@org/module` dirs as if they were
a single entry.
```javascript
const { readdirScoped } = require('@npmcli/fs')
const entries = await readdirScoped('node_modules')
// entries will be something like: ['a', '@org/foo', '@org/bar']
```
## `fs.moveFile(source, dest, options) -> Promise`
A fork of [move-file](https://github.com/sindresorhus/move-file) with
support for CommonJS.
### Highlights
- Promise API.
- Supports moving a file across partitions and devices.
- Optionally prevent overwriting an existing file.
- Creates non-existent destination directories for you.
- Automatically recurses when source is a directory.
### Parameters
- `source`: File, or directory, you want to move.
- `dest`: Where you want the file or directory moved.
- `options`
- `overwrite` (`boolean`, default: `true`): Overwrite existing destination file(s).
### Usage
The built-in
[`fs.rename()`](https://nodejs.org/api/fs.html#fs_fs_rename_oldpath_newpath_callback)
is just a JavaScript wrapper for the C `rename(2)` function, which doesn't
support moving files across partitions or devices. This module is what you
would have expected `fs.rename()` to be.
```js
const { moveFile } = require('@npmcli/fs');
(async () => {
await moveFile('source/unicorn.png', 'destination/unicorn.png');
console.log('The file has been moved');
})();
```


@@ -0,0 +1,20 @@
// given an input that may or may not be an object, return an object that has
// a copy of every defined property listed in 'copy'. if the input is not an
// object, assign it to the property named by 'wrap'
const getOptions = (input, { copy, wrap }) => {
const result = {}
if (input && typeof input === 'object') {
for (const prop of copy) {
if (input[prop] !== undefined) {
result[prop] = input[prop]
}
}
} else {
result[wrap] = input
}
return result
}
module.exports = getOptions
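For illustration, how the two input shapes normalize (hypothetical values;
`with-temp-dir.js` below uses the `copy` form):
```js
const getOptions = require('./get-options.js')

// object input: copy over only the listed properties
getOptions({ tmpPrefix: 'npm-', other: true }, { copy: ['tmpPrefix'] })
// => { tmpPrefix: 'npm-' }

// non-object input: wrap it under the named property
getOptions('npm-', { copy: [], wrap: 'tmpPrefix' })
// => { tmpPrefix: 'npm-' }
```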


@@ -0,0 +1,9 @@
const semver = require('semver')
const satisfies = (range) => {
return semver.satisfies(process.version, range, { includePrerelease: true })
}
module.exports = {
satisfies,
}


@@ -0,0 +1,15 @@
(The MIT License)
Copyright (c) 2011-2017 JP Richardson
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files
(the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -0,0 +1,129 @@
'use strict'
const { inspect } = require('util')
// adapted from node's internal/errors
// https://github.com/nodejs/node/blob/c8a04049/lib/internal/errors.js
// close copy of node's internal SystemError class.
class SystemError {
constructor (code, prefix, context) {
// XXX context.code is undefined in all constructors used in cp/polyfill
// that may be a bug copied from node, maybe the constructor should use
// `code` not `errno`? nodejs/node#41104
let message = `${prefix}: ${context.syscall} returned ` +
`${context.code} (${context.message})`
if (context.path !== undefined) {
message += ` ${context.path}`
}
if (context.dest !== undefined) {
message += ` => ${context.dest}`
}
this.code = code
Object.defineProperties(this, {
name: {
value: 'SystemError',
enumerable: false,
writable: true,
configurable: true,
},
message: {
value: message,
enumerable: false,
writable: true,
configurable: true,
},
info: {
value: context,
enumerable: true,
configurable: true,
writable: false,
},
errno: {
get () {
return context.errno
},
set (value) {
context.errno = value
},
enumerable: true,
configurable: true,
},
syscall: {
get () {
return context.syscall
},
set (value) {
context.syscall = value
},
enumerable: true,
configurable: true,
},
})
if (context.path !== undefined) {
Object.defineProperty(this, 'path', {
get () {
return context.path
},
set (value) {
context.path = value
},
enumerable: true,
configurable: true,
})
}
if (context.dest !== undefined) {
Object.defineProperty(this, 'dest', {
get () {
return context.dest
},
set (value) {
context.dest = value
},
enumerable: true,
configurable: true,
})
}
}
toString () {
return `${this.name} [${this.code}]: ${this.message}`
}
[Symbol.for('nodejs.util.inspect.custom')] (_recurseTimes, ctx) {
return inspect(this, {
...ctx,
getters: true,
customInspect: false,
})
}
}
function E (code, message) {
module.exports[code] = class NodeError extends SystemError {
constructor (ctx) {
super(code, message, ctx)
}
}
}
E('ERR_FS_CP_DIR_TO_NON_DIR', 'Cannot overwrite directory with non-directory')
E('ERR_FS_CP_EEXIST', 'Target already exists')
E('ERR_FS_CP_EINVAL', 'Invalid src or dest')
E('ERR_FS_CP_FIFO_PIPE', 'Cannot copy a FIFO pipe')
E('ERR_FS_CP_NON_DIR_TO_DIR', 'Cannot overwrite non-directory with directory')
E('ERR_FS_CP_SOCKET', 'Cannot copy a socket file')
E('ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY', 'Cannot overwrite symlink in subdirectory of self')
E('ERR_FS_CP_UNKNOWN', 'Cannot copy an unknown file type')
E('ERR_FS_EISDIR', 'Path is a directory')
module.exports.ERR_INVALID_ARG_TYPE = class ERR_INVALID_ARG_TYPE extends Error {
constructor (name, expected, actual) {
super()
this.code = 'ERR_INVALID_ARG_TYPE'
this.message = `The ${name} argument must be ${expected}. Received ${typeof actual}`
}
}


@@ -0,0 +1,22 @@
const fs = require('fs/promises')
const getOptions = require('../common/get-options.js')
const node = require('../common/node.js')
const polyfill = require('./polyfill.js')
// node 16.7.0 added fs.cp
const useNative = node.satisfies('>=16.7.0')
const cp = async (src, dest, opts) => {
const options = getOptions(opts, {
copy: ['dereference', 'errorOnExist', 'filter', 'force', 'preserveTimestamps', 'recursive'],
})
// the polyfill is tested separately from this module, no need to hack
// process.version to try to trigger it just for coverage
// istanbul ignore next
return useNative
? fs.cp(src, dest, options)
: polyfill(src, dest, options)
}
module.exports = cp


@@ -0,0 +1,428 @@
// this file is a modified version of the code in node 17.2.0
// which is, in turn, a modified version of the fs-extra module on npm
// node core changes:
// - Use of the assert module has been replaced with core's error system.
// - All code related to the glob dependency has been removed.
// - Bring your own custom fs module is not currently supported.
// - Some basic code cleanup.
// changes here:
// - remove all callback related code
// - drop sync support
// - change assertions back to non-internal methods (see options.js)
// - throws ENOTDIR when rmdir gets an ENOENT for a path that exists in Windows
'use strict'
const {
ERR_FS_CP_DIR_TO_NON_DIR,
ERR_FS_CP_EEXIST,
ERR_FS_CP_EINVAL,
ERR_FS_CP_FIFO_PIPE,
ERR_FS_CP_NON_DIR_TO_DIR,
ERR_FS_CP_SOCKET,
ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY,
ERR_FS_CP_UNKNOWN,
ERR_FS_EISDIR,
ERR_INVALID_ARG_TYPE,
} = require('./errors.js')
const {
constants: {
errno: {
EEXIST,
EISDIR,
EINVAL,
ENOTDIR,
},
},
} = require('os')
const {
chmod,
copyFile,
lstat,
mkdir,
readdir,
readlink,
stat,
symlink,
unlink,
utimes,
} = require('fs/promises')
const {
dirname,
isAbsolute,
join,
parse,
resolve,
sep,
toNamespacedPath,
} = require('path')
const { fileURLToPath } = require('url')
const defaultOptions = {
dereference: false,
errorOnExist: false,
filter: undefined,
force: true,
preserveTimestamps: false,
recursive: false,
}
async function cp (src, dest, opts) {
if (opts != null && typeof opts !== 'object') {
throw new ERR_INVALID_ARG_TYPE('options', ['Object'], opts)
}
return cpFn(
toNamespacedPath(getValidatedPath(src)),
toNamespacedPath(getValidatedPath(dest)),
{ ...defaultOptions, ...opts })
}
function getValidatedPath (fileURLOrPath) {
const path = fileURLOrPath != null && fileURLOrPath.href
&& fileURLOrPath.origin
? fileURLToPath(fileURLOrPath)
: fileURLOrPath
return path
}
async function cpFn (src, dest, opts) {
// Warn about using preserveTimestamps on 32-bit node
// istanbul ignore next
if (opts.preserveTimestamps && process.arch === 'ia32') {
const warning = 'Using the preserveTimestamps option in 32-bit ' +
'node is not recommended'
process.emitWarning(warning, 'TimestampPrecisionWarning')
}
const stats = await checkPaths(src, dest, opts)
const { srcStat, destStat } = stats
await checkParentPaths(src, srcStat, dest)
if (opts.filter) {
return handleFilter(checkParentDir, destStat, src, dest, opts)
}
return checkParentDir(destStat, src, dest, opts)
}
async function checkPaths (src, dest, opts) {
const { 0: srcStat, 1: destStat } = await getStats(src, dest, opts)
if (destStat) {
if (areIdentical(srcStat, destStat)) {
throw new ERR_FS_CP_EINVAL({
message: 'src and dest cannot be the same',
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
if (srcStat.isDirectory() && !destStat.isDirectory()) {
throw new ERR_FS_CP_DIR_TO_NON_DIR({
message: `cannot overwrite directory ${src} ` +
`with non-directory ${dest}`,
path: dest,
syscall: 'cp',
errno: EISDIR,
})
}
if (!srcStat.isDirectory() && destStat.isDirectory()) {
throw new ERR_FS_CP_NON_DIR_TO_DIR({
message: `cannot overwrite non-directory ${src} ` +
`with directory ${dest}`,
path: dest,
syscall: 'cp',
errno: ENOTDIR,
})
}
}
if (srcStat.isDirectory() && isSrcSubdir(src, dest)) {
throw new ERR_FS_CP_EINVAL({
message: `cannot copy ${src} to a subdirectory of self ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
return { srcStat, destStat }
}
function areIdentical (srcStat, destStat) {
return destStat.ino && destStat.dev && destStat.ino === srcStat.ino &&
destStat.dev === srcStat.dev
}
function getStats (src, dest, opts) {
const statFunc = opts.dereference ?
(file) => stat(file, { bigint: true }) :
(file) => lstat(file, { bigint: true })
return Promise.all([
statFunc(src),
statFunc(dest).catch((err) => {
// istanbul ignore next: unsure how to cover.
if (err.code === 'ENOENT') {
return null
}
// istanbul ignore next: unsure how to cover.
throw err
}),
])
}
async function checkParentDir (destStat, src, dest, opts) {
const destParent = dirname(dest)
const dirExists = await pathExists(destParent)
if (dirExists) {
return getStatsForCopy(destStat, src, dest, opts)
}
await mkdir(destParent, { recursive: true })
return getStatsForCopy(destStat, src, dest, opts)
}
function pathExists (dest) {
return stat(dest).then(
() => true,
// istanbul ignore next: not sure when this would occur
(err) => (err.code === 'ENOENT' ? false : Promise.reject(err)))
}
// Recursively check if dest parent is a subdirectory of src.
// It works for all file types including symlinks since it
// checks the src and dest inodes. It starts from the deepest
// parent and stops once it reaches the src parent or the root path.
async function checkParentPaths (src, srcStat, dest) {
const srcParent = resolve(dirname(src))
const destParent = resolve(dirname(dest))
if (destParent === srcParent || destParent === parse(destParent).root) {
return
}
let destStat
try {
destStat = await stat(destParent, { bigint: true })
} catch (err) {
// istanbul ignore else: not sure when this would occur
if (err.code === 'ENOENT') {
return
}
// istanbul ignore next: not sure when this would occur
throw err
}
if (areIdentical(srcStat, destStat)) {
throw new ERR_FS_CP_EINVAL({
message: `cannot copy ${src} to a subdirectory of self ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
return checkParentPaths(src, srcStat, destParent)
}
const normalizePathToArray = (path) =>
resolve(path).split(sep).filter(Boolean)
// Return true if dest is a subdir of src, otherwise false.
// It only checks the path strings.
function isSrcSubdir (src, dest) {
const srcArr = normalizePathToArray(src)
const destArr = normalizePathToArray(dest)
return srcArr.every((cur, i) => destArr[i] === cur)
}
async function handleFilter (onInclude, destStat, src, dest, opts, cb) {
const include = await opts.filter(src, dest)
if (include) {
return onInclude(destStat, src, dest, opts, cb)
}
}
function startCopy (destStat, src, dest, opts) {
if (opts.filter) {
return handleFilter(getStatsForCopy, destStat, src, dest, opts)
}
return getStatsForCopy(destStat, src, dest, opts)
}
async function getStatsForCopy (destStat, src, dest, opts) {
const statFn = opts.dereference ? stat : lstat
const srcStat = await statFn(src)
// istanbul ignore else: can't portably test FIFO
if (srcStat.isDirectory() && opts.recursive) {
return onDir(srcStat, destStat, src, dest, opts)
} else if (srcStat.isDirectory()) {
throw new ERR_FS_EISDIR({
message: `${src} is a directory (not copied)`,
path: src,
syscall: 'cp',
errno: EINVAL,
})
} else if (srcStat.isFile() ||
srcStat.isCharacterDevice() ||
srcStat.isBlockDevice()) {
return onFile(srcStat, destStat, src, dest, opts)
} else if (srcStat.isSymbolicLink()) {
return onLink(destStat, src, dest)
} else if (srcStat.isSocket()) {
throw new ERR_FS_CP_SOCKET({
message: `cannot copy a socket file: ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
} else if (srcStat.isFIFO()) {
throw new ERR_FS_CP_FIFO_PIPE({
message: `cannot copy a FIFO pipe: ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
// istanbul ignore next: should be unreachable
throw new ERR_FS_CP_UNKNOWN({
message: `cannot copy an unknown file type: ${dest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
function onFile (srcStat, destStat, src, dest, opts) {
if (!destStat) {
return _copyFile(srcStat, src, dest, opts)
}
return mayCopyFile(srcStat, src, dest, opts)
}
async function mayCopyFile (srcStat, src, dest, opts) {
if (opts.force) {
await unlink(dest)
return _copyFile(srcStat, src, dest, opts)
} else if (opts.errorOnExist) {
throw new ERR_FS_CP_EEXIST({
message: `${dest} already exists`,
path: dest,
syscall: 'cp',
errno: EEXIST,
})
}
}
async function _copyFile (srcStat, src, dest, opts) {
await copyFile(src, dest)
if (opts.preserveTimestamps) {
return handleTimestampsAndMode(srcStat.mode, src, dest)
}
return setDestMode(dest, srcStat.mode)
}
async function handleTimestampsAndMode (srcMode, src, dest) {
// Make sure the file is writable before setting the timestamp
// otherwise open fails with EPERM when invoked with 'r+'
// (through utimes call)
if (fileIsNotWritable(srcMode)) {
await makeFileWritable(dest, srcMode)
return setDestTimestampsAndMode(srcMode, src, dest)
}
return setDestTimestampsAndMode(srcMode, src, dest)
}
function fileIsNotWritable (srcMode) {
return (srcMode & 0o200) === 0
}
function makeFileWritable (dest, srcMode) {
return setDestMode(dest, srcMode | 0o200)
}
async function setDestTimestampsAndMode (srcMode, src, dest) {
await setDestTimestamps(src, dest)
return setDestMode(dest, srcMode)
}
function setDestMode (dest, srcMode) {
return chmod(dest, srcMode)
}
async function setDestTimestamps (src, dest) {
// The initial srcStat.atime cannot be trusted
// because it is modified by the read(2) system call
// (See https://nodejs.org/api/fs.html#fs_stat_time_values)
const updatedSrcStat = await stat(src)
return utimes(dest, updatedSrcStat.atime, updatedSrcStat.mtime)
}
function onDir (srcStat, destStat, src, dest, opts) {
if (!destStat) {
return mkDirAndCopy(srcStat.mode, src, dest, opts)
}
return copyDir(src, dest, opts)
}
async function mkDirAndCopy (srcMode, src, dest, opts) {
await mkdir(dest)
await copyDir(src, dest, opts)
return setDestMode(dest, srcMode)
}
async function copyDir (src, dest, opts) {
const dir = await readdir(src)
for (let i = 0; i < dir.length; i++) {
const item = dir[i]
const srcItem = join(src, item)
const destItem = join(dest, item)
const { destStat } = await checkPaths(srcItem, destItem, opts)
await startCopy(destStat, srcItem, destItem, opts)
}
}
async function onLink (destStat, src, dest) {
let resolvedSrc = await readlink(src)
if (!isAbsolute(resolvedSrc)) {
resolvedSrc = resolve(dirname(src), resolvedSrc)
}
if (!destStat) {
return symlink(resolvedSrc, dest)
}
let resolvedDest
try {
resolvedDest = await readlink(dest)
} catch (err) {
// Dest exists and is a regular file or directory,
// Windows may throw UNKNOWN error. If dest already exists,
// fs throws error anyway, so no need to guard against it here.
// istanbul ignore next: can only test on windows
if (err.code === 'EINVAL' || err.code === 'UNKNOWN') {
return symlink(resolvedSrc, dest)
}
// istanbul ignore next: should not be possible
throw err
}
if (!isAbsolute(resolvedDest)) {
resolvedDest = resolve(dirname(dest), resolvedDest)
}
if (isSrcSubdir(resolvedSrc, resolvedDest)) {
throw new ERR_FS_CP_EINVAL({
message: `cannot copy ${resolvedSrc} to a subdirectory of self ` +
`${resolvedDest}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
// Do not copy if src is a subdir of dest since unlinking
// dest in this case would result in removing src contents
// and therefore a broken symlink would be created.
const srcStat = await stat(src)
if (srcStat.isDirectory() && isSrcSubdir(resolvedDest, resolvedSrc)) {
throw new ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY({
message: `cannot overwrite ${resolvedDest} with ${resolvedSrc}`,
path: dest,
syscall: 'cp',
errno: EINVAL,
})
}
return copyLink(resolvedSrc, dest)
}
async function copyLink (resolvedSrc, dest) {
await unlink(dest)
return symlink(resolvedSrc, dest)
}
module.exports = cp


@@ -0,0 +1,13 @@
'use strict'
const cp = require('./cp/index.js')
const withTempDir = require('./with-temp-dir.js')
const readdirScoped = require('./readdir-scoped.js')
const moveFile = require('./move-file.js')
module.exports = {
cp,
withTempDir,
readdirScoped,
moveFile,
}


@@ -0,0 +1,78 @@
const { dirname, join, resolve, relative, isAbsolute } = require('path')
const fs = require('fs/promises')
const pathExists = async path => {
try {
await fs.access(path)
return true
} catch (er) {
return er.code !== 'ENOENT'
}
}
const moveFile = async (source, destination, options = {}, root = true, symlinks = []) => {
if (!source || !destination) {
throw new TypeError('`source` and `destination` file required')
}
options = {
overwrite: true,
...options,
}
if (!options.overwrite && await pathExists(destination)) {
throw new Error(`The destination file exists: ${destination}`)
}
await fs.mkdir(dirname(destination), { recursive: true })
try {
await fs.rename(source, destination)
} catch (error) {
if (error.code === 'EXDEV' || error.code === 'EPERM') {
const sourceStat = await fs.lstat(source)
if (sourceStat.isDirectory()) {
const files = await fs.readdir(source)
await Promise.all(files.map((file) =>
moveFile(join(source, file), join(destination, file), options, false, symlinks)
))
} else if (sourceStat.isSymbolicLink()) {
symlinks.push({ source, destination })
} else {
await fs.copyFile(source, destination)
}
} else {
throw error
}
}
if (root) {
await Promise.all(symlinks.map(async ({ source: symSource, destination: symDestination }) => {
let target = await fs.readlink(symSource)
// junction symlinks in windows will be absolute paths, so we need to
// make sure they point to the symlink destination
if (isAbsolute(target)) {
target = resolve(symDestination, relative(symSource, target))
}
// try to determine what the actual file is so we can create the correct
// type of symlink in windows
let targetStat = 'file'
try {
targetStat = await fs.stat(resolve(dirname(symSource), target))
if (targetStat.isDirectory()) {
targetStat = 'junction'
}
} catch {
// targetStat remains 'file'
}
await fs.symlink(
target,
symDestination,
targetStat
)
}))
await fs.rm(source, { recursive: true, force: true })
}
}
module.exports = moveFile


@@ -0,0 +1,20 @@
const { readdir } = require('fs/promises')
const { join } = require('path')
const readdirScoped = async (dir) => {
const results = []
for (const item of await readdir(dir)) {
if (item.startsWith('@')) {
for (const scopedItem of await readdir(join(dir, item))) {
results.push(join(item, scopedItem))
}
} else {
results.push(item)
}
}
return results
}
module.exports = readdirScoped


@@ -0,0 +1,39 @@
const { join, sep } = require('path')
const getOptions = require('./common/get-options.js')
const { mkdir, mkdtemp, rm } = require('fs/promises')
// create a temp directory, ensure its permissions match its parent, then call
// the supplied function passing it the path to the directory. clean up after
// the function finishes, whether it throws or not
const withTempDir = async (root, fn, opts) => {
const options = getOptions(opts, {
copy: ['tmpPrefix'],
})
// create the directory
await mkdir(root, { recursive: true })
const target = await mkdtemp(join(`${root}${sep}`, options.tmpPrefix || ''))
let err
let result
try {
result = await fn(target)
} catch (_err) {
err = _err
}
try {
await rm(target, { force: true, recursive: true })
} catch {
// ignore errors
}
if (err) {
throw err
}
return result
}
module.exports = withTempDir


@@ -0,0 +1,52 @@
{
"name": "@npmcli/fs",
"version": "3.1.1",
"description": "filesystem utilities for the npm cli",
"main": "lib/index.js",
"files": [
"bin/",
"lib/"
],
"scripts": {
"snap": "tap",
"test": "tap",
"npmclilint": "npmcli-lint",
"lint": "eslint \"**/*.{js,cjs,ts,mjs,jsx,tsx}\"",
"lintfix": "npm run lint -- --fix",
"posttest": "npm run lint",
"postsnap": "npm run lintfix --",
"postlint": "template-oss-check",
"template-oss-apply": "template-oss-apply --force"
},
"repository": {
"type": "git",
"url": "git+https://github.com/npm/fs.git"
},
"keywords": [
"npm",
"oss"
],
"author": "GitHub Inc.",
"license": "ISC",
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.22.0",
"tap": "^16.0.1"
},
"dependencies": {
"semver": "^7.3.5"
},
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.22.0"
},
"tap": {
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
}
}


@@ -0,0 +1,2 @@
tidelift: "npm/brace-expansion"
patreon: juliangruber


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2013 Julian Gruber <julian@juliangruber.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,135 @@
# brace-expansion
[Brace expansion](https://www.gnu.org/software/bash/manual/html_node/Brace-Expansion.html),
as known from sh/bash, in JavaScript.
[![build status](https://secure.travis-ci.org/juliangruber/brace-expansion.svg)](http://travis-ci.org/juliangruber/brace-expansion)
[![downloads](https://img.shields.io/npm/dm/brace-expansion.svg)](https://www.npmjs.org/package/brace-expansion)
[![Greenkeeper badge](https://badges.greenkeeper.io/juliangruber/brace-expansion.svg)](https://greenkeeper.io/)
[![testling badge](https://ci.testling.com/juliangruber/brace-expansion.png)](https://ci.testling.com/juliangruber/brace-expansion)
## Example
```js
var expand = require('brace-expansion');
expand('file-{a,b,c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
expand('-v{,,}')
// => ['-v', '-v', '-v']
expand('file{0..2}.jpg')
// => ['file0.jpg', 'file1.jpg', 'file2.jpg']
expand('file-{a..c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
expand('file{2..0}.jpg')
// => ['file2.jpg', 'file1.jpg', 'file0.jpg']
expand('file{0..4..2}.jpg')
// => ['file0.jpg', 'file2.jpg', 'file4.jpg']
expand('file-{a..e..2}.jpg')
// => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg']
expand('file{00..10..5}.jpg')
// => ['file00.jpg', 'file05.jpg', 'file10.jpg']
expand('{{A..C},{a..c}}')
// => ['A', 'B', 'C', 'a', 'b', 'c']
expand('ppp{,config,oe{,conf}}')
// => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf']
```
## API
```js
var expand = require('brace-expansion');
```
### var expanded = expand(str)
Return an array of all possible and valid expansions of `str`. If none are
found, `[str]` is returned.
Valid expansions are:
```js
/^(.*,)+(.+)?$/
// {a,b,...}
```
A comma separated list of options, like `{a,b}` or `{a,{b,c}}` or `{,a,}`.
```js
/^-?\d+\.\.-?\d+(\.\.-?\d+)?$/
// {x..y[..incr]}
```
A numeric sequence from `x` to `y` inclusive, with optional increment.
If `x` or `y` start with a leading `0`, all the numbers will be padded
to have equal length. Negative numbers and backwards iteration work too.
```js
/^[a-zA-Z]\.\.[a-zA-Z](\.\.-?\d+)?$/
// {x..y[..incr]}
```
An alphabetic sequence from `x` to `y` inclusive, with optional increment.
`x` and `y` must be exactly one character, and if given, `incr` must be a
number.
For compatibility reasons, the string `${` is not eligible for brace expansion.
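For example:
```js
expand('a${1..3}b')
// => ['a${1..3}b'] (left unexpanded)
```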
## Installation
With [npm](https://npmjs.org) do:
```bash
npm install brace-expansion
```
## Contributors
- [Julian Gruber](https://github.com/juliangruber)
- [Isaac Z. Schlueter](https://github.com/isaacs)
## Sponsors
This module is proudly supported by my [Sponsors](https://github.com/juliangruber/sponsors)!
Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my [Patreon](https://www.patreon.com/juliangruber). Not sure how much of my modules you're using? Try [feross/thanks](https://github.com/feross/thanks)!
## Security contact information
To report a security vulnerability, please use the
[Tidelift security contact](https://tidelift.com/security).
Tidelift will coordinate the fix and disclosure.
## License
(MIT)
Copyright (c) 2013 Julian Gruber &lt;julian@juliangruber.com&gt;
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,203 @@
var balanced = require('balanced-match');
module.exports = expandTop;
var escSlash = '\0SLASH'+Math.random()+'\0';
var escOpen = '\0OPEN'+Math.random()+'\0';
var escClose = '\0CLOSE'+Math.random()+'\0';
var escComma = '\0COMMA'+Math.random()+'\0';
var escPeriod = '\0PERIOD'+Math.random()+'\0';
function numeric(str) {
return parseInt(str, 10) == str
? parseInt(str, 10)
: str.charCodeAt(0);
}
function escapeBraces(str) {
return str.split('\\\\').join(escSlash)
.split('\\{').join(escOpen)
.split('\\}').join(escClose)
.split('\\,').join(escComma)
.split('\\.').join(escPeriod);
}
function unescapeBraces(str) {
return str.split(escSlash).join('\\')
.split(escOpen).join('{')
.split(escClose).join('}')
.split(escComma).join(',')
.split(escPeriod).join('.');
}
// Basically just str.split(","), but handling cases
// where we have nested braced sections, which should be
// treated as individual members, like {a,{b,c},d}
function parseCommaParts(str) {
if (!str)
return [''];
var parts = [];
var m = balanced('{', '}', str);
if (!m)
return str.split(',');
var pre = m.pre;
var body = m.body;
var post = m.post;
var p = pre.split(',');
p[p.length-1] += '{' + body + '}';
var postParts = parseCommaParts(post);
if (post.length) {
p[p.length-1] += postParts.shift();
p.push.apply(p, postParts);
}
parts.push.apply(parts, p);
return parts;
}
function expandTop(str) {
if (!str)
return [];
// I don't know why Bash 4.3 does this, but it does.
// Anything starting with {} will have the first two bytes preserved
// but *only* at the top level, so {},a}b will not expand to anything,
// but a{},b}c will be expanded to [a}c,abc].
// One could argue that this is a bug in Bash, but since the goal of
// this module is to match Bash's rules, we escape a leading {}
if (str.substr(0, 2) === '{}') {
str = '\\{\\}' + str.substr(2);
}
return expand(escapeBraces(str), true).map(unescapeBraces);
}
function embrace(str) {
return '{' + str + '}';
}
function isPadded(el) {
return /^-?0\d/.test(el);
}
function lte(i, y) {
return i <= y;
}
function gte(i, y) {
return i >= y;
}
function expand(str, isTop) {
var expansions = [];
var m = balanced('{', '}', str);
if (!m) return [str];
// no need to expand pre, since it is guaranteed to be free of brace-sets
var pre = m.pre;
var post = m.post.length
? expand(m.post, false)
: [''];
if (/\$$/.test(m.pre)) {
for (var k = 0; k < post.length; k++) {
var expansion = pre + '{' + m.body + '}' + post[k];
expansions.push(expansion);
}
} else {
var isNumericSequence = /^-?\d+\.\.-?\d+(?:\.\.-?\d+)?$/.test(m.body);
var isAlphaSequence = /^[a-zA-Z]\.\.[a-zA-Z](?:\.\.-?\d+)?$/.test(m.body);
var isSequence = isNumericSequence || isAlphaSequence;
var isOptions = m.body.indexOf(',') >= 0;
if (!isSequence && !isOptions) {
// {a},b}
if (m.post.match(/,.*\}/)) {
str = m.pre + '{' + m.body + escClose + m.post;
return expand(str);
}
return [str];
}
var n;
if (isSequence) {
n = m.body.split(/\.\./);
} else {
n = parseCommaParts(m.body);
if (n.length === 1) {
// x{{a,b}}y ==> x{a}y x{b}y
n = expand(n[0], false).map(embrace);
if (n.length === 1) {
return post.map(function(p) {
return m.pre + n[0] + p;
});
}
}
}
// at this point, n is the parts, and we know it's not a comma set
// with a single entry.
var N;
if (isSequence) {
var x = numeric(n[0]);
var y = numeric(n[1]);
var width = Math.max(n[0].length, n[1].length);
var incr = n.length == 3
? Math.abs(numeric(n[2]))
: 1;
var test = lte;
var reverse = y < x;
if (reverse) {
incr *= -1;
test = gte;
}
var pad = n.some(isPadded);
N = [];
for (var i = x; test(i, y); i += incr) {
var c;
if (isAlphaSequence) {
c = String.fromCharCode(i);
if (c === '\\')
c = '';
} else {
c = String(i);
if (pad) {
var need = width - c.length;
if (need > 0) {
var z = new Array(need + 1).join('0');
if (i < 0)
c = '-' + z + c.slice(1);
else
c = z + c;
}
}
}
N.push(c);
}
} else {
N = [];
for (var j = 0; j < n.length; j++) {
N.push.apply(N, expand(n[j], false));
}
}
for (var j = 0; j < N.length; j++) {
for (var k = 0; k < post.length; k++) {
var expansion = pre + N[j] + post[k];
if (!isTop || isSequence || expansion)
expansions.push(expansion);
}
}
}
return expansions;
}

View File

@@ -0,0 +1,46 @@
{
"name": "brace-expansion",
"description": "Brace expansion as known from sh/bash",
"version": "2.0.1",
"repository": {
"type": "git",
"url": "git://github.com/juliangruber/brace-expansion.git"
},
"homepage": "https://github.com/juliangruber/brace-expansion",
"main": "index.js",
"scripts": {
"test": "tape test/*.js",
"gentest": "bash test/generate.sh",
"bench": "matcha test/perf/bench.js"
},
"dependencies": {
"balanced-match": "^1.0.0"
},
"devDependencies": {
"@c4312/matcha": "^1.3.1",
"tape": "^4.6.0"
},
"keywords": [],
"author": {
"name": "Julian Gruber",
"email": "mail@juliangruber.com",
"url": "http://juliangruber.com"
},
"license": "MIT",
"testling": {
"files": "test/*.js",
"browsers": [
"ie/8..latest",
"firefox/20..latest",
"firefox/nightly",
"chrome/25..latest",
"chrome/canary",
"opera/12..latest",
"opera/next",
"safari/5.1..latest",
"ipad/6.0..latest",
"iphone/6.0..latest",
"android-browser/4.2..latest"
]
}
}

View File

@@ -0,0 +1,16 @@
ISC License
Copyright (c) npm, Inc.
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE COPYRIGHT HOLDER DISCLAIMS
ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -0,0 +1,716 @@
# cacache [![npm version](https://img.shields.io/npm/v/cacache.svg)](https://npm.im/cacache) [![license](https://img.shields.io/npm/l/cacache.svg)](https://npm.im/cacache) [![Travis](https://img.shields.io/travis/npm/cacache.svg)](https://travis-ci.org/npm/cacache) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/cacache?svg=true)](https://ci.appveyor.com/project/npm/cacache) [![Coverage Status](https://coveralls.io/repos/github/npm/cacache/badge.svg?branch=latest)](https://coveralls.io/github/npm/cacache?branch=latest)
[`cacache`](https://github.com/npm/cacache) is a Node.js library for managing
local key and content address caches. It's really fast, really good at
concurrency, and it will never give you corrupted data, even if cache files
get corrupted or manipulated.
On systems that support user and group settings on files, cacache will
match the `uid` and `gid` values to the folder where the cache lives, even
when running as `root`.
It was written to be used as [npm](https://npm.im)'s local cache, but can
just as easily be used on its own.
## Install
`$ npm install --save cacache`
## Table of Contents
* [Example](#example)
* [Features](#features)
* [Contributing](#contributing)
* [API](#api)
* Reading
* [`ls`](#ls)
* [`ls.stream`](#ls-stream)
* [`get`](#get-data)
* [`get.stream`](#get-stream)
* [`get.info`](#get-info)
* [`get.hasContent`](#get-hasContent)
* Writing
* [`put`](#put-data)
* [`put.stream`](#put-stream)
* [`rm.all`](#rm-all)
* [`rm.entry`](#rm-entry)
* [`rm.content`](#rm-content)
* [`index.compact`](#index-compact)
* [`index.insert`](#index-insert)
* Utilities
* [`clearMemoized`](#clear-memoized)
* [`tmp.mkdir`](#tmp-mkdir)
* [`tmp.withTmp`](#with-tmp)
* Integrity
* [Subresource Integrity](#integrity)
* [`verify`](#verify)
* [`verify.lastRun`](#verify-last-run)
### Example
```javascript
const cacache = require('cacache')
const fs = require('fs')
const tarball = '/path/to/mytar.tgz'
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'
// Cache it! Use `cachePath` as the root of the content cache
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
console.log(`Saved content to ${cachePath}.`)
})
const destination = '/tmp/mytar.tgz'
// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
cachePath, key
).pipe(
fs.createWriteStream(destination)
).on('finish', () => {
console.log('done extracting!')
})
// The same thing, but skip the key index.
cacache.get.byDigest(cachePath, integrityHash).then(data => {
fs.writeFile(destination, data, err => {
console.log('tarball data fetched based on its sha512sum and written out!')
})
})
```
### Features
* Extraction by key or by content address (shasum, etc)
* [Subresource Integrity](#integrity) web standard support
* Multi-hash support - safely host sha1, sha512, etc, in a single cache
* Automatic content deduplication
* Fault tolerance (immune to corruption, partial writes, process races, etc)
* Consistency guarantees on read and write (full data verification)
* Lockless, high-concurrency cache access
* Streaming support
* Promise support
* Fast -- sub-millisecond reads and writes including verification
* Arbitrary metadata storage
* Garbage collection and additional offline verification
* Thorough test coverage
* There's probably a bloom filter in there somewhere. Those are cool, right? 🤔
### Contributing
The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.
All participants and maintainers in this project are expected to follow the [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other.
Please refer to the [Changelog](CHANGELOG.md) for project history details, too.
Happy hacking!
### API
#### <a name="ls"></a> `> cacache.ls(cache) -> Promise<Object>`
Lists info for all entries currently in the cache as a single large object. Each
entry in the object will be keyed by the unique index key, with corresponding
[`get.info`](#get-info) objects as the values.
##### Example
```javascript
cacache.ls(cachePath).then(console.log)
// Output
{
'my-thing': {
key: 'my-thing',
integrity: 'sha512-BaSe64/EnCoDED+HAsh==',
path: '.testcache/content/deadbeef', // joined with `cachePath`
time: 12345698490,
size: 4023948,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
},
'other-thing': {
key: 'other-thing',
integrity: 'sha1-ANothER+hasH=',
path: '.testcache/content/bada55',
time: 11992309289,
size: 111112
}
}
```
#### <a name="ls-stream"></a> `> cacache.ls.stream(cache) -> Readable`
Lists info for all entries currently in the cache as a stream of entry objects.
This works just like [`ls`](#ls), except [`get.info`](#get-info) entries are
returned as `'data'` events on the returned stream.
##### Example
```javascript
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
key: 'my-thing',
integrity: 'sha512-BaSe64HaSh',
path: '.testcache/content/deadbeef', // joined with `cachePath`
time: 12345698490,
size: 13423,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
}
{
key: 'other-thing',
integrity: 'whirlpool-WoWSoMuchSupport',
path: '.testcache/content/bada55',
time: 11992309289,
size: 498023984029
}
{
...
}
```
#### <a name="get-data"></a> `> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})`
Returns an object with the cached data, digest, and metadata identified by
`key`. The `data` property of this object will be a `Buffer` instance that
presumably holds some data that means something to you. I'm sure you know what
to do with it! cacache just won't care.
`integrity` is a [Subresource
Integrity](#integrity)
string. That is, a string that can be used to verify `data`, which looks like
`<hash-algorithm>-<base64-integrity-hash>`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, the promise will be rejected.
A sub-function, `get.byDigest`, may be used for identical behavior, except
lookup will happen by integrity hash, bypassing the index entirely. This
version of the function *only* returns `data` itself, without any wrapper.
See: [options](#get-options)
##### Note
This function loads the entire cache entry into memory before returning it. If
you're dealing with Very Large data, consider using [`get.stream`](#get-stream)
instead.
##### Example
```javascript
// Look up by key
cache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
metadata: {
thingName: 'my'
},
integrity: 'sha512-BaSe64HaSh',
data: Buffer#<deadbeef>,
size: 9320
}
// Look up by digest
cache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>
```
#### <a name="get-stream"></a> `> cacache.get.stream(cache, key, [opts]) -> Readable`
Returns a [Readable Stream](https://nodejs.org/api/stream.html#stream_readable_streams) of the cached data identified by `key`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, an error will be emitted.
`metadata` and `integrity` events will be emitted before the stream closes, if
you need to collect that extra data about the cached entry.
A sub-function, `get.stream.byDigest`, may be used for identical behavior,
except lookup will happen by integrity hash, bypassing the index entirely. This
version does not emit the `metadata` and `integrity` events at all.
See: [options](#get-options)
##### Example
```javascript
// Look up by key
cache.get.stream(
cachePath, 'my-thing'
).on('metadata', metadata => {
console.log('metadata:', metadata)
}).on('integrity', integrity => {
console.log('integrity:', integrity)
}).pipe(
fs.createWriteStream('./x.tgz')
)
// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='
// Look up by digest
cache.get.stream.byDigest(
cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
fs.createWriteStream('./x.tgz')
)
```
#### <a name="get-info"></a> `> cacache.get.info(cache, key) -> Promise`
Looks up `key` in the cache index, returning information about the entry if
one exists.
##### Fields
* `key` - Key the entry was looked up under. Matches the `key` argument.
* `integrity` - [Subresource Integrity hash](#integrity) for the content this entry refers to.
* `path` - Filesystem path where content is stored, joined with `cache` argument.
* `time` - Timestamp the entry was first added on.
* `metadata` - User-assigned metadata associated with the entry/content.
##### Example
```javascript
cacache.get.info(cachePath, 'my-thing').then(console.log)
// Output
{
key: 'my-thing',
integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
path: '.testcache/content/deadbeef',
time: 12345698490,
size: 849234,
metadata: {
name: 'blah',
version: '1.2.3',
description: 'this was once a package but now it is my-thing'
}
}
```
#### <a name="get-hasContent"></a> `> cacache.get.hasContent(cache, integrity) -> Promise`
Looks up a [Subresource Integrity hash](#integrity) in the cache. If content
exists for this `integrity`, it will return an object with the matched single
integrity hash under the `sri` key and the size of the found content as `size`.
If no content exists for this integrity, it will return `false`.
##### Example
```javascript
cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)
// Output
{
sri: {
source: 'sha256-MUSTVERIFY+ALL/THINGS==',
algorithm: 'sha256',
digest: 'MUSTVERIFY+ALL/THINGS==',
options: []
},
size: 9001
}
cacache.get.hasContent(cachePath, 'sha521-NOT+IN/CACHE==').then(console.log)
// Output
false
```
##### <a name="get-options"></a> Options
##### `opts.integrity`
If present, the pre-calculated digest for the content. If this option is
provided and the content read does not match it, the read will fail with an
`EINTEGRITY` error.
##### `opts.memoize`
Default: null
If explicitly truthy, cacache will read from memory and memoize data on bulk read. If `false`, cacache will read from disk. By default, reader functions read from the in-memory cache when data has been memoized.
##### `opts.size`
If provided, the size of the cached content will be verified against this
value. If there's more or less data than expected, the read will fail with an
`EBADSIZE` error.
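A small sketch combining these options, reusing `cachePath` and the key from the examples above:
```javascript
// Skip the in-memory cache and verify the expected byte count on read
cacache.get(cachePath, 'my-thing', { memoize: false, size: 9320 }).then(res => {
  console.log(`read ${res.data.length} bytes from disk`)
}).catch(err => {
  // rejects with code 'EBADSIZE' if the cached data is not exactly 9320 bytes
  console.error(err.code)
})
```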
#### <a name="put-data"></a> `> cacache.put(cache, key, data, [opts]) -> Promise`
Inserts data passed to it into the cache. The returned Promise resolves with a
digest (generated according to [`opts.algorithms`](#optsalgorithms)) after the
cache entry has been successfully written.
See: [options](#put-options)
##### Example
```javascript
fetch(
'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(integrity => {
console.log('integrity hash is', integrity)
})
```
#### <a name="put-stream"></a> `> cacache.put.stream(cache, key, [opts]) -> Writable`
Returns a [Writable
Stream](https://nodejs.org/api/stream.html#stream_writable_streams) that inserts
data written to it into the cache. Emits an `integrity` event with the digest of
written contents when it succeeds.
See: [options](#put-options)
##### Example
```javascript
request.get(
'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
cacache.put.stream(
cachePath, 'registry.npmjs.org|cacache@1.0.0'
).on('integrity', d => console.log(`integrity digest is ${d}`))
)
```
##### <a name="put-options"></a> Options
##### `opts.metadata`
Arbitrary metadata to be attached to the inserted key.
##### `opts.size`
If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, insertion will fail
with an `EBADSIZE` error.
##### `opts.integrity`
If present, the pre-calculated digest for the inserted content. If this option
is provided and does not match the post-insertion digest, insertion will fail
with an `EINTEGRITY` error.
`algorithms` has no effect if this option is present.
##### `opts.integrityEmitter`
*Streaming only* If present, uses the provided event emitter as a source of
truth for both integrity and size. This allows use cases where integrity is
already being calculated outside of cacache to reuse that data instead of
calculating it a second time.
The emitter must emit both the `'integrity'` and `'size'` events.
NOTE: If this option is provided, you must verify that you receive the correct
integrity value yourself and emit an `'error'` event if there is a mismatch.
[ssri Integrity Streams](https://github.com/npm/ssri#integrity-stream) do this for you when given an expected integrity.
##### `opts.algorithms`
Default: ['sha512']
Hashing algorithms to use when calculating the [subresource integrity
digest](#integrity)
for inserted data. Can use any algorithm listed in `crypto.getHashes()` or
`'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You
may also use any anagram of `'modnar'` to use this feature.
Currently only supports one algorithm at a time (i.e., an array length of
exactly `1`). Has no effect if `opts.integrity` is present.
##### `opts.memoize`
Default: null
If provided, cacache will memoize the given cache insertion in memory, bypassing
any filesystem checks for that key or digest in future cache fetches. Nothing
will be written to the in-memory cache unless this option is explicitly truthy.
If `opts.memoize` is an object or a `Map`-like (that is, an object with `get`
and `set` methods), it will be written to instead of the global memoization
cache.
Reading from disk can be forced by explicitly passing `memoize: false` to
the reader functions, but their default will be to read from memory.
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
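As a sketch, a write combining several of these options might look like this, where `data` is the content to store and the URL metadata is just an illustrative assumption:
```javascript
cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data, {
  algorithms: ['sha512'], // algorithm used for the returned integrity string
  metadata: { url: 'https://example.com/cacache-1.0.0.tgz' }, // arbitrary, returned by get/get.info
  memoize: true, // also keep the entry in the in-memory cache
}).then(integrity => {
  console.log('stored with integrity', integrity)
})
```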
#### <a name="rm-all"></a> `> cacache.rm.all(cache) -> Promise`
Clears the entire cache. Mainly by blowing away the cache directory itself.
##### Example
```javascript
cacache.rm.all(cachePath).then(() => {
console.log('THE APOCALYPSE IS UPON US 😱')
})
```
#### <a name="rm-entry"></a> `> cacache.rm.entry(cache, key, [opts]) -> Promise`
Alias: `cacache.rm`
Removes the index entry for `key`. Content will still be accessible if
requested directly by content address ([`get.stream.byDigest`](#get-stream)).
By default, this appends a new entry to the index with an integrity of `null`.
If `opts.removeFully` is set to `true` then the index file itself will be
physically deleted rather than appending a `null`.
To remove the content itself (which might still be used by other entries), use
[`rm.content`](#rm-content). Or, to safely vacuum any unused content, use
[`verify`](#verify).
##### Example
```javascript
cacache.rm.entry(cachePath, 'my-thing').then(() => {
console.log('I did not like it anyway')
})
```
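To delete the index file itself instead of appending a `null` tombstone, pass `removeFully` as described above:
```javascript
cacache.rm.entry(cachePath, 'my-thing', { removeFully: true }).then(() => {
  console.log('index entries for my-thing were physically removed')
})
```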
#### <a name="rm-content"></a> `> cacache.rm.content(cache, integrity) -> Promise`
Removes the content identified by `integrity`. Any index entries referring to it
will not be usable again until the content is re-added to the cache with an
identical digest.
##### Example
```javascript
cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
console.log('data for my-thing is gone!')
})
```
#### <a name="index-compact"></a> `> cacache.index.compact(cache, key, matchFn, [opts]) -> Promise`
Uses `matchFn`, which must be a synchronous function that accepts two entries
and returns a boolean indicating whether or not the two entries match, to
deduplicate all entries in the cache for the given `key`.
If `opts.validateEntry` is provided, it will be called with a single index
entry as its only parameter. The function must return a Boolean: if it returns
`true`, the entry is considered valid and will be kept in the index; if it
returns `false`, the entry will be removed from the index.
If `opts.validateEntry` is not provided, however, every entry in the index will
be deduplicated and kept until the first `null` integrity is reached, removing
all entries that were written before the `null`.
The deduplicated list of entries is both written to the index, replacing the
existing content, and returned in the Promise.
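A minimal sketch, assuming two entries should count as duplicates when they share an integrity value:
```javascript
cacache.index.compact(cachePath, 'my-thing', (a, b) => {
  return a.integrity === b.integrity
}).then(entries => {
  console.log(`index now contains ${entries.length} deduplicated entries`)
})
```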
#### <a name="index-insert"></a> `> cacache.index.insert(cache, key, integrity, opts) -> Promise`
Writes an index entry to the cache for the given `key` without writing content.
It is assumed if you are using this method, you have already stored the content
some other way and you only wish to add a new index to that content. The `metadata`
and `size` properties are read from `opts` and used as part of the index entry.
Returns a Promise resolving to the newly added entry.
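For example, to add a second key pointing at content that was already stored, where `integrity` is assumed to come from an earlier `put`:
```javascript
cacache.index.insert(cachePath, 'my-alias', integrity, {
  size: 9320, // stored on the index entry
  metadata: { aliasOf: 'my-thing' }, // arbitrary metadata
}).then(entry => {
  console.log('added index entry:', entry)
})
```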
#### <a name="clear-memoized"></a> `> cacache.clearMemoized()`
Completely resets the in-memory entry cache.
#### <a name="tmp-mkdir"></a> `> tmp.mkdir(cache, opts) -> Promise<Path>`
Returns a unique temporary directory inside the cache's `tmp` dir. This
directory will use the same safe user assignment that the rest of the
cache uses. Once the directory is made, it's the user's responsibility to
ensure that all files within are given the appropriate `gid`/`uid`
ownership settings to match the rest of the cache. If not, you can ask
cacache to do it for you by calling [`tmp.fix()`](#tmp-fix), which will
fix all tmp directory permissions.
If you want automatic cleanup of this directory, use
[`tmp.withTmp()`](#with-tmp).
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})
```
#### <a name="tmp-fix"></a> `> tmp.fix(cache) -> Promise`
Sets the `uid` and `gid` properties on all files and folders within the tmp
folder to match the rest of the cache.
Use this after manually writing files into [`tmp.mkdir`](#tmp-mkdir) or
[`tmp.withTmp`](#with-tmp).
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
writeFile(path.join(dir, 'file'), someData).then(() => {
// make sure we didn't just put a root-owned file in the cache
cacache.tmp.fix(cache).then(() => {
// all uids and gids match now
})
})
})
```
#### <a name="with-tmp"></a> `> tmp.withTmp(cache, opts, cb) -> Promise`
Creates a temporary directory with [`tmp.mkdir()`](#tmp-mkdir) and calls `cb`
with it. The created temporary directory will be automatically removed once
the promise returned by `cb()` completes.
The same caveats apply when it comes to managing permissions for the tmp dir's
contents.
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.withTmp(cache, dir => {
return fs.writeFile(path.join(dir, 'blablabla'), 'blabla contents', { encoding: 'utf8' })
}).then(() => {
// `dir` no longer exists
})
```
##### <a name="tmp-options"></a> Options
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
#### <a name="integrity"></a> Subresource Integrity Digests
For content verification and addressing, cacache uses strings following the
[Subresource
Integrity spec](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity).
That is, any time cacache expects an `integrity` argument or option, it
should be in the format `<hashAlgorithm>-<base64-hash>`.
One deviation from the current spec is that cacache will support any hash
algorithms supported by the underlying Node.js process. You can use
`crypto.getHashes()` to see which ones you can use.
##### Generating Digests Yourself
If you have an existing content shasum, it is generally formatted as a
hexadecimal string (that is, a sha1 would look like:
`5f5513f8822fdbe5145af33b64d8d970dcf95c6e`). In order to be compatible with
cacache, you'll need to convert this to an equivalent subresource integrity
string. For this example, the corresponding hash would be:
`sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=`.
If you want to generate an integrity string yourself for existing data, you can
use something like this:
```javascript
const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'
const integrity = (
hashAlgorithm +
'-' +
crypto.createHash(hashAlgorithm).update(data).digest('base64')
)
```
You can also use [`ssri`](https://npm.im/ssri) to have a richer set of functionality
around SRI strings, including generation, parsing, and translating from existing
hex-formatted strings.
#### <a name="verify"></a> `> cacache.verify(cache, opts) -> Promise`
Checks out and fixes up your cache:
* Cleans up corrupted or invalid index entries.
* Custom entry filtering options.
* Garbage collects any content entries not referenced by the index.
* Checks integrity for all content entries and removes invalid content.
* Fixes cache ownership.
* Removes the `tmp` directory in the cache and all its contents.
When it's done, it'll return an object with various stats about the verification
process, including amount of storage reclaimed, number of valid entries, number
of entries removed, etc.
##### <a name="verify-options"></a> Options
##### `opts.concurrency`
Default: 20
Number of files to read concurrently from the filesystem while cleaning up.
##### `opts.filter`
Receives a formatted entry. Return false to remove it.
Note: might be called more than once on the same entry.
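For instance, a hypothetical run that keeps only registry entries while reading fewer files at once:
```javascript
cacache.verify(cachePath, {
  concurrency: 5,
  // entries for which this returns false are dropped, and their content
  // becomes eligible for garbage collection
  filter: entry => entry.key.startsWith('registry.npmjs.org|'),
}).then(stats => {
  console.log('verification stats:', stats)
})
```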
##### `opts.log`
Custom logger function:
```
log: { silly () {} }
log.silly('verify', 'verifying cache at', cache)
```
##### Example
```sh
echo somegarbage >> $CACHEPATH/content/deadbeef
```
```javascript
cacache.verify(cachePath).then(stats => {
// deadbeef collected, because of invalid checksum.
console.log('cache is much nicer now! stats:', stats)
})
```
#### <a name="verify-last-run"></a> `> cacache.verify.lastRun(cache) -> Promise`
Returns a `Date` representing the last time `cacache.verify` was run on `cache`.
##### Example
```javascript
cacache.verify(cachePath).then(() => {
cacache.verify.lastRun(cachePath).then(lastTime => {
console.log('cacache.verify was last called on ' + lastTime)
})
})
```

View File

@@ -0,0 +1,29 @@
'use strict'
const contentVer = require('../../package.json')['cache-version'].content
const hashToSegments = require('../util/hash-to-segments')
const path = require('path')
const ssri = require('ssri')
// Current format of content file path:
//
// sha512-BaSE64Hex= ->
// ~/.my-cache/content-v2/sha512/ba/da/55deadbeefc0ffee
//
module.exports = contentPath
function contentPath (cache, integrity) {
const sri = ssri.parse(integrity, { single: true })
// contentPath is the *strongest* algo given
return path.join(
contentDir(cache),
sri.algorithm,
...hashToSegments(sri.hexDigest())
)
}
module.exports.contentDir = contentDir
function contentDir (cache) {
return path.join(cache, `content-v${contentVer}`)
}

View File

@@ -0,0 +1,166 @@
'use strict'
const fs = require('fs/promises')
const fsm = require('fs-minipass')
const ssri = require('ssri')
const contentPath = require('./path')
const Pipeline = require('minipass-pipeline')
module.exports = read
const MAX_SINGLE_READ_SIZE = 64 * 1024 * 1024
async function read (cache, integrity, opts = {}) {
const { size } = opts
const { stat, cpath, sri } = await withContentSri(cache, integrity, async (cpath, sri) => {
// get size
const stat = await fs.stat(cpath)
return { stat, cpath, sri }
})
if (typeof size === 'number' && stat.size !== size) {
throw sizeError(size, stat.size)
}
if (stat.size > MAX_SINGLE_READ_SIZE) {
return readPipeline(cpath, stat.size, sri, new Pipeline()).concat()
}
const data = await fs.readFile(cpath, { encoding: null })
if (!ssri.checkData(data, sri)) {
throw integrityError(sri, cpath)
}
return data
}
const readPipeline = (cpath, size, sri, stream) => {
stream.push(
new fsm.ReadStream(cpath, {
size,
readSize: MAX_SINGLE_READ_SIZE,
}),
ssri.integrityStream({
integrity: sri,
size,
})
)
return stream
}
module.exports.stream = readStream
module.exports.readStream = readStream
function readStream (cache, integrity, opts = {}) {
const { size } = opts
const stream = new Pipeline()
// Set all this up to run on the stream and then just return the stream
Promise.resolve().then(async () => {
const { stat, cpath, sri } = await withContentSri(cache, integrity, async (cpath, sri) => {
// just stat to ensure it exists
const stat = await fs.stat(cpath)
return { stat, cpath, sri }
})
if (typeof size === 'number' && size !== stat.size) {
return stream.emit('error', sizeError(size, stat.size))
}
return readPipeline(cpath, stat.size, sri, stream)
}).catch(err => stream.emit('error', err))
return stream
}
module.exports.copy = copy
function copy (cache, integrity, dest) {
return withContentSri(cache, integrity, (cpath, sri) => {
return fs.copyFile(cpath, dest)
})
}
module.exports.hasContent = hasContent
async function hasContent (cache, integrity) {
if (!integrity) {
return false
}
try {
return await withContentSri(cache, integrity, async (cpath, sri) => {
const stat = await fs.stat(cpath)
return { size: stat.size, sri, stat }
})
} catch (err) {
if (err.code === 'ENOENT') {
return false
}
if (err.code === 'EPERM') {
/* istanbul ignore else */
if (process.platform !== 'win32') {
throw err
} else {
return false
}
}
}
}
async function withContentSri (cache, integrity, fn) {
const sri = ssri.parse(integrity)
// If `integrity` has multiple entries, pick the first digest
// with available local data.
const algo = sri.pickAlgorithm()
const digests = sri[algo]
if (digests.length <= 1) {
const cpath = contentPath(cache, digests[0])
return fn(cpath, digests[0])
} else {
// Can't use race here because a generic error can happen before
// a ENOENT error, and can happen before a valid result
const results = await Promise.all(digests.map(async (meta) => {
try {
return await withContentSri(cache, meta, fn)
} catch (err) {
if (err.code === 'ENOENT') {
return Object.assign(
new Error('No matching content found for ' + sri.toString()),
{ code: 'ENOENT' }
)
}
return err
}
}))
// Return the first non error if it is found
const result = results.find((r) => !(r instanceof Error))
if (result) {
return result
}
// Throw the No matching content found error
const enoentError = results.find((r) => r.code === 'ENOENT')
if (enoentError) {
throw enoentError
}
// Throw generic error
throw results.find((r) => r instanceof Error)
}
}
function sizeError (expected, found) {
/* eslint-disable-next-line max-len */
const err = new Error(`Bad data size: expected inserted data to be ${expected} bytes, but got ${found} instead`)
err.expected = expected
err.found = found
err.code = 'EBADSIZE'
return err
}
function integrityError (sri, path) {
const err = new Error(`Integrity verification failed for ${sri} (${path})`)
err.code = 'EINTEGRITY'
err.sri = sri
err.path = path
return err
}

View File

@@ -0,0 +1,18 @@
'use strict'
const fs = require('fs/promises')
const contentPath = require('./path')
const { hasContent } = require('./read')
module.exports = rm
async function rm (cache, integrity) {
const content = await hasContent(cache, integrity)
// ~pretty~ sure we can't end up with a content lacking sri, but be safe
if (content && content.sri) {
await fs.rm(contentPath(cache, content.sri), { recursive: true, force: true })
return true
} else {
return false
}
}

View File

@@ -0,0 +1,205 @@
'use strict'
const events = require('events')
const contentPath = require('./path')
const fs = require('fs/promises')
const { moveFile } = require('@npmcli/fs')
const { Minipass } = require('minipass')
const Pipeline = require('minipass-pipeline')
const Flush = require('minipass-flush')
const path = require('path')
const ssri = require('ssri')
const uniqueFilename = require('unique-filename')
const fsm = require('fs-minipass')
module.exports = write
// Cache of move operations in process so we don't duplicate
const moveOperations = new Map()
async function write (cache, data, opts = {}) {
const { algorithms, size, integrity } = opts
if (typeof size === 'number' && data.length !== size) {
throw sizeError(size, data.length)
}
const sri = ssri.fromData(data, algorithms ? { algorithms } : {})
if (integrity && !ssri.checkData(data, integrity, opts)) {
throw checksumError(integrity, sri)
}
for (const algo in sri) {
const tmp = await makeTmp(cache, opts)
const hash = sri[algo].toString()
try {
await fs.writeFile(tmp.target, data, { flag: 'wx' })
await moveToDestination(tmp, cache, hash, opts)
} finally {
if (!tmp.moved) {
await fs.rm(tmp.target, { recursive: true, force: true })
}
}
}
return { integrity: sri, size: data.length }
}
module.exports.stream = writeStream
// writes proxied to the 'inputStream' that is passed to the Promise
// 'end' is deferred until content is handled.
class CacacheWriteStream extends Flush {
constructor (cache, opts) {
super()
this.opts = opts
this.cache = cache
this.inputStream = new Minipass()
this.inputStream.on('error', er => this.emit('error', er))
this.inputStream.on('drain', () => this.emit('drain'))
this.handleContentP = null
}
write (chunk, encoding, cb) {
if (!this.handleContentP) {
this.handleContentP = handleContent(
this.inputStream,
this.cache,
this.opts
)
}
return this.inputStream.write(chunk, encoding, cb)
}
flush (cb) {
this.inputStream.end(() => {
if (!this.handleContentP) {
const e = new Error('Cache input stream was empty')
e.code = 'ENODATA'
// empty streams are probably emitting end right away.
// defer this one tick by rejecting a promise on it.
return Promise.reject(e).catch(cb)
}
// eslint-disable-next-line promise/catch-or-return
this.handleContentP.then(
(res) => {
res.integrity && this.emit('integrity', res.integrity)
// eslint-disable-next-line promise/always-return
res.size !== null && this.emit('size', res.size)
cb()
},
(er) => cb(er)
)
})
}
}
function writeStream (cache, opts = {}) {
return new CacacheWriteStream(cache, opts)
}
async function handleContent (inputStream, cache, opts) {
const tmp = await makeTmp(cache, opts)
try {
const res = await pipeToTmp(inputStream, cache, tmp.target, opts)
await moveToDestination(
tmp,
cache,
res.integrity,
opts
)
return res
} finally {
if (!tmp.moved) {
await fs.rm(tmp.target, { recursive: true, force: true })
}
}
}
async function pipeToTmp (inputStream, cache, tmpTarget, opts) {
const outStream = new fsm.WriteStream(tmpTarget, {
flags: 'wx',
})
if (opts.integrityEmitter) {
// we need to create these all simultaneously since they can fire in any order
const [integrity, size] = await Promise.all([
events.once(opts.integrityEmitter, 'integrity').then(res => res[0]),
events.once(opts.integrityEmitter, 'size').then(res => res[0]),
new Pipeline(inputStream, outStream).promise(),
])
return { integrity, size }
}
let integrity
let size
const hashStream = ssri.integrityStream({
integrity: opts.integrity,
algorithms: opts.algorithms,
size: opts.size,
})
hashStream.on('integrity', i => {
integrity = i
})
hashStream.on('size', s => {
size = s
})
const pipeline = new Pipeline(inputStream, hashStream, outStream)
await pipeline.promise()
return { integrity, size }
}
async function makeTmp (cache, opts) {
const tmpTarget = uniqueFilename(path.join(cache, 'tmp'), opts.tmpPrefix)
await fs.mkdir(path.dirname(tmpTarget), { recursive: true })
return {
target: tmpTarget,
moved: false,
}
}
async function moveToDestination (tmp, cache, sri, opts) {
const destination = contentPath(cache, sri)
const destDir = path.dirname(destination)
if (moveOperations.has(destination)) {
return moveOperations.get(destination)
}
moveOperations.set(
destination,
fs.mkdir(destDir, { recursive: true })
.then(async () => {
await moveFile(tmp.target, destination, { overwrite: false })
tmp.moved = true
return tmp.moved
})
.catch(err => {
if (!err.message.startsWith('The destination file exists')) {
throw Object.assign(err, { code: 'EEXIST' })
}
}).finally(() => {
moveOperations.delete(destination)
})
)
return moveOperations.get(destination)
}
function sizeError (expected, found) {
/* eslint-disable-next-line max-len */
const err = new Error(`Bad data size: expected inserted data to be ${expected} bytes, but got ${found} instead`)
err.expected = expected
err.found = found
err.code = 'EBADSIZE'
return err
}
function checksumError (expected, found) {
const err = new Error(`Integrity check failed:
Wanted: ${expected}
Found: ${found}`)
err.code = 'EINTEGRITY'
err.expected = expected
err.found = found
return err
}

View File

@@ -0,0 +1,330 @@
'use strict'
const crypto = require('crypto')
const {
appendFile,
mkdir,
readFile,
readdir,
rm,
writeFile,
} = require('fs/promises')
const { Minipass } = require('minipass')
const path = require('path')
const ssri = require('ssri')
const uniqueFilename = require('unique-filename')
const contentPath = require('./content/path')
const hashToSegments = require('./util/hash-to-segments')
const indexV = require('../package.json')['cache-version'].index
const { moveFile } = require('@npmcli/fs')
module.exports.NotFoundError = class NotFoundError extends Error {
constructor (cache, key) {
super(`No cache entry for ${key} found in ${cache}`)
this.code = 'ENOENT'
this.cache = cache
this.key = key
}
}
module.exports.compact = compact
async function compact (cache, key, matchFn, opts = {}) {
const bucket = bucketPath(cache, key)
const entries = await bucketEntries(bucket)
const newEntries = []
// we loop backwards because the bottom-most result is the newest
// since we add new entries with appendFile
for (let i = entries.length - 1; i >= 0; --i) {
const entry = entries[i]
// a null integrity could mean either a delete was appended
// or the user has simply stored an index that does not map
// to any content. we determine if the user wants to keep the
// null integrity based on the validateEntry function passed in options.
// if the integrity is null and no validateEntry is provided, we break
// as we consider the null integrity to be a deletion of everything
// that came before it.
if (entry.integrity === null && !opts.validateEntry) {
break
}
// if this entry is valid, and it is either the first entry or
// the newEntries array doesn't already include an entry that
// matches this one based on the provided matchFn, then we add
// it to the beginning of our list
if ((!opts.validateEntry || opts.validateEntry(entry) === true) &&
(newEntries.length === 0 ||
!newEntries.find((oldEntry) => matchFn(oldEntry, entry)))) {
newEntries.unshift(entry)
}
}
const newIndex = '\n' + newEntries.map((entry) => {
const stringified = JSON.stringify(entry)
const hash = hashEntry(stringified)
return `${hash}\t${stringified}`
}).join('\n')
const setup = async () => {
const target = uniqueFilename(path.join(cache, 'tmp'), opts.tmpPrefix)
await mkdir(path.dirname(target), { recursive: true })
return {
target,
moved: false,
}
}
const teardown = async (tmp) => {
if (!tmp.moved) {
return rm(tmp.target, { recursive: true, force: true })
}
}
const write = async (tmp) => {
await writeFile(tmp.target, newIndex, { flag: 'wx' })
await mkdir(path.dirname(bucket), { recursive: true })
// we use @npmcli/move-file directly here because we
// want to overwrite the existing file
await moveFile(tmp.target, bucket)
tmp.moved = true
}
// write the file atomically
const tmp = await setup()
try {
await write(tmp)
} finally {
await teardown(tmp)
}
// we reverse the list we generated such that the newest
// entries come first in order to make looping through them easier
// the true passed to formatEntry tells it to keep null
// integrity values, if they made it this far it's because
// validateEntry returned true, and as such we should return it
return newEntries.reverse().map((entry) => formatEntry(cache, entry, true))
}
module.exports.insert = insert
async function insert (cache, key, integrity, opts = {}) {
const { metadata, size, time } = opts
const bucket = bucketPath(cache, key)
const entry = {
key,
integrity: integrity && ssri.stringify(integrity),
time: time || Date.now(),
size,
metadata,
}
try {
await mkdir(path.dirname(bucket), { recursive: true })
const stringified = JSON.stringify(entry)
// NOTE - Cleverness ahoy!
//
// This works because it's tremendously unlikely for an entry to corrupt
// another while still preserving the string length of the JSON in
// question. So, we just slap the length in there and verify it on read.
//
// Thanks to @isaacs for the whiteboarding session that ended up with
// this.
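// A bucket line therefore looks like (hypothetical values):
//   <sha1 hex of the JSON>\t{"key":"my-key","integrity":"sha512-...","time":1234567890,"size":123}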
await appendFile(bucket, `\n${hashEntry(stringified)}\t${stringified}`)
} catch (err) {
if (err.code === 'ENOENT') {
return undefined
}
throw err
}
return formatEntry(cache, entry)
}
module.exports.find = find
async function find (cache, key) {
const bucket = bucketPath(cache, key)
try {
const entries = await bucketEntries(bucket)
return entries.reduce((latest, next) => {
if (next && next.key === key) {
return formatEntry(cache, next)
} else {
return latest
}
}, null)
} catch (err) {
if (err.code === 'ENOENT') {
return null
} else {
throw err
}
}
}
module.exports.delete = del
function del (cache, key, opts = {}) {
if (!opts.removeFully) {
return insert(cache, key, null, opts)
}
const bucket = bucketPath(cache, key)
return rm(bucket, { recursive: true, force: true })
}
module.exports.lsStream = lsStream
function lsStream (cache) {
const indexDir = bucketDir(cache)
const stream = new Minipass({ objectMode: true })
// Set all this up to run on the stream and then just return the stream
Promise.resolve().then(async () => {
const buckets = await readdirOrEmpty(indexDir)
await Promise.all(buckets.map(async (bucket) => {
const bucketPath = path.join(indexDir, bucket)
const subbuckets = await readdirOrEmpty(bucketPath)
await Promise.all(subbuckets.map(async (subbucket) => {
const subbucketPath = path.join(bucketPath, subbucket)
// "/cachename/<bucket 0xFF>/<bucket 0xFF>./*"
const subbucketEntries = await readdirOrEmpty(subbucketPath)
await Promise.all(subbucketEntries.map(async (entry) => {
const entryPath = path.join(subbucketPath, entry)
try {
const entries = await bucketEntries(entryPath)
// using a Map here prevents duplicate keys from showing up
// twice, I guess?
const reduced = entries.reduce((acc, entry) => {
acc.set(entry.key, entry)
return acc
}, new Map())
// reduced is a map of key => entry
for (const entry of reduced.values()) {
const formatted = formatEntry(cache, entry)
if (formatted) {
stream.write(formatted)
}
}
} catch (err) {
if (err.code === 'ENOENT') {
return undefined
}
throw err
}
}))
}))
}))
stream.end()
return stream
}).catch(err => stream.emit('error', err))
return stream
}
module.exports.ls = ls
async function ls (cache) {
const entries = await lsStream(cache).collect()
return entries.reduce((acc, xs) => {
acc[xs.key] = xs
return acc
}, {})
}
module.exports.bucketEntries = bucketEntries
async function bucketEntries (bucket, filter) {
const data = await readFile(bucket, 'utf8')
return _bucketEntries(data, filter)
}
function _bucketEntries (data, filter) {
const entries = []
data.split('\n').forEach((entry) => {
if (!entry) {
return
}
const pieces = entry.split('\t')
if (!pieces[1] || hashEntry(pieces[1]) !== pieces[0]) {
// Hash is no good! Corruption or malice? Doesn't matter!
// EJECT EJECT
return
}
let obj
try {
obj = JSON.parse(pieces[1])
} catch (_) {
// eslint-ignore-next-line no-empty-block
}
// coverage disabled here, no need to test with an entry that parses to something falsey
// istanbul ignore else
if (obj) {
entries.push(obj)
}
})
return entries
}
module.exports.bucketDir = bucketDir
function bucketDir (cache) {
return path.join(cache, `index-v${indexV}`)
}
module.exports.bucketPath = bucketPath
function bucketPath (cache, key) {
const hashed = hashKey(key)
return path.join.apply(
path,
[bucketDir(cache)].concat(hashToSegments(hashed))
)
}
module.exports.hashKey = hashKey
function hashKey (key) {
return hash(key, 'sha256')
}
module.exports.hashEntry = hashEntry
function hashEntry (str) {
return hash(str, 'sha1')
}
function hash (str, digest) {
return crypto
.createHash(digest)
.update(str)
.digest('hex')
}
function formatEntry (cache, entry, keepAll) {
// Treat null digests as deletions. They'll shadow any previous entries.
if (!entry.integrity && !keepAll) {
return null
}
return {
key: entry.key,
integrity: entry.integrity,
path: entry.integrity ? contentPath(cache, entry.integrity) : undefined,
size: entry.size,
time: entry.time,
metadata: entry.metadata,
}
}
function readdirOrEmpty (dir) {
return readdir(dir).catch((err) => {
if (err.code === 'ENOENT' || err.code === 'ENOTDIR') {
return []
}
throw err
})
}

View File

@@ -0,0 +1,170 @@
'use strict'
const Collect = require('minipass-collect')
const { Minipass } = require('minipass')
const Pipeline = require('minipass-pipeline')
const index = require('./entry-index')
const memo = require('./memoization')
const read = require('./content/read')
async function getData (cache, key, opts = {}) {
const { integrity, memoize, size } = opts
const memoized = memo.get(cache, key, opts)
if (memoized && memoize !== false) {
return {
metadata: memoized.entry.metadata,
data: memoized.data,
integrity: memoized.entry.integrity,
size: memoized.entry.size,
}
}
const entry = await index.find(cache, key, opts)
if (!entry) {
throw new index.NotFoundError(cache, key)
}
const data = await read(cache, entry.integrity, { integrity, size })
if (memoize) {
memo.put(cache, entry, data, opts)
}
return {
data,
metadata: entry.metadata,
size: entry.size,
integrity: entry.integrity,
}
}
module.exports = getData
async function getDataByDigest (cache, key, opts = {}) {
const { integrity, memoize, size } = opts
const memoized = memo.get.byDigest(cache, key, opts)
if (memoized && memoize !== false) {
return memoized
}
const res = await read(cache, key, { integrity, size })
if (memoize) {
memo.put.byDigest(cache, key, res, opts)
}
return res
}
module.exports.byDigest = getDataByDigest
const getMemoizedStream = (memoized) => {
const stream = new Minipass()
stream.on('newListener', function (ev, cb) {
ev === 'metadata' && cb(memoized.entry.metadata)
ev === 'integrity' && cb(memoized.entry.integrity)
ev === 'size' && cb(memoized.entry.size)
})
stream.end(memoized.data)
return stream
}
function getStream (cache, key, opts = {}) {
const { memoize, size } = opts
const memoized = memo.get(cache, key, opts)
if (memoized && memoize !== false) {
return getMemoizedStream(memoized)
}
const stream = new Pipeline()
// Set all this up to run on the stream and then just return the stream
Promise.resolve().then(async () => {
const entry = await index.find(cache, key)
if (!entry) {
throw new index.NotFoundError(cache, key)
}
stream.emit('metadata', entry.metadata)
stream.emit('integrity', entry.integrity)
stream.emit('size', entry.size)
stream.on('newListener', function (ev, cb) {
ev === 'metadata' && cb(entry.metadata)
ev === 'integrity' && cb(entry.integrity)
ev === 'size' && cb(entry.size)
})
const src = read.readStream(
cache,
entry.integrity,
{ ...opts, size: typeof size !== 'number' ? entry.size : size }
)
if (memoize) {
const memoStream = new Collect.PassThrough()
memoStream.on('collect', data => memo.put(cache, entry, data, opts))
stream.unshift(memoStream)
}
stream.unshift(src)
return stream
}).catch((err) => stream.emit('error', err))
return stream
}
module.exports.stream = getStream
function getStreamDigest (cache, integrity, opts = {}) {
const { memoize } = opts
const memoized = memo.get.byDigest(cache, integrity, opts)
if (memoized && memoize !== false) {
const stream = new Minipass()
stream.end(memoized)
return stream
} else {
const stream = read.readStream(cache, integrity, opts)
if (!memoize) {
return stream
}
const memoStream = new Collect.PassThrough()
memoStream.on('collect', data => memo.put.byDigest(
cache,
integrity,
data,
opts
))
return new Pipeline(stream, memoStream)
}
}
module.exports.stream.byDigest = getStreamDigest
function info (cache, key, opts = {}) {
const { memoize } = opts
const memoized = memo.get(cache, key, opts)
if (memoized && memoize !== false) {
return Promise.resolve(memoized.entry)
} else {
return index.find(cache, key)
}
}
module.exports.info = info
async function copy (cache, key, dest, opts = {}) {
const entry = await index.find(cache, key, opts)
if (!entry) {
throw new index.NotFoundError(cache, key)
}
await read.copy(cache, entry.integrity, dest, opts)
return {
metadata: entry.metadata,
size: entry.size,
integrity: entry.integrity,
}
}
module.exports.copy = copy
async function copyByDigest (cache, key, dest, opts = {}) {
await read.copy(cache, key, dest, opts)
return key
}
module.exports.copy.byDigest = copyByDigest
module.exports.hasContent = read.hasContent

View File

@@ -0,0 +1,42 @@
'use strict'
const get = require('./get.js')
const put = require('./put.js')
const rm = require('./rm.js')
const verify = require('./verify.js')
const { clearMemoized } = require('./memoization.js')
const tmp = require('./util/tmp.js')
const index = require('./entry-index.js')
module.exports.index = {}
module.exports.index.compact = index.compact
module.exports.index.insert = index.insert
module.exports.ls = index.ls
module.exports.ls.stream = index.lsStream
module.exports.get = get
module.exports.get.byDigest = get.byDigest
module.exports.get.stream = get.stream
module.exports.get.stream.byDigest = get.stream.byDigest
module.exports.get.copy = get.copy
module.exports.get.copy.byDigest = get.copy.byDigest
module.exports.get.info = get.info
module.exports.get.hasContent = get.hasContent
module.exports.put = put
module.exports.put.stream = put.stream
module.exports.rm = rm.entry
module.exports.rm.all = rm.all
module.exports.rm.entry = module.exports.rm
module.exports.rm.content = rm.content
module.exports.clearMemoized = clearMemoized
module.exports.tmp = {}
module.exports.tmp.mkdir = tmp.mkdir
module.exports.tmp.withTmp = tmp.withTmp
module.exports.verify = verify
module.exports.verify.lastRun = verify.lastRun

View File

@@ -0,0 +1,72 @@
'use strict'
const LRU = require('lru-cache')
const MEMOIZED = new LRU({
max: 500,
maxSize: 50 * 1024 * 1024, // 50MB
ttl: 3 * 60 * 1000, // 3 minutes
sizeCalculation: (entry, key) => key.startsWith('key:') ? entry.data.length : entry.length,
})
module.exports.clearMemoized = clearMemoized
function clearMemoized () {
const old = {}
MEMOIZED.forEach((v, k) => {
old[k] = v
})
MEMOIZED.clear()
return old
}
module.exports.put = put
function put (cache, entry, data, opts) {
pickMem(opts).set(`key:${cache}:${entry.key}`, { entry, data })
putDigest(cache, entry.integrity, data, opts)
}
module.exports.put.byDigest = putDigest
function putDigest (cache, integrity, data, opts) {
pickMem(opts).set(`digest:${cache}:${integrity}`, data)
}
module.exports.get = get
function get (cache, key, opts) {
return pickMem(opts).get(`key:${cache}:${key}`)
}
module.exports.get.byDigest = getDigest
function getDigest (cache, integrity, opts) {
return pickMem(opts).get(`digest:${cache}:${integrity}`)
}
class ObjProxy {
constructor (obj) {
this.obj = obj
}
get (key) {
return this.obj[key]
}
set (key, val) {
this.obj[key] = val
}
}
function pickMem (opts) {
if (!opts || !opts.memoize) {
return MEMOIZED
} else if (opts.memoize.get && opts.memoize.set) {
return opts.memoize
} else if (typeof opts.memoize === 'object') {
return new ObjProxy(opts.memoize)
} else {
return MEMOIZED
}
}

View File

@@ -0,0 +1,80 @@
'use strict'
const index = require('./entry-index')
const memo = require('./memoization')
const write = require('./content/write')
const Flush = require('minipass-flush')
const { PassThrough } = require('minipass-collect')
const Pipeline = require('minipass-pipeline')
const putOpts = (opts) => ({
algorithms: ['sha512'],
...opts,
})
module.exports = putData
async function putData (cache, key, data, opts = {}) {
const { memoize } = opts
opts = putOpts(opts)
const res = await write(cache, data, opts)
const entry = await index.insert(cache, key, res.integrity, { ...opts, size: res.size })
if (memoize) {
memo.put(cache, entry, data, opts)
}
return res.integrity
}
module.exports.stream = putStream
function putStream (cache, key, opts = {}) {
const { memoize } = opts
opts = putOpts(opts)
let integrity
let size
let error
let memoData
const pipeline = new Pipeline()
// first item in the pipeline is the memoizer, because we need
// that to end first and get the collected data.
if (memoize) {
const memoizer = new PassThrough().on('collect', data => {
memoData = data
})
pipeline.push(memoizer)
}
// contentStream is a write-only, not a passthrough
// no data comes out of it.
const contentStream = write.stream(cache, opts)
.on('integrity', (int) => {
integrity = int
})
.on('size', (s) => {
size = s
})
.on('error', (err) => {
error = err
})
pipeline.push(contentStream)
// last but not least, we write the index and emit hash and size,
// and memoize if we're doing that
pipeline.push(new Flush({
async flush () {
if (!error) {
const entry = await index.insert(cache, key, integrity, { ...opts, size })
if (memoize && memoData) {
memo.put(cache, entry, memoData, opts)
}
pipeline.emit('integrity', integrity)
pipeline.emit('size', size)
}
},
}))
return pipeline
}

View File

@@ -0,0 +1,31 @@
'use strict'
const { rm } = require('fs/promises')
const glob = require('./util/glob.js')
const index = require('./entry-index')
const memo = require('./memoization')
const path = require('path')
const rmContent = require('./content/rm')
module.exports = entry
module.exports.entry = entry
function entry (cache, key, opts) {
memo.clearMemoized()
return index.delete(cache, key, opts)
}
module.exports.content = content
function content (cache, integrity) {
memo.clearMemoized()
return rmContent(cache, integrity)
}
module.exports.all = all
async function all (cache) {
memo.clearMemoized()
const paths = await glob(path.join(cache, '*(content-*|index-*)'), { silent: true, nosort: true })
return Promise.all(paths.map((p) => rm(p, { recursive: true, force: true })))
}

View File

@@ -0,0 +1,7 @@
'use strict'
const { glob } = require('glob')
const path = require('path')
const globify = (pattern) => pattern.split(path.win32.sep).join(path.posix.sep)
module.exports = (path, options) => glob(globify(path), options)

View File

@@ -0,0 +1,7 @@
'use strict'
module.exports = hashToSegments
function hashToSegments (hash) {
return [hash.slice(0, 2), hash.slice(2, 4), hash.slice(4)]
}

View File

@@ -0,0 +1,26 @@
'use strict'
const { withTempDir } = require('@npmcli/fs')
const fs = require('fs/promises')
const path = require('path')
module.exports.mkdir = mktmpdir
async function mktmpdir (cache, opts = {}) {
const { tmpPrefix } = opts
const tmpDir = path.join(cache, 'tmp')
await fs.mkdir(tmpDir, { recursive: true, owner: 'inherit' })
// do not use path.join(), it drops the trailing / if tmpPrefix is unset
const target = `${tmpDir}${path.sep}${tmpPrefix || ''}`
return fs.mkdtemp(target, { owner: 'inherit' })
}
module.exports.withTmp = withTmp
function withTmp (cache, opts, cb) {
if (!cb) {
cb = opts
opts = {}
}
return withTempDir(path.join(cache, 'tmp'), cb, opts)
}

View File

@@ -0,0 +1,257 @@
'use strict'
const {
mkdir,
readFile,
rm,
stat,
truncate,
writeFile,
} = require('fs/promises')
const pMap = require('p-map')
const contentPath = require('./content/path')
const fsm = require('fs-minipass')
const glob = require('./util/glob.js')
const index = require('./entry-index')
const path = require('path')
const ssri = require('ssri')
const hasOwnProperty = (obj, key) =>
Object.prototype.hasOwnProperty.call(obj, key)
const verifyOpts = (opts) => ({
concurrency: 20,
log: { silly () {} },
...opts,
})
module.exports = verify
async function verify (cache, opts) {
opts = verifyOpts(opts)
opts.log.silly('verify', 'verifying cache at', cache)
const steps = [
markStartTime,
fixPerms,
garbageCollect,
rebuildIndex,
cleanTmp,
writeVerifile,
markEndTime,
]
const stats = {}
for (const step of steps) {
const label = step.name
const start = new Date()
const s = await step(cache, opts)
if (s) {
Object.keys(s).forEach((k) => {
stats[k] = s[k]
})
}
const end = new Date()
if (!stats.runTime) {
stats.runTime = {}
}
stats.runTime[label] = end - start
}
stats.runTime.total = stats.endTime - stats.startTime
opts.log.silly(
'verify',
'verification finished for',
cache,
'in',
`${stats.runTime.total}ms`
)
return stats
}
async function markStartTime (cache, opts) {
return { startTime: new Date() }
}
async function markEndTime (cache, opts) {
return { endTime: new Date() }
}
async function fixPerms (cache, opts) {
opts.log.silly('verify', 'fixing cache permissions')
await mkdir(cache, { recursive: true })
return null
}
// Implements a naive mark-and-sweep tracing garbage collector.
//
// The algorithm is basically as follows:
// 1. Read (and filter) all index entries ("pointers")
// 2. Mark each integrity value as "live"
// 3. Read entire filesystem tree in `content-vX/` dir
// 4. If content is live, verify its checksum and delete it if it fails
// 5. If content is not marked as live, rm it.
//
async function garbageCollect (cache, opts) {
opts.log.silly('verify', 'garbage collecting content')
const indexStream = index.lsStream(cache)
const liveContent = new Set()
indexStream.on('data', (entry) => {
if (opts.filter && !opts.filter(entry)) {
return
}
// integrity is stringified, re-parse it so we can get each hash
const integrity = ssri.parse(entry.integrity)
for (const algo in integrity) {
liveContent.add(integrity[algo].toString())
}
})
await new Promise((resolve, reject) => {
indexStream.on('end', resolve).on('error', reject)
})
const contentDir = contentPath.contentDir(cache)
const files = await glob(path.join(contentDir, '**'), {
follow: false,
nodir: true,
nosort: true,
})
const stats = {
verifiedContent: 0,
reclaimedCount: 0,
reclaimedSize: 0,
badContentCount: 0,
keptSize: 0,
}
await pMap(
files,
async (f) => {
const split = f.split(/[/\\]/)
const digest = split.slice(split.length - 3).join('')
const algo = split[split.length - 4]
const integrity = ssri.fromHex(digest, algo)
if (liveContent.has(integrity.toString())) {
const info = await verifyContent(f, integrity)
if (!info.valid) {
stats.reclaimedCount++
stats.badContentCount++
stats.reclaimedSize += info.size
} else {
stats.verifiedContent++
stats.keptSize += info.size
}
} else {
// No entries refer to this content. We can delete.
stats.reclaimedCount++
const s = await stat(f)
await rm(f, { recursive: true, force: true })
stats.reclaimedSize += s.size
}
return stats
},
{ concurrency: opts.concurrency }
)
return stats
}
async function verifyContent (filepath, sri) {
const contentInfo = {}
try {
const { size } = await stat(filepath)
contentInfo.size = size
contentInfo.valid = true
await ssri.checkStream(new fsm.ReadStream(filepath), sri)
} catch (err) {
if (err.code === 'ENOENT') {
return { size: 0, valid: false }
}
if (err.code !== 'EINTEGRITY') {
throw err
}
await rm(filepath, { recursive: true, force: true })
contentInfo.valid = false
}
return contentInfo
}
async function rebuildIndex (cache, opts) {
opts.log.silly('verify', 'rebuilding index')
const entries = await index.ls(cache)
const stats = {
missingContent: 0,
rejectedEntries: 0,
totalEntries: 0,
}
const buckets = {}
for (const k in entries) {
/* istanbul ignore else */
if (hasOwnProperty(entries, k)) {
const hashed = index.hashKey(k)
const entry = entries[k]
const excluded = opts.filter && !opts.filter(entry)
excluded && stats.rejectedEntries++
if (buckets[hashed] && !excluded) {
buckets[hashed].push(entry)
} else if (buckets[hashed] && excluded) {
// skip
} else if (excluded) {
buckets[hashed] = []
buckets[hashed]._path = index.bucketPath(cache, k)
} else {
buckets[hashed] = [entry]
buckets[hashed]._path = index.bucketPath(cache, k)
}
}
}
await pMap(
Object.keys(buckets),
(key) => {
return rebuildBucket(cache, buckets[key], stats, opts)
},
{ concurrency: opts.concurrency }
)
return stats
}
async function rebuildBucket (cache, bucket, stats, opts) {
await truncate(bucket._path)
// This needs to be serialized because cacache explicitly
// lets very racy bucket conflicts clobber each other.
for (const entry of bucket) {
const content = contentPath(cache, entry.integrity)
try {
await stat(content)
await index.insert(cache, entry.key, entry.integrity, {
metadata: entry.metadata,
size: entry.size,
time: entry.time,
})
stats.totalEntries++
} catch (err) {
if (err.code === 'ENOENT') {
stats.rejectedEntries++
stats.missingContent++
} else {
throw err
}
}
}
}
function cleanTmp (cache, opts) {
opts.log.silly('verify', 'cleaning tmp directory')
return rm(path.join(cache, 'tmp'), { recursive: true, force: true })
}
async function writeVerifile (cache, opts) {
const verifile = path.join(cache, '_lastverified')
opts.log.silly('verify', 'writing verifile to ' + verifile)
return writeFile(verifile, `${Date.now()}`)
}
module.exports.lastRun = lastRun
async function lastRun (cache) {
const data = await readFile(path.join(cache, '_lastverified'), { encoding: 'utf8' })
return new Date(+data)
}
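A usage sketch for this module as exposed on `cacache.verify` (cache path and filter predicate are hypothetical; assumes an async context). The call resolves to the stats object assembled step by step above, and `lastRun()` reads back the `_lastverified` timestamp:

```js
const cacache = require('cacache')

const stats = await cacache.verify('/tmp/my-cache', {
  // entries failing the filter are rejected during the index rebuild
  filter: (entry) => entry.key.startsWith('my-app:'),
})
console.log(stats.verifiedContent, 'blobs kept;', stats.reclaimedSize, 'bytes reclaimed')

console.log('last verified:', await cacache.verify.lastRun('/tmp/my-cache'))
```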

View File

@@ -0,0 +1,82 @@
{
"name": "cacache",
"version": "17.1.4",
"cache-version": {
"content": "2",
"index": "5"
},
"description": "Fast, fault-tolerant, cross-platform, disk-based, data-agnostic, content-addressable cache.",
"main": "lib/index.js",
"files": [
"bin/",
"lib/"
],
"scripts": {
"test": "tap",
"snap": "tap",
"coverage": "tap",
"test-docker": "docker run -it --rm --name pacotest -v \"$PWD\":/tmp -w /tmp node:latest npm test",
"lint": "eslint \"**/*.js\"",
"npmclilint": "npmcli-lint",
"lintfix": "npm run lint -- --fix",
"postsnap": "npm run lintfix --",
"postlint": "template-oss-check",
"posttest": "npm run lint",
"template-oss-apply": "template-oss-apply --force"
},
"repository": {
"type": "git",
"url": "https://github.com/npm/cacache.git"
},
"keywords": [
"cache",
"caching",
"content-addressable",
"sri",
"sri hash",
"subresource integrity",
"cache",
"storage",
"store",
"file store",
"filesystem",
"disk cache",
"disk storage"
],
"license": "ISC",
"dependencies": {
"@npmcli/fs": "^3.1.0",
"fs-minipass": "^3.0.0",
"glob": "^10.2.2",
"lru-cache": "^7.7.1",
"minipass": "^7.0.3",
"minipass-collect": "^1.0.2",
"minipass-flush": "^1.0.5",
"minipass-pipeline": "^1.2.4",
"p-map": "^4.0.0",
"ssri": "^10.0.0",
"tar": "^6.1.11",
"unique-filename": "^3.0.0"
},
"devDependencies": {
"@npmcli/eslint-config": "^4.0.0",
"@npmcli/template-oss": "4.18.0",
"tap": "^16.0.0"
},
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"windowsCI": false,
"version": "4.18.0",
"publish": "true"
},
"author": "GitHub Inc.",
"tap": {
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
}
}

View File

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -0,0 +1,70 @@
# fs-minipass
Filesystem streams based on [minipass](http://npm.im/minipass).
4 classes are exported:
- ReadStream
- ReadStreamSync
- WriteStream
- WriteStreamSync
When using `ReadStreamSync`, all of the data is made available
immediately upon consuming the stream. Nothing is buffered in memory
when the stream is constructed. If the stream is piped to a writer,
then it will synchronously `read()` and emit data into the writer as
fast as the writer can consume it. (That is, it will respect
backpressure.) If you call `stream.read()` then it will read the
entire file and return the contents.
When using `WriteStreamSync`, every write is flushed to the file
synchronously. If your writes all come in a single tick, then it'll
write it all out in a single tick. It's as synchronous as you are.
The async versions work much like their Node built-in counterparts,
but introduce significantly less Stream machinery overhead.
## USAGE
It's just streams: you pipe them, read() from them, or write() to them.
```js
const fsm = require('fs-minipass')
const readStream = new fsm.ReadStream('file.txt')
const writeStream = new fsm.WriteStream('output.txt')
writeStream.write('some file header or whatever\n')
readStream.pipe(writeStream)
```
## ReadStream(path, options)
Path string is required, but somewhat irrelevant if an open file
descriptor is passed in as an option.
Options:
- `fd` Pass in a numeric file descriptor, if the file is already open.
- `readSize` The size of reads to do, defaults to 16MB
- `size` The size of the file, if known. Prevents zero-byte read()
call at the end.
- `autoClose` Set to `false` to prevent the file descriptor from being
closed when the file is done being read.
## WriteStream(path, options)
Path string is required, but somewhat irrelevant if an open file
descriptor is passed in as an option.
Options:
- `fd` Pass in a numeric file descriptor, if the file is already open.
- `mode` The mode to create the file with. Defaults to `0o666`.
- `start` The position in the file to start writing. If not
specified, then the file will start writing at position zero, and be
truncated by default.
- `autoClose` Set to `false` to prevent the file descriptor from being
closed when the stream is ended.
- `flags` Flags to use when opening the file. Irrelevant if `fd` is
passed in, since the file won't be opened in that case. Defaults to
`'r+'` if a `start` position is specified, or `'w'` otherwise.
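A short sketch exercising a few of these options (file names are hypothetical):

```js
const fsm = require('fs-minipass')

// a known size up front avoids the final zero-byte read()
const rs = new fsm.ReadStream('data.bin', { size: 1024, readSize: 512 })

// a numeric start position flips the default flags to 'r+', so bytes
// before position 256 are preserved instead of truncated away
const ws = new fsm.WriteStream('patched.bin', { start: 256 })
ws.end(Buffer.from('new bytes'))
```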

View File

@@ -0,0 +1,443 @@
'use strict'
const { Minipass } = require('minipass')
const EE = require('events').EventEmitter
const fs = require('fs')
const writev = fs.writev
const _autoClose = Symbol('_autoClose')
const _close = Symbol('_close')
const _ended = Symbol('_ended')
const _fd = Symbol('_fd')
const _finished = Symbol('_finished')
const _flags = Symbol('_flags')
const _flush = Symbol('_flush')
const _handleChunk = Symbol('_handleChunk')
const _makeBuf = Symbol('_makeBuf')
const _mode = Symbol('_mode')
const _needDrain = Symbol('_needDrain')
const _onerror = Symbol('_onerror')
const _onopen = Symbol('_onopen')
const _onread = Symbol('_onread')
const _onwrite = Symbol('_onwrite')
const _open = Symbol('_open')
const _path = Symbol('_path')
const _pos = Symbol('_pos')
const _queue = Symbol('_queue')
const _read = Symbol('_read')
const _readSize = Symbol('_readSize')
const _reading = Symbol('_reading')
const _remain = Symbol('_remain')
const _size = Symbol('_size')
const _write = Symbol('_write')
const _writing = Symbol('_writing')
const _defaultFlag = Symbol('_defaultFlag')
const _errored = Symbol('_errored')
class ReadStream extends Minipass {
constructor (path, opt) {
opt = opt || {}
super(opt)
this.readable = true
this.writable = false
if (typeof path !== 'string') {
throw new TypeError('path must be a string')
}
this[_errored] = false
this[_fd] = typeof opt.fd === 'number' ? opt.fd : null
this[_path] = path
this[_readSize] = opt.readSize || 16 * 1024 * 1024
this[_reading] = false
this[_size] = typeof opt.size === 'number' ? opt.size : Infinity
this[_remain] = this[_size]
this[_autoClose] = typeof opt.autoClose === 'boolean' ?
opt.autoClose : true
if (typeof this[_fd] === 'number') {
this[_read]()
} else {
this[_open]()
}
}
get fd () {
return this[_fd]
}
get path () {
return this[_path]
}
write () {
throw new TypeError('this is a readable stream')
}
end () {
throw new TypeError('this is a readable stream')
}
[_open] () {
fs.open(this[_path], 'r', (er, fd) => this[_onopen](er, fd))
}
[_onopen] (er, fd) {
if (er) {
this[_onerror](er)
} else {
this[_fd] = fd
this.emit('open', fd)
this[_read]()
}
}
[_makeBuf] () {
return Buffer.allocUnsafe(Math.min(this[_readSize], this[_remain]))
}
[_read] () {
if (!this[_reading]) {
this[_reading] = true
const buf = this[_makeBuf]()
/* istanbul ignore if */
if (buf.length === 0) {
return process.nextTick(() => this[_onread](null, 0, buf))
}
fs.read(this[_fd], buf, 0, buf.length, null, (er, br, b) =>
this[_onread](er, br, b))
}
}
[_onread] (er, br, buf) {
this[_reading] = false
if (er) {
this[_onerror](er)
} else if (this[_handleChunk](br, buf)) {
this[_read]()
}
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
const fd = this[_fd]
this[_fd] = null
fs.close(fd, er => er ? this.emit('error', er) : this.emit('close'))
}
}
[_onerror] (er) {
this[_reading] = true
this[_close]()
this.emit('error', er)
}
[_handleChunk] (br, buf) {
let ret = false
// no effect if infinite
this[_remain] -= br
if (br > 0) {
ret = super.write(br < buf.length ? buf.slice(0, br) : buf)
}
if (br === 0 || this[_remain] <= 0) {
ret = false
this[_close]()
super.end()
}
return ret
}
emit (ev, data) {
switch (ev) {
case 'prefinish':
case 'finish':
break
case 'drain':
if (typeof this[_fd] === 'number') {
this[_read]()
}
break
case 'error':
if (this[_errored]) {
return
}
this[_errored] = true
return super.emit(ev, data)
default:
return super.emit(ev, data)
}
}
}
class ReadStreamSync extends ReadStream {
[_open] () {
let threw = true
try {
this[_onopen](null, fs.openSync(this[_path], 'r'))
threw = false
} finally {
if (threw) {
this[_close]()
}
}
}
[_read] () {
let threw = true
try {
if (!this[_reading]) {
this[_reading] = true
do {
const buf = this[_makeBuf]()
/* istanbul ignore next */
const br = buf.length === 0 ? 0
: fs.readSync(this[_fd], buf, 0, buf.length, null)
if (!this[_handleChunk](br, buf)) {
break
}
} while (true)
this[_reading] = false
}
threw = false
} finally {
if (threw) {
this[_close]()
}
}
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
const fd = this[_fd]
this[_fd] = null
fs.closeSync(fd)
this.emit('close')
}
}
}
class WriteStream extends EE {
constructor (path, opt) {
opt = opt || {}
super(opt)
this.readable = false
this.writable = true
this[_errored] = false
this[_writing] = false
this[_ended] = false
this[_needDrain] = false
this[_queue] = []
this[_path] = path
this[_fd] = typeof opt.fd === 'number' ? opt.fd : null
this[_mode] = opt.mode === undefined ? 0o666 : opt.mode
this[_pos] = typeof opt.start === 'number' ? opt.start : null
this[_autoClose] = typeof opt.autoClose === 'boolean' ?
opt.autoClose : true
// truncating makes no sense when writing into the middle
const defaultFlag = this[_pos] !== null ? 'r+' : 'w'
this[_defaultFlag] = opt.flags === undefined
this[_flags] = this[_defaultFlag] ? defaultFlag : opt.flags
if (this[_fd] === null) {
this[_open]()
}
}
emit (ev, data) {
if (ev === 'error') {
if (this[_errored]) {
return
}
this[_errored] = true
}
return super.emit(ev, data)
}
get fd () {
return this[_fd]
}
get path () {
return this[_path]
}
[_onerror] (er) {
this[_close]()
this[_writing] = true
this.emit('error', er)
}
[_open] () {
fs.open(this[_path], this[_flags], this[_mode],
(er, fd) => this[_onopen](er, fd))
}
[_onopen] (er, fd) {
if (this[_defaultFlag] &&
this[_flags] === 'r+' &&
er && er.code === 'ENOENT') {
this[_flags] = 'w'
this[_open]()
} else if (er) {
this[_onerror](er)
} else {
this[_fd] = fd
this.emit('open', fd)
if (!this[_writing]) {
this[_flush]()
}
}
}
end (buf, enc) {
if (buf) {
this.write(buf, enc)
}
this[_ended] = true
// synthetic after-write logic, where drain/finish live
if (!this[_writing] && !this[_queue].length &&
typeof this[_fd] === 'number') {
this[_onwrite](null, 0)
}
return this
}
write (buf, enc) {
if (typeof buf === 'string') {
buf = Buffer.from(buf, enc)
}
if (this[_ended]) {
this.emit('error', new Error('write() after end()'))
return false
}
if (this[_fd] === null || this[_writing] || this[_queue].length) {
this[_queue].push(buf)
this[_needDrain] = true
return false
}
this[_writing] = true
this[_write](buf)
return true
}
[_write] (buf) {
fs.write(this[_fd], buf, 0, buf.length, this[_pos], (er, bw) =>
this[_onwrite](er, bw))
}
[_onwrite] (er, bw) {
if (er) {
this[_onerror](er)
} else {
if (this[_pos] !== null) {
this[_pos] += bw
}
if (this[_queue].length) {
this[_flush]()
} else {
this[_writing] = false
if (this[_ended] && !this[_finished]) {
this[_finished] = true
this[_close]()
this.emit('finish')
} else if (this[_needDrain]) {
this[_needDrain] = false
this.emit('drain')
}
}
}
}
[_flush] () {
if (this[_queue].length === 0) {
if (this[_ended]) {
this[_onwrite](null, 0)
}
} else if (this[_queue].length === 1) {
this[_write](this[_queue].pop())
} else {
const iovec = this[_queue]
this[_queue] = []
writev(this[_fd], iovec, this[_pos],
(er, bw) => this[_onwrite](er, bw))
}
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
const fd = this[_fd]
this[_fd] = null
fs.close(fd, er => er ? this.emit('error', er) : this.emit('close'))
}
}
}
class WriteStreamSync extends WriteStream {
[_open] () {
let fd
// only wrap in a try{} block if we know we'll retry, to avoid
// the rethrow obscuring the error's source frame in most cases.
if (this[_defaultFlag] && this[_flags] === 'r+') {
try {
fd = fs.openSync(this[_path], this[_flags], this[_mode])
} catch (er) {
if (er.code === 'ENOENT') {
this[_flags] = 'w'
return this[_open]()
} else {
throw er
}
}
} else {
fd = fs.openSync(this[_path], this[_flags], this[_mode])
}
this[_onopen](null, fd)
}
[_close] () {
if (this[_autoClose] && typeof this[_fd] === 'number') {
const fd = this[_fd]
this[_fd] = null
fs.closeSync(fd)
this.emit('close')
}
}
[_write] (buf) {
// throw the original, but try to close if it fails
let threw = true
try {
this[_onwrite](null,
fs.writeSync(this[_fd], buf, 0, buf.length, this[_pos]))
threw = false
} finally {
if (threw) {
try {
this[_close]()
} catch {
// ok error
}
}
}
}
}
exports.ReadStream = ReadStream
exports.ReadStreamSync = ReadStreamSync
exports.WriteStream = WriteStream
exports.WriteStreamSync = WriteStreamSync
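A small sketch of the synchronous variants in use (file names hypothetical). `ReadStreamSync` does its reading during construction, so the contents are available immediately; `WriteStreamSync` flushes each write before returning:

```js
const fsm = require('fs-minipass')

// read() hands back what was read synchronously at construction
const contents = new fsm.ReadStreamSync('config.json').read()

const ws = new fsm.WriteStreamSync('out.log')
ws.write('line 1\n') // already on disk when this call returns
ws.end()
```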

View File

@@ -0,0 +1,54 @@
{
"name": "fs-minipass",
"version": "3.0.3",
"main": "lib/index.js",
"scripts": {
"test": "tap",
"lint": "eslint \"**/*.js\"",
"postlint": "template-oss-check",
"template-oss-apply": "template-oss-apply --force",
"lintfix": "npm run lint -- --fix",
"snap": "tap",
"posttest": "npm run lint"
},
"keywords": [],
"author": "GitHub Inc.",
"license": "ISC",
"repository": {
"type": "git",
"url": "https://github.com/npm/fs-minipass.git"
},
"bugs": {
"url": "https://github.com/npm/fs-minipass/issues"
},
"homepage": "https://github.com/npm/fs-minipass#readme",
"description": "fs read and write streams based on minipass",
"dependencies": {
"minipass": "^7.0.3"
},
"devDependencies": {
"@npmcli/eslint-config": "^4.0.1",
"@npmcli/template-oss": "4.18.0",
"mutate-fs": "^2.1.1",
"tap": "^16.3.2"
},
"files": [
"bin/",
"lib/"
],
"tap": {
"check-coverage": true,
"nyc-arg": [
"--exclude",
"tap-snapshots/**"
]
},
"engines": {
"node": "^14.17.0 || ^16.13.0 || >=18.0.0"
},
"templateOSS": {
"//@npmcli/template-oss": "This file is partially managed by @npmcli/template-oss. Edits may be overwritten.",
"version": "4.18.0",
"publish": "true"
}
}

View File

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) 2009-2023 Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

File diff suppressed because it is too large

View File

@@ -0,0 +1,389 @@
/// <reference types="node" />
import { Minimatch } from 'minimatch';
import { Minipass } from 'minipass';
import { FSOption, Path, PathScurry } from 'path-scurry';
import { IgnoreLike } from './ignore.js';
import { Pattern } from './pattern.js';
export type MatchSet = Minimatch['set'];
export type GlobParts = Exclude<Minimatch['globParts'], undefined>;
/**
* A `GlobOptions` object may be provided to any of the exported methods, and
* must be provided to the `Glob` constructor.
*
* All options are optional, boolean, and false by default, unless otherwise
* noted.
*
* All resolved options are added to the Glob object as properties.
*
* If you are running many `glob` operations, you can pass a Glob object as the
* `options` argument to a subsequent operation to share the previously loaded
* cache.
*/
export interface GlobOptions {
/**
* Set to `true` to always receive absolute paths for
* matched files. Set to `false` to always return relative paths.
*
* When this option is not set, absolute paths are returned for patterns
* that are absolute, and otherwise paths are returned that are relative
* to the `cwd` setting.
*
* This does _not_ make an extra system call to get
* the realpath, it only does string path resolution.
*
* Conflicts with {@link withFileTypes}
*/
absolute?: boolean;
/**
* Set to false to enable {@link windowsPathsNoEscape}
*
* @deprecated
*/
allowWindowsEscape?: boolean;
/**
* The current working directory in which to search. Defaults to
* `process.cwd()`.
*
     * May be either a string path or a `file://` URL object or string.
*/
cwd?: string | URL;
/**
* Include `.dot` files in normal matches and `globstar`
* matches. Note that an explicit dot in a portion of the pattern
* will always match dot files.
*/
dot?: boolean;
/**
* Prepend all relative path strings with `./` (or `.\` on Windows).
*
* Without this option, returned relative paths are "bare", so instead of
* returning `'./foo/bar'`, they are returned as `'foo/bar'`.
*
* Relative patterns starting with `'../'` are not prepended with `./`, even
* if this option is set.
*/
dotRelative?: boolean;
/**
* Follow symlinked directories when expanding `**`
* patterns. This can result in a lot of duplicate references in
* the presence of cyclic links, and make performance quite bad.
*
* By default, a `**` in a pattern will follow 1 symbolic link if
* it is not the first item in the pattern, or none if it is the
* first item in the pattern, following the same behavior as Bash.
*/
follow?: boolean;
/**
     * string or string[], or an object with `ignored` and `childrenIgnored`
     * methods.
*
* If a string or string[] is provided, then this is treated as a glob
* pattern or array of glob patterns to exclude from matches. To ignore all
* children within a directory, as well as the entry itself, append `'/**'`
* to the ignore pattern.
*
* **Note** `ignore` patterns are _always_ in `dot:true` mode, regardless of
* any other settings.
*
* If an object is provided that has `ignored(path)` and/or
* `childrenIgnored(path)` methods, then these methods will be called to
* determine whether any Path is a match or if its children should be
* traversed, respectively.
*/
ignore?: string | string[] | IgnoreLike;
/**
* Treat brace expansion like `{a,b}` as a "magic" pattern. Has no
* effect if {@link nobrace} is set.
*
* Only has effect on the {@link hasMagic} function.
*/
magicalBraces?: boolean;
/**
* Add a `/` character to directory matches. Note that this requires
* additional stat calls in some cases.
*/
mark?: boolean;
/**
* Perform a basename-only match if the pattern does not contain any slash
* characters. That is, `*.js` would be treated as equivalent to
* `**\/*.js`, matching all js files in all directories.
*/
matchBase?: boolean;
/**
* Limit the directory traversal to a given depth below the cwd.
* Note that this does NOT prevent traversal to sibling folders,
* root patterns, and so on. It only limits the maximum folder depth
* that the walk will descend, relative to the cwd.
*/
maxDepth?: number;
/**
* Do not expand `{a,b}` and `{1..3}` brace sets.
*/
nobrace?: boolean;
/**
* Perform a case-insensitive match. This defaults to `true` on macOS and
* Windows systems, and `false` on all others.
*
* **Note** `nocase` should only be explicitly set when it is
* known that the filesystem's case sensitivity differs from the
* platform default. If set `true` on case-sensitive file
* systems, or `false` on case-insensitive file systems, then the
* walk may return more or less results than expected.
*/
nocase?: boolean;
/**
* Do not match directories, only files. (Note: to match
* _only_ directories, put a `/` at the end of the pattern.)
*/
nodir?: boolean;
/**
* Do not match "extglob" patterns such as `+(a|b)`.
*/
noext?: boolean;
/**
* Do not match `**` against multiple filenames. (Ie, treat it as a normal
* `*` instead.)
*
* Conflicts with {@link matchBase}
*/
noglobstar?: boolean;
/**
* Defaults to value of `process.platform` if available, or `'linux'` if
* not. Setting `platform:'win32'` on non-Windows systems may cause strange
* behavior.
*/
platform?: NodeJS.Platform;
/**
* Set to true to call `fs.realpath` on all of the
* results. In the case of an entry that cannot be resolved, the
* entry is omitted. This incurs a slight performance penalty, of
* course, because of the added system calls.
*/
realpath?: boolean;
/**
*
* A string path resolved against the `cwd` option, which
* is used as the starting point for absolute patterns that start
* with `/`, (but not drive letters or UNC paths on Windows).
*
* Note that this _doesn't_ necessarily limit the walk to the
* `root` directory, and doesn't affect the cwd starting point for
* non-absolute patterns. A pattern containing `..` will still be
* able to traverse out of the root directory, if it is not an
* actual root directory on the filesystem, and any non-absolute
* patterns will be matched in the `cwd`. For example, the
* pattern `/../*` with `{root:'/some/path'}` will return all
* files in `/some`, not all files in `/some/path`. The pattern
* `*` with `{root:'/some/path'}` will return all the entries in
* the cwd, not the entries in `/some/path`.
*
* To start absolute and non-absolute patterns in the same
* path, you can use `{root:''}`. However, be aware that on
* Windows systems, a pattern like `x:/*` or `//host/share/*` will
* _always_ start in the `x:/` or `//host/share` directory,
* regardless of the `root` setting.
*/
root?: string;
/**
* A [PathScurry](http://npm.im/path-scurry) object used
* to traverse the file system. If the `nocase` option is set
* explicitly, then any provided `scurry` object must match this
* setting.
*/
scurry?: PathScurry;
/**
* Call `lstat()` on all entries, whether required or not to determine
* if it's a valid match. When used with {@link withFileTypes}, this means
* that matches will include data such as modified time, permissions, and
* so on. Note that this will incur a performance cost due to the added
* system calls.
*/
stat?: boolean;
/**
* An AbortSignal which will cancel the Glob walk when
* triggered.
*/
signal?: AbortSignal;
/**
* Use `\\` as a path separator _only_, and
* _never_ as an escape character. If set, all `\\` characters are
* replaced with `/` in the pattern.
*
* Note that this makes it **impossible** to match against paths
* containing literal glob pattern characters, but allows matching
* with patterns constructed using `path.join()` and
* `path.resolve()` on Windows platforms, mimicking the (buggy!)
* behavior of Glob v7 and before on Windows. Please use with
* caution, and be mindful of [the caveat below about Windows
* paths](#windows). (For legacy reasons, this is also set if
* `allowWindowsEscape` is set to the exact value `false`.)
*/
windowsPathsNoEscape?: boolean;
/**
* Return [PathScurry](http://npm.im/path-scurry)
* `Path` objects instead of strings. These are similar to a
* NodeJS `Dirent` object, but with additional methods and
* properties.
*
* Conflicts with {@link absolute}
*/
withFileTypes?: boolean;
/**
* An fs implementation to override some or all of the defaults. See
* http://npm.im/path-scurry for details about what can be overridden.
*/
fs?: FSOption;
/**
* Just passed along to Minimatch. Note that this makes all pattern
* matching operations slower and *extremely* noisy.
*/
debug?: boolean;
/**
* Return `/` delimited paths, even on Windows.
*
* On posix systems, this has no effect. But, on Windows, it means that
* paths will be `/` delimited, and absolute paths will be their full
* resolved UNC forms, eg instead of `'C:\\foo\\bar'`, it would return
* `'//?/C:/foo/bar'`
*/
posix?: boolean;
/**
* Do not match any children of any matches. For example, the pattern
* `**\/foo` would match `a/foo`, but not `a/foo/b/foo` in this mode.
*
* This is especially useful for cases like "find all `node_modules`
* folders, but not the ones in `node_modules`".
*
* In order to support this, the `Ignore` implementation must support an
* `add(pattern: string)` method. If using the default `Ignore` class, then
* this is fine, but if this is set to `false`, and a custom `Ignore` is
* provided that does not have an `add()` method, then it will throw an
* error.
*
* **Caveat** It *only* ignores matches that would be a descendant of a
* previous match, and only if that descendant is matched *after* the
* ancestor is encountered. Since the file system walk happens in
* indeterminate order, it's possible that a match will already be added
* before its ancestor, if multiple or braced patterns are used.
*
* For example:
*
* ```ts
* const results = await glob([
* // likely to match first, since it's just a stat
* 'a/b/c/d/e/f',
*
     * // this pattern is more complicated! It must do various readdir()
* // calls and test the results against a regular expression, and that
* // is certainly going to take a little bit longer.
* //
* // So, later on, it encounters a match at 'a/b/c/d/e', but it's too
* // late to ignore a/b/c/d/e/f, because it's already been emitted.
* 'a/[bdf]/?/[a-z]/*',
* ], { includeChildMatches: false })
* ```
*
* It's best to only set this to `false` if you can be reasonably sure that
* no components of the pattern will potentially match one another's file
* system descendants, or if the occasional included child entry will not
* cause problems.
*
* @default true
*/
includeChildMatches?: boolean;
}
export type GlobOptionsWithFileTypesTrue = GlobOptions & {
withFileTypes: true;
absolute?: undefined;
mark?: undefined;
posix?: undefined;
};
export type GlobOptionsWithFileTypesFalse = GlobOptions & {
withFileTypes?: false;
};
export type GlobOptionsWithFileTypesUnset = GlobOptions & {
withFileTypes?: undefined;
};
export type Result<Opts> = Opts extends GlobOptionsWithFileTypesTrue ? Path : Opts extends GlobOptionsWithFileTypesFalse ? string : Opts extends GlobOptionsWithFileTypesUnset ? string : string | Path;
export type Results<Opts> = Result<Opts>[];
export type FileTypes<Opts> = Opts extends GlobOptionsWithFileTypesTrue ? true : Opts extends GlobOptionsWithFileTypesFalse ? false : Opts extends GlobOptionsWithFileTypesUnset ? false : boolean;
/**
* An object that can perform glob pattern traversals.
*/
export declare class Glob<Opts extends GlobOptions> implements GlobOptions {
absolute?: boolean;
cwd: string;
root?: string;
dot: boolean;
dotRelative: boolean;
follow: boolean;
ignore?: string | string[] | IgnoreLike;
magicalBraces: boolean;
mark?: boolean;
matchBase: boolean;
maxDepth: number;
nobrace: boolean;
nocase: boolean;
nodir: boolean;
noext: boolean;
noglobstar: boolean;
pattern: string[];
platform: NodeJS.Platform;
realpath: boolean;
scurry: PathScurry;
stat: boolean;
signal?: AbortSignal;
windowsPathsNoEscape: boolean;
withFileTypes: FileTypes<Opts>;
includeChildMatches: boolean;
/**
* The options provided to the constructor.
*/
opts: Opts;
/**
* An array of parsed immutable {@link Pattern} objects.
*/
patterns: Pattern[];
/**
* All options are stored as properties on the `Glob` object.
*
* See {@link GlobOptions} for full options descriptions.
*
* Note that a previous `Glob` object can be passed as the
* `GlobOptions` to another `Glob` instantiation to re-use settings
* and caches with a new pattern.
*
* Traversal functions can be called multiple times to run the walk
* again.
*/
constructor(pattern: string | string[], opts: Opts);
/**
* Returns a Promise that resolves to the results array.
*/
walk(): Promise<Results<Opts>>;
/**
* synchronous {@link Glob.walk}
*/
walkSync(): Results<Opts>;
/**
* Stream results asynchronously.
*/
stream(): Minipass<Result<Opts>, Result<Opts>>;
/**
* Stream results synchronously.
*/
streamSync(): Minipass<Result<Opts>, Result<Opts>>;
/**
* Default sync iteration function. Returns a Generator that
* iterates over the results.
*/
iterateSync(): Generator<Result<Opts>, void, void>;
[Symbol.iterator](): Generator<Result<Opts>, void, void>;
/**
* Default async iteration function. Returns an AsyncGenerator that
* iterates over the results.
*/
iterate(): AsyncGenerator<Result<Opts>, void, void>;
[Symbol.asyncIterator](): AsyncGenerator<Result<Opts>, void, void>;
}
//# sourceMappingURL=glob.d.ts.map
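A sketch of a few of these options in action (patterns and paths are illustrative; assumes an async context):

```js
const { glob } = require('glob')

// string results by default; ignore patterns always match in dot:true mode
const files = await glob('**/*.js', {
  ignore: 'node_modules/**',
  absolute: true,
})

// withFileTypes: true yields path-scurry Path objects instead of strings
for (const entry of await glob('src/**', { withFileTypes: true })) {
  console.log(entry.fullpath(), entry.isDirectory())
}
```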


View File

@@ -0,0 +1,247 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Glob = void 0;
const minimatch_1 = require("minimatch");
const node_url_1 = require("node:url");
const path_scurry_1 = require("path-scurry");
const pattern_js_1 = require("./pattern.js");
const walker_js_1 = require("./walker.js");
// if no process global, just call it linux.
// so we default to case-sensitive, / separators
const defaultPlatform = (typeof process === 'object' &&
process &&
typeof process.platform === 'string') ?
process.platform
: 'linux';
/**
* An object that can perform glob pattern traversals.
*/
class Glob {
absolute;
cwd;
root;
dot;
dotRelative;
follow;
ignore;
magicalBraces;
mark;
matchBase;
maxDepth;
nobrace;
nocase;
nodir;
noext;
noglobstar;
pattern;
platform;
realpath;
scurry;
stat;
signal;
windowsPathsNoEscape;
withFileTypes;
includeChildMatches;
/**
* The options provided to the constructor.
*/
opts;
/**
* An array of parsed immutable {@link Pattern} objects.
*/
patterns;
/**
* All options are stored as properties on the `Glob` object.
*
* See {@link GlobOptions} for full options descriptions.
*
* Note that a previous `Glob` object can be passed as the
* `GlobOptions` to another `Glob` instantiation to re-use settings
* and caches with a new pattern.
*
* Traversal functions can be called multiple times to run the walk
* again.
*/
constructor(pattern, opts) {
/* c8 ignore start */
if (!opts)
throw new TypeError('glob options required');
/* c8 ignore stop */
this.withFileTypes = !!opts.withFileTypes;
this.signal = opts.signal;
this.follow = !!opts.follow;
this.dot = !!opts.dot;
this.dotRelative = !!opts.dotRelative;
this.nodir = !!opts.nodir;
this.mark = !!opts.mark;
if (!opts.cwd) {
this.cwd = '';
}
else if (opts.cwd instanceof URL || opts.cwd.startsWith('file://')) {
opts.cwd = (0, node_url_1.fileURLToPath)(opts.cwd);
}
this.cwd = opts.cwd || '';
this.root = opts.root;
this.magicalBraces = !!opts.magicalBraces;
this.nobrace = !!opts.nobrace;
this.noext = !!opts.noext;
this.realpath = !!opts.realpath;
this.absolute = opts.absolute;
this.includeChildMatches = opts.includeChildMatches !== false;
this.noglobstar = !!opts.noglobstar;
this.matchBase = !!opts.matchBase;
this.maxDepth =
typeof opts.maxDepth === 'number' ? opts.maxDepth : Infinity;
this.stat = !!opts.stat;
this.ignore = opts.ignore;
if (this.withFileTypes && this.absolute !== undefined) {
throw new Error('cannot set absolute and withFileTypes:true');
}
if (typeof pattern === 'string') {
pattern = [pattern];
}
this.windowsPathsNoEscape =
!!opts.windowsPathsNoEscape ||
opts.allowWindowsEscape ===
false;
if (this.windowsPathsNoEscape) {
pattern = pattern.map(p => p.replace(/\\/g, '/'));
}
if (this.matchBase) {
if (opts.noglobstar) {
throw new TypeError('base matching requires globstar');
}
pattern = pattern.map(p => (p.includes('/') ? p : `./**/${p}`));
}
this.pattern = pattern;
this.platform = opts.platform || defaultPlatform;
this.opts = { ...opts, platform: this.platform };
if (opts.scurry) {
this.scurry = opts.scurry;
if (opts.nocase !== undefined &&
opts.nocase !== opts.scurry.nocase) {
throw new Error('nocase option contradicts provided scurry option');
}
}
else {
const Scurry = opts.platform === 'win32' ? path_scurry_1.PathScurryWin32
: opts.platform === 'darwin' ? path_scurry_1.PathScurryDarwin
: opts.platform ? path_scurry_1.PathScurryPosix
: path_scurry_1.PathScurry;
this.scurry = new Scurry(this.cwd, {
nocase: opts.nocase,
fs: opts.fs,
});
}
this.nocase = this.scurry.nocase;
// If you do nocase:true on a case-sensitive file system, then
// we need to use regexps instead of strings for non-magic
// path portions, because statting `aBc` won't return results
// for the file `AbC` for example.
const nocaseMagicOnly = this.platform === 'darwin' || this.platform === 'win32';
const mmo = {
// default nocase based on platform
...opts,
dot: this.dot,
matchBase: this.matchBase,
nobrace: this.nobrace,
nocase: this.nocase,
nocaseMagicOnly,
nocomment: true,
noext: this.noext,
nonegate: true,
optimizationLevel: 2,
platform: this.platform,
windowsPathsNoEscape: this.windowsPathsNoEscape,
debug: !!this.opts.debug,
};
const mms = this.pattern.map(p => new minimatch_1.Minimatch(p, mmo));
const [matchSet, globParts] = mms.reduce((set, m) => {
set[0].push(...m.set);
set[1].push(...m.globParts);
return set;
}, [[], []]);
this.patterns = matchSet.map((set, i) => {
const g = globParts[i];
/* c8 ignore start */
if (!g)
throw new Error('invalid pattern object');
/* c8 ignore stop */
return new pattern_js_1.Pattern(set, g, 0, this.platform);
});
}
async walk() {
// Walkers always return array of Path objects, so we just have to
// coerce them into the right shape. It will have already called
// realpath() if the option was set to do so, so we know that's cached.
// start out knowing the cwd, at least
return [
...(await new walker_js_1.GlobWalker(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).walk()),
];
}
walkSync() {
return [
...new walker_js_1.GlobWalker(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).walkSync(),
];
}
stream() {
return new walker_js_1.GlobStream(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).stream();
}
streamSync() {
return new walker_js_1.GlobStream(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).streamSync();
}
/**
* Default sync iteration function. Returns a Generator that
* iterates over the results.
*/
iterateSync() {
return this.streamSync()[Symbol.iterator]();
}
[Symbol.iterator]() {
return this.iterateSync();
}
/**
* Default async iteration function. Returns an AsyncGenerator that
* iterates over the results.
*/
iterate() {
return this.stream()[Symbol.asyncIterator]();
}
[Symbol.asyncIterator]() {
return this.iterate();
}
}
exports.Glob = Glob;
//# sourceMappingURL=glob.js.map
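A brief sketch of the class in use (pattern illustrative; assumes an async context). An instance is both sync- and async-iterable, and, as the constructor docs note, can be passed as the options to a second instantiation to share its settings and scurry cache:

```js
const { Glob } = require('glob')

const g = new Glob('**/*.md', { nocase: true })
for await (const file of g) {
  console.log(file)
}

// re-use resolved settings and the directory cache with a new pattern
const g2 = new Glob('**/*.txt', g)
console.log(g2.walkSync().length, 'matches')
```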

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,14 @@
import { GlobOptions } from './glob.js';
/**
* Return true if the patterns provided contain any magic glob characters,
* given the options provided.
*
* Brace expansion is not considered "magic" unless the `magicalBraces` option
* is set, as brace expansion just turns one string into an array of strings.
* So a pattern like `'x{a,b}y'` would return `false`, because `'xay'` and
* `'xby'` both do not contain any magic glob characters, and it's treated the
* same as if you had called it on `['xay', 'xby']`. When `magicalBraces:true`
* is in the options, brace expansion _is_ treated as a pattern having magic.
*/
export declare const hasMagic: (pattern: string | string[], options?: GlobOptions) => boolean;
//# sourceMappingURL=has-magic.d.ts.map


View File

@@ -0,0 +1,27 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.hasMagic = void 0;
const minimatch_1 = require("minimatch");
/**
* Return true if the patterns provided contain any magic glob characters,
* given the options provided.
*
* Brace expansion is not considered "magic" unless the `magicalBraces` option
* is set, as brace expansion just turns one string into an array of strings.
* So a pattern like `'x{a,b}y'` would return `false`, because `'xay'` and
* `'xby'` both do not contain any magic glob characters, and it's treated the
* same as if you had called it on `['xay', 'xby']`. When `magicalBraces:true`
* is in the options, brace expansion _is_ treated as a pattern having magic.
*/
const hasMagic = (pattern, options = {}) => {
if (!Array.isArray(pattern)) {
pattern = [pattern];
}
for (const p of pattern) {
if (new minimatch_1.Minimatch(p, options).hasMagic())
return true;
}
return false;
};
exports.hasMagic = hasMagic;
//# sourceMappingURL=has-magic.js.map
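Illustrating the brace-expansion rule described above:

```js
const { hasMagic } = require('glob')

hasMagic('x{a,b}y')                          // false: expands to plain strings
hasMagic('x{a,b}y', { magicalBraces: true }) // true: braces now count as magic
hasMagic('x*y')                              // true: * is always magic
hasMagic(['plain.txt', 'also/plain.js'])     // false
```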


View File

@@ -0,0 +1,25 @@
/// <reference types="node" />
import { Minimatch, MinimatchOptions } from 'minimatch';
import { Path } from 'path-scurry';
import { GlobWalkerOpts } from './walker.js';
export interface IgnoreLike {
ignored?: (p: Path) => boolean;
childrenIgnored?: (p: Path) => boolean;
add?: (ignore: string) => void;
}
/**
* Class used to process ignored patterns
*/
export declare class Ignore implements IgnoreLike {
relative: Minimatch[];
relativeChildren: Minimatch[];
absolute: Minimatch[];
absoluteChildren: Minimatch[];
platform: NodeJS.Platform;
mmopts: MinimatchOptions;
constructor(ignored: string[], { nobrace, nocase, noext, noglobstar, platform, }: GlobWalkerOpts);
add(ign: string): void;
ignored(p: Path): boolean;
childrenIgnored(p: Path): boolean;
}
//# sourceMappingURL=ignore.d.ts.map


View File

@@ -0,0 +1,119 @@
"use strict";
// give it a pattern, and it'll be able to tell you if
// a given path should be ignored.
// Ignoring a path ignores its children if the pattern ends in /**
// Ignores are always parsed in dot:true mode
Object.defineProperty(exports, "__esModule", { value: true });
exports.Ignore = void 0;
const minimatch_1 = require("minimatch");
const pattern_js_1 = require("./pattern.js");
const defaultPlatform = (typeof process === 'object' &&
process &&
typeof process.platform === 'string') ?
process.platform
: 'linux';
/**
* Class used to process ignored patterns
*/
class Ignore {
relative;
relativeChildren;
absolute;
absoluteChildren;
platform;
mmopts;
constructor(ignored, { nobrace, nocase, noext, noglobstar, platform = defaultPlatform, }) {
this.relative = [];
this.absolute = [];
this.relativeChildren = [];
this.absoluteChildren = [];
this.platform = platform;
this.mmopts = {
dot: true,
nobrace,
nocase,
noext,
noglobstar,
optimizationLevel: 2,
platform,
nocomment: true,
nonegate: true,
};
for (const ign of ignored)
this.add(ign);
}
add(ign) {
// this is a little weird, but it gives us a clean set of optimized
// minimatch matchers, without getting tripped up if one of them
// ends in /** inside a brace section, and it's only inefficient at
// the start of the walk, not along it.
// It'd be nice if the Pattern class just had a .test() method, but
// handling globstars is a bit of a pita, and that code already lives
// in minimatch anyway.
// Another way would be if maybe Minimatch could take its set/globParts
// as an option, and then we could at least just use Pattern to test
// for absolute-ness.
// Yet another way, Minimatch could take an array of glob strings, and
// a cwd option, and do the right thing.
const mm = new minimatch_1.Minimatch(ign, this.mmopts);
for (let i = 0; i < mm.set.length; i++) {
const parsed = mm.set[i];
const globParts = mm.globParts[i];
/* c8 ignore start */
if (!parsed || !globParts) {
throw new Error('invalid pattern object');
}
// strip off leading ./ portions
// https://github.com/isaacs/node-glob/issues/570
while (parsed[0] === '.' && globParts[0] === '.') {
parsed.shift();
globParts.shift();
}
/* c8 ignore stop */
const p = new pattern_js_1.Pattern(parsed, globParts, 0, this.platform);
const m = new minimatch_1.Minimatch(p.globString(), this.mmopts);
const children = globParts[globParts.length - 1] === '**';
const absolute = p.isAbsolute();
if (absolute)
this.absolute.push(m);
else
this.relative.push(m);
if (children) {
if (absolute)
this.absoluteChildren.push(m);
else
this.relativeChildren.push(m);
}
}
}
ignored(p) {
const fullpath = p.fullpath();
const fullpaths = `${fullpath}/`;
const relative = p.relative() || '.';
const relatives = `${relative}/`;
for (const m of this.relative) {
if (m.match(relative) || m.match(relatives))
return true;
}
for (const m of this.absolute) {
if (m.match(fullpath) || m.match(fullpaths))
return true;
}
return false;
}
childrenIgnored(p) {
const fullpath = p.fullpath() + '/';
const relative = (p.relative() || '.') + '/';
for (const m of this.relativeChildren) {
if (m.match(relative))
return true;
}
for (const m of this.absoluteChildren) {
if (m.match(fullpath))
return true;
}
return false;
}
}
exports.Ignore = Ignore;
//# sourceMappingURL=ignore.js.map
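A sketch of the two ignore forms this class backs (patterns and predicates are illustrative; assumes an async context):

```js
const { glob } = require('glob')

// string patterns: matched in dot:true mode; a trailing /** also
// ignores everything under the matched directory
await glob('**', { ignore: ['node_modules/**', '*.tmp'] })

// IgnoreLike object: ignored() filters matches, childrenIgnored()
// prunes whole subtrees from the walk
await glob('**', {
  ignore: {
    ignored: (p) => p.name.endsWith('.log'),
    childrenIgnored: (p) => p.name === '.git',
  },
})
```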

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,97 @@
import { Minipass } from 'minipass';
import { Path } from 'path-scurry';
import type { GlobOptions, GlobOptionsWithFileTypesFalse, GlobOptionsWithFileTypesTrue, GlobOptionsWithFileTypesUnset } from './glob.js';
import { Glob } from './glob.js';
export { escape, unescape } from 'minimatch';
export type { FSOption, Path, WalkOptions, WalkOptionsWithFileTypesTrue, WalkOptionsWithFileTypesUnset, } from 'path-scurry';
export { Glob } from './glob.js';
export type { GlobOptions, GlobOptionsWithFileTypesFalse, GlobOptionsWithFileTypesTrue, GlobOptionsWithFileTypesUnset, } from './glob.js';
export { hasMagic } from './has-magic.js';
export { Ignore } from './ignore.js';
export type { IgnoreLike } from './ignore.js';
export type { MatchStream } from './walker.js';
/**
 * Synchronous form of {@link globStream}. Will read all the matches as fast as
* you consume them, even all in a single tick if you consume them immediately,
* but will still respond to backpressure if they're not consumed immediately.
*/
export declare function globStreamSync(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Minipass<Path, Path>;
export declare function globStreamSync(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Minipass<string, string>;
export declare function globStreamSync(pattern: string | string[], options: GlobOptionsWithFileTypesUnset): Minipass<string, string>;
export declare function globStreamSync(pattern: string | string[], options: GlobOptions): Minipass<Path, Path> | Minipass<string, string>;
/**
* Return a stream that emits all the strings or `Path` objects and
* then emits `end` when completed.
*/
export declare function globStream(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Minipass<string, string>;
export declare function globStream(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Minipass<Path, Path>;
export declare function globStream(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): Minipass<string, string>;
export declare function globStream(pattern: string | string[], options: GlobOptions): Minipass<Path, Path> | Minipass<string, string>;
/**
* Synchronous form of {@link glob}
*/
export declare function globSync(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): string[];
export declare function globSync(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Path[];
export declare function globSync(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): string[];
export declare function globSync(pattern: string | string[], options: GlobOptions): Path[] | string[];
/**
* Perform an asynchronous glob search for the pattern(s) specified. Returns
* [Path](https://isaacs.github.io/path-scurry/classes/PathBase) objects if the
* {@link withFileTypes} option is set to `true`. See {@link GlobOptions} for
* full option descriptions.
*/
declare function glob_(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): Promise<string[]>;
declare function glob_(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Promise<Path[]>;
declare function glob_(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Promise<string[]>;
declare function glob_(pattern: string | string[], options: GlobOptions): Promise<Path[] | string[]>;
/**
* Return a sync iterator for walking glob pattern matches.
*/
export declare function globIterateSync(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): Generator<string, void, void>;
export declare function globIterateSync(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Generator<Path, void, void>;
export declare function globIterateSync(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Generator<string, void, void>;
export declare function globIterateSync(pattern: string | string[], options: GlobOptions): Generator<Path, void, void> | Generator<string, void, void>;
/**
* Return an async iterator for walking glob pattern matches.
*/
export declare function globIterate(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): AsyncGenerator<string, void, void>;
export declare function globIterate(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): AsyncGenerator<Path, void, void>;
export declare function globIterate(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): AsyncGenerator<string, void, void>;
export declare function globIterate(pattern: string | string[], options: GlobOptions): AsyncGenerator<Path, void, void> | AsyncGenerator<string, void, void>;
export declare const streamSync: typeof globStreamSync;
export declare const stream: typeof globStream & {
sync: typeof globStreamSync;
};
export declare const iterateSync: typeof globIterateSync;
export declare const iterate: typeof globIterate & {
sync: typeof globIterateSync;
};
export declare const sync: typeof globSync & {
stream: typeof globStreamSync;
iterate: typeof globIterateSync;
};
export declare const glob: typeof glob_ & {
glob: typeof glob_;
globSync: typeof globSync;
sync: typeof globSync & {
stream: typeof globStreamSync;
iterate: typeof globIterateSync;
};
globStream: typeof globStream;
stream: typeof globStream & {
sync: typeof globStreamSync;
};
globStreamSync: typeof globStreamSync;
streamSync: typeof globStreamSync;
globIterate: typeof globIterate;
iterate: typeof globIterate & {
sync: typeof globIterateSync;
};
globIterateSync: typeof globIterateSync;
iterateSync: typeof globIterateSync;
Glob: typeof Glob;
hasMagic: (pattern: string | string[], options?: GlobOptions) => boolean;
escape: (s: string, { windowsPathsNoEscape, }?: Pick<import("minimatch").MinimatchOptions, "windowsPathsNoEscape"> | undefined) => string;
unescape: (s: string, { windowsPathsNoEscape, }?: Pick<import("minimatch").MinimatchOptions, "windowsPathsNoEscape"> | undefined) => string;
};
//# sourceMappingURL=index.d.ts.map
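
// Editor's usage sketch for the overloads declared above (illustrative,
// not part of the vendored file; assumes the package resolves as 'glob').
// The `withFileTypes` option selects the return type: unset or false
// yields strings, true yields path-scurry Path objects.
const { glob, globSync } = require('glob')
const demo = async () => {
  const names = await glob('**/*.js')                          // string[]
  const paths = await glob('**/*.js', { withFileTypes: true }) // Path[]
  for (const p of paths) {
    // Path objects carry cached stat info from the walk, so this is cheap
    console.log(p.fullpath(), p.isDirectory())
  }
  const syncNames = globSync('**/*.js', { nodir: true })       // string[]
  console.log(names.length, syncNames.length)
}
demo()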


@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,QAAQ,EAAE,MAAM,UAAU,CAAA;AACnC,OAAO,EAAE,IAAI,EAAE,MAAM,aAAa,CAAA;AAClC,OAAO,KAAK,EACV,WAAW,EACX,6BAA6B,EAC7B,4BAA4B,EAC5B,6BAA6B,EAC9B,MAAM,WAAW,CAAA;AAClB,OAAO,EAAE,IAAI,EAAE,MAAM,WAAW,CAAA;AAGhC,OAAO,EAAE,MAAM,EAAE,QAAQ,EAAE,MAAM,WAAW,CAAA;AAC5C,YAAY,EACV,QAAQ,EACR,IAAI,EACJ,WAAW,EACX,4BAA4B,EAC5B,6BAA6B,GAC9B,MAAM,aAAa,CAAA;AACpB,OAAO,EAAE,IAAI,EAAE,MAAM,WAAW,CAAA;AAChC,YAAY,EACV,WAAW,EACX,6BAA6B,EAC7B,4BAA4B,EAC5B,6BAA6B,GAC9B,MAAM,WAAW,CAAA;AAClB,OAAO,EAAE,QAAQ,EAAE,MAAM,gBAAgB,CAAA;AACzC,OAAO,EAAE,MAAM,EAAE,MAAM,aAAa,CAAA;AACpC,YAAY,EAAE,UAAU,EAAE,MAAM,aAAa,CAAA;AAC7C,YAAY,EAAE,WAAW,EAAE,MAAM,aAAa,CAAA;AAE9C;;;;GAIG;AACH,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,CAAA;AACvB,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,GAAG,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAQlD;;;GAGG;AACH,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,CAAA;AACvB,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,GAAG,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAQlD;;GAEG;AACH,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,MAAM,EAAE,CAAA;AACX,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,IAAI,EAAE,CAAA;AACT,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,MAAM,EAAE,CAAA;AACX,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,IAAI,EAAE,GAAG,MAAM,EAAE,CAAA;AAQpB;;;;;GAKG;AACH,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,OAAO,CAAC,MAAM,EAAE,CAAC,CAAA;AACpB,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,OAAO,CAAC,IAAI,EAAE,CAAC,CAAA;AAClB,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,OAAO,CAAC,MAAM,EAAE,CAAC,CAAA;AACpB,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,OAAO,CAAC,IAAI,EAAE,GAAG,MAAM,EAAE,CAAC,CAAA;AAQ7B;;GAEG;AACH,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAChC,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,SAAS,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAC9B,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAChC,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,SAAS,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAC,GAAG,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAQ9D;;GAEG;AACH,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,cAAc,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AACrC,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,cAAc,CAAC,IAAI,EAAE,IAAI,EAAE,IA
AI,CAAC,CAAA;AACnC,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,cAAc,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AACrC,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,cAAc,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAC,GAAG,cAAc,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AASxE,eAAO,MAAM,UAAU,uBAAiB,CAAA;AACxC,eAAO,MAAM,MAAM;;CAAsD,CAAA;AACzE,eAAO,MAAM,WAAW,wBAAkB,CAAA;AAC1C,eAAO,MAAM,OAAO;;CAElB,CAAA;AACF,eAAO,MAAM,IAAI;;;CAGf,CAAA;AAEF,eAAO,MAAM,IAAI;;;;;;;;;;;;;;;;;;;;;;;CAgBf,CAAA"}


@@ -0,0 +1,68 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.glob = exports.sync = exports.iterate = exports.iterateSync = exports.stream = exports.streamSync = exports.globIterate = exports.globIterateSync = exports.globSync = exports.globStream = exports.globStreamSync = exports.Ignore = exports.hasMagic = exports.Glob = exports.unescape = exports.escape = void 0;
const minimatch_1 = require("minimatch");
const glob_js_1 = require("./glob.js");
const has_magic_js_1 = require("./has-magic.js");
var minimatch_2 = require("minimatch");
Object.defineProperty(exports, "escape", { enumerable: true, get: function () { return minimatch_2.escape; } });
Object.defineProperty(exports, "unescape", { enumerable: true, get: function () { return minimatch_2.unescape; } });
var glob_js_2 = require("./glob.js");
Object.defineProperty(exports, "Glob", { enumerable: true, get: function () { return glob_js_2.Glob; } });
var has_magic_js_2 = require("./has-magic.js");
Object.defineProperty(exports, "hasMagic", { enumerable: true, get: function () { return has_magic_js_2.hasMagic; } });
var ignore_js_1 = require("./ignore.js");
Object.defineProperty(exports, "Ignore", { enumerable: true, get: function () { return ignore_js_1.Ignore; } });
function globStreamSync(pattern, options = {}) {
return new glob_js_1.Glob(pattern, options).streamSync();
}
exports.globStreamSync = globStreamSync;
function globStream(pattern, options = {}) {
return new glob_js_1.Glob(pattern, options).stream();
}
exports.globStream = globStream;
function globSync(pattern, options = {}) {
return new glob_js_1.Glob(pattern, options).walkSync();
}
exports.globSync = globSync;
async function glob_(pattern, options = {}) {
return new glob_js_1.Glob(pattern, options).walk();
}
function globIterateSync(pattern, options = {}) {
return new glob_js_1.Glob(pattern, options).iterateSync();
}
exports.globIterateSync = globIterateSync;
function globIterate(pattern, options = {}) {
return new glob_js_1.Glob(pattern, options).iterate();
}
exports.globIterate = globIterate;
// aliases: glob.sync.stream() glob.stream.sync() glob.sync() etc
exports.streamSync = globStreamSync;
exports.stream = Object.assign(globStream, { sync: globStreamSync });
exports.iterateSync = globIterateSync;
exports.iterate = Object.assign(globIterate, {
sync: globIterateSync,
});
exports.sync = Object.assign(globSync, {
stream: globStreamSync,
iterate: globIterateSync,
});
exports.glob = Object.assign(glob_, {
glob: glob_,
globSync,
sync: exports.sync,
globStream,
stream: exports.stream,
globStreamSync,
streamSync: exports.streamSync,
globIterate,
iterate: exports.iterate,
globIterateSync,
iterateSync: exports.iterateSync,
Glob: glob_js_1.Glob,
hasMagic: has_magic_js_1.hasMagic,
escape: minimatch_1.escape,
unescape: minimatch_1.unescape,
});
exports.glob.glob = exports.glob;
//# sourceMappingURL=index.js.map
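
// Editor's note (illustrative, not part of the vendored file): the
// Object.assign wiring above is what makes the dotted aliases work, so
// glob.sync, glob.stream.sync, glob.sync.stream, glob.iterate.sync, etc.
// all point at the same six underlying functions. For example:
const { glob } = require('glob')
const a = glob.sync('*.md')             // same function as globSync
const stream = glob.stream.sync('*.md') // same function as globStreamSync
stream.on('data', f => console.log('streamed:', f))
for (const f of glob.iterate.sync('*.md')) {
  console.log('iterated:', f, a.includes(f)) // true: same match set
}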

File diff suppressed because one or more lines are too long


@@ -0,0 +1,3 @@
{
"type": "commonjs"
}


@@ -0,0 +1,77 @@
/// <reference types="node" />
import { GLOBSTAR } from 'minimatch';
export type MMPattern = string | RegExp | typeof GLOBSTAR;
export type PatternList = [p: MMPattern, ...rest: MMPattern[]];
export type UNCPatternList = [
p0: '',
p1: '',
p2: string,
p3: string,
...rest: MMPattern[]
];
export type DrivePatternList = [p0: string, ...rest: MMPattern[]];
export type AbsolutePatternList = [p0: '', ...rest: MMPattern[]];
export type GlobList = [p: string, ...rest: string[]];
/**
* An immutable-ish view on an array of glob parts and their parsed
* results
*/
export declare class Pattern {
#private;
readonly length: number;
constructor(patternList: MMPattern[], globList: string[], index: number, platform: NodeJS.Platform);
/**
* The first entry in the parsed list of patterns
*/
pattern(): MMPattern;
/**
     * true if pattern() returns a string
*/
isString(): boolean;
/**
     * true if pattern() returns GLOBSTAR
*/
isGlobstar(): boolean;
/**
* true if pattern() returns a regexp
*/
isRegExp(): boolean;
/**
* The /-joined set of glob parts that make up this pattern
*/
globString(): string;
/**
* true if there are more pattern parts after this one
*/
hasMore(): boolean;
/**
* The rest of the pattern after this part, or null if this is the end
*/
rest(): Pattern | null;
/**
* true if the pattern represents a //unc/path/ on windows
*/
isUNC(): boolean;
/**
* True if the pattern starts with a drive letter on Windows
*/
isDrive(): boolean;
/**
* True if the pattern is rooted on an absolute path
*/
isAbsolute(): boolean;
/**
* consume the root of the pattern, and return it
*/
root(): string;
/**
* Check to see if the current globstar pattern is allowed to follow
* a symbolic link.
*/
checkFollowGlobstar(): boolean;
/**
* Mark that the current globstar pattern is following a symbolic link
*/
markFollowGlobstar(): boolean;
}
//# sourceMappingURL=pattern.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"pattern.d.ts","sourceRoot":"","sources":["../../src/pattern.ts"],"names":[],"mappings":";AAEA,OAAO,EAAE,QAAQ,EAAE,MAAM,WAAW,CAAA;AACpC,MAAM,MAAM,SAAS,GAAG,MAAM,GAAG,MAAM,GAAG,OAAO,QAAQ,CAAA;AAGzD,MAAM,MAAM,WAAW,GAAG,CAAC,CAAC,EAAE,SAAS,EAAE,GAAG,IAAI,EAAE,SAAS,EAAE,CAAC,CAAA;AAC9D,MAAM,MAAM,cAAc,GAAG;IAC3B,EAAE,EAAE,EAAE;IACN,EAAE,EAAE,EAAE;IACN,EAAE,EAAE,MAAM;IACV,EAAE,EAAE,MAAM;IACV,GAAG,IAAI,EAAE,SAAS,EAAE;CACrB,CAAA;AACD,MAAM,MAAM,gBAAgB,GAAG,CAAC,EAAE,EAAE,MAAM,EAAE,GAAG,IAAI,EAAE,SAAS,EAAE,CAAC,CAAA;AACjE,MAAM,MAAM,mBAAmB,GAAG,CAAC,EAAE,EAAE,EAAE,EAAE,GAAG,IAAI,EAAE,SAAS,EAAE,CAAC,CAAA;AAChE,MAAM,MAAM,QAAQ,GAAG,CAAC,CAAC,EAAE,MAAM,EAAE,GAAG,IAAI,EAAE,MAAM,EAAE,CAAC,CAAA;AAMrD;;;GAGG;AACH,qBAAa,OAAO;;IAIlB,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAA;gBAUrB,WAAW,EAAE,SAAS,EAAE,EACxB,QAAQ,EAAE,MAAM,EAAE,EAClB,KAAK,EAAE,MAAM,EACb,QAAQ,EAAE,MAAM,CAAC,QAAQ;IA6D3B;;OAEG;IACH,OAAO,IAAI,SAAS;IAIpB;;OAEG;IACH,QAAQ,IAAI,OAAO;IAGnB;;OAEG;IACH,UAAU,IAAI,OAAO;IAGrB;;OAEG;IACH,QAAQ,IAAI,OAAO;IAInB;;OAEG;IACH,UAAU,IAAI,MAAM;IAUpB;;OAEG;IACH,OAAO,IAAI,OAAO;IAIlB;;OAEG;IACH,IAAI,IAAI,OAAO,GAAG,IAAI;IAetB;;OAEG;IACH,KAAK,IAAI,OAAO;IAoBhB;;OAEG;IACH,OAAO,IAAI,OAAO;IAelB;;OAEG;IACH,UAAU,IAAI,OAAO;IAUrB;;OAEG;IACH,IAAI,IAAI,MAAM;IASd;;;OAGG;IACH,mBAAmB,IAAI,OAAO;IAQ9B;;OAEG;IACH,kBAAkB,IAAI,OAAO;CAM9B"}


@@ -0,0 +1,219 @@
"use strict";
// this is just a very light wrapper around 2 arrays with an offset index
Object.defineProperty(exports, "__esModule", { value: true });
exports.Pattern = void 0;
const minimatch_1 = require("minimatch");
const isPatternList = (pl) => pl.length >= 1;
const isGlobList = (gl) => gl.length >= 1;
/**
* An immutable-ish view on an array of glob parts and their parsed
* results
*/
class Pattern {
#patternList;
#globList;
#index;
length;
#platform;
#rest;
#globString;
#isDrive;
#isUNC;
#isAbsolute;
#followGlobstar = true;
constructor(patternList, globList, index, platform) {
if (!isPatternList(patternList)) {
throw new TypeError('empty pattern list');
}
if (!isGlobList(globList)) {
throw new TypeError('empty glob list');
}
if (globList.length !== patternList.length) {
throw new TypeError('mismatched pattern list and glob list lengths');
}
this.length = patternList.length;
if (index < 0 || index >= this.length) {
throw new TypeError('index out of range');
}
this.#patternList = patternList;
this.#globList = globList;
this.#index = index;
this.#platform = platform;
// normalize root entries of absolute patterns on initial creation.
if (this.#index === 0) {
// c: => ['c:/']
// C:/ => ['C:/']
// C:/x => ['C:/', 'x']
// //host/share => ['//host/share/']
// //host/share/ => ['//host/share/']
// //host/share/x => ['//host/share/', 'x']
// /etc => ['/', 'etc']
// / => ['/']
if (this.isUNC()) {
// '' / '' / 'host' / 'share'
const [p0, p1, p2, p3, ...prest] = this.#patternList;
const [g0, g1, g2, g3, ...grest] = this.#globList;
if (prest[0] === '') {
// ends in /
prest.shift();
grest.shift();
}
const p = [p0, p1, p2, p3, ''].join('/');
const g = [g0, g1, g2, g3, ''].join('/');
this.#patternList = [p, ...prest];
this.#globList = [g, ...grest];
this.length = this.#patternList.length;
}
else if (this.isDrive() || this.isAbsolute()) {
const [p1, ...prest] = this.#patternList;
const [g1, ...grest] = this.#globList;
if (prest[0] === '') {
// ends in /
prest.shift();
grest.shift();
}
const p = p1 + '/';
const g = g1 + '/';
this.#patternList = [p, ...prest];
this.#globList = [g, ...grest];
this.length = this.#patternList.length;
}
}
}
/**
* The first entry in the parsed list of patterns
*/
pattern() {
return this.#patternList[this.#index];
}
/**
     * true if pattern() returns a string
*/
isString() {
return typeof this.#patternList[this.#index] === 'string';
}
/**
     * true if pattern() returns GLOBSTAR
*/
isGlobstar() {
return this.#patternList[this.#index] === minimatch_1.GLOBSTAR;
}
/**
* true if pattern() returns a regexp
*/
isRegExp() {
return this.#patternList[this.#index] instanceof RegExp;
}
/**
* The /-joined set of glob parts that make up this pattern
*/
globString() {
return (this.#globString =
this.#globString ||
(this.#index === 0 ?
this.isAbsolute() ?
this.#globList[0] + this.#globList.slice(1).join('/')
: this.#globList.join('/')
: this.#globList.slice(this.#index).join('/')));
}
/**
* true if there are more pattern parts after this one
*/
hasMore() {
return this.length > this.#index + 1;
}
/**
* The rest of the pattern after this part, or null if this is the end
*/
rest() {
if (this.#rest !== undefined)
return this.#rest;
if (!this.hasMore())
return (this.#rest = null);
this.#rest = new Pattern(this.#patternList, this.#globList, this.#index + 1, this.#platform);
this.#rest.#isAbsolute = this.#isAbsolute;
this.#rest.#isUNC = this.#isUNC;
this.#rest.#isDrive = this.#isDrive;
return this.#rest;
}
/**
* true if the pattern represents a //unc/path/ on windows
*/
isUNC() {
const pl = this.#patternList;
return this.#isUNC !== undefined ?
this.#isUNC
: (this.#isUNC =
this.#platform === 'win32' &&
this.#index === 0 &&
pl[0] === '' &&
pl[1] === '' &&
typeof pl[2] === 'string' &&
!!pl[2] &&
typeof pl[3] === 'string' &&
!!pl[3]);
}
// pattern like C:/...
// split = ['C:', ...]
// XXX: would be nice to handle patterns like `c:*` to test the cwd
// in c: for *, but I don't know of a way to even figure out what that
// cwd is without actually chdir'ing into it?
/**
* True if the pattern starts with a drive letter on Windows
*/
isDrive() {
const pl = this.#patternList;
return this.#isDrive !== undefined ?
this.#isDrive
: (this.#isDrive =
this.#platform === 'win32' &&
this.#index === 0 &&
this.length > 1 &&
typeof pl[0] === 'string' &&
/^[a-z]:$/i.test(pl[0]));
}
// pattern = '/' or '/...' or '/x/...'
// split = ['', ''] or ['', ...] or ['', 'x', ...]
// Drive and UNC both considered absolute on windows
/**
* True if the pattern is rooted on an absolute path
*/
isAbsolute() {
const pl = this.#patternList;
return this.#isAbsolute !== undefined ?
this.#isAbsolute
: (this.#isAbsolute =
(pl[0] === '' && pl.length > 1) ||
this.isDrive() ||
this.isUNC());
}
/**
* consume the root of the pattern, and return it
*/
root() {
const p = this.#patternList[0];
return (typeof p === 'string' && this.isAbsolute() && this.#index === 0) ?
p
: '';
}
/**
* Check to see if the current globstar pattern is allowed to follow
* a symbolic link.
*/
checkFollowGlobstar() {
return !(this.#index === 0 ||
!this.isGlobstar() ||
!this.#followGlobstar);
}
/**
* Mark that the current globstar pattern is following a symbolic link
*/
markFollowGlobstar() {
if (this.#index === 0 || !this.isGlobstar() || !this.#followGlobstar)
return false;
this.#followGlobstar = false;
return true;
}
}
exports.Pattern = Pattern;
//# sourceMappingURL=pattern.js.map
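
// Editor's sketch of how the walker feeds this class (illustrative, not
// part of the vendored file; the relative require assumes you run next to
// these dist files). Glob parses each pattern with Minimatch, then wraps
// each parsed row in a Pattern at index 0:
const { Minimatch } = require('minimatch')
const { Pattern } = require('./pattern.js')
const mm = new Minimatch('/etc/**/*.conf', { platform: 'linux' })
// mm.set[i] is a parsed MMPattern[] row; mm.globParts[i] the raw strings
const pat = new Pattern(mm.set[0], mm.globParts[0], 0, 'linux')
console.log(pat.isAbsolute()) // true: '' root followed by more parts
console.log(pat.root())       // '/', consumed by the walker via t.resolve()
console.log(pat.hasMore(), pat.rest()?.globString()) // true 'etc/**/*.conf'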

File diff suppressed because one or more lines are too long


@@ -0,0 +1,59 @@
import { MMRegExp } from 'minimatch';
import { Path } from 'path-scurry';
import { Pattern } from './pattern.js';
import { GlobWalkerOpts } from './walker.js';
/**
* A cache of which patterns have been processed for a given Path
*/
export declare class HasWalkedCache {
store: Map<string, Set<string>>;
constructor(store?: Map<string, Set<string>>);
copy(): HasWalkedCache;
hasWalked(target: Path, pattern: Pattern): boolean | undefined;
storeWalked(target: Path, pattern: Pattern): void;
}
/**
* A record of which paths have been matched in a given walk step,
 * and whether they are only considered a match if they are a directory,
* and whether their absolute or relative path should be returned.
*/
export declare class MatchRecord {
store: Map<Path, number>;
add(target: Path, absolute: boolean, ifDir: boolean): void;
entries(): [Path, boolean, boolean][];
}
/**
* A collection of patterns that must be processed in a subsequent step
* for a given path.
*/
export declare class SubWalks {
store: Map<Path, Pattern[]>;
add(target: Path, pattern: Pattern): void;
get(target: Path): Pattern[];
entries(): [Path, Pattern[]][];
keys(): Path[];
}
/**
* The class that processes patterns for a given path.
*
* Handles child entry filtering, and determining whether a path's
* directory contents must be read.
*/
export declare class Processor {
hasWalkedCache: HasWalkedCache;
matches: MatchRecord;
subwalks: SubWalks;
patterns?: Pattern[];
follow: boolean;
dot: boolean;
opts: GlobWalkerOpts;
constructor(opts: GlobWalkerOpts, hasWalkedCache?: HasWalkedCache);
processPatterns(target: Path, patterns: Pattern[]): this;
subwalkTargets(): Path[];
child(): Processor;
filterEntries(parent: Path, entries: Path[]): Processor;
testGlobstar(e: Path, pattern: Pattern, rest: Pattern | null, absolute: boolean): void;
testRegExp(e: Path, p: MMRegExp, rest: Pattern | null, absolute: boolean): void;
testString(e: Path, p: string, rest: Pattern | null, absolute: boolean): void;
}
//# sourceMappingURL=processor.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"processor.d.ts","sourceRoot":"","sources":["../../src/processor.ts"],"names":[],"mappings":"AAEA,OAAO,EAAY,QAAQ,EAAE,MAAM,WAAW,CAAA;AAC9C,OAAO,EAAE,IAAI,EAAE,MAAM,aAAa,CAAA;AAClC,OAAO,EAAa,OAAO,EAAE,MAAM,cAAc,CAAA;AACjD,OAAO,EAAE,cAAc,EAAE,MAAM,aAAa,CAAA;AAE5C;;GAEG;AACH,qBAAa,cAAc;IACzB,KAAK,EAAE,GAAG,CAAC,MAAM,EAAE,GAAG,CAAC,MAAM,CAAC,CAAC,CAAA;gBACnB,KAAK,GAAE,GAAG,CAAC,MAAM,EAAE,GAAG,CAAC,MAAM,CAAC,CAAa;IAGvD,IAAI;IAGJ,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,OAAO,EAAE,OAAO;IAGxC,WAAW,CAAC,MAAM,EAAE,IAAI,EAAE,OAAO,EAAE,OAAO;CAM3C;AAED;;;;GAIG;AACH,qBAAa,WAAW;IACtB,KAAK,EAAE,GAAG,CAAC,IAAI,EAAE,MAAM,CAAC,CAAY;IACpC,GAAG,CAAC,MAAM,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,KAAK,EAAE,OAAO;IAMnD,OAAO,IAAI,CAAC,IAAI,EAAE,OAAO,EAAE,OAAO,CAAC,EAAE;CAOtC;AAED;;;GAGG;AACH,qBAAa,QAAQ;IACnB,KAAK,EAAE,GAAG,CAAC,IAAI,EAAE,OAAO,EAAE,CAAC,CAAY;IACvC,GAAG,CAAC,MAAM,EAAE,IAAI,EAAE,OAAO,EAAE,OAAO;IAWlC,GAAG,CAAC,MAAM,EAAE,IAAI,GAAG,OAAO,EAAE;IAS5B,OAAO,IAAI,CAAC,IAAI,EAAE,OAAO,EAAE,CAAC,EAAE;IAG9B,IAAI,IAAI,IAAI,EAAE;CAGf;AAED;;;;;GAKG;AACH,qBAAa,SAAS;IACpB,cAAc,EAAE,cAAc,CAAA;IAC9B,OAAO,cAAoB;IAC3B,QAAQ,WAAiB;IACzB,QAAQ,CAAC,EAAE,OAAO,EAAE,CAAA;IACpB,MAAM,EAAE,OAAO,CAAA;IACf,GAAG,EAAE,OAAO,CAAA;IACZ,IAAI,EAAE,cAAc,CAAA;gBAER,IAAI,EAAE,cAAc,EAAE,cAAc,CAAC,EAAE,cAAc;IAQjE,eAAe,CAAC,MAAM,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE;IAmGjD,cAAc,IAAI,IAAI,EAAE;IAIxB,KAAK;IAQL,aAAa,CAAC,MAAM,EAAE,IAAI,EAAE,OAAO,EAAE,IAAI,EAAE,GAAG,SAAS;IAqBvD,YAAY,CACV,CAAC,EAAE,IAAI,EACP,OAAO,EAAE,OAAO,EAChB,IAAI,EAAE,OAAO,GAAG,IAAI,EACpB,QAAQ,EAAE,OAAO;IA8CnB,UAAU,CACR,CAAC,EAAE,IAAI,EACP,CAAC,EAAE,QAAQ,EACX,IAAI,EAAE,OAAO,GAAG,IAAI,EACpB,QAAQ,EAAE,OAAO;IAUnB,UAAU,CAAC,CAAC,EAAE,IAAI,EAAE,CAAC,EAAE,MAAM,EAAE,IAAI,EAAE,OAAO,GAAG,IAAI,EAAE,QAAQ,EAAE,OAAO;CASvE"}


@@ -0,0 +1,301 @@
"use strict";
// synchronous utility for filtering entries and calculating subwalks
Object.defineProperty(exports, "__esModule", { value: true });
exports.Processor = exports.SubWalks = exports.MatchRecord = exports.HasWalkedCache = void 0;
const minimatch_1 = require("minimatch");
/**
* A cache of which patterns have been processed for a given Path
*/
class HasWalkedCache {
store;
constructor(store = new Map()) {
this.store = store;
}
copy() {
return new HasWalkedCache(new Map(this.store));
}
hasWalked(target, pattern) {
return this.store.get(target.fullpath())?.has(pattern.globString());
}
storeWalked(target, pattern) {
const fullpath = target.fullpath();
const cached = this.store.get(fullpath);
if (cached)
cached.add(pattern.globString());
else
this.store.set(fullpath, new Set([pattern.globString()]));
}
}
exports.HasWalkedCache = HasWalkedCache;
/**
* A record of which paths have been matched in a given walk step,
 * and whether they are only considered a match if they are a directory,
* and whether their absolute or relative path should be returned.
*/
class MatchRecord {
store = new Map();
add(target, absolute, ifDir) {
const n = (absolute ? 2 : 0) | (ifDir ? 1 : 0);
const current = this.store.get(target);
this.store.set(target, current === undefined ? n : n & current);
}
// match, absolute, ifdir
entries() {
return [...this.store.entries()].map(([path, n]) => [
path,
!!(n & 2),
!!(n & 1),
]);
}
}
exports.MatchRecord = MatchRecord;
/**
* A collection of patterns that must be processed in a subsequent step
* for a given path.
*/
class SubWalks {
store = new Map();
add(target, pattern) {
if (!target.canReaddir()) {
return;
}
const subs = this.store.get(target);
if (subs) {
if (!subs.find(p => p.globString() === pattern.globString())) {
subs.push(pattern);
}
}
else
this.store.set(target, [pattern]);
}
get(target) {
const subs = this.store.get(target);
/* c8 ignore start */
if (!subs) {
throw new Error('attempting to walk unknown path');
}
/* c8 ignore stop */
return subs;
}
entries() {
return this.keys().map(k => [k, this.store.get(k)]);
}
keys() {
return [...this.store.keys()].filter(t => t.canReaddir());
}
}
exports.SubWalks = SubWalks;
/**
* The class that processes patterns for a given path.
*
* Handles child entry filtering, and determining whether a path's
* directory contents must be read.
*/
class Processor {
hasWalkedCache;
matches = new MatchRecord();
subwalks = new SubWalks();
patterns;
follow;
dot;
opts;
constructor(opts, hasWalkedCache) {
this.opts = opts;
this.follow = !!opts.follow;
this.dot = !!opts.dot;
this.hasWalkedCache =
hasWalkedCache ? hasWalkedCache.copy() : new HasWalkedCache();
}
processPatterns(target, patterns) {
this.patterns = patterns;
const processingSet = patterns.map(p => [target, p]);
// map of paths to the magic-starting subwalks they need to walk
// first item in patterns is the filter
for (let [t, pattern] of processingSet) {
this.hasWalkedCache.storeWalked(t, pattern);
const root = pattern.root();
const absolute = pattern.isAbsolute() && this.opts.absolute !== false;
// start absolute patterns at root
if (root) {
t = t.resolve(root === '/' && this.opts.root !== undefined ?
this.opts.root
: root);
const rest = pattern.rest();
if (!rest) {
this.matches.add(t, true, false);
continue;
}
else {
pattern = rest;
}
}
if (t.isENOENT())
continue;
let p;
let rest;
let changed = false;
while (typeof (p = pattern.pattern()) === 'string' &&
(rest = pattern.rest())) {
const c = t.resolve(p);
t = c;
pattern = rest;
changed = true;
}
p = pattern.pattern();
rest = pattern.rest();
if (changed) {
if (this.hasWalkedCache.hasWalked(t, pattern))
continue;
this.hasWalkedCache.storeWalked(t, pattern);
}
// now we have either a final string for a known entry,
// more strings for an unknown entry,
// or a pattern starting with magic, mounted on t.
if (typeof p === 'string') {
// must not be final entry, otherwise we would have
// concatenated it earlier.
const ifDir = p === '..' || p === '' || p === '.';
this.matches.add(t.resolve(p), absolute, ifDir);
continue;
}
else if (p === minimatch_1.GLOBSTAR) {
// if no rest, match and subwalk pattern
// if rest, process rest and subwalk pattern
// if it's a symlink, but we didn't get here by way of a
// globstar match (meaning it's the first time THIS globstar
// has traversed a symlink), then we follow it. Otherwise, stop.
if (!t.isSymbolicLink() ||
this.follow ||
pattern.checkFollowGlobstar()) {
this.subwalks.add(t, pattern);
}
const rp = rest?.pattern();
const rrest = rest?.rest();
if (!rest || ((rp === '' || rp === '.') && !rrest)) {
// only HAS to be a dir if it ends in **/ or **/.
// but ending in ** will match files as well.
this.matches.add(t, absolute, rp === '' || rp === '.');
}
else {
if (rp === '..') {
// this would mean you're matching **/.. at the fs root,
// and no thanks, I'm not gonna test that specific case.
/* c8 ignore start */
const tp = t.parent || t;
/* c8 ignore stop */
if (!rrest)
this.matches.add(tp, absolute, true);
else if (!this.hasWalkedCache.hasWalked(tp, rrest)) {
this.subwalks.add(tp, rrest);
}
}
}
}
else if (p instanceof RegExp) {
this.subwalks.add(t, pattern);
}
}
return this;
}
subwalkTargets() {
return this.subwalks.keys();
}
child() {
return new Processor(this.opts, this.hasWalkedCache);
}
    // filterEntries returns a new Processor containing the subwalks for
    // each child entry, a set of matches, and a hasWalkedCache that's a
    // copy of this one, which the walker then recurses into.
filterEntries(parent, entries) {
const patterns = this.subwalks.get(parent);
// put matches and entry walks into the results processor
const results = this.child();
for (const e of entries) {
for (const pattern of patterns) {
const absolute = pattern.isAbsolute();
const p = pattern.pattern();
const rest = pattern.rest();
if (p === minimatch_1.GLOBSTAR) {
results.testGlobstar(e, pattern, rest, absolute);
}
else if (p instanceof RegExp) {
results.testRegExp(e, p, rest, absolute);
}
else {
results.testString(e, p, rest, absolute);
}
}
}
return results;
}
testGlobstar(e, pattern, rest, absolute) {
if (this.dot || !e.name.startsWith('.')) {
if (!pattern.hasMore()) {
this.matches.add(e, absolute, false);
}
if (e.canReaddir()) {
// if we're in follow mode or it's not a symlink, just keep
// testing the same pattern. If there's more after the globstar,
// then this symlink consumes the globstar. If not, then we can
// follow at most ONE symlink along the way, so we mark it, which
// also checks to ensure that it wasn't already marked.
if (this.follow || !e.isSymbolicLink()) {
this.subwalks.add(e, pattern);
}
else if (e.isSymbolicLink()) {
if (rest && pattern.checkFollowGlobstar()) {
this.subwalks.add(e, rest);
}
else if (pattern.markFollowGlobstar()) {
this.subwalks.add(e, pattern);
}
}
}
}
// if the NEXT thing matches this entry, then also add
// the rest.
if (rest) {
const rp = rest.pattern();
if (typeof rp === 'string' &&
// dots and empty were handled already
rp !== '..' &&
rp !== '' &&
rp !== '.') {
this.testString(e, rp, rest.rest(), absolute);
}
else if (rp === '..') {
/* c8 ignore start */
const ep = e.parent || e;
/* c8 ignore stop */
this.subwalks.add(ep, rest);
}
else if (rp instanceof RegExp) {
this.testRegExp(e, rp, rest.rest(), absolute);
}
}
}
testRegExp(e, p, rest, absolute) {
if (!p.test(e.name))
return;
if (!rest) {
this.matches.add(e, absolute, false);
}
else {
this.subwalks.add(e, rest);
}
}
testString(e, p, rest, absolute) {
// should never happen?
if (!e.isNamed(p))
return;
if (!rest) {
this.matches.add(e, absolute, false);
}
else {
this.subwalks.add(e, rest);
}
}
}
exports.Processor = Processor;
//# sourceMappingURL=processor.js.map
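
// Editor's sketch of one synchronous processing step, outside the walker
// (illustrative, not part of the vendored file; relative requires assume
// you run next to these dist files):
const { Minimatch } = require('minimatch')
const { PathScurry } = require('path-scurry')
const { Pattern } = require('./pattern.js')
const { Processor } = require('./processor.js')
const mm = new Minimatch('src/**/*.js')
const pats = mm.set.map((s, i) =>
  new Pattern(s, mm.globParts[i], 0, process.platform))
const scurry = new PathScurry(process.cwd())
const proc = new Processor({}).processPatterns(scurry.cwd, pats)
// matches found at this level, and the dirs needing readdir + filterEntries
console.log(proc.matches.entries().map(([p, abs, ifDir]) =>
  [p.relative(), abs, ifDir]))
console.log(proc.subwalkTargets().map(p => p.relative())) // [ 'src' ]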

File diff suppressed because one or more lines are too long


@@ -0,0 +1,98 @@
/// <reference types="node" />
/**
* Single-use utility classes to provide functionality to the {@link Glob}
* methods.
*
* @module
*/
import { Minipass } from 'minipass';
import { Path } from 'path-scurry';
import { IgnoreLike } from './ignore.js';
import { Pattern } from './pattern.js';
import { Processor } from './processor.js';
export interface GlobWalkerOpts {
absolute?: boolean;
allowWindowsEscape?: boolean;
cwd?: string | URL;
dot?: boolean;
dotRelative?: boolean;
follow?: boolean;
ignore?: string | string[] | IgnoreLike;
mark?: boolean;
matchBase?: boolean;
maxDepth?: number;
nobrace?: boolean;
nocase?: boolean;
nodir?: boolean;
noext?: boolean;
noglobstar?: boolean;
platform?: NodeJS.Platform;
posix?: boolean;
realpath?: boolean;
root?: string;
stat?: boolean;
signal?: AbortSignal;
windowsPathsNoEscape?: boolean;
withFileTypes?: boolean;
includeChildMatches?: boolean;
}
export type GWOFileTypesTrue = GlobWalkerOpts & {
withFileTypes: true;
};
export type GWOFileTypesFalse = GlobWalkerOpts & {
withFileTypes: false;
};
export type GWOFileTypesUnset = GlobWalkerOpts & {
withFileTypes?: undefined;
};
export type Result<O extends GlobWalkerOpts> = O extends GWOFileTypesTrue ? Path : O extends GWOFileTypesFalse ? string : O extends GWOFileTypesUnset ? string : Path | string;
export type Matches<O extends GlobWalkerOpts> = O extends GWOFileTypesTrue ? Set<Path> : O extends GWOFileTypesFalse ? Set<string> : O extends GWOFileTypesUnset ? Set<string> : Set<Path | string>;
export type MatchStream<O extends GlobWalkerOpts> = Minipass<Result<O>, Result<O>>;
/**
* basic walking utilities that all the glob walker types use
*/
export declare abstract class GlobUtil<O extends GlobWalkerOpts = GlobWalkerOpts> {
#private;
path: Path;
patterns: Pattern[];
opts: O;
seen: Set<Path>;
paused: boolean;
aborted: boolean;
signal?: AbortSignal;
maxDepth: number;
includeChildMatches: boolean;
constructor(patterns: Pattern[], path: Path, opts: O);
pause(): void;
resume(): void;
onResume(fn: () => any): void;
matchCheck(e: Path, ifDir: boolean): Promise<Path | undefined>;
matchCheckTest(e: Path | undefined, ifDir: boolean): Path | undefined;
matchCheckSync(e: Path, ifDir: boolean): Path | undefined;
abstract matchEmit(p: Result<O>): void;
abstract matchEmit(p: string | Path): void;
matchFinish(e: Path, absolute: boolean): void;
match(e: Path, absolute: boolean, ifDir: boolean): Promise<void>;
matchSync(e: Path, absolute: boolean, ifDir: boolean): void;
walkCB(target: Path, patterns: Pattern[], cb: () => any): void;
walkCB2(target: Path, patterns: Pattern[], processor: Processor, cb: () => any): any;
walkCB3(target: Path, entries: Path[], processor: Processor, cb: () => any): void;
walkCBSync(target: Path, patterns: Pattern[], cb: () => any): void;
walkCB2Sync(target: Path, patterns: Pattern[], processor: Processor, cb: () => any): any;
walkCB3Sync(target: Path, entries: Path[], processor: Processor, cb: () => any): void;
}
export declare class GlobWalker<O extends GlobWalkerOpts = GlobWalkerOpts> extends GlobUtil<O> {
matches: Set<Result<O>>;
constructor(patterns: Pattern[], path: Path, opts: O);
matchEmit(e: Result<O>): void;
walk(): Promise<Set<Result<O>>>;
walkSync(): Set<Result<O>>;
}
export declare class GlobStream<O extends GlobWalkerOpts = GlobWalkerOpts> extends GlobUtil<O> {
results: Minipass<Result<O>, Result<O>>;
constructor(patterns: Pattern[], path: Path, opts: O);
matchEmit(e: Result<O>): void;
stream(): MatchStream<O>;
streamSync(): MatchStream<O>;
}
//# sourceMappingURL=walker.d.ts.map
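
// Editor's sketch of the IgnoreLike shape accepted by GlobWalkerOpts.ignore
// above (illustrative, not part of the vendored file). An object form only
// needs `ignored` and/or `childrenIgnored`; an `add()` method is required
// in addition when `includeChildMatches: false` is used, since the walker
// then adds '<match>/**' patterns to suppress child matches:
const { glob } = require('glob')
const ignore = {
  // drop sourcemap files from the results
  ignored: p => p.name.endsWith('.map'),
  // do not even readdir into node_modules
  childrenIgnored: p => p.name === 'node_modules',
}
glob('**/*.js', { ignore }).then(files => console.log(files.length))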


@@ -0,0 +1 @@
{"version":3,"file":"walker.d.ts","sourceRoot":"","sources":["../../src/walker.ts"],"names":[],"mappings":";AAAA;;;;;GAKG;AACH,OAAO,EAAE,QAAQ,EAAE,MAAM,UAAU,CAAA;AACnC,OAAO,EAAE,IAAI,EAAE,MAAM,aAAa,CAAA;AAClC,OAAO,EAAU,UAAU,EAAE,MAAM,aAAa,CAAA;AAOhD,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAA;AACtC,OAAO,EAAE,SAAS,EAAE,MAAM,gBAAgB,CAAA;AAE1C,MAAM,WAAW,cAAc;IAC7B,QAAQ,CAAC,EAAE,OAAO,CAAA;IAClB,kBAAkB,CAAC,EAAE,OAAO,CAAA;IAC5B,GAAG,CAAC,EAAE,MAAM,GAAG,GAAG,CAAA;IAClB,GAAG,CAAC,EAAE,OAAO,CAAA;IACb,WAAW,CAAC,EAAE,OAAO,CAAA;IACrB,MAAM,CAAC,EAAE,OAAO,CAAA;IAChB,MAAM,CAAC,EAAE,MAAM,GAAG,MAAM,EAAE,GAAG,UAAU,CAAA;IACvC,IAAI,CAAC,EAAE,OAAO,CAAA;IACd,SAAS,CAAC,EAAE,OAAO,CAAA;IAGnB,QAAQ,CAAC,EAAE,MAAM,CAAA;IACjB,OAAO,CAAC,EAAE,OAAO,CAAA;IACjB,MAAM,CAAC,EAAE,OAAO,CAAA;IAChB,KAAK,CAAC,EAAE,OAAO,CAAA;IACf,KAAK,CAAC,EAAE,OAAO,CAAA;IACf,UAAU,CAAC,EAAE,OAAO,CAAA;IACpB,QAAQ,CAAC,EAAE,MAAM,CAAC,QAAQ,CAAA;IAC1B,KAAK,CAAC,EAAE,OAAO,CAAA;IACf,QAAQ,CAAC,EAAE,OAAO,CAAA;IAClB,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,IAAI,CAAC,EAAE,OAAO,CAAA;IACd,MAAM,CAAC,EAAE,WAAW,CAAA;IACpB,oBAAoB,CAAC,EAAE,OAAO,CAAA;IAC9B,aAAa,CAAC,EAAE,OAAO,CAAA;IACvB,mBAAmB,CAAC,EAAE,OAAO,CAAA;CAC9B;AAED,MAAM,MAAM,gBAAgB,GAAG,cAAc,GAAG;IAC9C,aAAa,EAAE,IAAI,CAAA;CACpB,CAAA;AACD,MAAM,MAAM,iBAAiB,GAAG,cAAc,GAAG;IAC/C,aAAa,EAAE,KAAK,CAAA;CACrB,CAAA;AACD,MAAM,MAAM,iBAAiB,GAAG,cAAc,GAAG;IAC/C,aAAa,CAAC,EAAE,SAAS,CAAA;CAC1B,CAAA;AAED,MAAM,MAAM,MAAM,CAAC,CAAC,SAAS,cAAc,IACzC,CAAC,SAAS,gBAAgB,GAAG,IAAI,GAC/B,CAAC,SAAS,iBAAiB,GAAG,MAAM,GACpC,CAAC,SAAS,iBAAiB,GAAG,MAAM,GACpC,IAAI,GAAG,MAAM,CAAA;AAEjB,MAAM,MAAM,OAAO,CAAC,CAAC,SAAS,cAAc,IAC1C,CAAC,SAAS,gBAAgB,GAAG,GAAG,CAAC,IAAI,CAAC,GACpC,CAAC,SAAS,iBAAiB,GAAG,GAAG,CAAC,MAAM,CAAC,GACzC,CAAC,SAAS,iBAAiB,GAAG,GAAG,CAAC,MAAM,CAAC,GACzC,GAAG,CAAC,IAAI,GAAG,MAAM,CAAC,CAAA;AAEtB,MAAM,MAAM,WAAW,CAAC,CAAC,SAAS,cAAc,IAAI,QAAQ,CAC1D,MAAM,CAAC,CAAC,CAAC,EACT,MAAM,CAAC,CAAC,CAAC,CACV,CAAA;AAUD;;GAEG;AACH,8BAAsB,QAAQ,CAAC,CAAC,SAAS,cAAc,GAAG,cAAc;;IACtE,IAAI,EAAE,IAAI,CAAA;IACV,QAAQ,EAAE,OAAO,EAAE,CAAA;IACnB,IAAI,EAAE,CAAC,CAAA;IACP,IAAI,EAAE,GAAG,CAAC,IAAI,CAAC,CAAkB;IACjC,MAAM,EAAE,OAAO,CAAQ;IACvB,OAAO,EAAE,OAAO,CAAQ;IAIxB,MAAM,CAAC,EAAE,WAAW,CAAA;IACpB,QAAQ,EAAE,MAAM,CAAA;IAChB,mBAAmB,EAAE,OAAO,CAAA;gBAEhB,QAAQ,EAAE,OAAO,EAAE,EAAE,IAAI,EAAE,IAAI,EAAE,IAAI,EAAE,CAAC;IAsCpD,KAAK;IAGL,MAAM;IAUN,QAAQ,CAAC,EAAE,EAAE,MAAM,GAAG;IAahB,UAAU,CAAC,CAAC,EAAE,IAAI,EAAE,KAAK,EAAE,OAAO,GAAG,OAAO,CAAC,IAAI,GAAG,SAAS,CAAC;IAqBpE,cAAc,CAAC,CAAC,EAAE,IAAI,GAAG,SAAS,EAAE,KAAK,EAAE,OAAO,GAAG,IAAI,GAAG,SAAS;IAgBrE,cAAc,CAAC,CAAC,EAAE,IAAI,EAAE,KAAK,EAAE,OAAO,GAAG,IAAI,GAAG,SAAS;IAmBzD,QAAQ,CAAC,SAAS,CAAC,CAAC,EAAE,MAAM,CAAC,CAAC,CAAC,GAAG,IAAI;IACtC,QAAQ,CAAC,SAAS,CAAC,CAAC,EAAE,MAAM,GAAG,IAAI,GAAG,IAAI;IAE1C,WAAW,CAAC,CAAC,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO;IA2BhC,KAAK,CAAC,CAAC,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,KAAK,EAAE,OAAO,GAAG,OAAO,CAAC,IAAI,CAAC;IAKtE,SAAS,CAAC,CAAC,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,KAAK,EAAE,OAAO,GAAG,IAAI;IAK3D,MAAM,CAAC,MAAM,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,EAAE,EAAE,EAAE,MAAM,GAAG;IAOvD,OAAO,CACL,MAAM,EAAE,IAAI,EACZ,QAAQ,EAAE,OAAO,EAAE,EACnB,SAAS,EAAE,SAAS,EACpB,EAAE,EAAE,MAAM,GAAG;IA2Cf,OAAO,CACL,MAAM,EAAE,IAAI,EACZ,OAAO,EAAE,IAAI,EAAE,EACf,SAAS,EAAE,SAAS,EACpB,EAAE,EAAE,MAAM,GAAG;IAsBf,UAAU,CAAC,MAAM,EAAE,IAAI,EAAE,QAAQ,EAAE,OAAO,EAAE,EAAE,EAAE,EAAE,MAAM,GAAG;IAO3D,WAAW,CACT,MAAM,EAAE,IAAI,EACZ,QAAQ,EAAE,OAAO,EAAE,EACnB,SAAS,EAAE,SAAS,EACpB,EAAE,EAAE,MAAM,GAAG;IAqCf,WAAW,CACT,MAAM,EAAE,IAAI,EACZ,OAAO,EAAE,IAAI,EAAE,EACf,SAAS,EAAE,SAAS,EACpB,EAAE,EAAE,MAAM,GAAG;CAoBhB;AAED,qBAAa,UAAU,CACrB,CAAC,SAAS,cAAc,GAAG,cAAc,CACzC,SAAQ,QAAQ,
CAAC,CAAC,CAAC;IACnB,OAAO,iBAAuB;gBAElB,QAAQ,EAAE,OAAO,EAAE,EAAE,IAAI,EAAE,IAAI,EAAE,IAAI,EAAE,CAAC;IAIpD,SAAS,CAAC,CAAC,EAAE,MAAM,CAAC,CAAC,CAAC,GAAG,IAAI;IAIvB,IAAI,IAAI,OAAO,CAAC,GAAG,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC,CAAC;IAiBrC,QAAQ,IAAI,GAAG,CAAC,MAAM,CAAC,CAAC,CAAC,CAAC;CAW3B;AAED,qBAAa,UAAU,CACrB,CAAC,SAAS,cAAc,GAAG,cAAc,CACzC,SAAQ,QAAQ,CAAC,CAAC,CAAC;IACnB,OAAO,EAAE,QAAQ,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,MAAM,CAAC,CAAC,CAAC,CAAC,CAAA;gBAE3B,QAAQ,EAAE,OAAO,EAAE,EAAE,IAAI,EAAE,IAAI,EAAE,IAAI,EAAE,CAAC;IAUpD,SAAS,CAAC,CAAC,EAAE,MAAM,CAAC,CAAC,CAAC,GAAG,IAAI;IAK7B,MAAM,IAAI,WAAW,CAAC,CAAC,CAAC;IAYxB,UAAU,IAAI,WAAW,CAAC,CAAC,CAAC;CAO7B"}


@@ -0,0 +1,387 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.GlobStream = exports.GlobWalker = exports.GlobUtil = void 0;
/**
* Single-use utility classes to provide functionality to the {@link Glob}
* methods.
*
* @module
*/
const minipass_1 = require("minipass");
const ignore_js_1 = require("./ignore.js");
const processor_js_1 = require("./processor.js");
const makeIgnore = (ignore, opts) => typeof ignore === 'string' ? new ignore_js_1.Ignore([ignore], opts)
: Array.isArray(ignore) ? new ignore_js_1.Ignore(ignore, opts)
: ignore;
/**
* basic walking utilities that all the glob walker types use
*/
class GlobUtil {
path;
patterns;
opts;
seen = new Set();
paused = false;
aborted = false;
#onResume = [];
#ignore;
#sep;
signal;
maxDepth;
includeChildMatches;
constructor(patterns, path, opts) {
this.patterns = patterns;
this.path = path;
this.opts = opts;
this.#sep = !opts.posix && opts.platform === 'win32' ? '\\' : '/';
this.includeChildMatches = opts.includeChildMatches !== false;
if (opts.ignore || !this.includeChildMatches) {
this.#ignore = makeIgnore(opts.ignore ?? [], opts);
if (!this.includeChildMatches &&
typeof this.#ignore.add !== 'function') {
const m = 'cannot ignore child matches, ignore lacks add() method.';
throw new Error(m);
}
}
// ignore, always set with maxDepth, but it's optional on the
// GlobOptions type
/* c8 ignore start */
this.maxDepth = opts.maxDepth || Infinity;
/* c8 ignore stop */
if (opts.signal) {
this.signal = opts.signal;
this.signal.addEventListener('abort', () => {
this.#onResume.length = 0;
});
}
}
#ignored(path) {
return this.seen.has(path) || !!this.#ignore?.ignored?.(path);
}
#childrenIgnored(path) {
return !!this.#ignore?.childrenIgnored?.(path);
}
// backpressure mechanism
pause() {
this.paused = true;
}
resume() {
/* c8 ignore start */
if (this.signal?.aborted)
return;
/* c8 ignore stop */
this.paused = false;
let fn = undefined;
while (!this.paused && (fn = this.#onResume.shift())) {
fn();
}
}
onResume(fn) {
if (this.signal?.aborted)
return;
/* c8 ignore start */
if (!this.paused) {
fn();
}
else {
/* c8 ignore stop */
this.#onResume.push(fn);
}
}
// do the requisite realpath/stat checking, and return the path
// to add or undefined to filter it out.
async matchCheck(e, ifDir) {
if (ifDir && this.opts.nodir)
return undefined;
let rpc;
if (this.opts.realpath) {
rpc = e.realpathCached() || (await e.realpath());
if (!rpc)
return undefined;
e = rpc;
}
const needStat = e.isUnknown() || this.opts.stat;
const s = needStat ? await e.lstat() : e;
if (this.opts.follow && this.opts.nodir && s?.isSymbolicLink()) {
const target = await s.realpath();
/* c8 ignore start */
if (target && (target.isUnknown() || this.opts.stat)) {
await target.lstat();
}
/* c8 ignore stop */
}
return this.matchCheckTest(s, ifDir);
}
matchCheckTest(e, ifDir) {
return (e &&
(this.maxDepth === Infinity || e.depth() <= this.maxDepth) &&
(!ifDir || e.canReaddir()) &&
(!this.opts.nodir || !e.isDirectory()) &&
(!this.opts.nodir ||
!this.opts.follow ||
!e.isSymbolicLink() ||
!e.realpathCached()?.isDirectory()) &&
!this.#ignored(e)) ?
e
: undefined;
}
matchCheckSync(e, ifDir) {
if (ifDir && this.opts.nodir)
return undefined;
let rpc;
if (this.opts.realpath) {
rpc = e.realpathCached() || e.realpathSync();
if (!rpc)
return undefined;
e = rpc;
}
const needStat = e.isUnknown() || this.opts.stat;
const s = needStat ? e.lstatSync() : e;
if (this.opts.follow && this.opts.nodir && s?.isSymbolicLink()) {
const target = s.realpathSync();
if (target && (target?.isUnknown() || this.opts.stat)) {
target.lstatSync();
}
}
return this.matchCheckTest(s, ifDir);
}
matchFinish(e, absolute) {
if (this.#ignored(e))
return;
// we know we have an ignore if this is false, but TS doesn't
if (!this.includeChildMatches && this.#ignore?.add) {
const ign = `${e.relativePosix()}/**`;
this.#ignore.add(ign);
}
const abs = this.opts.absolute === undefined ? absolute : this.opts.absolute;
this.seen.add(e);
const mark = this.opts.mark && e.isDirectory() ? this.#sep : '';
// ok, we have what we need!
if (this.opts.withFileTypes) {
this.matchEmit(e);
}
else if (abs) {
const abs = this.opts.posix ? e.fullpathPosix() : e.fullpath();
this.matchEmit(abs + mark);
}
else {
const rel = this.opts.posix ? e.relativePosix() : e.relative();
const pre = this.opts.dotRelative && !rel.startsWith('..' + this.#sep) ?
'.' + this.#sep
: '';
this.matchEmit(!rel ? '.' + mark : pre + rel + mark);
}
}
async match(e, absolute, ifDir) {
const p = await this.matchCheck(e, ifDir);
if (p)
this.matchFinish(p, absolute);
}
matchSync(e, absolute, ifDir) {
const p = this.matchCheckSync(e, ifDir);
if (p)
this.matchFinish(p, absolute);
}
walkCB(target, patterns, cb) {
/* c8 ignore start */
if (this.signal?.aborted)
cb();
/* c8 ignore stop */
this.walkCB2(target, patterns, new processor_js_1.Processor(this.opts), cb);
}
walkCB2(target, patterns, processor, cb) {
if (this.#childrenIgnored(target))
return cb();
if (this.signal?.aborted)
cb();
if (this.paused) {
this.onResume(() => this.walkCB2(target, patterns, processor, cb));
return;
}
processor.processPatterns(target, patterns);
// done processing. all of the above is sync, can be abstracted out.
// subwalks is a map of paths to the entry filters they need
// matches is a map of paths to [absolute, ifDir] tuples.
let tasks = 1;
const next = () => {
if (--tasks === 0)
cb();
};
for (const [m, absolute, ifDir] of processor.matches.entries()) {
if (this.#ignored(m))
continue;
tasks++;
this.match(m, absolute, ifDir).then(() => next());
}
for (const t of processor.subwalkTargets()) {
if (this.maxDepth !== Infinity && t.depth() >= this.maxDepth) {
continue;
}
tasks++;
const childrenCached = t.readdirCached();
if (t.calledReaddir())
this.walkCB3(t, childrenCached, processor, next);
else {
t.readdirCB((_, entries) => this.walkCB3(t, entries, processor, next), true);
}
}
next();
}
walkCB3(target, entries, processor, cb) {
processor = processor.filterEntries(target, entries);
let tasks = 1;
const next = () => {
if (--tasks === 0)
cb();
};
for (const [m, absolute, ifDir] of processor.matches.entries()) {
if (this.#ignored(m))
continue;
tasks++;
this.match(m, absolute, ifDir).then(() => next());
}
for (const [target, patterns] of processor.subwalks.entries()) {
tasks++;
this.walkCB2(target, patterns, processor.child(), next);
}
next();
}
walkCBSync(target, patterns, cb) {
/* c8 ignore start */
if (this.signal?.aborted)
cb();
/* c8 ignore stop */
this.walkCB2Sync(target, patterns, new processor_js_1.Processor(this.opts), cb);
}
walkCB2Sync(target, patterns, processor, cb) {
if (this.#childrenIgnored(target))
return cb();
if (this.signal?.aborted)
cb();
if (this.paused) {
this.onResume(() => this.walkCB2Sync(target, patterns, processor, cb));
return;
}
processor.processPatterns(target, patterns);
// done processing. all of the above is sync, can be abstracted out.
// subwalks is a map of paths to the entry filters they need
// matches is a map of paths to [absolute, ifDir] tuples.
let tasks = 1;
const next = () => {
if (--tasks === 0)
cb();
};
for (const [m, absolute, ifDir] of processor.matches.entries()) {
if (this.#ignored(m))
continue;
this.matchSync(m, absolute, ifDir);
}
for (const t of processor.subwalkTargets()) {
if (this.maxDepth !== Infinity && t.depth() >= this.maxDepth) {
continue;
}
tasks++;
const children = t.readdirSync();
this.walkCB3Sync(t, children, processor, next);
}
next();
}
walkCB3Sync(target, entries, processor, cb) {
processor = processor.filterEntries(target, entries);
let tasks = 1;
const next = () => {
if (--tasks === 0)
cb();
};
for (const [m, absolute, ifDir] of processor.matches.entries()) {
if (this.#ignored(m))
continue;
this.matchSync(m, absolute, ifDir);
}
for (const [target, patterns] of processor.subwalks.entries()) {
tasks++;
this.walkCB2Sync(target, patterns, processor.child(), next);
}
next();
}
}
exports.GlobUtil = GlobUtil;
class GlobWalker extends GlobUtil {
matches = new Set();
constructor(patterns, path, opts) {
super(patterns, path, opts);
}
matchEmit(e) {
this.matches.add(e);
}
async walk() {
if (this.signal?.aborted)
throw this.signal.reason;
if (this.path.isUnknown()) {
await this.path.lstat();
}
await new Promise((res, rej) => {
this.walkCB(this.path, this.patterns, () => {
if (this.signal?.aborted) {
rej(this.signal.reason);
}
else {
res(this.matches);
}
});
});
return this.matches;
}
walkSync() {
if (this.signal?.aborted)
throw this.signal.reason;
if (this.path.isUnknown()) {
this.path.lstatSync();
}
// nothing for the callback to do, because this never pauses
this.walkCBSync(this.path, this.patterns, () => {
if (this.signal?.aborted)
throw this.signal.reason;
});
return this.matches;
}
}
exports.GlobWalker = GlobWalker;
class GlobStream extends GlobUtil {
results;
constructor(patterns, path, opts) {
super(patterns, path, opts);
this.results = new minipass_1.Minipass({
signal: this.signal,
objectMode: true,
});
this.results.on('drain', () => this.resume());
this.results.on('resume', () => this.resume());
}
matchEmit(e) {
this.results.write(e);
if (!this.results.flowing)
this.pause();
}
stream() {
const target = this.path;
if (target.isUnknown()) {
target.lstat().then(() => {
this.walkCB(target, this.patterns, () => this.results.end());
});
}
else {
this.walkCB(target, this.patterns, () => this.results.end());
}
return this.results;
}
streamSync() {
if (this.path.isUnknown()) {
this.path.lstatSync();
}
this.walkCBSync(this.path, this.patterns, () => this.results.end());
return this.results;
}
}
exports.GlobStream = GlobStream;
//# sourceMappingURL=walker.js.map
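
// Editor's sketch of how the Glob class drives these walkers (illustrative,
// not part of the vendored file; relative requires assume you run next to
// these dist files - normally you would just call glob()):
const { Minimatch } = require('minimatch')
const { PathScurry } = require('path-scurry')
const { Pattern } = require('./pattern.js')
const { GlobWalker, GlobStream } = require('./walker.js')
const mm = new Minimatch('**/*.md')
const pats = mm.set.map((s, i) =>
  new Pattern(s, mm.globParts[i], 0, process.platform))
const scurry = new PathScurry(process.cwd())
// walk() resolves a Set of matches; stream() emits them with backpressure
new GlobWalker(pats, scurry.cwd, { platform: process.platform })
  .walk()
  .then(matches => console.log(matches.size, 'matches'))
new GlobStream(pats, scurry.cwd, { platform: process.platform })
  .stream()
  .on('data', f => console.log(f))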

File diff suppressed because one or more lines are too long


@@ -0,0 +1,3 @@
#!/usr/bin/env node
export {};
//# sourceMappingURL=bin.d.mts.map


@@ -0,0 +1 @@
{"version":3,"file":"bin.d.mts","sourceRoot":"","sources":["../../src/bin.mts"],"names":[],"mappings":""}


@@ -0,0 +1,270 @@
#!/usr/bin/env node
import { foregroundChild } from 'foreground-child';
import { existsSync } from 'fs';
import { jack } from 'jackspeak';
import { loadPackageJson } from 'package-json-from-dist';
import { join } from 'path';
import { globStream } from './index.js';
const { version } = loadPackageJson(import.meta.url, '../package.json');
const j = jack({
usage: 'glob [options] [<pattern> [<pattern> ...]]',
})
.description(`
Glob v${version}
Expand the positional glob expression arguments into any matching file
system paths found.
`)
.opt({
cmd: {
short: 'c',
hint: 'command',
description: `Run the command provided, passing the glob expression
matches as arguments.`,
},
})
.opt({
default: {
short: 'p',
hint: 'pattern',
description: `If no positional arguments are provided, glob will use
this pattern`,
},
})
.flag({
all: {
short: 'A',
description: `By default, the glob cli command will not expand any
arguments that are an exact match to a file on disk.
This prevents double-expanding, in case the shell expands
an argument whose filename is a glob expression.
For example, if 'app/*.ts' would match 'app/[id].ts', then
on Windows powershell or cmd.exe, 'glob app/*.ts' will
expand to 'app/[id].ts', as expected. However, in posix
shells such as bash or zsh, the shell will first expand
'app/*.ts' to a list of filenames. Then glob will look
for a file matching 'app/[id].ts' (ie, 'app/i.ts' or
'app/d.ts'), which is unexpected.
Setting '--all' prevents this behavior, causing glob
to treat ALL patterns as glob expressions to be expanded,
even if they are an exact match to a file on disk.
When setting this option, be sure to enquote arguments
so that the shell will not expand them prior to passing
them to the glob command process.
`,
},
absolute: {
short: 'a',
description: 'Expand to absolute paths',
},
'dot-relative': {
short: 'd',
description: `Prepend './' on relative matches`,
},
mark: {
short: 'm',
description: `Append a / on any directories matched`,
},
posix: {
short: 'x',
description: `Always resolve to posix style paths, using '/' as the
directory separator, even on Windows. Drive letter
absolute matches on Windows will be expanded to their
                    full resolved UNC paths, eg instead of 'C:\\foo\\bar',
it will expand to '//?/C:/foo/bar'.
`,
},
follow: {
short: 'f',
description: `Follow symlinked directories when expanding '**'`,
},
realpath: {
short: 'R',
description: `Call 'fs.realpath' on all of the results. In the case
of an entry that cannot be resolved, the entry is
omitted. This incurs a slight performance penalty, of
course, because of the added system calls.`,
},
stat: {
short: 's',
description: `Call 'fs.lstat' on all entries, whether required or not
to determine if it's a valid match.`,
},
'match-base': {
short: 'b',
description: `Perform a basename-only match if the pattern does not
contain any slash characters. That is, '*.js' would be
treated as equivalent to '**/*.js', matching js files
in all directories.
`,
},
dot: {
description: `Allow patterns to match files/directories that start
with '.', even if the pattern does not start with '.'
`,
},
nobrace: {
description: 'Do not expand {...} patterns',
},
nocase: {
description: `Perform a case-insensitive match. This defaults to
'true' on macOS and Windows platforms, and false on
all others.
Note: 'nocase' should only be explicitly set when it is
known that the filesystem's case sensitivity differs
                    from the platform default. If set 'true' on
                    case-sensitive file systems, or 'false' on
                    case-insensitive file systems, then the walk may return
                    more or fewer results than expected.
`,
},
nodir: {
description: `Do not match directories, only files.
Note: to *only* match directories, append a '/' at the
end of the pattern.
`,
},
noext: {
description: `Do not expand extglob patterns, such as '+(a|b)'`,
},
noglobstar: {
description: `Do not expand '**' against multiple path portions.
Ie, treat it as a normal '*' instead.`,
},
'windows-path-no-escape': {
description: `Use '\\' as a path separator *only*, and *never* as an
escape character. If set, all '\\' characters are
replaced with '/' in the pattern.`,
},
})
.num({
'max-depth': {
short: 'D',
description: `Maximum depth to traverse from the current
working directory`,
},
})
.opt({
cwd: {
short: 'C',
description: 'Current working directory to execute/match in',
default: process.cwd(),
},
root: {
short: 'r',
description: `A string path resolved against the 'cwd', which is
used as the starting point for absolute patterns that
start with '/' (but not drive letters or UNC paths
on Windows).
Note that this *doesn't* necessarily limit the walk to
the 'root' directory, and doesn't affect the cwd
starting point for non-absolute patterns. A pattern
containing '..' will still be able to traverse out of
the root directory, if it is not an actual root directory
on the filesystem, and any non-absolute patterns will
still be matched in the 'cwd'.
To start absolute and non-absolute patterns in the same
path, you can use '--root=' to set it to the empty
string. However, be aware that on Windows systems, a
pattern like 'x:/*' or '//host/share/*' will *always*
start in the 'x:/' or '//host/share/' directory,
regardless of the --root setting.
`,
},
platform: {
description: `Defaults to the value of 'process.platform' if
available, or 'linux' if not. Setting --platform=win32
on non-Windows systems may cause strange behavior!`,
validOptions: [
'aix',
'android',
'darwin',
'freebsd',
'haiku',
'linux',
'openbsd',
'sunos',
'win32',
'cygwin',
'netbsd',
],
},
})
.optList({
ignore: {
short: 'i',
description: `Glob patterns to ignore`,
},
})
.flag({
debug: {
short: 'v',
description: `Output a huge amount of noisy debug information about
patterns as they are parsed and used to match files.`,
},
})
.flag({
help: {
short: 'h',
description: 'Show this usage information',
},
});
try {
const { positionals, values } = j.parse();
if (values.help) {
console.log(j.usage());
process.exit(0);
}
if (positionals.length === 0 && !values.default)
throw 'No patterns provided';
if (positionals.length === 0 && values.default)
positionals.push(values.default);
const patterns = values.all ? positionals : positionals.filter(p => !existsSync(p));
const matches = values.all ?
[]
: positionals.filter(p => existsSync(p)).map(p => join(p));
const stream = globStream(patterns, {
absolute: values.absolute,
cwd: values.cwd,
dot: values.dot,
dotRelative: values['dot-relative'],
follow: values.follow,
ignore: values.ignore,
mark: values.mark,
matchBase: values['match-base'],
maxDepth: values['max-depth'],
nobrace: values.nobrace,
nocase: values.nocase,
nodir: values.nodir,
noext: values.noext,
noglobstar: values.noglobstar,
platform: values.platform,
realpath: values.realpath,
root: values.root,
stat: values.stat,
debug: values.debug,
posix: values.posix,
});
const cmd = values.cmd;
if (!cmd) {
matches.forEach(m => console.log(m));
stream.on('data', f => console.log(f));
}
else {
stream.on('data', f => matches.push(f));
stream.on('end', () => foregroundChild(cmd, matches, { shell: true }));
}
}
catch (e) {
console.error(j.usage());
console.error(e instanceof Error ? e.message : String(e));
process.exit(1);
}
//# sourceMappingURL=bin.mjs.map
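
// Editor's sketch of what the '-c' mode above boils down to (illustrative,
// not part of the vendored file; 'node --test' is just a stand-in command):
import { foregroundChild } from 'foreground-child'
import { globStream } from 'glob'
const found = []
const walk = globStream(['src/**/*.test.js'], { nodir: true })
walk.on('data', f => found.push(f))
// when the walk ends, hand the matches to the command as arguments, keeping
// the child in the foreground so its exit status becomes this process's
walk.on('end', () => foregroundChild('node --test', found, { shell: true }))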

File diff suppressed because one or more lines are too long


@@ -0,0 +1,389 @@
/// <reference types="node" resolution-mode="require"/>
import { Minimatch } from 'minimatch';
import { Minipass } from 'minipass';
import { FSOption, Path, PathScurry } from 'path-scurry';
import { IgnoreLike } from './ignore.js';
import { Pattern } from './pattern.js';
export type MatchSet = Minimatch['set'];
export type GlobParts = Exclude<Minimatch['globParts'], undefined>;
/**
* A `GlobOptions` object may be provided to any of the exported methods, and
* must be provided to the `Glob` constructor.
*
* All options are optional, boolean, and false by default, unless otherwise
* noted.
*
* All resolved options are added to the Glob object as properties.
*
* If you are running many `glob` operations, you can pass a Glob object as the
* `options` argument to a subsequent operation to share the previously loaded
* cache.
*/
export interface GlobOptions {
/**
* Set to `true` to always receive absolute paths for
* matched files. Set to `false` to always return relative paths.
*
* When this option is not set, absolute paths are returned for patterns
* that are absolute, and otherwise paths are returned that are relative
* to the `cwd` setting.
*
* This does _not_ make an extra system call to get
* the realpath, it only does string path resolution.
*
* Conflicts with {@link withFileTypes}
*/
absolute?: boolean;
/**
* Set to false to enable {@link windowsPathsNoEscape}
*
* @deprecated
*/
allowWindowsEscape?: boolean;
/**
* The current working directory in which to search. Defaults to
* `process.cwd()`.
*
     * May be either a string path or a `file://` URL object or string.
*/
cwd?: string | URL;
/**
* Include `.dot` files in normal matches and `globstar`
* matches. Note that an explicit dot in a portion of the pattern
* will always match dot files.
*/
dot?: boolean;
/**
* Prepend all relative path strings with `./` (or `.\` on Windows).
*
* Without this option, returned relative paths are "bare", so instead of
* returning `'./foo/bar'`, they are returned as `'foo/bar'`.
*
* Relative patterns starting with `'../'` are not prepended with `./`, even
* if this option is set.
*/
dotRelative?: boolean;
/**
* Follow symlinked directories when expanding `**`
* patterns. This can result in a lot of duplicate references in
* the presence of cyclic links, and make performance quite bad.
*
* By default, a `**` in a pattern will follow 1 symbolic link if
* it is not the first item in the pattern, or none if it is the
* first item in the pattern, following the same behavior as Bash.
*/
follow?: boolean;
/**
     * string or string[], or an object with `ignored` and `childrenIgnored`
     * methods.
*
* If a string or string[] is provided, then this is treated as a glob
* pattern or array of glob patterns to exclude from matches. To ignore all
* children within a directory, as well as the entry itself, append `'/**'`
* to the ignore pattern.
*
* **Note** `ignore` patterns are _always_ in `dot:true` mode, regardless of
* any other settings.
*
* If an object is provided that has `ignored(path)` and/or
* `childrenIgnored(path)` methods, then these methods will be called to
* determine whether any Path is a match or if its children should be
* traversed, respectively.
*/
ignore?: string | string[] | IgnoreLike;
/**
* Treat brace expansion like `{a,b}` as a "magic" pattern. Has no
* effect if {@link nobrace} is set.
*
* Only has effect on the {@link hasMagic} function.
*/
magicalBraces?: boolean;
/**
* Add a `/` character to directory matches. Note that this requires
* additional stat calls in some cases.
*/
mark?: boolean;
/**
* Perform a basename-only match if the pattern does not contain any slash
* characters. That is, `*.js` would be treated as equivalent to
* `**\/*.js`, matching all js files in all directories.
*/
matchBase?: boolean;
/**
* Limit the directory traversal to a given depth below the cwd.
* Note that this does NOT prevent traversal to sibling folders,
* root patterns, and so on. It only limits the maximum folder depth
* that the walk will descend, relative to the cwd.
*/
maxDepth?: number;
/**
* Do not expand `{a,b}` and `{1..3}` brace sets.
*/
nobrace?: boolean;
/**
* Perform a case-insensitive match. This defaults to `true` on macOS and
* Windows systems, and `false` on all others.
*
* **Note** `nocase` should only be explicitly set when it is
* known that the filesystem's case sensitivity differs from the
* platform default. If set `true` on case-sensitive file
* systems, or `false` on case-insensitive file systems, then the
     * walk may return more or fewer results than expected.
*/
nocase?: boolean;
/**
* Do not match directories, only files. (Note: to match
* _only_ directories, put a `/` at the end of the pattern.)
*/
nodir?: boolean;
/**
* Do not match "extglob" patterns such as `+(a|b)`.
*/
noext?: boolean;
/**
     * Do not match `**` against multiple filenames. (That is, treat it as a
     * normal `*` instead.)
*
* Conflicts with {@link matchBase}
*/
noglobstar?: boolean;
/**
* Defaults to value of `process.platform` if available, or `'linux'` if
* not. Setting `platform:'win32'` on non-Windows systems may cause strange
* behavior.
*/
platform?: NodeJS.Platform;
/**
* Set to true to call `fs.realpath` on all of the
* results. In the case of an entry that cannot be resolved, the
* entry is omitted. This incurs a slight performance penalty, of
* course, because of the added system calls.
*/
realpath?: boolean;
/**
     * A string path resolved against the `cwd` option, which
     * is used as the starting point for absolute patterns that start
     * with `/` (but not drive letters or UNC paths on Windows).
*
* Note that this _doesn't_ necessarily limit the walk to the
* `root` directory, and doesn't affect the cwd starting point for
* non-absolute patterns. A pattern containing `..` will still be
* able to traverse out of the root directory, if it is not an
* actual root directory on the filesystem, and any non-absolute
* patterns will be matched in the `cwd`. For example, the
* pattern `/../*` with `{root:'/some/path'}` will return all
* files in `/some`, not all files in `/some/path`. The pattern
* `*` with `{root:'/some/path'}` will return all the entries in
* the cwd, not the entries in `/some/path`.
*
* To start absolute and non-absolute patterns in the same
* path, you can use `{root:''}`. However, be aware that on
* Windows systems, a pattern like `x:/*` or `//host/share/*` will
* _always_ start in the `x:/` or `//host/share` directory,
* regardless of the `root` setting.
*/
root?: string;
/**
* A [PathScurry](http://npm.im/path-scurry) object used
* to traverse the file system. If the `nocase` option is set
* explicitly, then any provided `scurry` object must match this
* setting.
*/
scurry?: PathScurry;
/**
     * Call `lstat()` on all entries, whether or not it is required to
     * determine a valid match. When used with {@link withFileTypes}, this means
* that matches will include data such as modified time, permissions, and
* so on. Note that this will incur a performance cost due to the added
* system calls.
*/
stat?: boolean;
/**
* An AbortSignal which will cancel the Glob walk when
* triggered.
*/
signal?: AbortSignal;
/**
* Use `\\` as a path separator _only_, and
* _never_ as an escape character. If set, all `\\` characters are
* replaced with `/` in the pattern.
*
* Note that this makes it **impossible** to match against paths
* containing literal glob pattern characters, but allows matching
* with patterns constructed using `path.join()` and
* `path.resolve()` on Windows platforms, mimicking the (buggy!)
* behavior of Glob v7 and before on Windows. Please use with
* caution, and be mindful of [the caveat below about Windows
* paths](#windows). (For legacy reasons, this is also set if
* `allowWindowsEscape` is set to the exact value `false`.)
*/
windowsPathsNoEscape?: boolean;
/**
* Return [PathScurry](http://npm.im/path-scurry)
* `Path` objects instead of strings. These are similar to a
* NodeJS `Dirent` object, but with additional methods and
* properties.
*
* Conflicts with {@link absolute}
*/
withFileTypes?: boolean;
/**
* An fs implementation to override some or all of the defaults. See
* http://npm.im/path-scurry for details about what can be overridden.
*/
fs?: FSOption;
/**
* Just passed along to Minimatch. Note that this makes all pattern
* matching operations slower and *extremely* noisy.
*/
debug?: boolean;
/**
* Return `/` delimited paths, even on Windows.
*
* On posix systems, this has no effect. But, on Windows, it means that
* paths will be `/` delimited, and absolute paths will be their full
* resolved UNC forms, eg instead of `'C:\\foo\\bar'`, it would return
* `'//?/C:/foo/bar'`
*/
posix?: boolean;
/**
* Do not match any children of any matches. For example, the pattern
* `**\/foo` would match `a/foo`, but not `a/foo/b/foo` in this mode.
*
* This is especially useful for cases like "find all `node_modules`
* folders, but not the ones in `node_modules`".
*
* In order to support this, the `Ignore` implementation must support an
* `add(pattern: string)` method. If using the default `Ignore` class, then
* this is fine, but if this is set to `false`, and a custom `Ignore` is
* provided that does not have an `add()` method, then it will throw an
* error.
*
* **Caveat** It *only* ignores matches that would be a descendant of a
* previous match, and only if that descendant is matched *after* the
* ancestor is encountered. Since the file system walk happens in
* indeterminate order, it's possible that a match will already be added
* before its ancestor, if multiple or braced patterns are used.
*
* For example:
*
* ```ts
* const results = await glob([
* // likely to match first, since it's just a stat
* 'a/b/c/d/e/f',
*
     * // this pattern is more complicated! It must do various readdir()
* // calls and test the results against a regular expression, and that
* // is certainly going to take a little bit longer.
* //
* // So, later on, it encounters a match at 'a/b/c/d/e', but it's too
* // late to ignore a/b/c/d/e/f, because it's already been emitted.
* 'a/[bdf]/?/[a-z]/*',
* ], { includeChildMatches: false })
* ```
*
* It's best to only set this to `false` if you can be reasonably sure that
* no components of the pattern will potentially match one another's file
* system descendants, or if the occasional included child entry will not
* cause problems.
*
* @default true
*/
includeChildMatches?: boolean;
}
export type GlobOptionsWithFileTypesTrue = GlobOptions & {
withFileTypes: true;
absolute?: undefined;
mark?: undefined;
posix?: undefined;
};
export type GlobOptionsWithFileTypesFalse = GlobOptions & {
withFileTypes?: false;
};
export type GlobOptionsWithFileTypesUnset = GlobOptions & {
withFileTypes?: undefined;
};
export type Result<Opts> = Opts extends GlobOptionsWithFileTypesTrue ? Path : Opts extends GlobOptionsWithFileTypesFalse ? string : Opts extends GlobOptionsWithFileTypesUnset ? string : string | Path;
export type Results<Opts> = Result<Opts>[];
export type FileTypes<Opts> = Opts extends GlobOptionsWithFileTypesTrue ? true : Opts extends GlobOptionsWithFileTypesFalse ? false : Opts extends GlobOptionsWithFileTypesUnset ? false : boolean;
/**
* An object that can perform glob pattern traversals.
*/
export declare class Glob<Opts extends GlobOptions> implements GlobOptions {
absolute?: boolean;
cwd: string;
root?: string;
dot: boolean;
dotRelative: boolean;
follow: boolean;
ignore?: string | string[] | IgnoreLike;
magicalBraces: boolean;
mark?: boolean;
matchBase: boolean;
maxDepth: number;
nobrace: boolean;
nocase: boolean;
nodir: boolean;
noext: boolean;
noglobstar: boolean;
pattern: string[];
platform: NodeJS.Platform;
realpath: boolean;
scurry: PathScurry;
stat: boolean;
signal?: AbortSignal;
windowsPathsNoEscape: boolean;
withFileTypes: FileTypes<Opts>;
includeChildMatches: boolean;
/**
* The options provided to the constructor.
*/
opts: Opts;
/**
* An array of parsed immutable {@link Pattern} objects.
*/
patterns: Pattern[];
/**
* All options are stored as properties on the `Glob` object.
*
* See {@link GlobOptions} for full options descriptions.
*
* Note that a previous `Glob` object can be passed as the
* `GlobOptions` to another `Glob` instantiation to re-use settings
* and caches with a new pattern.
*
* Traversal functions can be called multiple times to run the walk
* again.
*/
constructor(pattern: string | string[], opts: Opts);
/**
* Returns a Promise that resolves to the results array.
*/
walk(): Promise<Results<Opts>>;
/**
* synchronous {@link Glob.walk}
*/
walkSync(): Results<Opts>;
/**
* Stream results asynchronously.
*/
stream(): Minipass<Result<Opts>, Result<Opts>>;
/**
* Stream results synchronously.
*/
streamSync(): Minipass<Result<Opts>, Result<Opts>>;
/**
* Default sync iteration function. Returns a Generator that
* iterates over the results.
*/
iterateSync(): Generator<Result<Opts>, void, void>;
[Symbol.iterator](): Generator<Result<Opts>, void, void>;
/**
* Default async iteration function. Returns an AsyncGenerator that
* iterates over the results.
*/
iterate(): AsyncGenerator<Result<Opts>, void, void>;
[Symbol.asyncIterator](): AsyncGenerator<Result<Opts>, void, void>;
}
//# sourceMappingURL=glob.d.ts.map
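
The `Result<Opts>` / `Results<Opts>` conditional types above are what let a walk's return type track the `withFileTypes` option. A short sketch; the pattern is illustrative:

```ts
import { glob } from 'glob'
import type { Path } from 'path-scurry'

// no relevant options: Result<Opts> resolves to string
const names: string[] = await glob('**/*.ts')

// withFileTypes: true resolves to path-scurry Path objects instead
const entries: Path[] = await glob('**/*.ts', { withFileTypes: true })
for (const e of entries) {
  // Path is Dirent-like, with extras such as fullpath()
  if (e.isFile()) console.log(e.fullpath())
}
```

Note that `withFileTypes: true` conflicts with `absolute`, `mark`, and `posix`, which the `GlobOptionsWithFileTypesTrue` type enforces by pinning those options to `undefined`.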

@@ -0,0 +1 @@
{"version":3,"file":"glob.d.ts","sourceRoot":"","sources":["../../src/glob.ts"],"names":[],"mappings":";AAAA,OAAO,EAAE,SAAS,EAAoB,MAAM,WAAW,CAAA;AACvD,OAAO,EAAE,QAAQ,EAAE,MAAM,UAAU,CAAA;AAEnC,OAAO,EACL,QAAQ,EACR,IAAI,EACJ,UAAU,EAIX,MAAM,aAAa,CAAA;AACpB,OAAO,EAAE,UAAU,EAAE,MAAM,aAAa,CAAA;AACxC,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAA;AAGtC,MAAM,MAAM,QAAQ,GAAG,SAAS,CAAC,KAAK,CAAC,CAAA;AACvC,MAAM,MAAM,SAAS,GAAG,OAAO,CAAC,SAAS,CAAC,WAAW,CAAC,EAAE,SAAS,CAAC,CAAA;AAalE;;;;;;;;;;;;GAYG;AACH,MAAM,WAAW,WAAW;IAC1B;;;;;;;;;;;;OAYG;IACH,QAAQ,CAAC,EAAE,OAAO,CAAA;IAElB;;;;OAIG;IACH,kBAAkB,CAAC,EAAE,OAAO,CAAA;IAE5B;;;;;OAKG;IACH,GAAG,CAAC,EAAE,MAAM,GAAG,GAAG,CAAA;IAElB;;;;OAIG;IACH,GAAG,CAAC,EAAE,OAAO,CAAA;IAEb;;;;;;;;OAQG;IACH,WAAW,CAAC,EAAE,OAAO,CAAA;IAErB;;;;;;;;OAQG;IACH,MAAM,CAAC,EAAE,OAAO,CAAA;IAEhB;;;;;;;;;;;;;;;;OAgBG;IACH,MAAM,CAAC,EAAE,MAAM,GAAG,MAAM,EAAE,GAAG,UAAU,CAAA;IAEvC;;;;;OAKG;IACH,aAAa,CAAC,EAAE,OAAO,CAAA;IAEvB;;;OAGG;IACH,IAAI,CAAC,EAAE,OAAO,CAAA;IAEd;;;;OAIG;IACH,SAAS,CAAC,EAAE,OAAO,CAAA;IAEnB;;;;;OAKG;IACH,QAAQ,CAAC,EAAE,MAAM,CAAA;IAEjB;;OAEG;IACH,OAAO,CAAC,EAAE,OAAO,CAAA;IAEjB;;;;;;;;;OASG;IACH,MAAM,CAAC,EAAE,OAAO,CAAA;IAEhB;;;OAGG;IACH,KAAK,CAAC,EAAE,OAAO,CAAA;IAEf;;OAEG;IACH,KAAK,CAAC,EAAE,OAAO,CAAA;IAEf;;;;;OAKG;IACH,UAAU,CAAC,EAAE,OAAO,CAAA;IAEpB;;;;OAIG;IACH,QAAQ,CAAC,EAAE,MAAM,CAAC,QAAQ,CAAA;IAE1B;;;;;OAKG;IACH,QAAQ,CAAC,EAAE,OAAO,CAAA;IAElB;;;;;;;;;;;;;;;;;;;;;;OAsBG;IACH,IAAI,CAAC,EAAE,MAAM,CAAA;IAEb;;;;;OAKG;IACH,MAAM,CAAC,EAAE,UAAU,CAAA;IAEnB;;;;;;OAMG;IACH,IAAI,CAAC,EAAE,OAAO,CAAA;IAEd;;;OAGG;IACH,MAAM,CAAC,EAAE,WAAW,CAAA;IAEpB;;;;;;;;;;;;;OAaG;IACH,oBAAoB,CAAC,EAAE,OAAO,CAAA;IAE9B;;;;;;;OAOG;IACH,aAAa,CAAC,EAAE,OAAO,CAAA;IAEvB;;;OAGG;IACH,EAAE,CAAC,EAAE,QAAQ,CAAA;IAEb;;;OAGG;IACH,KAAK,CAAC,EAAE,OAAO,CAAA;IAEf;;;;;;;OAOG;IACH,KAAK,CAAC,EAAE,OAAO,CAAA;IAEf;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;OA0CG;IACH,mBAAmB,CAAC,EAAE,OAAO,CAAA;CAC9B;AAED,MAAM,MAAM,4BAA4B,GAAG,WAAW,GAAG;IACvD,aAAa,EAAE,IAAI,CAAA;IAEnB,QAAQ,CAAC,EAAE,SAAS,CAAA;IACpB,IAAI,CAAC,EAAE,SAAS,CAAA;IAChB,KAAK,CAAC,EAAE,SAAS,CAAA;CAClB,CAAA;AAED,MAAM,MAAM,6BAA6B,GAAG,WAAW,GAAG;IACxD,aAAa,CAAC,EAAE,KAAK,CAAA;CACtB,CAAA;AAED,MAAM,MAAM,6BAA6B,GAAG,WAAW,GAAG;IACxD,aAAa,CAAC,EAAE,SAAS,CAAA;CAC1B,CAAA;AAED,MAAM,MAAM,MAAM,CAAC,IAAI,IACrB,IAAI,SAAS,4BAA4B,GAAG,IAAI,GAC9C,IAAI,SAAS,6BAA6B,GAAG,MAAM,GACnD,IAAI,SAAS,6BAA6B,GAAG,MAAM,GACnD,MAAM,GAAG,IAAI,CAAA;AACjB,MAAM,MAAM,OAAO,CAAC,IAAI,IAAI,MAAM,CAAC,IAAI,CAAC,EAAE,CAAA;AAE1C,MAAM,MAAM,SAAS,CAAC,IAAI,IACxB,IAAI,SAAS,4BAA4B,GAAG,IAAI,GAC9C,IAAI,SAAS,6BAA6B,GAAG,KAAK,GAClD,IAAI,SAAS,6BAA6B,GAAG,KAAK,GAClD,OAAO,CAAA;AAEX;;GAEG;AACH,qBAAa,IAAI,CAAC,IAAI,SAAS,WAAW,CAAE,YAAW,WAAW;IAChE,QAAQ,CAAC,EAAE,OAAO,CAAA;IAClB,GAAG,EAAE,MAAM,CAAA;IACX,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,GAAG,EAAE,OAAO,CAAA;IACZ,WAAW,EAAE,OAAO,CAAA;IACpB,MAAM,EAAE,OAAO,CAAA;IACf,MAAM,CAAC,EAAE,MAAM,GAAG,MAAM,EAAE,GAAG,UAAU,CAAA;IACvC,aAAa,EAAE,OAAO,CAAA;IACtB,IAAI,CAAC,EAAE,OAAO,CAAA;IACd,SAAS,EAAE,OAAO,CAAA;IAClB,QAAQ,EAAE,MAAM,CAAA;IAChB,OAAO,EAAE,OAAO,CAAA;IAChB,MAAM,EAAE,OAAO,CAAA;IACf,KAAK,EAAE,OAAO,CAAA;IACd,KAAK,EAAE,OAAO,CAAA;IACd,UAAU,EAAE,OAAO,CAAA;IACnB,OAAO,EAAE,MAAM,EAAE,CAAA;IACjB,QAAQ,EAAE,MAAM,CAAC,QAAQ,CAAA;IACzB,QAAQ,EAAE,OAAO,CAAA;IACjB,MAAM,EAAE,UAAU,CAAA;IAClB,IAAI,EAAE,OAAO,CAAA;IACb,MAAM,CAAC,EAAE,WAAW,CAAA;IACpB,oBAAoB,EAAE,OAAO,CAAA;IAC7B,aAAa,EAAE,SAAS,CAAC,IAAI,CAAC,CAAA;IAC9B,mBAAmB,EAAE,OAAO,CAAA;IAE5B;;OAEG;IACH,IAAI,EAAE,IAAI,CAAA;IAEV;;OAEG;IACH,QAAQ,EAAE,OAAO,EAAE,CAAA;IAEnB;;;;;;;;;;;OAWG;gBACS,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAAE,IAAI,EAAE,IAAI;IA2HlD;;OAEG;IACG,IAAI,IAAI,OAAO,CAAC
,OAAO,CAAC,IAAI,CAAC,CAAC;IAoBpC;;OAEG;IACH,QAAQ,IAAI,OAAO,CAAC,IAAI,CAAC;IAgBzB;;OAEG;IACH,MAAM,IAAI,QAAQ,CAAC,MAAM,CAAC,IAAI,CAAC,EAAE,MAAM,CAAC,IAAI,CAAC,CAAC;IAc9C;;OAEG;IACH,UAAU,IAAI,QAAQ,CAAC,MAAM,CAAC,IAAI,CAAC,EAAE,MAAM,CAAC,IAAI,CAAC,CAAC;IAclD;;;OAGG;IACH,WAAW,IAAI,SAAS,CAAC,MAAM,CAAC,IAAI,CAAC,EAAE,IAAI,EAAE,IAAI,CAAC;IAGlD,CAAC,MAAM,CAAC,QAAQ,CAAC;IAIjB;;;OAGG;IACH,OAAO,IAAI,cAAc,CAAC,MAAM,CAAC,IAAI,CAAC,EAAE,IAAI,EAAE,IAAI,CAAC;IAGnD,CAAC,MAAM,CAAC,aAAa,CAAC;CAGvB"}

@@ -0,0 +1,243 @@
import { Minimatch } from 'minimatch';
import { fileURLToPath } from 'node:url';
import { PathScurry, PathScurryDarwin, PathScurryPosix, PathScurryWin32, } from 'path-scurry';
import { Pattern } from './pattern.js';
import { GlobStream, GlobWalker } from './walker.js';
// if no process global, just call it linux.
// so we default to case-sensitive, / separators
const defaultPlatform = (typeof process === 'object' &&
process &&
typeof process.platform === 'string') ?
process.platform
: 'linux';
/**
* An object that can perform glob pattern traversals.
*/
export class Glob {
absolute;
cwd;
root;
dot;
dotRelative;
follow;
ignore;
magicalBraces;
mark;
matchBase;
maxDepth;
nobrace;
nocase;
nodir;
noext;
noglobstar;
pattern;
platform;
realpath;
scurry;
stat;
signal;
windowsPathsNoEscape;
withFileTypes;
includeChildMatches;
/**
* The options provided to the constructor.
*/
opts;
/**
* An array of parsed immutable {@link Pattern} objects.
*/
patterns;
/**
* All options are stored as properties on the `Glob` object.
*
* See {@link GlobOptions} for full options descriptions.
*
* Note that a previous `Glob` object can be passed as the
* `GlobOptions` to another `Glob` instantiation to re-use settings
* and caches with a new pattern.
*
* Traversal functions can be called multiple times to run the walk
* again.
*/
constructor(pattern, opts) {
/* c8 ignore start */
if (!opts)
throw new TypeError('glob options required');
/* c8 ignore stop */
this.withFileTypes = !!opts.withFileTypes;
this.signal = opts.signal;
this.follow = !!opts.follow;
this.dot = !!opts.dot;
this.dotRelative = !!opts.dotRelative;
this.nodir = !!opts.nodir;
this.mark = !!opts.mark;
if (!opts.cwd) {
this.cwd = '';
}
else if (opts.cwd instanceof URL || opts.cwd.startsWith('file://')) {
opts.cwd = fileURLToPath(opts.cwd);
}
this.cwd = opts.cwd || '';
this.root = opts.root;
this.magicalBraces = !!opts.magicalBraces;
this.nobrace = !!opts.nobrace;
this.noext = !!opts.noext;
this.realpath = !!opts.realpath;
this.absolute = opts.absolute;
this.includeChildMatches = opts.includeChildMatches !== false;
this.noglobstar = !!opts.noglobstar;
this.matchBase = !!opts.matchBase;
this.maxDepth =
typeof opts.maxDepth === 'number' ? opts.maxDepth : Infinity;
this.stat = !!opts.stat;
this.ignore = opts.ignore;
if (this.withFileTypes && this.absolute !== undefined) {
throw new Error('cannot set absolute and withFileTypes:true');
}
if (typeof pattern === 'string') {
pattern = [pattern];
}
this.windowsPathsNoEscape =
!!opts.windowsPathsNoEscape ||
opts.allowWindowsEscape ===
false;
if (this.windowsPathsNoEscape) {
pattern = pattern.map(p => p.replace(/\\/g, '/'));
}
if (this.matchBase) {
if (opts.noglobstar) {
throw new TypeError('base matching requires globstar');
}
pattern = pattern.map(p => (p.includes('/') ? p : `./**/${p}`));
}
this.pattern = pattern;
this.platform = opts.platform || defaultPlatform;
this.opts = { ...opts, platform: this.platform };
if (opts.scurry) {
this.scurry = opts.scurry;
if (opts.nocase !== undefined &&
opts.nocase !== opts.scurry.nocase) {
throw new Error('nocase option contradicts provided scurry option');
}
}
else {
const Scurry = opts.platform === 'win32' ? PathScurryWin32
: opts.platform === 'darwin' ? PathScurryDarwin
: opts.platform ? PathScurryPosix
: PathScurry;
this.scurry = new Scurry(this.cwd, {
nocase: opts.nocase,
fs: opts.fs,
});
}
this.nocase = this.scurry.nocase;
// If you do nocase:true on a case-sensitive file system, then
// we need to use regexps instead of strings for non-magic
// path portions, because statting `aBc` won't return results
// for the file `AbC` for example.
const nocaseMagicOnly = this.platform === 'darwin' || this.platform === 'win32';
const mmo = {
// default nocase based on platform
...opts,
dot: this.dot,
matchBase: this.matchBase,
nobrace: this.nobrace,
nocase: this.nocase,
nocaseMagicOnly,
nocomment: true,
noext: this.noext,
nonegate: true,
optimizationLevel: 2,
platform: this.platform,
windowsPathsNoEscape: this.windowsPathsNoEscape,
debug: !!this.opts.debug,
};
const mms = this.pattern.map(p => new Minimatch(p, mmo));
const [matchSet, globParts] = mms.reduce((set, m) => {
set[0].push(...m.set);
set[1].push(...m.globParts);
return set;
}, [[], []]);
this.patterns = matchSet.map((set, i) => {
const g = globParts[i];
/* c8 ignore start */
if (!g)
throw new Error('invalid pattern object');
/* c8 ignore stop */
return new Pattern(set, g, 0, this.platform);
});
}
async walk() {
// Walkers always return array of Path objects, so we just have to
// coerce them into the right shape. It will have already called
// realpath() if the option was set to do so, so we know that's cached.
// start out knowing the cwd, at least
return [
...(await new GlobWalker(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).walk()),
];
}
walkSync() {
return [
...new GlobWalker(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).walkSync(),
];
}
stream() {
return new GlobStream(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).stream();
}
streamSync() {
return new GlobStream(this.patterns, this.scurry.cwd, {
...this.opts,
maxDepth: this.maxDepth !== Infinity ?
this.maxDepth + this.scurry.cwd.depth()
: Infinity,
platform: this.platform,
nocase: this.nocase,
includeChildMatches: this.includeChildMatches,
}).streamSync();
}
/**
* Default sync iteration function. Returns a Generator that
* iterates over the results.
*/
iterateSync() {
return this.streamSync()[Symbol.iterator]();
}
[Symbol.iterator]() {
return this.iterateSync();
}
/**
* Default async iteration function. Returns an AsyncGenerator that
* iterates over the results.
*/
iterate() {
return this.stream()[Symbol.asyncIterator]();
}
[Symbol.asyncIterator]() {
return this.iterate();
}
}
//# sourceMappingURL=glob.js.map
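
Per the constructor notes above, a `Glob` instance itself satisfies `GlobOptions`, so it can be passed as the options for a later walk to re-use its settings and `PathScurry` cache. A sketch; the cwd and patterns are illustrative:

```ts
import { Glob } from 'glob'

const g1 = new Glob('**/*.js', { cwd: '/some/project' })
const js = await g1.walk()

// second pattern, same settings, shared filesystem cache
const g2 = new Glob('**/*.ts', g1)
const ts = await g2.walk()

// Glob objects are also sync and async iterables
for await (const f of new Glob('**/*.md', g1)) console.log(f)
```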

File diff suppressed because one or more lines are too long

@@ -0,0 +1,14 @@
import { GlobOptions } from './glob.js';
/**
* Return true if the patterns provided contain any magic glob characters,
* given the options provided.
*
* Brace expansion is not considered "magic" unless the `magicalBraces` option
* is set, as brace expansion just turns one string into an array of strings.
* So a pattern like `'x{a,b}y'` would return `false`, because `'xay'` and
* `'xby'` both do not contain any magic glob characters, and it's treated the
* same as if you had called it on `['xay', 'xby']`. When `magicalBraces:true`
* is in the options, brace expansion _is_ treated as a pattern having magic.
*/
export declare const hasMagic: (pattern: string | string[], options?: GlobOptions) => boolean;
//# sourceMappingURL=has-magic.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"has-magic.d.ts","sourceRoot":"","sources":["../../src/has-magic.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,WAAW,EAAE,MAAM,WAAW,CAAA;AAEvC;;;;;;;;;;GAUG;AACH,eAAO,MAAM,QAAQ,YACV,MAAM,GAAG,MAAM,EAAE,YACjB,WAAW,KACnB,OAQF,CAAA"}

@@ -0,0 +1,23 @@
import { Minimatch } from 'minimatch';
/**
* Return true if the patterns provided contain any magic glob characters,
* given the options provided.
*
* Brace expansion is not considered "magic" unless the `magicalBraces` option
* is set, as brace expansion just turns one string into an array of strings.
* So a pattern like `'x{a,b}y'` would return `false`, because `'xay'` and
* `'xby'` both do not contain any magic glob characters, and it's treated the
* same as if you had called it on `['xay', 'xby']`. When `magicalBraces:true`
* is in the options, brace expansion _is_ treated as a pattern having magic.
*/
export const hasMagic = (pattern, options = {}) => {
if (!Array.isArray(pattern)) {
pattern = [pattern];
}
for (const p of pattern) {
if (new Minimatch(p, options).hasMagic())
return true;
}
return false;
};
//# sourceMappingURL=has-magic.js.map
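
The `magicalBraces` distinction documented above is easiest to see side by side:

```ts
import { hasMagic } from 'glob'

hasMagic('x{a,b}y')                          // false: just 'xay' and 'xby'
hasMagic('x{a,b}y', { magicalBraces: true }) // true: braces count as magic
hasMagic('src/*.js')                         // true either way: '*' is magic
hasMagic(['plain.txt', 'docs/README.md'])    // false: no entry has magic
```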

@@ -0,0 +1 @@
{"version":3,"file":"has-magic.js","sourceRoot":"","sources":["../../src/has-magic.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,SAAS,EAAE,MAAM,WAAW,CAAA;AAGrC;;;;;;;;;;GAUG;AACH,MAAM,CAAC,MAAM,QAAQ,GAAG,CACtB,OAA0B,EAC1B,UAAuB,EAAE,EAChB,EAAE;IACX,IAAI,CAAC,KAAK,CAAC,OAAO,CAAC,OAAO,CAAC,EAAE,CAAC;QAC5B,OAAO,GAAG,CAAC,OAAO,CAAC,CAAA;IACrB,CAAC;IACD,KAAK,MAAM,CAAC,IAAI,OAAO,EAAE,CAAC;QACxB,IAAI,IAAI,SAAS,CAAC,CAAC,EAAE,OAAO,CAAC,CAAC,QAAQ,EAAE;YAAE,OAAO,IAAI,CAAA;IACvD,CAAC;IACD,OAAO,KAAK,CAAA;AACd,CAAC,CAAA","sourcesContent":["import { Minimatch } from 'minimatch'\nimport { GlobOptions } from './glob.js'\n\n/**\n * Return true if the patterns provided contain any magic glob characters,\n * given the options provided.\n *\n * Brace expansion is not considered \"magic\" unless the `magicalBraces` option\n * is set, as brace expansion just turns one string into an array of strings.\n * So a pattern like `'x{a,b}y'` would return `false`, because `'xay'` and\n * `'xby'` both do not contain any magic glob characters, and it's treated the\n * same as if you had called it on `['xay', 'xby']`. When `magicalBraces:true`\n * is in the options, brace expansion _is_ treated as a pattern having magic.\n */\nexport const hasMagic = (\n pattern: string | string[],\n options: GlobOptions = {},\n): boolean => {\n if (!Array.isArray(pattern)) {\n pattern = [pattern]\n }\n for (const p of pattern) {\n if (new Minimatch(p, options).hasMagic()) return true\n }\n return false\n}\n"]}

@@ -0,0 +1,25 @@
/// <reference types="node" resolution-mode="require"/>
import { Minimatch, MinimatchOptions } from 'minimatch';
import { Path } from 'path-scurry';
import { GlobWalkerOpts } from './walker.js';
export interface IgnoreLike {
ignored?: (p: Path) => boolean;
childrenIgnored?: (p: Path) => boolean;
add?: (ignore: string) => void;
}
/**
* Class used to process ignored patterns
*/
export declare class Ignore implements IgnoreLike {
relative: Minimatch[];
relativeChildren: Minimatch[];
absolute: Minimatch[];
absoluteChildren: Minimatch[];
platform: NodeJS.Platform;
mmopts: MinimatchOptions;
constructor(ignored: string[], { nobrace, nocase, noext, noglobstar, platform, }: GlobWalkerOpts);
add(ign: string): void;
ignored(p: Path): boolean;
childrenIgnored(p: Path): boolean;
}
//# sourceMappingURL=ignore.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"ignore.d.ts","sourceRoot":"","sources":["../../src/ignore.ts"],"names":[],"mappings":";AAKA,OAAO,EAAE,SAAS,EAAE,gBAAgB,EAAE,MAAM,WAAW,CAAA;AACvD,OAAO,EAAE,IAAI,EAAE,MAAM,aAAa,CAAA;AAElC,OAAO,EAAE,cAAc,EAAE,MAAM,aAAa,CAAA;AAE5C,MAAM,WAAW,UAAU;IACzB,OAAO,CAAC,EAAE,CAAC,CAAC,EAAE,IAAI,KAAK,OAAO,CAAA;IAC9B,eAAe,CAAC,EAAE,CAAC,CAAC,EAAE,IAAI,KAAK,OAAO,CAAA;IACtC,GAAG,CAAC,EAAE,CAAC,MAAM,EAAE,MAAM,KAAK,IAAI,CAAA;CAC/B;AAWD;;GAEG;AACH,qBAAa,MAAO,YAAW,UAAU;IACvC,QAAQ,EAAE,SAAS,EAAE,CAAA;IACrB,gBAAgB,EAAE,SAAS,EAAE,CAAA;IAC7B,QAAQ,EAAE,SAAS,EAAE,CAAA;IACrB,gBAAgB,EAAE,SAAS,EAAE,CAAA;IAC7B,QAAQ,EAAE,MAAM,CAAC,QAAQ,CAAA;IACzB,MAAM,EAAE,gBAAgB,CAAA;gBAGtB,OAAO,EAAE,MAAM,EAAE,EACjB,EACE,OAAO,EACP,MAAM,EACN,KAAK,EACL,UAAU,EACV,QAA0B,GAC3B,EAAE,cAAc;IAqBnB,GAAG,CAAC,GAAG,EAAE,MAAM;IAyCf,OAAO,CAAC,CAAC,EAAE,IAAI,GAAG,OAAO;IAczB,eAAe,CAAC,CAAC,EAAE,IAAI,GAAG,OAAO;CAWlC"}

@@ -0,0 +1,115 @@
// give it a pattern, and it'll be able to tell you if
// a given path should be ignored.
// Ignoring a path ignores its children if the pattern ends in /**
// Ignores are always parsed in dot:true mode
import { Minimatch } from 'minimatch';
import { Pattern } from './pattern.js';
const defaultPlatform = (typeof process === 'object' &&
process &&
typeof process.platform === 'string') ?
process.platform
: 'linux';
/**
* Class used to process ignored patterns
*/
export class Ignore {
relative;
relativeChildren;
absolute;
absoluteChildren;
platform;
mmopts;
constructor(ignored, { nobrace, nocase, noext, noglobstar, platform = defaultPlatform, }) {
this.relative = [];
this.absolute = [];
this.relativeChildren = [];
this.absoluteChildren = [];
this.platform = platform;
this.mmopts = {
dot: true,
nobrace,
nocase,
noext,
noglobstar,
optimizationLevel: 2,
platform,
nocomment: true,
nonegate: true,
};
for (const ign of ignored)
this.add(ign);
}
add(ign) {
// this is a little weird, but it gives us a clean set of optimized
// minimatch matchers, without getting tripped up if one of them
// ends in /** inside a brace section, and it's only inefficient at
// the start of the walk, not along it.
// It'd be nice if the Pattern class just had a .test() method, but
// handling globstars is a bit of a pita, and that code already lives
// in minimatch anyway.
// Another way would be if maybe Minimatch could take its set/globParts
// as an option, and then we could at least just use Pattern to test
// for absolute-ness.
// Yet another way, Minimatch could take an array of glob strings, and
// a cwd option, and do the right thing.
const mm = new Minimatch(ign, this.mmopts);
for (let i = 0; i < mm.set.length; i++) {
const parsed = mm.set[i];
const globParts = mm.globParts[i];
/* c8 ignore start */
if (!parsed || !globParts) {
throw new Error('invalid pattern object');
}
// strip off leading ./ portions
// https://github.com/isaacs/node-glob/issues/570
while (parsed[0] === '.' && globParts[0] === '.') {
parsed.shift();
globParts.shift();
}
/* c8 ignore stop */
const p = new Pattern(parsed, globParts, 0, this.platform);
const m = new Minimatch(p.globString(), this.mmopts);
const children = globParts[globParts.length - 1] === '**';
const absolute = p.isAbsolute();
if (absolute)
this.absolute.push(m);
else
this.relative.push(m);
if (children) {
if (absolute)
this.absoluteChildren.push(m);
else
this.relativeChildren.push(m);
}
}
}
ignored(p) {
const fullpath = p.fullpath();
const fullpaths = `${fullpath}/`;
const relative = p.relative() || '.';
const relatives = `${relative}/`;
for (const m of this.relative) {
if (m.match(relative) || m.match(relatives))
return true;
}
for (const m of this.absolute) {
if (m.match(fullpath) || m.match(fullpaths))
return true;
}
return false;
}
childrenIgnored(p) {
const fullpath = p.fullpath() + '/';
const relative = (p.relative() || '.') + '/';
for (const m of this.relativeChildren) {
if (m.match(relative))
return true;
}
for (const m of this.absoluteChildren) {
if (m.match(fullpath))
return true;
}
return false;
}
}
//# sourceMappingURL=ignore.js.map
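
Both ignore forms the walker accepts pass through here: glob strings are compiled by this `Ignore` class (always in `dot:true` mode), while a user-supplied `IgnoreLike` object is used as-is. A sketch of each; the patterns and the `.min.js` rule are illustrative:

```ts
import { glob } from 'glob'

// string form: appending '/**' excludes a directory and its contents
const a = await glob('**/*.js', { ignore: 'node_modules/**' })

// IgnoreLike form: ignored() filters matches, childrenIgnored() prunes
// entire subtrees from the walk
const b = await glob('**/*.js', {
  ignore: {
    ignored: p => p.name.endsWith('.min.js'),
    childrenIgnored: p => p.name === 'node_modules',
  },
})
```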

File diff suppressed because one or more lines are too long

@@ -0,0 +1,97 @@
import { Minipass } from 'minipass';
import { Path } from 'path-scurry';
import type { GlobOptions, GlobOptionsWithFileTypesFalse, GlobOptionsWithFileTypesTrue, GlobOptionsWithFileTypesUnset } from './glob.js';
import { Glob } from './glob.js';
export { escape, unescape } from 'minimatch';
export type { FSOption, Path, WalkOptions, WalkOptionsWithFileTypesTrue, WalkOptionsWithFileTypesUnset, } from 'path-scurry';
export { Glob } from './glob.js';
export type { GlobOptions, GlobOptionsWithFileTypesFalse, GlobOptionsWithFileTypesTrue, GlobOptionsWithFileTypesUnset, } from './glob.js';
export { hasMagic } from './has-magic.js';
export { Ignore } from './ignore.js';
export type { IgnoreLike } from './ignore.js';
export type { MatchStream } from './walker.js';
/**
 * Synchronous form of {@link globStream}. Will read all the matches as fast as
* you consume them, even all in a single tick if you consume them immediately,
* but will still respond to backpressure if they're not consumed immediately.
*/
export declare function globStreamSync(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Minipass<Path, Path>;
export declare function globStreamSync(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Minipass<string, string>;
export declare function globStreamSync(pattern: string | string[], options: GlobOptionsWithFileTypesUnset): Minipass<string, string>;
export declare function globStreamSync(pattern: string | string[], options: GlobOptions): Minipass<Path, Path> | Minipass<string, string>;
/**
* Return a stream that emits all the strings or `Path` objects and
* then emits `end` when completed.
*/
export declare function globStream(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Minipass<string, string>;
export declare function globStream(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Minipass<Path, Path>;
export declare function globStream(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): Minipass<string, string>;
export declare function globStream(pattern: string | string[], options: GlobOptions): Minipass<Path, Path> | Minipass<string, string>;
/**
* Synchronous form of {@link glob}
*/
export declare function globSync(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): string[];
export declare function globSync(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Path[];
export declare function globSync(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): string[];
export declare function globSync(pattern: string | string[], options: GlobOptions): Path[] | string[];
/**
* Perform an asynchronous glob search for the pattern(s) specified. Returns
* [Path](https://isaacs.github.io/path-scurry/classes/PathBase) objects if the
* {@link withFileTypes} option is set to `true`. See {@link GlobOptions} for
* full option descriptions.
*/
declare function glob_(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): Promise<string[]>;
declare function glob_(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Promise<Path[]>;
declare function glob_(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Promise<string[]>;
declare function glob_(pattern: string | string[], options: GlobOptions): Promise<Path[] | string[]>;
/**
* Return a sync iterator for walking glob pattern matches.
*/
export declare function globIterateSync(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): Generator<string, void, void>;
export declare function globIterateSync(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): Generator<Path, void, void>;
export declare function globIterateSync(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): Generator<string, void, void>;
export declare function globIterateSync(pattern: string | string[], options: GlobOptions): Generator<Path, void, void> | Generator<string, void, void>;
/**
* Return an async iterator for walking glob pattern matches.
*/
export declare function globIterate(pattern: string | string[], options?: GlobOptionsWithFileTypesUnset | undefined): AsyncGenerator<string, void, void>;
export declare function globIterate(pattern: string | string[], options: GlobOptionsWithFileTypesTrue): AsyncGenerator<Path, void, void>;
export declare function globIterate(pattern: string | string[], options: GlobOptionsWithFileTypesFalse): AsyncGenerator<string, void, void>;
export declare function globIterate(pattern: string | string[], options: GlobOptions): AsyncGenerator<Path, void, void> | AsyncGenerator<string, void, void>;
export declare const streamSync: typeof globStreamSync;
export declare const stream: typeof globStream & {
sync: typeof globStreamSync;
};
export declare const iterateSync: typeof globIterateSync;
export declare const iterate: typeof globIterate & {
sync: typeof globIterateSync;
};
export declare const sync: typeof globSync & {
stream: typeof globStreamSync;
iterate: typeof globIterateSync;
};
export declare const glob: typeof glob_ & {
glob: typeof glob_;
globSync: typeof globSync;
sync: typeof globSync & {
stream: typeof globStreamSync;
iterate: typeof globIterateSync;
};
globStream: typeof globStream;
stream: typeof globStream & {
sync: typeof globStreamSync;
};
globStreamSync: typeof globStreamSync;
streamSync: typeof globStreamSync;
globIterate: typeof globIterate;
iterate: typeof globIterate & {
sync: typeof globIterateSync;
};
globIterateSync: typeof globIterateSync;
iterateSync: typeof globIterateSync;
Glob: typeof Glob;
hasMagic: (pattern: string | string[], options?: GlobOptions) => boolean;
escape: (s: string, { windowsPathsNoEscape, }?: Pick<import("minimatch").MinimatchOptions, "windowsPathsNoEscape"> | undefined) => string;
unescape: (s: string, { windowsPathsNoEscape, }?: Pick<import("minimatch").MinimatchOptions, "windowsPathsNoEscape"> | undefined) => string;
};
//# sourceMappingURL=index.d.ts.map

@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,QAAQ,EAAE,MAAM,UAAU,CAAA;AACnC,OAAO,EAAE,IAAI,EAAE,MAAM,aAAa,CAAA;AAClC,OAAO,KAAK,EACV,WAAW,EACX,6BAA6B,EAC7B,4BAA4B,EAC5B,6BAA6B,EAC9B,MAAM,WAAW,CAAA;AAClB,OAAO,EAAE,IAAI,EAAE,MAAM,WAAW,CAAA;AAGhC,OAAO,EAAE,MAAM,EAAE,QAAQ,EAAE,MAAM,WAAW,CAAA;AAC5C,YAAY,EACV,QAAQ,EACR,IAAI,EACJ,WAAW,EACX,4BAA4B,EAC5B,6BAA6B,GAC9B,MAAM,aAAa,CAAA;AACpB,OAAO,EAAE,IAAI,EAAE,MAAM,WAAW,CAAA;AAChC,YAAY,EACV,WAAW,EACX,6BAA6B,EAC7B,4BAA4B,EAC5B,6BAA6B,GAC9B,MAAM,WAAW,CAAA;AAClB,OAAO,EAAE,QAAQ,EAAE,MAAM,gBAAgB,CAAA;AACzC,OAAO,EAAE,MAAM,EAAE,MAAM,aAAa,CAAA;AACpC,YAAY,EAAE,UAAU,EAAE,MAAM,aAAa,CAAA;AAC7C,YAAY,EAAE,WAAW,EAAE,MAAM,aAAa,CAAA;AAE9C;;;;GAIG;AACH,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,CAAA;AACvB,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,cAAc,CAC5B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,GAAG,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAQlD;;;GAGG;AACH,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,CAAA;AACvB,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAC3B,wBAAgB,UAAU,CACxB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,QAAQ,CAAC,IAAI,EAAE,IAAI,CAAC,GAAG,QAAQ,CAAC,MAAM,EAAE,MAAM,CAAC,CAAA;AAQlD;;GAEG;AACH,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,MAAM,EAAE,CAAA;AACX,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,IAAI,EAAE,CAAA;AACT,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,MAAM,EAAE,CAAA;AACX,wBAAgB,QAAQ,CACtB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,IAAI,EAAE,GAAG,MAAM,EAAE,CAAA;AAQpB;;;;;GAKG;AACH,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,OAAO,CAAC,MAAM,EAAE,CAAC,CAAA;AACpB,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,OAAO,CAAC,IAAI,EAAE,CAAC,CAAA;AAClB,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,OAAO,CAAC,MAAM,EAAE,CAAC,CAAA;AACpB,iBAAe,KAAK,CAClB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,OAAO,CAAC,IAAI,EAAE,GAAG,MAAM,EAAE,CAAC,CAAA;AAQ7B;;GAEG;AACH,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAChC,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,SAAS,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAC9B,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAChC,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,SAAS,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAC,GAAG,SAAS,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AAQ9D;;GAEG;AACH,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,CAAC,EAAE,6BAA6B,GAAG,SAAS,GAClD,cAAc,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AACrC,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,4BAA4B,GACpC,cAAc,CAAC,IAAI,EAAE,IAAI,EAAE,IA
AI,CAAC,CAAA;AACnC,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,6BAA6B,GACrC,cAAc,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AACrC,wBAAgB,WAAW,CACzB,OAAO,EAAE,MAAM,GAAG,MAAM,EAAE,EAC1B,OAAO,EAAE,WAAW,GACnB,cAAc,CAAC,IAAI,EAAE,IAAI,EAAE,IAAI,CAAC,GAAG,cAAc,CAAC,MAAM,EAAE,IAAI,EAAE,IAAI,CAAC,CAAA;AASxE,eAAO,MAAM,UAAU,uBAAiB,CAAA;AACxC,eAAO,MAAM,MAAM;;CAAsD,CAAA;AACzE,eAAO,MAAM,WAAW,wBAAkB,CAAA;AAC1C,eAAO,MAAM,OAAO;;CAElB,CAAA;AACF,eAAO,MAAM,IAAI;;;CAGf,CAAA;AAEF,eAAO,MAAM,IAAI;;;;;;;;;;;;;;;;;;;;;;;CAgBf,CAAA"}

@@ -0,0 +1,55 @@
import { escape, unescape } from 'minimatch';
import { Glob } from './glob.js';
import { hasMagic } from './has-magic.js';
export { escape, unescape } from 'minimatch';
export { Glob } from './glob.js';
export { hasMagic } from './has-magic.js';
export { Ignore } from './ignore.js';
export function globStreamSync(pattern, options = {}) {
return new Glob(pattern, options).streamSync();
}
export function globStream(pattern, options = {}) {
return new Glob(pattern, options).stream();
}
export function globSync(pattern, options = {}) {
return new Glob(pattern, options).walkSync();
}
async function glob_(pattern, options = {}) {
return new Glob(pattern, options).walk();
}
export function globIterateSync(pattern, options = {}) {
return new Glob(pattern, options).iterateSync();
}
export function globIterate(pattern, options = {}) {
return new Glob(pattern, options).iterate();
}
// aliases: glob.sync.stream() glob.stream.sync() glob.sync() etc
export const streamSync = globStreamSync;
export const stream = Object.assign(globStream, { sync: globStreamSync });
export const iterateSync = globIterateSync;
export const iterate = Object.assign(globIterate, {
sync: globIterateSync,
});
export const sync = Object.assign(globSync, {
stream: globStreamSync,
iterate: globIterateSync,
});
export const glob = Object.assign(glob_, {
glob: glob_,
globSync,
sync,
globStream,
stream,
globStreamSync,
streamSync,
globIterate,
iterate,
globIterateSync,
iterateSync,
Glob,
hasMagic,
escape,
unescape,
});
glob.glob = glob;
//# sourceMappingURL=index.js.map
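
The `Object.assign` chains above exist so that every entry point is reachable from every other, and the final `glob.glob = glob` makes the main alias self-referential. These calls are therefore equivalent spellings of the same operations:

```ts
import { glob } from 'glob'

const a = await glob('**/*.js')   // main async form
const b = glob.sync('**/*.js')    // same as globSync()
const s = glob.stream('**/*.js')  // same as globStream(): a Minipass stream

// iterator aliases compose the same way
for (const f of glob.iterate.sync('**/*.js')) {
  console.log(f)
}
```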

File diff suppressed because one or more lines are too long

@@ -0,0 +1,3 @@
{
"type": "module"
}

@@ -0,0 +1,77 @@
/// <reference types="node" resolution-mode="require"/>
import { GLOBSTAR } from 'minimatch';
export type MMPattern = string | RegExp | typeof GLOBSTAR;
export type PatternList = [p: MMPattern, ...rest: MMPattern[]];
export type UNCPatternList = [
p0: '',
p1: '',
p2: string,
p3: string,
...rest: MMPattern[]
];
export type DrivePatternList = [p0: string, ...rest: MMPattern[]];
export type AbsolutePatternList = [p0: '', ...rest: MMPattern[]];
export type GlobList = [p: string, ...rest: string[]];
/**
* An immutable-ish view on an array of glob parts and their parsed
* results
*/
export declare class Pattern {
#private;
readonly length: number;
constructor(patternList: MMPattern[], globList: string[], index: number, platform: NodeJS.Platform);
/**
* The first entry in the parsed list of patterns
*/
pattern(): MMPattern;
/**
     * true if pattern() returns a string
*/
isString(): boolean;
/**
     * true if pattern() returns GLOBSTAR
*/
isGlobstar(): boolean;
/**
* true if pattern() returns a regexp
*/
isRegExp(): boolean;
/**
* The /-joined set of glob parts that make up this pattern
*/
globString(): string;
/**
* true if there are more pattern parts after this one
*/
hasMore(): boolean;
/**
* The rest of the pattern after this part, or null if this is the end
*/
rest(): Pattern | null;
/**
     * true if the pattern represents a //unc/path/ on Windows
*/
isUNC(): boolean;
/**
* True if the pattern starts with a drive letter on Windows
*/
isDrive(): boolean;
/**
* True if the pattern is rooted on an absolute path
*/
isAbsolute(): boolean;
/**
* consume the root of the pattern, and return it
*/
root(): string;
/**
* Check to see if the current globstar pattern is allowed to follow
* a symbolic link.
*/
checkFollowGlobstar(): boolean;
/**
* Mark that the current globstar pattern is following a symbolic link
*/
markFollowGlobstar(): boolean;
}
//# sourceMappingURL=pattern.d.ts.map
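
`Pattern` is internal, but the `Glob` constructor earlier shows how instances are built: each `Minimatch` parse contributes a `set`/`globParts` pair. A sketch of that construction, for orientation only; the relative `./pattern.js` import reflects this dist layout rather than a public entry point:

```ts
import { Minimatch } from 'minimatch'
import { Pattern } from './pattern.js' // internal module in this dist

const mm = new Minimatch('src/**/*.ts', { optimizationLevel: 2 })
const patterns = mm.set.map((set, i) => {
  const g = mm.globParts[i]
  if (!g) throw new Error('invalid pattern object') // mirrors glob.js above
  return new Pattern(set, g, 0, process.platform)
})

console.log(patterns[0]?.globString()) // the '/'-joined glob parts
console.log(patterns[0]?.isAbsolute()) // false for this relative pattern
```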

Some files were not shown because too many files have changed in this diff