Compare commits

...

111 Commits

Author SHA1 Message Date
Alex Waygood
09e8599e91 [red-knot] Explicitly test that no duplicate editable search paths are ever added 2024-07-19 14:30:08 +01:00
Alex Waygood
5f96f69151 [red-knot] Fix bug where module resolution would not be invalidated if an entire package was deleted (#12378) 2024-07-19 13:53:09 +01:00
Micha Reiser
ad19b3fd0e [red-knot] Add verbosity argument to CLI (#12404) 2024-07-19 11:38:24 +00:00
Carl Meyer
a62e2d2000 [red-knot] preparse builtins in without_parse benchmark (#12395) 2024-07-19 05:58:27 +00:00
Dylan
d61747093c [ruff] Rename RUF007 to zip-instead-of-pairwise (#12399)
## Summary

Renames the rule
[RUF007](https://docs.astral.sh/ruff/rules/pairwise-over-zipped/) from
`pairwise-over-zipped` to `zip-instead-of-pairwise`. This closes #12397.
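
For reference, a minimal sketch of the pattern the rule targets (names here are illustrative):

```python
import itertools

letters = ["a", "b", "c", "d"]

# Flagged by RUF007: zipping a sequence with itself, offset by one
pairs = list(zip(letters, letters[1:]))

# Preferred: itertools.pairwise (Python 3.10+)
pairs = list(itertools.pairwise(letters))
```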

Specifically, in this PR:

- The file containing the rule was renamed
- The struct was renamed
- The function implementing the rule was renamed

## Testing

- `cargo test`
- Docs re-built locally and verified that new rule name is displayed.
(Screenshots below).

<img width="939" alt="New rule name in rule summary"
src="https://github.com/user-attachments/assets/bf638bc9-1b7a-4675-99bf-e4de88fec167">

<img width="805" alt="New rule name in rule details"
src="https://github.com/user-attachments/assets/6fffd745-2568-424a-84e5-f94a41351022">
2024-07-18 19:26:27 -04:00
ukyen
0ba7fc63d0 [pydocstyle] Escaped docstring in docstring (D301) (#12192)

## Summary

This PR updates the D301 rule to allow including escaped docstrings, e.g.
`\"""Foo.\"""` or `\"\"\"Bar.\"\"\"`, within a docstring.

Related issue: #12152 
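
For illustration, a sketch of a docstring that should now pass D301 without requiring a raw string, since the backslashes only escape quotes:

```python
def example():
    """Show how to escape a docstring inside a docstring.

    Write \"""Foo.\""" or \"\"\"Bar.\"\"\" to embed the quotes.
    """
```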

## Test Plan

Add more test cases to D301.py and update the snapshot file.

2024-07-18 18:36:05 -04:00
Carl Meyer
fa5b19d4b6 [red-knot] use a simpler builtin in the benchmark (#12393)
In preparation for supporting resolving builtins, simplify the benchmark
so it doesn't look up `str`, which is actually a complex builtin to deal
with because it inherits `Sequence[str]`.

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-07-18 14:04:33 -07:00
Carl Meyer
181e7b3c0d [red-knot] rename module_global to global (#12385)
Per comments in https://github.com/astral-sh/ruff/pull/12269, "module
global" is kind of long, and arguably redundant.

I tried just using "module" but there were too many cases where I felt
this was ambiguous. I like the way "global" works out better, though it
does require an understanding that in Python "global" generally means
"module global" not "globally global" (though in a sense module globals
are also globally global since modules are singletons).
2024-07-18 13:05:30 -07:00
Carl Meyer
519eca9fe7 [red-knot] support implicit global name lookups (#12374)
Support falling back to a global name lookup if a name isn't defined in
the local scope, in the cases where that is correct according to Python
semantics.

In class scopes, a name lookup checks the local namespace first, and if
the name isn't found there, looks it up in globals.

In function scopes (and type parameter scopes, which are function-like),
if a name has any definitions in the local scope, it is a local, and
accessing it when none of those definitions have executed yet just
results in an `UnboundLocalError`; it does not fall back to a global. If
the name does not have any definitions in the local scope, then it is an
implicit global.
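
For example (illustrating the semantics described above):

```python
x = 1

class C:
    y = x  # class scope: not found locally, falls back to the global `x`

def f():
    print(x)  # no local definition of `x` in `f`, so this is an implicit global

def g():
    print(x)  # raises UnboundLocalError when called: `x` is local to `g`...
    x = 2     # ...because it has a definition somewhere in this scope
```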

Public symbol type lookups never include such a fallback. For example,
if a name is not defined in a class scope, it is not available as a
member on that class, even if a name lookup within the class scope would
have fallen back to a global lookup.

This PR makes the `@override` lint rule work again.

Not yet included/supported in this PR:

* Support for free variables / closures: a free symbol in a nested
function-like scope referring to a symbol in an outer function-like
scope.
* Support for `global` and `nonlocal` statements, which force a symbol
to be treated as global or nonlocal even if it has definitions in the
local scope.
* Module-global lookups should fall back to builtins if the name isn't
found in the module scope.

I would like to expose nicer APIs for the various kinds of symbols
(explicit global, implicit global, free, etc), but this will also wait
for a later PR, when more kinds of symbols are supported.
2024-07-18 10:50:43 -07:00
Dhruv Manilawala
f0d589d7a3 Provide custom job permissions to cargo-dist (#12386)
We can't just directly update the `release.yml` file because that's
auto-generated using `cargo-dist`. So, update the permissions in
`Cargo.toml` and then use `cargo dist generate` to make sure there's no
diff.
2024-07-18 16:49:38 +00:00
Dhruv Manilawala
512c8b2cc5 Provide contents read permission to wasm publish job (#12384)
The job has asked for the permission:
811f78d94d/.github/workflows/publish-wasm.yml (L25)
2024-07-18 22:02:49 +05:30
Carl Meyer
811f78d94d [red-knot] small efficiency improvements and bugfixes to use-def map building (#12373)
Adds inference tests sufficient to give full test coverage of the
`UseDefMapBuilder::merge` method.

In the process I realized that we could implement visiting of if
statements in `SemanticBuilder` with fewer `snapshot`, `restore`, and
`merge` operations, so I restructured that visit a bit.

I also found one correctness bug in the `merge` method (it failed to
extend the given snapshot with "unbound" for any missing symbols,
meaning we would just lose the fact that the symbol could be unbound in
the merged-in path), and two efficiency bugs (if one of the ranges to
merge is empty, we can just use the other one, no need for copies, and
if the ranges are overlapping -- which can occur with nested branches --
we can still just merge them with no copies), and fixed all three.
2024-07-18 09:24:58 -07:00
Dhruv Manilawala
8f1be31289 Update 0.5.3 changelog caption (#12383)
As suggested in
https://github.com/astral-sh/ruff/pull/12381#discussion_r1683123202
2024-07-18 16:17:07 +00:00
Dhruv Manilawala
8cfbac71a4 Bump version to 0.5.3 (#12381) 2024-07-18 16:07:34 +00:00
Charlie Marsh
9460857932 Migrate to standalone docs repo (#12341)
## Summary

See: https://github.com/astral-sh/uv/pull/5081
2024-07-18 15:35:49 +00:00
Dhruv Manilawala
a028ca22f0 Add VS Code specific extension settings (#12380)
## Summary

This PR adds VS Code specific extension settings to the online
documentation.

The content is basically taken from the `package.json` file in the
`ruff-vscode` repository.
2024-07-18 20:58:14 +05:30
Dhruv Manilawala
7953f6aa79 Update versioning policy for editor integration (#12375)
## Summary

Following the stabilization of the Ruff language server, we need to
update our versioning policy to account for any changes in it. This
could be server settings, capabilities, etc.

This PR also adds a new section for the VS Code extension which is
adopted from [Biome's versioning
policy](https://biomejs.dev/internals/versioning/#visual-studio-code-extension)
for the same.

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-07-18 15:17:36 +00:00
Charlie Marsh
764d9ab4ee Allow repeated-equality-comparison for mixed operations (#12369)
## Summary

This PR allows us to fix both expressions in `foo == "a" or foo == "b"
or ("c" != bar and "d" != bar)`, but limits the rule to consecutive
comparisons, following https://github.com/astral-sh/ruff/issues/7797.

I think this logic was _probably_ added because of
https://github.com/astral-sh/ruff/pull/12368 -- the intent being that
we'd replace the _entire_ expression.
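
For illustration, the before/after of the fix on that expression (assuming the rule's usual `in`-tuple rewrite):

```python
foo, bar = "a", "c"

# Before: two groups of consecutive comparisons against the same targets
if foo == "a" or foo == "b" or ("c" != bar and "d" != bar):
    ...

# After the fix, both groups are rewritten:
if foo in ("a", "b") or bar not in ("c", "d"):
    ...
```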
2024-07-18 11:16:40 -04:00
Charlie Marsh
9b9d701500 Allow additional arguments for sum and max comprehensions (#12364)
## Summary

These can have other arguments, so it seems wrong to gate on a single
argument here.

Closes https://github.com/astral-sh/ruff/issues/12358.
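
For example, a call that was previously skipped because of the extra argument (a sketch; `values` is illustrative):

```python
values = [1, 2, 3]

# Previously ignored due to the second (`start`) argument; now flagged,
# with a fix along the lines of rewriting the comprehension as a generator:
total = sum([x * x for x in values], 10)
total = sum((x * x for x in values), 10)
```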
2024-07-18 08:37:28 -04:00
Dhruv Manilawala
648cca199b Add docs for Ruff language server (#12344)
## Summary

This PR adds documentation for the Ruff language server.

It mainly does the following:
1. Combines various READMEs containing instructions for different editor
setups into their respective sections of the online docs
2. Provides an enumerated list of server settings. Additionally, it also
provides a section for VS Code specific options.
3. Adds a "Features" section which enumerates all the current
capabilities of the native server

For (2), the settings documentation is done manually, but a future
improvement (easier after `ruff-lsp` is deprecated) is to move the docs
into a Rust struct and generate the documentation from the code itself.
And, the VS Code extension specific options can be generated by diffing
against the `package.json` in the `ruff-vscode` repository.

### Structure

1. Setup: This section contains the configuration for setting up the
language server for different editors
2. Features: This section contains a list of capabilities provided by
the server, along with a short GIF showcasing each
3. Settings: This section contains an enumerated list of settings in a
similar format to the one for the linter / formatter
4. Migrating from `ruff-lsp`

> [!NOTE]
>
> The settings page is manually written but could possibly be
auto-generated via a macro similar to `OptionsMetadata` on the
`ClientSettings` struct

resolves: #11217 

## Test Plan

Generate and open the documentation locally using:
1. `python scripts/generate_mkdocs.py`
2. `mkdocs serve -f mkdocs.insiders.yml`
2024-07-18 17:41:43 +05:30
Dhruv Manilawala
2e77b775b0 Consider --preview flag for server subcommand (#12208)
## Summary

This PR removes the requirement of the `--preview` flag to run `ruff
server` and instead treats it as an indicator to turn on preview
mode for the linter and the formatter.

resolves: #12161 

## Test Plan

Add test cases to assert the `preview` value is updated accordingly.

In an editor context, I used the local `ruff` executable in Neovim with
the `--preview` flag and verified that the preview-only violations are
being highlighted.

Running with:
```lua
require('lspconfig').ruff.setup({
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
})
```
The screenshot shows that `E502` is highlighted with the below config in
`pyproject.toml`:

<img width="877" alt="Screenshot 2024-07-17 at 16 43 09"
src="https://github.com/user-attachments/assets/c7016ef3-55b1-4a14-bbd3-a07b1bcdd323">
2024-07-18 11:05:01 +05:30
Dhruv Manilawala
ebe5b06c95 Use fallback settings when indexing the project (#12362)
## Summary

This PR updates the settings index building logic in the language server
to consider the fallback settings for applying ignore filters in
`WalkBuilder` and the exclusion via `exclude` / `extend-exclude`.

This flow matches the one in the `ruff` CLI, where the root settings are
built by (1) finding the workspace setting in the ancestor directory, (2)
finding the user configuration if that's missing, and (3) falling back to
the default configuration.

Previously, the index building logic was executed before (2) and
(3). This PR reverses the logic so that the exclusion /
`respect_gitignore` settings are taken from the default settings if
there's no workspace / user settings. This has the benefit that the
server no longer enters the `.git` directory or any other excluded
directory when a user opens a file in the home directory.

Related to #11366

## Test plan

Opened a test file from the home directory and confirmed with the debug
trace (removed in #12360) that the server excludes the `.git` directory
when indexing.
2024-07-18 09:16:45 +05:30
Carl Meyer
b2a49d8140 [red-knot] better docs for use-def maps (#12357)
Add better doc comments and comments, as well as one debug assertion, to
use-def map building.
2024-07-17 17:50:58 -07:00
Carl Meyer
985a999234 [red-knot] better docs for type inference (#12356)
Add some docs for how type inference works.

Also a couple minor code changes to rearrange or rename for better
clarity.
2024-07-17 13:36:58 -07:00
cake-monotone
1df51b1fbf [pyupgrade] Implement unnecessary-default-type-args (UP043) (#12371)
## Summary

Add and implement a new rule, `unnecessary-default-type-args`, under the
`UP` category (`UP043`).

```py
from collections.abc import Generator

# < py313
Generator[int, None, None]

# >= py313
Generator[int]
```

I think that as Python 3.13 develops, there might be more default type
arguments added besides `Generator` and `AsyncGenerator`. So, I made
this more flexible to accommodate future changes.

related issue: #12286

## Test Plan

snapshot included..!
2024-07-17 19:45:43 +00:00
Charlie Marsh
1435b0f022 Remove discard, remove, and pop allowance for loop-iterator-mutation (#12365)
## Summary

Pretty sure this should still be an error, but also, I think I added
this because of ecosystem CI? So want to see what pops up.

Closes https://github.com/astral-sh/ruff/issues/12164.
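
A sketch of the pattern this re-enables reporting for (names illustrative):

```python
items = [1, 2, 3]

for item in items:
    # Flagged again by B909: the loop's iterable is mutated during
    # iteration, which can silently skip elements.
    items.remove(item)
```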
2024-07-17 17:42:14 +00:00
Charlie Marsh
e39298dcbc Use UTF-8 as default encoding in unspecified-encoding fix (#12370)
## Summary

This is the _intended_ default that PEP 597 _wants_, but it's not
backwards compatible. The fix is already unsafe, so it's better for us
to recommend the desired and expected behavior.

Closes https://github.com/astral-sh/ruff/issues/12069.
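
A sketch of what the updated fix produces (file name illustrative):

```python
# Flagged by unspecified-encoding (PLW1514):
with open("data.txt") as f:
    contents = f.read()

# The (unsafe) fix now inserts UTF-8 explicitly:
with open("data.txt", encoding="utf-8") as f:
    contents = f.read()
```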
2024-07-17 12:57:27 -04:00
Charlie Marsh
1de8ff3308 Detect enumerate iterations in loop-iterator-mutation (#12366)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12164.
2024-07-17 12:03:36 -04:00
Charlie Marsh
72e02206d6 Avoid dropping extra boolean operations in repeated-equality-comparison (#12368)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12062.
2024-07-17 11:49:27 -04:00
Charlie Marsh
80f0116641 Ignore self and cls when counting arguments (#12367)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12320.
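
A sketch of the effect, assuming the default `max-args` of 5 for PLR0913:

```python
class Service:
    # Counting `self`, this method has six arguments; with `self` ignored
    # it has five and no longer exceeds the default limit.
    def configure(self, host, port, user, password, timeout):
        ...
```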
2024-07-17 10:49:38 -04:00
Micha Reiser
79b535587b [red-knot] Reload notebook on file change (#12361) 2024-07-17 12:23:48 +00:00
Dhruv Manilawala
6e0cbe0f35 Remove leftover debug log (#12360)
This was a leftover from #12299
2024-07-17 17:52:44 +05:30
Micha Reiser
91338ae902 [red-knot] Add basic workspace support (#12318) 2024-07-17 11:34:21 +02:00
Micha Reiser
0c72577b5d [red-knot] Add notebook support (#12338) 2024-07-17 08:26:33 +00:00
Matthew Runyon
fe04f2b09d Publish wasm API to npm (#12317) 2024-07-17 08:50:38 +02:00
Carl Meyer
073588b48e [red-knot] improve semantic index tests (#12355)
Improve semantic index tests with better assertions than just `.len()`,
and re-add a use-definition test that was commented out in the initial
switch to Salsa.
2024-07-16 23:46:49 -07:00
Alex Waygood
9a2dafb43d [red-knot] Add support for editable installs to the module resolver (#12307)
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Carl Meyer <carl@astral.sh>
2024-07-16 18:17:47 +00:00
Carl Meyer
595b1aa4a1 [red-knot] per-definition inference, use-def maps (#12269)
Implements definition-level type inference, with basic control flow
(only if statements and if expressions so far) in Salsa.

There are a couple key ideas here:

1) We can do type inference queries at any of three region
granularities: an entire scope, a single definition, or a single
expression. These are represented by the `InferenceRegion` enum, and the
entry points are the salsa queries `infer_scope_types`,
`infer_definition_types`, and `infer_expression_types`. Generally
per-scope will be used for scopes that we are directly checking and
per-definition will be used anytime we are looking up symbol types from
another module/scope. Per-expression should be uncommon: used only for
the RHS of an unpacking or multi-target assignment (to avoid
re-inferring the RHS once per symbol defined in the assignment) and for
test nodes in type narrowing (e.g. the `test` of an `If` node). All
three queries return a `TypeInference` with a map of types for all
definitions and expressions within their region. If you do e.g.
scope-level inference, when it hits a definition, or an
independently-inferable expression, it should use the relevant query
(which may already be cached) to get all types within the smaller
region. This avoids double-inferring smaller regions, even though larger
regions encompass smaller ones.

2) Instead of building a control-flow graph and lazily traversing it to
find definitions which reach a use of a name (which is O(n^2) in the
worst case), instead semantic indexing builds a use-def map, where every
use of a name knows which definitions can reach that use. We also no
longer track all definitions of a symbol in the symbol itself; instead
the use-def map also records which defs remain visible at the end of the
scope, and considers these the publicly-visible definitions of the
symbol (see below).
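
As a rough Python-level illustration of what the use-def map records (illustrative code, not red-knot's API):

```python
import random

flag = random.choice([True, False])

x = 1        # definition 1 of `x`
if flag:
    x = "a"  # definition 2 of `x`

# Use of `x`: the use-def map records that definitions 1 and 2 can both
# reach this use, so the inferred type here is the union `int | str`.
print(x)

# At the end of the scope, both definitions remain visible, so both are
# publicly-visible definitions of `x`.
```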

Major items left as TODOs in this PR, to be done in follow-up PRs:

1) Free/global references aren't supported yet (only lookup based on
definitions in current scope), which means the override-check example
doesn't currently work. This is the first thing I'll fix as follow-up to
this PR.

2) Control flow outside of if statements and expressions.

3) Type narrowing.

There are also some smaller relevant changes here:

1) Eliminate `Option` in the return type of member lookups; instead
always return `Type::Unbound` for a name we can't find. Also use
`Type::Unbound` for modules we can't resolve (not 100% sure about this
one yet.)

2) Eliminate the use of the terms "public" and "root" to refer to
module-global scope or symbols. Instead consistently use the term
"module-global". It's longer, but it's the clearest, and the most
consistent with typical Python terminology. In particular I don't like
"public" for this use because it has other implications around author
intent (is an underscore-prefixed module-global symbol "public"?). And
"root" is just not commonly used for this in Python.

3) Eliminate the `PublicSymbol` Salsa ingredient. Many non-module-global
symbols can also be seen from other scopes (e.g. by a free var in a
nested scope, or by class attribute access), and thus need to have a
"public type" (that is, the type not as seen from a particular use in
the control flow of the same scope, but the type as seen from some other
scope.) So all symbols need to have a "public type" (here I want to keep
the use of the term "public", unless someone has a better term to
suggest -- since it's "public type of a symbol" and not "public symbol"
the confusion with e.g. initial underscores is less of an issue.) At
least initially, I would like to try not having special handling for
module-global symbols vs other symbols.

4) Switch to using "definitions that reach end of scope" rather than
"all definitions" in determining the public type of a symbol. I'm
convinced that in general this is the right way to go. We may want to
refine this further in future for some free-variable cases, but it can
be changed purely by making changes to the building of the use-def map
(the `public_definitions` index in it), without affecting any other
code. One consequence of combining this with no control-flow support
(just last-definition-wins) is that some inference tests now give more
wrong-looking results; I left TODO comments on these tests to fix them
when control flow is added.

And some potential areas for consideration in the future:

1) Should `symbol_ty` be a Salsa query? This would require making all
symbols a Salsa ingredient, and tracking even more dependencies. But it
would save some repeated reconstruction of unions, for symbols with
multiple public definitions. For now I'm not making it a query, but open
to changing this in future with actual perf evidence that it's better.
2024-07-16 11:02:30 -07:00
Charlie Marsh
30cef67b45 Remove BindingKind::ComprehensionVar (#12347)
## Summary

This doesn't seem to be used anywhere. Maybe it mattered when we didn't
handle generator scopes properly?
2024-07-16 11:18:04 -04:00
Charlie Marsh
d0c5925672 Consider expression before statement when determining binding kind (#12346)
## Summary

I believe these should always bind more tightly -- e.g., in:

```python
for _ in bar(baz for foo in [1]):
    pass
```

The inner `baz` and `foo` should be considered comprehension variables,
not for loop bindings.

We need to revisit this more holistically. In some of these cases,
`BindingKind` should probably be a flag, not an enum, since the values
aren't mutually exclusive. Separately, we should probably be more
precise in how we set it (e.g., by passing down from the parent rather
than sniffing in `handle_node_store`).

Closes https://github.com/astral-sh/ruff/issues/12339
2024-07-16 14:49:26 +00:00
Micha Reiser
b1487b6b4f Ignore more OpenAI notebooks with syntax errors in the ecosystem check (#12342) 2024-07-16 10:32:00 +02:00
Micha Reiser
85ae02d62e [red-knot] Add walk_directories to System (#12297) 2024-07-16 06:40:10 +00:00
konsti
9a817a2922 Insert empty line between suite and alternative branch after def/class (#12294)
When there is a function or class definition at the end of a suite
followed by the beginning of an alternative block, we have to insert a
single empty line between them.

In the if-else-statement example below, we insert an empty line after
the `foo` in the if-block, but none after the else-block `foo`, since in
the latter case the enclosing suite already adds empty lines.

```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"
else:
    def foo():
        return "old"
class Bar:
    pass
```
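
For clarity, the expected formatted output would look roughly like this:

```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"

else:
    def foo():
        return "old"


class Bar:
    pass
```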

To do so, we track whether the current suite is the last one in the
current statement with a new option on the suite kind.

Fixes #12199

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-15 12:59:33 +02:00
Dhruv Manilawala
ecd4b4d943 Build settings index in parallel for the native server (#12299)
## Summary

This PR updates the server to build the settings index in parallel using
logic similar to `python_files_in_path`.

This should help with https://github.com/astral-sh/ruff/issues/11366 but
ideally we would want to build it lazily.

## Test Plan

`cargo insta test`
2024-07-15 09:57:54 +00:00
github-actions[bot]
b9a8cd390f Sync vendored typeshed stubs (#12325)
Close and reopen this PR to trigger CI

Co-authored-by: typeshedbot <>
2024-07-15 07:46:55 +01:00
renovate[bot]
2348714081 Update pre-commit dependencies (#12330) 2024-07-15 07:27:10 +01:00
renovate[bot]
3817b207cf Update NPM Development dependencies (#12331)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 08:08:10 +02:00
renovate[bot]
b1cf9ea663 Update Rust crate clap_complete_command to 0.6.0 (#12332)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 08:05:07 +02:00
renovate[bot]
8ad10b9307 Update Rust crate compact_str to 0.8.0 (#12333)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-15 06:03:23 +00:00
renovate[bot]
9c5524a9a2 Update Rust crate tikv-jemallocator to 0.6.0 (#12335)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 08:01:43 +02:00
renovate[bot]
1530223311 Update Rust crate serde_with to v3.9.0 (#12334)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 07:58:18 +02:00
renovate[bot]
b9671522c4 Update Rust crate thiserror to v1.0.62 (#12329)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 05:54:00 +00:00
renovate[bot]
9918202422 Update Rust crate syn to v2.0.71 (#12328)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 07:52:09 +02:00
renovate[bot]
42e7147860 Update Rust crate clap to v4.5.9 (#12326)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 07:51:48 +02:00
renovate[bot]
25feab93f8 Update Rust crate matchit to v0.8.4 (#12327)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-15 07:50:58 +02:00
Charlie Marsh
dc8db1afb0 Make some amendments to the v0.5.2 changelog (#12319) 2024-07-14 14:47:51 +00:00
Charlie Marsh
18c364d5df [flake8-bandit] Support explicit string concatenations in S310 HTTP detection (#12315)
Closes https://github.com/astral-sh/ruff/issues/12314.
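
For illustration, the kind of call this now recognizes as HTTP(S)-safe (a sketch; `endpoint` is illustrative):

```python
import urllib.request

endpoint = "status"

# The literal scheme at the start of the explicit concatenation now marks
# the URL as safe, so S310 no longer fires here:
urllib.request.urlopen("https://example.com/" + endpoint)
```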
2024-07-14 10:44:08 -04:00
Charlie Marsh
7a7c601d5e Bump version to v0.5.2 (#12316) 2024-07-14 10:43:58 -04:00
Charlie Marsh
3bfbbbc78c Avoid allocation when validating HTTP and HTTPS prefixes (#12313) 2024-07-13 17:25:02 -04:00
Tim Chan
1a3ee45b23 [flake8-bandit] Avoid S310 violations for HTTP-safe f-strings (#12305)
this resolves https://github.com/astral-sh/ruff/issues/12245
2024-07-13 20:57:05 +00:00
Charlie Marsh
65848869d5 [refurb] Make list-reverse-copy an unsafe fix (#12303)
## Summary

I don't know that there's more to do here. We could consider not raising
the violation at all for arguments, but that would have some false
negatives and could also be surprising to users.

Closes https://github.com/astral-sh/ruff/issues/12267.
2024-07-13 15:45:35 -04:00
Charlie Marsh
456d6a2fb2 Consider with blocks as single-item branches (#12311)
## Summary

Ensures that, e.g., the following is not considered a
redefinition-without-use:

```python
import contextlib

foo = None
with contextlib.suppress(ImportError):
    from some_module import foo
```

Closes https://github.com/astral-sh/ruff/issues/12309.
2024-07-13 15:22:17 -04:00
Charlie Marsh
940df67823 Omit code frames for fixes with empty ranges (#12304)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12291.

## Test Plan

```shell
❯ cargo run check ../uv/foo --select INP
/Users/crmarsh/workspace/uv/foo/bar/baz.py:1:1: INP001 File `/Users/crmarsh/workspace/uv/foo/bar/baz.py` is part of an implicit namespace package. Add an `__init__.py`.
Found 1 error.
```
2024-07-12 15:21:28 +00:00
Charlie Marsh
e58713e2ac Make cache-write failures non-fatal (#12302)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12284.
2024-07-12 10:33:54 -04:00
Charlie Marsh
aa5c53b38b Remove 'non-obvious' allowance for E721 (#12300)
## Summary

I don't fully understand the purpose of this. In #7905, it was just
copied over from the previous non-preview implementation. But it means
that (e.g.) we don't treat `type(self.foo)` as a type -- which is wrong.

Closes https://github.com/astral-sh/ruff/issues/12290.
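
For example, under the new behavior (a sketch):

```python
class Widget:
    def is_exactly_int(self):
        # `type(self.foo)` is now treated as a type, so this comparison is
        # flagged by E721; prefer `type(self.foo) is int` for an exact check.
        return type(self.foo) == int
```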
2024-07-12 09:21:43 -04:00
Charlie Marsh
4e6ecb2348 Treat not operations as boolean tests (#12301)
## Summary

Closes https://github.com/astral-sh/ruff/issues/12285.
2024-07-12 08:53:37 -04:00
Alex Waygood
6febd96dfe [red-knot] Add a read_directory() method to the ruff_db::system::System trait (#12289) 2024-07-12 12:31:05 +00:00
Matthias
17e84d5f40 [numpy] Update NPY201: add np.NAN to exception (#12292)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-12 12:09:55 +00:00
Victorien
b6545ce5d6 Use indentation consistently (#12293) 2024-07-12 14:08:56 +02:00
Dhruv Manilawala
90e9aae3f4 Consider nested configs for settings reloading (#12253)
## Summary

This PR fixes a bug in the settings reloading logic to consider nested
configuration in a workspace.

fixes: #11766

## Test Plan


https://github.com/astral-sh/ruff/assets/67177269/69704b7b-44b9-4cc7-b5a7-376bf87c6ef4
2024-07-12 05:00:08 +00:00
Micha Reiser
bd01004a42 Use space separator before parenthesiszed expressions in comprehensions with leading comments. (#12282) 2024-07-11 22:38:12 +02:00
Gaétan Lepage
d0298dc26d Explicitly add schemars to ruff_python_ast Cargo.toml (#12275)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-11 06:46:34 +00:00
Jack Desert
bbb9fe1692 [Docs] Clear instruction for single quotes (linter and formatter) (#12015)
## Summary

In order to use single quotes with both the ruff linter and the ruff
formatter, two different rules must be applied. This was not clear to me
when internet searching "configure ruff single quotes", and eventually
I filed this issue:

https://github.com/astral-sh/ruff/issues/12003
2024-07-10 16:29:29 +00:00
Alex Waygood
5b21922420 [red-knot] Add more stress tests for module resolver invalidation (#12272) 2024-07-10 14:34:06 +00:00
Micha Reiser
abcf07c8c5 Change File::touch_path to only take a SystemPath (#12273) 2024-07-10 12:15:14 +00:00
Alex Waygood
e8b5341c97 [red-knot] Rework module resolver tests (#12260) 2024-07-10 10:40:21 +00:00
Auguste Lalande
880c31d164 [flake8-async] Update ASYNC116 to match upstream (#12266)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-10 09:58:33 +02:00
Auguste Lalande
d365f1a648 [flake8-async] Update ASYNC115 to match upstream (#12262)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-10 09:43:11 +02:00
Micha Reiser
4cc7bc9d32 Use more threads when discovering python files (#12258) 2024-07-10 09:29:17 +02:00
Dhruv Manilawala
0bb2fc6eec Consider include, extend-include for the native server (#12252)
## Summary

This PR updates the native server to consider the `include` and
`extend-include` file resolver settings.

fixes: #12242 

## Test Plan

Note: Settings reloading doesn't work for nested configs, which is fixed
in #12253, so the preview here only showcases root-level config.

https://github.com/astral-sh/ruff/assets/67177269/e8969128-c175-4f98-8114-0d692b906cc8
2024-07-10 04:12:57 +00:00
Auguste Lalande
855d62cdde [flake8-async] Update ASYNC110 to match upstream (#12261)
## Summary

Update the name of `ASYNC110` to match
[upstream](https://flake8-async.readthedocs.io/en/latest/rules.html).

Also update the functionality to match upstream by adding support for
`asyncio` and `anyio` (gated behind preview).

Part of https://github.com/astral-sh/ruff/issues/12039.

## Test Plan

Added tests for `asyncio` and `anyio`
2024-07-09 17:17:28 -07:00
Auguste Lalande
88abc6aed8 [flake8-async] Update ASYNC100 to match upstream (#12221)
## Summary

Update the name of `ASYNC100` to match
[upstream](https://flake8-async.readthedocs.io/en/latest/rules.html).

Also update the functionality to match upstream by supporting additional
context managers from asyncio and anyio, matching this
[list](https://flake8-async.readthedocs.io/en/latest/glossary.html#timeout-context).

Part of #12039.

## Test Plan

Added the new context managers to the fixture.
2024-07-09 17:55:18 +00:00
Alex Waygood
6fa4e32ad3 [red-knot] Use vendored typeshed stubs for stdlib module resolution (#12224) 2024-07-09 09:21:52 +00:00
Alex Waygood
000dabcd88 [red-knot] Allow module-resolution options to be specified via the CLI (#12246) 2024-07-09 09:16:28 +00:00
Micha Reiser
f8ff42a13d [red-knot] Prevent salsa cancellation from aborting the program (#12183) 2024-07-09 08:26:15 +00:00
Micha Reiser
b5834d57af [red-knot] Only store absolute paths in Files (#12215) 2024-07-09 09:52:13 +02:00
renovate[bot]
3d3ff10bb9 Update dependency react-resizable-panels to v2.0.20 (#12231)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-07-09 07:26:08 +00:00
Micha Reiser
ac04380f36 [red-knot] Rename FileSystem to System (#12214) 2024-07-09 07:20:51 +00:00
Auguste Lalande
16a63c88cf [flake8-async] Update ASYNC109 to match upstream (#12236)
## Summary

Update the name of `ASYNC109` to match
[upstream](https://flake8-async.readthedocs.io/en/latest/rules.html).

Also update the functionality to match upstream by supporting
additional context managers from `asyncio` and `anyio`. This doesn't
change any of the detection functionality, but recommends additional
context managers from `asyncio` and `anyio` depending on context.

Part of https://github.com/astral-sh/ruff/issues/12039.

## Test Plan

Added fixture for asyncio recommendation
2024-07-09 04:14:27 +00:00
Dani Bodor
10f07d88a2 Update help and documentation for --output-format to reflect "full" default (#12248)
fix #12247 

Changed the help text to list "full" as the default for --output-format
and removed "text" as an option (as it is no longer supported).
2024-07-09 02:45:24 +00:00
Evan Rittenhouse
1e04bd0b73 Restrict forwarding the newline argument in open() calls to Python versions >= 3.10 (#12244)
Fixes https://github.com/astral-sh/ruff/issues/12222
2024-07-09 02:43:31 +00:00
epenet
2041b0e5fb [flake8-return] Exempt properties from explicit return rule (RET501) (#12243)
First contribution - apologies if something is missing

Fixes #12197
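
A sketch of the now-exempt pattern:

```python
class Node:
    @property
    def parent(self):
        # No longer flagged by RET501: in a property, an explicit
        # `return None` communicates the property's value.
        return None
```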
2024-07-08 19:39:30 -07:00
Micha Reiser
bf3d903939 Warn about D203 formatter incompatibility (#12238) 2024-07-08 15:12:14 +02:00
Micha Reiser
64855c5f06 Remove default-run from 'red_knot' crate (#12241) 2024-07-08 09:31:45 +00:00
renovate[bot]
b5ab4ce293 Update pre-commit dependencies (#12232) 2024-07-08 01:51:24 +00:00
renovate[bot]
c396b9f08b Update cloudflare/wrangler-action action to v3.7.0 (#12235) 2024-07-07 21:41:24 -04:00
renovate[bot]
9ed3893e6d Update Rust crate ureq to v2.10.0 (#12234) 2024-07-07 21:41:18 -04:00
renovate[bot]
30c9604c1d Update NPM Development dependencies (#12233) 2024-07-07 21:41:09 -04:00
renovate[bot]
7e4a1c2b33 Update Rust crate syn to v2.0.69 (#12230) 2024-07-07 21:40:42 -04:00
renovate[bot]
e379160941 Update Rust crate serde_with to v3.8.3 (#12229) 2024-07-07 21:40:35 -04:00
renovate[bot]
38b503ebcc Update Rust crate serde_json to v1.0.120 (#12228) 2024-07-07 21:40:29 -04:00
renovate[bot]
dac476f2c0 Update Rust crate serde to v1.0.204 (#12227) 2024-07-07 21:40:20 -04:00
renovate[bot]
754e5d6a7d Update Rust crate imara-diff to v0.1.6 (#12226) 2024-07-07 21:40:03 -04:00
Alex Waygood
d9c15e7a12 [Ecosystem checks] trio has changed its default branch name to main (#12225)
Fixes CI errors seen in
https://github.com/astral-sh/ruff/pull/12224#issuecomment-2212594024

x-ref
17b3644f64
2024-07-07 23:37:27 +01:00
Trim21
757c75752e [flake8-bandit] fix S113 false positive for httpx without timeout argument (#12213)
## Summary

S113 exists because `requests` doesn't have a default timeout, so a
request without a timeout may hang indefinitely.

> B113: Test for missing requests timeout
This plugin test checks for requests or httpx calls without a timeout
specified.
>
> Nearly all production code should use this parameter in nearly all
requests, **Failure to do so can cause your program to hang
indefinitely.**


But httpx has a default timeout of 5 seconds, so S113 for an httpx
request without a `timeout` argument is a false positive; the only valid
case would be `timeout=None`.

https://www.python-httpx.org/advanced/timeouts/

> HTTPX is careful to enforce timeouts everywhere by default.
>
> The default behavior is to raise a TimeoutException after 5 seconds of
network inactivity.
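
A sketch of the resulting behavior (URL illustrative):

```python
import httpx

# No longer flagged: httpx applies a 5-second timeout by default.
response = httpx.get("https://example.com")

# Per the reasoning above, the case still worth flagging is explicitly
# disabling the timeout:
response = httpx.get("https://example.com", timeout=None)
```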


## Test Plan

Snapshots updated.
2024-07-06 14:08:40 -05:00
Micha Reiser
9d61727289 [red-knot] Exclude drop time in benchmark (#12218) 2024-07-06 17:35:00 +02:00
Alex Waygood
a62a432a48 [red-knot] Respect typeshed's VERSIONS file when resolving stdlib modules (#12141) 2024-07-05 22:43:31 +00:00
Charlie Marsh
8198723201 Move SELinux docs to example (#12211) 2024-07-05 20:42:43 +00:00
Maximilian Kolb
7df10ea3e9 Docs: Respect SELinux with podman for docker mount (#12102)
Tested on Fedora 40 with Podman 5.1.1 and ruff "0.5.0" and "latest".
Source: https://unix.stackexchange.com/q/651198


## Error without fix

````
$ podman run --rm -it -v .:/io ghcr.io/astral-sh/ruff:latest check
error: Failed to initialize cache at /io/.ruff_cache: Permission denied (os error 13)
warning: Encountered error: Permission denied (os error 13)
All checks passed!

$ podman run --rm -it -v .:/io ghcr.io/astral-sh/ruff:latest format
error: Failed to initialize cache at /io/.ruff_cache: Permission denied (os error 13)
error: Encountered error: Permission denied (os error 13)
````

## Summary

Running ruff by using a docker container requires `:Z` when mounting the
current directory on Fedora with SELinux and Podman.

## Test Plan

````
$ podman run --rm -it -v .:/io:Z ghcr.io/astral-sh/ruff:latest check
$ podman run --rm -it -v .:/io:Z ghcr.io/astral-sh/ruff:0.5.0 check
````
2024-07-05 15:39:00 -05:00
Carl Meyer
0e44235981 [red-knot] intern types using Salsa (#12061)
Intern types using Salsa interning instead of in the `TypeInference`
result.

This eliminates the need for `TypingContext`, and also paves the way for
finer-grained type inference queries.
2024-07-05 12:16:37 -07:00
Dhruv Manilawala
7b50061b43 Fix eslint errors for playground source code (#12207)
Refer
https://github.com/astral-sh/ruff/actions/runs/9808907924/job/27086333001
2024-07-05 13:43:16 +00:00
358 changed files with 18458 additions and 8762 deletions


@@ -21,42 +21,131 @@ jobs:
mkdocs:
runs-on: ubuntu-latest
env:
CF_API_TOKEN_EXISTS: ${{ secrets.CF_API_TOKEN != '' }}
MKDOCS_INSIDERS_SSH_KEY_EXISTS: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY != '' }}
steps:
- uses: actions/checkout@v4
with:
ref: ${{ inputs.ref }}
- uses: actions/setup-python@v5
with:
python-version: 3.12
- name: "Set docs version"
run: |
version="${{ (inputs.plan != '' && fromJson(inputs.plan).announcement_tag) || inputs.ref }}"
# if version is missing, exit with error
if [[ -z "$version" ]]; then
echo "Can't build docs without a version."
exit 1
fi
# Use version as display name for now
display_name="$version"
echo "version=$version" >> $GITHUB_ENV
echo "display_name=$display_name" >> $GITHUB_ENV
- name: "Set branch name"
run: |
version="${{ env.version }}"
display_name="${{ env.display_name }}"
timestamp="$(date +%s)"
# create branch_display_name from display_name by replacing all
# characters disallowed in git branch names with hyphens
branch_display_name="$(echo "$display_name" | tr -c '[:alnum:]._' '-' | tr -s '-')"
echo "branch_name=update-docs-$branch_display_name-$timestamp" >> $GITHUB_ENV
echo "timestamp=$timestamp" >> $GITHUB_ENV
- name: "Add SSH key"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
uses: webfactory/ssh-agent@v0.9.0
with:
ssh-private-key: ${{ secrets.MKDOCS_INSIDERS_SSH_KEY }}
- name: "Install Rust toolchain"
run: rustup show
- uses: Swatinem/rust-cache@v2
- name: "Install Insiders dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: pip install -r docs/requirements-insiders.txt
- name: "Install dependencies"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: pip install -r docs/requirements.txt
- name: "Copy README File"
run: |
python scripts/transform_readme.py --target mkdocs
python scripts/generate_mkdocs.py
- name: "Build Insiders docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS == 'true' }}
run: mkdocs build --strict -f mkdocs.insiders.yml
- name: "Build docs"
if: ${{ env.MKDOCS_INSIDERS_SSH_KEY_EXISTS != 'true' }}
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.6.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}
# `github.head_ref` is only set during pull requests and for manual runs or tags we use `main` to deploy to production
command: pages deploy site --project-name=astral-docs --branch ${{ github.head_ref || 'main' }} --commit-hash ${GITHUB_SHA}
- name: "Clone docs repo"
run: |
version="${{ env.version }}"
git clone https://${{ secrets.ASTRAL_DOCS_PAT }}@github.com/astral-sh/docs.git astral-docs
- name: "Copy docs"
run: rm -rf astral-docs/site/ruff && mkdir -p astral-docs/site && cp -r site/ruff astral-docs/site/
- name: "Commit docs"
working-directory: astral-docs
run: |
branch_name="${{ env.branch_name }}"
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
git checkout -b $branch_name
git add site/ruff
git commit -m "Update ruff documentation for $version"
- name: "Create Pull Request"
working-directory: astral-docs
env:
GITHUB_TOKEN: ${{ secrets.ASTRAL_DOCS_PAT }}
run: |
version="${{ env.version }}"
display_name="${{ env.display_name }}"
branch_name="${{ env.branch_name }}"
# set the PR title
pull_request_title="Update ruff documentation for $display_name"
# Delete any existing pull requests that are open for this version
# by checking against pull_request_title because the new PR will
# supersede the old one.
gh pr list --state open --json title --jq '.[] | select(.title == "$pull_request_title") | .number' | \
xargs -I {} gh pr close {}
# push the branch to GitHub
git push origin $branch_name
# create the PR
gh pr create --base main --head $branch_name \
--title "$pull_request_title" \
--body "Automated documentation update for $display_name" \
--label "documentation"
- name: "Merge Pull Request"
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
working-directory: astral-docs
env:
GITHUB_TOKEN: ${{ secrets.ASTRAL_DOCS_PAT }}
run: |
branch_name="${{ env.branch_name }}"
# auto-merge the PR if the build was triggered by a release. Manual builds should be reviewed by a human.
# give the PR a few seconds to be created before trying to auto-merge it
sleep 10
gh pr merge --squash $branch_name


@@ -47,7 +47,7 @@ jobs:
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.6.1
uses: cloudflare/wrangler-action@v3.7.0
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

.github/workflows/publish-wasm.yml (new file)

@@ -0,0 +1,55 @@
# Build and publish ruff-api for wasm.
#
# Assumed to run as a subworkflow of .github/workflows/release.yml; specifically, as a publish
# job within `cargo-dist`.
name: "Build and publish wasm"
on:
workflow_dispatch:
workflow_call:
inputs:
plan:
required: true
type: string
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
jobs:
ruff_wasm:
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
strategy:
matrix:
target: [web, bundler, nodejs]
fail-fast: false
steps:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup target add wasm32-unknown-unknown
- uses: jetli/wasm-pack-action@v0.4.0
- uses: jetli/wasm-bindgen-action@v0.2.0
- name: "Run wasm-pack build"
run: wasm-pack build --target ${{ matrix.target }} crates/ruff_wasm
- name: "Rename generated package"
run: | # Replace the package name w/ jq
jq '.name="@astral-sh/ruff-wasm-${{ matrix.target }}"' crates/ruff_wasm/pkg/package.json > /tmp/package.json
mv /tmp/package.json crates/ruff_wasm/pkg
- run: cp LICENSE crates/ruff_wasm/pkg # wasm-pack does not put the LICENSE file in the pkg
- uses: actions/setup-node@v4
with:
node-version: 18
registry-url: "https://registry.npmjs.org"
- name: "Publish (dry-run)"
if: ${{ inputs.plan == '' || fromJson(inputs.plan).announcement_tag_is_implicit }}
run: npm publish --dry-run crates/ruff_wasm/pkg
- name: "Publish"
if: ${{ inputs.plan != '' && !fromJson(inputs.plan).announcement_tag_is_implicit }}
run: npm publish --provenance --access public crates/ruff_wasm/pkg
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}


@@ -214,16 +214,32 @@ jobs:
"id-token": "write"
"packages": "write"
custom-publish-wasm:
needs:
- plan
- host
if: ${{ !fromJson(needs.plan.outputs.val).announcement_is_prerelease || fromJson(needs.plan.outputs.val).publish_prereleases }}
uses: ./.github/workflows/publish-wasm.yml
with:
plan: ${{ needs.plan.outputs.val }}
secrets: inherit
# publish jobs get escalated permissions
permissions:
"contents": "read"
"id-token": "write"
"packages": "write"
# Create a GitHub Release while uploading all files to it
announce:
needs:
- plan
- host
- custom-publish-pypi
- custom-publish-wasm
# use "always() && ..." to allow us to wait for all publish jobs while
# still allowing individual publish jobs to skip themselves (for prereleases).
# "host" however must run to completion, no skipping allowed!
if: ${{ always() && needs.host.result == 'success' && (needs.custom-publish-pypi.result == 'skipped' || needs.custom-publish-pypi.result == 'success') }}
if: ${{ always() && needs.host.result == 'success' && (needs.custom-publish-pypi.result == 'skipped' || needs.custom-publish-pypi.result == 'success') && (needs.custom-publish-wasm.result == 'skipped' || needs.custom-publish-wasm.result == 'success') }}
runs-on: "ubuntu-20.04"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -42,7 +42,7 @@ repos:
)$
- repo: https://github.com/crate-ci/typos
rev: v1.22.9
rev: v1.23.2
hooks:
- id: typos
@@ -56,7 +56,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.5.0
rev: v0.5.2
hooks:
- id: ruff-format
- id: ruff


@@ -1,5 +1,93 @@
# Changelog
## 0.5.3
**Ruff 0.5.3 marks the stable release of the Ruff language server and introduces revamped
[documentation](https://docs.astral.sh/ruff/editors), including [setup guides for your editor of
choice](https://docs.astral.sh/ruff/editors/setup) and [the language server
itself](https://docs.astral.sh/ruff/editors/settings)**.
### Preview features
- Formatter: Insert empty line between suite and alternative branch after function/class definition ([#12294](https://github.com/astral-sh/ruff/pull/12294))
- \[`pyupgrade`\] Implement `unnecessary-default-type-args` (`UP043`) ([#12371](https://github.com/astral-sh/ruff/pull/12371))
### Rule changes
- \[`flake8-bugbear`\] Detect enumerate iterations in `loop-iterator-mutation` (`B909`) ([#12366](https://github.com/astral-sh/ruff/pull/12366))
- \[`flake8-bugbear`\] Remove `discard`, `remove`, and `pop` allowance for `loop-iterator-mutation` (`B909`) ([#12365](https://github.com/astral-sh/ruff/pull/12365))
- \[`pylint`\] Allow `repeated-equality-comparison` for mixed operations (`PLR1714`) ([#12369](https://github.com/astral-sh/ruff/pull/12369))
- \[`pylint`\] Ignore `self` and `cls` when counting arguments (`PLR0913`) ([#12367](https://github.com/astral-sh/ruff/pull/12367))
- \[`pylint`\] Use UTF-8 as default encoding in `unspecified-encoding` fix (`PLW1514`) ([#12370](https://github.com/astral-sh/ruff/pull/12370))
### Server
- Build settings index in parallel for the native server ([#12299](https://github.com/astral-sh/ruff/pull/12299))
- Use fallback settings when indexing the project ([#12362](https://github.com/astral-sh/ruff/pull/12362))
- Consider `--preview` flag for `server` subcommand for the linter and formatter ([#12208](https://github.com/astral-sh/ruff/pull/12208))
### Bug fixes
- \[`flake8-comprehensions`\] Allow additional arguments for `sum` and `max` comprehensions (`C419`) ([#12364](https://github.com/astral-sh/ruff/pull/12364))
- \[`pylint`\] Avoid dropping extra boolean operations in `repeated-equality-comparison` (`PLR1714`) ([#12368](https://github.com/astral-sh/ruff/pull/12368))
- \[`pylint`\] Consider expression before statement when determining binding kind (`PLR1704`) ([#12346](https://github.com/astral-sh/ruff/pull/12346))
### Documentation
- Add docs for Ruff language server ([#12344](https://github.com/astral-sh/ruff/pull/12344))
- Migrate to standalone docs repo ([#12341](https://github.com/astral-sh/ruff/pull/12341))
- Update versioning policy for editor integration ([#12375](https://github.com/astral-sh/ruff/pull/12375))
### Other changes
- Publish Wasm API to npm ([#12317](https://github.com/astral-sh/ruff/pull/12317))
## 0.5.2
### Preview features
- Use `space` separator before parenthesized expressions in comprehensions with leading comments ([#12282](https://github.com/astral-sh/ruff/pull/12282))
- \[`flake8-async`\] Update `ASYNC100` to include `anyio` and `asyncio` ([#12221](https://github.com/astral-sh/ruff/pull/12221))
- \[`flake8-async`\] Update `ASYNC109` to include `anyio` and `asyncio` ([#12236](https://github.com/astral-sh/ruff/pull/12236))
- \[`flake8-async`\] Update `ASYNC110` to include `anyio` and `asyncio` ([#12261](https://github.com/astral-sh/ruff/pull/12261))
- \[`flake8-async`\] Update `ASYNC115` to include `anyio` and `asyncio` ([#12262](https://github.com/astral-sh/ruff/pull/12262))
- \[`flake8-async`\] Update `ASYNC116` to include `anyio` and `asyncio` ([#12266](https://github.com/astral-sh/ruff/pull/12266))
### Rule changes
- \[`flake8-return`\] Exempt properties from explicit return rule (`RET501`) ([#12243](https://github.com/astral-sh/ruff/pull/12243))
- \[`numpy`\] Add `np.NAN`-to-`np.nan` diagnostic ([#12292](https://github.com/astral-sh/ruff/pull/12292))
- \[`refurb`\] Make `list-reverse-copy` an unsafe fix ([#12303](https://github.com/astral-sh/ruff/pull/12303))
### Server
- Consider `include` and `extend-include` settings in native server ([#12252](https://github.com/astral-sh/ruff/pull/12252))
- Include nested configurations in settings reloading ([#12253](https://github.com/astral-sh/ruff/pull/12253))
### CLI
- Omit code frames for fixes with empty ranges ([#12304](https://github.com/astral-sh/ruff/pull/12304))
- Warn about formatter incompatibility for `D203` ([#12238](https://github.com/astral-sh/ruff/pull/12238))
### Bug fixes
- Make cache-write failures non-fatal on Windows ([#12302](https://github.com/astral-sh/ruff/pull/12302))
- Treat `not` operations as boolean tests ([#12301](https://github.com/astral-sh/ruff/pull/12301))
- \[`flake8-bandit`\] Avoid `S310` violations for HTTP-safe f-strings ([#12305](https://github.com/astral-sh/ruff/pull/12305))
- \[`flake8-bandit`\] Support explicit string concatenations in S310 HTTP detection ([#12315](https://github.com/astral-sh/ruff/pull/12315))
- \[`flake8-bandit`\] fix S113 false positive for httpx without `timeout` argument ([#12213](https://github.com/astral-sh/ruff/pull/12213))
- \[`pycodestyle`\] Remove "non-obvious" allowance for E721 ([#12300](https://github.com/astral-sh/ruff/pull/12300))
- \[`pyflakes`\] Consider `with` blocks as single-item branches for redefinition analysis ([#12311](https://github.com/astral-sh/ruff/pull/12311))
- \[`refurb`\] Restrict forwarding for `newline` argument in `open()` calls to Python versions >= 3.10 ([#12244](https://github.com/astral-sh/ruff/pull/12244))
### Documentation
- Update help and documentation to reflect `--output-format full` default ([#12248](https://github.com/astral-sh/ruff/pull/12248))
### Performance
- Use more threads when discovering Python files ([#12258](https://github.com/astral-sh/ruff/pull/12258))
## 0.5.1
### Preview features

Cargo.lock (generated)

@@ -234,9 +234,9 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "castaway"
version = "0.2.2"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a17ed5635fc8536268e5d4de1e22e81ac34419e5f052d4d51f4e01dcc263fcc"
checksum = "0abae9be0aaf9ea96a3b1b8b1b55c602ca751eba1b1500220cea4ecbafe7c0d5"
dependencies = [
"rustversion",
]
@@ -314,9 +314,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.8"
version = "4.5.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "84b3edb18336f4df585bc9aa31dd99c036dfa5dc5e9a2939a722a188f3a8970d"
checksum = "64acc1846d54c1fe936a78dc189c34e28d3f5afc348403f28ecf53660b9b8462"
dependencies = [
"clap_builder",
"clap_derive",
@@ -324,9 +324,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.8"
version = "4.5.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1c09dd5ada6c6c78075d6fd0da3f90d8080651e2d6cc8eb2f1aaa4034ced708"
checksum = "6fb8393d67ba2e7bfaf28a23458e4e2b543cc73a99595511eb207fdb8aede942"
dependencies = [
"anstream",
"anstyle",
@@ -346,31 +346,20 @@ dependencies = [
[[package]]
name = "clap_complete_command"
version = "0.5.1"
version = "0.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "183495371ea78d4c9ff638bfc6497d46fed2396e4f9c50aebc1278a4a9919a3d"
checksum = "da8e198c052315686d36371e8a3c5778b7852fc75cc313e4e11eeb7a644a1b62"
dependencies = [
"clap",
"clap_complete",
"clap_complete_fig",
"clap_complete_nushell",
]
[[package]]
name = "clap_complete_fig"
version = "4.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "54b3e65f91fabdd23cac3d57d39d5d938b4daabd070c335c006dccb866a61110"
dependencies = [
"clap",
"clap_complete",
]
[[package]]
name = "clap_complete_nushell"
version = "0.1.11"
version = "4.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5d02bc8b1a18ee47c4d2eec3fb5ac034dc68ebea6125b1509e9ccdffcddce66e"
checksum = "1accf1b463dee0d3ab2be72591dccdab8bef314958340447c882c4c72acfe2a3"
dependencies = [
"clap",
"clap_complete",
@@ -447,13 +436,14 @@ dependencies = [
[[package]]
name = "compact_str"
version = "0.7.1"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f86b9c4c00838774a6d902ef931eff7470720c51d90c2e32cfe15dc304737b3f"
checksum = "6050c3a16ddab2e412160b31f2c871015704239bca62f72f6e5f0be631d3f644"
dependencies = [
"castaway",
"cfg-if",
"itoa",
"rustversion",
"ryu",
"serde",
"static_assertions",
@@ -666,7 +656,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "978747c1d849a7d2ee5e8adc0159961c48fb7e5db2f06af6723b80123bb53856"
dependencies = [
"cfg-if",
"hashbrown 0.14.5",
"hashbrown",
"lock_api",
"once_cell",
"parking_lot_core",
@@ -680,7 +670,7 @@ checksum = "804c8821570c3f8b70230c2ba75ffa5c0f9a4189b9a432b6656c536712acae28"
dependencies = [
"cfg-if",
"crossbeam-utils",
"hashbrown 0.14.5",
"hashbrown",
"lock_api",
"once_cell",
"parking_lot_core",
@@ -928,12 +918,6 @@ dependencies = [
"crunchy",
]
[[package]]
name = "hashbrown"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888"
[[package]]
name = "hashbrown"
version = "0.14.5"
@@ -950,7 +934,7 @@ version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8094feaf31ff591f651a2664fb9cfd92bba7a60ce3197265e9482ebe753c8f7"
dependencies = [
"hashbrown 0.14.5",
"hashbrown",
]
[[package]]
@@ -1037,12 +1021,12 @@ dependencies = [
[[package]]
name = "imara-diff"
version = "0.1.5"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e98c1d0ad70fc91b8b9654b1f33db55e59579d3b3de2bffdced0fdb810570cb8"
checksum = "af13c8ceb376860ff0c6a66d83a8cdd4ecd9e464da24621bbffcd02b49619434"
dependencies = [
"ahash",
"hashbrown 0.12.3",
"hashbrown",
]
[[package]]
@@ -1062,7 +1046,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "168fb715dda47215e360912c096649d23d58bf392ac62f73919e831745e40f26"
dependencies = [
"equivalent",
"hashbrown 0.14.5",
"hashbrown",
"serde",
]
@@ -1378,9 +1362,9 @@ checksum = "2532096657941c2fea9c289d370a250971c689d4f143798ff67113ec042024a5"
[[package]]
name = "matchit"
version = "0.8.3"
version = "0.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8d3c2fcf089c060eb333302d80c5f3ffa8297abecf220f788e4a09ef85f59420"
checksum = "47e1ffaa40ddd1f3ed91f717a33c8c0ee23fff369e3aa8772b9605cc1d22f4c3"
[[package]]
name = "memchr"
@@ -1532,6 +1516,15 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d"
[[package]]
name = "ordermap"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab5a8e22be64dfa1123429350872e7be33594dbf5ae5212c90c5890e71966d1d"
dependencies = [
"indexmap",
]
[[package]]
name = "os_str_bytes"
version = "6.6.1"
@@ -1861,6 +1854,7 @@ name = "red_knot"
version = "0.0.0"
dependencies = [
"anyhow",
"clap",
"countme",
"crossbeam",
"ctrlc",
@@ -1882,8 +1876,10 @@ name = "red_knot_module_resolver"
version = "0.0.0"
dependencies = [
"anyhow",
"camino",
"compact_str",
"insta",
"once_cell",
"path-slash",
"ruff_db",
"ruff_python_stdlib",
@@ -1901,13 +1897,14 @@ version = "0.0.0"
dependencies = [
"anyhow",
"bitflags 2.6.0",
"hashbrown 0.14.5",
"indexmap",
"hashbrown",
"ordermap",
"red_knot_module_resolver",
"ruff_db",
"ruff_index",
"ruff_python_ast",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_text_size",
"rustc-hash 2.0.0",
"salsa",
@@ -1995,7 +1992,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.5.1"
version = "0.5.3"
dependencies = [
"anyhow",
"argfile",
@@ -2054,7 +2051,6 @@ dependencies = [
"mimalloc",
"once_cell",
"red_knot",
"red_knot_module_resolver",
"ruff_db",
"ruff_linter",
"ruff_python_ast",
@@ -2089,14 +2085,17 @@ dependencies = [
"countme",
"dashmap 6.0.1",
"filetime",
"ignore",
"insta",
"once_cell",
"ruff_cache",
"ruff_notebook",
"ruff_python_ast",
"ruff_python_parser",
"ruff_source_file",
"ruff_text_size",
"rustc-hash 2.0.0",
"salsa",
"tempfile",
"tracing",
"zip",
]
@@ -2177,7 +2176,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.5.1"
version = "0.5.3"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2443,7 +2442,7 @@ version = "0.2.2"
dependencies = [
"anyhow",
"crossbeam",
"globset",
"ignore",
"insta",
"jod-thread",
"libc",
@@ -2468,7 +2467,6 @@ dependencies = [
"shellexpand",
"tracing",
"tracing-subscriber",
"walkdir",
]
[[package]]
@@ -2493,7 +2491,7 @@ dependencies = [
[[package]]
name = "ruff_wasm"
version = "0.0.0"
version = "0.5.3"
dependencies = [
"console_error_panic_hook",
"console_log",
@@ -2587,11 +2585,12 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.22.4"
version = "0.23.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf4ef73721ac7bcd79b2b315da7779d8fc09718c6b3d2d1b2d94850eb8c18432"
checksum = "05cff451f60db80f490f3c182b77c35260baace73209e9cdbbe526bfe3a4d402"
dependencies = [
"log",
"once_cell",
"ring",
"rustls-pki-types",
"rustls-webpki",
@@ -2601,15 +2600,15 @@ dependencies = [
[[package]]
name = "rustls-pki-types"
version = "1.5.0"
version = "1.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "beb461507cee2c2ff151784c52762cf4d9ff6a61f3e80968600ed24fa837fa54"
checksum = "976295e77ce332211c0d24d92c0e83e50f5c5f046d11082cea19f3df13a3562d"
[[package]]
name = "rustls-webpki"
version = "0.102.3"
version = "0.102.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3bce581c0dd41bce533ce695a1437fa16a7ab5ac3ccfa99fe1a620a7885eabf"
checksum = "f9a6fccd794a42c2c105b513a2f62bc3fd8f3ba57a4593677ceb0bd035164d78"
dependencies = [
"ring",
"rustls-pki-types",
@@ -2709,9 +2708,9 @@ checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
[[package]]
name = "serde"
version = "1.0.203"
version = "1.0.204"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
dependencies = [
"serde_derive",
]
@@ -2729,9 +2728,9 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.203"
version = "1.0.204"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
dependencies = [
"proc-macro2",
"quote",
@@ -2751,9 +2750,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.119"
version = "1.0.120"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e8eddb61f0697cc3989c5d64b452f5488e2b8a60fd7d5076a3045076ffef8cb0"
checksum = "4e0d21c9a8cae1235ad58a00c11cb40d4b1e5c784f1ef2c537876ed6ffd8b7c5"
dependencies = [
"itoa",
"ryu",
@@ -2791,9 +2790,9 @@ dependencies = [
[[package]]
name = "serde_with"
version = "3.8.2"
version = "3.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "079f3a42cd87588d924ed95b533f8d30a483388c4e400ab736a7058e34f16169"
checksum = "69cecfa94848272156ea67b2b1a53f20fc7bc638c4a46d2f8abde08f05f4b857"
dependencies = [
"serde",
"serde_derive",
@@ -2802,9 +2801,9 @@ dependencies = [
[[package]]
name = "serde_with_macros"
version = "3.8.2"
version = "3.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc03aad67c1d26b7de277d51c86892e7d9a0110a2fe44bf6b26cc569fba302d6"
checksum = "a8fee4991ef4f274617a51ad4af30519438dacb2f56ac773b08a1922ff743350"
dependencies = [
"darling",
"proc-macro2",
@@ -2911,9 +2910,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
[[package]]
name = "syn"
version = "2.0.68"
version = "2.0.71"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "901fa70d88b9d6c98022e23b4136f9f3e54e4662c3bc1bd1d84a42a9a0f0c1e9"
checksum = "b146dcf730474b4bcd16c311627b31ede9ab149045db4d6088b3becaea046462"
dependencies = [
"proc-macro2",
"quote",
@@ -3001,18 +3000,18 @@ dependencies = [
[[package]]
name = "thiserror"
version = "1.0.61"
version = "1.0.62"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c546c80d6be4bc6a00c0f01730c08df82eaa7a7a61f11d656526506112cc1709"
checksum = "f2675633b1499176c2dff06b0856a27976a8f9d436737b4cf4f312d4d91d8bbb"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "1.0.61"
version = "1.0.62"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "46c3384250002a6d5af4d114f2845d37b57521033f30d5c3f46c4d70e1197533"
checksum = "d20468752b09f49e909e55a5d338caa8bedf615594e9d80bc4c565d30faf798c"
dependencies = [
"proc-macro2",
"quote",
@@ -3031,9 +3030,9 @@ dependencies = [
[[package]]
name = "tikv-jemalloc-sys"
version = "0.5.4+5.3.0-patched"
version = "0.6.0+5.3.0-1-ge13ca993e8ccb9ba9847cc330696e02839f328f7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9402443cb8fd499b6f327e40565234ff34dbda27460c5b47db0db77443dd85d1"
checksum = "cd3c60906412afa9c2b5b5a48ca6a5abe5736aec9eb48ad05037a677e52e4e2d"
dependencies = [
"cc",
"libc",
@@ -3041,9 +3040,9 @@ dependencies = [
[[package]]
name = "tikv-jemallocator"
version = "0.5.4"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "965fe0c26be5c56c94e38ba547249074803efd52adfb66de62107d95aab3eaca"
checksum = "4cec5ff18518d81584f477e9bfdf957f5bb0979b0bac3af4ca30b5b3ae2d2865"
dependencies = [
"libc",
"tikv-jemalloc-sys",
@@ -3305,9 +3304,9 @@ checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1"
[[package]]
name = "ureq"
version = "2.9.7"
version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d11a831e3c0b56e438a28308e7c810799e3c118417f342d30ecec080105395cd"
checksum = "72139d247e5f97a3eff96229a7ae85ead5328a39efe76f8bf5a06313d505b6ea"
dependencies = [
"base64",
"flate2",
@@ -3315,7 +3314,6 @@ dependencies = [
"once_cell",
"rustls",
"rustls-pki-types",
"rustls-webpki",
"url",
"webpki-roots",
]

View File

@@ -50,14 +50,14 @@ cachedir = { version = "0.3.1" }
camino = { version = "1.1.7" }
chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
clap = { version = "4.5.3", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clap_complete_command = { version = "0.6.0" }
clearscreen = { version = "3.0.0" }
codspeed-criterion-compat = { version = "2.6.0", default-features = false }
colored = { version = "2.1.0" }
console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
compact_str = "0.7.1"
compact_str = "0.8.0"
criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "6.0.1" }
@@ -72,7 +72,6 @@ hashbrown = "0.14.3"
ignore = { version = "0.4.22" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
indexmap = { version = "2.2.6" }
indicatif = { version = "0.17.8" }
indoc = { version = "2.0.4" }
insta = { version = "1.35.1" }
@@ -95,6 +94,7 @@ mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
notify = { version = "6.1.1" }
once_cell = { version = "1.19.0" }
ordermap = { version = "0.5.0" }
path-absolutize = { version = "3.1.1" }
path-slash = { version = "0.2.1" }
pathdiff = { version = "0.2.1" }
@@ -128,7 +128,7 @@ syn = { version = "2.0.55" }
tempfile = { version = "3.9.0" }
test-case = { version = "3.3.1" }
thiserror = { version = "1.0.58" }
tikv-jemallocator = { version = "0.5.0" }
tikv-jemallocator = { version = "0.6.0" }
toml = { version = "0.8.11" }
tracing = { version = "0.1.40" }
tracing-indicatif = { version = "0.3.6" }
@@ -272,10 +272,10 @@ build-local-artifacts = false
# Local artifacts jobs to run in CI
local-artifacts-jobs = ["./build-binaries", "./build-docker"]
# Publish jobs to run in CI
publish-jobs = ["./publish-pypi"]
publish-jobs = ["./publish-pypi", "./publish-wasm"]
# Announcement jobs to run in CI
post-announce-jobs = ["./notify-dependents", "./publish-docs", "./publish-playground"]
# Custom permissions for GitHub Jobs
github-custom-job-permissions = { "build-docker" = { packages = "write", contents = "read" } }
github-custom-job-permissions = { "build-docker" = { packages = "write", contents = "read" }, "publish-wasm" = { contents = "read", id-token = "write", packages = "write" } }
# Whether to install an updater program
install-updater = false

View File

@@ -136,8 +136,8 @@ curl -LsSf https://astral.sh/ruff/install.sh | sh
powershell -c "irm https://astral.sh/ruff/install.ps1 | iex"
# For a specific version.
curl -LsSf https://astral.sh/ruff/0.5.1/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.5.1/install.ps1 | iex"
curl -LsSf https://astral.sh/ruff/0.5.3/install.sh | sh
powershell -c "irm https://astral.sh/ruff/0.5.3/install.ps1 | iex"
```
You can also install Ruff via [Homebrew](https://formulae.brew.sh/formula/ruff), [Conda](https://anaconda.org/conda-forge/ruff),
@@ -170,7 +170,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.5.1
rev: v0.5.3
hooks:
# Run the linter.
- id: ruff

View File

@@ -8,7 +8,6 @@ documentation.workspace = true
repository.workspace = true
authors.workspace = true
license.workspace = true
default-run = "red_knot"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
@@ -16,10 +15,11 @@ default-run = "red_knot"
red_knot_module_resolver = { workspace = true }
red_knot_python_semantic = { workspace = true }
ruff_db = { workspace = true }
ruff_db = { workspace = true, features = ["os", "cache"] }
ruff_python_ast = { workspace = true }
anyhow = { workspace = true }
clap = { workspace = true, features = ["wrap_help"] }
countme = { workspace = true, features = ["enable"] }
crossbeam = { workspace = true }
ctrlc = { version = "3.4.4" }

View File

@@ -0,0 +1,2 @@
pub(crate) mod target_version;
pub(crate) mod verbosity;

View File

@@ -0,0 +1,34 @@
/// Enumeration of all supported Python versions
///
/// TODO: unify with the `PythonVersion` enum in the linter/formatter crates?
#[derive(Copy, Clone, Hash, Debug, PartialEq, Eq, PartialOrd, Ord, Default, clap::ValueEnum)]
pub enum TargetVersion {
Py37,
#[default]
Py38,
Py39,
Py310,
Py311,
Py312,
Py313,
}
impl std::fmt::Display for TargetVersion {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
ruff_db::program::TargetVersion::from(*self).fmt(f)
}
}
impl From<TargetVersion> for ruff_db::program::TargetVersion {
fn from(value: TargetVersion) -> Self {
match value {
TargetVersion::Py37 => Self::Py37,
TargetVersion::Py38 => Self::Py38,
TargetVersion::Py39 => Self::Py39,
TargetVersion::Py310 => Self::Py310,
TargetVersion::Py311 => Self::Py311,
TargetVersion::Py312 => Self::Py312,
TargetVersion::Py313 => Self::Py313,
}
}
}
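// A sketch, not part of the diff: with `clap::ValueEnum` derived above, clap
// derives lowercase value names from the variants, so `--target-version py311`
// parses to `TargetVersion::Py311`. The test module below is hypothetical.
#[cfg(test)]
mod tests {
    use super::*;
    use clap::ValueEnum;

    #[test]
    fn parses_lowercase_value_names() {
        // `from_str` here is `clap::ValueEnum::from_str(input, ignore_case)`.
        let parsed = TargetVersion::from_str("py311", true).unwrap();
        assert_eq!(parsed, TargetVersion::Py311);
    }
}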

View File

@@ -0,0 +1,34 @@
#[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd)]
pub(crate) enum VerbosityLevel {
Info,
Debug,
Trace,
}
/// Logging flags to `#[command(flatten)]` into your CLI
#[derive(clap::Args, Debug, Clone, Default)]
#[command(about = None, long_about = None)]
pub(crate) struct Verbosity {
#[arg(
long,
short = 'v',
help = "Use verbose output (or `-vv` and `-vvv` for more verbose output)",
action = clap::ArgAction::Count,
global = true,
)]
verbose: u8,
}
impl Verbosity {
/// Returns the verbosity level based on the number of `-v` flags.
///
/// Returns `None` if the user did not specify any verbosity flags.
pub(crate) fn level(&self) -> Option<VerbosityLevel> {
match self.verbose {
0 => None,
1 => Some(VerbosityLevel::Info),
2 => Some(VerbosityLevel::Debug),
_ => Some(VerbosityLevel::Trace),
}
}
}
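// A sketch, not part of the diff: flattening `Verbosity` into a parent command
// as the doc comment above suggests, then checking the `-v` count mapping.
// The `Cli` struct is hypothetical.
#[cfg(test)]
mod tests {
    use super::*;
    use clap::Parser;

    #[derive(Parser)]
    struct Cli {
        #[clap(flatten)]
        verbosity: Verbosity,
    }

    #[test]
    fn counts_map_to_levels() {
        assert_eq!(Cli::parse_from(["app"]).verbosity.level(), None);
        assert_eq!(
            Cli::parse_from(["app", "-vv"]).verbosity.level(),
            Some(VerbosityLevel::Debug)
        );
    }
}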

View File

@@ -1,10 +1,200 @@
use red_knot_python_semantic::Db as SemanticDb;
use ruff_db::Upcast;
use salsa::DbWithJar;
use std::panic::{AssertUnwindSafe, RefUnwindSafe};
use std::sync::Arc;
use crate::lint::{lint_semantic, lint_syntax, unwind_if_cancelled};
use salsa::{Cancelled, Database, DbWithJar};
use red_knot_module_resolver::{vendored_typeshed_stubs, Db as ResolverDb, Jar as ResolverJar};
use red_knot_python_semantic::{Db as SemanticDb, Jar as SemanticJar};
use ruff_db::files::{system_path_to_file, File, Files};
use ruff_db::program::{Program, ProgramSettings};
use ruff_db::system::System;
use ruff_db::vendored::VendoredFileSystem;
use ruff_db::{Db as SourceDb, Jar as SourceJar, Upcast};
use crate::lint::{lint_semantic, lint_syntax, unwind_if_cancelled, Diagnostics};
use crate::watch::{FileChangeKind, FileWatcherChange};
use crate::workspace::{check_file, Package, Workspace, WorkspaceMetadata};
pub trait Db: DbWithJar<Jar> + SemanticDb + Upcast<dyn SemanticDb> {}
#[salsa::jar(db=Db)]
pub struct Jar(lint_syntax, lint_semantic, unwind_if_cancelled);
pub struct Jar(
Workspace,
Package,
lint_syntax,
lint_semantic,
unwind_if_cancelled,
);
#[salsa::db(SourceJar, ResolverJar, SemanticJar, Jar)]
pub struct RootDatabase {
workspace: Option<Workspace>,
storage: salsa::Storage<RootDatabase>,
files: Files,
system: Arc<dyn System + Send + Sync + RefUnwindSafe>,
}
impl RootDatabase {
pub fn new<S>(workspace: WorkspaceMetadata, settings: ProgramSettings, system: S) -> Self
where
S: System + 'static + Send + Sync + RefUnwindSafe,
{
let mut db = Self {
workspace: None,
storage: salsa::Storage::default(),
files: Files::default(),
system: Arc::new(system),
};
let workspace = Workspace::from_metadata(&db, workspace);
// Initialize the `Program` singleton
Program::from_settings(&db, settings);
db.workspace = Some(workspace);
db
}
pub fn workspace(&self) -> Workspace {
// SAFETY: The workspace is always initialized in `new`.
self.workspace.unwrap()
}
#[tracing::instrument(level = "debug", skip(self, changes))]
pub fn apply_changes(&mut self, changes: Vec<FileWatcherChange>) {
let workspace = self.workspace();
let workspace_path = workspace.root(self).to_path_buf();
// TODO: Optimize change tracking by only reloading a package if a file that is part of the package was changed.
let mut structural_change = false;
for change in changes {
if matches!(
change.path.file_name(),
Some(".gitignore" | ".ignore" | "ruff.toml" | ".ruff.toml" | "pyproject.toml")
) {
// Changes to ignore files or settings can change the workspace structure or add/remove files
// from packages.
structural_change = true;
} else {
match change.kind {
FileChangeKind::Created => {
// Reload the package when a new file is added. This is necessary because the file might be excluded
// by a gitignore.
if workspace.package(self, &change.path).is_some() {
structural_change = true;
}
}
FileChangeKind::Modified => {}
FileChangeKind::Deleted => {
if let Some(package) = workspace.package(self, &change.path) {
if let Some(file) = system_path_to_file(self, &change.path) {
package.remove_file(self, file);
}
}
}
}
}
File::touch_path(self, &change.path);
}
if structural_change {
match WorkspaceMetadata::from_path(&workspace_path, self.system()) {
Ok(metadata) => {
tracing::debug!("Reload workspace after structural change.");
// TODO: Handle changes in the program settings.
workspace.reload(self, metadata);
}
Err(error) => {
tracing::error!("Failed to reload workspace; keeping the old workspace: {error}");
}
}
}
}
/// Checks all open files in the workspace and its dependencies.
pub fn check(&self) -> Result<Vec<String>, Cancelled> {
self.with_db(|db| db.workspace().check(db))
}
pub fn check_file(&self, file: File) -> Result<Diagnostics, Cancelled> {
self.with_db(|db| check_file(db, file))
}
pub(crate) fn with_db<F, T>(&self, f: F) -> Result<T, Cancelled>
where
F: FnOnce(&RootDatabase) -> T + std::panic::UnwindSafe,
{
// The `AssertUnwindSafe` here looks scary, but is a consequence of Salsa's design.
// Salsa uses panics to implement cancellation and to recover from cycles. However, the Salsa
// storage isn't `UnwindSafe` or `RefUnwindSafe` because its dependencies `DashMap` and `parking_lot::*` aren't
// unwind safe.
//
// Having to use `AssertUnwindSafe` isn't as big a deal as it might seem because
// the `UnwindSafe` and `RefUnwindSafe` traits are designed to catch logical bugs.
// They don't protect against [UB](https://internals.rust-lang.org/t/pre-rfc-deprecating-unwindsafe/15974).
// On top of that, `Cancelled` only catches specific Salsa-panics and propagates all other panics.
//
// That still leaves us with possible logical bugs in two sources:
// * In Salsa itself: This must be considered a bug in Salsa and needs fixing upstream.
// Reviewing Salsa code specifically around unwind safety seems doable.
// * Our code: This is the main concern. Luckily, it only involves code that uses internal mutability
// and calls into Salsa queries when mutating the internal state. Using `AssertUnwindSafe`
// certainly makes it harder to catch these issues in our user code.
//
// For now, this is the only solution at hand unless Salsa decides to change its design.
// [Zulip support thread](https://salsa.zulipchat.com/#narrow/stream/145099-general/topic/How.20to.20use.20.60Cancelled.3A.3Acatch.60)
let db = &AssertUnwindSafe(self);
Cancelled::catch(|| f(db))
}
}
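// A sketch, not part of the diff, of the calling convention this sets up:
// reads go through `with_db` and surface `Cancelled`, while `apply_changes`
// takes `&mut self`, so salsa cancels and waits out any queries still running
// on snapshots before the write proceeds. `recheck` is a hypothetical caller.
fn recheck(db: &mut RootDatabase, changes: Vec<FileWatcherChange>) {
    db.apply_changes(changes); // cancels in-flight queries on snapshots
    match db.check() {
        Ok(diagnostics) => eprintln!("{}", diagnostics.join("\n")),
        Err(_cancelled) => { /* a concurrent writer interrupted the check; retry later */ }
    }
}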
impl Upcast<dyn SemanticDb> for RootDatabase {
fn upcast(&self) -> &(dyn SemanticDb + 'static) {
self
}
}
impl Upcast<dyn SourceDb> for RootDatabase {
fn upcast(&self) -> &(dyn SourceDb + 'static) {
self
}
}
impl Upcast<dyn ResolverDb> for RootDatabase {
fn upcast(&self) -> &(dyn ResolverDb + 'static) {
self
}
}
impl ResolverDb for RootDatabase {}
impl SemanticDb for RootDatabase {}
impl SourceDb for RootDatabase {
fn vendored(&self) -> &VendoredFileSystem {
vendored_typeshed_stubs()
}
fn system(&self) -> &dyn System {
&*self.system
}
fn files(&self) -> &Files {
&self.files
}
}
impl Database for RootDatabase {}
impl Db for RootDatabase {}
impl salsa::ParallelDatabase for RootDatabase {
fn snapshot(&self) -> salsa::Snapshot<Self> {
salsa::Snapshot::new(Self {
workspace: self.workspace,
storage: self.storage.snapshot(),
files: self.files.snapshot(),
system: self.system.clone(),
})
}
}

View File

@@ -1,52 +1,6 @@
use rustc_hash::FxHashSet;
use ruff_db::file_system::{FileSystemPath, FileSystemPathBuf};
use ruff_db::vfs::VfsFile;
use crate::db::Jar;
pub mod db;
pub mod lint;
pub mod program;
pub mod watch;
#[derive(Debug, Clone)]
pub struct Workspace {
root: FileSystemPathBuf,
/// The files that are open in the workspace.
///
/// * Editor: The files that are actively being edited in the editor (the user has a tab open with the file).
/// * CLI: The resolved files passed as arguments to the CLI.
open_files: FxHashSet<VfsFile>,
}
impl Workspace {
pub fn new(root: FileSystemPathBuf) -> Self {
Self {
root,
open_files: FxHashSet::default(),
}
}
pub fn root(&self) -> &FileSystemPath {
self.root.as_path()
}
// TODO having the content in workspace feels wrong.
pub fn open_file(&mut self, file_id: VfsFile) {
self.open_files.insert(file_id);
}
pub fn close_file(&mut self, file_id: VfsFile) {
self.open_files.remove(&file_id);
}
// TODO introduce an `OpenFile` type instead of using an anonymous tuple.
pub fn open_files(&self) -> impl Iterator<Item = VfsFile> + '_ {
self.open_files.iter().copied()
}
pub fn is_file_open(&self, file_id: VfsFile) -> bool {
self.open_files.contains(&file_id)
}
}
pub mod workspace;

View File

@@ -7,9 +7,9 @@ use tracing::trace_span;
use red_knot_module_resolver::ModuleName;
use red_knot_python_semantic::types::Type;
use red_knot_python_semantic::{HasTy, SemanticModel};
use ruff_db::files::File;
use ruff_db::parsed::{parsed_module, ParsedModule};
use ruff_db::source::{source_text, SourceText};
use ruff_db::vfs::VfsFile;
use ruff_python_ast as ast;
use ruff_python_ast::visitor::{walk_stmt, Visitor};
@@ -22,7 +22,7 @@ use crate::db::Db;
pub(crate) fn unwind_if_cancelled(db: &dyn Db) {}
#[salsa::tracked(return_ref)]
pub(crate) fn lint_syntax(db: &dyn Db, file_id: VfsFile) -> Diagnostics {
pub(crate) fn lint_syntax(db: &dyn Db, file_id: File) -> Diagnostics {
#[allow(clippy::print_stdout)]
if std::env::var("RED_KNOT_SLOW_LINT").is_ok() {
for i in 0..10 {
@@ -74,7 +74,7 @@ fn lint_lines(source: &str, diagnostics: &mut Vec<String>) {
}
#[salsa::tracked(return_ref)]
pub(crate) fn lint_semantic(db: &dyn Db, file_id: VfsFile) -> Diagnostics {
pub(crate) fn lint_semantic(db: &dyn Db, file_id: File) -> Diagnostics {
let _span = trace_span!("lint_semantic", ?file_id).entered();
let source = source_text(db.upcast(), file_id);
@@ -103,7 +103,7 @@ fn lint_unresolved_imports(context: &SemanticLintContext, import: AnyImportRef)
for alias in &import.names {
let ty = alias.ty(&context.semantic);
if ty.is_unknown() {
if ty.is_unbound() {
context.push_diagnostic(format!("Unresolved import '{}'", &alias.name));
}
}
@@ -112,7 +112,7 @@ fn lint_unresolved_imports(context: &SemanticLintContext, import: AnyImportRef)
for alias in &import.names {
let ty = alias.ty(&context.semantic);
if ty.is_unknown() {
if ty.is_unbound() {
context.push_diagnostic(format!("Unresolved import '{}'", &alias.name));
}
}
@@ -122,7 +122,6 @@ fn lint_unresolved_imports(context: &SemanticLintContext, import: AnyImportRef)
fn lint_bad_override(context: &SemanticLintContext, class: &ast::StmtClassDef) {
let semantic = &context.semantic;
let typing_context = semantic.typing_context();
// TODO we should have a special marker on the real typing module (from typeshed) so if you
// have your own "typing" module in your project, we don't consider it THE typing module (and
@@ -131,11 +130,7 @@ fn lint_bad_override(context: &SemanticLintContext, class: &ast::StmtClassDef) {
return;
};
let Some(typing_override) = semantic.public_symbol(&typing, "override") else {
return;
};
let override_ty = semantic.public_symbol_ty(typing_override);
let override_ty = semantic.global_symbol_ty(&typing, "override");
let Type::Class(class_ty) = class.ty(semantic) else {
return;
@@ -150,17 +145,20 @@ fn lint_bad_override(context: &SemanticLintContext, class: &ast::StmtClassDef) {
return;
};
if ty.has_decorator(&typing_context, override_ty) {
let method_name = ty.name(&typing_context);
// TODO this shouldn't make direct use of the Db; see comment on SemanticModel::db
let db = semantic.db();
if ty.has_decorator(db, override_ty) {
let method_name = ty.name(db);
if class_ty
.inherited_class_member(&typing_context, method_name)
.is_none()
.inherited_class_member(db, &method_name)
.is_unbound()
{
// TODO should have a qualname() method to support nested classes
context.push_diagnostic(
format!(
"Method {}.{} is decorated with `typing.override` but does not override any base class method",
class_ty.name(&typing_context),
class_ty.name(db),
method_name,
));
}

View File

@@ -1,5 +1,6 @@
use std::sync::Mutex;
use clap::Parser;
use crossbeam::channel as crossbeam_channel;
use salsa::ParallelDatabase;
use tracing::subscriber::Interest;
@@ -9,12 +10,54 @@ use tracing_subscriber::layer::{Context, Filter, SubscriberExt};
use tracing_subscriber::{Layer, Registry};
use tracing_tree::time::Uptime;
use red_knot::program::{FileWatcherChange, Program};
use red_knot::db::RootDatabase;
use red_knot::watch::FileWatcher;
use red_knot::Workspace;
use red_knot_module_resolver::{set_module_resolution_settings, ModuleResolutionSettings};
use ruff_db::file_system::{FileSystem, FileSystemPath, OsFileSystem};
use ruff_db::vfs::system_path_to_file;
use red_knot::watch::FileWatcherChange;
use red_knot::workspace::WorkspaceMetadata;
use ruff_db::program::{ProgramSettings, SearchPathSettings};
use ruff_db::system::{OsSystem, System, SystemPathBuf};
use cli::target_version::TargetVersion;
use cli::verbosity::{Verbosity, VerbosityLevel};
mod cli;
#[derive(Debug, Parser)]
#[command(
author,
name = "red-knot",
about = "An experimental multifile analysis backend for Ruff"
)]
#[command(version)]
struct Args {
#[arg(
long,
help = "Changes the current working directory.",
long_help = "Changes the current working directory before any specified operations. This affects the workspace and configuration discovery.",
value_name = "PATH"
)]
current_directory: Option<SystemPathBuf>,
#[arg(
long,
value_name = "DIRECTORY",
help = "Custom directory to use for stdlib typeshed stubs"
)]
custom_typeshed_dir: Option<SystemPathBuf>,
#[arg(
long,
value_name = "PATH",
help = "Additional path to use as a module-resolution source (can be passed multiple times)"
)]
extra_search_path: Vec<SystemPathBuf>,
#[arg(long, help = "Python version to assume when resolving types", default_value_t = TargetVersion::default(), value_name="VERSION")]
target_version: TargetVersion,
#[clap(flatten)]
verbosity: Verbosity,
}
#[allow(
clippy::print_stdout,
@@ -23,52 +66,46 @@ use ruff_db::vfs::system_path_to_file;
clippy::dbg_macro
)]
pub fn main() -> anyhow::Result<()> {
countme::enable(true);
setup_tracing();
let Args {
current_directory,
custom_typeshed_dir,
extra_search_path: extra_paths,
target_version,
verbosity,
} = Args::parse_from(std::env::args().collect::<Vec<_>>());
let arguments: Vec<_> = std::env::args().collect();
let verbosity = verbosity.level();
countme::enable(verbosity == Some(VerbosityLevel::Trace));
setup_tracing(verbosity);
if arguments.len() < 2 {
eprintln!("Usage: red_knot <path>");
return Err(anyhow::anyhow!("Invalid arguments"));
}
let cwd = if let Some(cwd) = current_directory {
let canonicalized = cwd.as_utf8_path().canonicalize_utf8().unwrap();
SystemPathBuf::from_utf8_path_buf(canonicalized)
} else {
let cwd = std::env::current_dir().unwrap();
SystemPathBuf::from_path_buf(cwd).unwrap()
};
let fs = OsFileSystem;
let entry_point = FileSystemPath::new(&arguments[1]);
let system = OsSystem::new(cwd.clone());
let workspace_metadata =
WorkspaceMetadata::from_path(system.current_directory(), &system).unwrap();
if !fs.exists(entry_point) {
eprintln!("The entry point does not exist.");
return Err(anyhow::anyhow!("Invalid arguments"));
}
if !fs.is_file(entry_point) {
eprintln!("The entry point is not a file.");
return Err(anyhow::anyhow!("Invalid arguments"));
}
let entry_point = entry_point.to_path_buf();
let workspace_folder = entry_point.parent().unwrap();
let workspace = Workspace::new(workspace_folder.to_path_buf());
let workspace_search_path = workspace.root().to_path_buf();
let mut program = Program::new(workspace, fs);
set_module_resolution_settings(
&mut program,
ModuleResolutionSettings {
extra_paths: vec![],
workspace_root: workspace_search_path,
// TODO: Respect the settings from the workspace metadata when resolving the program settings.
let program_settings = ProgramSettings {
target_version: target_version.into(),
search_paths: SearchPathSettings {
extra_paths,
workspace_root: workspace_metadata.root().to_path_buf(),
custom_typeshed: custom_typeshed_dir,
site_packages: None,
custom_typeshed: None,
},
);
};
let entry_id = system_path_to_file(&program, entry_point.clone()).unwrap();
program.workspace_mut().open_file(entry_id);
// TODO: Use the `program_settings` to compute the key for the database's persistent
// cache and load the cache if it exists.
let mut db = RootDatabase::new(workspace_metadata, program_settings, system);
let (main_loop, main_loop_cancellation_token) = MainLoop::new();
let (main_loop, main_loop_cancellation_token) = MainLoop::new(verbosity);
// Listen to Ctrl+C and abort the watch mode.
let main_loop_cancellation_token = Mutex::new(Some(main_loop_cancellation_token));
@@ -87,9 +124,9 @@ pub fn main() -> anyhow::Result<()> {
file_changes_notifier.notify(changes);
})?;
file_watcher.watch_folder(workspace_folder.as_std_path())?;
file_watcher.watch_folder(db.workspace().root(&db).as_std_path())?;
main_loop.run(&mut program);
main_loop.run(&mut db);
println!("{}", countme::get_all());
@@ -97,18 +134,19 @@ pub fn main() -> anyhow::Result<()> {
}
struct MainLoop {
orchestrator_sender: crossbeam_channel::Sender<OrchestratorMessage>,
main_loop_receiver: crossbeam_channel::Receiver<MainLoopMessage>,
verbosity: Option<VerbosityLevel>,
orchestrator: crossbeam_channel::Sender<OrchestratorMessage>,
receiver: crossbeam_channel::Receiver<MainLoopMessage>,
}
impl MainLoop {
fn new() -> (Self, MainLoopCancellationToken) {
fn new(verbosity: Option<VerbosityLevel>) -> (Self, MainLoopCancellationToken) {
let (orchestrator_sender, orchestrator_receiver) = crossbeam_channel::bounded(1);
let (main_loop_sender, main_loop_receiver) = crossbeam_channel::bounded(1);
let mut orchestrator = Orchestrator {
receiver: orchestrator_receiver,
sender: main_loop_sender.clone(),
main_loop: main_loop_sender.clone(),
revision: 0,
};
@@ -118,8 +156,9 @@ impl MainLoop {
(
Self {
orchestrator_sender,
main_loop_receiver,
verbosity,
orchestrator: orchestrator_sender,
receiver: main_loop_receiver,
},
MainLoopCancellationToken {
sender: main_loop_sender,
@@ -129,30 +168,28 @@ impl MainLoop {
fn file_changes_notifier(&self) -> FileChangesNotifier {
FileChangesNotifier {
sender: self.orchestrator_sender.clone(),
sender: self.orchestrator.clone(),
}
}
#[allow(clippy::print_stderr)]
fn run(self, program: &mut Program) {
self.orchestrator_sender
.send(OrchestratorMessage::Run)
.unwrap();
fn run(self, db: &mut RootDatabase) {
self.orchestrator.send(OrchestratorMessage::Run).unwrap();
for message in &self.main_loop_receiver {
for message in &self.receiver {
tracing::trace!("Main Loop: Tick");
match message {
MainLoopMessage::CheckProgram { revision } => {
let program = program.snapshot();
let sender = self.orchestrator_sender.clone();
MainLoopMessage::CheckWorkspace { revision } => {
let db = db.snapshot();
let orchestrator = self.orchestrator.clone();
// Spawn a new task that checks the program. This needs to be done in a separate thread
// Spawn a new task that checks the workspace. This needs to be done in a separate thread
// to prevent blocking the main loop here.
rayon::spawn(move || {
if let Ok(result) = program.check() {
sender
.send(OrchestratorMessage::CheckProgramCompleted {
if let Ok(result) = db.check() {
orchestrator
.send(OrchestratorMessage::CheckCompleted {
diagnostics: result,
revision,
})
@@ -162,14 +199,18 @@ impl MainLoop {
}
MainLoopMessage::ApplyChanges(changes) => {
// Automatically cancels any pending queries and waits for them to complete.
program.apply_changes(changes);
db.apply_changes(changes);
}
MainLoopMessage::CheckCompleted(diagnostics) => {
eprintln!("{}", diagnostics.join("\n"));
eprintln!("{}", countme::get_all());
if self.verbosity == Some(VerbosityLevel::Trace) {
eprintln!("{}", countme::get_all());
}
}
MainLoopMessage::Exit => {
eprintln!("{}", countme::get_all());
if self.verbosity == Some(VerbosityLevel::Trace) {
eprintln!("{}", countme::get_all());
}
return;
}
}
@@ -179,7 +220,7 @@ impl MainLoop {
impl Drop for MainLoop {
fn drop(&mut self) {
self.orchestrator_sender
self.orchestrator
.send(OrchestratorMessage::Shutdown)
.unwrap();
}
@@ -211,7 +252,7 @@ impl MainLoopCancellationToken {
struct Orchestrator {
/// Sends messages to the main loop.
sender: crossbeam_channel::Sender<MainLoopMessage>,
main_loop: crossbeam_channel::Sender<MainLoopMessage>,
/// Receives messages from the main loop.
receiver: crossbeam_channel::Receiver<OrchestratorMessage>,
revision: usize,
@@ -223,20 +264,20 @@ impl Orchestrator {
while let Ok(message) = self.receiver.recv() {
match message {
OrchestratorMessage::Run => {
self.sender
.send(MainLoopMessage::CheckProgram {
self.main_loop
.send(MainLoopMessage::CheckWorkspace {
revision: self.revision,
})
.unwrap();
}
OrchestratorMessage::CheckProgramCompleted {
OrchestratorMessage::CheckCompleted {
diagnostics,
revision,
} => {
// Only take the diagnostics if they are for the latest revision.
if self.revision == revision {
self.sender
self.main_loop
.send(MainLoopMessage::CheckCompleted(diagnostics))
.unwrap();
} else {
@@ -271,7 +312,7 @@ impl Orchestrator {
changes.extend(file_changes);
}
Ok(OrchestratorMessage::CheckProgramCompleted { .. }) => {
Ok(OrchestratorMessage::CheckCompleted { .. }) => {
// Disregard any outdated completion message.
}
Ok(OrchestratorMessage::Run) => unreachable!("The orchestrator is already running."),
@@ -284,8 +325,8 @@ impl Orchestrator {
},
default(std::time::Duration::from_millis(10)) => {
// No more file changes after 10 ms, send the changes and schedule a new analysis
self.sender.send(MainLoopMessage::ApplyChanges(changes)).unwrap();
self.sender.send(MainLoopMessage::CheckProgram { revision: self.revision}).unwrap();
self.main_loop.send(MainLoopMessage::ApplyChanges(changes)).unwrap();
self.main_loop.send(MainLoopMessage::CheckWorkspace { revision: self.revision}).unwrap();
return;
}
}
@@ -301,7 +342,7 @@ impl Orchestrator {
/// Message sent from the orchestrator to the main loop.
#[derive(Debug)]
enum MainLoopMessage {
CheckProgram { revision: usize },
CheckWorkspace { revision: usize },
CheckCompleted(Vec<String>),
ApplyChanges(Vec<FileWatcherChange>),
Exit,
@@ -312,7 +353,7 @@ enum OrchestratorMessage {
Run,
Shutdown,
CheckProgramCompleted {
CheckCompleted {
diagnostics: Vec<String>,
revision: usize,
},
@@ -320,7 +361,14 @@ enum OrchestratorMessage {
FileChanges(Vec<FileWatcherChange>),
}
fn setup_tracing() {
fn setup_tracing(verbosity: Option<VerbosityLevel>) {
let trace_level = match verbosity {
None => Level::WARN,
Some(VerbosityLevel::Info) => Level::INFO,
Some(VerbosityLevel::Debug) => Level::DEBUG,
Some(VerbosityLevel::Trace) => Level::TRACE,
};
let subscriber = Registry::default().with(
tracing_tree::HierarchicalLayer::default()
.with_indent_lines(true)
@@ -330,9 +378,7 @@ fn setup_tracing() {
.with_targets(true)
.with_writer(|| Box::new(std::io::stderr()))
.with_timer(Uptime::default())
.with_filter(LoggingFilter {
trace_level: Level::TRACE,
}),
.with_filter(LoggingFilter { trace_level }),
);
tracing::subscriber::set_global_default(subscriber).unwrap();

View File

@@ -1,32 +0,0 @@
use ruff_db::vfs::VfsFile;
use salsa::Cancelled;
use crate::lint::{lint_semantic, lint_syntax, Diagnostics};
use crate::program::Program;
impl Program {
/// Checks all open files in the workspace and its dependencies.
#[tracing::instrument(level = "debug", skip_all)]
pub fn check(&self) -> Result<Vec<String>, Cancelled> {
self.with_db(|db| {
let mut result = Vec::new();
for open_file in db.workspace.open_files() {
result.extend_from_slice(&db.check_file_impl(open_file));
}
result
})
}
#[tracing::instrument(level = "debug", skip(self))]
pub fn check_file(&self, file: VfsFile) -> Result<Diagnostics, Cancelled> {
self.with_db(|db| db.check_file_impl(file))
}
fn check_file_impl(&self, file: VfsFile) -> Diagnostics {
let mut diagnostics = Vec::new();
diagnostics.extend_from_slice(lint_syntax(self, file));
diagnostics.extend_from_slice(lint_semantic(self, file));
Diagnostics::from(diagnostics)
}
}

View File

@@ -1,131 +0,0 @@
use std::panic::{RefUnwindSafe, UnwindSafe};
use std::sync::Arc;
use salsa::{Cancelled, Database};
use red_knot_module_resolver::{Db as ResolverDb, Jar as ResolverJar};
use red_knot_python_semantic::{Db as SemanticDb, Jar as SemanticJar};
use ruff_db::file_system::{FileSystem, FileSystemPathBuf};
use ruff_db::vfs::{Vfs, VfsFile, VfsPath};
use ruff_db::{Db as SourceDb, Jar as SourceJar, Upcast};
use crate::db::{Db, Jar};
use crate::Workspace;
mod check;
#[salsa::db(SourceJar, ResolverJar, SemanticJar, Jar)]
pub struct Program {
storage: salsa::Storage<Program>,
vfs: Vfs,
fs: Arc<dyn FileSystem + Send + Sync + RefUnwindSafe>,
workspace: Workspace,
}
impl Program {
pub fn new<Fs>(workspace: Workspace, file_system: Fs) -> Self
where
Fs: FileSystem + 'static + Send + Sync + RefUnwindSafe,
{
Self {
storage: salsa::Storage::default(),
vfs: Vfs::default(),
fs: Arc::new(file_system),
workspace,
}
}
pub fn apply_changes<I>(&mut self, changes: I)
where
I: IntoIterator<Item = FileWatcherChange>,
{
for change in changes {
VfsFile::touch_path(self, &VfsPath::file_system(change.path));
}
}
pub fn workspace(&self) -> &Workspace {
&self.workspace
}
pub fn workspace_mut(&mut self) -> &mut Workspace {
&mut self.workspace
}
#[allow(clippy::unnecessary_wraps)]
fn with_db<F, T>(&self, f: F) -> Result<T, Cancelled>
where
F: FnOnce(&Program) -> T + UnwindSafe,
{
// TODO: Catch in `Cancelled::catch`
// See https://salsa.zulipchat.com/#narrow/stream/145099-general/topic/How.20to.20use.20.60Cancelled.3A.3Acatch.60
Ok(f(self))
}
}
impl Upcast<dyn SemanticDb> for Program {
fn upcast(&self) -> &(dyn SemanticDb + 'static) {
self
}
}
impl Upcast<dyn SourceDb> for Program {
fn upcast(&self) -> &(dyn SourceDb + 'static) {
self
}
}
impl Upcast<dyn ResolverDb> for Program {
fn upcast(&self) -> &(dyn ResolverDb + 'static) {
self
}
}
impl ResolverDb for Program {}
impl SemanticDb for Program {}
impl SourceDb for Program {
fn file_system(&self) -> &dyn FileSystem {
&*self.fs
}
fn vfs(&self) -> &Vfs {
&self.vfs
}
}
impl Database for Program {}
impl Db for Program {}
impl salsa::ParallelDatabase for Program {
fn snapshot(&self) -> salsa::Snapshot<Self> {
salsa::Snapshot::new(Self {
storage: self.storage.snapshot(),
vfs: self.vfs.snapshot(),
fs: self.fs.clone(),
workspace: self.workspace.clone(),
})
}
}
#[derive(Clone, Debug)]
pub struct FileWatcherChange {
path: FileSystemPathBuf,
#[allow(unused)]
kind: FileChangeKind,
}
impl FileWatcherChange {
pub fn new(path: FileSystemPathBuf, kind: FileChangeKind) -> Self {
Self { path, kind }
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub enum FileChangeKind {
Created,
Modified,
Deleted,
}

View File

@@ -1,10 +1,10 @@
use std::path::Path;
use crate::program::{FileChangeKind, FileWatcherChange};
use anyhow::Context;
use notify::event::{CreateKind, RemoveKind};
use notify::event::{CreateKind, ModifyKind, RemoveKind};
use notify::{recommended_watcher, Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher};
use ruff_db::file_system::FileSystemPath;
use ruff_db::system::{SystemPath, SystemPathBuf};
pub struct FileWatcher {
watcher: RecommendedWatcher,
@@ -33,12 +33,25 @@ impl FileWatcher {
}
fn from_handler(handler: Box<dyn EventHandler>) -> anyhow::Result<Self> {
let watcher = recommended_watcher(move |changes: notify::Result<Event>| {
match changes {
let watcher = recommended_watcher(move |event: notify::Result<Event>| {
match event {
Ok(event) => {
// TODO verify that this handles all events correctly
let change_kind = match event.kind {
EventKind::Create(CreateKind::File) => FileChangeKind::Created,
EventKind::Modify(ModifyKind::Name(notify::event::RenameMode::From)) => {
FileChangeKind::Deleted
}
EventKind::Modify(ModifyKind::Name(notify::event::RenameMode::To)) => {
FileChangeKind::Created
}
EventKind::Modify(ModifyKind::Name(notify::event::RenameMode::Any)) => {
// TODO: Introduce a better catch-all event for cases that we don't understand.
FileChangeKind::Created
}
EventKind::Modify(ModifyKind::Name(notify::event::RenameMode::Both)) => {
todo!("Handle both create and delete events.");
}
EventKind::Modify(_) => FileChangeKind::Modified,
EventKind::Remove(RemoveKind::File) => FileChangeKind::Deleted,
_ => {
@@ -49,13 +62,9 @@ impl FileWatcher {
let mut changes = Vec::new();
for path in event.paths {
if path.is_file() {
if let Some(fs_path) = FileSystemPath::from_std_path(&path) {
changes.push(FileWatcherChange::new(
fs_path.to_path_buf(),
change_kind,
));
}
if let Some(fs_path) = SystemPath::from_std_path(&path) {
changes
.push(FileWatcherChange::new(fs_path.to_path_buf(), change_kind));
}
}
@@ -80,3 +89,23 @@ impl FileWatcher {
Ok(())
}
}
#[derive(Clone, Debug)]
pub struct FileWatcherChange {
pub path: SystemPathBuf,
#[allow(unused)]
pub kind: FileChangeKind,
}
impl FileWatcherChange {
pub fn new(path: SystemPathBuf, kind: FileChangeKind) -> Self {
Self { path, kind }
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub enum FileChangeKind {
Created,
Modified,
Deleted,
}

View File

@@ -0,0 +1,344 @@
// TODO: Fix clippy warnings created by salsa macros
#![allow(clippy::used_underscore_binding)]
use std::{collections::BTreeMap, sync::Arc};
use rustc_hash::{FxBuildHasher, FxHashSet};
pub use metadata::{PackageMetadata, WorkspaceMetadata};
use ruff_db::{
files::{system_path_to_file, File},
system::{walk_directory::WalkState, SystemPath, SystemPathBuf},
};
use ruff_python_ast::{name::Name, PySourceType};
use crate::{
db::Db,
lint::{lint_semantic, lint_syntax, Diagnostics},
};
mod metadata;
/// The project workspace as a Salsa ingredient.
///
/// A workspace consists of one or more packages. Packages can be nested. A file in a workspace
/// belongs to at most one package (a file can't belong to multiple packages).
///
/// How workspaces and packages are discovered is TBD. For now, a workspace can be any directory,
/// and it always contains a single package which has the same root as the workspace.
///
/// ## Examples
///
/// ```text
/// app-1/
/// pyproject.toml
/// src/
/// ... python files
///
/// app-2/
/// pyproject.toml
/// src/
/// ... python files
///
/// shared/
/// pyproject.toml
/// src/
/// ... python files
///
/// pyproject.toml
/// ```
///
/// The above project structure has three packages: `app-1`, `app-2`, and `shared`.
/// Each package can define its own settings in its `pyproject.toml` file, but
/// they must be compatible. For example, each package can define a different `requires-python` range,
/// but the ranges must overlap.
///
/// ## How is a workspace different from a program?
/// There are two (related) motivations:
///
/// 1. Program is defined in `ruff_db` and it can't reference the settings types for the linter and formatter
/// without introducing a cyclic dependency. The workspace is defined in a higher level crate
/// where it can reference these setting types.
/// 2. Running `ruff check` with different target versions results in different programs (settings), but
/// the workspace remains the same. That's why a program is a narrowed view of the workspace,
/// holding only the most fundamental settings required for checking.
#[salsa::input]
pub struct Workspace {
#[id]
#[return_ref]
root_buf: SystemPathBuf,
/// The files that are open in the workspace.
///
/// Setting the open files to a non-`None` value changes `check` to only check the
/// open files rather than all files in the workspace.
#[return_ref]
open_file_set: Option<Arc<FxHashSet<File>>>,
/// The (first-party) packages in this workspace.
#[return_ref]
package_tree: BTreeMap<SystemPathBuf, Package>,
}
/// A first-party package in a workspace.
#[salsa::input]
pub struct Package {
#[return_ref]
pub name: Name,
/// The path to the root directory of the package.
#[id]
#[return_ref]
root_buf: SystemPathBuf,
/// The files that are part of this package.
#[return_ref]
file_set: Arc<FxHashSet<File>>,
// TODO: Add the loaded settings.
}
impl Workspace {
/// Creates a workspace from the given, previously discovered `metadata`.
pub fn from_metadata(db: &dyn Db, metadata: WorkspaceMetadata) -> Self {
let mut packages = BTreeMap::new();
for package in metadata.packages {
packages.insert(package.root.clone(), Package::from_metadata(db, package));
}
Workspace::new(db, metadata.root, None, packages)
}
pub fn root(self, db: &dyn Db) -> &SystemPath {
self.root_buf(db)
}
pub fn packages(self, db: &dyn Db) -> impl Iterator<Item = Package> + '_ {
self.package_tree(db).values().copied()
}
pub fn reload(self, db: &mut dyn Db, metadata: WorkspaceMetadata) {
assert_eq!(self.root(db), metadata.root());
let mut old_packages = self.package_tree(db).clone();
let mut new_packages = BTreeMap::new();
for package_metadata in metadata.packages {
let path = package_metadata.root().to_path_buf();
let package = if let Some(old_package) = old_packages.remove(&path) {
old_package.update(db, package_metadata);
old_package
} else {
Package::from_metadata(db, package_metadata)
};
new_packages.insert(path, package);
}
self.set_package_tree(db).to(new_packages);
}
pub fn update_package(self, db: &mut dyn Db, metadata: PackageMetadata) -> anyhow::Result<()> {
let path = metadata.root().to_path_buf();
if let Some(package) = self.package_tree(db).get(&path).copied() {
package.update(db, metadata);
Ok(())
} else {
Err(anyhow::anyhow!("Package {path} not found"))
}
}
/// Returns the closest package to which the first-party `path` belongs.
///
/// Returns `None` if `path` is outside of any package or isn't a first-party file
/// (e.g. a third-party dependency or an excluded file).
pub fn package(self, db: &dyn Db, path: &SystemPath) -> Option<Package> {
let packages = self.package_tree(db);
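// `range(..path)` yields the package roots that sort strictly before `path`;
// the last of them is the deepest candidate ancestor, and the `starts_with`
// check below confirms that the candidate actually encloses `path`.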
let (package_path, package) = packages.range(..path.to_path_buf()).next_back()?;
if path.starts_with(package_path) {
Some(*package)
} else {
None
}
}
/// Checks all open files in the workspace and its dependencies.
#[tracing::instrument(level = "debug", skip_all)]
pub fn check(self, db: &dyn Db) -> Vec<String> {
let mut result = Vec::new();
if let Some(open_files) = self.open_files(db) {
for file in open_files {
result.extend_from_slice(&check_file(db, *file));
}
} else {
for package in self.packages(db) {
result.extend(package.check(db));
}
}
result
}
/// Opens a file in the workspace.
///
/// This changes the behavior of `check` to only check the open files rather than all files in the workspace.
#[tracing::instrument(level = "debug", skip(self, db))]
pub fn open_file(self, db: &mut dyn Db, file: File) {
let mut open_files = self.take_open_files(db);
open_files.insert(file);
self.set_open_files(db, open_files);
}
/// Closes a file in the workspace.
#[tracing::instrument(level = "debug", skip(self, db))]
pub fn close_file(self, db: &mut dyn Db, file: File) -> bool {
let mut open_files = self.take_open_files(db);
let removed = open_files.remove(&file);
if removed {
self.set_open_files(db, open_files);
}
removed
}
/// Returns the open files in the workspace or `None` if the entire workspace should be checked.
pub fn open_files(self, db: &dyn Db) -> Option<&FxHashSet<File>> {
self.open_file_set(db).as_deref()
}
/// Sets the open files in the workspace.
///
/// This changes the behavior of `check` to only check the open files rather than all files in the workspace.
#[tracing::instrument(level = "debug", skip(self, db))]
pub fn set_open_files(self, db: &mut dyn Db, open_files: FxHashSet<File>) {
self.set_open_file_set(db).to(Some(Arc::new(open_files)));
}
/// This takes the open files from the workspace and returns them.
///
/// This changes the behavior of `check` to check all files in the workspace instead of just the open files.
pub fn take_open_files(self, db: &mut dyn Db) -> FxHashSet<File> {
let open_files = self.open_file_set(db).clone();
if let Some(open_files) = open_files {
// Salsa will cancel any pending queries and remove its own reference to `open_files`,
// so the reference count of `open_files` drops to 1.
self.set_open_file_set(db).to(None);
Arc::try_unwrap(open_files).unwrap()
} else {
FxHashSet::default()
}
}
}
impl Package {
pub fn root(self, db: &dyn Db) -> &SystemPath {
self.root_buf(db)
}
/// Returns `true` if `file` is a first-party file part of this package.
pub fn contains_file(self, db: &dyn Db, file: File) -> bool {
self.files(db).contains(&file)
}
pub fn files(self, db: &dyn Db) -> &FxHashSet<File> {
self.file_set(db)
}
pub fn remove_file(self, db: &mut dyn Db, file: File) -> bool {
let mut files_arc = self.file_set(db).clone();
// Set a dummy value. Salsa will cancel any pending queries and remove its own reference to `files`,
// so the reference count of `files` drops to 1.
self.set_file_set(db).to(Arc::new(FxHashSet::default()));
let files = Arc::get_mut(&mut files_arc).unwrap();
let removed = files.remove(&file);
self.set_file_set(db).to(files_arc);
removed
}
pub(crate) fn check(self, db: &dyn Db) -> Vec<String> {
let mut result = Vec::new();
for file in self.files(db) {
let diagnostics = check_file(db, *file);
result.extend_from_slice(&diagnostics);
}
result
}
fn from_metadata(db: &dyn Db, metadata: PackageMetadata) -> Self {
let files = discover_package_files(db, metadata.root());
Self::new(db, metadata.name, metadata.root, Arc::new(files))
}
fn update(self, db: &mut dyn Db, metadata: PackageMetadata) {
let root = self.root(db);
assert_eq!(root, metadata.root());
let files = discover_package_files(db, root);
self.set_name(db).to(metadata.name);
self.set_file_set(db).to(Arc::new(files));
}
}
pub(super) fn check_file(db: &dyn Db, file: File) -> Diagnostics {
let mut diagnostics = Vec::new();
diagnostics.extend_from_slice(lint_syntax(db, file));
diagnostics.extend_from_slice(lint_semantic(db, file));
Diagnostics::from(diagnostics)
}
fn discover_package_files(db: &dyn Db, path: &SystemPath) -> FxHashSet<File> {
let paths = std::sync::Mutex::new(Vec::new());
db.system().walk_directory(path).run(|| {
Box::new(|entry| {
match entry {
Ok(entry) => {
// Skip over any non-Python files to avoid creating too many entries in `Files`.
if entry.file_type().is_file()
&& entry
.path()
.extension()
.and_then(PySourceType::try_from_extension)
.is_some()
{
let mut paths = paths.lock().unwrap();
paths.push(entry.into_path());
}
}
Err(error) => {
// TODO Handle error
tracing::error!("Failed to walk path: {error}");
}
}
WalkState::Continue
})
});
let paths = paths.into_inner().unwrap();
let mut files = FxHashSet::with_capacity_and_hasher(paths.len(), FxBuildHasher);
for path in paths {
// If this returns `None`, then the file was deleted between the `walk_directory` call and now.
// We can ignore this.
if let Some(file) = system_path_to_file(db.upcast(), &path) {
files.insert(file);
}
}
files
}

View File

@@ -0,0 +1,68 @@
use ruff_db::system::{System, SystemPath, SystemPathBuf};
use ruff_python_ast::name::Name;
#[derive(Debug)]
pub struct WorkspaceMetadata {
pub(super) root: SystemPathBuf,
/// The (first-party) packages in this workspace.
pub(super) packages: Vec<PackageMetadata>,
}
/// A first-party package in a workspace.
#[derive(Debug)]
pub struct PackageMetadata {
pub(super) name: Name,
/// The path to the root directory of the package.
pub(super) root: SystemPathBuf,
// TODO: Add the loaded package configuration (not the nested ruff settings)
}
impl WorkspaceMetadata {
/// Discovers the closest workspace at `path` and returns its metadata.
pub fn from_path(path: &SystemPath, system: &dyn System) -> anyhow::Result<WorkspaceMetadata> {
let root = if system.is_file(path) {
path.parent().unwrap().to_path_buf()
} else {
path.to_path_buf()
};
if !system.is_directory(&root) {
anyhow::bail!("no workspace found at {:?}", root);
}
// TODO: Discover package name from `pyproject.toml`.
let package_name: Name = path.file_name().unwrap_or("<root>").into();
let package = PackageMetadata {
name: package_name,
root: root.clone(),
};
let workspace = WorkspaceMetadata {
root,
packages: vec![package],
};
Ok(workspace)
}
pub fn root(&self) -> &SystemPath {
&self.root
}
pub fn packages(&self) -> &[PackageMetadata] {
&self.packages
}
}
impl PackageMetadata {
pub fn name(&self) -> &Name {
&self.name
}
pub fn root(&self) -> &SystemPath {
&self.root
}
}
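// A sketch, not part of the diff: discovering metadata the way `main.rs` does,
// starting from a directory. With the single-package behavior documented in
// the TODO above, the workspace root doubles as the sole package root.
// `discover` is a hypothetical helper.
fn discover(system: &dyn System, path: &SystemPath) -> anyhow::Result<()> {
    let metadata = WorkspaceMetadata::from_path(path, system)?;
    assert_eq!(metadata.packages().len(), 1);
    assert_eq!(metadata.packages()[0].root(), metadata.root());
    Ok(())
}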

View File

@@ -15,6 +15,8 @@ ruff_db = { workspace = true }
ruff_python_stdlib = { workspace = true }
compact_str = { workspace = true }
camino = { workspace = true }
once_cell = { workspace = true }
rustc-hash = { workspace = true }
salsa = { workspace = true }
tracing = { workspace = true }
@@ -26,6 +28,8 @@ walkdir = { workspace = true }
zip = { workspace = true }
[dev-dependencies]
ruff_db = { workspace = true, features = ["os"] }
anyhow = { workspace = true }
insta = { workspace = true }
tempfile = { workspace = true }

View File

@@ -1,83 +1,61 @@
use ruff_db::Upcast;
use crate::resolver::{
file_to_module,
internal::{ModuleNameIngredient, ModuleResolverSearchPaths},
resolve_module_query,
editable_install_resolution_paths, file_to_module, internal::ModuleNameIngredient,
module_resolution_settings, resolve_module_query,
};
use crate::typeshed::parse_typeshed_versions;
#[salsa::jar(db=Db)]
pub struct Jar(
ModuleNameIngredient<'_>,
ModuleResolverSearchPaths,
module_resolution_settings,
editable_install_resolution_paths,
resolve_module_query,
file_to_module,
parse_typeshed_versions,
);
pub trait Db: salsa::DbWithJar<Jar> + ruff_db::Db + Upcast<dyn ruff_db::Db> {}
#[cfg(test)]
pub(crate) mod tests {
use std::sync;
use salsa::DebugWithDb;
use ruff_db::file_system::{FileSystem, MemoryFileSystem, OsFileSystem};
use ruff_db::vfs::Vfs;
use ruff_db::files::Files;
use ruff_db::system::{DbWithTestSystem, TestSystem};
use ruff_db::vendored::VendoredFileSystem;
use crate::vendored_typeshed_stubs;
use super::*;
#[salsa::db(Jar, ruff_db::Jar)]
pub(crate) struct TestDb {
storage: salsa::Storage<Self>,
file_system: TestFileSystem,
system: TestSystem,
vendored: VendoredFileSystem,
files: Files,
events: sync::Arc<sync::Mutex<Vec<salsa::Event>>>,
vfs: Vfs,
}
impl TestDb {
#[allow(unused)]
pub(crate) fn new() -> Self {
Self {
storage: salsa::Storage::default(),
file_system: TestFileSystem::Memory(MemoryFileSystem::default()),
system: TestSystem::default(),
vendored: vendored_typeshed_stubs().snapshot(),
events: sync::Arc::default(),
vfs: Vfs::with_stubbed_vendored(),
files: Files::default(),
}
}
/// Returns the memory file system.
///
/// ## Panics
/// If this test db isn't using a memory file system.
#[allow(unused)]
pub(crate) fn memory_file_system(&self) -> &MemoryFileSystem {
if let TestFileSystem::Memory(fs) = &self.file_system {
fs
} else {
panic!("The test db is not using a memory file system");
}
}
/// Uses the real file system instead of the memory file system.
///
/// This is useful for testing advanced file system features like permissions, symlinks, etc.
///
/// Note that any files written to the memory file system won't be copied over.
#[allow(unused)]
pub(crate) fn with_os_file_system(&mut self) {
self.file_system = TestFileSystem::Os(OsFileSystem);
}
#[allow(unused)]
pub(crate) fn vfs_mut(&mut self) -> &mut Vfs {
&mut self.vfs
}
/// Takes the salsa events.
///
/// ## Panics
/// If there are any pending salsa snapshots.
#[allow(unused)]
pub(crate) fn take_salsa_events(&mut self) -> Vec<salsa::Event> {
let inner = sync::Arc::get_mut(&mut self.events).expect("no pending salsa snapshots");
@@ -89,7 +67,6 @@ pub(crate) mod tests {
///
/// ## Panics
/// If there are any pending salsa snapshots.
#[allow(unused)]
pub(crate) fn clear_salsa_events(&mut self) {
self.take_salsa_events();
}
@@ -102,17 +79,31 @@ pub(crate) mod tests {
}
impl ruff_db::Db for TestDb {
fn file_system(&self) -> &dyn ruff_db::file_system::FileSystem {
self.file_system.inner()
fn vendored(&self) -> &VendoredFileSystem {
&self.vendored
}
fn vfs(&self) -> &ruff_db::vfs::Vfs {
&self.vfs
fn system(&self) -> &dyn ruff_db::system::System {
&self.system
}
fn files(&self) -> &Files {
&self.files
}
}
impl Db for TestDb {}
impl DbWithTestSystem for TestDb {
fn test_system(&self) -> &TestSystem {
&self.system
}
fn test_system_mut(&mut self) -> &mut TestSystem {
&mut self.system
}
}
impl salsa::Database for TestDb {
fn salsa_event(&self, event: salsa::Event) {
tracing::trace!("event: {:?}", event.debug(self));
@@ -125,32 +116,11 @@ pub(crate) mod tests {
fn snapshot(&self) -> salsa::Snapshot<Self> {
salsa::Snapshot::new(Self {
storage: self.storage.snapshot(),
file_system: self.file_system.snapshot(),
system: self.system.snapshot(),
vendored: self.vendored.snapshot(),
files: self.files.snapshot(),
events: self.events.clone(),
vfs: self.vfs.snapshot(),
})
}
}
enum TestFileSystem {
Memory(MemoryFileSystem),
#[allow(unused)]
Os(OsFileSystem),
}
impl TestFileSystem {
fn inner(&self) -> &dyn FileSystem {
match self {
Self::Memory(inner) => inner,
Self::Os(inner) => inner,
}
}
fn snapshot(&self) -> Self {
match self {
Self::Memory(inner) => Self::Memory(inner.snapshot()),
Self::Os(inner) => Self::Os(inner.snapshot()),
}
}
}
}

View File

@@ -1,9 +1,18 @@
mod db;
mod module;
mod module_name;
mod path;
mod resolver;
mod state;
mod typeshed;
#[cfg(test)]
mod testing;
pub use db::{Db, Jar};
pub use module::{Module, ModuleKind, ModuleName};
pub use resolver::{resolve_module, set_module_resolution_settings, ModuleResolutionSettings};
pub use typeshed::versions::TypeshedVersions;
pub use module::{Module, ModuleKind};
pub use module_name::ModuleName;
pub use resolver::resolve_module;
pub use typeshed::{
vendored_typeshed_stubs, TypeshedVersionsParseError, TypeshedVersionsParseErrorKind,
};
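// A sketch, not part of the diff: exercising the narrowed public surface
// re-exported above. The exact signature of `resolve_module` isn't shown in
// this diff, so the `(db, name) -> Option<Module>` shape used here is an
// assumption; `is_resolvable` is a hypothetical helper.
fn is_resolvable(db: &dyn Db, dotted: &str) -> bool {
    ModuleName::new(dotted)
        .and_then(|name| resolve_module(db, name))
        .is_some()
}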

View File

@@ -1,188 +1,11 @@
use compact_str::ToCompactString;
use std::fmt::Formatter;
use std::ops::Deref;
use std::sync::Arc;
use ruff_db::file_system::FileSystemPath;
use ruff_db::vfs::{VfsFile, VfsPath};
use ruff_python_stdlib::identifiers::is_identifier;
use ruff_db::files::File;
use crate::Db;
/// A module name, e.g. `foo.bar`.
///
/// Always normalized to the absolute form (never a relative module name, i.e., never `.foo`).
#[derive(Clone, Debug, Eq, PartialEq, Hash, PartialOrd, Ord)]
pub struct ModuleName(compact_str::CompactString);
impl ModuleName {
/// Creates a new module name for `name`. Returns `Some` if `name` is a valid, absolute
/// module name and `None` otherwise.
///
/// The module name is invalid if:
///
/// * The name is empty
/// * The name is relative
/// * The name ends with a `.`
/// * The name contains a sequence of multiple dots
/// * A component of a name (the part between two dots) isn't a valid Python identifier.
#[inline]
pub fn new(name: &str) -> Option<Self> {
Self::is_valid_name(name).then(|| Self(compact_str::CompactString::from(name)))
}
/// Creates a new module name for `name` where `name` is a static string.
/// Returns `Some` if `name` is a valid, absolute module name and `None` otherwise.
///
/// The module name is invalid if:
///
/// * The name is empty
/// * The name is relative
/// * The name ends with a `.`
/// * The name contains a sequence of multiple dots
/// * A component of a name (the part between two dots) isn't a valid Python identifier.
///
/// ## Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(ModuleName::new_static("foo.bar").as_deref(), Some("foo.bar"));
/// assert_eq!(ModuleName::new_static(""), None);
/// assert_eq!(ModuleName::new_static("..foo"), None);
/// assert_eq!(ModuleName::new_static(".foo"), None);
/// assert_eq!(ModuleName::new_static("foo."), None);
/// assert_eq!(ModuleName::new_static("foo..bar"), None);
/// assert_eq!(ModuleName::new_static("2000"), None);
/// ```
#[inline]
pub fn new_static(name: &'static str) -> Option<Self> {
// TODO(Micha): Use CompactString::const_new once we upgrade to 0.8 https://github.com/ParkMyCar/compact_str/pull/336
Self::is_valid_name(name).then(|| Self(compact_str::CompactString::from(name)))
}
fn is_valid_name(name: &str) -> bool {
if name.is_empty() {
return false;
}
name.split('.').all(is_identifier)
}
/// An iterator over the components of the module name:
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(ModuleName::new_static("foo.bar.baz").unwrap().components().collect::<Vec<_>>(), vec!["foo", "bar", "baz"]);
/// ```
pub fn components(&self) -> impl DoubleEndedIterator<Item = &str> {
self.0.split('.')
}
/// The name of this module's immediate parent, if it has a parent.
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(ModuleName::new_static("foo.bar").unwrap().parent(), Some(ModuleName::new_static("foo").unwrap()));
/// assert_eq!(ModuleName::new_static("foo.bar.baz").unwrap().parent(), Some(ModuleName::new_static("foo.bar").unwrap()));
/// assert_eq!(ModuleName::new_static("root").unwrap().parent(), None);
/// ```
pub fn parent(&self) -> Option<ModuleName> {
let (parent, _) = self.0.rsplit_once('.')?;
Some(Self(parent.to_compact_string()))
}
/// Returns `true` if the name starts with `other`.
///
/// This is equivalent to checking if `self` is a sub-module of `other`.
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert!(ModuleName::new_static("foo.bar").unwrap().starts_with(&ModuleName::new_static("foo").unwrap()));
///
/// assert!(!ModuleName::new_static("foo.bar").unwrap().starts_with(&ModuleName::new_static("bar").unwrap()));
/// assert!(!ModuleName::new_static("foo_bar").unwrap().starts_with(&ModuleName::new_static("foo").unwrap()));
/// ```
pub fn starts_with(&self, other: &ModuleName) -> bool {
let mut self_components = self.components();
let other_components = other.components();
for other_component in other_components {
if self_components.next() != Some(other_component) {
return false;
}
}
true
}
#[inline]
pub fn as_str(&self) -> &str {
&self.0
}
pub(crate) fn from_relative_path(path: &FileSystemPath) -> Option<Self> {
let path = if path.ends_with("__init__.py") || path.ends_with("__init__.pyi") {
path.parent()?
} else {
path
};
let name = if let Some(parent) = path.parent() {
let mut name = compact_str::CompactString::with_capacity(path.as_str().len());
for component in parent.components() {
name.push_str(component.as_os_str().to_str()?);
name.push('.');
}
// SAFETY: Unwrapping is safe here; otherwise `parent` would have returned `None` above.
name.push_str(path.file_stem().unwrap());
name
} else {
path.file_stem()?.to_compact_string()
};
Some(Self(name))
}
}
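// Illustrative mapping performed by `from_relative_path` above
// (hypothetical inputs; `__init__` files map to their package name):
//
//   "foo/bar/__init__.py"  -> Some("foo.bar")
//   "foo/bar/__init__.pyi" -> Some("foo.bar")
//   "foo/baz.py"           -> Some("foo.baz")
//   "baz.py"               -> Some("baz")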
impl Deref for ModuleName {
type Target = str;
#[inline]
fn deref(&self) -> &Self::Target {
self.as_str()
}
}
impl PartialEq<str> for ModuleName {
fn eq(&self, other: &str) -> bool {
self.as_str() == other
}
}
impl PartialEq<ModuleName> for str {
fn eq(&self, other: &ModuleName) -> bool {
self == other.as_str()
}
}
impl std::fmt::Display for ModuleName {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(&self.0)
}
}
use crate::db::Db;
use crate::module_name::ModuleName;
use crate::path::{ModuleResolutionPathBuf, ModuleResolutionPathRef};
/// Representation of a Python module.
#[derive(Clone, PartialEq, Eq)]
@@ -194,8 +17,8 @@ impl Module {
pub(crate) fn new(
name: ModuleName,
kind: ModuleKind,
search_path: ModuleSearchPath,
file: VfsFile,
search_path: Arc<ModuleResolutionPathBuf>,
file: File,
) -> Self {
Self {
inner: Arc::new(ModuleInner {
@@ -213,13 +36,13 @@ impl Module {
}
/// The file containing the source code that defines this module
pub fn file(&self) -> VfsFile {
pub fn file(&self) -> File {
self.inner.file
}
/// The search path from which the module was resolved.
pub fn search_path(&self) -> &ModuleSearchPath {
&self.inner.search_path
pub(crate) fn search_path(&self) -> ModuleResolutionPathRef {
ModuleResolutionPathRef::from(&*self.inner.search_path)
}
/// Determine whether this module is a single-file module or a package
@@ -254,8 +77,8 @@ impl salsa::DebugWithDb<dyn Db> for Module {
struct ModuleInner {
name: ModuleName,
kind: ModuleKind,
search_path: ModuleSearchPath,
file: VfsFile,
search_path: Arc<ModuleResolutionPathBuf>,
file: File,
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
@@ -266,78 +89,3 @@ pub enum ModuleKind {
/// A python package (`foo/__init__.py` or `foo/__init__.pyi`)
Package,
}
/// A search path in which to search modules.
/// Corresponds to a path in [`sys.path`](https://docs.python.org/3/library/sys_path_init.html) at runtime.
///
/// Cloning a search path is cheap because it's an `Arc`.
#[derive(Clone, PartialEq, Eq)]
pub struct ModuleSearchPath {
inner: Arc<ModuleSearchPathInner>,
}
impl ModuleSearchPath {
pub fn new<P>(path: P, kind: ModuleSearchPathKind) -> Self
where
P: Into<VfsPath>,
{
Self {
inner: Arc::new(ModuleSearchPathInner {
path: path.into(),
kind,
}),
}
}
/// Determine whether this is a first-party, third-party or standard-library search path
pub fn kind(&self) -> ModuleSearchPathKind {
self.inner.kind
}
/// Return the location of the search path on the file system
pub fn path(&self) -> &VfsPath {
&self.inner.path
}
}
impl std::fmt::Debug for ModuleSearchPath {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ModuleSearchPath")
.field("path", &self.inner.path)
.field("kind", &self.kind())
.finish()
}
}
#[derive(Eq, PartialEq)]
struct ModuleSearchPathInner {
path: VfsPath,
kind: ModuleSearchPathKind,
}
/// Enumeration of the different kinds of search paths type checkers are expected to support.
///
/// N.B. Although we don't implement `Ord` for this enum, its variants are ordered in terms of
/// the priority that we want to give these search paths when resolving modules.
/// This is roughly [the order given in the typing spec], but typeshed's stubs
/// for the standard library are moved higher up to match Python's semantics at runtime.
///
/// [the order given in the typing spec]: https://typing.readthedocs.io/en/latest/spec/distributing.html#import-resolution-ordering
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub enum ModuleSearchPathKind {
/// "Extra" paths provided by the user in a config file, env var or CLI flag.
/// E.g. mypy's `MYPYPATH` env var, or pyright's `stubPath` configuration setting
Extra,
/// Files in the project we're directly being invoked on
FirstParty,
/// The `stdlib` directory of typeshed (either vendored or custom)
StandardLibrary,
/// Stubs or runtime modules installed in site-packages
SitePackagesThirdParty,
/// Vendored third-party stubs from typeshed
VendoredThirdParty,
}
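// Illustrative sketch (not part of the file above): one way a resolver
// could derive the resolution priority described in the doc comment,
// given that the enum deliberately does not implement `Ord`. The helper
// name is hypothetical:
fn resolution_priority(kind: ModuleSearchPathKind) -> u8 {
    match kind {
        ModuleSearchPathKind::Extra => 0,
        ModuleSearchPathKind::FirstParty => 1,
        ModuleSearchPathKind::StandardLibrary => 2,
        ModuleSearchPathKind::SitePackagesThirdParty => 3,
        ModuleSearchPathKind::VendoredThirdParty => 4,
    }
}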


@@ -0,0 +1,198 @@
use std::fmt;
use std::ops::Deref;
use compact_str::{CompactString, ToCompactString};
use ruff_python_stdlib::identifiers::is_identifier;
/// A module name, e.g. `foo.bar`.
///
/// Always normalized to the absolute form (never a relative module name, i.e., never `.foo`).
#[derive(Clone, Debug, Eq, PartialEq, Hash, PartialOrd, Ord)]
pub struct ModuleName(compact_str::CompactString);
impl ModuleName {
/// Creates a new module name for `name`. Returns `Some` if `name` is a valid, absolute
/// module name and `None` otherwise.
///
/// The module name is invalid if:
///
/// * The name is empty
/// * The name is relative
/// * The name ends with a `.`
/// * The name contains a sequence of multiple dots
/// * A component of a name (the part between two dots) isn't a valid Python identifier.
#[inline]
#[must_use]
pub fn new(name: &str) -> Option<Self> {
Self::is_valid_name(name).then(|| Self(CompactString::from(name)))
}
/// Creates a new module name for `name` where `name` is a static string.
/// Returns `Some` if `name` is a valid, absolute module name and `None` otherwise.
///
/// The module name is invalid if:
///
/// * The name is empty
/// * The name is relative
/// * The name ends with a `.`
/// * The name contains a sequence of multiple dots
/// * A component of a name (the part between two dots) isn't a valid Python identifier.
///
/// ## Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(ModuleName::new_static("foo.bar").as_deref(), Some("foo.bar"));
/// assert_eq!(ModuleName::new_static(""), None);
/// assert_eq!(ModuleName::new_static("..foo"), None);
/// assert_eq!(ModuleName::new_static(".foo"), None);
/// assert_eq!(ModuleName::new_static("foo."), None);
/// assert_eq!(ModuleName::new_static("foo..bar"), None);
/// assert_eq!(ModuleName::new_static("2000"), None);
/// ```
#[inline]
#[must_use]
pub fn new_static(name: &'static str) -> Option<Self> {
Self::is_valid_name(name).then(|| Self(CompactString::const_new(name)))
}
#[must_use]
fn is_valid_name(name: &str) -> bool {
!name.is_empty() && name.split('.').all(is_identifier)
}
/// An iterator over the components of the module name:
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(ModuleName::new_static("foo.bar.baz").unwrap().components().collect::<Vec<_>>(), vec!["foo", "bar", "baz"]);
/// ```
#[must_use]
pub fn components(&self) -> impl DoubleEndedIterator<Item = &str> {
self.0.split('.')
}
/// The name of this module's immediate parent, if it has a parent.
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(ModuleName::new_static("foo.bar").unwrap().parent(), Some(ModuleName::new_static("foo").unwrap()));
/// assert_eq!(ModuleName::new_static("foo.bar.baz").unwrap().parent(), Some(ModuleName::new_static("foo.bar").unwrap()));
/// assert_eq!(ModuleName::new_static("root").unwrap().parent(), None);
/// ```
#[must_use]
pub fn parent(&self) -> Option<ModuleName> {
let (parent, _) = self.0.rsplit_once('.')?;
Some(Self(parent.to_compact_string()))
}
/// Returns `true` if the name starts with `other`.
///
/// This is equivalent to checking if `self` is a sub-module of `other`.
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert!(ModuleName::new_static("foo.bar").unwrap().starts_with(&ModuleName::new_static("foo").unwrap()));
///
/// assert!(!ModuleName::new_static("foo.bar").unwrap().starts_with(&ModuleName::new_static("bar").unwrap()));
/// assert!(!ModuleName::new_static("foo_bar").unwrap().starts_with(&ModuleName::new_static("foo").unwrap()));
/// ```
#[must_use]
pub fn starts_with(&self, other: &ModuleName) -> bool {
let mut self_components = self.components();
let other_components = other.components();
for other_component in other_components {
if self_components.next() != Some(other_component) {
return false;
}
}
true
}
#[must_use]
#[inline]
pub fn as_str(&self) -> &str {
&self.0
}
/// Construct a [`ModuleName`] from a sequence of parts.
///
/// # Examples
///
/// ```
/// use red_knot_module_resolver::ModuleName;
///
/// assert_eq!(&*ModuleName::from_components(["a"]).unwrap(), "a");
/// assert_eq!(&*ModuleName::from_components(["a", "b"]).unwrap(), "a.b");
/// assert_eq!(&*ModuleName::from_components(["a", "b", "c"]).unwrap(), "a.b.c");
///
/// assert_eq!(ModuleName::from_components(["a-b"]), None);
/// assert_eq!(ModuleName::from_components(["a", "a-b"]), None);
/// assert_eq!(ModuleName::from_components(["a", "b", "a-b-c"]), None);
/// ```
#[must_use]
pub fn from_components<'a>(components: impl IntoIterator<Item = &'a str>) -> Option<Self> {
let mut components = components.into_iter();
let first_part = components.next()?;
if !is_identifier(first_part) {
return None;
}
let name = if let Some(second_part) = components.next() {
if !is_identifier(second_part) {
return None;
}
let mut name = format!("{first_part}.{second_part}");
for part in components {
if !is_identifier(part) {
return None;
}
name.push('.');
name.push_str(part);
}
CompactString::from(&name)
} else {
CompactString::from(first_part)
};
Some(Self(name))
}
}
impl Deref for ModuleName {
type Target = str;
#[inline]
fn deref(&self) -> &Self::Target {
self.as_str()
}
}
impl PartialEq<str> for ModuleName {
fn eq(&self, other: &str) -> bool {
self.as_str() == other
}
}
impl PartialEq<ModuleName> for str {
fn eq(&self, other: &ModuleName) -> bool {
self == other.as_str()
}
}
impl std::fmt::Display for ModuleName {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(&self.0)
}
}
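// Illustrative usage of the public `ModuleName` API defined above (not part
// of the original file; it gathers the doctest behavior in one place):
#[cfg(test)]
#[test]
fn module_name_usage_sketch() {
    let name = ModuleName::new_static("foo.bar.baz").unwrap();
    assert_eq!(name.parent(), ModuleName::new_static("foo.bar"));
    assert!(name.starts_with(&ModuleName::new_static("foo").unwrap()));
    assert_eq!(
        ModuleName::from_components(["foo", "bar", "baz"]).as_deref(),
        Some("foo.bar.baz")
    );
}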

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,30 @@
use ruff_db::program::TargetVersion;
use ruff_db::system::System;
use ruff_db::vendored::VendoredFileSystem;
use crate::db::Db;
use crate::typeshed::LazyTypeshedVersions;
pub(crate) struct ResolverState<'db> {
pub(crate) db: &'db dyn Db,
pub(crate) typeshed_versions: LazyTypeshedVersions<'db>,
pub(crate) target_version: TargetVersion,
}
impl<'db> ResolverState<'db> {
pub(crate) fn new(db: &'db dyn Db, target_version: TargetVersion) -> Self {
Self {
db,
typeshed_versions: LazyTypeshedVersions::new(),
target_version,
}
}
pub(crate) fn system(&self) -> &dyn System {
self.db.system()
}
pub(crate) fn vendored(&self) -> &VendoredFileSystem {
self.db.vendored()
}
}


@@ -0,0 +1,289 @@
use ruff_db::program::{Program, SearchPathSettings, TargetVersion};
use ruff_db::system::{DbWithTestSystem, SystemPath, SystemPathBuf};
use ruff_db::vendored::VendoredPathBuf;
use crate::db::tests::TestDb;
/// A test case for the module resolver.
///
/// You generally shouldn't construct instances of this struct directly;
/// instead, use the [`TestCaseBuilder`].
pub(crate) struct TestCase<T> {
pub(crate) db: TestDb,
pub(crate) src: SystemPathBuf,
pub(crate) stdlib: T,
pub(crate) site_packages: SystemPathBuf,
pub(crate) target_version: TargetVersion,
}
/// A `(file_name, file_contents)` tuple
pub(crate) type FileSpec = (&'static str, &'static str);
/// Specification for a typeshed mock to be created as part of a test
#[derive(Debug, Clone, Copy, Default)]
pub(crate) struct MockedTypeshed {
/// The stdlib files to be created in the typeshed mock
pub(crate) stdlib_files: &'static [FileSpec],
/// The contents of the `stdlib/VERSIONS` file
/// to be created in the typeshed mock
pub(crate) versions: &'static str,
}
#[derive(Debug)]
pub(crate) struct VendoredTypeshed;
#[derive(Debug)]
pub(crate) struct UnspecifiedTypeshed;
/// A builder for a module-resolver test case.
///
/// The builder takes care of creating a [`TestDb`]
/// instance, applying the module resolver settings,
/// and creating mock directories for the stdlib, `site-packages`,
/// first-party code, etc.
///
/// For simple tests that do not involve typeshed,
/// test cases can be created as follows:
///
/// ```rs
/// let test_case = TestCaseBuilder::new()
/// .with_src_files(...)
/// .build();
///
/// let test_case2 = TestCaseBuilder::new()
/// .with_site_packages_files(...)
/// .build();
/// ```
///
/// Any test can specify the target Python version that should be used
/// in the module resolver settings:
///
/// ```rs
/// let test_case = TestCaseBuilder::new()
/// .with_src_files(...)
/// .with_target_version(...)
/// .build();
/// ```
///
/// For tests checking that standard-library module resolution is working
/// correctly, you should usually create a [`MockedTypeshed`] instance
/// and pass it to the [`TestCaseBuilder::with_custom_typeshed`] method.
/// If you need to check something that involves the vendored typeshed stubs
/// we include as part of the binary, you can instead use the
/// [`TestCaseBuilder::with_vendored_typeshed`] method.
/// For either of these, you should almost always be explicit about
/// the Python version you want the module-resolver settings
/// for the test to use:
///
/// ```rs
/// const TYPESHED: MockedTypeshed = MockedTypeshed { ... };
///
/// let test_case = TestCaseBuilder::new()
/// .with_custom_typeshed(TYPESHED)
/// .with_target_version(...)
/// .build();
///
/// let test_case2 = TestCaseBuilder::new()
/// .with_vendored_typeshed()
/// .with_target_version(...)
/// .build();
/// ```
///
/// If you do not call either of those methods, the `stdlib` field
/// on the [`TestCase`] instance created by `.build()` will be set
/// to `()`.
pub(crate) struct TestCaseBuilder<T> {
typeshed_option: T,
target_version: TargetVersion,
first_party_files: Vec<FileSpec>,
site_packages_files: Vec<FileSpec>,
}
impl<T> TestCaseBuilder<T> {
/// Specify files to be created in the `src` mock directory
pub(crate) fn with_src_files(mut self, files: &[FileSpec]) -> Self {
self.first_party_files.extend(files.iter().copied());
self
}
/// Specify files to be created in the `site-packages` mock directory
pub(crate) fn with_site_packages_files(mut self, files: &[FileSpec]) -> Self {
self.site_packages_files.extend(files.iter().copied());
self
}
/// Specify the target Python version the module resolver should assume
pub(crate) fn with_target_version(mut self, target_version: TargetVersion) -> Self {
self.target_version = target_version;
self
}
fn write_mock_directory(
db: &mut TestDb,
location: impl AsRef<SystemPath>,
files: impl IntoIterator<Item = FileSpec>,
) -> SystemPathBuf {
let root = location.as_ref().to_path_buf();
db.write_files(
files
.into_iter()
.map(|(relative_path, contents)| (root.join(relative_path), contents)),
)
.unwrap();
root
}
}
impl TestCaseBuilder<UnspecifiedTypeshed> {
pub(crate) fn new() -> TestCaseBuilder<UnspecifiedTypeshed> {
Self {
typeshed_option: UnspecifiedTypeshed,
target_version: TargetVersion::default(),
first_party_files: vec![],
site_packages_files: vec![],
}
}
/// Use the vendored stdlib stubs included in the Ruff binary for this test case
pub(crate) fn with_vendored_typeshed(self) -> TestCaseBuilder<VendoredTypeshed> {
let TestCaseBuilder {
typeshed_option: _,
target_version,
first_party_files,
site_packages_files,
} = self;
TestCaseBuilder {
typeshed_option: VendoredTypeshed,
target_version,
first_party_files,
site_packages_files,
}
}
/// Use a mock typeshed directory for this test case
pub(crate) fn with_custom_typeshed(
self,
typeshed: MockedTypeshed,
) -> TestCaseBuilder<MockedTypeshed> {
let TestCaseBuilder {
typeshed_option: _,
target_version,
first_party_files,
site_packages_files,
} = self;
TestCaseBuilder {
typeshed_option: typeshed,
target_version,
first_party_files,
site_packages_files,
}
}
pub(crate) fn build(self) -> TestCase<()> {
let TestCase {
db,
src,
stdlib: _,
site_packages,
target_version,
} = self.with_custom_typeshed(MockedTypeshed::default()).build();
TestCase {
db,
src,
stdlib: (),
site_packages,
target_version,
}
}
}
impl TestCaseBuilder<MockedTypeshed> {
pub(crate) fn build(self) -> TestCase<SystemPathBuf> {
let TestCaseBuilder {
typeshed_option,
target_version,
first_party_files,
site_packages_files,
} = self;
let mut db = TestDb::new();
let site_packages =
Self::write_mock_directory(&mut db, "/site-packages", site_packages_files);
let src = Self::write_mock_directory(&mut db, "/src", first_party_files);
let typeshed = Self::build_typeshed_mock(&mut db, &typeshed_option);
Program::new(
&db,
target_version,
SearchPathSettings {
extra_paths: vec![],
workspace_root: src.clone(),
custom_typeshed: Some(typeshed.clone()),
site_packages: Some(site_packages.clone()),
},
);
TestCase {
db,
src,
stdlib: typeshed.join("stdlib"),
site_packages,
target_version,
}
}
fn build_typeshed_mock(db: &mut TestDb, typeshed_to_build: &MockedTypeshed) -> SystemPathBuf {
let typeshed = SystemPathBuf::from("/typeshed");
let MockedTypeshed {
stdlib_files,
versions,
} = typeshed_to_build;
Self::write_mock_directory(
db,
typeshed.join("stdlib"),
stdlib_files
.iter()
.copied()
.chain(std::iter::once(("VERSIONS", *versions))),
);
typeshed
}
}
impl TestCaseBuilder<VendoredTypeshed> {
pub(crate) fn build(self) -> TestCase<VendoredPathBuf> {
let TestCaseBuilder {
typeshed_option: VendoredTypeshed,
target_version,
first_party_files,
site_packages_files,
} = self;
let mut db = TestDb::new();
let site_packages =
Self::write_mock_directory(&mut db, "/site-packages", site_packages_files);
let src = Self::write_mock_directory(&mut db, "/src", first_party_files);
Program::new(
&db,
target_version,
SearchPathSettings {
extra_paths: vec![],
workspace_root: src.clone(),
custom_typeshed: None,
site_packages: Some(site_packages.clone()),
},
);
TestCase {
db,
src,
stdlib: VendoredPathBuf::from("stdlib"),
site_packages,
target_version,
}
}
}
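// A concrete sketch of the builder in use (the `foo.py` file spec is
// hypothetical; this mirrors how resolver tests construct simple cases):
#[allow(unused)]
fn builder_usage_sketch() -> TestCase<()> {
    TestCaseBuilder::new()
        .with_src_files(&[("foo.py", "x = 1")])
        .with_target_version(TargetVersion::Py38)
        .build()
}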


@@ -1,91 +1,8 @@
pub(crate) mod versions;
pub use self::vendored::vendored_typeshed_stubs;
pub(crate) use self::versions::{
parse_typeshed_versions, LazyTypeshedVersions, TypeshedVersionsQueryResult,
};
pub use self::versions::{TypeshedVersionsParseError, TypeshedVersionsParseErrorKind};
#[cfg(test)]
mod tests {
use std::io::{self, Read};
use std::path::Path;
use ruff_db::vendored::VendoredFileSystem;
use ruff_db::vfs::VendoredPath;
// The file path here is hardcoded in this crate's `build.rs` script.
// Luckily this crate will fail to build if this file isn't available at build time.
const TYPESHED_ZIP_BYTES: &[u8] =
include_bytes!(concat!(env!("OUT_DIR"), "/zipped_typeshed.zip"));
#[test]
fn typeshed_zip_created_at_build_time() {
let mut typeshed_zip_archive =
zip::ZipArchive::new(io::Cursor::new(TYPESHED_ZIP_BYTES)).unwrap();
let mut functools_module_stub = typeshed_zip_archive
.by_name("stdlib/functools.pyi")
.unwrap();
assert!(functools_module_stub.is_file());
let mut functools_module_stub_source = String::new();
functools_module_stub
.read_to_string(&mut functools_module_stub_source)
.unwrap();
assert!(functools_module_stub_source.contains("def update_wrapper("));
}
#[test]
fn typeshed_vfs_consistent_with_vendored_stubs() {
let vendored_typeshed_dir = Path::new("vendor/typeshed").canonicalize().unwrap();
let vendored_typeshed_stubs = VendoredFileSystem::new(TYPESHED_ZIP_BYTES).unwrap();
let mut empty_iterator = true;
for entry in walkdir::WalkDir::new(&vendored_typeshed_dir).min_depth(1) {
empty_iterator = false;
let entry = entry.unwrap();
let absolute_path = entry.path();
let file_type = entry.file_type();
let relative_path = absolute_path
.strip_prefix(&vendored_typeshed_dir)
.unwrap_or_else(|_| {
panic!("Expected {absolute_path:?} to be a child of {vendored_typeshed_dir:?}")
});
let vendored_path = <&VendoredPath>::try_from(relative_path)
.unwrap_or_else(|_| panic!("Expected {relative_path:?} to be valid UTF-8"));
assert!(
vendored_typeshed_stubs.exists(vendored_path),
"Expected {vendored_path:?} to exist in the `VendoredFileSystem`!
Vendored file system:
{vendored_typeshed_stubs:#?}
"
);
let vendored_path_kind = vendored_typeshed_stubs
.metadata(vendored_path)
.unwrap_or_else(|| {
panic!(
"Expected metadata for {vendored_path:?} to be retrievable from the `VendoredFileSystem!
Vendored file system:
{vendored_typeshed_stubs:#?}
"
)
})
.kind();
assert_eq!(
vendored_path_kind.is_directory(),
file_type.is_dir(),
"{vendored_path:?} had type {vendored_path_kind:?}, inconsistent with fs path {relative_path:?}: {file_type:?}"
);
}
assert!(
!empty_iterator,
"Expected there to be at least one file or directory in the vendored typeshed stubs!"
);
}
}
mod vendored;
mod versions;


@@ -0,0 +1,99 @@
use once_cell::sync::Lazy;
use ruff_db::vendored::VendoredFileSystem;
// The file path here is hardcoded in this crate's `build.rs` script.
// Luckily this crate will fail to build if this file isn't available at build time.
static TYPESHED_ZIP_BYTES: &[u8] = include_bytes!(concat!(env!("OUT_DIR"), "/zipped_typeshed.zip"));
pub fn vendored_typeshed_stubs() -> &'static VendoredFileSystem {
static VENDORED_TYPESHED_STUBS: Lazy<VendoredFileSystem> =
Lazy::new(|| VendoredFileSystem::new_static(TYPESHED_ZIP_BYTES).unwrap());
&VENDORED_TYPESHED_STUBS
}
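// A minimal sketch of using the lazily-initialized vendored stubs above;
// reading `stdlib/VERSIONS` is the same call the resolver itself makes:
#[cfg(test)]
#[test]
fn vendored_stubs_usage_sketch() {
    let versions = vendored_typeshed_stubs()
        .read_to_string("stdlib/VERSIONS")
        .unwrap();
    assert!(versions.contains("asyncio"));
}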
#[cfg(test)]
mod tests {
use std::io::{self, Read};
use std::path::Path;
use ruff_db::vendored::VendoredPath;
use super::*;
#[test]
fn typeshed_zip_created_at_build_time() {
let mut typeshed_zip_archive =
zip::ZipArchive::new(io::Cursor::new(TYPESHED_ZIP_BYTES)).unwrap();
let mut functools_module_stub = typeshed_zip_archive
.by_name("stdlib/functools.pyi")
.unwrap();
assert!(functools_module_stub.is_file());
let mut functools_module_stub_source = String::new();
functools_module_stub
.read_to_string(&mut functools_module_stub_source)
.unwrap();
assert!(functools_module_stub_source.contains("def update_wrapper("));
}
#[test]
fn typeshed_vfs_consistent_with_vendored_stubs() {
let vendored_typeshed_dir = Path::new("vendor/typeshed").canonicalize().unwrap();
let vendored_typeshed_stubs = vendored_typeshed_stubs();
let mut empty_iterator = true;
for entry in walkdir::WalkDir::new(&vendored_typeshed_dir).min_depth(1) {
empty_iterator = false;
let entry = entry.unwrap();
let absolute_path = entry.path();
let file_type = entry.file_type();
let relative_path = absolute_path
.strip_prefix(&vendored_typeshed_dir)
.unwrap_or_else(|_| {
panic!("Expected {absolute_path:?} to be a child of {vendored_typeshed_dir:?}")
});
let vendored_path = <&VendoredPath>::try_from(relative_path)
.unwrap_or_else(|_| panic!("Expected {relative_path:?} to be valid UTF-8"));
assert!(
vendored_typeshed_stubs.exists(vendored_path),
"Expected {vendored_path:?} to exist in the `VendoredFileSystem`!
Vendored file system:
{vendored_typeshed_stubs:#?}
"
);
let vendored_path_kind = vendored_typeshed_stubs
.metadata(vendored_path)
.unwrap_or_else(|_| {
panic!(
"Expected metadata for {vendored_path:?} to be retrievable from the `VendoredFileSystem!
Vendored file system:
{vendored_typeshed_stubs:#?}
"
)
})
.kind();
assert_eq!(
vendored_path_kind.is_directory(),
file_type.is_dir(),
"{vendored_path:?} had type {vendored_path_kind:?}, inconsistent with fs path {relative_path:?}: {file_type:?}"
);
}
assert!(
!empty_iterator,
"Expected there to be at least one file or directory in the vendored typeshed stubs!"
);
}
}


@@ -1,16 +1,97 @@
use std::cell::OnceCell;
use std::collections::BTreeMap;
use std::fmt;
use std::num::{NonZeroU16, NonZeroUsize};
use std::ops::{RangeFrom, RangeInclusive};
use std::str::FromStr;
use once_cell::sync::Lazy;
use ruff_db::program::TargetVersion;
use ruff_db::system::SystemPath;
use rustc_hash::FxHashMap;
use crate::module::ModuleName;
use ruff_db::files::{system_path_to_file, File};
#[derive(Debug, PartialEq, Eq)]
use crate::db::Db;
use crate::module_name::ModuleName;
use super::vendored::vendored_typeshed_stubs;
#[derive(Debug)]
pub(crate) struct LazyTypeshedVersions<'db>(OnceCell<&'db TypeshedVersions>);
impl<'db> LazyTypeshedVersions<'db> {
#[must_use]
pub(crate) fn new() -> Self {
Self(OnceCell::new())
}
/// Query whether a module exists at runtime in the stdlib on a certain Python version.
///
/// Simply probing whether a file exists in typeshed is insufficient for this question,
/// as a module in the stdlib may have been added in Python 3.10, but the typeshed stub
/// will still be available (either in a custom typeshed dir or in our vendored copy)
/// even if the user specified Python 3.8 as the target version.
///
/// For top-level modules and packages, the VERSIONS file can always provide an unambiguous answer
/// as to whether the module exists on the specified target version. However, VERSIONS does not
/// provide comprehensive information on all submodules, meaning that this method sometimes
/// returns [`TypeshedVersionsQueryResult::MaybeExists`].
/// See [`TypeshedVersionsQueryResult`] for more details.
#[must_use]
pub(crate) fn query_module(
&self,
db: &'db dyn Db,
module: &ModuleName,
stdlib_root: Option<&SystemPath>,
target_version: TargetVersion,
) -> TypeshedVersionsQueryResult {
let versions = self.0.get_or_init(|| {
let versions_path = if let Some(system_path) = stdlib_root {
system_path.join("VERSIONS")
} else {
return &VENDORED_VERSIONS;
};
let Some(versions_file) = system_path_to_file(db.upcast(), &versions_path) else {
todo!(
"Still need to figure out how to handle VERSIONS files being deleted \
from custom typeshed directories! Expected a file to exist at {versions_path}"
)
};
// TODO(Alex/Micha): If VERSIONS is invalid,
// this should invalidate not just the specific module resolution we're currently attempting,
// but all type inference that depends on any standard-library types.
// Unwrapping here is not correct...
parse_typeshed_versions(db, versions_file).as_ref().unwrap()
});
versions.query_module(module, PyVersion::from(target_version))
}
}
#[salsa::tracked(return_ref)]
pub(crate) fn parse_typeshed_versions(
db: &dyn Db,
versions_file: File,
) -> Result<TypeshedVersions, TypeshedVersionsParseError> {
// TODO: Handle IO errors
let file_content = versions_file
.read_to_string(db.upcast())
.unwrap_or_default();
file_content.parse()
}
static VENDORED_VERSIONS: Lazy<TypeshedVersions> = Lazy::new(|| {
TypeshedVersions::from_str(
&vendored_typeshed_stubs()
.read_to_string("stdlib/VERSIONS")
.unwrap(),
)
.unwrap()
});
#[derive(Debug, PartialEq, Eq, Clone)]
pub struct TypeshedVersionsParseError {
line_number: NonZeroU16,
line_number: Option<NonZeroU16>,
reason: TypeshedVersionsParseErrorKind,
}
@@ -20,10 +101,14 @@ impl fmt::Display for TypeshedVersionsParseError {
line_number,
reason,
} = self;
write!(
f,
"Error while parsing line {line_number} of typeshed's VERSIONS file: {reason}"
)
if let Some(line_number) = line_number {
write!(
f,
"Error while parsing line {line_number} of typeshed's VERSIONS file: {reason}"
)
} else {
write!(f, "Error while parsing typeshed's VERSIONS file: {reason}")
}
}
}
@@ -37,7 +122,7 @@ impl std::error::Error for TypeshedVersionsParseError {
}
}
#[derive(Debug, PartialEq, Eq)]
#[derive(Debug, PartialEq, Eq, Clone)]
pub enum TypeshedVersionsParseErrorKind {
TooManyLines(NonZeroUsize),
UnexpectedNumberOfColons,
@@ -81,38 +166,94 @@ impl fmt::Display for TypeshedVersionsParseErrorKind {
}
#[derive(Debug, PartialEq, Eq)]
pub struct TypeshedVersions(FxHashMap<ModuleName, PyVersionRange>);
pub(crate) struct TypeshedVersions(FxHashMap<ModuleName, PyVersionRange>);
impl TypeshedVersions {
pub fn len(&self) -> usize {
self.0.len()
#[must_use]
fn exact(&self, module_name: &ModuleName) -> Option<&PyVersionRange> {
self.0.get(module_name)
}
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
pub fn contains_module(&self, module_name: &ModuleName) -> bool {
self.0.contains_key(module_name)
}
pub fn module_exists_on_version(
#[must_use]
fn query_module(
&self,
module: ModuleName,
version: impl Into<PyVersion>,
) -> bool {
let version = version.into();
let mut module: Option<ModuleName> = Some(module);
while let Some(module_to_try) = module {
if let Some(range) = self.0.get(&module_to_try) {
return range.contains(version);
module: &ModuleName,
target_version: PyVersion,
) -> TypeshedVersionsQueryResult {
if let Some(range) = self.exact(module) {
if range.contains(target_version) {
TypeshedVersionsQueryResult::Exists
} else {
TypeshedVersionsQueryResult::DoesNotExist
}
module = module_to_try.parent();
} else {
let mut module = module.parent();
while let Some(module_to_try) = module {
if let Some(range) = self.exact(&module_to_try) {
return {
if range.contains(target_version) {
TypeshedVersionsQueryResult::MaybeExists
} else {
TypeshedVersionsQueryResult::DoesNotExist
}
};
}
module = module_to_try.parent();
}
TypeshedVersionsQueryResult::DoesNotExist
}
false
}
}
/// Possible answers [`LazyTypeshedVersions::query_module()`] could give to the question:
/// "Does this module exist in the stdlib at runtime on a certain target version?"
#[derive(Debug, Copy, PartialEq, Eq, Clone, Hash)]
pub(crate) enum TypeshedVersionsQueryResult {
/// The module definitely exists in the stdlib at runtime on the user-specified target version.
///
/// For example:
/// - The target version is Python 3.8
/// - We're querying whether the `asyncio.tasks` module exists in the stdlib
/// - The VERSIONS file contains the line `asyncio.tasks: 3.8-`
Exists,
/// The module definitely does not exist in the stdlib on the user-specified target version.
///
/// For example:
/// - We're querying whether the `foo` module exists in the stdlib
/// - There is no top-level `foo` module in VERSIONS
///
/// OR:
/// - The target version is Python 3.8
/// - We're querying whether the module `importlib.abc` exists in the stdlib
/// - The VERSIONS file contains the line `importlib.abc: 3.10-`,
/// indicating that the module was added in 3.10
///
/// OR:
/// - The target version is Python 3.8
/// - We're querying whether the module `collections.abc` exists in the stdlib
/// - The VERSIONS file does not contain any information about the `collections.abc` submodule,
/// but *does* contain the line `collections: 3.10-`,
/// indicating that the entire `collections` package was added in Python 3.10.
DoesNotExist,
/// The module potentially exists in the stdlib and, if it does,
/// it definitely exists on the user-specified target version.
///
/// This variant is only relevant for submodules,
/// for which the typeshed VERSIONS file does not provide comprehensive information.
/// (The VERSIONS file is guaranteed to provide information about all top-level stdlib modules and packages,
/// but not necessarily about all submodules within each top-level package.)
///
/// For example:
/// - The target version is Python 3.8
/// - We're querying whether the `asyncio.staggered` module exists in the stdlib
/// - The typeshed VERSIONS file contains the line `asyncio: 3.8`,
/// indicating that the `asyncio` package was added in Python 3.8,
/// but does not contain any explicit information about the `asyncio.staggered` submodule.
MaybeExists,
}
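// A short illustrative test of the three outcomes described above, in the
// style of the unit tests at the bottom of this file (the VERSIONS line is
// hypothetical):
#[cfg(test)]
#[test]
fn query_result_variants_sketch() {
    let versions = TypeshedVersions::from_str("asyncio: 3.8-").unwrap();
    let asyncio = ModuleName::new_static("asyncio").unwrap();
    let asyncio_staggered = ModuleName::new_static("asyncio.staggered").unwrap();
    // A top-level module listed in VERSIONS gets an unambiguous answer.
    assert_eq!(
        versions.query_module(&asyncio, TargetVersion::Py38.into()),
        TypeshedVersionsQueryResult::Exists
    );
    assert_eq!(
        versions.query_module(&asyncio, TargetVersion::Py37.into()),
        TypeshedVersionsQueryResult::DoesNotExist
    );
    // A submodule with no explicit entry can only be a "maybe" (or a
    // definite "no" if the parent package is out of range).
    assert_eq!(
        versions.query_module(&asyncio_staggered, TargetVersion::Py38.into()),
        TypeshedVersionsQueryResult::MaybeExists
    );
}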
impl FromStr for TypeshedVersions {
type Err = TypeshedVersionsParseError;
@@ -125,7 +266,7 @@ impl FromStr for TypeshedVersions {
let Ok(line_number) = NonZeroU16::try_from(line_number) else {
return Err(TypeshedVersionsParseError {
line_number: NonZeroU16::MAX,
line_number: None,
reason: TypeshedVersionsParseErrorKind::TooManyLines(line_number),
});
};
@@ -141,14 +282,14 @@ impl FromStr for TypeshedVersions {
let (Some(module_name), Some(rest), None) = (parts.next(), parts.next(), parts.next())
else {
return Err(TypeshedVersionsParseError {
line_number,
line_number: Some(line_number),
reason: TypeshedVersionsParseErrorKind::UnexpectedNumberOfColons,
});
};
let Some(module_name) = ModuleName::new(module_name) else {
return Err(TypeshedVersionsParseError {
line_number,
line_number: Some(line_number),
reason: TypeshedVersionsParseErrorKind::InvalidModuleName(
module_name.to_string(),
),
@@ -159,7 +300,7 @@ impl FromStr for TypeshedVersions {
Ok(version) => map.insert(module_name, version),
Err(reason) => {
return Err(TypeshedVersionsParseError {
line_number,
line_number: Some(line_number),
reason,
})
}
@@ -180,13 +321,14 @@ impl fmt::Display for TypeshedVersions {
}
}
#[derive(Debug, Clone, Eq, PartialEq)]
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
enum PyVersionRange {
AvailableFrom(RangeFrom<PyVersion>),
AvailableWithin(RangeInclusive<PyVersion>),
}
impl PyVersionRange {
#[must_use]
fn contains(&self, version: PyVersion) -> bool {
match self {
Self::AvailableFrom(inner) => inner.contains(&version),
@@ -222,7 +364,7 @@ impl fmt::Display for PyVersionRange {
}
#[derive(Debug, Clone, Copy, Eq, PartialEq, Ord, PartialOrd, Hash)]
pub struct PyVersion {
struct PyVersion {
major: u8,
minor: u8,
}
@@ -266,38 +408,25 @@ impl fmt::Display for PyVersion {
}
}
// TODO: unify with the PythonVersion enum in the linter/formatter crates?
#[derive(Copy, Clone, Hash, Debug, PartialEq, Eq, PartialOrd, Ord, Default)]
pub enum SupportedPyVersion {
Py37,
#[default]
Py38,
Py39,
Py310,
Py311,
Py312,
Py313,
}
impl From<SupportedPyVersion> for PyVersion {
fn from(value: SupportedPyVersion) -> Self {
impl From<TargetVersion> for PyVersion {
fn from(value: TargetVersion) -> Self {
match value {
SupportedPyVersion::Py37 => PyVersion { major: 3, minor: 7 },
SupportedPyVersion::Py38 => PyVersion { major: 3, minor: 8 },
SupportedPyVersion::Py39 => PyVersion { major: 3, minor: 9 },
SupportedPyVersion::Py310 => PyVersion {
TargetVersion::Py37 => PyVersion { major: 3, minor: 7 },
TargetVersion::Py38 => PyVersion { major: 3, minor: 8 },
TargetVersion::Py39 => PyVersion { major: 3, minor: 9 },
TargetVersion::Py310 => PyVersion {
major: 3,
minor: 10,
},
SupportedPyVersion::Py311 => PyVersion {
TargetVersion::Py311 => PyVersion {
major: 3,
minor: 11,
},
SupportedPyVersion::Py312 => PyVersion {
TargetVersion::Py312 => PyVersion {
major: 3,
minor: 12,
},
SupportedPyVersion::Py313 => PyVersion {
TargetVersion::Py313 => PyVersion {
major: 3,
minor: 13,
},
@@ -310,14 +439,27 @@ mod tests {
use std::num::{IntErrorKind, NonZeroU16};
use std::path::Path;
use super::*;
use insta::assert_snapshot;
use ruff_db::program::TargetVersion;
use super::*;
const TYPESHED_STDLIB_DIR: &str = "stdlib";
#[allow(unsafe_code)]
const ONE: NonZeroU16 = unsafe { NonZeroU16::new_unchecked(1) };
const ONE: Option<NonZeroU16> = Some(unsafe { NonZeroU16::new_unchecked(1) });
impl TypeshedVersions {
#[must_use]
fn contains_exact(&self, module: &ModuleName) -> bool {
self.exact(module).is_some()
}
#[must_use]
fn len(&self) -> usize {
self.0.len()
}
}
#[test]
fn can_parse_vendored_versions_file() {
@@ -334,18 +476,31 @@ mod tests {
let asyncio_staggered = ModuleName::new_static("asyncio.staggered").unwrap();
let audioop = ModuleName::new_static("audioop").unwrap();
assert!(versions.contains_module(&asyncio));
assert!(versions.module_exists_on_version(asyncio, SupportedPyVersion::Py310));
assert!(versions.contains_module(&asyncio_staggered));
assert!(
versions.module_exists_on_version(asyncio_staggered.clone(), SupportedPyVersion::Py38)
assert!(versions.contains_exact(&asyncio));
assert_eq!(
versions.query_module(&asyncio, TargetVersion::Py310.into()),
TypeshedVersionsQueryResult::Exists
);
assert!(!versions.module_exists_on_version(asyncio_staggered, SupportedPyVersion::Py37));
assert!(versions.contains_module(&audioop));
assert!(versions.module_exists_on_version(audioop.clone(), SupportedPyVersion::Py312));
assert!(!versions.module_exists_on_version(audioop, SupportedPyVersion::Py313));
assert!(versions.contains_exact(&asyncio_staggered));
assert_eq!(
versions.query_module(&asyncio_staggered, TargetVersion::Py38.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
versions.query_module(&asyncio_staggered, TargetVersion::Py37.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
assert!(versions.contains_exact(&audioop));
assert_eq!(
versions.query_module(&audioop, TargetVersion::Py312.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
versions.query_module(&audioop, TargetVersion::Py313.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
}
#[test]
@@ -393,7 +548,7 @@ mod tests {
let top_level_module = ModuleName::new(top_level_module)
.unwrap_or_else(|| panic!("{top_level_module:?} was not a valid module name!"));
assert!(vendored_typeshed_versions.contains_module(&top_level_module));
assert!(vendored_typeshed_versions.contains_exact(&top_level_module));
}
assert!(
@@ -426,30 +581,102 @@ foo: 3.8- # trailing comment
foo: 3.8-
"###
);
}
let foo = ModuleName::new_static("foo").unwrap();
#[test]
fn version_within_range_parsed_correctly() {
let parsed_versions = TypeshedVersions::from_str("bar: 2.7-3.10").unwrap();
let bar = ModuleName::new_static("bar").unwrap();
assert!(parsed_versions.contains_exact(&bar));
assert_eq!(
parsed_versions.query_module(&bar, TargetVersion::Py37.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
parsed_versions.query_module(&bar, TargetVersion::Py310.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
parsed_versions.query_module(&bar, TargetVersion::Py311.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
}
#[test]
fn version_from_range_parsed_correctly() {
let parsed_versions = TypeshedVersions::from_str("foo: 3.8-").unwrap();
let foo = ModuleName::new_static("foo").unwrap();
assert!(parsed_versions.contains_exact(&foo));
assert_eq!(
parsed_versions.query_module(&foo, TargetVersion::Py37.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
assert_eq!(
parsed_versions.query_module(&foo, TargetVersion::Py38.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
parsed_versions.query_module(&foo, TargetVersion::Py311.into()),
TypeshedVersionsQueryResult::Exists
);
}
#[test]
fn explicit_submodule_parsed_correctly() {
let parsed_versions = TypeshedVersions::from_str("bar.baz: 3.1-3.9").unwrap();
let bar_baz = ModuleName::new_static("bar.baz").unwrap();
assert!(parsed_versions.contains_exact(&bar_baz));
assert_eq!(
parsed_versions.query_module(&bar_baz, TargetVersion::Py37.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
parsed_versions.query_module(&bar_baz, TargetVersion::Py39.into()),
TypeshedVersionsQueryResult::Exists
);
assert_eq!(
parsed_versions.query_module(&bar_baz, TargetVersion::Py310.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
}
#[test]
fn implicit_submodule_queried_correctly() {
let parsed_versions = TypeshedVersions::from_str("bar: 2.7-3.10").unwrap();
let bar_eggs = ModuleName::new_static("bar.eggs").unwrap();
assert!(!parsed_versions.contains_exact(&bar_eggs));
assert_eq!(
parsed_versions.query_module(&bar_eggs, TargetVersion::Py37.into()),
TypeshedVersionsQueryResult::MaybeExists
);
assert_eq!(
parsed_versions.query_module(&bar_eggs, TargetVersion::Py310.into()),
TypeshedVersionsQueryResult::MaybeExists
);
assert_eq!(
parsed_versions.query_module(&bar_eggs, TargetVersion::Py311.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
}
#[test]
fn nonexistent_module_queried_correctly() {
let parsed_versions = TypeshedVersions::from_str("eggs: 3.8-").unwrap();
let spam = ModuleName::new_static("spam").unwrap();
assert!(parsed_versions.contains_module(&foo));
assert!(!parsed_versions.module_exists_on_version(foo.clone(), SupportedPyVersion::Py37));
assert!(parsed_versions.module_exists_on_version(foo.clone(), SupportedPyVersion::Py38));
assert!(parsed_versions.module_exists_on_version(foo, SupportedPyVersion::Py311));
assert!(parsed_versions.contains_module(&bar));
assert!(parsed_versions.module_exists_on_version(bar.clone(), SupportedPyVersion::Py37));
assert!(parsed_versions.module_exists_on_version(bar.clone(), SupportedPyVersion::Py310));
assert!(!parsed_versions.module_exists_on_version(bar, SupportedPyVersion::Py311));
assert!(parsed_versions.contains_module(&bar_baz));
assert!(parsed_versions.module_exists_on_version(bar_baz.clone(), SupportedPyVersion::Py37));
assert!(parsed_versions.module_exists_on_version(bar_baz.clone(), SupportedPyVersion::Py39));
assert!(!parsed_versions.module_exists_on_version(bar_baz, SupportedPyVersion::Py310));
assert!(!parsed_versions.contains_module(&spam));
assert!(!parsed_versions.module_exists_on_version(spam.clone(), SupportedPyVersion::Py37));
assert!(!parsed_versions.module_exists_on_version(spam, SupportedPyVersion::Py313));
assert!(!parsed_versions.contains_exact(&spam));
assert_eq!(
parsed_versions.query_module(&spam, TargetVersion::Py37.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
assert_eq!(
parsed_versions.query_module(&spam, TargetVersion::Py313.into()),
TypeshedVersionsQueryResult::DoesNotExist
);
}
#[test]
@@ -465,7 +692,7 @@ foo: 3.8- # trailing comment
assert_eq!(
TypeshedVersions::from_str(&massive_versions_file),
Err(TypeshedVersionsParseError {
line_number: NonZeroU16::MAX,
line_number: None,
reason: TypeshedVersionsParseErrorKind::TooManyLines(
NonZeroUsize::new(too_many + 1 - offset).unwrap()
)

View File

@@ -1 +1 @@
dcab6e88883c629ede9637fb011958f8b4918f52
f863db6bc5242348ceaa6a3bca4e59aa9e62faaa


@@ -35,6 +35,8 @@ _dummy_threading: 3.0-3.8
_heapq: 3.0-
_imp: 3.0-
_interpchannels: 3.13-
_interpqueues: 3.13-
_interpreters: 3.13-
_json: 3.0-
_locale: 3.0-
_lsprof: 3.0-
@@ -112,6 +114,7 @@ curses: 3.0-
dataclasses: 3.7-
datetime: 3.0-
dbm: 3.0-
dbm.sqlite3: 3.13-
decimal: 3.0-
difflib: 3.0-
dis: 3.0-
@@ -155,6 +158,7 @@ importlib: 3.0-
importlib._abc: 3.10-
importlib.metadata: 3.8-
importlib.metadata._meta: 3.10-
importlib.metadata.diagnose: 3.13-
importlib.readers: 3.10-
importlib.resources: 3.7-
importlib.resources.abc: 3.11-


@@ -70,6 +70,8 @@ _VT_co = TypeVar("_VT_co", covariant=True) # Value type covariant containers.
@final
class dict_keys(KeysView[_KT_co], Generic[_KT_co, _VT_co]): # undocumented
def __eq__(self, value: object, /) -> bool: ...
if sys.version_info >= (3, 13):
def isdisjoint(self, other: Iterable[_KT_co], /) -> bool: ...
if sys.version_info >= (3, 10):
@property
def mapping(self) -> MappingProxyType[_KT_co, _VT_co]: ...
@@ -83,6 +85,8 @@ class dict_values(ValuesView[_VT_co], Generic[_KT_co, _VT_co]): # undocumented
@final
class dict_items(ItemsView[_KT_co, _VT_co]): # undocumented
def __eq__(self, value: object, /) -> bool: ...
if sys.version_info >= (3, 13):
def isdisjoint(self, other: Iterable[tuple[_KT_co, _VT_co]], /) -> bool: ...
if sys.version_info >= (3, 10):
@property
def mapping(self) -> MappingProxyType[_KT_co, _VT_co]: ...


@@ -64,7 +64,6 @@ class _CData(metaclass=_CDataMeta):
# Structure.from_buffer(...) # valid at runtime
# Structure(...).from_buffer(...) # invalid at runtime
#
@classmethod
def from_buffer(cls, source: WriteableBuffer, offset: int = ...) -> Self: ...
@classmethod
@@ -100,8 +99,8 @@ class _Pointer(_PointerLike, _CData, Generic[_CT]):
def __getitem__(self, key: slice, /) -> list[Any]: ...
def __setitem__(self, key: int, value: Any, /) -> None: ...
def POINTER(type: type[_CT]) -> type[_Pointer[_CT]]: ...
def pointer(arg: _CT, /) -> _Pointer[_CT]: ...
def POINTER(type: type[_CT], /) -> type[_Pointer[_CT]]: ...
def pointer(obj: _CT, /) -> _Pointer[_CT]: ...
class _CArgObject: ...
@@ -203,9 +202,9 @@ class Array(_CData, Generic[_CT]):
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def addressof(obj: _CData) -> int: ...
def alignment(obj_or_type: _CData | type[_CData]) -> int: ...
def addressof(obj: _CData, /) -> int: ...
def alignment(obj_or_type: _CData | type[_CData], /) -> int: ...
def get_errno() -> int: ...
def resize(obj: _CData, size: int) -> None: ...
def set_errno(value: int) -> int: ...
def sizeof(obj_or_type: _CData | type[_CData]) -> int: ...
def resize(obj: _CData, size: int, /) -> None: ...
def set_errno(value: int, /) -> int: ...
def sizeof(obj_or_type: _CData | type[_CData], /) -> int: ...


@@ -0,0 +1,16 @@
from typing import Any, SupportsIndex
class QueueError(RuntimeError): ...
class QueueNotFoundError(QueueError): ...
def bind(qid: SupportsIndex) -> None: ...
def create(maxsize: SupportsIndex, fmt: SupportsIndex) -> int: ...
def destroy(qid: SupportsIndex) -> None: ...
def get(qid: SupportsIndex) -> tuple[Any, int]: ...
def get_count(qid: SupportsIndex) -> int: ...
def get_maxsize(qid: SupportsIndex) -> int: ...
def get_queue_defaults(qid: SupportsIndex) -> tuple[int]: ...
def is_full(qid: SupportsIndex) -> bool: ...
def list_all() -> list[tuple[int, int]]: ...
def put(qid: SupportsIndex, obj: Any, fmt: SupportsIndex) -> None: ...
def release(qid: SupportsIndex) -> None: ...


@@ -0,0 +1,50 @@
import types
from collections.abc import Callable, Mapping
from typing import Final, Literal, SupportsIndex
from typing_extensions import TypeAlias
_Configs: TypeAlias = Literal["default", "isolated", "legacy", "empty", ""]
class InterpreterError(Exception): ...
class InterpreterNotFoundError(InterpreterError): ...
class NotShareableError(Exception): ...
class CrossInterpreterBufferView:
def __buffer__(self, flags: int, /) -> memoryview: ...
def new_config(name: _Configs = "isolated", /, **overides: object) -> types.SimpleNamespace: ...
def create(config: types.SimpleNamespace | _Configs | None = "isolated", *, reqrefs: bool = False) -> int: ...
def destroy(id: SupportsIndex, *, restrict: bool = False) -> None: ...
def list_all(*, require_ready: bool) -> list[tuple[int, int]]: ...
def get_current() -> tuple[int, int]: ...
def get_main() -> tuple[int, int]: ...
def is_running(id: SupportsIndex, *, restrict: bool = False) -> bool: ...
def get_config(id: SupportsIndex, *, restrict: bool = False) -> types.SimpleNamespace: ...
def whence(id: SupportsIndex) -> int: ...
def exec(id: SupportsIndex, code: str, shared: bool | None = None, *, restrict: bool = False) -> None: ...
def call(
id: SupportsIndex,
callable: Callable[..., object],
args: tuple[object, ...] | None = None,
kwargs: dict[str, object] | None = None,
*,
restrict: bool = False,
) -> object: ...
def run_string(
id: SupportsIndex, script: str | types.CodeType | Callable[[], object], shared: bool | None = None, *, restrict: bool = False
) -> None: ...
def run_func(
id: SupportsIndex, func: types.CodeType | Callable[[], object], shared: bool | None = None, *, restrict: bool = False
) -> None: ...
def set___main___attrs(id: SupportsIndex, updates: Mapping[str, object], *, restrict: bool = False) -> None: ...
def incref(id: SupportsIndex, *, implieslink: bool = False, restrict: bool = False) -> None: ...
def decref(id: SupportsIndex, *, restrict: bool = False) -> None: ...
def is_shareable(obj: object) -> bool: ...
def capture_exception(exc: BaseException | None = None) -> types.SimpleNamespace: ...
WHENCE_UNKNOWN: Final = 0
WHENCE_RUNTIME: Final = 1
WHENCE_LEGACY_CAPI: Final = 2
WHENCE_CAPI: Final = 3
WHENCE_XI: Final = 4
WHENCE_STDLIB: Final = 5


@@ -13,7 +13,7 @@ error = RuntimeError
def _count() -> int: ...
@final
class LockType:
def acquire(self, blocking: bool = ..., timeout: float = ...) -> bool: ...
def acquire(self, blocking: bool = True, timeout: float = -1) -> bool: ...
def release(self) -> None: ...
def locked(self) -> bool: ...
def __enter__(self) -> bool: ...
@@ -22,14 +22,14 @@ class LockType:
) -> None: ...
@overload
def start_new_thread(function: Callable[[Unpack[_Ts]], object], args: tuple[Unpack[_Ts]]) -> int: ...
def start_new_thread(function: Callable[[Unpack[_Ts]], object], args: tuple[Unpack[_Ts]], /) -> int: ...
@overload
def start_new_thread(function: Callable[..., object], args: tuple[Any, ...], kwargs: dict[str, Any]) -> int: ...
def start_new_thread(function: Callable[..., object], args: tuple[Any, ...], kwargs: dict[str, Any], /) -> int: ...
def interrupt_main() -> None: ...
def exit() -> NoReturn: ...
def allocate_lock() -> LockType: ...
def get_ident() -> int: ...
def stack_size(size: int = ...) -> int: ...
def stack_size(size: int = 0, /) -> int: ...
TIMEOUT_MAX: float


@@ -28,17 +28,17 @@ class ABCMeta(type):
def register(cls: ABCMeta, subclass: type[_T]) -> type[_T]: ...
def abstractmethod(funcobj: _FuncT) -> _FuncT: ...
@deprecated("Deprecated, use 'classmethod' with 'abstractmethod' instead")
@deprecated("Use 'classmethod' with 'abstractmethod' instead")
class abstractclassmethod(classmethod[_T, _P, _R_co]):
__isabstractmethod__: Literal[True]
def __init__(self, callable: Callable[Concatenate[type[_T], _P], _R_co]) -> None: ...
@deprecated("Deprecated, use 'staticmethod' with 'abstractmethod' instead")
@deprecated("Use 'staticmethod' with 'abstractmethod' instead")
class abstractstaticmethod(staticmethod[_P, _R_co]):
__isabstractmethod__: Literal[True]
def __init__(self, callable: Callable[_P, _R_co]) -> None: ...
@deprecated("Deprecated, use 'property' with 'abstractmethod' instead")
@deprecated("Use 'property' with 'abstractmethod' instead")
class abstractproperty(property):
__isabstractmethod__: Literal[True]


@@ -49,6 +49,10 @@ class Server(AbstractServer):
ssl_handshake_timeout: float | None,
) -> None: ...
if sys.version_info >= (3, 13):
def close_clients(self) -> None: ...
def abort_clients(self) -> None: ...
def get_loop(self) -> AbstractEventLoop: ...
def is_serving(self) -> bool: ...
async def start_serving(self) -> None: ...
@@ -222,7 +226,48 @@ class BaseEventLoop(AbstractEventLoop):
happy_eyeballs_delay: float | None = None,
interleave: int | None = None,
) -> tuple[Transport, _ProtocolT]: ...
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
# 3.13 added `keep_alive`.
@overload
async def create_server(
self,
protocol_factory: _ProtocolFactory,
host: str | Sequence[str] | None = None,
port: int = ...,
*,
family: int = ...,
flags: int = ...,
sock: None = None,
backlog: int = 100,
ssl: _SSLContext = None,
reuse_address: bool | None = None,
reuse_port: bool | None = None,
keep_alive: bool | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
@overload
async def create_server(
self,
protocol_factory: _ProtocolFactory,
host: None = None,
port: None = None,
*,
family: int = ...,
flags: int = ...,
sock: socket = ...,
backlog: int = 100,
ssl: _SSLContext = None,
reuse_address: bool | None = None,
reuse_port: bool | None = None,
keep_alive: bool | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
elif sys.version_info >= (3, 11):
@overload
async def create_server(
self,
@@ -259,26 +304,6 @@ class BaseEventLoop(AbstractEventLoop):
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
async def start_tls(
self,
transport: BaseTransport,
protocol: BaseProtocol,
sslcontext: ssl.SSLContext,
*,
server_side: bool = False,
server_hostname: str | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
) -> Transport | None: ...
async def connect_accepted_socket(
self,
protocol_factory: Callable[[], _ProtocolT],
sock: socket,
*,
ssl: _SSLContext = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
) -> tuple[Transport, _ProtocolT]: ...
else:
@overload
async def create_server(
@@ -314,6 +339,29 @@ class BaseEventLoop(AbstractEventLoop):
ssl_handshake_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
if sys.version_info >= (3, 11):
async def start_tls(
self,
transport: BaseTransport,
protocol: BaseProtocol,
sslcontext: ssl.SSLContext,
*,
server_side: bool = False,
server_hostname: str | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
) -> Transport | None: ...
async def connect_accepted_socket(
self,
protocol_factory: Callable[[], _ProtocolT],
sock: socket,
*,
ssl: _SSLContext = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
) -> tuple[Transport, _ProtocolT]: ...
else:
async def start_tls(
self,
transport: BaseTransport,

View File

@@ -94,6 +94,12 @@ class TimerHandle(Handle):
class AbstractServer:
@abstractmethod
def close(self) -> None: ...
if sys.version_info >= (3, 13):
@abstractmethod
def close_clients(self) -> None: ...
@abstractmethod
def abort_clients(self) -> None: ...
async def __aenter__(self) -> Self: ...
async def __aexit__(self, *exc: Unused) -> None: ...
@abstractmethod
@@ -272,7 +278,50 @@ class AbstractEventLoop:
happy_eyeballs_delay: float | None = None,
interleave: int | None = None,
) -> tuple[Transport, _ProtocolT]: ...
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
# 3.13 added `keep_alive`.
@overload
@abstractmethod
async def create_server(
self,
protocol_factory: _ProtocolFactory,
host: str | Sequence[str] | None = None,
port: int = ...,
*,
family: int = ...,
flags: int = ...,
sock: None = None,
backlog: int = 100,
ssl: _SSLContext = None,
reuse_address: bool | None = None,
reuse_port: bool | None = None,
keep_alive: bool | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
@overload
@abstractmethod
async def create_server(
self,
protocol_factory: _ProtocolFactory,
host: None = None,
port: None = None,
*,
family: int = ...,
flags: int = ...,
sock: socket = ...,
backlog: int = 100,
ssl: _SSLContext = None,
reuse_address: bool | None = None,
reuse_port: bool | None = None,
keep_alive: bool | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
elif sys.version_info >= (3, 11):
@overload
@abstractmethod
async def create_server(
@@ -311,30 +360,6 @@ class AbstractEventLoop:
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
@abstractmethod
async def start_tls(
self,
transport: WriteTransport,
protocol: BaseProtocol,
sslcontext: ssl.SSLContext,
*,
server_side: bool = False,
server_hostname: str | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
) -> Transport | None: ...
async def create_unix_server(
self,
protocol_factory: _ProtocolFactory,
path: StrPath | None = None,
*,
sock: socket | None = None,
backlog: int = 100,
ssl: _SSLContext = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
else:
@overload
@abstractmethod
@@ -372,6 +397,33 @@ class AbstractEventLoop:
ssl_handshake_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
if sys.version_info >= (3, 11):
@abstractmethod
async def start_tls(
self,
transport: WriteTransport,
protocol: BaseProtocol,
sslcontext: ssl.SSLContext,
*,
server_side: bool = False,
server_hostname: str | None = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
) -> Transport | None: ...
async def create_unix_server(
self,
protocol_factory: _ProtocolFactory,
path: StrPath | None = None,
*,
sock: socket | None = None,
backlog: int = 100,
ssl: _SSLContext = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
else:
@abstractmethod
async def start_tls(
self,
@@ -394,6 +446,7 @@ class AbstractEventLoop:
ssl_handshake_timeout: float | None = None,
start_serving: bool = True,
) -> Server: ...
if sys.version_info >= (3, 11):
async def connect_accepted_socket(
self,

View File

@@ -1,4 +1,5 @@
import functools
import sys
import traceback
from collections.abc import Iterable
from types import FrameType, FunctionType
@@ -14,7 +15,17 @@ _FuncType: TypeAlias = FunctionType | _HasWrapper | functools.partial[Any] | fun
def _get_function_source(func: _FuncType) -> tuple[str, int]: ...
@overload
def _get_function_source(func: object) -> tuple[str, int] | None: ...
def _format_callback_source(func: object, args: Iterable[Any]) -> str: ...
def _format_args_and_kwargs(args: Iterable[Any], kwargs: dict[str, Any]) -> str: ...
def _format_callback(func: object, args: Iterable[Any], kwargs: dict[str, Any], suffix: str = "") -> str: ...
if sys.version_info >= (3, 13):
def _format_callback_source(func: object, args: Iterable[Any], *, debug: bool = False) -> str: ...
def _format_args_and_kwargs(args: Iterable[Any], kwargs: dict[str, Any], *, debug: bool = False) -> str: ...
def _format_callback(
func: object, args: Iterable[Any], kwargs: dict[str, Any], *, debug: bool = False, suffix: str = ""
) -> str: ...
else:
def _format_callback_source(func: object, args: Iterable[Any]) -> str: ...
def _format_args_and_kwargs(args: Iterable[Any], kwargs: dict[str, Any]) -> str: ...
def _format_callback(func: object, args: Iterable[Any], kwargs: dict[str, Any], suffix: str = "") -> str: ...
def extract_stack(f: FrameType | None = None, limit: int | None = None) -> traceback.StackSummary: ...

View File

@@ -10,13 +10,20 @@ if sys.version_info >= (3, 10):
else:
_LoopBoundMixin = object
__all__ = ("Queue", "PriorityQueue", "LifoQueue", "QueueFull", "QueueEmpty")
class QueueEmpty(Exception): ...
class QueueFull(Exception): ...
if sys.version_info >= (3, 13):
__all__ = ("Queue", "PriorityQueue", "LifoQueue", "QueueFull", "QueueEmpty", "QueueShutDown")
else:
__all__ = ("Queue", "PriorityQueue", "LifoQueue", "QueueFull", "QueueEmpty")
_T = TypeVar("_T")
if sys.version_info >= (3, 13):
class QueueShutDown(Exception): ...
# If Generic[_T] is last and _LoopBoundMixin is object, pyright is unhappy.
# We can remove the noqa pragma when dropping 3.9 support.
class Queue(Generic[_T], _LoopBoundMixin): # noqa: Y059
@@ -42,6 +49,8 @@ class Queue(Generic[_T], _LoopBoundMixin): # noqa: Y059
def task_done(self) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, type: Any, /) -> GenericAlias: ...
if sys.version_info >= (3, 13):
def shutdown(self, immediate: bool = False) -> None: ...
class PriorityQueue(Queue[_T]): ...
class LifoQueue(Queue[_T]): ...
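
For context, a minimal sketch of the queue-shutdown API these gated stubs describe (illustrative, not part of the diff; runs on Python 3.13+):

import asyncio

async def consumer(queue: asyncio.Queue[int]) -> None:
    while True:
        try:
            item = await queue.get()
        except asyncio.QueueShutDown:  # raised once the queue is shut down and drained
            return
        print("consumed", item)
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue[int] = asyncio.Queue()
    worker = asyncio.create_task(consumer(queue))
    for i in range(3):
        await queue.put(i)
    queue.shutdown()  # new in 3.13; shutdown(immediate=True) would also discard queued items
    await worker

asyncio.run(main())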

View File

@@ -2,6 +2,7 @@ import ssl
import sys
from _typeshed import ReadableBuffer, StrPath
from collections.abc import AsyncIterator, Awaitable, Callable, Iterable, Sequence, Sized
from types import ModuleType
from typing import Any, Protocol, SupportsIndex
from typing_extensions import Self, TypeAlias
@@ -130,7 +131,10 @@ class StreamWriter:
async def start_tls(
self, sslcontext: ssl.SSLContext, *, server_hostname: str | None = None, ssl_handshake_timeout: float | None = None
) -> None: ...
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
def __del__(self, warnings: ModuleType = ...) -> None: ...
elif sys.version_info >= (3, 11):
def __del__(self) -> None: ...
class StreamReader(AsyncIterator[bytes]):

View File

@@ -1,15 +1,55 @@
import sys
import types
from _typeshed import StrPath
from abc import ABCMeta, abstractmethod
from collections.abc import Callable
from socket import socket
from typing import Literal
from typing_extensions import Self, TypeVarTuple, Unpack, deprecated
from .base_events import Server, _ProtocolFactory, _SSLContext
from .events import AbstractEventLoop, BaseDefaultEventLoopPolicy
from .selector_events import BaseSelectorEventLoop
_Ts = TypeVarTuple("_Ts")
if sys.platform != "win32":
if sys.version_info >= (3, 14):
__all__ = ("SelectorEventLoop", "DefaultEventLoopPolicy", "EventLoop")
elif sys.version_info >= (3, 13):
__all__ = (
"SelectorEventLoop",
"AbstractChildWatcher",
"SafeChildWatcher",
"FastChildWatcher",
"PidfdChildWatcher",
"MultiLoopChildWatcher",
"ThreadedChildWatcher",
"DefaultEventLoopPolicy",
"EventLoop",
)
elif sys.version_info >= (3, 9):
__all__ = (
"SelectorEventLoop",
"AbstractChildWatcher",
"SafeChildWatcher",
"FastChildWatcher",
"PidfdChildWatcher",
"MultiLoopChildWatcher",
"ThreadedChildWatcher",
"DefaultEventLoopPolicy",
)
else:
__all__ = (
"SelectorEventLoop",
"AbstractChildWatcher",
"SafeChildWatcher",
"FastChildWatcher",
"MultiLoopChildWatcher",
"ThreadedChildWatcher",
"DefaultEventLoopPolicy",
)
# This is also technically not available on Win,
# but other parts of typeshed need this definition.
# So, it is special cased.
@@ -58,30 +98,6 @@ if sys.version_info < (3, 14):
def is_active(self) -> bool: ...
if sys.platform != "win32":
if sys.version_info >= (3, 14):
__all__ = ("SelectorEventLoop", "DefaultEventLoopPolicy")
elif sys.version_info >= (3, 9):
__all__ = (
"SelectorEventLoop",
"AbstractChildWatcher",
"SafeChildWatcher",
"FastChildWatcher",
"PidfdChildWatcher",
"MultiLoopChildWatcher",
"ThreadedChildWatcher",
"DefaultEventLoopPolicy",
)
else:
__all__ = (
"SelectorEventLoop",
"AbstractChildWatcher",
"SafeChildWatcher",
"FastChildWatcher",
"MultiLoopChildWatcher",
"ThreadedChildWatcher",
"DefaultEventLoopPolicy",
)
if sys.version_info < (3, 14):
if sys.version_info >= (3, 12):
# Doesn't actually have ABCMeta metaclass at runtime, but mypy complains if we don't have it in the stub.
@@ -141,7 +157,21 @@ if sys.platform != "win32":
) -> None: ...
def remove_child_handler(self, pid: int) -> bool: ...
class _UnixSelectorEventLoop(BaseSelectorEventLoop): ...
class _UnixSelectorEventLoop(BaseSelectorEventLoop):
if sys.version_info >= (3, 13):
async def create_unix_server( # type: ignore[override]
self,
protocol_factory: _ProtocolFactory,
path: StrPath | None = None,
*,
sock: socket | None = None,
backlog: int = 100,
ssl: _SSLContext = None,
ssl_handshake_timeout: float | None = None,
ssl_shutdown_timeout: float | None = None,
start_serving: bool = True,
cleanup_socket: bool = True,
) -> Server: ...
class _UnixDefaultEventLoopPolicy(BaseDefaultEventLoopPolicy):
if sys.version_info < (3, 14):
@@ -158,6 +188,9 @@ if sys.platform != "win32":
DefaultEventLoopPolicy = _UnixDefaultEventLoopPolicy
if sys.version_info >= (3, 13):
EventLoop = SelectorEventLoop
if sys.version_info < (3, 14):
if sys.version_info >= (3, 12):
@deprecated("Deprecated as of Python 3.12; will be removed in Python 3.14")

View File

@@ -7,14 +7,26 @@ from typing import IO, Any, ClassVar, Literal, NoReturn
from . import events, futures, proactor_events, selector_events, streams, windows_utils
if sys.platform == "win32":
__all__ = (
"SelectorEventLoop",
"ProactorEventLoop",
"IocpProactor",
"DefaultEventLoopPolicy",
"WindowsSelectorEventLoopPolicy",
"WindowsProactorEventLoopPolicy",
)
if sys.version_info >= (3, 13):
# 3.13 added `EventLoop`.
__all__ = (
"SelectorEventLoop",
"ProactorEventLoop",
"IocpProactor",
"DefaultEventLoopPolicy",
"WindowsSelectorEventLoopPolicy",
"WindowsProactorEventLoopPolicy",
"EventLoop",
)
else:
__all__ = (
"SelectorEventLoop",
"ProactorEventLoop",
"IocpProactor",
"DefaultEventLoopPolicy",
"WindowsSelectorEventLoopPolicy",
"WindowsProactorEventLoopPolicy",
)
NULL: Literal[0]
INFINITE: Literal[0xFFFFFFFF]
@@ -84,3 +96,5 @@ if sys.platform == "win32":
def set_child_watcher(self, watcher: Any) -> NoReturn: ...
DefaultEventLoopPolicy = WindowsSelectorEventLoopPolicy
if sys.version_info >= (3, 13):
EventLoop = ProactorEventLoop

View File

@@ -1,5 +1,5 @@
import sys
from _typeshed import ExcInfo, TraceFunction
from _typeshed import ExcInfo, TraceFunction, Unused
from collections.abc import Callable, Iterable, Mapping
from types import CodeType, FrameType, TracebackType
from typing import IO, Any, Literal, SupportsInt, TypeVar
@@ -32,6 +32,9 @@ class Bdb:
def dispatch_call(self, frame: FrameType, arg: None) -> TraceFunction: ...
def dispatch_return(self, frame: FrameType, arg: Any) -> TraceFunction: ...
def dispatch_exception(self, frame: FrameType, arg: ExcInfo) -> TraceFunction: ...
if sys.version_info >= (3, 13):
def dispatch_opcode(self, frame: FrameType, arg: Unused) -> Callable[[FrameType, str, Any], TraceFunction]: ...
def is_skipped_module(self, module_name: str) -> bool: ...
def stop_here(self, frame: FrameType) -> bool: ...
def break_here(self, frame: FrameType) -> bool: ...
@@ -42,7 +45,13 @@ class Bdb:
def user_return(self, frame: FrameType, return_value: Any) -> None: ...
def user_exception(self, frame: FrameType, exc_info: ExcInfo) -> None: ...
def set_until(self, frame: FrameType, lineno: int | None = None) -> None: ...
if sys.version_info >= (3, 13):
def user_opcode(self, frame: FrameType) -> None: ... # undocumented
def set_step(self) -> None: ...
if sys.version_info >= (3, 13):
def set_stepinstr(self) -> None: ... # undocumented
def set_next(self, frame: FrameType) -> None: ...
def set_return(self, frame: FrameType) -> None: ...
def set_trace(self, frame: FrameType | None = None) -> None: ...

View File

@@ -75,6 +75,7 @@ if sys.version_info >= (3, 9):
from types import GenericAlias
_T = TypeVar("_T")
_I = TypeVar("_I", default=int)
_T_co = TypeVar("_T_co", covariant=True)
_T_contra = TypeVar("_T_contra", contravariant=True)
_R_co = TypeVar("_R_co", covariant=True)
@@ -823,8 +824,12 @@ class bytearray(MutableSequence[int]):
def __buffer__(self, flags: int, /) -> memoryview: ...
def __release_buffer__(self, buffer: memoryview, /) -> None: ...
_IntegerFormats: TypeAlias = Literal[
"b", "B", "@b", "@B", "h", "H", "@h", "@H", "i", "I", "@i", "@I", "l", "L", "@l", "@L", "q", "Q", "@q", "@Q", "P", "@P"
]
@final
class memoryview(Sequence[int]):
class memoryview(Sequence[_I]):
@property
def format(self) -> str: ...
@property
@@ -854,13 +859,20 @@ class memoryview(Sequence[int]):
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None, /
) -> None: ...
def cast(self, format: str, shape: list[int] | tuple[int, ...] = ...) -> memoryview: ...
@overload
def __getitem__(self, key: SupportsIndex | tuple[SupportsIndex, ...], /) -> int: ...
def cast(self, format: Literal["c", "@c"], shape: list[int] | tuple[int, ...] = ...) -> memoryview[bytes]: ...
@overload
def __getitem__(self, key: slice, /) -> memoryview: ...
def cast(self, format: Literal["f", "@f", "d", "@d"], shape: list[int] | tuple[int, ...] = ...) -> memoryview[float]: ...
@overload
def cast(self, format: Literal["?"], shape: list[int] | tuple[int, ...] = ...) -> memoryview[bool]: ...
@overload
def cast(self, format: _IntegerFormats, shape: list[int] | tuple[int, ...] = ...) -> memoryview: ...
@overload
def __getitem__(self, key: SupportsIndex | tuple[SupportsIndex, ...], /) -> _I: ...
@overload
def __getitem__(self, key: slice, /) -> memoryview[_I]: ...
def __contains__(self, x: object, /) -> bool: ...
def __iter__(self) -> Iterator[int]: ...
def __iter__(self) -> Iterator[_I]: ...
def __len__(self) -> int: ...
def __eq__(self, value: object, /) -> bool: ...
def __hash__(self) -> int: ...
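
To illustrate what the memoryview generification above buys, a brief sketch (not part of the diff; the inferred types are what a checker using these stubs should report, while runtime behavior is unchanged):

import array

doubles = array.array("d", [1.0, 2.5, 4.0])
view = memoryview(doubles)
raw = view.cast("B")  # a checker can infer memoryview[int] from the integer format
back = raw.cast("d")  # and memoryview[float] here, via the Literal overloads above
print(back[1])        # 2.5 at runtime on any supported version
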
@@ -2006,9 +2018,9 @@ if sys.version_info >= (3, 10):
class EncodingWarning(Warning): ...
if sys.version_info >= (3, 11):
_BaseExceptionT_co = TypeVar("_BaseExceptionT_co", bound=BaseException, covariant=True)
_BaseExceptionT_co = TypeVar("_BaseExceptionT_co", bound=BaseException, covariant=True, default=BaseException)
_BaseExceptionT = TypeVar("_BaseExceptionT", bound=BaseException)
_ExceptionT_co = TypeVar("_ExceptionT_co", bound=Exception, covariant=True)
_ExceptionT_co = TypeVar("_ExceptionT_co", bound=Exception, covariant=True, default=Exception)
_ExceptionT = TypeVar("_ExceptionT", bound=Exception)
# See `check_exception_group.py` for use-cases and comments.
@@ -2072,5 +2084,4 @@ if sys.version_info >= (3, 11):
) -> tuple[ExceptionGroup[_ExceptionT_co] | None, ExceptionGroup[_ExceptionT_co] | None]: ...
if sys.version_info >= (3, 13):
class IncompleteInputError(SyntaxError): ...
class PythonFinalizationError(RuntimeError): ...

View File

@@ -1,3 +1,5 @@
import sys
from ._base import (
ALL_COMPLETED as ALL_COMPLETED,
FIRST_COMPLETED as FIRST_COMPLETED,
@@ -14,19 +16,36 @@ from ._base import (
from .process import ProcessPoolExecutor as ProcessPoolExecutor
from .thread import ThreadPoolExecutor as ThreadPoolExecutor
__all__ = (
"FIRST_COMPLETED",
"FIRST_EXCEPTION",
"ALL_COMPLETED",
"CancelledError",
"TimeoutError",
"BrokenExecutor",
"Future",
"Executor",
"wait",
"as_completed",
"ProcessPoolExecutor",
"ThreadPoolExecutor",
)
if sys.version_info >= (3, 13):
__all__ = (
"FIRST_COMPLETED",
"FIRST_EXCEPTION",
"ALL_COMPLETED",
"CancelledError",
"TimeoutError",
"InvalidStateError",
"BrokenExecutor",
"Future",
"Executor",
"wait",
"as_completed",
"ProcessPoolExecutor",
"ThreadPoolExecutor",
)
else:
__all__ = (
"FIRST_COMPLETED",
"FIRST_EXCEPTION",
"ALL_COMPLETED",
"CancelledError",
"TimeoutError",
"BrokenExecutor",
"Future",
"Executor",
"wait",
"as_completed",
"ProcessPoolExecutor",
"ThreadPoolExecutor",
)
def __dir__() -> tuple[str, ...]: ...

View File

@@ -19,6 +19,9 @@ if sys.platform != "win32":
def reorganize(self) -> None: ...
def sync(self) -> None: ...
def close(self) -> None: ...
if sys.version_info >= (3, 13):
def clear(self) -> None: ...
def __getitem__(self, item: _KeyType) -> bytes: ...
def __setitem__(self, key: _KeyType, value: _ValueType) -> None: ...
def __delitem__(self, key: _KeyType) -> None: ...

View File

@@ -15,6 +15,9 @@ if sys.platform != "win32":
# Actual typename dbm, not exposed by the implementation
class _dbm:
def close(self) -> None: ...
if sys.version_info >= (3, 13):
def clear(self) -> None: ...
def __getitem__(self, item: _KeyType) -> bytes: ...
def __setitem__(self, key: _KeyType, value: _ValueType) -> None: ...
def __delitem__(self, key: _KeyType) -> None: ...

View File

@@ -0,0 +1,29 @@
from _typeshed import ReadableBuffer, StrOrBytesPath, Unused
from collections.abc import Generator, MutableMapping
from typing import Final, Literal
from typing_extensions import LiteralString, Self, TypeAlias
BUILD_TABLE: Final[LiteralString]
GET_SIZE: Final[LiteralString]
LOOKUP_KEY: Final[LiteralString]
STORE_KV: Final[LiteralString]
DELETE_KEY: Final[LiteralString]
ITER_KEYS: Final[LiteralString]
_SqliteData: TypeAlias = str | ReadableBuffer | int | float
class error(OSError): ...
class _Database(MutableMapping[bytes, bytes]):
def __init__(self, path: StrOrBytesPath, /, *, flag: Literal["r", "w", "c", "n"], mode: int) -> None: ...
def __len__(self) -> int: ...
def __getitem__(self, key: _SqliteData) -> bytes: ...
def __setitem__(self, key: _SqliteData, value: _SqliteData) -> None: ...
def __delitem__(self, key: _SqliteData) -> None: ...
def __iter__(self) -> Generator[bytes]: ...
def close(self) -> None: ...
def keys(self) -> list[bytes]: ... # type: ignore[override]
def __enter__(self) -> Self: ...
def __exit__(self, *args: Unused) -> None: ...
def open(filename: StrOrBytesPath, /, flag: Literal["r", "w,", "c", "n"] = "r", mode: int = 0o666) -> _Database: ...

View File

@@ -31,6 +31,9 @@ __all__ = [
"EXTENDED_ARG",
"stack_effect",
]
if sys.version_info >= (3, 13):
__all__ += ["hasjump"]
if sys.version_info >= (3, 12):
__all__ += ["hasarg", "hasexc"]
else:
@@ -86,12 +89,41 @@ else:
is_jump_target: bool
class Instruction(_Instruction):
def _disassemble(self, lineno_width: int = 3, mark_as_current: bool = False, offset_width: int = 4) -> str: ...
if sys.version_info < (3, 13):
def _disassemble(self, lineno_width: int = 3, mark_as_current: bool = False, offset_width: int = 4) -> str: ...
if sys.version_info >= (3, 13):
@property
def oparg(self) -> int: ...
@property
def baseopcode(self) -> int: ...
@property
def baseopname(self) -> str: ...
@property
def cache_offset(self) -> int: ...
@property
def end_offset(self) -> int: ...
@property
def jump_target(self) -> int: ...
@property
def is_jump_target(self) -> bool: ...
class Bytecode:
codeobj: types.CodeType
first_line: int
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
show_offsets: bool
# 3.13 added `show_offsets`
def __init__(
self,
x: _HaveCodeType | str,
*,
first_line: int | None = None,
current_offset: int | None = None,
show_caches: bool = False,
adaptive: bool = False,
show_offsets: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
x: _HaveCodeType | str,
@@ -101,12 +133,15 @@ class Bytecode:
show_caches: bool = False,
adaptive: bool = False,
) -> None: ...
@classmethod
def from_traceback(cls, tb: types.TracebackType, *, show_caches: bool = False, adaptive: bool = False) -> Self: ...
else:
def __init__(
self, x: _HaveCodeType | str, *, first_line: int | None = None, current_offset: int | None = None
) -> None: ...
if sys.version_info >= (3, 11):
@classmethod
def from_traceback(cls, tb: types.TracebackType, *, show_caches: bool = False, adaptive: bool = False) -> Self: ...
else:
@classmethod
def from_traceback(cls, tb: types.TracebackType) -> Self: ...
@@ -121,7 +156,41 @@ def findlinestarts(code: _HaveCodeType) -> Iterator[tuple[int, int]]: ...
def pretty_flags(flags: int) -> str: ...
def code_info(x: _HaveCodeType | str) -> str: ...
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
# 3.13 added `show_offsets`
def dis(
x: _HaveCodeType | str | bytes | bytearray | None = None,
*,
file: IO[str] | None = None,
depth: int | None = None,
show_caches: bool = False,
adaptive: bool = False,
show_offsets: bool = False,
) -> None: ...
def disassemble(
co: _HaveCodeType,
lasti: int = -1,
*,
file: IO[str] | None = None,
show_caches: bool = False,
adaptive: bool = False,
show_offsets: bool = False,
) -> None: ...
def distb(
tb: types.TracebackType | None = None,
*,
file: IO[str] | None = None,
show_caches: bool = False,
adaptive: bool = False,
show_offsets: bool = False,
) -> None: ...
# 3.13 made `show_cache` `None` by default
def get_instructions(
x: _HaveCodeType, *, first_line: int | None = None, show_caches: bool | None = None, adaptive: bool = False
) -> Iterator[Instruction]: ...
elif sys.version_info >= (3, 11):
# 3.11 added `show_caches` and `adaptive`
def dis(
x: _HaveCodeType | str | bytes | bytearray | None = None,
*,
@@ -130,19 +199,9 @@ if sys.version_info >= (3, 11):
show_caches: bool = False,
adaptive: bool = False,
) -> None: ...
else:
def dis(
x: _HaveCodeType | str | bytes | bytearray | None = None, *, file: IO[str] | None = None, depth: int | None = None
) -> None: ...
if sys.version_info >= (3, 11):
def disassemble(
co: _HaveCodeType, lasti: int = -1, *, file: IO[str] | None = None, show_caches: bool = False, adaptive: bool = False
) -> None: ...
def disco(
co: _HaveCodeType, lasti: int = -1, *, file: IO[str] | None = None, show_caches: bool = False, adaptive: bool = False
) -> None: ...
def distb(
tb: types.TracebackType | None = None, *, file: IO[str] | None = None, show_caches: bool = False, adaptive: bool = False
) -> None: ...
@@ -151,9 +210,13 @@ if sys.version_info >= (3, 11):
) -> Iterator[Instruction]: ...
else:
def dis(
x: _HaveCodeType | str | bytes | bytearray | None = None, *, file: IO[str] | None = None, depth: int | None = None
) -> None: ...
def disassemble(co: _HaveCodeType, lasti: int = -1, *, file: IO[str] | None = None) -> None: ...
def disco(co: _HaveCodeType, lasti: int = -1, *, file: IO[str] | None = None) -> None: ...
def distb(tb: types.TracebackType | None = None, *, file: IO[str] | None = None) -> None: ...
def get_instructions(x: _HaveCodeType, *, first_line: int | None = None) -> Iterator[Instruction]: ...
def show_code(co: _HaveCodeType, *, file: IO[str] | None = None) -> None: ...
disco = disassemble
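
A quick sketch of the 3.13 show_offsets flag that the reshuffled overloads above account for (illustrative, not part of the diff):

import dis

def add(a: int, b: int) -> int:
    return a + b

dis.dis(add, show_offsets=True)  # 3.13+: prefixes each instruction with its byte offset
dis.dis(add)                     # portable call, works on all supported versions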

View File

@@ -1,6 +1,7 @@
import datetime
import sys
from _typeshed import Unused
from collections.abc import Iterable
from email import _ParamType
from email.charset import Charset
from typing import overload
@@ -28,9 +29,21 @@ _PDTZ: TypeAlias = tuple[int, int, int, int, int, int, int, int, int, int | None
def quote(str: str) -> str: ...
def unquote(str: str) -> str: ...
def parseaddr(addr: str | None) -> tuple[str, str]: ...
if sys.version_info >= (3, 13):
def parseaddr(addr: str | list[str], *, strict: bool = True) -> tuple[str, str]: ...
else:
def parseaddr(addr: str) -> tuple[str, str]: ...
def formataddr(pair: tuple[str | None, str], charset: str | Charset = "utf-8") -> str: ...
def getaddresses(fieldvalues: list[str]) -> list[tuple[str, str]]: ...
if sys.version_info >= (3, 13):
def getaddresses(fieldvalues: Iterable[str], *, strict: bool = True) -> list[tuple[str, str]]: ...
else:
def getaddresses(fieldvalues: Iterable[str]) -> list[tuple[str, str]]: ...
@overload
def parsedate(data: None) -> None: ...
@overload

View File
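
The email.utils hunk above mirrors the strict address parsing added in 3.13 (and backported as a security fix). A hedged sketch of the runtime behavior on an interpreter that has the strict flag:

from email.utils import getaddresses, parseaddr

fields = ['Alice <alice@example.com>, "Bob, Builder" <bob@example.com>']
print(getaddresses(fields))                    # strict=True is the default
print(getaddresses(fields, strict=False))      # legacy, more permissive splitting
print(parseaddr("Alice <alice@example.com>"))  # ('Alice', 'alice@example.com')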

@@ -1,6 +1,7 @@
import abc
import pathlib
import sys
import types
from _collections_abc import dict_keys, dict_values
from _typeshed import StrPath
from collections.abc import Iterable, Iterator, Mapping
@@ -36,11 +37,8 @@ if sys.version_info >= (3, 10):
from importlib.metadata._meta import PackageMetadata as PackageMetadata, SimplePath
def packages_distributions() -> Mapping[str, list[str]]: ...
if sys.version_info >= (3, 12):
# It's generic but shouldn't be
_SimplePath: TypeAlias = SimplePath[Any]
else:
_SimplePath: TypeAlias = SimplePath
_SimplePath: TypeAlias = SimplePath
else:
_SimplePath: TypeAlias = Path
@@ -48,7 +46,9 @@ class PackageNotFoundError(ModuleNotFoundError):
@property
def name(self) -> str: ... # type: ignore[override]
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
_EntryPointBase = object
elif sys.version_info >= (3, 11):
class DeprecatedTuple:
def __getitem__(self, item: int) -> str: ...
@@ -226,6 +226,9 @@ class Distribution(_distribution_parent):
if sys.version_info >= (3, 10):
@property
def name(self) -> str: ...
if sys.version_info >= (3, 13):
@property
def origin(self) -> types.SimpleNamespace: ...
class DistributionFinder(MetaPathFinder):
class Context:

View File

@@ -1,9 +1,12 @@
import sys
from _typeshed import StrPath
from collections.abc import Iterator
from typing import Any, Protocol, TypeVar, overload
from os import PathLike
from typing import Any, Protocol, overload
from typing_extensions import TypeVar
_T = TypeVar("_T")
_T_co = TypeVar("_T_co", covariant=True)
_T_co = TypeVar("_T_co", covariant=True, default=Any)
class PackageMetadata(Protocol):
def __len__(self) -> int: ...
@@ -22,7 +25,18 @@ class PackageMetadata(Protocol):
@overload
def get(self, name: str, failobj: _T) -> _T | str: ...
if sys.version_info >= (3, 12):
if sys.version_info >= (3, 13):
class SimplePath(Protocol):
def joinpath(self, other: StrPath, /) -> SimplePath: ...
def __truediv__(self, other: StrPath, /) -> SimplePath: ...
# Incorrect at runtime
@property
def parent(self) -> PathLike[str]: ...
def read_text(self, encoding: str | None = None) -> str: ...
def read_bytes(self) -> bytes: ...
def exists(self) -> bool: ...
elif sys.version_info >= (3, 12):
class SimplePath(Protocol[_T_co]):
# At runtime this is defined as taking `str | _T`, but that causes trouble.
# See #11436.

View File

@@ -0,0 +1,2 @@
def inspect(path: str) -> None: ...
def run() -> None: ...

View File

@@ -176,20 +176,24 @@ TPFLAGS_IS_ABSTRACT: Literal[1048576]
modulesbyfile: dict[str, Any]
_GetMembersPredicateTypeGuard: TypeAlias = Callable[[Any], TypeGuard[_T]]
_GetMembersPredicateTypeIs: TypeAlias = Callable[[Any], TypeIs[_T]]
_GetMembersPredicate: TypeAlias = Callable[[Any], bool]
_GetMembersReturnTypeGuard: TypeAlias = list[tuple[str, _T]]
_GetMembersReturn: TypeAlias = list[tuple[str, Any]]
_GetMembersReturn: TypeAlias = list[tuple[str, _T]]
@overload
def getmembers(object: object, predicate: _GetMembersPredicateTypeGuard[_T]) -> _GetMembersReturnTypeGuard[_T]: ...
def getmembers(object: object, predicate: _GetMembersPredicateTypeGuard[_T]) -> _GetMembersReturn[_T]: ...
@overload
def getmembers(object: object, predicate: _GetMembersPredicate | None = None) -> _GetMembersReturn: ...
def getmembers(object: object, predicate: _GetMembersPredicateTypeIs[_T]) -> _GetMembersReturn[_T]: ...
@overload
def getmembers(object: object, predicate: _GetMembersPredicate | None = None) -> _GetMembersReturn[Any]: ...
if sys.version_info >= (3, 11):
@overload
def getmembers_static(object: object, predicate: _GetMembersPredicateTypeGuard[_T]) -> _GetMembersReturnTypeGuard[_T]: ...
def getmembers_static(object: object, predicate: _GetMembersPredicateTypeGuard[_T]) -> _GetMembersReturn[_T]: ...
@overload
def getmembers_static(object: object, predicate: _GetMembersPredicate | None = None) -> _GetMembersReturn: ...
def getmembers_static(object: object, predicate: _GetMembersPredicateTypeIs[_T]) -> _GetMembersReturn[_T]: ...
@overload
def getmembers_static(object: object, predicate: _GetMembersPredicate | None = None) -> _GetMembersReturn[Any]: ...
def getmodulename(path: StrPath) -> str | None: ...
def ismodule(object: object) -> TypeIs[ModuleType]: ...
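
The TypeIs overloads above let a predicate narrow the member type. A sketch (the noted inference is what a checker on these stubs should produce; runtime output is unaffected):

import inspect
import json

# For a type checker this is list[tuple[str, FunctionType]]; at runtime it is
# the usual list of (name, object) pairs.
for name, func in inspect.getmembers(json, inspect.isfunction):
    print(name, func.__module__)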

View File

@@ -6,7 +6,7 @@ from _typeshed import FileDescriptorOrPath, ReadableBuffer, WriteableBuffer
from collections.abc import Callable, Iterable, Iterator
from os import _Opener
from types import TracebackType
from typing import IO, Any, BinaryIO, Literal, Protocol, TextIO, TypeVar, overload, type_check_only
from typing import IO, Any, BinaryIO, Generic, Literal, Protocol, TextIO, TypeVar, overload, type_check_only
from typing_extensions import Self
__all__ = [
@@ -173,12 +173,12 @@ class _WrappedBuffer(Protocol):
# def seek(self, offset: Literal[0], whence: Literal[2]) -> int: ...
# def tell(self) -> int: ...
# TODO: Should be generic over the buffer type, but needs to wait for
# TypeVar defaults.
class TextIOWrapper(TextIOBase, TextIO): # type: ignore[misc] # incompatible definitions of write in the base classes
_BufferT_co = TypeVar("_BufferT_co", bound=_WrappedBuffer, default=_WrappedBuffer, covariant=True)
class TextIOWrapper(TextIOBase, TextIO, Generic[_BufferT_co]): # type: ignore[misc] # incompatible definitions of write in the base classes
def __init__(
self,
buffer: _WrappedBuffer,
buffer: _BufferT_co,
encoding: str | None = None,
errors: str | None = None,
newline: str | None = None,
@@ -187,7 +187,7 @@ class TextIOWrapper(TextIOBase, TextIO): # type: ignore[misc] # incompatible d
) -> None: ...
# Equals the "buffer" argument passed in to the constructor.
@property
def buffer(self) -> BinaryIO: ...
def buffer(self) -> _BufferT_co: ... # type: ignore[override]
@property
def closed(self) -> bool: ...
@property
@@ -211,7 +211,7 @@ class TextIOWrapper(TextIOBase, TextIO): # type: ignore[misc] # incompatible d
def readline(self, size: int = -1, /) -> str: ... # type: ignore[override]
def readlines(self, hint: int = -1, /) -> list[str]: ... # type: ignore[override]
# Equals the "buffer" argument passed in to the constructor.
def detach(self) -> BinaryIO: ...
def detach(self) -> _BufferT_co: ... # type: ignore[override]
# TextIOWrapper's version of seek only supports a limited subset of
# operations.
def seek(self, cookie: int, whence: int = 0, /) -> int: ...
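
A sketch of what parameterizing TextIOWrapper over its buffer means in practice (typing-only; BytesIO here is just one buffer type satisfying the _WrappedBuffer protocol):

import io

raw = io.BytesIO(b"hello")
wrapper = io.TextIOWrapper(raw, encoding="utf-8")  # TextIOWrapper[BytesIO] per these stubs
buffer = wrapper.detach()  # now typed as BytesIO rather than the wider BinaryIO
print(buffer.getvalue())   # b'hello', with BytesIO-specific API still visible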

View File

@@ -326,6 +326,10 @@ if sys.version_info >= (3, 10):
if sys.version_info >= (3, 12):
class batched(Iterator[tuple[_T_co, ...]], Generic[_T_co]):
def __new__(cls, iterable: Iterable[_T_co], n: int) -> Self: ...
if sys.version_info >= (3, 13):
def __new__(cls, iterable: Iterable[_T_co], n: int, *, strict: bool = False) -> Self: ...
else:
def __new__(cls, iterable: Iterable[_T_co], n: int) -> Self: ...
def __iter__(self) -> Self: ...
def __next__(self) -> tuple[_T_co, ...]: ...
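
The 3.13 strict flag gated above rejects a short final batch; a minimal sketch:

from itertools import batched

print(list(batched("ABCDE", 2)))  # [('A', 'B'), ('C', 'D'), ('E',)]
try:
    list(batched("ABCDE", 2, strict=True))  # 3.13+
except ValueError as exc:
    print(exc)  # batched(): incomplete batch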

View File

@@ -115,6 +115,14 @@ class Maildir(Mailbox[MaildirMessage]):
def get_message(self, key: str) -> MaildirMessage: ...
def get_bytes(self, key: str) -> bytes: ...
def get_file(self, key: str) -> _ProxyFile[bytes]: ...
if sys.version_info >= (3, 13):
def get_info(self, key: str) -> str: ...
def set_info(self, key: str, info: str) -> None: ...
def get_flags(self, key: str) -> str: ...
def set_flags(self, key: str, flags: str) -> None: ...
def add_flag(self, key: str, flag: str) -> None: ...
def remove_flag(self, key: str, flag: str) -> None: ...
def iterkeys(self) -> Iterator[str]: ...
def __contains__(self, key: str) -> bool: ...
def __len__(self) -> int: ...

View File

@@ -45,6 +45,7 @@ class MimeTypes:
types_map: tuple[dict[str, str], dict[str, str]]
types_map_inv: tuple[dict[str, str], dict[str, str]]
def __init__(self, filenames: tuple[str, ...] = (), strict: bool = True) -> None: ...
def add_type(self, type: str, ext: str, strict: bool = True) -> None: ...
def guess_extension(self, type: str, strict: bool = True) -> str | None: ...
def guess_type(self, url: StrPath, strict: bool = True) -> tuple[str | None, str | None]: ...
def guess_all_extensions(self, type: str, strict: bool = True) -> list[str]: ...

View File

@@ -1,7 +1,7 @@
import sys
from _typeshed import ReadableBuffer, Unused
from collections.abc import Iterable, Iterator, Sized
from typing import Final, NoReturn, overload
from typing import Final, Literal, NoReturn, overload
from typing_extensions import Self
ACCESS_DEFAULT: int
@@ -77,7 +77,7 @@ class mmap(Iterable[int], Sized):
def __buffer__(self, flags: int, /) -> memoryview: ...
def __release_buffer__(self, buffer: memoryview, /) -> None: ...
if sys.version_info >= (3, 13):
def seekable(self) -> bool: ...
def seekable(self) -> Literal[True]: ...
if sys.platform != "win32":
MADV_NORMAL: int

View File

@@ -113,7 +113,7 @@ class Path(PurePath):
if sys.version_info >= (3, 13):
@classmethod
def from_uri(cls, uri: str) -> Path: ...
def from_uri(cls, uri: str) -> Self: ...
def is_dir(self, *, follow_symlinks: bool = True) -> bool: ...
def is_file(self, *, follow_symlinks: bool = True) -> bool: ...
def read_text(self, encoding: str | None = None, errors: str | None = None, newline: str | None = None) -> str: ...
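
Returning Self instead of Path matters for subclasses; a hedged sketch (LoggedPath is a hypothetical subclass, and the noted inference is what these stubs should yield on 3.13+):

from pathlib import Path

class LoggedPath(Path):
    """Hypothetical subclass; subclassing Path is supported from 3.12 on."""

p = LoggedPath.from_uri("file:///tmp/example.txt")  # from_uri itself is 3.13+; POSIX-style URI
print(type(p).__name__)  # LoggedPath at runtime, and Self keeps checkers in agreement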

View File

@@ -5,7 +5,7 @@ from cmd import Cmd
from collections.abc import Callable, Iterable, Mapping, Sequence
from inspect import _SourceObjectType
from types import CodeType, FrameType, TracebackType
from typing import IO, Any, ClassVar, TypeVar
from typing import IO, Any, ClassVar, Final, TypeVar
from typing_extensions import ParamSpec, Self
__all__ = ["run", "pm", "Pdb", "runeval", "runctx", "runcall", "set_trace", "post_mortem", "help"]
@@ -30,6 +30,9 @@ class Pdb(Bdb, Cmd):
commands_resuming: ClassVar[list[str]]
if sys.version_info >= (3, 13):
MAX_CHAINED_EXCEPTION_DEPTH: Final = 999
aliases: dict[str, str]
mainpyfile: str
_wait_for_mainpyfile: bool
@@ -58,8 +61,16 @@ class Pdb(Bdb, Cmd):
if sys.version_info < (3, 11):
def execRcLines(self) -> None: ...
if sys.version_info >= (3, 13):
user_opcode = Bdb.user_line
def bp_commands(self, frame: FrameType) -> bool: ...
def interaction(self, frame: FrameType | None, traceback: TracebackType | None) -> None: ...
if sys.version_info >= (3, 13):
def interaction(self, frame: FrameType | None, tb_or_exc: TracebackType | BaseException | None) -> None: ...
else:
def interaction(self, frame: FrameType | None, traceback: TracebackType | None) -> None: ...
def displayhook(self, obj: object) -> None: ...
def handle_command_def(self, line: str) -> bool: ...
def defaultFile(self) -> str: ...
@@ -72,6 +83,9 @@ class Pdb(Bdb, Cmd):
if sys.version_info < (3, 11):
def _runscript(self, filename: str) -> None: ...
if sys.version_info >= (3, 13):
def completedefault(self, text: str, line: str, begidx: int, endidx: int) -> list[str]: ... # type: ignore[override]
def do_commands(self, arg: str) -> bool | None: ...
def do_break(self, arg: str, temporary: bool = ...) -> bool | None: ...
def do_tbreak(self, arg: str) -> bool | None: ...
@@ -81,6 +95,9 @@ class Pdb(Bdb, Cmd):
def do_ignore(self, arg: str) -> bool | None: ...
def do_clear(self, arg: str) -> bool | None: ...
def do_where(self, arg: str) -> bool | None: ...
if sys.version_info >= (3, 13):
def do_exceptions(self, arg: str) -> bool | None: ...
def do_up(self, arg: str) -> bool | None: ...
def do_down(self, arg: str) -> bool | None: ...
def do_until(self, arg: str) -> bool | None: ...
@@ -125,8 +142,14 @@ class Pdb(Bdb, Cmd):
def help_exec(self) -> None: ...
def help_pdb(self) -> None: ...
def sigint_handler(self, signum: signal.Signals, frame: FrameType) -> None: ...
def message(self, msg: str) -> None: ...
if sys.version_info >= (3, 13):
def message(self, msg: str, end: str = "\n") -> None: ...
else:
def message(self, msg: str) -> None: ...
def error(self, msg: str) -> None: ...
if sys.version_info >= (3, 13):
def completenames(self, text: str, line: str, begidx: int, endidx: int) -> list[str]: ... # type: ignore[override]
if sys.version_info >= (3, 12):
def set_convenience_variable(self, frame: FrameType, name: str, value: Any) -> None: ...

View File

@@ -5,7 +5,7 @@ from builtins import list as _list # "list" conflicts with method name
from collections.abc import Callable, Container, Mapping, MutableMapping
from reprlib import Repr
from types import MethodType, ModuleType, TracebackType
from typing import IO, Any, AnyStr, Final, NoReturn, TypeVar
from typing import IO, Any, AnyStr, Final, NoReturn, Protocol, TypeVar
from typing_extensions import TypeGuard
__all__ = ["help"]
@@ -17,6 +17,9 @@ __date__: Final[str]
__version__: Final[str]
__credits__: Final[str]
class _Pager(Protocol):
def __call__(self, text: str, title: str = "") -> None: ...
def pathdirs() -> list[str]: ...
def getdoc(object: object) -> str: ...
def splitdoc(doc: AnyStr) -> tuple[AnyStr, AnyStr]: ...
@@ -229,16 +232,36 @@ class TextDoc(Doc):
doc: Any | None = None,
) -> str: ...
def pager(text: str) -> None: ...
def getpager() -> Callable[[str], None]: ...
if sys.version_info >= (3, 13):
def pager(text: str, title: str = "") -> None: ...
else:
def pager(text: str) -> None: ...
def plain(text: str) -> str: ...
def pipepager(text: str, cmd: str) -> None: ...
def tempfilepager(text: str, cmd: str) -> None: ...
def ttypager(text: str) -> None: ...
def plainpager(text: str) -> None: ...
def describe(thing: Any) -> str: ...
def locate(path: str, forceload: bool = ...) -> object: ...
if sys.version_info >= (3, 13):
def get_pager() -> _Pager: ...
def pipe_pager(text: str, cmd: str, title: str = "") -> None: ...
def tempfile_pager(text: str, cmd: str, title: str = "") -> None: ...
def tty_pager(text: str, title: str = "") -> None: ...
def plain_pager(text: str, title: str = "") -> None: ...
# For backwards compatibility.
getpager = get_pager
pipepager = pipe_pager
tempfilepager = tempfile_pager
ttypager = tty_pager
plainpager = plain_pager
else:
def getpager() -> Callable[[str], None]: ...
def pipepager(text: str, cmd: str) -> None: ...
def tempfilepager(text: str, cmd: str) -> None: ...
def ttypager(text: str) -> None: ...
def plainpager(text: str) -> None: ...
text: TextDoc
html: HTMLDoc

View File

@@ -1,3 +1,4 @@
import sys
from _typeshed import StrPath
from collections.abc import Iterable
@@ -13,7 +14,15 @@ def addsitedir(sitedir: str, known_paths: set[str] | None = None) -> None: ...
def addsitepackages(known_paths: set[str] | None, prefixes: Iterable[str] | None = None) -> set[str] | None: ... # undocumented
def addusersitepackages(known_paths: set[str] | None) -> set[str] | None: ... # undocumented
def check_enableusersite() -> bool | None: ... # undocumented
if sys.version_info >= (3, 13):
def gethistoryfile() -> str: ... # undocumented
def enablerlcompleter() -> None: ... # undocumented
if sys.version_info >= (3, 13):
def register_readline() -> None: ... # undocumented
def execsitecustomize() -> None: ... # undocumented
def execusercustomize() -> None: ... # undocumented
def getsitepackages(prefixes: Iterable[str] | None = None) -> list[str]: ...

View File

@@ -30,7 +30,8 @@ AT_LOCALE: dict[_NamedIntConstant, _NamedIntConstant]
AT_UNICODE: dict[_NamedIntConstant, _NamedIntConstant]
CH_LOCALE: dict[_NamedIntConstant, _NamedIntConstant]
CH_UNICODE: dict[_NamedIntConstant, _NamedIntConstant]
SRE_FLAG_TEMPLATE: int
if sys.version_info < (3, 13):
SRE_FLAG_TEMPLATE: int
SRE_FLAG_IGNORECASE: int
SRE_FLAG_LOCALE: int
SRE_FLAG_MULTILINE: int

View File

@@ -5,11 +5,30 @@ from typing import Any
__all__ = ["symtable", "SymbolTable", "Class", "Function", "Symbol"]
if sys.version_info >= (3, 13):
__all__ += ["SymbolTableType"]
def symtable(code: str, filename: str, compile_type: str) -> SymbolTable: ...
if sys.version_info >= (3, 13):
from enum import StrEnum
class SymbolTableType(StrEnum):
MODULE = "module"
FUNCTION = "function"
CLASS = "class"
ANNOTATION = "annotation"
TYPE_ALIAS = "type alias"
TYPE_PARAMETERS = "type parameters"
TYPE_VARIABLE = "type variable"
class SymbolTable:
def __init__(self, raw_table: Any, filename: str) -> None: ...
def get_type(self) -> str: ...
if sys.version_info >= (3, 13):
def get_type(self) -> SymbolTableType: ...
else:
def get_type(self) -> str: ...
def get_id(self) -> int: ...
def get_name(self) -> str: ...
def get_lineno(self) -> int: ...
@@ -42,13 +61,23 @@ class Symbol:
def get_name(self) -> str: ...
def is_referenced(self) -> bool: ...
def is_parameter(self) -> bool: ...
if sys.version_info >= (3, 14):
def is_type_parameter(self) -> bool: ...
def is_global(self) -> bool: ...
def is_declared_global(self) -> bool: ...
def is_local(self) -> bool: ...
def is_annotated(self) -> bool: ...
def is_free(self) -> bool: ...
if sys.version_info >= (3, 14):
def is_free_class(self) -> bool: ...
def is_imported(self) -> bool: ...
def is_assigned(self) -> bool: ...
if sys.version_info >= (3, 14):
def is_comp_iter(self) -> bool: ...
def is_comp_cell(self) -> bool: ...
def is_namespace(self) -> bool: ...
def get_namespaces(self) -> Sequence[SymbolTable]: ...
def get_namespace(self) -> SymbolTable: ...
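
On 3.13 get_type() returns a SymbolTableType member rather than a plain string, as gated above; a short sketch (StrEnum members still compare equal to their string values, so existing comparisons keep working):

import symtable

table = symtable.symtable("def f(x):\n    return x\n", "<demo>", "exec")
print(table.get_type())                # "module"; a SymbolTableType member on 3.13
child = table.get_children()[0]
print(child.get_type() == "function")  # True on all versions, str or StrEnum alike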

View File

@@ -355,7 +355,11 @@ def set_int_max_str_digits(maxdigits: int) -> None: ...
def get_int_max_str_digits() -> int: ...
if sys.version_info >= (3, 12):
def getunicodeinternedsize() -> int: ...
if sys.version_info >= (3, 13):
def getunicodeinternedsize(*, _only_immortal: bool = False) -> int: ...
else:
def getunicodeinternedsize() -> int: ...
def deactivate_stack_trampoline() -> None: ...
def is_stack_trampoline_active() -> bool: ...
# It always exists, but raises on non-linux platforms:

View File

@@ -61,7 +61,7 @@ if sys.version_info >= (3, 10):
def gettrace() -> TraceFunction | None: ...
def getprofile() -> ProfileFunction | None: ...
def stack_size(size: int = ...) -> int: ...
def stack_size(size: int = 0, /) -> int: ...
TIMEOUT_MAX: float

View File

@@ -1,7 +1,7 @@
import _tkinter
import sys
from _typeshed import Incomplete, StrEnum, StrOrBytesPath
from collections.abc import Callable, Mapping, Sequence
from collections.abc import Callable, Iterable, Mapping, Sequence
from tkinter.constants import *
from tkinter.font import _FontDescription
from types import TracebackType
@@ -3331,9 +3331,33 @@ class PhotoImage(Image, _PhotoImageLike):
def blank(self) -> None: ...
def cget(self, option: str) -> str: ...
def __getitem__(self, key: str) -> str: ... # always string: image['height'] can be '0'
def copy(self) -> PhotoImage: ...
def zoom(self, x: int, y: int | Literal[""] = "") -> PhotoImage: ...
def subsample(self, x: int, y: int | Literal[""] = "") -> PhotoImage: ...
if sys.version_info >= (3, 13):
def copy(
self,
*,
from_coords: Iterable[int] | None = None,
zoom: int | tuple[int, int] | list[int] | None = None,
subsample: int | tuple[int, int] | list[int] | None = None,
) -> PhotoImage: ...
def subsample(self, x: int, y: Literal[""] = "", *, from_coords: Iterable[int] | None = None) -> PhotoImage: ...
def zoom(self, x: int, y: Literal[""] = "", *, from_coords: Iterable[int] | None = None) -> PhotoImage: ...
def copy_replace(
self,
sourceImage: PhotoImage | str,
*,
from_coords: Iterable[int] | None = None,
to: Iterable[int] | None = None,
shrink: bool = False,
zoom: int | tuple[int, int] | list[int] | None = None,
subsample: int | tuple[int, int] | list[int] | None = None,
# `None` defaults to overlay.
compositingrule: Literal["overlay", "set"] | None = None,
) -> None: ...
else:
def copy(self) -> PhotoImage: ...
def zoom(self, x: int, y: int | Literal[""] = "") -> PhotoImage: ...
def subsample(self, x: int, y: int | Literal[""] = "") -> PhotoImage: ...
def get(self, x: int, y: int) -> tuple[int, int, int]: ...
def put(
self,
@@ -3348,7 +3372,44 @@ class PhotoImage(Image, _PhotoImageLike):
),
to: tuple[int, int] | None = None,
) -> None: ...
def write(self, filename: StrOrBytesPath, format: str | None = None, from_coords: tuple[int, int] | None = None) -> None: ...
if sys.version_info >= (3, 13):
def read(
self,
filename: StrOrBytesPath,
format: str | None = None,
*,
from_coords: Iterable[int] | None = None,
to: Iterable[int] | None = None,
shrink: bool = False,
) -> None: ...
def write(
self,
filename: StrOrBytesPath,
format: str | None = None,
from_coords: Iterable[int] | None = None,
*,
background: str | None = None,
grayscale: bool = False,
) -> None: ...
@overload
def data(
self, format: str, *, from_coords: Iterable[int] | None = None, background: str | None = None, grayscale: bool = False
) -> bytes: ...
@overload
def data(
self,
format: None = None,
*,
from_coords: Iterable[int] | None = None,
background: str | None = None,
grayscale: bool = False,
) -> tuple[str, ...]: ...
else:
def write(
self, filename: StrOrBytesPath, format: str | None = None, from_coords: tuple[int, int] | None = None
) -> None: ...
def transparency_get(self, x: int, y: int) -> bool: ...
def transparency_set(self, x: int, y: int, boolean: bool) -> None: ...

View File

@@ -27,7 +27,18 @@ class CoverageResults:
outfile: StrPath | None = None,
) -> None: ... # undocumented
def update(self, other: CoverageResults) -> None: ...
def write_results(self, show_missing: bool = True, summary: bool = False, coverdir: StrPath | None = None) -> None: ...
if sys.version_info >= (3, 13):
def write_results(
self,
show_missing: bool = True,
summary: bool = False,
coverdir: StrPath | None = None,
*,
ignore_missing_files: bool = False,
) -> None: ...
else:
def write_results(self, show_missing: bool = True, summary: bool = False, coverdir: StrPath | None = None) -> None: ...
def write_results_file(
self, path: StrPath, lines: Sequence[str], lnotab: Any, lines_hit: Mapping[int, int], encoding: str | None = None
) -> tuple[int, int]: ...

View File

@@ -101,7 +101,6 @@ __all__ = [
"setheading",
"setpos",
"setposition",
"settiltangle",
"setundobuffer",
"setx",
"sety",
@@ -132,6 +131,9 @@ __all__ = [
if sys.version_info >= (3, 12):
__all__ += ["teleport"]
if sys.version_info < (3, 13):
__all__ += ["settiltangle"]
# Note: '_Color' is the alias we use for arguments and _AnyColor is the
# alias we use for return types. Really, these two aliases should be the
# same, but as per the "no union returns" typeshed policy, we'll return
@@ -399,7 +401,10 @@ class RawTurtle(TPen, TNavigator):
self, t11: float | None = None, t12: float | None = None, t21: float | None = None, t22: float | None = None
) -> None: ...
def get_shapepoly(self) -> _PolygonCoords | None: ...
def settiltangle(self, angle: float) -> None: ...
if sys.version_info < (3, 13):
def settiltangle(self, angle: float) -> None: ...
@overload
def tiltangle(self, angle: None = None) -> float: ...
@overload
@@ -672,7 +677,10 @@ def shapetransform(
t11: float | None = None, t12: float | None = None, t21: float | None = None, t22: float | None = None
) -> None: ...
def get_shapepoly() -> _PolygonCoords | None: ...
def settiltangle(angle: float) -> None: ...
if sys.version_info < (3, 13):
def settiltangle(angle: float) -> None: ...
@overload
def tiltangle(angle: None = None) -> float: ...
@overload

View File

@@ -245,7 +245,7 @@ class CodeType:
co_qualname: str = ...,
co_linetable: bytes = ...,
co_exceptiontable: bytes = ...,
) -> CodeType: ...
) -> Self: ...
elif sys.version_info >= (3, 10):
def replace(
self,
@@ -266,7 +266,7 @@ class CodeType:
co_filename: str = ...,
co_name: str = ...,
co_linetable: bytes = ...,
) -> CodeType: ...
) -> Self: ...
else:
def replace(
self,
@@ -287,7 +287,10 @@ class CodeType:
co_filename: str = ...,
co_name: str = ...,
co_lnotab: bytes = ...,
) -> CodeType: ...
) -> Self: ...
if sys.version_info >= (3, 13):
__replace__ = replace
@final
class MappingProxyType(Mapping[_KT, _VT_co]):
@@ -309,11 +312,17 @@ class MappingProxyType(Mapping[_KT, _VT_co]):
class SimpleNamespace:
__hash__: ClassVar[None] # type: ignore[assignment]
def __init__(self, **kwargs: Any) -> None: ...
if sys.version_info >= (3, 13):
def __init__(self, mapping_or_iterable: Mapping[str, Any] | Iterable[tuple[str, Any]] = (), /, **kwargs: Any) -> None: ...
else:
def __init__(self, **kwargs: Any) -> None: ...
def __eq__(self, value: object, /) -> bool: ...
def __getattribute__(self, name: str, /) -> Any: ...
def __setattr__(self, name: str, value: Any, /) -> None: ...
def __delattr__(self, name: str, /) -> None: ...
if sys.version_info >= (3, 13):
def __replace__(self, **kwargs: Any) -> Self: ...
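
A sketch of the two 3.13 SimpleNamespace additions gated above: the positional mapping-or-iterable constructor, and __replace__ as the hook behind the (also new) copy.replace():

import copy
from types import SimpleNamespace

ns = SimpleNamespace({"host": "localhost", "port": 8080})  # 3.13+: positional mapping
bumped = copy.replace(ns, port=9090)                       # dispatches to ns.__replace__
print(bumped.host, bumped.port)                            # localhost 9090
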
class ModuleType:
__name__: str

View File

@@ -542,16 +542,18 @@ class AsyncIterator(AsyncIterable[_T_co], Protocol[_T_co]):
class AsyncGenerator(AsyncIterator[_YieldT_co], Generic[_YieldT_co, _SendT_contra]):
def __anext__(self) -> Awaitable[_YieldT_co]: ...
@abstractmethod
def asend(self, value: _SendT_contra, /) -> Awaitable[_YieldT_co]: ...
def asend(self, value: _SendT_contra, /) -> Coroutine[Any, Any, _YieldT_co]: ...
@overload
@abstractmethod
def athrow(
self, typ: type[BaseException], val: BaseException | object = None, tb: TracebackType | None = None, /
) -> Awaitable[_YieldT_co]: ...
) -> Coroutine[Any, Any, _YieldT_co]: ...
@overload
@abstractmethod
def athrow(self, typ: BaseException, val: None = None, tb: TracebackType | None = None, /) -> Awaitable[_YieldT_co]: ...
def aclose(self) -> Awaitable[None]: ...
def athrow(
self, typ: BaseException, val: None = None, tb: TracebackType | None = None, /
) -> Coroutine[Any, Any, _YieldT_co]: ...
def aclose(self) -> Coroutine[Any, Any, None]: ...
@property
def ag_await(self) -> Any: ...
@property

View File
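
The AsyncGenerator hunk above narrows asend/athrow/aclose from Awaitable to Coroutine, matching what async generators actually return. One practical consequence, sketched assuming Python 3.10+ for the anext() builtin:

import asyncio
from collections.abc import AsyncGenerator

async def accumulator() -> AsyncGenerator[int, int]:
    total = 0
    while True:
        total += yield total

async def main() -> None:
    gen = accumulator()
    await anext(gen)  # prime the generator; it yields the initial 0
    # asend() returns a coroutine object, so it can be wrapped in a task,
    # which the Coroutine return type now reflects.
    print(await asyncio.ensure_future(gen.asend(5)))  # 5
    await gen.aclose()

asyncio.run(main())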

@@ -11,13 +11,7 @@ from .case import (
skipIf as skipIf,
skipUnless as skipUnless,
)
from .loader import (
TestLoader as TestLoader,
defaultTestLoader as defaultTestLoader,
findTestCases as findTestCases,
getTestCaseNames as getTestCaseNames,
makeSuite as makeSuite,
)
from .loader import TestLoader as TestLoader, defaultTestLoader as defaultTestLoader
from .main import TestProgram as TestProgram, main as main
from .result import TestResult as TestResult
from .runner import TextTestResult as TextTestResult, TextTestRunner as TextTestRunner
@@ -52,12 +46,14 @@ __all__ = [
"registerResult",
"removeResult",
"removeHandler",
"getTestCaseNames",
"makeSuite",
"findTestCases",
"addModuleCleanup",
]
if sys.version_info < (3, 13):
from .loader import findTestCases as findTestCases, getTestCaseNames as getTestCaseNames, makeSuite as makeSuite
__all__ += ["getTestCaseNames", "makeSuite", "findTestCases"]
if sys.version_info >= (3, 11):
__all__ += ["enterModuleContext", "doModuleCleanups"]

View File

@@ -1,4 +1,5 @@
import sys
from asyncio.events import AbstractEventLoop
from collections.abc import Awaitable, Callable
from typing import TypeVar
from typing_extensions import ParamSpec
@@ -12,6 +13,9 @@ _T = TypeVar("_T")
_P = ParamSpec("_P")
class IsolatedAsyncioTestCase(TestCase):
if sys.version_info >= (3, 13):
loop_factory: Callable[[], AbstractEventLoop] | None = None
async def asyncSetUp(self) -> None: ...
async def asyncTearDown(self) -> None: ...
def addAsyncCleanup(self, func: Callable[_P, Awaitable[object]], /, *args: _P.args, **kwargs: _P.kwargs) -> None: ...
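
A sketch of the 3.13 loop_factory hook typed above; asyncio.EventLoop (also new in 3.13) is one natural factory to assign:

import asyncio
import unittest

class TimerTests(unittest.IsolatedAsyncioTestCase):
    loop_factory = asyncio.EventLoop  # 3.13+: chooses the loop the case runs on

    async def test_sleep(self) -> None:
        await asyncio.sleep(0)

if __name__ == "__main__":
    unittest.main()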

View File

@@ -5,7 +5,7 @@ from collections.abc import Callable, Sequence
from re import Pattern
from types import ModuleType
from typing import Any
from typing_extensions import TypeAlias
from typing_extensions import TypeAlias, deprecated
_SortComparisonMethod: TypeAlias = Callable[[str, str], int]
_SuiteClass: TypeAlias = Callable[[list[unittest.case.TestCase]], unittest.suite.TestSuite]
@@ -34,18 +34,22 @@ class TestLoader:
defaultTestLoader: TestLoader
def getTestCaseNames(
testCaseClass: type[unittest.case.TestCase],
prefix: str,
sortUsing: _SortComparisonMethod = ...,
testNamePatterns: list[str] | None = None,
) -> Sequence[str]: ...
def makeSuite(
testCaseClass: type[unittest.case.TestCase],
prefix: str = "test",
sortUsing: _SortComparisonMethod = ...,
suiteClass: _SuiteClass = ...,
) -> unittest.suite.TestSuite: ...
def findTestCases(
module: ModuleType, prefix: str = "test", sortUsing: _SortComparisonMethod = ..., suiteClass: _SuiteClass = ...
) -> unittest.suite.TestSuite: ...
if sys.version_info < (3, 13):
@deprecated("Deprecated in Python 3.11; removal scheduled for Python 3.13")
def getTestCaseNames(
testCaseClass: type[unittest.case.TestCase],
prefix: str,
sortUsing: _SortComparisonMethod = ...,
testNamePatterns: list[str] | None = None,
) -> Sequence[str]: ...
@deprecated("Deprecated in Python 3.11; removal scheduled for Python 3.13")
def makeSuite(
testCaseClass: type[unittest.case.TestCase],
prefix: str = "test",
sortUsing: _SortComparisonMethod = ...,
suiteClass: _SuiteClass = ...,
) -> unittest.suite.TestSuite: ...
@deprecated("Deprecated in Python 3.11; removal scheduled for Python 3.13")
def findTestCases(
module: ModuleType, prefix: str = "test", sortUsing: _SortComparisonMethod = ..., suiteClass: _SuiteClass = ...
) -> unittest.suite.TestSuite: ...

View File

@@ -6,6 +6,7 @@ import unittest.suite
from collections.abc import Iterable
from types import ModuleType
from typing import Any, Protocol
from typing_extensions import deprecated
MAIN_EXAMPLES: str
MODULE_EXAMPLES: str
@@ -61,7 +62,10 @@ class TestProgram:
tb_locals: bool = False,
) -> None: ...
def usageExit(self, msg: Any = None) -> None: ...
if sys.version_info < (3, 13):
@deprecated("Deprecated in Python 3.11; removal scheduled for Python 3.13")
def usageExit(self, msg: Any = None) -> None: ...
def parseArgs(self, argv: list[str]) -> None: ...
def createTests(self, from_discovery: bool = False, Loader: unittest.loader.TestLoader | None = None) -> None: ...
def runTests(self) -> None: ... # undocumented

View File

@@ -12,23 +12,44 @@ _F = TypeVar("_F", bound=Callable[..., Any])
_AF = TypeVar("_AF", bound=Callable[..., Coroutine[Any, Any, Any]])
_P = ParamSpec("_P")
__all__ = (
"Mock",
"MagicMock",
"patch",
"sentinel",
"DEFAULT",
"ANY",
"call",
"create_autospec",
"AsyncMock",
"FILTER_DIR",
"NonCallableMock",
"NonCallableMagicMock",
"mock_open",
"PropertyMock",
"seal",
)
if sys.version_info >= (3, 13):
# ThreadingMock added in 3.13
__all__ = (
"Mock",
"MagicMock",
"patch",
"sentinel",
"DEFAULT",
"ANY",
"call",
"create_autospec",
"ThreadingMock",
"AsyncMock",
"FILTER_DIR",
"NonCallableMock",
"NonCallableMagicMock",
"mock_open",
"PropertyMock",
"seal",
)
else:
__all__ = (
"Mock",
"MagicMock",
"patch",
"sentinel",
"DEFAULT",
"ANY",
"call",
"create_autospec",
"AsyncMock",
"FILTER_DIR",
"NonCallableMock",
"NonCallableMagicMock",
"mock_open",
"PropertyMock",
"seal",
)
if sys.version_info < (3, 9):
__version__: Final[str]
@@ -124,7 +145,6 @@ class NonCallableMock(Base, Any):
def __delattr__(self, name: str) -> None: ...
def __setattr__(self, name: str, value: Any) -> None: ...
def __dir__(self) -> list[str]: ...
def _calls_repr(self, prefix: str = "Calls") -> str: ...
def assert_called_with(self, *args: Any, **kwargs: Any) -> None: ...
def assert_not_called(self) -> None: ...
def assert_called_once_with(self, *args: Any, **kwargs: Any) -> None: ...
@@ -150,6 +170,10 @@ class NonCallableMock(Base, Any):
def _format_mock_call_signature(self, args: Any, kwargs: Any) -> str: ...
def _call_matcher(self, _call: tuple[_Call, ...]) -> _Call: ...
def _get_child_mock(self, **kw: Any) -> NonCallableMock: ...
if sys.version_info >= (3, 13):
def _calls_repr(self) -> str: ...
else:
def _calls_repr(self, prefix: str = "Calls") -> str: ...
class CallableMixin(Base):
side_effect: Any
@@ -427,4 +451,16 @@ class PropertyMock(Mock):
def __get__(self, obj: _T, obj_type: type[_T] | None = None) -> Self: ...
def __set__(self, obj: Any, val: Any) -> None: ...
if sys.version_info >= (3, 13):
class ThreadingMixin(Base):
DEFAULT_TIMEOUT: Final[float | None] = None
def __init__(self, /, *args: Any, timeout: float | None | _SentinelObject = ..., **kwargs: Any) -> None: ...
# Same as `NonCallableMock.reset_mock.`
def reset_mock(self, visited: Any = None, *, return_value: bool = False, side_effect: bool = False) -> None: ...
def wait_until_called(self, *, timeout: float | None | _SentinelObject = ...) -> None: ...
def wait_until_any_call_with(self, *args: Any, **kwargs: Any) -> None: ...
class ThreadingMock(ThreadingMixin, MagicMixin, Mock): ...
def seal(mock: Any) -> None: ...
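
A sketch of the 3.13 ThreadingMock typed above, which lets a test block until the mock is called from another thread:

import threading
from unittest.mock import ThreadingMock

mock = ThreadingMock(timeout=5.0)  # default wait timeout for this instance
threading.Timer(0.1, mock, args=("done",)).start()
mock.wait_until_called()               # blocks until the timer thread fires
mock.wait_until_any_call_with("done")  # and until a call with these args is seen
mock.assert_called_once_with("done")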

View File

@@ -21,8 +21,10 @@ if sys.version_info >= (3, 13):
_T = TypeVar("_T")
_W = TypeVar("_W", bound=list[WarningMessage] | None)
_ActionKind: TypeAlias = Literal["default", "error", "ignore", "always", "module", "once"]
if sys.version_info >= (3, 14):
_ActionKind: TypeAlias = Literal["default", "error", "ignore", "always", "module", "once"]
else:
_ActionKind: TypeAlias = Literal["default", "error", "ignore", "always", "all", "module", "once"]
filters: Sequence[tuple[str, str | None, type[Warning], str | None, int]] # undocumented, do not mutate
def showwarning(

View File

@@ -239,9 +239,15 @@ if sys.version_info >= (3, 9):
def indent(tree: Element | ElementTree, space: str = " ", level: int = 0) -> None: ...
def parse(source: _FileRead, parser: XMLParser | None = None) -> ElementTree: ...
def iterparse(
source: _FileRead, events: Sequence[str] | None = None, parser: XMLParser | None = None
) -> Iterator[tuple[str, Any]]: ...
class _IterParseIterator(Iterator[tuple[str, Any]]):
def __next__(self) -> tuple[str, Any]: ...
if sys.version_info >= (3, 13):
def close(self) -> None: ...
if sys.version_info >= (3, 11):
def __del__(self) -> None: ...
def iterparse(source: _FileRead, events: Sequence[str] | None = None, parser: XMLParser | None = None) -> _IterParseIterator: ...
class XMLPullParser:
def __init__(self, events: Sequence[str] | None = None, *, _parser: XMLParser | None = None) -> None: ...

View File

@@ -206,6 +206,9 @@ class ZipInfo:
compress_size: int
file_size: int
orig_filename: str # undocumented
if sys.version_info >= (3, 13):
compress_level: int | None
def __init__(self, filename: str = "NoName", date_time: _DateTuple = (1980, 1, 1, 0, 0, 0)) -> None: ...
@classmethod
def from_file(cls, filename: StrPath, arcname: StrPath | None = None, *, strict_timestamps: bool = True) -> Self: ...

View File

@@ -3,12 +3,14 @@ from _typeshed import StrPath
from collections.abc import Iterator, Sequence
from io import TextIOWrapper
from os import PathLike
from typing import IO, Literal, overload
from typing import IO, Literal, TypeVar, overload
from typing_extensions import Self, TypeAlias
from zipfile import ZipFile
_ReadWriteBinaryMode: TypeAlias = Literal["r", "w", "rb", "wb"]
_ZF = TypeVar("_ZF", bound=ZipFile)
if sys.version_info >= (3, 12):
class InitializedState:
def __init__(self, *args: object, **kwargs: object) -> None: ...
@@ -23,6 +25,9 @@ if sys.version_info >= (3, 12):
@overload
@classmethod
def make(cls, source: StrPath | IO[bytes]) -> Self: ...
if sys.version_info >= (3, 13):
@classmethod
def inject(cls, zf: _ZF) -> _ZF: ...
class Path:
root: CompleteDirs

View File

@@ -15,10 +15,11 @@ red_knot_module_resolver = { workspace = true }
ruff_db = { workspace = true }
ruff_index = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_python_trivia = { workspace = true }
ruff_text_size = { workspace = true }
bitflags = { workspace = true }
indexmap = { workspace = true }
ordermap = { workspace = true }
salsa = { workspace = true }
tracing = { workspace = true }
rustc-hash = { workspace = true }

View File

@@ -27,12 +27,13 @@ pub struct AstNodeRef<T> {
#[allow(unsafe_code)]
impl<T> AstNodeRef<T> {
/// Creates a new `AstNodeRef` that references `node`. The `parsed` is the [`ParsedModule`] to which
/// the `AstNodeRef` belongs.
/// Creates a new `AstNodeRef` that references `node`. The `parsed` is the [`ParsedModule`] to
/// which the `AstNodeRef` belongs.
///
/// ## Safety
/// Dereferencing the `node` can result in undefined behavior if `parsed` isn't the [`ParsedModule`] to
/// which `node` belongs. It's the caller's responsibility to ensure that the invariant `node belongs to parsed` is upheld.
/// Dereferencing the `node` can result in undefined behavior if `parsed` isn't the
/// [`ParsedModule`] to which `node` belongs. It's the caller's responsibility to ensure that
/// the invariant `node belongs to parsed` is upheld.
pub(super) unsafe fn new(parsed: ParsedModule, node: &T) -> Self {
Self {
@@ -43,8 +44,8 @@ impl<T> AstNodeRef<T> {
/// Returns a reference to the wrapped node.
pub fn node(&self) -> &T {
// SAFETY: Holding on to `parsed` ensures that the AST to which `node` belongs is still alive
// and not moved.
// SAFETY: Holding on to `parsed` ensures that the AST to which `node` belongs is still
// alive and not moved.
unsafe { self.node.as_ref() }
}
}

View File

@@ -1,25 +1,33 @@
use salsa::DbWithJar;
use red_knot_module_resolver::Db as ResolverDb;
use ruff_db::{Db as SourceDb, Upcast};
use red_knot_module_resolver::Db as ResolverDb;
use crate::semantic_index::definition::Definition;
use crate::semantic_index::symbol::{public_symbols_map, PublicSymbolId, ScopeId};
use crate::semantic_index::{root_scope, semantic_index, symbol_table};
use crate::types::{infer_types, public_symbol_ty};
use crate::semantic_index::expression::Expression;
use crate::semantic_index::symbol::ScopeId;
use crate::semantic_index::{global_scope, semantic_index, symbol_table, use_def_map};
use crate::types::{
infer_definition_types, infer_expression_types, infer_scope_types, ClassType, FunctionType,
IntersectionType, UnionType,
};
#[salsa::jar(db=Db)]
pub struct Jar(
ScopeId<'_>,
PublicSymbolId<'_>,
Definition<'_>,
Expression<'_>,
FunctionType<'_>,
ClassType<'_>,
UnionType<'_>,
IntersectionType<'_>,
symbol_table,
root_scope,
use_def_map,
global_scope,
semantic_index,
infer_types,
public_symbol_ty,
public_symbols_map,
infer_definition_types,
infer_expression_types,
infer_scope_types,
);
/// Database giving access to semantic information about a Python program.
@@ -30,27 +38,25 @@ pub trait Db:
#[cfg(test)]
pub(crate) mod tests {
use std::fmt::Formatter;
use std::marker::PhantomData;
use std::sync::Arc;
use salsa::id::AsId;
use salsa::ingredient::Ingredient;
use salsa::storage::HasIngredientsFor;
use salsa::DebugWithDb;
use red_knot_module_resolver::{Db as ResolverDb, Jar as ResolverJar};
use ruff_db::file_system::{FileSystem, MemoryFileSystem, OsFileSystem};
use ruff_db::vfs::Vfs;
use red_knot_module_resolver::{vendored_typeshed_stubs, Db as ResolverDb, Jar as ResolverJar};
use ruff_db::files::Files;
use ruff_db::system::{DbWithTestSystem, System, TestSystem};
use ruff_db::vendored::VendoredFileSystem;
use ruff_db::{Db as SourceDb, Jar as SourceJar, Upcast};
use ruff_python_trivia::textwrap;
use super::{Db, Jar};
#[salsa::db(Jar, ResolverJar, SourceJar)]
pub(crate) struct TestDb {
storage: salsa::Storage<Self>,
vfs: Vfs,
file_system: TestFileSystem,
files: Files,
system: TestSystem,
vendored: VendoredFileSystem,
events: std::sync::Arc<std::sync::Mutex<Vec<salsa::Event>>>,
}
@@ -58,29 +64,13 @@ pub(crate) mod tests {
pub(crate) fn new() -> Self {
Self {
storage: salsa::Storage::default(),
file_system: TestFileSystem::Memory(MemoryFileSystem::default()),
system: TestSystem::default(),
vendored: vendored_typeshed_stubs().snapshot(),
events: std::sync::Arc::default(),
vfs: Vfs::with_stubbed_vendored(),
files: Files::default(),
}
}
/// Returns the memory file system.
///
/// ## Panics
/// If this test db isn't using a memory file system.
pub(crate) fn memory_file_system(&self) -> &MemoryFileSystem {
if let TestFileSystem::Memory(fs) = &self.file_system {
fs
} else {
panic!("The test db is not using a memory file system");
}
}
#[allow(unused)]
pub(crate) fn vfs_mut(&mut self) -> &mut Vfs {
&mut self.vfs
}
/// Takes the salsa events.
///
/// ## Panics
@@ -99,18 +89,35 @@ pub(crate) mod tests {
pub(crate) fn clear_salsa_events(&mut self) {
self.take_salsa_events();
}
/// Write auto-dedented text to a file.
pub(crate) fn write_dedented(&mut self, path: &str, content: &str) -> anyhow::Result<()> {
self.write_file(path, textwrap::dedent(content))?;
Ok(())
}
}
impl DbWithTestSystem for TestDb {
fn test_system(&self) -> &TestSystem {
&self.system
}
fn test_system_mut(&mut self) -> &mut TestSystem {
&mut self.system
}
}
impl SourceDb for TestDb {
fn file_system(&self) -> &dyn FileSystem {
match &self.file_system {
TestFileSystem::Memory(fs) => fs,
TestFileSystem::Os(fs) => fs,
}
fn vendored(&self) -> &VendoredFileSystem {
&self.vendored
}
fn vfs(&self) -> &Vfs {
&self.vfs
fn system(&self) -> &dyn System {
&self.system
}
fn files(&self) -> &Files {
&self.files
}
}
@@ -141,121 +148,11 @@ pub(crate) mod tests {
fn snapshot(&self) -> salsa::Snapshot<Self> {
salsa::Snapshot::new(Self {
storage: self.storage.snapshot(),
vfs: self.vfs.snapshot(),
file_system: match &self.file_system {
TestFileSystem::Memory(memory) => TestFileSystem::Memory(memory.snapshot()),
TestFileSystem::Os(fs) => TestFileSystem::Os(fs.snapshot()),
},
files: self.files.snapshot(),
system: self.system.snapshot(),
vendored: self.vendored.snapshot(),
events: self.events.clone(),
})
}
}
enum TestFileSystem {
Memory(MemoryFileSystem),
#[allow(dead_code)]
Os(OsFileSystem),
}
pub(crate) fn assert_will_run_function_query<'db, C, Db, Jar>(
db: &'db Db,
to_function: impl FnOnce(&C) -> &salsa::function::FunctionIngredient<C>,
input: &C::Input<'db>,
events: &[salsa::Event],
) where
C: salsa::function::Configuration<Jar = Jar>
+ salsa::storage::IngredientsFor<Jar = Jar, Ingredients = C>,
Jar: HasIngredientsFor<C>,
Db: salsa::DbWithJar<Jar>,
C::Input<'db>: AsId,
{
will_run_function_query(db, to_function, input, events, true);
}
pub(crate) fn assert_will_not_run_function_query<'db, C, Db, Jar>(
db: &'db Db,
to_function: impl FnOnce(&C) -> &salsa::function::FunctionIngredient<C>,
input: &C::Input<'db>,
events: &[salsa::Event],
) where
C: salsa::function::Configuration<Jar = Jar>
+ salsa::storage::IngredientsFor<Jar = Jar, Ingredients = C>,
Jar: HasIngredientsFor<C>,
Db: salsa::DbWithJar<Jar>,
C::Input<'db>: AsId,
{
will_run_function_query(db, to_function, input, events, false);
}
fn will_run_function_query<'db, C, Db, Jar>(
db: &'db Db,
to_function: impl FnOnce(&C) -> &salsa::function::FunctionIngredient<C>,
input: &C::Input<'db>,
events: &[salsa::Event],
should_run: bool,
) where
C: salsa::function::Configuration<Jar = Jar>
+ salsa::storage::IngredientsFor<Jar = Jar, Ingredients = C>,
Jar: HasIngredientsFor<C>,
Db: salsa::DbWithJar<Jar>,
C::Input<'db>: AsId,
{
let (jar, _) =
<_ as salsa::storage::HasJar<<C as salsa::storage::IngredientsFor>::Jar>>::jar(db);
let ingredient = jar.ingredient();
let function_ingredient = to_function(ingredient);
let ingredient_index =
<salsa::function::FunctionIngredient<C> as Ingredient<Db>>::ingredient_index(
function_ingredient,
);
let did_run = events.iter().any(|event| {
if let salsa::EventKind::WillExecute { database_key } = event.kind {
database_key.ingredient_index() == ingredient_index
&& database_key.key_index() == input.as_id()
} else {
false
}
});
if should_run && !did_run {
panic!(
"Expected query {:?} to run but it didn't",
DebugIdx {
db: PhantomData::<Db>,
value_id: input.as_id(),
ingredient: function_ingredient,
}
);
} else if !should_run && did_run {
panic!(
"Expected query {:?} not to run but it did",
DebugIdx {
db: PhantomData::<Db>,
value_id: input.as_id(),
ingredient: function_ingredient,
}
);
}
}
struct DebugIdx<'a, I, Db>
where
I: Ingredient<Db>,
{
value_id: salsa::Id,
ingredient: &'a I,
db: PhantomData<Db>,
}
impl<'a, I, Db> std::fmt::Debug for DebugIdx<'a, I, Db>
where
I: Ingredient<Db>,
{
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
self.ingredient.fmt_index(Some(self.value_id), f)
}
}
}

View File

@@ -12,4 +12,4 @@ pub mod semantic_index;
mod semantic_model;
pub mod types;
type FxIndexSet<V> = indexmap::set::IndexSet<V, BuildHasherDefault<FxHasher>>;
type FxOrderSet<V> = ordermap::set::OrderSet<V, BuildHasherDefault<FxHasher>>;

View File

@@ -1,10 +0,0 @@
use std::hash::BuildHasherDefault;
use rustc_hash::FxHasher;
pub mod ast_node_ref;
mod node_key;
pub mod semantic_index;
pub mod types;
pub(crate) type FxIndexSet<V> = indexmap::set::IndexSet<V, BuildHasherDefault<FxHasher>>;

View File

@@ -3,24 +3,28 @@ use std::sync::Arc;
use rustc_hash::FxHashMap;
use ruff_db::files::File;
use ruff_db::parsed::parsed_module;
use ruff_db::vfs::VfsFile;
use ruff_index::{IndexSlice, IndexVec};
use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
use crate::semantic_index::ast_ids::AstIds;
use crate::semantic_index::builder::SemanticIndexBuilder;
use crate::semantic_index::definition::{Definition, DefinitionNodeKey, DefinitionNodeRef};
use crate::semantic_index::definition::{Definition, DefinitionNodeKey};
use crate::semantic_index::expression::Expression;
use crate::semantic_index::symbol::{
FileScopeId, NodeWithScopeKey, NodeWithScopeRef, PublicSymbolId, Scope, ScopeId,
ScopedSymbolId, SymbolTable,
FileScopeId, NodeWithScopeKey, NodeWithScopeRef, Scope, ScopeId, ScopedSymbolId, SymbolTable,
};
use crate::Db;
pub mod ast_ids;
mod builder;
pub mod definition;
pub mod expression;
pub mod symbol;
mod use_def;
pub(crate) use self::use_def::UseDefMap;
type SymbolMap = hashbrown::HashMap<ScopedSymbolId, (), ()>;
@@ -28,7 +32,7 @@ type SymbolMap = hashbrown::HashMap<ScopedSymbolId, (), ()>;
///
/// Prefer using [`symbol_table`] when working with symbols from a single scope.
#[salsa::tracked(return_ref, no_eq)]
pub(crate) fn semantic_index(db: &dyn Db, file: VfsFile) -> SemanticIndex<'_> {
pub(crate) fn semantic_index(db: &dyn Db, file: File) -> SemanticIndex<'_> {
let _span = tracing::trace_span!("semantic_index", ?file).entered();
let parsed = parsed_module(db.upcast(), file);
@@ -42,57 +46,63 @@ pub(crate) fn semantic_index(db: &dyn Db, file: VfsFile) -> SemanticIndex<'_> {
/// Salsa can avoid invalidating dependent queries if this scope's symbol table
/// is unchanged.
#[salsa::tracked]
pub(crate) fn symbol_table<'db>(db: &'db dyn Db, scope: ScopeId<'db>) -> Arc<SymbolTable<'db>> {
pub(crate) fn symbol_table<'db>(db: &'db dyn Db, scope: ScopeId<'db>) -> Arc<SymbolTable> {
let _span = tracing::trace_span!("symbol_table", ?scope).entered();
let index = semantic_index(db, scope.file(db));
index.symbol_table(scope.file_scope_id(db))
}
/// Returns the root scope of `file`.
/// Returns the use-def map for a specific `scope`.
///
/// Using [`use_def_map`] over [`semantic_index`] has the advantage that
/// Salsa can avoid invalidating dependent queries if this scope's use-def map
/// is unchanged.
#[salsa::tracked]
pub(crate) fn root_scope(db: &dyn Db, file: VfsFile) -> ScopeId<'_> {
let _span = tracing::trace_span!("root_scope", ?file).entered();
pub(crate) fn use_def_map<'db>(db: &'db dyn Db, scope: ScopeId<'db>) -> Arc<UseDefMap<'db>> {
let _span = tracing::trace_span!("use_def_map", ?scope).entered();
let index = semantic_index(db, scope.file(db));
FileScopeId::root().to_scope_id(db, file)
index.use_def_map(scope.file_scope_id(db))
}
/// Returns the symbol with the given name in `file`'s public scope or `None` if
/// no symbol with the given name exists.
pub(crate) fn public_symbol<'db>(
db: &'db dyn Db,
file: VfsFile,
name: &str,
) -> Option<PublicSymbolId<'db>> {
let root_scope = root_scope(db, file);
let symbol_table = symbol_table(db, root_scope);
let local = symbol_table.symbol_id_by_name(name)?;
Some(local.to_public_symbol(db, file))
/// Returns the module global scope of `file`.
#[salsa::tracked]
pub(crate) fn global_scope(db: &dyn Db, file: File) -> ScopeId<'_> {
let _span = tracing::trace_span!("global_scope", ?file).entered();
FileScopeId::global().to_scope_id(db, file)
}
/// The symbol tables for an entire file.
/// The symbol tables and use-def maps for all scopes in a file.
#[derive(Debug)]
pub(crate) struct SemanticIndex<'db> {
/// List of all symbol tables in this file, indexed by scope.
symbol_tables: IndexVec<FileScopeId, Arc<SymbolTable<'db>>>,
symbol_tables: IndexVec<FileScopeId, Arc<SymbolTable>>,
/// List of all scopes in this file.
scopes: IndexVec<FileScopeId, Scope>,
/// Maps expressions to their corresponding scope.
/// Map expressions to their corresponding scope.
/// We can't use [`ExpressionId`] here, because the challenge is how to get from
/// an [`ast::Expr`] to an [`ExpressionId`] (which requires knowing the scope).
scopes_by_expression: FxHashMap<ExpressionNodeKey, FileScopeId>,
/// Maps from a node creating a definition node to its definition.
/// Map from a node creating a definition to its definition.
definitions_by_node: FxHashMap<DefinitionNodeKey, Definition<'db>>,
/// Map from a standalone expression to its [`Expression`] ingredient.
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
/// Map from nodes that create a scope to the scope they create.
scopes_by_node: FxHashMap<NodeWithScopeKey, FileScopeId>,
/// Map from the file-local [`FileScopeId`] to the salsa-ingredient [`ScopeId`].
scope_ids_by_scope: IndexVec<FileScopeId, ScopeId<'db>>,
/// Use-def map for each scope in this file.
use_def_maps: IndexVec<FileScopeId, Arc<UseDefMap<'db>>>,
/// Lookup table to map between node ids and ast nodes.
///
/// Note: We should not depend on this map when analysing other files or
@@ -105,10 +115,18 @@ impl<'db> SemanticIndex<'db> {
///
/// Use the Salsa cached [`symbol_table`] query if you only need the
/// symbol table for a single scope.
pub(super) fn symbol_table(&self, scope_id: FileScopeId) -> Arc<SymbolTable<'db>> {
pub(super) fn symbol_table(&self, scope_id: FileScopeId) -> Arc<SymbolTable> {
self.symbol_tables[scope_id].clone()
}
/// Returns the use-def map for a specific scope.
///
/// Use the Salsa cached [`use_def_map`] query if you only need the
/// use-def map for a single scope.
pub(super) fn use_def_map(&self, scope_id: FileScopeId) -> Arc<UseDefMap> {
self.use_def_maps[scope_id].clone()
}
pub(crate) fn ast_ids(&self, scope_id: FileScopeId) -> &AstIds {
&self.ast_ids[scope_id]
}
@@ -157,16 +175,28 @@ impl<'db> SemanticIndex<'db> {
}
/// Returns an iterator over all ancestors of `scope`, starting with `scope` itself.
#[allow(unused)]
pub(crate) fn ancestor_scopes(&self, scope: FileScopeId) -> AncestorsIter {
AncestorsIter::new(self, scope)
}
/// Returns the [`Definition`] salsa ingredient for `definition_node`.
pub(crate) fn definition<'def>(
/// Returns the [`Definition`] salsa ingredient for `definition_key`.
pub(crate) fn definition(
&self,
definition_node: impl Into<DefinitionNodeRef<'def>>,
definition_key: impl Into<DefinitionNodeKey>,
) -> Definition<'db> {
self.definitions_by_node[&definition_node.into().key()]
self.definitions_by_node[&definition_key.into()]
}
/// Returns the [`Expression`] ingredient for an expression node.
/// Panics if we have no expression ingredient for that node. This method can only be called
/// for standalone-inferable expressions, for which [`SemanticIndexBuilder`] calls
/// `add_standalone_expression`.
pub(crate) fn expression(
&self,
expression_key: impl Into<ExpressionNodeKey>,
) -> Expression<'db> {
self.expressions_by_node[&expression_key.into()]
}
/// Returns the id of the scope that `node` creates. This is different from [`Definition::scope`] which
@@ -176,8 +206,6 @@ impl<'db> SemanticIndex<'db> {
}
}
/// ID that uniquely identifies an expression inside a [`Scope`].
pub struct AncestorsIter<'a> {
scopes: &'a IndexSlice<FileScopeId, Scope>,
next_id: Option<FileScopeId>,
@@ -272,24 +300,26 @@ impl FusedIterator for ChildrenIter<'_> {}
#[cfg(test)]
mod tests {
use ruff_db::files::{system_path_to_file, File};
use ruff_db::parsed::parsed_module;
use ruff_db::vfs::{system_path_to_file, VfsFile};
use ruff_db::system::DbWithTestSystem;
use ruff_python_ast as ast;
use crate::db::tests::TestDb;
use crate::semantic_index::ast_ids::HasScopedUseId;
use crate::semantic_index::definition::DefinitionKind;
use crate::semantic_index::symbol::{FileScopeId, Scope, ScopeKind, SymbolTable};
use crate::semantic_index::{root_scope, semantic_index, symbol_table};
use crate::semantic_index::{global_scope, semantic_index, symbol_table, use_def_map};
use crate::Db;
struct TestCase {
db: TestDb,
file: VfsFile,
file: File,
}
fn test_case(content: impl ToString) -> TestCase {
let db = TestDb::new();
db.memory_file_system()
.write_file("test.py", content)
.unwrap();
let mut db = TestDb::new();
db.write_file("test.py", content).unwrap();
let file = system_path_to_file(&db, "test.py").unwrap();
@@ -306,95 +336,113 @@ mod tests {
#[test]
fn empty() {
let TestCase { db, file } = test_case("");
let root_table = symbol_table(&db, root_scope(&db, file));
let global_table = symbol_table(&db, global_scope(&db, file));
let root_names = names(&root_table);
let global_names = names(&global_table);
assert_eq!(root_names, Vec::<&str>::new());
assert_eq!(global_names, Vec::<&str>::new());
}
#[test]
fn simple() {
let TestCase { db, file } = test_case("x");
let root_table = symbol_table(&db, root_scope(&db, file));
let global_table = symbol_table(&db, global_scope(&db, file));
assert_eq!(names(&root_table), vec!["x"]);
assert_eq!(names(&global_table), vec!["x"]);
}
#[test]
fn annotation_only() {
let TestCase { db, file } = test_case("x: int");
let root_table = symbol_table(&db, root_scope(&db, file));
let global_table = symbol_table(&db, global_scope(&db, file));
assert_eq!(names(&root_table), vec!["int", "x"]);
assert_eq!(names(&global_table), vec!["int", "x"]);
// TODO record definition
}
#[test]
fn import() {
let TestCase { db, file } = test_case("import foo");
let root_table = symbol_table(&db, root_scope(&db, file));
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
assert_eq!(names(&root_table), vec!["foo"]);
let foo = root_table.symbol_by_name("foo").unwrap();
assert_eq!(names(&global_table), vec!["foo"]);
let foo = global_table.symbol_id_by_name("foo").unwrap();
assert_eq!(foo.definitions().len(), 1);
let use_def = use_def_map(&db, scope);
let [definition] = use_def.public_definitions(foo) else {
panic!("expected one definition");
};
assert!(matches!(definition.node(&db), DefinitionKind::Import(_)));
}
#[test]
fn import_sub() {
let TestCase { db, file } = test_case("import foo.bar");
let root_table = symbol_table(&db, root_scope(&db, file));
let global_table = symbol_table(&db, global_scope(&db, file));
assert_eq!(names(&root_table), vec!["foo"]);
assert_eq!(names(&global_table), vec!["foo"]);
}
#[test]
fn import_as() {
let TestCase { db, file } = test_case("import foo.bar as baz");
let root_table = symbol_table(&db, root_scope(&db, file));
let global_table = symbol_table(&db, global_scope(&db, file));
assert_eq!(names(&root_table), vec!["baz"]);
assert_eq!(names(&global_table), vec!["baz"]);
}
#[test]
fn import_from() {
let TestCase { db, file } = test_case("from bar import foo");
let root_table = symbol_table(&db, root_scope(&db, file));
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
assert_eq!(names(&root_table), vec!["foo"]);
assert_eq!(
root_table
.symbol_by_name("foo")
.unwrap()
.definitions()
.len(),
1
);
assert_eq!(names(&global_table), vec!["foo"]);
assert!(
root_table
global_table
.symbol_by_name("foo")
.is_some_and(|symbol| { symbol.is_defined() || !symbol.is_used() }),
.is_some_and(|symbol| { symbol.is_defined() && !symbol.is_used() }),
"symbols that are defined get the defined flag"
);
let use_def = use_def_map(&db, scope);
let [definition] = use_def.public_definitions(
global_table
.symbol_id_by_name("foo")
.expect("symbol to exist"),
) else {
panic!("expected one definition");
};
assert!(matches!(
definition.node(&db),
DefinitionKind::ImportFrom(_)
));
}
#[test]
fn assign() {
let TestCase { db, file } = test_case("x = foo");
let root_table = symbol_table(&db, root_scope(&db, file));
let scope = global_scope(&db, file);
let global_table = symbol_table(&db, scope);
assert_eq!(names(&root_table), vec!["foo", "x"]);
assert_eq!(
root_table.symbol_by_name("x").unwrap().definitions().len(),
1
);
assert_eq!(names(&global_table), vec!["foo", "x"]);
assert!(
root_table
global_table
.symbol_by_name("foo")
.is_some_and(|symbol| { !symbol.is_defined() && symbol.is_used() }),
"a symbol used but not defined in a scope should have only the used flag"
);
let use_def = use_def_map(&db, scope);
let [definition] =
use_def.public_definitions(global_table.symbol_id_by_name("x").expect("symbol exists"))
else {
panic!("expected one definition");
};
assert!(matches!(
definition.node(&db),
DefinitionKind::Assignment(_)
));
}
#[test]
@@ -406,26 +454,34 @@ class C:
y = 2
",
);
let root_table = symbol_table(&db, root_scope(&db, file));
let global_table = symbol_table(&db, global_scope(&db, file));
assert_eq!(names(&root_table), vec!["C", "y"]);
assert_eq!(names(&global_table), vec!["C", "y"]);
let index = semantic_index(&db, file);
let scopes: Vec<_> = index.child_scopes(FileScopeId::root()).collect();
assert_eq!(scopes.len(), 1);
let (class_scope_id, class_scope) = scopes[0];
let [(class_scope_id, class_scope)] = index
.child_scopes(FileScopeId::global())
.collect::<Vec<_>>()[..]
else {
panic!("expected one child scope")
};
assert_eq!(class_scope.kind(), ScopeKind::Class);
assert_eq!(class_scope_id.to_scope_id(&db, file).name(&db), "C");
let class_table = index.symbol_table(class_scope_id);
assert_eq!(names(&class_table), vec!["x"]);
assert_eq!(
class_table.symbol_by_name("x").unwrap().definitions().len(),
1
);
let use_def = index.use_def_map(class_scope_id);
let [definition] =
use_def.public_definitions(class_table.symbol_id_by_name("x").expect("symbol exists"))
else {
panic!("expected one definition");
};
assert!(matches!(
definition.node(&db),
DefinitionKind::Assignment(_)
));
}
#[test]
@@ -438,27 +494,34 @@ y = 2
",
);
let index = semantic_index(&db, file);
let root_table = index.symbol_table(FileScopeId::root());
let global_table = index.symbol_table(FileScopeId::global());
assert_eq!(names(&root_table), vec!["func", "y"]);
assert_eq!(names(&global_table), vec!["func", "y"]);
let scopes = index.child_scopes(FileScopeId::root()).collect::<Vec<_>>();
assert_eq!(scopes.len(), 1);
let (function_scope_id, function_scope) = scopes[0];
let [(function_scope_id, function_scope)] = index
.child_scopes(FileScopeId::global())
.collect::<Vec<_>>()[..]
else {
panic!("expected one child scope")
};
assert_eq!(function_scope.kind(), ScopeKind::Function);
assert_eq!(function_scope_id.to_scope_id(&db, file).name(&db), "func");
let function_table = index.symbol_table(function_scope_id);
assert_eq!(names(&function_table), vec!["x"]);
assert_eq!(
let use_def = index.use_def_map(function_scope_id);
let [definition] = use_def.public_definitions(
function_table
.symbol_by_name("x")
.unwrap()
.definitions()
.len(),
1
);
.symbol_id_by_name("x")
.expect("symbol exists"),
) else {
panic!("expected one definition");
};
assert!(matches!(
definition.node(&db),
DefinitionKind::Assignment(_)
));
}
#[test]
@@ -472,14 +535,15 @@ def func():
",
);
let index = semantic_index(&db, file);
let root_table = index.symbol_table(FileScopeId::root());
let global_table = index.symbol_table(FileScopeId::global());
assert_eq!(names(&root_table), vec!["func"]);
let scopes: Vec<_> = index.child_scopes(FileScopeId::root()).collect();
assert_eq!(scopes.len(), 2);
let (func_scope1_id, func_scope_1) = scopes[0];
let (func_scope2_id, func_scope_2) = scopes[1];
assert_eq!(names(&global_table), vec!["func"]);
let [(func_scope1_id, func_scope_1), (func_scope2_id, func_scope_2)] = index
.child_scopes(FileScopeId::global())
.collect::<Vec<_>>()[..]
else {
panic!("expected two child scopes");
};
assert_eq!(func_scope_1.kind(), ScopeKind::Function);
@@ -491,14 +555,16 @@ def func():
let func2_table = index.symbol_table(func_scope2_id);
assert_eq!(names(&func1_table), vec!["x"]);
assert_eq!(names(&func2_table), vec!["y"]);
assert_eq!(
root_table
.symbol_by_name("func")
.unwrap()
.definitions()
.len(),
2
);
let use_def = index.use_def_map(FileScopeId::global());
let [definition] = use_def.public_definitions(
global_table
.symbol_id_by_name("func")
.expect("symbol exists"),
) else {
panic!("expected one definition");
};
assert!(matches!(definition.node(&db), DefinitionKind::Function(_)));
}
#[test]
@@ -511,22 +577,27 @@ def func[T]():
);
let index = semantic_index(&db, file);
let root_table = index.symbol_table(FileScopeId::root());
let global_table = index.symbol_table(FileScopeId::global());
assert_eq!(names(&root_table), vec!["func"]);
assert_eq!(names(&global_table), vec!["func"]);
let scopes: Vec<_> = index.child_scopes(FileScopeId::root()).collect();
assert_eq!(scopes.len(), 1);
let (ann_scope_id, ann_scope) = scopes[0];
let [(ann_scope_id, ann_scope)] = index
.child_scopes(FileScopeId::global())
.collect::<Vec<_>>()[..]
else {
panic!("expected one child scope");
};
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope_id.to_scope_id(&db, file).name(&db), "func");
let ann_table = index.symbol_table(ann_scope_id);
assert_eq!(names(&ann_table), vec!["T"]);
let scopes: Vec<_> = index.child_scopes(ann_scope_id).collect();
assert_eq!(scopes.len(), 1);
let (func_scope_id, func_scope) = scopes[0];
let [(func_scope_id, func_scope)] =
index.child_scopes(ann_scope_id).collect::<Vec<_>>()[..]
else {
panic!("expected one child scope");
};
assert_eq!(func_scope.kind(), ScopeKind::Function);
assert_eq!(func_scope_id.to_scope_id(&db, file).name(&db), "func");
let func_table = index.symbol_table(func_scope_id);
@@ -543,14 +614,17 @@ class C[T]:
);
let index = semantic_index(&db, file);
let root_table = index.symbol_table(FileScopeId::root());
let global_table = index.symbol_table(FileScopeId::global());
assert_eq!(names(&root_table), vec!["C"]);
assert_eq!(names(&global_table), vec!["C"]);
let scopes: Vec<_> = index.child_scopes(FileScopeId::root()).collect();
let [(ann_scope_id, ann_scope)] = index
.child_scopes(FileScopeId::global())
.collect::<Vec<_>>()[..]
else {
panic!("expected one child scope");
};
assert_eq!(scopes.len(), 1);
let (ann_scope_id, ann_scope) = scopes[0];
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope_id.to_scope_id(&db, file).name(&db), "C");
let ann_table = index.symbol_table(ann_scope_id);
@@ -562,48 +636,49 @@ class C[T]:
"type parameters are defined by the scope that introduces them"
);
let scopes: Vec<_> = index.child_scopes(ann_scope_id).collect();
assert_eq!(scopes.len(), 1);
let (class_scope_id, class_scope) = scopes[0];
let [(class_scope_id, class_scope)] =
index.child_scopes(ann_scope_id).collect::<Vec<_>>()[..]
else {
panic!("expected one child scope");
};
assert_eq!(class_scope.kind(), ScopeKind::Class);
assert_eq!(class_scope_id.to_scope_id(&db, file).name(&db), "C");
assert_eq!(names(&index.symbol_table(class_scope_id)), vec!["x"]);
}
// TODO: After porting the control flow graph.
// #[test]
// fn reachability_trivial() {
// let parsed = parse("x = 1; x");
// let ast = parsed.syntax();
// let index = SemanticIndex::from_ast(ast);
// let table = &index.symbol_table;
// let x_sym = table
// .root_symbol_id_by_name("x")
// .expect("x symbol should exist");
// let ast::Stmt::Expr(ast::StmtExpr { value: x_use, .. }) = &ast.body[1] else {
// panic!("should be an expr")
// };
// let x_defs: Vec<_> = index
// .reachable_definitions(x_sym, x_use)
// .map(|constrained_definition| constrained_definition.definition)
// .collect();
// assert_eq!(x_defs.len(), 1);
// let Definition::Assignment(node_key) = &x_defs[0] else {
// panic!("def should be an assignment")
// };
// let Some(def_node) = node_key.resolve(ast.into()) else {
// panic!("node key should resolve")
// };
// let ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
// value: ast::Number::Int(num),
// ..
// }) = &*def_node.value
// else {
// panic!("should be a number literal")
// };
// assert_eq!(*num, 1);
// }
#[test]
fn reachability_trivial() {
let TestCase { db, file } = test_case("x = 1; x");
let parsed = parsed_module(&db, file);
let scope = global_scope(&db, file);
let ast = parsed.syntax();
let ast::Stmt::Expr(ast::StmtExpr {
value: x_use_expr, ..
}) = &ast.body[1]
else {
panic!("should be an expr")
};
let ast::Expr::Name(x_use_expr_name) = x_use_expr.as_ref() else {
panic!("expected a Name");
};
let x_use_id = x_use_expr_name.scoped_use_id(&db, scope);
let use_def = use_def_map(&db, scope);
let [definition] = use_def.use_definitions(x_use_id) else {
panic!("expected one definition");
};
let DefinitionKind::Assignment(assignment) = definition.node(&db) else {
panic!("should be an assignment definition")
};
let ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(num),
..
}) = &*assignment.assignment().value
else {
panic!("should be a number literal")
};
assert_eq!(*num, 1);
}
#[test]
fn expression_scope() {
@@ -617,7 +692,7 @@ class C[T]:
let x = &x_stmt.targets[0];
assert_eq!(index.expression_scope(x).kind(), ScopeKind::Module);
assert_eq!(index.expression_scope_id(x), FileScopeId::root());
assert_eq!(index.expression_scope_id(x), FileScopeId::global());
let def = ast.body[1].as_function_def_stmt().unwrap();
let y_stmt = def.body[0].as_assign_stmt().unwrap();
@@ -631,7 +706,7 @@ class C[T]:
fn scope_names<'a>(
scopes: impl Iterator<Item = (FileScopeId, &'a Scope)>,
db: &'a dyn Db,
file: VfsFile,
file: File,
) -> Vec<&'a str> {
scopes
.into_iter()
@@ -654,16 +729,16 @@ def x():
let index = semantic_index(&db, file);
let descendents = index.descendent_scopes(FileScopeId::root());
let descendents = index.descendent_scopes(FileScopeId::global());
assert_eq!(
scope_names(descendents, &db, file),
vec!["Test", "foo", "bar", "baz", "x"]
);
let children = index.child_scopes(FileScopeId::root());
let children = index.child_scopes(FileScopeId::global());
assert_eq!(scope_names(children, &db, file), vec!["Test", "x"]);
let test_class = index.child_scopes(FileScopeId::root()).next().unwrap().0;
let test_class = index.child_scopes(FileScopeId::global()).next().unwrap().0;
let test_child_scopes = index.child_scopes(test_class);
assert_eq!(
scope_names(test_child_scopes, &db, file),
@@ -671,7 +746,7 @@ def x():
);
let bar_scope = index
.descendent_scopes(FileScopeId::root())
.descendent_scopes(FileScopeId::global())
.nth(2)
.unwrap()
.0;
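To ground the use-def tests above, here is an illustration in plain Python (not red-knot code; all names invented) of what a use-def map records: for each use of a name, the set of definitions that can reach it.

```python
def f(flag: bool) -> int:
    x = 1          # definition D1 of x
    if flag:
        x = 2      # definition D2 of x
    # no else branch, so D1 can still reach past the if
    return x       # use of x; reaching definitions: {D1, D2}
```

The `use_definitions` lookup exercised in `reachability_trivial` returns this set for a single use, while `public_definitions` asks the same question at the end of the scope.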

View File

@@ -1,6 +1,6 @@
use rustc_hash::FxHashMap;
use ruff_index::{newtype_index, Idx};
use ruff_index::newtype_index;
use ruff_python_ast as ast;
use ruff_python_ast::ExpressionRef;
@@ -28,18 +28,54 @@ use crate::Db;
pub(crate) struct AstIds {
/// Maps expressions to their expression id. Uses `NodeKey` because it avoids cloning [`Parsed`].
expressions_map: FxHashMap<ExpressionNodeKey, ScopedExpressionId>,
/// Maps expressions which "use" a symbol (that is, [`ExprName`]) to a use id.
uses_map: FxHashMap<ExpressionNodeKey, ScopedUseId>,
}
impl AstIds {
fn expression_id(&self, key: impl Into<ExpressionNodeKey>) -> ScopedExpressionId {
self.expressions_map[&key.into()]
}
fn use_id(&self, key: impl Into<ExpressionNodeKey>) -> ScopedUseId {
self.uses_map[&key.into()]
}
}
fn ast_ids<'db>(db: &'db dyn Db, scope: ScopeId) -> &'db AstIds {
semantic_index(db, scope.file(db)).ast_ids(scope.file_scope_id(db))
}
pub trait HasScopedUseId {
/// The type of the ID uniquely identifying the use.
type Id: Copy;
/// Returns the ID that uniquely identifies the use in `scope`.
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id;
}
/// Uniquely identifies a use of a name in a [`crate::semantic_index::symbol::FileScopeId`].
#[newtype_index]
pub struct ScopedUseId;
impl HasScopedUseId for ast::ExprName {
type Id = ScopedUseId;
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
let expression_ref = ExpressionRef::from(self);
expression_ref.scoped_use_id(db, scope)
}
}
impl HasScopedUseId for ast::ExpressionRef<'_> {
type Id = ScopedUseId;
fn scoped_use_id(&self, db: &dyn Db, scope: ScopeId) -> Self::Id {
let ast_ids = ast_ids(db, scope);
ast_ids.use_id(*self)
}
}
pub trait HasScopedAstId {
/// The type of the ID uniquely identifying the node.
type Id: Copy;
@@ -110,38 +146,43 @@ impl HasScopedAstId for ast::ExpressionRef<'_> {
#[derive(Debug)]
pub(super) struct AstIdsBuilder {
next_id: ScopedExpressionId,
expressions_map: FxHashMap<ExpressionNodeKey, ScopedExpressionId>,
uses_map: FxHashMap<ExpressionNodeKey, ScopedUseId>,
}
impl AstIdsBuilder {
pub(super) fn new() -> Self {
Self {
next_id: ScopedExpressionId::new(0),
expressions_map: FxHashMap::default(),
uses_map: FxHashMap::default(),
}
}
/// Adds `expr` to the AST ids map and returns its id.
///
/// ## Safety
/// The function is marked as unsafe because it calls [`AstNodeRef::new`] which requires
/// that `expr` is a child of `parsed`.
#[allow(unsafe_code)]
/// Adds `expr` to the expression ids map and returns its id.
pub(super) fn record_expression(&mut self, expr: &ast::Expr) -> ScopedExpressionId {
let expression_id = self.next_id;
self.next_id = expression_id + 1;
let expression_id = self.expressions_map.len().into();
self.expressions_map.insert(expr.into(), expression_id);
expression_id
}
/// Adds `expr` to the use ids map and returns its id.
pub(super) fn record_use(&mut self, expr: &ast::Expr) -> ScopedUseId {
let use_id = self.uses_map.len().into();
self.uses_map.insert(expr.into(), use_id);
use_id
}
pub(super) fn finish(mut self) -> AstIds {
self.expressions_map.shrink_to_fit();
self.uses_map.shrink_to_fit();
AstIds {
expressions_map: self.expressions_map,
uses_map: self.uses_map,
}
}
}
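To illustrate the distinction the builder above draws: every expression gets a `ScopedExpressionId`, but only name loads additionally get a `ScopedUseId`. In Python terms (illustrative only):

```python
x = 1      # `x` is a Store target: expression id, but no use id
y = x + x  # each `x` load is a distinct use (use #0 and use #1);
           # `x + x` and the Store target `y` get expression ids only
print(y)
```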

View File

@@ -2,62 +2,69 @@ use std::sync::Arc;
use rustc_hash::FxHashMap;
use ruff_db::files::File;
use ruff_db::parsed::ParsedModule;
use ruff_db::vfs::VfsFile;
use ruff_index::IndexVec;
use ruff_python_ast as ast;
use ruff_python_ast::name::Name;
use ruff_python_ast::visitor::{walk_expr, walk_stmt, Visitor};
use crate::ast_node_ref::AstNodeRef;
use crate::semantic_index::ast_ids::node_key::ExpressionNodeKey;
use crate::semantic_index::ast_ids::AstIdsBuilder;
use crate::semantic_index::definition::{Definition, DefinitionNodeKey, DefinitionNodeRef};
use crate::semantic_index::definition::{
AssignmentDefinitionNodeRef, Definition, DefinitionNodeKey, DefinitionNodeRef,
ImportFromDefinitionNodeRef,
};
use crate::semantic_index::expression::Expression;
use crate::semantic_index::symbol::{
FileScopeId, NodeWithScopeKey, NodeWithScopeRef, Scope, ScopeId, ScopedSymbolId, SymbolFlags,
SymbolTableBuilder,
};
use crate::semantic_index::use_def::{FlowSnapshot, UseDefMapBuilder};
use crate::semantic_index::SemanticIndex;
use crate::Db;
pub(super) struct SemanticIndexBuilder<'db, 'ast> {
pub(super) struct SemanticIndexBuilder<'db> {
// Builder state
db: &'db dyn Db,
file: VfsFile,
file: File,
module: &'db ParsedModule,
scope_stack: Vec<FileScopeId>,
/// the target we're currently inferring
current_target: Option<CurrentTarget<'ast>>,
/// the assignment we're currently visiting
current_assignment: Option<CurrentAssignment<'db>>,
// Semantic Index fields
scopes: IndexVec<FileScopeId, Scope>,
scope_ids_by_scope: IndexVec<FileScopeId, ScopeId<'db>>,
symbol_tables: IndexVec<FileScopeId, SymbolTableBuilder<'db>>,
symbol_tables: IndexVec<FileScopeId, SymbolTableBuilder>,
ast_ids: IndexVec<FileScopeId, AstIdsBuilder>,
use_def_maps: IndexVec<FileScopeId, UseDefMapBuilder<'db>>,
scopes_by_node: FxHashMap<NodeWithScopeKey, FileScopeId>,
scopes_by_expression: FxHashMap<ExpressionNodeKey, FileScopeId>,
definitions_by_node: FxHashMap<DefinitionNodeKey, Definition<'db>>,
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
}
impl<'db, 'ast> SemanticIndexBuilder<'db, 'ast>
where
'db: 'ast,
{
pub(super) fn new(db: &'db dyn Db, file: VfsFile, parsed: &'db ParsedModule) -> Self {
impl<'db> SemanticIndexBuilder<'db> {
pub(super) fn new(db: &'db dyn Db, file: File, parsed: &'db ParsedModule) -> Self {
let mut builder = Self {
db,
file,
module: parsed,
scope_stack: Vec::new(),
current_target: None,
current_assignment: None,
scopes: IndexVec::new(),
symbol_tables: IndexVec::new(),
ast_ids: IndexVec::new(),
scope_ids_by_scope: IndexVec::new(),
use_def_maps: IndexVec::new(),
scopes_by_expression: FxHashMap::default(),
scopes_by_node: FxHashMap::default(),
definitions_by_node: FxHashMap::default(),
expressions_by_node: FxHashMap::default(),
};
builder.push_scope_with_parent(NodeWithScopeRef::Module, None);
@@ -72,16 +79,12 @@ where
.expect("Always to have a root scope")
}
fn push_scope(&mut self, node: NodeWithScopeRef<'ast>) {
fn push_scope(&mut self, node: NodeWithScopeRef) {
let parent = self.current_scope();
self.push_scope_with_parent(node, Some(parent));
}
fn push_scope_with_parent(
&mut self,
node: NodeWithScopeRef<'ast>,
parent: Option<FileScopeId>,
) {
fn push_scope_with_parent(&mut self, node: NodeWithScopeRef, parent: Option<FileScopeId>) {
let children_start = self.scopes.next_index() + 1;
let scope = Scope {
@@ -92,6 +95,7 @@ where
let file_scope_id = self.scopes.push(scope);
self.symbol_tables.push(SymbolTableBuilder::new());
self.use_def_maps.push(UseDefMapBuilder::new());
let ast_id_scope = self.ast_ids.push(AstIdsBuilder::new());
#[allow(unsafe_code)]
@@ -116,32 +120,54 @@ where
id
}
fn current_symbol_table(&mut self) -> &mut SymbolTableBuilder<'db> {
fn current_symbol_table(&mut self) -> &mut SymbolTableBuilder {
let scope_id = self.current_scope();
&mut self.symbol_tables[scope_id]
}
fn current_use_def_map(&mut self) -> &mut UseDefMapBuilder<'db> {
let scope_id = self.current_scope();
&mut self.use_def_maps[scope_id]
}
fn current_ast_ids(&mut self) -> &mut AstIdsBuilder {
let scope_id = self.current_scope();
&mut self.ast_ids[scope_id]
}
fn add_or_update_symbol(&mut self, name: Name, flags: SymbolFlags) -> ScopedSymbolId {
let symbol_table = self.current_symbol_table();
symbol_table.add_or_update_symbol(name, flags)
fn flow_snapshot(&mut self) -> FlowSnapshot {
self.current_use_def_map().snapshot()
}
fn add_definition(
fn flow_restore(&mut self, state: FlowSnapshot) {
self.current_use_def_map().restore(state);
}
fn flow_merge(&mut self, state: &FlowSnapshot) {
self.current_use_def_map().merge(state);
}
fn add_or_update_symbol(&mut self, name: Name, flags: SymbolFlags) -> ScopedSymbolId {
let symbol_table = self.current_symbol_table();
let (symbol_id, added) = symbol_table.add_or_update_symbol(name, flags);
if added {
let use_def_map = self.current_use_def_map();
use_def_map.add_symbol(symbol_id);
}
symbol_id
}
fn add_definition<'a>(
&mut self,
definition_node: impl Into<DefinitionNodeRef<'ast>>,
symbol_id: ScopedSymbolId,
symbol: ScopedSymbolId,
definition_node: impl Into<DefinitionNodeRef<'a>>,
) -> Definition<'db> {
let definition_node = definition_node.into();
let definition = Definition::new(
self.db,
self.file,
self.current_scope(),
symbol_id,
symbol,
#[allow(unsafe_code)]
unsafe {
definition_node.into_owned(self.module.clone())
@@ -150,26 +176,31 @@ where
self.definitions_by_node
.insert(definition_node.key(), definition);
self.current_use_def_map()
.record_definition(symbol, definition);
definition
}
fn add_or_update_symbol_with_definition(
&mut self,
name: Name,
definition: impl Into<DefinitionNodeRef<'ast>>,
) -> (ScopedSymbolId, Definition<'db>) {
let symbol_table = self.current_symbol_table();
let id = symbol_table.add_or_update_symbol(name, SymbolFlags::IS_DEFINED);
let definition = self.add_definition(definition, id);
self.current_symbol_table().add_definition(id, definition);
(id, definition)
/// Record an expression that needs to be a Salsa ingredient, because we need to infer its type
/// standalone (type narrowing tests, the RHS of an assignment).
fn add_standalone_expression(&mut self, expression_node: &ast::Expr) {
let expression = Expression::new(
self.db,
self.file,
self.current_scope(),
#[allow(unsafe_code)]
unsafe {
AstNodeRef::new(self.module.clone(), expression_node)
},
);
self.expressions_by_node
.insert(expression_node.into(), expression);
}
fn with_type_params(
&mut self,
with_params: &WithTypeParams<'ast>,
with_params: &WithTypeParams,
nested: impl FnOnce(&mut Self) -> FileScopeId,
) -> FileScopeId {
let type_params = with_params.type_parameters();
@@ -213,7 +244,7 @@ where
self.pop_scope();
assert!(self.scope_stack.is_empty());
assert!(self.current_target.is_none());
assert!(self.current_assignment.is_none());
let mut symbol_tables: IndexVec<_, _> = self
.symbol_tables
@@ -221,6 +252,12 @@ where
.map(|builder| Arc::new(builder.finish()))
.collect();
let mut use_def_maps: IndexVec<_, _> = self
.use_def_maps
.into_iter()
.map(|builder| Arc::new(builder.finish()))
.collect();
let mut ast_ids: IndexVec<_, _> = self
.ast_ids
.into_iter()
@@ -228,8 +265,9 @@ where
.collect();
self.scopes.shrink_to_fit();
ast_ids.shrink_to_fit();
symbol_tables.shrink_to_fit();
use_def_maps.shrink_to_fit();
ast_ids.shrink_to_fit();
self.scopes_by_expression.shrink_to_fit();
self.definitions_by_node.shrink_to_fit();
@@ -240,17 +278,19 @@ where
symbol_tables,
scopes: self.scopes,
definitions_by_node: self.definitions_by_node,
expressions_by_node: self.expressions_by_node,
scope_ids_by_scope: self.scope_ids_by_scope,
ast_ids,
scopes_by_expression: self.scopes_by_expression,
scopes_by_node: self.scopes_by_node,
use_def_maps,
}
}
}
impl<'db, 'ast> Visitor<'ast> for SemanticIndexBuilder<'db, 'ast>
impl<'db, 'ast> Visitor<'ast> for SemanticIndexBuilder<'db>
where
'db: 'ast,
'ast: 'db,
{
fn visit_stmt(&mut self, stmt: &'ast ast::Stmt) {
match stmt {
@@ -259,10 +299,9 @@ where
self.visit_decorator(decorator);
}
self.add_or_update_symbol_with_definition(
function_def.name.id.clone(),
function_def,
);
let symbol = self
.add_or_update_symbol(function_def.name.id.clone(), SymbolFlags::IS_DEFINED);
self.add_definition(symbol, function_def);
self.with_type_params(
&WithTypeParams::FunctionDef { node: function_def },
@@ -283,7 +322,9 @@ where
self.visit_decorator(decorator);
}
self.add_or_update_symbol_with_definition(class.name.id.clone(), class);
let symbol =
self.add_or_update_symbol(class.name.id.clone(), SymbolFlags::IS_DEFINED);
self.add_definition(symbol, class);
self.with_type_params(&WithTypeParams::ClassDef { node: class }, |builder| {
if let Some(arguments) = &class.arguments {
@@ -296,41 +337,84 @@ where
builder.pop_scope()
});
}
ast::Stmt::Import(ast::StmtImport { names, .. }) => {
for alias in names {
ast::Stmt::Import(node) => {
for alias in &node.names {
let symbol_name = if let Some(asname) = &alias.asname {
asname.id.clone()
} else {
Name::new(alias.name.id.split('.').next().unwrap())
};
self.add_or_update_symbol_with_definition(symbol_name, alias);
let symbol = self.add_or_update_symbol(symbol_name, SymbolFlags::IS_DEFINED);
self.add_definition(symbol, alias);
}
}
ast::Stmt::ImportFrom(ast::StmtImportFrom {
module: _,
names,
level: _,
..
}) => {
for alias in names {
ast::Stmt::ImportFrom(node) => {
for (alias_index, alias) in node.names.iter().enumerate() {
let symbol_name = if let Some(asname) = &alias.asname {
&asname.id
} else {
&alias.name.id
};
self.add_or_update_symbol_with_definition(symbol_name.clone(), alias);
let symbol =
self.add_or_update_symbol(symbol_name.clone(), SymbolFlags::IS_DEFINED);
self.add_definition(symbol, ImportFromDefinitionNodeRef { node, alias_index });
}
}
ast::Stmt::Assign(node) => {
debug_assert!(self.current_target.is_none());
debug_assert!(self.current_assignment.is_none());
self.visit_expr(&node.value);
self.add_standalone_expression(&node.value);
self.current_assignment = Some(node.into());
for target in &node.targets {
self.current_target = Some(CurrentTarget::Expr(target));
self.visit_expr(target);
}
self.current_target = None;
self.current_assignment = None;
}
ast::Stmt::AnnAssign(node) => {
debug_assert!(self.current_assignment.is_none());
// TODO deferred annotation visiting
self.visit_expr(&node.annotation);
match &node.value {
Some(value) => {
self.visit_expr(value);
self.current_assignment = Some(node.into());
self.visit_expr(&node.target);
self.current_assignment = None;
}
None => {
// TODO annotation-only assignments
self.visit_expr(&node.target);
}
}
}
ast::Stmt::If(node) => {
self.visit_expr(&node.test);
let pre_if = self.flow_snapshot();
self.visit_body(&node.body);
let mut post_clauses: Vec<FlowSnapshot> = vec![];
for clause in &node.elif_else_clauses {
// snapshot after every block except the last; the last one will just become
// the state that we merge the other snapshots into
post_clauses.push(self.flow_snapshot());
// we can only take an elif/else branch if none of the previous ones were
// taken, so the block entry state is always `pre_if`
self.flow_restore(pre_if.clone());
self.visit_elif_else_clause(clause);
}
for post_clause_state in post_clauses {
self.flow_merge(&post_clause_state);
}
let has_else = node
.elif_else_clauses
.last()
.is_some_and(|clause| clause.test.is_none());
if !has_else {
// if there's no else clause, then it's possible we took none of the branches,
// and the pre_if state can reach here
self.flow_merge(&pre_if);
}
}
_ => {
walk_stmt(self, stmt);
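The snapshot/restore/merge handling of `ast::Stmt::If` above implements branch-aware reaching definitions. A hedged walkthrough on a small Python input (illustrative, not red-knot output):

```python
import random

x = "a"                     # state before the if: x -> {D1}
if random.random() < 0.5:   # the builder snapshots this pre-if state
    x = "b"                 # branch state: x -> {D2}
elif random.random() < 0.5:
    x = "c"                 # visited from a restored pre-if snapshot: x -> {D3}
# no `else` clause, so the pre-if state is merged back in:
# at this point x -> {D1, D2, D3}
print(x)
```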
@@ -344,57 +428,64 @@ where
self.current_ast_ids().record_expression(expr);
match expr {
ast::Expr::Name(ast::ExprName { id, ctx, .. }) => {
ast::Expr::Name(name_node) => {
let ast::ExprName { id, ctx, .. } = name_node;
let flags = match ctx {
ast::ExprContext::Load => SymbolFlags::IS_USED,
ast::ExprContext::Store => SymbolFlags::IS_DEFINED,
ast::ExprContext::Del => SymbolFlags::IS_DEFINED,
ast::ExprContext::Invalid => SymbolFlags::empty(),
};
match self.current_target {
Some(target) if flags.contains(SymbolFlags::IS_DEFINED) => {
self.add_or_update_symbol_with_definition(id.clone(), target);
}
_ => {
self.add_or_update_symbol(id.clone(), flags);
let symbol = self.add_or_update_symbol(id.clone(), flags);
if flags.contains(SymbolFlags::IS_DEFINED) {
match self.current_assignment {
Some(CurrentAssignment::Assign(assignment)) => {
self.add_definition(
symbol,
AssignmentDefinitionNodeRef {
assignment,
target: name_node,
},
);
}
Some(CurrentAssignment::AnnAssign(ann_assign)) => {
self.add_definition(symbol, ann_assign);
}
Some(CurrentAssignment::Named(named)) => {
self.add_definition(symbol, named);
}
None => {}
}
}
if flags.contains(SymbolFlags::IS_USED) {
let use_id = self.current_ast_ids().record_use(expr);
self.current_use_def_map().record_use(symbol, use_id);
}
walk_expr(self, expr);
}
ast::Expr::Named(node) => {
debug_assert!(self.current_target.is_none());
self.current_target = Some(CurrentTarget::ExprNamed(node));
debug_assert!(self.current_assignment.is_none());
self.current_assignment = Some(node.into());
// TODO walrus in comprehensions is implicitly nonlocal
self.visit_expr(&node.target);
self.current_target = None;
self.current_assignment = None;
self.visit_expr(&node.value);
}
ast::Expr::If(ast::ExprIf {
body, test, orelse, ..
}) => {
// TODO detect statically known truthy or falsy test (via type inference, not naive
// AST inspection, so we can't simplify here, need to record test expression in CFG
// for later checking)
// AST inspection, so we can't simplify here, need to record test expression for
// later checking)
self.visit_expr(test);
// let if_branch = self.flow_graph_builder.add_branch(self.current_flow_node());
// self.set_current_flow_node(if_branch);
// self.insert_constraint(test);
let pre_if = self.flow_snapshot();
self.visit_expr(body);
// let post_body = self.current_flow_node();
// self.set_current_flow_node(if_branch);
let post_body = self.flow_snapshot();
self.flow_restore(pre_if);
self.visit_expr(orelse);
// let post_else = self
// .flow_graph_builder
// .add_phi(self.current_flow_node(), post_body);
// self.set_current_flow_node(post_else);
self.flow_merge(&post_body);
}
_ => {
walk_expr(self, expr);
@@ -418,16 +509,26 @@ impl<'node> WithTypeParams<'node> {
}
#[derive(Copy, Clone, Debug)]
enum CurrentTarget<'a> {
Expr(&'a ast::Expr),
ExprNamed(&'a ast::ExprNamed),
enum CurrentAssignment<'a> {
Assign(&'a ast::StmtAssign),
AnnAssign(&'a ast::StmtAnnAssign),
Named(&'a ast::ExprNamed),
}
impl<'a> From<CurrentTarget<'a>> for DefinitionNodeRef<'a> {
fn from(val: CurrentTarget<'a>) -> Self {
match val {
CurrentTarget::Expr(expression) => DefinitionNodeRef::Target(expression),
CurrentTarget::ExprNamed(named) => DefinitionNodeRef::NamedExpression(named),
}
impl<'a> From<&'a ast::StmtAssign> for CurrentAssignment<'a> {
fn from(value: &'a ast::StmtAssign) -> Self {
Self::Assign(value)
}
}
impl<'a> From<&'a ast::StmtAnnAssign> for CurrentAssignment<'a> {
fn from(value: &'a ast::StmtAnnAssign) -> Self {
Self::AnnAssign(value)
}
}
impl<'a> From<&'a ast::ExprNamed> for CurrentAssignment<'a> {
fn from(value: &'a ast::ExprNamed) -> Self {
Self::Named(value)
}
}

View File

@@ -1,66 +1,114 @@
use ruff_db::files::File;
use ruff_db::parsed::ParsedModule;
use ruff_db::vfs::VfsFile;
use ruff_python_ast as ast;
use crate::ast_node_ref::AstNodeRef;
use crate::node_key::NodeKey;
use crate::semantic_index::symbol::{FileScopeId, ScopedSymbolId};
use crate::semantic_index::symbol::{FileScopeId, ScopeId, ScopedSymbolId};
use crate::Db;
#[salsa::tracked]
pub struct Definition<'db> {
/// The file in which the definition is defined.
/// The file in which the definition occurs.
#[id]
pub(super) file: VfsFile,
pub(crate) file: File,
/// The scope in which the definition is defined.
/// The scope in which the definition occurs.
#[id]
pub(crate) scope: FileScopeId,
pub(crate) file_scope: FileScopeId,
/// The id of the corresponding symbol. Mainly used as ID.
/// The symbol defined.
#[id]
symbol_id: ScopedSymbolId,
pub(crate) symbol: ScopedSymbolId,
#[no_eq]
#[return_ref]
pub(crate) node: DefinitionKind,
}
impl<'db> Definition<'db> {
pub(crate) fn scope(self, db: &'db dyn Db) -> ScopeId<'db> {
self.file_scope(db).to_scope_id(db, self.file(db))
}
}
#[derive(Copy, Clone, Debug)]
pub(crate) enum DefinitionNodeRef<'a> {
Alias(&'a ast::Alias),
Import(&'a ast::Alias),
ImportFrom(ImportFromDefinitionNodeRef<'a>),
Function(&'a ast::StmtFunctionDef),
Class(&'a ast::StmtClassDef),
NamedExpression(&'a ast::ExprNamed),
Target(&'a ast::Expr),
Assignment(AssignmentDefinitionNodeRef<'a>),
AnnotatedAssignment(&'a ast::StmtAnnAssign),
}
impl<'a> From<&'a ast::Alias> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::Alias) -> Self {
Self::Alias(node)
}
}
impl<'a> From<&'a ast::StmtFunctionDef> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtFunctionDef) -> Self {
Self::Function(node)
}
}
impl<'a> From<&'a ast::StmtClassDef> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtClassDef) -> Self {
Self::Class(node)
}
}
impl<'a> From<&'a ast::ExprNamed> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::ExprNamed) -> Self {
Self::NamedExpression(node)
}
}
impl<'a> From<&'a ast::StmtAnnAssign> for DefinitionNodeRef<'a> {
fn from(node: &'a ast::StmtAnnAssign) -> Self {
Self::AnnotatedAssignment(node)
}
}
impl<'a> From<&'a ast::Alias> for DefinitionNodeRef<'a> {
fn from(node_ref: &'a ast::Alias) -> Self {
Self::Import(node_ref)
}
}
impl<'a> From<ImportFromDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: ImportFromDefinitionNodeRef<'a>) -> Self {
Self::ImportFrom(node_ref)
}
}
impl<'a> From<AssignmentDefinitionNodeRef<'a>> for DefinitionNodeRef<'a> {
fn from(node_ref: AssignmentDefinitionNodeRef<'a>) -> Self {
Self::Assignment(node_ref)
}
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct ImportFromDefinitionNodeRef<'a> {
pub(crate) node: &'a ast::StmtImportFrom,
pub(crate) alias_index: usize,
}
#[derive(Copy, Clone, Debug)]
pub(crate) struct AssignmentDefinitionNodeRef<'a> {
pub(crate) assignment: &'a ast::StmtAssign,
pub(crate) target: &'a ast::ExprName,
}
impl DefinitionNodeRef<'_> {
#[allow(unsafe_code)]
pub(super) unsafe fn into_owned(self, parsed: ParsedModule) -> DefinitionKind {
match self {
DefinitionNodeRef::Alias(alias) => {
DefinitionKind::Alias(AstNodeRef::new(parsed, alias))
DefinitionNodeRef::Import(alias) => {
DefinitionKind::Import(AstNodeRef::new(parsed, alias))
}
DefinitionNodeRef::ImportFrom(ImportFromDefinitionNodeRef { node, alias_index }) => {
DefinitionKind::ImportFrom(ImportFromDefinitionKind {
node: AstNodeRef::new(parsed, node),
alias_index,
})
}
DefinitionNodeRef::Function(function) => {
DefinitionKind::Function(AstNodeRef::new(parsed, function))
@@ -71,33 +119,111 @@ impl DefinitionNodeRef<'_> {
DefinitionNodeRef::NamedExpression(named) => {
DefinitionKind::NamedExpression(AstNodeRef::new(parsed, named))
}
DefinitionNodeRef::Target(target) => {
DefinitionKind::Target(AstNodeRef::new(parsed, target))
DefinitionNodeRef::Assignment(AssignmentDefinitionNodeRef { assignment, target }) => {
DefinitionKind::Assignment(AssignmentDefinitionKind {
assignment: AstNodeRef::new(parsed.clone(), assignment),
target: AstNodeRef::new(parsed, target),
})
}
DefinitionNodeRef::AnnotatedAssignment(assign) => {
DefinitionKind::AnnotatedAssignment(AstNodeRef::new(parsed, assign))
}
}
}
}
impl DefinitionNodeRef<'_> {
pub(super) fn key(self) -> DefinitionNodeKey {
match self {
Self::Alias(node) => DefinitionNodeKey(NodeKey::from_node(node)),
Self::Function(node) => DefinitionNodeKey(NodeKey::from_node(node)),
Self::Class(node) => DefinitionNodeKey(NodeKey::from_node(node)),
Self::NamedExpression(node) => DefinitionNodeKey(NodeKey::from_node(node)),
Self::Target(node) => DefinitionNodeKey(NodeKey::from_node(node)),
Self::Import(node) => node.into(),
Self::ImportFrom(ImportFromDefinitionNodeRef { node, alias_index }) => {
(&node.names[alias_index]).into()
}
Self::Function(node) => node.into(),
Self::Class(node) => node.into(),
Self::NamedExpression(node) => node.into(),
Self::Assignment(AssignmentDefinitionNodeRef {
assignment: _,
target,
}) => target.into(),
Self::AnnotatedAssignment(node) => node.into(),
}
}
}
#[derive(Clone, Debug)]
pub enum DefinitionKind {
Alias(AstNodeRef<ast::Alias>),
Import(AstNodeRef<ast::Alias>),
ImportFrom(ImportFromDefinitionKind),
Function(AstNodeRef<ast::StmtFunctionDef>),
Class(AstNodeRef<ast::StmtClassDef>),
NamedExpression(AstNodeRef<ast::ExprNamed>),
Target(AstNodeRef<ast::Expr>),
Assignment(AssignmentDefinitionKind),
AnnotatedAssignment(AstNodeRef<ast::StmtAnnAssign>),
}
#[derive(Clone, Debug)]
pub struct ImportFromDefinitionKind {
node: AstNodeRef<ast::StmtImportFrom>,
alias_index: usize,
}
impl ImportFromDefinitionKind {
pub(crate) fn import(&self) -> &ast::StmtImportFrom {
self.node.node()
}
pub(crate) fn alias(&self) -> &ast::Alias {
&self.node.node().names[self.alias_index]
}
}
#[derive(Clone, Debug)]
#[allow(dead_code)]
pub struct AssignmentDefinitionKind {
assignment: AstNodeRef<ast::StmtAssign>,
target: AstNodeRef<ast::ExprName>,
}
impl AssignmentDefinitionKind {
pub(crate) fn assignment(&self) -> &ast::StmtAssign {
self.assignment.node()
}
}
#[derive(Copy, Clone, Eq, PartialEq, Hash, Debug)]
pub(super) struct DefinitionNodeKey(NodeKey);
pub(crate) struct DefinitionNodeKey(NodeKey);
impl From<&ast::Alias> for DefinitionNodeKey {
fn from(node: &ast::Alias) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::StmtFunctionDef> for DefinitionNodeKey {
fn from(node: &ast::StmtFunctionDef) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::StmtClassDef> for DefinitionNodeKey {
fn from(node: &ast::StmtClassDef) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::ExprName> for DefinitionNodeKey {
fn from(node: &ast::ExprName) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::ExprNamed> for DefinitionNodeKey {
fn from(node: &ast::ExprNamed) -> Self {
Self(NodeKey::from_node(node))
}
}
impl From<&ast::StmtAnnAssign> for DefinitionNodeKey {
fn from(node: &ast::StmtAnnAssign) -> Self {
Self(NodeKey::from_node(node))
}
}


@@ -0,0 +1,31 @@
use crate::ast_node_ref::AstNodeRef;
use crate::db::Db;
use crate::semantic_index::symbol::{FileScopeId, ScopeId};
use ruff_db::files::File;
use ruff_python_ast as ast;
use salsa;
/// An independently type-inferable expression.
///
/// Includes constraint expressions (e.g. if tests) and the RHS of an unpacking assignment.
#[salsa::tracked]
pub(crate) struct Expression<'db> {
/// The file in which the expression occurs.
#[id]
pub(crate) file: File,
/// The scope in which the expression occurs.
#[id]
pub(crate) file_scope: FileScopeId,
/// The expression node.
#[no_eq]
#[return_ref]
pub(crate) node: AstNodeRef<ast::Expr>,
}
impl<'db> Expression<'db> {
pub(crate) fn scope(self, db: &'db dyn Db) -> ScopeId<'db> {
self.file_scope(db).to_scope_id(db, self.file(db))
}
}


@@ -3,8 +3,8 @@ use std::ops::Range;
use bitflags::bitflags;
use hashbrown::hash_map::RawEntryMut;
use ruff_db::files::File;
use ruff_db::parsed::ParsedModule;
use ruff_db::vfs::VfsFile;
use ruff_index::{newtype_index, IndexVec};
use ruff_python_ast::name::Name;
use ruff_python_ast::{self as ast};
@@ -12,33 +12,23 @@ use rustc_hash::FxHasher;
use crate::ast_node_ref::AstNodeRef;
use crate::node_key::NodeKey;
use crate::semantic_index::definition::Definition;
use crate::semantic_index::{root_scope, semantic_index, symbol_table, SymbolMap};
use crate::semantic_index::{semantic_index, SymbolMap};
use crate::Db;
#[derive(Eq, PartialEq, Debug)]
pub struct Symbol<'db> {
pub struct Symbol {
name: Name,
flags: SymbolFlags,
/// The nodes that define this symbol, in source order.
///
/// TODO: Use smallvec here, but it creates the same lifetime issues as in [QualifiedName](https://github.com/astral-sh/ruff/blob/5109b50bb3847738eeb209352cf26bda392adf62/crates/ruff_python_ast/src/name.rs#L562-L569)
definitions: Vec<Definition<'db>>,
}
impl<'db> Symbol<'db> {
impl Symbol {
fn new(name: Name) -> Self {
Self {
name,
flags: SymbolFlags::empty(),
definitions: Vec::new(),
}
}
fn push_definition(&mut self, definition: Definition<'db>) {
self.definitions.push(definition);
}
fn insert_flags(&mut self, flags: SymbolFlags) {
self.flags.insert(flags);
}
@@ -57,10 +47,6 @@ impl<'db> Symbol<'db> {
pub fn is_defined(&self) -> bool {
self.flags.contains(SymbolFlags::IS_DEFINED)
}
pub fn definitions(&self) -> &[Definition] {
&self.definitions
}
}
bitflags! {
@@ -75,15 +61,6 @@ bitflags! {
}
}
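
As a sketch of the flag-accumulation pattern behind `insert_flags` and `is_defined` (only `IS_DEFINED` is visible in this diff; `IS_USED` and the exact discriminant values are assumptions here):

```rust
use bitflags::bitflags; // bitflags = "2"

bitflags! {
    // Assumed flag set; the real SymbolFlags body is elided from this diff.
    #[derive(Copy, Clone, Debug, PartialEq, Eq)]
    struct SymbolFlags: u8 {
        const IS_USED = 1 << 0;
        const IS_DEFINED = 1 << 1;
    }
}

fn main() {
    let mut flags = SymbolFlags::empty();
    flags.insert(SymbolFlags::IS_DEFINED); // insert_flags() unions into the set
    assert!(flags.contains(SymbolFlags::IS_DEFINED)); // what is_defined() checks
    assert!(!flags.contains(SymbolFlags::IS_USED));
}
```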
/// ID that uniquely identifies a public symbol defined in a module's root scope.
#[salsa::tracked]
pub struct PublicSymbolId<'db> {
#[id]
pub(crate) file: VfsFile,
#[id]
pub(crate) scoped_symbol_id: ScopedSymbolId,
}
/// ID that uniquely identifies a symbol in a file.
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct FileSymbolId {
@@ -111,53 +88,11 @@ impl From<FileSymbolId> for ScopedSymbolId {
#[newtype_index]
pub struct ScopedSymbolId;
impl ScopedSymbolId {
/// Converts the symbol to a public symbol.
///
/// # Panics
/// May panic if the symbol does not belong to `file` or is not a symbol of `file`'s root scope.
pub(crate) fn to_public_symbol(self, db: &dyn Db, file: VfsFile) -> PublicSymbolId {
let symbols = public_symbols_map(db, file);
symbols.public(self)
}
}
#[salsa::tracked(return_ref)]
pub(crate) fn public_symbols_map(db: &dyn Db, file: VfsFile) -> PublicSymbolsMap<'_> {
let _span = tracing::trace_span!("public_symbols_map", ?file).entered();
let module_scope = root_scope(db, file);
let symbols = symbol_table(db, module_scope);
let public_symbols: IndexVec<_, _> = symbols
.symbol_ids()
.map(|id| PublicSymbolId::new(db, file, id))
.collect();
PublicSymbolsMap {
symbols: public_symbols,
}
}
/// Maps the [`ScopedSymbolId`]s of a file's root scope to the corresponding [`PublicSymbolId`]s (Salsa ingredients).
#[derive(Eq, PartialEq, Debug)]
pub(crate) struct PublicSymbolsMap<'db> {
symbols: IndexVec<ScopedSymbolId, PublicSymbolId<'db>>,
}
impl<'db> PublicSymbolsMap<'db> {
/// Resolve the [`PublicSymbolId`] for the module-level `symbol_id`.
fn public(&self, symbol_id: ScopedSymbolId) -> PublicSymbolId<'db> {
self.symbols[symbol_id]
}
}
/// A cross-module identifier of a scope that can be used as a salsa query parameter.
#[salsa::tracked]
pub struct ScopeId<'db> {
#[allow(clippy::used_underscore_binding)]
#[id]
pub file: VfsFile,
pub file: File,
#[id]
pub file_scope_id: FileScopeId,
@@ -168,6 +103,17 @@ pub struct ScopeId<'db> {
}
impl<'db> ScopeId<'db> {
pub(crate) fn is_function_like(self, db: &'db dyn Db) -> bool {
// Type parameter scopes behave like function scopes in terms of name resolution; the
// CPython symbol table implementation also uses the term "function-like" for these scopes.
matches!(
self.node(db),
NodeWithScopeKind::ClassTypeParameters(_)
| NodeWithScopeKind::FunctionTypeParameters(_)
| NodeWithScopeKind::Function(_)
)
}
#[cfg(test)]
pub(crate) fn name(self, db: &'db dyn Db) -> &'db str {
match self.node(db) {
@@ -186,12 +132,12 @@ impl<'db> ScopeId<'db> {
pub struct FileScopeId;
impl FileScopeId {
/// Returns the scope id of the Root scope.
pub fn root() -> Self {
/// Returns the scope id of the module-global scope.
pub fn global() -> Self {
FileScopeId::from_u32(0)
}
pub fn to_scope_id(self, db: &dyn Db, file: VfsFile) -> ScopeId<'_> {
pub fn to_scope_id(self, db: &dyn Db, file: File) -> ScopeId<'_> {
let index = semantic_index(db, file);
index.scope_ids_by_scope[self]
}
@@ -224,15 +170,15 @@ pub enum ScopeKind {
/// Symbol table for a specific [`Scope`].
#[derive(Debug)]
pub struct SymbolTable<'db> {
pub struct SymbolTable {
/// The symbols in this scope.
symbols: IndexVec<ScopedSymbolId, Symbol<'db>>,
symbols: IndexVec<ScopedSymbolId, Symbol>,
/// The symbols indexed by name.
symbols_by_name: SymbolMap,
}
impl<'db> SymbolTable<'db> {
impl SymbolTable {
fn new() -> Self {
Self {
symbols: IndexVec::new(),
@@ -244,21 +190,21 @@ impl<'db> SymbolTable<'db> {
self.symbols.shrink_to_fit();
}
pub(crate) fn symbol(&self, symbol_id: impl Into<ScopedSymbolId>) -> &Symbol<'db> {
pub(crate) fn symbol(&self, symbol_id: impl Into<ScopedSymbolId>) -> &Symbol {
&self.symbols[symbol_id.into()]
}
pub(crate) fn symbol_ids(&self) -> impl Iterator<Item = ScopedSymbolId> + 'db {
#[allow(unused)]
pub(crate) fn symbol_ids(&self) -> impl Iterator<Item = ScopedSymbolId> {
self.symbols.indices()
}
pub fn symbols(&self) -> impl Iterator<Item = &Symbol<'db>> {
pub fn symbols(&self) -> impl Iterator<Item = &Symbol> {
self.symbols.iter()
}
/// Returns the symbol named `name`.
#[allow(unused)]
pub(crate) fn symbol_by_name(&self, name: &str) -> Option<&Symbol<'db>> {
pub(crate) fn symbol_by_name(&self, name: &str) -> Option<&Symbol> {
let id = self.symbol_id_by_name(name)?;
Some(self.symbol(id))
}
@@ -282,21 +228,21 @@ impl<'db> SymbolTable<'db> {
}
}
impl PartialEq for SymbolTable<'_> {
impl PartialEq for SymbolTable {
fn eq(&self, other: &Self) -> bool {
// We don't need to compare the symbols_by_name because the name is already captured in `Symbol`.
self.symbols == other.symbols
}
}
impl Eq for SymbolTable<'_> {}
impl Eq for SymbolTable {}
#[derive(Debug)]
pub(super) struct SymbolTableBuilder<'db> {
table: SymbolTable<'db>,
pub(super) struct SymbolTableBuilder {
table: SymbolTable,
}
impl<'db> SymbolTableBuilder<'db> {
impl SymbolTableBuilder {
pub(super) fn new() -> Self {
Self {
table: SymbolTable::new(),
@@ -307,7 +253,7 @@ impl<'db> SymbolTableBuilder<'db> {
&mut self,
name: Name,
flags: SymbolFlags,
) -> ScopedSymbolId {
) -> (ScopedSymbolId, bool) {
let hash = SymbolTable::hash_name(&name);
let entry = self
.table
@@ -320,7 +266,7 @@ impl<'db> SymbolTableBuilder<'db> {
let symbol = &mut self.table.symbols[*entry.key()];
symbol.insert_flags(flags);
*entry.key()
(*entry.key(), false)
}
RawEntryMut::Vacant(entry) => {
let mut symbol = Symbol::new(name);
@@ -330,16 +276,12 @@ impl<'db> SymbolTableBuilder<'db> {
entry.insert_with_hasher(hash, id, (), |id| {
SymbolTable::hash_name(self.table.symbols[*id].name().as_str())
});
id
(id, true)
}
}
}
pub(super) fn add_definition(&mut self, symbol: ScopedSymbolId, definition: Definition<'db>) {
self.table.symbols[symbol].push_definition(definition);
}
pub(super) fn finish(mut self) -> SymbolTable<'db> {
pub(super) fn finish(mut self) -> SymbolTable {
self.table.shrink_to_fit();
self.table
}
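
A toy version of the new `(ScopedSymbolId, bool)` contract of `add_or_update_symbol`, with std's `HashMap` entry API standing in for the hashbrown raw-entry machinery over an `IndexVec`: the boolean reports whether the name was newly inserted, presumably so the caller can react to first-time insertions.

```rust
use std::collections::hash_map::{Entry, HashMap};

// Stand-in for add_or_update_symbol: returns the symbol's id plus `true`
// only when the name was newly inserted (illustration, not the real impl).
fn add_or_update(table: &mut HashMap<String, usize>, name: &str) -> (usize, bool) {
    let next_id = table.len();
    match table.entry(name.to_string()) {
        Entry::Occupied(entry) => (*entry.get(), false),
        Entry::Vacant(entry) => {
            entry.insert(next_id);
            (next_id, true)
        }
    }
}

fn main() {
    let mut table = HashMap::new();
    assert_eq!(add_or_update(&mut table, "x"), (0, true)); // newly inserted
    assert_eq!(add_or_update(&mut table, "x"), (0, false)); // existing id returned
    assert_eq!(add_or_update(&mut table, "y"), (1, true));
}
```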


@@ -0,0 +1,353 @@
//! Build a map from each use of a symbol to the definitions visible from that use.
//!
//! Let's take this code sample:
//!
//! ```python
//! x = 1
//! x = 2
//! y = x
//! if flag:
//!     x = 3
//! else:
//!     x = 4
//! z = x
//! ```
//!
//! In this snippet, we have four definitions of `x` (the statements assigning `1`, `2`, `3`,
//! and `4` to it), and two uses of `x` (the `y = x` and `z = x` assignments). The first
//! [`Definition`] of `x` is never visible to any use, because it's immediately replaced by the
//! second definition, before any use happens. (A linter could thus flag the statement `x = 1`
//! as likely superfluous.)
//!
//! The first use of `x` has one definition visible to it: the assignment `x = 2`.
//!
//! Things get a bit more complex when we have branches. We will definitely take either the `if` or
//! the `else` branch. Thus, the second use of `x` has two definitions visible to it: `x = 3` and
//! `x = 4`. The `x = 2` definition is no longer visible, because it must be replaced by either `x
//! = 3` or `x = 4`, no matter which branch was taken. We don't know which branch was taken, so we
//! must consider both definitions as visible, which means eventually we would (in type inference)
//! look at these two definitions and infer a type of `Literal[3, 4]` -- the union of `Literal[3]`
//! and `Literal[4]` -- for the second use of `x`.
//!
//! So that's one question our use-def map needs to answer: given a specific use of a symbol,
//! which definitions are visible from that use? In
//! [`AstIds`](crate::semantic_index::ast_ids::AstIds) we number all uses (that means a `Name` node
//! with `Load` context) so we have a `ScopedUseId` to efficiently represent each use.
//!
//! The other case we need to handle is when a symbol is referenced from a different scope (the
//! most obvious example of this is an import). We call this "public" use of a symbol. So the other
//! question we need to be able to answer is, what are the publicly-visible definitions of each
//! symbol?
//!
//! Technically, public use of a symbol could also occur from any point in control flow of the
//! scope where the symbol is defined (via inline imports and import cycles, in the case of an
//! import, or via a function call partway through the local scope that ends up using a symbol from
//! the scope via a global or nonlocal reference.) But modeling this fully accurately requires
//! whole-program analysis that isn't tractable for an efficient incremental compiler, since it
//! means a given symbol could have a different type every place it's referenced throughout the
//! program, depending on the shape of arbitrarily-sized call/import graphs. So we follow other
//! Python type-checkers in making the simplifying assumption that usually the scope will finish
//! execution before its symbols are made visible to other scopes; for instance, most imports will
//! import from a complete module, not a partially-executed module. (We may want to get a little
//! smarter than this in the future, in particular for closures, but for now this is where we
//! start.)
//!
//! So this means that the publicly-visible definitions of a symbol are the definitions still
//! visible at the end of the scope.
//!
//! The data structure we build to answer these two questions is the `UseDefMap`. It has a
//! `definitions_by_use` vector indexed by [`ScopedUseId`] and a `public_definitions` vector
//! indexed by [`ScopedSymbolId`]. The values in each of these vectors are (in principle) a list of
//! visible definitions at that use, or at the end of the scope for that symbol.
//!
//! In order to avoid vectors-of-vectors and all the allocations that would entail, we don't
//! actually store each "list of visible definitions" as its own vector of [`Definition`] IDs.
//! Instead, each value in `definitions_by_use` and `public_definitions` is a [`Definitions`]
//! struct that keeps a [`Range`] into a third vector of [`Definition`] IDs, `all_definitions`.
//! The trick with
//! this representation is that it requires that the definitions visible at any given use of a
//! symbol are stored sequentially in `all_definitions`.
//!
//! There is another special kind of possible "definition" for a symbol: it might be unbound in the
//! scope. (This isn't equivalent to "zero visible definitions", since we may go through an `if`
//! that has a definition for the symbol, leaving us with one visible definition, but still also
//! the "unbound" possibility, since we might not have taken the `if` branch.)
//!
//! The simplest way to model "unbound" would be as an actual [`Definition`] itself: the initial
//! visible [`Definition`] for each symbol in a scope. But actually modeling it this way would
//! dramatically increase the number of [`Definition`]s that Salsa must track. Since "unbound" is a
//! special definition in that all symbols share it, and it doesn't have any additional per-symbol
//! state, we can represent it more efficiently: we use the `may_be_unbound` boolean on the
//! [`Definitions`] struct. If this flag is `true`, it means the symbol/use really has one
//! additional visible "definition", which is the unbound state. If this flag is `false`, it means
//! we've eliminated the possibility of unbound: every path we've followed includes a definition
//! for this symbol.
//!
//! To build a [`UseDefMap`], the [`UseDefMapBuilder`] is notified of each new use and definition
//! as they are encountered by the
//! [`SemanticIndexBuilder`](crate::semantic_index::builder::SemanticIndexBuilder) AST visit. For
//! each symbol, the builder tracks the currently-visible definitions for that symbol. When we hit
//! a use of a symbol, it records the currently-visible definitions for that symbol as the visible
//! definitions for that use. When we reach the end of the scope, it records the currently-visible
//! definitions for each symbol as the public definitions of that symbol.
//!
//! Let's walk through the above example. Initially we record for `x` that it has no visible
//! definitions, and may be unbound. When we see `x = 1`, we record that as the sole visible
//! definition of `x`, and flip `may_be_unbound` to `false`. Then we see `x = 2`, and it replaces
//! `x = 1` as the sole visible definition of `x`. When we get to `y = x`, we record that the
//! visible definitions for that use of `x` are just the `x = 2` definition.
//!
//! Then we hit the `if` branch. We visit the `test` node (`flag` in this case), since that will
//! happen regardless. Then we take a pre-branch snapshot of the currently visible definitions for
//! all symbols, which we'll need later. Then we go ahead and visit the `if` body. When we see `x =
//! 3`, it replaces `x = 2` as the sole visible definition of `x`. At the end of the `if` body, we
//! take another snapshot of the currently-visible definitions; we'll call this the post-if-body
//! snapshot.
//!
//! Now we need to visit the `else` clause. The conditions when entering the `else` clause should
//! be the pre-if conditions; if we are entering the `else` clause, we know that the `if` test
//! failed and we didn't execute the `if` body. So we first reset the builder to the pre-if state,
//! using the snapshot we took previously (meaning we now have `x = 2` as the sole visible
//! definition for `x` again), then visit the `else` clause, where `x = 4` replaces `x = 2` as the
//! sole visible definition of `x`.
//!
//! Now we reach the end of the if/else, and want to visit the following code. The state here needs
//! to reflect that we might have gone through the `if` branch, or we might have gone through the
//! `else` branch, and we don't know which. So we need to "merge" our current builder state
//! (reflecting the end-of-else state, with `x = 4` as the only visible definition) with our
//! post-if-body snapshot (which has `x = 3` as the only visible definition). The result of this
//! merge is that we now have two visible definitions of `x`: `x = 3` and `x = 4`.
//!
//! The [`UseDefMapBuilder`] itself just exposes methods for taking a snapshot, resetting to a
//! snapshot, and merging a snapshot into the current state. The logic using these methods lives in
//! [`SemanticIndexBuilder`](crate::semantic_index::builder::SemanticIndexBuilder), e.g. where it
//! visits a `StmtIf` node.
//!
//! (In the future we may have some other questions we want to answer as well, such as "is this
//! definition used?", which will require tracking a bit more info in our map, e.g. a "used" bit
//! for each [`Definition`] which is flipped to true when we record that definition for a use.)
use crate::semantic_index::ast_ids::ScopedUseId;
use crate::semantic_index::definition::Definition;
use crate::semantic_index::symbol::ScopedSymbolId;
use ruff_index::IndexVec;
use std::ops::Range;
/// All definitions that can reach a given use of a name.
#[derive(Debug, PartialEq, Eq)]
pub(crate) struct UseDefMap<'db> {
// TODO store constraints with definitions for type narrowing
/// Definition IDs array for `definitions_by_use` and `public_definitions` to slice into.
all_definitions: Vec<Definition<'db>>,
/// Definitions that can reach a [`ScopedUseId`].
definitions_by_use: IndexVec<ScopedUseId, Definitions>,
/// Definitions of each symbol visible at end of scope.
public_definitions: IndexVec<ScopedSymbolId, Definitions>,
}
impl<'db> UseDefMap<'db> {
pub(crate) fn use_definitions(&self, use_id: ScopedUseId) -> &[Definition<'db>] {
&self.all_definitions[self.definitions_by_use[use_id].definitions_range.clone()]
}
pub(crate) fn use_may_be_unbound(&self, use_id: ScopedUseId) -> bool {
self.definitions_by_use[use_id].may_be_unbound
}
pub(crate) fn public_definitions(&self, symbol: ScopedSymbolId) -> &[Definition<'db>] {
&self.all_definitions[self.public_definitions[symbol].definitions_range.clone()]
}
pub(crate) fn public_may_be_unbound(&self, symbol: ScopedSymbolId) -> bool {
self.public_definitions[symbol].may_be_unbound
}
}
/// Definitions visible for a symbol at a particular use (or end-of-scope).
#[derive(Clone, Debug, PartialEq, Eq)]
struct Definitions {
/// [`Range`] in `all_definitions` of the visible definition IDs.
definitions_range: Range<usize>,
/// Is the symbol possibly unbound at this point?
may_be_unbound: bool,
}
impl Definitions {
/// The default state of a symbol is "no definitions, may be unbound", aka definitely-unbound.
fn unbound() -> Self {
Self {
definitions_range: Range::default(),
may_be_unbound: true,
}
}
}
impl Default for Definitions {
fn default() -> Self {
Definitions::unbound()
}
}
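
To make the flattened representation concrete, a minimal self-contained sketch with plain `usize` ids in place of `Definition`s: each use holds only a `Range` into one shared vector, so slicing recovers its visible definitions without any per-use allocation.

```rust
use std::ops::Range;

fn main() {
    // Stand-in for `all_definitions`: definition ids, appended in order.
    let all_definitions: Vec<usize> = vec![10, 20, 30, 40];
    // Stand-ins for two Definitions values: a Range each, no nested Vecs.
    let use_a: Range<usize> = 1..2; // sees definition 20
    let use_b: Range<usize> = 2..4; // sees definitions 30 and 40
    assert_eq!(&all_definitions[use_a], &[20]);
    assert_eq!(&all_definitions[use_b], &[30, 40]);
}
```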
/// A snapshot of the visible definitions for each symbol at a particular point in control flow.
#[derive(Clone, Debug)]
pub(super) struct FlowSnapshot {
definitions_by_symbol: IndexVec<ScopedSymbolId, Definitions>,
}
pub(super) struct UseDefMapBuilder<'db> {
/// Definition IDs array for `definitions_by_use` and `definitions_by_symbol` to slice into.
all_definitions: Vec<Definition<'db>>,
/// Visible definitions at each so-far-recorded use.
definitions_by_use: IndexVec<ScopedUseId, Definitions>,
/// Currently visible definitions for each symbol.
definitions_by_symbol: IndexVec<ScopedSymbolId, Definitions>,
}
impl<'db> UseDefMapBuilder<'db> {
pub(super) fn new() -> Self {
Self {
all_definitions: Vec::new(),
definitions_by_use: IndexVec::new(),
definitions_by_symbol: IndexVec::new(),
}
}
pub(super) fn add_symbol(&mut self, symbol: ScopedSymbolId) {
let new_symbol = self.definitions_by_symbol.push(Definitions::unbound());
debug_assert_eq!(symbol, new_symbol);
}
pub(super) fn record_definition(
&mut self,
symbol: ScopedSymbolId,
definition: Definition<'db>,
) {
// We have a new definition of a symbol; this replaces any previous definitions in this
// path.
let def_idx = self.all_definitions.len();
self.all_definitions.push(definition);
self.definitions_by_symbol[symbol] = Definitions {
#[allow(clippy::range_plus_one)]
definitions_range: def_idx..(def_idx + 1),
may_be_unbound: false,
};
}
pub(super) fn record_use(&mut self, symbol: ScopedSymbolId, use_id: ScopedUseId) {
// We have a use of a symbol; clone the currently visible definitions for that symbol, and
// record them as the visible definitions for this use.
let new_use = self
.definitions_by_use
.push(self.definitions_by_symbol[symbol].clone());
debug_assert_eq!(use_id, new_use);
}
/// Take a snapshot of the current visible-symbols state.
pub(super) fn snapshot(&self) -> FlowSnapshot {
FlowSnapshot {
definitions_by_symbol: self.definitions_by_symbol.clone(),
}
}
/// Restore the current builder visible-definitions state to the given snapshot.
pub(super) fn restore(&mut self, snapshot: FlowSnapshot) {
// We never remove symbols from `definitions_by_symbol` (it's an IndexVec, and the symbol
// IDs must line up), so the current number of known symbols must always be equal to or
// greater than the number of known symbols in a previously-taken snapshot.
let num_symbols = self.definitions_by_symbol.len();
debug_assert!(num_symbols >= snapshot.definitions_by_symbol.len());
// Restore the current visible-definitions state to the given snapshot.
self.definitions_by_symbol = snapshot.definitions_by_symbol;
// If the snapshot we are restoring is missing some symbols we've recorded since, we need
// to fill them in so the symbol IDs continue to line up. Since they don't exist in the
// snapshot, the correct state to fill them in with is "unbound", the default.
self.definitions_by_symbol
.resize(num_symbols, Definitions::unbound());
}
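
The resize step can be seen in isolation with a toy state of one may-be-unbound flag per symbol: restoring an older snapshot yields fewer symbols than are now known, so the symbols recorded since are re-filled with the unbound default to keep ids aligned.

```rust
fn main() {
    // Toy state: one "may be unbound" flag per symbol, indexed by symbol id.
    let snapshot = vec![false, false]; // two symbols known at snapshot time
    let mut current = vec![false, false, false]; // a third symbol added since
    let num_symbols = current.len();
    current = snapshot; // restore the older state...
    current.resize(num_symbols, true); // ...and refill newer symbols as unbound
    assert_eq!(current, vec![false, false, true]);
}
```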
/// Merge the given snapshot into the current state, reflecting that we might have taken either
/// path to get here. The new visible-definitions state for each symbol should include
/// definitions from both the prior state and the snapshot.
pub(super) fn merge(&mut self, snapshot: &FlowSnapshot) {
// The tricky thing about merging two Ranges pointing into `all_definitions` is that if the
// two Ranges aren't already adjacent in `all_definitions`, we will have to copy at least
// one or the other of the ranges to the end of `all_definitions` so as to make them
// adjacent. We can't ever move things around in `all_definitions` because previously
// recorded uses may still have ranges pointing to any part of it; all we can do is append.
// It's possible we may end up with some old entries in `all_definitions` that nobody is
// pointing to, but that's OK.
// We never remove symbols from `definitions_by_symbol` (it's an IndexVec, and the symbol
// IDs must line up), so the current number of known symbols must always be equal to or
// greater than the number of known symbols in a previously-taken snapshot.
debug_assert!(self.definitions_by_symbol.len() >= snapshot.definitions_by_symbol.len());
for (symbol_id, current) in self.definitions_by_symbol.iter_mut_enumerated() {
let Some(snapshot) = snapshot.definitions_by_symbol.get(symbol_id) else {
// Symbol not present in snapshot, so it's unbound from that path.
current.may_be_unbound = true;
continue;
};
// If the symbol can be unbound in either predecessor, it can be unbound post-merge.
current.may_be_unbound |= snapshot.may_be_unbound;
// Merge the definition ranges.
let current = &mut current.definitions_range;
let snapshot = &snapshot.definitions_range;
// We never create reversed ranges.
debug_assert!(current.end >= current.start);
debug_assert!(snapshot.end >= snapshot.start);
if current == snapshot {
// Ranges already identical, nothing to do.
} else if snapshot.is_empty() {
// Merging from an empty range; nothing to do.
} else if (*current).is_empty() {
// Merging to an empty range; just use the incoming range.
*current = snapshot.clone();
} else if snapshot.end >= current.start && snapshot.start <= current.end {
// Ranges are adjacent or overlapping, merge them in-place.
*current = current.start.min(snapshot.start)..current.end.max(snapshot.end);
} else if current.end == self.all_definitions.len() {
// Ranges are not adjacent or overlapping, `current` is at the end of
// `all_definitions`, we need to copy `snapshot` to the end so they are adjacent
// and can be merged into one range.
self.all_definitions.extend_from_within(snapshot.clone());
current.end = self.all_definitions.len();
} else if snapshot.end == self.all_definitions.len() {
// Ranges are not adjacent or overlapping, `snapshot` is at the end of
// `all_definitions`, we need to copy `current` to the end so they are adjacent and
// can be merged into one range.
self.all_definitions.extend_from_within(current.clone());
current.start = snapshot.start;
current.end = self.all_definitions.len();
} else {
// Ranges are not adjacent and neither one is at the end of `all_definitions`, we
// have to copy both to the end so they are adjacent and we can merge them.
let start = self.all_definitions.len();
self.all_definitions.extend_from_within(current.clone());
self.all_definitions.extend_from_within(snapshot.clone());
current.start = start;
current.end = self.all_definitions.len();
}
}
}
pub(super) fn finish(mut self) -> UseDefMap<'db> {
self.all_definitions.shrink_to_fit();
self.definitions_by_symbol.shrink_to_fit();
self.definitions_by_use.shrink_to_fit();
UseDefMap {
all_definitions: self.all_definitions,
definitions_by_use: self.definitions_by_use,
public_definitions: self.definitions_by_symbol,
}
}
}
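
Finally, the append-only merge trick in isolation: when the two ranges are neither overlapping nor at the tail of the shared vector, both are copied to the end with `extend_from_within` so the merged result is a single contiguous range, while the old entries stay put for any earlier uses still pointing at them. A minimal sketch with `u32` ids:

```rust
fn main() {
    // Stand-in for `all_definitions`: append-only, entries never move.
    let mut all: Vec<u32> = vec![1, 2, 3, 4];
    let current = 0..1;  // visible defs on one path: [1]
    let snapshot = 2..3; // visible defs on the other path: [3]
    // Disjoint, and neither ends at all.len(): copy both to the tail.
    let start = all.len();
    all.extend_from_within(current);
    all.extend_from_within(snapshot);
    let merged = start..all.len();
    assert_eq!(&all[merged], &[1, 3]);
    assert_eq!(all, vec![1, 2, 3, 4, 1, 3]); // originals left in place
}
```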
