Compare commits

...

141 Commits

Author SHA1 Message Date
Dhruv Manilawala
a8cf7096ff Bump version to v0.4.8 (#11755)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-06-05 20:51:31 +05:30
Carl Meyer
895eb3ef48 [red-knot] refactor CFG outside of symbol table (#11746) 2024-06-05 06:23:43 -06:00
Dhruv Manilawala
2e0a9755e0 Disallow access to Parsed output, use the API instead (#11741)
## Summary

This PR is a follow-up to #11740 to restrict access to the `Parsed`
output by replacing the `parsed` API function with a more specific one.
Currently, that is `comment_ranges` but the linked PR exposes a `tokens`
method.

The main motivation is so that there's no way to get incorrect
information from the checker. It also encapsulates the source of
the comment ranges and the tokens themselves, making it easier to
update the checker if the source of this information changes in the
future.

## Test Plan

`cargo insta test`
2024-06-05 08:24:19 +00:00
Dhruv Manilawala
b021b5babe Use Tokens from parsed type annotation or parsed source (#11740)
## Summary

This PR fixes a bug where the checker would require the tokens for an
invalid offset w.r.t. the source code.

Taking the source code from the linked issue as an example:
```py
relese_version :"0.0is 64"
```

Now, this isn't really a valid type annotation but that's what this PR
is fixing. Regardless of whether it's valid or not, Ruff shouldn't
panic.

The checker would visit the parsed type annotation (`0.0is 64`) and try
to detect any violations. Certain rule logic requests the tokens for that
range, but this would fail because, for the original source code, the
lexer only produces a single `String` token. This worked before because
the lexer was invoked afresh for each rule.
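
For context, `F632` flags comparisons to literals with `is`/`is not`; a minimal illustration of code that triggers it (not from this PR):

```py
version = "0.0"
if version is "0.0":  # F632: use `==` to compare with a str literal
    ...
```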

The solution is to store the parsed type annotation on the checker if
it's in a typing context and use the tokens from that instead if it's
available. This is enforced by creating a new API on the checker to get
the tokens.

But, this means that there are two ways to get the tokens via the
checker API. I want to restrict this in a follow-up PR (#11741) to only
expose `tokens` and `comment_ranges` as methods and restrict access to
the parsed source code.

fixes: #11736 

## Test Plan

- [x] Add a test case for `F632` rule and update the snapshot
- [x] Check all affected rules
- [x] No ecosystem changes
2024-06-05 07:50:33 +00:00
Dhruv Manilawala
eed6d784df Update type annotation parsing API to return Parsed (#11739)
## Summary

This PR updates the return type of `parse_type_annotation` from `Expr`
to `Parsed<ModExpression>`. This is to allow accessing the tokens for
the parsed sub-expression in the follow-up PR.

## Test Plan

`cargo insta test`
2024-06-05 12:59:43 +05:30
Jane Lewis
8338db6c12 ruff server: Formatting a document with syntax problems no longer spams a visible error popup (#11745)
## Summary

Fixes https://github.com/astral-sh/ruff-vscode/issues/482.

I've made adjustments to `format` and `format_range` that handle parsing
errors before they become server errors. We'll still log this as a
problem, but there will no longer be a visible popup.

## Test Plan

Instead of seeing a visible error when formatting a document with syntax
issues, you should see this warning in the LSP logs:

<img width="991" alt="Screenshot 2024-06-04 at 3 38 23 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/9d68947d-6462-4ca6-ab5a-65e573c91db6">

Similarly, if you try to format a range with syntax issues, you should
see this warning in the LSP logs instead of a visible error popup:

<img width="1010" alt="Screenshot 2024-06-04 at 3 39 10 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/99fff098-798d-406a-976e-81ead0da0352">

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-06-04 17:18:21 -07:00
Carl Meyer
d056d09547 [red-knot] add if-statement support to FlowGraph (#11673)
## Summary

Add if-statement support to FlowGraph. This introduces branches and
joins in the graph for the first time.
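
Illustratively, the kind of control flow this now models (a sketch, not from the PR):

```py
def f(flag: bool) -> int:
    x = 1
    if flag:
        x = 2
    # branch + join: both definitions of `x` are reachable here
    return x
```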

## Test Plan

Added tests.
2024-06-04 15:09:39 -06:00
Mateusz Sokół
1645be018d Update NPY001 rule for NumPy 2.0 (#11735)
Hi!

This PR addresses https://github.com/astral-sh/ruff/issues/11093.

It skips `np.bool` and `np.long` replacements as both of these names
were reintroduced in NumPy 2.0 with a different meaning
(https://github.com/numpy/numpy/pull/24922,
https://github.com/numpy/numpy/pull/25080).
With this change `NPY001` will no longer conflict with `NPY201`. For
projects using NumPy 1.x, `np.bool` and `np.long` were deprecated and
removed long ago, and accessing them yields an informative error
message.
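
A sketch of the resulting behavior (my reading of the change):

```py
import numpy as np

np.bool   # no longer flagged by NPY001: reintroduced (with a new meaning) in NumPy 2.0
np.float  # still flagged: removed from NumPy, with `float` as the replacement
```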
2024-06-04 19:23:42 +00:00
Michael Oultram
2c865023ac CI: add job to run tests under minimum supported rust version (msrv) (#11737)
## Summary

This change adds a GitHub Actions CI job to check that the project
builds and tests pass under the declared minimum supported Rust compiler.
I have bumped the MSRV to 1.74, as that is the lowest version I could get
this project to build on.

## Test Plan

The CI job has run on this PR, and will also run on the main branch.
2024-06-04 15:14:50 -04:00
Dhruv Manilawala
2567e14b7a Lexer should consider BOM for the start offset (#11732)
## Summary

This PR fixes a bug where the lexer didn't account for the BOM in the
start offset.
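
For illustration (a UTF-8 BOM is three bytes when encoded):

```py
source = "\ufeffx = 1"
# the first token (`x`) should start at byte offset 3 (after the BOM), not 0
```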

fixes: #11731

## Test Plan

Add multiple test cases which involve a BOM character in the source for
the lexer and verify the snapshots.
2024-06-04 08:45:46 +00:00
Dhruv Manilawala
3b19df04d7 Use cursor offset for lexer checkpoint (#11734)
## Summary

This PR updates the lexer checkpoint to store the cursor offset instead
of cloning the cursor itself. This reduces the size of `LexerCheckpoint`
from 136 to 112 bytes and also removes the need for a lifetime parameter.
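
A minimal Python sketch of the idea (hypothetical names, not the actual Rust API):

```py
class Lexer:
    def __init__(self, source: str) -> None:
        self.source = source
        self.offset = 0

    def checkpoint(self) -> int:
        # Cheap: remember only the cursor offset instead of cloning the cursor.
        return self.offset

    def rewind(self, checkpoint: int) -> None:
        # Restore the lexer to the previously saved position.
        self.offset = checkpoint
```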

## Test Plan

`cargo insta test`
2024-06-04 14:13:57 +05:30
Micha Reiser
6ffb96171a red-knot: Change resolve_global_symbol to take Module as an argument (#11723) 2024-06-04 06:20:50 +00:00
Micha Reiser
64165bee43 red-knot: Use parse_unchecked to get all parse errors (#11725) 2024-06-04 06:04:48 +00:00
Charlie Marsh
0c75548146 Respect per-file ignores for blanket and redirected noqa rules (#11728)
## Summary

Ensures that we respect per-file ignores and exemptions for these rules.
Specifically, we allow:

```python
# ruff: noqa: PGH004
```

...to ignore `PGH004`.
2024-06-04 03:57:59 +00:00
Alex
b56a577f25 [pygrep_hooks] Check blanket ignores via file-level pragmas (PGH004) (#11540)
## Summary

Should resolve https://github.com/astral-sh/ruff/issues/11454.

This is my first PR to `ruff`, so I may have missed something.

If I understood the suggestion in the issue correctly, rule `PGH004`
should be set to `Preview` again.
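
Illustratively (my reading of the rule):

```py
# A blanket file-level suppression is flagged by PGH004:
# ruff: noqa

# A specific code is fine:
# ruff: noqa: F401
```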

## Test Plan

Created two fixtures derived from the issue.
2024-06-04 03:42:58 +00:00
Tushar Sadhwani
e1133a24ed [flake8-pyi] Implement PYI063 (#11699)
## Summary
Implements `Y063` from `flake8-pyi`.
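
For context, `Y063` concerns the old double-underscore convention for positional-only parameters in stubs; an illustrative case (assumed, not from the PR):

```pyi
def f(__x: int) -> None: ...   # PYI063: use PEP 570 syntax instead
def g(x: int, /) -> None: ...  # ok
```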

## Test Plan
`cargo test` / `cargo insta review`
2024-06-04 03:15:04 +00:00
Charlie Marsh
2f8ac1e9b3 Fix red-knot compilation (#11727)
## Summary

Perhaps a result of a bad rebase, but `cargo clippy --fix --workspace
--all-targets -- -D warnings` does not pass on main as-is.
2024-06-04 03:03:38 +00:00
Carl Meyer
3fb2028506 [red-knot] extract helper functions in inference tests (#11671)
There's a lot of repeated boilerplate in the type inference tests; this
cuts it down a lot.
2024-06-03 17:46:04 -06:00
Carl Meyer
3f9ee31efb [red-knot] use reachable definitions in infer_expression_type (#11670)
## Summary

Switch name resolution in `infer_expression_type` from resolving the
public type of a symbol, to resolving the reachable definitions of that
symbol from the reference point, using the flow graph.

This surfaced a bug in the flow graph implementation and a bug in symbol
table building, both of which are also fixed here.

The bug in the flow graph implementation was that when we pushed and popped
scopes, we didn't maintain a stack of "current flow nodes" in all
stacked scopes, to be restored when we returned to that scope. Now we
do.

The bug in symbol table building was that we didn't visit the parts of
function and class definitions in the correct scopes. E.g. decorators
should be visited in the outer scope, arguments should be visited inside
the type-params scope (if any) but not inside the function body scope,
and only the body itself should actually be visited inside the body
scope. Fixing this requires that we no longer use `walk_stmt` here;
instead, we have to visit each individual component.
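
Illustratively (a Python 3.12 sketch of the scoping described above):

```py
def decorator(func):
    return func

default = 0

@decorator                      # visited in the enclosing scope
def f[T](x: T = default) -> T:  # parameters visited in the type-params scope
    return x                    # only the body is visited in the body scope
```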

## Test Plan

Added test.
2024-06-03 17:45:31 -06:00
Carl Meyer
b02d3f3fd9 [red-knot] infer_symbol_public_type infers union of all definitions (#11669)
## Summary

Rename `infer_symbol_type` to `infer_symbol_public_type`, and allow it
to work on symbols with more than one definition. For now, use the most
cautious/sound inference, which is the union of all definitions. We can
prune this union more in future by eliminating definitions if we can
show that they can't be visible (this requires both that the symbol is
definitely later reassigned, and that there is no intervening
call/import that might be able to see the over-written definition).
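
For example (illustrative):

```py
x = 1
x = "a"
# the public type of `x` is the union of all definitions,
# roughly `Literal[1] | Literal["a"]`
```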

## Test Plan

Added a test showing inference of union from multiple definitions.
2024-06-03 17:27:06 -06:00
Dhruv Manilawala
2b28889ca9 Isolate non-breaking whitespace indentation test case (#11721)
As discussed in Discord, this moves the test case for non-breaking
whitespace into its own method.
2024-06-03 13:20:55 +00:00
Dhruv Manilawala
8db147c09d Generator should add a newline before type statement (#11720)
## Summary

This PR fixes a bug where the `Generator` wouldn't add a newline before
a type alias statement. This is because it wasn't using the `statement`
macro which takes care of the newline.

Without this fix, code like:
```py
type X = int
type Y = str
```

The generator would produce:
```py
type X = inttype Y = str
```

## Test Plan

Add a test case.
2024-06-03 18:44:21 +05:30
Dhruv Manilawala
a58bde6958 Remove less used parser dependencies (#11718)
## Summary

This PR removes the following dependencies from the `ruff_python_parser`
crate:
* `anyhow` (moved to dev dependencies)
* `is-macro`
* `itertools`

The main motivation is that they aren't used much.

Additionally, it updates the return type of `parse_type_annotation` to
use a more specific `ParseError` instead of the generic `anyhow::Error`.

## Test Plan

`cargo insta test`
2024-06-03 13:08:24 +00:00
Dhruv Manilawala
f4e23d2dff Use string expression for parsing type annotation (#11717)
## Summary

This PR updates the logic for parsing type annotations to accept an
`ExprStringLiteral` node instead of the string value and the range.

The main motivation of this change is to simplify the implementation of
the `parse_type_annotation` function by:
* Using the `opener_len` and `closer_len` from the string flags to get the
raw contents range instead of extracting it via
	* `str::leading_quote(expression).unwrap().text_len()`
	* `str::trailing_quote(expression).unwrap().text_len()`
* Avoiding comparing the string content if we already know that it's
implicitly concatenated

## Test Plan

`cargo insta test`
2024-06-03 13:04:03 +00:00
Dhruv Manilawala
4a155e2b22 Re-order lexer methods (#11716)
## Summary

This PR re-orders the lexer methods in the following order:

1. `next_token`
2. `lex_token`
3. `eat_indentation`
4. `handle_indentation`
5. `skip_whitespace`
6. `consume_ascii_character`
7. `try_single_char_prefix`
8. `try_double_char_prefix`
9. `lex_identifier`
10. `lex_fstring_start`
11. `lex_fstring_middle_or_end`
12. `lex_string`
13. `lex_number`
14. `lex_number_radix`
15. `lex_decimal_number`
16. `radix_run`
17. `lex_comment`
18. `lex_ipython_escape_command`
19. `consume_end`

Following was considered for the ordering:
* 1 is the main entry point which delegates to 2
* 3, 4, 5 are all related to whitespace which is done first
* 6 is the entry point for an ASCII character which delegates to 9, 12,
13, 17, 18, 19
* Others are grouped around similar kind of methods
2024-06-03 12:58:35 +00:00
Dhruv Manilawala
bf5b62edac Maintain synchronicity between the lexer and the parser (#11457)
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.

The main motivation for making the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful for making the
parser error resilient. For example, currently the emitted tokens
remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in a
non-parenthesized context. These different kinds of newlines are also used
to emit the indentation tokens, which are important for the parser as
they're used to determine the start and end of a block.
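
For example (illustrative):

```py
x = (
    1,   # this newline is a `NonLogicalNewline` (parenthesized context)
)
y = 2    # the newline ending this statement is a logical `Newline`
```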

Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.

This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which only contains the subset of token kinds that carry
an owned value. It is stored on the lexer and requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.

* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple-statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context, it can perform a
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For the `match` soft keyword, there are certain
cases that we can't classify based on lookahead alone. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
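
Illustrative cases (not from the PR):

```py
match = 1        # simple-statement position: `match` is an identifier

x = 0
match (x):       # lookahead alone can't distinguish a call from a match
    case _: ...  # subject here; speculative parsing decides
```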

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason is that it's not
sensitive to the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser are working together, the API
corresponding to (2) cannot exist. The final API is mentioned in this
PR description: https://github.com/astral-sh/ruff/pull/11494.

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

And, the final set of changes involves updating all references of the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), various strategies were used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens. These include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function,
where the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is only limited to
any string type token but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for `tokens_and_ranges` function.

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even when the source
code contains syntax errors.
2024-06-03 18:23:50 +05:30
renovate[bot]
c69a789aa5 Update NPM Development dependencies (#11713) 2024-06-03 01:59:07 +00:00
renovate[bot]
140c408a92 Update pre-commit dependencies (#11712) 2024-06-02 21:51:42 -04:00
renovate[bot]
27085a93d9 Update cloudflare/wrangler-action action to v3.6.1 (#11709) 2024-06-02 21:51:27 -04:00
renovate[bot]
a9b6c4f269 Update dependency monaco-editor to ^0.49.0 (#11710) 2024-06-02 21:51:23 -04:00
renovate[bot]
ded010cf9c Update Rust crate tracing-tree to v0.3.1 (#11703) 2024-06-02 21:51:13 -04:00
renovate[bot]
436dc18b15 Update Rust crate libcst to v1.4.0 (#11707) 2024-06-03 01:05:32 +00:00
renovate[bot]
9599bd7622 Update Rust crate itertools to 0.13.0 (#11706) 2024-06-03 01:05:17 +00:00
renovate[bot]
ec3f523924 Update Rust crate insta to v1.39.0 (#11705) 2024-06-03 01:04:26 +00:00
renovate[bot]
010434015e Update Rust crate proc-macro2 to v1.0.85 (#11700) 2024-06-03 01:03:31 +00:00
renovate[bot]
25131da2c3 Update Rust crate toml to v0.8.13 (#11702) 2024-06-02 21:03:09 -04:00
renovate[bot]
712783825d Update Rust crate strum_macros to v0.26.3 (#11701) 2024-06-02 21:03:03 -04:00
Alex Waygood
94a3c53841 Update UP035 for Python 3.13 and the latest version of typing_extensions (#11693) 2024-06-02 22:59:48 +01:00
Tobias Fischer
0ea2519e80 Add RDJson support. (#11682)
## Summary

Implement support for RDJson output for `ruff check`, as requested in
#8655.

## Test Plan

Tested using a snapshot test. Same approach as for e.g. the JSON output
formatter.

## Additional info

I tried to keep the implementation close to the JSON implementation.

I had to deviate a bit to make the `suggestions` key work: If there are
no suggestions, then setting `suggestions` to `null` is invalid
according to the JSONSchema. Therefore, I opted for a slightly more
complex implementation, that skips the `suggestions` key entirely if
there are no fixes available for the given diagnostic. Maybe it would
have been easier to set `"suggestions": []`, but I ended up doing it
this way.

I didn't consider notebooks, as I _think_ that RDJson doesn't work with
notebooks. This should be confirmed, and if so, there should be some
form of warning or error emitted when trying to output diagnostics for a
notebook.

I also didn't consider `ruff format`, as this comment:
https://github.com/astral-sh/ruff/issues/8655#issuecomment-1811446160
suggests that that wouldn't be compatible.

I'm new to Rust, so any feedback is appreciated. 🙂 I
implemented this in order to have a productive rainy Saturday afternoon;
I'm not knowledgeable about RDJson beyond the sources linked in the
issue.
2024-06-02 17:59:57 +00:00
Charlie Marsh
6d79ddc0aa [pyupgrade] Write empty string in lieu of panic (#11696)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11692.
2024-06-02 17:51:03 +00:00
Alex Waygood
9f3e609278 Make tests aware that py313 is the latest supported Python version (#11690) 2024-06-02 13:06:04 +00:00
Charlie Marsh
b36dd1aa51 [flake8-simplify] Simplify double negatives in SIM103 (#11684)
## Summary

Closes: https://github.com/astral-sh/ruff/issues/11685.
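
An illustrative instance of the double-negative case (my reading of the change):

```py
def f(x):
    if not x:
        return False
    return True
# the suggested rewrite is now `return bool(x)` rather than
# a double negative like `return not (not x)`
```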
2024-06-01 23:21:11 +00:00
Charlie Marsh
fd9d68051e Update CHANGELOG.md (#11683) 2024-06-01 18:08:02 +00:00
github-actions[bot]
99834ee93d Sync vendored typeshed stubs (#11668)
Close and reopen this PR to trigger CI

Co-authored-by: typeshedbot <>
2024-05-31 22:26:20 -06:00
Charlie Marsh
b80bf22c4d Omit red-knot PRs from the changelog (#11666)
## Summary

This just ensures that PRs labelled with `red-knot` are automatically
filtered out from the auto-generated changelog (which we then manually
finalize anyway).
2024-05-31 19:18:53 -04:00
Tobias Fischer
312f6640b8 [flake8-bugbear] Implement return-in-generator (B901) (#11644)
## Summary

This PR implements the rule B901, which is part of the opinionated rules
of `flake8-bugbear`.

This rule seems to be desired in `ruff` as per
https://github.com/astral-sh/ruff/issues/3758 and
https://github.com/astral-sh/ruff/issues/2954#issuecomment-1441162976.
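
For context, the rule targets `return` with a value inside a generator, where the value is attached to `StopIteration` instead of being yielded (illustrative):

```py
def gen(items):
    if not items:
        return []  # B901: the returned list is silently discarded by most callers
    for item in items:
        yield item

print(list(gen([])))  # prints [], not [[]]
```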

## Test Plan

As this PR was made closely following the
[CONTRIBUTING.md](8a25531a71/CONTRIBUTING.md),
it tests using the snapshot approach that is described there.

## Sources

The implementation is inspired by [the original implementation in the
`flake8-bugbear`
repository](d1aec4cbef/bugbear.py (L1092)).
The error message and [test
file](d1aec4cbef/tests/b901.py)
were also copied from there.

I came up with the documentation on my own, and it needs improvement. Maybe
the example given in
https://github.com/astral-sh/ruff/issues/2954#issuecomment-1441162976
could be used, but maybe they are too complex, I'm not sure.

## Open Questions

- [ ] Documentation. (See above.)

- [x] Can I access the parent in a visitor?

The [original
implementation](d1aec4cbef/bugbear.py (L1100))
references the `yield` statement's parent to check if it is an
expression statement. I didn't find a way to do this in `ruff` and used
the `is_expresssion_statement` field on the visitor instead. What are
your thoughts on this? Is it possible and / or desired to access the
parent node here?

- [x] Is `Option::is_some(...)` -> `...unwrap()` the right thing to do?

Referring to [this piece of
code](9d5a280f71/crates/ruff_linter/src/rules/flake8_bugbear/rules/return_x_in_generator.rs (L91-L96)).
From my understanding, the `.unwrap()` is safe, because it is checked
that `return_` is not `None`. However, I feel like I missed a more
elegant solution that does both in one.

## Other

I don't know a lot about this rule, I just implemented it because I
found it in a
https://github.com/astral-sh/ruff/labels/good%20first%20issue.

I'm new to Rust, so any constructive criticism is appreciated.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-05-31 21:48:36 +00:00
Charlie Marsh
91a5fdee7a Use find in indent detection (#11650) 2024-05-31 20:35:19 +00:00
Charlie Marsh
1ad5f9c038 Bump version to v0.4.7 (#11646) 2024-05-31 16:30:36 -04:00
plredmond
e914bc300b F401 sort bindings before adding to __all__ (#11648)
Sort the binding IDs before passing them to the add-to-`__all__`
function to address #11619.
2024-05-31 20:29:08 +00:00
Carl Meyer
27f6f048f0 [red-knot] initial (very incomplete) flow graph (#11624)
## Summary

Introduces the skeleton of the flow graph. So far it doesn't actually
handle any non-linear control flow :) But it does show how we can go
from an expression that references a symbol, backward through the flow
graph, to find reachable definitions of that symbol.

Adding non-linear control flow will mean adding flow nodes with multiple
predecessors, which will introduce more complexity into
`ReachableDefinitionsIterator.next()`. But one step at a time.
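
A sketch of that backward walk over linear flow (illustrative):

```py
x = 1
x = 2
print(x)  # walking backward from this reference through the flow graph,
          # the first reachable definition found is `x = 2`
```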

## Test Plan

Added a (very basic) test.
2024-05-31 14:27:17 -06:00
Alex Waygood
d62a617938 red-knot: Don't refer to Module instances as IDs (#11649) 2024-05-31 20:04:47 +00:00
Carl Meyer
16a926d138 [red-knot] infer int literal types (#11623)
## Summary

Give red-knot the ability to infer int literal types. This is quick and
easy, mostly because these types are a convenient way to observe
control-flow handling with simple assignments.
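
For example (illustrative):

```py
x = 42
# red-knot now infers an int literal type for `x`, roughly `Literal[42]`
```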

## Test Plan

Added test.
2024-05-31 13:52:29 -06:00
Jakub Marcowski
05566c6075 Update Who's Using Ruff? section to include Godot (#11647)
## Summary

- Ever since https://github.com/godotengine/godot/pull/90457 was merged
into the `master` branch, Godot has been using ruff for linting and
formatting Python files. As such, this PR adds Godot to the "Who's Using
Ruff?" section of the main `README.md` file.

## Test Plan

- N/A
2024-05-31 15:33:39 -04:00
JaRoSchm
7ce17b7736 Add Vim and Kate setup guide for ruff server (#11615)
## Summary

In the [roadmap for `ruff
server`](https://github.com/astral-sh/ruff/discussions/10581), support
for Vim and Kate is listed. Therefore, I added setup guides for them
based on the Neovim guide. As I don't use Pyright, I wasn't able to
translate the corresponding part of the Neovim guide.

## Test Plan

Doesn't apply.
2024-05-31 19:06:55 +00:00
Charlie Marsh
f9a64503c8 Use char index rather than position for indent slice (#11645)
## Summary

A beginner's mistake :)

Closes https://github.com/astral-sh/ruff/issues/11641.
2024-05-31 19:04:36 +00:00
Alex Waygood
8a25531a71 red-knot: improve internal documentation in module.rs (#11638) 2024-05-31 16:11:18 +00:00
Micha Reiser
9b6d2ce1f2 Fix incorrect placement of trailing stub function comments (#11632) 2024-05-31 12:06:17 +00:00
Carl Meyer
889667ad84 [red-knot] Update CODEOWNERS (#11625)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-31 06:47:53 +00:00
T-256
5b500fc4dc ruff server: Add support for documents that don't exist on disk (#11588)
Co-authored-by: T-256 <Tester@test.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-31 08:34:10 +02:00
Charlie Marsh
685d11a909 Mark repeated-isinstance-calls as unsafe on Python 3.10 and later (#11622)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11616.
2024-05-30 18:05:24 +00:00
plredmond
dcabd04caf F401 use BTreeMap instead of FxHashMap (#11621)
* Potentially resolves #11619 (nondeterministic hashmap order across
different architectures) in F401 by replacing a hashmap that has
nondeterministic traversal order with an ordered mapping.

I'm not sure how to test this with our CI/CD. I don't have an s390x
machine at home. Should I try it in Qemu?
2024-05-30 10:54:46 -07:00
Charlie Marsh
3aa7e35a4c Avoid removing newlines between docstring headers and rST blocks (#11609)
Given:

```python
def func():
    """
    Example:

    .. code-block:: python

        import foo
    """
```

Removing the newline after the `Example:` header breaks Sphinx
rendering.

See: https://github.com/astral-sh/ruff/issues/11577
2024-05-30 13:29:20 -04:00
Micha Reiser
b0a751012e Document bump to win 10 (#11613) 2024-05-30 07:49:38 +00:00
Charlie Marsh
bd46cd1fcf Infer indentation with imports when logical indent is absent (#11608)
## Summary

In an `__init__.py` file, it's not uncommon to lack a logical indent
(since it may just contain imports). In such cases, we were always
falling back to four-space indent. This PR adds detection for indents
within import groups.
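
Illustratively, an `__init__.py` like this now yields a two-space indent instead of the four-space fallback (assumed behavior):

```py
from os import (
  path,
  sep,
)
```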

Closes https://github.com/astral-sh/ruff/issues/11606.
2024-05-30 00:18:07 -04:00
Charlie Marsh
a8d1328c1a [flake8-comprehension] Strip parentheses around generators in C400 (#11607)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11603.
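
An illustrative case (assumed):

```py
list((x for x in range(3)))
# the C400 fix now produces `[x for x in range(3)]`
# rather than `[(x for x in range(3))]`
```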
2024-05-30 03:26:56 +00:00
Christoph Hasse
e35deee583 fix(F822): add option to enable F822 in __init__.py files (#11370)
## Summary

This PR aims to close #10095 by adding an option
`init-allow-undef-export` to the `pyflakes` settings. This option is
currently set to `true` such that behavior is kept identical.
But setting this option to `false` will lead to `F822` warnings being
shown in all files, **including** `__init__.py` files.
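
For reference, an `__init__.py` that would then trigger the warning (illustrative):

```py
# __init__.py
__all__ = ["helper"]  # F822: `helper` is not defined in this module
```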

As I've mentioned on #10095, I think `init-allow-undef-export=false`
would be the more user-friendly default option, as it creates fewer
surprises. @charliermarsh what do you think about making that the
default?

With this option in place, it's a single line fix for people that rely
on the old behavior.

And thinking longer term, for future major releases, one could probably
consider deprecating the option and eventually having people just `noqa`
these warnings if they are not wanted.


## Test Plan

I've added a `test_init_f822_enabled` test which repeats the test that
is done in the `init` test but this time with
`init-allow-undef-export=false` and the snap file correctly shows that
ruff will then trigger the otherwise suppressed F822 warning.


closes #10095
2024-05-30 03:15:05 +00:00
Micha Reiser
921bc15542 use owned ast and tokens in bench (#11598) 2024-05-29 18:10:32 +02:00
Vitaliy
e14096f0a8 docs: Minor formatting typo in F401 example. (#11601)
## Summary

Removed stray space in sample code snippet that is against ruff's own
default formatting rules.

This documentation appears on
https://docs.astral.sh/ruff/rules/unused-import/

## Test Plan

This is a trivially obvious change, verifiable with `ruff format
--check`
2024-05-29 11:14:53 -04:00
T-256
5f976cae07 Windows: Statically linked C runtime (#11589)
Co-authored-by: T-256 <Tester@test.com>
2024-05-29 14:00:12 +02:00
Tomas R
7659114eb3 [flake8-pyi] Implement PYI057 (#11486)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-05-29 10:04:36 +00:00
Micha Reiser
163c374242 Reduce extensive use of snapshot.query (#11596) 2024-05-29 10:11:46 +02:00
Charlie Marsh
204c59e353 Respect file exclusions in ruff server (#11590)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11587.

## Test Plan

- Added a lint error to `test_server.py` in `vscode-ruff`.
- Validated that, prior to this change, diagnostics appeared in the
file.
- Validated that, with this change, no diagnostics were shown.
- Validated that, with this change, no diagnostics were fixed on-save.
2024-05-29 02:58:36 +00:00
Tushar Sadhwani
531ae5227c [flake8-pyi] Implement PYI066 (#11541)
## Summary

- Implements `Y066` from `flake8-pyi` as `PYI066` (see the illustrative
stub below)
- Fixes `PYI006` not being raised for `elif` clauses. This would have
conflicted with PYI066's implementation, so I decided to do it in the same
PR.
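
An illustrative stub (assuming `Y066` prefers putting the newer-version branch first):

```pyi
import sys

if sys.version_info < (3, 12):  # PYI066: put the newer-version branch
    def f() -> str: ...         # first, i.e. `if sys.version_info >= (3, 12):`
else:
    def f() -> int: ...
```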

## Test Plan

`cargo test` / `cargo insta review`
2024-05-29 00:30:00 +00:00
Tushar Sadhwani
e0169d8dea [flake8-pyi] Implement PYI064 (#11325)
## Summary

Implements `Y064` from `flake8-pyi` and its autofix.
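
An illustrative case (assuming `Y064` targets redundant `Final[Literal[...]]` annotations):

```pyi
from typing import Final, Literal

x: Final[Literal[42]] = 42  # PYI064; the autofix rewrites this to `x: Final = 42`
```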

## Test Plan

`cargo test` / `cargo insta review`
2024-05-28 23:57:13 +00:00
plredmond
9a3b9f9fb5 [redknot] add module type and attribute lookup for some types (#11416)
* Add a module type, `ModuleTypeId`
* Add an attribute lookup method `get_member` for `Type`
  * Only implemented for `ModuleTypeId` and `ClassTypeId`
  * [x] Should this be a trait?
    *Answer: no*
* [x] Uses `unwrap`, but we should remove that. Maybe add a new variant
to `QueryError`?
    *Answer: Return `Option<Type>` as is done elsewhere*
* Add `infer_definition_type` case for `Import`
* Add `infer_expr_type` case for `Attribute`
* Add a test to exercise these
* [x] remove all NOTE/FIXME/TODO after discussing with reviewers
2024-05-28 13:13:03 -07:00
Charlie Marsh
49a5a9ccc2 Bump version to v0.4.6 (#11585) 2024-05-28 15:10:53 -04:00
Charlie Marsh
69d9212817 Propagate reads on global variables (#11584)
## Summary

This PR ensures that if a variable is bound via `global`, and then the
`global` is read, the originating variable is also marked as read. It's
not perfect, in that it won't detect _rebindings_, like:

```python
from app import redis_connection

def func():
    global redis_connection

    redis_connection = 1
    redis_connection()
```

So, above, `redis_connection` is still marked as unused.

But it does avoid flagging `redis_connection` as unused in:

```python
from app import redis_connection

def func():
    global redis_connection

    redis_connection()
```

Closes https://github.com/astral-sh/ruff/issues/11518.
2024-05-28 14:47:05 -04:00
Akshet Pandey
4a305588e9 [flake8-bandit] request-without-timeout should warn for requests.request (#11548)
## Summary
Update
[S113](https://docs.astral.sh/ruff/rules/request-without-timeout/) to
also warn about a missing timeout when calling `requests.request`.
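
Illustratively:

```py
import requests

requests.request("GET", "https://example.com")              # S113: no timeout
requests.request("GET", "https://example.com", timeout=10)  # ok
```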
2024-05-28 16:31:12 +00:00
Charlie Marsh
16acd4913f Remove some unused pub functions (#11576)
## Summary

I left anything in `red-knot`, any `with_` methods, etc.
2024-05-28 09:56:51 -04:00
Micha Reiser
3989cb8b56 Make ruff_notebook a workspace dependency in ruff_server (#11572) 2024-05-28 09:26:39 +02:00
Charlie Marsh
a38c05bf13 Avoid recommending context manager in __enter__ implementations (#11575)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11567.
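
Illustratively (assumed behavior):

```py
class Resource:
    def __enter__(self):
        # no longer flagged: this *is* the context-manager implementation,
        # so suggesting a `with` block here would be circular
        self._file = open("data.txt")
        return self._file

    def __exit__(self, *exc_info):
        self._file.close()
```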
2024-05-28 01:44:24 +00:00
Charlie Marsh
ab107ef1f3 Avoid recommending operator.itemgetter with dependence on lambda arg (#11574)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11573.
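
Illustratively (assumed behavior):

```py
f = lambda x: x[0]    # still suggested: equivalent to `operator.itemgetter(0)`
g = lambda x: x[x.i]  # no longer suggested: the key depends on the lambda argument
```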
2024-05-28 01:29:29 +00:00
Ahmed Ilyas
b36c713279 Consider irrefutable pattern similar to if .. else for C901 (#11565)
## Summary

Follow up to https://github.com/astral-sh/ruff/pull/11521

Removes the extra added complexity for catch-all `match` cases. This
matches the handling of plain `else` clauses.
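
Illustratively:

```py
def f(x):
    match x:
        case 1:  # counts toward complexity, like an `if`
            ...
        case _:  # irrefutable: counted like a plain `else`
            ...
```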

## Test Plan
Added new test cases.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-05-27 17:33:36 +00:00
Charlie Marsh
34a5063aa2 Respect excludes in ruff server configuration discovery (#11551)
## Summary

Right now, we're discovering configuration files even within (e.g.)
virtual environments, because we're recursing without respecting the
`exclude` field on parent configuration.

Closes https://github.com/astral-sh/ruff-vscode/issues/478.

## Test Plan

Installed Pandas; verified that I saw no warnings:

![Screenshot 2024-05-26 at 8 09
05 PM](https://github.com/astral-sh/ruff/assets/1309177/dcf4115c-d7b3-453b-b7c7-afdd4804d6f5)
2024-05-27 16:59:46 +00:00
Micha Reiser
adc0a5d126 Rename document module to text_document (#11571) 2024-05-27 18:32:21 +02:00
Dhruv Manilawala
e28e737296 Update FStringElements to deref to a slice (#11570)
Ref: https://github.com/astral-sh/ruff/pull/11400#discussion_r1615600354
2024-05-27 15:52:13 +00:00
Dhruv Manilawala
37ad994318 Use default settings if initialization options is empty or not provided (#11566)
## Summary

This PR fixes the bug to avoid flattening the global-only settings for
the new server.

This was added in https://github.com/astral-sh/ruff/pull/11497, possibly
to correctly de-serialize an empty value (`{}`). But this led to a bug
where the configuration under the `settings` key was not being read for
the global-only variant.

By using `#[serde(default)]`, we ensure that the settings field in the
`GlobalOnly` variant is optional and that an empty JSON object `{}` is
correctly deserialized into `GlobalOnly` with a default `ClientSettings`
instance.

fixes: #11507 

## Test Plan

Update the snapshot and existing test case. Also, verify the following
settings in Neovim:

1. Nothing

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
}
```

2. Empty dictionary

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
  init_options = vim.empty_dict(),
}
```

3. Empty `settings`

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
  init_options = {
    settings = vim.empty_dict(),
  },
}
```

4. With some configuration:

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
  init_options = {
    settings = {
      configuration = '/tmp/ruff-repro/pyproject.toml',
    },
  },
}
```
2024-05-27 21:06:34 +05:30
Alex Waygood
246a3388ee Implement a common trait for the string flags (#11564) 2024-05-27 16:02:01 +01:00
Evan Kohilas
6be00d5775 Adds recommended extension settings for vscode (#11519) 2024-05-27 13:04:32 +02:00
Dhruv Manilawala
9200dfc79f Remove empty strings when converting to f-string (UP032) (#11524)
## Summary

This PR brings back the functionality to remove empty strings when
converting to an f-string in `UP032`.

For context, https://github.com/astral-sh/ruff/pull/8712 added this
functionality to remove _trailing_ empty strings but it got removed in
https://github.com/astral-sh/ruff/pull/8697 possibly unexpectedly so.

There's one difference which is that this PR will remove _any_ empty
strings and not just trailing ones. For example,

```diff
--- /Users/dhruv/playground/ruff/src/UP032.py
+++ /Users/dhruv/playground/ruff/src/UP032.py
@@ -1,7 +1,5 @@
 (
-    "{a}"
-    ""
-    "{b}"
-    ""
-).format(a=1, b=1)
+    f"{1}"
+    f"{1}"
+)
```

## Test Plan

Run `cargo insta test` and update the snapshots.
2024-05-27 05:05:22 +00:00
renovate[bot]
5dcde88099 Update Rust crate thiserror to v1.0.61 (#11561) 2024-05-27 00:33:54 +00:00
renovate[bot]
7794eb2bde Update Rust crate proc-macro2 to v1.0.84 (#11556) 2024-05-26 20:21:50 -04:00
renovate[bot]
40bfae4f99 Update Rust crate syn to v2.0.66 (#11560) 2024-05-27 00:21:44 +00:00
renovate[bot]
7b064b25b2 Update Rust crate mimalloc to v0.1.42 (#11554) 2024-05-26 20:21:39 -04:00
renovate[bot]
9993115f63 Update Rust crate smol_str to v0.2.2 (#11559) 2024-05-26 20:21:25 -04:00
renovate[bot]
f0a21c9161 Update Rust crate serde to v1.0.203 (#11558) 2024-05-26 20:21:19 -04:00
renovate[bot]
f26c155de5 Update Rust crate schemars to v0.8.21 (#11557) 2024-05-26 20:21:13 -04:00
renovate[bot]
c3fa826b0a Update Rust crate parking_lot to v0.12.3 (#11555) 2024-05-26 20:21:03 -04:00
renovate[bot]
8b69794f1d Update Rust crate libc to v0.2.155 (#11553) 2024-05-26 20:20:47 -04:00
renovate[bot]
4e7c84df1d Update Rust crate anyhow to v1.0.86 (#11552) 2024-05-26 20:20:38 -04:00
Dhruv Manilawala
99c400000a Avoid owned token data in sequence sorting (#11533)
## Summary

This PR updates the sequence sorting (`RUF022` and `RUF023`) to avoid
using the owned data from the string token. Instead, we will directly
use the reference to the data on the AST. This does introduce a lot of
lifetimes but that's required.

The main motivation for this is to allow removing the `lex_starts_at`
usage easily.

### Alternatives

1. Extract the raw string content (stripping the prefix and quotes)
using the `Locator` and use that for comparison
2. Build up an
[`IndexVec`](3e30962077/crates/ruff_index/src/vec.rs)
and use the newtype index in place of the string value itself. This also
does require lifetimes so we might as well just use the method in this
PR.

## Test Plan

`cargo insta test` and no ecosystem changes
2024-05-26 20:20:20 -04:00
Charlie Marsh
b5d147d219 Create intermediary directories for --output-file (#11550)
Closes https://github.com/astral-sh/ruff/issues/11549.
2024-05-26 23:23:11 +00:00
Aleksei Latyshev
77da4615c1 [pyupgrade] Support TypeAliasType in UP040 (#11530)
## Summary
Lint `TypeAliasType` in UP040.
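
Illustratively:

```py
from typing_extensions import TypeAliasType

Vector = TypeAliasType("Vector", list[float])
# UP040: rewrite using the Python 3.12 `type` statement:
# type Vector = list[float]
```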

Fixes #11422 

## Test Plan

cargo test
2024-05-26 19:05:35 +00:00
Jane Lewis
627d230688 ruff server searches for configuration in parent directories (#11537)
## Summary

Fixes #11506.

`RuffSettingsIndex::new` now searches for configuration files in parent
directories.

## Test Plan

I confirmed that the original test case described in the issue worked as
expected.
2024-05-26 18:11:08 +00:00
Fergus Longley
0eef834e89 Use project-relative path when calculating gitlab message fingerprint (#11532)
## Summary

Concurrent GitLab runners clone projects into separate directories, e.g.
`{builds_dir}/$RUNNER_TOKEN_KEY/$CONCURRENT_ID/$NAMESPACE/$PROJECT_NAME`.
Since the fingerprint uses the full path to the file, the fingerprints
calculated by Ruff are different depending on which concurrent runner it
executes on, so often an MR will appear to remove all existing issues
and add them with new fingerprints.

I've adjusted the fingerprint function to use the project relative path,
which fixes this. Unfortunately, this is a breaking change for any
current users of this output: the fingerprints will change and appear
in GitLab as if all linting messages had been fixed and then newly created.

## Test Plan

`cargo nextest run`

Running `ruff check --output-format gitlab` in a git repo, moving the
repo and running again, verifying no diffs between the outputs
2024-05-26 14:10:04 -04:00
Charlie Marsh
650c578e07 [flake8-self] Ignore sunder accesses in flake8-self rule (#11546)
## Summary

We already ignore dunders, so ignoring sunders (as in
https://docs.python.org/3/library/enum.html#supported-sunder-names)
makes sense to me.
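
Illustratively (assumed behavior):

```py
from enum import Enum

class Color(Enum):
    RED = 1

Color.RED._value_  # sunder access: no longer flagged by the flake8-self rule
```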
2024-05-26 13:57:24 -04:00
Jane Lewis
9567fddf69 ruff server correctly treats .pyi files as stub files (#11535)
## Summary

Fixes #11534.

`DocumentQuery::source_type` now returns `PySourceType::Stub` when the
document is a `.pyi` file.

## Test Plan

I confirmed that stub-specific rule violations appeared with a build
from this PR (they were not visible from a `main` build).

<img width="1066" alt="Screenshot 2024-05-24 at 2 15 38 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/cd519b7e-21e4-41c8-bc30-43eb6d4d438e">
2024-05-26 13:42:48 -04:00
Mateusz Sokół
ab6d9d4658 Add missing functions to NumPy 2.0 migration rule (#11528)
Hi! 

I left out some of the functions in the migration rule which were
removed in NumPy 2.0:
- `np.alltrue`
- `np.anytrue`
- `np.cumproduct`
- `np.product`

Addressing: https://github.com/numpy/numpy/issues/26493
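
For example (illustrative):

```py
import numpy as np

np.product([1, 2, 3])  # removed in NumPy 2.0; the rule suggests `np.prod`
```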
2024-05-26 13:24:20 -04:00
Amar Paul
677893226a [flake8-2020] fix minor typo in YTT301 documentation (#11543)
## Summary

The current doc says `sys.version[0]` will select the first digit of a major
version number (correct), then as an example says

> e.g., `"3.10"` would evaluate to `"1"`

(would actually evaluate to `"3"`). Changed the example version to a
two-digit number to make the problem more clear.

## Test Plan

Ran the following:
- `cargo run -p ruff -- check
crates/ruff_linter/resources/test/fixtures/flake8_2020/YTT301.py
--no-cache`
- `cargo insta review`
- `cargo test`
which all passed.
2024-05-26 13:23:41 -04:00
Ahmed Ilyas
33fd50027c Consider match-case stmts for C901, PLR0912, and PLR0915 (#11521)
Resolves #11421

## Summary

Instead of counting match/case as one statement, consider each `case` as
a conditional.

## Test Plan

`cargo test`
2024-05-24 14:44:46 +05:30
Dmitry Bogorad
3e30962077 [flake8-logging-format] Fix the autofix title in logging-warn (G010) (#11514)
## Summary

Rule `logging-warn` (`G010`) prescribes a change from `warn` to
`warning` and has a corresponding autofix, but the autofix is mistakenly
titled ```"Convert to `warn`"``` instead of ```"Convert to `warning`"```
(the latter is what the autofix actually does). Seems to be a plain
typo.
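
For context, the rule itself (illustrative):

```py
import logging

logging.warn("deprecated spelling")  # G010; the autofix converts this
logging.warning("preferred spelling")
```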
2024-05-24 13:13:42 +05:30
Jane Lewis
81275a6c3d ruff server: An empty code action filter no longer returns notebook source actions (#11526)
## Summary

Fixes #11516

`ruff server` was sending both regular source actions and notebook
source actions back when passed an empty action filter. This PR makes a
few small changes so that notebook source actions are not sent when
regular source actions are sent, which means that an empty filter will
only return regular source actions.

## Test Plan

I confirmed that duplicate code actions no longer appeared in Neovim,
using a configuration similar to the one from the original issue.

<img width="509" alt="Screenshot 2024-05-23 at 11 48 48 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/9a5d6907-dd41-48bd-b015-8a344c5e0b3f">
2024-05-24 07:20:39 +00:00
Charlie Marsh
52c946a4c5 Treat all singledispatch arguments as runtime-required (#11523)
## Summary

It turns out that `singledispatch` does end up evaluating all arguments,
even though only the first is used to dispatch.
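
Illustratively (assumed behavior):

```py
from collections import OrderedDict
from functools import singledispatch
from pathlib import Path

@singledispatch
def process(value: Path, options: OrderedDict) -> None: ...
# all argument annotations (not just the dispatch argument) are evaluated
# at runtime, so neither import may be moved into `if TYPE_CHECKING:`
```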

Closes https://github.com/astral-sh/ruff/issues/11520.
2024-05-23 20:36:24 -04:00
Evan Kohilas
ebdaf5765a [flake8-async] Sleep with >24 hour interval should usually sleep forever (ASYNC116) (#11498)
## Summary

Addresses #8451 by implementing rule 116 to add an unsafe fix when sleep
is used with a >24 hour interval to instead consider sleeping forever.
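
Illustratively:

```py
import trio

async def main():
    await trio.sleep(86401)  # ASYNC116: > 24 hours; the unsafe fix suggests
                             # `await trio.sleep_forever()`
```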

This rule is added under the async category instead, as my understanding
was that these trio rules would be moved to async anyway.

There are a couple of TODOs, which address further extending the rule by
adding support for lookups and evaluations, and also supporting `anyio`.
2024-05-23 16:25:50 -04:00
Christian Adell
9a93409e1c Update README.md - new Ruff user (#11509) 2024-05-23 15:50:17 -04:00
Dhruv Manilawala
102b9d930f Use Importer available on Checker (#11513)
## Summary

This PR updates the `FA102` rule logic to use the `Importer` which is
available on the `Checker`.

The main motivation is that this makes it easier to update the `Importer`
to use the `Tokens` struct, which will be required to remove the
`lex_starts_at` usage in the `Insertion::start_of_block` method.

## Test Plan

`cargo insta test`
2024-05-23 11:19:08 +00:00
Jane Lewis
550aa871d3 Bump version to v0.4.5 (#11502) 2024-05-23 01:09:01 +00:00
Charlie Marsh
3c22a3bdcc Minor edits to ruff server docs (#11500)
## Summary

Minor copy edits based on my read-through. Feel free to disagree
anywhere.
2024-05-22 23:53:53 +00:00
Jane Lewis
6263923915 Update documentation for ruff server with new migration guide (#11499)
## Summary

Introduces a migration guide from `ruff-lsp` to `ruff server` and makes
small updates to the `README.md`.
2024-05-22 14:36:33 -07:00
Jane Lewis
94abea4b08 ruff server: Fix multiple issues with Neovim and Helix (#11497)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/11236.

This PR fixes several issues, most of which relate to non-VS Code
editors (Helix and Neovim).

1. Global-only initialization options are now correctly deserialized
from Neovim and Helix
2. Empty diagnostics are now published correctly for Neovim and Helix.
3. A workspace folder is created at the current working directory if the
initialization parameters send an empty list of workspace folders.
4. The server now gracefully handles opening files outside of any known
workspace, and will use global fallback settings taken from client
editor settings and a user settings TOML, if it exists.

## Test Plan

I've tested to confirm that each issue has been fixed.

* Global-only initialization options are now correctly deserialized from
Neovim and Helix + the server gracefully handles opening files outside
of any known workspace


https://github.com/astral-sh/ruff/assets/19577865/4f33477f-20c8-4e50-8214-6608b1a1ea6b

* Empty diagnostics are now published correctly for Neovim and Helix


https://github.com/astral-sh/ruff/assets/19577865/c93f56a0-f75d-466f-9f40-d77f99cf0637

* A workspace folder is created at the current working directory if the
initialization parameters send an empty list of workspace folders.



https://github.com/astral-sh/ruff/assets/19577865/b4b2e818-4b0d-40ce-961d-5831478cc726
2024-05-22 20:50:58 +00:00
Charlie Marsh
519a65007f Mark quotes as unnecessary for non-evaluated annotations (#11485)
## Summary

Similar to #11414, this PR extends `UP037` to flag quoted annotations
that are located in positions that won't be evaluated at runtime.

For example, the quotes on `Tuple` are unnecessary in:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import Tuple


def foo():
    x: "Tuple[int, int]" = (0, 0)

foo()
```
2024-05-22 15:44:31 -04:00
Jane Lewis
573facd2ba Fix automatic configuration reloading for text and notebook documents (#11492)
## Summary

Recent changes made in the [Jupyter Notebook feature
PR](https://github.com/astral-sh/ruff/pull/11206) caused automatic
configuration reloading to stop working. This was because we would check
for paths to reload using the changed path, when we should have been
using the parent path of the changed path (to get the directory it was
changed in).

Additionally, this PR fixes an issue where `ruff.toml` and `.ruff.toml`
files were not being automatically reloaded.

Finally, this PR improves configuration reloading by actively publishing
diagnostics for notebook documents (which won't be affected by the
workspace refresh since they don't use pull diagnostics). It will also
publish diagnostics for text documents if pull diagnostics aren't
supported.

## Test Plan
To test this, open an existing configuration file in a codebase, and
make modifications that will affect one or more open Python / Jupyter
Notebook files. You should observe that the diagnostics for both kinds
of files update automatically when the file changes are saved.

Here's a test video showing what a successful test should look like:



https://github.com/astral-sh/ruff/assets/19577865/7172b598-d6de-4965-b33c-6cb8b911ef6c
2024-05-22 11:20:45 -07:00
Jane Lewis
3cb2e677aa ruff.applyFormat now formats an entire notebook document (#11493)
## Summary

Previously, `ruff.applyFormat`, seen in VS Code as the command `Ruff:
Format Document`, would only format the currently active notebook cell
inside a notebook document. This PR makes `ruff.applyFormat` format the
entire notebook document at once, operating on each code cell in order.

## Test Plan

1. Open a notebook document that has multiple unformatted code cells.
2. Run `Ruff: Format Document` through the Command Palette
(`Ctrl/Cmd+Shift+P` by default)
3. Observe that all code cells in the notebook have been formatted.
2024-05-22 09:02:46 -07:00
Dhruv Manilawala
f0046ab28e Move has_comments to CommentRanges (#11495)
## Summary

This PR moves the `has_comments` function from `Indexer` to
`CommentRanges`. The main motivation is that the `CommentRanges` will
now be built by the parser which is shared between the linter and the
formatter. Thus, the `CommentRanges` will be removed from the `Indexer`.

## Test Plan

`cargo test`
2024-05-22 13:35:16 +00:00
Charlie Marsh
5bb9720a10 Avoid multiline quotes warning with quote-style = preserve (#11490)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11063.
2024-05-22 04:31:03 +00:00
Dhruv Manilawala
9ff18bf9d3 Simplify Neovim docs for the LSP setup (#11489)
Similar to what we have at
https://github.com/astral-sh/ruff-lsp#example-neovim
2024-05-22 09:51:02 +05:30
Charlie Marsh
aa906b9c75 [pylint] Ignore __slots__ with dynamic values (#11488)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11333.
2024-05-22 04:18:01 +00:00
Evan Kohilas
3476e2f359 fixes invalid rule from hyphen (#11484)
## Summary

When using `add_rule.py`, it produces the following line in `codes.rs`
```
        (Flake8Async, "102") => (RuleGroup::Stable, rules::flake8-async::rules::BlockingOsCallInAsyncFunction),
```

Causing a syntax error.

This PR resolves that issue so that the script can be used again.

## Test Plan

Tested manually in new rule creation
2024-05-21 23:39:50 -04:00
Charlie Marsh
8848eca3c6 [pylint] Remove try body from branch counting (#11487)
## Summary

Matching Pylint, we now omit the `try` body itself from branch counting.
Each `except` counts as a branch, as does the `else` and the `finally`.
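
Illustratively, for branch counting:

```py
try:
    x = 1          # the `try` body itself no longer counts as a branch
except ValueError: # each `except` counts as a branch
    ...
else:              # ...as does the `else`
    ...
finally:           # ...and the `finally`
    ...
```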

Closes https://github.com/astral-sh/ruff/issues/11205.
2024-05-21 23:38:51 -04:00
Jane Lewis
b0731ef9cb ruff server: Support Jupyter Notebook (*.ipynb) files (#11206)
## Summary

Closes https://github.com/astral-sh/ruff/issues/10858.

`ruff server` now supports `*.ipynb` (aka Jupyter Notebook) files.
Extensive internal changes have been made to facilitate this, which I've
done some work to contextualize with documentation and a pre-review
that highlights notable sections of the code.

`*.ipynb` cells should behave similarly to `*.py` documents, with one
major exception. The format command `ruff.applyFormat` will only apply
to the currently selected notebook cell - if you want to format an
entire notebook document, use `Format Notebook` from the VS Code context
menu.

## Test Plan

The VS Code extension does not yet have Jupyter Notebook support
enabled, so you'll first need to enable it manually. To do this,
checkout the `pre-release` branch and modify `src/common/server.ts` as
follows:

Before:
![Screenshot 2024-05-13 at 10 59
06 PM](https://github.com/astral-sh/ruff/assets/19577865/c6a3c604-c405-4968-b8a2-5d670de89172)

After:
![Screenshot 2024-05-13 at 10 58
24 PM](https://github.com/astral-sh/ruff/assets/19577865/94ab2e3d-0609-448d-9c8c-cd07c69a513b)

I recommend testing this PR with large, complicated notebook files. I
used notebook files from [this popular
repository](https://github.com/jakevdp/PythonDataScienceHandbook/tree/master/notebooks)
in my preliminary testing.

The main thing to test is ensuring that notebook cells behave the same
as Python documents, besides the aforementioned issue with
`ruff.applyFormat`. You should also test adding and deleting cells (in
particular, deleting all the code cells and ensure that doesn't break
anything), changing the kind of a cell (i.e. from markup -> code or vice
versa), and creating a new notebook file from scratch. Finally, you
should also test that source actions work as expected (and across the
entire notebook).

Note: `ruff.applyAutofix` and `ruff.applyOrganizeImports` are currently
broken for notebook files, and I suspect it has something to do with
https://github.com/astral-sh/ruff/issues/11248. Once this is fixed, I
will update the test plan accordingly.

---------

Co-authored-by: nolan <nolan.king90@gmail.com>
2024-05-21 22:29:30 +00:00
Nicolas Jeker
84531d1644 Clarify motivation for E713 and E714 (#11483)
The wording 'negative comparison' is a rather vague description of the
'is not' operation, and it does not describe what the 'not in' operation
does (it was likely copied from 'is not'). It has been replaced with more
precise language, taken from the official Python docs[1], that describes
each operator.

Neither rule had strong reasoning beyond 'it's bad, use the other'. The
origin of these rules appears to be PEP 8[2], which prefers 'is not' over
'not ... is' for readability. This is now reflected in each rule's
description.

[1]:
https://docs.python.org/3/reference/expressions.html#membership-test-operations
[2]: https://peps.python.org/pep-0008/#programming-recommendations
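
Concretely, the preferred operator forms:

```python
x, items = 1, [1, 2, 3]

# E713: prefer the `not in` membership test over negating `in`.
if not x in items:  # flagged
    pass
if x not in items:  # preferred
    pass

# E714: prefer the `is not` identity test over negating `is`.
if not x is None:  # flagged
    pass
if x is not None:  # preferred
    pass
```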
2024-05-21 14:12:18 -05:00
Charlie Marsh
83b8b62e3e Avoid flagging __future__ annotations as required for non-evaluated type annotations (#11414)
## Summary

If an annotation won't be evaluated at runtime, we don't need to flag
`from __future__ import annotations` as required. This applies both to
quoted annotations and annotations outside of runtime-evaluated
positions, like:

```python
def main() -> None:
    a_list: list[str] | None = []
    a_list.append("hello")
```
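
And the quoted-annotation case mentioned above, for comparison:

```python
def lookup(key: "str | None") -> "list[str]":
    # Quoted annotations are never evaluated at runtime, so they don't
    # require `from __future__ import annotations` either.
    return []
```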

Closes https://github.com/astral-sh/ruff/issues/11397.
2024-05-21 18:57:13 +00:00
plredmond
7225732859 F401 - update documentation and deprecate ignore_init_module_imports (#11436)
## Summary

* Update documentation for F401 following recent PRs
  * #11168
  * #11314
* Deprecate `ignore_init_module_imports`
* Add a deprecation pragma to the option and a "warn user once" message
when the option is used.
* Restore the old behavior for stable (non-preview) mode:
* When `ignore_init_module_imports` is set to `true` (default) there are
no `__init__.py` fixes (but we get nice fix titles!).
* When `ignore_init_module_imports` is set to `false` there are unsafe
`__init__.py` fixes to remove unused imports.
* When preview mode is enabled, it overrides
`ignore_init_module_imports`.
* Fixed a bug in fix titles where `import foo as bar` would recommend
reexporting `bar as bar`. It now says to reexport `foo as foo`. (In this
case we don't issue a fix, fwiw; it was just a fix title bug.)
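
To make the stable-mode behavior concrete, a sketch (hypothetical `pkg/__init__.py`; fix-title wording paraphrased):

```python
# pkg/__init__.py (hypothetical)

import foo  # F401: with ignore_init_module_imports = true (the default),
            # no fix is applied, but the fix title suggests the explicit
            # re-export spelling `import foo as foo`.
            # With ignore_init_module_imports = false, an unsafe fix
            # removes the import instead.
```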

## Test plan

Added new fixture tests that reuse the existing fixtures for
`__init__.py` files. Each of the three situations listed above has
fixture tests. The F401 "stable" tests cover:

> * When `ignore_init_module_imports` is set to `true` (default) there
are no `__init__.py` fixes (but we get nice fix titles!).

The F401 "deprecated option" tests cover:

> * When `ignore_init_module_imports` is set to `false` there are unsafe
`__init__.py` fixes to remove unused imports.

These complement existing "preview" tests that show the new behavior
which recommends fixes in `__init__.py` according to whether the import
is 1st party and other circumstances (for more on that behavior see:
#11314).
2024-05-21 09:23:45 -07:00
Dhruv Manilawala
403f0dccd8 Consider soft keywords for E27 rules (#11446)
## Summary

This is a follow-up PR to #11445 to update the `E27` rules to consider
soft keywords as well.
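
For instance (illustrative; the specific codes shown are my reading of the E27 family):

```python
command = "start"

match(command):    # E275: missing whitespace after the soft keyword `match`
    case"start":   # E275: missing whitespace after the soft keyword `case`
        print("starting")
    case _:
        pass

type  Alias = int  # E271: multiple spaces after the soft keyword `type` (3.12+)
```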

## Test Plan

Add test cases consisting of soft keywords and update the snapshot.
2024-05-20 05:38:06 +00:00
Zanie Blue
46fcd19ca6 Fix division by zero error in ecosystem check (#11469)
e.g.
https://github.com/astral-sh/ruff/actions/runs/9144809516/job/25143076896?pr=11468

<img width="1388" alt="Screenshot 2024-05-19 at 12 02 15 AM"
src="https://github.com/astral-sh/ruff/assets/2586601/0df7cbcd-712c-4ea9-96f5-73f871570525">
2024-05-19 09:08:10 -05:00
Charlie Marsh
d9ec3d56b0 Add some new projects to the ecosystem CI (#11468)
Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-05-19 08:08:38 -05:00
Auguste Lalande
cd87b787d9 Fix windows-ci failure (#11470)

## Summary

The recent issues with the Windows CI seem to be caused by
https://github.com/nextest-rs/nextest/issues/1493, with
https://github.com/nextest-rs/nextest/issues/1493#issuecomment-2106331574
suggesting a fix.

(Let's see if it works)
2024-05-19 07:25:06 -05:00
Charlie Marsh
dd6d411026 Remove comma from ecosystem checks (#11466)
## Summary

Something's up with this repo -- they added a post-checkout hook? So
let's just remove it for now. We should go through and add a new batch
of repositories some time.
2024-05-18 23:37:56 -04:00
Charlie Marsh
cfceb437a8 Treat escaped newline as valid sequence (#11465)
## Summary

We weren't treating the escaped newline as a valid condition to trigger
the safer fix (add an extra backslash before each invalid escape
sequence).
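
For example, assuming this concerns the invalid-escape-sequence rule (`W605`), a string that mixes an invalid escape with a backslash-newline continuation:

```python
# Before: `\d` is an invalid escape sequence, but the trailing backslash
# before the newline is a valid line continuation and must be preserved.
pattern = "\d+ items \
on one line"

# After the safer fix, only the invalid sequence gains an extra backslash:
pattern = "\\d+ items \
on one line"
```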

Closes https://github.com/astral-sh/ruff/issues/11461.
2024-05-19 03:32:32 +00:00
Charlie Marsh
48b0660228 Respect operator precedence in FURB110 (#11464)
## Summary

Ensures that we parenthesize expressions (if necessary) to preserve
operator precedence in `FURB110`.
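
For example (my own illustration of the precedence issue):

```python
a, b, c, d = 0, 1, False, 2

# FURB110 rewrites `x if x else y` as `x or y`. When the `else` operand has
# lower precedence than `or`, it needs parentheses to keep the same meaning:
value = a if a else b if c else d  # parses as `a if a else (b if c else d)`

# Correct rewrite, with parentheses preserving precedence:
value = a or (b if c else d)  # `a or b if c else d` would instead parse as
                              # `(a or b) if c else d`
```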

Closes https://github.com/astral-sh/ruff/issues/11398.
2024-05-19 03:17:11 +00:00
Charlie Marsh
24899efe50 Remove example from tab-indentation (#11462)
## Summary

I think the example is more confusing than helpful, since there's no
visual difference between the tab and space here (even if it rendered
properly).

Closes
https://github.com/astral-sh/ruff/issues/11460#issuecomment-2118397278.
2024-05-17 17:49:16 -04:00
654 changed files with 22577 additions and 12112 deletions

View File

@@ -1,3 +1,10 @@
[alias]
dev = "run --package ruff_dev --bin ruff_dev"
benchmark = "bench -p ruff_benchmark --bench linter --bench formatter --"
# statically link the C runtime so the executable does not depend on
# that shared/dynamic library.
#
# See: https://github.com/astral-sh/ruff/issues/11503
[target.'cfg(all(target_env="msvc", target_os = "windows"))']
rustflags = ["-C", "target-feature=+crt-static"]

.github/CODEOWNERS vendored (3 lines changed)
View File

@@ -15,3 +15,6 @@
# Script for fuzzing the parser
/scripts/fuzz-parser/ @AlexWaygood
# red-knot
/crates/red_knot/ @carljm @MichaReiser

View File

@@ -167,6 +167,9 @@ jobs:
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
# Workaround for <https://github.com/nextest-rs/nextest/issues/1493>.
RUSTUP_WINDOWS_PATH_ADD_BIN: 1
run: |
cargo nextest run --all-features --profile ci
cargo test --all-features --doc
@@ -209,6 +212,38 @@ jobs:
- name: "Build"
run: cargo build --release --locked
cargo-build-msrv:
name: "cargo build (msrv)"
runs-on: ubuntu-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- uses: SebRollen/toml-action@v1.2.0
id: msrv
with:
file: "Cargo.toml"
field: "workspace.package.rust-version"
- name: "Install Rust toolchain"
run: rustup default ${{ steps.msrv.outputs.value }}
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo +${{ steps.msrv.outputs.value }} insta test --all-features --unreferenced reject --test-runner nextest
cargo-fuzz:
name: "cargo fuzz"
runs-on: ubuntu-latest

View File

@@ -47,7 +47,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.5.0
uses: cloudflare/wrangler-action@v3.6.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -40,7 +40,7 @@ jobs:
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.5.0
uses: cloudflare/wrangler-action@v3.6.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -14,7 +14,7 @@ exclude: |
repos:
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.17
rev: v0.18
hooks:
- id: validate-pyproject
@@ -32,7 +32,7 @@ repos:
)$
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.40.0
rev: v0.41.0
hooks:
- id: markdownlint-fix
exclude: |
@@ -56,7 +56,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.4
rev: v0.4.7
hooks:
- id: ruff-format
- id: ruff

.vscode/extensions.json vendored, new file (5 lines changed)
View File

@@ -0,0 +1,5 @@
{
"recommendations": [
"rust-lang.rust-analyzer"
]
}

.vscode/settings.json vendored, new file (6 lines changed)
View File

@@ -0,0 +1,6 @@
{
"rust-analyzer.check.extraArgs": [
"--all-features"
],
"rust-analyzer.check.command": "clippy",
}

View File

@@ -1,5 +1,162 @@
# Changelog
## 0.4.8
### Performance
- Linter performance has been improved by around 10% on some microbenchmarks by refactoring the lexer and parser to maintain synchronicity between them ([#11457](https://github.com/astral-sh/ruff/pull/11457))
### Preview features
- \[`flake8-bugbear`\] Implement `return-in-generator` (`B901`) ([#11644](https://github.com/astral-sh/ruff/pull/11644))
- \[`flake8-pyi`\] Implement `PYI063` ([#11699](https://github.com/astral-sh/ruff/pull/11699))
- \[`pygrep_hooks`\] Check blanket ignores via file-level pragmas (`PGH004`) ([#11540](https://github.com/astral-sh/ruff/pull/11540))
### Rule changes
- \[`pyupgrade`\] Update `UP035` for Python 3.13 and the latest version of `typing_extensions` ([#11693](https://github.com/astral-sh/ruff/pull/11693))
- \[`numpy`\] Update `NPY001` rule for NumPy 2.0 ([#11735](https://github.com/astral-sh/ruff/pull/11735))
### Server
- Formatting a document with syntax problems no longer spams a visible error popup ([#11745](https://github.com/astral-sh/ruff/pull/11745))
### CLI
- Add RDJson support for `--output-format` flag ([#11682](https://github.com/astral-sh/ruff/pull/11682))
### Bug fixes
- \[`pyupgrade`\] Write empty string in lieu of panic when fixing `UP032` ([#11696](https://github.com/astral-sh/ruff/pull/11696))
- \[`flake8-simplify`\] Simplify double negatives in `SIM103` ([#11684](https://github.com/astral-sh/ruff/pull/11684))
- Ensure the expression generator adds a newline before `type` statements ([#11720](https://github.com/astral-sh/ruff/pull/11720))
- Respect per-file ignores for blanket and redirected noqa rules ([#11728](https://github.com/astral-sh/ruff/pull/11728))
## 0.4.7
### Preview features
- \[`flake8-pyi`\] Implement `PYI064` ([#11325](https://github.com/astral-sh/ruff/pull/11325))
- \[`flake8-pyi`\] Implement `PYI066` ([#11541](https://github.com/astral-sh/ruff/pull/11541))
- \[`flake8-pyi`\] Implement `PYI057` ([#11486](https://github.com/astral-sh/ruff/pull/11486))
- \[`pyflakes`\] Enable `F822` in `__init__.py` files by default ([#11370](https://github.com/astral-sh/ruff/pull/11370))
### Formatter
- Fix incorrect placement of trailing stub function comments ([#11632](https://github.com/astral-sh/ruff/pull/11632))
### Server
- Respect file exclusions in `ruff server` ([#11590](https://github.com/astral-sh/ruff/pull/11590))
- Add support for documents that do not exist on disk ([#11588](https://github.com/astral-sh/ruff/pull/11588))
- Add Vim and Kate setup guide for `ruff server` ([#11615](https://github.com/astral-sh/ruff/pull/11615))
### Bug fixes
- Avoid removing newlines between docstring headers and rST blocks ([#11609](https://github.com/astral-sh/ruff/pull/11609))
- Infer indentation with imports when logical indent is absent ([#11608](https://github.com/astral-sh/ruff/pull/11608))
- Use char index rather than position for indent slice ([#11645](https://github.com/astral-sh/ruff/pull/11645))
- \[`flake8-comprehension`\] Strip parentheses around generators in `C400` ([#11607](https://github.com/astral-sh/ruff/pull/11607))
- Mark `repeated-isinstance-calls` as unsafe on Python 3.10 and later ([#11622](https://github.com/astral-sh/ruff/pull/11622))
## 0.4.6
### Breaking changes
- Use project-relative paths when calculating GitLab fingerprints ([#11532](https://github.com/astral-sh/ruff/pull/11532))
- Bump minimum supported Windows version to Windows 10 ([#11613](https://github.com/astral-sh/ruff/pull/11613))
### Preview features
- \[`flake8-async`\] Sleep with >24 hour interval should usually sleep forever (`ASYNC116`) ([#11498](https://github.com/astral-sh/ruff/pull/11498))
### Rule changes
- \[`numpy`\] Add missing functions to NumPy 2.0 migration rule ([#11528](https://github.com/astral-sh/ruff/pull/11528))
- \[`mccabe`\] Consider irrefutable pattern similar to `if .. else` for `C901` ([#11565](https://github.com/astral-sh/ruff/pull/11565))
- Consider `match`-`case` statements for `C901`, `PLR0912`, and `PLR0915` ([#11521](https://github.com/astral-sh/ruff/pull/11521))
- Remove empty strings when converting to f-string (`UP032`) ([#11524](https://github.com/astral-sh/ruff/pull/11524))
- \[`flake8-bandit`\] `request-without-timeout` should warn for `requests.request` ([#11548](https://github.com/astral-sh/ruff/pull/11548))
- \[`flake8-self`\] Ignore sunder accesses in `flake8-self` rules ([#11546](https://github.com/astral-sh/ruff/pull/11546))
- \[`pyupgrade`\] Lint for `TypeAliasType` usages (`UP040`) ([#11530](https://github.com/astral-sh/ruff/pull/11530))
### Server
- Respect excludes in `ruff server` configuration discovery ([#11551](https://github.com/astral-sh/ruff/pull/11551))
- Use default settings if initialization options is empty or not provided ([#11566](https://github.com/astral-sh/ruff/pull/11566))
- `ruff server` correctly treats `.pyi` files as stub files ([#11535](https://github.com/astral-sh/ruff/pull/11535))
- `ruff server` searches for configuration in parent directories ([#11537](https://github.com/astral-sh/ruff/pull/11537))
- `ruff server`: An empty code action filter no longer returns notebook source actions ([#11526](https://github.com/astral-sh/ruff/pull/11526))
### Bug fixes
- \[`flake8-logging-format`\] Fix autofix title in `logging-warn` (`G010`) ([#11514](https://github.com/astral-sh/ruff/pull/11514))
- \[`refurb`\] Avoid recommending `operator.itemgetter` with dependence on lambda arguments ([#11574](https://github.com/astral-sh/ruff/pull/11574))
- \[`flake8-simplify`\] Avoid recommending context manager in `__enter__` implementations ([#11575](https://github.com/astral-sh/ruff/pull/11575))
- Create intermediary directories for `--output-file` ([#11550](https://github.com/astral-sh/ruff/pull/11550))
- Propagate reads on global variables ([#11584](https://github.com/astral-sh/ruff/pull/11584))
- Treat all `singledispatch` arguments as runtime-required ([#11523](https://github.com/astral-sh/ruff/pull/11523))
## 0.4.5
### Ruff's language server is now in Beta
`v0.4.5` marks the official Beta release of `ruff server`, an integrated language server built into Ruff.
`ruff server` supports the same feature set as `ruff-lsp`, powering linting, formatting, and
code fixes in Ruff's editor integrations -- but with superior performance and
no installation required. We'd love your feedback!
You can enable `ruff server` in the [VS Code extension](https://github.com/astral-sh/ruff-vscode?tab=readme-ov-file#enabling-the-rust-based-language-server) today.
To read more about this exciting milestone, check out our [blog post](https://astral.sh/blog/ruff-v0.4.5)!
### Rule changes
- \[`flake8-future-annotations`\] Reword `future-rewritable-type-annotation` (`FA100`) message ([#11381](https://github.com/astral-sh/ruff/pull/11381))
- \[`pycodestyle`\] Consider soft keywords for `E27` rules ([#11446](https://github.com/astral-sh/ruff/pull/11446))
- \[`pyflakes`\] Recommend adding unused import bindings to `__all__` ([#11314](https://github.com/astral-sh/ruff/pull/11314))
- \[`pyflakes`\] Update documentation and deprecate `ignore_init_module_imports` ([#11436](https://github.com/astral-sh/ruff/pull/11436))
- \[`pyupgrade`\] Mark quotes as unnecessary for non-evaluated annotations ([#11485](https://github.com/astral-sh/ruff/pull/11485))
### Formatter
- Avoid multiline quotes warning with `quote-style = preserve` ([#11490](https://github.com/astral-sh/ruff/pull/11490))
### Server
- Support Jupyter Notebook files ([#11206](https://github.com/astral-sh/ruff/pull/11206))
- Support `noqa` comment code actions ([#11276](https://github.com/astral-sh/ruff/pull/11276))
- Fix automatic configuration reloading ([#11492](https://github.com/astral-sh/ruff/pull/11492))
- Fix several issues with configuration in Neovim and Helix ([#11497](https://github.com/astral-sh/ruff/pull/11497))
### CLI
- Add `--output-format` as a CLI option for `ruff config` ([#11438](https://github.com/astral-sh/ruff/pull/11438))
### Bug fixes
- Avoid `PLE0237` for property with setter ([#11377](https://github.com/astral-sh/ruff/pull/11377))
- Avoid `TCH005` for `if` stmt with `elif`/`else` block ([#11376](https://github.com/astral-sh/ruff/pull/11376))
- Avoid flagging `__future__` annotations as required for non-evaluated type annotations ([#11414](https://github.com/astral-sh/ruff/pull/11414))
- Check for ruff executable in 'bin' directory as installed by 'pip install --target'. ([#11450](https://github.com/astral-sh/ruff/pull/11450))
- Sort edits prior to deduplicating in quotation fix ([#11452](https://github.com/astral-sh/ruff/pull/11452))
- Treat escaped newline as valid sequence ([#11465](https://github.com/astral-sh/ruff/pull/11465))
- \[`flake8-pie`\] Preserve parentheses in `unnecessary-dict-kwargs` ([#11372](https://github.com/astral-sh/ruff/pull/11372))
- \[`pylint`\] Ignore `__slots__` with dynamic values ([#11488](https://github.com/astral-sh/ruff/pull/11488))
- \[`pylint`\] Remove `try` body from branch counting ([#11487](https://github.com/astral-sh/ruff/pull/11487))
- \[`refurb`\] Respect operator precedence in `FURB110` ([#11464](https://github.com/astral-sh/ruff/pull/11464))
### Documentation
- Add `--preview` to the README ([#11395](https://github.com/astral-sh/ruff/pull/11395))
- Add Python 3.13 to list of allowed Python versions ([#11411](https://github.com/astral-sh/ruff/pull/11411))
- Simplify Neovim setup documentation ([#11489](https://github.com/astral-sh/ruff/pull/11489))
- Update CONTRIBUTING.md to reflect the new parser ([#11434](https://github.com/astral-sh/ruff/pull/11434))
- Update server documentation with new migration guide ([#11499](https://github.com/astral-sh/ruff/pull/11499))
- \[`pycodestyle`\] Clarify motivation for `E713` and `E714` ([#11483](https://github.com/astral-sh/ruff/pull/11483))
- \[`pyflakes`\] Update docs to describe WAI behavior (F541) ([#11362](https://github.com/astral-sh/ruff/pull/11362))
- \[`pylint`\] Clearly indicate what is counted as a branch ([#11423](https://github.com/astral-sh/ruff/pull/11423))
## 0.4.4
### Preview features
@@ -74,6 +231,10 @@
- Avoid allocations for isort module names ([#11251](https://github.com/astral-sh/ruff/pull/11251))
- Build a separate ARM wheel for macOS ([#11149](https://github.com/astral-sh/ruff/pull/11149))
### Windows
- Increase the minimum requirement to Windows 10.
## 0.4.2
### Rule changes

View File

@@ -101,6 +101,8 @@ pre-commit run --all-files --show-diff-on-failure # Rust and Python formatting,
These checks will run on GitHub Actions when you open your pull request, but running them locally
will save you time and expedite the merge process.
If you're using VS Code, you can also install the recommended [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer) extension to get these checks while editing.
Note that many code changes also require updating the snapshot tests, which is done interactively
after running `cargo test` like so:

Cargo.lock generated (214 lines changed)
View File

@@ -129,9 +129,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.83"
version = "1.0.86"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25bdb32cbbdce2b519a9cd7df3a678443100e265d5e25ca763b7572a5104f5f3"
checksum = "b3d1d046238990b9cf5bcde22a3fb3584ee5cf65fb2765f454ed428c7a0063da"
[[package]]
name = "argfile"
@@ -350,7 +350,7 @@ version = "4.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "528131438037fd55894f62d6e9f068b8f45ac57ffa77517819645d10aed04f64"
dependencies = [
"heck 0.5.0",
"heck",
"proc-macro2",
"quote",
"syn",
@@ -868,12 +868,6 @@ dependencies = [
"allocator-api2",
]
[[package]]
name = "heck"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
[[package]]
name = "heck"
version = "0.5.0"
@@ -886,12 +880,6 @@ version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d231dfb89cfffdbc30e7fc41579ed6066ad03abda9e567ccafae602b97ec5024"
[[package]]
name = "hexf-parse"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dfa686283ad6dd069f105e5ab091b04c62850d3e4cf5d67debad1933f55023df"
[[package]]
name = "home"
version = "0.5.9"
@@ -1035,9 +1023,9 @@ dependencies = [
[[package]]
name = "insta"
version = "1.38.0"
version = "1.39.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3eab73f58e59ca6526037208f0e98851159ec1633cf17b6cd2e1f2c3fd5d53cc"
checksum = "810ae6042d48e2c9e9215043563a58a80b877bc863228a74cf10c49d4620a6f5"
dependencies = [
"console",
"globset",
@@ -1122,9 +1110,9 @@ dependencies = [
[[package]]
name = "itertools"
version = "0.12.1"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba291022dbbd398a455acf126c1e341954079855bc60dfdda641363bd6922569"
checksum = "413ee7dfc52ee1a4949ceeb7dbc8a33f2d6c088194d9f922fb8318faf1f01186"
dependencies = [
"either",
]
@@ -1176,47 +1164,17 @@ version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "lexical-parse-float"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "683b3a5ebd0130b8fb52ba0bdc718cc56815b6a097e28ae5a6997d0ad17dc05f"
dependencies = [
"lexical-parse-integer",
"lexical-util",
"static_assertions",
]
[[package]]
name = "lexical-parse-integer"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d0994485ed0c312f6d965766754ea177d07f9c00c9b82a5ee62ed5b47945ee9"
dependencies = [
"lexical-util",
"static_assertions",
]
[[package]]
name = "lexical-util"
version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5255b9ff16ff898710eb9eb63cb39248ea8a5bb036bea8085b1a767ff6c4e3fc"
dependencies = [
"static_assertions",
]
[[package]]
name = "libc"
version = "0.2.154"
version = "0.2.155"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"
checksum = "97b3888a4aecf77e811145cadf6eef5901f4782c53886191b2f693f24761847c"
[[package]]
name = "libcst"
version = "1.3.1"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f1e25d1b119ab5c2f15a6e081bb94a8d547c5c2ad065f5fd0dbb683f31ced91"
checksum = "10293a04a48e8b0cb2cc825a93b83090e527bffd3c897a0255ad7bc96079e920"
dependencies = [
"chic",
"libcst_derive",
@@ -1229,9 +1187,9 @@ dependencies = [
[[package]]
name = "libcst_derive"
version = "1.3.1"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a5011f2d59093de14a4a90e01b9d85dee9276e58a25f0107dcee167dd601be0"
checksum = "a2ae40017ac09cd2c6a53504cb3c871c7f2b41466eac5bc66ba63f39073b467b"
dependencies = [
"quote",
"syn",
@@ -1239,9 +1197,9 @@ dependencies = [
[[package]]
name = "libmimalloc-sys"
version = "0.1.37"
version = "0.1.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81eb4061c0582dedea1cbc7aff2240300dd6982e0239d1c99e65c1dbf4a30ba7"
checksum = "0e7bb23d733dfcc8af652a78b7bf232f0e967710d044732185e561e47c0336b6"
dependencies = [
"cc",
"libc",
@@ -1300,8 +1258,7 @@ dependencies = [
[[package]]
name = "lsp-types"
version = "0.95.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e34d33a8e9b006cd3fc4fe69a921affa097bae4bb65f76271f4644f9a334365"
source = "git+https://github.com/astral-sh/lsp-types.git?rev=3512a9f#3512a9f33eadc5402cfab1b8f7340824c8ca1439"
dependencies = [
"bitflags 1.3.2",
"serde",
@@ -1339,9 +1296,9 @@ checksum = "6c8640c5d730cb13ebd907d8d04b52f55ac9a2eec55b440c8892f40d56c76c1d"
[[package]]
name = "mimalloc"
version = "0.1.41"
version = "0.1.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f41a2280ded0da56c8cf898babb86e8f10651a34adcfff190ae9a1159c6908d"
checksum = "e9186d86b79b52f4a77af65604b51225e8db1d6ee7e3f41aec1e40829c71a176"
dependencies = [
"libmimalloc-sys",
]
@@ -1441,9 +1398,9 @@ dependencies = [
[[package]]
name = "nu-ansi-term"
version = "0.49.0"
version = "0.50.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c073d3c1930d0751774acf49e66653acecb416c3a54c6ec095a9b11caddb5a68"
checksum = "dd2800e1520bdc966782168a627aa5d1ad92e33b984bf7c7615d31280c83ff14"
dependencies = [
"windows-sys 0.48.0",
]
@@ -1498,9 +1455,9 @@ checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
[[package]]
name = "parking_lot"
version = "0.12.2"
version = "0.12.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e4af0ca4f6caed20e900d564c242b8e5d4903fdacf31d3daf527b66fe6f42fb"
checksum = "f1bf18183cf54e8d6059647fc3063646a1801cf30896933ec2311622cc4b9a27"
dependencies = [
"lock_api",
"parking_lot_core",
@@ -1707,9 +1664,9 @@ dependencies = [
[[package]]
name = "proc-macro2"
version = "1.0.82"
version = "1.0.85"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ad3d49ab951a01fbaafe34f2ec74122942fe18a3f9814c3268f1bb72042131b"
checksum = "22244ce15aa966053a896d1accb3a6e68469b97c7f33f284b99f0d576879fc23"
dependencies = [
"unicode-ident",
]
@@ -1832,7 +1789,6 @@ dependencies = [
"rustc-hash",
"smol_str",
"tempfile",
"textwrap",
"tracing",
"tracing-subscriber",
"tracing-tree",
@@ -1940,7 +1896,7 @@ dependencies = [
[[package]]
name = "ruff"
version = "0.4.4"
version = "0.4.8"
dependencies = [
"anyhow",
"argfile",
@@ -1957,7 +1913,7 @@ dependencies = [
"insta",
"insta-cmd",
"is-macro",
"itertools 0.12.1",
"itertools 0.13.0",
"log",
"mimalloc",
"notify",
@@ -2003,7 +1959,6 @@ dependencies = [
"ruff_linter",
"ruff_python_ast",
"ruff_python_formatter",
"ruff_python_index",
"ruff_python_parser",
"serde",
"serde_json",
@@ -2019,7 +1974,7 @@ dependencies = [
"filetime",
"glob",
"globset",
"itertools 0.12.1",
"itertools 0.13.0",
"regex",
"ruff_macros",
"seahash",
@@ -2035,7 +1990,7 @@ dependencies = [
"imara-diff",
"indicatif",
"indoc",
"itertools 0.12.1",
"itertools 0.13.0",
"libcst",
"pretty_assertions",
"rayon",
@@ -2051,6 +2006,7 @@ dependencies = [
"ruff_python_parser",
"ruff_python_stdlib",
"ruff_python_trivia",
"ruff_text_size",
"ruff_workspace",
"schemars",
"serde",
@@ -2101,7 +2057,7 @@ dependencies = [
[[package]]
name = "ruff_linter"
version = "0.4.4"
version = "0.4.8"
dependencies = [
"aho-corasick",
"annotate-snippets 0.9.2",
@@ -2117,7 +2073,7 @@ dependencies = [
"insta",
"is-macro",
"is-wsl",
"itertools 0.12.1",
"itertools 0.13.0",
"libcst",
"log",
"memchr",
@@ -2165,7 +2121,7 @@ dependencies = [
name = "ruff_macros"
version = "0.0.0"
dependencies = [
"itertools 0.12.1",
"itertools 0.13.0",
"proc-macro2",
"quote",
"ruff_python_trivia",
@@ -2177,7 +2133,7 @@ name = "ruff_notebook"
version = "0.0.0"
dependencies = [
"anyhow",
"itertools 0.12.1",
"itertools 0.13.0",
"once_cell",
"rand",
"ruff_diagnostics",
@@ -2198,7 +2154,7 @@ dependencies = [
"aho-corasick",
"bitflags 2.5.0",
"is-macro",
"itertools 0.12.1",
"itertools 0.13.0",
"once_cell",
"ruff_python_trivia",
"ruff_source_file",
@@ -2227,6 +2183,7 @@ dependencies = [
"ruff_python_literal",
"ruff_python_parser",
"ruff_source_file",
"ruff_text_size",
]
[[package]]
@@ -2237,7 +2194,7 @@ dependencies = [
"clap",
"countme",
"insta",
"itertools 0.12.1",
"itertools 0.13.0",
"memchr",
"once_cell",
"regex",
@@ -2245,7 +2202,6 @@ dependencies = [
"ruff_formatter",
"ruff_macros",
"ruff_python_ast",
"ruff_python_index",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_source_file",
@@ -2278,9 +2234,7 @@ name = "ruff_python_literal"
version = "0.0.0"
dependencies = [
"bitflags 2.5.0",
"hexf-parse",
"itertools 0.12.1",
"lexical-parse-float",
"itertools 0.13.0",
"ruff_python_ast",
"unic-ucd-category",
]
@@ -2294,10 +2248,9 @@ dependencies = [
"bitflags 2.5.0",
"bstr",
"insta",
"is-macro",
"itertools 0.12.1",
"memchr",
"ruff_python_ast",
"ruff_python_trivia",
"ruff_source_file",
"ruff_text_size",
"rustc-hash",
@@ -2344,7 +2297,7 @@ dependencies = [
name = "ruff_python_trivia"
version = "0.0.0"
dependencies = [
"itertools 0.12.1",
"itertools 0.13.0",
"ruff_source_file",
"ruff_text_size",
"unicode-ident",
@@ -2355,7 +2308,6 @@ name = "ruff_python_trivia_integration_tests"
version = "0.0.0"
dependencies = [
"insta",
"ruff_python_index",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_source_file",
@@ -2368,6 +2320,7 @@ version = "0.2.2"
dependencies = [
"anyhow",
"crossbeam",
"globset",
"insta",
"jod-thread",
"libc",
@@ -2377,6 +2330,7 @@ dependencies = [
"ruff_diagnostics",
"ruff_formatter",
"ruff_linter",
"ruff_notebook",
"ruff_python_ast",
"ruff_python_codegen",
"ruff_python_formatter",
@@ -2428,7 +2382,6 @@ dependencies = [
"ruff_python_formatter",
"ruff_python_index",
"ruff_python_parser",
"ruff_python_trivia",
"ruff_source_file",
"ruff_text_size",
"ruff_workspace",
@@ -2449,7 +2402,7 @@ dependencies = [
"globset",
"ignore",
"is-macro",
"itertools 0.12.1",
"itertools 0.13.0",
"log",
"matchit",
"path-absolutize",
@@ -2555,9 +2508,9 @@ dependencies = [
[[package]]
name = "schemars"
version = "0.8.19"
version = "0.8.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fc6e7ed6919cb46507fb01ff1654309219f62b4d603822501b0b80d42f6f21ef"
checksum = "09c024468a378b7e36765cd36702b7a90cc3cba11654f6685c8f233408e89e92"
dependencies = [
"dyn-clone",
"schemars_derive",
@@ -2567,9 +2520,9 @@ dependencies = [
[[package]]
name = "schemars_derive"
version = "0.8.19"
version = "0.8.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "185f2b7aa7e02d418e453790dde16890256bbd2bcd04b7dc5348811052b53f49"
checksum = "b1eee588578aff73f856ab961cd2f79e36bc45d7ded33a7562adba4667aecc0e"
dependencies = [
"proc-macro2",
"quote",
@@ -2597,9 +2550,9 @@ checksum = "1c107b6f4780854c8b126e228ea8869f4d7b71260f962fefb57b996b8959ba6b"
[[package]]
name = "serde"
version = "1.0.201"
version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "780f1cebed1629e4753a1a38a3c72d30b97ec044f0aef68cb26650a3c5cf363c"
checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
dependencies = [
"serde_derive",
]
@@ -2617,9 +2570,9 @@ dependencies = [
[[package]]
name = "serde_derive"
version = "1.0.201"
version = "1.0.203"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c5e405930b9796f1c00bee880d03fc7e0bb4b9a11afc776885ffe84320da2865"
checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
dependencies = [
"proc-macro2",
"quote",
@@ -2661,9 +2614,9 @@ dependencies = [
[[package]]
name = "serde_spanned"
version = "0.6.5"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb3622f419d1296904700073ea6cc23ad690adbd66f13ea683df73298736f0c1"
checksum = "79e674e01f999af37c49f70a6ede167a8a60b2503e56c5599532a65baa5969a0"
dependencies = [
"serde",
]
@@ -2736,17 +2689,11 @@ version = "1.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3c5e1a9a646d36c3599cd173a41282daf47c44583ad367b8e6837255952e5c67"
[[package]]
name = "smawk"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b7c388c1b5e93756d0c740965c41e8822f866621d41acbdf6336a6a168f8840c"
[[package]]
name = "smol_str"
version = "0.2.1"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6845563ada680337a52d43bb0b29f396f2d911616f6573012645b9e3d048a49"
checksum = "dd538fb6910ac1099850255cf94a94df6551fbdd602454387d0adb2d1ca6dead"
dependencies = [
"serde",
]
@@ -2795,11 +2742,11 @@ dependencies = [
[[package]]
name = "strum_macros"
version = "0.26.2"
version = "0.26.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c6cf59daf282c0a494ba14fd21610a0325f9f90ec9d1231dea26bcb1d696c946"
checksum = "f7993a8e3a9e88a00351486baae9522c91b123a088f76469e5bd5cc17198ea87"
dependencies = [
"heck 0.4.1",
"heck",
"proc-macro2",
"quote",
"rustversion",
@@ -2814,9 +2761,9 @@ checksum = "81cdd64d312baedb58e21336b31bc043b77e01cc99033ce76ef539f78e965ebc"
[[package]]
name = "syn"
version = "2.0.63"
version = "2.0.66"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf5be731623ca1a1fb7d8be6f261a3be6d3e2337b8a1f97be944d020c8fcb704"
checksum = "c42f3f41a2de00b01c0aaad383c5a45241efc8b2d1eda5661812fda5f3cdcff5"
dependencies = [
"proc-macro2",
"quote",
@@ -2891,31 +2838,20 @@ dependencies = [
"test-case-core",
]
[[package]]
name = "textwrap"
version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23d434d3f8967a09480fb04132ebe0a3e088c173e6d0ee7897abbdf4eab0f8b9"
dependencies = [
"smawk",
"unicode-linebreak",
"unicode-width",
]
[[package]]
name = "thiserror"
version = "1.0.60"
version = "1.0.61"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "579e9083ca58dd9dcf91a9923bb9054071b9ebbd800b342194c9feb0ee89fc18"
checksum = "c546c80d6be4bc6a00c0f01730c08df82eaa7a7a61f11d656526506112cc1709"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "1.0.60"
version = "1.0.61"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2470041c06ec3ac1ab38d0356a6119054dedaea53e12fbefc0de730a1c08524"
checksum = "46c3384250002a6d5af4d114f2845d37b57521033f30d5c3f46c4d70e1197533"
dependencies = [
"proc-macro2",
"quote",
@@ -2979,9 +2915,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "toml"
version = "0.8.12"
version = "0.8.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e9dd1545e8208b4a5af1aa9bbd0b4cf7e9ea08fabc5d0a5c67fcaafa17433aa3"
checksum = "a4e43f8cc456c9704c851ae29c67e17ef65d2c30017c17a9765b89c382dc8bba"
dependencies = [
"serde",
"serde_spanned",
@@ -2991,18 +2927,18 @@ dependencies = [
[[package]]
name = "toml_datetime"
version = "0.6.5"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3550f4e9685620ac18a50ed434eb3aec30db8ba93b0287467bca5826ea25baf1"
checksum = "4badfd56924ae69bcc9039335b2e017639ce3f9b001c393c1b2d1ef846ce2cbf"
dependencies = [
"serde",
]
[[package]]
name = "toml_edit"
version = "0.22.12"
version = "0.22.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3328d4f68a705b2a4498da1d580585d39a6510f98318a2cec3018a7ec61ddef"
checksum = "c127785850e8c20836d49732ae6abfa47616e60bf9d9f57c43c250361a9db96c"
dependencies = [
"indexmap",
"serde",
@@ -3087,11 +3023,11 @@ dependencies = [
[[package]]
name = "tracing-tree"
version = "0.3.0"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65139ecd2c3f6484c3b99bc01c77afe21e95473630747c7aca525e78b0666675"
checksum = "b56c62d2c80033cb36fae448730a2f2ef99410fe3ecbffc916681a32f6807dbe"
dependencies = [
"nu-ansi-term 0.49.0",
"nu-ansi-term 0.50.0",
"tracing-core",
"tracing-log",
"tracing-subscriber",
@@ -3157,12 +3093,6 @@ version = "1.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b"
[[package]]
name = "unicode-linebreak"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b09c83c3c29d37506a3e260c08c03743a6bb66a9cd432c6934ab501a190571f"
[[package]]
name = "unicode-normalization"
version = "0.1.23"

View File

@@ -4,7 +4,7 @@ resolver = "2"
[workspace.package]
edition = "2021"
rust-version = "1.71"
rust-version = "1.74"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -62,7 +62,6 @@ filetime = { version = "0.2.23" }
glob = { version = "0.3.1" }
globset = { version = "0.4.14" }
hashbrown = "0.14.3"
hexf-parse = { version = "0.2.1" }
ignore = { version = "0.4.22" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
@@ -73,15 +72,14 @@ insta = { version = "1.35.1", features = ["filters", "glob"] }
insta-cmd = { version = "0.6.0" }
is-macro = { version = "0.3.5" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.12.1" }
itertools = { version = "0.13.0" }
js-sys = { version = "0.3.69" }
jod-thread = { version = "0.1.2" }
lexical-parse-float = { version = "0.8.0", features = ["format"] }
libc = { version = "0.2.153" }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
lsp-server = { version = "0.7.6" }
lsp-types = { version = "0.95.0", features = ["proposed"] }
lsp-types = { git = "https://github.com/astral-sh/lsp-types.git", rev = "3512a9f", features = ["proposed"] }
matchit = { version = "0.8.1" }
memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }

View File

@@ -152,7 +152,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.4.4
rev: v0.4.8
hooks:
# Run the linter.
- id: ruff
@@ -408,6 +408,7 @@ Ruff is used by a number of major open-source projects and companies, including:
- [Dagster](https://github.com/dagster-io/dagster)
- Databricks ([MLflow](https://github.com/mlflow/mlflow))
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Godot](https://github.com/godotengine/godot)
- [Gradio](https://github.com/gradio-app/gradio)
- [Great Expectations](https://github.com/great-expectations/great_expectations)
- [HTTPX](https://github.com/encode/httpx)
@@ -433,6 +434,7 @@ Ruff is used by a number of major open-source projects and companies, including:
- Modern Treasury ([Python SDK](https://github.com/Modern-Treasury/modern-treasury-python))
- Mozilla ([Firefox](https://github.com/mozilla/gecko-dev))
- [Mypy](https://github.com/python/mypy)
- [Nautobot](https://github.com/nautobot/nautobot)
- Netflix ([Dispatch](https://github.com/Netflix/dispatch))
- [Neon](https://github.com/neondatabase/neon)
- [Nokia](https://nokia.com/)

View File

@@ -35,7 +35,6 @@ tracing-subscriber = { workspace = true }
tracing-tree = { workspace = true }
[dev-dependencies]
textwrap = { version = "0.16.1" }
tempfile = { workspace = true }
[lints]

View File

@@ -275,10 +275,7 @@ pub struct TypedNodeKey<N: AstNode> {
impl<N: AstNode> TypedNodeKey<N> {
pub fn from_node(node: &N) -> Self {
let inner = NodeKey {
kind: node.as_any_node_ref().kind(),
range: node.range(),
};
let inner = NodeKey::from_node(node.as_any_node_ref());
Self {
inner,
_marker: PhantomData,
@@ -352,6 +349,12 @@ pub struct NodeKey {
}
impl NodeKey {
pub fn from_node(node: AnyNodeRef) -> Self {
NodeKey {
kind: node.kind(),
range: node.range(),
}
}
pub fn resolve<'a>(&self, root: AnyNodeRef<'a>) -> Option<AnyNodeRef<'a>> {
// We need to do a binary search here. Only traverse into a node if the range is within the node
let mut visitor = FindNodeKeyVisitor {

View File

@@ -9,9 +9,9 @@ use crate::files::FileId;
use crate::lint::{LintSemanticStorage, LintSyntaxStorage};
use crate::module::ModuleResolver;
use crate::parse::ParsedStorage;
use crate::semantic::SemanticIndexStorage;
use crate::semantic::TypeStore;
use crate::source::SourceStorage;
use crate::symbols::SymbolTablesStorage;
use crate::types::TypeStore;
mod jars;
mod query;
@@ -125,7 +125,7 @@ pub struct SourceJar {
#[derive(Debug, Default)]
pub struct SemanticJar {
pub module_resolver: ModuleResolver,
pub symbol_tables: SymbolTablesStorage,
pub semantic_indices: SemanticIndexStorage,
pub type_store: TypeStore,
}

View File

@@ -17,9 +17,8 @@ pub mod lint;
pub mod module;
mod parse;
pub mod program;
mod semantic;
pub mod source;
mod symbols;
mod types;
pub mod watch;
pub(crate) type FxDashMap<K, V> = dashmap::DashMap<K, V, BuildHasherDefault<FxHasher>>;

View File

@@ -5,17 +5,18 @@ use std::time::Duration;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{ModModule, StringLiteral};
use ruff_python_parser::Parsed;
use crate::cache::KeyValueCache;
use crate::db::{LintDb, LintJar, QueryResult};
use crate::files::FileId;
use crate::module::ModuleName;
use crate::parse::{parse, Parsed};
use crate::source::{source_text, Source};
use crate::symbols::{
resolve_global_symbol, symbol_table, Definition, GlobalSymbolId, SymbolId, SymbolTable,
use crate::module::{resolve_module, ModuleName};
use crate::parse::parse;
use crate::semantic::{infer_definition_type, infer_symbol_public_type, Type};
use crate::semantic::{
resolve_global_symbol, semantic_index, Definition, GlobalSymbolId, SemanticIndex, SymbolId,
};
use crate::types::{infer_definition_type, infer_symbol_type, Type};
use crate::source::{source_text, Source};
#[tracing::instrument(level = "debug", skip(db))]
pub(crate) fn lint_syntax(db: &dyn LintDb, file_id: FileId) -> QueryResult<Diagnostics> {
@@ -40,7 +41,7 @@ pub(crate) fn lint_syntax(db: &dyn LintDb, file_id: FileId) -> QueryResult<Diagn
let parsed = parse(db.upcast(), *file_id)?;
if parsed.errors().is_empty() {
let ast = parsed.ast();
let ast = parsed.syntax();
let mut visitor = SyntaxLintVisitor {
diagnostics,
@@ -81,13 +82,13 @@ pub(crate) fn lint_semantic(db: &dyn LintDb, file_id: FileId) -> QueryResult<Dia
storage.get(&file_id, |file_id| {
let source = source_text(db.upcast(), *file_id)?;
let parsed = parse(db.upcast(), *file_id)?;
let symbols = symbol_table(db.upcast(), *file_id)?;
let semantic_index = semantic_index(db.upcast(), *file_id)?;
let context = SemanticLintContext {
file_id: *file_id,
source,
parsed,
symbols,
parsed: &parsed,
semantic_index,
db,
diagnostics: RefCell::new(Vec::new()),
};
@@ -101,17 +102,17 @@ pub(crate) fn lint_semantic(db: &dyn LintDb, file_id: FileId) -> QueryResult<Dia
fn lint_unresolved_imports(context: &SemanticLintContext) -> QueryResult<()> {
// TODO: Consider iterating over the dependencies (imports) only instead of all definitions.
for (symbol, definition) in context.symbols().all_definitions() {
for (symbol, definition) in context.semantic_index().symbol_table().all_definitions() {
match definition {
Definition::Import(import) => {
let ty = context.infer_symbol_type(symbol)?;
let ty = context.infer_symbol_public_type(symbol)?;
if ty.is_unknown() {
context.push_diagnostic(format!("Unresolved module {}", import.module));
}
}
Definition::ImportFrom(import) => {
let ty = context.infer_symbol_type(symbol)?;
let ty = context.infer_symbol_public_type(symbol)?;
if ty.is_unknown() {
let module_name = import.module().map(Deref::deref).unwrap_or_default();
@@ -144,16 +145,14 @@ fn lint_bad_overrides(context: &SemanticLintContext) -> QueryResult<()> {
// TODO we should have a special marker on the real typing module (from typeshed) so if you
// have your own "typing" module in your project, we don't consider it THE typing module (and
// same for other stdlib modules that our lint rules care about)
let Some(typing_override) =
resolve_global_symbol(context.db.upcast(), ModuleName::new("typing"), "override")?
else {
let Some(typing_override) = context.resolve_global_symbol("typing", "override")? else {
// TODO once we bundle typeshed, this should be unreachable!()
return Ok(());
};
// TODO we should maybe index definitions by type instead of iterating all, or else iterate all
// just once, match, and branch to all lint rules that care about a type of definition
for (symbol, definition) in context.symbols().all_definitions() {
for (symbol, definition) in context.semantic_index().symbol_table().all_definitions() {
if !matches!(definition, Definition::FunctionDef(_)) {
continue;
}
@@ -194,8 +193,8 @@ fn lint_bad_overrides(context: &SemanticLintContext) -> QueryResult<()> {
pub struct SemanticLintContext<'a> {
file_id: FileId,
source: Source,
parsed: Parsed,
symbols: Arc<SymbolTable>,
parsed: &'a Parsed<ModModule>,
semantic_index: Arc<SemanticIndex>,
db: &'a dyn LintDb,
diagnostics: RefCell<Vec<String>>,
}
@@ -209,16 +208,16 @@ impl<'a> SemanticLintContext<'a> {
self.file_id
}
pub fn ast(&self) -> &ModModule {
self.parsed.ast()
pub fn ast(&self) -> &'a ModModule {
self.parsed.syntax()
}
pub fn symbols(&self) -> &SymbolTable {
&self.symbols
pub fn semantic_index(&self) -> &SemanticIndex {
&self.semantic_index
}
pub fn infer_symbol_type(&self, symbol_id: SymbolId) -> QueryResult<Type> {
infer_symbol_type(
pub fn infer_symbol_public_type(&self, symbol_id: SymbolId) -> QueryResult<Type> {
infer_symbol_public_type(
self.db.upcast(),
GlobalSymbolId {
file_id: self.file_id,
@@ -234,6 +233,18 @@ impl<'a> SemanticLintContext<'a> {
pub fn extend_diagnostics(&mut self, diagnostics: impl IntoIterator<Item = String>) {
self.diagnostics.get_mut().extend(diagnostics);
}
pub fn resolve_global_symbol(
&self,
module: &str,
symbol_name: &str,
) -> QueryResult<Option<GlobalSymbolId>> {
let Some(module) = resolve_module(self.db.upcast(), ModuleName::new(module))? else {
return Ok(None);
};
resolve_global_symbol(self.db.upcast(), module, symbol_name)
}
}
#[derive(Debug)]

View File

@@ -9,14 +9,18 @@ use smol_str::SmolStr;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::files::FileId;
use crate::symbols::Dependency;
use crate::semantic::Dependency;
use crate::FxDashMap;
/// ID uniquely identifying a module.
/// Representation of a Python module.
///
/// The inner type wrapped by this struct is a unique identifier for the module
/// that is used by the struct's methods to lazily query information about the module.
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct Module(u32);
impl Module {
/// Return the absolute name of the module (e.g. `foo.bar`)
pub fn name(&self, db: &dyn SemanticDb) -> QueryResult<ModuleName> {
let jar: &SemanticJar = db.jar()?;
let modules = &jar.module_resolver;
@@ -24,6 +28,7 @@ impl Module {
Ok(modules.modules.get(self).unwrap().name.clone())
}
/// Return the path to the source code that defines this module
pub fn path(&self, db: &dyn SemanticDb) -> QueryResult<ModulePath> {
let jar: &SemanticJar = db.jar()?;
let modules = &jar.module_resolver;
@@ -31,6 +36,7 @@ impl Module {
Ok(modules.modules.get(self).unwrap().path.clone())
}
/// Determine whether this module is a single-file module or a package
pub fn kind(&self, db: &dyn SemanticDb) -> QueryResult<ModuleKind> {
let jar: &SemanticJar = db.jar()?;
let modules = &jar.module_resolver;
@@ -38,6 +44,16 @@ impl Module {
Ok(modules.modules.get(self).unwrap().kind)
}
/// Attempt to resolve a dependency of this module to an absolute [`ModuleName`].
///
/// A dependency could be either absolute (e.g. the `foo` dependency implied by `from foo import bar`)
/// or relative to this module (e.g. the `.foo` dependency implied by `from .foo import bar`)
///
/// - Returns an error if the query failed.
/// - Returns `Ok(None)` if the query succeeded,
/// but the dependency refers to a module that does not exist.
/// - Returns `Ok(Some(ModuleName))` if the query succeeded,
/// and the dependency refers to a module that exists.
pub fn resolve_dependency(
&self,
db: &dyn SemanticDb,
@@ -87,7 +103,8 @@ impl Module {
/// A module name, e.g. `foo.bar`.
///
/// Always normalized to the absolute form (never a relative module name).
/// Always normalized to the absolute form
/// (never a relative module name, i.e., never `.foo`).
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ModuleName(smol_str::SmolStr);
@@ -124,10 +141,13 @@ impl ModuleName {
Some(Self(name))
}
/// An iterator over the components of the module name:
/// `foo.bar.baz` -> `foo`, `bar`, `baz`
pub fn components(&self) -> impl DoubleEndedIterator<Item = &str> {
self.0.split('.')
}
/// The name of this module's immediate parent, if it has a parent
pub fn parent(&self) -> Option<ModuleName> {
let (_, parent) = self.0.rsplit_once('.')?;
@@ -159,9 +179,10 @@ impl std::fmt::Display for ModuleName {
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub enum ModuleKind {
/// A single-file module (e.g. `foo.py` or `foo.pyi`)
Module,
/// A python package (a `__init__.py` or `__init__.pyi` file)
/// A python package (`foo/__init__.py` or `foo/__init__.pyi`)
Package,
}
@@ -181,10 +202,12 @@ impl ModuleSearchPath {
}
}
/// Determine whether this is a first-party, third-party or standard-library search path
pub fn kind(&self) -> ModuleSearchPathKind {
self.inner.kind
}
/// Return the location of the search path on the file system
pub fn path(&self) -> &Path {
&self.inner.path
}
@@ -231,9 +254,11 @@ pub struct ModuleData {
// Queries
//////////////////////////////////////////////////////
/// Resolves a module name to a module id
/// TODO: This would not work with Salsa because `ModuleName` isn't an ingredient and, therefore, cannot be used as part of a query.
/// For this to work with salsa, it would be necessary to intern all `ModuleName`s.
/// Resolves a module name to a module.
///
/// TODO: This would not work with Salsa because `ModuleName` isn't an ingredient
/// and, therefore, cannot be used as part of a query.
/// For this to work with salsa, it would be necessary to intern all `ModuleName`s.
#[tracing::instrument(level = "debug", skip(db))]
pub fn resolve_module(db: &dyn SemanticDb, name: ModuleName) -> QueryResult<Option<Module>> {
let jar: &SemanticJar = db.jar()?;
@@ -255,7 +280,7 @@ pub fn resolve_module(db: &dyn SemanticDb, name: ModuleName) -> QueryResult<Opti
let file_id = db.file_id(&normalized);
let path = ModulePath::new(root_path.clone(), file_id);
let id = Module(
let module = Module(
modules
.next_module_id
.fetch_add(1, std::sync::atomic::Ordering::Relaxed),
@@ -263,7 +288,7 @@ pub fn resolve_module(db: &dyn SemanticDb, name: ModuleName) -> QueryResult<Opti
modules
.modules
.insert(id, Arc::from(ModuleData { name, path, kind }));
.insert(module, Arc::from(ModuleData { name, path, kind }));
// A path can map to multiple modules because of symlinks:
// ```
@@ -272,33 +297,33 @@ pub fn resolve_module(db: &dyn SemanticDb, name: ModuleName) -> QueryResult<Opti
// ```
// Here, both `foo` and `bar` resolve to the same module but through different paths.
// That's why we need to insert the absolute path and not the normalized path here.
let absolute_id = if absolute_path == normalized {
let absolute_file_id = if absolute_path == normalized {
file_id
} else {
db.file_id(&absolute_path)
};
modules.by_file.insert(absolute_id, id);
modules.by_file.insert(absolute_file_id, module);
entry.insert_entry(id);
entry.insert_entry(module);
Ok(Some(id))
Ok(Some(module))
}
}
}
/// Resolves the module id for the given path.
/// Resolves the module for the given path.
///
/// Returns `None` if the path is not a module in `sys.path`.
/// Returns `None` if the path is not a module locatable via `sys.path`.
#[tracing::instrument(level = "debug", skip(db))]
pub fn path_to_module(db: &dyn SemanticDb, path: &Path) -> QueryResult<Option<Module>> {
let file = db.file_id(path);
file_to_module(db, file)
}
/// Resolves the module id for the file with the given id.
/// Resolves the module for the file with the given id.
///
/// Returns `None` if the file is not a module in `sys.path`.
/// Returns `None` if the file is not a module locatable via `sys.path`.
#[tracing::instrument(level = "debug", skip(db))]
pub fn file_to_module(db: &dyn SemanticDb, file: FileId) -> QueryResult<Option<Module>> {
let jar: &SemanticJar = db.jar()?;
@@ -325,12 +350,12 @@ pub fn file_to_module(db: &dyn SemanticDb, file: FileId) -> QueryResult<Option<M
// Resolve the module name to see if Python would resolve the name to the same path.
// If it doesn't, then that means that multiple modules have the same name in different
// root paths, but that the module corresponding to the passed path is in a lower priority path,
// root paths, but that the module corresponding to the passed path is in a lower priority search path,
// in which case we ignore it.
let Some(module_id) = resolve_module(db, module_name)? else {
let Some(module) = resolve_module(db, module_name)? else {
return Ok(None);
};
let module_path = module_id.path(db)?;
let module_path = module.path(db)?;
if module_path.root() == &root_path {
let Ok(normalized) = path.canonicalize() else {
@@ -350,7 +375,7 @@ pub fn file_to_module(db: &dyn SemanticDb, file: FileId) -> QueryResult<Option<M
}
// Path has been inserted by `resolved`
Ok(Some(module_id))
Ok(Some(module))
} else {
// This path is for a module with the same name but in a module search path with a lower priority.
// Ignore it.
@@ -369,19 +394,22 @@ pub fn set_module_search_paths(db: &mut dyn SemanticDb, search_paths: Vec<Module
jar.module_resolver = ModuleResolver::new(search_paths);
}
/// Adds a module to the resolver.
/// Adds a module located at `path` to the resolver.
///
/// Returns `None` if the path doesn't resolve to a module.
///
/// Returns `Some` with the id of the module and the ids of the modules that need re-resolving
/// because they were part of a namespace package and might now resolve differently.
/// Returns `Some(module, other_modules)`, where `module` is the resolved module
/// with file location `path`, and `other_modules` is a `Vec` of `ModuleData` instances.
/// Each element in `other_modules` provides information regarding a single module that needs
/// re-resolving because it was part of a namespace package and might now resolve differently.
///
/// Note: This won't work with salsa because `Path` is not an ingredient.
pub fn add_module(db: &mut dyn SemanticDb, path: &Path) -> Option<(Module, Vec<Arc<ModuleData>>)> {
// No locking is required because we're holding a mutable reference to `modules`.
// TODO This needs tests
// Note: Intentionally by-pass caching here. Module should not be in the cache yet.
// Note: Intentionally bypass caching here. Module should not be in the cache yet.
let module = path_to_module(db, path).ok()??;
// The code below is to handle the addition of `__init__.py` files.
@@ -405,15 +433,15 @@ pub fn add_module(db: &mut dyn SemanticDb, path: &Path) -> Option<(Module, Vec<A
let jar: &mut SemanticJar = db.jar_mut();
let modules = &mut jar.module_resolver;
modules.by_file.retain(|_, id| {
modules.by_file.retain(|_, module| {
if modules
.modules
.get(id)
.get(module)
.unwrap()
.name
.starts_with(&parent_name)
{
to_remove.push(*id);
to_remove.push(*module);
false
} else {
true
@@ -422,8 +450,8 @@ pub fn add_module(db: &mut dyn SemanticDb, path: &Path) -> Option<(Module, Vec<A
// TODO remove need for this vec
let mut removed = Vec::with_capacity(to_remove.len());
for id in &to_remove {
removed.push(modules.remove_module_by_id(*id));
for module in &to_remove {
removed.push(modules.remove_module(*module));
}
Some((module, removed))
@@ -436,10 +464,10 @@ pub struct ModuleResolver {
// Locking: Locking is done by acquiring a (write) lock on `by_name`. This is because `by_name` is the primary
// lookup method. Acquiring locks in any other ordering can result in deadlocks.
/// Resolves a module name to it's module id.
/// Looks up a module by name
by_name: FxDashMap<ModuleName, Module>,
/// All known modules, indexed by the module id.
/// A map of all known modules to data about those modules
modules: FxDashMap<Module, Arc<ModuleData>>,
/// Lookup from absolute path to module.
@@ -459,24 +487,27 @@ impl ModuleResolver {
}
}
pub(crate) fn remove_module(&mut self, file_id: FileId) {
/// Remove a module from the inner cache
pub(crate) fn remove_module_by_file(&mut self, file_id: FileId) {
// No locking is required because we're holding a mutable reference to `self`.
let Some((_, id)) = self.by_file.remove(&file_id) else {
let Some((_, module)) = self.by_file.remove(&file_id) else {
return;
};
self.remove_module_by_id(id);
self.remove_module(module);
}
fn remove_module_by_id(&mut self, id: Module) -> Arc<ModuleData> {
let (_, module) = self.modules.remove(&id).unwrap();
fn remove_module(&mut self, module: Module) -> Arc<ModuleData> {
let (_, module_data) = self.modules.remove(&module).unwrap();
self.by_name.remove(&module.name).unwrap();
self.by_name.remove(&module_data.name).unwrap();
// It's possible that multiple paths map to the same id. Search all other paths referencing the same module id.
self.by_file.retain(|_, current_id| *current_id != id);
// It's possible that multiple paths map to the same module.
// Search all other paths referencing the same module.
self.by_file
.retain(|_, current_module| *current_module != module);
module
module_data
}
}
@@ -505,15 +536,19 @@ impl ModulePath {
Self { root, file_id }
}
/// The search path that was used to locate the module
pub fn root(&self) -> &ModuleSearchPath {
&self.root
}
/// The file containing the source code for the module
pub fn file(&self) -> FileId {
self.file_id
}
}
/// Given a module name and a list of search paths in which to look up modules,
/// attempt to resolve the module name.
fn resolve_name(
name: &ModuleName,
search_paths: &[ModuleSearchPath],
@@ -635,7 +670,9 @@ enum PackageKind {
/// A root package or module. E.g. `foo` in `foo.bar.baz` or just `foo`.
Root,
/// A regular sub-package where the parent contains an `__init__.py`. For example `bar` in `foo.bar` when the `foo` directory contains an `__init__.py`.
/// A regular sub-package where the parent contains an `__init__.py`.
///
/// For example, `bar` in `foo.bar` when the `foo` directory contains an `__init__.py`.
Regular,
/// A sub-package in a namespace package. A namespace package is a package without an `__init__.py`.
@@ -660,7 +697,7 @@ mod tests {
path_to_module, resolve_module, set_module_search_paths, ModuleKind, ModuleName,
ModuleSearchPath, ModuleSearchPathKind,
};
use crate::symbols::Dependency;
use crate::semantic::Dependency;
struct TestCase {
temp_dir: tempfile::TempDir,


@@ -1,85 +1,33 @@
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use ruff_python_ast as ast;
use ruff_python_parser::{Mode, ParseError};
use ruff_text_size::{Ranged, TextRange};
use ruff_python_ast::ModModule;
use ruff_python_parser::Parsed;
use crate::cache::KeyValueCache;
use crate::db::{QueryResult, SourceDb};
use crate::files::FileId;
use crate::source::source_text;
#[derive(Debug, Clone, PartialEq)]
pub struct Parsed {
inner: Arc<ParsedInner>,
}
#[derive(Debug, PartialEq)]
struct ParsedInner {
ast: ast::ModModule,
errors: Vec<ParseError>,
}
impl Parsed {
fn new(ast: ast::ModModule, errors: Vec<ParseError>) -> Self {
Self {
inner: Arc::new(ParsedInner { ast, errors }),
}
}
pub(crate) fn from_text(text: &str) -> Self {
let result = ruff_python_parser::parse(text, Mode::Module);
let (module, errors) = match result {
Ok(ast::Mod::Module(module)) => (module, vec![]),
Ok(ast::Mod::Expression(expression)) => (
ast::ModModule {
range: expression.range(),
body: vec![ast::Stmt::Expr(ast::StmtExpr {
range: expression.range(),
value: expression.body,
})],
},
vec![],
),
Err(errors) => (
ast::ModModule {
range: TextRange::default(),
body: Vec::new(),
},
vec![errors],
),
};
Parsed::new(module, errors)
}
pub fn ast(&self) -> &ast::ModModule {
&self.inner.ast
}
pub fn errors(&self) -> &[ParseError] {
&self.inner.errors
}
}
#[tracing::instrument(level = "debug", skip(db))]
pub(crate) fn parse(db: &dyn SourceDb, file_id: FileId) -> QueryResult<Parsed> {
pub(crate) fn parse(db: &dyn SourceDb, file_id: FileId) -> QueryResult<Arc<Parsed<ModModule>>> {
let jar = db.jar()?;
jar.parsed.get(&file_id, |file_id| {
let source = source_text(db, *file_id)?;
Ok(Parsed::from_text(source.text()))
Ok(Arc::new(ruff_python_parser::parse_unchecked_source(
source.text(),
source.kind().into(),
)))
})
}
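A hedged sketch of consuming this cached query; `db` and `file_id` are assumed to come from surrounding program state:
// Illustrative only: repeated calls are cheap, returning the cached `Arc`.
fn count_top_level_statements(db: &dyn SourceDb, file_id: FileId) -> QueryResult<usize> {
    let parsed = parse(db, file_id)?; // Arc<Parsed<ModModule>>
    Ok(parsed.syntax().body.len())
}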
#[derive(Debug, Default)]
pub struct ParsedStorage(KeyValueCache<FileId, Parsed>);
pub struct ParsedStorage(KeyValueCache<FileId, Arc<Parsed<ModModule>>>);
impl Deref for ParsedStorage {
type Target = KeyValueCache<FileId, Parsed>;
type Target = KeyValueCache<FileId, Arc<Parsed<ModModule>>>;
fn deref(&self) -> &Self::Target {
&self.0


@@ -6,7 +6,7 @@ use crate::files::FileId;
use crate::lint::{lint_semantic, lint_syntax, Diagnostics};
use crate::module::{file_to_module, resolve_module};
use crate::program::Program;
use crate::symbols::{symbol_table, Dependency};
use crate::semantic::{semantic_index, Dependency};
impl Program {
/// Checks all open files in the workspace and its dependencies.
@@ -28,8 +28,8 @@ impl Program {
fn check_file(&self, file: FileId, context: &CheckFileContext) -> QueryResult<Diagnostics> {
self.cancelled()?;
let symbol_table = symbol_table(self, file)?;
let dependencies = symbol_table.dependencies();
let index = semantic_index(self, file)?;
let dependencies = index.symbol_table().dependencies();
if !dependencies.is_empty() {
let module = file_to_module(self, file)?;


@@ -42,8 +42,8 @@ impl Program {
let (source, semantic, lint) = self.jars_mut();
for change in aggregated_changes.iter() {
semantic.module_resolver.remove_module(change.id);
semantic.symbol_tables.remove(&change.id);
semantic.module_resolver.remove_module_by_file(change.id);
semantic.semantic_indices.remove(&change.id);
source.sources.remove(&change.id);
source.parsed.remove(&change.id);
// TODO: remove all dependent modules as well


@@ -0,0 +1,747 @@
use std::num::NonZeroU32;
use ruff_python_ast as ast;
use ruff_python_ast::visitor::preorder::PreorderVisitor;
use crate::ast_ids::{NodeKey, TypedNodeKey};
use crate::cache::KeyValueCache;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::files::FileId;
use crate::module::Module;
use crate::module::ModuleName;
use crate::parse::parse;
use crate::Name;
use flow_graph::{FlowGraph, FlowGraphBuilder, FlowNodeId, ReachableDefinitionsIterator};
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
pub(crate) use symbol_table::{Definition, Dependency, SymbolId};
use symbol_table::{
ImportDefinition, ImportFromDefinition, ScopeId, ScopeKind, SymbolFlags, SymbolTable,
SymbolTableBuilder,
};
pub(crate) use types::{infer_definition_type, infer_symbol_public_type, Type, TypeStore};
mod flow_graph;
mod symbol_table;
mod types;
#[tracing::instrument(level = "debug", skip(db))]
pub fn semantic_index(db: &dyn SemanticDb, file_id: FileId) -> QueryResult<Arc<SemanticIndex>> {
let jar: &SemanticJar = db.jar()?;
jar.semantic_indices.get(&file_id, |_| {
let parsed = parse(db.upcast(), file_id)?;
Ok(Arc::from(SemanticIndex::from_ast(parsed.syntax())))
})
}
#[tracing::instrument(level = "debug", skip(db))]
pub fn resolve_global_symbol(
db: &dyn SemanticDb,
module: Module,
name: &str,
) -> QueryResult<Option<GlobalSymbolId>> {
let file_id = module.path(db)?.file();
let symbol_table = &semantic_index(db, file_id)?.symbol_table;
let Some(symbol_id) = symbol_table.root_symbol_id_by_name(name) else {
return Ok(None);
};
Ok(Some(GlobalSymbolId { file_id, symbol_id }))
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct GlobalSymbolId {
pub(crate) file_id: FileId,
pub(crate) symbol_id: SymbolId,
}
#[derive(Debug)]
pub struct SemanticIndex {
symbol_table: SymbolTable,
flow_graph: FlowGraph,
}
impl SemanticIndex {
pub fn from_ast(module: &ast::ModModule) -> Self {
let root_scope_id = SymbolTable::root_scope_id();
let mut indexer = SemanticIndexer {
symbol_table_builder: SymbolTableBuilder::new(),
flow_graph_builder: FlowGraphBuilder::new(),
scopes: vec![ScopeState {
scope_id: root_scope_id,
current_flow_node_id: FlowGraph::start(),
}],
current_definition: None,
};
indexer.visit_body(&module.body);
indexer.finish()
}
/// Return an iterator over all definitions of `symbol_id` reachable from `use_expr`. The value
/// of `symbol_id` in `use_expr` must originate from one of the iterated definitions (or from
/// an external reassignment of the name outside of this scope).
pub fn reachable_definitions(
&self,
symbol_id: SymbolId,
use_expr: &ast::Expr,
) -> ReachableDefinitionsIterator {
ReachableDefinitionsIterator::new(
&self.flow_graph,
symbol_id,
self.flow_graph.for_expr(use_expr),
)
}
pub fn symbol_table(&self) -> &SymbolTable {
&self.symbol_table
}
}
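A hedged usage sketch tying `semantic_index` and `reachable_definitions` together; it assumes a `SemanticDb` implementation and a `use_expr` that was visited during indexing:
// Illustrative only: collect the definitions of `name` reachable at `use_expr`.
fn defs_at_use(
    db: &dyn SemanticDb,
    file_id: FileId,
    name: &str,
    use_expr: &ast::Expr,
) -> QueryResult<Vec<Definition>> {
    let index = semantic_index(db, file_id)?;
    let Some(symbol_id) = index.symbol_table().root_symbol_id_by_name(name) else {
        return Ok(Vec::new());
    };
    Ok(index.reachable_definitions(symbol_id, use_expr).collect())
}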
#[derive(Debug)]
struct ScopeState {
scope_id: ScopeId,
current_flow_node_id: FlowNodeId,
}
#[derive(Debug)]
struct SemanticIndexer {
symbol_table_builder: SymbolTableBuilder,
flow_graph_builder: FlowGraphBuilder,
scopes: Vec<ScopeState>,
/// the definition whose target(s) we are currently walking
current_definition: Option<Definition>,
}
impl SemanticIndexer {
pub(crate) fn finish(self) -> SemanticIndex {
let SemanticIndexer {
flow_graph_builder,
symbol_table_builder,
..
} = self;
SemanticIndex {
flow_graph: flow_graph_builder.finish(),
symbol_table: symbol_table_builder.finish(),
}
}
fn set_current_flow_node(&mut self, new_flow_node_id: FlowNodeId) {
let scope_state = self.scopes.last_mut().expect("scope stack is never empty");
scope_state.current_flow_node_id = new_flow_node_id;
}
fn current_flow_node(&self) -> FlowNodeId {
self.scopes
.last()
.expect("scope stack is never empty")
.current_flow_node_id
}
fn add_or_update_symbol(&mut self, identifier: &str, flags: SymbolFlags) -> SymbolId {
self.symbol_table_builder
.add_or_update_symbol(self.cur_scope(), identifier, flags)
}
fn add_or_update_symbol_with_def(
&mut self,
identifier: &str,
definition: Definition,
) -> SymbolId {
let symbol_id = self.add_or_update_symbol(identifier, SymbolFlags::IS_DEFINED);
self.symbol_table_builder
.add_definition(symbol_id, definition.clone());
let new_flow_node_id =
self.flow_graph_builder
.add_definition(symbol_id, definition, self.current_flow_node());
self.set_current_flow_node(new_flow_node_id);
symbol_id
}
fn push_scope(
&mut self,
name: &str,
kind: ScopeKind,
definition: Option<Definition>,
defining_symbol: Option<SymbolId>,
) -> ScopeId {
let scope_id = self.symbol_table_builder.add_child_scope(
self.cur_scope(),
name,
kind,
definition,
defining_symbol,
);
self.scopes.push(ScopeState {
scope_id,
current_flow_node_id: FlowGraph::start(),
});
scope_id
}
fn pop_scope(&mut self) -> ScopeId {
self.scopes
.pop()
.expect("Scope stack should never be empty")
.scope_id
}
fn cur_scope(&self) -> ScopeId {
self.scopes
.last()
.expect("Scope stack should never be empty")
.scope_id
}
fn record_scope_for_node(&mut self, node_key: NodeKey, scope_id: ScopeId) {
self.symbol_table_builder
.record_scope_for_node(node_key, scope_id);
}
fn with_type_params(
&mut self,
name: &str,
params: &Option<Box<ast::TypeParams>>,
definition: Option<Definition>,
defining_symbol: Option<SymbolId>,
nested: impl FnOnce(&mut Self) -> ScopeId,
) -> ScopeId {
if let Some(type_params) = params {
self.push_scope(name, ScopeKind::Annotation, definition, defining_symbol);
for type_param in &type_params.type_params {
let name = match type_param {
ast::TypeParam::TypeVar(ast::TypeParamTypeVar { name, .. }) => name,
ast::TypeParam::ParamSpec(ast::TypeParamParamSpec { name, .. }) => name,
ast::TypeParam::TypeVarTuple(ast::TypeParamTypeVarTuple { name, .. }) => name,
};
self.add_or_update_symbol(name, SymbolFlags::IS_DEFINED);
}
}
let scope_id = nested(self);
if params.is_some() {
self.pop_scope();
}
scope_id
}
}
impl PreorderVisitor<'_> for SemanticIndexer {
fn visit_expr(&mut self, expr: &ast::Expr) {
if let ast::Expr::Name(ast::ExprName { id, ctx, .. }) = expr {
let flags = match ctx {
ast::ExprContext::Load => SymbolFlags::IS_USED,
ast::ExprContext::Store => SymbolFlags::IS_DEFINED,
ast::ExprContext::Del => SymbolFlags::IS_DEFINED,
ast::ExprContext::Invalid => SymbolFlags::empty(),
};
self.add_or_update_symbol(id, flags);
if flags.contains(SymbolFlags::IS_DEFINED) {
if let Some(curdef) = self.current_definition.clone() {
self.add_or_update_symbol_with_def(id, curdef);
}
}
}
self.flow_graph_builder
.record_expr(expr, self.current_flow_node());
ast::visitor::preorder::walk_expr(self, expr);
}
fn visit_stmt(&mut self, stmt: &ast::Stmt) {
// TODO need to capture more definition statements here
match stmt {
ast::Stmt::ClassDef(node) => {
let node_key = TypedNodeKey::from_node(node);
let def = Definition::ClassDef(node_key.clone());
let symbol_id = self.add_or_update_symbol_with_def(&node.name, def.clone());
for decorator in &node.decorator_list {
self.visit_decorator(decorator);
}
let scope_id = self.with_type_params(
&node.name,
&node.type_params,
Some(def.clone()),
Some(symbol_id),
|indexer| {
if let Some(arguments) = &node.arguments {
indexer.visit_arguments(arguments);
}
let scope_id = indexer.push_scope(
&node.name,
ScopeKind::Class,
Some(def.clone()),
Some(symbol_id),
);
indexer.visit_body(&node.body);
indexer.pop_scope();
scope_id
},
);
self.record_scope_for_node(*node_key.erased(), scope_id);
}
ast::Stmt::FunctionDef(node) => {
let node_key = TypedNodeKey::from_node(node);
let def = Definition::FunctionDef(node_key.clone());
let symbol_id = self.add_or_update_symbol_with_def(&node.name, def.clone());
for decorator in &node.decorator_list {
self.visit_decorator(decorator);
}
let scope_id = self.with_type_params(
&node.name,
&node.type_params,
Some(def.clone()),
Some(symbol_id),
|indexer| {
indexer.visit_parameters(&node.parameters);
for expr in &node.returns {
indexer.visit_annotation(expr);
}
let scope_id = indexer.push_scope(
&node.name,
ScopeKind::Function,
Some(def.clone()),
Some(symbol_id),
);
indexer.visit_body(&node.body);
indexer.pop_scope();
scope_id
},
);
self.record_scope_for_node(*node_key.erased(), scope_id);
}
ast::Stmt::Import(ast::StmtImport { names, .. }) => {
for alias in names {
let symbol_name = if let Some(asname) = &alias.asname {
asname.id.as_str()
} else {
alias.name.id.split('.').next().unwrap()
};
let module = ModuleName::new(&alias.name.id);
let def = Definition::Import(ImportDefinition {
module: module.clone(),
});
self.add_or_update_symbol_with_def(symbol_name, def);
self.symbol_table_builder
.add_dependency(Dependency::Module(module));
}
}
ast::Stmt::ImportFrom(ast::StmtImportFrom {
module,
names,
level,
..
}) => {
let module = module.as_ref().map(|m| ModuleName::new(&m.id));
for alias in names {
let symbol_name = if let Some(asname) = &alias.asname {
asname.id.as_str()
} else {
alias.name.id.as_str()
};
let def = Definition::ImportFrom(ImportFromDefinition {
module: module.clone(),
name: Name::new(&alias.name.id),
level: *level,
});
self.add_or_update_symbol_with_def(symbol_name, def);
}
let dependency = if let Some(module) = module {
match NonZeroU32::new(*level) {
Some(level) => Dependency::Relative {
level,
module: Some(module),
},
None => Dependency::Module(module),
}
} else {
Dependency::Relative {
level: NonZeroU32::new(*level)
.expect("Import without a module to have a level > 0"),
module,
}
};
self.symbol_table_builder.add_dependency(dependency);
}
ast::Stmt::Assign(node) => {
debug_assert!(self.current_definition.is_none());
self.current_definition =
Some(Definition::Assignment(TypedNodeKey::from_node(node)));
ast::visitor::preorder::walk_stmt(self, stmt);
self.current_definition = None;
}
ast::Stmt::If(node) => {
// we visit the if "test" condition first regardless
self.visit_expr(&node.test);
// create branch node: does the if test pass or not?
let if_branch = self.flow_graph_builder.add_branch(self.current_flow_node());
// visit the body of the `if` clause
self.set_current_flow_node(if_branch);
self.visit_body(&node.body);
// Flow node for the last if/elif condition branch; represents the "no branch
// taken yet" possibility (where "taking a branch" means that the condition in an
// if or elif evaluated to true and control flow went into that clause).
let mut prior_branch = if_branch;
// Flow node for the state after the prior if/elif/else clause; represents "we have
// taken one of the branches up to this point." Initially set to the post-if-clause
// state, later will be set to the phi node joining that possible path with the
// possibility that we took a later if/elif/else clause instead.
let mut post_prior_clause = self.current_flow_node();
// Flag to mark whether the final clause is an "else" -- if so, the "match no
// clauses" path is impossible, and we have to go through one of the clauses.
let mut last_branch_is_else = false;
for clause in &node.elif_else_clauses {
if clause.test.is_some() {
// This is an elif clause. Create a new branch node. Its predecessor is the
// previous branch node, because we can only take one branch in an entire
// if/elif/else chain, so if we take this branch, it can only be because we
// didn't take the previous one.
prior_branch = self.flow_graph_builder.add_branch(prior_branch);
self.set_current_flow_node(prior_branch);
} else {
// This is an else clause. No need to create a branch node: there's no
// branch here; if we haven't taken any previous branch, we definitely go
// into the "else" clause.
self.set_current_flow_node(prior_branch);
last_branch_is_else = true;
}
self.visit_elif_else_clause(clause);
// Update `post_prior_clause` to a new phi node joining the possibility that we
// took any of the previous branches with the possibility that we took the one
// just visited.
post_prior_clause = self
.flow_graph_builder
.add_phi(self.current_flow_node(), post_prior_clause);
}
if !last_branch_is_else {
// Final branch was not an "else", which means it's possible we took zero
// branches in the entire if/elif chain, so we need one more phi node to join
// the "no branches taken" possibility.
post_prior_clause = self
.flow_graph_builder
.add_phi(post_prior_clause, prior_branch);
}
// Onward, with current flow node set to our final Phi node.
self.set_current_flow_node(post_prior_clause);
}
_ => {
ast::visitor::preorder::walk_stmt(self, stmt);
}
}
}
}
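To make the branch/phi wiring concrete, here is a hedged, hand-rolled mirror of the flow graph the `ast::Stmt::If` arm builds for `if cond: x = 1` / `else: x = 2`; the symbol id and the use of `Definition::None` as a stand-in definition are illustrative:
// Illustrative only; mirrors the two-way `if`/`else` case above.
fn demo_if_else_flow(symbol_x: SymbolId) -> FlowGraph {
    let mut builder = FlowGraphBuilder::new();
    // Branch node for the `if` test.
    let if_branch = builder.add_branch(FlowGraph::start());
    // The `if` body defines `x`.
    let then_def = builder.add_definition(symbol_x, Definition::None, if_branch);
    // The `else` clause reuses `prior_branch` (no new branch node) and defines `x`.
    let else_def = builder.add_definition(symbol_x, Definition::None, if_branch);
    // One phi joins the two paths; because the final clause is an `else`,
    // no extra phi for the "no branch taken" possibility is needed.
    let _join = builder.add_phi(else_def, then_def);
    builder.finish()
}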
#[derive(Debug, Default)]
pub struct SemanticIndexStorage(KeyValueCache<FileId, Arc<SemanticIndex>>);
impl Deref for SemanticIndexStorage {
type Target = KeyValueCache<FileId, Arc<SemanticIndex>>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for SemanticIndexStorage {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[cfg(test)]
mod tests {
use crate::semantic::symbol_table::{Symbol, SymbolIterator};
use ruff_python_ast as ast;
use ruff_python_ast::ModModule;
use ruff_python_parser::{Mode, Parsed};
use super::{Definition, ScopeKind, SemanticIndex, SymbolId};
fn parse(code: &str) -> Parsed<ModModule> {
ruff_python_parser::parse_unchecked(code, Mode::Module)
.try_into_module()
.unwrap()
}
fn names<I>(it: SymbolIterator<I>) -> Vec<&str>
where
I: Iterator<Item = SymbolId>,
{
let mut symbols: Vec<_> = it.map(Symbol::name).collect();
symbols.sort_unstable();
symbols
}
#[test]
fn empty() {
let parsed = parse("");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()).len(), 0);
}
#[test]
fn simple() {
let parsed = parse("x");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("x").unwrap())
.len(),
0
);
}
#[test]
fn annotation_only() {
let parsed = parse("x: int");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["int", "x"]);
// TODO record definition
}
#[test]
fn import() {
let parsed = parse("import foo");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("foo").unwrap())
.len(),
1
);
}
#[test]
fn import_sub() {
let parsed = parse("import foo.bar");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo"]);
}
#[test]
fn import_as() {
let parsed = parse("import foo.bar as baz");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["baz"]);
}
#[test]
fn import_from() {
let parsed = parse("from bar import foo");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("foo").unwrap())
.len(),
1
);
assert!(
table.root_symbol_id_by_name("foo").is_some_and(|sid| {
let s = sid.symbol(&table);
s.is_defined() || !s.is_used()
}),
"symbols that are defined get the defined flag"
);
}
#[test]
fn assign() {
let parsed = parse("x = foo");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo", "x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("x").unwrap())
.len(),
1
);
assert!(
table.root_symbol_id_by_name("foo").is_some_and(|sid| {
let s = sid.symbol(&table);
!s.is_defined() && s.is_used()
}),
"a symbol used but not defined in a scope should have only the used flag"
);
}
#[test]
fn class_scope() {
let parsed = parse(
"
class C:
x = 1
y = 2
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["C", "y"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let c_scope = scopes[0].scope(&table);
assert_eq!(c_scope.kind(), ScopeKind::Class);
assert_eq!(c_scope.name(), "C");
assert_eq!(names(table.symbols_for_scope(scopes[0])), vec!["x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("C").unwrap())
.len(),
1
);
}
#[test]
fn func_scope() {
let parsed = parse(
"
def func():
x = 1
y = 2
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["func", "y"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let func_scope = scopes[0].scope(&table);
assert_eq!(func_scope.kind(), ScopeKind::Function);
assert_eq!(func_scope.name(), "func");
assert_eq!(names(table.symbols_for_scope(scopes[0])), vec!["x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("func").unwrap())
.len(),
1
);
}
#[test]
fn dupes() {
let parsed = parse(
"
def func():
x = 1
def func():
y = 2
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["func"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 2);
let func_scope_1 = scopes[0].scope(&table);
let func_scope_2 = scopes[1].scope(&table);
assert_eq!(func_scope_1.kind(), ScopeKind::Function);
assert_eq!(func_scope_1.name(), "func");
assert_eq!(func_scope_2.kind(), ScopeKind::Function);
assert_eq!(func_scope_2.name(), "func");
assert_eq!(names(table.symbols_for_scope(scopes[0])), vec!["x"]);
assert_eq!(names(table.symbols_for_scope(scopes[1])), vec!["y"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("func").unwrap())
.len(),
2
);
}
#[test]
fn generic_func() {
let parsed = parse(
"
def func[T]():
x = 1
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["func"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let ann_scope_id = scopes[0];
let ann_scope = ann_scope_id.scope(&table);
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope.name(), "func");
assert_eq!(names(table.symbols_for_scope(ann_scope_id)), vec!["T"]);
let scopes = table.child_scope_ids_of(ann_scope_id);
assert_eq!(scopes.len(), 1);
let func_scope_id = scopes[0];
let func_scope = func_scope_id.scope(&table);
assert_eq!(func_scope.kind(), ScopeKind::Function);
assert_eq!(func_scope.name(), "func");
assert_eq!(names(table.symbols_for_scope(func_scope_id)), vec!["x"]);
}
#[test]
fn generic_class() {
let parsed = parse(
"
class C[T]:
x = 1
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["C"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let ann_scope_id = scopes[0];
let ann_scope = ann_scope_id.scope(&table);
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope.name(), "C");
assert_eq!(names(table.symbols_for_scope(ann_scope_id)), vec!["T"]);
assert!(
table
.symbol_by_name(ann_scope_id, "T")
.is_some_and(|s| s.is_defined() && !s.is_used()),
"type parameters are defined by the scope that introduces them"
);
let scopes = table.child_scope_ids_of(ann_scope_id);
assert_eq!(scopes.len(), 1);
let func_scope_id = scopes[0];
let func_scope = func_scope_id.scope(&table);
assert_eq!(func_scope.kind(), ScopeKind::Class);
assert_eq!(func_scope.name(), "C");
assert_eq!(names(table.symbols_for_scope(func_scope_id)), vec!["x"]);
}
#[test]
fn reachability_trivial() {
let parsed = parse("x = 1; x");
let ast = parsed.syntax();
let index = SemanticIndex::from_ast(ast);
let table = &index.symbol_table;
let x_sym = table
.root_symbol_id_by_name("x")
.expect("x symbol should exist");
let ast::Stmt::Expr(ast::StmtExpr { value: x_use, .. }) = &ast.body[1] else {
panic!("should be an expr")
};
let x_defs: Vec<_> = index.reachable_definitions(x_sym, x_use).collect();
assert_eq!(x_defs.len(), 1);
let Definition::Assignment(node_key) = &x_defs[0] else {
panic!("def should be an assignment")
};
let Some(def_node) = node_key.resolve(ast.into()) else {
panic!("node key should resolve")
};
let ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(num),
..
}) = &*def_node.value
else {
panic!("should be a number literal")
};
assert_eq!(*num, 1);
}
}


@@ -0,0 +1,204 @@
use super::symbol_table::{Definition, SymbolId};
use crate::ast_ids::NodeKey;
use ruff_index::{newtype_index, IndexVec};
use ruff_python_ast as ast;
use rustc_hash::FxHashMap;
use std::iter::FusedIterator;
#[newtype_index]
pub struct FlowNodeId;
#[derive(Debug)]
pub(crate) enum FlowNode {
Start,
Definition(DefinitionFlowNode),
Branch(BranchFlowNode),
Phi(PhiFlowNode),
}
/// A Definition node represents a point in control flow where a symbol is defined
#[derive(Debug)]
pub(crate) struct DefinitionFlowNode {
symbol_id: SymbolId,
definition: Definition,
predecessor: FlowNodeId,
}
/// A Branch node represents a branch in control flow
#[derive(Debug)]
pub(crate) struct BranchFlowNode {
predecessor: FlowNodeId,
}
/// A Phi node represents a join point where control flow paths come together
#[derive(Debug)]
pub(crate) struct PhiFlowNode {
first_predecessor: FlowNodeId,
second_predecessor: FlowNodeId,
}
#[derive(Debug)]
pub struct FlowGraph {
flow_nodes_by_id: IndexVec<FlowNodeId, FlowNode>,
ast_to_flow: FxHashMap<NodeKey, FlowNodeId>,
}
impl FlowGraph {
pub fn start() -> FlowNodeId {
FlowNodeId::from_usize(0)
}
pub fn for_expr(&self, expr: &ast::Expr) -> FlowNodeId {
let node_key = NodeKey::from_node(expr.into());
self.ast_to_flow[&node_key]
}
}
#[derive(Debug)]
pub(crate) struct FlowGraphBuilder {
flow_graph: FlowGraph,
}
impl FlowGraphBuilder {
pub(crate) fn new() -> Self {
let mut graph = FlowGraph {
flow_nodes_by_id: IndexVec::default(),
ast_to_flow: FxHashMap::default(),
};
graph.flow_nodes_by_id.push(FlowNode::Start);
Self { flow_graph: graph }
}
pub(crate) fn add(&mut self, node: FlowNode) -> FlowNodeId {
self.flow_graph.flow_nodes_by_id.push(node)
}
pub(crate) fn add_definition(
&mut self,
symbol_id: SymbolId,
definition: Definition,
predecessor: FlowNodeId,
) -> FlowNodeId {
self.add(FlowNode::Definition(DefinitionFlowNode {
symbol_id,
definition,
predecessor,
}))
}
pub(crate) fn add_branch(&mut self, predecessor: FlowNodeId) -> FlowNodeId {
self.add(FlowNode::Branch(BranchFlowNode { predecessor }))
}
pub(crate) fn add_phi(
&mut self,
first_predecessor: FlowNodeId,
second_predecessor: FlowNodeId,
) -> FlowNodeId {
self.add(FlowNode::Phi(PhiFlowNode {
first_predecessor,
second_predecessor,
}))
}
pub(crate) fn record_expr(&mut self, expr: &ast::Expr, node_id: FlowNodeId) {
self.flow_graph
.ast_to_flow
.insert(NodeKey::from_node(expr.into()), node_id);
}
pub(crate) fn finish(self) -> FlowGraph {
self.flow_graph
}
}
#[derive(Debug)]
pub struct ReachableDefinitionsIterator<'a> {
flow_graph: &'a FlowGraph,
symbol_id: SymbolId,
pending: Vec<FlowNodeId>,
}
impl<'a> ReachableDefinitionsIterator<'a> {
pub fn new(flow_graph: &'a FlowGraph, symbol_id: SymbolId, start_node_id: FlowNodeId) -> Self {
Self {
flow_graph,
symbol_id,
pending: vec![start_node_id],
}
}
}
impl<'a> Iterator for ReachableDefinitionsIterator<'a> {
type Item = Definition;
fn next(&mut self) -> Option<Self::Item> {
loop {
let flow_node_id = self.pending.pop()?;
match &self.flow_graph.flow_nodes_by_id[flow_node_id] {
FlowNode::Start => return Some(Definition::None),
FlowNode::Definition(def_node) => {
if def_node.symbol_id == self.symbol_id {
return Some(def_node.definition.clone());
}
self.pending.push(def_node.predecessor);
}
FlowNode::Branch(branch_node) => {
self.pending.push(branch_node.predecessor);
}
FlowNode::Phi(phi_node) => {
self.pending.push(phi_node.first_predecessor);
self.pending.push(phi_node.second_predecessor);
}
}
}
}
}
impl<'a> FusedIterator for ReachableDefinitionsIterator<'a> {}
impl std::fmt::Display for FlowGraph {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
writeln!(f, "flowchart TD")?;
for (id, node) in self.flow_nodes_by_id.iter_enumerated() {
write!(f, " id{}", id.as_u32())?;
match node {
FlowNode::Start => writeln!(f, r"[\Start/]")?,
FlowNode::Definition(def_node) => {
writeln!(f, r"(Define symbol {})", def_node.symbol_id.as_u32())?;
writeln!(
f,
r" id{}-->id{}",
def_node.predecessor.as_u32(),
id.as_u32()
)?;
}
FlowNode::Branch(branch_node) => {
writeln!(f, r"{{Branch}}")?;
writeln!(
f,
r" id{}-->id{}",
branch_node.predecessor.as_u32(),
id.as_u32()
)?;
}
FlowNode::Phi(phi_node) => {
writeln!(f, r"((Phi))")?;
writeln!(
f,
r" id{}-->id{}",
phi_node.second_predecessor.as_u32(),
id.as_u32()
)?;
writeln!(
f,
r" id{}-->id{}",
phi_node.first_predecessor.as_u32(),
id.as_u32()
)?;
}
}
}
Ok(())
}
}


@@ -0,0 +1,583 @@
#![allow(dead_code)]
use std::hash::{Hash, Hasher};
use std::iter::{Copied, DoubleEndedIterator, FusedIterator};
use std::num::NonZeroU32;
use bitflags::bitflags;
use hashbrown::hash_map::{Keys, RawEntryMut};
use rustc_hash::{FxHashMap, FxHasher};
use ruff_index::{newtype_index, IndexVec};
use ruff_python_ast as ast;
use crate::ast_ids::{NodeKey, TypedNodeKey};
use crate::module::ModuleName;
use crate::Name;
type Map<K, V> = hashbrown::HashMap<K, V, ()>;
#[newtype_index]
pub struct ScopeId;
impl ScopeId {
pub fn scope(self, table: &SymbolTable) -> &Scope {
&table.scopes_by_id[self]
}
}
#[newtype_index]
pub struct SymbolId;
impl SymbolId {
pub fn symbol(self, table: &SymbolTable) -> &Symbol {
&table.symbols_by_id[self]
}
}
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum ScopeKind {
Module,
Annotation,
Class,
Function,
}
#[derive(Debug)]
pub struct Scope {
name: Name,
kind: ScopeKind,
parent: Option<ScopeId>,
children: Vec<ScopeId>,
/// the definition (e.g. class or function) that created this scope
definition: Option<Definition>,
/// the symbol (e.g. class or function) that owns this scope
defining_symbol: Option<SymbolId>,
/// symbol IDs, hashed by symbol name
symbols_by_name: Map<SymbolId, ()>,
}
impl Scope {
pub fn name(&self) -> &str {
self.name.as_str()
}
pub fn kind(&self) -> ScopeKind {
self.kind
}
pub fn definition(&self) -> Option<Definition> {
self.definition.clone()
}
pub fn defining_symbol(&self) -> Option<SymbolId> {
self.defining_symbol
}
}
#[derive(Debug)]
pub(crate) enum Kind {
FreeVar,
CellVar,
CellVarAssigned,
ExplicitGlobal,
ImplicitGlobal,
}
bitflags! {
#[derive(Copy,Clone,Debug)]
pub struct SymbolFlags: u8 {
const IS_USED = 1 << 0;
const IS_DEFINED = 1 << 1;
/// TODO: This flag is not yet set by anything
const MARKED_GLOBAL = 1 << 2;
/// TODO: This flag is not yet set by anything
const MARKED_NONLOCAL = 1 << 3;
}
}
#[derive(Debug)]
pub struct Symbol {
name: Name,
flags: SymbolFlags,
scope_id: ScopeId,
// kind: Kind,
}
impl Symbol {
pub fn name(&self) -> &str {
self.name.as_str()
}
pub fn scope_id(&self) -> ScopeId {
self.scope_id
}
/// Is the symbol used in its containing scope?
pub fn is_used(&self) -> bool {
self.flags.contains(SymbolFlags::IS_USED)
}
/// Is the symbol defined in its containing scope?
pub fn is_defined(&self) -> bool {
self.flags.contains(SymbolFlags::IS_DEFINED)
}
// TODO: implement Symbol.kind 2-pass analysis to categorize as: free-var, cell-var,
// explicit-global, implicit-global and implement Symbol.kind by modifying the preorder
// traversal code
}
// TODO storing TypedNodeKey for definitions means we have to search to find them again in the AST;
// this is at best O(log n). If looking up definitions is a bottleneck we should look for
// alternatives here.
// TODO intern Definitions in SymbolTable and reference using IDs?
#[derive(Clone, Debug)]
pub enum Definition {
// For the import cases, we don't need reference to any arbitrary AST subtrees (annotations,
// RHS), and referencing just the import statement node is imprecise (a single import statement
// can assign many symbols, we'd have to re-search for the one we care about), so we just copy
// the small amount of information we need from the AST.
Import(ImportDefinition),
ImportFrom(ImportFromDefinition),
ClassDef(TypedNodeKey<ast::StmtClassDef>),
FunctionDef(TypedNodeKey<ast::StmtFunctionDef>),
Assignment(TypedNodeKey<ast::StmtAssign>),
AnnotatedAssignment(TypedNodeKey<ast::StmtAnnAssign>),
None,
// TODO with statements, except handlers, function args...
}
#[derive(Clone, Debug)]
pub struct ImportDefinition {
pub module: ModuleName,
}
#[derive(Clone, Debug)]
pub struct ImportFromDefinition {
pub module: Option<ModuleName>,
pub name: Name,
pub level: u32,
}
impl ImportFromDefinition {
pub fn module(&self) -> Option<&ModuleName> {
self.module.as_ref()
}
pub fn name(&self) -> &Name {
&self.name
}
pub fn level(&self) -> u32 {
self.level
}
}
#[derive(Debug, Clone)]
pub enum Dependency {
Module(ModuleName),
Relative {
level: NonZeroU32,
module: Option<ModuleName>,
},
}
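As a concrete illustration of the two variants: assuming `ModuleName::new` accepts a `&str` (as its uses above suggest), a statement like `from ..foo import bar` is recorded as a relative dependency with `level == 2`:
// Illustrative only; cf. the `ImportFrom` arm of the indexer's `visit_stmt`.
fn demo_relative_dependency() -> Dependency {
    Dependency::Relative {
        level: std::num::NonZeroU32::new(2).unwrap(), // two leading dots
        module: Some(ModuleName::new("foo")),
    }
}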
/// Table of all symbols in all scopes for a module.
#[derive(Debug)]
pub struct SymbolTable {
scopes_by_id: IndexVec<ScopeId, Scope>,
symbols_by_id: IndexVec<SymbolId, Symbol>,
/// the definitions for each symbol
defs: FxHashMap<SymbolId, Vec<Definition>>,
/// map of AST node (e.g. class/function def) to sub-scope it creates
scopes_by_node: FxHashMap<NodeKey, ScopeId>,
/// dependencies of this module
dependencies: Vec<Dependency>,
}
impl SymbolTable {
pub fn dependencies(&self) -> &[Dependency] {
&self.dependencies
}
pub const fn root_scope_id() -> ScopeId {
ScopeId::from_usize(0)
}
pub fn root_scope(&self) -> &Scope {
&self.scopes_by_id[SymbolTable::root_scope_id()]
}
pub fn symbol_ids_for_scope(&self, scope_id: ScopeId) -> Copied<Keys<SymbolId, ()>> {
self.scopes_by_id[scope_id].symbols_by_name.keys().copied()
}
pub fn symbols_for_scope(
&self,
scope_id: ScopeId,
) -> SymbolIterator<Copied<Keys<SymbolId, ()>>> {
SymbolIterator {
table: self,
ids: self.symbol_ids_for_scope(scope_id),
}
}
pub fn root_symbol_ids(&self) -> Copied<Keys<SymbolId, ()>> {
self.symbol_ids_for_scope(SymbolTable::root_scope_id())
}
pub fn root_symbols(&self) -> SymbolIterator<Copied<Keys<SymbolId, ()>>> {
self.symbols_for_scope(SymbolTable::root_scope_id())
}
pub fn child_scope_ids_of(&self, scope_id: ScopeId) -> &[ScopeId] {
&self.scopes_by_id[scope_id].children
}
pub fn child_scopes_of(&self, scope_id: ScopeId) -> ScopeIterator<&[ScopeId]> {
ScopeIterator {
table: self,
ids: self.child_scope_ids_of(scope_id),
}
}
pub fn root_child_scope_ids(&self) -> &[ScopeId] {
self.child_scope_ids_of(SymbolTable::root_scope_id())
}
pub fn root_child_scopes(&self) -> ScopeIterator<&[ScopeId]> {
self.child_scopes_of(SymbolTable::root_scope_id())
}
pub fn symbol_id_by_name(&self, scope_id: ScopeId, name: &str) -> Option<SymbolId> {
let scope = &self.scopes_by_id[scope_id];
let hash = SymbolTable::hash_name(name);
let name = Name::new(name);
Some(
*scope
.symbols_by_name
.raw_entry()
.from_hash(hash, |symid| self.symbols_by_id[*symid].name == name)?
.0,
)
}
pub fn symbol_by_name(&self, scope_id: ScopeId, name: &str) -> Option<&Symbol> {
Some(&self.symbols_by_id[self.symbol_id_by_name(scope_id, name)?])
}
pub fn root_symbol_id_by_name(&self, name: &str) -> Option<SymbolId> {
self.symbol_id_by_name(SymbolTable::root_scope_id(), name)
}
pub fn root_symbol_by_name(&self, name: &str) -> Option<&Symbol> {
self.symbol_by_name(SymbolTable::root_scope_id(), name)
}
pub fn scope_id_of_symbol(&self, symbol_id: SymbolId) -> ScopeId {
self.symbols_by_id[symbol_id].scope_id
}
pub fn scope_of_symbol(&self, symbol_id: SymbolId) -> &Scope {
&self.scopes_by_id[self.scope_id_of_symbol(symbol_id)]
}
pub fn parent_scopes(
&self,
scope_id: ScopeId,
) -> ScopeIterator<impl Iterator<Item = ScopeId> + '_> {
ScopeIterator {
table: self,
ids: std::iter::successors(Some(scope_id), |scope| self.scopes_by_id[*scope].parent),
}
}
pub fn parent_scope(&self, scope_id: ScopeId) -> Option<ScopeId> {
self.scopes_by_id[scope_id].parent
}
pub fn scope_id_for_node(&self, node_key: &NodeKey) -> ScopeId {
self.scopes_by_node[node_key]
}
pub fn definitions(&self, symbol_id: SymbolId) -> &[Definition] {
self.defs
.get(&symbol_id)
.map(std::vec::Vec::as_slice)
.unwrap_or_default()
}
pub fn all_definitions(&self) -> impl Iterator<Item = (SymbolId, &Definition)> + '_ {
self.defs
.iter()
.flat_map(|(sym_id, defs)| defs.iter().map(move |def| (*sym_id, def)))
}
fn hash_name(name: &str) -> u64 {
let mut hasher = FxHasher::default();
name.hash(&mut hasher);
hasher.finish()
}
}
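Note that `symbols_by_name` is effectively a hash set of `SymbolId`s hashed by the name stored in the separate `symbols_by_id` arena; the `raw_entry` API makes this possible without storing each name twice. A standalone sketch of the same pattern, assuming the `hashbrown` and `rustc-hash` crates used above:
// Illustrative only: intern strings into an arena, keeping only ids in the map.
fn intern(
    arena: &mut Vec<String>,
    by_name: &mut hashbrown::HashMap<usize, (), ()>,
    name: &str,
) -> usize {
    use hashbrown::hash_map::RawEntryMut;
    use std::hash::{Hash, Hasher};
    fn hash_str(s: &str) -> u64 {
        let mut hasher = rustc_hash::FxHasher::default();
        s.hash(&mut hasher);
        hasher.finish()
    }
    let hash = hash_str(name);
    match by_name
        .raw_entry_mut()
        .from_hash(hash, |&id| arena[id] == name)
    {
        RawEntryMut::Occupied(entry) => *entry.key(),
        RawEntryMut::Vacant(entry) => {
            arena.push(name.to_string());
            let id = arena.len() - 1;
            entry.insert_with_hasher(hash, id, (), |&id| hash_str(&arena[id]));
            id
        }
    }
}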
pub struct SymbolIterator<'a, I> {
table: &'a SymbolTable,
ids: I,
}
impl<'a, I> Iterator for SymbolIterator<'a, I>
where
I: Iterator<Item = SymbolId>,
{
type Item = &'a Symbol;
fn next(&mut self) -> Option<Self::Item> {
let id = self.ids.next()?;
Some(&self.table.symbols_by_id[id])
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.ids.size_hint()
}
}
impl<'a, I> FusedIterator for SymbolIterator<'a, I> where
I: Iterator<Item = SymbolId> + FusedIterator
{
}
impl<'a, I> DoubleEndedIterator for SymbolIterator<'a, I>
where
I: Iterator<Item = SymbolId> + DoubleEndedIterator,
{
fn next_back(&mut self) -> Option<Self::Item> {
let id = self.ids.next_back()?;
Some(&self.table.symbols_by_id[id])
}
}
// TODO maybe get rid of this and just do all data access via methods on ScopeId?
pub struct ScopeIterator<'a, I> {
table: &'a SymbolTable,
ids: I,
}
/// Iterates (`ScopeId`, `Scope`) pairs for a given `ScopeId` iterator.
impl<'a, I> Iterator for ScopeIterator<'a, I>
where
I: Iterator<Item = ScopeId>,
{
type Item = (ScopeId, &'a Scope);
fn next(&mut self) -> Option<Self::Item> {
let id = self.ids.next()?;
Some((id, &self.table.scopes_by_id[id]))
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.ids.size_hint()
}
}
impl<'a, I> FusedIterator for ScopeIterator<'a, I> where I: Iterator<Item = ScopeId> + FusedIterator {}
impl<'a, I> DoubleEndedIterator for ScopeIterator<'a, I>
where
I: Iterator<Item = ScopeId> + DoubleEndedIterator,
{
fn next_back(&mut self) -> Option<Self::Item> {
let id = self.ids.next_back()?;
Some((id, &self.table.scopes_by_id[id]))
}
}
#[derive(Debug)]
pub(crate) struct SymbolTableBuilder {
symbol_table: SymbolTable,
}
impl SymbolTableBuilder {
pub(crate) fn new() -> Self {
let mut table = SymbolTable {
scopes_by_id: IndexVec::new(),
symbols_by_id: IndexVec::new(),
defs: FxHashMap::default(),
scopes_by_node: FxHashMap::default(),
dependencies: Vec::new(),
};
table.scopes_by_id.push(Scope {
name: Name::new("<module>"),
kind: ScopeKind::Module,
parent: None,
children: Vec::new(),
definition: None,
defining_symbol: None,
symbols_by_name: Map::default(),
});
Self {
symbol_table: table,
}
}
pub(crate) fn finish(self) -> SymbolTable {
self.symbol_table
}
pub(crate) fn add_or_update_symbol(
&mut self,
scope_id: ScopeId,
name: &str,
flags: SymbolFlags,
) -> SymbolId {
let hash = SymbolTable::hash_name(name);
let scope = &mut self.symbol_table.scopes_by_id[scope_id];
let name = Name::new(name);
let entry = scope
.symbols_by_name
.raw_entry_mut()
.from_hash(hash, |existing| {
self.symbol_table.symbols_by_id[*existing].name == name
});
match entry {
RawEntryMut::Occupied(entry) => {
if let Some(symbol) = self.symbol_table.symbols_by_id.get_mut(*entry.key()) {
symbol.flags.insert(flags);
};
*entry.key()
}
RawEntryMut::Vacant(entry) => {
let id = self.symbol_table.symbols_by_id.push(Symbol {
name,
flags,
scope_id,
});
entry.insert_with_hasher(hash, id, (), |symid| {
SymbolTable::hash_name(&self.symbol_table.symbols_by_id[*symid].name)
});
id
}
}
}
pub(crate) fn add_definition(&mut self, symbol_id: SymbolId, definition: Definition) {
self.symbol_table
.defs
.entry(symbol_id)
.or_default()
.push(definition);
}
pub(crate) fn add_child_scope(
&mut self,
parent_scope_id: ScopeId,
name: &str,
kind: ScopeKind,
definition: Option<Definition>,
defining_symbol: Option<SymbolId>,
) -> ScopeId {
let new_scope_id = self.symbol_table.scopes_by_id.push(Scope {
name: Name::new(name),
kind,
parent: Some(parent_scope_id),
children: Vec::new(),
definition,
defining_symbol,
symbols_by_name: Map::default(),
});
let parent_scope = &mut self.symbol_table.scopes_by_id[parent_scope_id];
parent_scope.children.push(new_scope_id);
new_scope_id
}
pub(crate) fn record_scope_for_node(&mut self, node_key: NodeKey, scope_id: ScopeId) {
self.symbol_table.scopes_by_node.insert(node_key, scope_id);
}
pub(crate) fn add_dependency(&mut self, dependency: Dependency) {
self.symbol_table.dependencies.push(dependency);
}
}
#[cfg(test)]
mod tests {
use super::{ScopeKind, SymbolFlags, SymbolTable, SymbolTableBuilder};
#[test]
fn insert_same_name_symbol_twice() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let symbol_id_1 =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::IS_DEFINED);
let symbol_id_2 = builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::IS_USED);
let table = builder.finish();
assert_eq!(symbol_id_1, symbol_id_2);
assert!(symbol_id_1.symbol(&table).is_used(), "flags must merge");
assert!(symbol_id_1.symbol(&table).is_defined(), "flags must merge");
}
#[test]
fn insert_different_named_symbols() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let symbol_id_1 = builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
let symbol_id_2 = builder.add_or_update_symbol(root_scope_id, "bar", SymbolFlags::empty());
assert_ne!(symbol_id_1, symbol_id_2);
}
#[test]
fn add_child_scope_with_symbol() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let foo_symbol_top =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
let c_scope = builder.add_child_scope(root_scope_id, "C", ScopeKind::Class, None, None);
let foo_symbol_inner = builder.add_or_update_symbol(c_scope, "foo", SymbolFlags::empty());
assert_ne!(foo_symbol_top, foo_symbol_inner);
}
#[test]
fn scope_from_id() {
let table = SymbolTableBuilder::new().finish();
let root_scope_id = SymbolTable::root_scope_id();
let scope = root_scope_id.scope(&table);
assert_eq!(scope.name.as_str(), "<module>");
assert_eq!(scope.kind, ScopeKind::Module);
}
#[test]
fn symbol_from_id() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let foo_symbol_id =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
let table = builder.finish();
let symbol = foo_symbol_id.symbol(&table);
assert_eq!(symbol.name(), "foo");
}
#[test]
fn bigger_symbol_table() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let foo_symbol_id =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
builder.add_or_update_symbol(root_scope_id, "bar", SymbolFlags::empty());
builder.add_or_update_symbol(root_scope_id, "baz", SymbolFlags::empty());
builder.add_or_update_symbol(root_scope_id, "qux", SymbolFlags::empty());
let table = builder.finish();
let foo_symbol_id_2 = table
.root_symbol_id_by_name("foo")
.expect("foo symbol to be found");
assert_eq!(foo_symbol_id_2, foo_symbol_id);
}
}


@@ -2,14 +2,17 @@
use crate::ast_ids::NodeKey;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::files::FileId;
use crate::symbols::{symbol_table, GlobalSymbolId, ScopeId, ScopeKind, SymbolId};
use crate::module::{Module, ModuleName};
use crate::semantic::{
resolve_global_symbol, semantic_index, GlobalSymbolId, ScopeId, ScopeKind, SymbolId,
};
use crate::{FxDashMap, FxIndexSet, Name};
use ruff_index::{newtype_index, IndexVec};
use rustc_hash::FxHashMap;
pub(crate) mod infer;
pub(crate) use infer::{infer_definition_type, infer_symbol_type};
pub(crate) use infer::{infer_definition_type, infer_symbol_public_type};
/// unique ID for a type
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
@@ -25,12 +28,15 @@ pub enum Type {
Unbound,
/// a specific function object
Function(FunctionTypeId),
/// a specific module object
Module(ModuleTypeId),
/// a specific class object
Class(ClassTypeId),
/// the set of Python objects with the given class in their __class__'s method resolution order
Instance(ClassTypeId),
Union(UnionTypeId),
Intersection(IntersectionTypeId),
IntLiteral(i64),
// TODO protocols, callable types, overloads, generics, type vars
}
@@ -46,6 +52,39 @@ impl Type {
pub const fn is_unknown(&self) -> bool {
matches!(self, Type::Unknown)
}
pub fn get_member(&self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
match self {
Type::Any => todo!("attribute lookup on Any type"),
Type::Never => todo!("attribute lookup on Never type"),
Type::Unknown => todo!("attribute lookup on Unknown type"),
Type::Unbound => todo!("attribute lookup on Unbound type"),
Type::Function(_) => todo!("attribute lookup on Function type"),
Type::Module(module_id) => module_id.get_member(db, name),
Type::Class(class_id) => class_id.get_class_member(db, name),
Type::Instance(_) => {
// TODO MRO? get_own_instance_member, get_instance_member
todo!("attribute lookup on Instance type")
}
Type::Union(union_id) => {
let jar: &SemanticJar = db.jar()?;
let _todo_union_ref = jar.type_store.get_union(*union_id);
// TODO perform the get_member on each type in the union
// TODO return the union of those results
// TODO if any of those results is `None` then include Unknown in the result union
todo!("attribute lookup on Union type")
}
Type::Intersection(_) => {
// TODO perform the get_member on each type in the intersection
// TODO return the intersection of those results
todo!("attribute lookup on Intersection type")
}
Type::IntLiteral(_) => {
// TODO raise error
Ok(Some(Type::Unknown))
}
}
}
}
impl From<FunctionTypeId> for Type {
@@ -80,7 +119,7 @@ impl TypeStore {
self.modules.remove(&file_id);
}
pub fn cache_symbol_type(&self, symbol: GlobalSymbolId, ty: Type) {
pub fn cache_symbol_public_type(&self, symbol: GlobalSymbolId, ty: Type) {
self.add_or_get_module(symbol.file_id)
.symbol_types
.insert(symbol.symbol_id, ty);
@@ -92,7 +131,7 @@ impl TypeStore {
.insert(node_key, ty);
}
pub fn get_cached_symbol_type(&self, symbol: GlobalSymbolId) -> Option<Type> {
pub fn get_cached_symbol_public_type(&self, symbol: GlobalSymbolId) -> Option<Type> {
self.try_get_module(symbol.file_id)?
.symbol_types
.get(&symbol.symbol_id)
@@ -143,12 +182,12 @@ impl TypeStore {
.add_class(name, scope_id, bases)
}
fn add_union(&mut self, file_id: FileId, elems: &[Type]) -> UnionTypeId {
fn add_union(&self, file_id: FileId, elems: &[Type]) -> UnionTypeId {
self.add_or_get_module(file_id).add_union(elems)
}
fn add_intersection(
&mut self,
&self,
file_id: FileId,
positive: &[Type],
negative: &[Type],
@@ -292,10 +331,11 @@ impl FunctionTypeId {
self,
db: &dyn SemanticDb,
) -> QueryResult<Option<ClassTypeId>> {
let table = symbol_table(db, self.file_id)?;
let index = semantic_index(db, self.file_id)?;
let table = index.symbol_table();
let FunctionType { symbol_id, .. } = *self.function(db)?;
let scope_id = symbol_id.symbol(&table).scope_id();
let scope = scope_id.scope(&table);
let scope_id = symbol_id.symbol(table).scope_id();
let scope = scope_id.scope(table);
if !matches!(scope.kind(), ScopeKind::Class) {
return Ok(None);
};
@@ -336,6 +376,31 @@ impl FunctionTypeId {
}
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct ModuleTypeId {
module: Module,
file_id: FileId,
}
impl ModuleTypeId {
fn module(self, db: &dyn SemanticDb) -> QueryResult<ModuleStoreRef> {
let jar: &SemanticJar = db.jar()?;
Ok(jar.type_store.add_or_get_module(self.file_id).downgrade())
}
pub(crate) fn name(self, db: &dyn SemanticDb) -> QueryResult<ModuleName> {
self.module.name(db)
}
fn get_member(self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
if let Some(symbol_id) = resolve_global_symbol(db, self.module, name)? {
Ok(Some(infer_symbol_public_type(db, symbol_id)?))
} else {
Ok(None)
}
}
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct ClassTypeId {
file_id: FileId,
@@ -375,9 +440,9 @@ impl ClassTypeId {
fn get_own_class_member(self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
// TODO: this should distinguish instance-only members (e.g. `x: int`) and not return them
let ClassType { scope_id, .. } = *self.class(db)?;
let table = symbol_table(db, self.file_id)?;
if let Some(symbol_id) = table.symbol_id_by_name(scope_id, name) {
Ok(Some(infer_symbol_type(
let index = semantic_index(db, self.file_id)?;
if let Some(symbol_id) = index.symbol_table().symbol_id_by_name(scope_id, name) {
Ok(Some(infer_symbol_public_type(
db,
GlobalSymbolId {
file_id: self.file_id,
@@ -389,7 +454,13 @@ impl ClassTypeId {
}
}
// TODO: get_own_instance_member, get_class_member, get_instance_member
/// Get own class member or fall back to super-class member.
fn get_class_member(self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
self.get_own_class_member(db, name)
.or_else(|_| self.get_super_class_member(db, name))
}
// TODO: get_own_instance_member, get_instance_member
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
@@ -427,7 +498,7 @@ struct ModuleTypeStore {
unions: IndexVec<ModuleUnionTypeId, UnionType>,
/// arena of all intersection types created in this module
intersections: IndexVec<ModuleIntersectionTypeId, IntersectionType>,
/// cached types of symbols in this module
/// cached public types of symbols in this module
symbol_types: FxHashMap<SymbolId, Type>,
/// cached types of AST nodes in this module
node_types: FxHashMap<NodeKey, Type>,
@@ -529,6 +600,10 @@ impl std::fmt::Display for DisplayType<'_> {
Type::Never => f.write_str("Never"),
Type::Unknown => f.write_str("Unknown"),
Type::Unbound => f.write_str("Unbound"),
Type::Module(module_id) => {
// NOTE: something like this?: "<module 'module-name' from 'path-from-fileid'>"
todo!("{module_id:?}")
}
// TODO functions and classes should display using a fully qualified name
Type::Class(class_id) => {
f.write_str("Literal[")?;
@@ -547,6 +622,7 @@ impl std::fmt::Display for DisplayType<'_> {
.get_module(int_id.file_id)
.get_intersection(int_id.intersection_id)
.display(f, self.store),
Type::IntLiteral(n) => write!(f, "Literal[{n}]"),
}
}
}
@@ -657,11 +733,12 @@ impl IntersectionType {
#[cfg(test)]
mod tests {
use super::Type;
use std::path::Path;
use crate::files::Files;
use crate::symbols::{SymbolFlags, SymbolTable};
use crate::types::{Type, TypeStore};
use crate::semantic::symbol_table::SymbolTableBuilder;
use crate::semantic::{SymbolFlags, SymbolTable, TypeStore};
use crate::FxIndexSet;
#[test]
@@ -680,12 +757,13 @@ mod tests {
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let mut table = SymbolTable::new();
let func_symbol = table.add_or_update_symbol(
let mut builder = SymbolTableBuilder::new();
let func_symbol = builder.add_or_update_symbol(
SymbolTable::root_scope_id(),
"func",
SymbolFlags::IS_DEFINED,
);
builder.finish();
let id = store.add_function(
file_id,
@@ -702,7 +780,7 @@ mod tests {
#[test]
fn add_union() {
let mut store = TypeStore::default();
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let c1 = store.add_class(file_id, "C1", SymbolTable::root_scope_id(), Vec::new());
@@ -719,7 +797,7 @@ mod tests {
#[test]
fn add_intersection() {
let mut store = TypeStore::default();
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let c1 = store.add_class(file_id, "C1", SymbolTable::root_scope_id(), Vec::new());


@@ -0,0 +1,491 @@
#![allow(dead_code)]
use ruff_python_ast as ast;
use ruff_python_ast::AstNode;
use std::fmt::Debug;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::module::{resolve_module, ModuleName};
use crate::parse::parse;
use crate::semantic::types::{ModuleTypeId, Type};
use crate::semantic::{
resolve_global_symbol, semantic_index, Definition, GlobalSymbolId, ImportDefinition,
ImportFromDefinition,
};
use crate::{FileId, Name};
// FIXME: Figure out proper deadlock-free synchronization now that this takes `&db` instead of `&mut db`.
/// Resolve the public-facing type for a symbol (the type seen by other scopes: other modules, or
/// nested functions). Because calls to nested functions and imports can occur anywhere in control
/// flow, this type must be conservative and consider all definitions of the symbol that could
/// possibly be seen by another scope. Currently we take the most conservative approach, which is
/// the union of all definitions. We may be able to narrow this in future to eliminate definitions
/// which can't possibly (or at least likely) be seen by any other scope, so that e.g. we could
/// infer `Literal["1"]` instead of `Literal[1] | Literal["1"]` for `x` in `x = x; x = str(x);`.
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_symbol_public_type(db: &dyn SemanticDb, symbol: GlobalSymbolId) -> QueryResult<Type> {
let index = semantic_index(db, symbol.file_id)?;
let defs = index.symbol_table().definitions(symbol.symbol_id).to_vec();
let jar: &SemanticJar = db.jar()?;
if let Some(ty) = jar.type_store.get_cached_symbol_public_type(symbol) {
return Ok(ty);
}
let ty = infer_type_from_definitions(db, symbol, defs.iter().cloned())?;
jar.type_store.cache_symbol_public_type(symbol, ty);
// TODO record dependencies
Ok(ty)
}
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_type_from_definitions<T>(
db: &dyn SemanticDb,
symbol: GlobalSymbolId,
definitions: T,
) -> QueryResult<Type>
where
T: Debug + Iterator<Item = Definition>,
{
let jar: &SemanticJar = db.jar()?;
let mut tys = definitions
.map(|def| infer_definition_type(db, symbol, def.clone()))
.peekable();
if let Some(first) = tys.next() {
if tys.peek().is_some() {
Ok(Type::Union(jar.type_store.add_union(
symbol.file_id,
&Iterator::chain([first].into_iter(), tys).collect::<QueryResult<Vec<_>>>()?,
)))
} else {
first
}
} else {
Ok(Type::Unknown)
}
}
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_definition_type(
db: &dyn SemanticDb,
symbol: GlobalSymbolId,
definition: Definition,
) -> QueryResult<Type> {
let jar: &SemanticJar = db.jar()?;
let type_store = &jar.type_store;
let file_id = symbol.file_id;
match definition {
Definition::None => Ok(Type::Unbound),
Definition::Import(ImportDefinition {
module: module_name,
}) => {
if let Some(module) = resolve_module(db, module_name.clone())? {
Ok(Type::Module(ModuleTypeId { module, file_id }))
} else {
Ok(Type::Unknown)
}
}
Definition::ImportFrom(ImportFromDefinition {
module,
name,
level,
}) => {
// TODO relative imports
assert!(matches!(level, 0));
let module_name = ModuleName::new(module.as_ref().expect("TODO relative imports"));
let Some(module) = resolve_module(db, module_name.clone())? else {
return Ok(Type::Unknown);
};
if let Some(remote_symbol) = resolve_global_symbol(db, module, &name)? {
infer_symbol_public_type(db, remote_symbol)
} else {
Ok(Type::Unknown)
}
}
Definition::ClassDef(node_key) => {
if let Some(ty) = type_store.get_cached_node_type(file_id, node_key.erased()) {
Ok(ty)
} else {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let index = semantic_index(db, file_id)?;
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
let mut bases = Vec::with_capacity(node.bases().len());
for base in node.bases() {
bases.push(infer_expr_type(db, file_id, base)?);
}
let scope_id = index.symbol_table().scope_id_for_node(node_key.erased());
let ty = Type::Class(type_store.add_class(file_id, &node.name.id, scope_id, bases));
type_store.cache_node_type(file_id, *node_key.erased(), ty);
Ok(ty)
}
}
Definition::FunctionDef(node_key) => {
if let Some(ty) = type_store.get_cached_node_type(file_id, node_key.erased()) {
Ok(ty)
} else {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let index = semantic_index(db, file_id)?;
let node = node_key
.resolve(ast.as_any_node_ref())
.expect("node key should resolve");
let decorator_tys = node
.decorator_list
.iter()
.map(|decorator| infer_expr_type(db, file_id, &decorator.expression))
.collect::<QueryResult<_>>()?;
let scope_id = index.symbol_table().scope_id_for_node(node_key.erased());
let ty = type_store
.add_function(
file_id,
&node.name.id,
symbol.symbol_id,
scope_id,
decorator_tys,
)
.into();
type_store.cache_node_type(file_id, *node_key.erased(), ty);
Ok(ty)
}
}
Definition::Assignment(node_key) => {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
// TODO handle unpacking assignment correctly (here and for AnnotatedAssignment case, below)
infer_expr_type(db, file_id, &node.value)
}
Definition::AnnotatedAssignment(node_key) => {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
// TODO actually look at the annotation
let Some(value) = &node.value else {
return Ok(Type::Unknown);
};
// TODO handle unpacking assignment correctly (here and for Assignment case, above)
infer_expr_type(db, file_id, value)
}
}
}
fn infer_expr_type(db: &dyn SemanticDb, file_id: FileId, expr: &ast::Expr) -> QueryResult<Type> {
// TODO cache the resolution of the type on the node
let index = semantic_index(db, file_id)?;
match expr {
ast::Expr::NumberLiteral(ast::ExprNumberLiteral { value, .. }) => {
match value {
ast::Number::Int(n) => {
// TODO support big int literals
Ok(n.as_i64().map(Type::IntLiteral).unwrap_or(Type::Unknown))
}
// TODO builtins.float or builtins.complex
_ => Ok(Type::Unknown),
}
}
ast::Expr::Name(name) => {
// TODO look up in the correct scope, don't assume global
if let Some(symbol_id) = index.symbol_table().root_symbol_id_by_name(&name.id) {
// TODO should use only reachable definitions, not public type
infer_type_from_definitions(
db,
GlobalSymbolId { file_id, symbol_id },
index.reachable_definitions(symbol_id, expr),
)
} else {
Ok(Type::Unknown)
}
}
ast::Expr::Attribute(ast::ExprAttribute { value, attr, .. }) => {
let value_type = infer_expr_type(db, file_id, value)?;
let attr_name = &Name::new(&attr.id);
value_type
.get_member(db, attr_name)
.map(|ty| ty.unwrap_or(Type::Unknown))
}
_ => todo!("full expression type resolution"),
}
}
#[cfg(test)]
mod tests {
use crate::db::tests::TestDb;
use crate::db::{HasJar, SemanticJar};
use crate::module::{
resolve_module, set_module_search_paths, ModuleName, ModuleSearchPath, ModuleSearchPathKind,
};
use crate::semantic::{infer_symbol_public_type, resolve_global_symbol, Type};
use crate::Name;
// TODO with virtual filesystem we shouldn't have to write files to disk for these
// tests
struct TestCase {
temp_dir: tempfile::TempDir,
db: TestDb,
src: ModuleSearchPath,
}
fn create_test() -> std::io::Result<TestCase> {
let temp_dir = tempfile::tempdir()?;
let src = temp_dir.path().join("src");
std::fs::create_dir(&src)?;
let src = ModuleSearchPath::new(src.canonicalize()?, ModuleSearchPathKind::FirstParty);
let roots = vec![src.clone()];
let mut db = TestDb::default();
set_module_search_paths(&mut db, roots);
Ok(TestCase { temp_dir, db, src })
}
fn write_to_path(case: &TestCase, relative_path: &str, contents: &str) -> anyhow::Result<()> {
let path = case.src.path().join(relative_path);
std::fs::write(path, contents)?;
Ok(())
}
fn get_public_type(
case: &TestCase,
module_name: &str,
variable_name: &str,
) -> anyhow::Result<Type> {
let db = &case.db;
let module = resolve_module(db, ModuleName::new(module_name))?.expect("Module to exist");
let symbol = resolve_global_symbol(db, module, variable_name)?.expect("symbol to exist");
Ok(infer_symbol_public_type(db, symbol)?)
}
fn assert_public_type(
case: &TestCase,
module_name: &str,
variable_name: &str,
type_name: &str,
) -> anyhow::Result<()> {
let ty = get_public_type(case, module_name, variable_name)?;
let jar = HasJar::<SemanticJar>::jar(&case.db)?;
assert_eq!(format!("{}", ty.display(&jar.type_store)), type_name);
Ok(())
}
#[test]
fn follow_import_to_class() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "from b import C as D; E = D")?;
write_to_path(&case, "b.py", "class C: pass")?;
assert_public_type(&case, "a", "E", "Literal[C]")
}
#[test]
fn resolve_base_class_by_name() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"mod.py",
"
class Base: pass
class Sub(Base): pass
",
)?;
let ty = get_public_type(&case, "mod", "Sub")?;
let Type::Class(class_id) = ty else {
panic!("Sub is not a Class")
};
let jar = HasJar::<SemanticJar>::jar(&case.db)?;
let base_names: Vec<_> = jar
.type_store
.get_class(class_id)
.bases()
.iter()
.map(|base_ty| format!("{}", base_ty.display(&jar.type_store)))
.collect();
assert_eq!(base_names, vec!["Literal[Base]"]);
Ok(())
}
#[test]
fn resolve_method() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"mod.py",
"
class C:
def f(self): pass
",
)?;
let ty = get_public_type(&case, "mod", "C")?;
let Type::Class(class_id) = ty else {
panic!("C is not a Class");
};
let member_ty = class_id
.get_own_class_member(&case.db, &Name::new("f"))
.expect("C.f to resolve");
let Some(Type::Function(func_id)) = member_ty else {
panic!("C.f is not a Function");
};
let jar = HasJar::<SemanticJar>::jar(&case.db)?;
let function = jar.type_store.get_function(func_id);
assert_eq!(function.name(), "f");
Ok(())
}
#[test]
fn resolve_module_member() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "import b; D = b.C")?;
write_to_path(&case, "b.py", "class C: pass")?;
assert_public_type(&case, "a", "D", "Literal[C]")
}
#[test]
fn resolve_literal() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "x = 1")?;
assert_public_type(&case, "a", "x", "Literal[1]")
}
#[test]
fn resolve_union() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
if flag:
x = 1
else:
x = 2
",
)?;
assert_public_type(&case, "a", "x", "(Literal[1] | Literal[2])")
}
#[test]
fn resolve_visible_def() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "y = 1; y = 2; x = y")?;
assert_public_type(&case, "a", "x", "Literal[2]")
}
#[test]
fn join_paths() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
y = 1
y = 2
if flag:
y = 3
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Literal[2] | Literal[3])")
}
#[test]
fn maybe_unbound() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
if flag:
y = 1
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Unbound | Literal[1])")
}
#[test]
fn if_elif_else() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
y = 1
y = 2
if flag:
y = 3
elif flag2:
y = 4
else:
r = y
y = 5
s = y
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Literal[3] | Literal[4] | Literal[5])")?;
assert_public_type(&case, "a", "r", "Literal[2]")?;
assert_public_type(&case, "a", "s", "Literal[5]")
}
#[test]
fn if_elif() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
y = 1
y = 2
if flag:
y = 3
elif flag2:
y = 4
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Literal[2] | Literal[3] | Literal[4])")
}
}
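The refactored module above derives a symbol's public type as the union of the types of all its definitions. A deliberately crude, flow-insensitive Python sketch of that idea (a toy illustration, not red-knot's actual implementation):

```py
import ast

def public_literal_types(source: str, name: str) -> list[str]:
    """Union-of-definitions toy: collect a crude 'type' for every assignment to `name`."""
    types: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign) and any(
            isinstance(t, ast.Name) and t.id == name for t in node.targets
        ):
            if isinstance(node.value, ast.Constant):
                types.add(f"Literal[{node.value.value!r}]")
            else:
                types.add("Unknown")  # anything we can't evaluate
    return sorted(types)

print(public_literal_types("if flag:\n    x = 1\nelse:\n    x = 2\n", "x"))
# ['Literal[1]', 'Literal[2]'] -- the union the tests above assert as (Literal[1] | Literal[2])
```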


@@ -53,6 +53,16 @@ pub enum SourceKind {
IpyNotebook(Arc<Notebook>),
}
impl<'a> From<&'a SourceKind> for PySourceType {
fn from(value: &'a SourceKind) -> Self {
match value {
SourceKind::Python(_) => PySourceType::Python,
SourceKind::Stub(_) => PySourceType::Stub,
SourceKind::IpyNotebook(_) => PySourceType::Ipynb,
}
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct Source {
kind: SourceKind,

File diff suppressed because it is too large


@@ -1,292 +0,0 @@
#![allow(dead_code)]
use ruff_python_ast as ast;
use ruff_python_ast::AstNode;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::module::ModuleName;
use crate::parse::parse;
use crate::symbols::{
resolve_global_symbol, symbol_table, Definition, GlobalSymbolId, ImportFromDefinition,
};
use crate::types::Type;
use crate::FileId;
// FIXME: Figure out proper dead-lock free synchronisation now that this takes `&db` instead of `&mut db`.
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_symbol_type(db: &dyn SemanticDb, symbol: GlobalSymbolId) -> QueryResult<Type> {
let symbols = symbol_table(db, symbol.file_id)?;
let defs = symbols.definitions(symbol.symbol_id);
let jar: &SemanticJar = db.jar()?;
if let Some(ty) = jar.type_store.get_cached_symbol_type(symbol) {
return Ok(ty);
}
// TODO handle multiple defs, conditional defs...
assert_eq!(defs.len(), 1);
let ty = infer_definition_type(db, symbol, defs[0].clone())?;
jar.type_store.cache_symbol_type(symbol, ty);
// TODO record dependencies
Ok(ty)
}
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_definition_type(
db: &dyn SemanticDb,
symbol: GlobalSymbolId,
definition: Definition,
) -> QueryResult<Type> {
let jar: &SemanticJar = db.jar()?;
let type_store = &jar.type_store;
let file_id = symbol.file_id;
match definition {
Definition::ImportFrom(ImportFromDefinition {
module,
name,
level,
}) => {
// TODO relative imports
assert!(matches!(level, 0));
let module_name = ModuleName::new(module.as_ref().expect("TODO relative imports"));
if let Some(remote_symbol) = resolve_global_symbol(db, module_name, &name)? {
infer_symbol_type(db, remote_symbol)
} else {
Ok(Type::Unknown)
}
}
Definition::ClassDef(node_key) => {
if let Some(ty) = type_store.get_cached_node_type(file_id, node_key.erased()) {
Ok(ty)
} else {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.ast();
let table = symbol_table(db, file_id)?;
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
let mut bases = Vec::with_capacity(node.bases().len());
for base in node.bases() {
bases.push(infer_expr_type(db, file_id, base)?);
}
let scope_id = table.scope_id_for_node(node_key.erased());
let ty = Type::Class(type_store.add_class(file_id, &node.name.id, scope_id, bases));
type_store.cache_node_type(file_id, *node_key.erased(), ty);
Ok(ty)
}
}
Definition::FunctionDef(node_key) => {
if let Some(ty) = type_store.get_cached_node_type(file_id, node_key.erased()) {
Ok(ty)
} else {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.ast();
let table = symbol_table(db, file_id)?;
let node = node_key
.resolve(ast.as_any_node_ref())
.expect("node key should resolve");
let decorator_tys = node
.decorator_list
.iter()
.map(|decorator| infer_expr_type(db, file_id, &decorator.expression))
.collect::<QueryResult<_>>()?;
let scope_id = table.scope_id_for_node(node_key.erased());
let ty = type_store
.add_function(
file_id,
&node.name.id,
symbol.symbol_id,
scope_id,
decorator_tys,
)
.into();
type_store.cache_node_type(file_id, *node_key.erased(), ty);
Ok(ty)
}
}
Definition::Assignment(node_key) => {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.ast();
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
// TODO handle unpacking assignment correctly
infer_expr_type(db, file_id, &node.value)
}
_ => todo!("other kinds of definitions"),
}
}
fn infer_expr_type(db: &dyn SemanticDb, file_id: FileId, expr: &ast::Expr) -> QueryResult<Type> {
// TODO cache the resolution of the type on the node
let symbols = symbol_table(db, file_id)?;
match expr {
ast::Expr::Name(name) => {
// TODO look up in the correct scope, don't assume global
if let Some(symbol_id) = symbols.root_symbol_id_by_name(&name.id) {
infer_symbol_type(db, GlobalSymbolId { file_id, symbol_id })
} else {
Ok(Type::Unknown)
}
}
_ => todo!("full expression type resolution"),
}
}
#[cfg(test)]
mod tests {
use crate::db::tests::TestDb;
use crate::db::{HasJar, SemanticJar};
use crate::module::{
resolve_module, set_module_search_paths, ModuleName, ModuleSearchPath, ModuleSearchPathKind,
};
use crate::symbols::{symbol_table, GlobalSymbolId};
use crate::types::{infer_symbol_type, Type};
use crate::Name;
// TODO with virtual filesystem we shouldn't have to write files to disk for these
// tests
struct TestCase {
temp_dir: tempfile::TempDir,
db: TestDb,
src: ModuleSearchPath,
}
fn create_test() -> std::io::Result<TestCase> {
let temp_dir = tempfile::tempdir()?;
let src = temp_dir.path().join("src");
std::fs::create_dir(&src)?;
let src = ModuleSearchPath::new(src.canonicalize()?, ModuleSearchPathKind::FirstParty);
let roots = vec![src.clone()];
let mut db = TestDb::default();
set_module_search_paths(&mut db, roots);
Ok(TestCase { temp_dir, db, src })
}
#[test]
fn follow_import_to_class() -> anyhow::Result<()> {
let case = create_test()?;
let db = &case.db;
let a_path = case.src.path().join("a.py");
let b_path = case.src.path().join("b.py");
std::fs::write(a_path, "from b import C as D; E = D")?;
std::fs::write(b_path, "class C: pass")?;
let a_file = resolve_module(db, ModuleName::new("a"))?
.expect("module should be found")
.path(db)?
.file();
let a_syms = symbol_table(db, a_file)?;
let e_sym = a_syms
.root_symbol_id_by_name("E")
.expect("E symbol should be found");
let ty = infer_symbol_type(
db,
GlobalSymbolId {
file_id: a_file,
symbol_id: e_sym,
},
)?;
let jar = HasJar::<SemanticJar>::jar(db)?;
assert!(matches!(ty, Type::Class(_)));
assert_eq!(format!("{}", ty.display(&jar.type_store)), "Literal[C]");
Ok(())
}
#[test]
fn resolve_base_class_by_name() -> anyhow::Result<()> {
let case = create_test()?;
let db = &case.db;
let path = case.src.path().join("mod.py");
std::fs::write(path, "class Base: pass\nclass Sub(Base): pass")?;
let file = resolve_module(db, ModuleName::new("mod"))?
.expect("module should be found")
.path(db)?
.file();
let syms = symbol_table(db, file)?;
let sym = syms
.root_symbol_id_by_name("Sub")
.expect("Sub symbol should be found");
let ty = infer_symbol_type(
db,
GlobalSymbolId {
file_id: file,
symbol_id: sym,
},
)?;
let Type::Class(class_id) = ty else {
panic!("Sub is not a Class")
};
let jar = HasJar::<SemanticJar>::jar(db)?;
let base_names: Vec<_> = jar
.type_store
.get_class(class_id)
.bases()
.iter()
.map(|base_ty| format!("{}", base_ty.display(&jar.type_store)))
.collect();
assert_eq!(base_names, vec!["Literal[Base]"]);
Ok(())
}
#[test]
fn resolve_method() -> anyhow::Result<()> {
let case = create_test()?;
let db = &case.db;
let path = case.src.path().join("mod.py");
std::fs::write(path, "class C:\n def f(self): pass")?;
let file = resolve_module(db, ModuleName::new("mod"))?
.expect("module should be found")
.path(db)?
.file();
let syms = symbol_table(db, file)?;
let sym = syms
.root_symbol_id_by_name("C")
.expect("C symbol should be found");
let ty = infer_symbol_type(
db,
GlobalSymbolId {
file_id: file,
symbol_id: sym,
},
)?;
let Type::Class(class_id) = ty else {
panic!("C is not a Class");
};
let member_ty = class_id
.get_own_class_member(db, &Name::new("f"))
.expect("C.f to resolve");
let Some(Type::Function(func_id)) = member_ty else {
panic!("C.f is not a Function");
};
let jar = HasJar::<SemanticJar>::jar(db)?;
let function = jar.type_store.get_function(func_id);
assert_eq!(function.name(), "f");
Ok(())
}
}


@@ -1 +1 @@
a9d7e861f7a46ae7acd56569326adef302e10f29
4b6558c12ac43cd40716cd6452fe98a632ae65d7


@@ -166,7 +166,7 @@ ipaddress: 3.3-
itertools: 3.0-
json: 3.0-
keyword: 3.0-
lib2to3: 3.0-
lib2to3: 3.0-3.12
linecache: 3.0-
locale: 3.0-
logging: 3.0-

File diff suppressed because it is too large


@@ -201,7 +201,7 @@ class Array(_CData, Generic[_CT]):
# Sized and _CData prevents using _CDataMeta.
def __len__(self) -> int: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def addressof(obj: _CData) -> int: ...
def alignment(obj_or_type: _CData | type[_CData]) -> int: ...
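This and many of the following hunks add a trailing `/` to stub signatures, marking parameters positional-only to match their C-implemented runtime counterparts. A minimal sketch of what the marker means (hypothetical function, not part of ctypes):

```py
def alignment_of(obj, /):  # everything before `/` is positional-only
    return 8

alignment_of(0)        # OK
# alignment_of(obj=0)  # TypeError: positional-only argument passed as keyword
```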


@@ -783,7 +783,7 @@ def ntohl(x: int, /) -> int: ... # param & ret val are 32-bit ints
def ntohs(x: int, /) -> int: ... # param & ret val are 16-bit ints
def htonl(x: int, /) -> int: ... # param & ret val are 32-bit ints
def htons(x: int, /) -> int: ... # param & ret val are 16-bit ints
def inet_aton(ip_string: str, /) -> bytes: ... # ret val 4 bytes in length
def inet_aton(ip_addr: str, /) -> bytes: ... # ret val 4 bytes in length
def inet_ntoa(packed_ip: ReadableBuffer, /) -> str: ...
def inet_pton(address_family: int, ip_string: str, /) -> bytes: ...
def inet_ntop(address_family: int, packed_ip: ReadableBuffer, /) -> str: ...
@@ -797,7 +797,7 @@ if sys.platform != "win32":
def socketpair(family: int = ..., type: int = ..., proto: int = ..., /) -> tuple[socket, socket]: ...
def if_nameindex() -> list[tuple[int, str]]: ...
def if_nametoindex(name: str, /) -> int: ...
def if_nametoindex(oname: str, /) -> int: ...
def if_indextoname(index: int, /) -> str: ...
CAPI: object


@@ -64,19 +64,19 @@ UF_NODUMP: Literal[0x00000001]
UF_NOUNLINK: Literal[0x00000010]
UF_OPAQUE: Literal[0x00000008]
def S_IMODE(mode: int) -> int: ...
def S_IFMT(mode: int) -> int: ...
def S_ISBLK(mode: int) -> bool: ...
def S_ISCHR(mode: int) -> bool: ...
def S_ISDIR(mode: int) -> bool: ...
def S_ISDOOR(mode: int) -> bool: ...
def S_ISFIFO(mode: int) -> bool: ...
def S_ISLNK(mode: int) -> bool: ...
def S_ISPORT(mode: int) -> bool: ...
def S_ISREG(mode: int) -> bool: ...
def S_ISSOCK(mode: int) -> bool: ...
def S_ISWHT(mode: int) -> bool: ...
def filemode(mode: int) -> str: ...
def S_IMODE(mode: int, /) -> int: ...
def S_IFMT(mode: int, /) -> int: ...
def S_ISBLK(mode: int, /) -> bool: ...
def S_ISCHR(mode: int, /) -> bool: ...
def S_ISDIR(mode: int, /) -> bool: ...
def S_ISDOOR(mode: int, /) -> bool: ...
def S_ISFIFO(mode: int, /) -> bool: ...
def S_ISLNK(mode: int, /) -> bool: ...
def S_ISPORT(mode: int, /) -> bool: ...
def S_ISREG(mode: int, /) -> bool: ...
def S_ISSOCK(mode: int, /) -> bool: ...
def S_ISWHT(mode: int, /) -> bool: ...
def filemode(mode: int, /) -> str: ...
if sys.platform == "win32":
IO_REPARSE_TAG_SYMLINK: int
@@ -101,3 +101,17 @@ if sys.platform == "win32":
FILE_ATTRIBUTE_SYSTEM: Literal[4]
FILE_ATTRIBUTE_TEMPORARY: Literal[256]
FILE_ATTRIBUTE_VIRTUAL: Literal[65536]
if sys.version_info >= (3, 13):
SF_SETTABLE: Literal[0x3FFF0000]
# https://github.com/python/cpython/issues/114081#issuecomment-2119017790
# SF_RESTRICTED: Literal[0x00080000]
SF_FIRMLINK: Literal[0x00800000]
SF_DATALESS: Literal[0x40000000]
SF_SUPPORTED: Literal[0x9F0000]
SF_SYNTHETIC: Literal[0xC0000000]
UF_TRACKED: Literal[0x00000040]
UF_DATAVAULT: Literal[0x00000080]
UF_SETTABLE: Literal[0x0000FFFF]


@@ -326,6 +326,8 @@ class structseq(Generic[_T_co]):
# but only has any meaning if you supply it a dict where the keys are strings.
# https://github.com/python/typeshed/pull/6560#discussion_r767149830
def __new__(cls: type[Self], sequence: Iterable[_T_co], dict: dict[str, Any] = ...) -> Self: ...
if sys.version_info >= (3, 13):
def __replace__(self: Self, **kwargs: Any) -> Self: ...
# Superset of typing.AnyStr that also includes LiteralString
AnyOrLiteralStr = TypeVar("AnyOrLiteralStr", str, bytes, LiteralString) # noqa: Y001


@@ -27,7 +27,7 @@ class ReferenceType(Generic[_T]):
def __eq__(self, value: object, /) -> bool: ...
def __hash__(self) -> int: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
ref = ReferenceType


@@ -48,4 +48,4 @@ class WeakSet(MutableSet[_T]):
def __or__(self, other: Iterable[_S]) -> WeakSet[_S | _T]: ...
def isdisjoint(self, other: Iterable[_T]) -> bool: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...


@@ -318,19 +318,36 @@ class Action(_AttributeHolder):
required: bool
help: str | None
metavar: str | tuple[str, ...] | None
def __init__(
self,
option_strings: Sequence[str],
dest: str,
nargs: int | str | None = None,
const: _T | None = None,
default: _T | str | None = None,
type: Callable[[str], _T] | FileType | None = None,
choices: Iterable[_T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
nargs: int | str | None = None,
const: _T | None = None,
default: _T | str | None = None,
type: Callable[[str], _T] | FileType | None = None,
choices: Iterable[_T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
dest: str,
nargs: int | str | None = None,
const: _T | None = None,
default: _T | str | None = None,
type: Callable[[str], _T] | FileType | None = None,
choices: Iterable[_T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
def __call__(
self, parser: ArgumentParser, namespace: Namespace, values: str | Sequence[Any] | None, option_string: str | None = None
) -> None: ...
@@ -339,29 +356,56 @@ class Action(_AttributeHolder):
if sys.version_info >= (3, 12):
class BooleanOptionalAction(Action):
@overload
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool | None = None,
*,
required: bool = False,
help: str | None = None,
) -> None: ...
@overload
@deprecated("The `type`, `choices`, and `metavar` parameters are ignored and will be removed in Python 3.14.")
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: _T | bool | None = None,
type: Callable[[str], _T] | FileType | None = sentinel,
choices: Iterable[_T] | None = sentinel,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = sentinel,
) -> None: ...
if sys.version_info >= (3, 13):
@overload
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool | None = None,
*,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
@overload
@deprecated("The `type`, `choices`, and `metavar` parameters are ignored and will be removed in Python 3.14.")
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: _T | bool | None = None,
type: Callable[[str], _T] | FileType | None = sentinel,
choices: Iterable[_T] | None = sentinel,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = sentinel,
deprecated: bool = False,
) -> None: ...
else:
@overload
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool | None = None,
*,
required: bool = False,
help: str | None = None,
) -> None: ...
@overload
@deprecated("The `type`, `choices`, and `metavar` parameters are ignored and will be removed in Python 3.14.")
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: _T | bool | None = None,
type: Callable[[str], _T] | FileType | None = sentinel,
choices: Iterable[_T] | None = sentinel,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = sentinel,
) -> None: ...
elif sys.version_info >= (3, 9):
class BooleanOptionalAction(Action):
@@ -431,7 +475,19 @@ class _StoreAction(Action): ...
# undocumented
class _StoreConstAction(Action):
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any | None = None,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
deprecated: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
option_strings: Sequence[str],
@@ -456,15 +512,37 @@ class _StoreConstAction(Action):
# undocumented
class _StoreTrueAction(_StoreConstAction):
def __init__(
self, option_strings: Sequence[str], dest: str, default: bool = False, required: bool = False, help: str | None = None
) -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool = False,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self, option_strings: Sequence[str], dest: str, default: bool = False, required: bool = False, help: str | None = None
) -> None: ...
# undocumented
class _StoreFalseAction(_StoreConstAction):
def __init__(
self, option_strings: Sequence[str], dest: str, default: bool = True, required: bool = False, help: str | None = None
) -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool = True,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self, option_strings: Sequence[str], dest: str, default: bool = True, required: bool = False, help: str | None = None
) -> None: ...
# undocumented
class _AppendAction(Action): ...
@@ -474,7 +552,19 @@ class _ExtendAction(_AppendAction): ...
# undocumented
class _AppendConstAction(Action):
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any | None = None,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
deprecated: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
option_strings: Sequence[str],
@@ -499,27 +589,72 @@ class _AppendConstAction(Action):
# undocumented
class _CountAction(Action):
def __init__(
self, option_strings: Sequence[str], dest: str, default: Any = None, required: bool = False, help: str | None = None
) -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: Any = None,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self, option_strings: Sequence[str], dest: str, default: Any = None, required: bool = False, help: str | None = None
) -> None: ...
# undocumented
class _HelpAction(Action):
def __init__(
self, option_strings: Sequence[str], dest: str = "==SUPPRESS==", default: str = "==SUPPRESS==", help: str | None = None
) -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
) -> None: ...
# undocumented
class _VersionAction(Action):
version: str | None
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str = "show program's version number and exit",
) -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
deprecated: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str = "show program's version number and exit",
) -> None: ...
# undocumented
class _SubParsersAction(Action, Generic[_ArgumentParserT]):
@@ -542,7 +677,30 @@ class _SubParsersAction(Action, Generic[_ArgumentParserT]):
# Note: `add_parser` accepts all kwargs of `ArgumentParser.__init__`. It also
# accepts its own `help` and `aliases` kwargs.
if sys.version_info >= (3, 9):
if sys.version_info >= (3, 13):
def add_parser(
self,
name: str,
*,
deprecated: bool = False,
help: str | None = ...,
aliases: Sequence[str] = ...,
# Kwargs from ArgumentParser constructor
prog: str | None = ...,
usage: str | None = ...,
description: str | None = ...,
epilog: str | None = ...,
parents: Sequence[_ArgumentParserT] = ...,
formatter_class: _FormatterClass = ...,
prefix_chars: str = ...,
fromfile_prefix_chars: str | None = ...,
argument_default: Any = ...,
conflict_handler: str = ...,
add_help: bool = ...,
allow_abbrev: bool = ...,
exit_on_error: bool = ...,
) -> _ArgumentParserT: ...
elif sys.version_info >= (3, 9):
def add_parser(
self,
name: str,
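The argparse hunks above thread Python 3.13's new `deprecated` flag through every action type and `add_parser`. Assuming a 3.13 interpreter, it surfaces like this:

```py
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--legacy", action="store_true", deprecated=True)  # 3.13+
args = parser.parse_args(["--legacy"])  # warns that option '--legacy' is deprecated
```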


@@ -87,6 +87,6 @@ class array(MutableSequence[_T]):
def __buffer__(self, flags: int, /) -> memoryview: ...
def __release_buffer__(self, buffer: memoryview, /) -> None: ...
if sys.version_info >= (3, 12):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
ArrayType = array


@@ -30,12 +30,12 @@ if sys.platform == "win32":
else:
from .unix_events import *
_T = TypeVar("_T")
_T_co = TypeVar("_T_co", covariant=True)
# Aliases imported by multiple submodules in typeshed
if sys.version_info >= (3, 12):
_AwaitableLike: TypeAlias = Awaitable[_T] # noqa: Y047
_CoroutineLike: TypeAlias = Coroutine[Any, Any, _T] # noqa: Y047
_AwaitableLike: TypeAlias = Awaitable[_T_co] # noqa: Y047
_CoroutineLike: TypeAlias = Coroutine[Any, Any, _T_co] # noqa: Y047
else:
_AwaitableLike: TypeAlias = Generator[Any, None, _T] | Awaitable[_T]
_CoroutineLike: TypeAlias = Generator[Any, None, _T] | Coroutine[Any, Any, _T]
_AwaitableLike: TypeAlias = Generator[Any, None, _T_co] | Awaitable[_T_co]
_CoroutineLike: TypeAlias = Generator[Any, None, _T_co] | Coroutine[Any, Any, _T_co]


@@ -2,7 +2,7 @@ import ssl
import sys
from _typeshed import FileDescriptorLike, ReadableBuffer, StrPath, Unused, WriteableBuffer
from abc import ABCMeta, abstractmethod
from collections.abc import Callable, Coroutine, Generator, Sequence
from collections.abc import Callable, Sequence
from contextvars import Context
from socket import AddressFamily, SocketKind, _Address, _RetAddress, socket
from typing import IO, Any, Literal, Protocol, TypeVar, overload
@@ -43,7 +43,7 @@ _ProtocolFactory: TypeAlias = Callable[[], BaseProtocol]
_SSLContext: TypeAlias = bool | None | ssl.SSLContext
class _TaskFactory(Protocol):
def __call__(self, loop: AbstractEventLoop, factory: Coroutine[Any, Any, _T] | Generator[Any, None, _T], /) -> Future[_T]: ...
def __call__(self, loop: AbstractEventLoop, factory: _CoroutineLike[_T], /) -> Future[_T]: ...
class Handle:
_cancelled: bool


@@ -52,6 +52,6 @@ class Future(Awaitable[_T], Iterable[_T]):
@property
def _loop(self) -> AbstractEventLoop: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def wrap_future(future: _ConcurrentFuture[_T] | Future[_T], *, loop: AbstractEventLoop | None = None) -> Future[_T]: ...


@@ -41,7 +41,7 @@ class Queue(Generic[_T], _LoopBoundMixin): # noqa: Y059
async def join(self) -> None: ...
def task_done(self) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, type: Any) -> GenericAlias: ...
def __class_getitem__(cls, type: Any, /) -> GenericAlias: ...
class PriorityQueue(Queue[_T]): ...
class LifoQueue(Queue[_T]): ...


@@ -443,7 +443,7 @@ class Task(Future[_T_co]): # type: ignore[type-var] # pyright: ignore[reportIn
@classmethod
def all_tasks(cls, loop: AbstractEventLoop | None = None) -> set[Task[Any]]: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def all_tasks(loop: AbstractEventLoop | None = None) -> set[Task[Any]]: ...


@@ -8,5 +8,5 @@ _P = ParamSpec("_P")
def _clear() -> None: ...
def _ncallbacks() -> int: ...
def _run_exitfuncs() -> None: ...
def register(func: Callable[_P, _T], *args: _P.args, **kwargs: _P.kwargs) -> Callable[_P, _T]: ...
def unregister(func: Callable[..., object]) -> None: ...
def register(func: Callable[_P, _T], /, *args: _P.args, **kwargs: _P.kwargs) -> Callable[_P, _T]: ...
def unregister(func: Callable[..., object], /) -> None: ...


@@ -25,6 +25,8 @@ __all__ = [
if sys.version_info >= (3, 10):
__all__ += ["b32hexencode", "b32hexdecode"]
if sys.version_info >= (3, 13):
__all__ += ["z85decode", "z85encode"]
def b64encode(s: ReadableBuffer, altchars: ReadableBuffer | None = None) -> bytes: ...
def b64decode(s: str | ReadableBuffer, altchars: str | ReadableBuffer | None = None, validate: bool = False) -> bytes: ...
@@ -57,3 +59,7 @@ def decodebytes(s: ReadableBuffer) -> bytes: ...
if sys.version_info < (3, 9):
def encodestring(s: ReadableBuffer) -> bytes: ...
def decodestring(s: ReadableBuffer) -> bytes: ...
if sys.version_info >= (3, 13):
def z85encode(s: ReadableBuffer) -> bytes: ...
def z85decode(s: str | ReadableBuffer) -> bytes: ...
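Python 3.13 adds the Z85 codec to base64, matching the `__all__` and function additions above. A small round-trip, assuming 3.13+:

```py
import base64

data = b"hello world!"  # 12 bytes; Z85 encodes in 4-byte groups
encoded = base64.z85encode(data)
assert base64.z85decode(encoded) == data
```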


@@ -461,7 +461,7 @@ class str(Sequence[str]):
def format(self: LiteralString, *args: LiteralString, **kwargs: LiteralString) -> LiteralString: ...
@overload
def format(self, *args: object, **kwargs: object) -> str: ...
def format_map(self, map: _FormatMapMapping) -> str: ...
def format_map(self, mapping: _FormatMapMapping, /) -> str: ...
def index(self, sub: str, start: SupportsIndex | None = ..., end: SupportsIndex | None = ..., /) -> int: ...
def isalnum(self) -> bool: ...
def isalpha(self) -> bool: ...
@@ -495,10 +495,20 @@ class str(Sequence[str]):
def partition(self: LiteralString, sep: LiteralString, /) -> tuple[LiteralString, LiteralString, LiteralString]: ...
@overload
def partition(self, sep: str, /) -> tuple[str, str, str]: ... # type: ignore[misc]
@overload
def replace(self: LiteralString, old: LiteralString, new: LiteralString, count: SupportsIndex = -1, /) -> LiteralString: ...
@overload
def replace(self, old: str, new: str, count: SupportsIndex = -1, /) -> str: ... # type: ignore[misc]
if sys.version_info >= (3, 13):
@overload
def replace(
self: LiteralString, old: LiteralString, new: LiteralString, /, count: SupportsIndex = -1
) -> LiteralString: ...
@overload
def replace(self, old: str, new: str, /, count: SupportsIndex = -1) -> str: ... # type: ignore[misc]
else:
@overload
def replace(
self: LiteralString, old: LiteralString, new: LiteralString, count: SupportsIndex = -1, /
) -> LiteralString: ...
@overload
def replace(self, old: str, new: str, count: SupportsIndex = -1, /) -> str: ... # type: ignore[misc]
if sys.version_info >= (3, 9):
@overload
def removeprefix(self: LiteralString, prefix: LiteralString, /) -> LiteralString: ...
@@ -1214,6 +1224,9 @@ class property:
fset: Callable[[Any, Any], None] | None
fdel: Callable[[Any], None] | None
__isabstractmethod__: bool
if sys.version_info >= (3, 13):
__name__: str
def __init__(
self,
fget: Callable[[Any], Any] | None = ...,
@@ -1321,12 +1334,34 @@ def divmod(x: _T_contra, y: SupportsRDivMod[_T_contra, _T_co], /) -> _T_co: ...
# The `globals` argument to `eval` has to be `dict[str, Any]` rather than `dict[str, object]` due to invariance.
# (The `globals` argument has to be a "real dict", rather than any old mapping, unlike the `locals` argument.)
def eval(
source: str | ReadableBuffer | CodeType, globals: dict[str, Any] | None = None, locals: Mapping[str, object] | None = None, /
) -> Any: ...
if sys.version_info >= (3, 13):
def eval(
source: str | ReadableBuffer | CodeType,
/,
globals: dict[str, Any] | None = None,
locals: Mapping[str, object] | None = None,
) -> Any: ...
else:
def eval(
source: str | ReadableBuffer | CodeType,
globals: dict[str, Any] | None = None,
locals: Mapping[str, object] | None = None,
/,
) -> Any: ...
# Comment above regarding `eval` applies to `exec` as well
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
def exec(
source: str | ReadableBuffer | CodeType,
/,
globals: dict[str, Any] | None = None,
locals: Mapping[str, object] | None = None,
*,
closure: tuple[CellType, ...] | None = None,
) -> None: ...
elif sys.version_info >= (3, 11):
def exec(
source: str | ReadableBuffer | CodeType,
globals: dict[str, Any] | None = None,
@@ -2035,3 +2070,7 @@ if sys.version_info >= (3, 11):
def split(
self, condition: Callable[[_ExceptionT_co | Self], bool], /
) -> tuple[ExceptionGroup[_ExceptionT_co] | None, ExceptionGroup[_ExceptionT_co] | None]: ...
if sys.version_info >= (3, 13):
class IncompleteInputError(SyntaxError): ...
class PythonFinalizationError(RuntimeError): ...
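The builtins hunks above make several historically positional-only parameters keyword-passable on 3.13. Assuming 3.13+:

```py
print("banana".replace("a", "o", count=1))  # "bonana"; count= needs 3.13+
print(eval("x + 1", globals={"x": 41}))     # 42; globals= needs 3.13+
```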


@@ -4,7 +4,7 @@ import sys
from _typeshed import Unused
from collections.abc import Iterable, Sequence
from time import struct_time
from typing import ClassVar, Literal
from typing import ClassVar, Final
from typing_extensions import TypeAlias
__all__ = [
@@ -154,18 +154,18 @@ month_abbr: Sequence[str]
if sys.version_info >= (3, 12):
class Month(enum.IntEnum):
JANUARY: Literal[1]
FEBRUARY: Literal[2]
MARCH: Literal[3]
APRIL: Literal[4]
MAY: Literal[5]
JUNE: Literal[6]
JULY: Literal[7]
AUGUST: Literal[8]
SEPTEMBER: Literal[9]
OCTOBER: Literal[10]
NOVEMBER: Literal[11]
DECEMBER: Literal[12]
JANUARY = 1
FEBRUARY = 2
MARCH = 3
APRIL = 4
MAY = 5
JUNE = 6
JULY = 7
AUGUST = 8
SEPTEMBER = 9
OCTOBER = 10
NOVEMBER = 11
DECEMBER = 12
JANUARY = Month.JANUARY
FEBRUARY = Month.FEBRUARY
@@ -181,13 +181,13 @@ if sys.version_info >= (3, 12):
DECEMBER = Month.DECEMBER
class Day(enum.IntEnum):
MONDAY: Literal[0]
TUESDAY: Literal[1]
WEDNESDAY: Literal[2]
THURSDAY: Literal[3]
FRIDAY: Literal[4]
SATURDAY: Literal[5]
SUNDAY: Literal[6]
MONDAY = 0
TUESDAY = 1
WEDNESDAY = 2
THURSDAY = 3
FRIDAY = 4
SATURDAY = 5
SUNDAY = 6
MONDAY = Day.MONDAY
TUESDAY = Day.TUESDAY
@@ -197,12 +197,12 @@ if sys.version_info >= (3, 12):
SATURDAY = Day.SATURDAY
SUNDAY = Day.SUNDAY
else:
MONDAY: Literal[0]
TUESDAY: Literal[1]
WEDNESDAY: Literal[2]
THURSDAY: Literal[3]
FRIDAY: Literal[4]
SATURDAY: Literal[5]
SUNDAY: Literal[6]
MONDAY: Final = 0
TUESDAY: Final = 1
WEDNESDAY: Final = 2
THURSDAY: Final = 3
FRIDAY: Final = 4
SATURDAY: Final = 5
SUNDAY: Final = 6
EPOCH: Literal[1970]
EPOCH: Final = 1970
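The calendar hunk replaces `Literal[...]` annotations with real assignments: in an enum body, `JANUARY: Literal[1]` is a bare annotation that defines no member, whereas `JANUARY = 1` does. In miniature:

```py
import enum

class Month(enum.IntEnum):
    JANUARY = 1  # defines the member; an annotation alone would not

assert Month.JANUARY == 1
assert Month(1) is Month.JANUARY
```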


@@ -1,3 +1,4 @@
import sys
from codeop import CommandCompiler
from collections.abc import Callable, Mapping
from types import CodeType
@@ -18,16 +19,34 @@ class InteractiveInterpreter:
class InteractiveConsole(InteractiveInterpreter):
buffer: list[str] # undocumented
filename: str # undocumented
def __init__(self, locals: Mapping[str, Any] | None = None, filename: str = "<console>") -> None: ...
if sys.version_info >= (3, 13):
def __init__(
self, locals: Mapping[str, Any] | None = None, filename: str = "<console>", *, local_exit: bool = False
) -> None: ...
def push(self, line: str, filename: str | None = None) -> bool: ...
else:
def __init__(self, locals: Mapping[str, Any] | None = None, filename: str = "<console>") -> None: ...
def push(self, line: str) -> bool: ...
def interact(self, banner: str | None = None, exitmsg: str | None = None) -> None: ...
def push(self, line: str) -> bool: ...
def resetbuffer(self) -> None: ...
def raw_input(self, prompt: str = "") -> str: ...
def interact(
banner: str | None = None,
readfunc: Callable[[str], str] | None = None,
local: Mapping[str, Any] | None = None,
exitmsg: str | None = None,
) -> None: ...
if sys.version_info >= (3, 13):
def interact(
banner: str | None = None,
readfunc: Callable[[str], str] | None = None,
local: Mapping[str, Any] | None = None,
exitmsg: str | None = None,
local_exit: bool = False,
) -> None: ...
else:
def interact(
banner: str | None = None,
readfunc: Callable[[str], str] | None = None,
local: Mapping[str, Any] | None = None,
exitmsg: str | None = None,
) -> None: ...
def compile_command(source: str, filename: str = "<input>", symbol: str = "single") -> CodeType | None: ...
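Python 3.13 adds `local_exit` to both `InteractiveConsole` and `interact`, per the hunk above; with it, calling `exit()` inside the console returns to the caller instead of terminating the process. Assuming 3.13+:

```py
import code

code.interact(banner="demo console", exitmsg="back to the script", local_exit=True)
print("still running")  # reached after the user calls exit() in the console
```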


@@ -54,7 +54,7 @@ class Future(Generic[_T]):
def exception(self, timeout: float | None = None) -> BaseException | None: ...
def set_exception(self, exception: BaseException | None) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class Executor:
if sys.version_info >= (3, 9):


@@ -29,7 +29,7 @@ class _WorkItem(Generic[_S]):
def __init__(self, future: Future[_S], fn: Callable[..., _S], args: Iterable[Any], kwargs: Mapping[str, Any]) -> None: ...
def run(self) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def _worker(
executor_reference: ref[Any],


@@ -30,7 +30,7 @@ class ContextVar(Generic[_T]):
def set(self, value: _T, /) -> Token[_T]: ...
def reset(self, token: Token[_T], /) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
@final
class Token(Generic[_T]):
@@ -40,7 +40,7 @@ class Token(Generic[_T]):
def old_value(self) -> Any: ... # returns either _T or MISSING, but that's hard to express
MISSING: ClassVar[object]
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def copy_context() -> Context: ...


@@ -40,7 +40,6 @@ __all__ = [
"QUOTE_NONE",
"Error",
"Dialect",
"__doc__",
"excel",
"excel_tab",
"field_size_limit",
@@ -51,13 +50,14 @@ __all__ = [
"list_dialects",
"Sniffer",
"unregister_dialect",
"__version__",
"DictReader",
"DictWriter",
"unix_dialect",
]
if sys.version_info >= (3, 12):
__all__ += ["QUOTE_STRINGS", "QUOTE_NOTNULL"]
if sys.version_info < (3, 13):
__all__ += ["__doc__", "__version__"]
_T = TypeVar("_T")
@@ -111,7 +111,7 @@ class DictReader(Iterator[dict[_T | Any, str | Any]], Generic[_T]):
def __iter__(self) -> Self: ...
def __next__(self) -> dict[_T | Any, str | Any]: ...
if sys.version_info >= (3, 12):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class DictWriter(Generic[_T]):
fieldnames: Collection[_T]
@@ -139,7 +139,7 @@ class DictWriter(Generic[_T]):
def writerow(self, rowdict: Mapping[_T, Any]) -> Any: ...
def writerows(self, rowdicts: Iterable[Mapping[_T, Any]]) -> None: ...
if sys.version_info >= (3, 12):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class Sniffer:
preferred: list[str]


@@ -76,7 +76,7 @@ class LibraryLoader(Generic[_DLLT]):
def __getitem__(self, name: str) -> _DLLT: ...
def LoadLibrary(self, name: str) -> _DLLT: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
cdll: LibraryLoader[CDLL]
if sys.platform == "win32":


@@ -5,7 +5,7 @@ from _typeshed import DataclassInstance
from builtins import type as Type # alias to avoid name clashes with fields named "type"
from collections.abc import Callable, Iterable, Mapping
from typing import Any, Generic, Literal, Protocol, TypeVar, overload
from typing_extensions import TypeAlias, TypeGuard
from typing_extensions import TypeAlias, TypeIs
if sys.version_info >= (3, 9):
from types import GenericAlias
@@ -143,7 +143,7 @@ class Field(Generic[_T]):
def __set_name__(self, owner: Type[Any], name: str) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
# NOTE: Actual return type is 'Field[_T]', but we want to help type checkers
# to understand the magic that happens at runtime.
@@ -214,11 +214,9 @@ else:
def fields(class_or_instance: DataclassInstance | type[DataclassInstance]) -> tuple[Field[Any], ...]: ...
@overload
def is_dataclass(obj: DataclassInstance) -> Literal[True]: ...
def is_dataclass(obj: type) -> TypeIs[type[DataclassInstance]]: ...
@overload
def is_dataclass(obj: type) -> TypeGuard[type[DataclassInstance]]: ...
@overload
def is_dataclass(obj: object) -> TypeGuard[DataclassInstance | type[DataclassInstance]]: ...
def is_dataclass(obj: object) -> TypeIs[DataclassInstance | type[DataclassInstance]]: ...
class FrozenInstanceError(AttributeError): ...
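Above, `is_dataclass` switches from `TypeGuard` to `TypeIs`, which also narrows the negative branch. A sketch of the difference as a type checker sees it (runtime behavior is unchanged):

```py
from dataclasses import dataclass, is_dataclass

@dataclass
class Point:
    x: int
    y: int

def describe(obj: Point | int) -> None:
    if is_dataclass(obj):
        print("dataclass:", obj)    # narrowed to Point
    else:
        print("plain value:", obj)  # with TypeIs, narrowed to int here too

describe(Point(1, 2))
describe(3)
```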


@@ -79,6 +79,9 @@ class date:
def isoformat(self) -> str: ...
def timetuple(self) -> struct_time: ...
def toordinal(self) -> int: ...
if sys.version_info >= (3, 13):
def __replace__(self, /, *, year: SupportsIndex = ..., month: SupportsIndex = ..., day: SupportsIndex = ...) -> Self: ...
def replace(self, year: SupportsIndex = ..., month: SupportsIndex = ..., day: SupportsIndex = ...) -> Self: ...
def __le__(self, value: date, /) -> bool: ...
def __lt__(self, value: date, /) -> bool: ...
@@ -148,6 +151,19 @@ class time:
def utcoffset(self) -> timedelta | None: ...
def tzname(self) -> str | None: ...
def dst(self) -> timedelta | None: ...
if sys.version_info >= (3, 13):
def __replace__(
self,
/,
*,
hour: SupportsIndex = ...,
minute: SupportsIndex = ...,
second: SupportsIndex = ...,
microsecond: SupportsIndex = ...,
tzinfo: _TzInfo | None = ...,
fold: int = ...,
) -> Self: ...
def replace(
self,
hour: SupportsIndex = ...,
@@ -263,6 +279,22 @@ class datetime(date):
def date(self) -> _Date: ...
def time(self) -> _Time: ...
def timetz(self) -> _Time: ...
if sys.version_info >= (3, 13):
def __replace__(
self,
/,
*,
year: SupportsIndex = ...,
month: SupportsIndex = ...,
day: SupportsIndex = ...,
hour: SupportsIndex = ...,
minute: SupportsIndex = ...,
second: SupportsIndex = ...,
microsecond: SupportsIndex = ...,
tzinfo: _TzInfo | None = ...,
fold: int = ...,
) -> Self: ...
def replace(
self,
year: SupportsIndex = ...,
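The `__replace__` methods added above back Python 3.13's `copy.replace()` protocol. Assuming 3.13+:

```py
import copy
from datetime import date

d = date(2024, 6, 5)
print(copy.replace(d, year=2025))  # 2025-06-05, equivalent to d.replace(year=2025)
```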


@@ -55,7 +55,7 @@ class SequenceMatcher(Generic[_T]):
def quick_ratio(self) -> float: ...
def real_quick_ratio(self) -> float: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
@overload
def get_close_matches(word: AnyStr, possibilities: Iterable[AnyStr], n: int = 3, cutoff: float = 0.6) -> list[AnyStr]: ...


@@ -47,7 +47,22 @@ if sys.version_info >= (3, 11):
col_offset: int | None = None
end_col_offset: int | None = None
if sys.version_info >= (3, 11):
if sys.version_info >= (3, 13):
class _Instruction(NamedTuple):
opname: str
opcode: int
arg: int | None
argval: Any
argrepr: str
offset: int
start_offset: int
starts_line: bool
line_number: int | None
label: int | None = None
positions: Positions | None = None
cache_info: list[tuple[str, int, Any]] | None = None
elif sys.version_info >= (3, 11):
class _Instruction(NamedTuple):
opname: str
opcode: int
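On 3.13, `_Instruction` grows `line_number`, `label`, and `cache_info` fields, as the first branch above shows. A quick look, assuming 3.13+:

```py
import dis

for ins in dis.get_instructions(lambda x: x + 1):
    print(ins.opname, ins.line_number)  # line_number is new in 3.13
```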


@@ -1,20 +1,35 @@
from _typeshed import StrOrBytesPath, StrPath
from typing import Literal, overload
@overload
def make_archive(
base_name: str,
format: str,
root_dir: str | None = None,
root_dir: StrOrBytesPath | None = None,
base_dir: str | None = None,
verbose: int = 0,
dry_run: int = 0,
verbose: bool | Literal[0, 1] = 0,
dry_run: bool | Literal[0, 1] = 0,
owner: str | None = None,
group: str | None = None,
) -> str: ...
@overload
def make_archive(
base_name: StrPath,
format: str,
root_dir: StrOrBytesPath,
base_dir: str | None = None,
verbose: bool | Literal[0, 1] = 0,
dry_run: bool | Literal[0, 1] = 0,
owner: str | None = None,
group: str | None = None,
) -> str: ...
def make_tarball(
base_name: str,
base_dir: str,
base_dir: StrPath,
compress: str | None = "gzip",
verbose: int = 0,
dry_run: int = 0,
verbose: bool | Literal[0, 1] = 0,
dry_run: bool | Literal[0, 1] = 0,
owner: str | None = None,
group: str | None = None,
) -> str: ...
def make_zipfile(base_name: str, base_dir: str, verbose: int = 0, dry_run: int = 0) -> str: ...
def make_zipfile(base_name: str, base_dir: str, verbose: bool | Literal[0, 1] = 0, dry_run: bool | Literal[0, 1] = 0) -> str: ...
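The widened overloads above let `root_dir` be any path-like object rather than `str` only. For instance (a sketch; writes backup.zip in the current directory):

```py
import pathlib
import shutil

archive = shutil.make_archive("backup", "zip", root_dir=pathlib.Path("."))
print(archive)  # .../backup.zip
```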


@@ -1,5 +1,7 @@
from collections.abc import Callable
from typing import Any
from _typeshed import BytesPath, StrPath
from collections.abc import Callable, Iterable
from distutils.file_util import _BytesPathT, _StrPathT
from typing import Any, Literal, overload
from typing_extensions import TypeAlias
_Macro: TypeAlias = tuple[str] | tuple[str, str | None]
@@ -10,7 +12,11 @@ def gen_lib_options(
def gen_preprocess_options(macros: list[_Macro], include_dirs: list[str]) -> list[str]: ...
def get_default_compiler(osname: str | None = None, platform: str | None = None) -> str: ...
def new_compiler(
plat: str | None = None, compiler: str | None = None, verbose: int = 0, dry_run: int = 0, force: int = 0
plat: str | None = None,
compiler: str | None = None,
verbose: bool | Literal[0, 1] = 0,
dry_run: bool | Literal[0, 1] = 0,
force: bool | Literal[0, 1] = 0,
) -> CCompiler: ...
def show_compilers() -> None: ...
@@ -25,7 +31,9 @@ class CCompiler:
library_dirs: list[str]
runtime_library_dirs: list[str]
objects: list[str]
def __init__(self, verbose: int = 0, dry_run: int = 0, force: int = 0) -> None: ...
def __init__(
self, verbose: bool | Literal[0, 1] = 0, dry_run: bool | Literal[0, 1] = 0, force: bool | Literal[0, 1] = 0
) -> None: ...
def add_include_dir(self, dir: str) -> None: ...
def set_include_dirs(self, dirs: list[str]) -> None: ...
def add_library(self, libname: str) -> None: ...
@@ -39,7 +47,7 @@ class CCompiler:
def add_link_object(self, object: str) -> None: ...
def set_link_objects(self, objects: list[str]) -> None: ...
def detect_language(self, sources: str | list[str]) -> str | None: ...
def find_library_file(self, dirs: list[str], lib: str, debug: bool = ...) -> str | None: ...
def find_library_file(self, dirs: list[str], lib: str, debug: bool | Literal[0, 1] = 0) -> str | None: ...
def has_function(
self,
funcname: str,
@@ -58,7 +66,7 @@ class CCompiler:
output_dir: str | None = None,
macros: list[_Macro] | None = None,
include_dirs: list[str] | None = None,
debug: bool = ...,
debug: bool | Literal[0, 1] = 0,
extra_preargs: list[str] | None = None,
extra_postargs: list[str] | None = None,
depends: list[str] | None = None,
@@ -68,7 +76,7 @@ class CCompiler:
objects: list[str],
output_libname: str,
output_dir: str | None = None,
debug: bool = ...,
debug: bool | Literal[0, 1] = 0,
target_lang: str | None = None,
) -> None: ...
def link(
@@ -81,7 +89,7 @@ class CCompiler:
library_dirs: list[str] | None = None,
runtime_library_dirs: list[str] | None = None,
export_symbols: list[str] | None = None,
debug: bool = ...,
debug: bool | Literal[0, 1] = 0,
extra_preargs: list[str] | None = None,
extra_postargs: list[str] | None = None,
build_temp: str | None = None,
@@ -95,7 +103,7 @@ class CCompiler:
libraries: list[str] | None = None,
library_dirs: list[str] | None = None,
runtime_library_dirs: list[str] | None = None,
debug: bool = ...,
debug: bool | Literal[0, 1] = 0,
extra_preargs: list[str] | None = None,
extra_postargs: list[str] | None = None,
target_lang: str | None = None,
@@ -109,7 +117,7 @@ class CCompiler:
library_dirs: list[str] | None = None,
runtime_library_dirs: list[str] | None = None,
export_symbols: list[str] | None = None,
debug: bool = ...,
debug: bool | Literal[0, 1] = 0,
extra_preargs: list[str] | None = None,
extra_postargs: list[str] | None = None,
build_temp: str | None = None,
@@ -124,7 +132,7 @@ class CCompiler:
library_dirs: list[str] | None = None,
runtime_library_dirs: list[str] | None = None,
export_symbols: list[str] | None = None,
debug: bool = ...,
debug: bool | Literal[0, 1] = 0,
extra_preargs: list[str] | None = None,
extra_postargs: list[str] | None = None,
build_temp: str | None = None,
@@ -139,14 +147,27 @@ class CCompiler:
extra_preargs: list[str] | None = None,
extra_postargs: list[str] | None = None,
) -> None: ...
def executable_filename(self, basename: str, strip_dir: int = 0, output_dir: str = "") -> str: ...
def library_filename(self, libname: str, lib_type: str = "static", strip_dir: int = 0, output_dir: str = "") -> str: ...
def object_filenames(self, source_filenames: list[str], strip_dir: int = 0, output_dir: str = "") -> list[str]: ...
def shared_object_filename(self, basename: str, strip_dir: int = 0, output_dir: str = "") -> str: ...
@overload
def executable_filename(self, basename: str, strip_dir: Literal[0, False] = 0, output_dir: StrPath = "") -> str: ...
@overload
def executable_filename(self, basename: StrPath, strip_dir: Literal[1, True], output_dir: StrPath = "") -> str: ...
def library_filename(
self, libname: str, lib_type: str = "static", strip_dir: bool | Literal[0, 1] = 0, output_dir: StrPath = ""
) -> str: ...
def object_filenames(
self, source_filenames: Iterable[StrPath], strip_dir: bool | Literal[0, 1] = 0, output_dir: StrPath | None = ""
) -> list[str]: ...
@overload
def shared_object_filename(self, basename: str, strip_dir: Literal[0, False] = 0, output_dir: StrPath = "") -> str: ...
@overload
def shared_object_filename(self, basename: StrPath, strip_dir: Literal[1, True], output_dir: StrPath = "") -> str: ...
def execute(self, func: Callable[..., object], args: tuple[Any, ...], msg: str | None = None, level: int = 1) -> None: ...
def spawn(self, cmd: list[str]) -> None: ...
def mkpath(self, name: str, mode: int = 0o777) -> None: ...
def move_file(self, src: str, dst: str) -> str: ...
@overload
def move_file(self, src: StrPath, dst: _StrPathT) -> _StrPathT | str: ...
@overload
def move_file(self, src: BytesPath, dst: _BytesPathT) -> _BytesPathT | bytes: ...
def announce(self, msg: str, level: int = 1) -> None: ...
def warn(self, msg: str) -> None: ...
def debug_print(self, msg: str) -> None: ...
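Throughout these distutils stubs, bare `int` flags become `bool | Literal[0, 1]`, accepting both the legacy 0/1 ints and real booleans while letting a checker reject other ints. The pattern in isolation (hypothetical signature, not the real stub):

```py
from typing import Literal

def spawn(cmd: list[str], search_path: bool | Literal[0, 1] = 1) -> None: ...

spawn(["ls"], search_path=True)  # OK
spawn(["ls"], search_path=0)     # OK: legacy int flag
# spawn(["ls"], search_path=2)   # rejected by a type checker
```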

View File

@@ -1,12 +1,14 @@
from _typeshed import Incomplete
from _typeshed import BytesPath, Incomplete, StrOrBytesPath, StrPath, Unused
from abc import abstractmethod
from collections.abc import Callable, Iterable
from distutils.dist import Distribution
from typing import Any
from distutils.file_util import _BytesPathT, _StrPathT
from typing import Any, ClassVar, Literal, overload
class Command:
distribution: Distribution
sub_commands: list[tuple[str, Callable[[Command], bool] | None]]
# Any to work around variance issues
sub_commands: ClassVar[list[tuple[str, Callable[[Any], bool] | None]]]
def __init__(self, dist: Distribution) -> None: ...
@abstractmethod
def initialize_options(self) -> None: ...
@@ -22,32 +24,63 @@ class Command:
def ensure_dirname(self, option: str) -> None: ...
def get_command_name(self) -> str: ...
def set_undefined_options(self, src_cmd: str, *option_pairs: tuple[str, str]) -> None: ...
def get_finalized_command(self, command: str, create: int = 1) -> Command: ...
def reinitialize_command(self, command: Command | str, reinit_subcommands: int = 0) -> Command: ...
def get_finalized_command(self, command: str, create: bool | Literal[0, 1] = 1) -> Command: ...
def reinitialize_command(self, command: Command | str, reinit_subcommands: bool | Literal[0, 1] = 0) -> Command: ...
def run_command(self, command: str) -> None: ...
def get_sub_commands(self) -> list[str]: ...
def warn(self, msg: str) -> None: ...
def execute(self, func: Callable[..., object], args: Iterable[Any], msg: str | None = None, level: int = 1) -> None: ...
def mkpath(self, name: str, mode: int = 0o777) -> None: ...
@overload
def copy_file(
self, infile: str, outfile: str, preserve_mode: int = 1, preserve_times: int = 1, link: str | None = None, level: Any = 1
) -> tuple[str, bool]: ... # level is not used
self,
infile: StrPath,
outfile: _StrPathT,
preserve_mode: bool | Literal[0, 1] = 1,
preserve_times: bool | Literal[0, 1] = 1,
link: str | None = None,
level: Unused = 1,
) -> tuple[_StrPathT | str, bool]: ...
@overload
def copy_file(
self,
infile: BytesPath,
outfile: _BytesPathT,
preserve_mode: bool | Literal[0, 1] = 1,
preserve_times: bool | Literal[0, 1] = 1,
link: str | None = None,
level: Unused = 1,
) -> tuple[_BytesPathT | bytes, bool]: ...
def copy_tree(
self,
infile: str,
infile: StrPath,
outfile: str,
preserve_mode: int = 1,
preserve_times: int = 1,
preserve_symlinks: int = 0,
level: Any = 1,
) -> list[str]: ... # level is not used
def move_file(self, src: str, dst: str, level: Any = 1) -> str: ... # level is not used
def spawn(self, cmd: Iterable[str], search_path: int = 1, level: Any = 1) -> None: ... # level is not used
preserve_mode: bool | Literal[0, 1] = 1,
preserve_times: bool | Literal[0, 1] = 1,
preserve_symlinks: bool | Literal[0, 1] = 0,
level: Unused = 1,
) -> list[str]: ...
@overload
def move_file(self, src: StrPath, dst: _StrPathT, level: Unused = 1) -> _StrPathT | str: ...
@overload
def move_file(self, src: BytesPath, dst: _BytesPathT, level: Unused = 1) -> _BytesPathT | bytes: ...
def spawn(self, cmd: Iterable[str], search_path: bool | Literal[0, 1] = 1, level: Unused = 1) -> None: ...
@overload
def make_archive(
self,
base_name: str,
format: str,
root_dir: str | None = None,
root_dir: StrOrBytesPath | None = None,
base_dir: str | None = None,
owner: str | None = None,
group: str | None = None,
) -> str: ...
@overload
def make_archive(
self,
base_name: StrPath,
format: str,
root_dir: StrOrBytesPath,
base_dir: str | None = None,
owner: str | None = None,
group: str | None = None,
@@ -55,12 +88,12 @@ class Command:
def make_file(
self,
infiles: str | list[str] | tuple[str, ...],
outfile: str,
outfile: StrOrBytesPath,
func: Callable[..., object],
args: list[Any],
exec_msg: str | None = None,
skip_msg: str | None = None,
level: Any = 1,
) -> None: ... # level is not used
level: Unused = 1,
) -> None: ...
def ensure_finalized(self) -> None: ...
def dump_options(self, header: Incomplete | None = None, indent: str = "") -> None: ...
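A recurring theme in this sync: flag parameters move from `int` (or `Any`) to `bool | Literal[0, 1]`. distutils historically takes 0/1 ints for its flags, so a plain `bool` would reject long-standing call sites, while a plain `int` would accept nonsense like `preserve_mode=5`. A minimal sketch of what the widened signature admits (the file names are hypothetical):

```py
from distutils.cmd import Command

def copy_fixture(cmd: Command) -> None:
    # Legacy 0/1 ints and real bools both satisfy `bool | Literal[0, 1]`;
    # an arbitrary int such as preserve_mode=5 no longer type-checks.
    cmd.copy_file("src.txt", "dst.txt", preserve_mode=1)
    cmd.copy_file("src.txt", "dst.txt", preserve_mode=True)
```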

@@ -1,5 +1,5 @@
import sys
from typing import Any
from typing import Any, Literal
from ..cmd import Command
@@ -9,9 +9,9 @@ if sys.platform == "win32":
class PyDialog(Dialog):
def __init__(self, *args, **kw) -> None: ...
def title(self, title) -> None: ...
def back(self, title, next, name: str = "Back", active: int = 1): ...
def cancel(self, title, next, name: str = "Cancel", active: int = 1): ...
def next(self, title, next, name: str = "Next", active: int = 1): ...
def back(self, title, next, name: str = "Back", active: bool | Literal[0, 1] = 1): ...
def cancel(self, title, next, name: str = "Cancel", active: bool | Literal[0, 1] = 1): ...
def next(self, title, next, name: str = "Next", active: bool | Literal[0, 1] = 1): ...
def xbutton(self, name, title, next, xpos): ...
class bdist_msi(Command):

@@ -1,4 +1,5 @@
from typing import Any
from collections.abc import Callable
from typing import Any, ClassVar
from ..cmd import Command
@@ -28,4 +29,5 @@ class build(Command):
def has_c_libraries(self): ...
def has_ext_modules(self): ...
def has_scripts(self): ...
sub_commands: Any
# Any to work around variance issues
sub_commands: ClassVar[list[tuple[str, Callable[[Any], bool] | None]]]
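The `# Any to work around variance issues` comment refers to parameter contravariance: `Callable[[Command], bool]` would not accept a predicate declared against a concrete `Command` subclass. A rough illustration of the failure the `Any` avoids (the predicate name is made up; `has_ext_modules` comes from the stub above):

```py
from collections.abc import Callable
from typing import Any

from distutils.command.build import build

def wants_ext_modules(cmd: build) -> bool:  # predicate typed against a subclass
    return bool(cmd.has_ext_modules())

# Callable is contravariant in its parameters, so Callable[[Command], bool]
# would reject wants_ext_modules; typing the parameter as Any sidesteps that:
entry: tuple[str, Callable[[Any], bool] | None] = ("build_ext", wants_ext_modules)
```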

@@ -1,4 +1,4 @@
from typing import Any
from typing import Any, Literal
from ..cmd import Command
from ..util import Mixin2to3 as Mixin2to3
@@ -32,7 +32,7 @@ class build_py(Command):
def find_all_modules(self): ...
def get_source_files(self): ...
def get_module_outfile(self, build_dir, package, module): ...
def get_outputs(self, include_bytecode: int = 1): ...
def get_outputs(self, include_bytecode: bool | Literal[0, 1] = 1): ...
def build_module(self, module, module_file, package): ...
def build_modules(self) -> None: ...
def build_packages(self) -> None: ...

@@ -1,4 +1,4 @@
from typing import Any
from typing import Any, Literal
from typing_extensions import TypeAlias
from ..cmd import Command
@@ -16,7 +16,7 @@ class SilentReporter(_Reporter):
report_level,
halt_level,
stream: Any | None = ...,
debug: int = ...,
debug: bool | Literal[0, 1] = 0,
encoding: str = ...,
error_handler: str = ...,
) -> None: ...

@@ -1,6 +1,7 @@
from _typeshed import StrOrBytesPath
from collections.abc import Sequence
from re import Pattern
from typing import Any
from typing import Any, Literal
from ..ccompiler import CCompiler
from ..cmd import Command
@@ -65,8 +66,8 @@ class config(Command):
include_dirs: Sequence[str] | None = None,
libraries: Sequence[str] | None = None,
library_dirs: Sequence[str] | None = None,
decl: int = 0,
call: int = 0,
decl: bool | Literal[0, 1] = 0,
call: bool | Literal[0, 1] = 0,
) -> bool: ...
def check_lib(
self,
@@ -80,4 +81,4 @@ class config(Command):
self, header: str, include_dirs: Sequence[str] | None = None, library_dirs: Sequence[str] | None = None, lang: str = "c"
) -> bool: ...
def dump_file(filename: str, head: Any | None = None) -> None: ...
def dump_file(filename: StrOrBytesPath, head: Any | None = None) -> None: ...

@@ -1,4 +1,5 @@
from typing import Any
from collections.abc import Callable
from typing import Any, ClassVar
from ..cmd import Command
@@ -60,4 +61,5 @@ class install(Command):
def has_headers(self): ...
def has_scripts(self): ...
def has_data(self): ...
sub_commands: Any
# Any to work around variance issues
sub_commands: ClassVar[list[tuple[str, Callable[[Any], bool] | None]]]

@@ -1,10 +1,12 @@
from typing import Any
from collections.abc import Callable
from typing import Any, ClassVar
from ..config import PyPIRCCommand
class register(PyPIRCCommand):
description: str
sub_commands: Any
# Any to work around variance issues
sub_commands: ClassVar[list[tuple[str, Callable[[Any], bool] | None]]]
list_classifiers: int
strict: int
def initialize_options(self) -> None: ...

@@ -1,4 +1,5 @@
from typing import Any
from collections.abc import Callable
from typing import Any, ClassVar
from ..cmd import Command
@@ -11,7 +12,8 @@ class sdist(Command):
boolean_options: Any
help_options: Any
negative_opt: Any
sub_commands: Any
# Any to work around variance issues
sub_commands: ClassVar[list[tuple[str, Callable[[Any], bool] | None]]]
READMES: Any
template: Any
manifest: Any

@@ -3,7 +3,7 @@ from collections.abc import Mapping
from distutils.cmd import Command as Command
from distutils.dist import Distribution as Distribution
from distutils.extension import Extension as Extension
from typing import Any
from typing import Any, Literal
USAGE: str
@@ -45,7 +45,7 @@ def setup(
command_packages: list[str] = ...,
command_options: Mapping[str, Mapping[str, tuple[Any, Any]]] = ...,
package_data: Mapping[str, list[str]] = ...,
include_package_data: bool = ...,
include_package_data: bool | Literal[0, 1] = ...,
libraries: list[str] = ...,
headers: list[str] = ...,
ext_package: str = ...,

@@ -1,3 +1,14 @@
def newer(source: str, target: str) -> bool: ...
def newer_pairwise(sources: list[str], targets: list[str]) -> list[tuple[str, str]]: ...
def newer_group(sources: list[str], target: str, missing: str = "error") -> bool: ...
from _typeshed import StrOrBytesPath, SupportsLenAndGetItem
from collections.abc import Iterable
from typing import Literal, TypeVar
_SourcesT = TypeVar("_SourcesT", bound=StrOrBytesPath)
_TargetsT = TypeVar("_TargetsT", bound=StrOrBytesPath)
def newer(source: StrOrBytesPath, target: StrOrBytesPath) -> bool | Literal[1]: ...
def newer_pairwise(
sources: SupportsLenAndGetItem[_SourcesT], targets: SupportsLenAndGetItem[_TargetsT]
) -> tuple[list[_SourcesT], list[_TargetsT]]: ...
def newer_group(
sources: Iterable[StrOrBytesPath], target: StrOrBytesPath, missing: Literal["error", "ignore", "newer"] = "error"
) -> Literal[0, 1]: ...
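The rewritten `dep_util` stubs thread TypeVars through `newer_pairwise` so argument types survive into the result, and constrain `missing` to its three valid strings. A sketch of the inference this enables (function and path names are illustrative only):

```py
from distutils.dep_util import newer_group, newer_pairwise

def stale(sources: list[str], objects: list[str]) -> bool:
    # Inferred as tuple[list[str], list[str]] thanks to the new TypeVars:
    stale_sources, stale_objects = newer_pairwise(sources, objects)
    # `missing` is now constrained to "error" | "ignore" | "newer":
    return bool(newer_group(stale_sources, "prog.bin", missing="newer"))
```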

@@ -1,13 +1,23 @@
def mkpath(name: str, mode: int = 0o777, verbose: int = 1, dry_run: int = 0) -> list[str]: ...
def create_tree(base_dir: str, files: list[str], mode: int = 0o777, verbose: int = 1, dry_run: int = 0) -> None: ...
from _typeshed import StrOrBytesPath, StrPath
from collections.abc import Iterable
from typing import Literal
def mkpath(name: str, mode: int = 0o777, verbose: bool | Literal[0, 1] = 1, dry_run: bool | Literal[0, 1] = 0) -> list[str]: ...
def create_tree(
base_dir: StrPath,
files: Iterable[StrPath],
mode: int = 0o777,
verbose: bool | Literal[0, 1] = 1,
dry_run: bool | Literal[0, 1] = 0,
) -> None: ...
def copy_tree(
src: str,
src: StrPath,
dst: str,
preserve_mode: int = 1,
preserve_times: int = 1,
preserve_symlinks: int = 0,
update: int = 0,
verbose: int = 1,
dry_run: int = 0,
preserve_mode: bool | Literal[0, 1] = 1,
preserve_times: bool | Literal[0, 1] = 1,
preserve_symlinks: bool | Literal[0, 1] = 0,
update: bool | Literal[0, 1] = 0,
verbose: bool | Literal[0, 1] = 1,
dry_run: bool | Literal[0, 1] = 0,
) -> list[str]: ...
def remove_tree(directory: str, verbose: int = 1, dry_run: int = 0) -> None: ...
def remove_tree(directory: StrOrBytesPath, verbose: bool | Literal[0, 1] = 1, dry_run: bool | Literal[0, 1] = 0) -> None: ...

@@ -1,8 +1,8 @@
from _typeshed import FileDescriptorOrPath, Incomplete, SupportsWrite
from _typeshed import Incomplete, StrOrBytesPath, StrPath, SupportsWrite
from collections.abc import Iterable, Mapping
from distutils.cmd import Command
from re import Pattern
from typing import IO, Any, ClassVar, TypeVar, overload
from typing import IO, Any, ClassVar, Literal, TypeVar, overload
from typing_extensions import TypeAlias
command_re: Pattern[str]
@@ -11,7 +11,7 @@ _OptionsList: TypeAlias = list[tuple[str, str | None, str, int] | tuple[str, str
_CommandT = TypeVar("_CommandT", bound=Command)
class DistributionMetadata:
def __init__(self, path: FileDescriptorOrPath | None = None) -> None: ...
def __init__(self, path: StrOrBytesPath | None = None) -> None: ...
name: str | None
version: str | None
author: str | None
@@ -30,7 +30,7 @@ class DistributionMetadata:
requires: list[str] | None
obsoletes: list[str] | None
def read_pkg_file(self, file: IO[str]) -> None: ...
def write_pkg_info(self, base_dir: str) -> None: ...
def write_pkg_info(self, base_dir: StrPath) -> None: ...
def write_pkg_file(self, file: SupportsWrite[str]) -> None: ...
def get_name(self) -> str: ...
def get_version(self) -> str: ...
@@ -63,7 +63,10 @@ class Distribution:
def __init__(self, attrs: Mapping[str, Any] | None = None) -> None: ...
def get_option_dict(self, command: str) -> dict[str, tuple[str, str]]: ...
def parse_config_files(self, filenames: Iterable[str] | None = None) -> None: ...
def get_command_obj(self, command: str, create: bool = True) -> Command | None: ...
@overload
def get_command_obj(self, command: str, create: Literal[1, True] = 1) -> Command: ...
@overload
def get_command_obj(self, command: str, create: Literal[0, False]) -> Command | None: ...
global_options: ClassVar[_OptionsList]
common_usage: ClassVar[str]
display_options: ClassVar[_OptionsList]
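Splitting `get_command_obj` into overloads on `create` means the common call no longer forces a needless `None` check; only `create=0`/`create=False` can yield `None`. A sketch, assuming a checker that selects overloads via the `Literal` defaults:

```py
from distutils.dist import Distribution

def finalize_build(dist: Distribution) -> None:
    cmd = dist.get_command_obj("build")  # Command, created on demand
    cmd.ensure_finalized()               # no None check required
    existing = dist.get_command_obj("build", create=0)  # Command | None
    if existing is not None:
        existing.ensure_finalized()
```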

@@ -1,14 +1,38 @@
from collections.abc import Sequence
from _typeshed import BytesPath, StrOrBytesPath, StrPath
from collections.abc import Iterable
from typing import Literal, TypeVar, overload
_StrPathT = TypeVar("_StrPathT", bound=StrPath)
_BytesPathT = TypeVar("_BytesPathT", bound=BytesPath)
@overload
def copy_file(
src: str,
dst: str,
preserve_mode: bool = ...,
preserve_times: bool = ...,
update: bool = ...,
src: StrPath,
dst: _StrPathT,
preserve_mode: bool | Literal[0, 1] = 1,
preserve_times: bool | Literal[0, 1] = 1,
update: bool | Literal[0, 1] = 0,
link: str | None = None,
verbose: bool = ...,
dry_run: bool = ...,
) -> tuple[str, str]: ...
def move_file(src: str, dst: str, verbose: bool = ..., dry_run: bool = ...) -> str: ...
def write_file(filename: str, contents: Sequence[str]) -> None: ...
verbose: bool | Literal[0, 1] = 1,
dry_run: bool | Literal[0, 1] = 0,
) -> tuple[_StrPathT | str, bool]: ...
@overload
def copy_file(
src: BytesPath,
dst: _BytesPathT,
preserve_mode: bool | Literal[0, 1] = 1,
preserve_times: bool | Literal[0, 1] = 1,
update: bool | Literal[0, 1] = 0,
link: str | None = None,
verbose: bool | Literal[0, 1] = 1,
dry_run: bool | Literal[0, 1] = 0,
) -> tuple[_BytesPathT | bytes, bool]: ...
@overload
def move_file(
src: StrPath, dst: _StrPathT, verbose: bool | Literal[0, 1] = 0, dry_run: bool | Literal[0, 1] = 0
) -> _StrPathT | str: ...
@overload
def move_file(
src: BytesPath, dst: _BytesPathT, verbose: bool | Literal[0, 1] = 0, dry_run: bool | Literal[0, 1] = 0
) -> _BytesPathT | bytes: ...
def write_file(filename: StrOrBytesPath, contents: Iterable[str]) -> None: ...
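`copy_file` and `move_file` now bind a TypeVar to the destination, so a `pathlib.Path` destination no longer degrades to `str` in the result type. A minimal sketch (the wrapper is hypothetical):

```py
from pathlib import Path

from distutils.file_util import move_file

def relocate(src: str, dst: Path) -> Path | str:
    # _StrPathT binds to Path here, so the result is Path | str
    # rather than the bare str of the old stub:
    return move_file(src, dst)
```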

@@ -23,7 +23,11 @@ class FileList:
def include_pattern(self, pattern: str | Pattern[str], *, is_regex: Literal[True, 1]) -> bool: ...
@overload
def include_pattern(
self, pattern: str | Pattern[str], anchor: bool | Literal[0, 1] = 1, prefix: str | None = None, is_regex: int = 0
self,
pattern: str | Pattern[str],
anchor: bool | Literal[0, 1] = 1,
prefix: str | None = None,
is_regex: bool | Literal[0, 1] = 0,
) -> bool: ...
@overload
def exclude_pattern(
@@ -33,7 +37,11 @@ class FileList:
def exclude_pattern(self, pattern: str | Pattern[str], *, is_regex: Literal[True, 1]) -> bool: ...
@overload
def exclude_pattern(
self, pattern: str | Pattern[str], anchor: bool | Literal[0, 1] = 1, prefix: str | None = None, is_regex: int = 0
self,
pattern: str | Pattern[str],
anchor: bool | Literal[0, 1] = 1,
prefix: str | None = None,
is_regex: bool | Literal[0, 1] = 0,
) -> bool: ...
def findall(dir: str = ".") -> list[str]: ...
@@ -46,5 +54,5 @@ def translate_pattern(
def translate_pattern(pattern: str | Pattern[str], *, is_regex: Literal[True, 1]) -> Pattern[str]: ...
@overload
def translate_pattern(
pattern: str | Pattern[str], anchor: bool | Literal[0, 1] = 1, prefix: str | None = None, is_regex: int = 0
pattern: str | Pattern[str], anchor: bool | Literal[0, 1] = 1, prefix: str | None = None, is_regex: bool | Literal[0, 1] = 0
) -> Pattern[str]: ...

@@ -1,2 +1,6 @@
def spawn(cmd: list[str], search_path: bool = ..., verbose: bool = ..., dry_run: bool = ...) -> None: ...
from typing import Literal
def spawn(
cmd: list[str], search_path: bool | Literal[0, 1] = 1, verbose: bool | Literal[0, 1] = 0, dry_run: bool | Literal[0, 1] = 0
) -> None: ...
def find_executable(executable: str, path: str | None = None) -> str | None: ...

@@ -23,8 +23,10 @@ def get_config_vars() -> dict[str, str | int]: ...
def get_config_vars(arg: str, /, *args: str) -> list[str | int]: ...
def get_config_h_filename() -> str: ...
def get_makefile_filename() -> str: ...
def get_python_inc(plat_specific: bool = ..., prefix: str | None = None) -> str: ...
def get_python_lib(plat_specific: bool = ..., standard_lib: bool = ..., prefix: str | None = None) -> str: ...
def get_python_inc(plat_specific: bool | Literal[0, 1] = 0, prefix: str | None = None) -> str: ...
def get_python_lib(
plat_specific: bool | Literal[0, 1] = 0, standard_lib: bool | Literal[0, 1] = 0, prefix: str | None = None
) -> str: ...
def customize_compiler(compiler: CCompiler) -> None: ...
if sys.version_info < (3, 10):

@@ -1,4 +1,4 @@
from typing import IO
from typing import IO, Literal
class TextFile:
def __init__(
@@ -6,12 +6,12 @@ class TextFile:
filename: str | None = None,
file: IO[str] | None = None,
*,
strip_comments: bool = ...,
lstrip_ws: bool = ...,
rstrip_ws: bool = ...,
skip_blanks: bool = ...,
join_lines: bool = ...,
collapse_join: bool = ...,
strip_comments: bool | Literal[0, 1] = ...,
lstrip_ws: bool | Literal[0, 1] = ...,
rstrip_ws: bool | Literal[0, 1] = ...,
skip_blanks: bool | Literal[0, 1] = ...,
join_lines: bool | Literal[0, 1] = ...,
collapse_join: bool | Literal[0, 1] = ...,
) -> None: ...
def open(self, filename: str) -> None: ...
def close(self) -> None: ...

@@ -5,22 +5,26 @@ from typing import Any, Literal
def get_host_platform() -> str: ...
def get_platform() -> str: ...
def convert_path(pathname: str) -> str: ...
def change_root(new_root: str, pathname: str) -> str: ...
def change_root(new_root: StrPath, pathname: StrPath) -> str: ...
def check_environ() -> None: ...
def subst_vars(s: str, local_vars: Mapping[str, str]) -> None: ...
def split_quoted(s: str) -> list[str]: ...
def execute(
func: Callable[..., object], args: tuple[Any, ...], msg: str | None = None, verbose: bool = ..., dry_run: bool = ...
func: Callable[..., object],
args: tuple[Any, ...],
msg: str | None = None,
verbose: bool | Literal[0, 1] = 0,
dry_run: bool | Literal[0, 1] = 0,
) -> None: ...
def strtobool(val: str) -> Literal[0, 1]: ...
def byte_compile(
py_files: list[str],
optimize: int = 0,
force: bool = ...,
force: bool | Literal[0, 1] = 0,
prefix: str | None = None,
base_dir: str | None = None,
verbose: bool = ...,
dry_run: bool = ...,
verbose: bool | Literal[0, 1] = 1,
dry_run: bool | Literal[0, 1] = 0,
direct: bool | None = None,
) -> None: ...
def rfc822_escape(header: str) -> str: ...

@@ -10,4 +10,4 @@ def is_enabled() -> bool: ...
if sys.platform != "win32":
def register(signum: int, file: FileDescriptorLike = ..., all_threads: bool = ..., chain: bool = ...) -> None: ...
def unregister(signum: int) -> None: ...
def unregister(signum: int, /) -> None: ...

@@ -52,6 +52,6 @@ class dircmp(Generic[AnyStr]):
def phase4(self) -> None: ...
def phase4_closure(self) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def clear_cache() -> None: ...

@@ -200,7 +200,7 @@ class FileInput(Iterator[AnyStr]):
def isfirstline(self) -> bool: ...
def isstdin(self) -> bool: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
if sys.version_info >= (3, 10):
def hook_compressed(

@@ -132,7 +132,7 @@ class partial(Generic[_T]):
def __new__(cls, func: Callable[..., _T], /, *args: Any, **kwargs: Any) -> Self: ...
def __call__(self, /, *args: Any, **kwargs: Any) -> _T: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
# With protocols, this could change into a generic protocol that defines __get__ and returns _T
_Descriptor: TypeAlias = Any
@@ -149,7 +149,7 @@ class partialmethod(Generic[_T]):
@property
def __isabstractmethod__(self) -> bool: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class _SingleDispatchCallable(Generic[_T]):
registry: types.MappingProxyType[Any, Callable[..., _T]]
@@ -196,7 +196,7 @@ class cached_property(Generic[_T_co]):
# __set__ is not defined at runtime, but @cached_property is designed to be settable
def __set__(self, instance: object, value: _T_co) -> None: ... # type: ignore[misc] # pyright: ignore[reportGeneralTypeIssues]
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
if sys.version_info >= (3, 9):
def cache(user_function: Callable[..., _T], /) -> _lru_cache_wrapper[_T]: ...

@@ -20,6 +20,8 @@ __all__ = [
]
if sys.version_info >= (3, 12):
__all__ += ["islink"]
if sys.version_info >= (3, 13):
__all__ += ["isjunction", "isdevdrive", "lexists"]
# All overloads can return empty string. Ideally, Literal[""] would be a valid
# Iterable[T], so that list[T] | Literal[""] could be used as a return
@@ -50,3 +52,8 @@ def getctime(filename: FileDescriptorOrPath) -> float: ...
def samefile(f1: FileDescriptorOrPath, f2: FileDescriptorOrPath) -> bool: ...
def sameopenfile(fp1: int, fp2: int) -> bool: ...
def samestat(s1: os.stat_result, s2: os.stat_result) -> bool: ...
if sys.version_info >= (3, 13):
def isjunction(path: StrOrBytesPath) -> bool: ...
def isdevdrive(path: StrOrBytesPath) -> bool: ...
def lexists(path: StrOrBytesPath) -> bool: ...

@@ -23,6 +23,6 @@ class TopologicalSorter(Generic[_T]):
def get_ready(self) -> tuple[_T, ...]: ...
def static_order(self) -> Iterable[_T]: ...
if sys.version_info >= (3, 11):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class CycleError(ValueError): ...

@@ -12,8 +12,8 @@ _ReadBinaryMode: TypeAlias = Literal["r", "rb"]
_WriteBinaryMode: TypeAlias = Literal["a", "ab", "w", "wb", "x", "xb"]
_OpenTextMode: TypeAlias = Literal["rt", "at", "wt", "xt"]
READ: Literal[1] # undocumented
WRITE: Literal[2] # undocumented
READ: object # undocumented
WRITE: object # undocumented
FTEXT: int # actually Literal[1] # undocumented
FHCRC: int # actually Literal[2] # undocumented
@@ -86,7 +86,7 @@ class BadGzipFile(OSError): ...
class GzipFile(_compression.BaseStream):
myfileobj: FileIO | None
mode: Literal[1, 2]
mode: object
name: str
compress: zlib._Compress
fileobj: _ReadableFileobj | _WritableFileobj

@@ -1,6 +1,5 @@
import sys
from enum import IntEnum
from typing import Literal
if sys.version_info >= (3, 11):
from enum import StrEnum
@@ -49,11 +48,19 @@ class HTTPStatus(IntEnum):
GONE = 410
LENGTH_REQUIRED = 411
PRECONDITION_FAILED = 412
if sys.version_info >= (3, 13):
CONTENT_TOO_LARGE = 413
REQUEST_ENTITY_TOO_LARGE = 413
if sys.version_info >= (3, 13):
URI_TOO_LONG = 414
REQUEST_URI_TOO_LONG = 414
UNSUPPORTED_MEDIA_TYPE = 415
if sys.version_info >= (3, 13):
RANGE_NOT_SATISFIABLE = 416
REQUESTED_RANGE_NOT_SATISFIABLE = 416
EXPECTATION_FAILED = 417
if sys.version_info >= (3, 13):
UNPROCESSABLE_CONTENT = 422
UNPROCESSABLE_ENTITY = 422
LOCKED = 423
FAILED_DEPENDENCY = 424
@@ -75,9 +82,9 @@ class HTTPStatus(IntEnum):
MISDIRECTED_REQUEST = 421
UNAVAILABLE_FOR_LEGAL_REASONS = 451
if sys.version_info >= (3, 9):
EARLY_HINTS: Literal[103]
IM_A_TEAPOT: Literal[418]
TOO_EARLY: Literal[425]
EARLY_HINTS = 103
IM_A_TEAPOT = 418
TOO_EARLY = 425
if sys.version_info >= (3, 12):
@property
def is_informational(self) -> bool: ...
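Replacing annotations like `EARLY_HINTS: Literal[103]` with plain assignments matters because, under the typing spec, an annotation-only name in an enum body declares an attribute, not a member; the assignment form makes these real enum members usable inside `Literal[...]`. A sketch of what that enables (the helper is made up):

```py
from http import HTTPStatus
from typing import Literal

def quip(status: Literal[HTTPStatus.IM_A_TEAPOT, HTTPStatus.TOO_EARLY]) -> str:
    # IM_A_TEAPOT and TOO_EARLY are proper members under the new stubs,
    # so they are legal in Literal[...] and narrow with `is` checks:
    if status is HTTPStatus.IM_A_TEAPOT:
        return "short and stout"
    return "try again later"
```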

@@ -45,7 +45,7 @@ class Morsel(dict[str, Any], Generic[_T]):
def __eq__(self, morsel: object) -> bool: ...
def __setitem__(self, K: str, V: Any) -> None: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class BaseCookie(dict[str, Morsel[_T]], Generic[_T]):
def __init__(self, input: _DataType | None = None) -> None: ...

@@ -240,7 +240,10 @@ class DistributionFinder(MetaPathFinder):
class MetadataPathFinder(DistributionFinder):
@classmethod
def find_distributions(cls, context: DistributionFinder.Context = ...) -> Iterable[PathDistribution]: ...
if sys.version_info >= (3, 10):
if sys.version_info >= (3, 11):
@classmethod
def invalidate_caches(cls) -> None: ...
elif sys.version_info >= (3, 10):
# Yes, this is an instance method that has a parameter named "cls"
def invalidate_caches(cls) -> None: ...

@@ -318,6 +318,7 @@ class Signature:
def bind(self, *args: Any, **kwargs: Any) -> BoundArguments: ...
def bind_partial(self, *args: Any, **kwargs: Any) -> BoundArguments: ...
def replace(self, *, parameters: Sequence[Parameter] | type[_void] | None = ..., return_annotation: Any = ...) -> Self: ...
__replace__ = replace
if sys.version_info >= (3, 10):
@classmethod
def from_callable(
@@ -332,6 +333,8 @@ class Signature:
else:
@classmethod
def from_callable(cls, obj: _IntrospectableCallable, *, follow_wrapped: bool = True) -> Self: ...
if sys.version_info >= (3, 13):
def format(self, *, max_width: int | None = None) -> str: ...
def __eq__(self, other: object) -> bool: ...
def __hash__(self) -> int: ...
@@ -392,6 +395,9 @@ class Parameter:
default: Any = ...,
annotation: Any = ...,
) -> Self: ...
if sys.version_info >= (3, 13):
__replace__ = replace
def __eq__(self, other: object) -> bool: ...
def __hash__(self) -> int: ...
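`__replace__ = replace` on `Signature` (and, gated to 3.13, on `Parameter`) is what opts these classes into Python 3.13's `copy.replace()` protocol. A small sketch, assuming Python 3.13:

```py
import copy
import inspect
from collections.abc import Callable

def strip_return_annotation(func: Callable[..., object]) -> inspect.Signature:
    sig = inspect.signature(func)
    # copy.replace() (new in 3.13) dispatches to Signature.__replace__:
    return copy.replace(sig, return_annotation=inspect.Signature.empty)
```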

@@ -75,7 +75,7 @@ class IOBase(metaclass=abc.ABCMeta):
def __del__(self) -> None: ...
@property
def closed(self) -> bool: ...
def _checkClosed(self, msg: str | None = ...) -> None: ... # undocumented
def _checkClosed(self) -> None: ... # undocumented
class RawIOBase(IOBase):
def readall(self) -> bytes: ...

@@ -147,7 +147,11 @@ class _BaseV4:
@property
def max_prefixlen(self) -> Literal[32]: ...
class IPv4Address(_BaseV4, _BaseAddress): ...
class IPv4Address(_BaseV4, _BaseAddress):
if sys.version_info >= (3, 13):
@property
def ipv6_mapped(self) -> IPv6Address: ...
class IPv4Network(_BaseV4, _BaseNetwork[IPv4Address]): ...
class IPv4Interface(IPv4Address, _BaseInterface[IPv4Address, IPv4Network]):

@@ -17,6 +17,10 @@ _T3 = TypeVar("_T3")
_T4 = TypeVar("_T4")
_T5 = TypeVar("_T5")
_T6 = TypeVar("_T6")
_T7 = TypeVar("_T7")
_T8 = TypeVar("_T8")
_T9 = TypeVar("_T9")
_T10 = TypeVar("_T10")
_Step: TypeAlias = SupportsFloat | SupportsInt | SupportsIndex | SupportsComplex
@@ -214,6 +218,60 @@ class product(Iterator[_T_co]):
/,
) -> product[tuple[_T1, _T2, _T3, _T4, _T5, _T6]]: ...
@overload
def __new__(
cls,
iter1: Iterable[_T1],
iter2: Iterable[_T2],
iter3: Iterable[_T3],
iter4: Iterable[_T4],
iter5: Iterable[_T5],
iter6: Iterable[_T6],
iter7: Iterable[_T7],
/,
) -> product[tuple[_T1, _T2, _T3, _T4, _T5, _T6, _T7]]: ...
@overload
def __new__(
cls,
iter1: Iterable[_T1],
iter2: Iterable[_T2],
iter3: Iterable[_T3],
iter4: Iterable[_T4],
iter5: Iterable[_T5],
iter6: Iterable[_T6],
iter7: Iterable[_T7],
iter8: Iterable[_T8],
/,
) -> product[tuple[_T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8]]: ...
@overload
def __new__(
cls,
iter1: Iterable[_T1],
iter2: Iterable[_T2],
iter3: Iterable[_T3],
iter4: Iterable[_T4],
iter5: Iterable[_T5],
iter6: Iterable[_T6],
iter7: Iterable[_T7],
iter8: Iterable[_T8],
iter9: Iterable[_T9],
/,
) -> product[tuple[_T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8, _T9]]: ...
@overload
def __new__(
cls,
iter1: Iterable[_T1],
iter2: Iterable[_T2],
iter3: Iterable[_T3],
iter4: Iterable[_T4],
iter5: Iterable[_T5],
iter6: Iterable[_T6],
iter7: Iterable[_T7],
iter8: Iterable[_T8],
iter9: Iterable[_T9],
iter10: Iterable[_T10],
/,
) -> product[tuple[_T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8, _T9, _T10]]: ...
@overload
def __new__(cls, *iterables: Iterable[_T1], repeat: int = 1) -> product[tuple[_T1, ...]]: ...
def __iter__(self) -> Self: ...
def __next__(self) -> _T_co: ...
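Extending the hand-written `product` overloads from six to ten iterables preserves precise tuple element types for larger products before falling back to the homogeneous catch-all. For instance (inferred types shown as a checker would report them):

```py
from itertools import product

# Seven heterogeneous iterables now hit a dedicated overload, so this is
# product[tuple[int, str, bytes, float, bool, complex, range]] ...
p = product([1], ["a"], [b"x"], [1.0], [True], [1j], [range(3)])
# ... while an eleventh iterable would fall back to the
# homogeneous product[tuple[_T1, ...]] overload.
```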

@@ -10,8 +10,8 @@ INFINITY: float
def py_encode_basestring(s: str) -> str: ... # undocumented
def py_encode_basestring_ascii(s: str) -> str: ... # undocumented
def encode_basestring(s: str) -> str: ... # undocumented
def encode_basestring_ascii(s: str) -> str: ... # undocumented
def encode_basestring(s: str, /) -> str: ... # undocumented
def encode_basestring_ascii(s: str, /) -> str: ... # undocumented
class JSONEncoder:
item_separator: str

@@ -7,14 +7,14 @@ if sys.version_info >= (3, 9):
else:
__all__ = ["iskeyword", "kwlist"]
def iskeyword(s: str) -> bool: ...
def iskeyword(s: str, /) -> bool: ...
# a list at runtime, but you're not meant to mutate it;
# type it as a sequence
kwlist: Final[Sequence[str]]
if sys.version_info >= (3, 9):
def issoftkeyword(s: str) -> bool: ...
def issoftkeyword(s: str, /) -> bool: ...
# a list at runtime, but you're not meant to mutate it;
# type it as a sequence
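Many changes in this batch just add a trailing `/`, marking parameters positional-only to match runtime behavior. `keyword.iskeyword`, for example, is a bound `frozenset.__contains__`, so passing the argument by name fails at runtime, and the updated stub now rejects it statically too:

```py
from keyword import iskeyword

print(iskeyword("class"))  # fine: positional call
# iskeyword(s="class")     # TypeError at runtime; a type error under the new stub
```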

@@ -96,7 +96,6 @@ __all__ = [
"getpreferredencoding",
"Error",
"setlocale",
"resetlocale",
"localeconv",
"strcoll",
"strxfrm",
@@ -121,6 +120,9 @@ if sys.version_info >= (3, 11):
if sys.version_info < (3, 12):
__all__ += ["format"]
if sys.version_info < (3, 13):
__all__ += ["resetlocale"]
if sys.platform != "win32":
__all__ += ["LC_MESSAGES"]
@@ -133,7 +135,9 @@ def getlocale(category: int = ...) -> tuple[_str | None, _str | None]: ...
def setlocale(category: int, locale: _str | Iterable[_str | None] | None = None) -> _str: ...
def getpreferredencoding(do_setlocale: bool = True) -> _str: ...
def normalize(localename: _str) -> _str: ...
def resetlocale(category: int = ...) -> None: ...
if sys.version_info < (3, 13):
def resetlocale(category: int = ...) -> None: ...
if sys.version_info < (3, 12):
def format(

@@ -50,7 +50,6 @@ __all__ = [
"makeLogRecord",
"setLoggerClass",
"shutdown",
"warn",
"warning",
"getLogRecordFactory",
"setLogRecordFactory",
@@ -58,6 +57,8 @@ __all__ = [
"raiseExceptions",
]
if sys.version_info < (3, 13):
__all__ += ["warn"]
if sys.version_info >= (3, 11):
__all__ += ["getLevelNamesMapping"]
if sys.version_info >= (3, 12):
@@ -156,15 +157,17 @@ class Logger(Filterer):
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
) -> None: ...
def warn(
self,
msg: object,
*args: object,
exc_info: _ExcInfoType = None,
stack_info: bool = False,
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
) -> None: ...
if sys.version_info < (3, 13):
def warn(
self,
msg: object,
*args: object,
exc_info: _ExcInfoType = None,
stack_info: bool = False,
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
) -> None: ...
def error(
self,
msg: object,
@@ -365,13 +368,19 @@ _L = TypeVar("_L", bound=Logger | LoggerAdapter[Any])
class LoggerAdapter(Generic[_L]):
logger: _L
manager: Manager # undocumented
if sys.version_info >= (3, 10):
extra: Mapping[str, object] | None
if sys.version_info >= (3, 13):
def __init__(self, logger: _L, extra: Mapping[str, object] | None = None, merge_extra: bool = False) -> None: ...
elif sys.version_info >= (3, 10):
def __init__(self, logger: _L, extra: Mapping[str, object] | None = None) -> None: ...
else:
extra: Mapping[str, object]
def __init__(self, logger: _L, extra: Mapping[str, object]) -> None: ...
if sys.version_info >= (3, 10):
extra: Mapping[str, object] | None
else:
extra: Mapping[str, object]
def process(self, msg: Any, kwargs: MutableMapping[str, Any]) -> tuple[Any, MutableMapping[str, Any]]: ...
def debug(
self,
@@ -403,16 +412,18 @@ class LoggerAdapter(Generic[_L]):
extra: Mapping[str, object] | None = None,
**kwargs: object,
) -> None: ...
def warn(
self,
msg: object,
*args: object,
exc_info: _ExcInfoType = None,
stack_info: bool = False,
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
**kwargs: object,
) -> None: ...
if sys.version_info < (3, 13):
def warn(
self,
msg: object,
*args: object,
exc_info: _ExcInfoType = None,
stack_info: bool = False,
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
**kwargs: object,
) -> None: ...
def error(
self,
msg: object,
@@ -458,19 +469,32 @@ class LoggerAdapter(Generic[_L]):
def getEffectiveLevel(self) -> int: ...
def setLevel(self, level: _Level) -> None: ...
def hasHandlers(self) -> bool: ...
def _log(
self,
level: int,
msg: object,
args: _ArgsType,
exc_info: _ExcInfoType | None = None,
extra: Mapping[str, object] | None = None,
stack_info: bool = False,
) -> None: ... # undocumented
if sys.version_info >= (3, 11):
def _log(
self,
level: int,
msg: object,
args: _ArgsType,
*,
exc_info: _ExcInfoType | None = None,
extra: Mapping[str, object] | None = None,
stack_info: bool = False,
) -> None: ... # undocumented
else:
def _log(
self,
level: int,
msg: object,
args: _ArgsType,
exc_info: _ExcInfoType | None = None,
extra: Mapping[str, object] | None = None,
stack_info: bool = False,
) -> None: ... # undocumented
@property
def name(self) -> str: ... # undocumented
if sys.version_info >= (3, 11):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def getLogger(name: str | None = None) -> Logger: ...
def getLoggerClass() -> type[Logger]: ...
@@ -499,14 +523,17 @@ def warning(
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
) -> None: ...
def warn(
msg: object,
*args: object,
exc_info: _ExcInfoType = None,
stack_info: bool = False,
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
) -> None: ...
if sys.version_info < (3, 13):
def warn(
msg: object,
*args: object,
exc_info: _ExcInfoType = None,
stack_info: bool = False,
stacklevel: int = 1,
extra: Mapping[str, object] | None = None,
) -> None: ...
def error(
msg: object,
*args: object,
@@ -600,7 +627,7 @@ class StreamHandler(Handler, Generic[_StreamT]):
def __init__(self: StreamHandler[_StreamT], stream: _StreamT) -> None: ... # pyright: ignore[reportInvalidTypeVarUse] #11780
def setStream(self, stream: _StreamT) -> _StreamT | None: ...
if sys.version_info >= (3, 11):
def __class_getitem__(cls, item: Any) -> GenericAlias: ...
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
class FileHandler(StreamHandler[TextIOWrapper]):
baseFilename: str # undocumented
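The `logging` stubs gate `warn` (module-level, `Logger`, and `LoggerAdapter`) behind `sys.version_info < (3, 13)` because CPython 3.13 removed the long-deprecated alias. Portable code should already be calling `warning`:

```py
import logging

# logging.warn() and Logger.warn() are gone in 3.13; warning() is the
# supported spelling on every version:
logging.getLogger(__name__).warning("use warning(), not warn()")
```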

@@ -46,7 +46,7 @@ class BaseRotatingHandler(FileHandler):
def rotate(self, source: str, dest: str) -> None: ...
class RotatingFileHandler(BaseRotatingHandler):
maxBytes: str # undocumented
maxBytes: int # undocumented
backupCount: int # undocumented
if sys.version_info >= (3, 9):
def __init__(
