Compare commits


353 Commits

Author SHA1 Message Date
Dhruv Manilawala
a8cf7096ff Bump version to v0.4.8 (#11755)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-06-05 20:51:31 +05:30
Carl Meyer
895eb3ef48 [red-knot] refactor CFG outside of symbol table (#11746) 2024-06-05 06:23:43 -06:00
Dhruv Manilawala
2e0a9755e0 Disallow access to Parsed output, use the API instead (#11741)
## Summary

This PR is a follow-up to #11740 to restrict access to the `Parsed`
output by replacing the `parsed` API function with a more specific one.
Currently, that is `comment_ranges` but the linked PR exposes a `tokens`
method.

The main motivation is to ensure there's no way to get incorrect
information from the checker. It also encapsulates the source of
the comment ranges and the tokens themselves. This way, it becomes
easier to update the checker if the source of this information
changes in the future.

## Test Plan

`cargo insta test`
2024-06-05 08:24:19 +00:00
Dhruv Manilawala
b021b5babe Use Tokens from parsed type annotation or parsed source (#11740)
## Summary

This PR fixes a bug where the checker would require the tokens for an
invalid offset w.r.t. the source code.

Taking the source code from the linked issue as an example:
```py
relese_version :"0.0is 64"
```

Now, this isn't really a valid type annotation but that's what this PR
is fixing. Regardless of whether it's valid or not, Ruff shouldn't
panic.

The checker would visit the parsed type annotation (`0.0is 64`) and try
to detect any violations. Certain rule logic requests the tokens for
that range, but this would fail because the lexer would only have the
`String` token when considering the original source code. This worked
before because the lexer was invoked again for each rule's logic.

The solution is to store the parsed type annotation on the checker if
it's in a typing context and use the tokens from that instead if it's
available. This is enforced by creating a new API on the checker to get
the tokens.

But, this means that there are two ways to get the tokens via the
checker API. I want to restrict this in a follow-up PR (#11741) to only
expose `tokens` and `comment_ranges` as methods and restrict access to
the parsed source code.

fixes: #11736 

## Test Plan

- [x] Add a test case for `F632` rule and update the snapshot
- [x] Check all affected rules
- [x] No ecosystem changes
2024-06-05 07:50:33 +00:00
Dhruv Manilawala
eed6d784df Update type annotation parsing API to return Parsed (#11739)
## Summary

This PR updates the return type of `parse_type_annotation` from `Expr`
to `Parsed<ModExpression>`. This is to allow accessing the tokens for
the parsed sub-expression in the follow-up PR.

## Test Plan

`cargo insta test`
2024-06-05 12:59:43 +05:30
Jane Lewis
8338db6c12 ruff server: Formatting a document with syntax problems no longer spams a visible error popup (#11745)
## Summary

Fixes https://github.com/astral-sh/ruff-vscode/issues/482.

I've made adjustments to `format` and `format_range` that handle parsing
errors before they become server errors. We'll still log this as a
problem, but there will no longer be a visible popup.

## Test Plan

Instead of seeing a visible error when formatting a document with syntax
issues, you should see this warning in the LSP logs:

<img width="991" alt="Screenshot 2024-06-04 at 3 38 23 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/9d68947d-6462-4ca6-ab5a-65e573c91db6">

Similarly, if you try to format a range with syntax issues, you should
see this warning in the LSP logs instead of a visible error popup:

<img width="1010" alt="Screenshot 2024-06-04 at 3 39 10 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/99fff098-798d-406a-976e-81ead0da0352">

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-06-04 17:18:21 -07:00
Carl Meyer
d056d09547 [red-knot] add if-statement support to FlowGraph (#11673)
## Summary

Add if-statement support to FlowGraph. This introduces branches and
joins in the graph for the first time.

## Test Plan

Added tests.
2024-06-04 15:09:39 -06:00
Mateusz Sokół
1645be018d Update NPY001 rule for NumPy 2.0 (#11735)
Hi!

This PR addresses https://github.com/astral-sh/ruff/issues/11093.

It skips `np.bool` and `np.long` replacements as both of these names
were reintroduced in NumPy 2.0 with a different meaning
(https://github.com/numpy/numpy/pull/24922,
https://github.com/numpy/numpy/pull/25080).
With this change, `NPY001` will no longer conflict with `NPY201`. For
projects using NumPy 1.x, `np.bool` and `np.long` were deprecated and
removed a long time ago, and accessing them yields an informative error
message.
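
A minimal illustration (not from the PR; assumes NumPy 2.0 is installed) of the names that are now skipped:

```python
import numpy as np

# `np.bool` and `np.long` were reintroduced in NumPy 2.0 with different
# meanings, so NPY001 no longer rewrites them; the NumPy 1.x -> 2.0 migration
# is handled by NPY201 instead.
flag = np.bool(True)   # previously flagged by NPY001, now skipped
count = np.long(1)     # previously flagged by NPY001, now skipped
```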
2024-06-04 19:23:42 +00:00
Michael Oultram
2c865023ac CI: add job to run tests under minimum supported rust version (msrv) (#11737)
## Summary

This change adds a GitHub Actions CI job to check that the project
builds and the tests pass under the declared minimum supported Rust
compiler. I have bumped the MSRV to 1.74 as that is the lowest version I
could get this project to build on.

## Test Plan

The CI job has run on this PR, and will also run on the main branch.
2024-06-04 15:14:50 -04:00
Dhruv Manilawala
2567e14b7a Lexer should consider BOM for the start offset (#11732)
## Summary

This PR fixes a bug where the lexer didn't factor the BOM into the
start offset.

fixes: #11731

## Test Plan

Add multiple test cases which involve a BOM character in the source for
the lexer, and verify the snapshots.
2024-06-04 08:45:46 +00:00
Dhruv Manilawala
3b19df04d7 Use cursor offset for lexer checkpoint (#11734)
## Summary

This PR updates the lexer checkpoint to store the cursor offset instead
of cloning the cursor itself. This reduces the size of `LexerCheckpoint`
from 136 to 112 bytes and also removes the need for a lifetime.

## Test Plan

`cargo insta test`
2024-06-04 14:13:57 +05:30
Micha Reiser
6ffb96171a red-knot: Change resolve_global_symbol to take Module as an argument (#11723) 2024-06-04 06:20:50 +00:00
Micha Reiser
64165bee43 red-knot: Use parse_unchecked to get all parse errors (#11725) 2024-06-04 06:04:48 +00:00
Charlie Marsh
0c75548146 Respect per-file ignores for blanket and redirected noqa rules (#11728)
## Summary

Ensures that we respect per-file ignores and exemptions for these rules.
Specifically, we allow:

```python
# ruff: noqa: PGH004
```

...to ignore `PGH004`.
2024-06-04 03:57:59 +00:00
Alex
b56a577f25 [pygrep_hooks] Check blanket ignores via file-level pragmas (PGH004) (#11540)
## Summary

Should resolve https://github.com/astral-sh/ruff/issues/11454.

This is my first PR to `ruff`, so I may have missed something.

If I understood the suggestion in the issue correctly, rule `PGH004`
should be set to `Preview` again.

## Test Plan

Created two fixtures derived from the issue.
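
For reference, a sketch (not taken from the fixtures) of the pattern the rule now checks:

```python
# ruff: noqa
# PGH004: a blanket file-level pragma like the one above suppresses every rule
# in the file; scoping it to specific codes avoids the warning:
# ruff: noqa: E501, F401

import os  # without the blanket pragma this would be reported as F401
```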
2024-06-04 03:42:58 +00:00
Tushar Sadhwani
e1133a24ed [flake8-pyi] Implement PYI063 (#11699)
## Summary
Implements `Y063` from `flake8-pyi`.
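
For context, a hedged sketch (not from the PR) of what the rule reports in stub files:

```python
# PYI063: prefer PEP 570 syntax for positional-only parameters over the legacy
# double-underscore naming convention.
def old_style(__x: int) -> None: ...   # flagged
def new_style(x: int, /) -> None: ...  # preferred
```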

## Test Plan
`cargo test` / `cargo insta review`
2024-06-04 03:15:04 +00:00
Charlie Marsh
2f8ac1e9b3 Fix red-knot compilation (#11727)
## Summary

Perhaps a result of a bad rebase, but `cargo clippy --fix --workspace
--all-targets -- -D warnings` does not pass on main as-is.
2024-06-04 03:03:38 +00:00
Carl Meyer
3fb2028506 [red-knot] extract helper functions in inference tests (#11671)
There's a lot of repeated boilerplate in the type inference tests; this
cuts it down a lot.
2024-06-03 17:46:04 -06:00
Carl Meyer
3f9ee31efb [red-knot] use reachable definitions in infer_expression_type (#11670)
## Summary

Switch name resolution in `infer_expression_type` from resolving the
public type of a symbol, to resolving the reachable definitions of that
symbol from the reference point, using the flow graph.

This surfaced a bug in the flow graph implementation and a bug in symbol
table building, both of which are also fixed here.

The bug in flow graph implementation was that when we pushed and popped
scopes, we didn't maintain a stack of "current flow nodes" in all
stacked scopes, to be restored when we returned to that scope. Now we
do.

The bug in symbol table building was that we didn't visit the parts of
functions and class definitions in the correct scopes. E.g. decorators
should be visited in the outer scope, arguments should be visited inside
the type-params scope (if any) but not inside the function body scope,
and only the body itself should actually be visited inside the body
scope. Fixing this requires that we no longer use `walk_stmt` here;
instead, we have to visit each individual component.
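
A small Python illustration (not from the PR) of the scope assignments described above:

```python
def decorator(func):
    return func

@decorator           # decorator expression: visited in the enclosing (outer) scope
def f[T](            # `T` introduces the type-params scope
    x: T,            # parameters/annotations: type-params scope, not the body scope
) -> T:
    y = x            # only the body itself is visited in the body scope
    return y
```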

## Test Plan

Added test.
2024-06-03 17:45:31 -06:00
Carl Meyer
b02d3f3fd9 [red-knot] infer_symbol_public_type infers union of all definitions (#11669)
## Summary

Rename `infer_symbol_type` to `infer_symbol_public_type`, and allow it
to work on symbols with more than one definition. For now, use the most
cautious/sound inference, which is the union of all definitions. We can
prune this union more in future by eliminating definitions if we can
show that they can't be visible (this requires both that the symbol is
definitely later reassigned, and that there is no intervening
call/import that might be able to see the over-written definition).
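
A minimal sketch (not from the PR) of the resulting inference:

```python
# With two definitions of `x` and no way to prove the first one is shadowed
# before every possible read, the public type of `x` is the union of both.
x = 1        # first definition
x = "one"    # second definition
# inferred public type of `x`: the union of the two definitions' types
```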

## Test Plan

Added a test showing inference of union from multiple definitions.
2024-06-03 17:27:06 -06:00
Dhruv Manilawala
2b28889ca9 Isolate non-breaking whitespace indentation test case (#11721)
As discussed in Discord, this moves the test case for non-breaking
whitespace into its own method.
2024-06-03 13:20:55 +00:00
Dhruv Manilawala
8db147c09d Generator should add a newline before type statement (#11720)
## Summary

This PR fixes a bug where the `Generator` wouldn't add a newline before
a type alias statement. This is because it wasn't using the `statement`
macro which takes care of the newline.

Without this fix, a code like:
```py
type X = int
type Y = str
```

The generator would produce:
```py
type X = inttype Y = str
```

## Test Plan

Add a test case.
2024-06-03 18:44:21 +05:30
Dhruv Manilawala
a58bde6958 Remove less used parser dependencies (#11718)
## Summary

This PR removes the following dependencies from the `ruff_python_parser`
crate:
* `anyhow` (moved to dev dependencies)
* `is-macro`
* `itertools`

The main motivation is that they aren't used much.

Additionally, it updates the return type of `parse_type_annotation` to
use a more specific `ParseError` instead of the generic `anyhow::Error`.

## Test Plan

`cargo insta test`
2024-06-03 13:08:24 +00:00
Dhruv Manilawala
f4e23d2dff Use string expression for parsing type annotation (#11717)
## Summary

This PR updates the logic for parsing type annotation to accept a
`ExprStringLiteral` node instead of the string value and the range.

The main motivation of this change is to simplify the implementation of
`parse_type_annotation` function with:
* Use the `opener_len` and `closer_len` from the string flags to get the
raw contents range instead of extracting it via
	* `str::leading_quote(expression).unwrap().text_len()`
	* `str::trailing_quote(expression).unwrap().text_len()`
* Avoid comparing the string content if we already know that it's
implicitly concatenated

## Test Plan

`cargo insta test`
2024-06-03 13:04:03 +00:00
Dhruv Manilawala
4a155e2b22 Re-order lexer methods (#11716)
## Summary

This PR re-orders the lexer methods in the following order:

1. `next_token`
2. `lex_token`
3. `eat_indentation`
4. `handle_indentation`
5. `skip_whitespace`
6. `consume_ascii_character`
7. `try_single_char_prefix`
8. `try_double_char_prefix`
9. `lex_identifier`
10. `lex_fstring_start`
11. `lex_fstring_middle_or_end`
12. `lex_string`
13. `lex_number`
14. `lex_number_radix`
15. `lex_decimal_number`
16. `radix_run`
17. `lex_comment`
18. `lex_ipython_escape_command`
19. `consume_end`

The following was considered for the ordering:
* 1 is the main entry point which delegates to 2
* 3, 4, 5 are all related to whitespace, which is handled first
* 6 is the entry point for an ASCII character which delegates to 9, 12,
13, 17, 18, 19
* The others are grouped around similar kinds of methods
2024-06-03 12:58:35 +00:00
Dhruv Manilawala
bf5b62edac Maintain synchronicity between the lexer and the parser (#11457)
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.

The main motivation to make the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful for making the
parser error resilient. For example, currently the emitted tokens
remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in a
non-parenthesized context. These different kinds of newlines are also used
to emit the indentation tokens, which are important for the parser as they're
used to determine the start and end of a block.
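
For illustration (not from the PR), the two newline kinds in ordinary source:

```python
# Inside parentheses a line break is a `NonLogicalNewline` and does not end the
# logical line; outside parentheses it is a regular `Newline`, which also
# drives the indentation tokens used for block structure.
total = (
    1 + 2  # line break here: NonLogicalNewline
)          # line break after this line: Newline
```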

Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.

This brings up a question as to how the parser will get the owned value
which was stored on `Tok`. This will be solved by introducing a new
`TokenValue` enum which only contains a subset of token kinds which has
the owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.

* **Context:** The transformer needs to know the position of the lexer
between it being at a statement position or a simple statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context, it can perform
lookahead of up to two tokens to classify the soft keyword. The logic
for this is mentioned in the PRs implementing it for the `type` and `match`
soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
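
For reference, an illustration (not from the PR) of the ambiguity the parser now resolves itself:

```python
import re

match = re.match(r"\d+", "42")  # `match` used as a plain identifier

command = ["go", "north"]
match command:                  # `match` used as a soft keyword (compound statement)
    case ["go", direction]:
        print(f"moving {direction}")
    case _:
        print("unknown command")

type Pair = tuple[int, int]     # `type` used as a soft keyword (simple statement)
type = "primitive"              # `type` used as a plain identifier
```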

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason being that it's not
sensitive to the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser are working together, the API
corresponding to (2) cannot exist. The final API is mentioned in this
PR description: https://github.com/astral-sh/ruff/pull/11494.

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

And, the final set of changes involves updating all references of the
lexer and `Tok` enum. This was done in two-parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens. These include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function,
where the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is limited to
string-type tokens but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even if the
source code contains syntax errors.
2024-06-03 18:23:50 +05:30
renovate[bot]
c69a789aa5 Update NPM Development dependencies (#11713) 2024-06-03 01:59:07 +00:00
renovate[bot]
140c408a92 Update pre-commit dependencies (#11712) 2024-06-02 21:51:42 -04:00
renovate[bot]
27085a93d9 Update cloudflare/wrangler-action action to v3.6.1 (#11709) 2024-06-02 21:51:27 -04:00
renovate[bot]
a9b6c4f269 Update dependency monaco-editor to ^0.49.0 (#11710) 2024-06-02 21:51:23 -04:00
renovate[bot]
ded010cf9c Update Rust crate tracing-tree to v0.3.1 (#11703) 2024-06-02 21:51:13 -04:00
renovate[bot]
436dc18b15 Update Rust crate libcst to v1.4.0 (#11707) 2024-06-03 01:05:32 +00:00
renovate[bot]
9599bd7622 Update Rust crate itertools to 0.13.0 (#11706) 2024-06-03 01:05:17 +00:00
renovate[bot]
ec3f523924 Update Rust crate insta to v1.39.0 (#11705) 2024-06-03 01:04:26 +00:00
renovate[bot]
010434015e Update Rust crate proc-macro2 to v1.0.85 (#11700) 2024-06-03 01:03:31 +00:00
renovate[bot]
25131da2c3 Update Rust crate toml to v0.8.13 (#11702) 2024-06-02 21:03:09 -04:00
renovate[bot]
712783825d Update Rust crate strum_macros to v0.26.3 (#11701) 2024-06-02 21:03:03 -04:00
Alex Waygood
94a3c53841 Update UP035 for Python 3.13 and the latest version of typing_extensions (#11693) 2024-06-02 22:59:48 +01:00
Tobias Fischer
0ea2519e80 Add RDJson support. (#11682)
## Summary

Implement support for RDJson output for `ruff check`, as requested in
#8655.

## Test Plan

Tested using a snapshot test. Same approach as for e.g. the JSON output
formatter.

## Additional info

I tried to keep the implementation close to the JSON implementation.

I had to deviate a bit to make the `suggestions` key work: if there are
no suggestions, then setting `suggestions` to `null` is invalid
according to the JSON Schema. Therefore, I opted for a slightly more
complex implementation that skips the `suggestions` key entirely if
there are no fixes available for the given diagnostic. Maybe it would
have been easier to set `"suggestions": []`, but I ended up doing it
this way.

I didn't consider notebooks, as I _think_ that RDJson doesn't work with
notebooks. This should be confirmed, and if so, there should be some
form of warning or error emitted when trying to output diagnostics for a
notebook.

I also didn't consider `ruff format`, as this comment:
https://github.com/astral-sh/ruff/issues/8655#issuecomment-1811446160
suggests that that wouldn't be compatible.

I'm new to Rust, so any feedback is appreciated. 🙂 I
implemented this in order to have a productive rainy Saturday afternoon;
I'm not knowledgeable about RDJson beyond the sources linked in the
issue.
2024-06-02 17:59:57 +00:00
Charlie Marsh
6d79ddc0aa [pyupgrade] Write empty string in lieu of panic (#11696)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11692.
2024-06-02 17:51:03 +00:00
Alex Waygood
9f3e609278 Make tests aware that py313 is the latest supported Python version (#11690) 2024-06-02 13:06:04 +00:00
Charlie Marsh
b36dd1aa51 [flake8-simplify] Simplify double negatives in SIM103 (#11684)
## Summary

Closes: https://github.com/astral-sh/ruff/issues/11685.
2024-06-01 23:21:11 +00:00
Charlie Marsh
fd9d68051e Update CHANGELOG.md (#11683) 2024-06-01 18:08:02 +00:00
github-actions[bot]
99834ee93d Sync vendored typeshed stubs (#11668)
Close and reopen this PR to trigger CI

Co-authored-by: typeshedbot <>
2024-05-31 22:26:20 -06:00
Charlie Marsh
b80bf22c4d Omit red-knot PRs from the changelog (#11666)
## Summary

This just ensures that PRs labelled with `red-knot` are automatically
filtered out from the auto-generated changelog (which we then manually
finalize anyway).
2024-05-31 19:18:53 -04:00
Tobias Fischer
312f6640b8 [flake8-bugbear] Implement return-in-generator (B901) (#11644)
## Summary

This PR implements the rule B901, which is part of the opinionated rules
of `flake8-bugbear`.

This rule seems to be desired in `ruff` as per
https://github.com/astral-sh/ruff/issues/3758 and
https://github.com/astral-sh/ruff/issues/2954#issuecomment-1441162976.
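
For context, a minimal example (not from the test file) of what the rule flags:

```python
def read_chunks(data, size):
    """Yield fixed-size chunks of `data`."""
    for start in range(0, len(data), size):
        yield data[start : start + size]
    return len(data)  # B901: `return` with a value inside a generator
```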

## Test Plan

As this PR was made closely following the
[CONTRIBUTING.md](8a25531a71/CONTRIBUTING.md),
it tests using the snapshot approach that is described there.

## Sources

The implementation is inspired by [the original implementation in the
`flake8-bugbear`
repository](d1aec4cbef/bugbear.py (L1092)).
The error message and [test
file](d1aec4cbef/tests/b901.py)
were also copied from there.

The documentation I came up with on my own, and it needs improvement. Maybe
the examples given in
https://github.com/astral-sh/ruff/issues/2954#issuecomment-1441162976
could be used, but maybe they are too complex; I'm not sure.

## Open Questions

- [ ] Documentation. (See above.)

- [x] Can I access the parent in a visitor?

The [original
implementation](d1aec4cbef/bugbear.py (L1100))
references the `yield` statement's parent to check if it is an
expression statement. I didn't find a way to do this in `ruff` and used
the `is_expresssion_statement` field on the visitor instead. What are
your thoughts on this? Is it possible and / or desired to access the
parent node here?

- [x] Is `Option::is_some(...)` -> `...unwrap()` the right thing to do?

Referring to [this piece of
code](9d5a280f71/crates/ruff_linter/src/rules/flake8_bugbear/rules/return_x_in_generator.rs (L91-L96)).
From my understanding, the `.unwrap()` is safe, because it is checked
that `return_` is not `None`. However, I feel like I missed a more
elegant solution that does both in one.

## Other

I don't know a lot about this rule, I just implemented it because I
found it in a
https://github.com/astral-sh/ruff/labels/good%20first%20issue.

I'm new to Rust, so any constructive criticism is appreciated.

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-05-31 21:48:36 +00:00
Charlie Marsh
91a5fdee7a Use find in indent detection (#11650) 2024-05-31 20:35:19 +00:00
Charlie Marsh
1ad5f9c038 Bump version to v0.4.7 (#11646) 2024-05-31 16:30:36 -04:00
plredmond
e914bc300b F401 sort bindings before adding to __all__ (#11648)
Sort the binding IDs before passing them to the add-to-`__all__`
function to address #11619.
2024-05-31 20:29:08 +00:00
Carl Meyer
27f6f048f0 [red-knot] initial (very incomplete) flow graph (#11624)

## Summary

Introduces the skeleton of the flow graph. So far it doesn't actually
handle any non-linear control flow :) But it does show how we can go
from an expression that references a symbol, backward through the flow
graph, to find reachable definitions of that symbol.

Adding non-linear control flow will mean adding flow nodes with multiple
predecessors, which will introduce more complexity into
`ReachableDefinitionsIterator.next()`. But one step at a time.

## Test Plan

Added a (very basic) test.
2024-05-31 14:27:17 -06:00
Alex Waygood
d62a617938 red-knot: Don't refer to Module instances as IDs (#11649) 2024-05-31 20:04:47 +00:00
Carl Meyer
16a926d138 [red-knot] infer int literal types (#11623)
## Summary

Give red-knot the ability to infer int literal types. This is quick and
easy, mostly because these types are a convenient way to observe
control-flow handling with simple assignments.

## Test Plan

Added test.
2024-05-31 13:52:29 -06:00
Jakub Marcowski
05566c6075 Update Who's Using Ruff? section to include Godot (#11647)
## Summary

- Ever since https://github.com/godotengine/godot/pull/90457 was merged
into the `master` branch, Godot has been using ruff for linting and
formatting Python files. As such, this PR adds Godot to the "Who's Using
Ruff?" section of the main `README.md` file.

## Test Plan

- N/A
2024-05-31 15:33:39 -04:00
JaRoSchm
7ce17b7736 Add Vim and Kate setup guide for ruff server (#11615)
## Summary

In the [roadmap for `ruff
server`](https://github.com/astral-sh/ruff/discussions/10581) support
for vim and kate is listed. Therefore I added setup guides for them
based on the neovim guide. As I don't use pyright I wasn't able to
translate the corresponding part from the neovim guide.

## Test Plan

Doesn't apply.
2024-05-31 19:06:55 +00:00
Charlie Marsh
f9a64503c8 Use char index rather than position for indent slice (#11645)
## Summary

A beginner's mistake :)

Closes https://github.com/astral-sh/ruff/issues/11641.
2024-05-31 19:04:36 +00:00
Alex Waygood
8a25531a71 red-knot: improve internal documentation in module.rs (#11638) 2024-05-31 16:11:18 +00:00
Micha Reiser
9b6d2ce1f2 Fix incorrect placement of trailing stub function comments (#11632) 2024-05-31 12:06:17 +00:00
Carl Meyer
889667ad84 [red-knot] Update CODEOWNERS (#11625)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-31 06:47:53 +00:00
T-256
5b500fc4dc ruff server: Add support for documents that don't exist on disk (#11588)
Co-authored-by: T-256 <Tester@test.com>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-31 08:34:10 +02:00
Charlie Marsh
685d11a909 Mark repeated-isinstance-calls as unsafe on Python 3.10 and later (#11622)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11616.
2024-05-30 18:05:24 +00:00
plredmond
dcabd04caf F401 use BTreeMap instead of FxHashMap (#11621)
* Potentially resolves #11619 (nondeterministic hashmap order across
different architectures) in F401 by replacing a hashmap with
nondeterministic traversal order with an ordered mapping.

I'm not sure how to test this with our CI/CD. I don't have an s390x
machine at home. Should I try it in Qemu?
2024-05-30 10:54:46 -07:00
Charlie Marsh
3aa7e35a4c Avoid removing newlines between docstring headers and rST blocks (#11609)
Given:

```python
def func():
    """
    Example:

    .. code-block:: python

        import foo
    """
```

Removing the newline after the `Example:` header breaks Sphinx
rendering.

See: https://github.com/astral-sh/ruff/issues/11577
2024-05-30 13:29:20 -04:00
Micha Reiser
b0a751012e Document bump to win 10 (#11613) 2024-05-30 07:49:38 +00:00
Charlie Marsh
bd46cd1fcf Infer indentation with imports when logical indent is absent (#11608)
## Summary

In an `__init__.py` file, it's not uncommon to lack a logical indent
(since it may just contain imports). In such cases, we were always
falling back to four-space indent. This PR adds detection for indents
within import groups.
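
An illustrative `__init__.py` (module names made up) showing the case this covers:

```python
# No logical (block) indentation anywhere in the file, but the import group
# below uses two-space indents; that indentation is now detected instead of
# falling back to four spaces.
from .submodule import (
  first_name,
  second_name,
)
```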

Closes https://github.com/astral-sh/ruff/issues/11606.
2024-05-30 00:18:07 -04:00
Charlie Marsh
a8d1328c1a [flake8-comprehension] Strip parentheses around generators in C400 (#11607)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11603.
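
A minimal example (not from the PR) of the pattern:

```python
# C400: unnecessary generator passed to `list()`; the redundant parentheses
# around the generator are now stripped as part of the fix.
values = list((x * 2 for x in range(3)))  # flagged
# fix: values = [x * 2 for x in range(3)]
```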
2024-05-30 03:26:56 +00:00
Christoph Hasse
e35deee583 fix(F822): add option to enable F822 in __init__.py files (#11370)
## Summary

This PR aims to close #10095 by adding an option
`init-allow-undef-export` to the `pyflakes` settings. This option is
currently set to `true` so that the behavior is kept identical.
But setting this option to `false` will cause `F822` warnings to be
shown in all files, **including** `__init__.py` files.
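
For reference, what F822 reports (illustrative example; module names made up):

```python
# __init__.py
from .core import connect

__all__ = [
    "connect",
    "disconnect",  # F822: `disconnect` is not defined in this module
]
```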

As I've mentioned on #10095, I think `init-allow-undef-export=false`
would be the more user-friendly default option, as it creates fewer
surprises. @charliermarsh what do you think about making that the
default?

With this option in place, it's a single line fix for people that rely
on the old behavior.

And thinking longer term, for future major releases, one could probably
consider deprecating the option and eventually having people just `noqa`
these warnings if they are not wanted.


## Test Plan

I've added a `test_init_f822_enabled` test which repeats the test that
is done in the `init` test but this time with
`init-allow-undef-export=false` and the snap file correctly shows that
ruff will then trigger the otherwise suppressed F822 warning.


closes #10095
2024-05-30 03:15:05 +00:00
Micha Reiser
921bc15542 use owned ast and tokens in bench (#11598) 2024-05-29 18:10:32 +02:00
Vitaliy
e14096f0a8 docs: Minor formatting typo in F401 example. (#11601)
## Summary

Removed stray space in sample code snippet that is against ruff's own
default formatting rules.

This documentation appears on
https://docs.astral.sh/ruff/rules/unused-import/

## Test Plan

This is a trivially obvious change, verifiable with `ruff format
--check`
2024-05-29 11:14:53 -04:00
T-256
5f976cae07 Windows: Statically linked C runtime (#11589)
Co-authored-by: T-256 <Tester@test.com>
2024-05-29 14:00:12 +02:00
Tomas R
7659114eb3 [flake8-pyi] Implement PYI057 (#11486)
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-05-29 10:04:36 +00:00
Micha Reiser
163c374242 Reduce extensive use of snapshot.query (#11596) 2024-05-29 10:11:46 +02:00
Charlie Marsh
204c59e353 Respect file exclusions in ruff server (#11590)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11587.

## Test Plan

- Added a lint error to `test_server.py` in `vscode-ruff`.
- Validated that, prior to this change, diagnostics appeared in the
file.
- Validated that, with this change, no diagnostics were shown.
- Validated that, with this change, no diagnostics were fixed on-save.
2024-05-29 02:58:36 +00:00
Tushar Sadhwani
531ae5227c [flake8-pyi] Implement PYI066 (#11541)
## Summary

- Implements `Y066` from `flake8-pyi` as `PYI066`
- Fixes `PYI006` not being raised for `elif` clauses. This would have
conflicted with PYI006's implementation, so I decided to do it in the same
PR.

## Test Plan

`cargo test` / `cargo insta review`
2024-05-29 00:30:00 +00:00
Tushar Sadhwani
e0169d8dea [flake8-pyi] Implement PYI064 (#11325)
## Summary

Implements `Y064` from `flake8-pyi` and its autofix.

## Test Plan

`cargo test` / `cargo insta review`
2024-05-28 23:57:13 +00:00
plredmond
9a3b9f9fb5 [redknot] add module type and attribute lookup for some types (#11416)
* Add a module type, `ModuleTypeId`
* Add an attribute lookup method `get_member` for `Type`
  * Only implemented for `ModuleTypeId` and `ClassTypeId`
  * [x] Should this be a trait?
    *Answer: no*
* [x] Uses `unwrap`, but we should remove that. Maybe add a new variant
to `QueryError`?
    *Answer: Return `Option<Type>` as is done elsewhere*
* Add `infer_definition_type` case for `Import`
* Add `infer_expr_type` case for `Attribute`
* Add a test to exercise these
* [x] remove all NOTE/FIXME/TODO after discussing with reviewers
2024-05-28 13:13:03 -07:00
Charlie Marsh
49a5a9ccc2 Bump version to v0.4.6 (#11585) 2024-05-28 15:10:53 -04:00
Charlie Marsh
69d9212817 Propagate reads on global variables (#11584)
## Summary

This PR ensures that if a variable is bound via `global`, and then the
`global` is read, the originating variable is also marked as read. It's
not perfect, in that it won't detect _rebindings_, like:

```python
from app import redis_connection

def func():
    global redis_connection

    redis_connection = 1
    redis_connection()
```

So, above, `redis_connection` is still marked as unused.

But it does avoid flagging `redis_connection` as unused in:

```python
from app import redis_connection

def func():
    global redis_connection

    redis_connection()
```

Closes https://github.com/astral-sh/ruff/issues/11518.
2024-05-28 14:47:05 -04:00
Akshet Pandey
4a305588e9 [flake8-bandit] request-without-timeout should warn for requests.request (#11548)
## Summary
Updates
[S113](https://docs.astral.sh/ruff/rules/request-without-timeout/) to
also warn about a missing timeout when calling `requests.request`.
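
A short example (not from the PR) of the newly covered call:

```python
import requests

# S113: network calls without an explicit timeout can hang indefinitely.
requests.request("GET", "https://example.com")              # now flagged
requests.request("GET", "https://example.com", timeout=10)  # OK
```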
2024-05-28 16:31:12 +00:00
Charlie Marsh
16acd4913f Remove some unused pub functions (#11576)
## Summary

I left anything in `red-knot`, any `with_` methods, etc.
2024-05-28 09:56:51 -04:00
Micha Reiser
3989cb8b56 Make ruff_notebook a workspace dependency in ruff_server (#11572) 2024-05-28 09:26:39 +02:00
Charlie Marsh
a38c05bf13 Avoid recommending context manager in __enter__ implementations (#11575)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11567.
2024-05-28 01:44:24 +00:00
Charlie Marsh
ab107ef1f3 Avoid recommending operator.itemgetter with dependence on lambda arg (#11574)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11573.
2024-05-28 01:29:29 +00:00
Ahmed Ilyas
b36c713279 Consider irrefutable pattern similar to if .. else for C901 (#11565)
## Summary

Follow up to https://github.com/astral-sh/ruff/pull/11521

Removes the extra added complexity for catch all match cases. This
matches the implementation of plain `else` statements.

## Test Plan
Added new test cases.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-05-27 17:33:36 +00:00
Charlie Marsh
34a5063aa2 Respect excludes in ruff server configuration discovery (#11551)
## Summary

Right now, we're discovering configuration files even within (e.g.)
virtual environments, because we're recursing without respecting the
`exclude` field on parent configuration.

Closes https://github.com/astral-sh/ruff-vscode/issues/478.

## Test Plan

Installed Pandas; verified that I saw no warnings:

![Screenshot 2024-05-26 at 8 09
05 PM](https://github.com/astral-sh/ruff/assets/1309177/dcf4115c-d7b3-453b-b7c7-afdd4804d6f5)
2024-05-27 16:59:46 +00:00
Micha Reiser
adc0a5d126 Rename document module to text_document (#11571) 2024-05-27 18:32:21 +02:00
Dhruv Manilawala
e28e737296 Update FStringElements to deref to a slice (#11570)
Ref: https://github.com/astral-sh/ruff/pull/11400#discussion_r1615600354
2024-05-27 15:52:13 +00:00
Dhruv Manilawala
37ad994318 Use default settings if initialization options is empty or not provided (#11566)
## Summary

This PR fixes the bug to avoid flattening the global-only settings for
the new server.

This was added in https://github.com/astral-sh/ruff/pull/11497, possibly
to correctly de-serialize an empty value (`{}`). But this led to a bug
where the configuration under the `settings` key was not being read for
the global-only variant.

By using `#[serde(default)]`, we ensure that the `settings` field in the
`GlobalOnly` variant is optional and that an empty JSON object `{}` is
correctly deserialized into `GlobalOnly` with a default `ClientSettings`
instance.

fixes: #11507 

## Test Plan

Update the snapshot and existing test case. Also, verify the following
settings in Neovim:

1. Nothing

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
}
```

2. Empty dictionary

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
  init_options = vim.empty_dict(),
}
```

3. Empty `settings`

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
  init_options = {
    settings = vim.empty_dict(),
  },
}
```

4. With some configuration:

```lua
ruff = {
  cmd = {
    '/Users/dhruv/work/astral/ruff/target/debug/ruff',
    'server',
    '--preview',
  },
  init_options = {
    settings = {
      configuration = '/tmp/ruff-repro/pyproject.toml',
    },
  },
}
```
2024-05-27 21:06:34 +05:30
Alex Waygood
246a3388ee Implement a common trait for the string flags (#11564) 2024-05-27 16:02:01 +01:00
Evan Kohilas
6be00d5775 Adds recommended extension settings for vscode (#11519) 2024-05-27 13:04:32 +02:00
Dhruv Manilawala
9200dfc79f Remove empty strings when converting to f-string (UP032) (#11524)
## Summary

This PR brings back the functionality to remove empty strings when
converting to an f-string in `UP032`.

For context, https://github.com/astral-sh/ruff/pull/8712 added this
functionality to remove _trailing_ empty strings, but it got removed in
https://github.com/astral-sh/ruff/pull/8697, possibly unintentionally.

There's one difference which is that this PR will remove _any_ empty
strings and not just trailing ones. For example,

```diff
--- /Users/dhruv/playground/ruff/src/UP032.py
+++ /Users/dhruv/playground/ruff/src/UP032.py
@@ -1,7 +1,5 @@
 (
-    "{a}"
-    ""
-    "{b}"
-    ""
-).format(a=1, b=1)
+    f"{1}"
+    f"{1}"
+)
```

## Test Plan

Run `cargo insta test` and update the snapshots.
2024-05-27 05:05:22 +00:00
renovate[bot]
5dcde88099 Update Rust crate thiserror to v1.0.61 (#11561) 2024-05-27 00:33:54 +00:00
renovate[bot]
7794eb2bde Update Rust crate proc-macro2 to v1.0.84 (#11556) 2024-05-26 20:21:50 -04:00
renovate[bot]
40bfae4f99 Update Rust crate syn to v2.0.66 (#11560) 2024-05-27 00:21:44 +00:00
renovate[bot]
7b064b25b2 Update Rust crate mimalloc to v0.1.42 (#11554) 2024-05-26 20:21:39 -04:00
renovate[bot]
9993115f63 Update Rust crate smol_str to v0.2.2 (#11559) 2024-05-26 20:21:25 -04:00
renovate[bot]
f0a21c9161 Update Rust crate serde to v1.0.203 (#11558) 2024-05-26 20:21:19 -04:00
renovate[bot]
f26c155de5 Update Rust crate schemars to v0.8.21 (#11557) 2024-05-26 20:21:13 -04:00
renovate[bot]
c3fa826b0a Update Rust crate parking_lot to v0.12.3 (#11555) 2024-05-26 20:21:03 -04:00
renovate[bot]
8b69794f1d Update Rust crate libc to v0.2.155 (#11553) 2024-05-26 20:20:47 -04:00
renovate[bot]
4e7c84df1d Update Rust crate anyhow to v1.0.86 (#11552) 2024-05-26 20:20:38 -04:00
Dhruv Manilawala
99c400000a Avoid owned token data in sequence sorting (#11533)
## Summary

This PR updates the sequence sorting (`RUF022` and `RUF023`) to avoid
using the owned data from the string token. Instead, we will directly
use the reference to the data on the AST. This does introduce a lot of
lifetimes but that's required.

The main motivation for this is to allow removing the `lex_starts_at`
usage easily.

### Alternatives

1. Extract the raw string content (stripping the prefix and quotes)
using the `Locator` and use that for comparison
2. Build up an
[`IndexVec`](3e30962077/crates/ruff_index/src/vec.rs)
and use the newtype index in place of the string value itself. This also
does require lifetimes so we might as well just use the method in this
PR.

## Test Plan

`cargo insta test` and no ecosystem changes
2024-05-26 20:20:20 -04:00
Charlie Marsh
b5d147d219 Create intermediary directories for --output-file (#11550)
Closes https://github.com/astral-sh/ruff/issues/11549.
2024-05-26 23:23:11 +00:00
Aleksei Latyshev
77da4615c1 [pyupgrade] Support TypeAliasType in UP040 (#11530)
## Summary
Lint `TypeAliasType` in UP040.

Fixes #11422 
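
A sketch (not from the PR) of the newly covered pattern:

```python
from typing_extensions import TypeAliasType

# UP040 now also flags explicit `TypeAliasType` assignments, suggesting the
# PEP 695 `type` statement on Python 3.12+.
IntList = TypeAliasType("IntList", list[int])  # flagged
# suggested: type IntList = list[int]
```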

## Test Plan

cargo test
2024-05-26 19:05:35 +00:00
Jane Lewis
627d230688 ruff server searches for configuration in parent directories (#11537)
## Summary

Fixes #11506.

`RuffSettingsIndex::new` now searches for configuration files in parent
directories.

## Test Plan

I confirmed that the original test case described in the issue worked as
expected.
2024-05-26 18:11:08 +00:00
Fergus Longley
0eef834e89 Use project-relative path when calculating gitlab message fingerprint (#11532)
## Summary

Concurrent GitLab runners clone projects into separate directories, e.g.
`{builds_dir}/$RUNNER_TOKEN_KEY/$CONCURRENT_ID/$NAMESPACE/$PROJECT_NAME`.
Since the fingerprint uses the full path to the file, the fingerprints
calculated by Ruff are different depending on which concurrent runner it
executes on, so often an MR will appear to remove all existing issues
and add them with new fingerprints.

I've adjusted the fingerprint function to use the project relative path,
which fixes this. Unfortunately this will have a breaking change for any
current users of this output - the fingerprints will change and appear
in GitLab as all linting messages having been fixed and then created.

## Test Plan

`cargo nextest run`

Running `ruff check --output-format gitlab` in a git repo, moving the
repo and running again, verifying no diffs between the outputs
2024-05-26 14:10:04 -04:00
Charlie Marsh
650c578e07 [flake8-self] Ignore sunder accesses in flake8-self rule (#11546)
## Summary

We already ignore dunders, so ignoring sunders (as in
https://docs.python.org/3/library/enum.html#supported-sunder-names)
makes sense to me.
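
An illustrative example (not from the PR) of the sunder names now ignored:

```python
import enum

class Color(enum.Enum):
    RED = 1

# SLF001 flags private-member access; dunder names were already exempt, and
# sunder names such as enum's `_name_` / `_value_` are now exempt as well.
value = Color.RED._value_   # sunder access: no longer flagged
name = Color.RED._name_     # sunder access: no longer flagged
```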
2024-05-26 13:57:24 -04:00
Jane Lewis
9567fddf69 ruff server correctly treats .pyi files as stub files (#11535)
## Summary

Fixes #11534.

`DocumentQuery::source_type` now returns `PySourceType::Stub` when the
document is a `.pyi` file.

## Test Plan

I confirmed that stub-specific rule violations appeared with a build
from this PR (they were not visible from a `main` build).

<img width="1066" alt="Screenshot 2024-05-24 at 2 15 38 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/cd519b7e-21e4-41c8-bc30-43eb6d4d438e">
2024-05-26 13:42:48 -04:00
Mateusz Sokół
ab6d9d4658 Add missing functions to NumPy 2.0 migration rule (#11528)
Hi! 

I left out some of the functions in the migration rule that were
removed in NumPy 2.0:
- `np.alltrue`
- `np.anytrue`
- `np.cumproduct`
- `np.product`

Addressing: https://github.com/numpy/numpy/issues/26493
2024-05-26 13:24:20 -04:00
Amar Paul
677893226a [flake8-2020] fix minor typo in YTT301 documentation (#11543)
## Summary

The current doc says `sys.version[0]` will select the first digit of a major
version number (correct), then as an example says

> e.g., `"3.10"` would evaluate to `"1"`

(would actually evaluate to `"3"`). Changed the example version to a
two-digit number to make the problem more clear.

## Test Plan

ran the following:
- `cargo run -p ruff -- check
crates/ruff_linter/resources/test/fixtures/flake8_2020/YTT301.py
--no-cache`
- `cargo insta review`
- `cargo test`
which all passed.
2024-05-26 13:23:41 -04:00
Ahmed Ilyas
33fd50027c Consider match-case stmts for C901, PLR0912, and PLR0915 (#11521)
Resolves #11421

## Summary

Instead of counting match/case as one statement, consider each `case` as
a conditional.

## Test Plan

`cargo test`
2024-05-24 14:44:46 +05:30
Dmitry Bogorad
3e30962077 [flake8-logging-format] Fix the autofix title in logging-warn (G010) (#11514)
## Summary

Rule `logging-warn` (`G010`) prescribes a change from `warn` to
`warning` and has a corresponding autofix, but the autofix is mistakenly
titled ```"Convert to `warn`"``` instead of ```"Convert to `warning`"```
(the latter is what the autofix actually does). Seems to be a plain
typo.
2024-05-24 13:13:42 +05:30
Jane Lewis
81275a6c3d ruff server: An empty code action filter no longer returns notebook source actions (#11526)
## Summary

Fixes #11516

`ruff server` was sending both regular source actions and notebook
source actions back when passed an empty action filter. This PR makes a
few small changes so that notebook source actions are not sent when
regular source actions are sent, which means that an empty filter will
only return regular source actions.

## Test Plan

I confirmed that duplicate code actions no longer appeared in Neovim,
using a configuration similar to the one from the original issue.

<img width="509" alt="Screenshot 2024-05-23 at 11 48 48 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/9a5d6907-dd41-48bd-b015-8a344c5e0b3f">
2024-05-24 07:20:39 +00:00
Charlie Marsh
52c946a4c5 Treat all singledispatch arguments as runtime-required (#11523)
## Summary

It turns out that `singledispatch` does end up evaluating all arguments,
even though only the first is used to dispatch.
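
A hedged illustration (not from the PR) of why every parameter's annotation ends up being needed at runtime:

```python
import functools


@functools.singledispatch
def describe(value: object, unit: str = "") -> str:
    # Although only `value` is used for dispatch, registering implementations
    # evaluates the annotations of all parameters, so none of them can be
    # deferred to a type-checking-only import block.
    return f"{value!r}{unit}"


@describe.register
def _(value: int, unit: str = "") -> str:
    return f"{value}{unit}"
```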

Closes https://github.com/astral-sh/ruff/issues/11520.
2024-05-23 20:36:24 -04:00
Evan Kohilas
ebdaf5765a [flake8-async] Sleep with >24 hour interval should usually sleep forever (ASYNC116) (#11498)
## Summary

Addresses #8451 by implementing rule 116, which adds an unsafe fix when sleep
is used with a >24 hour interval, suggesting sleeping forever instead.

This rule is added under async instead, as my understanding was that these
trio rules would be moved to async anyway.

There are a couple of TODOs, which address further extending the rule by
adding support for lookups and evaluations, and also supporting `anyio`.
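
A minimal sketch (not from the PR) of what the rule flags:

```python
import trio


async def wait_for_shutdown() -> None:
    # ASYNC116: a sleep longer than 24 hours usually means "sleep forever";
    # the unsafe fix suggests `trio.sleep_forever()` instead.
    await trio.sleep(60 * 60 * 25)  # flagged: interval > 24 hours
    # suggested fix: await trio.sleep_forever()
```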
2024-05-23 16:25:50 -04:00
Christian Adell
9a93409e1c Update README.md - new Ruff user (#11509) 2024-05-23 15:50:17 -04:00
Dhruv Manilawala
102b9d930f Use Importer available on Checker (#11513)
## Summary

This PR updates the `FA102` rule logic to use the `Importer` which is
available on the `Checker`.

The main motivation is that this would make updating the `Importer` to
use the `Tokens` struct which will be required to remove the
`lex_starts_at` usage in `Insertion::start_of_block` method.

## Test Plan

`cargo insta test`
2024-05-23 11:19:08 +00:00
Jane Lewis
550aa871d3 Bump version to v0.4.5 (#11502) 2024-05-23 01:09:01 +00:00
Charlie Marsh
3c22a3bdcc Minor edits to ruff server docs (#11500)
## Summary

Minor copy edits based on my read-through. Feel free to disagree
anywhere.
2024-05-22 23:53:53 +00:00
Jane Lewis
6263923915 Update documentation for ruff server with new migration guide (#11499)
## Summary

Introduces a migration guide from `ruff-lsp` to `ruff server` and makes
small updates to the `README.md`.
2024-05-22 14:36:33 -07:00
Jane Lewis
94abea4b08 ruff server: Fix multiple issues with Neovim and Helix (#11497)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/11236.

This PR fixes several issues, most of which relate to non-VS Code
editors (Helix and Neovim).

1. Global-only initialization options are now correctly deserialized
from Neovim and Helix
2. Empty diagnostics are now published correctly for Neovim and Helix.
3. A workspace folder is created at the current working directory if the
initialization parameters send an empty list of workspace folders.
4. The server now gracefully handles opening files outside of any known
workspace, and will use global fallback settings taken from client
editor settings and a user settings TOML, if it exists.

## Test Plan

I've tested to confirm that each issue has been fixed.

* Global-only initialization options are now correctly deserialized from
Neovim and Helix + the server gracefully handles opening files outside
of any known workspace


https://github.com/astral-sh/ruff/assets/19577865/4f33477f-20c8-4e50-8214-6608b1a1ea6b

* Empty diagnostics are now published correctly for Neovim and Helix


https://github.com/astral-sh/ruff/assets/19577865/c93f56a0-f75d-466f-9f40-d77f99cf0637

* A workspace folder is created at the current working directory if the
initialization parameters send an empty list of workspace folders.



https://github.com/astral-sh/ruff/assets/19577865/b4b2e818-4b0d-40ce-961d-5831478cc726
2024-05-22 20:50:58 +00:00
Charlie Marsh
519a65007f Mark quotes as unnecessary for non-evaluated annotations (#11485)
## Summary

Similar to #11414, this PR extends `UP037` to flag quoted annotations
that are located in positions that won't be evaluated at runtime.

For example, the quotes on `Tuple` are unnecessary in:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from typing import Tuple


def foo():
    x: "Tuple[int, int]" = (0, 0)

foo()
```
2024-05-22 15:44:31 -04:00
Jane Lewis
573facd2ba Fix automatic configuration reloading for text and notebook documents (#11492)
## Summary

Recent changes made in the [Jupyter Notebook feature
PR](https://github.com/astral-sh/ruff/pull/11206) caused automatic
configuration reloading to stop working. This was because we would check
for paths to reload using the changed path, when we should have been
using the parent path of the changed path (to get the directory it was
changed in).

Additionally, this PR fixes an issue where `ruff.toml` and `.ruff.toml`
files were not being automatically reloaded.

Finally, this PR improves configuration reloading by actively publishing
diagnostics for notebook documents (which won't be affected by the
workspace refresh since they don't use pull diagnostics). It will also
publish diagnostics for text documents if pull diagnostics aren't
supported.

## Test Plan
To test this, open an existing configuration file in a codebase, and
make modifications that will affect one or more open Python / Jupyter
Notebook files. You should observe that the diagnostics for both kinds
of files update automatically when the file changes are saved.

Here's a test video showing what a successful test should look like:



https://github.com/astral-sh/ruff/assets/19577865/7172b598-d6de-4965-b33c-6cb8b911ef6c
2024-05-22 11:20:45 -07:00
Jane Lewis
3cb2e677aa ruff.applyFormat now formats an entire notebook document (#11493)
## Summary

Previously, `ruff.applyFormat`, seen in VS Code as the command `Ruff:
Format Document`, would only format the currently active notebook cell
inside a notebook document. This PR makes `ruff.applyFormat` format the
entire notebook document at once, operating on each code cell in order.

## Test Plan

1. Open a notebook document that has multiple unformatted code cells.
2. Run `Ruff: Format Document` through the Command Palette
(`Ctrl/Cmd+Shift+P` by default)
3. Observe that all code cells in the notebook have been formatted.
2024-05-22 09:02:46 -07:00
Dhruv Manilawala
f0046ab28e Move has_comments to CommentRanges (#11495)
## Summary

This PR moves the `has_comments` function from `Indexer` to
`CommentRanges`. The main motivation is that the `CommentRanges` will
now be built by the parser which is shared between the linter and the
formatter. Thus, the `CommentRanges` will be removed from the `Indexer`.

## Test Plan

`cargo test`
2024-05-22 13:35:16 +00:00
Charlie Marsh
5bb9720a10 Avoid multiline quotes warning with quote-style = preserve (#11490)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11063.
2024-05-22 04:31:03 +00:00
Dhruv Manilawala
9ff18bf9d3 Simplify Neovim docs for the LSP setup (#11489)
Similar to what we have at
https://github.com/astral-sh/ruff-lsp#example-neovim
2024-05-22 09:51:02 +05:30
Charlie Marsh
aa906b9c75 [pylint] Ignore __slots__ with dynamic values (#11488)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11333.
2024-05-22 04:18:01 +00:00
Evan Kohilas
3476e2f359 fixes invalid rule from hyphen (#11484)
## Summary

When using `add_rule.py`, it produces the following line in `codes.rs`
```
        (Flake8Async, "102") => (RuleGroup::Stable, rules::flake8-async::rules::BlockingOsCallInAsyncFunction),
```

This causes a syntax error.

This PR resolves that issue so that the script can be used again.

## Test Plan

Tested manually in new rule creation
2024-05-21 23:39:50 -04:00
Charlie Marsh
8848eca3c6 [pylint] Remove try body from branch counting (#11487)
## Summary

Matching Pylint, we now omit the `try` body itself from branch counting.
Each `except` counts as a branch, as does the `else` and the `finally`.

Closes https://github.com/astral-sh/ruff/issues/11205.
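
For reference, how the counting described above applies (example not from the PR):

```python
def load(path):
    try:                       # the `try` body itself is not counted
        data = open(path).read()
    except FileNotFoundError:  # counts as one branch
        data = ""
    except OSError:            # counts as one branch
        raise
    else:                      # counts as one branch
        print("loaded")
    finally:                   # counts as one branch
        print("done")
    return data
```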
2024-05-21 23:38:51 -04:00
Jane Lewis
b0731ef9cb ruff server: Support Jupyter Notebook (*.ipynb) files (#11206)
## Summary

Closes https://github.com/astral-sh/ruff/issues/10858.

`ruff server` now supports `*.ipynb` (aka Jupyter Notebook) files.
Extensive internal changes have been made to facilitate this, which I've
done some work to contextualize with documentation and an pre-review
that highlights notable sections of the code.

`*.ipynb` cells should behave similarly to `*.py` documents, with one
major exception. The format command `ruff.applyFormat` will only apply
to the currently selected notebook cell - if you want to format an
entire notebook document, use `Format Notebook` from the VS Code context
menu.

## Test Plan

The VS Code extension does not yet have Jupyter Notebook support
enabled, so you'll first need to enable it manually. To do this,
checkout the `pre-release` branch and modify `src/common/server.ts` as
follows:

Before:
![Screenshot 2024-05-13 at 10 59
06 PM](https://github.com/astral-sh/ruff/assets/19577865/c6a3c604-c405-4968-b8a2-5d670de89172)

After:
![Screenshot 2024-05-13 at 10 58
24 PM](https://github.com/astral-sh/ruff/assets/19577865/94ab2e3d-0609-448d-9c8c-cd07c69a513b)

I recommend testing this PR with large, complicated notebook files. I
used notebook files from [this popular
repository](https://github.com/jakevdp/PythonDataScienceHandbook/tree/master/notebooks)
in my preliminary testing.

The main thing to test is ensuring that notebook cells behave the same
as Python documents, besides the aforementioned issue with
`ruff.applyFormat`. You should also test adding and deleting cells (in
particular, deleting all the code cells and ensure that doesn't break
anything), changing the kind of a cell (i.e. from markup -> code or vice
versa), and creating a new notebook file from scratch. Finally, you
should also test that source actions work as expected (and across the
entire notebook).

Note: `ruff.applyAutofix` and `ruff.applyOrganizeImports` are currently
broken for notebook files, and I suspect it has something to do with
https://github.com/astral-sh/ruff/issues/11248. Once this is fixed, I
will update the test plan accordingly.

---------

Co-authored-by: nolan <nolan.king90@gmail.com>
2024-05-21 22:29:30 +00:00
Nicolas Jeker
84531d1644 Clarify motivation for E713 and E714 (#11483)
The wording 'negative comparison' is a rather vague description of the
'is not' operation and does not describe what the 'not in' operation
does (it was likely copied from 'is not'). It has been replaced with
more precise language describing the operators, taken from the official
Python docs[1].

Neither rule had strong reasoning beyond 'it's bad, use the other'. The
origin of these rules seems to be PEP 8[2], which prefers 'is not' over
'not ... is' for readability. This is now reflected in the description.

[1]:
https://docs.python.org/3/reference/expressions.html#membership-test-operations
[2]: https://peps.python.org/pep-0008/#programming-recommendations
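
For reference, minimal examples of the two operations (not taken from the PR):

```python
admins = {"alice"}
user = "bob"
value = 5

# E713 flags the negated membership test ...
if not user in admins:
    print("not an admin")
# ... and prefers the `not in` operator:
if user not in admins:
    print("not an admin")

# E714 flags the negated identity test ...
if not value is None:
    print("value is set")
# ... and prefers the `is not` operator:
if value is not None:
    print("value is set")
```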
2024-05-21 14:12:18 -05:00
Charlie Marsh
83b8b62e3e Avoid flagging __future__ annotations as required for non-evaluated type annotations (#11414)
## Summary

If an annotation won't be evaluated at runtime, we don't need to flag
`from __future__ import annotations` as required. This applies both to
quoted annotations and annotations outside of runtime-evaluated
positions, like:

```python
def main() -> None:
    a_list: list[str] | None = []
    a_list.append("hello")
```

Closes https://github.com/astral-sh/ruff/issues/11397.
2024-05-21 18:57:13 +00:00
plredmond
7225732859 F401 - update documentation and deprecate ignore_init_module_imports (#11436)
## Summary

* Update documentation for F401 following recent PRs
  * #11168
  * #11314
* Deprecate `ignore_init_module_imports`
* Add a deprecation pragma to the option and a "warn user once" message
when the option is used.
* Restore the old behavior for stable (non-preview) mode:
* When `ignore_init_module_imports` is set to `true` (default) there are
no `__init__.py` fixes (but we get nice fix titles!).
* When `ignore_init_module_imports` is set to `false` there are unsafe
`__init__.py` fixes to remove unused imports.
* When preview mode is enabled, it overrides
`ignore_init_module_imports`.
* Fixed a bug in fix titles where `import foo as bar` would recommend
reexporting `bar as bar`. It now says to reexport `foo as foo`. (In this
case we don't issue a fix, fwiw; it was just a fix title bug.)

## Test plan

Added new fixture tests that reuse the existing fixtures for
`__init__.py` files. Each of the three situations listed above has
fixture tests. The F401 "stable" tests cover:

> * When `ignore_init_module_imports` is set to `true` (default) there
are no `__init__.py` fixes (but we get nice fix titles!).

The F401 "deprecated option" tests cover:

> * When `ignore_init_module_imports` is set to `false` there are unsafe
`__init__.py` fixes to remove unused imports.

These complement existing "preview" tests that show the new behavior
which recommends fixes in `__init__.py` according to whether the import
is 1st party and other circumstances (for more on that behavior see:
#11314).
2024-05-21 09:23:45 -07:00
Dhruv Manilawala
403f0dccd8 Consider soft keywords for E27 rules (#11446)
## Summary

This is a follow-up PR to #11445 that updates the `E27` rules to consider soft
keywords as well.

## Test Plan

Add test cases consisting of soft keywords and update the snapshot.
2024-05-20 05:38:06 +00:00
Zanie Blue
46fcd19ca6 Fix division by zero error in ecosystem check (#11469)
e.g.
https://github.com/astral-sh/ruff/actions/runs/9144809516/job/25143076896?pr=11468

<img width="1388" alt="Screenshot 2024-05-19 at 12 02 15 AM"
src="https://github.com/astral-sh/ruff/assets/2586601/0df7cbcd-712c-4ea9-96f5-73f871570525">
2024-05-19 09:08:10 -05:00
Charlie Marsh
d9ec3d56b0 Add some new projects to the ecosystem CI (#11468)
Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-05-19 08:08:38 -05:00
Auguste Lalande
cd87b787d9 Fix windows-ci failure (#11470)

## Summary

The recent issues with the windows CI seem to be caused by
https://github.com/nextest-rs/nextest/issues/1493. With this
https://github.com/nextest-rs/nextest/issues/1493#issuecomment-2106331574
as a fix.

(Let's see if it works)
2024-05-19 07:25:06 -05:00
Charlie Marsh
dd6d411026 Remove comma from ecosystem checks (#11466)
## Summary

Something's up with this repo -- they added a post-checkout hook? So
let's just remove it for now. We should go through and add a new batch
of repositories some time.
2024-05-18 23:37:56 -04:00
Charlie Marsh
cfceb437a8 Treat escaped newline as valid sequence (#11465)
## Summary

We weren't treating the escaped newline as a valid condition to trigger
the safer fix (add an extra backslash before each invalid escape
sequence).
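
A minimal sketch of the distinction, assuming this is the
invalid-escape-sequence rule (W605):

```python
# "\d" is not a recognized escape sequence, so the safer fix would rewrite
# it as "\\d".
pattern = "\d+"

# A backslash immediately before a newline is a valid escape (it continues
# the string literal onto the next line) and is now left untouched.
message = "first half \
second half"

print(pattern, message)
```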

Closes https://github.com/astral-sh/ruff/issues/11461.
2024-05-19 03:32:32 +00:00
Charlie Marsh
48b0660228 Respect operator precedence in FURB110 (#11464)
## Summary

Ensures that we parenthesize expressions (if necessary) to preserve
operator precedence in `FURB110`.
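
A hypothetical case where the parentheses matter (conditional expressions
bind more loosely than `or`):

```python
a, b, c, d = 3, 5, False, 9

# The original expression groups as `a if a else (b if c else d)` and
# evaluates to 3 here.
original = a if a else b if c else d

# Dropping the parentheses would regroup the rewrite as
# `(a or b) if c else d`, which evaluates to 9 instead, so the fix keeps them:
rewritten = a or (b if c else d)

print(original, rewritten)  # 3 3
```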

Closes https://github.com/astral-sh/ruff/issues/11398.
2024-05-19 03:17:11 +00:00
Charlie Marsh
24899efe50 Remove example from tab-indentation (#11462)
## Summary

I think the example is more confusing than helpful, since there's no
visual difference between the tab and space here (even if it rendered
properly).

Closes
https://github.com/astral-sh/ruff/issues/11460#issuecomment-2118397278.
2024-05-17 17:49:16 -04:00
Dhruv Manilawala
83152fff92 Include soft keywords for is_keyword check (#11445)
## Summary

This PR updates the `TokenKind::is_keyword` check to include soft
keywords. To account for this change, it adds a new
`is_non_soft_keyword` method.

The usage in logical line rules were updated to use the
`is_non_soft_keyword` method but it'll be updated to use `is_keyword` in
a follow-up PR (#11446).

Meanwhile, the parser usages were kept as is. Because of that, the
snapshots for two test cases were updated in a better direction.

## Test Plan

`cargo insta test`
2024-05-17 10:26:48 +05:30
Charlie Marsh
43e8147eaf Sort edits prior to deduplicating in quotation fix (#11452)
## Summary

We already have handling for "references that get quoted within our
quoted references", but we were assuming a specific ordering in the way
edits were generated.

Closes https://github.com/astral-sh/ruff/issues/11449.
2024-05-16 12:13:09 -04:00
Jason R. Coombs
42b655b24f Locate ruff executable in 'bin' directory as installed by 'pip install --target'. (#11450)
Fixes #11246

## Summary

This change adds an intermediate additional search path for
`find_ruff_bin`.

I would have added this path as the last one, except that the last one
is the one reported to the user, so I made this one second to last.

## Test Plan

It's shown to work with this command:

```
 ~ @ pip-run git+https://github.com/jaraco/ruff@feature/honor-install-target-bin -- -m ruff --version
ruff 0.4.4
```

I tried running the same command on Windows, which should work in
theory, but building ruff from source on Windows is complicated. Even
after installing Rust, ruff fails to build when `libmimalloc-sys` fails
to build because `gcc` isn't installed (and the error message points to
a [broken
anchor](https://github.com/rust-lang/cc-rs#compile-time-requirements)).
I was really hoping Rust would get us away from the Windows as
second-class-citizen model :(.
2024-05-16 16:07:22 +00:00
Dhruv Manilawala
f67c02c837 Remove leftover marker tokens (#11444)
## Summary

This PR removes the leftover marker tokens from the LALRPOP to
hand-written parser migration.
2024-05-16 11:39:05 +00:00
Charlie Marsh
4436dec1d9 Fix broken comment in too-many-branches (#11440) 2024-05-16 02:25:20 +00:00
Tim Hatch
27da223e9f Add --output-format to ruff config CLI (#11438)
This is useful for extracting the defaults in order to construct
equivalent configs via external scripts. This is my first non-hello-world
Rust code; comments and suggested tests are appreciated.

## Summary

We already have `ruff linter --output-format json`, this provides `ruff
config x --output-format json` as well. I plan to use this to construct
an equivalent config snippet to include in some managed repos, so when
we update their version of ruff and it adds new lints, they get a PR
that includes the commented-out new lints.

Note that the no-args form of `ruff config` ignores output-format
currently, but probably should obey it (although array-of-strings
doesn't seem that useful, looking for input on format).

## Test Plan

I could use a hand coming up with a typical way to write automated tests
for this.

```sh-session
(.venv) [timhatch:ruff ]$ ./target/debug/ruff config lint.select
A list of rule codes or prefixes to enable. Prefixes can specify exact
rules (like `F841`), entire categories (like `F`), or anything in
between.

When breaking ties between enabled and disabled rules (via `select` and
`ignore`, respectively), more specific prefixes override less
specific prefixes.

Default value: ["E4", "E7", "E9", "F"]
Type: list[RuleSelector]
Example usage:
``toml
# On top of the defaults (`E4`, E7`, `E9`, and `F`), enable flake8-bugbear (`B`) and flake8-quotes (`Q`).
select = ["E4", "E7", "E9", "F", "B", "Q"]
``
(.venv) [timhatch:ruff ]$ ./target/debug/ruff config lint.select --output-format json
{
  "Field": {
    "doc": "A list of rule codes or prefixes to enable. Prefixes can specify exact\nrules (like `F841`), entire categories (like `F`), or anything in\nbetween.\n\nWhen breaking ties between enabled and disabled rules (via `select` and\n`ignore`, respectively), more specific prefixes override less\nspecific prefixes.",
    "default": "[\"E4\", \"E7\", \"E9\", \"F\"]",
    "value_type": "list[RuleSelector]",
    "scope": null,
    "example": "# On top of the defaults (`E4`, E7`, `E9`, and `F`), enable flake8-bugbear (`B`) and flake8-quotes (`Q`).\nselect = [\"E4\", \"E7\", \"E9\", \"F\", \"B\", \"Q\"]",
    "deprecated": null
  }
}
```
2024-05-15 22:17:33 -04:00
Jaap Roes
b3e4d39f64 Clearly indicate what is counted as a branch (#11423)
## Summary

As discussed in issue #11408, PLR0912 has a broader definition of
"branches" than I expected. This updates the documentation to include
this definition.

I also updated the example to include several different types of
branches, while still maintaining dictionary lookup as an alternative
solution. (Crafting a realistic example was quite a challenge 😅).

Closes https://github.com/astral-sh/ruff/issues/11408.
2024-05-15 22:17:05 -04:00
Tim Hatch
d05347cfcb Regenerate sys.rs with stdlibs==2024.5.15 (#11437)
## Summary

Now that 3.13.0 b1 is out some of the stdlib modules have changed names.

## Test Plan

Wait for CI to run, expected to be pretty safe.
2024-05-15 22:17:32 +00:00
Léopold Mebazaa
7ac9cabbff Update CONTRIBUTING.md to reflect the new parser (#11434)
## Summary

CONTRIBUTING.md says that `cargo dev print-ast` uses the old RustPython
parser, even though, as far as I can tell, it uses the shiny new parser.
This PR fixes this.

## Test Plan

CI jobs should do the trick -- I didn't modify any code.
2024-05-15 14:36:28 +00:00
Alex Waygood
6963f75a14 Move string-prefix enumerations to a separate submodule (#11425)
## Summary

This moves the string-prefix enumerations in `ruff_python_ast` to a
separate submodule. I think this helps clarify that these prefixes are
purely abstract: they only depend on each other, and do not depend on
any of the other code in `nodes.rs` in any way. Moreover, while various
AST nodes _use_ them, they're not really nodes themselves, so they feel
slightly out of place in `nodes.rs`.

I considered moving all of them to `str.rs`, but it felt like enough
code that it could be a separate submodule.

## Test Plan

`cargo test`
2024-05-15 07:40:27 -04:00
github-actions[bot]
effe3ad4ef Sync vendored typeshed stubs (#11428)
Co-authored-by: typeshedbot <>
2024-05-15 00:46:41 +00:00
Alex Waygood
bdc15a7cb9 Add automation for updating our vendored typeshed stubs (#11427) 2024-05-14 20:39:30 -04:00
plredmond
da882b6657 F401 - Recommend adding unused import bindings to __all__ (#11314)
Followup on #11168 and resolve #10391

# User facing changes

* F401 now recommends a fix to add unused import bindings to
`__all__` if a single `__all__` list or tuple is found in `__init__.py`.
* If there are no `__all__` found in the file, fall back to recommending
redundant-aliases.
* If there are multiple `__all__` declarations, or only one but of the wrong
type (not a list or tuple), then diagnostics are generated without fixes.
* `fix_title` is updated to reflect what the fix/recommendation is.

Subtlety: For a renamed import such as `import foo as bees`, we can
generate a fix to add `bees` to `__all__` but cannot generate a fix to
produce a redundant import (because that would break uses of the binding
`bees`).
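
A rough before/after sketch of an `__init__.py` with hypothetical module
names:

```python
# __init__.py, before the fixes
from .api import public_api      # re-exported via __all__, so not flagged
from . import helpers            # unused: fix appends "helpers" to __all__
import package.config as cfg     # unused: fix appends the binding name "cfg"

__all__ = ["public_api"]

# after applying the suggested fixes, the last line becomes:
# __all__ = ["public_api", "helpers", "cfg"]
```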

# Implementation changes

* Add `name` field to `ImportBinding` to contain the name of the
_binding_ we want to add to `__all__` (important for the `import foo as
bees` case). It previously only contained the `AnyImport` which can give
us information about the import but not the binding.
* Add `binding` field to `UnusedImport` to contain the same. (Naming
note: the field `name` field already existed on `UnusedImport` and
contains the qualified name of the imported symbol/module)
* Change `fix_by_reexporting` to branch on the size of `dunder_all:
Vec<&Expr>`
* For length 0 call the edit-producing function `make_redundant_alias`.
  * For length 1 call edit-producing function `add_to_dunder_all`.
  * Otherwise, produce no fix.
* Implement the edit-producing function `add_to_dunder_all` and add unit
tests.
* Implement several fixture tests: empty `__all__ = []`, nonempty
`__all__ = ["foo"]`, mis-typed `__all__ = None`, plus-eq `__all__ +=
["foo"]`
* `UnusedImportContext::Init` variant now has two fields: whether the
fix is in `__init__.py` and how many `__all__` were found.

# Other changes

* Remove a spurious pattern match and instead use field lookups b/c the
addition of a field would have required changing the unrelated pattern.
* Tweak input type of `make_redundant_alias`

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-05-14 17:02:33 -07:00
Dhruv Manilawala
96f6288622 Move UP034 to use TokenKind instead of Tok (#11424)
## Summary

This PR follows up from #11420 to move `UP034` to use `TokenKind`
instead of `Tok`.

The main reason to have a separate PR is so that the reviewing is easy.
This required a lot more updates because the rule used an index (`i`) to
keep track of the current position in the token vector. Now, as it's
just an iterator, we just use `next` to move the iterator forward and
extract the relevant information.

This is part of https://github.com/astral-sh/ruff/issues/11401

## Test Plan

`cargo test`
2024-05-14 17:28:04 +00:00
Dhruv Manilawala
bb1c107afd Move most of token-based rules to use TokenKind (#11420)
## Summary

This PR moves the following rules to use `TokenKind` instead of `Tok`:
* `PLE2510`, `PLE2512`, `PLE2513`, `PLE2514`, `PLE2515`
* `E701`, `E702`, `E703`
* `ISC001`, `ISC002`
* `COM812`, `COM818`, `COM819`
* `W391`

I've paused here because the next set of rules
(`pyupgrade::rules::extraneous_parentheses`) indexes into the token
slice but we only have an iterator implementation. So, I want to isolate
that change to make sure the logic is still the same when I move to
using the iterator approach.

This is part of #11401 

## Test Plan

`cargo test`
2024-05-14 17:16:42 +00:00
Dhruv Manilawala
c17193b5f8 Use TokenKind in blank lines checker (#11419)
## Summary

This PR updates the blank line rules checker to use `TokenKind` instead
of `Tok`.

This is part of #11401 

## Test Plan

`cargo test`
2024-05-14 17:07:35 +00:00
Dhruv Manilawala
a33763170e Use TokenKind in doc_lines_from_tokens (#11418)
## Summary

This PR updates the `doc_lines_from_tokens` function to use `TokenKind`
instead of `Tok`.

This is part of #11401 

## Test Plan

`cargo test`
2024-05-14 16:56:14 +00:00
Dhruv Manilawala
025768d303 Add Tokens newtype wrapper, TokenKind iterator (#11361)
## Summary

Alternative to #11237 

This PR adds a new `Tokens` struct which is a newtype wrapper around a
vector of lexer output. This allows us to add a `kinds` method which
returns an iterator over the corresponding `TokenKind`. This iterator is
implemented as a separate `TokenKindIter` struct to allow using the type
and provide additional methods like `peek` directly on the iterator.

This exposes the linter to access the stream of `TokenKind` instead of
`Tok`.

Edit: I've made the necessary downstream changes and plan to merge the
entire stack at once.
2024-05-14 16:45:04 +00:00
Dhruv Manilawala
50f14d017e Use tokenize for linter benchmark (#11417)
## Summary

This PR updates the linter benchmark to use the `tokenize` function
instead of the lexer.

The linter expects the token list to be up to and including the first
error, which is what the `ruff_python_parser::tokenize` function returns.

This was not a problem before because the benchmarks only use valid
Python code.
2024-05-14 10:28:40 -04:00
Alex Waygood
aceb182db6 Improve the update_schemastore script (#11353) 2024-05-13 17:06:54 +00:00
Charlie Marsh
6ed2482e27 Add Python 3.13 to list of allowed Python versions (#11411)
## Summary

I believe we're already "Python 3.13-ready"? The main Ruff-impacting
change I see in https://docs.python.org/3.13/whatsnew/3.13.html is [PEP
696](https://peps.python.org/pep-0696/) which Jelle added in
https://github.com/astral-sh/ruff/pull/11120.
2024-05-13 16:35:41 +00:00
Charlie Marsh
dc5c44ccc4 Remove some hardcoded modules from generate_known_standard_library.py (#11409)
See feedback in: https://github.com/astral-sh/ruff/pull/11374
2024-05-13 12:27:34 -04:00
Dhruv Manilawala
c3c87e86ef Implement IntoIterator for FStringElements (#11410)
A change which I lost somewhere when I force pushed in
https://github.com/astral-sh/ruff/pull/11400
2024-05-13 16:24:49 +00:00
Dhruv Manilawala
ca99e9e2f0 Move W605 to the AST checker (#11402)
## Summary

This PR moves the `W605` rule to the AST checker.

This is part of #11401

## Test Plan

`cargo test`
2024-05-13 16:13:06 +00:00
Dhruv Manilawala
4b41e4de7f Create a newtype wrapper around Vec<FStringElement> (#11400)
## Summary

This PR adds a newtype wrapper around `Vec<FStringElement>` that derefs
to a `&Vec<FStringElement>`.

Both f-string and format specifier are made up of `Vec<FStringElement>`.
By creating a newtype wrapper around it, we can share the methods for
both parent types.
2024-05-13 16:04:04 +00:00
Dhruv Manilawala
0dc130e841 Add Iterator impl for StringLike parts (#11399)
## Summary

This PR adds support to iterate over each part of a string-like
expression.

This is similar to the one in the formatter:


128414cd95/crates/ruff_python_formatter/src/string/any.rs (L121-L125)

That said, I don't think it's a 1-1 replacement for the formatter's version,
because the formatter's implementation carries additional information for
certain variants (as can be seen for `FString`).

The main motivation for this is to avoid duplication for rules which
work only on the parts of the string and don't require any information
from the parent node. Here, the parent node is the expression node,
which could be an implicitly concatenated string.

This PR also updates certain rule implementation to make use of this and
avoids logic duplication.
2024-05-13 15:52:03 +00:00
Dhruv Manilawala
10b85a0f07 Avoid lexer usage in PLE1300 and PLE1307 (#11406)
## Summary

This PR updates `PLE1300` and `PLE1307` to avoid using the lexer.

This is part of #11401 

## Test Plan

`cargo test`
2024-05-13 10:48:44 -04:00
Charlie Marsh
af60d539ab Move sub-crates to workspace dependencies (#11407)
## Summary

This matches the setup we use in `uv` and allows for consistency in the
`Cargo.toml` files.
2024-05-13 14:37:50 +00:00
Charlie Marsh
b371713591 Add a note on --preview to the README (#11395) 2024-05-13 14:27:29 +00:00
Dimitri Papadopoulos Orfanos
3b0584449d Fix a few typos found by codespell (#11404)
## Summary

Just fix typos.

## Test Plan

CI jobs.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-05-13 13:22:35 +00:00
Dhruv Manilawala
6ecb4776de Rename AnyStringKind -> AnyStringFlags (#11405)
## Summary

This PR renames `AnyStringKind` to `AnyStringFlags` and `AnyStringFlags`
to `AnyStringFlagsInner`.

The main motivation is to have consistent usage of "kind" and "flags".
Each string kind already has a "flags" type (`StringLiteralFlags`,
`BytesLiteralFlags`, and `FStringFlags`), but the "any" variant was named
`AnyStringKind`.
2024-05-13 13:18:07 +00:00
Charlie Marsh
be0ccabbaa Add cargo shear to CI (#11393) 2024-05-12 22:23:36 -04:00
Charlie Marsh
6cec82fff8 Get cargo shear passing (#11392)
## Summary

Remove some unused dependencies, add a few ignores.
2024-05-13 01:56:24 +00:00
Tom Kuson
5ab4cc86c2 Reword future-rewritable-type-annotation (FA100) message (#11381)
## Summary

Changes `future-rewritable-type-annotation` (`FA100`) message to be less
confusing. Uses phrasing from the rule documentation to be consistent.
For example,

```
from_typing_import.py:5:13: FA100 Add `from __future__ import annotations` to rewrite `typing.List` more succinctly
```

Closes #10573.

## Test Plan

`cargo nextest run`
2024-05-13 01:38:49 +00:00
renovate[bot]
bc7856e899 Update pre-commit dependencies (#11391) 2024-05-12 21:22:04 -04:00
Rahul Modpur
6a28f3448e Migrate sys.rs generation to stdlibs (#11374)
## Summary

Closes #11347
2024-05-12 21:21:51 -04:00
renovate[bot]
7c824faa88 Update Rust crate thiserror to v1.0.60 (#11390) 2024-05-13 00:36:08 +00:00
renovate[bot]
12da5968a0 Update Rust crate serde_json to v1.0.117 (#11388) 2024-05-13 00:35:46 +00:00
renovate[bot]
a747b3f2a1 Update Rust crate syn to v2.0.63 (#11389) 2024-05-13 00:35:23 +00:00
renovate[bot]
01a0e6cc7e Update Rust crate serde to v1.0.201 (#11387) 2024-05-13 00:34:34 +00:00
renovate[bot]
a8b06537c7 Update Rust crate anyhow to v1.0.83 (#11384) 2024-05-13 00:34:00 +00:00
renovate[bot]
7b8fe25d32 Update Rust crate schemars to v0.8.19 (#11386) 2024-05-13 00:33:29 +00:00
renovate[bot]
a50416a6d7 Update Rust crate proc-macro2 to v1.0.82 (#11385) 2024-05-13 00:33:05 +00:00
renovate[bot]
41e53d59ab Update NPM Development dependencies (#11383) 2024-05-13 00:30:58 +00:00
Dhruv Manilawala
0fc6cf9bee Avoid PLE0237 for property with setter (#11377)
## Summary

Should this consider the decorator only if the name is actually a
property, or is the logic in this PR correct?

fixes: #11358
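
A minimal sketch of the property-with-setter case that is no longer flagged
(hypothetical class):

```python
class Point:
    __slots__ = ("_x",)

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    def __init__(self, value):
        # "x" is not listed in __slots__, but the assignment goes through
        # the property setter, so it is no longer reported as PLE0237.
        self.x = value

print(Point(1).x)
```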

## Test Plan

Add test case.
2024-05-12 20:23:00 -04:00
Dhruv Manilawala
d835b3e218 Avoid TCH005 for if stmt with elif/else block (#11376)
## Summary

This PR fixes a bug where the auto-fix for `TCH005` would delete the
entire `if` statement.

The fix in this PR is to not consider it a violation if there are any
`elif`/`else` blocks. This also matches the behavior of the original
plugin.
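
A minimal sketch of the shape that is no longer reported, so the autofix can
no longer delete the whole statement and drop the `else` branch:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    pass
else:
    import json  # needed at runtime

print(json.dumps({"ok": True}))
```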

fixes: #11368 

## Test plan

Add test cases.
2024-05-12 20:22:25 -04:00
Jane Lewis
d7f093ef9e ruff server: Support noqa comment code action (#11276)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/10594.

Code actions to disable a diagnostic via `noqa` comment are now
available.


https://github.com/astral-sh/ruff/assets/19577865/6d3bcf11-a9d9-499b-8c7f-a10cd39cfbba

`DiagnosticFix` has been changed so that `noqa` code actions appear even
for diagnostics with no available quick fix. It can contain quick fix
edits, `noqa` comment edits, or both.

## Test Plan

The scenarios that need to be tested are as follows:
* A code action to disable a diagnostic should be available for every
diagnostic.
* Using this code action should append to the appropriate line with the
diagnostic, or modify an existing `noqa` comment.
* Adding a `noqa` comment manually should make a diagnostic disappear
* `Fix all auto-fixable problems` should not add `noqa` comments
* Removing a code from a `noqa` comment should make the diagnostic
re-appear
2024-05-12 14:39:46 -07:00
Charlie Marsh
4b330b11c6 [flake8-pie] Preserve parentheses in unnecessary-dict-kwargs (#11372)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11371.
2024-05-11 18:04:54 -04:00
Jane Lewis
890cc325d5 Split add_noqa process into distinctive edit generation and edit application stages (#11265)
## Summary

`--add-noqa` now runs in two stages: first, the linter finds all
diagnostics that need noqa comments and generate edits on a per-line
basis. Second, these edits are applied, in order, to the document.

A public-facing function, `generate_noqa_edits`, has also been
introduced, which returns noqa edits generated on a per-diagnostic
basis. This will be used by `ruff server` for noqa comment quick-fixes.

## Test Plan

Unit tests have been updated.
2024-05-10 23:16:52 +00:00
Douglas Thor
0726e82342 [pyflakes] Update docs to describe WAI behavior (F541) (#11362)
Addresses this comment:
https://github.com/astral-sh/ruff/issues/11357#issuecomment-2104714029


## Summary

The docs for F541 did not mention some surprising, but working-as-intended
(WAI), behavior regarding implicit string concatenation. Update the docs to
describe that behavior.

Here's how things rendered for me locally:


![image](https://github.com/astral-sh/ruff/assets/5386897/32067121-b190-4268-b987-ff37df11a618)
2024-05-10 19:10:34 +00:00
Dhruv Manilawala
f79c980e17 Add support for attribute docstring in the semantic model (#11315)
## Summary

This PR updates the semantic model to detect attribute docstrings.

Refer to [PEP 258](https://peps.python.org/pep-0258/#attribute-docstrings) 
for the definition of an attribute docstring.

This PR doesn't add full support for it but only considers string
literals as attribute docstrings for the following cases:
1. A string literal following an assignment statement in the **global
scope**.
2. A global class attribute

For an assignment statement, it's considered an attribute docstring only
if the target expression is a name expression (`x = 1`). So, chained
assignment, multiple assignment or unpacking, and starred expression,
which are all valid in the target position, aren't considered here.

In an `__init__` method, an assignment to an attribute of `self`, like
`self.x = 1`, is also a candidate for an attribute docstring. **This PR does
not support this position.**
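
Roughly, the two recognized shapes look like this (illustrative names):

```python
CACHE_SIZE = 128
"""Maximum number of entries kept in the cache (module-level attribute docstring)."""


class Config:
    timeout = 30
    """Seconds to wait before giving up (class attribute docstring)."""
```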

## Test Plan

I used the following source code along with a print statement to verify
that the attribute docstring detection is correct.

Refer to the PR description for the code snippet.

I'll add this in the follow-up PR
(https://github.com/astral-sh/ruff/pull/11302) which uses this method.
2024-05-10 20:27:56 +05:30
Charlie Marsh
35ba3c91ce Use u64 instead of i64 in Int type (#11356)
## Summary

I believe the value here is always unsigned, since we represent `-42` as
a unary operator on `42`.
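
This mirrors how CPython's own `ast` module represents the literal:

```python
import ast

expr = ast.parse("-42", mode="eval").body
print(ast.dump(expr))
# UnaryOp(op=USub(), operand=Constant(value=42))  -- exact output varies by version
```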
2024-05-10 13:35:15 +00:00
konsti
1f794077ec Allow clippy map-unwrap-or (#11354)
`map_or` is harder to read than the `.map().unwrap_or()` version.

See also https://github.com/astral-sh/uv/pull/3498
2024-05-09 21:22:09 +00:00
Alex Waygood
3e8878a1c8 Bump version to v0.4.4 (#11352) 2024-05-09 17:00:46 +00:00
Carl Meyer
b6b4ad9949 [red-knot] @override lint rule (#11282)
## Summary

Lots of TODOs and things to clean up here, but it demonstrates the
working lint rule.

## Test Plan

```
➜ cat main.py
from typing import override
from base import B

class C(B):
    @override
    def method(self): pass

➜ cat base.py
class B: pass

➜ cat typing.py
def override(func):
    return func
```

(We provide our own `typing.py` since we don't have typeshed vendored or
type stub support yet.)

```
➜ ./target/debug/red_knot main.py
...
1   0.012086s TRACE red_knot Main Loop: Tick
[crates/red_knot/src/main.rs:157:21] diagnostics = [
    "Method C.method is decorated with `typing.override` but does not override any base class method",
]
```

If we add `def method(self): pass` to class `B` in `base.py` and run
red_knot again, there is no lint error.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-09 09:25:08 -06:00
Auguste Lalande
dd42961dd9 [pylint] Detect pathlib.Path.open calls in unspecified-encoding (PLW1514) (#11288)

## Summary

Resolves #11263

Detect `pathlib.Path.open` calls which do not specify a file encoding.
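
For example, a minimal sketch of what is now detected:

```python
from pathlib import Path

p = Path("notes.txt")
p.write_text("hello", encoding="utf-8")

with p.open() as f:                  # PLW1514: no encoding specified
    contents = f.read()

with p.open(encoding="utf-8") as f:  # OK
    contents = f.read()
```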

## Test Plan

Test cases added to fixture.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-05-09 12:36:20 +00:00
Alex Waygood
c80c1712f0 [red-knot] Vendor typeshed's stdlib (#11340)
This PR vendors typeshed!

-  The first commit vendors the stdlib directory from typeshed into a new crates/red_knot/vendored_typeshed directory.
-  The second commit adjusts various linting config files to make sure that the vendored code is excluded from typo checks, formatting checks, etc.
-  The LICENSE and README.md files are also vendored, but all other directories and files (stubs, scripts, tests, test_cases, etc.) are excluded. We should have no need for them (except possibly stubs/, discussed in more depth below).
-  Similar to the way pyright has a commit.txt file in its vendored copy of typeshed, to indicate which typeshed commit the vendored code corresponds to, I've also added a crates/red_knot/vendored_typeshed/source_commit.txt file in the third commit of this PR.

One open question is: should we vendor the stdlib and stubs directories, or just the stdlib directory? The stubs/ directory contains stubs for 162 third-party packages outside the stdlib. Mypy and typeshed_client only vendor the stdlib directory; pyright and pyre vendor both the stdlib and stubs directories; pytype vendors the entire typeshed repo (scripts/, tests/ and all).

In this PR, I've chosen to copy mypy and typeshed_client. Unlike vendoring the stdlib, which is unavoidable if we want to do typechecking of the stdlib, it's not strictly necessary to vendor the stubs directory: each subdirectory in stubs is published to PyPI as a standalone stubs distribution that can be (uv)-pip-installed into a virtual environment. It might be useful for our users if we vendored those stubs anyway, but there are costs as well as benefits to doing so (apart from just the sheer amount of vendored code in the ruff repository), so I'd rather consider it separately.
2024-05-09 12:44:53 +01:00
Charlie Marsh
e2fe177c6b Revert "Simplify arithmetic operation in logical lines checker (#11346)" (#11348)
## Summary

I merged this, but I think it might not be the same behavior? See my
comment at:
https://github.com/astral-sh/ruff/pull/11346#discussion_r1594848224
2024-05-08 21:51:37 -04:00
Auguste Lalande
e9d1cddc97 Simplify arithmetic operation in logical lines checker (#11346)
## Summary

Simplify arithmetic operation in logical lines checker
2024-05-08 20:59:56 -04:00
Alex Waygood
dfe4291c0b Improve ruff_python_semantic::all::extract_all_names() (#11335) 2024-05-08 17:09:31 +01:00
Micha Reiser
4541337f3d [red-knot] Remove <Db: SemanticDb> contraints in favor of dynamic dispatch (#11339) 2024-05-08 18:07:14 +02:00
Charlie Marsh
8e9ddee392 Ignore end-of-line comments when determining blank line rules (#11342)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11331.
2024-05-08 15:19:27 +00:00
Charlie Marsh
702d2fa1eb Make B024 and B027 documentation more nuanced (#11341)
Closes https://github.com/astral-sh/ruff/issues/11334.
2024-05-08 11:16:58 -04:00
Carl Meyer
caf01472d5 [red-knot] fix re-hashing in Files and SymbolTable (#11327) 2024-05-08 06:31:19 -06:00
Micha Reiser
22639c5a2a Move all module from the AST to the semantic crate (#11330) 2024-05-08 08:56:50 +00:00
Ahmed Ilyas
8591adba11 Consider with statements for too many branches lint (#11321)
Resolves https://github.com/astral-sh/ruff/issues/11313

## Summary

PLR0912 (too-many-branches) did not count branches inside `with` blocks.
With this fix, branches inside `with` statements are also counted.
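
For example, the `if` below now contributes to the branch count (hypothetical
snippet):

```python
def read_flagged(path, flag):
    with open(path) as f:
        if flag:  # counted as a branch even though it sits inside a `with`
            return f.read()
    return ""
```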

## Test Plan

Added a new test case.
2024-05-08 03:10:12 +00:00
Shixian Sheng
29f2bc0f97 Update README.md (#11326)
## Summary

After checking the links, I found that one link leads to a 404. Correct me
if I'm wrong, but I think the link I changed it to is the intended one.
2024-05-07 20:00:08 -04:00
Tushar Sadhwani
56b4c47d74 [flake8-pyi] Implement PYI062 (duplicate-literal-member) (#11269) 2024-05-07 19:28:06 +01:00
Alex Waygood
1a392d34e1 Tell renovate not to try to update GitHub runners (#11324) 2024-05-07 16:02:35 +00:00
Alex Waygood
6774f27f4b Refactor the ExprDict node (#11267)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-07 11:46:10 +00:00
Abdur-Rahmaan Janhangeer
de270154a1 chore(comment): Define HIR (#11320) 2024-05-07 12:39:55 +02:00
Tushar Sadhwani
bc3f4fa3bc [flake8-pyi] Implement PYI059 (generic-not-last-base-class) (#11233) 2024-05-07 10:07:56 +00:00
Dhruv Manilawala
28cc71fb6b Remove cyclic dev dependency with the parser crate (#11261)
## Summary

This PR removes the cyclic dev dependency some of the crates had with
the parser crate.

The cyclic dependencies are:
* `ruff_python_ast` has a **dev dependency** on `ruff_python_parser` and
`ruff_python_parser` directly depends on `ruff_python_ast`
* `ruff_python_trivia` has a **dev dependency** on `ruff_python_parser`
and `ruff_python_parser` has an indirect dependency on
`ruff_python_trivia` (`ruff_python_parser` - `ruff_python_ast` -
`ruff_python_trivia`)

Specifically, this PR does the following:
* Introduce two new crates
* `ruff_python_ast_integration_tests` and move the tests from the
`ruff_python_ast` crate which uses the parser in this crate
* `ruff_python_trivia_integration_tests` and move the tests from the
`ruff_python_trivia` crate which uses the parser in this crate

### Motivation

The main motivation for this PR is to help development. Before this PR,
`rust-analyzer` wouldn't provide any intellisense in the
`ruff_python_parser` crate regarding the symbols in `ruff_python_ast`
crate.

```
[ERROR][2024-05-03 13:47:06] .../vim/lsp/rpc.lua:770	"rpc"	"/Users/dhruv/.cargo/bin/rust-analyzer"	"stderr"	"[ERROR project_model::workspace] cyclic deps: ruff_python_parser(Idx::<CrateData>(50)) -> ruff_python_ast(Idx::<CrateData>(37)), alternative path: ruff_python_ast(Idx::<CrateData>(37)) -> ruff_python_parser(Idx::<CrateData>(50))\n"
```

## Test Plan

Check the logs of `rust-analyzer` to not see any signs of cyclic
dependency.
2024-05-07 09:24:57 +00:00
Charlie Marsh
12b5c3a54c [flake8-bugbear] Ignore enum classes in cached-instance-method (B019) (#11312)
## Summary

While I was here, I also updated the rule to use
`function_type::classify` rather than hard-coding `staticmethod` and
friends.

Per Carl:

> Enum instances are already referred to by the class, forming a cycle
that won't get collected until the class itself does. At which point the
`lru_cache` itself would be collected, too.

Closes https://github.com/astral-sh/ruff/issues/9912.
2024-05-06 14:19:22 -04:00
Charlie Marsh
a73b8c82a8 Add globbing to isort sections docs (#11311)
Closes https://github.com/astral-sh/ruff/issues/11310.
2024-05-06 18:12:29 +00:00
Abdur-Rahmaan Janhangeer
2f1983e4ad fix typo (#11309) 2024-05-06 12:04:53 -04:00
Micha Reiser
868bbd4de6 Fix 'MarkVerbatimCommentsAsFormattedVisitor' is unused warning in release builds (#11304) 2024-05-06 07:43:16 +00:00
Charlie Marsh
1bb61bab67 Respect logged and re-raised expressions in nested statements (#11301)
## Summary

Historically, we only ignored `flake8-blind-except` if you re-raised or
logged the exception as a _direct_ child statement; but it could be
nested somewhere. This was just a known limitation at the time of adding
the previous logic.
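
A sketch of the nested case that is now recognized, assuming the
flake8-blind-except rule (BLE001):

```python
import logging

logger = logging.getLogger(__name__)


def process(items):
    try:
        return [item.strip() for item in items]
    except Exception:
        # The logging call and the re-raise sit inside an `if`, not directly
        # under the handler, but they are now still taken into account.
        if items:
            logger.exception("processing failed")
        raise
```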

Closes https://github.com/astral-sh/ruff/issues/11289.
2024-05-05 21:52:09 -04:00
renovate[bot]
b7fe2b57de Update pre-commit dependencies (#11296) 2024-05-06 01:20:31 +00:00
renovate[bot]
2e353d97ae Update react monorepo to v18.3.1 (#11299) 2024-05-05 21:10:28 -04:00
renovate[bot]
5bdb160781 Update cloudflare/wrangler-action action to v3.5.0 (#11298) 2024-05-05 21:09:22 -04:00
renovate[bot]
ed3b256bc1 Update Rust crate serde to v1.0.200 (#11294) 2024-05-05 21:08:28 -04:00
renovate[bot]
f9424a487d Update NPM Development dependencies (#11297) 2024-05-05 21:08:19 -04:00
renovate[bot]
7e38355ca6 Update Rust crate libc to v0.2.154 (#11293) 2024-05-05 21:08:02 -04:00
renovate[bot]
fa53e67b08 Update dependency react-resizable-panels to v2.0.19 (#11295) 2024-05-05 21:07:18 -04:00
Charlie Marsh
9db11dcce2 Touch-up error messages in server file discovery (#11285)
## Summary

Just making these messages a little more consistent with how we format
them in Ruff, uv, etc.
2024-05-05 13:20:51 -04:00
Jane Lewis
a8a97291d1 Fix ruff server hanging after Neovim closes (#11291)
## Summary

A follow-up to https://github.com/astral-sh/ruff/pull/11222. `ruff
server` stalls during shutdown with Neovim because after it receives an
exit notification and closes the I/O thread, it attempts to log a
success message to `stderr`. Removing this log statement fixes this
issue.

## Test Plan

Track the instances of `ruff` in the OS task manager as you open and
close Neovim. A new instance should appear when Neovim starts and it
should disappear once Neovim is closed.
2024-05-05 17:15:48 +00:00
Charlie Marsh
c3e0306c9d Allow set(True) for boolean traps (#11287)
Closes https://github.com/astral-sh/ruff/issues/8923.
2024-05-04 21:33:08 +00:00
Jane Lewis
1d20422ba9 Create snapshot tests for --add-noqa (#11286) 2024-05-04 18:02:46 +00:00
Jane Lewis
c4bf783b85 ruff server: Editor settings are used by default if no file-based configuration exists (#11266)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/11258.

This PR fixes the settings resolver to match the expected behavior when
file-based configuration is not available.

## Test Plan

In a workspace with no file-based configuration, set a setting in your
editor and confirm that this setting is used instead of the default.
2024-05-04 10:52:01 -07:00
Charlie Marsh
6587dc1269 Use shared is_stub in unused argument rules (#11284)
## Summary

We already have a shared helper for this.
2024-05-04 13:51:44 -04:00
Charlie Marsh
9d45987c19 Expand tildes when resolving Ruff server configuration file (#11283)
## Summary

Users can now include tildes and environment variables in the provided
path, just like with `--config`.

Closes #11277.

## Test Plan

Set the configuration path to `"ruff.configuration": "~/x.toml"`;
verified that the server attempted to read from `/Users/crmarsh/x.toml`.

![Screenshot 2024-05-04 at 1 31
43 PM](https://github.com/astral-sh/ruff/assets/1309177/ea9829cd-6d8a-4818-a47c-dcff9219e996)
2024-05-04 13:51:26 -04:00
Carlos Cabral
5f0c189fa1 Change hardcoded-tmp-directory-extend example to follow the schema (#11275)
## Summary
Change `hardcoded-tmp-directory-extend` example to follow the schema:

1e91a09918/ruff.schema.json (L896-L901)
2024-05-03 20:44:38 -05:00
Charlie Marsh
1e91a09918 Bump version to v0.4.3 (#11274) 2024-05-03 18:48:31 -04:00
Charlie Marsh
d0f51c6434 Remove remaining ruff_shrinking references (#11272)
## Summary

This caused `rooster release` to fail.

Initially removed in https://github.com/astral-sh/ruff/pull/11242.
2024-05-03 20:22:08 +00:00
Charlie Marsh
8dd38110d9 Use function range for reimplemented-operator diagnostics (#11271) 2024-05-03 20:11:02 +00:00
Charlie Marsh
894cd13ec1 [refurb] Ignore methods in reimplemented-operator (FURB118) (#11270)
## Summary

This rule does more harm than good when applied to methods.

Closes https://github.com/astral-sh/ruff/issues/10898.

Closes https://github.com/astral-sh/ruff/issues/11045.
2024-05-03 20:03:12 +00:00
Tushar Sadhwani
f3284fde9a Remove unnecessary check for RUF020 enabled (#11268)
## Summary

In #9218 `Rule::NeverUnion` was partially removed from a
`checker.any_enabled` call. This makes the change consistent.

## Test Plan

`cargo test`
2024-05-03 18:19:13 +00:00
Carl Meyer
82dd5e6936 [red-knot] resolve class members (#11256) 2024-05-03 11:34:13 -06:00
Micha Reiser
6a1e555537 Upgrade to Rust 1.78 (#11260) 2024-05-03 12:46:21 +00:00
Charlie Marsh
349a4cf8ce Remove trailing reference section (#11257) 2024-05-03 01:23:40 +00:00
Jane Lewis
dfbeca5bdd ruff server no longer hangs after shutdown (#11222)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/11207.

The server would hang after handling a shutdown request on
`IoThreads::join()` because a global sender (`MESSENGER`, used to send
`window/showMessage` notifications) would remain allocated even after
the event loop finished, which kept the writer I/O thread channel open.

To fix this, I've made a few structural changes to `ruff server`. I've
wrapped the send/receive channels and thread join handle behind a new
struct, `Connection`, which facilitates message sending and receiving,
and also runs `IoThreads::join()` after the event loop finishes. To
control the number of sender channels, the `Connection` wraps the sender
channel in an `Arc` and only allows the creation of a wrapper type,
`ClientSender`, which hold a weak reference to this `Arc` instead of
direct channel access. The wrapper type implements the channel methods
directly to prevent access to the inner channel (which would allow the
channel to be cloned). ClientSender's function is analogous to
[`WeakSender` in
`tokio`](https://docs.rs/tokio/latest/tokio/sync/mpsc/struct.WeakSender.html).
Additionally, the receiver channel cannot be accessed directly - the
`Connection` only exposes an iterator over it.

These changes will guarantee that all channels are closed before the I/O
threads are joined.

## Test Plan

Repeatedly open and close an editor utilizing `ruff server` while
observing the task monitor. The net total amount of open `ruff`
instances should be zero once all editor windows have closed.

The following logs should also appear after the server is shut down:

<img width="835" alt="Screenshot 2024-04-30 at 3 56 22 PM"
src="https://github.com/astral-sh/ruff/assets/19577865/404b74f5-ef08-4bb4-9fa2-72e72b946695">

This can be tested on VS Code by changing the settings and then checking
`Output`.
2024-05-03 01:09:42 +00:00
Charlie Marsh
9e69cd6e93 Rephrase rationale for pytest-incorrect-pytest-import (#11255)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11247.
2024-05-03 00:51:42 +00:00
plredmond
b90a937a59 Add decorator types to function type (#11253)
* Add `decorators: Vec<Type>` to `FunctionType` struct
* Thread decorators through two `add_function` definitions
* Populate decorators at the callsite in `infer_symbol_type`
* Small test
2024-05-02 16:58:56 -07:00
plredmond
59afff0e6a F401 - Distinguish between imports we wish to remove and those we wish to make explicit-exports (#11168)
Resolves #10390 and starts to address #10391

# Changes to behavior

* In `__init__.py` we now offer some fixes for unused imports.
* If the import binding is first-party this PR suggests a fix to turn it
into a redundant alias.
* If the import binding is not first-party, this PR suggests a fix to
remove it from the `__init__.py`.
* The fix-titles are specific to these new suggested fixes.
* The `checker.settings.ignore_init_module_imports` setting is
deprecated/ignored. There is probably a documentation change needed to make
that complete, which I haven't done.

---

<details><summary>Old description of implementation changes</summary>

# Changes to the implementation

* In the body of the loop over import statements that contain unused
bindings, the bindings are partitioned into `to_reexport` and
`to_remove` (according to how we want to resolve the fact they're
unused) with the following predicate:
  ```rust
in_init && is_first_party(checker, &import.qualified_name().to_string())
// true means make it a reexport
  ```
* Instead of generating a single fix per import statement, we now
generate up to two fixes per import statement:
  ```rust
  (fix_by_removing_imports(checker, node_id, &to_remove, in_init).ok(),
   fix_by_reexporting(checker, node_id, &to_reexport, dunder_all).ok())
  ```
* The `to_remove` fixes are unsafe when `in_init`.
* The `to_explicit` fixes are safe. Currently, until a future PR, we
make them redundant aliases (e.g. `import a` would become `import a as
a`).

## Other changes

* `checker.settings.ignore_init_module_imports` is deprecated/ignored.
Instead, all fixes are gated on `checker.settings.preview.is_enabled()`.
* Got rid of the pattern match on the import-binding bound by the inner
loop because it seemed less readable than referencing fields on the
binding.
* [x] `// FIXME: rename "imports" to "bindings"` if reviewer agrees (see
code)
* [x] `// FIXME: rename "node_id" to "import_statement"` if reviewer
agrees (see code)

<details>
<summary><h2>Scope cut until a future PR</h2></summary>

* (Not implemented) The `to_explicit` fixes will be added to `__all__`
unless it doesn't exist. When `__all__` doesn't exist they're resolved
by converting to redundant aliases (e.g. `import a` would become `import
a as a`).
 
---

</details>

# Test plan

* [x] `crates/ruff_linter/resources/test/fixtures/pyflakes/F401_24`
contains an `__init__.py` with*out* `__all__` that exercises the
features in this PR, but it doesn't pass.
* [x]
`crates/ruff_linter/resources/test/fixtures/pyflakes/F401_25_dunder_all`
contains an `__init__.py` *with* `__all__` that exercises the features
in this PR, but it doesn't pass.
* [x] Write unit tests for the new edit functions in
`fix::edits::make_redundant_alias`.

</details>

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2024-05-02 16:10:32 -07:00
Micha Reiser
7cec3b2623 Remove num-cpus dependency (#11240) 2024-05-02 22:30:48 +02:00
Charlie Marsh
9a1f6f6762 Avoid allocations for isort module names (#11251)
## Summary

Random refactor I noticed when investigating the F401 changes. We don't
need to allocate in most cases here.
2024-05-02 19:17:56 +00:00
Charlie Marsh
3a7c01b365 Ignore list-copy recommendations for async for loops (#11250)
## Summary

Removes these from `PERF402`, but adds them to `PERF401`, with a custom
message to use an `async` comprehension.
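
Roughly, the pattern in question and the suggested rewrite (hypothetical
snippet):

```python
async def collect(source):
    # Previously reported under PERF402; now reported under PERF401 with a
    # message suggesting an async comprehension.
    result = []
    async for item in source:
        result.append(item)
    return result


async def collect_rewritten(source):
    return [item async for item in source]
```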

Closes https://github.com/astral-sh/ruff/issues/10787.
2024-05-02 11:48:52 -07:00
Micha Reiser
64700d296f Remove ImportMap (#11234)
## Summary

This PR removes the `ImportMap` implementation and all its routing
through ruff.

The import map was added in https://github.com/astral-sh/ruff/pull/3243
but we then never ended up using it to do cross file analysis.

We are now working on adding multifile analysis to ruff, and will revisit
import resolution as part of it.


```
hyperfine --warmup 10 --runs 20 --setup "./target/release/ruff clean" \
              "./target/release/ruff check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I" \
              "./target/release/ruff-import check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I" 
Benchmark 1: ./target/release/ruff check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I
  Time (mean ± σ):      37.6 ms ±   0.9 ms    [User: 52.2 ms, System: 63.7 ms]
  Range (min … max):    35.8 ms …  39.8 ms    20 runs
 
Benchmark 2: ./target/release/ruff-import check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I
  Time (mean ± σ):      36.0 ms ±   0.7 ms    [User: 50.3 ms, System: 58.4 ms]
  Range (min … max):    34.5 ms …  37.6 ms    20 runs
 
Summary
  ./target/release/ruff-import check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I ran
    1.04 ± 0.03 times faster than ./target/release/ruff check crates/ruff_linter/resources/test/cpython -e -s --extend-select=I
```

I suspect that the performance improvement would be even more
significant for users who otherwise don't have any diagnostics.


```
hyperfine --warmup 10 --runs 20 --setup "cd ../ecosystem/airflow && ../../ruff/target/release/ruff clean" \
              "./target/release/ruff check ../ecosystem/airflow -e -s --extend-select=I" \
              "./target/release/ruff-import check ../ecosystem/airflow -e -s --extend-select=I" 
Benchmark 1: ./target/release/ruff check ../ecosystem/airflow -e -s --extend-select=I
  Time (mean ± σ):      53.7 ms ±   1.8 ms    [User: 68.4 ms, System: 63.0 ms]
  Range (min … max):    51.1 ms …  58.7 ms    20 runs
 
Benchmark 2: ./target/release/ruff-import check ../ecosystem/airflow -e -s --extend-select=I
  Time (mean ± σ):      50.8 ms ±   1.4 ms    [User: 50.7 ms, System: 60.9 ms]
  Range (min … max):    48.5 ms …  55.3 ms    20 runs
 
Summary
  ./target/release/ruff-import check ../ecosystem/airflow -e -s --extend-select=I ran
    1.06 ± 0.05 times faster than ./target/release/ruff check ../ecosystem/airflow -e -s --extend-select=I

```

## Test Plan

`cargo test`
2024-05-02 11:26:02 -07:00
Charlie Marsh
e62fa4ea32 Avoid debug assertion around NFKC renames (#11249)
## Summary

This assertion isn't quite correct, since with NFKC normalization, two
identifiers can have different lengths but map to the same binding.

Closes https://github.com/astral-sh/ruff/issues/11238.

Closes https://github.com/astral-sh/ruff/issues/11239.
2024-05-02 10:59:39 -07:00
Micha Reiser
1673bc466b Remove ruff-shrinking crate (#11242) 2024-05-02 10:17:55 +02:00
Micha Reiser
a70808b125 Make libc a platform specific dependency (#11241) 2024-05-02 07:45:08 +00:00
Jane Lewis
4aac1d1db9 ruff server respects per-file-ignores configuration (#11224)
## Summary

Fixes #11185
Fixes #11214 

Document path and package information is now forwarded to the Ruff
linter, which allows `per-file-ignores` to correctly match against the
file name. This also fixes an issue where the import sorting rule didn't
distinguish between third-party and first-party packages since we didn't
pass in the package root.

## Test Plan

`per-file-ignores` should ignore files as expected. One quick way to
check is by adding this to your `pyproject.toml`:
```toml
[tool.ruff.lint.per-file-ignores]
"__init__.py" = ["ALL"]
```

Then, confirm that no diagnostics appear when you add code to an
`__init__.py` file (besides syntax errors).

The import sorting fix can be verified by failing to reproduce the
original issue - an `I001` diagnostic should not appear in
`other_module.py`.
2024-05-01 19:24:35 -07:00
Dhruv Manilawala
653c8d83e9 Rename variable to indent_width to match docs (#11230)
Reference:
https://github.com/astral-sh/ruff/issues/8705#issuecomment-2084726911
2024-05-01 12:04:03 +00:00
Micha Reiser
376fb71a7f Avoid parsing the root configuration twice (#10625) 2024-05-01 09:28:30 +00:00
Jane Lewis
068e22d382 ruff server reads from a configuration TOML file in the user configuration directory if no local configuration exists (#11225)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/11158.

A settings file in the ruff user configuration directory will be used as
a configuration fallback, if it exists.

## Test Plan

Create a `pyproject.toml` or `ruff.toml` configuration file in the ruff
user configuration directory.

* On Linux, that will be `$XDG_CONFIG_HOME/ruff/` or `$HOME/.config`
* On macOS, that will be `$HOME/Library/Application Support`
* On Windows, that will be `{FOLDERID_LocalAppData}`

Then, open a file inside of a workspace with no configuration. The
settings in the user configuration file should be used.
2024-05-01 02:08:50 -07:00
Micha Reiser
1f217d54d0 [red-knot] Remove Clone from Files (#11213) 2024-05-01 09:11:39 +02:00
Charlie Marsh
414990c022 Respect async expressions in comprehension bodies (#11219)
## Summary

We weren't recursing into the comprehension body.
2024-04-30 18:38:31 +00:00
Jane Lewis
4779dd1173 Write ruff server setup guide for Helix (#11183)
## Summary

Closes #11027.
2024-04-30 10:15:29 -07:00
Charlie Marsh
c5adbf17da Ignore non-abstract class attributes when enforcing B024 (#11210)
## Summary

I think the check included here does make sense, but I don't see why we
would allow it if a value is provided for the attribute -- since, in
that case, isn't it _not_ abstract?

Closes: https://github.com/astral-sh/ruff/issues/11208.
2024-04-30 09:01:08 -07:00
Micha Reiser
c6dcf3502b [red-knot] Use FileId in module resolver to map from file to module (#11212) 2024-04-30 14:09:47 +00:00
Micha Reiser
1e585b8667 [red knot] Introduce LintDb (#11204) 2024-04-30 16:01:46 +02:00
Alex Waygood
21d824abfd [pylint] Also emit PLR0206 for properties with variadic parameters (#11200) 2024-04-30 11:59:37 +01:00
Micha Reiser
7e28c80354 [red-knot] Refactor program.check scheduling (#11202) 2024-04-30 07:23:41 +00:00
Micha Reiser
bc03d376e8 [red-knot] Add "cheap" program.snapshot (#11172) 2024-04-30 07:13:26 +00:00
Alex Waygood
eb6f562419 red-knot: introduce a StatisticsRecorder trait for the KeyValueCache (#11179)
## Summary

This PR changes the `DebugStatistics` and `ReleaseStatistics` structs so
that they implement a common `StatisticsRecorder` trait, and makes the
`KeyValueCache` struct generic over a type parameter bound to that
trait. The advantage of this approach is that it's much harder for the
`DebugStatistics` and `ReleaseStatistics` structs to accidentally grow
out of sync in the methods that they implement, which was the cause of
the release-build failure recently fixed in #11177.

## Test Plan

`cargo test -p red_knot` and `cargo build --release` both continue to
pass for me locally
2024-04-30 07:14:06 +01:00
Micha Reiser
5561d445d7 linter: Enable test-rules for test build (#11201) 2024-04-30 08:06:47 +02:00
plredmond
c391c8b6cb Red Knot - Add symbol flags (#11134)
* Adds `Symbol.flag` bitfield. Populates it from (the three renamed)
`add_or_update_symbol*` methods.
* Currently there are these flags supported:
  * `IS_DEFINED` is set in a scope where a variable is defined.
* `IS_USED` is set in a scope where a variable is referenced. (To have
both this and `IS_DEFINED` would require two separate appearances of a
variable in the same scope-- one def and one use.)
* `MARKED_GLOBAL` and `MARKED_NONLOCAL` are **not yet implemented**.
(*TODO: While traversing, if you find these declarations, add these
flags to the variable.*)
* Adds `Symbol.kind` field (commented) and the data structure which will
populate it: `Kind` which is an enum of freevar, cellvar,
implicit_global, and implicit_local. **Not yet populated**. (*TODO: a
second pass over the scope (or the ast?) will observe the
`MARKED_GLOBAL` and `MARKED_NONLOCAL` flags to populate this field. When
that's added, we'll uncomment the field.*)
* Adds a few tests that the `IS_DEFINED` and `IS_USED` fields are
correctly set and/or merged:
* Unit test that subsequent calls to `add_or_update_symbol` will merge
the flag arguments.
* Unit test that in the statement `x = foo`, the variable `foo` is
considered used but not defined.
* Unit test that in the statement `from bar import foo`, the variable
`foo` is considered defined but not used.

---------

Co-authored-by: Carl Meyer <carl@astral.sh>
2024-04-29 17:07:23 -07:00
Carl Meyer
ce030a467f [red-knot] resolve base class types (#11178)
## Summary

Resolve base class types, as long as they are simple names.

## Test Plan

cargo test
2024-04-29 16:22:30 -06:00
Dhruv Manilawala
04a922866a Add basic docs for the parser crate (#11199)
## Summary

This PR adds a basic README for the `ruff_python_parser` crate and
updates the CONTRIBUTING docs with the fuzzer and benchmark section.

Additionally, it also updates some inline documentation within the
parser crate and splits the `parse_program` function into
`parse_single_expression` and `parse_module` which will be called by
matching against the `Mode`.

This PR doesn't go into too much internal detail around the parser logic
due to the following reasons:
1. Where should the docs go? Should it be as a module docs in `lib.rs`
or in README?
2. The parser is still evolving and could include a lot of refactors
with the future work (feedback loop and improved error recovery and
resilience)

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2024-04-29 17:08:07 +00:00
Alex Waygood
0ed7af35ec Add a daily workflow to fuzz the parser with randomly selected seeds (#11203) 2024-04-29 17:54:17 +01:00
Alex Waygood
87929ad5f1 Add convenience methods for iterating over all parameter nodes in a function (#11174) 2024-04-29 10:36:15 +00:00
renovate[bot]
8a887daeb4 Update pre-commit dependencies (#11195)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-04-29 08:40:21 +00:00
renovate[bot]
7317d734be Update dependency monaco-editor to ^0.48.0 (#11197)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:34:49 +02:00
renovate[bot]
c1a2a60182 Update NPM Development dependencies (#11196)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:33:12 +02:00
renovate[bot]
8e056b3a93 Update Rust crate serde to v1.0.199 (#11192)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:17:15 +02:00
renovate[bot]
616dd1873f Update Rust crate matchit to v0.8.2 (#11189)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:16:32 +02:00
renovate[bot]
acfb1a83c9 Update Rust crate serde_with to v3.8.1 (#11193)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:16:14 +02:00
renovate[bot]
7c0e32f255 Update Rust crate schemars to v0.8.17 (#11191)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:15:38 +02:00
renovate[bot]
4b84c55e3a Update Rust crate parking_lot to v0.12.2 (#11190)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:14:57 +02:00
renovate[bot]
4c8d33ec45 Update Rust crate hashbrown to v0.14.5 (#11188)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-29 08:14:33 +02:00
Alex Waygood
113e259e6d Various small improvements to the fuzz-parser script (#11186) 2024-04-28 18:17:27 +00:00
Micha Reiser
3474e37836 [red-knot] Unresolved imports lint rule (#11164) 2024-04-28 12:12:49 +02:00
Charlie Marsh
dfe90a3b2b Add a --release build to CI (#11182)
## Summary

We merged a failure here (#11177), and it only takes ~five minutes
anyway (which is shorter than some of our other jobs).
2024-04-27 20:36:33 -04:00
Micha Reiser
00d7c01cfc [red-knot] Fix absolute imports in module.resolve_name (#11180) 2024-04-27 20:07:07 +02:00
Micha Reiser
983a06cec3 [red-knot] Resolve and check dependencies (#11161) 2024-04-27 15:49:03 +00:00
Alex Waygood
47692027bf Fix cargo build --release (#11177)
## Summary

`cargo build --release` currently fails to compile on `main`:

<details>

```
error[E0599]: no method named `hit` found for struct `ReleaseStatistics` in the current scope
   --> crates/red_knot/src/cache.rs:22:29
    |
22  |             self.statistics.hit();
    |                             ^^^ method not found in `ReleaseStatistics`
...
145 | pub struct ReleaseStatistics;
    | ---------------------------- method `hit` not found for this struct

error[E0599]: no method named `miss` found for struct `ReleaseStatistics` in the current scope
   --> crates/red_knot/src/cache.rs:25:29
    |
25  |             self.statistics.miss();
    |                             ^^^^ method not found in `ReleaseStatistics`
...
145 | pub struct ReleaseStatistics;
    | ---------------------------- method `miss` not found for this struct

error[E0599]: no method named `hit` found for struct `ReleaseStatistics` in the current scope
   --> crates/red_knot/src/cache.rs:36:33
    |
36  |                 self.statistics.hit();
    |                                 ^^^ method not found in `ReleaseStatistics`
...
145 | pub struct ReleaseStatistics;
    | ---------------------------- method `hit` not found for this struct

error[E0599]: no method named `miss` found for struct `ReleaseStatistics` in the current scope
   --> crates/red_knot/src/cache.rs:41:33
    |
41  |                 self.statistics.miss();
    |                                 ^^^^ method not found in `ReleaseStatistics`
...
145 | pub struct ReleaseStatistics;
    | ---------------------------- method `miss` not found for this struct
```

</details>

This is because in a release build, `CacheStatistics` is a type alias
for `ReleaseStatistics`, and `ReleaseStatistics` doesn't have `hit()` or
`miss()` methods. (In a debug build, `CacheStatistics` is a type alias
for `DebugStatistics`, which _does_ have those methods.)

Possibly we could make this less likely to happen in the future by
making both structs implement a common trait instead of using type
aliases that vary depending on whether it's a debug build or not? For
now, though, this PR just brings the two structs in sync w.r.t. the
methods they expose.

## Test Plan

`cargo build --release` now once again compiles for me locally
2024-04-27 11:45:32 -04:00
Charlie Marsh
ec3243a6e5 Prioritize redefined-while-unused over unused-import (#11173)
## Summary

This PR adds an override to the fixer to ensure that we apply any
`redefined-while-unused` fixes prior to `unused-import`.
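
Roughly the shape of code where both rules have overlapping fixes to offer (a
made-up sketch, not taken from the linked issue):

```python
import os   # `os` is never used anywhere in this module (unused-import territory)
import os   # redefines the still-unused `os` binding above (redefined-while-unused)
```

Ordering the fixes this way ensures the `redefined-while-unused` fix is applied
before any `unused-import` fix touching the same binding.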

Closes https://github.com/astral-sh/ruff/issues/10905.
2024-04-27 11:44:53 -04:00
Carl Meyer
2d6978f236 [red-knot] fix class vs instance (#11175)
## Summary

Clarify the type of an exact class object vs the type of instances of
that class.

## Test Plan

cargo test
2024-04-27 09:09:02 -06:00
Auguste Lalande
2490d2d4af [ruff] Detect duplicate codes as part of unused-noqa (RUF100) (#10850)
## Summary

Implement duplicate code detection as part of `RUF100`, mirroring the
behavior of `flake8-noqa` (`NQA005`) mentioned in #850. The idea to
merge the rule into `RUF100` was suggested by @MichaReiser
https://github.com/astral-sh/ruff/pull/10325#issuecomment-2025535444.
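
A minimal illustration of the duplicate-code case (a hypothetical snippet, not
the fixture itself): the second `F401` below is redundant, and `RUF100` now
reports it.

```python
import os  # noqa: F401, F401
```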

## Test Plan

Test cases were added to the fixture.
2024-04-27 12:33:44 +00:00
Jelle Zijlstra
59b73fabc1 [pyflakes] Improve invalid-print-syntax documentation (#11171)
This syntax wasn't "deprecated" in Python 3; it was removed.

I started looking at this rule because I was curious how Ruff could even
detect this without a Python 2 parser. Then I realized that
"print >> f, x" is actually valid Python 3 syntax: it creates a tuple
containing a right-shifted version of the print function.
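
This is easy to confirm with the standard library `ast` module:

```python
import ast

# In Python 3, `print >> f, x` parses as a tuple whose first element is the
# binary expression `print >> f` -- syntactically valid, even though it would
# fail at runtime.
tree = ast.parse("print >> f, x", mode="eval")
print(ast.dump(tree.body))
# Tuple(elts=[BinOp(left=Name(id='print', ...), op=RShift(), right=Name(id='f', ...)), Name(id='x', ...)], ...)
```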
2024-04-27 07:54:04 -04:00
Micha Reiser
61c97a037c red-knot: Introduce program.check (#11148) 2024-04-27 09:01:20 +00:00
Micha Reiser
7cd065e4a2 Kick off Red-knot (#10849)
Co-authored-by: Carl Meyer <carl@oddbird.net>
Co-authored-by: Carl Meyer <carl@astral.sh>
2024-04-27 08:34:00 +00:00
Carl Meyer
845ba7cf5f Make ImportFrom level just a u32 (#11170) 2024-04-26 20:38:35 -06:00
Auguste Lalande
5994414739 [ruff] Implement redirected-noqa (RUF101) (#11052)
## Summary

Based on discussion in #10850.

As it stands today `RUF100` will attempt to replace code redirects with
their target codes even though this is not the "goal" of `RUF100`. This
behavior is confusing and inconsistent, since code redirects which don't
otherwise violate `RUF100` will not be updated. The behavior is also
undocumented. Additionally, users who want to use `RUF100` but do not
want to update redirects have no way to opt out.

This PR explicitly detects redirects with a new rule `RUF101` and
patches `RUF100` to keep original codes in fixes and reporting.
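
For illustration, with `OLD001`/`NEW001` as stand-ins for a redirected code and
its target (placeholder codes; the real redirect table isn't shown here):

```python
x = 1  # noqa: OLD001
```

Previously a `RUF100` fix could rewrite `OLD001` to `NEW001` as a side effect;
with this change `RUF100` leaves the original code alone, and the new `RUF101`
rule is what flags the redirect.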

## Test Plan

Added fixture.
2024-04-27 02:08:11 +00:00
Jane Lewis
632965d0fa ruff server: Support a custom TOML configuration file (#11140)
## Summary

Closes #10985.

The server now supports a custom TOML configuration file as a client
setting. The setting must be an absolute path to a file. If the file is
called `pyproject.toml`, the server will attempt to parse it as a
pyproject file - otherwise, it will attempt to parse it as a `ruff.toml`
file, even if the file has a name besides `ruff.toml`.

If an option is set in both the custom TOML configuration file and in
the client settings directly, the latter will be used.

## Test Plan

1. Create a `ruff.toml` file outside of the workspace you are testing.
Set an option that is different from the one in the configuration for
your test workspace.
2. Set the path to the configuration in NeoVim:
```lua
require('lspconfig').ruff.setup {
    init_options = {
      settings = {
        configuration = "absolute/path/to/your/configuration"
      }
    }
}
```
3. Confirm that the option in the configuration file is used, regardless
of what the option is set to in the workspace configuration.
4. Add the same option, with a different value, to the NeoVim
configuration directly. For example:
```lua
require('lspconfig').ruff.setup {
    init_options = {
      settings = {
        configuration = "absolute/path/to/your/configuration",
        lint = {
          select = []
        }
      }
    }
}
```
5. Confirm that the option set in client settings is used, regardless of
the value in either the custom configuration file or in the workspace
configuration.
2024-04-26 23:46:07 +00:00
Dhruv Manilawala
77a72ecd38 Avoid multiline expression if format specifier is present (#11123)
## Summary

This PR fixes the bug where the formatter would format an f-string and
could potentially change the AST.

For a triple-quoted f-string, the element can't be formatted into
multiline if it has a format specifier because otherwise the newline
would be treated as part of the format specifier.

Given the following f-string:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
    variable:.3f} ddddddddddddddd eeeeeeee"""
```

The formatter sees that the f-string is already multiline, so it assumes
that it can contain line breaks, i.e., that it can be broken into multiple
lines. But in this specific case we can't format it as:

```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
    variable:.3f
} ddddddddddddddd eeeeeeee"""
```
                     
Because the format specifier string would become ".3f\n", which is not
the original string (`.3f`).

If the original source code already contained a newline, it'll be
preserved. For example:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
    variable:.3f
} ddddddddddddddd eeeeeeee"""
```

The above will be formatted as:
```py
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {variable:.3f
} ddddddddddddddd eeeeeeee"""
```

Note that the newline after `.3f` is part of the format specifier which
needs to be preserved.
The Python version is irrelevant in this case.

fixes: #10040 

## Test Plan

Add some test cases to verify this behavior.
2024-04-26 13:34:38 +00:00
Jane Lewis
16a1f3cbcc ruff server: Support setting to prioritize project configuration over editor configuration (#11086)
## Summary

This is intended to address
https://github.com/astral-sh/ruff-vscode/issues/425, and is a follow-up
to https://github.com/astral-sh/ruff/pull/11062.

A new client setting is now supported by the server,
`prioritizeFileConfiguration`. This is a boolean setting (default:
`false`) that, if set to `true`, will instruct the configuration
resolver to prioritize file configuration (aka discovered TOML files)
over configuration passed in by the editor.

A corresponding extension PR has been opened, which makes this setting
available for VS Code:
https://github.com/astral-sh/ruff-vscode/pull/457.

## Test Plan

To test this with VS Code, you'll need to check out [the VS Code
PR](https://github.com/astral-sh/ruff-vscode/pull/457) that adds this
setting.

The test process is similar to
https://github.com/astral-sh/ruff/pull/11062, but in scenarios where the
editor configuration would take priority over file configuration, file
configuration should take priority.
2024-04-26 08:17:28 +00:00
Jelle Zijlstra
cd3e319538 Add support for PEP 696 syntax (#11120) 2024-04-26 09:47:29 +02:00
Shi Sheng
45725d3275 Update README.md (#11156)
## Summary

Sorry about the oversight; this fixes the LICENSE redirection near the bottom
of the README file. It now also uses the full path.
Shi Sheng
bbca8eb388 Update README.md (#11155)
## Summary

Modified the license badge so that it uses the full path instead of
`LICENSE`, which hopefully resolves the license badge redirection issue
that appeared on PyPI.
2024-04-25 22:14:21 -04:00
Steve C
c8c227dd5d [refurb] Implement fstring-number-format (FURB116) (#10921)
## Summary

Adds `FURB116`

See #1348 
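
For reference, the pattern the rule name (`fstring-number-format`) points at
looks roughly like this (a sketch; see the rule docs for the exact cases
covered):

```python
num = 42

# Flagged: slicing off the "0b"/"0x" prefix from bin()/oct()/hex() output
print(bin(num)[2:])
print(hex(num)[2:])

# Preferred: format specifiers
print(f"{num:b}")
print(f"{num:x}")
```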

## Test Plan

`cargo test`
2024-04-26 01:15:33 +00:00
Charlie Marsh
b15e9e6e05 Include inline instantiations when detecting loggers (#11154)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11031.
2024-04-25 21:00:12 -04:00
Ibraheem Ahmed
22d4f11348 Upgrade codspeed-criterion-compat to 2.6.0 (#11153)
## Summary

`codspeed-criterion-compat` 2.6.0 [updates its package selection
mechanism](https://github.com/CodSpeedHQ/codspeed-rust/pull/43), so
https://github.com/astral-sh/ruff/pull/10735 is no longer needed.

## Test Plan

CI should still be passing.
2024-04-25 20:22:37 -04:00
Alex Waygood
269014a539 Delete unused methods from Parameters (#11150) 2024-04-25 22:11:24 +01:00
Charlie Marsh
dc09f529bc Build a separate ARM wheel for macOS (#11149)
## Summary

Since we already build an x86 wheel, we can just build an ARM wheel
rather than cross-compiling to universal.

The build time is ~3 minutes vs. > 20 minutes and the resulting artifact
is much smaller, which is also a win for users.
2024-04-25 15:06:47 -04:00
Auguste Lalande
3364ef957d [pygrep_hooks] Fix blanket-noqa panic when last line has noqa with no newline (PGH004) (#11108)
## Summary

Resolves #11102

The error stems from these lines

f5c7a62aa6/crates/ruff_linter/src/noqa.rs (L697-L702)
I don't really understand the purpose of incrementing the last index,
but it makes the resulting range invalid for indexing into `contents`.

For now I just detect if the index is too high in `blanket_noqa` and
adjust it if necessary.

## Test Plan

Created fixture from issue example.
2024-04-25 14:39:38 -04:00
Jane Lewis
77c93fd63c Bump version to 0.4.2 (#11151) 2024-04-25 17:31:38 +00:00
Dhruv Manilawala
1c9f5e3001 Display the AST even with syntax errors (#11147)
## Summary

This PR updates the playground to display the AST even if it contains a
syntax error. This could be useful for development and also to give a
quick preview of what error recovery looks like.

Note that not all recovery is correct but this allows us to iterate
quickly on what can be improved.

## Test Plan

Build the playground locally and test it.

<img width="1688" alt="Screenshot 2024-04-25 at 21 02 22"
src="https://github.com/astral-sh/ruff/assets/67177269/2b94934c-4f2c-4a9a-9693-3d8460ed9d0b">
2024-04-25 21:55:23 +05:30
Charlie Marsh
263a0d25ed Use macos-12 to build release wheels (#11146)
## Summary

GitHub has started to change `macos-latest` to `macos-14`. But
executables built on `macos-14` don't work on macOS 11 (see:
https://github.com/astral-sh/uv/issues/3261). This PR explicitly uses
`macos-12` instead (which is what we _intended_ to be using anyway).
2024-04-25 12:07:25 -04:00
Dhruv Manilawala
4738e19974 Remove unused lexical error types (#11145) 2024-04-25 15:24:16 +00:00
bersbersbers
f428bd5052 Docs: mention lint.typing-modules in TCH001, TCH002, TCH003 (#11144)
## Summary

Mention `lint.typing-modules` in `TCH001`, `TCH002`, `TCH003`; close
#11142.
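
For context, `lint.typing-modules` lists modules whose members should be
treated like members of `typing`; a hypothetical setup where that matters for
these rules (module names invented for illustration):

```python
# Assume pyproject.toml contains:
#   [tool.ruff.lint]
#   typing-modules = ["mypkg.compat"]
# so that `mypkg.compat.TYPE_CHECKING` is recognized like `typing.TYPE_CHECKING`.
from mypkg.compat import TYPE_CHECKING

if TYPE_CHECKING:
    import expensive_module  # typing-only imports can live here (TCH001-TCH003)
```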
2024-04-25 11:23:03 -04:00
Jane Lewis
4690890e9f ruff server: In 'publish diagnostics' mode, document diagnostics are cleared properly when a file is closed (#11137)
## Summary

Fixes #11114. 

As part of the `onClose` handler, we publish an empty array of
diagnostics for the document being closed, similar to
[`ruff-lsp`](187d7790be/ruff_lsp/server.py (L459-L464)).
This prevents phantom diagnostics from lingering after a document is
closed. We'll only do this if the client doesn't support pull
diagnostics, because otherwise clearing diagnostics is their
responsibility.

## Test Plan

Diagnostics should no longer appear for a document in the Problems tab
after the document is closed.
2024-04-24 19:38:54 -07:00
Maxime Beauchemin
19baabba58 README: add Apache Superset to project list (#11136)
## Summary

Stoked to have found `ruff` and migrated over - quick testimonial ->

> Our smooth and fast migration to ruff allowed us to deprecate 4+ other
disjointed tools (pycln, pyupgrade, black, isort, flake8) that each had
their own CLI subcommands, different enabling/disabling semantics, and
pre-commit hooks. I can't wait to add pylint to the list. It's such a
no-brainer to replace all this with the Ferrari of linters. `--add-noqa`
and `--ignore-noqa` are so convenient, but `--fix` takes it home!
2024-04-24 21:39:39 -04:00
Sid
cee38f39df [flake8-blind-except] Allow raise from in BLE001 (#11131)
## Summary

This allows `raise from` in BLE001.

```python
try:
    ...
except Exception as e:
    raise ValueError from e
```

Fixes #10806

## Test Plan

Test case added.
2024-04-24 11:56:11 -04:00
Alex Waygood
e3fde28146 [flake8-pyi] Allow overloaded __exit__ and __aexit__ definitions (PYI036) (#11057) 2024-04-24 15:48:52 +00:00
Ibraheem Ahmed
1c8849f9a8 Use Matchit to Resolve Per-File Settings (#11111)
## Summary

Continuation of https://github.com/astral-sh/ruff/pull/9444.

> When the formatter is fully cached, it turns out we actually spend
meaningful time mapping from file to `Settings` (since we use a
hierarchical approach to settings). Using `matchit` rather than
`BTreeMap` improves fully-cached performance by anywhere from 2-5%
depending on the project, and since these are all implementation details
of `Resolver`, it's minimally invasive.

`matchit` supports escaping routing characters so this change should now
be fully compatible.

## Test Plan

On my machine I'm seeing a ~3% improvement with this change.

```
hyperfine --warmup 20 -i "./target/release/main format ../airflow" "./target/release/ruff format ../airflow"
Benchmark 1: ./target/release/main format ../airflow
  Time (mean ± σ):      58.1 ms ±   1.4 ms    [User: 63.1 ms, System: 66.5 ms]
  Range (min … max):    56.1 ms …  62.9 ms    49 runs
 
Benchmark 2: ./target/release/ruff format ../airflow
  Time (mean ± σ):      56.6 ms ±   1.5 ms    [User: 57.8 ms, System: 67.7 ms]
  Range (min … max):    54.1 ms …  63.0 ms    51 runs
 
Summary
  ./target/release/ruff format ../airflow ran
    1.03 ± 0.04 times faster than ./target/release/main format ../airflow
```
2024-04-24 10:25:46 -04:00
Alex Waygood
37af6e6147 [flake8-pyi] Allow simple assignments to None in enum class scopes (PYI026) (#11128) 2024-04-24 15:13:55 +01:00
Micha Reiser
92814fd99b Use crossbeam-channel instead of crossbeam (#11129) 2024-04-24 13:56:55 +00:00
Shi Sheng
c9c2e7b978 Fix link to license in README (#11124) 2024-04-24 15:51:07 +02:00
Charlie Marsh
51dec8d95b [flake8-simplify] Avoid raising SIM911 for non-zip attribute calls (#11126)
## Summary

The `matches!` macro here is slightly off and allows _all_ attribute
calls through.
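
For reference, a sketch of the distinction (assuming `SIM911` is
`zip-dict-keys-and-values`): only the builtin `zip(...)` call should be
flagged, not an unrelated `.zip(...)` attribute call.

```python
class Archive:
    def zip(self, *args):  # unrelated method that merely shares the name `zip`
        return args

d = {"a": 1, "b": 2}
archive = Archive()

flat = dict(zip(d.keys(), d.values()))     # SIM911: prefer `dict(d.items())`
files = archive.zip(d.keys(), d.values())  # attribute call, not builtins.zip: no SIM911
```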

Closes https://github.com/astral-sh/ruff/issues/11125.
2024-04-24 13:05:17 +00:00
Nolan
7c8c1c71a3 Implement hover menu support for ruff-server; Issue #10595 (#11096)

## Summary

Add support for hover menu to ruff_server, as requested in
[10595](https://github.com/astral-sh/ruff/issues/10595).
The majority of the new code is in hover.rs.
I reused the regex from ruff-lsp's implementation, and also reused the
format_rule_text function from ruff/src/commands/rule.rs.
Capability registration was added in server.rs, and the handler was added
to api.rs.

## Test Plan

Tested in NVIM v0.10.0-dev-2582+g2a8cef6bd, configured with lspconfig
using the default options (other than cmd pointing to my test build,
with options "server" and "--preview"). OS: Ubuntu 24.04, kernel
6.8.0-22.

---------

Co-authored-by: Jane Lewis <me@jane.engineering>
2024-04-23 20:16:02 +00:00
Alex Waygood
455d22cdc8 Fix syntax for some if-conditions in ci.yaml (#11109) 2024-04-23 19:46:58 +01:00
Jane Lewis
35ca887e02 ruff server: Ruff configuration from client settings overrides project configuration (#11062)
## Summary

This is a follow-up to https://github.com/astral-sh/ruff/pull/10984 that
implements configuration resolution for editor configuration. By 'editor
configuration', I'm referring to the client settings that correspond to
Ruff configuration/options, like `preview`, `select`, and so on. These
will be combined with 'project configuration' (configuration taken from
project files such as `pyproject.toml`) to generate the final linter and
formatter settings used by `RuffSettings`. Editor configuration takes
priority over project configuration.

In a follow-up pull request, I'll implement a new client setting that
allows project configuration to override editor configuration, as per
[this issue](https://github.com/astral-sh/ruff-vscode/issues/425).

## Review guide

The first commit, e38966d8843becc7234fa7d46009c16af4ba41e9, is just
doing re-arrangement so that we can pass the right things to
`RuffSettings::resolve`. The actual resolution logic is in the second
commit, 0eec9ee75c10e5ec423bd9f5ce1764f4d7a5ad86. It might help to look
at these commits individually since the diff is rather messy.

## Test Plan

For the settings to show up in VS Code, you'll need to checkout this
branch: https://github.com/astral-sh/ruff-vscode/pull/456.

To test that the resolution for a specific setting works as expected,
run through the following scenarios, setting it in project and editor
configuration as needed:

| Set in project configuration? | Set in editor configuration? | Expected Outcome |
|-------------------------------|--------------------------------------------------|--------------------------------------------------------------------------------------------|
| No | No | The editor should behave as if the setting was set to its default value. |
| Yes | No | The editor should behave as if the setting was set to the value in project configuration. |
| No | Yes | The editor should behave as if the setting was set to the value in editor configuration. |
| Yes | Yes (but distinct from project configuration) | The editor should behave as if the setting was set to the value in editor configuration. |

An exception to this is `extendSelect`, which does not have an analog in
TOML configuration. Instead, you should verify that `extendSelect`
amends the `select` setting. If `select` is set in both editor and
project configuration, `extendSelect` will only append to the `select`
value in editor configuration, so make sure to un-set it there if you're
testing `extendSelect` with `select` in project configuration.
2024-04-23 11:19:17 -07:00
Alex Waygood
f5c7a62aa6 Add a new CI job to fuzz the parser (#11089) 2024-04-23 10:24:04 +00:00
Dhruv Manilawala
38d2562f41 Refactor unary expression parsing (#11088)
## Summary

This PR refactors unary expression parsing with the following changes:
* Ability to get `OperatorPrecedence` from a unary operator (`UnaryOp`)
* Implement methods on `TokenKind`
	* Add `as_unary_operator` which returns an `Option<UnaryOp>`
* Add `as_unary_arithmetic_operator` which returns an `Option<UnaryOp>`
(used for pattern parsing)
* Rename `is_unary` to `is_unary_arithmetic_operator` (used in the
linter)

resolves: #10752 

## Test Plan

Verify that the existing test cases pass, no ecosystem changes, run the
Python based fuzzer on 3000 random inputs and run it on dozens of
open-source repositories.
2024-04-23 04:55:02 +00:00
Dhruv Manilawala
7eba967e16 Refactor binary expression parsing (#11073)
## Summary

This PR refactors binary expression parsing to make it more readable and
easier to understand. It draws inspiration from the suggested edits in the
messages linked in #10752.

### Changes

* Ability to get the precedence of an operator
	* From a boolean operator (`BinOp`) to `OperatorPrecedence`
	* From a binary operator (`Operator`) to `OperatorPrecedence`
	* No comparison operator because all of them have the same precedence
* Implement methods on `TokenKind` to convert it to an appropriate
operator enum
	* Add `as_boolean_operator` which returns an `Option<BoolOp>`
	* Add `as_binary_operator` which returns an `Option<Operator>`
* No `as_comparison_operator` because it requires lookahead and I'm not
sure if `token.as_comparison_operator(peek)` is a good way to implement
it
* Introduce `BinaryLikeOperator`
	* Constructed from two tokens using the methods from the second point
* Add `precedence` method using the conversion methods mentioned in the
first point
* Make most of the functions in `TokenKind` private to the module
* Use `self` instead of `&self` for `TokenKind` 

fixes: #11072

## Test Plan

Refer #11088
2024-04-23 04:42:40 +00:00
Dhruv Manilawala
5b81b8368d Make associativity a property of operator precedence (#11065)
## Summary

This PR does a few things but the main change is that is makes
associativity a property of operator precedence.

1. Rename `Precedence` -> `OperatorPrecedence`
2. Rename `parse_expression_with_precedence` ->
`parse_binary_expression_or_higher`
3. Move `current_binding_power` to `OperatorPrecedence::try_from_tokens`
[^1]
4. Add a `OperatorPrecedence::is_right_associative` method
5. Move from `increment_precedence` to using `<=` / `<` to check if the
parsing loop needs to stop [^2]

[^1]: Another alternative would be to have two separate methods to avoid
lookahead as it's required only for one case (`not in`). So,
`try_from_current_token(current).or_else(|| try_from_next_token(current,
peek))`
[^2]: This will allow us to easily make the refactors mentioned in
#10752
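
As a language-agnostic sketch of points 4 and 5 (illustrative Python only, not
the actual Rust implementation): associativity decides whether an equal
precedence continues the parsing loop or stops it.

```python
# Minimal precedence-climbing sketch. Left-associative operators require a
# strictly higher precedence to keep recursing; right-associative ones (like
# `**`) accept an equal precedence, which nests them to the right.
PRECEDENCE = {"+": 1, "*": 2, "**": 3}
RIGHT_ASSOCIATIVE = {"**"}

def parse_binary(tokens, pos=0, min_prec=1):
    lhs, pos = tokens[pos], pos + 1
    while pos < len(tokens) and tokens[pos] in PRECEDENCE:
        op = tokens[pos]
        prec = PRECEDENCE[op]
        if prec < min_prec:
            break
        next_min = prec if op in RIGHT_ASSOCIATIVE else prec + 1
        rhs, pos = parse_binary(tokens, pos + 1, next_min)
        lhs = (op, lhs, rhs)
    return lhs, pos

print(parse_binary("a + b * c".split())[0])    # ('+', 'a', ('*', 'b', 'c'))
print(parse_binary("x ** y ** z".split())[0])  # ('**', 'x', ('**', 'y', 'z'))
```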

## Test Plan

Make sure the precedence parsing algorithm is still correct by running
the test suite, fuzz testing it and running it against a dozen or so
open-source repositories.
2024-04-23 04:28:46 +00:00
Dhruv Manilawala
c30735d4a7 Add ExpressionContext for expression parsing (#11055)
## Summary

This PR adds a new `ExpressionContext` struct which is used in
expression parsing.

This solves the following problems:
1. Allowing starred expression with different precedence
2. Allowing yield expression in certain context
3. Remove ambiguity with `in` keyword when parsing a `for ... in`
statement

For context, (1) was solved by adding `parse_star_expression_list` and
`parse_star_expression_or_higher` in #10623, (2) was solved by adding
`parse_yield_expression_or_else` in #10809, and (3) was fixed in #11009.
All of the mentioned functions have been removed in favor of the context
flags.

As mentioned in #11009, an ideal solution would be to implement an
expression context, which is what this PR implements. This is passed
around as a function parameter, and the call stack is used to automatically
reset the context.

### Recovery

How should the parser recover if the target expression is invalid when
an expression can consume the `in` keyword?

1. Should the `in` keyword be part of the target expression?
2. Or, should the expression parsing stop as soon as `in` keyword is
encountered, no matter the expression?

For example:
```python
for yield x in y: ...

# Here, should this be parsed as
for (yield x) in (y): ...
# Or
for (yield x in y): ...
# where the `in iter` part is missing
```

Or, for binary expression parsing:
```python
for x or y in z: ...

# Should this be parsed as
for (x or y) in z: ...
# Or
for (x or y in z): ...
# where the `in iter` part is missing
```

This need not be solved now, but is very easy to change. For context
this PR does the following:
* For binary, comparison, and unary expressions, stop at `in`
* For lambda, yield expressions, consume the `in`

## Test Plan

1. Add test cases for the `for ... in` statement and verify the
snapshots
2. Make sure the existing test suite pass
3. Run the fuzzer for around 3000 generated source code
4. Run the updated logic on a dozen or so open source repositories
(codename "parser-checkouts")
2024-04-23 04:19:05 +00:00
Jane Lewis
62478c3070 ruff server: Support publish diagnostics as a fallback when pull diagnostics aren't supported (#11092)
## Summary

Fixes #11059 

Several major editors don't support [pull
diagnostics](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_pullDiagnostics),
a method of sending diagnostics to the client that was introduced in
version `3.17` of the specification. Until now, `ruff server` has only
used pull diagnostics, which resulted in diagnostics not being available
on Neovim and Helix, which don't support pull diagnostics yet (though
Neovim `10.0` will have support for this).

`ruff server` will now utilize the older method of sending diagnostics,
known as 'publish diagnostics', when pull diagnostics aren't supported
by the client. This involves re-linting a document every time it is
opened or modified, and then sending the diagnostics generated from that
lint to the client via the `textDocument/publishDiagnostics`
notification.

## Test Plan

The easiest way to test that this PR works is to check if diagnostics
show up on Neovim `<=0.9`.
2024-04-22 21:06:35 -07:00
Ottavio Hartman
111bbc61f6 [refurb] New rule to suggest min/max over sorted() (FURB192) (#10868)
## Summary

Fixes #10463

Add `FURB192` which detects violations like this:

```python
# Bad
a = sorted(l)[0]

# Good
a = min(l)
```

There is a caveat that @Skylion007 has pointed out, which is that
violations with `reverse=True` technically aren't compatible with this
change, in the edge case where the unstable behavior is intended. For
example:

```python
from operator import itemgetter
data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]

min(data, key=itemgetter(0))  # ('blue', 1)
sorted(data, key=itemgetter(0))[0]  # ('blue', 1)
sorted(data, key=itemgetter(0), reverse=True)[-1]  # ('blue', 2)
```

This seems like a rare edge case, but I can make the `reverse=True`
fixes unsafe if that's best.

## Test Plan

This is unit tested.

## References

https://github.com/dosisod/refurb/pull/333/files

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2024-04-23 01:13:57 +00:00
Charlie Marsh
925c7f8dd3 [refurb] Avoid operator.itemgetter suggestion for single-item tuple (#11095)
## Summary

The `operator.itemgetter` behavior changes when there's more than one
argument, such that `operator.itemgetter(0)` yields `r[0]` rather than
`(r[0],)`.
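
Concretely (standard `operator.itemgetter` behavior):

```python
from operator import itemgetter

row = ("a", "b", "c")
itemgetter(0)(row)     # 'a'        -- a single key returns the bare value
itemgetter(0, 1)(row)  # ('a', 'b') -- only multiple keys return a tuple
```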

Closes https://github.com/astral-sh/ruff/issues/11075.
2024-04-23 00:20:43 +00:00
KotlinIsland
5b4c8a7c5f (📚) fix docs for invalid-X-returns (#11094)
## Summary

There is no class `integer` in Python, nor is there a type `integer`, so
I updated the docs to remove the backticks from these references, so that
they read as the representation of an integer rather than a code reference.
2024-04-23 00:17:47 +00:00
Auguste Lalande
647548b5e7 [pygrep_hooks] Move blanket-noqa to noqa checker (PGH004) (#11053)
## Summary

Move `blanket-noqa` rule from the token checker to the noqa checker.
This allows us to make use of the line directives already computed in
the noqa checker.

## Test Plan

Verified test results are unchanged.
2024-04-22 13:36:25 -04:00
plredmond
a9919707d4 [UP031] When encountering "%s" % var offer unsafe fix (#11019)
Resolves #10187

<details>
<summary>Old PR description; accurate through commit e86dd7d; probably
best to leave this fold closed</summary>

## Description of change

In the case of a printf-style format string with only one %-placeholder
and a variable at right (e.g. `"%s" % var`):

* The new behavior attempts to dereference the variable and then match
on the bound expression to distinguish between a 1-tuple (fix), n-tuple
(bug 🐛), or a non-tuple (fix). Dereferencing is via
`analyze::typing::find_binding_value`.
* If the variable cannot be dereferenced, then the type-analysis routine
is called to distinguish only tuple (no-fix) or non-tuple (fix). Type
analysis is via `analyze::typing::is_tuple`.
* If any of the above fails, the rule still fires, but no fix is
offered.

## Alternatives

* If the reviewers think that singling out the 1-tuple case is too
complicated, I will remove that.
* The ecosystem results show that no new fixes are detected. So I could
probably delete all the variable dereferencing code and code that tries
to generate fixes, tbh.

## Changes to existing behavior

**All the previous rule-firings and fixes are unchanged except for** the
"false negatives" in
`crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_1.py`. Those
previous "false negatives" are now true positives and so I moved them to
`crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py`.

<details>
<summary>Existing false negatives that are now true positives</summary>

```
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:134:1: UP031 Use format specifiers instead of percent format
    |
133 | # UP031 (no longer false negatives)
134 | 'Hello %s' % bar
    | ^^^^^^^^^^^^^^^^ UP031
135 |
136 | 'Hello %s' % bar.baz
    |
    = help: Replace with format specifiers

crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:136:1: UP031 Use format specifiers instead of percent format
    |
134 | 'Hello %s' % bar
135 |
136 | 'Hello %s' % bar.baz
    | ^^^^^^^^^^^^^^^^^^^^ UP031
137 |
138 | 'Hello %s' % bar['bop']
    |
    = help: Replace with format specifiers

crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:138:1: UP031 Use format specifiers instead of percent format
    |
136 | 'Hello %s' % bar.baz
137 |
138 | 'Hello %s' % bar['bop']
    | ^^^^^^^^^^^^^^^^^^^^^^^ UP031
    |
    = help: Replace with format specifiers
```
One of them newly offers a fix.
```
 # UP031 (no longer false negatives)
-'Hello %s' % bar
+'Hello {}'.format(bar)
```
This fix occurs because the new code dereferences `bar` to where it was
defined earlier in the file as a non-tuple:
```python
bar = {"bar": y}
```

---

</details>

## Behavior requiring new tests

Additionally, we now handle a few cases that we didn't previously test.
These cases are when a string has a single %-placeholder and the
righthand operand to the modulo operator is a variable **which can be
dereferenced.** One of those was shown in the previous section (the
"dereference non-tuple" case).

<details>
<summary>New cases handled</summary>

```
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:126:1: UP031 [*] Use format specifiers instead of percent format
    |
125 | t1 = (x,)
126 | "%s" % t1
    | ^^^^^^^^^ UP031
127 | # UP031: deref t1 to 1-tuple, offer fix
    |
    = help: Replace with format specifiers

crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:130:1: UP031 Use format specifiers instead of percent format
    |
129 | t2 = (x,y)
130 | "%s" % t2
    | ^^^^^^^^^ UP031
131 | # UP031: deref t2 to n-tuple, this is a bug
    |
    = help: Replace with format specifiers
```
One of these offers a fix.
```
 t1 = (x,)
-"%s" % t1
+"{}".format(t1[0])
 # UP031: deref t1 to 1-tuple, offer fix
```
The other doesn't offer a fix because it's a bug.

---

</details>

---

</details>


## Changes to existing behavior

In the case of a string with a single %-placeholder and a single
ambiguous righthand argument to the modulo operator (e.g. `"%s" % var`),
the rule now fires and offers a fix. We explain this in the "fix
safety" section of the updated documentation.


## Documentation changes

I swapped the order of the "known problems" and the "examples" sections
so that the examples which describe the rule are first, before the
exceptions to the rule are described. I also tweaked the language to be
more explicit, as I had trouble understanding the documentation at
first. The "known problems" section is now "fix safety" but the content
is largely similar.

The diff of the documentation changes looks a little difficult unless
you look at the individual commits.
2024-04-22 08:40:51 -07:00
Alex Waygood
5dcb1d9e8c Improvements to the fuzz-parser script (#11071)
## Summary

- Properly fix the race condition identified in
https://github.com/astral-sh/ruff/pull/11039. Instead of running the
version of Ruff we're testing by invoking `cargo run --release` on each
generated source file, we either (1) accept a path to an executable on
the command line or (2) if that's not specified, we run `cargo build
--release` once at the start and then invoke the executable found in
`target/release/ruff` directly.
- Now that the race condition is properly fixed, remove the workaround
for the race condition added in
https://github.com/astral-sh/ruff/pull/11039.
- Also allow users to pass in an executable to compare against for the
`--only-new-bugs` argument (previously it was hardcoded to always
compare against the version of Ruff installed into the Python
environment)
- Use `argparse.RawDescriptionHelpFormatter` as the formatter class
rather than `argparse.RawTextHelpFormatter`. This means that long help
texts for the individual arguments will be wrapped to a sensible width.
- On completion of the script, indicate success or failure of the script
overall by raising `SystemExit` with the appropriate exit code.
- Add myself as a codeowner for the script
2024-04-22 07:46:58 +01:00
renovate[bot]
0b92f450ca Update Rust crate codspeed-criterion-compat to v2.5.0 (#11087)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-22 08:33:05 +02:00
renovate[bot]
54d42957b0 Update dependency react-resizable-panels to v2.0.18 (#11081)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-22 08:32:24 +02:00
renovate[bot]
d3ed0ad68c Update Rust crate thiserror to v1.0.59 (#11080)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-22 08:31:51 +02:00
renovate[bot]
01678a990c Update Rust crate syn to v2.0.60 (#11079)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-04-22 08:31:33 +02:00
renovate[bot]
45692ce89f Update Rust crate quick-junit to 0.4.0 (#11084) 2024-04-22 07:13:09 +01:00
renovate[bot]
4246111f67 Update Rust crate proc-macro2 to v1.0.81 (#11076) 2024-04-22 07:10:37 +01:00
renovate[bot]
9924bd774a Update Rust crate serde to v1.0.198 (#11077) 2024-04-21 22:50:35 -05:00
renovate[bot]
fc7f07bca7 Update Rust crate serde_json to v1.0.116 (#11078) 2024-04-21 22:50:29 -05:00
renovate[bot]
bd6ca3d586 Update pre-commit dependencies (#11082) 2024-04-21 20:52:27 -05:00
renovate[bot]
23f8e1c3c8 Update NPM Development dependencies (#11083) 2024-04-21 20:52:13 -05:00
Jonathan Plasse
a68938897d [ruff] fix async comprehension false positive (RUF029) (#11070)
## Summary

- Fix #11043 

## Test Plan

Added the false positive code in the test fixture.
2024-04-21 08:17:25 -04:00
Charlie Marsh
d544199272 Respect per-file-ignores for RUF100 with no other diagnostics (#11058)
## Summary

The existing test didn't cover the case in which there are _no_ other
diagnostics in the file.

Closes https://github.com/astral-sh/ruff/issues/10906.
2024-04-20 15:33:22 +00:00
Carl Meyer
c80b9a4a90 Reduce size of Stmt from 144 to 120 bytes (#11051)
## Summary

I happened to notice that we box `TypeParams` on `StmtClassDef` but not
on `StmtFunctionDef` and wondered why, since `StmtFunctionDef` is bigger
and sets the size of `Stmt`.

@charliermarsh found that at the time we started boxing type params on
classes, classes were the largest statement type (see #6275), but that's
no longer true.

So boxing type-params also on functions reduces the overall size of
`Stmt`.

## Test Plan

The `<=` size tests are a bit irritating (since their failure doesn't
tell you the actual size), but I manually confirmed that the size is
actually 120 now.
2024-04-19 17:02:17 -06:00
Charlie Marsh
99f7f94538 Improve documentation around custom isort sections (#11050)
## Summary

Closes https://github.com/astral-sh/ruff/issues/11047.
2024-04-19 22:26:55 +00:00
James Frost
7b3c92a979 [flake8-bugbear] Document explicitly disabling strict zip (B905) (#11040)
Occasionally you intentionally have iterables of differing lengths. The
rule permits this by explicitly adding `strict=False`, but this was not
documented.

## Summary

The rule does not currently document how to avoid it when having
differing length iterables is intentional. This PR adds that to the rule
documentation.
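
A sketch of what the documented escape hatch looks like (Python 3.10+, where
`zip()` accepts `strict`):

```python
widths = [1, 2]
labels = ["a", "b", "c"]

pairs = list(zip(widths, labels))                # B905: `zip()` without an explicit `strict=`
pairs = list(zip(widths, labels, strict=False))  # OK: differing lengths are intentional
```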
2024-04-19 13:50:18 +00:00
Alex Waygood
fdbcb62adc scripts/fuzz-parser: work around race condition from running cargo build concurrently (#11039) 2024-04-19 14:42:28 +01:00
1605 changed files with 108564 additions and 19785 deletions

View File

@@ -1,3 +1,10 @@
[alias]
dev = "run --package ruff_dev --bin ruff_dev"
benchmark = "bench -p ruff_benchmark --bench linter --bench formatter --"
# statically link the C runtime so the executable does not depend on
# that shared/dynamic library.
#
# See: https://github.com/astral-sh/ruff/issues/11503
[target.'cfg(all(target_env="msvc", target_os = "windows"))']
rustflags = ["-C", "target-feature=+crt-static"]

6
.github/CODEOWNERS vendored
View File

@@ -12,3 +12,9 @@
# flake8-pyi
/crates/ruff_linter/src/rules/flake8_pyi/ @AlexWaygood
# Script for fuzzing the parser
/scripts/fuzz-parser/ @AlexWaygood
# red-knot
/crates/red_knot/ @carljm @MichaReiser

View File

@@ -27,9 +27,20 @@
// Group upload/download artifact updates, the versions are dependent
groupName: "Artifact GitHub Actions dependencies",
matchManagers: ["github-actions"],
matchDatasources: ["gitea-tags", "github-tags"],
matchPackagePatterns: ["actions/.*-artifact"],
description: "Weekly update of artifact-related GitHub Actions dependencies",
},
{
// This package rule disables updates for GitHub runners:
// we'd only pin them to a specific version
// if there was a deliberate reason to do so
groupName: "GitHub runners",
matchManagers: ["github-actions"],
matchDatasources: ["github-runners"],
description: "Disable PRs updating GitHub runners (e.g. 'runs-on: macos-14')",
enabled: false,
},
{
groupName: "pre-commit dependencies",
matchManagers: ["pre-commit"],

View File

@@ -23,6 +23,8 @@ jobs:
name: "Determine changes"
runs-on: ubuntu-latest
outputs:
# Flag that is raised when any code that affects parser is changed
parser: ${{ steps.changed.outputs.parser_any_changed }}
# Flag that is raised when any code that affects linter is changed
linter: ${{ steps.changed.outputs.linter_any_changed }}
# Flag that is raised when any code that affects formatter is changed
@@ -39,6 +41,17 @@ jobs:
id: changed
with:
files_yaml: |
parser:
- Cargo.toml
- Cargo.lock
- crates/ruff_python_trivia/**
- crates/ruff_source_file/**
- crates/ruff_text_size/**
- crates/ruff_python_ast/**
- crates/ruff_python_parser/**
- scripts/fuzz-parser/**
- .github/workflows/ci.yaml
linter:
- Cargo.toml
- Cargo.lock
@@ -46,7 +59,6 @@ jobs:
- "!crates/ruff_python_formatter/**"
- "!crates/ruff_formatter/**"
- "!crates/ruff_dev/**"
- "!crates/ruff_shrinking/**"
- scripts/*
- python/**
- .github/workflows/ci.yaml
@@ -155,6 +167,9 @@ jobs:
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
# Workaround for <https://github.com/nextest-rs/nextest/issues/1493>.
RUSTUP_WINDOWS_PATH_ADD_BIN: 1
run: |
cargo nextest run --all-features --profile ci
cargo test --all-features --doc
@@ -181,6 +196,54 @@ jobs:
cd crates/ruff_wasm
wasm-pack test --node
cargo-build-release:
name: "cargo build (release)"
runs-on: macos-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@v1
- uses: Swatinem/rust-cache@v2
- name: "Build"
run: cargo build --release --locked
cargo-build-msrv:
name: "cargo build (msrv)"
runs-on: ubuntu-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- uses: SebRollen/toml-action@v1.2.0
id: msrv
with:
file: "Cargo.toml"
field: "workspace.package.rust-version"
- name: "Install Rust toolchain"
run: rustup default ${{ steps.msrv.outputs.value }}
- name: "Install mold"
uses: rui314/setup-mold@v1
- name: "Install cargo nextest"
uses: taiki-e/install-action@v2
with:
tool: cargo-nextest
- name: "Install cargo insta"
uses: taiki-e/install-action@v2
with:
tool: cargo-insta
- uses: Swatinem/rust-cache@v2
- name: "Run tests"
shell: bash
env:
NEXTEST_PROFILE: "ci"
run: cargo +${{ steps.msrv.outputs.value }} insta test --all-features --unreferenced reject --test-runner nextest
cargo-fuzz:
name: "cargo fuzz"
runs-on: ubuntu-latest
@@ -200,6 +263,38 @@ jobs:
tool: cargo-fuzz@0.11.2
- run: cargo fuzz build -s none
fuzz-parser:
name: "Fuzz the parser"
runs-on: ubuntu-latest
needs:
- cargo-test-linux
- determine_changes
if: ${{ needs.determine_changes.outputs.parser == 'true' }}
timeout-minutes: 20
env:
FORCE_COLOR: 1
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install uv
run: curl -LsSf https://astral.sh/uv/install.sh | sh
- name: Install Python requirements
run: uv pip install -r scripts/fuzz-parser/requirements.txt --system
- uses: actions/download-artifact@v4
name: Download Ruff binary to test
id: download-cached-binary
with:
name: ruff
path: ruff-to-test
- name: Fuzz
run: |
# Make executable, since artifact download doesn't preserve this
chmod +x ${{ steps.download-cached-binary.outputs.download-path }}/ruff
python scripts/fuzz-parser/fuzz.py 0-500 --test-executable ${{ steps.download-cached-binary.outputs.download-path }}/ruff
scripts:
name: "test scripts"
runs-on: ubuntu-latest
@@ -228,9 +323,7 @@ jobs:
- determine_changes
# Only runs on pull requests, since that is the only we way we can find the base version for comparison.
# Ecosystem check needs linter and/or formatter changes.
if: github.event_name == 'pull_request' && ${{
needs.determine_changes.outputs.code == 'true'
}}
if: ${{ github.event_name == 'pull_request' && needs.determine_changes.outputs.code == 'true' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
@@ -337,22 +430,16 @@ jobs:
name: ecosystem-result
path: ecosystem-result
cargo-udeps:
name: "cargo udeps"
cargo-shear:
name: "cargo shear"
runs-on: ubuntu-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- name: "Install nightly Rust toolchain"
# Only pinned to make caching work, update freely
run: rustup toolchain install nightly-2023-10-15
- uses: Swatinem/rust-cache@v2
- name: "Install cargo-udeps"
uses: taiki-e/install-action@cargo-udeps
- name: "Run cargo-udeps"
run: cargo +nightly-2023-10-15 udeps
- uses: cargo-bins/cargo-binstall@main
- run: cargo binstall --no-confirm cargo-shear
- run: cargo shear
python-package:
name: "python package"
@@ -509,7 +596,7 @@ jobs:
benchmarks:
runs-on: ubuntu-latest
needs: determine_changes
if: ${{ needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main' }}
if: ${{ github.repository == 'astral-sh/ruff' && (needs.determine_changes.outputs.code == 'true' || github.ref == 'refs/heads/main') }}
timeout-minutes: 20
steps:
- name: "Checkout Branch"
@@ -525,23 +612,8 @@ jobs:
- uses: Swatinem/rust-cache@v2
# Codspeed comes with a very ancient cargo version (1.66) that resolves features flags differently than what we use now.
# This can result in build failures; see https://github.com/astral-sh/ruff/pull/10700.
# There's a pending codspeed PR to upgrade to a newer cargo version, but until that's merged, we need to use the workaround below.
# https://github.com/CodSpeedHQ/codspeed-rust/pull/31
# What we do is to call cargo build manually with the correct feature flags and RUSTC settings. We'll have to
# manually maintain the list of benchmarks to run with codspeed (the benefit is that we could detect which benchmarks to run and build based on the changes).
# This is inspired by https://github.com/oxc-project/oxc/blob/a0532adc654039a0c7ead7b35216dfa0b0cb8e8f/.github/workflows/benchmark.yml
- name: "Build benchmarks"
env:
RUSTFLAGS: "-C debuginfo=2 -C strip=none -g --cfg codspeed"
shell: bash
# Build all benchmarks, copy the binary to the codspeed directory, remove any `*.d` files that might have been created.
run: |
cargo build --release -p ruff_benchmark --bench parser --bench linter --bench formatter --bench lexer --features=codspeed
mkdir -p ./target/codspeed/ruff_benchmark
cp ./target/release/deps/{lexer,parser,linter,formatter}* target/codspeed/ruff_benchmark/
rm -rf ./target/codspeed/ruff_benchmark/*.d
run: cargo codspeed build --features codspeed -p ruff_benchmark
- name: "Run benchmarks"
uses: CodSpeedHQ/action@v2

72
.github/workflows/daily_fuzz.yaml vendored Normal file
View File

@@ -0,0 +1,72 @@
name: Daily parser fuzz
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * *"
pull_request:
paths:
- ".github/workflows/daily_fuzz.yaml"
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
CARGO_TERM_COLOR: always
RUSTUP_MAX_RETRIES: 10
PACKAGE_NAME: ruff
FORCE_COLOR: 1
jobs:
fuzz:
name: Fuzz
runs-on: ubuntu-latest
timeout-minutes: 20
# Don't run the cron job on forks:
if: ${{ github.repository == 'astral-sh/ruff' || github.event_name != 'schedule' }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install uv
run: curl -LsSf https://astral.sh/uv/install.sh | sh
- name: Install Python requirements
run: uv pip install -r scripts/fuzz-parser/requirements.txt --system
- name: "Install Rust toolchain"
run: rustup show
- name: "Install mold"
uses: rui314/setup-mold@v1
- uses: Swatinem/rust-cache@v2
- name: Build ruff
# A debug build means the script runs slower once it gets started,
# but this is outweighed by the fact that a release build takes *much* longer to compile in CI
run: cargo build --locked
- name: Fuzz
run: python scripts/fuzz-parser/fuzz.py $(shuf -i 0-9999999999999999999 -n 1000) --test-executable target/debug/ruff
create-issue-on-failure:
name: Create an issue if the daily fuzz surfaced any bugs
runs-on: ubuntu-latest
needs: fuzz
if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && needs.fuzz.result == 'failure' }}
permissions:
issues: write
steps:
- uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.create({
owner: "astral-sh",
repo: "ruff",
title: `Daily parser fuzz failed on ${new Date().toDateString()}`,
body: "Runs listed here: https://github.com/astral-sh/ruff/actions/workflows/daily_fuzz.yml",
labels: ["bug", "parser", "fuzzer"],
})

View File

@@ -47,7 +47,7 @@ jobs:
run: mkdocs build --strict -f mkdocs.public.yml
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.4.1
uses: cloudflare/wrangler-action@v3.6.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -40,7 +40,7 @@ jobs:
working-directory: playground
- name: "Deploy to Cloudflare Pages"
if: ${{ env.CF_API_TOKEN_EXISTS == 'true' }}
uses: cloudflare/wrangler-action@v3.4.1
uses: cloudflare/wrangler-action@v3.6.1
with:
apiToken: ${{ secrets.CF_API_TOKEN }}
accountId: ${{ secrets.CF_ACCOUNT_ID }}

View File

@@ -58,7 +58,7 @@ jobs:
path: dist
macos-x86_64:
runs-on: macos-latest
runs-on: macos-12
steps:
- uses: actions/checkout@v4
with:
@@ -97,8 +97,8 @@ jobs:
*.tar.gz
*.sha256
macos-universal:
runs-on: macos-latest
macos-aarch64:
runs-on: macos-14
steps:
- uses: actions/checkout@v4
with:
@@ -106,16 +106,17 @@ jobs:
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
architecture: x64
architecture: arm64
- name: "Prep README.md"
run: python scripts/transform_readme.py --target pypi
- name: "Build wheels - universal2"
- name: "Build wheels - aarch64"
uses: PyO3/maturin-action@v1
with:
args: --release --locked --target universal2-apple-darwin --out dist
- name: "Test wheel - universal2"
target: aarch64
args: --release --locked --out dist
- name: "Test wheel - aarch64"
run: |
pip install dist/${{ env.PACKAGE_NAME }}-*universal2.whl --force-reinstall
pip install dist/${{ env.PACKAGE_NAME }}-*.whl --force-reinstall
ruff --help
python -m ruff --help
- name: "Upload wheels"
@@ -451,7 +452,7 @@ jobs:
name: Upload to PyPI
runs-on: ubuntu-latest
needs:
- macos-universal
- macos-aarch64
- macos-x86_64
- windows
- linux

80
.github/workflows/sync_typeshed.yaml vendored Normal file
View File

@@ -0,0 +1,80 @@
name: Sync typeshed
on:
workflow_dispatch:
schedule:
# Run on the 1st and the 15th of every month:
- cron: "0 0 1,15 * *"
env:
FORCE_COLOR: 1
GH_TOKEN: ${{ github.token }}
jobs:
sync:
name: Sync typeshed
runs-on: ubuntu-latest
timeout-minutes: 20
# Don't run the cron job on forks:
if: ${{ github.repository == 'astral-sh/ruff' || github.event_name != 'schedule' }}
permissions:
contents: write
pull-requests: write
steps:
- uses: actions/checkout@v4
name: Checkout Ruff
with:
path: ruff
- uses: actions/checkout@v4
name: Checkout typeshed
with:
repository: python/typeshed
path: typeshed
- name: Setup git
run: |
git config --global user.name typeshedbot
git config --global user.email '<>'
- name: Sync typeshed
id: sync
run: |
rm -rf ruff/crates/red_knot/vendor/typeshed
mkdir ruff/crates/red_knot/vendor/typeshed
cp typeshed/README.md ruff/crates/red_knot/vendor/typeshed
cp typeshed/LICENSE ruff/crates/red_knot/vendor/typeshed
cp -r typeshed/stdlib ruff/crates/red_knot/vendor/typeshed/stdlib
rm -rf ruff/crates/red_knot/vendor/typeshed/stdlib/@tests
git -C typeshed rev-parse HEAD > ruff/crates/red_knot/vendor/typeshed/source_commit.txt
- name: Commit the changes
id: commit
if: ${{ steps.sync.outcome == 'success' }}
run: |
cd ruff
git checkout -b typeshedbot/sync-typeshed
git add .
git diff --staged --quiet || git commit -m "Sync typeshed. Source commit: https://github.com/python/typeshed/commit/$(git -C ../typeshed rev-parse HEAD)"
- name: Create a PR
if: ${{ steps.sync.outcome == 'success' && steps.commit.outcome == 'success' }}
run: |
cd ruff
git push --force origin typeshedbot/sync-typeshed
gh pr list --repo $GITHUB_REPOSITORY --head typeshedbot/sync-typeshed --json id --jq length | grep 1 && exit 0 # exit if there is existing pr
gh pr create --title "Sync vendored typeshed stubs" --body "Close and reopen this PR to trigger CI" --label "internal"
create-issue-on-failure:
name: Create an issue if the typeshed sync failed
runs-on: ubuntu-latest
needs: [sync]
if: ${{ github.repository == 'astral-sh/ruff' && always() && github.event_name == 'schedule' && needs.sync.result == 'failure' }}
permissions:
issues: write
steps:
- uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.create({
owner: "astral-sh",
repo: "ruff",
title: `Automated typeshed sync failed on ${new Date().toDateString()}`,
body: "Runs are listed here: https://github.com/astral-sh/ruff/actions/workflows/sync_typeshed.yaml",
})

View File

@@ -2,6 +2,7 @@ fail_fast: true
exclude: |
(?x)^(
crates/red_knot/vendor/.*|
crates/ruff_linter/resources/.*|
crates/ruff_linter/src/rules/.*/snapshots/.*|
crates/ruff/resources/.*|
@@ -13,7 +14,7 @@ exclude: |
repos:
- repo: https://github.com/abravalheri/validate-pyproject
rev: v0.16
rev: v0.18
hooks:
- id: validate-pyproject
@@ -31,7 +32,7 @@ repos:
)$
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.39.0
rev: v0.41.0
hooks:
- id: markdownlint-fix
exclude: |
@@ -41,7 +42,7 @@ repos:
)$
- repo: https://github.com/crate-ci/typos
rev: v1.20.8
rev: v1.21.0
hooks:
- id: typos
@@ -55,7 +56,7 @@ repos:
pass_filenames: false # This makes it a lot faster
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.0
rev: v0.4.7
hooks:
- id: ruff-format
- id: ruff

5
.vscode/extensions.json vendored Normal file
View File

@@ -0,0 +1,5 @@
{
"recommendations": [
"rust-lang.rust-analyzer"
]
}

6
.vscode/settings.json vendored Normal file
View File

@@ -0,0 +1,6 @@
{
"rust-analyzer.check.extraArgs": [
"--all-features"
],
"rust-analyzer.check.command": "clippy",
}

View File

@@ -1,5 +1,275 @@
# Changelog
## 0.4.8
### Performance
- Linter performance has been improved by around 10% on some microbenchmarks by refactoring the lexer and parser to maintain synchronicity between them ([#11457](https://github.com/astral-sh/ruff/pull/11457))
### Preview features
- \[`flake8-bugbear`\] Implement `return-in-generator` (`B901`) ([#11644](https://github.com/astral-sh/ruff/pull/11644))
- \[`flake8-pyi`\] Implement `PYI063` ([#11699](https://github.com/astral-sh/ruff/pull/11699))
- \[`pygrep_hooks`\] Check blanket ignores via file-level pragmas (`PGH004`) ([#11540](https://github.com/astral-sh/ruff/pull/11540))
### Rule changes
- \[`pyupgrade`\] Update `UP035` for Python 3.13 and the latest version of `typing_extensions` ([#11693](https://github.com/astral-sh/ruff/pull/11693))
- \[`numpy`\] Update `NPY001` rule for NumPy 2.0 ([#11735](https://github.com/astral-sh/ruff/pull/11735))
### Server
- Formatting a document with syntax problems no longer spams a visible error popup ([#11745](https://github.com/astral-sh/ruff/pull/11745))
### CLI
- Add RDJson support for `--output-format` flag ([#11682](https://github.com/astral-sh/ruff/pull/11682))
### Bug fixes
- \[`pyupgrade`\] Write empty string in lieu of panic when fixing `UP032` ([#11696](https://github.com/astral-sh/ruff/pull/11696))
- \[`flake8-simplify`\] Simplify double negatives in `SIM103` ([#11684](https://github.com/astral-sh/ruff/pull/11684))
- Ensure the expression generator adds a newline before `type` statements ([#11720](https://github.com/astral-sh/ruff/pull/11720))
- Respect per-file ignores for blanket and redirected noqa rules ([#11728](https://github.com/astral-sh/ruff/pull/11728))
## 0.4.7
### Preview features
- \[`flake8-pyi`\] Implement `PYI064` ([#11325](https://github.com/astral-sh/ruff/pull/11325))
- \[`flake8-pyi`\] Implement `PYI066` ([#11541](https://github.com/astral-sh/ruff/pull/11541))
- \[`flake8-pyi`\] Implement `PYI057` ([#11486](https://github.com/astral-sh/ruff/pull/11486))
- \[`pyflakes`\] Enable `F822` in `__init__.py` files by default ([#11370](https://github.com/astral-sh/ruff/pull/11370))
### Formatter
- Fix incorrect placement of trailing stub function comments ([#11632](https://github.com/astral-sh/ruff/pull/11632))
### Server
- Respect file exclusions in `ruff server` ([#11590](https://github.com/astral-sh/ruff/pull/11590))
- Add support for documents that do not exist on disk ([#11588](https://github.com/astral-sh/ruff/pull/11588))
- Add Vim and Kate setup guide for `ruff server` ([#11615](https://github.com/astral-sh/ruff/pull/11615))
### Bug fixes
- Avoid removing newlines between docstring headers and rST blocks ([#11609](https://github.com/astral-sh/ruff/pull/11609))
- Infer indentation with imports when logical indent is absent ([#11608](https://github.com/astral-sh/ruff/pull/11608))
- Use char index rather than position for indent slice ([#11645](https://github.com/astral-sh/ruff/pull/11645))
- \[`flake8-comprehension`\] Strip parentheses around generators in `C400` ([#11607](https://github.com/astral-sh/ruff/pull/11607))
- Mark `repeated-isinstance-calls` as unsafe on Python 3.10 and later ([#11622](https://github.com/astral-sh/ruff/pull/11622))
## 0.4.6
### Breaking changes
- Use project-relative paths when calculating GitLab fingerprints ([#11532](https://github.com/astral-sh/ruff/pull/11532))
- Bump minimum supported Windows version to Windows 10 ([#11613](https://github.com/astral-sh/ruff/pull/11613))
### Preview features
- \[`flake8-async`\] Sleep with >24 hour interval should usually sleep forever (`ASYNC116`) ([#11498](https://github.com/astral-sh/ruff/pull/11498))
### Rule changes
- \[`numpy`\] Add missing functions to NumPy 2.0 migration rule ([#11528](https://github.com/astral-sh/ruff/pull/11528))
- \[`mccabe`\] Consider irrefutable pattern similar to `if .. else` for `C901` ([#11565](https://github.com/astral-sh/ruff/pull/11565))
- Consider `match`-`case` statements for `C901`, `PLR0912`, and `PLR0915` ([#11521](https://github.com/astral-sh/ruff/pull/11521))
- Remove empty strings when converting to f-string (`UP032`) ([#11524](https://github.com/astral-sh/ruff/pull/11524))
- \[`flake8-bandit`\] `request-without-timeout` should warn for `requests.request` ([#11548](https://github.com/astral-sh/ruff/pull/11548))
- \[`flake8-self`\] Ignore sunder accesses in `flake8-self` rules ([#11546](https://github.com/astral-sh/ruff/pull/11546))
- \[`pyupgrade`\] Lint for `TypeAliasType` usages (`UP040`) ([#11530](https://github.com/astral-sh/ruff/pull/11530))
### Server
- Respect excludes in `ruff server` configuration discovery ([#11551](https://github.com/astral-sh/ruff/pull/11551))
- Use default settings if initialization options is empty or not provided ([#11566](https://github.com/astral-sh/ruff/pull/11566))
- `ruff server` correctly treats `.pyi` files as stub files ([#11535](https://github.com/astral-sh/ruff/pull/11535))
- `ruff server` searches for configuration in parent directories ([#11537](https://github.com/astral-sh/ruff/pull/11537))
- `ruff server`: An empty code action filter no longer returns notebook source actions ([#11526](https://github.com/astral-sh/ruff/pull/11526))
### Bug fixes
- \[`flake8-logging-format`\] Fix autofix title in `logging-warn` (`G010`) ([#11514](https://github.com/astral-sh/ruff/pull/11514))
- \[`refurb`\] Avoid recommending `operator.itemgetter` with dependence on lambda arguments ([#11574](https://github.com/astral-sh/ruff/pull/11574))
- \[`flake8-simplify`\] Avoid recommending context manager in `__enter__` implementations ([#11575](https://github.com/astral-sh/ruff/pull/11575))
- Create intermediary directories for `--output-file` ([#11550](https://github.com/astral-sh/ruff/pull/11550))
- Propagate reads on global variables ([#11584](https://github.com/astral-sh/ruff/pull/11584))
- Treat all `singledispatch` arguments as runtime-required ([#11523](https://github.com/astral-sh/ruff/pull/11523))
## 0.4.5
### Ruff's language server is now in Beta
`v0.4.5` marks the official Beta release of `ruff server`, an integrated language server built into Ruff.
`ruff server` supports the same feature set as `ruff-lsp`, powering linting, formatting, and
code fixes in Ruff's editor integrations -- but with superior performance and
no installation required. We'd love your feedback!
You can enable `ruff server` in the [VS Code extension](https://github.com/astral-sh/ruff-vscode?tab=readme-ov-file#enabling-the-rust-based-language-server) today.
To read more about this exciting milestone, check out our [blog post](https://astral.sh/blog/ruff-v0.4.5)!
### Rule changes
- \[`flake8-future-annotations`\] Reword `future-rewritable-type-annotation` (`FA100`) message ([#11381](https://github.com/astral-sh/ruff/pull/11381))
- \[`pycodestyle`\] Consider soft keywords for `E27` rules ([#11446](https://github.com/astral-sh/ruff/pull/11446))
- \[`pyflakes`\] Recommend adding unused import bindings to `__all__` ([#11314](https://github.com/astral-sh/ruff/pull/11314))
- \[`pyflakes`\] Update documentation and deprecate `ignore_init_module_imports` ([#11436](https://github.com/astral-sh/ruff/pull/11436))
- \[`pyupgrade`\] Mark quotes as unnecessary for non-evaluated annotations ([#11485](https://github.com/astral-sh/ruff/pull/11485))
### Formatter
- Avoid multiline quotes warning with `quote-style = preserve` ([#11490](https://github.com/astral-sh/ruff/pull/11490))
### Server
- Support Jupyter Notebook files ([#11206](https://github.com/astral-sh/ruff/pull/11206))
- Support `noqa` comment code actions ([#11276](https://github.com/astral-sh/ruff/pull/11276))
- Fix automatic configuration reloading ([#11492](https://github.com/astral-sh/ruff/pull/11492))
- Fix several issues with configuration in Neovim and Helix ([#11497](https://github.com/astral-sh/ruff/pull/11497))
### CLI
- Add `--output-format` as a CLI option for `ruff config` ([#11438](https://github.com/astral-sh/ruff/pull/11438))
### Bug fixes
- Avoid `PLE0237` for property with setter ([#11377](https://github.com/astral-sh/ruff/pull/11377))
- Avoid `TCH005` for `if` stmt with `elif`/`else` block ([#11376](https://github.com/astral-sh/ruff/pull/11376))
- Avoid flagging `__future__` annotations as required for non-evaluated type annotations ([#11414](https://github.com/astral-sh/ruff/pull/11414))
- Check for ruff executable in 'bin' directory as installed by 'pip install --target'. ([#11450](https://github.com/astral-sh/ruff/pull/11450))
- Sort edits prior to deduplicating in quotation fix ([#11452](https://github.com/astral-sh/ruff/pull/11452))
- Treat escaped newline as valid sequence ([#11465](https://github.com/astral-sh/ruff/pull/11465))
- \[`flake8-pie`\] Preserve parentheses in `unnecessary-dict-kwargs` ([#11372](https://github.com/astral-sh/ruff/pull/11372))
- \[`pylint`\] Ignore `__slots__` with dynamic values ([#11488](https://github.com/astral-sh/ruff/pull/11488))
- \[`pylint`\] Remove `try` body from branch counting ([#11487](https://github.com/astral-sh/ruff/pull/11487))
- \[`refurb`\] Respect operator precedence in `FURB110` ([#11464](https://github.com/astral-sh/ruff/pull/11464))
### Documentation
- Add `--preview` to the README ([#11395](https://github.com/astral-sh/ruff/pull/11395))
- Add Python 3.13 to list of allowed Python versions ([#11411](https://github.com/astral-sh/ruff/pull/11411))
- Simplify Neovim setup documentation ([#11489](https://github.com/astral-sh/ruff/pull/11489))
- Update CONTRIBUTING.md to reflect the new parser ([#11434](https://github.com/astral-sh/ruff/pull/11434))
- Update server documentation with new migration guide ([#11499](https://github.com/astral-sh/ruff/pull/11499))
- \[`pycodestyle`\] Clarify motivation for `E713` and `E714` ([#11483](https://github.com/astral-sh/ruff/pull/11483))
- \[`pyflakes`\] Update docs to describe WAI behavior (F541) ([#11362](https://github.com/astral-sh/ruff/pull/11362))
- \[`pylint`\] Clearly indicate what is counted as a branch ([#11423](https://github.com/astral-sh/ruff/pull/11423))
## 0.4.4
### Preview features
- \[`pycodestyle`\] Ignore end-of-line comments when determining blank line rules ([#11342](https://github.com/astral-sh/ruff/pull/11342))
- \[`pylint`\] Detect `pathlib.Path.open` calls in `unspecified-encoding` (`PLW1514`) ([#11288](https://github.com/astral-sh/ruff/pull/11288))
- \[`flake8-pyi`\] Implement `PYI059` (`generic-not-last-base-class`) ([#11233](https://github.com/astral-sh/ruff/pull/11233))
- \[`flake8-pyi`\] Implement `PYI062` (`duplicate-literal-member`) ([#11269](https://github.com/astral-sh/ruff/pull/11269))
### Rule changes
- \[`flake8-boolean-trap`\] Allow passing booleans as positional-only arguments in code such as `set(True)` ([#11287](https://github.com/astral-sh/ruff/pull/11287))
- \[`flake8-bugbear`\] Ignore enum classes in `cached-instance-method` (`B019`) ([#11312](https://github.com/astral-sh/ruff/pull/11312))
### Server
- Expand tildes when resolving Ruff server configuration file ([#11283](https://github.com/astral-sh/ruff/pull/11283))
- Fix `ruff server` hanging after Neovim closes ([#11291](https://github.com/astral-sh/ruff/pull/11291))
- Editor settings are used by default if no file-based configuration exists ([#11266](https://github.com/astral-sh/ruff/pull/11266))
### Bug fixes
- \[`pylint`\] Consider `with` statements for `too-many-branches` (`PLR0912`) ([#11321](https://github.com/astral-sh/ruff/pull/11321))
- \[`flake8-blind-except`, `tryceratops`\] Respect logged and re-raised expressions in nested statements (`BLE001`, `TRY201`) ([#11301](https://github.com/astral-sh/ruff/pull/11301))
- Recognise assignments such as `__all__ = builtins.list(["foo", "bar"])` as valid `__all__` definitions ([#11335](https://github.com/astral-sh/ruff/pull/11335))
## 0.4.3
### Enhancements
- Add support for PEP 696 syntax ([#11120](https://github.com/astral-sh/ruff/pull/11120))
### Preview features
- \[`refurb`\] Use function range for `reimplemented-operator` diagnostics ([#11271](https://github.com/astral-sh/ruff/pull/11271))
- \[`refurb`\] Ignore methods in `reimplemented-operator` (`FURB118`) ([#11270](https://github.com/astral-sh/ruff/pull/11270))
- \[`refurb`\] Implement `fstring-number-format` (`FURB116`) ([#10921](https://github.com/astral-sh/ruff/pull/10921))
- \[`ruff`\] Implement `redirected-noqa` (`RUF101`) ([#11052](https://github.com/astral-sh/ruff/pull/11052))
- \[`pyflakes`\] Distinguish between first-party and third-party imports for fix suggestions ([#11168](https://github.com/astral-sh/ruff/pull/11168))
### Rule changes
- \[`flake8-bugbear`\] Ignore non-abstract class attributes when enforcing `B024` ([#11210](https://github.com/astral-sh/ruff/pull/11210))
- \[`flake8-logging`\] Include inline instantiations when detecting loggers ([#11154](https://github.com/astral-sh/ruff/pull/11154))
- \[`pylint`\] Also emit `PLR0206` for properties with variadic parameters ([#11200](https://github.com/astral-sh/ruff/pull/11200))
- \[`ruff`\] Detect duplicate codes as part of `unused-noqa` (`RUF100`) ([#10850](https://github.com/astral-sh/ruff/pull/10850))
### Formatter
- Avoid multiline expression if format specifier is present ([#11123](https://github.com/astral-sh/ruff/pull/11123))
### LSP
- Write `ruff server` setup guide for Helix ([#11183](https://github.com/astral-sh/ruff/pull/11183))
- `ruff server` no longer hangs after shutdown ([#11222](https://github.com/astral-sh/ruff/pull/11222))
- `ruff server` reads from a configuration TOML file in the user configuration directory if no local configuration exists ([#11225](https://github.com/astral-sh/ruff/pull/11225))
- `ruff server` respects `per-file-ignores` configuration ([#11224](https://github.com/astral-sh/ruff/pull/11224))
- `ruff server`: Support a custom TOML configuration file ([#11140](https://github.com/astral-sh/ruff/pull/11140))
- `ruff server`: Support setting to prioritize project configuration over editor configuration ([#11086](https://github.com/astral-sh/ruff/pull/11086))
### Bug fixes
- Avoid debug assertion around NFKC renames ([#11249](https://github.com/astral-sh/ruff/pull/11249))
- \[`pyflakes`\] Prioritize `redefined-while-unused` over `unused-import` ([#11173](https://github.com/astral-sh/ruff/pull/11173))
- \[`ruff`\] Respect `async` expressions in comprehension bodies ([#11219](https://github.com/astral-sh/ruff/pull/11219))
- \[`pygrep_hooks`\] Fix `blanket-noqa` panic when last line has noqa with no newline (`PGH004`) ([#11108](https://github.com/astral-sh/ruff/pull/11108))
- \[`perflint`\] Ignore list-copy recommendations for async `for` loops ([#11250](https://github.com/astral-sh/ruff/pull/11250))
- \[`pyflakes`\] Improve `invalid-print-syntax` documentation ([#11171](https://github.com/astral-sh/ruff/pull/11171))
### Performance
- Avoid allocations for isort module names ([#11251](https://github.com/astral-sh/ruff/pull/11251))
- Build a separate ARM wheel for macOS ([#11149](https://github.com/astral-sh/ruff/pull/11149))
### Windows
- Increase the minimum requirement to Windows 10.
## 0.4.2
### Rule changes
- \[`flake8-pyi`\] Allow for overloaded `__exit__` and `__aexit__` definitions (`PYI036`) ([#11057](https://github.com/astral-sh/ruff/pull/11057))
- \[`pyupgrade`\] Catch usages of `"%s" % var` and provide an unsafe fix (`UP031`) ([#11019](https://github.com/astral-sh/ruff/pull/11019))
- \[`refurb`\] Implement new rule that suggests min/max over `sorted()` (`FURB192`) ([#10868](https://github.com/astral-sh/ruff/pull/10868))
### Server
- Fix an issue with missing diagnostics for Neovim and Helix ([#11092](https://github.com/astral-sh/ruff/pull/11092))
- Implement hover documentation for `noqa` codes ([#11096](https://github.com/astral-sh/ruff/pull/11096))
- Introduce common Ruff configuration options with new server settings ([#11062](https://github.com/astral-sh/ruff/pull/11062))
### Bug fixes
- Use `macos-12` for building release wheels to enable macOS 11 compatibility ([#11146](https://github.com/astral-sh/ruff/pull/11146))
- \[`flake8-blind-except`\] Allow raise from in `BLE001` ([#11131](https://github.com/astral-sh/ruff/pull/11131))
- \[`flake8-pyi`\] Allow simple assignments to `None` in enum class scopes (`PYI026`) ([#11128](https://github.com/astral-sh/ruff/pull/11128))
- \[`flake8-simplify`\] Avoid raising `SIM911` for non-`zip` attribute calls ([#11126](https://github.com/astral-sh/ruff/pull/11126))
- \[`refurb`\] Avoid `operator.itemgetter` suggestion for single-item tuple ([#11095](https://github.com/astral-sh/ruff/pull/11095))
- \[`ruff`\] Respect per-file-ignores for `RUF100` with no other diagnostics ([#11058](https://github.com/astral-sh/ruff/pull/11058))
- \[`ruff`\] Fix async comprehension false positive (`RUF029`) ([#11070](https://github.com/astral-sh/ruff/pull/11070))
### Documentation
- \[`flake8-bugbear`\] Document explicitly disabling strict zip (`B905`) ([#11040](https://github.com/astral-sh/ruff/pull/11040))
- \[`flake8-type-checking`\] Mention `lint.typing-modules` in `TCH001`, `TCH002`, and `TCH003` ([#11144](https://github.com/astral-sh/ruff/pull/11144))
- \[`isort`\] Improve documentation around custom `isort` sections ([#11050](https://github.com/astral-sh/ruff/pull/11050))
- \[`pylint`\] Fix documentation oversight for `invalid-X-returns` ([#11094](https://github.com/astral-sh/ruff/pull/11094))
### Performance
- Use `matchit` to resolve per-file settings ([#11111](https://github.com/astral-sh/ruff/pull/11111))
## 0.4.1
### Preview features


@@ -101,6 +101,8 @@ pre-commit run --all-files --show-diff-on-failure # Rust and Python formatting,
These checks will run on GitHub Actions when you open your pull request, but running them locally
will save you time and expedite the merge process.
If you're using VS Code, you can also install the recommended [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer) extension to get these checks while editing.
Note that many code changes also require updating the snapshot tests, which is done interactively
after running `cargo test` like so:
@@ -637,11 +639,11 @@ Otherwise, follow the instructions from the linux section.
`cargo dev` is a shortcut for `cargo run --package ruff_dev --bin ruff_dev`. You can run some useful
utils with it:
- `cargo dev print-ast <file>`: Print the AST of a python file using the
[RustPython parser](https://github.com/astral-sh/ruff/tree/main/crates/ruff_python_parser) that is
mainly used in Ruff. For `if True: pass # comment`, you can see the syntax tree, the byte offsets
for start and stop of each node and also how the `:` token, the comment and whitespace are not
represented anymore:
- `cargo dev print-ast <file>`: Print the AST of a python file using Ruff's
[Python parser](https://github.com/astral-sh/ruff/tree/main/crates/ruff_python_parser).
For `if True: pass # comment`, you can see the syntax tree, the byte offsets for start and
stop of each node and also how the `:` token, the comment and whitespace are not represented
anymore:
```text
[

Cargo.lock (generated; diff too large to display)


@@ -4,7 +4,7 @@ resolver = "2"
[workspace.package]
edition = "2021"
rust-version = "1.71"
rust-version = "1.74"
homepage = "https://docs.astral.sh/ruff"
documentation = "https://docs.astral.sh/ruff"
repository = "https://github.com/astral-sh/ruff"
@@ -12,6 +12,28 @@ authors = ["Charlie Marsh <charlie.r.marsh@gmail.com>"]
license = "MIT"
[workspace.dependencies]
ruff = { path = "crates/ruff" }
ruff_cache = { path = "crates/ruff_cache" }
ruff_diagnostics = { path = "crates/ruff_diagnostics" }
ruff_formatter = { path = "crates/ruff_formatter" }
ruff_index = { path = "crates/ruff_index" }
ruff_linter = { path = "crates/ruff_linter" }
ruff_macros = { path = "crates/ruff_macros" }
ruff_notebook = { path = "crates/ruff_notebook" }
ruff_python_ast = { path = "crates/ruff_python_ast" }
ruff_python_codegen = { path = "crates/ruff_python_codegen" }
ruff_python_formatter = { path = "crates/ruff_python_formatter" }
ruff_python_index = { path = "crates/ruff_python_index" }
ruff_python_literal = { path = "crates/ruff_python_literal" }
ruff_python_parser = { path = "crates/ruff_python_parser" }
ruff_python_semantic = { path = "crates/ruff_python_semantic" }
ruff_python_stdlib = { path = "crates/ruff_python_stdlib" }
ruff_python_trivia = { path = "crates/ruff_python_trivia" }
ruff_server = { path = "crates/ruff_server" }
ruff_source_file = { path = "crates/ruff_source_file" }
ruff_text_size = { path = "crates/ruff_text_size" }
ruff_workspace = { path = "crates/ruff_workspace" }
aho-corasick = { version = "1.1.3" }
annotate-snippets = { version = "0.9.2", features = ["color"] }
anyhow = { version = "1.0.80" }
@@ -24,53 +46,55 @@ chrono = { version = "0.4.35", default-features = false, features = ["clock"] }
clap = { version = "4.5.3", features = ["derive"] }
clap_complete_command = { version = "0.5.1" }
clearscreen = { version = "3.0.0" }
codspeed-criterion-compat = { version = "2.4.0", default-features = false }
codspeed-criterion-compat = { version = "2.6.0", default-features = false }
colored = { version = "2.1.0" }
console_error_panic_hook = { version = "0.1.7" }
console_log = { version = "1.0.0" }
countme = { version = "3.0.1" }
criterion = { version = "0.5.1", default-features = false }
crossbeam = { version = "0.8.4" }
dashmap = { version = "5.5.3" }
dirs = { version = "5.0.0" }
drop_bomb = { version = "0.1.5" }
env_logger = { version = "0.11.0" }
fern = { version = "0.6.1" }
filetime = { version = "0.2.23" }
fs-err = { version = "2.11.0" }
glob = { version = "0.3.1" }
globset = { version = "0.4.14" }
hexf-parse = { version = "0.2.1" }
hashbrown = "0.14.3"
ignore = { version = "0.4.22" }
imara-diff = { version = "0.1.5" }
imperative = { version = "1.0.4" }
indexmap = { version = "2.2.6" }
indicatif = { version = "0.17.8" }
indoc = { version = "2.0.4" }
insta = { version = "1.35.1", features = ["filters", "glob"] }
insta-cmd = { version = "0.6.0" }
is-macro = { version = "0.3.5" }
is-wsl = { version = "0.4.0" }
itertools = { version = "0.12.1" }
itertools = { version = "0.13.0" }
js-sys = { version = "0.3.69" }
jod-thread = { version = "0.1.2" }
lexical-parse-float = { version = "0.8.0", features = ["format"] }
libc = { version = "0.2.153" }
libcst = { version = "1.1.0", default-features = false }
log = { version = "0.4.17" }
lsp-server = { version = "0.7.6" }
lsp-types = { version = "0.95.0", features = ["proposed"] }
lsp-types = { git = "https://github.com/astral-sh/lsp-types.git", rev = "3512a9f", features = ["proposed"] }
matchit = { version = "0.8.1" }
memchr = { version = "2.7.1" }
mimalloc = { version = "0.1.39" }
natord = { version = "1.0.9" }
notify = { version = "6.1.1" }
num_cpus = { version = "1.16.0" }
once_cell = { version = "1.19.0" }
path-absolutize = { version = "3.1.1" }
path-slash = { version = "0.2.1" }
pathdiff = { version = "0.2.1" }
parking_lot = "0.12.1"
pep440_rs = { version = "0.6.0", features = ["serde"] }
pretty_assertions = "1.3.0"
proc-macro2 = { version = "1.0.79" }
pyproject-toml = { version = "0.9.0" }
quick-junit = { version = "0.3.5" }
quick-junit = { version = "0.4.0" }
quote = { version = "1.0.23" }
rand = { version = "0.8.5" }
rayon = { version = "1.10.0" }
@@ -85,7 +109,6 @@ serde_json = { version = "1.0.113" }
serde_test = { version = "1.0.152" }
serde_with = { version = "3.6.0", default-features = false, features = ["macros"] }
shellexpand = { version = "3.0.0" }
shlex = { version = "1.3.0" }
similar = { version = "2.4.0", features = ["inline"] }
smallvec = { version = "1.13.2" }
static_assertions = "1.1.0"
@@ -126,6 +149,7 @@ char_lit_as_u8 = "allow"
collapsible_else_if = "allow"
collapsible_if = "allow"
implicit_hasher = "allow"
map_unwrap_or = "allow"
match_same_arms = "allow"
missing_errors_doc = "allow"
missing_panics_doc = "allow"


@@ -4,7 +4,7 @@
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![image](https://img.shields.io/pypi/v/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![image](https://img.shields.io/pypi/l/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![image](https://img.shields.io/pypi/l/ruff.svg)](https://github.com/astral-sh/ruff/blob/main/LICENSE)
[![image](https://img.shields.io/pypi/pyversions/ruff.svg)](https://pypi.python.org/pypi/ruff)
[![Actions status](https://github.com/astral-sh/ruff/workflows/CI/badge.svg)](https://github.com/astral-sh/ruff/actions)
[![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord&logoColor=white)](https://discord.com/invite/astral-sh)
@@ -50,6 +50,7 @@ times faster than any individual tool.
Ruff is extremely actively developed and used in major open-source projects like:
- [Apache Airflow](https://github.com/apache/airflow)
- [Apache Superset](https://github.com/apache/superset)
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Hugging Face](https://github.com/huggingface/transformers)
- [Pandas](https://github.com/pandas-dev/pandas)
@@ -151,7 +152,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.4.1
rev: v0.4.8
hooks:
# Run the linter.
- id: ruff
@@ -265,6 +266,11 @@ The remaining configuration options can be provided through a catch-all `--confi
ruff check --config "lint.per-file-ignores = {'some_file.py' = ['F841']}"
```
To opt in to the latest lint rules, formatter style changes, interface updates, and more, enable
[preview mode](https://docs.astral.sh/ruff/rules/) by setting `preview = true` in your configuration
file or passing `--preview` on the command line. Preview mode enables a collection of unstable
features that may change prior to stabilization.
See `ruff help` for more on Ruff's top-level commands, or `ruff help check` and `ruff help format`
for more on the linting and formatting commands, respectively.
@@ -402,6 +408,7 @@ Ruff is used by a number of major open-source projects and companies, including:
- [Dagster](https://github.com/dagster-io/dagster)
- Databricks ([MLflow](https://github.com/mlflow/mlflow))
- [FastAPI](https://github.com/tiangolo/fastapi)
- [Godot](https://github.com/godotengine/godot)
- [Gradio](https://github.com/gradio-app/gradio)
- [Great Expectations](https://github.com/great-expectations/great_expectations)
- [HTTPX](https://github.com/encode/httpx)
@@ -424,9 +431,10 @@ Ruff is used by a number of major open-source projects and companies, including:
- Microsoft ([Semantic Kernel](https://github.com/microsoft/semantic-kernel),
[ONNX Runtime](https://github.com/microsoft/onnxruntime),
[LightGBM](https://github.com/microsoft/LightGBM))
- Modern Treasury ([Python SDK](https://github.com/Modern-Treasury/modern-treasury-python-sdk))
- Modern Treasury ([Python SDK](https://github.com/Modern-Treasury/modern-treasury-python))
- Mozilla ([Firefox](https://github.com/mozilla/gecko-dev))
- [Mypy](https://github.com/python/mypy)
- [Nautobot](https://github.com/nautobot/nautobot)
- Netflix ([Dispatch](https://github.com/Netflix/dispatch))
- [Neon](https://github.com/neondatabase/neon)
- [Nokia](https://nokia.com/)
@@ -498,7 +506,7 @@ If you're using Ruff, consider adding the Ruff badge to your project's `README.m
## License
MIT
This repository is licensed under the [MIT License](https://github.com/astral-sh/ruff/blob/main/LICENSE)
<div align="center">
<a target="_blank" href="https://astral.sh" style="background:none">


@@ -1,6 +1,6 @@
[files]
# https://github.com/crate-ci/typos/issues/868
extend-exclude = ["**/resources/**/*", "**/snapshots/**/*"]
extend-exclude = ["crates/red_knot/vendor/**/*", "**/resources/**/*", "**/snapshots/**/*"]
[default.extend-words]
"arange" = "arange" # e.g. `numpy.arange`
@@ -11,6 +11,7 @@ ned = "ned"
pn = "pn" # `import panel as pd` is a thing
poit = "poit"
BA = "BA" # acronym for "Bad Allowed", used in testing.
jod = "jod" # e.g., `jod-thread`
[default]
extend-ignore-re = [


@@ -1,7 +1,13 @@
doc-valid-idents = [
"StackOverflow",
"CodeQL",
"IPython",
"NumPy",
"..",
"CodeQL",
"FastAPI",
"IPython",
"LangChain",
"LibCST",
"McCabe",
"NumPy",
"SCREAMING_SNAKE_CASE",
"SQLAlchemy",
"StackOverflow",
]


@@ -0,0 +1,41 @@
[package]
name = "red_knot"
version = "0.1.0"
edition.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation.workspace = true
repository.workspace = true
authors.workspace = true
license.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
ruff_python_parser = { workspace = true }
ruff_python_ast = { workspace = true }
ruff_text_size = { workspace = true }
ruff_index = { workspace = true }
ruff_notebook = { workspace = true }
anyhow = { workspace = true }
bitflags = { workspace = true }
crossbeam = { workspace = true }
ctrlc = { version = "3.4.4" }
dashmap = { workspace = true }
hashbrown = { workspace = true }
indexmap = { workspace = true }
notify = { workspace = true }
parking_lot = { workspace = true }
rayon = { workspace = true }
rustc-hash = { workspace = true }
smol_str = { version = "0.2.1" }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
tracing-tree = { workspace = true }
[dev-dependencies]
tempfile = { workspace = true }
[lints]
workspace = true


@@ -0,0 +1,9 @@
# Red Knot
The Red Knot crate contains code working towards multifile analysis, type inference and, ultimately, type-checking. It's very much a work in progress for now.
## Vendored types for the stdlib
Red Knot vendors [typeshed](https://github.com/python/typeshed)'s stubs for the standard library. The vendored stubs can be found in `crates/red_knot/vendor/typeshed`. The file `crates/red_knot/vendor/typeshed/source_commit.txt` tells you the typeshed commit that our vendored stdlib stubs currently correspond to.
The typeshed stubs are updated every two weeks via an automated PR using the `sync_typeshed.yaml` workflow in the `.github/workflows` directory. This workflow can also be triggered at any time via [workflow dispatch](https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow#running-a-workflow).


@@ -0,0 +1,418 @@
use std::any::type_name;
use std::fmt::{Debug, Formatter};
use std::hash::{Hash, Hasher};
use std::marker::PhantomData;
use rustc_hash::FxHashMap;
use ruff_index::{Idx, IndexVec};
use ruff_python_ast::visitor::preorder;
use ruff_python_ast::visitor::preorder::{PreorderVisitor, TraversalSignal};
use ruff_python_ast::{
AnyNodeRef, AstNode, ExceptHandler, ExceptHandlerExceptHandler, Expr, MatchCase, ModModule,
NodeKind, Parameter, Stmt, StmtAnnAssign, StmtAssign, StmtAugAssign, StmtClassDef,
StmtFunctionDef, StmtGlobal, StmtImport, StmtImportFrom, StmtNonlocal, StmtTypeAlias,
TypeParam, TypeParamParamSpec, TypeParamTypeVar, TypeParamTypeVarTuple, WithItem,
};
use ruff_text_size::{Ranged, TextRange};
/// A type agnostic ID that uniquely identifies an AST node in a file.
#[ruff_index::newtype_index]
pub struct AstId;
/// A typed ID that uniquely identifies an AST node in a file.
///
/// This is different from [`AstId`] in that it is a combination of ID and the type of the node the ID identifies.
/// Typing the ID prevents mixing IDs of different node types and allows restricting the API to only accept
/// nodes for which an ID has been created (not all AST nodes get an ID).
pub struct TypedAstId<N: HasAstId> {
erased: AstId,
_marker: PhantomData<fn() -> N>,
}
impl<N: HasAstId> TypedAstId<N> {
/// Upcasts this ID from a more specific node type to a more general node type.
pub fn upcast<M: HasAstId>(self) -> TypedAstId<M>
where
N: Into<M>,
{
TypedAstId {
erased: self.erased,
_marker: PhantomData,
}
}
}
impl<N: HasAstId> Copy for TypedAstId<N> {}
impl<N: HasAstId> Clone for TypedAstId<N> {
fn clone(&self) -> Self {
*self
}
}
impl<N: HasAstId> PartialEq for TypedAstId<N> {
fn eq(&self, other: &Self) -> bool {
self.erased == other.erased
}
}
impl<N: HasAstId> Eq for TypedAstId<N> {}
impl<N: HasAstId> Hash for TypedAstId<N> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.erased.hash(state);
}
}
impl<N: HasAstId> Debug for TypedAstId<N> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_tuple("TypedAstId")
.field(&self.erased)
.field(&type_name::<N>())
.finish()
}
}
pub struct AstIds {
ids: IndexVec<AstId, NodeKey>,
reverse: FxHashMap<NodeKey, AstId>,
}
impl AstIds {
// TODO rust analyzer doesn't allocate an ID for every node. It only allocates ids for
// nodes with a corresponding HIR element, that is nodes that are definitions.
pub fn from_module(module: &ModModule) -> Self {
let mut visitor = AstIdsVisitor::default();
// TODO: visit_module?
// Make sure we visit the root
visitor.create_id(module);
visitor.visit_body(&module.body);
while let Some(deferred) = visitor.deferred.pop() {
match deferred {
DeferredNode::FunctionDefinition(def) => {
def.visit_preorder(&mut visitor);
}
DeferredNode::ClassDefinition(def) => def.visit_preorder(&mut visitor),
}
}
AstIds {
ids: visitor.ids,
reverse: visitor.reverse,
}
}
/// Returns the ID to the root node.
pub fn root(&self) -> NodeKey {
self.ids[AstId::new(0)]
}
/// Returns the [`TypedAstId`] for a node.
pub fn ast_id<N: HasAstId>(&self, node: &N) -> TypedAstId<N> {
let key = node.syntax_node_key();
TypedAstId {
erased: self.reverse.get(&key).copied().unwrap(),
_marker: PhantomData,
}
}
/// Returns the [`TypedAstId`] for the node identified with the given [`TypedNodeKey`].
pub fn ast_id_for_key<N: HasAstId>(&self, node: &TypedNodeKey<N>) -> TypedAstId<N> {
let ast_id = self.ast_id_for_node_key(node.inner);
TypedAstId {
erased: ast_id,
_marker: PhantomData,
}
}
/// Returns the untyped [`AstId`] for the node identified by the given `node` key.
pub fn ast_id_for_node_key(&self, node: NodeKey) -> AstId {
self.reverse
.get(&node)
.copied()
.expect("Can't find node in AstIds map.")
}
/// Returns the [`TypedNodeKey`] for the node identified by the given [`TypedAstId`].
pub fn key<N: HasAstId>(&self, id: TypedAstId<N>) -> TypedNodeKey<N> {
let syntax_key = self.ids[id.erased];
TypedNodeKey::new(syntax_key).unwrap()
}
pub fn node_key<H: HasAstId>(&self, id: TypedAstId<H>) -> NodeKey {
self.ids[id.erased]
}
}
impl std::fmt::Debug for AstIds {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let mut map = f.debug_map();
for (key, value) in self.ids.iter_enumerated() {
map.entry(&key, &value);
}
map.finish()
}
}
impl PartialEq for AstIds {
fn eq(&self, other: &Self) -> bool {
self.ids == other.ids
}
}
impl Eq for AstIds {}
#[derive(Default)]
struct AstIdsVisitor<'a> {
ids: IndexVec<AstId, NodeKey>,
reverse: FxHashMap<NodeKey, AstId>,
deferred: Vec<DeferredNode<'a>>,
}
impl<'a> AstIdsVisitor<'a> {
fn create_id<A: HasAstId>(&mut self, node: &A) {
let node_key = node.syntax_node_key();
let id = self.ids.push(node_key);
self.reverse.insert(node_key, id);
}
}
impl<'a> PreorderVisitor<'a> for AstIdsVisitor<'a> {
fn visit_stmt(&mut self, stmt: &'a Stmt) {
match stmt {
Stmt::FunctionDef(def) => {
self.create_id(def);
self.deferred.push(DeferredNode::FunctionDefinition(def));
return;
}
// TODO defer visiting the assignment body, type alias parameters etc?
Stmt::ClassDef(def) => {
self.create_id(def);
self.deferred.push(DeferredNode::ClassDefinition(def));
return;
}
Stmt::Expr(_) => {
// Skip
return;
}
Stmt::Return(_) => {}
Stmt::Delete(_) => {}
Stmt::Assign(assignment) => self.create_id(assignment),
Stmt::AugAssign(assignment) => {
self.create_id(assignment);
}
Stmt::AnnAssign(assignment) => self.create_id(assignment),
Stmt::TypeAlias(assignment) => self.create_id(assignment),
Stmt::For(_) => {}
Stmt::While(_) => {}
Stmt::If(_) => {}
Stmt::With(_) => {}
Stmt::Match(_) => {}
Stmt::Raise(_) => {}
Stmt::Try(_) => {}
Stmt::Assert(_) => {}
Stmt::Import(import) => self.create_id(import),
Stmt::ImportFrom(import_from) => self.create_id(import_from),
Stmt::Global(global) => self.create_id(global),
Stmt::Nonlocal(non_local) => self.create_id(non_local),
Stmt::Pass(_) => {}
Stmt::Break(_) => {}
Stmt::Continue(_) => {}
Stmt::IpyEscapeCommand(_) => {}
}
preorder::walk_stmt(self, stmt);
}
fn visit_expr(&mut self, _expr: &'a Expr) {}
fn visit_parameter(&mut self, parameter: &'a Parameter) {
self.create_id(parameter);
preorder::walk_parameter(self, parameter);
}
fn visit_except_handler(&mut self, except_handler: &'a ExceptHandler) {
match except_handler {
ExceptHandler::ExceptHandler(except_handler) => {
self.create_id(except_handler);
}
}
preorder::walk_except_handler(self, except_handler);
}
fn visit_with_item(&mut self, with_item: &'a WithItem) {
self.create_id(with_item);
preorder::walk_with_item(self, with_item);
}
fn visit_match_case(&mut self, match_case: &'a MatchCase) {
self.create_id(match_case);
preorder::walk_match_case(self, match_case);
}
fn visit_type_param(&mut self, type_param: &'a TypeParam) {
self.create_id(type_param);
}
}
enum DeferredNode<'a> {
FunctionDefinition(&'a StmtFunctionDef),
ClassDefinition(&'a StmtClassDef),
}
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct TypedNodeKey<N: AstNode> {
/// The type erased node key.
inner: NodeKey,
_marker: PhantomData<fn() -> N>,
}
impl<N: AstNode> TypedNodeKey<N> {
pub fn from_node(node: &N) -> Self {
let inner = NodeKey::from_node(node.as_any_node_ref());
Self {
inner,
_marker: PhantomData,
}
}
pub fn new(node_key: NodeKey) -> Option<Self> {
N::can_cast(node_key.kind).then_some(TypedNodeKey {
inner: node_key,
_marker: PhantomData,
})
}
pub fn resolve<'a>(&self, root: AnyNodeRef<'a>) -> Option<N::Ref<'a>> {
let node_ref = self.inner.resolve(root)?;
Some(N::cast_ref(node_ref).unwrap())
}
pub fn resolve_unwrap<'a>(&self, root: AnyNodeRef<'a>) -> N::Ref<'a> {
self.resolve(root).expect("node should resolve")
}
pub fn erased(&self) -> &NodeKey {
&self.inner
}
}
struct FindNodeKeyVisitor<'a> {
key: NodeKey,
result: Option<AnyNodeRef<'a>>,
}
impl<'a> PreorderVisitor<'a> for FindNodeKeyVisitor<'a> {
fn enter_node(&mut self, node: AnyNodeRef<'a>) -> TraversalSignal {
if self.result.is_some() {
return TraversalSignal::Skip;
}
if node.range() == self.key.range && node.kind() == self.key.kind {
self.result = Some(node);
TraversalSignal::Skip
} else if node.range().contains_range(self.key.range) {
TraversalSignal::Traverse
} else {
TraversalSignal::Skip
}
}
fn visit_body(&mut self, body: &'a [Stmt]) {
// TODO it would be more efficient to use binary search instead of linear
for stmt in body {
if stmt.range().start() > self.key.range.end() {
break;
}
self.visit_stmt(stmt);
}
}
}
// TODO an alternative to this is to have a `NodeId` on each node (in increasing order depending on the position).
// This would allow reducing the size of this to a u32.
// It would be nice if we could use an `Arc::weak_ref` here, but that only works if we use
// `Arc` internally.
// TODO: Implement the logic to resolve a node, given a db (and the correct file).
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct NodeKey {
kind: NodeKind,
range: TextRange,
}
impl NodeKey {
pub fn from_node(node: AnyNodeRef) -> Self {
NodeKey {
kind: node.kind(),
range: node.range(),
}
}
pub fn resolve<'a>(&self, root: AnyNodeRef<'a>) -> Option<AnyNodeRef<'a>> {
// We need to do a binary search here. Only traverse into a node if the range is within the node
let mut visitor = FindNodeKeyVisitor {
key: *self,
result: None,
};
if visitor.enter_node(root) == TraversalSignal::Traverse {
root.visit_preorder(&mut visitor);
}
visitor.result
}
}
/// Marker trait implemented by AST nodes for which we extract the `AstId`.
pub trait HasAstId: AstNode {
fn node_key(&self) -> TypedNodeKey<Self>
where
Self: Sized,
{
TypedNodeKey {
inner: self.syntax_node_key(),
_marker: PhantomData,
}
}
fn syntax_node_key(&self) -> NodeKey {
NodeKey {
kind: self.as_any_node_ref().kind(),
range: self.range(),
}
}
}
impl HasAstId for StmtFunctionDef {}
impl HasAstId for StmtClassDef {}
impl HasAstId for StmtAnnAssign {}
impl HasAstId for StmtAugAssign {}
impl HasAstId for StmtAssign {}
impl HasAstId for StmtTypeAlias {}
impl HasAstId for ModModule {}
impl HasAstId for StmtImport {}
impl HasAstId for StmtImportFrom {}
impl HasAstId for Parameter {}
impl HasAstId for TypeParam {}
impl HasAstId for Stmt {}
impl HasAstId for TypeParamTypeVar {}
impl HasAstId for TypeParamTypeVarTuple {}
impl HasAstId for TypeParamParamSpec {}
impl HasAstId for StmtGlobal {}
impl HasAstId for StmtNonlocal {}
impl HasAstId for ExceptHandlerExceptHandler {}
impl HasAstId for WithItem {}
impl HasAstId for MatchCase {}
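
A minimal usage sketch of the ID machinery above, assuming a `ModModule` obtained from `ruff_python_parser`; it collects a stable, typed ID for every top-level function definition:

```rust
use ruff_python_ast::{ModModule, Stmt, StmtFunctionDef};

// Build the ID table once for the module, then look up the typed ID of each
// top-level function definition.
fn function_ids(module: &ModModule) -> Vec<TypedAstId<StmtFunctionDef>> {
    let ids = AstIds::from_module(module);
    module
        .body
        .iter()
        .filter_map(|stmt| match stmt {
            Stmt::FunctionDef(def) => Some(ids.ast_id(def)),
            _ => None,
        })
        .collect()
}
```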


@@ -0,0 +1,165 @@
use std::fmt::Formatter;
use std::hash::Hash;
use std::sync::atomic::{AtomicUsize, Ordering};
use crate::db::QueryResult;
use dashmap::mapref::entry::Entry;
use crate::FxDashMap;
/// Simple key value cache that locks on a per-key level.
pub struct KeyValueCache<K, V> {
map: FxDashMap<K, V>,
statistics: CacheStatistics,
}
impl<K, V> KeyValueCache<K, V>
where
K: Eq + Hash + Clone,
V: Clone,
{
pub fn try_get(&self, key: &K) -> Option<V> {
if let Some(existing) = self.map.get(key) {
self.statistics.hit();
Some(existing.clone())
} else {
self.statistics.miss();
None
}
}
pub fn get<F>(&self, key: &K, compute: F) -> QueryResult<V>
where
F: FnOnce(&K) -> QueryResult<V>,
{
Ok(match self.map.entry(key.clone()) {
Entry::Occupied(cached) => {
self.statistics.hit();
cached.get().clone()
}
Entry::Vacant(vacant) => {
self.statistics.miss();
let value = compute(key)?;
vacant.insert(value.clone());
value
}
})
}
pub fn set(&mut self, key: K, value: V) {
self.map.insert(key, value);
}
pub fn remove(&mut self, key: &K) -> Option<V> {
self.map.remove(key).map(|(_, value)| value)
}
pub fn clear(&mut self) {
self.map.clear();
self.map.shrink_to_fit();
}
pub fn statistics(&self) -> Option<Statistics> {
self.statistics.to_statistics()
}
}
impl<K, V> Default for KeyValueCache<K, V>
where
K: Eq + Hash,
V: Clone,
{
fn default() -> Self {
Self {
map: FxDashMap::default(),
statistics: CacheStatistics::default(),
}
}
}
impl<K, V> std::fmt::Debug for KeyValueCache<K, V>
where
K: std::fmt::Debug + Eq + Hash,
V: std::fmt::Debug,
{
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let mut debug = f.debug_map();
for entry in &self.map {
debug.entry(&entry.value(), &entry.key());
}
debug.finish()
}
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Statistics {
pub hits: usize,
pub misses: usize,
}
impl Statistics {
#[allow(clippy::cast_precision_loss)]
pub fn hit_rate(&self) -> Option<f64> {
if self.hits + self.misses == 0 {
return None;
}
Some((self.hits as f64) / (self.hits + self.misses) as f64)
}
}
#[cfg(debug_assertions)]
pub type CacheStatistics = DebugStatistics;
#[cfg(not(debug_assertions))]
pub type CacheStatistics = ReleaseStatistics;
pub trait StatisticsRecorder {
fn hit(&self);
fn miss(&self);
fn to_statistics(&self) -> Option<Statistics>;
}
#[derive(Debug, Default)]
pub struct DebugStatistics {
hits: AtomicUsize,
misses: AtomicUsize,
}
impl StatisticsRecorder for DebugStatistics {
// TODO figure out appropriate Ordering
fn hit(&self) {
self.hits.fetch_add(1, Ordering::SeqCst);
}
fn miss(&self) {
self.misses.fetch_add(1, Ordering::SeqCst);
}
fn to_statistics(&self) -> Option<Statistics> {
let hits = self.hits.load(Ordering::SeqCst);
let misses = self.misses.load(Ordering::SeqCst);
Some(Statistics { hits, misses })
}
}
#[derive(Debug, Default)]
pub struct ReleaseStatistics;
impl StatisticsRecorder for ReleaseStatistics {
#[inline]
fn hit(&self) {}
#[inline]
fn miss(&self) {}
#[inline]
fn to_statistics(&self) -> Option<Statistics> {
None
}
}
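
A minimal usage sketch of `KeyValueCache::get`, assuming a `FileId` key (from `files.rs`) and source text that has already been loaded:

```rust
// Memoise a per-file computation: return the cached value, or run the closure once
// and store its result. Cancellation errors from the closure propagate unchanged.
fn cached_line_count(
    cache: &KeyValueCache<FileId, usize>,
    file: FileId,
    source: &str,
) -> QueryResult<usize> {
    cache.get(&file, |_id| Ok(source.lines().count()))
}
```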


@@ -0,0 +1,42 @@
use std::sync::atomic::AtomicBool;
use std::sync::Arc;
#[derive(Debug, Clone, Default)]
pub struct CancellationTokenSource {
signal: Arc<AtomicBool>,
}
impl CancellationTokenSource {
pub fn new() -> Self {
Self {
signal: Arc::new(AtomicBool::new(false)),
}
}
#[tracing::instrument(level = "trace", skip_all)]
pub fn cancel(&self) {
self.signal.store(true, std::sync::atomic::Ordering::SeqCst);
}
pub fn is_cancelled(&self) -> bool {
self.signal.load(std::sync::atomic::Ordering::SeqCst)
}
pub fn token(&self) -> CancellationToken {
CancellationToken {
signal: self.signal.clone(),
}
}
}
#[derive(Clone, Debug)]
pub struct CancellationToken {
signal: Arc<AtomicBool>,
}
impl CancellationToken {
/// Returns `true` if cancellation has been requested.
pub fn is_cancelled(&self) -> bool {
self.signal.load(std::sync::atomic::Ordering::SeqCst)
}
}
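
A small sketch of the intended protocol, assuming the two types are reachable from the caller: the host hands out tokens and workers poll them between units of work.

```rust
fn main() {
    let source = CancellationTokenSource::new();
    let token = source.token();

    let worker = std::thread::spawn(move || {
        // Check the shared flag between units of work and stop once it is set.
        while !token.is_cancelled() {
            // ... do one unit of work ...
        }
    });

    source.cancel(); // the host requests cancellation
    worker.join().unwrap(); // the worker observes the flag and exits
}
```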

crates/red_knot/src/db.rs

@@ -0,0 +1,248 @@
use std::sync::Arc;
pub use jars::{HasJar, HasJars};
pub use query::{QueryError, QueryResult};
pub use runtime::DbRuntime;
pub use storage::JarsStorage;
use crate::files::FileId;
use crate::lint::{LintSemanticStorage, LintSyntaxStorage};
use crate::module::ModuleResolver;
use crate::parse::ParsedStorage;
use crate::semantic::SemanticIndexStorage;
use crate::semantic::TypeStore;
use crate::source::SourceStorage;
mod jars;
mod query;
mod runtime;
mod storage;
pub trait Database {
/// Returns a reference to the runtime of the current worker.
fn runtime(&self) -> &DbRuntime;
/// Returns a mutable reference to the runtime. Only one worker can hold a mutable reference to the runtime.
fn runtime_mut(&mut self) -> &mut DbRuntime;
/// Returns `Ok` if the queries have not been cancelled and `Err(QueryError::Cancelled)` otherwise.
fn cancelled(&self) -> QueryResult<()> {
self.runtime().cancelled()
}
/// Returns `true` if the queries have been cancelled.
fn is_cancelled(&self) -> bool {
self.runtime().is_cancelled()
}
}
/// Database that supports running queries from multiple threads.
pub trait ParallelDatabase: Database + Send {
/// Creates a snapshot of the database state that can be used to query the database in another thread.
///
/// The snapshot is a read-only view of the database but query results are shared between threads.
/// All queries will be automatically cancelled when applying any mutations (calling [`HasJars::jars_mut`])
/// to the database (not the snapshot, because they're readonly).
///
/// ## Creating a snapshot
///
/// Creating a snapshot of the database's jars is cheap but creating a snapshot of
/// other state stored on the database might require deep-cloning data. That's why you should
/// avoid creating snapshots in a hot function (e.g. don't create a snapshot for each file, instead
/// create a snapshot when scheduling the check of an entire program).
///
/// ## Salsa compatibility
/// Salsa prohibits creating a snapshot while running a local query (it's fine if other workers run a query) [[source](https://github.com/salsa-rs/salsa/issues/80)].
/// We should avoid creating snapshots while running a query because we might want to adopt Salsa in the future (if we can figure out persistent caching).
/// Unfortunately, the infrastructure doesn't provide an automated way of knowing when a query is run, that's
/// why we have to "enforce" this constraint manually.
#[must_use]
fn snapshot(&self) -> Snapshot<Self>;
}
pub trait DbWithJar<Jar>: Database + HasJar<Jar> {}
/// Readonly snapshot of a database.
///
/// ## Deadlocks
/// A snapshot should always be dropped as soon as it is no longer necessary to run queries.
/// Storing the snapshot without running a query or periodically checking if cancellation was requested
/// can lead to deadlocks because mutating the [`Database`] requires cancelling all pending queries
/// and waiting for all [`Snapshot`]s to be dropped.
#[derive(Debug)]
pub struct Snapshot<DB: ?Sized>
where
DB: ParallelDatabase,
{
db: DB,
}
impl<DB> Snapshot<DB>
where
DB: ParallelDatabase,
{
pub fn new(db: DB) -> Self {
Snapshot { db }
}
}
impl<DB> std::ops::Deref for Snapshot<DB>
where
DB: ParallelDatabase,
{
type Target = DB;
fn deref(&self) -> &DB {
&self.db
}
}
pub trait Upcast<T: ?Sized> {
fn upcast(&self) -> &T;
}
// Red knot specific databases code.
pub trait SourceDb: DbWithJar<SourceJar> {
// queries
fn file_id(&self, path: &std::path::Path) -> FileId;
fn file_path(&self, file_id: FileId) -> Arc<std::path::Path>;
}
pub trait SemanticDb: SourceDb + DbWithJar<SemanticJar> + Upcast<dyn SourceDb> {}
pub trait LintDb: SemanticDb + DbWithJar<LintJar> + Upcast<dyn SemanticDb> {}
pub trait Db: LintDb + Upcast<dyn LintDb> {}
#[derive(Debug, Default)]
pub struct SourceJar {
pub sources: SourceStorage,
pub parsed: ParsedStorage,
}
#[derive(Debug, Default)]
pub struct SemanticJar {
pub module_resolver: ModuleResolver,
pub semantic_indices: SemanticIndexStorage,
pub type_store: TypeStore,
}
#[derive(Debug, Default)]
pub struct LintJar {
pub lint_syntax: LintSyntaxStorage,
pub lint_semantic: LintSemanticStorage,
}
#[cfg(test)]
pub(crate) mod tests {
use std::path::Path;
use std::sync::Arc;
use crate::db::{
Database, DbRuntime, DbWithJar, HasJar, HasJars, JarsStorage, LintDb, LintJar, QueryResult,
SourceDb, SourceJar, Upcast,
};
use crate::files::{FileId, Files};
use super::{SemanticDb, SemanticJar};
// This can be a partial database used in a single crate for testing.
// It would hold fewer data than the full database.
#[derive(Debug, Default)]
pub(crate) struct TestDb {
files: Files,
jars: JarsStorage<Self>,
}
impl HasJar<SourceJar> for TestDb {
fn jar(&self) -> QueryResult<&SourceJar> {
Ok(&self.jars()?.0)
}
fn jar_mut(&mut self) -> &mut SourceJar {
&mut self.jars_mut().0
}
}
impl HasJar<SemanticJar> for TestDb {
fn jar(&self) -> QueryResult<&SemanticJar> {
Ok(&self.jars()?.1)
}
fn jar_mut(&mut self) -> &mut SemanticJar {
&mut self.jars_mut().1
}
}
impl HasJar<LintJar> for TestDb {
fn jar(&self) -> QueryResult<&LintJar> {
Ok(&self.jars()?.2)
}
fn jar_mut(&mut self) -> &mut LintJar {
&mut self.jars_mut().2
}
}
impl SourceDb for TestDb {
fn file_id(&self, path: &Path) -> FileId {
self.files.intern(path)
}
fn file_path(&self, file_id: FileId) -> Arc<Path> {
self.files.path(file_id)
}
}
impl DbWithJar<SourceJar> for TestDb {}
impl Upcast<dyn SourceDb> for TestDb {
fn upcast(&self) -> &(dyn SourceDb + 'static) {
self
}
}
impl SemanticDb for TestDb {}
impl DbWithJar<SemanticJar> for TestDb {}
impl Upcast<dyn SemanticDb> for TestDb {
fn upcast(&self) -> &(dyn SemanticDb + 'static) {
self
}
}
impl LintDb for TestDb {}
impl Upcast<dyn LintDb> for TestDb {
fn upcast(&self) -> &(dyn LintDb + 'static) {
self
}
}
impl DbWithJar<LintJar> for TestDb {}
impl HasJars for TestDb {
type Jars = (SourceJar, SemanticJar, LintJar);
fn jars(&self) -> QueryResult<&Self::Jars> {
self.jars.jars()
}
fn jars_mut(&mut self) -> &mut Self::Jars {
self.jars.jars_mut()
}
}
impl Database for TestDb {
fn runtime(&self) -> &DbRuntime {
self.jars.runtime()
}
fn runtime_mut(&mut self) -> &mut DbRuntime {
self.jars.runtime_mut()
}
}
}
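
A rough sketch of the snapshot protocol described above, assuming some concrete database type implementing `ParallelDatabase`:

```rust
// Run a read-only query on another thread. The snapshot shares the jars (no deep clone)
// and its queries are cancelled as soon as the host mutates the database.
fn spawn_check<Db>(db: &Db) -> std::thread::JoinHandle<QueryResult<()>>
where
    Db: ParallelDatabase + 'static,
{
    let snapshot = db.snapshot();
    std::thread::spawn(move || {
        snapshot.cancelled()?; // bail out if cancellation was requested in the meantime
        // ... run read-only queries against `snapshot` here ...
        Ok(())
    })
}
```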


@@ -0,0 +1,37 @@
use crate::db::query::QueryResult;
/// Gives access to a specific jar in the database.
///
/// Nope, the terminology isn't borrowed from Java but from Salsa <https://salsa-rs.github.io/salsa/>,
/// which is an analogy to storing the salsa in different jars.
///
/// The basic idea is that each crate can define its own jar and the jars can be combined to a single
/// database in the top level crate. Each crate also defines its own `Database` trait. The combination of
/// `Database` trait and the jar allows writing queries in isolation without having to know how they get composed at the upper levels.
///
/// Salsa further defines a `HasIngredient` trait which slices the jar to a specific storage (e.g. a specific cache).
/// We don't need this just yet because we write our queries by hand. We may want a similar trait if we decide
/// to use a macro to generate the queries.
pub trait HasJar<T> {
/// Gives a read-only reference to the jar.
fn jar(&self) -> QueryResult<&T>;
/// Gives a mutable reference to the jar.
fn jar_mut(&mut self) -> &mut T;
}
/// Gives access to the jars in a database.
pub trait HasJars {
/// A type storing the jars.
///
/// Most commonly, this is a tuple where each jar is a tuple element.
type Jars: Default;
/// Gives access to the underlying jars but tests if the queries have been cancelled.
///
/// Returns `Err(QueryError::Cancelled)` if the queries have been cancelled.
fn jars(&self) -> QueryResult<&Self::Jars>;
/// Gives mutable access to the underlying jars.
fn jars_mut(&mut self) -> &mut Self::Jars;
}
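
A minimal sketch of a crate-local query written against `HasJar`; `MyJar` and the returned value are hypothetical:

```rust
#[derive(Default)]
struct MyJar {
    // per-crate caches and storages would live here
}

// Read-only access to the jar; returns `Err(QueryError::Cancelled)` when the queries
// have been cancelled (as in the `TestDb` implementations shown above).
fn my_query<Db: HasJar<MyJar>>(db: &Db) -> QueryResult<usize> {
    let _jar = db.jar()?;
    Ok(0)
}
```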


@@ -0,0 +1,20 @@
use std::fmt::{Display, Formatter};
/// Reason why a db query operation failed.
#[derive(Debug, Clone, Copy)]
pub enum QueryError {
/// The query was cancelled because the DB was mutated or the query was cancelled by the host (e.g. on a file change or when pressing CTRL+C).
Cancelled,
}
impl Display for QueryError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
QueryError::Cancelled => f.write_str("query was cancelled"),
}
}
}
impl std::error::Error for QueryError {}
pub type QueryResult<T> = Result<T, QueryError>;
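
A short sketch of how `QueryResult` composes with the cancellation check on the `Database` trait from `db.rs`:

```rust
// Propagate cancellation with `?`: the query stops early if the host cancelled.
fn my_query<Db: Database>(db: &Db) -> QueryResult<u32> {
    db.cancelled()?;
    // ... do the actual work ...
    Ok(42)
}
```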


@@ -0,0 +1,41 @@
use crate::cancellation::CancellationTokenSource;
use crate::db::{QueryError, QueryResult};
/// Holds the jar agnostic state of the database.
#[derive(Debug, Default)]
pub struct DbRuntime {
/// The cancellation token source used to signal other workers that the queries should be aborted and
/// exit at the next possible point.
cancellation_token: CancellationTokenSource,
}
impl DbRuntime {
pub(super) fn snapshot(&self) -> Self {
Self {
cancellation_token: self.cancellation_token.clone(),
}
}
/// Cancels the pending queries of other workers. The current worker cannot have any pending
/// queries because we're holding a mutable reference to the runtime.
pub(super) fn cancel_other_workers(&mut self) {
self.cancellation_token.cancel();
// Set a new cancellation token so that we're in a non-cancelled state again when running the next
// query.
self.cancellation_token = CancellationTokenSource::default();
}
/// Returns `Ok` if the queries have not been cancelled and `Err(QueryError::Cancelled)` otherwise.
pub(super) fn cancelled(&self) -> QueryResult<()> {
if self.cancellation_token.is_cancelled() {
Err(QueryError::Cancelled)
} else {
Ok(())
}
}
/// Returns `true` if the queries have been cancelled.
pub(super) fn is_cancelled(&self) -> bool {
self.cancellation_token.is_cancelled()
}
}


@@ -0,0 +1,117 @@
use std::fmt::Formatter;
use std::sync::Arc;
use crossbeam::sync::WaitGroup;
use crate::db::query::QueryResult;
use crate::db::runtime::DbRuntime;
use crate::db::{HasJars, ParallelDatabase};
/// Stores the jars of a database and the state for each worker.
///
/// Today, all state is shared across all workers, but it may be desired to store data per worker in the future.
pub struct JarsStorage<T>
where
T: HasJars + Sized,
{
// It's important that `jars_wait_group` is declared after `jars` to ensure that `jars` is dropped first.
// See https://doc.rust-lang.org/reference/destructors.html
/// Stores the jars of the database.
jars: Arc<T::Jars>,
/// Used to count the references to `jars`. Allows implementing `jars_mut` without having to clone `jars`.
jars_wait_group: WaitGroup,
/// The data agnostic state.
runtime: DbRuntime,
}
impl<Db> JarsStorage<Db>
where
Db: HasJars,
{
pub(super) fn new() -> Self {
Self {
jars: Arc::new(Db::Jars::default()),
jars_wait_group: WaitGroup::default(),
runtime: DbRuntime::default(),
}
}
/// Creates a snapshot of the jars.
///
/// Creating the snapshot is cheap because it doesn't clone the jars, it only increments a ref counter.
#[must_use]
pub fn snapshot(&self) -> JarsStorage<Db>
where
Db: ParallelDatabase,
{
Self {
jars: self.jars.clone(),
jars_wait_group: self.jars_wait_group.clone(),
runtime: self.runtime.snapshot(),
}
}
pub(crate) fn jars(&self) -> QueryResult<&Db::Jars> {
self.runtime.cancelled()?;
Ok(&self.jars)
}
/// Returns a mutable reference to the jars without cloning their content.
///
/// The method cancels any pending queries of other workers and waits for them to complete so that
/// this instance is the only instance holding a reference to the jars.
pub(crate) fn jars_mut(&mut self) -> &mut Db::Jars {
// We have a mutable ref here, so no more workers can be spawned between calling this function and taking the mut ref below.
self.cancel_other_workers();
// Now all other references to `self.jars` should have been released. We can now safely return a mutable reference
// to the Arc's content.
let jars =
Arc::get_mut(&mut self.jars).expect("All references to jars should have been released");
jars
}
pub(crate) fn runtime(&self) -> &DbRuntime {
&self.runtime
}
pub(crate) fn runtime_mut(&mut self) -> &mut DbRuntime {
// Note: This method may need to use a similar trick to `jars_mut` if `DbRuntime` is ever to store data that is shared between workers.
&mut self.runtime
}
#[tracing::instrument(level = "trace", skip(self))]
fn cancel_other_workers(&mut self) {
self.runtime.cancel_other_workers();
// Wait for all other workers to complete.
let existing_wait = std::mem::take(&mut self.jars_wait_group);
existing_wait.wait();
}
}
impl<Db> Default for JarsStorage<Db>
where
Db: HasJars,
{
fn default() -> Self {
Self::new()
}
}
impl<T> std::fmt::Debug for JarsStorage<T>
where
T: HasJars,
<T as HasJars>::Jars: std::fmt::Debug,
{
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("SharedStorage")
.field("jars", &self.jars)
.field("jars_wait_group", &self.jars_wait_group)
.field("runtime", &self.runtime)
.finish()
}
}
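
A crate-internal sketch of the write path described above; the mutation itself is hypothetical:

```rust
// Obtaining `&mut` access to the jars first cancels other workers' pending queries and
// waits for their snapshots to be released, so the `Arc` is uniquely owned again.
fn apply_change<Db: HasJars>(storage: &mut JarsStorage<Db>) {
    let jars = storage.jars_mut();
    // ... mutate the jars here, e.g. invalidate caches for a changed file ...
    let _ = jars;
}
```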


@@ -0,0 +1,180 @@
use std::fmt::{Debug, Formatter};
use std::hash::{Hash, Hasher};
use std::path::Path;
use std::sync::Arc;
use hashbrown::hash_map::RawEntryMut;
use parking_lot::RwLock;
use rustc_hash::FxHasher;
use ruff_index::{newtype_index, IndexVec};
type Map<K, V> = hashbrown::HashMap<K, V, ()>;
#[newtype_index]
pub struct FileId;
// TODO we'll need a higher level virtual file system abstraction that allows testing if a file exists
// or retrieving its content (ideally lazily and in a way that the memory can be retained later)
// I suspect that we'll end up with a FileSystem trait and our own Path abstraction.
#[derive(Default)]
pub struct Files {
inner: Arc<RwLock<FilesInner>>,
}
impl Files {
#[tracing::instrument(level = "debug", skip(self))]
pub fn intern(&self, path: &Path) -> FileId {
self.inner.write().intern(path)
}
pub fn try_get(&self, path: &Path) -> Option<FileId> {
self.inner.read().try_get(path)
}
#[tracing::instrument(level = "debug", skip(self))]
pub fn path(&self, id: FileId) -> Arc<Path> {
self.inner.read().path(id)
}
/// Snapshots files for a new database snapshot.
///
/// This method should not be used outside a database snapshot.
#[must_use]
pub fn snapshot(&self) -> Files {
Files {
inner: self.inner.clone(),
}
}
}
impl Debug for Files {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let files = self.inner.read();
let mut debug = f.debug_map();
for item in files.iter() {
debug.entry(&item.0, &item.1);
}
debug.finish()
}
}
impl PartialEq for Files {
fn eq(&self, other: &Self) -> bool {
self.inner.read().eq(&other.inner.read())
}
}
impl Eq for Files {}
#[derive(Default)]
struct FilesInner {
by_path: Map<FileId, ()>,
// TODO should we use a map here to reclaim the space for removed files?
// TODO I think we should use our own path abstraction here to avoid having to normalize paths
// and deal with non-UTF-8 paths everywhere.
by_id: IndexVec<FileId, Arc<Path>>,
}
impl FilesInner {
/// Inserts the path and returns a new id for it, or the existing id if the path was already interned.
// TODO should this accept Path or PathBuf?
pub(crate) fn intern(&mut self, path: &Path) -> FileId {
let hash = FilesInner::hash_path(path);
let entry = self
.by_path
.raw_entry_mut()
.from_hash(hash, |existing_file| &*self.by_id[*existing_file] == path);
match entry {
RawEntryMut::Occupied(entry) => *entry.key(),
RawEntryMut::Vacant(entry) => {
let id = self.by_id.push(Arc::from(path));
entry.insert_with_hasher(hash, id, (), |file| {
FilesInner::hash_path(&self.by_id[*file])
});
id
}
}
}
fn hash_path(path: &Path) -> u64 {
let mut hasher = FxHasher::default();
path.hash(&mut hasher);
hasher.finish()
}
pub(crate) fn try_get(&self, path: &Path) -> Option<FileId> {
let mut hasher = FxHasher::default();
path.hash(&mut hasher);
let hash = hasher.finish();
Some(
*self
.by_path
.raw_entry()
.from_hash(hash, |existing_file| &*self.by_id[*existing_file] == path)?
.0,
)
}
/// Returns the path for the file with the given id.
pub(crate) fn path(&self, id: FileId) -> Arc<Path> {
self.by_id[id].clone()
}
pub(crate) fn iter(&self) -> impl Iterator<Item = (FileId, Arc<Path>)> + '_ {
self.by_path.keys().map(|id| (*id, self.by_id[*id].clone()))
}
}
impl PartialEq for FilesInner {
fn eq(&self, other: &Self) -> bool {
self.by_id == other.by_id
}
}
impl Eq for FilesInner {}
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
#[test]
fn insert_path_twice_same_id() {
let files = Files::default();
let path = PathBuf::from("foo/bar");
let id1 = files.intern(&path);
let id2 = files.intern(&path);
assert_eq!(id1, id2);
}
#[test]
fn insert_different_paths_different_ids() {
let files = Files::default();
let path1 = PathBuf::from("foo/bar");
let path2 = PathBuf::from("foo/bar/baz");
let id1 = files.intern(&path1);
let id2 = files.intern(&path2);
assert_ne!(id1, id2);
}
#[test]
fn four_files() {
let files = Files::default();
let foo_path = PathBuf::from("foo");
let foo_id = files.intern(&foo_path);
let bar_path = PathBuf::from("bar");
files.intern(&bar_path);
let baz_path = PathBuf::from("baz");
files.intern(&baz_path);
let qux_path = PathBuf::from("qux");
files.intern(&qux_path);
let foo_id_2 = files.try_get(&foo_path).expect("foo_path to be found");
assert_eq!(foo_id_2, foo_id);
}
}
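
Aside (illustrative sketch, not part of this commit): because `Files::snapshot` only clones the inner `Arc`, a snapshot and the original share the same interner and hand out identical ids, using only the APIs shown above.

fn example() {
    use std::path::Path;

    let files = Files::default();
    let id = files.intern(Path::new("src/main.py"));

    // The snapshot sees the same interned paths and ids.
    let snapshot = files.snapshot();
    assert_eq!(snapshot.try_get(Path::new("src/main.py")), Some(id));
    assert_eq!(&*snapshot.path(id), Path::new("src/main.py"));
}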


@@ -0,0 +1,67 @@
//! Key observations
//!
//! The HIR (High-Level Intermediate Representation) avoids allocations to a large extent by:
//! * Using an arena per node type
//! * using ids and id ranges to reference items.
//!
//! Using a separate arena per node type has the advantage that the IDs are relatively stable, because
//! they only change when a node of the same kind has been added or removed. (What's unclear is whether that matters,
//! or whether it still triggers a re-compute because the AST id in the node has changed.)
//!
//! The HIR does not store all details. It mainly stores the *public* interface. There's a reference
//! back to the AST node to get more details.
//!
//!
use crate::ast_ids::{HasAstId, TypedAstId};
use crate::files::FileId;
use std::fmt::Formatter;
use std::hash::{Hash, Hasher};
pub struct HirAstId<N: HasAstId> {
file_id: FileId,
node_id: TypedAstId<N>,
}
impl<N: HasAstId> Copy for HirAstId<N> {}
impl<N: HasAstId> Clone for HirAstId<N> {
fn clone(&self) -> Self {
*self
}
}
impl<N: HasAstId> PartialEq for HirAstId<N> {
fn eq(&self, other: &Self) -> bool {
self.file_id == other.file_id && self.node_id == other.node_id
}
}
impl<N: HasAstId> Eq for HirAstId<N> {}
impl<N: HasAstId> std::fmt::Debug for HirAstId<N> {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("HirAstId")
.field("file_id", &self.file_id)
.field("node_id", &self.node_id)
.finish()
}
}
impl<N: HasAstId> Hash for HirAstId<N> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.file_id.hash(state);
self.node_id.hash(state);
}
}
impl<N: HasAstId> HirAstId<N> {
pub fn upcast<M: HasAstId>(self) -> HirAstId<M>
where
N: Into<M>,
{
HirAstId {
file_id: self.file_id,
node_id: self.node_id.upcast(),
}
}
}
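
Aside (illustrative sketch, not from this commit): the "arena per node type" idea the module doc describes boils down to one `Vec`-backed arena per node kind, with typed indices instead of pointers.

struct FuncIdx(u32);
struct ClassIdx(u32);

#[derive(Default)]
struct Arenas {
    functions: Vec<FunctionData>, // indexed by FuncIdx
    classes: Vec<ClassData>,      // indexed by ClassIdx
}

struct FunctionData {
    name: String,
}

struct ClassData {
    name: String,
}

impl Arenas {
    fn push_function(&mut self, name: &str) -> FuncIdx {
        let id = FuncIdx(self.functions.len() as u32);
        self.functions.push(FunctionData { name: name.to_string() });
        id
    }
}

Adding a class never shifts any function index, which is what keeps the ids relatively stable across edits.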


@@ -0,0 +1,556 @@
use std::ops::{Index, Range};
use ruff_index::{newtype_index, IndexVec};
use ruff_python_ast::visitor::preorder;
use ruff_python_ast::visitor::preorder::PreorderVisitor;
use ruff_python_ast::{
Decorator, ExceptHandler, ExceptHandlerExceptHandler, Expr, MatchCase, ModModule, Stmt,
StmtAnnAssign, StmtAssign, StmtClassDef, StmtFunctionDef, StmtGlobal, StmtImport,
StmtImportFrom, StmtNonlocal, StmtTypeAlias, TypeParam, TypeParamParamSpec, TypeParamTypeVar,
TypeParamTypeVarTuple, WithItem,
};
use crate::ast_ids::{AstIds, HasAstId};
use crate::files::FileId;
use crate::hir::HirAstId;
use crate::Name;
#[newtype_index]
pub struct FunctionId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Function {
ast_id: HirAstId<StmtFunctionDef>,
name: Name,
parameters: Range<ParameterId>,
type_parameters: Range<TypeParameterId>, // TODO: type_parameters, return expression, decorators
}
#[newtype_index]
pub struct ParameterId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Parameter {
kind: ParameterKind,
name: Name,
default: Option<()>, // TODO use expression HIR
ast_id: HirAstId<ruff_python_ast::Parameter>,
}
// TODO or should `Parameter` be an enum?
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub enum ParameterKind {
PositionalOnly,
Arguments,
Vararg,
KeywordOnly,
Kwarg,
}
#[newtype_index]
pub struct ClassId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Class {
name: Name,
ast_id: HirAstId<StmtClassDef>,
// TODO type parameters, inheritance, decorators, members
}
#[newtype_index]
pub struct AssignmentId;
// This can have more than one name...
// but that means we can't implement `name()` on `ModuleItem`.
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Assignment {
// TODO: Handle multiple names / targets
name: Name,
ast_id: HirAstId<StmtAssign>,
}
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct AnnotatedAssignment {
name: Name,
ast_id: HirAstId<StmtAnnAssign>,
}
#[newtype_index]
pub struct AnnotatedAssignmentId;
#[newtype_index]
pub struct TypeAliasId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct TypeAlias {
name: Name,
ast_id: HirAstId<StmtTypeAlias>,
parameters: Range<TypeParameterId>,
}
#[newtype_index]
pub struct TypeParameterId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub enum TypeParameter {
TypeVar(TypeParameterTypeVar),
ParamSpec(TypeParameterParamSpec),
TypeVarTuple(TypeParameterTypeVarTuple),
}
impl TypeParameter {
pub fn ast_id(&self) -> HirAstId<TypeParam> {
match self {
TypeParameter::TypeVar(type_var) => type_var.ast_id.upcast(),
TypeParameter::ParamSpec(param_spec) => param_spec.ast_id.upcast(),
TypeParameter::TypeVarTuple(type_var_tuple) => type_var_tuple.ast_id.upcast(),
}
}
}
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct TypeParameterTypeVar {
name: Name,
ast_id: HirAstId<TypeParamTypeVar>,
}
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct TypeParameterParamSpec {
name: Name,
ast_id: HirAstId<TypeParamParamSpec>,
}
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct TypeParameterTypeVarTuple {
name: Name,
ast_id: HirAstId<TypeParamTypeVarTuple>,
}
#[newtype_index]
pub struct GlobalId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Global {
// TODO track names
ast_id: HirAstId<StmtGlobal>,
}
#[newtype_index]
pub struct NonLocalId;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct NonLocal {
// TODO track names
ast_id: HirAstId<StmtNonlocal>,
}
pub enum DefinitionId {
Function(FunctionId),
Parameter(ParameterId),
Class(ClassId),
Assignment(AssignmentId),
AnnotatedAssignment(AnnotatedAssignmentId),
Global(GlobalId),
NonLocal(NonLocalId),
TypeParameter(TypeParameterId),
TypeAlias(TypeAlias),
}
pub enum DefinitionItem {
Function(Function),
Parameter(Parameter),
Class(Class),
Assignment(Assignment),
AnnotatedAssignment(AnnotatedAssignment),
Global(Global),
NonLocal(NonLocal),
TypeParameter(TypeParameter),
TypeAlias(TypeAlias),
}
// The closest analog is rust-analyzer's `ItemTree`. It only represents "items", which make up the public interface of a module
// (it excludes any other statements or expressions). rust-analyzer uses it as the main input to the name resolution
// algorithm:
// > It is the input to the name resolution algorithm, as well as to the queries defined in `adt.rs`,
// > `data.rs`, and most things in `attr.rs`.
//
// > One important purpose of this layer is to provide an "invalidation barrier" for incremental
// > computations: when typing inside an item body, the `ItemTree` of the modified file is typically
// > unaffected, so we don't have to recompute name resolution results or item data (see `data.rs`).
//
// I haven't fully figured this out but I think that this composes the "public" interface of a module?
// But maybe that's too optimistic.
//
//
#[derive(Debug, Clone, Default, Eq, PartialEq)]
pub struct Definitions {
functions: IndexVec<FunctionId, Function>,
parameters: IndexVec<ParameterId, Parameter>,
classes: IndexVec<ClassId, Class>,
assignments: IndexVec<AssignmentId, Assignment>,
annotated_assignments: IndexVec<AnnotatedAssignmentId, AnnotatedAssignment>,
type_aliases: IndexVec<TypeAliasId, TypeAlias>,
type_parameters: IndexVec<TypeParameterId, TypeParameter>,
globals: IndexVec<GlobalId, Global>,
non_locals: IndexVec<NonLocalId, NonLocal>,
}
impl Definitions {
pub fn from_module(module: &ModModule, ast_ids: &AstIds, file_id: FileId) -> Self {
let mut visitor = DefinitionsVisitor {
definitions: Definitions::default(),
ast_ids,
file_id,
};
visitor.visit_body(&module.body);
visitor.definitions
}
}
impl Index<FunctionId> for Definitions {
type Output = Function;
fn index(&self, index: FunctionId) -> &Self::Output {
&self.functions[index]
}
}
impl Index<ParameterId> for Definitions {
type Output = Parameter;
fn index(&self, index: ParameterId) -> &Self::Output {
&self.parameters[index]
}
}
impl Index<ClassId> for Definitions {
type Output = Class;
fn index(&self, index: ClassId) -> &Self::Output {
&self.classes[index]
}
}
impl Index<AssignmentId> for Definitions {
type Output = Assignment;
fn index(&self, index: AssignmentId) -> &Self::Output {
&self.assignments[index]
}
}
impl Index<AnnotatedAssignmentId> for Definitions {
type Output = AnnotatedAssignment;
fn index(&self, index: AnnotatedAssignmentId) -> &Self::Output {
&self.annotated_assignments[index]
}
}
impl Index<TypeAliasId> for Definitions {
type Output = TypeAlias;
fn index(&self, index: TypeAliasId) -> &Self::Output {
&self.type_aliases[index]
}
}
impl Index<GlobalId> for Definitions {
type Output = Global;
fn index(&self, index: GlobalId) -> &Self::Output {
&self.globals[index]
}
}
impl Index<NonLocalId> for Definitions {
type Output = NonLocal;
fn index(&self, index: NonLocalId) -> &Self::Output {
&self.non_locals[index]
}
}
impl Index<TypeParameterId> for Definitions {
type Output = TypeParameter;
fn index(&self, index: TypeParameterId) -> &Self::Output {
&self.type_parameters[index]
}
}
struct DefinitionsVisitor<'a> {
definitions: Definitions,
ast_ids: &'a AstIds,
file_id: FileId,
}
impl DefinitionsVisitor<'_> {
fn ast_id<N: HasAstId>(&self, node: &N) -> HirAstId<N> {
HirAstId {
file_id: self.file_id,
node_id: self.ast_ids.ast_id(node),
}
}
fn lower_function_def(&mut self, function: &StmtFunctionDef) -> FunctionId {
let name = Name::new(&function.name);
let first_type_parameter_id = self.definitions.type_parameters.next_index();
let mut last_type_parameter_id = first_type_parameter_id;
if let Some(type_params) = &function.type_params {
for parameter in &type_params.type_params {
let id = self.lower_type_parameter(parameter);
last_type_parameter_id = id;
}
}
let parameters = self.lower_parameters(&function.parameters);
self.definitions.functions.push(Function {
name,
ast_id: self.ast_id(function),
parameters,
type_parameters: first_type_parameter_id..last_type_parameter_id,
})
}
fn lower_parameters(&mut self, parameters: &ruff_python_ast::Parameters) -> Range<ParameterId> {
let first_parameter_id = self.definitions.parameters.next_index();
let mut last_parameter_id = first_parameter_id;
for parameter in &parameters.posonlyargs {
last_parameter_id = self.definitions.parameters.push(Parameter {
kind: ParameterKind::PositionalOnly,
name: Name::new(&parameter.parameter.name),
default: None,
ast_id: self.ast_id(&parameter.parameter),
});
}
if let Some(vararg) = &parameters.vararg {
last_parameter_id = self.definitions.parameters.push(Parameter {
kind: ParameterKind::Vararg,
name: Name::new(&vararg.name),
default: None,
ast_id: self.ast_id(vararg),
});
}
for parameter in &parameters.kwonlyargs {
last_parameter_id = self.definitions.parameters.push(Parameter {
kind: ParameterKind::KeywordOnly,
name: Name::new(&parameter.parameter.name),
default: None,
ast_id: self.ast_id(&parameter.parameter),
});
}
if let Some(kwarg) = &parameters.kwarg {
last_parameter_id = self.definitions.parameters.push(Parameter {
kind: ParameterKind::Kwarg,
name: Name::new(&kwarg.name),
default: None,
ast_id: self.ast_id(kwarg),
});
}
first_parameter_id..last_parameter_id
}
fn lower_class_def(&mut self, class: &StmtClassDef) -> ClassId {
let name = Name::new(&class.name);
self.definitions.classes.push(Class {
name,
ast_id: self.ast_id(class),
})
}
fn lower_assignment(&mut self, assignment: &StmtAssign) {
// FIXME handle multiple names
if let Some(Expr::Name(name)) = assignment.targets.first() {
self.definitions.assignments.push(Assignment {
name: Name::new(&name.id),
ast_id: self.ast_id(assignment),
});
}
}
fn lower_annotated_assignment(&mut self, annotated_assignment: &StmtAnnAssign) {
if let Expr::Name(name) = &*annotated_assignment.target {
self.definitions
.annotated_assignments
.push(AnnotatedAssignment {
name: Name::new(&name.id),
ast_id: self.ast_id(annotated_assignment),
});
}
}
fn lower_type_alias(&mut self, type_alias: &StmtTypeAlias) {
if let Expr::Name(name) = &*type_alias.name {
let name = Name::new(&name.id);
let lower_parameters_id = self.definitions.type_parameters.next_index();
let mut last_parameter_id = lower_parameters_id;
if let Some(type_params) = &type_alias.type_params {
for type_parameter in &type_params.type_params {
let id = self.lower_type_parameter(type_parameter);
last_parameter_id = id;
}
}
self.definitions.type_aliases.push(TypeAlias {
name,
ast_id: self.ast_id(type_alias),
parameters: lower_parameters_id..last_parameter_id,
});
}
}
fn lower_type_parameter(&mut self, type_parameter: &TypeParam) -> TypeParameterId {
match type_parameter {
TypeParam::TypeVar(type_var) => {
self.definitions
.type_parameters
.push(TypeParameter::TypeVar(TypeParameterTypeVar {
name: Name::new(&type_var.name),
ast_id: self.ast_id(type_var),
}))
}
TypeParam::ParamSpec(param_spec) => {
self.definitions
.type_parameters
.push(TypeParameter::ParamSpec(TypeParameterParamSpec {
name: Name::new(&param_spec.name),
ast_id: self.ast_id(param_spec),
}))
}
TypeParam::TypeVarTuple(type_var_tuple) => {
self.definitions
.type_parameters
.push(TypeParameter::TypeVarTuple(TypeParameterTypeVarTuple {
name: Name::new(&type_var_tuple.name),
ast_id: self.ast_id(type_var_tuple),
}))
}
}
}
fn lower_import(&mut self, _import: &StmtImport) {
// TODO
}
fn lower_import_from(&mut self, _import_from: &StmtImportFrom) {
// TODO
}
fn lower_global(&mut self, global: &StmtGlobal) -> GlobalId {
self.definitions.globals.push(Global {
ast_id: self.ast_id(global),
})
}
fn lower_non_local(&mut self, non_local: &StmtNonlocal) -> NonLocalId {
self.definitions.non_locals.push(NonLocal {
ast_id: self.ast_id(non_local),
})
}
fn lower_except_handler(&mut self, _except_handler: &ExceptHandlerExceptHandler) {
// TODO
}
fn lower_with_item(&mut self, _with_item: &WithItem) {
// TODO
}
fn lower_match_case(&mut self, _match_case: &MatchCase) {
// TODO
}
}
impl PreorderVisitor<'_> for DefinitionsVisitor<'_> {
fn visit_stmt(&mut self, stmt: &Stmt) {
match stmt {
// Definition statements
Stmt::FunctionDef(definition) => {
self.lower_function_def(definition);
self.visit_body(&definition.body);
}
Stmt::ClassDef(definition) => {
self.lower_class_def(definition);
self.visit_body(&definition.body);
}
Stmt::Assign(assignment) => {
self.lower_assignment(assignment);
}
Stmt::AnnAssign(annotated_assignment) => {
self.lower_annotated_assignment(annotated_assignment);
}
Stmt::TypeAlias(type_alias) => {
self.lower_type_alias(type_alias);
}
Stmt::Import(import) => self.lower_import(import),
Stmt::ImportFrom(import_from) => self.lower_import_from(import_from),
Stmt::Global(global) => {
self.lower_global(global);
}
Stmt::Nonlocal(non_local) => {
self.lower_non_local(non_local);
}
// Visit the compound statement bodies because they can contain other definitions.
Stmt::For(_)
| Stmt::While(_)
| Stmt::If(_)
| Stmt::With(_)
| Stmt::Match(_)
| Stmt::Try(_) => {
preorder::walk_stmt(self, stmt);
}
// Skip over simple statements because they can't contain any other definitions.
Stmt::Return(_)
| Stmt::Delete(_)
| Stmt::AugAssign(_)
| Stmt::Raise(_)
| Stmt::Assert(_)
| Stmt::Expr(_)
| Stmt::Pass(_)
| Stmt::Break(_)
| Stmt::Continue(_)
| Stmt::IpyEscapeCommand(_) => {
// No op
}
}
}
fn visit_expr(&mut self, _: &'_ Expr) {}
fn visit_decorator(&mut self, _decorator: &'_ Decorator) {}
fn visit_except_handler(&mut self, except_handler: &'_ ExceptHandler) {
match except_handler {
ExceptHandler::ExceptHandler(except_handler) => {
self.lower_except_handler(except_handler);
}
}
}
fn visit_with_item(&mut self, with_item: &'_ WithItem) {
self.lower_with_item(with_item);
}
fn visit_match_case(&mut self, match_case: &'_ MatchCase) {
self.lower_match_case(match_case);
self.visit_body(&match_case.body);
}
}
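
Aside (hypothetical helper, not part of this commit): the `Index` impls above are what make the typed ids usable as plain lookups. Code in the same module could, for example, write:

fn function_name(definitions: &Definitions, id: FunctionId) -> &Name {
    // `Definitions` implements `Index<FunctionId>`, so the id can be used directly.
    let function: &Function = &definitions[id];
    &function.name
}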

crates/red_knot/src/lib.rs (new file, 108 lines)

@@ -0,0 +1,108 @@
use std::fmt::Formatter;
use std::hash::BuildHasherDefault;
use std::ops::Deref;
use std::path::{Path, PathBuf};
use rustc_hash::{FxHashSet, FxHasher};
use crate::files::FileId;
pub mod ast_ids;
pub mod cache;
pub mod cancellation;
pub mod db;
pub mod files;
pub mod hir;
pub mod lint;
pub mod module;
mod parse;
pub mod program;
mod semantic;
pub mod source;
pub mod watch;
pub(crate) type FxDashMap<K, V> = dashmap::DashMap<K, V, BuildHasherDefault<FxHasher>>;
#[allow(unused)]
pub(crate) type FxDashSet<V> = dashmap::DashSet<V, BuildHasherDefault<FxHasher>>;
pub(crate) type FxIndexSet<V> = indexmap::set::IndexSet<V, BuildHasherDefault<FxHasher>>;
#[derive(Debug, Clone)]
pub struct Workspace {
/// TODO this should be a resolved path. We should probably use a newtype wrapper that guarantees that
/// the path is UTF-8 and normalized.
root: PathBuf,
/// The files that are open in the workspace.
///
/// * Editor: The files that are actively being edited in the editor (the user has a tab open with the file).
/// * CLI: The resolved files passed as arguments to the CLI.
open_files: FxHashSet<FileId>,
}
impl Workspace {
pub fn new(root: PathBuf) -> Self {
Self {
root,
open_files: FxHashSet::default(),
}
}
pub fn root(&self) -> &Path {
self.root.as_path()
}
// TODO having the content in workspace feels wrong.
pub fn open_file(&mut self, file_id: FileId) {
self.open_files.insert(file_id);
}
pub fn close_file(&mut self, file_id: FileId) {
self.open_files.remove(&file_id);
}
// TODO introduce an `OpenFile` type instead of using an anonymous tuple.
pub fn open_files(&self) -> impl Iterator<Item = FileId> + '_ {
self.open_files.iter().copied()
}
pub fn is_file_open(&self, file_id: FileId) -> bool {
self.open_files.contains(&file_id)
}
}
#[derive(Debug, Clone, Eq, PartialEq, Hash)]
pub struct Name(smol_str::SmolStr);
impl Name {
#[inline]
pub fn new(name: &str) -> Self {
Self(smol_str::SmolStr::new(name))
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
}
impl Deref for Name {
type Target = str;
#[inline]
fn deref(&self) -> &Self::Target {
self.as_str()
}
}
impl<T> From<T> for Name
where
T: Into<smol_str::SmolStr>,
{
fn from(value: T) -> Self {
Self(value.into())
}
}
impl std::fmt::Display for Name {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(self.as_str())
}
}
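
Aside (illustrative only, not part of this commit): a short sketch of how `Workspace`, `Files`, and `Name` compose, using only the APIs shown above.

use std::path::{Path, PathBuf};

fn example(files: &red_knot::files::Files) {
    let mut workspace = red_knot::Workspace::new(PathBuf::from("/project"));

    // Intern a path to get its stable id, then mark it as open in the workspace.
    let file_id = files.intern(Path::new("/project/main.py"));
    workspace.open_file(file_id);
    assert!(workspace.is_file_open(file_id));

    // `Name` is a cheap, clone-friendly string wrapper (backed by `SmolStr`) that derefs to `str`.
    let name = red_knot::Name::new("foo");
    assert_eq!(&*name, "foo");
}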

crates/red_knot/src/lint.rs (new file, 332 lines)

@@ -0,0 +1,332 @@
use std::cell::RefCell;
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use std::time::Duration;
use ruff_python_ast::visitor::Visitor;
use ruff_python_ast::{ModModule, StringLiteral};
use ruff_python_parser::Parsed;
use crate::cache::KeyValueCache;
use crate::db::{LintDb, LintJar, QueryResult};
use crate::files::FileId;
use crate::module::{resolve_module, ModuleName};
use crate::parse::parse;
use crate::semantic::{infer_definition_type, infer_symbol_public_type, Type};
use crate::semantic::{
resolve_global_symbol, semantic_index, Definition, GlobalSymbolId, SemanticIndex, SymbolId,
};
use crate::source::{source_text, Source};
#[tracing::instrument(level = "debug", skip(db))]
pub(crate) fn lint_syntax(db: &dyn LintDb, file_id: FileId) -> QueryResult<Diagnostics> {
let lint_jar: &LintJar = db.jar()?;
let storage = &lint_jar.lint_syntax;
#[allow(clippy::print_stdout)]
if std::env::var("RED_KNOT_SLOW_LINT").is_ok() {
for i in 0..10 {
db.cancelled()?;
println!("RED_KNOT_SLOW_LINT is set, sleeping for {i}/10 seconds");
std::thread::sleep(Duration::from_secs(1));
}
}
storage.get(&file_id, |file_id| {
let mut diagnostics = Vec::new();
let source = source_text(db.upcast(), *file_id)?;
lint_lines(source.text(), &mut diagnostics);
let parsed = parse(db.upcast(), *file_id)?;
if parsed.errors().is_empty() {
let ast = parsed.syntax();
let mut visitor = SyntaxLintVisitor {
diagnostics,
source: source.text(),
};
visitor.visit_body(&ast.body);
diagnostics = visitor.diagnostics;
} else {
diagnostics.extend(parsed.errors().iter().map(std::string::ToString::to_string));
}
Ok(Diagnostics::from(diagnostics))
})
}
fn lint_lines(source: &str, diagnostics: &mut Vec<String>) {
for (line_number, line) in source.lines().enumerate() {
if line.len() < 88 {
continue;
}
let char_count = line.chars().count();
if char_count > 88 {
diagnostics.push(format!(
"Line {} is too long ({} characters)",
line_number + 1,
char_count
));
}
}
}
#[tracing::instrument(level = "debug", skip(db))]
pub(crate) fn lint_semantic(db: &dyn LintDb, file_id: FileId) -> QueryResult<Diagnostics> {
let lint_jar: &LintJar = db.jar()?;
let storage = &lint_jar.lint_semantic;
storage.get(&file_id, |file_id| {
let source = source_text(db.upcast(), *file_id)?;
let parsed = parse(db.upcast(), *file_id)?;
let semantic_index = semantic_index(db.upcast(), *file_id)?;
let context = SemanticLintContext {
file_id: *file_id,
source,
parsed: &parsed,
semantic_index,
db,
diagnostics: RefCell::new(Vec::new()),
};
lint_unresolved_imports(&context)?;
lint_bad_overrides(&context)?;
Ok(Diagnostics::from(context.diagnostics.take()))
})
}
fn lint_unresolved_imports(context: &SemanticLintContext) -> QueryResult<()> {
// TODO: Consider iterating over the dependencies (imports) only instead of all definitions.
for (symbol, definition) in context.semantic_index().symbol_table().all_definitions() {
match definition {
Definition::Import(import) => {
let ty = context.infer_symbol_public_type(symbol)?;
if ty.is_unknown() {
context.push_diagnostic(format!("Unresolved module {}", import.module));
}
}
Definition::ImportFrom(import) => {
let ty = context.infer_symbol_public_type(symbol)?;
if ty.is_unknown() {
let module_name = import.module().map(Deref::deref).unwrap_or_default();
let message = if import.level() > 0 {
format!(
"Unresolved relative import '{}' from {}{}",
import.name(),
".".repeat(import.level() as usize),
module_name
)
} else {
format!(
"Unresolved import '{}' from '{}'",
import.name(),
module_name
)
};
context.push_diagnostic(message);
}
}
_ => {}
}
}
Ok(())
}
fn lint_bad_overrides(context: &SemanticLintContext) -> QueryResult<()> {
// TODO we should have a special marker on the real typing module (from typeshed) so if you
// have your own "typing" module in your project, we don't consider it THE typing module (and
// same for other stdlib modules that our lint rules care about)
let Some(typing_override) = context.resolve_global_symbol("typing", "override")? else {
// TODO once we bundle typeshed, this should be unreachable!()
return Ok(());
};
// TODO we should maybe index definitions by type instead of iterating all, or else iterate all
// just once, match, and branch to all lint rules that care about a type of definition
for (symbol, definition) in context.semantic_index().symbol_table().all_definitions() {
if !matches!(definition, Definition::FunctionDef(_)) {
continue;
}
let ty = infer_definition_type(
context.db.upcast(),
GlobalSymbolId {
file_id: context.file_id,
symbol_id: symbol,
},
definition.clone(),
)?;
let Type::Function(func) = ty else {
unreachable!("type of a FunctionDef should always be a Function");
};
let Some(class) = func.get_containing_class(context.db.upcast())? else {
// not a method of a class
continue;
};
if func.has_decorator(context.db.upcast(), typing_override)? {
let method_name = func.name(context.db.upcast())?;
if class
.get_super_class_member(context.db.upcast(), &method_name)?
.is_none()
{
// TODO should have a qualname() method to support nested classes
context.push_diagnostic(
format!(
"Method {}.{} is decorated with `typing.override` but does not override any base class method",
class.name(context.db.upcast())?,
method_name,
));
}
}
}
Ok(())
}
pub struct SemanticLintContext<'a> {
file_id: FileId,
source: Source,
parsed: &'a Parsed<ModModule>,
semantic_index: Arc<SemanticIndex>,
db: &'a dyn LintDb,
diagnostics: RefCell<Vec<String>>,
}
impl<'a> SemanticLintContext<'a> {
pub fn source_text(&self) -> &str {
self.source.text()
}
pub fn file_id(&self) -> FileId {
self.file_id
}
pub fn ast(&self) -> &'a ModModule {
self.parsed.syntax()
}
pub fn semantic_index(&self) -> &SemanticIndex {
&self.semantic_index
}
pub fn infer_symbol_public_type(&self, symbol_id: SymbolId) -> QueryResult<Type> {
infer_symbol_public_type(
self.db.upcast(),
GlobalSymbolId {
file_id: self.file_id,
symbol_id,
},
)
}
pub fn push_diagnostic(&self, diagnostic: String) {
self.diagnostics.borrow_mut().push(diagnostic);
}
pub fn extend_diagnostics(&mut self, diagnostics: impl IntoIterator<Item = String>) {
self.diagnostics.get_mut().extend(diagnostics);
}
pub fn resolve_global_symbol(
&self,
module: &str,
symbol_name: &str,
) -> QueryResult<Option<GlobalSymbolId>> {
let Some(module) = resolve_module(self.db.upcast(), ModuleName::new(module))? else {
return Ok(None);
};
resolve_global_symbol(self.db.upcast(), module, symbol_name)
}
}
#[derive(Debug)]
struct SyntaxLintVisitor<'a> {
diagnostics: Vec<String>,
source: &'a str,
}
impl Visitor<'_> for SyntaxLintVisitor<'_> {
fn visit_string_literal(&mut self, string_literal: &'_ StringLiteral) {
// A very naive implementation of a "use double quotes" rule
let text = &self.source[string_literal.range];
if text.starts_with('\'') {
self.diagnostics
.push("Use double quotes for strings".to_string());
}
}
}
#[derive(Debug, Clone)]
pub enum Diagnostics {
Empty,
List(Arc<Vec<String>>),
}
impl Diagnostics {
pub fn as_slice(&self) -> &[String] {
match self {
Diagnostics::Empty => &[],
Diagnostics::List(list) => list.as_slice(),
}
}
}
impl Deref for Diagnostics {
type Target = [String];
fn deref(&self) -> &Self::Target {
self.as_slice()
}
}
impl From<Vec<String>> for Diagnostics {
fn from(value: Vec<String>) -> Self {
if value.is_empty() {
Diagnostics::Empty
} else {
Diagnostics::List(Arc::new(value))
}
}
}
#[derive(Default, Debug)]
pub struct LintSyntaxStorage(KeyValueCache<FileId, Diagnostics>);
impl Deref for LintSyntaxStorage {
type Target = KeyValueCache<FileId, Diagnostics>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for LintSyntaxStorage {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[derive(Default, Debug)]
pub struct LintSemanticStorage(KeyValueCache<FileId, Diagnostics>);
impl Deref for LintSemanticStorage {
type Target = KeyValueCache<FileId, Diagnostics>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for LintSemanticStorage {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
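
Aside (illustrative, not part of this commit): how the line-length check and the `Diagnostics` conversion above behave in isolation.

fn example() {
    // `lint_lines` flags lines with more than 88 characters.
    let mut diagnostics = Vec::new();
    let long_line = "x".repeat(100);
    lint_lines(&long_line, &mut diagnostics);
    assert_eq!(diagnostics.len(), 1); // "Line 1 is too long (100 characters)"

    // An empty Vec collapses into `Diagnostics::Empty`, avoiding an Arc allocation.
    assert!(matches!(Diagnostics::from(Vec::new()), Diagnostics::Empty));
}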

crates/red_knot/src/main.rs (new file, 359 lines)

@@ -0,0 +1,359 @@
#![allow(clippy::dbg_macro)]
use std::path::Path;
use std::sync::Mutex;
use crossbeam::channel as crossbeam_channel;
use tracing::subscriber::Interest;
use tracing::{Level, Metadata};
use tracing_subscriber::filter::LevelFilter;
use tracing_subscriber::layer::{Context, Filter, SubscriberExt};
use tracing_subscriber::{Layer, Registry};
use tracing_tree::time::Uptime;
use red_knot::db::{HasJar, ParallelDatabase, QueryError, SourceDb, SourceJar};
use red_knot::module::{set_module_search_paths, ModuleSearchPath, ModuleSearchPathKind};
use red_knot::program::check::ExecutionMode;
use red_knot::program::{FileWatcherChange, Program};
use red_knot::watch::FileWatcher;
use red_knot::Workspace;
#[allow(clippy::print_stdout, clippy::unnecessary_wraps, clippy::print_stderr)]
fn main() -> anyhow::Result<()> {
setup_tracing();
let arguments: Vec<_> = std::env::args().collect();
if arguments.len() < 2 {
eprintln!("Usage: red_knot <path>");
return Err(anyhow::anyhow!("Invalid arguments"));
}
let entry_point = Path::new(&arguments[1]);
if !entry_point.exists() {
eprintln!("The entry point does not exist.");
return Err(anyhow::anyhow!("Invalid arguments"));
}
if !entry_point.is_file() {
eprintln!("The entry point is not a file.");
return Err(anyhow::anyhow!("Invalid arguments"));
}
let workspace_folder = entry_point.parent().unwrap();
let workspace = Workspace::new(workspace_folder.to_path_buf());
let workspace_search_path = ModuleSearchPath::new(
workspace.root().to_path_buf(),
ModuleSearchPathKind::FirstParty,
);
let mut program = Program::new(workspace);
set_module_search_paths(&mut program, vec![workspace_search_path]);
let entry_id = program.file_id(entry_point);
program.workspace_mut().open_file(entry_id);
let (main_loop, main_loop_cancellation_token) = MainLoop::new();
// Listen to Ctrl+C and abort the watch mode.
let main_loop_cancellation_token = Mutex::new(Some(main_loop_cancellation_token));
ctrlc::set_handler(move || {
let mut lock = main_loop_cancellation_token.lock().unwrap();
if let Some(token) = lock.take() {
token.stop();
}
})?;
let file_changes_notifier = main_loop.file_changes_notifier();
// Watch for file changes and re-trigger the analysis.
let mut file_watcher = FileWatcher::new(move |changes| {
file_changes_notifier.notify(changes);
})?;
file_watcher.watch_folder(workspace_folder)?;
main_loop.run(&mut program);
let source_jar: &SourceJar = program.jar().unwrap();
dbg!(source_jar.parsed.statistics());
dbg!(source_jar.sources.statistics());
Ok(())
}
struct MainLoop {
orchestrator_sender: crossbeam_channel::Sender<OrchestratorMessage>,
main_loop_receiver: crossbeam_channel::Receiver<MainLoopMessage>,
}
impl MainLoop {
fn new() -> (Self, MainLoopCancellationToken) {
let (orchestrator_sender, orchestrator_receiver) = crossbeam_channel::bounded(1);
let (main_loop_sender, main_loop_receiver) = crossbeam_channel::bounded(1);
let mut orchestrator = Orchestrator {
receiver: orchestrator_receiver,
sender: main_loop_sender.clone(),
revision: 0,
};
std::thread::spawn(move || {
orchestrator.run();
});
(
Self {
orchestrator_sender,
main_loop_receiver,
},
MainLoopCancellationToken {
sender: main_loop_sender,
},
)
}
fn file_changes_notifier(&self) -> FileChangesNotifier {
FileChangesNotifier {
sender: self.orchestrator_sender.clone(),
}
}
fn run(self, program: &mut Program) {
self.orchestrator_sender
.send(OrchestratorMessage::Run)
.unwrap();
for message in &self.main_loop_receiver {
tracing::trace!("Main Loop: Tick");
match message {
MainLoopMessage::CheckProgram { revision } => {
let program = program.snapshot();
let sender = self.orchestrator_sender.clone();
// Spawn a new task that checks the program. This needs to be done in a separate thread
// to prevent blocking the main loop here.
rayon::spawn(move || match program.check(ExecutionMode::ThreadPool) {
Ok(result) => {
sender
.send(OrchestratorMessage::CheckProgramCompleted {
diagnostics: result,
revision,
})
.unwrap();
}
Err(QueryError::Cancelled) => {}
});
}
MainLoopMessage::ApplyChanges(changes) => {
// Automatically cancels any pending queries and waits for them to complete.
program.apply_changes(changes);
}
MainLoopMessage::CheckCompleted(diagnostics) => {
dbg!(diagnostics);
}
MainLoopMessage::Exit => {
return;
}
}
}
}
}
impl Drop for MainLoop {
fn drop(&mut self) {
self.orchestrator_sender
.send(OrchestratorMessage::Shutdown)
.unwrap();
}
}
#[derive(Debug, Clone)]
struct FileChangesNotifier {
sender: crossbeam_channel::Sender<OrchestratorMessage>,
}
impl FileChangesNotifier {
fn notify(&self, changes: Vec<FileWatcherChange>) {
self.sender
.send(OrchestratorMessage::FileChanges(changes))
.unwrap();
}
}
#[derive(Debug)]
struct MainLoopCancellationToken {
sender: crossbeam_channel::Sender<MainLoopMessage>,
}
impl MainLoopCancellationToken {
fn stop(self) {
self.sender.send(MainLoopMessage::Exit).unwrap();
}
}
struct Orchestrator {
/// Sends messages to the main loop.
sender: crossbeam_channel::Sender<MainLoopMessage>,
/// Receives messages from the main loop.
receiver: crossbeam_channel::Receiver<OrchestratorMessage>,
revision: usize,
}
impl Orchestrator {
fn run(&mut self) {
while let Ok(message) = self.receiver.recv() {
match message {
OrchestratorMessage::Run => {
self.sender
.send(MainLoopMessage::CheckProgram {
revision: self.revision,
})
.unwrap();
}
OrchestratorMessage::CheckProgramCompleted {
diagnostics,
revision,
} => {
// Only take the diagnostics if they are for the latest revision.
if self.revision == revision {
self.sender
.send(MainLoopMessage::CheckCompleted(diagnostics))
.unwrap();
} else {
tracing::debug!("Discarding diagnostics for outdated revision {revision} (current: {}).", self.revision);
}
}
OrchestratorMessage::FileChanges(changes) => {
// Request cancellation, but wait until all analysis tasks have completed to
// avoid stale messages in the next main loop.
self.revision += 1;
self.debounce_changes(changes);
}
OrchestratorMessage::Shutdown => {
return self.shutdown();
}
}
}
}
fn debounce_changes(&self, mut changes: Vec<FileWatcherChange>) {
loop {
// Consume any incoming file change messages before running a new analysis, but don't wait for more than 10ms.
crossbeam_channel::select! {
recv(self.receiver) -> message => {
match message {
Ok(OrchestratorMessage::Shutdown) => {
return self.shutdown();
}
Ok(OrchestratorMessage::FileChanges(file_changes)) => {
changes.extend(file_changes);
}
Ok(OrchestratorMessage::CheckProgramCompleted { .. })=> {
// disregard any outdated completion message.
}
Ok(OrchestratorMessage::Run) => unreachable!("The orchestrator is already running."),
Err(_) => {
// There are no more senders, no point in waiting for more messages
return;
}
}
},
default(std::time::Duration::from_millis(10)) => {
// No more file changes after 10 ms, send the changes and schedule a new analysis
self.sender.send(MainLoopMessage::ApplyChanges(changes)).unwrap();
self.sender.send(MainLoopMessage::CheckProgram { revision: self.revision}).unwrap();
return;
}
}
}
}
#[allow(clippy::unused_self)]
fn shutdown(&self) {
tracing::trace!("Shutting down orchestrator.");
}
}
/// Message sent from the orchestrator to the main loop.
#[derive(Debug)]
enum MainLoopMessage {
CheckProgram { revision: usize },
CheckCompleted(Vec<String>),
ApplyChanges(Vec<FileWatcherChange>),
Exit,
}
#[derive(Debug)]
enum OrchestratorMessage {
Run,
Shutdown,
CheckProgramCompleted {
diagnostics: Vec<String>,
revision: usize,
},
FileChanges(Vec<FileWatcherChange>),
}
fn setup_tracing() {
let subscriber = Registry::default().with(
tracing_tree::HierarchicalLayer::default()
.with_indent_lines(true)
.with_indent_amount(2)
.with_bracketed_fields(true)
.with_thread_ids(true)
.with_targets(true)
.with_writer(|| Box::new(std::io::stderr()))
.with_timer(Uptime::default())
.with_filter(LoggingFilter {
trace_level: Level::TRACE,
}),
);
tracing::subscriber::set_global_default(subscriber).unwrap();
}
struct LoggingFilter {
trace_level: Level,
}
impl LoggingFilter {
fn is_enabled(&self, meta: &Metadata<'_>) -> bool {
let filter = if meta.target().starts_with("red_knot") || meta.target().starts_with("ruff") {
self.trace_level
} else {
Level::INFO
};
meta.level() <= &filter
}
}
impl<S> Filter<S> for LoggingFilter {
fn enabled(&self, meta: &Metadata<'_>, _cx: &Context<'_, S>) -> bool {
self.is_enabled(meta)
}
fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
if self.is_enabled(meta) {
Interest::always()
} else {
Interest::never()
}
}
fn max_level_hint(&self) -> Option<LevelFilter> {
Some(LevelFilter::from_level(self.trace_level))
}
}
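
Aside (standalone sketch, not part of this commit): the debounce loop in `Orchestrator::debounce_changes` reduces to "keep draining the channel until it has been quiet for ~10ms, then emit the batch". In isolation:

use std::time::Duration;
use crossbeam::channel as crossbeam_channel;

fn debounce(receiver: &crossbeam_channel::Receiver<String>) -> Vec<String> {
    let mut batch = Vec::new();
    loop {
        crossbeam_channel::select! {
            recv(receiver) -> message => match message {
                Ok(change) => batch.push(change),
                // All senders dropped: nothing more to wait for.
                Err(_) => return batch,
            },
            // The channel has been quiet for 10ms: flush the batch.
            default(Duration::from_millis(10)) => return batch,
        }
    }
}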

File diff suppressed because it is too large.


@@ -0,0 +1,41 @@
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use ruff_python_ast::ModModule;
use ruff_python_parser::Parsed;
use crate::cache::KeyValueCache;
use crate::db::{QueryResult, SourceDb};
use crate::files::FileId;
use crate::source::source_text;
#[tracing::instrument(level = "debug", skip(db))]
pub(crate) fn parse(db: &dyn SourceDb, file_id: FileId) -> QueryResult<Arc<Parsed<ModModule>>> {
let jar = db.jar()?;
jar.parsed.get(&file_id, |file_id| {
let source = source_text(db, *file_id)?;
Ok(Arc::new(ruff_python_parser::parse_unchecked_source(
source.text(),
source.kind().into(),
)))
})
}
#[derive(Debug, Default)]
pub struct ParsedStorage(KeyValueCache<FileId, Arc<Parsed<ModModule>>>);
impl Deref for ParsedStorage {
type Target = KeyValueCache<FileId, Arc<Parsed<ModModule>>>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for ParsedStorage {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}


@@ -0,0 +1,413 @@
use rayon::{current_num_threads, yield_local};
use rustc_hash::FxHashSet;
use crate::db::{Database, QueryError, QueryResult};
use crate::files::FileId;
use crate::lint::{lint_semantic, lint_syntax, Diagnostics};
use crate::module::{file_to_module, resolve_module};
use crate::program::Program;
use crate::semantic::{semantic_index, Dependency};
impl Program {
/// Checks all open files in the workspace and its dependencies.
#[tracing::instrument(level = "debug", skip_all)]
pub fn check(&self, mode: ExecutionMode) -> QueryResult<Vec<String>> {
self.cancelled()?;
let mut context = CheckContext::new(self);
match mode {
ExecutionMode::SingleThreaded => SingleThreadedExecutor.run(&mut context)?,
ExecutionMode::ThreadPool => ThreadPoolExecutor.run(&mut context)?,
};
Ok(context.finish())
}
#[tracing::instrument(level = "debug", skip(self, context))]
fn check_file(&self, file: FileId, context: &CheckFileContext) -> QueryResult<Diagnostics> {
self.cancelled()?;
let index = semantic_index(self, file)?;
let dependencies = index.symbol_table().dependencies();
if !dependencies.is_empty() {
let module = file_to_module(self, file)?;
// TODO scheduling all dependencies here is wasteful if we don't infer any types on them
// but I think that's unlikely, so it is okay?
// Anyway, we need to figure out a way to retrieve the dependencies of a module
// from the persistent cache. So maybe it should be a separate query after all.
for dependency in dependencies {
let dependency_name = match dependency {
Dependency::Module(name) => Some(name.clone()),
Dependency::Relative { .. } => match &module {
Some(module) => module.resolve_dependency(self, dependency)?,
None => None,
},
};
if let Some(dependency_name) = dependency_name {
// TODO We may want to have different check functions for non-first-party
// files because we only need to index them and not check them.
// Supporting non-first-party code also requires supporting typing stubs.
if let Some(dependency) = resolve_module(self, dependency_name)? {
if dependency.path(self)?.root().kind().is_first_party() {
context.schedule_dependency(dependency.path(self)?.file());
}
}
}
}
}
let mut diagnostics = Vec::new();
if self.workspace().is_file_open(file) {
diagnostics.extend_from_slice(&lint_syntax(self, file)?);
diagnostics.extend_from_slice(&lint_semantic(self, file)?);
}
Ok(Diagnostics::from(diagnostics))
}
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ExecutionMode {
SingleThreaded,
ThreadPool,
}
/// Context that stores state information about the entire check operation.
struct CheckContext<'a> {
/// IDs of the files that have been queued for checking.
///
/// Used to avoid queuing the same file twice.
scheduled_files: FxHashSet<FileId>,
/// Reference to the program that is checked.
program: &'a Program,
/// The aggregated diagnostics
diagnostics: Vec<String>,
}
impl<'a> CheckContext<'a> {
fn new(program: &'a Program) -> Self {
Self {
scheduled_files: FxHashSet::default(),
program,
diagnostics: Vec::new(),
}
}
/// Returns the tasks to check all open files in the workspace.
fn check_open_files(&mut self) -> Vec<CheckOpenFileTask> {
self.scheduled_files
.extend(self.program.workspace().open_files());
self.program
.workspace()
.open_files()
.map(|file_id| CheckOpenFileTask { file_id })
.collect()
}
/// Returns the task to check a dependency.
fn check_dependency(&mut self, file_id: FileId) -> Option<CheckDependencyTask> {
if self.scheduled_files.insert(file_id) {
Some(CheckDependencyTask { file_id })
} else {
None
}
}
/// Pushes the result for a single file check operation
fn push_diagnostics(&mut self, diagnostics: &Diagnostics) {
self.diagnostics.extend_from_slice(diagnostics);
}
/// Returns a reference to the program that is being checked.
fn program(&self) -> &'a Program {
self.program
}
/// Creates a task context that is used to check a single file.
fn task_context<'b, S>(&self, dependency_scheduler: &'b S) -> CheckTaskContext<'a, 'b, S>
where
S: ScheduleDependency,
{
CheckTaskContext {
program: self.program,
dependency_scheduler,
}
}
fn finish(self) -> Vec<String> {
self.diagnostics
}
}
/// Trait that abstracts away how a dependency of a file gets scheduled for checking.
trait ScheduleDependency {
/// Schedules the file with the given ID for checking.
fn schedule(&self, file_id: FileId);
}
impl<T> ScheduleDependency for T
where
T: Fn(FileId),
{
fn schedule(&self, file_id: FileId) {
let f = self;
f(file_id);
}
}
/// Context that is used to run a single file check task.
///
/// The task is generic over `S` because it is passed across thread boundaries and
/// we don't want to add the requirement that [`ScheduleDependency`] must be [`Send`].
struct CheckTaskContext<'a, 'scheduler, S>
where
S: ScheduleDependency,
{
dependency_scheduler: &'scheduler S,
program: &'a Program,
}
impl<'a, 'scheduler, S> CheckTaskContext<'a, 'scheduler, S>
where
S: ScheduleDependency,
{
fn as_file_context(&self) -> CheckFileContext<'scheduler> {
CheckFileContext {
dependency_scheduler: self.dependency_scheduler,
}
}
}
/// Context passed when checking a single file.
///
/// This is a trimmed down version of [`CheckTaskContext`] with the type parameter `S` erased
/// to avoid monomorphization of [`Program::check_file`].
struct CheckFileContext<'a> {
dependency_scheduler: &'a dyn ScheduleDependency,
}
impl<'a> CheckFileContext<'a> {
fn schedule_dependency(&self, file_id: FileId) {
self.dependency_scheduler.schedule(file_id);
}
}
#[derive(Debug)]
enum CheckFileTask {
OpenFile(CheckOpenFileTask),
Dependency(CheckDependencyTask),
}
impl CheckFileTask {
/// Runs the task and returns the results for checking this file.
fn run<S>(&self, context: &CheckTaskContext<S>) -> QueryResult<Diagnostics>
where
S: ScheduleDependency,
{
match self {
Self::OpenFile(task) => task.run(context),
Self::Dependency(task) => task.run(context),
}
}
fn file_id(&self) -> FileId {
match self {
CheckFileTask::OpenFile(task) => task.file_id,
CheckFileTask::Dependency(task) => task.file_id,
}
}
}
/// Task to check an open file.
#[derive(Debug)]
struct CheckOpenFileTask {
file_id: FileId,
}
impl CheckOpenFileTask {
fn run<S>(&self, context: &CheckTaskContext<S>) -> QueryResult<Diagnostics>
where
S: ScheduleDependency,
{
context
.program
.check_file(self.file_id, &context.as_file_context())
}
}
/// Task to check a dependency file.
#[derive(Debug)]
struct CheckDependencyTask {
file_id: FileId,
}
impl CheckDependencyTask {
fn run<S>(&self, context: &CheckTaskContext<S>) -> QueryResult<Diagnostics>
where
S: ScheduleDependency,
{
context
.program
.check_file(self.file_id, &context.as_file_context())
}
}
/// Executor that schedules the checking of individual program files.
trait CheckExecutor {
fn run(self, context: &mut CheckContext) -> QueryResult<()>;
}
/// Executor that runs all check operations on the current thread.
///
/// The executor does not schedule dependencies for checking.
/// The main motivation for scheduling dependencies
/// in a multithreaded environment is to parse and index the dependencies concurrently.
/// However, that doesn't make sense in a single-threaded environment, because the dependencies then compete
/// with checking the open files. Checking dependencies in a single-threaded environment is more likely
/// to hurt performance because we end up analyzing files in their entirety, even if we only need to type check parts of them.
#[derive(Debug, Default)]
struct SingleThreadedExecutor;
impl CheckExecutor for SingleThreadedExecutor {
fn run(self, context: &mut CheckContext) -> QueryResult<()> {
let mut queue = context.check_open_files();
let noop_schedule_dependency = |_| {};
while let Some(file) = queue.pop() {
context.program().cancelled()?;
let task_context = context.task_context(&noop_schedule_dependency);
context.push_diagnostics(&file.run(&task_context)?);
}
Ok(())
}
}
/// Executor that runs the check operations on a thread pool.
///
/// The executor runs each check operation as its own task using a thread pool.
///
/// Unlike [`SingleThreadedExecutor`], this executor schedules dependencies for checking. It
/// even does so when the thread pool size is 1, for a better debugging experience.
#[derive(Debug, Default)]
struct ThreadPoolExecutor;
impl CheckExecutor for ThreadPoolExecutor {
fn run(self, context: &mut CheckContext) -> QueryResult<()> {
let num_threads = current_num_threads();
let single_threaded = num_threads == 1;
let span = tracing::trace_span!("ThreadPoolExecutor::run", num_threads);
let _ = span.enter();
let mut queue: Vec<_> = context
.check_open_files()
.into_iter()
.map(CheckFileTask::OpenFile)
.collect();
let (sender, receiver) = if single_threaded {
// Use an unbounded queue for single threaded execution to prevent deadlocks
// when a single file schedules multiple dependencies.
crossbeam::channel::unbounded()
} else {
// Use a bounded queue to apply backpressure when the orchestration thread isn't able to keep
// up processing messages from the worker threads.
crossbeam::channel::bounded(num_threads)
};
let schedule_sender = sender.clone();
let schedule_dependency = move |file_id| {
schedule_sender
.send(ThreadPoolMessage::ScheduleDependency(file_id))
.unwrap();
};
let result = rayon::in_place_scope(|scope| {
let mut pending = 0usize;
loop {
context.program().cancelled()?;
// 1. Try to get a queued message first, to ensure there is always remaining space in the channel and the worker threads don't block.
// 2. Try to process a queued file
// 3. If there's no queued file wait for the next incoming message.
// 4. Exit if there are no more messages and no senders.
let message = if let Ok(message) = receiver.try_recv() {
message
} else if let Some(task) = queue.pop() {
pending += 1;
let task_context = context.task_context(&schedule_dependency);
let sender = sender.clone();
let task_span = tracing::trace_span!(
parent: &span,
"CheckFileTask::run",
file_id = task.file_id().as_u32(),
);
scope.spawn(move |_| {
task_span.in_scope(|| match task.run(&task_context) {
Ok(result) => {
sender.send(ThreadPoolMessage::Completed(result)).unwrap();
}
Err(err) => sender.send(ThreadPoolMessage::Errored(err)).unwrap(),
});
});
// If this is a single threaded rayon thread pool, yield the current thread
// or we never start processing the work items.
if single_threaded {
yield_local();
}
continue;
} else if let Ok(message) = receiver.recv() {
message
} else {
break;
};
match message {
ThreadPoolMessage::ScheduleDependency(dependency) => {
if let Some(task) = context.check_dependency(dependency) {
queue.push(CheckFileTask::Dependency(task));
}
}
ThreadPoolMessage::Completed(diagnostics) => {
context.push_diagnostics(&diagnostics);
pending -= 1;
if pending == 0 && queue.is_empty() {
break;
}
}
ThreadPoolMessage::Errored(err) => {
return Err(err);
}
}
}
Ok(())
});
result
}
}
#[derive(Debug)]
enum ThreadPoolMessage {
ScheduleDependency(FileId),
Completed(Diagnostics),
Errored(QueryError),
}
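
Aside (illustrative sketch, not from this commit): the reason `CheckFileContext` erases the scheduler type is purely to keep `Program::check_file` monomorphic. The trick in isolation:

trait Schedule {
    fn schedule(&self, file_id: u32);
}

// Blanket impl so any closure can act as a scheduler, mirroring `ScheduleDependency`.
impl<F: Fn(u32)> Schedule for F {
    fn schedule(&self, file_id: u32) {
        self(file_id);
    }
}

// Compiled exactly once: it only sees the trait object, never the closure type.
fn check_file(schedule: &dyn Schedule) {
    schedule.schedule(42);
}

// Generic entry point; the coercion to `&dyn Schedule` happens here.
fn run<S: Schedule>(scheduler: &S) {
    check_file(scheduler);
}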


@@ -0,0 +1,275 @@
use std::collections::hash_map::Entry;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use rustc_hash::FxHashMap;
use crate::db::{
Database, Db, DbRuntime, DbWithJar, HasJar, HasJars, JarsStorage, LintDb, LintJar,
ParallelDatabase, QueryResult, SemanticDb, SemanticJar, Snapshot, SourceDb, SourceJar, Upcast,
};
use crate::files::{FileId, Files};
use crate::Workspace;
pub mod check;
#[derive(Debug)]
pub struct Program {
jars: JarsStorage<Program>,
files: Files,
workspace: Workspace,
}
impl Program {
pub fn new(workspace: Workspace) -> Self {
Self {
jars: JarsStorage::default(),
files: Files::default(),
workspace,
}
}
pub fn apply_changes<I>(&mut self, changes: I)
where
I: IntoIterator<Item = FileWatcherChange>,
{
let mut aggregated_changes = AggregatedChanges::default();
aggregated_changes.extend(changes.into_iter().map(|change| FileChange {
id: self.files.intern(&change.path),
kind: change.kind,
}));
let (source, semantic, lint) = self.jars_mut();
for change in aggregated_changes.iter() {
semantic.module_resolver.remove_module_by_file(change.id);
semantic.semantic_indices.remove(&change.id);
source.sources.remove(&change.id);
source.parsed.remove(&change.id);
// TODO: remove all dependent modules as well
semantic.type_store.remove_module(change.id);
lint.lint_syntax.remove(&change.id);
lint.lint_semantic.remove(&change.id);
}
}
pub fn files(&self) -> &Files {
&self.files
}
pub fn workspace(&self) -> &Workspace {
&self.workspace
}
pub fn workspace_mut(&mut self) -> &mut Workspace {
&mut self.workspace
}
}
impl SourceDb for Program {
fn file_id(&self, path: &Path) -> FileId {
self.files.intern(path)
}
fn file_path(&self, file_id: FileId) -> Arc<Path> {
self.files.path(file_id)
}
}
impl DbWithJar<SourceJar> for Program {}
impl SemanticDb for Program {}
impl DbWithJar<SemanticJar> for Program {}
impl LintDb for Program {}
impl DbWithJar<LintJar> for Program {}
impl Upcast<dyn SemanticDb> for Program {
fn upcast(&self) -> &(dyn SemanticDb + 'static) {
self
}
}
impl Upcast<dyn SourceDb> for Program {
fn upcast(&self) -> &(dyn SourceDb + 'static) {
self
}
}
impl Upcast<dyn LintDb> for Program {
fn upcast(&self) -> &(dyn LintDb + 'static) {
self
}
}
impl Db for Program {}
impl Database for Program {
fn runtime(&self) -> &DbRuntime {
self.jars.runtime()
}
fn runtime_mut(&mut self) -> &mut DbRuntime {
self.jars.runtime_mut()
}
}
impl ParallelDatabase for Program {
fn snapshot(&self) -> Snapshot<Self> {
Snapshot::new(Self {
jars: self.jars.snapshot(),
files: self.files.snapshot(),
workspace: self.workspace.clone(),
})
}
}
impl HasJars for Program {
type Jars = (SourceJar, SemanticJar, LintJar);
fn jars(&self) -> QueryResult<&Self::Jars> {
self.jars.jars()
}
fn jars_mut(&mut self) -> &mut Self::Jars {
self.jars.jars_mut()
}
}
impl HasJar<SourceJar> for Program {
fn jar(&self) -> QueryResult<&SourceJar> {
Ok(&self.jars()?.0)
}
fn jar_mut(&mut self) -> &mut SourceJar {
&mut self.jars_mut().0
}
}
impl HasJar<SemanticJar> for Program {
fn jar(&self) -> QueryResult<&SemanticJar> {
Ok(&self.jars()?.1)
}
fn jar_mut(&mut self) -> &mut SemanticJar {
&mut self.jars_mut().1
}
}
impl HasJar<LintJar> for Program {
fn jar(&self) -> QueryResult<&LintJar> {
Ok(&self.jars()?.2)
}
fn jar_mut(&mut self) -> &mut LintJar {
&mut self.jars_mut().2
}
}
#[derive(Clone, Debug)]
pub struct FileWatcherChange {
path: PathBuf,
kind: FileChangeKind,
}
impl FileWatcherChange {
pub fn new(path: PathBuf, kind: FileChangeKind) -> Self {
Self { path, kind }
}
}
#[derive(Copy, Clone, Debug)]
struct FileChange {
id: FileId,
kind: FileChangeKind,
}
impl FileChange {
fn file_id(self) -> FileId {
self.id
}
fn kind(self) -> FileChangeKind {
self.kind
}
}
#[derive(Copy, Clone, Debug, Eq, PartialEq)]
pub enum FileChangeKind {
Created,
Modified,
Deleted,
}
#[derive(Default, Debug)]
struct AggregatedChanges {
changes: FxHashMap<FileId, FileChangeKind>,
}
impl AggregatedChanges {
fn add(&mut self, change: FileChange) {
match self.changes.entry(change.file_id()) {
Entry::Occupied(mut entry) => {
let merged = entry.get_mut();
match (merged, change.kind()) {
(FileChangeKind::Created, FileChangeKind::Deleted) => {
// Deletion after creation means that ruff never saw the file.
entry.remove();
}
(FileChangeKind::Created, FileChangeKind::Modified) => {
// No-op: for ruff, modifying a file that it doesn't yet know exists is still considered a creation.
}
(FileChangeKind::Modified, FileChangeKind::Created) => {
// Uhh, that should probably not happen. Continue considering it a modification.
}
(FileChangeKind::Modified, FileChangeKind::Deleted) => {
*entry.get_mut() = FileChangeKind::Deleted;
}
(FileChangeKind::Deleted, FileChangeKind::Created) => {
*entry.get_mut() = FileChangeKind::Modified;
}
(FileChangeKind::Deleted, FileChangeKind::Modified) => {
// That's weird, but let's consider it a modification.
*entry.get_mut() = FileChangeKind::Modified;
}
(FileChangeKind::Created, FileChangeKind::Created)
| (FileChangeKind::Modified, FileChangeKind::Modified)
| (FileChangeKind::Deleted, FileChangeKind::Deleted) => {
// No-op transitions. Some of them should be impossible but we handle them anyway.
}
}
}
Entry::Vacant(entry) => {
entry.insert(change.kind());
}
}
}
fn extend<I>(&mut self, changes: I)
where
I: IntoIterator<Item = FileChange>,
{
let iter = changes.into_iter();
let (lower, _) = iter.size_hint();
self.changes.reserve(lower);
for change in iter {
self.add(change);
}
}
fn iter(&self) -> impl Iterator<Item = FileChange> + '_ {
self.changes.iter().map(|(id, kind)| FileChange {
id: *id,
kind: *kind,
})
}
}
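
Aside (illustrative, not part of this commit; the `FileId` constructor below is hypothetical): the merge rules above mean a create followed by a delete cancels out, while a delete followed by a create collapses into a modification.

fn example() {
    let mut changes = AggregatedChanges::default();
    // Hypothetical way to construct an id for the sketch; in the real code ids
    // come from `Files::intern`.
    let file = FileId::from_u32(0);

    // Created then deleted: ruff never saw the file, so the entry cancels out.
    changes.add(FileChange { id: file, kind: FileChangeKind::Created });
    changes.add(FileChange { id: file, kind: FileChangeKind::Deleted });
    assert_eq!(changes.iter().count(), 0);

    // Deleted then created: collapses into a modification.
    changes.add(FileChange { id: file, kind: FileChangeKind::Deleted });
    changes.add(FileChange { id: file, kind: FileChangeKind::Created });
    assert!(changes
        .iter()
        .all(|change| change.kind() == FileChangeKind::Modified));
}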


@@ -0,0 +1,747 @@
use std::num::NonZeroU32;
use ruff_python_ast as ast;
use ruff_python_ast::visitor::preorder::PreorderVisitor;
use crate::ast_ids::{NodeKey, TypedNodeKey};
use crate::cache::KeyValueCache;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::files::FileId;
use crate::module::Module;
use crate::module::ModuleName;
use crate::parse::parse;
use crate::Name;
use flow_graph::{FlowGraph, FlowGraphBuilder, FlowNodeId, ReachableDefinitionsIterator};
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
pub(crate) use symbol_table::{Definition, Dependency, SymbolId};
use symbol_table::{
ImportDefinition, ImportFromDefinition, ScopeId, ScopeKind, SymbolFlags, SymbolTable,
SymbolTableBuilder,
};
pub(crate) use types::{infer_definition_type, infer_symbol_public_type, Type, TypeStore};
mod flow_graph;
mod symbol_table;
mod types;
#[tracing::instrument(level = "debug", skip(db))]
pub fn semantic_index(db: &dyn SemanticDb, file_id: FileId) -> QueryResult<Arc<SemanticIndex>> {
let jar: &SemanticJar = db.jar()?;
jar.semantic_indices.get(&file_id, |_| {
let parsed = parse(db.upcast(), file_id)?;
Ok(Arc::from(SemanticIndex::from_ast(parsed.syntax())))
})
}
#[tracing::instrument(level = "debug", skip(db))]
pub fn resolve_global_symbol(
db: &dyn SemanticDb,
module: Module,
name: &str,
) -> QueryResult<Option<GlobalSymbolId>> {
let file_id = module.path(db)?.file();
let symbol_table = &semantic_index(db, file_id)?.symbol_table;
let Some(symbol_id) = symbol_table.root_symbol_id_by_name(name) else {
return Ok(None);
};
Ok(Some(GlobalSymbolId { file_id, symbol_id }))
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub struct GlobalSymbolId {
pub(crate) file_id: FileId,
pub(crate) symbol_id: SymbolId,
}
#[derive(Debug)]
pub struct SemanticIndex {
symbol_table: SymbolTable,
flow_graph: FlowGraph,
}
impl SemanticIndex {
pub fn from_ast(module: &ast::ModModule) -> Self {
let root_scope_id = SymbolTable::root_scope_id();
let mut indexer = SemanticIndexer {
symbol_table_builder: SymbolTableBuilder::new(),
flow_graph_builder: FlowGraphBuilder::new(),
scopes: vec![ScopeState {
scope_id: root_scope_id,
current_flow_node_id: FlowGraph::start(),
}],
current_definition: None,
};
indexer.visit_body(&module.body);
indexer.finish()
}
/// Return an iterator over all definitions of `symbol_id` reachable from `use_expr`. The value
/// of `symbol_id` in `use_expr` must originate from one of the iterated definitions (or from
/// an external reassignment of the name outside of this scope).
pub fn reachable_definitions(
&self,
symbol_id: SymbolId,
use_expr: &ast::Expr,
) -> ReachableDefinitionsIterator {
ReachableDefinitionsIterator::new(
&self.flow_graph,
symbol_id,
self.flow_graph.for_expr(use_expr),
)
}
pub fn symbol_table(&self) -> &SymbolTable {
&self.symbol_table
}
}
#[derive(Debug)]
struct ScopeState {
scope_id: ScopeId,
current_flow_node_id: FlowNodeId,
}
#[derive(Debug)]
struct SemanticIndexer {
symbol_table_builder: SymbolTableBuilder,
flow_graph_builder: FlowGraphBuilder,
scopes: Vec<ScopeState>,
/// the definition whose target(s) we are currently walking
current_definition: Option<Definition>,
}
impl SemanticIndexer {
pub(crate) fn finish(self) -> SemanticIndex {
let SemanticIndexer {
flow_graph_builder,
symbol_table_builder,
..
} = self;
SemanticIndex {
flow_graph: flow_graph_builder.finish(),
symbol_table: symbol_table_builder.finish(),
}
}
fn set_current_flow_node(&mut self, new_flow_node_id: FlowNodeId) {
let scope_state = self.scopes.last_mut().expect("scope stack is never empty");
scope_state.current_flow_node_id = new_flow_node_id;
}
fn current_flow_node(&self) -> FlowNodeId {
self.scopes
.last()
.expect("scope stack is never empty")
.current_flow_node_id
}
fn add_or_update_symbol(&mut self, identifier: &str, flags: SymbolFlags) -> SymbolId {
self.symbol_table_builder
.add_or_update_symbol(self.cur_scope(), identifier, flags)
}
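/// Like `add_or_update_symbol`, but also records `definition` in the symbol table and
/// appends a definition node to the flow graph, which becomes the new current flow node.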
fn add_or_update_symbol_with_def(
&mut self,
identifier: &str,
definition: Definition,
) -> SymbolId {
let symbol_id = self.add_or_update_symbol(identifier, SymbolFlags::IS_DEFINED);
self.symbol_table_builder
.add_definition(symbol_id, definition.clone());
let new_flow_node_id =
self.flow_graph_builder
.add_definition(symbol_id, definition, self.current_flow_node());
self.set_current_flow_node(new_flow_node_id);
symbol_id
}
fn push_scope(
&mut self,
name: &str,
kind: ScopeKind,
definition: Option<Definition>,
defining_symbol: Option<SymbolId>,
) -> ScopeId {
let scope_id = self.symbol_table_builder.add_child_scope(
self.cur_scope(),
name,
kind,
definition,
defining_symbol,
);
self.scopes.push(ScopeState {
scope_id,
current_flow_node_id: FlowGraph::start(),
});
scope_id
}
fn pop_scope(&mut self) -> ScopeId {
self.scopes
.pop()
.expect("Scope stack should never be empty")
.scope_id
}
fn cur_scope(&self) -> ScopeId {
self.scopes
.last()
.expect("Scope stack should never be empty")
.scope_id
}
fn record_scope_for_node(&mut self, node_key: NodeKey, scope_id: ScopeId) {
self.symbol_table_builder
.record_scope_for_node(node_key, scope_id);
}
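/// If `params` is present (PEP 695 syntax such as `def f[T](): ...`), push an `Annotation`
/// scope containing the type parameters, run `nested` inside it, and pop the scope again
/// afterwards; otherwise just run `nested` in the current scope.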
fn with_type_params(
&mut self,
name: &str,
params: &Option<Box<ast::TypeParams>>,
definition: Option<Definition>,
defining_symbol: Option<SymbolId>,
nested: impl FnOnce(&mut Self) -> ScopeId,
) -> ScopeId {
if let Some(type_params) = params {
self.push_scope(name, ScopeKind::Annotation, definition, defining_symbol);
for type_param in &type_params.type_params {
let name = match type_param {
ast::TypeParam::TypeVar(ast::TypeParamTypeVar { name, .. }) => name,
ast::TypeParam::ParamSpec(ast::TypeParamParamSpec { name, .. }) => name,
ast::TypeParam::TypeVarTuple(ast::TypeParamTypeVarTuple { name, .. }) => name,
};
self.add_or_update_symbol(name, SymbolFlags::IS_DEFINED);
}
}
let scope_id = nested(self);
if params.is_some() {
self.pop_scope();
}
scope_id
}
}
impl PreorderVisitor<'_> for SemanticIndexer {
fn visit_expr(&mut self, expr: &ast::Expr) {
if let ast::Expr::Name(ast::ExprName { id, ctx, .. }) = expr {
let flags = match ctx {
ast::ExprContext::Load => SymbolFlags::IS_USED,
ast::ExprContext::Store => SymbolFlags::IS_DEFINED,
ast::ExprContext::Del => SymbolFlags::IS_DEFINED,
ast::ExprContext::Invalid => SymbolFlags::empty(),
};
self.add_or_update_symbol(id, flags);
if flags.contains(SymbolFlags::IS_DEFINED) {
if let Some(curdef) = self.current_definition.clone() {
self.add_or_update_symbol_with_def(id, curdef);
}
}
}
self.flow_graph_builder
.record_expr(expr, self.current_flow_node());
ast::visitor::preorder::walk_expr(self, expr);
}
fn visit_stmt(&mut self, stmt: &ast::Stmt) {
// TODO need to capture more definition statements here
match stmt {
ast::Stmt::ClassDef(node) => {
let node_key = TypedNodeKey::from_node(node);
let def = Definition::ClassDef(node_key.clone());
let symbol_id = self.add_or_update_symbol_with_def(&node.name, def.clone());
for decorator in &node.decorator_list {
self.visit_decorator(decorator);
}
let scope_id = self.with_type_params(
&node.name,
&node.type_params,
Some(def.clone()),
Some(symbol_id),
|indexer| {
if let Some(arguments) = &node.arguments {
indexer.visit_arguments(arguments);
}
let scope_id = indexer.push_scope(
&node.name,
ScopeKind::Class,
Some(def.clone()),
Some(symbol_id),
);
indexer.visit_body(&node.body);
indexer.pop_scope();
scope_id
},
);
self.record_scope_for_node(*node_key.erased(), scope_id);
}
ast::Stmt::FunctionDef(node) => {
let node_key = TypedNodeKey::from_node(node);
let def = Definition::FunctionDef(node_key.clone());
let symbol_id = self.add_or_update_symbol_with_def(&node.name, def.clone());
for decorator in &node.decorator_list {
self.visit_decorator(decorator);
}
let scope_id = self.with_type_params(
&node.name,
&node.type_params,
Some(def.clone()),
Some(symbol_id),
|indexer| {
indexer.visit_parameters(&node.parameters);
for expr in &node.returns {
indexer.visit_annotation(expr);
}
let scope_id = indexer.push_scope(
&node.name,
ScopeKind::Function,
Some(def.clone()),
Some(symbol_id),
);
indexer.visit_body(&node.body);
indexer.pop_scope();
scope_id
},
);
self.record_scope_for_node(*node_key.erased(), scope_id);
}
ast::Stmt::Import(ast::StmtImport { names, .. }) => {
for alias in names {
let symbol_name = if let Some(asname) = &alias.asname {
asname.id.as_str()
} else {
alias.name.id.split('.').next().unwrap()
};
let module = ModuleName::new(&alias.name.id);
let def = Definition::Import(ImportDefinition {
module: module.clone(),
});
self.add_or_update_symbol_with_def(symbol_name, def);
self.symbol_table_builder
.add_dependency(Dependency::Module(module));
}
}
ast::Stmt::ImportFrom(ast::StmtImportFrom {
module,
names,
level,
..
}) => {
let module = module.as_ref().map(|m| ModuleName::new(&m.id));
for alias in names {
let symbol_name = if let Some(asname) = &alias.asname {
asname.id.as_str()
} else {
alias.name.id.as_str()
};
let def = Definition::ImportFrom(ImportFromDefinition {
module: module.clone(),
name: Name::new(&alias.name.id),
level: *level,
});
self.add_or_update_symbol_with_def(symbol_name, def);
}
let dependency = if let Some(module) = module {
match NonZeroU32::new(*level) {
Some(level) => Dependency::Relative {
level,
module: Some(module),
},
None => Dependency::Module(module),
}
} else {
Dependency::Relative {
level: NonZeroU32::new(*level)
.expect("Import without a module to have a level > 0"),
module,
}
};
self.symbol_table_builder.add_dependency(dependency);
}
ast::Stmt::Assign(node) => {
debug_assert!(self.current_definition.is_none());
self.current_definition =
Some(Definition::Assignment(TypedNodeKey::from_node(node)));
ast::visitor::preorder::walk_stmt(self, stmt);
self.current_definition = None;
}
ast::Stmt::If(node) => {
// we visit the if "test" condition first regardless
self.visit_expr(&node.test);
// create branch node: does the if test pass or not?
let if_branch = self.flow_graph_builder.add_branch(self.current_flow_node());
// visit the body of the `if` clause
self.set_current_flow_node(if_branch);
self.visit_body(&node.body);
// Flow node for the last if/elif condition branch; represents the "no branch
// taken yet" possibility (where "taking a branch" means that the condition in an
// if or elif evaluated to true and control flow went into that clause).
let mut prior_branch = if_branch;
// Flow node for the state after the prior if/elif/else clause; represents "we have
// taken one of the branches up to this point." Initially set to the post-if-clause
// state, later will be set to the phi node joining that possible path with the
// possibility that we took a later if/elif/else clause instead.
let mut post_prior_clause = self.current_flow_node();
// Flag to mark if the final clause is an "else" -- if so, that means the "match no
// clauses" path is not possible, we have to go through one of the clauses.
let mut last_branch_is_else = false;
for clause in &node.elif_else_clauses {
if clause.test.is_some() {
// This is an elif clause. Create a new branch node. Its predecessor is the
// previous branch node, because we can only take one branch in an entire
// if/elif/else chain, so if we take this branch, it can only be because we
// didn't take the previous one.
prior_branch = self.flow_graph_builder.add_branch(prior_branch);
self.set_current_flow_node(prior_branch);
} else {
// This is an else clause. No need to create a branch node; there's no
// branch here, if we haven't taken any previous branch, we definitely go
// into the "else" clause.
self.set_current_flow_node(prior_branch);
last_branch_is_else = true;
}
self.visit_elif_else_clause(clause);
// Update `post_prior_clause` to a new phi node joining the possibility that we
// took any of the previous branches with the possibility that we took the one
// just visited.
post_prior_clause = self
.flow_graph_builder
.add_phi(self.current_flow_node(), post_prior_clause);
}
if !last_branch_is_else {
// Final branch was not an "else", which means it's possible we took zero
// branches in the entire if/elif chain, so we need one more phi node to join
// the "no branches taken" possibility.
post_prior_clause = self
.flow_graph_builder
.add_phi(post_prior_clause, prior_branch);
}
// Onward, with current flow node set to our final Phi node.
self.set_current_flow_node(post_prior_clause);
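// As a concrete sketch, `if a: ... elif b: ... else: ...` produces Branch(if) off the
// current node and Branch(elif) off Branch(if), with each clause body hanging off its
// branch node (the else body off Branch(elif) directly). The if and elif clause exits
// are then joined by a Phi node, which a second Phi joins with the else clause exit;
// because the chain ends in `else`, no extra Phi for the "no branch taken" path is added.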
}
_ => {
ast::visitor::preorder::walk_stmt(self, stmt);
}
}
}
}
#[derive(Debug, Default)]
pub struct SemanticIndexStorage(KeyValueCache<FileId, Arc<SemanticIndex>>);
impl Deref for SemanticIndexStorage {
type Target = KeyValueCache<FileId, Arc<SemanticIndex>>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for SemanticIndexStorage {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
#[cfg(test)]
mod tests {
use crate::semantic::symbol_table::{Symbol, SymbolIterator};
use ruff_python_ast as ast;
use ruff_python_ast::ModModule;
use ruff_python_parser::{Mode, Parsed};
use super::{Definition, ScopeKind, SemanticIndex, SymbolId};
fn parse(code: &str) -> Parsed<ModModule> {
ruff_python_parser::parse_unchecked(code, Mode::Module)
.try_into_module()
.unwrap()
}
fn names<I>(it: SymbolIterator<I>) -> Vec<&str>
where
I: Iterator<Item = SymbolId>,
{
let mut symbols: Vec<_> = it.map(Symbol::name).collect();
symbols.sort_unstable();
symbols
}
#[test]
fn empty() {
let parsed = parse("");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()).len(), 0);
}
#[test]
fn simple() {
let parsed = parse("x");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("x").unwrap())
.len(),
0
);
}
#[test]
fn annotation_only() {
let parsed = parse("x: int");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["int", "x"]);
// TODO record definition
}
#[test]
fn import() {
let parsed = parse("import foo");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("foo").unwrap())
.len(),
1
);
}
#[test]
fn import_sub() {
let parsed = parse("import foo.bar");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo"]);
}
#[test]
fn import_as() {
let parsed = parse("import foo.bar as baz");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["baz"]);
}
#[test]
fn import_from() {
let parsed = parse("from bar import foo");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("foo").unwrap())
.len(),
1
);
assert!(
table.root_symbol_id_by_name("foo").is_some_and(|sid| {
let s = sid.symbol(&table);
s.is_defined() || !s.is_used()
}),
"symbols that are defined get the defined flag"
);
}
#[test]
fn assign() {
let parsed = parse("x = foo");
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["foo", "x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("x").unwrap())
.len(),
1
);
assert!(
table.root_symbol_id_by_name("foo").is_some_and(|sid| {
let s = sid.symbol(&table);
!s.is_defined() && s.is_used()
}),
"a symbol used but not defined in a scope should have only the used flag"
);
}
#[test]
fn class_scope() {
let parsed = parse(
"
class C:
    x = 1
y = 2
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["C", "y"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let c_scope = scopes[0].scope(&table);
assert_eq!(c_scope.kind(), ScopeKind::Class);
assert_eq!(c_scope.name(), "C");
assert_eq!(names(table.symbols_for_scope(scopes[0])), vec!["x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("C").unwrap())
.len(),
1
);
}
#[test]
fn func_scope() {
let parsed = parse(
"
def func():
    x = 1
y = 2
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["func", "y"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let func_scope = scopes[0].scope(&table);
assert_eq!(func_scope.kind(), ScopeKind::Function);
assert_eq!(func_scope.name(), "func");
assert_eq!(names(table.symbols_for_scope(scopes[0])), vec!["x"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("func").unwrap())
.len(),
1
);
}
#[test]
fn dupes() {
let parsed = parse(
"
def func():
    x = 1
def func():
    y = 2
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["func"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 2);
let func_scope_1 = scopes[0].scope(&table);
let func_scope_2 = scopes[1].scope(&table);
assert_eq!(func_scope_1.kind(), ScopeKind::Function);
assert_eq!(func_scope_1.name(), "func");
assert_eq!(func_scope_2.kind(), ScopeKind::Function);
assert_eq!(func_scope_2.name(), "func");
assert_eq!(names(table.symbols_for_scope(scopes[0])), vec!["x"]);
assert_eq!(names(table.symbols_for_scope(scopes[1])), vec!["y"]);
assert_eq!(
table
.definitions(table.root_symbol_id_by_name("func").unwrap())
.len(),
2
);
}
#[test]
fn generic_func() {
let parsed = parse(
"
def func[T]():
    x = 1
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["func"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let ann_scope_id = scopes[0];
let ann_scope = ann_scope_id.scope(&table);
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope.name(), "func");
assert_eq!(names(table.symbols_for_scope(ann_scope_id)), vec!["T"]);
let scopes = table.child_scope_ids_of(ann_scope_id);
assert_eq!(scopes.len(), 1);
let func_scope_id = scopes[0];
let func_scope = func_scope_id.scope(&table);
assert_eq!(func_scope.kind(), ScopeKind::Function);
assert_eq!(func_scope.name(), "func");
assert_eq!(names(table.symbols_for_scope(func_scope_id)), vec!["x"]);
}
#[test]
fn generic_class() {
let parsed = parse(
"
class C[T]:
    x = 1
",
);
let table = SemanticIndex::from_ast(parsed.syntax()).symbol_table;
assert_eq!(names(table.root_symbols()), vec!["C"]);
let scopes = table.root_child_scope_ids();
assert_eq!(scopes.len(), 1);
let ann_scope_id = scopes[0];
let ann_scope = ann_scope_id.scope(&table);
assert_eq!(ann_scope.kind(), ScopeKind::Annotation);
assert_eq!(ann_scope.name(), "C");
assert_eq!(names(table.symbols_for_scope(ann_scope_id)), vec!["T"]);
assert!(
table
.symbol_by_name(ann_scope_id, "T")
.is_some_and(|s| s.is_defined() && !s.is_used()),
"type parameters are defined by the scope that introduces them"
);
let scopes = table.child_scope_ids_of(ann_scope_id);
assert_eq!(scopes.len(), 1);
let func_scope_id = scopes[0];
let func_scope = func_scope_id.scope(&table);
assert_eq!(func_scope.kind(), ScopeKind::Class);
assert_eq!(func_scope.name(), "C");
assert_eq!(names(table.symbols_for_scope(func_scope_id)), vec!["x"]);
}
#[test]
fn reachability_trivial() {
let parsed = parse("x = 1; x");
let ast = parsed.syntax();
let index = SemanticIndex::from_ast(ast);
let table = &index.symbol_table;
let x_sym = table
.root_symbol_id_by_name("x")
.expect("x symbol should exist");
let ast::Stmt::Expr(ast::StmtExpr { value: x_use, .. }) = &ast.body[1] else {
panic!("should be an expr")
};
let x_defs: Vec<_> = index.reachable_definitions(x_sym, x_use).collect();
assert_eq!(x_defs.len(), 1);
let Definition::Assignment(node_key) = &x_defs[0] else {
panic!("def should be an assignment")
};
let Some(def_node) = node_key.resolve(ast.into()) else {
panic!("node key should resolve")
};
let ast::Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(num),
..
}) = &*def_node.value
else {
panic!("should be a number literal")
};
assert_eq!(*num, 1);
}
}


@@ -0,0 +1,204 @@
use super::symbol_table::{Definition, SymbolId};
use crate::ast_ids::NodeKey;
use ruff_index::{newtype_index, IndexVec};
use ruff_python_ast as ast;
use rustc_hash::FxHashMap;
use std::iter::FusedIterator;
#[newtype_index]
pub struct FlowNodeId;
#[derive(Debug)]
pub(crate) enum FlowNode {
Start,
Definition(DefinitionFlowNode),
Branch(BranchFlowNode),
Phi(PhiFlowNode),
}
/// A Definition node represents a point in control flow where a symbol is defined
#[derive(Debug)]
pub(crate) struct DefinitionFlowNode {
symbol_id: SymbolId,
definition: Definition,
predecessor: FlowNodeId,
}
/// A Branch node represents a branch in control flow
#[derive(Debug)]
pub(crate) struct BranchFlowNode {
predecessor: FlowNodeId,
}
/// A Phi node represents a join point where control flow paths come together
#[derive(Debug)]
pub(crate) struct PhiFlowNode {
first_predecessor: FlowNodeId,
second_predecessor: FlowNodeId,
}
#[derive(Debug)]
pub struct FlowGraph {
flow_nodes_by_id: IndexVec<FlowNodeId, FlowNode>,
ast_to_flow: FxHashMap<NodeKey, FlowNodeId>,
}
impl FlowGraph {
pub fn start() -> FlowNodeId {
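// `FlowGraphBuilder::new` always pushes `FlowNode::Start` first, so index 0 is the
// start node of every graph.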
FlowNodeId::from_usize(0)
}
pub fn for_expr(&self, expr: &ast::Expr) -> FlowNodeId {
let node_key = NodeKey::from_node(expr.into());
self.ast_to_flow[&node_key]
}
}
#[derive(Debug)]
pub(crate) struct FlowGraphBuilder {
flow_graph: FlowGraph,
}
impl FlowGraphBuilder {
pub(crate) fn new() -> Self {
let mut graph = FlowGraph {
flow_nodes_by_id: IndexVec::default(),
ast_to_flow: FxHashMap::default(),
};
graph.flow_nodes_by_id.push(FlowNode::Start);
Self { flow_graph: graph }
}
pub(crate) fn add(&mut self, node: FlowNode) -> FlowNodeId {
self.flow_graph.flow_nodes_by_id.push(node)
}
pub(crate) fn add_definition(
&mut self,
symbol_id: SymbolId,
definition: Definition,
predecessor: FlowNodeId,
) -> FlowNodeId {
self.add(FlowNode::Definition(DefinitionFlowNode {
symbol_id,
definition,
predecessor,
}))
}
pub(crate) fn add_branch(&mut self, predecessor: FlowNodeId) -> FlowNodeId {
self.add(FlowNode::Branch(BranchFlowNode { predecessor }))
}
pub(crate) fn add_phi(
&mut self,
first_predecessor: FlowNodeId,
second_predecessor: FlowNodeId,
) -> FlowNodeId {
self.add(FlowNode::Phi(PhiFlowNode {
first_predecessor,
second_predecessor,
}))
}
pub(crate) fn record_expr(&mut self, expr: &ast::Expr, node_id: FlowNodeId) {
self.flow_graph
.ast_to_flow
.insert(NodeKey::from_node(expr.into()), node_id);
}
pub(crate) fn finish(self) -> FlowGraph {
self.flow_graph
}
}
#[derive(Debug)]
pub struct ReachableDefinitionsIterator<'a> {
flow_graph: &'a FlowGraph,
symbol_id: SymbolId,
pending: Vec<FlowNodeId>,
}
impl<'a> ReachableDefinitionsIterator<'a> {
pub fn new(flow_graph: &'a FlowGraph, symbol_id: SymbolId, start_node_id: FlowNodeId) -> Self {
Self {
flow_graph,
symbol_id,
pending: vec![start_node_id],
}
}
}
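// The iterator walks the flow graph backwards from the starting flow node: a Definition
// of the queried symbol is yielded and that path ends there, a Branch follows its single
// predecessor, a Phi fans the search out into both predecessors, and reaching Start
// yields `Definition::None` (no definition on that path).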
impl<'a> Iterator for ReachableDefinitionsIterator<'a> {
type Item = Definition;
fn next(&mut self) -> Option<Self::Item> {
loop {
let flow_node_id = self.pending.pop()?;
match &self.flow_graph.flow_nodes_by_id[flow_node_id] {
FlowNode::Start => return Some(Definition::None),
FlowNode::Definition(def_node) => {
if def_node.symbol_id == self.symbol_id {
return Some(def_node.definition.clone());
}
self.pending.push(def_node.predecessor);
}
FlowNode::Branch(branch_node) => {
self.pending.push(branch_node.predecessor);
}
FlowNode::Phi(phi_node) => {
self.pending.push(phi_node.first_predecessor);
self.pending.push(phi_node.second_predecessor);
}
}
}
}
}
impl<'a> FusedIterator for ReachableDefinitionsIterator<'a> {}
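// The Display impl below renders the flow graph in Mermaid flowchart syntax
// ("flowchart TD"), so the output can be pasted into a Mermaid viewer; a definition
// node, for example, comes out roughly as:
//   id1(Define symbol 0)
//     id0-->id1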
impl std::fmt::Display for FlowGraph {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
writeln!(f, "flowchart TD")?;
for (id, node) in self.flow_nodes_by_id.iter_enumerated() {
write!(f, " id{}", id.as_u32())?;
match node {
FlowNode::Start => writeln!(f, r"[\Start/]")?,
FlowNode::Definition(def_node) => {
writeln!(f, r"(Define symbol {})", def_node.symbol_id.as_u32())?;
writeln!(
f,
r" id{}-->id{}",
def_node.predecessor.as_u32(),
id.as_u32()
)?;
}
FlowNode::Branch(branch_node) => {
writeln!(f, r"{{Branch}}")?;
writeln!(
f,
r" id{}-->id{}",
branch_node.predecessor.as_u32(),
id.as_u32()
)?;
}
FlowNode::Phi(phi_node) => {
writeln!(f, r"((Phi))")?;
writeln!(
f,
r" id{}-->id{}",
phi_node.second_predecessor.as_u32(),
id.as_u32()
)?;
writeln!(
f,
r" id{}-->id{}",
phi_node.first_predecessor.as_u32(),
id.as_u32()
)?;
}
}
}
Ok(())
}
}


@@ -0,0 +1,583 @@
#![allow(dead_code)]
use std::hash::{Hash, Hasher};
use std::iter::{Copied, DoubleEndedIterator, FusedIterator};
use std::num::NonZeroU32;
use bitflags::bitflags;
use hashbrown::hash_map::{Keys, RawEntryMut};
use rustc_hash::{FxHashMap, FxHasher};
use ruff_index::{newtype_index, IndexVec};
use ruff_python_ast as ast;
use crate::ast_ids::{NodeKey, TypedNodeKey};
use crate::module::ModuleName;
use crate::Name;
type Map<K, V> = hashbrown::HashMap<K, V, ()>;
#[newtype_index]
pub struct ScopeId;
impl ScopeId {
pub fn scope(self, table: &SymbolTable) -> &Scope {
&table.scopes_by_id[self]
}
}
#[newtype_index]
pub struct SymbolId;
impl SymbolId {
pub fn symbol(self, table: &SymbolTable) -> &Symbol {
&table.symbols_by_id[self]
}
}
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum ScopeKind {
Module,
Annotation,
Class,
Function,
}
#[derive(Debug)]
pub struct Scope {
name: Name,
kind: ScopeKind,
parent: Option<ScopeId>,
children: Vec<ScopeId>,
/// the definition (e.g. class or function) that created this scope
definition: Option<Definition>,
/// the symbol (e.g. class or function) that owns this scope
defining_symbol: Option<SymbolId>,
/// symbol IDs, hashed by symbol name
symbols_by_name: Map<SymbolId, ()>,
}
impl Scope {
pub fn name(&self) -> &str {
self.name.as_str()
}
pub fn kind(&self) -> ScopeKind {
self.kind
}
pub fn definition(&self) -> Option<Definition> {
self.definition.clone()
}
pub fn defining_symbol(&self) -> Option<SymbolId> {
self.defining_symbol
}
}
#[derive(Debug)]
pub(crate) enum Kind {
FreeVar,
CellVar,
CellVarAssigned,
ExplicitGlobal,
ImplicitGlobal,
}
bitflags! {
#[derive(Copy,Clone,Debug)]
pub struct SymbolFlags: u8 {
const IS_USED = 1 << 0;
const IS_DEFINED = 1 << 1;
/// TODO: This flag is not yet set by anything
const MARKED_GLOBAL = 1 << 2;
/// TODO: This flag is not yet set by anything
const MARKED_NONLOCAL = 1 << 3;
}
}
#[derive(Debug)]
pub struct Symbol {
name: Name,
flags: SymbolFlags,
scope_id: ScopeId,
// kind: Kind,
}
impl Symbol {
pub fn name(&self) -> &str {
self.name.as_str()
}
pub fn scope_id(&self) -> ScopeId {
self.scope_id
}
/// Is the symbol used in its containing scope?
pub fn is_used(&self) -> bool {
self.flags.contains(SymbolFlags::IS_USED)
}
/// Is the symbol defined in its containing scope?
pub fn is_defined(&self) -> bool {
self.flags.contains(SymbolFlags::IS_DEFINED)
}
// TODO: implement Symbol.kind 2-pass analysis to categorize as: free-var, cell-var,
// explicit-global, implicit-global and implement Symbol.kind by modifying the preorder
// traversal code
}
// TODO storing TypedNodeKey for definitions means we have to search to find them again in the AST;
// this is at best O(log n). If looking up definitions is a bottleneck we should look for
// alternatives here.
// TODO intern Definitions in SymbolTable and reference using IDs?
#[derive(Clone, Debug)]
pub enum Definition {
// For the import cases, we don't need reference to any arbitrary AST subtrees (annotations,
// RHS), and referencing just the import statement node is imprecise (a single import statement
// can assign many symbols, we'd have to re-search for the one we care about), so we just copy
// the small amount of information we need from the AST.
Import(ImportDefinition),
ImportFrom(ImportFromDefinition),
ClassDef(TypedNodeKey<ast::StmtClassDef>),
FunctionDef(TypedNodeKey<ast::StmtFunctionDef>),
Assignment(TypedNodeKey<ast::StmtAssign>),
AnnotatedAssignment(TypedNodeKey<ast::StmtAnnAssign>),
None,
// TODO with statements, except handlers, function args...
}
#[derive(Clone, Debug)]
pub struct ImportDefinition {
pub module: ModuleName,
}
#[derive(Clone, Debug)]
pub struct ImportFromDefinition {
pub module: Option<ModuleName>,
pub name: Name,
pub level: u32,
}
impl ImportFromDefinition {
pub fn module(&self) -> Option<&ModuleName> {
self.module.as_ref()
}
pub fn name(&self) -> &Name {
&self.name
}
pub fn level(&self) -> u32 {
self.level
}
}
#[derive(Debug, Clone)]
pub enum Dependency {
Module(ModuleName),
Relative {
level: NonZeroU32,
module: Option<ModuleName>,
},
}
/// Table of all symbols in all scopes for a module.
#[derive(Debug)]
pub struct SymbolTable {
scopes_by_id: IndexVec<ScopeId, Scope>,
symbols_by_id: IndexVec<SymbolId, Symbol>,
/// the definitions for each symbol
defs: FxHashMap<SymbolId, Vec<Definition>>,
/// map of AST node (e.g. class/function def) to sub-scope it creates
scopes_by_node: FxHashMap<NodeKey, ScopeId>,
/// dependencies of this module
dependencies: Vec<Dependency>,
}
impl SymbolTable {
pub fn dependencies(&self) -> &[Dependency] {
&self.dependencies
}
pub const fn root_scope_id() -> ScopeId {
ScopeId::from_usize(0)
}
pub fn root_scope(&self) -> &Scope {
&self.scopes_by_id[SymbolTable::root_scope_id()]
}
pub fn symbol_ids_for_scope(&self, scope_id: ScopeId) -> Copied<Keys<SymbolId, ()>> {
self.scopes_by_id[scope_id].symbols_by_name.keys().copied()
}
pub fn symbols_for_scope(
&self,
scope_id: ScopeId,
) -> SymbolIterator<Copied<Keys<SymbolId, ()>>> {
SymbolIterator {
table: self,
ids: self.symbol_ids_for_scope(scope_id),
}
}
pub fn root_symbol_ids(&self) -> Copied<Keys<SymbolId, ()>> {
self.symbol_ids_for_scope(SymbolTable::root_scope_id())
}
pub fn root_symbols(&self) -> SymbolIterator<Copied<Keys<SymbolId, ()>>> {
self.symbols_for_scope(SymbolTable::root_scope_id())
}
pub fn child_scope_ids_of(&self, scope_id: ScopeId) -> &[ScopeId] {
&self.scopes_by_id[scope_id].children
}
pub fn child_scopes_of(&self, scope_id: ScopeId) -> ScopeIterator<&[ScopeId]> {
ScopeIterator {
table: self,
ids: self.child_scope_ids_of(scope_id),
}
}
pub fn root_child_scope_ids(&self) -> &[ScopeId] {
self.child_scope_ids_of(SymbolTable::root_scope_id())
}
pub fn root_child_scopes(&self) -> ScopeIterator<&[ScopeId]> {
self.child_scopes_of(SymbolTable::root_scope_id())
}
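/// Look up a symbol by name within `scope_id`. `symbols_by_name` only stores
/// `SymbolId`s (hashed by the symbol's name), so the lookup goes through the raw entry
/// API with `hash_name` and resolves names via `symbols_by_id`.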
pub fn symbol_id_by_name(&self, scope_id: ScopeId, name: &str) -> Option<SymbolId> {
let scope = &self.scopes_by_id[scope_id];
let hash = SymbolTable::hash_name(name);
let name = Name::new(name);
Some(
*scope
.symbols_by_name
.raw_entry()
.from_hash(hash, |symid| self.symbols_by_id[*symid].name == name)?
.0,
)
}
pub fn symbol_by_name(&self, scope_id: ScopeId, name: &str) -> Option<&Symbol> {
Some(&self.symbols_by_id[self.symbol_id_by_name(scope_id, name)?])
}
pub fn root_symbol_id_by_name(&self, name: &str) -> Option<SymbolId> {
self.symbol_id_by_name(SymbolTable::root_scope_id(), name)
}
pub fn root_symbol_by_name(&self, name: &str) -> Option<&Symbol> {
self.symbol_by_name(SymbolTable::root_scope_id(), name)
}
pub fn scope_id_of_symbol(&self, symbol_id: SymbolId) -> ScopeId {
self.symbols_by_id[symbol_id].scope_id
}
pub fn scope_of_symbol(&self, symbol_id: SymbolId) -> &Scope {
&self.scopes_by_id[self.scope_id_of_symbol(symbol_id)]
}
pub fn parent_scopes(
&self,
scope_id: ScopeId,
) -> ScopeIterator<impl Iterator<Item = ScopeId> + '_> {
ScopeIterator {
table: self,
ids: std::iter::successors(Some(scope_id), |scope| self.scopes_by_id[*scope].parent),
}
}
pub fn parent_scope(&self, scope_id: ScopeId) -> Option<ScopeId> {
self.scopes_by_id[scope_id].parent
}
pub fn scope_id_for_node(&self, node_key: &NodeKey) -> ScopeId {
self.scopes_by_node[node_key]
}
pub fn definitions(&self, symbol_id: SymbolId) -> &[Definition] {
self.defs
.get(&symbol_id)
.map(std::vec::Vec::as_slice)
.unwrap_or_default()
}
pub fn all_definitions(&self) -> impl Iterator<Item = (SymbolId, &Definition)> + '_ {
self.defs
.iter()
.flat_map(|(sym_id, defs)| defs.iter().map(move |def| (*sym_id, def)))
}
fn hash_name(name: &str) -> u64 {
let mut hasher = FxHasher::default();
name.hash(&mut hasher);
hasher.finish()
}
}
pub struct SymbolIterator<'a, I> {
table: &'a SymbolTable,
ids: I,
}
impl<'a, I> Iterator for SymbolIterator<'a, I>
where
I: Iterator<Item = SymbolId>,
{
type Item = &'a Symbol;
fn next(&mut self) -> Option<Self::Item> {
let id = self.ids.next()?;
Some(&self.table.symbols_by_id[id])
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.ids.size_hint()
}
}
impl<'a, I> FusedIterator for SymbolIterator<'a, I> where
I: Iterator<Item = SymbolId> + FusedIterator
{
}
impl<'a, I> DoubleEndedIterator for SymbolIterator<'a, I>
where
I: Iterator<Item = SymbolId> + DoubleEndedIterator,
{
fn next_back(&mut self) -> Option<Self::Item> {
let id = self.ids.next_back()?;
Some(&self.table.symbols_by_id[id])
}
}
// TODO maybe get rid of this and just do all data access via methods on ScopeId?
pub struct ScopeIterator<'a, I> {
table: &'a SymbolTable,
ids: I,
}
/// iterate (`ScopeId`, `Scope`) pairs for given `ScopeId` iterator
impl<'a, I> Iterator for ScopeIterator<'a, I>
where
I: Iterator<Item = ScopeId>,
{
type Item = (ScopeId, &'a Scope);
fn next(&mut self) -> Option<Self::Item> {
let id = self.ids.next()?;
Some((id, &self.table.scopes_by_id[id]))
}
fn size_hint(&self) -> (usize, Option<usize>) {
self.ids.size_hint()
}
}
impl<'a, I> FusedIterator for ScopeIterator<'a, I> where I: Iterator<Item = ScopeId> + FusedIterator {}
impl<'a, I> DoubleEndedIterator for ScopeIterator<'a, I>
where
I: Iterator<Item = ScopeId> + DoubleEndedIterator,
{
fn next_back(&mut self) -> Option<Self::Item> {
let id = self.ids.next_back()?;
Some((id, &self.table.scopes_by_id[id]))
}
}
#[derive(Debug)]
pub(crate) struct SymbolTableBuilder {
symbol_table: SymbolTable,
}
impl SymbolTableBuilder {
pub(crate) fn new() -> Self {
let mut table = SymbolTable {
scopes_by_id: IndexVec::new(),
symbols_by_id: IndexVec::new(),
defs: FxHashMap::default(),
scopes_by_node: FxHashMap::default(),
dependencies: Vec::new(),
};
table.scopes_by_id.push(Scope {
name: Name::new("<module>"),
kind: ScopeKind::Module,
parent: None,
children: Vec::new(),
definition: None,
defining_symbol: None,
symbols_by_name: Map::default(),
});
Self {
symbol_table: table,
}
}
pub(crate) fn finish(self) -> SymbolTable {
self.symbol_table
}
pub(crate) fn add_or_update_symbol(
&mut self,
scope_id: ScopeId,
name: &str,
flags: SymbolFlags,
) -> SymbolId {
let hash = SymbolTable::hash_name(name);
let scope = &mut self.symbol_table.scopes_by_id[scope_id];
let name = Name::new(name);
let entry = scope
.symbols_by_name
.raw_entry_mut()
.from_hash(hash, |existing| {
self.symbol_table.symbols_by_id[*existing].name == name
});
match entry {
RawEntryMut::Occupied(entry) => {
if let Some(symbol) = self.symbol_table.symbols_by_id.get_mut(*entry.key()) {
symbol.flags.insert(flags);
};
*entry.key()
}
RawEntryMut::Vacant(entry) => {
let id = self.symbol_table.symbols_by_id.push(Symbol {
name,
flags,
scope_id,
});
entry.insert_with_hasher(hash, id, (), |symid| {
SymbolTable::hash_name(&self.symbol_table.symbols_by_id[*symid].name)
});
id
}
}
}
pub(crate) fn add_definition(&mut self, symbol_id: SymbolId, definition: Definition) {
self.symbol_table
.defs
.entry(symbol_id)
.or_default()
.push(definition);
}
pub(crate) fn add_child_scope(
&mut self,
parent_scope_id: ScopeId,
name: &str,
kind: ScopeKind,
definition: Option<Definition>,
defining_symbol: Option<SymbolId>,
) -> ScopeId {
let new_scope_id = self.symbol_table.scopes_by_id.push(Scope {
name: Name::new(name),
kind,
parent: Some(parent_scope_id),
children: Vec::new(),
definition,
defining_symbol,
symbols_by_name: Map::default(),
});
let parent_scope = &mut self.symbol_table.scopes_by_id[parent_scope_id];
parent_scope.children.push(new_scope_id);
new_scope_id
}
pub(crate) fn record_scope_for_node(&mut self, node_key: NodeKey, scope_id: ScopeId) {
self.symbol_table.scopes_by_node.insert(node_key, scope_id);
}
pub(crate) fn add_dependency(&mut self, dependency: Dependency) {
self.symbol_table.dependencies.push(dependency);
}
}
#[cfg(test)]
mod tests {
use super::{ScopeKind, SymbolFlags, SymbolTable, SymbolTableBuilder};
#[test]
fn insert_same_name_symbol_twice() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let symbol_id_1 =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::IS_DEFINED);
let symbol_id_2 = builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::IS_USED);
let table = builder.finish();
assert_eq!(symbol_id_1, symbol_id_2);
assert!(symbol_id_1.symbol(&table).is_used(), "flags must merge");
assert!(symbol_id_1.symbol(&table).is_defined(), "flags must merge");
}
#[test]
fn insert_different_named_symbols() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let symbol_id_1 = builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
let symbol_id_2 = builder.add_or_update_symbol(root_scope_id, "bar", SymbolFlags::empty());
assert_ne!(symbol_id_1, symbol_id_2);
}
#[test]
fn add_child_scope_with_symbol() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let foo_symbol_top =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
let c_scope = builder.add_child_scope(root_scope_id, "C", ScopeKind::Class, None, None);
let foo_symbol_inner = builder.add_or_update_symbol(c_scope, "foo", SymbolFlags::empty());
assert_ne!(foo_symbol_top, foo_symbol_inner);
}
#[test]
fn scope_from_id() {
let table = SymbolTableBuilder::new().finish();
let root_scope_id = SymbolTable::root_scope_id();
let scope = root_scope_id.scope(&table);
assert_eq!(scope.name.as_str(), "<module>");
assert_eq!(scope.kind, ScopeKind::Module);
}
#[test]
fn symbol_from_id() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let foo_symbol_id =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
let table = builder.finish();
let symbol = foo_symbol_id.symbol(&table);
assert_eq!(symbol.name(), "foo");
}
#[test]
fn bigger_symbol_table() {
let mut builder = SymbolTableBuilder::new();
let root_scope_id = SymbolTable::root_scope_id();
let foo_symbol_id =
builder.add_or_update_symbol(root_scope_id, "foo", SymbolFlags::empty());
builder.add_or_update_symbol(root_scope_id, "bar", SymbolFlags::empty());
builder.add_or_update_symbol(root_scope_id, "baz", SymbolFlags::empty());
builder.add_or_update_symbol(root_scope_id, "qux", SymbolFlags::empty());
let table = builder.finish();
let foo_symbol_id_2 = table
.root_symbol_id_by_name("foo")
.expect("foo symbol to be found");
assert_eq!(foo_symbol_id_2, foo_symbol_id);
}
}


@@ -0,0 +1,823 @@
#![allow(dead_code)]
use crate::ast_ids::NodeKey;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::files::FileId;
use crate::module::{Module, ModuleName};
use crate::semantic::{
resolve_global_symbol, semantic_index, GlobalSymbolId, ScopeId, ScopeKind, SymbolId,
};
use crate::{FxDashMap, FxIndexSet, Name};
use ruff_index::{newtype_index, IndexVec};
use rustc_hash::FxHashMap;
pub(crate) mod infer;
pub(crate) use infer::{infer_definition_type, infer_symbol_public_type};
/// unique ID for a type
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub enum Type {
/// the dynamic or gradual type: a statically-unknown set of values
Any,
/// the empty set of values
Never,
/// unknown type (no annotation)
/// equivalent to Any, or to object in strict mode
Unknown,
/// name is not bound to any value
Unbound,
/// a specific function object
Function(FunctionTypeId),
/// a specific module object
Module(ModuleTypeId),
/// a specific class object
Class(ClassTypeId),
/// the set of Python objects with the given class in their __class__'s method resolution order
Instance(ClassTypeId),
Union(UnionTypeId),
Intersection(IntersectionTypeId),
IntLiteral(i64),
// TODO protocols, callable types, overloads, generics, type vars
}
impl Type {
fn display<'a>(&'a self, store: &'a TypeStore) -> DisplayType<'a> {
DisplayType { ty: self, store }
}
pub const fn is_unbound(&self) -> bool {
matches!(self, Type::Unbound)
}
pub const fn is_unknown(&self) -> bool {
matches!(self, Type::Unknown)
}
pub fn get_member(&self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
match self {
Type::Any => todo!("attribute lookup on Any type"),
Type::Never => todo!("attribute lookup on Never type"),
Type::Unknown => todo!("attribute lookup on Unknown type"),
Type::Unbound => todo!("attribute lookup on Unbound type"),
Type::Function(_) => todo!("attribute lookup on Function type"),
Type::Module(module_id) => module_id.get_member(db, name),
Type::Class(class_id) => class_id.get_class_member(db, name),
Type::Instance(_) => {
// TODO MRO? get_own_instance_member, get_instance_member
todo!("attribute lookup on Instance type")
}
Type::Union(union_id) => {
let jar: &SemanticJar = db.jar()?;
let _todo_union_ref = jar.type_store.get_union(*union_id);
// TODO perform the get_member on each type in the union
// TODO return the union of those results
// TODO if any of those results is `None` then include Unknown in the result union
todo!("attribute lookup on Union type")
}
Type::Intersection(_) => {
// TODO perform the get_member on each type in the intersection
// TODO return the intersection of those results
todo!("attribute lookup on Intersection type")
}
Type::IntLiteral(_) => {
// TODO raise error
Ok(Some(Type::Unknown))
}
}
}
}
impl From<FunctionTypeId> for Type {
fn from(id: FunctionTypeId) -> Self {
Type::Function(id)
}
}
impl From<UnionTypeId> for Type {
fn from(id: UnionTypeId) -> Self {
Type::Union(id)
}
}
impl From<IntersectionTypeId> for Type {
fn from(id: IntersectionTypeId) -> Self {
Type::Intersection(id)
}
}
// TODO: currently calling `get_function` et al and holding on to the `FunctionTypeRef` will lock a
// shard of this dashmap, for as long as you hold the reference. This may be a problem. We could
// switch to having all the arenas hold Arc, or we could see if we can split up ModuleTypeStore,
// and/or give it inner mutability and finer-grained internal locking.
#[derive(Debug, Default)]
pub struct TypeStore {
modules: FxDashMap<FileId, ModuleTypeStore>,
}
impl TypeStore {
pub fn remove_module(&mut self, file_id: FileId) {
self.modules.remove(&file_id);
}
pub fn cache_symbol_public_type(&self, symbol: GlobalSymbolId, ty: Type) {
self.add_or_get_module(symbol.file_id)
.symbol_types
.insert(symbol.symbol_id, ty);
}
pub fn cache_node_type(&self, file_id: FileId, node_key: NodeKey, ty: Type) {
self.add_or_get_module(file_id)
.node_types
.insert(node_key, ty);
}
pub fn get_cached_symbol_public_type(&self, symbol: GlobalSymbolId) -> Option<Type> {
self.try_get_module(symbol.file_id)?
.symbol_types
.get(&symbol.symbol_id)
.copied()
}
pub fn get_cached_node_type(&self, file_id: FileId, node_key: &NodeKey) -> Option<Type> {
self.try_get_module(file_id)?
.node_types
.get(node_key)
.copied()
}
fn add_or_get_module(&self, file_id: FileId) -> ModuleStoreRefMut {
self.modules
.entry(file_id)
.or_insert_with(|| ModuleTypeStore::new(file_id))
}
fn get_module(&self, file_id: FileId) -> ModuleStoreRef {
self.try_get_module(file_id).expect("module should exist")
}
fn try_get_module(&self, file_id: FileId) -> Option<ModuleStoreRef> {
self.modules.get(&file_id)
}
fn add_function(
&self,
file_id: FileId,
name: &str,
symbol_id: SymbolId,
scope_id: ScopeId,
decorators: Vec<Type>,
) -> FunctionTypeId {
self.add_or_get_module(file_id)
.add_function(name, symbol_id, scope_id, decorators)
}
fn add_class(
&self,
file_id: FileId,
name: &str,
scope_id: ScopeId,
bases: Vec<Type>,
) -> ClassTypeId {
self.add_or_get_module(file_id)
.add_class(name, scope_id, bases)
}
fn add_union(&self, file_id: FileId, elems: &[Type]) -> UnionTypeId {
self.add_or_get_module(file_id).add_union(elems)
}
fn add_intersection(
&self,
file_id: FileId,
positive: &[Type],
negative: &[Type],
) -> IntersectionTypeId {
self.add_or_get_module(file_id)
.add_intersection(positive, negative)
}
fn get_function(&self, id: FunctionTypeId) -> FunctionTypeRef {
FunctionTypeRef {
module_store: self.get_module(id.file_id),
function_id: id.func_id,
}
}
fn get_class(&self, id: ClassTypeId) -> ClassTypeRef {
ClassTypeRef {
module_store: self.get_module(id.file_id),
class_id: id.class_id,
}
}
fn get_union(&self, id: UnionTypeId) -> UnionTypeRef {
UnionTypeRef {
module_store: self.get_module(id.file_id),
union_id: id.union_id,
}
}
fn get_intersection(&self, id: IntersectionTypeId) -> IntersectionTypeRef {
IntersectionTypeRef {
module_store: self.get_module(id.file_id),
intersection_id: id.intersection_id,
}
}
}
type ModuleStoreRef<'a> = dashmap::mapref::one::Ref<
'a,
FileId,
ModuleTypeStore,
std::hash::BuildHasherDefault<rustc_hash::FxHasher>,
>;
type ModuleStoreRefMut<'a> = dashmap::mapref::one::RefMut<
'a,
FileId,
ModuleTypeStore,
std::hash::BuildHasherDefault<rustc_hash::FxHasher>,
>;
#[derive(Debug)]
pub(crate) struct FunctionTypeRef<'a> {
module_store: ModuleStoreRef<'a>,
function_id: ModuleFunctionTypeId,
}
impl<'a> std::ops::Deref for FunctionTypeRef<'a> {
type Target = FunctionType;
fn deref(&self) -> &Self::Target {
self.module_store.get_function(self.function_id)
}
}
#[derive(Debug)]
pub(crate) struct ClassTypeRef<'a> {
module_store: ModuleStoreRef<'a>,
class_id: ModuleClassTypeId,
}
impl<'a> std::ops::Deref for ClassTypeRef<'a> {
type Target = ClassType;
fn deref(&self) -> &Self::Target {
self.module_store.get_class(self.class_id)
}
}
#[derive(Debug)]
pub(crate) struct UnionTypeRef<'a> {
module_store: ModuleStoreRef<'a>,
union_id: ModuleUnionTypeId,
}
impl<'a> std::ops::Deref for UnionTypeRef<'a> {
type Target = UnionType;
fn deref(&self) -> &Self::Target {
self.module_store.get_union(self.union_id)
}
}
#[derive(Debug)]
pub(crate) struct IntersectionTypeRef<'a> {
module_store: ModuleStoreRef<'a>,
intersection_id: ModuleIntersectionTypeId,
}
impl<'a> std::ops::Deref for IntersectionTypeRef<'a> {
type Target = IntersectionType;
fn deref(&self) -> &Self::Target {
self.module_store.get_intersection(self.intersection_id)
}
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct FunctionTypeId {
file_id: FileId,
func_id: ModuleFunctionTypeId,
}
impl FunctionTypeId {
fn function(self, db: &dyn SemanticDb) -> QueryResult<FunctionTypeRef> {
let jar: &SemanticJar = db.jar()?;
Ok(jar.type_store.get_function(self))
}
pub(crate) fn name(self, db: &dyn SemanticDb) -> QueryResult<Name> {
Ok(self.function(db)?.name().into())
}
pub(crate) fn global_symbol(self, db: &dyn SemanticDb) -> QueryResult<GlobalSymbolId> {
Ok(GlobalSymbolId {
file_id: self.file(),
symbol_id: self.symbol(db)?,
})
}
pub(crate) fn file(self) -> FileId {
self.file_id
}
pub(crate) fn symbol(self, db: &dyn SemanticDb) -> QueryResult<SymbolId> {
let FunctionType { symbol_id, .. } = *self.function(db)?;
Ok(symbol_id)
}
pub(crate) fn get_containing_class(
self,
db: &dyn SemanticDb,
) -> QueryResult<Option<ClassTypeId>> {
let index = semantic_index(db, self.file_id)?;
let table = index.symbol_table();
let FunctionType { symbol_id, .. } = *self.function(db)?;
let scope_id = symbol_id.symbol(table).scope_id();
let scope = scope_id.scope(table);
if !matches!(scope.kind(), ScopeKind::Class) {
return Ok(None);
};
let Some(def) = scope.definition() else {
return Ok(None);
};
let Some(symbol_id) = scope.defining_symbol() else {
return Ok(None);
};
let Type::Class(class) = infer_definition_type(
db,
GlobalSymbolId {
file_id: self.file_id,
symbol_id,
},
def,
)?
else {
return Ok(None);
};
Ok(Some(class))
}
pub(crate) fn has_decorator(
self,
db: &dyn SemanticDb,
decorator_symbol: GlobalSymbolId,
) -> QueryResult<bool> {
for deco_ty in self.function(db)?.decorators() {
let Type::Function(deco_func) = deco_ty else {
continue;
};
if deco_func.global_symbol(db)? == decorator_symbol {
return Ok(true);
}
}
Ok(false)
}
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct ModuleTypeId {
module: Module,
file_id: FileId,
}
impl ModuleTypeId {
fn module(self, db: &dyn SemanticDb) -> QueryResult<ModuleStoreRef> {
let jar: &SemanticJar = db.jar()?;
Ok(jar.type_store.add_or_get_module(self.file_id).downgrade())
}
pub(crate) fn name(self, db: &dyn SemanticDb) -> QueryResult<ModuleName> {
self.module.name(db)
}
fn get_member(self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
if let Some(symbol_id) = resolve_global_symbol(db, self.module, name)? {
Ok(Some(infer_symbol_public_type(db, symbol_id)?))
} else {
Ok(None)
}
}
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct ClassTypeId {
file_id: FileId,
class_id: ModuleClassTypeId,
}
impl ClassTypeId {
fn class(self, db: &dyn SemanticDb) -> QueryResult<ClassTypeRef> {
let jar: &SemanticJar = db.jar()?;
Ok(jar.type_store.get_class(self))
}
pub(crate) fn name(self, db: &dyn SemanticDb) -> QueryResult<Name> {
Ok(self.class(db)?.name().into())
}
pub(crate) fn get_super_class_member(
self,
db: &dyn SemanticDb,
name: &Name,
) -> QueryResult<Option<Type>> {
// TODO we should linearize the MRO instead of doing this recursively
let class = self.class(db)?;
for base in class.bases() {
if let Type::Class(base) = base {
if let Some(own_member) = base.get_own_class_member(db, name)? {
return Ok(Some(own_member));
}
if let Some(base_member) = base.get_super_class_member(db, name)? {
return Ok(Some(base_member));
}
}
}
Ok(None)
}
fn get_own_class_member(self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
// TODO: this should distinguish instance-only members (e.g. `x: int`) and not return them
let ClassType { scope_id, .. } = *self.class(db)?;
let index = semantic_index(db, self.file_id)?;
if let Some(symbol_id) = index.symbol_table().symbol_id_by_name(scope_id, name) {
Ok(Some(infer_symbol_public_type(
db,
GlobalSymbolId {
file_id: self.file_id,
symbol_id,
},
)?))
} else {
Ok(None)
}
}
/// Get own class member or fall back to super-class member.
fn get_class_member(self, db: &dyn SemanticDb, name: &Name) -> QueryResult<Option<Type>> {
self.get_own_class_member(db, name)
.or_else(|_| self.get_super_class_member(db, name))
}
// TODO: get_own_instance_member, get_instance_member
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct UnionTypeId {
file_id: FileId,
union_id: ModuleUnionTypeId,
}
#[derive(Copy, Clone, Debug, Hash, Eq, PartialEq)]
pub struct IntersectionTypeId {
file_id: FileId,
intersection_id: ModuleIntersectionTypeId,
}
#[newtype_index]
struct ModuleFunctionTypeId;
#[newtype_index]
struct ModuleClassTypeId;
#[newtype_index]
struct ModuleUnionTypeId;
#[newtype_index]
struct ModuleIntersectionTypeId;
#[derive(Debug)]
struct ModuleTypeStore {
file_id: FileId,
/// arena of all function types defined in this module
functions: IndexVec<ModuleFunctionTypeId, FunctionType>,
/// arena of all class types defined in this module
classes: IndexVec<ModuleClassTypeId, ClassType>,
/// arena of all union types created in this module
unions: IndexVec<ModuleUnionTypeId, UnionType>,
/// arena of all intersection types created in this module
intersections: IndexVec<ModuleIntersectionTypeId, IntersectionType>,
/// cached public types of symbols in this module
symbol_types: FxHashMap<SymbolId, Type>,
/// cached types of AST nodes in this module
node_types: FxHashMap<NodeKey, Type>,
}
impl ModuleTypeStore {
fn new(file_id: FileId) -> Self {
Self {
file_id,
functions: IndexVec::default(),
classes: IndexVec::default(),
unions: IndexVec::default(),
intersections: IndexVec::default(),
symbol_types: FxHashMap::default(),
node_types: FxHashMap::default(),
}
}
fn add_function(
&mut self,
name: &str,
symbol_id: SymbolId,
scope_id: ScopeId,
decorators: Vec<Type>,
) -> FunctionTypeId {
let func_id = self.functions.push(FunctionType {
name: Name::new(name),
symbol_id,
scope_id,
decorators,
});
FunctionTypeId {
file_id: self.file_id,
func_id,
}
}
fn add_class(&mut self, name: &str, scope_id: ScopeId, bases: Vec<Type>) -> ClassTypeId {
let class_id = self.classes.push(ClassType {
name: Name::new(name),
scope_id,
// TODO: if no bases are given, that should imply [object]
bases,
});
ClassTypeId {
file_id: self.file_id,
class_id,
}
}
fn add_union(&mut self, elems: &[Type]) -> UnionTypeId {
let union_id = self.unions.push(UnionType {
elements: elems.iter().copied().collect(),
});
UnionTypeId {
file_id: self.file_id,
union_id,
}
}
fn add_intersection(&mut self, positive: &[Type], negative: &[Type]) -> IntersectionTypeId {
let intersection_id = self.intersections.push(IntersectionType {
positive: positive.iter().copied().collect(),
negative: negative.iter().copied().collect(),
});
IntersectionTypeId {
file_id: self.file_id,
intersection_id,
}
}
fn get_function(&self, func_id: ModuleFunctionTypeId) -> &FunctionType {
&self.functions[func_id]
}
fn get_class(&self, class_id: ModuleClassTypeId) -> &ClassType {
&self.classes[class_id]
}
fn get_union(&self, union_id: ModuleUnionTypeId) -> &UnionType {
&self.unions[union_id]
}
fn get_intersection(&self, intersection_id: ModuleIntersectionTypeId) -> &IntersectionType {
&self.intersections[intersection_id]
}
}
#[derive(Copy, Clone, Debug)]
struct DisplayType<'a> {
ty: &'a Type,
store: &'a TypeStore,
}
impl std::fmt::Display for DisplayType<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self.ty {
Type::Any => f.write_str("Any"),
Type::Never => f.write_str("Never"),
Type::Unknown => f.write_str("Unknown"),
Type::Unbound => f.write_str("Unbound"),
Type::Module(module_id) => {
// NOTE: something like this?: "<module 'module-name' from 'path-from-fileid'>"
todo!("{module_id:?}")
}
// TODO functions and classes should display using a fully qualified name
Type::Class(class_id) => {
f.write_str("Literal[")?;
f.write_str(self.store.get_class(*class_id).name())?;
f.write_str("]")
}
Type::Instance(class_id) => f.write_str(self.store.get_class(*class_id).name()),
Type::Function(func_id) => f.write_str(self.store.get_function(*func_id).name()),
Type::Union(union_id) => self
.store
.get_module(union_id.file_id)
.get_union(union_id.union_id)
.display(f, self.store),
Type::Intersection(int_id) => self
.store
.get_module(int_id.file_id)
.get_intersection(int_id.intersection_id)
.display(f, self.store),
Type::IntLiteral(n) => write!(f, "Literal[{n}]"),
}
}
}
#[derive(Debug)]
pub(crate) struct ClassType {
/// Name of the class at definition
name: Name,
/// `ScopeId` of the class body
scope_id: ScopeId,
/// Types of all class bases
bases: Vec<Type>,
}
impl ClassType {
fn name(&self) -> &str {
self.name.as_str()
}
fn bases(&self) -> &[Type] {
self.bases.as_slice()
}
}
#[derive(Debug)]
pub(crate) struct FunctionType {
/// name of the function at definition
name: Name,
/// symbol which this function is a definition of
symbol_id: SymbolId,
/// scope of this function's body
scope_id: ScopeId,
/// types of all decorators on this function
decorators: Vec<Type>,
}
impl FunctionType {
fn name(&self) -> &str {
self.name.as_str()
}
fn scope_id(&self) -> ScopeId {
self.scope_id
}
pub(crate) fn decorators(&self) -> &[Type] {
self.decorators.as_slice()
}
}
#[derive(Debug)]
pub(crate) struct UnionType {
// the union type includes values in any of these types
elements: FxIndexSet<Type>,
}
impl UnionType {
fn display(&self, f: &mut std::fmt::Formatter<'_>, store: &TypeStore) -> std::fmt::Result {
f.write_str("(")?;
let mut first = true;
for ty in &self.elements {
if !first {
f.write_str(" | ")?;
};
first = false;
write!(f, "{}", ty.display(store))?;
}
f.write_str(")")
}
}
// Negation types aren't expressible in annotations, and are most likely to arise from type
// narrowing along with intersections (e.g. `if not isinstance(...)`), so we represent them
// directly in intersections rather than as a separate type. This sacrifices some efficiency in the
// case where a Not appears outside an intersection (unclear when that could even happen, but we'd
// have to represent it as a single-element intersection if it did) in exchange for better
// efficiency in the within-intersection case.
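// For example, an intersection equivalent to `A & ~B` is stored with positive = {A} and
// negative = {B}, and displays as "(A & ~B)".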
#[derive(Debug)]
pub(crate) struct IntersectionType {
// the intersection type includes only values in all of these types
positive: FxIndexSet<Type>,
// the intersection type does not include any value in any of these types
negative: FxIndexSet<Type>,
}
impl IntersectionType {
fn display(&self, f: &mut std::fmt::Formatter<'_>, store: &TypeStore) -> std::fmt::Result {
f.write_str("(")?;
let mut first = true;
for (neg, ty) in self
.positive
.iter()
.map(|ty| (false, ty))
.chain(self.negative.iter().map(|ty| (true, ty)))
{
if !first {
f.write_str(" & ")?;
};
first = false;
if neg {
f.write_str("~")?;
};
write!(f, "{}", ty.display(store))?;
}
f.write_str(")")
}
}
#[cfg(test)]
mod tests {
use super::Type;
use std::path::Path;
use crate::files::Files;
use crate::semantic::symbol_table::SymbolTableBuilder;
use crate::semantic::{SymbolFlags, SymbolTable, TypeStore};
use crate::FxIndexSet;
#[test]
fn add_class() {
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let id = store.add_class(file_id, "C", SymbolTable::root_scope_id(), Vec::new());
assert_eq!(store.get_class(id).name(), "C");
let inst = Type::Instance(id);
assert_eq!(format!("{}", inst.display(&store)), "C");
}
#[test]
fn add_function() {
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let mut builder = SymbolTableBuilder::new();
let func_symbol = builder.add_or_update_symbol(
SymbolTable::root_scope_id(),
"func",
SymbolFlags::IS_DEFINED,
);
builder.finish();
let id = store.add_function(
file_id,
"func",
func_symbol,
SymbolTable::root_scope_id(),
vec![Type::Unknown],
);
assert_eq!(store.get_function(id).name(), "func");
assert_eq!(store.get_function(id).decorators(), vec![Type::Unknown]);
let func = Type::Function(id);
assert_eq!(format!("{}", func.display(&store)), "func");
}
#[test]
fn add_union() {
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let c1 = store.add_class(file_id, "C1", SymbolTable::root_scope_id(), Vec::new());
let c2 = store.add_class(file_id, "C2", SymbolTable::root_scope_id(), Vec::new());
let elems = vec![Type::Instance(c1), Type::Instance(c2)];
let id = store.add_union(file_id, &elems);
assert_eq!(
store.get_union(id).elements,
elems.into_iter().collect::<FxIndexSet<_>>()
);
let union = Type::Union(id);
assert_eq!(format!("{}", union.display(&store)), "(C1 | C2)");
}
#[test]
fn add_intersection() {
let store = TypeStore::default();
let files = Files::default();
let file_id = files.intern(Path::new("/foo"));
let c1 = store.add_class(file_id, "C1", SymbolTable::root_scope_id(), Vec::new());
let c2 = store.add_class(file_id, "C2", SymbolTable::root_scope_id(), Vec::new());
let c3 = store.add_class(file_id, "C3", SymbolTable::root_scope_id(), Vec::new());
let pos = vec![Type::Instance(c1), Type::Instance(c2)];
let neg = vec![Type::Instance(c3)];
let id = store.add_intersection(file_id, &pos, &neg);
assert_eq!(
store.get_intersection(id).positive,
pos.into_iter().collect::<FxIndexSet<_>>()
);
assert_eq!(
store.get_intersection(id).negative,
neg.into_iter().collect::<FxIndexSet<_>>()
);
let intersection = Type::Intersection(id);
assert_eq!(
format!("{}", intersection.display(&store)),
"(C1 & C2 & ~C3)"
);
}
}


@@ -0,0 +1,491 @@
#![allow(dead_code)]
use ruff_python_ast as ast;
use ruff_python_ast::AstNode;
use std::fmt::Debug;
use crate::db::{QueryResult, SemanticDb, SemanticJar};
use crate::module::{resolve_module, ModuleName};
use crate::parse::parse;
use crate::semantic::types::{ModuleTypeId, Type};
use crate::semantic::{
resolve_global_symbol, semantic_index, Definition, GlobalSymbolId, ImportDefinition,
ImportFromDefinition,
};
use crate::{FileId, Name};
// FIXME: Figure out proper dead-lock free synchronisation now that this takes `&db` instead of `&mut db`.
/// Resolve the public-facing type for a symbol (the type seen by other scopes: other modules, or
/// nested functions). Because calls to nested functions and imports can occur anywhere in control
/// flow, this type must be conservative and consider all definitions of the symbol that could
/// possibly be seen by another scope. Currently we take the most conservative approach, which is
/// the union of all definitions. We may be able to narrow this in future to eliminate definitions
/// which can't possibly (or at least likely) be seen by any other scope, so that e.g. we could
/// infer `Literal["1"]` instead of `Literal[1] | Literal["1"]` for `x` in `x = 1; x = str(x);`.
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_symbol_public_type(db: &dyn SemanticDb, symbol: GlobalSymbolId) -> QueryResult<Type> {
let index = semantic_index(db, symbol.file_id)?;
let defs = index.symbol_table().definitions(symbol.symbol_id).to_vec();
let jar: &SemanticJar = db.jar()?;
if let Some(ty) = jar.type_store.get_cached_symbol_public_type(symbol) {
return Ok(ty);
}
let ty = infer_type_from_definitions(db, symbol, defs.iter().cloned())?;
jar.type_store.cache_symbol_public_type(symbol, ty);
// TODO record dependencies
Ok(ty)
}
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_type_from_definitions<T>(
db: &dyn SemanticDb,
symbol: GlobalSymbolId,
definitions: T,
) -> QueryResult<Type>
where
T: Debug + Iterator<Item = Definition>,
{
let jar: &SemanticJar = db.jar()?;
let mut tys = definitions
.map(|def| infer_definition_type(db, symbol, def.clone()))
.peekable();
if let Some(first) = tys.next() {
if tys.peek().is_some() {
Ok(Type::Union(jar.type_store.add_union(
symbol.file_id,
&Iterator::chain([first].into_iter(), tys).collect::<QueryResult<Vec<_>>>()?,
)))
} else {
first
}
} else {
Ok(Type::Unknown)
}
}
#[tracing::instrument(level = "trace", skip(db))]
pub fn infer_definition_type(
db: &dyn SemanticDb,
symbol: GlobalSymbolId,
definition: Definition,
) -> QueryResult<Type> {
let jar: &SemanticJar = db.jar()?;
let type_store = &jar.type_store;
let file_id = symbol.file_id;
match definition {
Definition::None => Ok(Type::Unbound),
Definition::Import(ImportDefinition {
module: module_name,
}) => {
if let Some(module) = resolve_module(db, module_name.clone())? {
Ok(Type::Module(ModuleTypeId { module, file_id }))
} else {
Ok(Type::Unknown)
}
}
Definition::ImportFrom(ImportFromDefinition {
module,
name,
level,
}) => {
// TODO relative imports
assert!(matches!(level, 0));
let module_name = ModuleName::new(module.as_ref().expect("TODO relative imports"));
let Some(module) = resolve_module(db, module_name.clone())? else {
return Ok(Type::Unknown);
};
if let Some(remote_symbol) = resolve_global_symbol(db, module, &name)? {
infer_symbol_public_type(db, remote_symbol)
} else {
Ok(Type::Unknown)
}
}
Definition::ClassDef(node_key) => {
if let Some(ty) = type_store.get_cached_node_type(file_id, node_key.erased()) {
Ok(ty)
} else {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let index = semantic_index(db, file_id)?;
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
let mut bases = Vec::with_capacity(node.bases().len());
for base in node.bases() {
bases.push(infer_expr_type(db, file_id, base)?);
}
let scope_id = index.symbol_table().scope_id_for_node(node_key.erased());
let ty = Type::Class(type_store.add_class(file_id, &node.name.id, scope_id, bases));
type_store.cache_node_type(file_id, *node_key.erased(), ty);
Ok(ty)
}
}
Definition::FunctionDef(node_key) => {
if let Some(ty) = type_store.get_cached_node_type(file_id, node_key.erased()) {
Ok(ty)
} else {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let index = semantic_index(db, file_id)?;
let node = node_key
.resolve(ast.as_any_node_ref())
.expect("node key should resolve");
let decorator_tys = node
.decorator_list
.iter()
.map(|decorator| infer_expr_type(db, file_id, &decorator.expression))
.collect::<QueryResult<_>>()?;
let scope_id = index.symbol_table().scope_id_for_node(node_key.erased());
let ty = type_store
.add_function(
file_id,
&node.name.id,
symbol.symbol_id,
scope_id,
decorator_tys,
)
.into();
type_store.cache_node_type(file_id, *node_key.erased(), ty);
Ok(ty)
}
}
Definition::Assignment(node_key) => {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
// TODO handle unpacking assignment correctly (here and for AnnotatedAssignment case, below)
infer_expr_type(db, file_id, &node.value)
}
Definition::AnnotatedAssignment(node_key) => {
let parsed = parse(db.upcast(), file_id)?;
let ast = parsed.syntax();
let node = node_key.resolve_unwrap(ast.as_any_node_ref());
// TODO actually look at the annotation
let Some(value) = &node.value else {
return Ok(Type::Unknown);
};
// TODO handle unpacking assignment correctly (here and for Assignment case, above)
infer_expr_type(db, file_id, value)
}
}
}
fn infer_expr_type(db: &dyn SemanticDb, file_id: FileId, expr: &ast::Expr) -> QueryResult<Type> {
// TODO cache the resolution of the type on the node
let index = semantic_index(db, file_id)?;
match expr {
ast::Expr::NumberLiteral(ast::ExprNumberLiteral { value, .. }) => {
match value {
ast::Number::Int(n) => {
// TODO support big int literals
Ok(n.as_i64().map(Type::IntLiteral).unwrap_or(Type::Unknown))
}
// TODO builtins.float or builtins.complex
_ => Ok(Type::Unknown),
}
}
ast::Expr::Name(name) => {
// TODO look up in the correct scope, don't assume global
if let Some(symbol_id) = index.symbol_table().root_symbol_id_by_name(&name.id) {
// TODO should use only reachable definitions, not public type
infer_type_from_definitions(
db,
GlobalSymbolId { file_id, symbol_id },
index.reachable_definitions(symbol_id, expr),
)
} else {
Ok(Type::Unknown)
}
}
ast::Expr::Attribute(ast::ExprAttribute { value, attr, .. }) => {
let value_type = infer_expr_type(db, file_id, value)?;
let attr_name = &Name::new(&attr.id);
value_type
.get_member(db, attr_name)
.map(|ty| ty.unwrap_or(Type::Unknown))
}
_ => todo!("full expression type resolution"),
}
}
#[cfg(test)]
mod tests {
use crate::db::tests::TestDb;
use crate::db::{HasJar, SemanticJar};
use crate::module::{
resolve_module, set_module_search_paths, ModuleName, ModuleSearchPath, ModuleSearchPathKind,
};
use crate::semantic::{infer_symbol_public_type, resolve_global_symbol, Type};
use crate::Name;
// TODO with virtual filesystem we shouldn't have to write files to disk for these
// tests
struct TestCase {
temp_dir: tempfile::TempDir,
db: TestDb,
src: ModuleSearchPath,
}
fn create_test() -> std::io::Result<TestCase> {
let temp_dir = tempfile::tempdir()?;
let src = temp_dir.path().join("src");
std::fs::create_dir(&src)?;
let src = ModuleSearchPath::new(src.canonicalize()?, ModuleSearchPathKind::FirstParty);
let roots = vec![src.clone()];
let mut db = TestDb::default();
set_module_search_paths(&mut db, roots);
Ok(TestCase { temp_dir, db, src })
}
fn write_to_path(case: &TestCase, relative_path: &str, contents: &str) -> anyhow::Result<()> {
let path = case.src.path().join(relative_path);
std::fs::write(path, contents)?;
Ok(())
}
fn get_public_type(
case: &TestCase,
module_name: &str,
variable_name: &str,
) -> anyhow::Result<Type> {
let db = &case.db;
let module = resolve_module(db, ModuleName::new(module_name))?.expect("Module to exist");
let symbol = resolve_global_symbol(db, module, variable_name)?.expect("symbol to exist");
Ok(infer_symbol_public_type(db, symbol)?)
}
fn assert_public_type(
case: &TestCase,
module_name: &str,
variable_name: &str,
type_name: &str,
) -> anyhow::Result<()> {
let ty = get_public_type(case, module_name, variable_name)?;
let jar = HasJar::<SemanticJar>::jar(&case.db)?;
assert_eq!(format!("{}", ty.display(&jar.type_store)), type_name);
Ok(())
}
#[test]
fn follow_import_to_class() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "from b import C as D; E = D")?;
write_to_path(&case, "b.py", "class C: pass")?;
assert_public_type(&case, "a", "E", "Literal[C]")
}
#[test]
fn resolve_base_class_by_name() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"mod.py",
"
class Base: pass
class Sub(Base): pass
",
)?;
let ty = get_public_type(&case, "mod", "Sub")?;
let Type::Class(class_id) = ty else {
panic!("Sub is not a Class")
};
let jar = HasJar::<SemanticJar>::jar(&case.db)?;
let base_names: Vec<_> = jar
.type_store
.get_class(class_id)
.bases()
.iter()
.map(|base_ty| format!("{}", base_ty.display(&jar.type_store)))
.collect();
assert_eq!(base_names, vec!["Literal[Base]"]);
Ok(())
}
#[test]
fn resolve_method() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"mod.py",
"
class C:
def f(self): pass
",
)?;
let ty = get_public_type(&case, "mod", "C")?;
let Type::Class(class_id) = ty else {
panic!("C is not a Class");
};
let member_ty = class_id
.get_own_class_member(&case.db, &Name::new("f"))
.expect("C.f to resolve");
let Some(Type::Function(func_id)) = member_ty else {
panic!("C.f is not a Function");
};
let jar = HasJar::<SemanticJar>::jar(&case.db)?;
let function = jar.type_store.get_function(func_id);
assert_eq!(function.name(), "f");
Ok(())
}
#[test]
fn resolve_module_member() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "import b; D = b.C")?;
write_to_path(&case, "b.py", "class C: pass")?;
assert_public_type(&case, "a", "D", "Literal[C]")
}
#[test]
fn resolve_literal() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "x = 1")?;
assert_public_type(&case, "a", "x", "Literal[1]")
}
#[test]
fn resolve_union() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
if flag:
x = 1
else:
x = 2
",
)?;
assert_public_type(&case, "a", "x", "(Literal[1] | Literal[2])")
}
#[test]
fn resolve_visible_def() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(&case, "a.py", "y = 1; y = 2; x = y")?;
assert_public_type(&case, "a", "x", "Literal[2]")
}
#[test]
fn join_paths() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
y = 1
y = 2
if flag:
y = 3
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Literal[2] | Literal[3])")
}
#[test]
fn maybe_unbound() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
if flag:
y = 1
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Unbound | Literal[1])")
}
#[test]
fn if_elif_else() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
y = 1
y = 2
if flag:
y = 3
elif flag2:
y = 4
else:
r = y
y = 5
s = y
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Literal[3] | Literal[4] | Literal[5])")?;
assert_public_type(&case, "a", "r", "Literal[2]")?;
assert_public_type(&case, "a", "s", "Literal[5]")
}
#[test]
fn if_elif() -> anyhow::Result<()> {
let case = create_test()?;
write_to_path(
&case,
"a.py",
"
y = 1
y = 2
if flag:
y = 3
elif flag2:
y = 4
x = y
",
)?;
assert_public_type(&case, "a", "x", "(Literal[2] | Literal[3] | Literal[4])")
}
}


@@ -0,0 +1,105 @@
use std::ops::{Deref, DerefMut};
use std::sync::Arc;
use ruff_notebook::Notebook;
use ruff_python_ast::PySourceType;
use crate::cache::KeyValueCache;
use crate::db::{QueryResult, SourceDb};
use crate::files::FileId;
#[tracing::instrument(level = "debug", skip(db))]
pub(crate) fn source_text(db: &dyn SourceDb, file_id: FileId) -> QueryResult<Source> {
let jar = db.jar()?;
let sources = &jar.sources;
sources.get(&file_id, |file_id| {
let path = db.file_path(*file_id);
let source_text = std::fs::read_to_string(&path).unwrap_or_else(|err| {
tracing::error!("Failed to read file '{path:?}': {err}. Falling back to empty text");
String::new()
});
let python_ty = PySourceType::from(&path);
let kind = match python_ty {
PySourceType::Python => {
SourceKind::Python(Arc::from(source_text))
}
PySourceType::Stub => SourceKind::Stub(Arc::from(source_text)),
PySourceType::Ipynb => {
let notebook = Notebook::from_source_code(&source_text).unwrap_or_else(|err| {
// TODO should this be changed to never fail?
// or should we instead add a diagnostic somewhere? But what would we return in this case?
tracing::error!(
"Failed to parse notebook '{path:?}: {err}'. Falling back to an empty notebook"
);
Notebook::from_source_code("").unwrap()
});
SourceKind::IpyNotebook(Arc::new(notebook))
}
};
Ok(Source { kind })
})
}
#[derive(Debug, Clone, PartialEq)]
pub enum SourceKind {
Python(Arc<str>),
Stub(Arc<str>),
IpyNotebook(Arc<Notebook>),
}
impl<'a> From<&'a SourceKind> for PySourceType {
fn from(value: &'a SourceKind) -> Self {
match value {
SourceKind::Python(_) => PySourceType::Python,
SourceKind::Stub(_) => PySourceType::Stub,
SourceKind::IpyNotebook(_) => PySourceType::Ipynb,
}
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct Source {
kind: SourceKind,
}
impl Source {
pub fn python<T: Into<Arc<str>>>(source: T) -> Self {
Self {
kind: SourceKind::Python(source.into()),
}
}
pub fn kind(&self) -> &SourceKind {
&self.kind
}
pub fn text(&self) -> &str {
match &self.kind {
SourceKind::Python(text) => text,
SourceKind::Stub(text) => text,
SourceKind::IpyNotebook(notebook) => notebook.source_code(),
}
}
}
#[derive(Debug, Default)]
pub struct SourceStorage(pub(crate) KeyValueCache<FileId, Source>);
impl Deref for SourceStorage {
type Target = KeyValueCache<FileId, Source>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for SourceStorage {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}


@@ -0,0 +1,77 @@
use std::path::Path;
use anyhow::Context;
use notify::event::{CreateKind, RemoveKind};
use notify::{recommended_watcher, Event, EventKind, RecommendedWatcher, RecursiveMode, Watcher};
use crate::program::{FileChangeKind, FileWatcherChange};
pub struct FileWatcher {
watcher: RecommendedWatcher,
}
pub trait EventHandler: Send + 'static {
fn handle(&self, changes: Vec<FileWatcherChange>);
}
impl<F> EventHandler for F
where
F: Fn(Vec<FileWatcherChange>) + Send + 'static,
{
fn handle(&self, changes: Vec<FileWatcherChange>) {
let f = self;
f(changes);
}
}
impl FileWatcher {
pub fn new<E>(handler: E) -> anyhow::Result<Self>
where
E: EventHandler,
{
Self::from_handler(Box::new(handler))
}
fn from_handler(handler: Box<dyn EventHandler>) -> anyhow::Result<Self> {
let watcher = recommended_watcher(move |changes: notify::Result<Event>| {
match changes {
Ok(event) => {
// TODO verify that this handles all events correctly
let change_kind = match event.kind {
EventKind::Create(CreateKind::File) => FileChangeKind::Created,
EventKind::Modify(_) => FileChangeKind::Modified,
EventKind::Remove(RemoveKind::File) => FileChangeKind::Deleted,
_ => {
return;
}
};
let mut changes = Vec::new();
for path in event.paths {
if path.is_file() {
changes.push(FileWatcherChange::new(path, change_kind));
}
}
if !changes.is_empty() {
handler.handle(changes);
}
}
// TODO proper error handling
Err(err) => {
panic!("Error: {err}");
}
}
})
.context("Failed to create file watcher.")?;
Ok(Self { watcher })
}
pub fn watch_folder(&mut self, path: &Path) -> anyhow::Result<()> {
self.watcher.watch(path, RecursiveMode::Recursive)?;
Ok(())
}
}

crates/red_knot/vendor/typeshed/LICENSE (vendored, 237 lines)

@@ -0,0 +1,237 @@
The "typeshed" project is licensed under the terms of the Apache license, as
reproduced below.
= = = = =
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
= = = = =
Parts of typeshed are licensed under different licenses (like the MIT
license), reproduced below.
= = = = =
The MIT License
Copyright (c) 2015 Jukka Lehtosalo and contributors
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
= = = = =


@@ -0,0 +1,124 @@
# typeshed
[![Tests](https://github.com/python/typeshed/actions/workflows/tests.yml/badge.svg)](https://github.com/python/typeshed/actions/workflows/tests.yml)
[![Chat at https://gitter.im/python/typing](https://badges.gitter.im/python/typing.svg)](https://gitter.im/python/typing?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Pull Requests Welcome](https://img.shields.io/badge/pull%20requests-welcome-brightgreen.svg)](https://github.com/python/typeshed/blob/main/CONTRIBUTING.md)
## About
Typeshed contains external type annotations for the Python standard library
and Python builtins, as well as third party packages as contributed by
people external to those projects.
This data can be used, for example, for static analysis, type checking, type inference,
and autocompletion.
For information on how to use typeshed, read below. Information for
contributors can be found in [CONTRIBUTING.md](CONTRIBUTING.md). **Please read
it before submitting pull requests; do not report issues with annotations to
the project the stubs are for, but instead report them here to typeshed.**
Further documentation on stub files, typeshed, and Python's typing system in
general, can also be found at https://typing.readthedocs.io/en/latest/.
Typeshed supports Python versions 3.8 and up.
## Using
If you're just using a type checker ([mypy](https://github.com/python/mypy/),
[pyright](https://github.com/microsoft/pyright),
[pytype](https://github.com/google/pytype/), PyCharm, ...), as opposed to
developing it, you don't need to interact with the typeshed repo at
all: a copy of the standard library part of typeshed is bundled with type checkers.
And type stubs for third party packages and modules you are using can
be installed from PyPI. For example, if you are using `html5lib` and `requests`,
you can install the type stubs using
```bash
$ pip install types-html5lib types-requests
```
These PyPI packages follow [PEP 561](http://www.python.org/dev/peps/pep-0561/)
and are automatically released (up to once a day) by
[typeshed internal machinery](https://github.com/typeshed-internal/stub_uploader).
Type checkers should be able to use these stub packages when installed. For more
details, see the documentation for your type checker.
### Package versioning for third-party stubs
Version numbers of third-party stub packages consist of at least four parts.
All parts of the stub version, except for the last part, correspond to the
version of the runtime package being stubbed. For example, if the `types-foo`
package has version `1.2.0.20240309`, this guarantees that the `types-foo` package
contains stubs targeted against `foo==1.2.*` and tested against the latest
version of `foo` matching that specifier. In this example, the final element
of the version number (20240309) indicates that the stub package was pushed on
March 9, 2024.
At typeshed, we try to keep breaking changes to a minimum. However, due to the
nature of stubs, any version bump can introduce changes that might make your
code fail to type check.
There are several strategies available for specifying the version of a stubs
package you're using, each with its own tradeoffs:
1. Use the same bounds that you use for the package being stubbed. For example,
if you use `requests>=2.30.0,<2.32`, you can use
`types-requests>=2.30.0,<2.32`. This ensures that the stubs are compatible
with the package you are using, but it carries a small risk of breaking
type checking due to changes in the stubs.
Another risk of this strategy is that stubs often lag behind
the package being stubbed. You might want to force the package being stubbed
to a certain minimum version because it fixes a critical bug, but if
correspondingly updated stubs have not been released, your type
checking results may not be fully accurate.
2. Pin the stubs to a known good version and update the pin from time to time
(either manually, or using a tool such as dependabot or renovate).
For example, if you use `types-requests==2.31.0.1`, you can have confidence
that upgrading dependencies will not break type checking. However, you will
miss out on improvements in the stubs that could potentially improve type
checking until you update the pin. This strategy also has the risk that the
stubs you are using might become incompatible with the package being stubbed.
3. Don't pin the stubs. This is the option that demands the least work from
you when it comes to updating version pins, and has the advantage that you
will automatically benefit from improved stubs whenever a new version of the
stubs package is released. However, it carries the risk that the stubs
become incompatible with the package being stubbed.
For example, if a new major version of the package is released, there's a
chance the stubs might be updated to reflect the new version of the runtime
package before you update the package being stubbed.
You can also switch between the different strategies as needed. For example,
you could default to strategy (1), but fall back to strategy (2) when
a problem arises that can't easily be fixed.
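As a rough illustration (reusing the `types-requests` examples above; the exact versions are
placeholders, not recommendations), the three strategies could look like this in a
`requirements.txt`:
```
# Strategy 1: mirror the bounds used for the runtime package
requests>=2.30.0,<2.32
types-requests>=2.30.0,<2.32

# Strategy 2: pin the stubs to a known good version and update the pin from time to time
types-requests==2.31.0.1

# Strategy 3: leave the stubs unpinned
types-requests
```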
### The `_typeshed` package
typeshed includes a package `_typeshed` as part of the standard library.
This package and its submodules contain utility types, but are not
available at runtime. For more information about how to use this package,
[see the `stdlib/_typeshed` directory](https://github.com/python/typeshed/tree/main/stdlib/_typeshed).
## Discussion
If you've run into behavior in the type checker that suggests the type
stubs for a given library are incorrect or incomplete,
we want to hear from you!
Our main forum for discussion is the project's [GitHub issue
tracker](https://github.com/python/typeshed/issues). This is the right
place to start a discussion of any of the above or most any other
topic concerning the project.
If you have general questions about typing with Python, or you need
a review of your type annotations or stubs outside of typeshed, head over to
[our discussion forum](https://github.com/python/typing/discussions).
For less formal discussion, try the typing chat room on
[gitter.im](https://gitter.im/python/typing). Some typeshed maintainers
are almost always present; feel free to find us there and we're happy
to chat. Substantive technical discussion will be directed to the
issue tracker.


@@ -0,0 +1 @@
4b6558c12ac43cd40716cd6452fe98a632ae65d7


@@ -0,0 +1,309 @@
# The structure of this file is as follows:
# - Blank lines and comments starting with `#` are ignored.
# - Lines contain the name of a module, followed by a colon,
# a space, and a version range (for example: `symbol: 3.0-3.9`).
#
# Version ranges may be of the form "X.Y-A.B" or "X.Y-". The
# first form means that a module was introduced in version X.Y and last
# available in version A.B. The second form means that the module was
# introduced in version X.Y and is still available in the latest
# version of Python.
#
# If a submodule is not listed separately, it has the same lifetime as
# its parent module.
#
# Python versions before 3.0 are ignored, so any module that was already
# present in 3.0 will have "3.0" as its minimum version. Version ranges
# for unsupported versions of Python 3 are generally accurate but we do
# not guarantee their correctness.
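# For example (illustrative): the entry `graphlib: 3.9-` further down means the module was
# introduced in Python 3.9 and is still available in the latest version of Python.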
__future__: 3.0-
__main__: 3.0-
_ast: 3.0-
_bisect: 3.0-
_bootlocale: 3.4-3.9
_codecs: 3.0-
_collections_abc: 3.3-
_compat_pickle: 3.1-
_compression: 3.5-
_csv: 3.0-
_ctypes: 3.0-
_curses: 3.0-
_decimal: 3.3-
_dummy_thread: 3.0-3.8
_dummy_threading: 3.0-3.8
_heapq: 3.0-
_imp: 3.0-
_json: 3.0-
_locale: 3.0-
_lsprof: 3.0-
_markupbase: 3.0-
_msi: 3.0-
_operator: 3.4-
_osx_support: 3.0-
_posixsubprocess: 3.2-
_py_abc: 3.7-
_pydecimal: 3.5-
_random: 3.0-
_sitebuiltins: 3.4-
_socket: 3.0- # present in 3.0 at runtime, but not in typeshed
_stat: 3.4-
_thread: 3.0-
_threading_local: 3.0-
_tkinter: 3.0-
_tracemalloc: 3.4-
_typeshed: 3.0- # not present at runtime, only for type checking
_warnings: 3.0-
_weakref: 3.0-
_weakrefset: 3.0-
_winapi: 3.3-
abc: 3.0-
aifc: 3.0-3.12
antigravity: 3.0-
argparse: 3.0-
array: 3.0-
ast: 3.0-
asynchat: 3.0-3.11
asyncio: 3.4-
asyncio.mixins: 3.10-
asyncio.exceptions: 3.8-
asyncio.format_helpers: 3.7-
asyncio.runners: 3.7-
asyncio.staggered: 3.8-
asyncio.taskgroups: 3.11-
asyncio.threads: 3.9-
asyncio.timeouts: 3.11-
asyncio.trsock: 3.8-
asyncore: 3.0-3.11
atexit: 3.0-
audioop: 3.0-3.12
base64: 3.0-
bdb: 3.0-
binascii: 3.0-
binhex: 3.0-3.10
bisect: 3.0-
builtins: 3.0-
bz2: 3.0-
cProfile: 3.0-
calendar: 3.0-
cgi: 3.0-3.12
cgitb: 3.0-3.12
chunk: 3.0-3.12
cmath: 3.0-
cmd: 3.0-
code: 3.0-
codecs: 3.0-
codeop: 3.0-
collections: 3.0-
collections.abc: 3.3-
colorsys: 3.0-
compileall: 3.0-
concurrent: 3.2-
configparser: 3.0-
contextlib: 3.0-
contextvars: 3.7-
copy: 3.0-
copyreg: 3.0-
crypt: 3.0-3.12
csv: 3.0-
ctypes: 3.0-
curses: 3.0-
dataclasses: 3.7-
datetime: 3.0-
dbm: 3.0-
decimal: 3.0-
difflib: 3.0-
dis: 3.0-
distutils: 3.0-3.11
distutils.command.bdist_msi: 3.0-3.10
distutils.command.bdist_wininst: 3.0-3.9
doctest: 3.0-
dummy_threading: 3.0-3.8
email: 3.0-
encodings: 3.0-
ensurepip: 3.0-
enum: 3.4-
errno: 3.0-
faulthandler: 3.3-
fcntl: 3.0-
filecmp: 3.0-
fileinput: 3.0-
fnmatch: 3.0-
formatter: 3.0-3.9
fractions: 3.0-
ftplib: 3.0-
functools: 3.0-
gc: 3.0-
genericpath: 3.0-
getopt: 3.0-
getpass: 3.0-
gettext: 3.0-
glob: 3.0-
graphlib: 3.9-
grp: 3.0-
gzip: 3.0-
hashlib: 3.0-
heapq: 3.0-
hmac: 3.0-
html: 3.0-
http: 3.0-
imaplib: 3.0-
imghdr: 3.0-3.12
imp: 3.0-3.11
importlib: 3.0-
importlib._abc: 3.10-
importlib.metadata: 3.8-
importlib.metadata._meta: 3.10-
importlib.readers: 3.10-
importlib.resources: 3.7-
importlib.resources.abc: 3.11-
importlib.resources.readers: 3.11-
importlib.resources.simple: 3.11-
importlib.simple: 3.11-
inspect: 3.0-
io: 3.0-
ipaddress: 3.3-
itertools: 3.0-
json: 3.0-
keyword: 3.0-
lib2to3: 3.0-3.12
linecache: 3.0-
locale: 3.0-
logging: 3.0-
lzma: 3.3-
mailbox: 3.0-
mailcap: 3.0-3.12
marshal: 3.0-
math: 3.0-
mimetypes: 3.0-
mmap: 3.0-
modulefinder: 3.0-
msilib: 3.0-3.12
msvcrt: 3.0-
multiprocessing: 3.0-
multiprocessing.resource_tracker: 3.8-
multiprocessing.shared_memory: 3.8-
netrc: 3.0-
nis: 3.0-3.12
nntplib: 3.0-3.12
nt: 3.0-
ntpath: 3.0-
nturl2path: 3.0-
numbers: 3.0-
opcode: 3.0-
operator: 3.0-
optparse: 3.0-
os: 3.0-
ossaudiodev: 3.0-3.12
parser: 3.0-3.9
pathlib: 3.4-
pdb: 3.0-
pickle: 3.0-
pickletools: 3.0-
pipes: 3.0-3.12
pkgutil: 3.0-
platform: 3.0-
plistlib: 3.0-
poplib: 3.0-
posix: 3.0-
posixpath: 3.0-
pprint: 3.0-
profile: 3.0-
pstats: 3.0-
pty: 3.0-
pwd: 3.0-
py_compile: 3.0-
pyclbr: 3.0-
pydoc: 3.0-
pydoc_data: 3.0-
pyexpat: 3.0-
queue: 3.0-
quopri: 3.0-
random: 3.0-
re: 3.0-
readline: 3.0-
reprlib: 3.0-
resource: 3.0-
rlcompleter: 3.0-
runpy: 3.0-
sched: 3.0-
secrets: 3.6-
select: 3.0-
selectors: 3.4-
shelve: 3.0-
shlex: 3.0-
shutil: 3.0-
signal: 3.0-
site: 3.0-
smtpd: 3.0-3.11
smtplib: 3.0-
sndhdr: 3.0-3.12
socket: 3.0-
socketserver: 3.0-
spwd: 3.0-3.12
sqlite3: 3.0-
sre_compile: 3.0-
sre_constants: 3.0-
sre_parse: 3.0-
ssl: 3.0-
stat: 3.0-
statistics: 3.4-
string: 3.0-
stringprep: 3.0-
struct: 3.0-
subprocess: 3.0-
sunau: 3.0-3.12
symbol: 3.0-3.9
symtable: 3.0-
sys: 3.0-
sys._monitoring: 3.12- # Doesn't actually exist. See comments in the stub.
sysconfig: 3.0-
syslog: 3.0-
tabnanny: 3.0-
tarfile: 3.0-
telnetlib: 3.0-3.12
tempfile: 3.0-
termios: 3.0-
textwrap: 3.0-
this: 3.0-
threading: 3.0-
time: 3.0-
timeit: 3.0-
tkinter: 3.0-
token: 3.0-
tokenize: 3.0-
tomllib: 3.11-
trace: 3.0-
traceback: 3.0-
tracemalloc: 3.4-
tty: 3.0-
turtle: 3.0-
types: 3.0-
typing: 3.5-
typing_extensions: 3.0-
unicodedata: 3.0-
unittest: 3.0-
unittest._log: 3.9-
unittest.async_case: 3.8-
urllib: 3.0-
uu: 3.0-3.12
uuid: 3.0-
venv: 3.3-
warnings: 3.0-
wave: 3.0-
weakref: 3.0-
webbrowser: 3.0-
winreg: 3.0-
winsound: 3.0-
wsgiref: 3.0-
wsgiref.types: 3.11-
xdrlib: 3.0-3.12
xml: 3.0-
xmlrpc: 3.0-
xxlimited: 3.2-
zipapp: 3.5-
zipfile: 3.0-
zipfile._path: 3.12-
zipimport: 3.0-
zlib: 3.0-
zoneinfo: 3.9-


@@ -0,0 +1,36 @@
from typing_extensions import TypeAlias
_VersionInfo: TypeAlias = tuple[int, int, int, str, int]
class _Feature:
def __init__(self, optionalRelease: _VersionInfo, mandatoryRelease: _VersionInfo | None, compiler_flag: int) -> None: ...
def getOptionalRelease(self) -> _VersionInfo: ...
def getMandatoryRelease(self) -> _VersionInfo | None: ...
compiler_flag: int
absolute_import: _Feature
division: _Feature
generators: _Feature
nested_scopes: _Feature
print_function: _Feature
unicode_literals: _Feature
with_statement: _Feature
barry_as_FLUFL: _Feature
generator_stop: _Feature
annotations: _Feature
all_feature_names: list[str] # undocumented
__all__ = [
"all_feature_names",
"absolute_import",
"division",
"generators",
"nested_scopes",
"print_function",
"unicode_literals",
"with_statement",
"barry_as_FLUFL",
"generator_stop",
"annotations",
]


@@ -0,0 +1,3 @@
from typing import Any
def __getattr__(name: str) -> Any: ...

File diff suppressed because it is too large


@@ -0,0 +1,84 @@
import sys
from _typeshed import SupportsLenAndGetItem, SupportsRichComparisonT
from collections.abc import Callable, MutableSequence
from typing import TypeVar, overload
_T = TypeVar("_T")
if sys.version_info >= (3, 10):
@overload
def bisect_left(
a: SupportsLenAndGetItem[SupportsRichComparisonT],
x: SupportsRichComparisonT,
lo: int = 0,
hi: int | None = None,
*,
key: None = None,
) -> int: ...
@overload
def bisect_left(
a: SupportsLenAndGetItem[_T],
x: SupportsRichComparisonT,
lo: int = 0,
hi: int | None = None,
*,
key: Callable[[_T], SupportsRichComparisonT],
) -> int: ...
@overload
def bisect_right(
a: SupportsLenAndGetItem[SupportsRichComparisonT],
x: SupportsRichComparisonT,
lo: int = 0,
hi: int | None = None,
*,
key: None = None,
) -> int: ...
@overload
def bisect_right(
a: SupportsLenAndGetItem[_T],
x: SupportsRichComparisonT,
lo: int = 0,
hi: int | None = None,
*,
key: Callable[[_T], SupportsRichComparisonT],
) -> int: ...
@overload
def insort_left(
a: MutableSequence[SupportsRichComparisonT],
x: SupportsRichComparisonT,
lo: int = 0,
hi: int | None = None,
*,
key: None = None,
) -> None: ...
@overload
def insort_left(
a: MutableSequence[_T], x: _T, lo: int = 0, hi: int | None = None, *, key: Callable[[_T], SupportsRichComparisonT]
) -> None: ...
@overload
def insort_right(
a: MutableSequence[SupportsRichComparisonT],
x: SupportsRichComparisonT,
lo: int = 0,
hi: int | None = None,
*,
key: None = None,
) -> None: ...
@overload
def insort_right(
a: MutableSequence[_T], x: _T, lo: int = 0, hi: int | None = None, *, key: Callable[[_T], SupportsRichComparisonT]
) -> None: ...
else:
def bisect_left(
a: SupportsLenAndGetItem[SupportsRichComparisonT], x: SupportsRichComparisonT, lo: int = 0, hi: int | None = None
) -> int: ...
def bisect_right(
a: SupportsLenAndGetItem[SupportsRichComparisonT], x: SupportsRichComparisonT, lo: int = 0, hi: int | None = None
) -> int: ...
def insort_left(
a: MutableSequence[SupportsRichComparisonT], x: SupportsRichComparisonT, lo: int = 0, hi: int | None = None
) -> None: ...
def insort_right(
a: MutableSequence[SupportsRichComparisonT], x: SupportsRichComparisonT, lo: int = 0, hi: int | None = None
) -> None: ...


@@ -0,0 +1 @@
def getpreferredencoding(do_setlocale: bool = True) -> str: ...


@@ -0,0 +1,133 @@
import codecs
import sys
from _typeshed import ReadableBuffer
from collections.abc import Callable
from typing import Literal, overload
from typing_extensions import TypeAlias
# This type is not exposed; it is defined in unicodeobject.c
class _EncodingMap:
def size(self) -> int: ...
_CharMap: TypeAlias = dict[int, int] | _EncodingMap
_Handler: TypeAlias = Callable[[UnicodeError], tuple[str | bytes, int]]
_SearchFunction: TypeAlias = Callable[[str], codecs.CodecInfo | None]
def register(search_function: _SearchFunction, /) -> None: ...
if sys.version_info >= (3, 10):
def unregister(search_function: _SearchFunction, /) -> None: ...
def register_error(errors: str, handler: _Handler, /) -> None: ...
def lookup_error(name: str, /) -> _Handler: ...
# The type ignore on `encode` and `decode` is to avoid issues with overlapping overloads, for more details, see #300
# https://docs.python.org/3/library/codecs.html#binary-transforms
_BytesToBytesEncoding: TypeAlias = Literal[
"base64",
"base_64",
"base64_codec",
"bz2",
"bz2_codec",
"hex",
"hex_codec",
"quopri",
"quotedprintable",
"quoted_printable",
"quopri_codec",
"uu",
"uu_codec",
"zip",
"zlib",
"zlib_codec",
]
# https://docs.python.org/3/library/codecs.html#text-transforms
_StrToStrEncoding: TypeAlias = Literal["rot13", "rot_13"]
@overload
def encode(obj: ReadableBuffer, encoding: _BytesToBytesEncoding, errors: str = "strict") -> bytes: ...
@overload
def encode(obj: str, encoding: _StrToStrEncoding, errors: str = "strict") -> str: ... # type: ignore[overload-overlap]
@overload
def encode(obj: str, encoding: str = "utf-8", errors: str = "strict") -> bytes: ...
@overload
def decode(obj: ReadableBuffer, encoding: _BytesToBytesEncoding, errors: str = "strict") -> bytes: ... # type: ignore[overload-overlap]
@overload
def decode(obj: str, encoding: _StrToStrEncoding, errors: str = "strict") -> str: ...
# these are documented as text encodings but in practice they also accept str as input
@overload
def decode(
obj: str,
encoding: Literal["unicode_escape", "unicode-escape", "raw_unicode_escape", "raw-unicode-escape"],
errors: str = "strict",
) -> str: ...
# hex is officially documented as a bytes to bytes encoding, but it appears to also work with str
@overload
def decode(obj: str, encoding: Literal["hex", "hex_codec"], errors: str = "strict") -> bytes: ...
@overload
def decode(obj: ReadableBuffer, encoding: str = "utf-8", errors: str = "strict") -> str: ...
def lookup(encoding: str, /) -> codecs.CodecInfo: ...
def charmap_build(map: str, /) -> _CharMap: ...
def ascii_decode(data: ReadableBuffer, errors: str | None = None, /) -> tuple[str, int]: ...
def ascii_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def charmap_decode(data: ReadableBuffer, errors: str | None = None, mapping: _CharMap | None = None, /) -> tuple[str, int]: ...
def charmap_encode(str: str, errors: str | None = None, mapping: _CharMap | None = None, /) -> tuple[bytes, int]: ...
def escape_decode(data: str | ReadableBuffer, errors: str | None = None, /) -> tuple[str, int]: ...
def escape_encode(data: bytes, errors: str | None = None, /) -> tuple[bytes, int]: ...
def latin_1_decode(data: ReadableBuffer, errors: str | None = None, /) -> tuple[str, int]: ...
def latin_1_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
if sys.version_info >= (3, 9):
def raw_unicode_escape_decode(
data: str | ReadableBuffer, errors: str | None = None, final: bool = True, /
) -> tuple[str, int]: ...
else:
def raw_unicode_escape_decode(data: str | ReadableBuffer, errors: str | None = None, /) -> tuple[str, int]: ...
def raw_unicode_escape_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def readbuffer_encode(data: str | ReadableBuffer, errors: str | None = None, /) -> tuple[bytes, int]: ...
if sys.version_info >= (3, 9):
def unicode_escape_decode(
data: str | ReadableBuffer, errors: str | None = None, final: bool = True, /
) -> tuple[str, int]: ...
else:
def unicode_escape_decode(data: str | ReadableBuffer, errors: str | None = None, /) -> tuple[str, int]: ...
def unicode_escape_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def utf_16_be_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_16_be_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def utf_16_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_16_encode(str: str, errors: str | None = None, byteorder: int = 0, /) -> tuple[bytes, int]: ...
def utf_16_ex_decode(
data: ReadableBuffer, errors: str | None = None, byteorder: int = 0, final: bool = False, /
) -> tuple[str, int, int]: ...
def utf_16_le_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_16_le_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def utf_32_be_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_32_be_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def utf_32_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_32_encode(str: str, errors: str | None = None, byteorder: int = 0, /) -> tuple[bytes, int]: ...
def utf_32_ex_decode(
data: ReadableBuffer, errors: str | None = None, byteorder: int = 0, final: bool = False, /
) -> tuple[str, int, int]: ...
def utf_32_le_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_32_le_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def utf_7_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_7_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def utf_8_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def utf_8_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
if sys.platform == "win32":
def mbcs_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def mbcs_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def code_page_decode(
codepage: int, data: ReadableBuffer, errors: str | None = None, final: bool = False, /
) -> tuple[str, int]: ...
def code_page_encode(code_page: int, str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...
def oem_decode(data: ReadableBuffer, errors: str | None = None, final: bool = False, /) -> tuple[str, int]: ...
def oem_encode(str: str, errors: str | None = None, /) -> tuple[bytes, int]: ...


@@ -0,0 +1,94 @@
import sys
from abc import abstractmethod
from types import MappingProxyType
from typing import ( # noqa: Y022,Y038,Y057
AbstractSet as Set,
AsyncGenerator as AsyncGenerator,
AsyncIterable as AsyncIterable,
AsyncIterator as AsyncIterator,
Awaitable as Awaitable,
ByteString as ByteString,
Callable as Callable,
Collection as Collection,
Container as Container,
Coroutine as Coroutine,
Generator as Generator,
Generic,
Hashable as Hashable,
ItemsView as ItemsView,
Iterable as Iterable,
Iterator as Iterator,
KeysView as KeysView,
Mapping as Mapping,
MappingView as MappingView,
MutableMapping as MutableMapping,
MutableSequence as MutableSequence,
MutableSet as MutableSet,
Protocol,
Reversible as Reversible,
Sequence as Sequence,
Sized as Sized,
TypeVar,
ValuesView as ValuesView,
final,
runtime_checkable,
)
__all__ = [
"Awaitable",
"Coroutine",
"AsyncIterable",
"AsyncIterator",
"AsyncGenerator",
"Hashable",
"Iterable",
"Iterator",
"Generator",
"Reversible",
"Sized",
"Container",
"Callable",
"Collection",
"Set",
"MutableSet",
"Mapping",
"MutableMapping",
"MappingView",
"KeysView",
"ItemsView",
"ValuesView",
"Sequence",
"MutableSequence",
"ByteString",
]
if sys.version_info >= (3, 12):
__all__ += ["Buffer"]
_KT_co = TypeVar("_KT_co", covariant=True) # Key type covariant containers.
_VT_co = TypeVar("_VT_co", covariant=True) # Value type covariant containers.
@final
class dict_keys(KeysView[_KT_co], Generic[_KT_co, _VT_co]): # undocumented
def __eq__(self, value: object, /) -> bool: ...
if sys.version_info >= (3, 10):
@property
def mapping(self) -> MappingProxyType[_KT_co, _VT_co]: ...
@final
class dict_values(ValuesView[_VT_co], Generic[_KT_co, _VT_co]): # undocumented
if sys.version_info >= (3, 10):
@property
def mapping(self) -> MappingProxyType[_KT_co, _VT_co]: ...
@final
class dict_items(ItemsView[_KT_co, _VT_co]): # undocumented
def __eq__(self, value: object, /) -> bool: ...
if sys.version_info >= (3, 10):
@property
def mapping(self) -> MappingProxyType[_KT_co, _VT_co]: ...
if sys.version_info >= (3, 12):
@runtime_checkable
class Buffer(Protocol):
@abstractmethod
def __buffer__(self, flags: int, /) -> memoryview: ...


@@ -0,0 +1,8 @@
IMPORT_MAPPING: dict[str, str]
NAME_MAPPING: dict[tuple[str, str], tuple[str, str]]
PYTHON2_EXCEPTIONS: tuple[str, ...]
MULTIPROCESSING_EXCEPTIONS: tuple[str, ...]
REVERSE_IMPORT_MAPPING: dict[str, str]
REVERSE_NAME_MAPPING: dict[tuple[str, str], tuple[str, str]]
PYTHON3_OSERROR_EXCEPTIONS: tuple[str, ...]
PYTHON3_IMPORTERROR_EXCEPTIONS: tuple[str, ...]


@@ -0,0 +1,25 @@
from _typeshed import WriteableBuffer
from collections.abc import Callable
from io import DEFAULT_BUFFER_SIZE, BufferedIOBase, RawIOBase
from typing import Any, Protocol
BUFFER_SIZE = DEFAULT_BUFFER_SIZE
class _Reader(Protocol):
def read(self, n: int, /) -> bytes: ...
def seekable(self) -> bool: ...
def seek(self, n: int, /) -> Any: ...
class BaseStream(BufferedIOBase): ...
class DecompressReader(RawIOBase):
def __init__(
self,
fp: _Reader,
decomp_factory: Callable[..., object],
trailing_error: type[Exception] | tuple[type[Exception], ...] = (),
**decomp_args: Any,
) -> None: ...
def readinto(self, b: WriteableBuffer) -> int: ...
def read(self, size: int = -1) -> bytes: ...
def seek(self, offset: int, whence: int = 0) -> int: ...


@@ -0,0 +1,90 @@
import sys
from _typeshed import SupportsWrite
from collections.abc import Iterable, Iterator
from typing import Any, Final, Literal
from typing_extensions import TypeAlias
__version__: Final[str]
QUOTE_ALL: Literal[1]
QUOTE_MINIMAL: Literal[0]
QUOTE_NONE: Literal[3]
QUOTE_NONNUMERIC: Literal[2]
if sys.version_info >= (3, 12):
QUOTE_STRINGS: Literal[4]
QUOTE_NOTNULL: Literal[5]
# Ideally this would be `QUOTE_ALL | QUOTE_MINIMAL | QUOTE_NONE | QUOTE_NONNUMERIC`
# However, using literals in situations like these can cause false-positives (see #7258)
_QuotingType: TypeAlias = int
class Error(Exception): ...
class Dialect:
delimiter: str
quotechar: str | None
escapechar: str | None
doublequote: bool
skipinitialspace: bool
lineterminator: str
quoting: _QuotingType
strict: bool
def __init__(self) -> None: ...
_DialectLike: TypeAlias = str | Dialect | type[Dialect]
class _reader(Iterator[list[str]]):
@property
def dialect(self) -> Dialect: ...
line_num: int
def __next__(self) -> list[str]: ...
class _writer:
@property
def dialect(self) -> Dialect: ...
def writerow(self, row: Iterable[Any]) -> Any: ...
def writerows(self, rows: Iterable[Iterable[Any]]) -> None: ...
def writer(
csvfile: SupportsWrite[str],
dialect: _DialectLike = "excel",
*,
delimiter: str = ",",
quotechar: str | None = '"',
escapechar: str | None = None,
doublequote: bool = True,
skipinitialspace: bool = False,
lineterminator: str = "\r\n",
quoting: _QuotingType = 0,
strict: bool = False,
) -> _writer: ...
def reader(
csvfile: Iterable[str],
dialect: _DialectLike = "excel",
*,
delimiter: str = ",",
quotechar: str | None = '"',
escapechar: str | None = None,
doublequote: bool = True,
skipinitialspace: bool = False,
lineterminator: str = "\r\n",
quoting: _QuotingType = 0,
strict: bool = False,
) -> _reader: ...
def register_dialect(
name: str,
dialect: type[Dialect] = ...,
*,
delimiter: str = ",",
quotechar: str | None = '"',
escapechar: str | None = None,
doublequote: bool = True,
skipinitialspace: bool = False,
lineterminator: str = "\r\n",
quoting: _QuotingType = 0,
strict: bool = False,
) -> None: ...
def unregister_dialect(name: str) -> None: ...
def get_dialect(name: str) -> Dialect: ...
def list_dialects() -> list[str]: ...
def field_size_limit(new_limit: int = ...) -> int: ...


@@ -0,0 +1,211 @@
import sys
from _typeshed import ReadableBuffer, WriteableBuffer
from abc import abstractmethod
from collections.abc import Callable, Iterable, Iterator, Mapping, Sequence
from ctypes import CDLL, ArgumentError as ArgumentError
from typing import Any, ClassVar, Generic, TypeVar, overload
from typing_extensions import Self, TypeAlias
if sys.version_info >= (3, 9):
from types import GenericAlias
_T = TypeVar("_T")
_CT = TypeVar("_CT", bound=_CData)
FUNCFLAG_CDECL: int
FUNCFLAG_PYTHONAPI: int
FUNCFLAG_USE_ERRNO: int
FUNCFLAG_USE_LASTERROR: int
RTLD_GLOBAL: int
RTLD_LOCAL: int
if sys.version_info >= (3, 11):
CTYPES_MAX_ARGCOUNT: int
if sys.version_info >= (3, 12):
SIZEOF_TIME_T: int
if sys.platform == "win32":
# Description, Source, HelpFile, HelpContext, scode
_COMError_Details: TypeAlias = tuple[str | None, str | None, str | None, int | None, int | None]
class COMError(Exception):
hresult: int
text: str | None
details: _COMError_Details
def __init__(self, hresult: int, text: str | None, details: _COMError_Details) -> None: ...
def CopyComPointer(src: _PointerLike, dst: _PointerLike | _CArgObject) -> int: ...
FUNCFLAG_HRESULT: int
FUNCFLAG_STDCALL: int
def FormatError(code: int = ...) -> str: ...
def get_last_error() -> int: ...
def set_last_error(value: int) -> int: ...
def LoadLibrary(name: str, load_flags: int = 0, /) -> int: ...
def FreeLibrary(handle: int, /) -> None: ...
class _CDataMeta(type):
# By default mypy complains about the following two methods, because strictly speaking cls
# might not be a Type[_CT]. However this can never actually happen, because the only class that
# uses _CDataMeta as its metaclass is _CData. So it's safe to ignore the errors here.
def __mul__(cls: type[_CT], other: int) -> type[Array[_CT]]: ... # type: ignore[misc]
def __rmul__(cls: type[_CT], other: int) -> type[Array[_CT]]: ... # type: ignore[misc]
class _CData(metaclass=_CDataMeta):
_b_base_: int
_b_needsfree_: bool
_objects: Mapping[Any, int] | None
# At runtime the following classmethods are available only on classes, not
# on instances. This can't be reflected properly in the type system:
#
# Structure.from_buffer(...) # valid at runtime
# Structure(...).from_buffer(...) # invalid at runtime
#
@classmethod
def from_buffer(cls, source: WriteableBuffer, offset: int = ...) -> Self: ...
@classmethod
def from_buffer_copy(cls, source: ReadableBuffer, offset: int = ...) -> Self: ...
@classmethod
def from_address(cls, address: int) -> Self: ...
@classmethod
def from_param(cls, obj: Any) -> Self | _CArgObject: ...
@classmethod
def in_dll(cls, library: CDLL, name: str) -> Self: ...
def __buffer__(self, flags: int, /) -> memoryview: ...
def __release_buffer__(self, buffer: memoryview, /) -> None: ...
class _SimpleCData(_CData, Generic[_T]):
value: _T
# The TypeVar can be unsolved here,
# but we can't use overloads without creating many, many mypy false-positive errors
def __init__(self, value: _T = ...) -> None: ... # pyright: ignore[reportInvalidTypeVarUse]
class _CanCastTo(_CData): ...
class _PointerLike(_CanCastTo): ...
class _Pointer(_PointerLike, _CData, Generic[_CT]):
_type_: type[_CT]
contents: _CT
@overload
def __init__(self) -> None: ...
@overload
def __init__(self, arg: _CT) -> None: ...
@overload
def __getitem__(self, key: int, /) -> Any: ...
@overload
def __getitem__(self, key: slice, /) -> list[Any]: ...
def __setitem__(self, key: int, value: Any, /) -> None: ...
def POINTER(type: type[_CT]) -> type[_Pointer[_CT]]: ...
def pointer(arg: _CT, /) -> _Pointer[_CT]: ...
class _CArgObject: ...
def byref(obj: _CData, offset: int = ...) -> _CArgObject: ...
_ECT: TypeAlias = Callable[[_CData | None, CFuncPtr, tuple[_CData, ...]], _CData]
_PF: TypeAlias = tuple[int] | tuple[int, str | None] | tuple[int, str | None, Any]
class CFuncPtr(_PointerLike, _CData):
restype: type[_CData] | Callable[[int], Any] | None
argtypes: Sequence[type[_CData]]
errcheck: _ECT
# Abstract attribute that must be defined on subclasses
_flags_: ClassVar[int]
@overload
def __init__(self) -> None: ...
@overload
def __init__(self, address: int, /) -> None: ...
@overload
def __init__(self, callable: Callable[..., Any], /) -> None: ...
@overload
def __init__(self, func_spec: tuple[str | int, CDLL], paramflags: tuple[_PF, ...] | None = ..., /) -> None: ...
if sys.platform == "win32":
@overload
def __init__(
self, vtbl_index: int, name: str, paramflags: tuple[_PF, ...] | None = ..., iid: _CData | None = ..., /
) -> None: ...
def __call__(self, *args: Any, **kwargs: Any) -> Any: ...
_GetT = TypeVar("_GetT")
_SetT = TypeVar("_SetT")
class _CField(Generic[_CT, _GetT, _SetT]):
offset: int
size: int
@overload
def __get__(self, instance: None, owner: type[Any] | None, /) -> Self: ...
@overload
def __get__(self, instance: Any, owner: type[Any] | None, /) -> _GetT: ...
def __set__(self, instance: Any, value: _SetT, /) -> None: ...
class _StructUnionMeta(_CDataMeta):
_fields_: Sequence[tuple[str, type[_CData]] | tuple[str, type[_CData], int]]
_pack_: int
_anonymous_: Sequence[str]
def __getattr__(self, name: str) -> _CField[Any, Any, Any]: ...
class _StructUnionBase(_CData, metaclass=_StructUnionMeta):
def __init__(self, *args: Any, **kw: Any) -> None: ...
def __getattr__(self, name: str) -> Any: ...
def __setattr__(self, name: str, value: Any) -> None: ...
class Union(_StructUnionBase): ...
class Structure(_StructUnionBase): ...
class Array(_CData, Generic[_CT]):
@property
@abstractmethod
def _length_(self) -> int: ...
@_length_.setter
def _length_(self, value: int) -> None: ...
@property
@abstractmethod
def _type_(self) -> type[_CT]: ...
@_type_.setter
def _type_(self, value: type[_CT]) -> None: ...
# Note: only available if _CT == c_char
@property
def raw(self) -> bytes: ...
@raw.setter
def raw(self, value: ReadableBuffer) -> None: ...
value: Any # Note: bytes if _CT == c_char, str if _CT == c_wchar, unavailable otherwise
# TODO These methods cannot be annotated correctly at the moment.
# All of these "Any"s stand for the array's element type, but it's not possible to use _CT
# here, because of a special feature of ctypes.
# By default, when accessing an element of an Array[_CT], the returned object has type _CT.
# However, when _CT is a "simple type" like c_int, ctypes automatically "unboxes" the object
# and converts it to the corresponding Python primitive. For example, when accessing an element
# of an Array[c_int], a Python int object is returned, not a c_int.
# This behavior does *not* apply to subclasses of "simple types".
# If MyInt is a subclass of c_int, then accessing an element of an Array[MyInt] returns
# a MyInt, not an int.
# This special behavior is not easy to model in a stub, so for now all places where
# the array element type would belong are annotated with Any instead.
def __init__(self, *args: Any) -> None: ...
@overload
def __getitem__(self, key: int, /) -> Any: ...
@overload
def __getitem__(self, key: slice, /) -> list[Any]: ...
@overload
def __setitem__(self, key: int, value: Any, /) -> None: ...
@overload
def __setitem__(self, key: slice, value: Iterable[Any], /) -> None: ...
def __iter__(self) -> Iterator[Any]: ...
# Can't inherit from Sized because the metaclass conflict between
# Sized and _CData prevents using _CDataMeta.
def __len__(self) -> int: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
def addressof(obj: _CData) -> int: ...
def alignment(obj_or_type: _CData | type[_CData]) -> int: ...
def get_errno() -> int: ...
def resize(obj: _CData, size: int) -> None: ...
def set_errno(value: int) -> int: ...
def sizeof(obj_or_type: _CData | type[_CData]) -> int: ...
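
The long comment in `Array` above describes how ctypes "unboxes" elements of simple types; a small illustrative sketch of that behavior (not part of the vendored stub, using a hypothetical `MyInt` subclass):

```py
from ctypes import c_int

class MyInt(c_int):
    pass

plain = (c_int * 3)(1, 2, 3)
print(type(plain[0]))   # <class 'int'> -- simple types are unboxed on access

custom = (MyInt * 3)(1, 2, 3)
print(type(custom[0]))  # MyInt -- subclasses of simple types are returned as-is
```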

@@ -0,0 +1,566 @@
import sys
from _typeshed import ReadOnlyBuffer, SupportsRead
from typing import IO, Any, NamedTuple, final, overload
from typing_extensions import TypeAlias
# NOTE: This module is ordinarily only available on Unix, but the windows-curses
# package makes it available on Windows as well with the same contents.
# Handled by PyCurses_ConvertToChtype in _cursesmodule.c.
_ChType: TypeAlias = str | bytes | int
# ACS codes are only initialized after initscr is called
ACS_BBSS: int
ACS_BLOCK: int
ACS_BOARD: int
ACS_BSBS: int
ACS_BSSB: int
ACS_BSSS: int
ACS_BTEE: int
ACS_BULLET: int
ACS_CKBOARD: int
ACS_DARROW: int
ACS_DEGREE: int
ACS_DIAMOND: int
ACS_GEQUAL: int
ACS_HLINE: int
ACS_LANTERN: int
ACS_LARROW: int
ACS_LEQUAL: int
ACS_LLCORNER: int
ACS_LRCORNER: int
ACS_LTEE: int
ACS_NEQUAL: int
ACS_PI: int
ACS_PLMINUS: int
ACS_PLUS: int
ACS_RARROW: int
ACS_RTEE: int
ACS_S1: int
ACS_S3: int
ACS_S7: int
ACS_S9: int
ACS_SBBS: int
ACS_SBSB: int
ACS_SBSS: int
ACS_SSBB: int
ACS_SSBS: int
ACS_SSSB: int
ACS_SSSS: int
ACS_STERLING: int
ACS_TTEE: int
ACS_UARROW: int
ACS_ULCORNER: int
ACS_URCORNER: int
ACS_VLINE: int
ALL_MOUSE_EVENTS: int
A_ALTCHARSET: int
A_ATTRIBUTES: int
A_BLINK: int
A_BOLD: int
A_CHARTEXT: int
A_COLOR: int
A_DIM: int
A_HORIZONTAL: int
A_INVIS: int
if sys.platform != "darwin":
A_ITALIC: int
A_LEFT: int
A_LOW: int
A_NORMAL: int
A_PROTECT: int
A_REVERSE: int
A_RIGHT: int
A_STANDOUT: int
A_TOP: int
A_UNDERLINE: int
A_VERTICAL: int
BUTTON1_CLICKED: int
BUTTON1_DOUBLE_CLICKED: int
BUTTON1_PRESSED: int
BUTTON1_RELEASED: int
BUTTON1_TRIPLE_CLICKED: int
BUTTON2_CLICKED: int
BUTTON2_DOUBLE_CLICKED: int
BUTTON2_PRESSED: int
BUTTON2_RELEASED: int
BUTTON2_TRIPLE_CLICKED: int
BUTTON3_CLICKED: int
BUTTON3_DOUBLE_CLICKED: int
BUTTON3_PRESSED: int
BUTTON3_RELEASED: int
BUTTON3_TRIPLE_CLICKED: int
BUTTON4_CLICKED: int
BUTTON4_DOUBLE_CLICKED: int
BUTTON4_PRESSED: int
BUTTON4_RELEASED: int
BUTTON4_TRIPLE_CLICKED: int
# Darwin ncurses doesn't provide BUTTON5_* constants
if sys.version_info >= (3, 10) and sys.platform != "darwin":
BUTTON5_PRESSED: int
BUTTON5_RELEASED: int
BUTTON5_CLICKED: int
BUTTON5_DOUBLE_CLICKED: int
BUTTON5_TRIPLE_CLICKED: int
BUTTON_ALT: int
BUTTON_CTRL: int
BUTTON_SHIFT: int
COLOR_BLACK: int
COLOR_BLUE: int
COLOR_CYAN: int
COLOR_GREEN: int
COLOR_MAGENTA: int
COLOR_RED: int
COLOR_WHITE: int
COLOR_YELLOW: int
ERR: int
KEY_A1: int
KEY_A3: int
KEY_B2: int
KEY_BACKSPACE: int
KEY_BEG: int
KEY_BREAK: int
KEY_BTAB: int
KEY_C1: int
KEY_C3: int
KEY_CANCEL: int
KEY_CATAB: int
KEY_CLEAR: int
KEY_CLOSE: int
KEY_COMMAND: int
KEY_COPY: int
KEY_CREATE: int
KEY_CTAB: int
KEY_DC: int
KEY_DL: int
KEY_DOWN: int
KEY_EIC: int
KEY_END: int
KEY_ENTER: int
KEY_EOL: int
KEY_EOS: int
KEY_EXIT: int
KEY_F0: int
KEY_F1: int
KEY_F10: int
KEY_F11: int
KEY_F12: int
KEY_F13: int
KEY_F14: int
KEY_F15: int
KEY_F16: int
KEY_F17: int
KEY_F18: int
KEY_F19: int
KEY_F2: int
KEY_F20: int
KEY_F21: int
KEY_F22: int
KEY_F23: int
KEY_F24: int
KEY_F25: int
KEY_F26: int
KEY_F27: int
KEY_F28: int
KEY_F29: int
KEY_F3: int
KEY_F30: int
KEY_F31: int
KEY_F32: int
KEY_F33: int
KEY_F34: int
KEY_F35: int
KEY_F36: int
KEY_F37: int
KEY_F38: int
KEY_F39: int
KEY_F4: int
KEY_F40: int
KEY_F41: int
KEY_F42: int
KEY_F43: int
KEY_F44: int
KEY_F45: int
KEY_F46: int
KEY_F47: int
KEY_F48: int
KEY_F49: int
KEY_F5: int
KEY_F50: int
KEY_F51: int
KEY_F52: int
KEY_F53: int
KEY_F54: int
KEY_F55: int
KEY_F56: int
KEY_F57: int
KEY_F58: int
KEY_F59: int
KEY_F6: int
KEY_F60: int
KEY_F61: int
KEY_F62: int
KEY_F63: int
KEY_F7: int
KEY_F8: int
KEY_F9: int
KEY_FIND: int
KEY_HELP: int
KEY_HOME: int
KEY_IC: int
KEY_IL: int
KEY_LEFT: int
KEY_LL: int
KEY_MARK: int
KEY_MAX: int
KEY_MESSAGE: int
KEY_MIN: int
KEY_MOUSE: int
KEY_MOVE: int
KEY_NEXT: int
KEY_NPAGE: int
KEY_OPEN: int
KEY_OPTIONS: int
KEY_PPAGE: int
KEY_PREVIOUS: int
KEY_PRINT: int
KEY_REDO: int
KEY_REFERENCE: int
KEY_REFRESH: int
KEY_REPLACE: int
KEY_RESET: int
KEY_RESIZE: int
KEY_RESTART: int
KEY_RESUME: int
KEY_RIGHT: int
KEY_SAVE: int
KEY_SBEG: int
KEY_SCANCEL: int
KEY_SCOMMAND: int
KEY_SCOPY: int
KEY_SCREATE: int
KEY_SDC: int
KEY_SDL: int
KEY_SELECT: int
KEY_SEND: int
KEY_SEOL: int
KEY_SEXIT: int
KEY_SF: int
KEY_SFIND: int
KEY_SHELP: int
KEY_SHOME: int
KEY_SIC: int
KEY_SLEFT: int
KEY_SMESSAGE: int
KEY_SMOVE: int
KEY_SNEXT: int
KEY_SOPTIONS: int
KEY_SPREVIOUS: int
KEY_SPRINT: int
KEY_SR: int
KEY_SREDO: int
KEY_SREPLACE: int
KEY_SRESET: int
KEY_SRIGHT: int
KEY_SRSUME: int
KEY_SSAVE: int
KEY_SSUSPEND: int
KEY_STAB: int
KEY_SUNDO: int
KEY_SUSPEND: int
KEY_UNDO: int
KEY_UP: int
OK: int
REPORT_MOUSE_POSITION: int
_C_API: Any
version: bytes
def baudrate() -> int: ...
def beep() -> None: ...
def can_change_color() -> bool: ...
def cbreak(flag: bool = True, /) -> None: ...
def color_content(color_number: int, /) -> tuple[int, int, int]: ...
def color_pair(pair_number: int, /) -> int: ...
def curs_set(visibility: int, /) -> int: ...
def def_prog_mode() -> None: ...
def def_shell_mode() -> None: ...
def delay_output(ms: int, /) -> None: ...
def doupdate() -> None: ...
def echo(flag: bool = True, /) -> None: ...
def endwin() -> None: ...
def erasechar() -> bytes: ...
def filter() -> None: ...
def flash() -> None: ...
def flushinp() -> None: ...
if sys.version_info >= (3, 9):
def get_escdelay() -> int: ...
def get_tabsize() -> int: ...
def getmouse() -> tuple[int, int, int, int, int]: ...
def getsyx() -> tuple[int, int]: ...
def getwin(file: SupportsRead[bytes], /) -> _CursesWindow: ...
def halfdelay(tenths: int, /) -> None: ...
def has_colors() -> bool: ...
if sys.version_info >= (3, 10):
def has_extended_color_support() -> bool: ...
def has_ic() -> bool: ...
def has_il() -> bool: ...
def has_key(key: int, /) -> bool: ...
def init_color(color_number: int, r: int, g: int, b: int, /) -> None: ...
def init_pair(pair_number: int, fg: int, bg: int, /) -> None: ...
def initscr() -> _CursesWindow: ...
def intrflush(flag: bool, /) -> None: ...
def is_term_resized(nlines: int, ncols: int, /) -> bool: ...
def isendwin() -> bool: ...
def keyname(key: int, /) -> bytes: ...
def killchar() -> bytes: ...
def longname() -> bytes: ...
def meta(yes: bool, /) -> None: ...
def mouseinterval(interval: int, /) -> None: ...
def mousemask(newmask: int, /) -> tuple[int, int]: ...
def napms(ms: int, /) -> int: ...
def newpad(nlines: int, ncols: int, /) -> _CursesWindow: ...
def newwin(nlines: int, ncols: int, begin_y: int = ..., begin_x: int = ..., /) -> _CursesWindow: ...
def nl(flag: bool = True, /) -> None: ...
def nocbreak() -> None: ...
def noecho() -> None: ...
def nonl() -> None: ...
def noqiflush() -> None: ...
def noraw() -> None: ...
def pair_content(pair_number: int, /) -> tuple[int, int]: ...
def pair_number(attr: int, /) -> int: ...
def putp(string: ReadOnlyBuffer, /) -> None: ...
def qiflush(flag: bool = True, /) -> None: ...
def raw(flag: bool = True, /) -> None: ...
def reset_prog_mode() -> None: ...
def reset_shell_mode() -> None: ...
def resetty() -> None: ...
def resize_term(nlines: int, ncols: int, /) -> None: ...
def resizeterm(nlines: int, ncols: int, /) -> None: ...
def savetty() -> None: ...
if sys.version_info >= (3, 9):
def set_escdelay(ms: int, /) -> None: ...
def set_tabsize(size: int, /) -> None: ...
def setsyx(y: int, x: int, /) -> None: ...
def setupterm(term: str | None = None, fd: int = -1) -> None: ...
def start_color() -> None: ...
def termattrs() -> int: ...
def termname() -> bytes: ...
def tigetflag(capname: str, /) -> int: ...
def tigetnum(capname: str, /) -> int: ...
def tigetstr(capname: str, /) -> bytes | None: ...
def tparm(
str: ReadOnlyBuffer,
i1: int = 0,
i2: int = 0,
i3: int = 0,
i4: int = 0,
i5: int = 0,
i6: int = 0,
i7: int = 0,
i8: int = 0,
i9: int = 0,
/,
) -> bytes: ...
def typeahead(fd: int, /) -> None: ...
def unctrl(ch: _ChType, /) -> bytes: ...
if sys.version_info < (3, 12) or sys.platform != "darwin":
# Support for macOS was dropped in 3.12
def unget_wch(ch: int | str, /) -> None: ...
def ungetch(ch: _ChType, /) -> None: ...
def ungetmouse(id: int, x: int, y: int, z: int, bstate: int, /) -> None: ...
def update_lines_cols() -> None: ...
def use_default_colors() -> None: ...
def use_env(flag: bool, /) -> None: ...
class error(Exception): ...
@final
class _CursesWindow:
encoding: str
@overload
def addch(self, ch: _ChType, attr: int = ...) -> None: ...
@overload
def addch(self, y: int, x: int, ch: _ChType, attr: int = ...) -> None: ...
@overload
def addnstr(self, str: str, n: int, attr: int = ...) -> None: ...
@overload
def addnstr(self, y: int, x: int, str: str, n: int, attr: int = ...) -> None: ...
@overload
def addstr(self, str: str, attr: int = ...) -> None: ...
@overload
def addstr(self, y: int, x: int, str: str, attr: int = ...) -> None: ...
def attroff(self, attr: int, /) -> None: ...
def attron(self, attr: int, /) -> None: ...
def attrset(self, attr: int, /) -> None: ...
def bkgd(self, ch: _ChType, attr: int = ..., /) -> None: ...
def bkgdset(self, ch: _ChType, attr: int = ..., /) -> None: ...
def border(
self,
ls: _ChType = ...,
rs: _ChType = ...,
ts: _ChType = ...,
bs: _ChType = ...,
tl: _ChType = ...,
tr: _ChType = ...,
bl: _ChType = ...,
br: _ChType = ...,
) -> None: ...
@overload
def box(self) -> None: ...
@overload
def box(self, vertch: _ChType = ..., horch: _ChType = ...) -> None: ...
@overload
def chgat(self, attr: int) -> None: ...
@overload
def chgat(self, num: int, attr: int) -> None: ...
@overload
def chgat(self, y: int, x: int, attr: int) -> None: ...
@overload
def chgat(self, y: int, x: int, num: int, attr: int) -> None: ...
def clear(self) -> None: ...
def clearok(self, yes: int) -> None: ...
def clrtobot(self) -> None: ...
def clrtoeol(self) -> None: ...
def cursyncup(self) -> None: ...
@overload
def delch(self) -> None: ...
@overload
def delch(self, y: int, x: int) -> None: ...
def deleteln(self) -> None: ...
@overload
def derwin(self, begin_y: int, begin_x: int) -> _CursesWindow: ...
@overload
def derwin(self, nlines: int, ncols: int, begin_y: int, begin_x: int) -> _CursesWindow: ...
def echochar(self, ch: _ChType, attr: int = ..., /) -> None: ...
def enclose(self, y: int, x: int, /) -> bool: ...
def erase(self) -> None: ...
def getbegyx(self) -> tuple[int, int]: ...
def getbkgd(self) -> tuple[int, int]: ...
@overload
def getch(self) -> int: ...
@overload
def getch(self, y: int, x: int) -> int: ...
if sys.version_info < (3, 12) or sys.platform != "darwin":
# Support for macOS was dropped in 3.12
@overload
def get_wch(self) -> int | str: ...
@overload
def get_wch(self, y: int, x: int) -> int | str: ...
@overload
def getkey(self) -> str: ...
@overload
def getkey(self, y: int, x: int) -> str: ...
def getmaxyx(self) -> tuple[int, int]: ...
def getparyx(self) -> tuple[int, int]: ...
@overload
def getstr(self) -> bytes: ...
@overload
def getstr(self, n: int) -> bytes: ...
@overload
def getstr(self, y: int, x: int) -> bytes: ...
@overload
def getstr(self, y: int, x: int, n: int) -> bytes: ...
def getyx(self) -> tuple[int, int]: ...
@overload
def hline(self, ch: _ChType, n: int) -> None: ...
@overload
def hline(self, y: int, x: int, ch: _ChType, n: int) -> None: ...
def idcok(self, flag: bool) -> None: ...
def idlok(self, yes: bool) -> None: ...
def immedok(self, flag: bool) -> None: ...
@overload
def inch(self) -> int: ...
@overload
def inch(self, y: int, x: int) -> int: ...
@overload
def insch(self, ch: _ChType, attr: int = ...) -> None: ...
@overload
def insch(self, y: int, x: int, ch: _ChType, attr: int = ...) -> None: ...
def insdelln(self, nlines: int) -> None: ...
def insertln(self) -> None: ...
@overload
def insnstr(self, str: str, n: int, attr: int = ...) -> None: ...
@overload
def insnstr(self, y: int, x: int, str: str, n: int, attr: int = ...) -> None: ...
@overload
def insstr(self, str: str, attr: int = ...) -> None: ...
@overload
def insstr(self, y: int, x: int, str: str, attr: int = ...) -> None: ...
@overload
def instr(self, n: int = ...) -> bytes: ...
@overload
def instr(self, y: int, x: int, n: int = ...) -> bytes: ...
def is_linetouched(self, line: int, /) -> bool: ...
def is_wintouched(self) -> bool: ...
def keypad(self, yes: bool) -> None: ...
def leaveok(self, yes: bool) -> None: ...
def move(self, new_y: int, new_x: int) -> None: ...
def mvderwin(self, y: int, x: int) -> None: ...
def mvwin(self, new_y: int, new_x: int) -> None: ...
def nodelay(self, yes: bool) -> None: ...
def notimeout(self, yes: bool) -> None: ...
@overload
def noutrefresh(self) -> None: ...
@overload
def noutrefresh(self, pminrow: int, pmincol: int, sminrow: int, smincol: int, smaxrow: int, smaxcol: int) -> None: ...
@overload
def overlay(self, destwin: _CursesWindow) -> None: ...
@overload
def overlay(
self, destwin: _CursesWindow, sminrow: int, smincol: int, dminrow: int, dmincol: int, dmaxrow: int, dmaxcol: int
) -> None: ...
@overload
def overwrite(self, destwin: _CursesWindow) -> None: ...
@overload
def overwrite(
self, destwin: _CursesWindow, sminrow: int, smincol: int, dminrow: int, dmincol: int, dmaxrow: int, dmaxcol: int
) -> None: ...
def putwin(self, file: IO[Any], /) -> None: ...
def redrawln(self, beg: int, num: int, /) -> None: ...
def redrawwin(self) -> None: ...
@overload
def refresh(self) -> None: ...
@overload
def refresh(self, pminrow: int, pmincol: int, sminrow: int, smincol: int, smaxrow: int, smaxcol: int) -> None: ...
def resize(self, nlines: int, ncols: int) -> None: ...
def scroll(self, lines: int = ...) -> None: ...
def scrollok(self, flag: bool) -> None: ...
def setscrreg(self, top: int, bottom: int, /) -> None: ...
def standend(self) -> None: ...
def standout(self) -> None: ...
@overload
def subpad(self, begin_y: int, begin_x: int) -> _CursesWindow: ...
@overload
def subpad(self, nlines: int, ncols: int, begin_y: int, begin_x: int) -> _CursesWindow: ...
@overload
def subwin(self, begin_y: int, begin_x: int) -> _CursesWindow: ...
@overload
def subwin(self, nlines: int, ncols: int, begin_y: int, begin_x: int) -> _CursesWindow: ...
def syncdown(self) -> None: ...
def syncok(self, flag: bool) -> None: ...
def syncup(self) -> None: ...
def timeout(self, delay: int) -> None: ...
def touchline(self, start: int, count: int, changed: bool = ...) -> None: ...
def touchwin(self) -> None: ...
def untouchwin(self) -> None: ...
@overload
def vline(self, ch: _ChType, n: int) -> None: ...
@overload
def vline(self, y: int, x: int, ch: _ChType, n: int) -> None: ...
class _ncurses_version(NamedTuple):
major: int
minor: int
patch: int
ncurses_version: _ncurses_version
window = _CursesWindow # undocumented
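
The `window = _CursesWindow` alias above is the runtime `curses.window` type; a minimal sketch (not part of the vendored stub, and it needs a real terminal) of driving it through the public `curses.wrapper` helper:

```py
import curses

def main(stdscr: "curses.window") -> None:
    stdscr.addstr(0, 0, "hello from curses")
    stdscr.refresh()
    stdscr.getch()  # wait for a keypress

curses.wrapper(main)  # handles initscr()/endwin() setup and teardown
```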

@@ -0,0 +1,281 @@
import numbers
import sys
from collections.abc import Container, Sequence
from types import TracebackType
from typing import Any, ClassVar, Final, Literal, NamedTuple, overload
from typing_extensions import Self, TypeAlias
_Decimal: TypeAlias = Decimal | int
_DecimalNew: TypeAlias = Decimal | float | str | tuple[int, Sequence[int], int]
_ComparableNum: TypeAlias = Decimal | float | numbers.Rational
__version__: Final[str]
__libmpdec_version__: Final[str]
class DecimalTuple(NamedTuple):
sign: int
digits: tuple[int, ...]
exponent: int | Literal["n", "N", "F"]
ROUND_DOWN: str
ROUND_HALF_UP: str
ROUND_HALF_EVEN: str
ROUND_CEILING: str
ROUND_FLOOR: str
ROUND_UP: str
ROUND_HALF_DOWN: str
ROUND_05UP: str
HAVE_CONTEXTVAR: bool
HAVE_THREADS: bool
MAX_EMAX: int
MAX_PREC: int
MIN_EMIN: int
MIN_ETINY: int
class DecimalException(ArithmeticError): ...
class Clamped(DecimalException): ...
class InvalidOperation(DecimalException): ...
class ConversionSyntax(InvalidOperation): ...
class DivisionByZero(DecimalException, ZeroDivisionError): ...
class DivisionImpossible(InvalidOperation): ...
class DivisionUndefined(InvalidOperation, ZeroDivisionError): ...
class Inexact(DecimalException): ...
class InvalidContext(InvalidOperation): ...
class Rounded(DecimalException): ...
class Subnormal(DecimalException): ...
class Overflow(Inexact, Rounded): ...
class Underflow(Inexact, Rounded, Subnormal): ...
class FloatOperation(DecimalException, TypeError): ...
def setcontext(context: Context, /) -> None: ...
def getcontext() -> Context: ...
if sys.version_info >= (3, 11):
def localcontext(
ctx: Context | None = None,
*,
prec: int | None = ...,
rounding: str | None = ...,
Emin: int | None = ...,
Emax: int | None = ...,
capitals: int | None = ...,
clamp: int | None = ...,
traps: dict[_TrapType, bool] | None = ...,
flags: dict[_TrapType, bool] | None = ...,
) -> _ContextManager: ...
else:
def localcontext(ctx: Context | None = None) -> _ContextManager: ...
class Decimal:
def __new__(cls, value: _DecimalNew = ..., context: Context | None = ...) -> Self: ...
@classmethod
def from_float(cls, f: float, /) -> Self: ...
def __bool__(self) -> bool: ...
def compare(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def __hash__(self) -> int: ...
def as_tuple(self) -> DecimalTuple: ...
def as_integer_ratio(self) -> tuple[int, int]: ...
def to_eng_string(self, context: Context | None = None) -> str: ...
def __abs__(self) -> Decimal: ...
def __add__(self, value: _Decimal, /) -> Decimal: ...
def __divmod__(self, value: _Decimal, /) -> tuple[Decimal, Decimal]: ...
def __eq__(self, value: object, /) -> bool: ...
def __floordiv__(self, value: _Decimal, /) -> Decimal: ...
def __ge__(self, value: _ComparableNum, /) -> bool: ...
def __gt__(self, value: _ComparableNum, /) -> bool: ...
def __le__(self, value: _ComparableNum, /) -> bool: ...
def __lt__(self, value: _ComparableNum, /) -> bool: ...
def __mod__(self, value: _Decimal, /) -> Decimal: ...
def __mul__(self, value: _Decimal, /) -> Decimal: ...
def __neg__(self) -> Decimal: ...
def __pos__(self) -> Decimal: ...
def __pow__(self, value: _Decimal, mod: _Decimal | None = None, /) -> Decimal: ...
def __radd__(self, value: _Decimal, /) -> Decimal: ...
def __rdivmod__(self, value: _Decimal, /) -> tuple[Decimal, Decimal]: ...
def __rfloordiv__(self, value: _Decimal, /) -> Decimal: ...
def __rmod__(self, value: _Decimal, /) -> Decimal: ...
def __rmul__(self, value: _Decimal, /) -> Decimal: ...
def __rsub__(self, value: _Decimal, /) -> Decimal: ...
def __rtruediv__(self, value: _Decimal, /) -> Decimal: ...
def __sub__(self, value: _Decimal, /) -> Decimal: ...
def __truediv__(self, value: _Decimal, /) -> Decimal: ...
def remainder_near(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def __float__(self) -> float: ...
def __int__(self) -> int: ...
def __trunc__(self) -> int: ...
@property
def real(self) -> Decimal: ...
@property
def imag(self) -> Decimal: ...
def conjugate(self) -> Decimal: ...
def __complex__(self) -> complex: ...
@overload
def __round__(self) -> int: ...
@overload
def __round__(self, ndigits: int, /) -> Decimal: ...
def __floor__(self) -> int: ...
def __ceil__(self) -> int: ...
def fma(self, other: _Decimal, third: _Decimal, context: Context | None = None) -> Decimal: ...
def __rpow__(self, value: _Decimal, mod: Context | None = None, /) -> Decimal: ...
def normalize(self, context: Context | None = None) -> Decimal: ...
def quantize(self, exp: _Decimal, rounding: str | None = None, context: Context | None = None) -> Decimal: ...
def same_quantum(self, other: _Decimal, context: Context | None = None) -> bool: ...
def to_integral_exact(self, rounding: str | None = None, context: Context | None = None) -> Decimal: ...
def to_integral_value(self, rounding: str | None = None, context: Context | None = None) -> Decimal: ...
def to_integral(self, rounding: str | None = None, context: Context | None = None) -> Decimal: ...
def sqrt(self, context: Context | None = None) -> Decimal: ...
def max(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def min(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def adjusted(self) -> int: ...
def canonical(self) -> Decimal: ...
def compare_signal(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def compare_total(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def compare_total_mag(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def copy_abs(self) -> Decimal: ...
def copy_negate(self) -> Decimal: ...
def copy_sign(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def exp(self, context: Context | None = None) -> Decimal: ...
def is_canonical(self) -> bool: ...
def is_finite(self) -> bool: ...
def is_infinite(self) -> bool: ...
def is_nan(self) -> bool: ...
def is_normal(self, context: Context | None = None) -> bool: ...
def is_qnan(self) -> bool: ...
def is_signed(self) -> bool: ...
def is_snan(self) -> bool: ...
def is_subnormal(self, context: Context | None = None) -> bool: ...
def is_zero(self) -> bool: ...
def ln(self, context: Context | None = None) -> Decimal: ...
def log10(self, context: Context | None = None) -> Decimal: ...
def logb(self, context: Context | None = None) -> Decimal: ...
def logical_and(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def logical_invert(self, context: Context | None = None) -> Decimal: ...
def logical_or(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def logical_xor(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def max_mag(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def min_mag(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def next_minus(self, context: Context | None = None) -> Decimal: ...
def next_plus(self, context: Context | None = None) -> Decimal: ...
def next_toward(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def number_class(self, context: Context | None = None) -> str: ...
def radix(self) -> Decimal: ...
def rotate(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def scaleb(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def shift(self, other: _Decimal, context: Context | None = None) -> Decimal: ...
def __reduce__(self) -> tuple[type[Self], tuple[str]]: ...
def __copy__(self) -> Self: ...
def __deepcopy__(self, memo: Any, /) -> Self: ...
def __format__(self, specifier: str, context: Context | None = ..., /) -> str: ...
class _ContextManager:
new_context: Context
saved_context: Context
def __init__(self, new_context: Context) -> None: ...
def __enter__(self) -> Context: ...
def __exit__(self, t: type[BaseException] | None, v: BaseException | None, tb: TracebackType | None) -> None: ...
_TrapType: TypeAlias = type[DecimalException]
class Context:
# TODO: Context doesn't allow you to delete *any* attributes from instances of the class at runtime,
# even settable attributes like `prec` and `rounding`,
# but that's inexpressible in the stub.
# Type checkers either ignore it or misinterpret it
# if you add a `def __delattr__(self, name: str, /) -> NoReturn` method to the stub
prec: int
rounding: str
Emin: int
Emax: int
capitals: int
clamp: int
traps: dict[_TrapType, bool]
flags: dict[_TrapType, bool]
def __init__(
self,
prec: int | None = ...,
rounding: str | None = ...,
Emin: int | None = ...,
Emax: int | None = ...,
capitals: int | None = ...,
clamp: int | None = ...,
flags: None | dict[_TrapType, bool] | Container[_TrapType] = ...,
traps: None | dict[_TrapType, bool] | Container[_TrapType] = ...,
_ignored_flags: list[_TrapType] | None = ...,
) -> None: ...
def __reduce__(self) -> tuple[type[Self], tuple[Any, ...]]: ...
def clear_flags(self) -> None: ...
def clear_traps(self) -> None: ...
def copy(self) -> Context: ...
def __copy__(self) -> Context: ...
# see https://github.com/python/cpython/issues/94107
__hash__: ClassVar[None] # type: ignore[assignment]
def Etiny(self) -> int: ...
def Etop(self) -> int: ...
def create_decimal(self, num: _DecimalNew = "0", /) -> Decimal: ...
def create_decimal_from_float(self, f: float, /) -> Decimal: ...
def abs(self, x: _Decimal, /) -> Decimal: ...
def add(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def canonical(self, x: Decimal, /) -> Decimal: ...
def compare(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def compare_signal(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def compare_total(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def compare_total_mag(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def copy_abs(self, x: _Decimal, /) -> Decimal: ...
def copy_decimal(self, x: _Decimal, /) -> Decimal: ...
def copy_negate(self, x: _Decimal, /) -> Decimal: ...
def copy_sign(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def divide(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def divide_int(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def divmod(self, x: _Decimal, y: _Decimal, /) -> tuple[Decimal, Decimal]: ...
def exp(self, x: _Decimal, /) -> Decimal: ...
def fma(self, x: _Decimal, y: _Decimal, z: _Decimal, /) -> Decimal: ...
def is_canonical(self, x: _Decimal, /) -> bool: ...
def is_finite(self, x: _Decimal, /) -> bool: ...
def is_infinite(self, x: _Decimal, /) -> bool: ...
def is_nan(self, x: _Decimal, /) -> bool: ...
def is_normal(self, x: _Decimal, /) -> bool: ...
def is_qnan(self, x: _Decimal, /) -> bool: ...
def is_signed(self, x: _Decimal, /) -> bool: ...
def is_snan(self, x: _Decimal, /) -> bool: ...
def is_subnormal(self, x: _Decimal, /) -> bool: ...
def is_zero(self, x: _Decimal, /) -> bool: ...
def ln(self, x: _Decimal, /) -> Decimal: ...
def log10(self, x: _Decimal, /) -> Decimal: ...
def logb(self, x: _Decimal, /) -> Decimal: ...
def logical_and(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def logical_invert(self, x: _Decimal, /) -> Decimal: ...
def logical_or(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def logical_xor(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def max(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def max_mag(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def min(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def min_mag(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def minus(self, x: _Decimal, /) -> Decimal: ...
def multiply(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def next_minus(self, x: _Decimal, /) -> Decimal: ...
def next_plus(self, x: _Decimal, /) -> Decimal: ...
def next_toward(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def normalize(self, x: _Decimal, /) -> Decimal: ...
def number_class(self, x: _Decimal, /) -> str: ...
def plus(self, x: _Decimal, /) -> Decimal: ...
def power(self, a: _Decimal, b: _Decimal, modulo: _Decimal | None = None) -> Decimal: ...
def quantize(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def radix(self) -> Decimal: ...
def remainder(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def remainder_near(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def rotate(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def same_quantum(self, x: _Decimal, y: _Decimal, /) -> bool: ...
def scaleb(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def shift(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def sqrt(self, x: _Decimal, /) -> Decimal: ...
def subtract(self, x: _Decimal, y: _Decimal, /) -> Decimal: ...
def to_eng_string(self, x: _Decimal, /) -> str: ...
def to_sci_string(self, x: _Decimal, /) -> str: ...
def to_integral_exact(self, x: _Decimal, /) -> Decimal: ...
def to_integral_value(self, x: _Decimal, /) -> Decimal: ...
def to_integral(self, x: _Decimal, /) -> Decimal: ...
DefaultContext: Context
BasicContext: Context
ExtendedContext: Context
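
The stub above only gives `localcontext` keyword parameters on Python 3.11+; an illustrative sketch (not part of the vendored stub) of both forms:

```py
from decimal import Decimal, localcontext

# Works on all supported versions: mutate the context inside the with-block.
with localcontext() as ctx:
    ctx.prec = 6
    print(Decimal(1) / Decimal(7))  # 0.142857

# On Python 3.11+ the keyword form from the stub above is also available:
#     with localcontext(prec=6): ...
```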

@@ -0,0 +1,33 @@
from collections.abc import Callable
from types import TracebackType
from typing import Any, NoReturn, overload
from typing_extensions import TypeVarTuple, Unpack
__all__ = ["error", "start_new_thread", "exit", "get_ident", "allocate_lock", "interrupt_main", "LockType", "RLock"]
_Ts = TypeVarTuple("_Ts")
TIMEOUT_MAX: int
error = RuntimeError
@overload
def start_new_thread(function: Callable[[Unpack[_Ts]], object], args: tuple[Unpack[_Ts]]) -> None: ...
@overload
def start_new_thread(function: Callable[..., object], args: tuple[Any, ...], kwargs: dict[str, Any]) -> None: ...
def exit() -> NoReturn: ...
def get_ident() -> int: ...
def allocate_lock() -> LockType: ...
def stack_size(size: int | None = None) -> int: ...
class LockType:
locked_status: bool
def acquire(self, waitflag: bool | None = None, timeout: int = -1) -> bool: ...
def __enter__(self, waitflag: bool | None = None, timeout: int = -1) -> bool: ...
def __exit__(self, typ: type[BaseException] | None, val: BaseException | None, tb: TracebackType | None) -> None: ...
def release(self) -> bool: ...
def locked(self) -> bool: ...
class RLock(LockType):
def release(self) -> None: ... # type: ignore[override]
def interrupt_main() -> None: ...
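
An illustrative sketch (not part of the vendored stub) of the two `start_new_thread` overloads typed above, using a hypothetical `add` function:

```py
import _thread
import time

def add(a: int, b: int) -> None:
    print(a + b)

# First overload: the args tuple lines up positionally with the callable's parameters.
_thread.start_new_thread(add, (1, 2))

# Second overload: keyword arguments are passed in a separate dict.
_thread.start_new_thread(print, ("done",), {"flush": True})

time.sleep(0.1)  # give the spawned threads a moment to run before the interpreter exits
```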

@@ -0,0 +1,164 @@
import sys
from _thread import _excepthook, _ExceptHookArgs
from _typeshed import ProfileFunction, TraceFunction
from collections.abc import Callable, Iterable, Mapping
from types import TracebackType
from typing import Any, TypeVar
_T = TypeVar("_T")
__all__ = [
"get_ident",
"active_count",
"Condition",
"current_thread",
"enumerate",
"main_thread",
"TIMEOUT_MAX",
"Event",
"Lock",
"RLock",
"Semaphore",
"BoundedSemaphore",
"Thread",
"Barrier",
"BrokenBarrierError",
"Timer",
"ThreadError",
"setprofile",
"settrace",
"local",
"stack_size",
"ExceptHookArgs",
"excepthook",
]
def active_count() -> int: ...
def current_thread() -> Thread: ...
def currentThread() -> Thread: ...
def get_ident() -> int: ...
def enumerate() -> list[Thread]: ...
def main_thread() -> Thread: ...
def settrace(func: TraceFunction) -> None: ...
def setprofile(func: ProfileFunction | None) -> None: ...
def stack_size(size: int | None = None) -> int: ...
TIMEOUT_MAX: float
class ThreadError(Exception): ...
class local:
def __getattribute__(self, name: str) -> Any: ...
def __setattr__(self, name: str, value: Any) -> None: ...
def __delattr__(self, name: str) -> None: ...
class Thread:
name: str
daemon: bool
@property
def ident(self) -> int | None: ...
def __init__(
self,
group: None = None,
target: Callable[..., object] | None = None,
name: str | None = None,
args: Iterable[Any] = (),
kwargs: Mapping[str, Any] | None = None,
*,
daemon: bool | None = None,
) -> None: ...
def start(self) -> None: ...
def run(self) -> None: ...
def join(self, timeout: float | None = None) -> None: ...
def getName(self) -> str: ...
def setName(self, name: str) -> None: ...
@property
def native_id(self) -> int | None: ... # only available on some platforms
def is_alive(self) -> bool: ...
if sys.version_info < (3, 9):
def isAlive(self) -> bool: ...
def isDaemon(self) -> bool: ...
def setDaemon(self, daemonic: bool) -> None: ...
class _DummyThread(Thread): ...
class Lock:
def __enter__(self) -> bool: ...
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None
) -> bool | None: ...
def acquire(self, blocking: bool = ..., timeout: float = ...) -> bool: ...
def release(self) -> None: ...
def locked(self) -> bool: ...
class _RLock:
def __enter__(self) -> bool: ...
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None
) -> bool | None: ...
def acquire(self, blocking: bool = True, timeout: float = -1) -> bool: ...
def release(self) -> None: ...
RLock = _RLock
class Condition:
def __init__(self, lock: Lock | _RLock | None = None) -> None: ...
def __enter__(self) -> bool: ...
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None
) -> bool | None: ...
def acquire(self, blocking: bool = ..., timeout: float = ...) -> bool: ...
def release(self) -> None: ...
def wait(self, timeout: float | None = None) -> bool: ...
def wait_for(self, predicate: Callable[[], _T], timeout: float | None = None) -> _T: ...
def notify(self, n: int = 1) -> None: ...
def notify_all(self) -> None: ...
def notifyAll(self) -> None: ...
class Semaphore:
def __init__(self, value: int = 1) -> None: ...
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None
) -> bool | None: ...
def acquire(self, blocking: bool = True, timeout: float | None = None) -> bool: ...
def __enter__(self, blocking: bool = True, timeout: float | None = None) -> bool: ...
if sys.version_info >= (3, 9):
def release(self, n: int = ...) -> None: ...
else:
def release(self) -> None: ...
class BoundedSemaphore(Semaphore): ...
class Event:
def is_set(self) -> bool: ...
def set(self) -> None: ...
def clear(self) -> None: ...
def wait(self, timeout: float | None = None) -> bool: ...
excepthook = _excepthook
ExceptHookArgs = _ExceptHookArgs
class Timer(Thread):
def __init__(
self,
interval: float,
function: Callable[..., object],
args: Iterable[Any] | None = None,
kwargs: Mapping[str, Any] | None = None,
) -> None: ...
def cancel(self) -> None: ...
class Barrier:
@property
def parties(self) -> int: ...
@property
def n_waiting(self) -> int: ...
@property
def broken(self) -> bool: ...
def __init__(self, parties: int, action: Callable[[], None] | None = None, timeout: float | None = None) -> None: ...
def wait(self, timeout: float | None = None) -> int: ...
def reset(self) -> None: ...
def abort(self) -> None: ...
class BrokenBarrierError(RuntimeError): ...
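
An illustrative sketch (not part of the vendored stub) of the `Thread` and `Event` APIs typed above, with a hypothetical `worker` function:

```py
import threading

done = threading.Event()

def worker() -> None:
    # ... do some work ...
    done.set()

t = threading.Thread(target=worker, name="worker", daemon=True)
t.start()
print(done.wait(timeout=5.0))  # True once worker() has called set()
t.join()
```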

@@ -0,0 +1,11 @@
from typing import Any, Final, TypeVar
_T = TypeVar("_T")
__about__: Final[str]
def heapify(heap: list[Any], /) -> None: ...
def heappop(heap: list[_T], /) -> _T: ...
def heappush(heap: list[_T], item: _T, /) -> None: ...
def heappushpop(heap: list[_T], item: _T, /) -> _T: ...
def heapreplace(heap: list[_T], item: _T, /) -> _T: ...
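
The `_heapq` functions above operate on a plain list kept in heap order and are re-exported by the public `heapq` module; a minimal sketch (not part of the vendored stub):

```py
import heapq

heap: list[int] = []
for n in (5, 1, 4, 3):
    heapq.heappush(heap, n)

print(heapq.heappop(heap))         # 1 -- smallest element first
print(heapq.heapreplace(heap, 2))  # 3 -- pop the current smallest, then push 2
```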

@@ -0,0 +1,28 @@
import sys
import types
from _typeshed import ReadableBuffer
from importlib.machinery import ModuleSpec
from typing import Any
check_hash_based_pycs: str
def source_hash(key: int, source: ReadableBuffer) -> bytes: ...
def create_builtin(spec: ModuleSpec, /) -> types.ModuleType: ...
def create_dynamic(spec: ModuleSpec, file: Any = None, /) -> types.ModuleType: ...
def acquire_lock() -> None: ...
def exec_builtin(mod: types.ModuleType, /) -> int: ...
def exec_dynamic(mod: types.ModuleType, /) -> int: ...
def extension_suffixes() -> list[str]: ...
def init_frozen(name: str, /) -> types.ModuleType: ...
def is_builtin(name: str, /) -> int: ...
def is_frozen(name: str, /) -> bool: ...
def is_frozen_package(name: str, /) -> bool: ...
def lock_held() -> bool: ...
def release_lock() -> None: ...
if sys.version_info >= (3, 11):
def find_frozen(name: str, /, *, withdata: bool = False) -> tuple[memoryview | None, bool, str | None] | None: ...
def get_frozen_object(name: str, data: ReadableBuffer | None = None, /) -> types.CodeType: ...
else:
def get_frozen_object(name: str, /) -> types.CodeType: ...

@@ -0,0 +1,49 @@
from collections.abc import Callable
from typing import Any, final
@final
class make_encoder:
@property
def sort_keys(self) -> bool: ...
@property
def skipkeys(self) -> bool: ...
@property
def key_separator(self) -> str: ...
@property
def indent(self) -> int | None: ...
@property
def markers(self) -> dict[int, Any] | None: ...
@property
def default(self) -> Callable[[Any], Any]: ...
@property
def encoder(self) -> Callable[[str], str]: ...
@property
def item_separator(self) -> str: ...
def __init__(
self,
markers: dict[int, Any] | None,
default: Callable[[Any], Any],
encoder: Callable[[str], str],
indent: int | None,
key_separator: str,
item_separator: str,
sort_keys: bool,
skipkeys: bool,
allow_nan: bool,
) -> None: ...
def __call__(self, obj: object, _current_indent_level: int) -> Any: ...
@final
class make_scanner:
object_hook: Any
object_pairs_hook: Any
parse_int: Any
parse_constant: Any
parse_float: Any
strict: bool
# TODO: 'context' needs the attrs above (ducktype), but not __call__.
def __init__(self, context: make_scanner) -> None: ...
def __call__(self, string: str, index: int) -> tuple[Any, int]: ...
def encode_basestring_ascii(s: str) -> str: ...
def scanstring(string: str, end: int, strict: bool = ...) -> tuple[str, int]: ...

@@ -0,0 +1,100 @@
import sys
from _typeshed import StrPath
from collections.abc import Mapping
LC_CTYPE: int
LC_COLLATE: int
LC_TIME: int
LC_MONETARY: int
LC_NUMERIC: int
LC_ALL: int
CHAR_MAX: int
def setlocale(category: int, locale: str | None = None, /) -> str: ...
def localeconv() -> Mapping[str, int | str | list[int]]: ...
if sys.version_info >= (3, 11):
def getencoding() -> str: ...
def strcoll(os1: str, os2: str, /) -> int: ...
def strxfrm(string: str, /) -> str: ...
# native gettext functions
# https://docs.python.org/3/library/locale.html#access-to-message-catalogs
# https://github.com/python/cpython/blob/f4c03484da59049eb62a9bf7777b963e2267d187/Modules/_localemodule.c#L626
if sys.platform != "win32":
LC_MESSAGES: int
ABDAY_1: int
ABDAY_2: int
ABDAY_3: int
ABDAY_4: int
ABDAY_5: int
ABDAY_6: int
ABDAY_7: int
ABMON_1: int
ABMON_2: int
ABMON_3: int
ABMON_4: int
ABMON_5: int
ABMON_6: int
ABMON_7: int
ABMON_8: int
ABMON_9: int
ABMON_10: int
ABMON_11: int
ABMON_12: int
DAY_1: int
DAY_2: int
DAY_3: int
DAY_4: int
DAY_5: int
DAY_6: int
DAY_7: int
ERA: int
ERA_D_T_FMT: int
ERA_D_FMT: int
ERA_T_FMT: int
MON_1: int
MON_2: int
MON_3: int
MON_4: int
MON_5: int
MON_6: int
MON_7: int
MON_8: int
MON_9: int
MON_10: int
MON_11: int
MON_12: int
CODESET: int
D_T_FMT: int
D_FMT: int
T_FMT: int
T_FMT_AMPM: int
AM_STR: int
PM_STR: int
RADIXCHAR: int
THOUSEP: int
YESEXPR: int
NOEXPR: int
CRNCYSTR: int
ALT_DIGITS: int
def nl_langinfo(key: int, /) -> str: ...
# These depend on `libintl.h`, which is part of the `gettext` system
# dependency, so the functions might be missing at runtime.
# The stub nonetheless declares them as always present.
def gettext(msg: str, /) -> str: ...
def dgettext(domain: str | None, msg: str, /) -> str: ...
def dcgettext(domain: str | None, msg: str, category: int, /) -> str: ...
def textdomain(domain: str | None, /) -> str: ...
def bindtextdomain(domain: str, dir: StrPath | None, /) -> str: ...
def bind_textdomain_codeset(domain: str, codeset: str | None, /) -> str | None: ...

@@ -0,0 +1,35 @@
import sys
from _typeshed import structseq
from collections.abc import Callable
from types import CodeType
from typing import Any, Final, final
class Profiler:
def __init__(
self, timer: Callable[[], float] | None = None, timeunit: float = 0.0, subcalls: bool = True, builtins: bool = True
) -> None: ...
def getstats(self) -> list[profiler_entry]: ...
def enable(self, subcalls: bool = True, builtins: bool = True) -> None: ...
def disable(self) -> None: ...
def clear(self) -> None: ...
@final
class profiler_entry(structseq[Any], tuple[CodeType | str, int, int, float, float, list[profiler_subentry]]):
if sys.version_info >= (3, 10):
__match_args__: Final = ("code", "callcount", "reccallcount", "totaltime", "inlinetime", "calls")
code: CodeType | str
callcount: int
reccallcount: int
totaltime: float
inlinetime: float
calls: list[profiler_subentry]
@final
class profiler_subentry(structseq[Any], tuple[CodeType | str, int, int, float, float]):
if sys.version_info >= (3, 10):
__match_args__: Final = ("code", "callcount", "reccallcount", "totaltime", "inlinetime")
code: CodeType | str
callcount: int
reccallcount: int
totaltime: float
inlinetime: float
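
`_lsprof.Profiler` is the C profiler that `cProfile.Profile` builds on; an illustrative sketch (not part of the vendored stub) of collecting the `profiler_entry` records typed above:

```py
import _lsprof

prof = _lsprof.Profiler()
prof.enable()
sum(range(10_000))
prof.disable()

for entry in prof.getstats():
    # entry.code is a code object for Python functions or a name string for builtins
    print(entry.code, entry.callcount, entry.totaltime)
```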

@@ -0,0 +1,16 @@
import sys
from typing import Any
class ParserBase:
def reset(self) -> None: ...
def getpos(self) -> tuple[int, int]: ...
def unknown_decl(self, data: str) -> None: ...
def parse_comment(self, i: int, report: int = 1) -> int: ... # undocumented
def parse_declaration(self, i: int) -> int: ... # undocumented
def parse_marked_section(self, i: int, report: int = 1) -> int: ... # undocumented
def updatepos(self, i: int, j: int) -> int: ... # undocumented
if sys.version_info < (3, 10):
# Removed from ParserBase: https://bugs.python.org/issue31844
def error(self, message: str) -> Any: ... # undocumented
lineno: int # undocumented
offset: int # undocumented

@@ -0,0 +1,92 @@
import sys
if sys.platform == "win32":
class MSIError(Exception): ...
# Actual typename View, not exposed by the implementation
class _View:
def Execute(self, params: _Record | None = ...) -> None: ...
def GetColumnInfo(self, kind: int) -> _Record: ...
def Fetch(self) -> _Record: ...
def Modify(self, mode: int, record: _Record) -> None: ...
def Close(self) -> None: ...
# Don't exist at runtime
__new__: None # type: ignore[assignment]
__init__: None # type: ignore[assignment]
# Actual typename SummaryInformation, not exposed by the implementation
class _SummaryInformation:
def GetProperty(self, field: int) -> int | bytes | None: ...
def GetPropertyCount(self) -> int: ...
def SetProperty(self, field: int, value: int | str) -> None: ...
def Persist(self) -> None: ...
# Don't exist at runtime
__new__: None # type: ignore[assignment]
__init__: None # type: ignore[assignment]
# Actual typename Database, not exposed by the implementation
class _Database:
def OpenView(self, sql: str) -> _View: ...
def Commit(self) -> None: ...
def GetSummaryInformation(self, updateCount: int) -> _SummaryInformation: ...
def Close(self) -> None: ...
# Don't exist at runtime
__new__: None # type: ignore[assignment]
__init__: None # type: ignore[assignment]
# Actual typename Record, not exposed by the implementation
class _Record:
def GetFieldCount(self) -> int: ...
def GetInteger(self, field: int) -> int: ...
def GetString(self, field: int) -> str: ...
def SetString(self, field: int, str: str) -> None: ...
def SetStream(self, field: int, stream: str) -> None: ...
def SetInteger(self, field: int, int: int) -> None: ...
def ClearData(self) -> None: ...
# Don't exist at runtime
__new__: None # type: ignore[assignment]
__init__: None # type: ignore[assignment]
def UuidCreate() -> str: ...
def FCICreate(cabname: str, files: list[str], /) -> None: ...
def OpenDatabase(path: str, persist: int, /) -> _Database: ...
def CreateRecord(count: int, /) -> _Record: ...
MSICOLINFO_NAMES: int
MSICOLINFO_TYPES: int
MSIDBOPEN_CREATE: int
MSIDBOPEN_CREATEDIRECT: int
MSIDBOPEN_DIRECT: int
MSIDBOPEN_PATCHFILE: int
MSIDBOPEN_READONLY: int
MSIDBOPEN_TRANSACT: int
MSIMODIFY_ASSIGN: int
MSIMODIFY_DELETE: int
MSIMODIFY_INSERT: int
MSIMODIFY_INSERT_TEMPORARY: int
MSIMODIFY_MERGE: int
MSIMODIFY_REFRESH: int
MSIMODIFY_REPLACE: int
MSIMODIFY_SEEK: int
MSIMODIFY_UPDATE: int
MSIMODIFY_VALIDATE: int
MSIMODIFY_VALIDATE_DELETE: int
MSIMODIFY_VALIDATE_FIELD: int
MSIMODIFY_VALIDATE_NEW: int
PID_APPNAME: int
PID_AUTHOR: int
PID_CHARCOUNT: int
PID_CODEPAGE: int
PID_COMMENTS: int
PID_CREATE_DTM: int
PID_KEYWORDS: int
PID_LASTAUTHOR: int
PID_LASTPRINTED: int
PID_LASTSAVE_DTM: int
PID_PAGECOUNT: int
PID_REVNUMBER: int
PID_SECURITY: int
PID_SUBJECT: int
PID_TEMPLATE: int
PID_TITLE: int
PID_WORDCOUNT: int

@@ -0,0 +1,147 @@
import sys
from _typeshed import SupportsGetItem
from collections.abc import Callable, Container, Iterable, MutableMapping, MutableSequence, Sequence
from typing import Any, AnyStr, Generic, Protocol, SupportsAbs, SupportsIndex, TypeVar, final, overload
from typing_extensions import ParamSpec, TypeAlias, TypeVarTuple, Unpack
_R = TypeVar("_R")
_T = TypeVar("_T")
_T_co = TypeVar("_T_co", covariant=True)
_T1 = TypeVar("_T1")
_T2 = TypeVar("_T2")
_K = TypeVar("_K")
_V = TypeVar("_V")
_P = ParamSpec("_P")
_Ts = TypeVarTuple("_Ts")
# The following protocols return "Any" instead of bool, since the comparison
# operators can be overloaded to return an arbitrary object. For example,
# the numpy.array comparison dunders return another numpy.array.
class _SupportsDunderLT(Protocol):
def __lt__(self, other: Any, /) -> Any: ...
class _SupportsDunderGT(Protocol):
def __gt__(self, other: Any, /) -> Any: ...
class _SupportsDunderLE(Protocol):
def __le__(self, other: Any, /) -> Any: ...
class _SupportsDunderGE(Protocol):
def __ge__(self, other: Any, /) -> Any: ...
_SupportsComparison: TypeAlias = _SupportsDunderLE | _SupportsDunderGE | _SupportsDunderGT | _SupportsDunderLT
class _SupportsInversion(Protocol[_T_co]):
def __invert__(self) -> _T_co: ...
class _SupportsNeg(Protocol[_T_co]):
def __neg__(self) -> _T_co: ...
class _SupportsPos(Protocol[_T_co]):
def __pos__(self) -> _T_co: ...
# All four comparison functions must have the same signature, or we get false-positive errors
def lt(a: _SupportsComparison, b: _SupportsComparison, /) -> Any: ...
def le(a: _SupportsComparison, b: _SupportsComparison, /) -> Any: ...
def eq(a: object, b: object, /) -> Any: ...
def ne(a: object, b: object, /) -> Any: ...
def ge(a: _SupportsComparison, b: _SupportsComparison, /) -> Any: ...
def gt(a: _SupportsComparison, b: _SupportsComparison, /) -> Any: ...
def not_(a: object, /) -> bool: ...
def truth(a: object, /) -> bool: ...
def is_(a: object, b: object, /) -> bool: ...
def is_not(a: object, b: object, /) -> bool: ...
def abs(a: SupportsAbs[_T], /) -> _T: ...
def add(a: Any, b: Any, /) -> Any: ...
def and_(a: Any, b: Any, /) -> Any: ...
def floordiv(a: Any, b: Any, /) -> Any: ...
def index(a: SupportsIndex, /) -> int: ...
def inv(a: _SupportsInversion[_T_co], /) -> _T_co: ...
def invert(a: _SupportsInversion[_T_co], /) -> _T_co: ...
def lshift(a: Any, b: Any, /) -> Any: ...
def mod(a: Any, b: Any, /) -> Any: ...
def mul(a: Any, b: Any, /) -> Any: ...
def matmul(a: Any, b: Any, /) -> Any: ...
def neg(a: _SupportsNeg[_T_co], /) -> _T_co: ...
def or_(a: Any, b: Any, /) -> Any: ...
def pos(a: _SupportsPos[_T_co], /) -> _T_co: ...
def pow(a: Any, b: Any, /) -> Any: ...
def rshift(a: Any, b: Any, /) -> Any: ...
def sub(a: Any, b: Any, /) -> Any: ...
def truediv(a: Any, b: Any, /) -> Any: ...
def xor(a: Any, b: Any, /) -> Any: ...
def concat(a: Sequence[_T], b: Sequence[_T], /) -> Sequence[_T]: ...
def contains(a: Container[object], b: object, /) -> bool: ...
def countOf(a: Iterable[object], b: object, /) -> int: ...
@overload
def delitem(a: MutableSequence[Any], b: SupportsIndex, /) -> None: ...
@overload
def delitem(a: MutableSequence[Any], b: slice, /) -> None: ...
@overload
def delitem(a: MutableMapping[_K, Any], b: _K, /) -> None: ...
@overload
def getitem(a: Sequence[_T], b: slice, /) -> Sequence[_T]: ...
@overload
def getitem(a: SupportsGetItem[_K, _V], b: _K, /) -> _V: ...
def indexOf(a: Iterable[_T], b: _T, /) -> int: ...
@overload
def setitem(a: MutableSequence[_T], b: SupportsIndex, c: _T, /) -> None: ...
@overload
def setitem(a: MutableSequence[_T], b: slice, c: Sequence[_T], /) -> None: ...
@overload
def setitem(a: MutableMapping[_K, _V], b: _K, c: _V, /) -> None: ...
def length_hint(obj: object, default: int = 0, /) -> int: ...
@final
class attrgetter(Generic[_T_co]):
@overload
def __new__(cls, attr: str, /) -> attrgetter[Any]: ...
@overload
def __new__(cls, attr: str, attr2: str, /) -> attrgetter[tuple[Any, Any]]: ...
@overload
def __new__(cls, attr: str, attr2: str, attr3: str, /) -> attrgetter[tuple[Any, Any, Any]]: ...
@overload
def __new__(cls, attr: str, attr2: str, attr3: str, attr4: str, /) -> attrgetter[tuple[Any, Any, Any, Any]]: ...
@overload
def __new__(cls, attr: str, /, *attrs: str) -> attrgetter[tuple[Any, ...]]: ...
def __call__(self, obj: Any, /) -> _T_co: ...
@final
class itemgetter(Generic[_T_co]):
@overload
def __new__(cls, item: _T, /) -> itemgetter[_T]: ...
@overload
def __new__(cls, item1: _T1, item2: _T2, /, *items: Unpack[_Ts]) -> itemgetter[tuple[_T1, _T2, Unpack[_Ts]]]: ...
# __key: _KT_contra in SupportsGetItem seems to be causing variance issues, ie:
# TypeVar "_KT_contra@SupportsGetItem" is contravariant
# "tuple[int, int]" is incompatible with protocol "SupportsIndex"
# preventing [_T_co, ...] instead of [Any, ...]
#
# A suspected mypy issue prevents using [..., _T] instead of [..., Any] here.
# https://github.com/python/mypy/issues/14032
def __call__(self, obj: SupportsGetItem[Any, Any]) -> Any: ...
@final
class methodcaller:
def __init__(self, name: str, /, *args: Any, **kwargs: Any) -> None: ...
def __call__(self, obj: Any) -> Any: ...
def iadd(a: Any, b: Any, /) -> Any: ...
def iand(a: Any, b: Any, /) -> Any: ...
def iconcat(a: Any, b: Any, /) -> Any: ...
def ifloordiv(a: Any, b: Any, /) -> Any: ...
def ilshift(a: Any, b: Any, /) -> Any: ...
def imod(a: Any, b: Any, /) -> Any: ...
def imul(a: Any, b: Any, /) -> Any: ...
def imatmul(a: Any, b: Any, /) -> Any: ...
def ior(a: Any, b: Any, /) -> Any: ...
def ipow(a: Any, b: Any, /) -> Any: ...
def irshift(a: Any, b: Any, /) -> Any: ...
def isub(a: Any, b: Any, /) -> Any: ...
def itruediv(a: Any, b: Any, /) -> Any: ...
def ixor(a: Any, b: Any, /) -> Any: ...
if sys.version_info >= (3, 11):
def call(obj: Callable[_P, _R], /, *args: _P.args, **kwargs: _P.kwargs) -> _R: ...
def _compare_digest(a: AnyStr, b: AnyStr, /) -> bool: ...
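
The protocol comment above explains why the comparison helpers return `Any`; a small sketch (not part of the vendored stub, using a hypothetical `Vec` class) showing an overloaded `__lt__` and the `attrgetter` overloads in action:

```py
import operator

class Vec:
    def __init__(self, x: float) -> None:
        self.x = x

    def __lt__(self, other: "Vec") -> str:
        # Comparison dunders may return arbitrary objects, hence the Any return types above.
        return "smaller" if self.x < other.x else "not smaller"

print(operator.lt(Vec(1.0), Vec(2.0)))  # 'smaller'

get_x = operator.attrgetter("x")  # attrgetter[Any] per the single-attribute overload
print(get_x(Vec(3.0)))            # 3.0
```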

@@ -0,0 +1,34 @@
from collections.abc import Iterable, Sequence
from typing import TypeVar
_T = TypeVar("_T")
_K = TypeVar("_K")
_V = TypeVar("_V")
__all__ = ["compiler_fixup", "customize_config_vars", "customize_compiler", "get_platform_osx"]
_UNIVERSAL_CONFIG_VARS: tuple[str, ...] # undocumented
_COMPILER_CONFIG_VARS: tuple[str, ...] # undocumented
_INITPRE: str # undocumented
def _find_executable(executable: str, path: str | None = None) -> str | None: ... # undocumented
def _read_output(commandstring: str, capture_stderr: bool = False) -> str | None: ... # undocumented
def _find_build_tool(toolname: str) -> str: ... # undocumented
_SYSTEM_VERSION: str | None # undocumented
def _get_system_version() -> str: ... # undocumented
def _remove_original_values(_config_vars: dict[str, str]) -> None: ... # undocumented
def _save_modified_value(_config_vars: dict[str, str], cv: str, newvalue: str) -> None: ... # undocumented
def _supports_universal_builds() -> bool: ... # undocumented
def _find_appropriate_compiler(_config_vars: dict[str, str]) -> dict[str, str]: ... # undocumented
def _remove_universal_flags(_config_vars: dict[str, str]) -> dict[str, str]: ... # undocumented
def _remove_unsupported_archs(_config_vars: dict[str, str]) -> dict[str, str]: ... # undocumented
def _override_all_archs(_config_vars: dict[str, str]) -> dict[str, str]: ... # undocumented
def _check_for_unavailable_sdk(_config_vars: dict[str, str]) -> dict[str, str]: ... # undocumented
def compiler_fixup(compiler_so: Iterable[str], cc_args: Sequence[str]) -> list[str]: ...
def customize_config_vars(_config_vars: dict[str, str]) -> dict[str, str]: ...
def customize_compiler(_config_vars: dict[str, str]) -> dict[str, str]: ...
def get_platform_osx(
_config_vars: dict[str, str], osname: _T, release: _K, machine: _V
) -> tuple[str | _T, str | _K, str | _V]: ...


@@ -0,0 +1,33 @@
import sys
from _typeshed import StrOrBytesPath
from collections.abc import Callable, Sequence
from typing import SupportsIndex
if sys.platform != "win32":
def cloexec_pipe() -> tuple[int, int]: ...
def fork_exec(
args: Sequence[StrOrBytesPath] | None,
executable_list: Sequence[bytes],
close_fds: bool,
pass_fds: tuple[int, ...],
cwd: str,
env: Sequence[bytes] | None,
p2cread: int,
p2cwrite: int,
c2pread: int,
c2pwrite: int,
errread: int,
errwrite: int,
errpipe_read: int,
errpipe_write: int,
restore_signals: int,
call_setsid: int,
pgid_to_set: int,
gid: SupportsIndex | None,
extra_groups: list[int] | None,
uid: SupportsIndex | None,
child_umask: int,
preexec_fn: Callable[[], None],
allow_vfork: bool,
/,
) -> int: ...


@@ -0,0 +1,14 @@
import _typeshed
from typing import Any, NewType, TypeVar
_T = TypeVar("_T")
_CacheToken = NewType("_CacheToken", int)
def get_cache_token() -> _CacheToken: ...
class ABCMeta(type):
def __new__(
mcls: type[_typeshed.Self], name: str, bases: tuple[type[Any], ...], namespace: dict[str, Any], /
) -> _typeshed.Self: ...
def register(cls, subclass: type[_T]) -> type[_T]: ...


@@ -0,0 +1,43 @@
# This is a slight lie; the implementations aren't exactly identical.
# However, in all likelihood, the differences are inconsequential.
from _decimal import *
__all__ = [
"Decimal",
"Context",
"DecimalTuple",
"DefaultContext",
"BasicContext",
"ExtendedContext",
"DecimalException",
"Clamped",
"InvalidOperation",
"DivisionByZero",
"Inexact",
"Rounded",
"Subnormal",
"Overflow",
"Underflow",
"FloatOperation",
"DivisionImpossible",
"InvalidContext",
"ConversionSyntax",
"DivisionUndefined",
"ROUND_DOWN",
"ROUND_HALF_UP",
"ROUND_HALF_EVEN",
"ROUND_CEILING",
"ROUND_FLOOR",
"ROUND_UP",
"ROUND_HALF_DOWN",
"ROUND_05UP",
"setcontext",
"getcontext",
"localcontext",
"MAX_PREC",
"MAX_EMAX",
"MIN_EMIN",
"MIN_ETINY",
"HAVE_THREADS",
"HAVE_CONTEXTVAR",
]


@@ -0,0 +1,12 @@
from typing_extensions import TypeAlias
# Actually Tuple[(int,) * 625]
_State: TypeAlias = tuple[int, ...]
class Random:
def __init__(self, seed: object = ...) -> None: ...
def seed(self, n: object = None, /) -> None: ...
def getstate(self) -> _State: ...
def setstate(self, state: _State, /) -> None: ...
def random(self) -> float: ...
def getrandbits(self, k: int, /) -> int: ...


@@ -0,0 +1,16 @@
from collections.abc import Iterable
from typing import ClassVar, Literal, NoReturn
class Quitter:
name: str
eof: str
def __init__(self, name: str, eof: str) -> None: ...
def __call__(self, code: int | None = None) -> NoReturn: ...
class _Printer:
MAXLINES: ClassVar[Literal[23]]
def __init__(self, name: str, data: str, files: Iterable[str] = (), dirs: Iterable[str] = ()) -> None: ...
def __call__(self) -> None: ...
class _Helper:
def __call__(self, request: object) -> None: ...


@@ -0,0 +1,803 @@
import sys
from _typeshed import ReadableBuffer, WriteableBuffer
from collections.abc import Iterable
from typing import Any, SupportsIndex, overload
from typing_extensions import TypeAlias
_CMSG: TypeAlias = tuple[int, int, bytes]
_CMSGArg: TypeAlias = tuple[int, int, ReadableBuffer]
# Addresses can be either tuples of varying lengths (AF_INET, AF_INET6,
# AF_NETLINK, AF_TIPC) or strings/buffers (AF_UNIX).
# See getsockaddrarg() in socketmodule.c.
_Address: TypeAlias = tuple[Any, ...] | str | ReadableBuffer
_RetAddress: TypeAlias = Any
# ===== Constants =====
# This matches the order in the CPython documentation
# https://docs.python.org/3/library/socket.html#constants
if sys.platform != "win32":
AF_UNIX: int
AF_INET: int
AF_INET6: int
AF_UNSPEC: int
SOCK_STREAM: int
SOCK_DGRAM: int
SOCK_RAW: int
SOCK_RDM: int
SOCK_SEQPACKET: int
if sys.platform == "linux":
# Availability: Linux >= 2.6.27
SOCK_CLOEXEC: int
SOCK_NONBLOCK: int
# --------------------
# Many constants of these forms, documented in the Unix documentation on
# sockets and/or the IP protocol, are also defined in the socket module.
# SO_*
# socket.SOMAXCONN
# MSG_*
# SOL_*
# SCM_*
# IPPROTO_*
# IPPORT_*
# INADDR_*
# IP_*
# IPV6_*
# EAI_*
# AI_*
# NI_*
# TCP_*
# --------------------
SO_ACCEPTCONN: int
SO_BROADCAST: int
SO_DEBUG: int
SO_DONTROUTE: int
SO_ERROR: int
SO_KEEPALIVE: int
SO_LINGER: int
SO_OOBINLINE: int
SO_RCVBUF: int
SO_RCVLOWAT: int
SO_RCVTIMEO: int
SO_REUSEADDR: int
SO_SNDBUF: int
SO_SNDLOWAT: int
SO_SNDTIMEO: int
SO_TYPE: int
SO_USELOOPBACK: int
if sys.platform == "win32":
SO_EXCLUSIVEADDRUSE: int
if sys.platform != "win32":
SO_REUSEPORT: int
if sys.platform != "win32" and sys.platform != "darwin":
SO_BINDTODEVICE: int
SO_DOMAIN: int
SO_MARK: int
SO_PASSCRED: int
SO_PASSSEC: int
SO_PEERCRED: int
SO_PEERSEC: int
SO_PRIORITY: int
SO_PROTOCOL: int
SO_SETFIB: int
SOMAXCONN: int
MSG_CTRUNC: int
MSG_DONTROUTE: int
MSG_OOB: int
MSG_PEEK: int
MSG_TRUNC: int
MSG_WAITALL: int
if sys.platform != "win32":
MSG_DONTWAIT: int
MSG_EOF: int
MSG_EOR: int
MSG_NOSIGNAL: int # Sometimes this exists on darwin, sometimes not
if sys.platform != "darwin":
MSG_BCAST: int
MSG_ERRQUEUE: int
MSG_MCAST: int
if sys.platform != "win32" and sys.platform != "darwin":
MSG_BTAG: int
MSG_CMSG_CLOEXEC: int
MSG_CONFIRM: int
MSG_ETAG: int
MSG_FASTOPEN: int
MSG_MORE: int
MSG_NOTIFICATION: int
SOL_IP: int
SOL_SOCKET: int
SOL_TCP: int
SOL_UDP: int
if sys.platform != "win32" and sys.platform != "darwin":
SOL_ATALK: int
SOL_AX25: int
SOL_HCI: int
SOL_IPX: int
SOL_NETROM: int
SOL_ROSE: int
if sys.platform != "win32":
SCM_CREDS: int
SCM_RIGHTS: int
if sys.platform != "win32" and sys.platform != "darwin":
SCM_CREDENTIALS: int
IPPROTO_ICMP: int
IPPROTO_IP: int
IPPROTO_RAW: int
IPPROTO_TCP: int
IPPROTO_UDP: int
IPPROTO_AH: int
IPPROTO_DSTOPTS: int
IPPROTO_EGP: int
IPPROTO_ESP: int
IPPROTO_FRAGMENT: int
IPPROTO_GGP: int
IPPROTO_HOPOPTS: int
IPPROTO_ICMPV6: int
IPPROTO_IDP: int
IPPROTO_IGMP: int
IPPROTO_IPV4: int
IPPROTO_IPV6: int
IPPROTO_MAX: int
IPPROTO_ND: int
IPPROTO_NONE: int
IPPROTO_PIM: int
IPPROTO_PUP: int
IPPROTO_ROUTING: int
IPPROTO_SCTP: int
if sys.platform != "darwin":
IPPROTO_CBT: int
IPPROTO_ICLFXBM: int
IPPROTO_IGP: int
IPPROTO_L2TP: int
IPPROTO_PGM: int
IPPROTO_RDP: int
IPPROTO_ST: int
if sys.platform != "win32":
IPPROTO_EON: int
IPPROTO_GRE: int
IPPROTO_HELLO: int
IPPROTO_IPCOMP: int
IPPROTO_IPIP: int
IPPROTO_RSVP: int
IPPROTO_TP: int
IPPROTO_XTP: int
if sys.platform != "win32" and sys.platform != "darwin":
IPPROTO_BIP: int
IPPROTO_MOBILE: int
IPPROTO_VRRP: int
if sys.version_info >= (3, 9) and sys.platform == "linux":
# Availability: Linux >= 2.6.20, FreeBSD >= 10.1
IPPROTO_UDPLITE: int
if sys.version_info >= (3, 10) and sys.platform == "linux":
IPPROTO_MPTCP: int
IPPORT_RESERVED: int
IPPORT_USERRESERVED: int
INADDR_ALLHOSTS_GROUP: int
INADDR_ANY: int
INADDR_BROADCAST: int
INADDR_LOOPBACK: int
INADDR_MAX_LOCAL_GROUP: int
INADDR_NONE: int
INADDR_UNSPEC_GROUP: int
IP_ADD_MEMBERSHIP: int
IP_DROP_MEMBERSHIP: int
IP_HDRINCL: int
IP_MULTICAST_IF: int
IP_MULTICAST_LOOP: int
IP_MULTICAST_TTL: int
IP_OPTIONS: int
IP_RECVDSTADDR: int
if sys.version_info >= (3, 10):
IP_RECVTOS: int
elif sys.platform != "win32" and sys.platform != "darwin":
IP_RECVTOS: int
IP_TOS: int
IP_TTL: int
if sys.platform != "win32":
IP_DEFAULT_MULTICAST_LOOP: int
IP_DEFAULT_MULTICAST_TTL: int
IP_MAX_MEMBERSHIPS: int
IP_RECVOPTS: int
IP_RECVRETOPTS: int
IP_RETOPTS: int
if sys.platform != "win32" and sys.platform != "darwin":
IP_TRANSPARENT: int
IP_BIND_ADDRESS_NO_PORT: int
if sys.version_info >= (3, 12):
IP_ADD_SOURCE_MEMBERSHIP: int
IP_BLOCK_SOURCE: int
IP_DROP_SOURCE_MEMBERSHIP: int
IP_PKTINFO: int
IP_UNBLOCK_SOURCE: int
IPV6_CHECKSUM: int
IPV6_JOIN_GROUP: int
IPV6_LEAVE_GROUP: int
IPV6_MULTICAST_HOPS: int
IPV6_MULTICAST_IF: int
IPV6_MULTICAST_LOOP: int
IPV6_RECVTCLASS: int
IPV6_TCLASS: int
IPV6_UNICAST_HOPS: int
IPV6_V6ONLY: int
if sys.version_info >= (3, 9) or sys.platform != "darwin":
IPV6_DONTFRAG: int
IPV6_HOPLIMIT: int
IPV6_HOPOPTS: int
IPV6_PKTINFO: int
IPV6_RECVRTHDR: int
IPV6_RTHDR: int
if sys.platform != "win32":
IPV6_RTHDR_TYPE_0: int
if sys.version_info >= (3, 9) or sys.platform != "darwin":
IPV6_DSTOPTS: int
IPV6_NEXTHOP: int
IPV6_PATHMTU: int
IPV6_RECVDSTOPTS: int
IPV6_RECVHOPLIMIT: int
IPV6_RECVHOPOPTS: int
IPV6_RECVPATHMTU: int
IPV6_RECVPKTINFO: int
IPV6_RTHDRDSTOPTS: int
IPV6_USE_MIN_MTU: int
EAI_AGAIN: int
EAI_BADFLAGS: int
EAI_FAIL: int
EAI_FAMILY: int
EAI_MEMORY: int
EAI_NODATA: int
EAI_NONAME: int
EAI_SERVICE: int
EAI_SOCKTYPE: int
if sys.platform != "win32":
EAI_ADDRFAMILY: int
EAI_BADHINTS: int
EAI_MAX: int
EAI_OVERFLOW: int
EAI_PROTOCOL: int
EAI_SYSTEM: int
AI_ADDRCONFIG: int
AI_ALL: int
AI_CANONNAME: int
AI_NUMERICHOST: int
AI_NUMERICSERV: int
AI_PASSIVE: int
AI_V4MAPPED: int
if sys.platform != "win32":
AI_DEFAULT: int
AI_MASK: int
AI_V4MAPPED_CFG: int
NI_DGRAM: int
NI_MAXHOST: int
NI_MAXSERV: int
NI_NAMEREQD: int
NI_NOFQDN: int
NI_NUMERICHOST: int
NI_NUMERICSERV: int
TCP_FASTOPEN: int
TCP_KEEPCNT: int
TCP_KEEPINTVL: int
TCP_MAXSEG: int
TCP_NODELAY: int
if sys.platform != "win32":
TCP_NOTSENT_LOWAT: int
if sys.platform != "darwin":
TCP_KEEPIDLE: int
if sys.version_info >= (3, 10) and sys.platform == "darwin":
TCP_KEEPALIVE: int
if sys.version_info >= (3, 11) and sys.platform == "darwin":
TCP_CONNECTION_INFO: int
if sys.platform != "win32" and sys.platform != "darwin":
TCP_CONGESTION: int
TCP_CORK: int
TCP_DEFER_ACCEPT: int
TCP_INFO: int
TCP_LINGER2: int
TCP_QUICKACK: int
TCP_SYNCNT: int
TCP_USER_TIMEOUT: int
TCP_WINDOW_CLAMP: int
# --------------------
# Specifically documented constants
# --------------------
if sys.platform == "linux":
# Availability: Linux >= 2.6.25, NetBSD >= 8
AF_CAN: int
PF_CAN: int
SOL_CAN_BASE: int
SOL_CAN_RAW: int
CAN_EFF_FLAG: int
CAN_EFF_MASK: int
CAN_ERR_FLAG: int
CAN_ERR_MASK: int
CAN_RAW: int
CAN_RAW_ERR_FILTER: int
CAN_RAW_FILTER: int
CAN_RAW_LOOPBACK: int
CAN_RAW_RECV_OWN_MSGS: int
CAN_RTR_FLAG: int
CAN_SFF_MASK: int
if sys.platform == "linux":
# Availability: Linux >= 2.6.25
CAN_BCM: int
CAN_BCM_TX_SETUP: int
CAN_BCM_TX_DELETE: int
CAN_BCM_TX_READ: int
CAN_BCM_TX_SEND: int
CAN_BCM_RX_SETUP: int
CAN_BCM_RX_DELETE: int
CAN_BCM_RX_READ: int
CAN_BCM_TX_STATUS: int
CAN_BCM_TX_EXPIRED: int
CAN_BCM_RX_STATUS: int
CAN_BCM_RX_TIMEOUT: int
CAN_BCM_RX_CHANGED: int
CAN_BCM_SETTIMER: int
CAN_BCM_STARTTIMER: int
CAN_BCM_TX_COUNTEVT: int
CAN_BCM_TX_ANNOUNCE: int
CAN_BCM_TX_CP_CAN_ID: int
CAN_BCM_RX_FILTER_ID: int
CAN_BCM_RX_CHECK_DLC: int
CAN_BCM_RX_NO_AUTOTIMER: int
CAN_BCM_RX_ANNOUNCE_RESUME: int
CAN_BCM_TX_RESET_MULTI_IDX: int
CAN_BCM_RX_RTR_FRAME: int
CAN_BCM_CAN_FD_FRAME: int
if sys.platform == "linux":
# Availability: Linux >= 3.6
CAN_RAW_FD_FRAMES: int
if sys.platform == "linux" and sys.version_info >= (3, 9):
# Availability: Linux >= 4.1
CAN_RAW_JOIN_FILTERS: int
if sys.platform == "linux":
# Availability: Linux >= 2.6.25
CAN_ISOTP: int
if sys.platform == "linux" and sys.version_info >= (3, 9):
# Availability: Linux >= 5.4
CAN_J1939: int
J1939_MAX_UNICAST_ADDR: int
J1939_IDLE_ADDR: int
J1939_NO_ADDR: int
J1939_NO_NAME: int
J1939_PGN_REQUEST: int
J1939_PGN_ADDRESS_CLAIMED: int
J1939_PGN_ADDRESS_COMMANDED: int
J1939_PGN_PDU1_MAX: int
J1939_PGN_MAX: int
J1939_NO_PGN: int
SO_J1939_FILTER: int
SO_J1939_PROMISC: int
SO_J1939_SEND_PRIO: int
SO_J1939_ERRQUEUE: int
SCM_J1939_DEST_ADDR: int
SCM_J1939_DEST_NAME: int
SCM_J1939_PRIO: int
SCM_J1939_ERRQUEUE: int
J1939_NLA_PAD: int
J1939_NLA_BYTES_ACKED: int
J1939_EE_INFO_NONE: int
J1939_EE_INFO_TX_ABORT: int
J1939_FILTER_MAX: int
if sys.version_info >= (3, 12) and sys.platform != "linux" and sys.platform != "win32" and sys.platform != "darwin":
# Availability: FreeBSD >= 14.0
AF_DIVERT: int
PF_DIVERT: int
if sys.platform == "linux":
# Availability: Linux >= 2.2
AF_PACKET: int
PF_PACKET: int
PACKET_BROADCAST: int
PACKET_FASTROUTE: int
PACKET_HOST: int
PACKET_LOOPBACK: int
PACKET_MULTICAST: int
PACKET_OTHERHOST: int
PACKET_OUTGOING: int
if sys.version_info >= (3, 12) and sys.platform == "linux":
ETH_P_ALL: int
if sys.platform == "linux":
# Availability: Linux >= 2.6.30
AF_RDS: int
PF_RDS: int
SOL_RDS: int
RDS_CANCEL_SENT_TO: int
RDS_CMSG_RDMA_ARGS: int
RDS_CMSG_RDMA_DEST: int
RDS_CMSG_RDMA_MAP: int
RDS_CMSG_RDMA_STATUS: int
RDS_CMSG_RDMA_UPDATE: int
RDS_CONG_MONITOR: int
RDS_FREE_MR: int
RDS_GET_MR: int
RDS_GET_MR_FOR_DEST: int
RDS_RDMA_DONTWAIT: int
RDS_RDMA_FENCE: int
RDS_RDMA_INVALIDATE: int
RDS_RDMA_NOTIFY_ME: int
RDS_RDMA_READWRITE: int
RDS_RDMA_SILENT: int
RDS_RDMA_USE_ONCE: int
RDS_RECVERR: int
if sys.platform == "win32":
SIO_RCVALL: int
SIO_KEEPALIVE_VALS: int
SIO_LOOPBACK_FAST_PATH: int
RCVALL_MAX: int
RCVALL_OFF: int
RCVALL_ON: int
RCVALL_SOCKETLEVELONLY: int
if sys.platform == "linux":
AF_TIPC: int
SOL_TIPC: int
TIPC_ADDR_ID: int
TIPC_ADDR_NAME: int
TIPC_ADDR_NAMESEQ: int
TIPC_CFG_SRV: int
TIPC_CLUSTER_SCOPE: int
TIPC_CONN_TIMEOUT: int
TIPC_CRITICAL_IMPORTANCE: int
TIPC_DEST_DROPPABLE: int
TIPC_HIGH_IMPORTANCE: int
TIPC_IMPORTANCE: int
TIPC_LOW_IMPORTANCE: int
TIPC_MEDIUM_IMPORTANCE: int
TIPC_NODE_SCOPE: int
TIPC_PUBLISHED: int
TIPC_SRC_DROPPABLE: int
TIPC_SUBSCR_TIMEOUT: int
TIPC_SUB_CANCEL: int
TIPC_SUB_PORTS: int
TIPC_SUB_SERVICE: int
TIPC_TOP_SRV: int
TIPC_WAIT_FOREVER: int
TIPC_WITHDRAWN: int
TIPC_ZONE_SCOPE: int
if sys.platform == "linux":
# Availability: Linux >= 2.6.38
AF_ALG: int
SOL_ALG: int
ALG_OP_DECRYPT: int
ALG_OP_ENCRYPT: int
ALG_OP_SIGN: int
ALG_OP_VERIFY: int
ALG_SET_AEAD_ASSOCLEN: int
ALG_SET_AEAD_AUTHSIZE: int
ALG_SET_IV: int
ALG_SET_KEY: int
ALG_SET_OP: int
ALG_SET_PUBKEY: int
if sys.platform == "linux":
# Availability: Linux >= 4.8 (or maybe 3.9, CPython docs are confusing)
AF_VSOCK: int
IOCTL_VM_SOCKETS_GET_LOCAL_CID: int
VMADDR_CID_ANY: int
VMADDR_CID_HOST: int
VMADDR_PORT_ANY: int
SO_VM_SOCKETS_BUFFER_MAX_SIZE: int
SO_VM_SOCKETS_BUFFER_SIZE: int
SO_VM_SOCKETS_BUFFER_MIN_SIZE: int
VM_SOCKETS_INVALID_VERSION: int # undocumented
if sys.platform != "win32" or sys.version_info >= (3, 9):
# Documented as only available on BSD, macOS, but empirically sometimes
# available on Windows
AF_LINK: int
has_ipv6: bool
if sys.platform != "darwin":
if sys.platform != "win32" or sys.version_info >= (3, 9):
BDADDR_ANY: str
BDADDR_LOCAL: str
if sys.platform != "win32" and sys.platform != "darwin":
HCI_FILTER: int # not in NetBSD or DragonFlyBSD
HCI_TIME_STAMP: int # not in FreeBSD, NetBSD, or DragonFlyBSD
HCI_DATA_DIR: int # not in FreeBSD, NetBSD, or DragonFlyBSD
if sys.platform == "linux":
AF_QIPCRTR: int # Availability: Linux >= 4.7
if sys.version_info >= (3, 11) and sys.platform != "linux" and sys.platform != "win32" and sys.platform != "darwin":
# FreeBSD
SCM_CREDS2: int
LOCAL_CREDS: int
LOCAL_CREDS_PERSISTENT: int
if sys.version_info >= (3, 11) and sys.platform == "linux":
SO_INCOMING_CPU: int # Availability: Linux >= 3.9
if sys.version_info >= (3, 12) and sys.platform == "win32":
# Availability: Windows
AF_HYPERV: int
HV_PROTOCOL_RAW: int
HVSOCKET_CONNECT_TIMEOUT: int
HVSOCKET_CONNECT_TIMEOUT_MAX: int
HVSOCKET_CONNECTED_SUSPEND: int
HVSOCKET_ADDRESS_FLAG_PASSTHRU: int
HV_GUID_ZERO: str
HV_GUID_WILDCARD: str
HV_GUID_BROADCAST: str
HV_GUID_CHILDREN: str
HV_GUID_LOOPBACK: str
HV_GUID_PARENT: str
if sys.version_info >= (3, 12):
if sys.platform != "win32":
# Availability: Linux, FreeBSD, macOS
ETHERTYPE_ARP: int
ETHERTYPE_IP: int
ETHERTYPE_IPV6: int
ETHERTYPE_VLAN: int
# --------------------
# Semi-documented constants
# These are alluded to under the "Socket families" section in the docs
# https://docs.python.org/3/library/socket.html#socket-families
# --------------------
if sys.platform == "linux":
# Netlink is defined by Linux
AF_NETLINK: int
NETLINK_ARPD: int
NETLINK_CRYPTO: int
NETLINK_DNRTMSG: int
NETLINK_FIREWALL: int
NETLINK_IP6_FW: int
NETLINK_NFLOG: int
NETLINK_ROUTE6: int
NETLINK_ROUTE: int
NETLINK_SKIP: int
NETLINK_TAPBASE: int
NETLINK_TCPDIAG: int
NETLINK_USERSOCK: int
NETLINK_W1: int
NETLINK_XFRM: int
if sys.platform == "darwin":
PF_SYSTEM: int
SYSPROTO_CONTROL: int
if sys.platform != "darwin":
if sys.version_info >= (3, 9) or sys.platform != "win32":
AF_BLUETOOTH: int
if sys.platform != "win32" and sys.platform != "darwin":
# Linux and some BSD support is explicit in the docs
# Windows and macOS do not support these in practice
BTPROTO_HCI: int
BTPROTO_L2CAP: int
BTPROTO_SCO: int # not in FreeBSD
if sys.platform != "darwin":
if sys.version_info >= (3, 9) or sys.platform != "win32":
BTPROTO_RFCOMM: int
if sys.version_info >= (3, 9) and sys.platform == "linux":
UDPLITE_RECV_CSCOV: int
UDPLITE_SEND_CSCOV: int
# --------------------
# Documented under socket.shutdown
# --------------------
SHUT_RD: int
SHUT_RDWR: int
SHUT_WR: int
# --------------------
# Undocumented constants
# --------------------
# Undocumented address families
AF_APPLETALK: int
AF_DECnet: int
AF_IPX: int
AF_SNA: int
if sys.platform != "win32":
AF_ROUTE: int
AF_SYSTEM: int
if sys.platform != "darwin":
AF_IRDA: int
if sys.platform != "win32" and sys.platform != "darwin":
AF_AAL5: int
AF_ASH: int
AF_ATMPVC: int
AF_ATMSVC: int
AF_AX25: int
AF_BRIDGE: int
AF_ECONET: int
AF_KEY: int
AF_LLC: int
AF_NETBEUI: int
AF_NETROM: int
AF_PPPOX: int
AF_ROSE: int
AF_SECURITY: int
AF_WANPIPE: int
AF_X25: int
# Miscellaneous undocumented
if sys.platform != "win32":
LOCAL_PEERCRED: int
if sys.platform != "win32" and sys.platform != "darwin":
IPX_TYPE: int
# ===== Exceptions =====
error = OSError
class herror(error): ...
class gaierror(error): ...
if sys.version_info >= (3, 10):
timeout = TimeoutError
else:
class timeout(error): ...
# ===== Classes =====
class socket:
@property
def family(self) -> int: ...
@property
def type(self) -> int: ...
@property
def proto(self) -> int: ...
@property
def timeout(self) -> float | None: ...
if sys.platform == "win32":
def __init__(
self, family: int = ..., type: int = ..., proto: int = ..., fileno: SupportsIndex | bytes | None = ...
) -> None: ...
else:
def __init__(self, family: int = ..., type: int = ..., proto: int = ..., fileno: SupportsIndex | None = ...) -> None: ...
def bind(self, address: _Address, /) -> None: ...
def close(self) -> None: ...
def connect(self, address: _Address, /) -> None: ...
def connect_ex(self, address: _Address, /) -> int: ...
def detach(self) -> int: ...
def fileno(self) -> int: ...
def getpeername(self) -> _RetAddress: ...
def getsockname(self) -> _RetAddress: ...
@overload
def getsockopt(self, level: int, optname: int, /) -> int: ...
@overload
def getsockopt(self, level: int, optname: int, buflen: int, /) -> bytes: ...
def getblocking(self) -> bool: ...
def gettimeout(self) -> float | None: ...
if sys.platform == "win32":
def ioctl(self, control: int, option: int | tuple[int, int, int] | bool, /) -> None: ...
def listen(self, backlog: int = ..., /) -> None: ...
def recv(self, bufsize: int, flags: int = ..., /) -> bytes: ...
def recvfrom(self, bufsize: int, flags: int = ..., /) -> tuple[bytes, _RetAddress]: ...
if sys.platform != "win32":
def recvmsg(self, bufsize: int, ancbufsize: int = ..., flags: int = ..., /) -> tuple[bytes, list[_CMSG], int, Any]: ...
def recvmsg_into(
self, buffers: Iterable[WriteableBuffer], ancbufsize: int = ..., flags: int = ..., /
) -> tuple[int, list[_CMSG], int, Any]: ...
def recvfrom_into(self, buffer: WriteableBuffer, nbytes: int = ..., flags: int = ...) -> tuple[int, _RetAddress]: ...
def recv_into(self, buffer: WriteableBuffer, nbytes: int = ..., flags: int = ...) -> int: ...
def send(self, data: ReadableBuffer, flags: int = ..., /) -> int: ...
def sendall(self, data: ReadableBuffer, flags: int = ..., /) -> None: ...
@overload
def sendto(self, data: ReadableBuffer, address: _Address, /) -> int: ...
@overload
def sendto(self, data: ReadableBuffer, flags: int, address: _Address, /) -> int: ...
if sys.platform != "win32":
def sendmsg(
self,
buffers: Iterable[ReadableBuffer],
ancdata: Iterable[_CMSGArg] = ...,
flags: int = ...,
address: _Address | None = ...,
/,
) -> int: ...
if sys.platform == "linux":
def sendmsg_afalg(
self, msg: Iterable[ReadableBuffer] = ..., *, op: int, iv: Any = ..., assoclen: int = ..., flags: int = ...
) -> int: ...
def setblocking(self, flag: bool, /) -> None: ...
def settimeout(self, value: float | None, /) -> None: ...
@overload
def setsockopt(self, level: int, optname: int, value: int | ReadableBuffer, /) -> None: ...
@overload
def setsockopt(self, level: int, optname: int, value: None, optlen: int, /) -> None: ...
if sys.platform == "win32":
def share(self, process_id: int, /) -> bytes: ...
def shutdown(self, how: int, /) -> None: ...
SocketType = socket
# ===== Functions =====
def close(fd: SupportsIndex, /) -> None: ...
def dup(fd: SupportsIndex, /) -> int: ...
# the 5th tuple item is an address
def getaddrinfo(
host: bytes | str | None,
port: bytes | str | int | None,
family: int = ...,
type: int = ...,
proto: int = ...,
flags: int = ...,
) -> list[tuple[int, int, int, str, tuple[str, int] | tuple[str, int, int, int]]]: ...
def gethostbyname(hostname: str, /) -> str: ...
def gethostbyname_ex(hostname: str, /) -> tuple[str, list[str], list[str]]: ...
def gethostname() -> str: ...
def gethostbyaddr(ip_address: str, /) -> tuple[str, list[str], list[str]]: ...
def getnameinfo(sockaddr: tuple[str, int] | tuple[str, int, int, int], flags: int, /) -> tuple[str, str]: ...
def getprotobyname(protocolname: str, /) -> int: ...
def getservbyname(servicename: str, protocolname: str = ..., /) -> int: ...
def getservbyport(port: int, protocolname: str = ..., /) -> str: ...
def ntohl(x: int, /) -> int: ... # param & ret val are 32-bit ints
def ntohs(x: int, /) -> int: ... # param & ret val are 16-bit ints
def htonl(x: int, /) -> int: ... # param & ret val are 32-bit ints
def htons(x: int, /) -> int: ... # param & ret val are 16-bit ints
def inet_aton(ip_addr: str, /) -> bytes: ... # ret val 4 bytes in length
def inet_ntoa(packed_ip: ReadableBuffer, /) -> str: ...
def inet_pton(address_family: int, ip_string: str, /) -> bytes: ...
def inet_ntop(address_family: int, packed_ip: ReadableBuffer, /) -> str: ...
def getdefaulttimeout() -> float | None: ...
def setdefaulttimeout(timeout: float | None, /) -> None: ...
if sys.platform != "win32":
def sethostname(name: str, /) -> None: ...
def CMSG_LEN(length: int, /) -> int: ...
def CMSG_SPACE(length: int, /) -> int: ...
def socketpair(family: int = ..., type: int = ..., proto: int = ..., /) -> tuple[socket, socket]: ...
def if_nameindex() -> list[tuple[int, str]]: ...
def if_nametoindex(oname: str, /) -> int: ...
def if_indextoname(index: int, /) -> str: ...
CAPI: object


@@ -0,0 +1,117 @@
import sys
from typing import Literal
SF_APPEND: Literal[0x00040000]
SF_ARCHIVED: Literal[0x00010000]
SF_IMMUTABLE: Literal[0x00020000]
SF_NOUNLINK: Literal[0x00100000]
SF_SNAPSHOT: Literal[0x00200000]
ST_MODE: Literal[0]
ST_INO: Literal[1]
ST_DEV: Literal[2]
ST_NLINK: Literal[3]
ST_UID: Literal[4]
ST_GID: Literal[5]
ST_SIZE: Literal[6]
ST_ATIME: Literal[7]
ST_MTIME: Literal[8]
ST_CTIME: Literal[9]
S_IFIFO: Literal[0o010000]
S_IFLNK: Literal[0o120000]
S_IFREG: Literal[0o100000]
S_IFSOCK: Literal[0o140000]
S_IFBLK: Literal[0o060000]
S_IFCHR: Literal[0o020000]
S_IFDIR: Literal[0o040000]
# These are 0 on systems that don't support the specific kind of file.
# Example: Linux doesn't support door files, so S_IFDOOR is 0 on Linux.
S_IFDOOR: int
S_IFPORT: int
S_IFWHT: int
S_ISUID: Literal[0o4000]
S_ISGID: Literal[0o2000]
S_ISVTX: Literal[0o1000]
S_IRWXU: Literal[0o0700]
S_IRUSR: Literal[0o0400]
S_IWUSR: Literal[0o0200]
S_IXUSR: Literal[0o0100]
S_IRWXG: Literal[0o0070]
S_IRGRP: Literal[0o0040]
S_IWGRP: Literal[0o0020]
S_IXGRP: Literal[0o0010]
S_IRWXO: Literal[0o0007]
S_IROTH: Literal[0o0004]
S_IWOTH: Literal[0o0002]
S_IXOTH: Literal[0o0001]
S_ENFMT: Literal[0o2000]
S_IREAD: Literal[0o0400]
S_IWRITE: Literal[0o0200]
S_IEXEC: Literal[0o0100]
UF_APPEND: Literal[0x00000004]
UF_COMPRESSED: Literal[0x00000020] # OS X 10.6+ only
UF_HIDDEN: Literal[0x00008000] # OS X 10.5+ only
UF_IMMUTABLE: Literal[0x00000002]
UF_NODUMP: Literal[0x00000001]
UF_NOUNLINK: Literal[0x00000010]
UF_OPAQUE: Literal[0x00000008]
def S_IMODE(mode: int, /) -> int: ...
def S_IFMT(mode: int, /) -> int: ...
def S_ISBLK(mode: int, /) -> bool: ...
def S_ISCHR(mode: int, /) -> bool: ...
def S_ISDIR(mode: int, /) -> bool: ...
def S_ISDOOR(mode: int, /) -> bool: ...
def S_ISFIFO(mode: int, /) -> bool: ...
def S_ISLNK(mode: int, /) -> bool: ...
def S_ISPORT(mode: int, /) -> bool: ...
def S_ISREG(mode: int, /) -> bool: ...
def S_ISSOCK(mode: int, /) -> bool: ...
def S_ISWHT(mode: int, /) -> bool: ...
def filemode(mode: int, /) -> str: ...
if sys.platform == "win32":
IO_REPARSE_TAG_SYMLINK: int
IO_REPARSE_TAG_MOUNT_POINT: int
IO_REPARSE_TAG_APPEXECLINK: int
if sys.platform == "win32":
FILE_ATTRIBUTE_ARCHIVE: Literal[32]
FILE_ATTRIBUTE_COMPRESSED: Literal[2048]
FILE_ATTRIBUTE_DEVICE: Literal[64]
FILE_ATTRIBUTE_DIRECTORY: Literal[16]
FILE_ATTRIBUTE_ENCRYPTED: Literal[16384]
FILE_ATTRIBUTE_HIDDEN: Literal[2]
FILE_ATTRIBUTE_INTEGRITY_STREAM: Literal[32768]
FILE_ATTRIBUTE_NORMAL: Literal[128]
FILE_ATTRIBUTE_NOT_CONTENT_INDEXED: Literal[8192]
FILE_ATTRIBUTE_NO_SCRUB_DATA: Literal[131072]
FILE_ATTRIBUTE_OFFLINE: Literal[4096]
FILE_ATTRIBUTE_READONLY: Literal[1]
FILE_ATTRIBUTE_REPARSE_POINT: Literal[1024]
FILE_ATTRIBUTE_SPARSE_FILE: Literal[512]
FILE_ATTRIBUTE_SYSTEM: Literal[4]
FILE_ATTRIBUTE_TEMPORARY: Literal[256]
FILE_ATTRIBUTE_VIRTUAL: Literal[65536]
if sys.version_info >= (3, 13):
SF_SETTABLE: Literal[0x3FFF0000]
# https://github.com/python/cpython/issues/114081#issuecomment-2119017790
# SF_RESTRICTED: Literal[0x00080000]
SF_FIRMLINK: Literal[0x00800000]
SF_DATALESS: Literal[0x40000000]
SF_SUPPORTED: Literal[0x9F0000]
SF_SYNTHETIC: Literal[0xC0000000]
UF_TRACKED: Literal[0x00000040]
UF_DATAVAULT: Literal[0x00000080]
UF_SETTABLE: Literal[0x0000FFFF]


@@ -0,0 +1,59 @@
import sys
from _typeshed import structseq
from collections.abc import Callable
from threading import Thread
from types import TracebackType
from typing import Any, Final, NoReturn, final, overload
from typing_extensions import TypeVarTuple, Unpack
_Ts = TypeVarTuple("_Ts")
error = RuntimeError
def _count() -> int: ...
@final
class LockType:
def acquire(self, blocking: bool = ..., timeout: float = ...) -> bool: ...
def release(self) -> None: ...
def locked(self) -> bool: ...
def __enter__(self) -> bool: ...
def __exit__(
self, type: type[BaseException] | None, value: BaseException | None, traceback: TracebackType | None
) -> None: ...
@overload
def start_new_thread(function: Callable[[Unpack[_Ts]], object], args: tuple[Unpack[_Ts]]) -> int: ...
@overload
def start_new_thread(function: Callable[..., object], args: tuple[Any, ...], kwargs: dict[str, Any]) -> int: ...
def interrupt_main() -> None: ...
def exit() -> NoReturn: ...
def allocate_lock() -> LockType: ...
def get_ident() -> int: ...
def stack_size(size: int = ...) -> int: ...
TIMEOUT_MAX: float
def get_native_id() -> int: ... # only available on some platforms
@final
class _ExceptHookArgs(structseq[Any], tuple[type[BaseException], BaseException | None, TracebackType | None, Thread | None]):
if sys.version_info >= (3, 10):
__match_args__: Final = ("exc_type", "exc_value", "exc_traceback", "thread")
@property
def exc_type(self) -> type[BaseException]: ...
@property
def exc_value(self) -> BaseException | None: ...
@property
def exc_traceback(self) -> TracebackType | None: ...
@property
def thread(self) -> Thread | None: ...
_excepthook: Callable[[_ExceptHookArgs], Any]
if sys.version_info >= (3, 12):
def daemon_threads_allowed() -> bool: ...
class _local:
def __getattribute__(self, name: str, /) -> Any: ...
def __setattr__(self, name: str, value: Any, /) -> None: ...
def __delattr__(self, name: str, /) -> None: ...


@@ -0,0 +1,17 @@
from typing import Any
from typing_extensions import TypeAlias
from weakref import ReferenceType
__all__ = ["local"]
_LocalDict: TypeAlias = dict[Any, Any]
class _localimpl:
key: str
dicts: dict[int, tuple[ReferenceType[Any], _LocalDict]]
def get_dict(self) -> _LocalDict: ...
def create_dict(self) -> _LocalDict: ...
class local:
def __getattribute__(self, name: str) -> Any: ...
def __setattr__(self, name: str, value: Any) -> None: ...
def __delattr__(self, name: str) -> None: ...


@@ -0,0 +1,121 @@
import sys
from typing import Any, ClassVar, Literal, final
# _tkinter is meant to be used only internally by tkinter, but some tkinter
# functions return _tkinter.Tcl_Obj objects, for example. Tcl_Obj represents a Tcl
# object that hasn't been converted to a string.
#
# There are not many ways to get Tcl_Objs from tkinter, and I'm not sure if the
# only existing ways are supposed to return Tcl_Objs as opposed to returning
# strings. Here's one of these things that return Tcl_Objs:
#
# >>> import tkinter
# >>> text = tkinter.Text()
# >>> text.tag_add('foo', '1.0', 'end')
# >>> text.tag_ranges('foo')
# (<textindex object: '1.0'>, <textindex object: '2.0'>)
@final
class Tcl_Obj:
@property
def string(self) -> str: ...
@property
def typename(self) -> str: ...
__hash__: ClassVar[None] # type: ignore[assignment]
def __eq__(self, value, /): ...
def __ge__(self, value, /): ...
def __gt__(self, value, /): ...
def __le__(self, value, /): ...
def __lt__(self, value, /): ...
def __ne__(self, value, /): ...
class TclError(Exception): ...
# This class allows running Tcl code. Tkinter uses it internally a lot, and
# it's often handy to drop a piece of Tcl code into a tkinter program. Example:
#
# >>> import tkinter, _tkinter
# >>> tkapp = tkinter.Tk().tk
# >>> isinstance(tkapp, _tkinter.TkappType)
# True
# >>> tkapp.call('set', 'foo', (1,2,3))
# (1, 2, 3)
# >>> tkapp.eval('return $foo')
# '1 2 3'
# >>>
#
# call args can be pretty much anything. Also, call(some_tuple) is the same as call(*some_tuple).
#
# eval always returns str because _tkinter_tkapp_eval_impl in _tkinter.c calls
# Tkapp_UnicodeResult, and it returns a string when it succeeds.
@final
class TkappType:
# Please keep in sync with tkinter.Tk
def adderrorinfo(self, msg, /): ...
def call(self, command: Any, /, *args: Any) -> Any: ...
def createcommand(self, name, func, /): ...
if sys.platform != "win32":
def createfilehandler(self, file, mask, func, /): ...
def deletefilehandler(self, file, /): ...
def createtimerhandler(self, milliseconds, func, /): ...
def deletecommand(self, name, /): ...
def dooneevent(self, flags: int = 0, /): ...
def eval(self, script: str, /) -> str: ...
def evalfile(self, fileName, /): ...
def exprboolean(self, s, /): ...
def exprdouble(self, s, /): ...
def exprlong(self, s, /): ...
def exprstring(self, s, /): ...
def getboolean(self, arg, /): ...
def getdouble(self, arg, /): ...
def getint(self, arg, /): ...
def getvar(self, *args, **kwargs): ...
def globalgetvar(self, *args, **kwargs): ...
def globalsetvar(self, *args, **kwargs): ...
def globalunsetvar(self, *args, **kwargs): ...
def interpaddr(self): ...
def loadtk(self) -> None: ...
def mainloop(self, threshold: int = 0, /): ...
def quit(self): ...
def record(self, script, /): ...
def setvar(self, *ags, **kwargs): ...
if sys.version_info < (3, 11):
def split(self, arg, /): ...
def splitlist(self, arg, /): ...
def unsetvar(self, *args, **kwargs): ...
def wantobjects(self, *args, **kwargs): ...
def willdispatch(self): ...
# These should be kept in sync with tkinter.tix constants, except ALL_EVENTS which doesn't match TCL_ALL_EVENTS
ALL_EVENTS: Literal[-3]
FILE_EVENTS: Literal[8]
IDLE_EVENTS: Literal[32]
TIMER_EVENTS: Literal[16]
WINDOW_EVENTS: Literal[4]
DONT_WAIT: Literal[2]
EXCEPTION: Literal[8]
READABLE: Literal[2]
WRITABLE: Literal[4]
TCL_VERSION: str
TK_VERSION: str
@final
class TkttType:
def deletetimerhandler(self): ...
def create(
screenName: str | None = None,
baseName: str = "",
className: str = "Tk",
interactive: bool = False,
wantobjects: bool = False,
wantTk: bool = True,
sync: bool = False,
use: str | None = None,
/,
): ...
def getbusywaitinterval(): ...
def setbusywaitinterval(new_val, /): ...


@@ -0,0 +1,17 @@
import sys
from collections.abc import Sequence
from tracemalloc import _FrameTuple, _TraceTuple
def _get_object_traceback(obj: object, /) -> Sequence[_FrameTuple] | None: ...
def _get_traces() -> Sequence[_TraceTuple]: ...
def clear_traces() -> None: ...
def get_traceback_limit() -> int: ...
def get_traced_memory() -> tuple[int, int]: ...
def get_tracemalloc_memory() -> int: ...
def is_tracing() -> bool: ...
if sys.version_info >= (3, 9):
def reset_peak() -> None: ...
def start(nframe: int = 1, /) -> None: ...
def stop() -> None: ...


@@ -0,0 +1,34 @@
# Utility types for typeshed
This package and its submodules contain various common types used by
typeshed. They can also be used by packages outside typeshed, but note
the API stability guarantees below.
## Usage
The `_typeshed` package and its types do not exist at runtime, but can be
used freely in stub (`.pyi`) files. To import the types from this package in
implementation (`.py`) files, use the following construct:
```python
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from _typeshed import ...
```
Types can then be used in annotations by either quoting them or
using:
```python
from __future__ import annotations
```
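For illustration, a minimal sketch combining both approaches; the `read_text` helper is hypothetical, and `StrPath` is one of the aliases this package provides:

```python
from __future__ import annotations  # annotations become lazy strings

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed import StrPath  # not importable at runtime

def read_text(path: StrPath) -> str:  # without the __future__ import, quote it: "StrPath"
    with open(path) as f:
        return f.read()
```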
## API Stability
You can use this package and its submodules outside of typeshed, but we
guarantee only limited API stability. Items marked as "stable" will not be
removed or changed in an incompatible way for at least one year.
Before such a change is made, the "stable" moniker will be removed
and the type in question will be marked as deprecated. No guarantees
are made about unmarked types.


@@ -0,0 +1,363 @@
# Utility types for typeshed
#
# See the README.md file in this directory for more information.
import sys
from collections.abc import Awaitable, Callable, Iterable, Sequence, Set as AbstractSet, Sized
from dataclasses import Field
from os import PathLike
from types import FrameType, TracebackType
from typing import (
Any,
AnyStr,
ClassVar,
Final,
Generic,
Literal,
Protocol,
SupportsFloat,
SupportsIndex,
SupportsInt,
TypeVar,
final,
overload,
)
from typing_extensions import Buffer, LiteralString, TypeAlias
_KT = TypeVar("_KT")
_KT_co = TypeVar("_KT_co", covariant=True)
_KT_contra = TypeVar("_KT_contra", contravariant=True)
_VT = TypeVar("_VT")
_VT_co = TypeVar("_VT_co", covariant=True)
_T = TypeVar("_T")
_T_co = TypeVar("_T_co", covariant=True)
_T_contra = TypeVar("_T_contra", contravariant=True)
# Alternative to `typing_extensions.Self`, exclusively for use with `__new__`
# in metaclasses:
# def __new__(cls: type[Self], ...) -> Self: ...
# In other cases, use `typing_extensions.Self`.
Self = TypeVar("Self") # noqa: Y001
# covariant version of typing.AnyStr, useful for protocols
AnyStr_co = TypeVar("AnyStr_co", str, bytes, covariant=True) # noqa: Y001
# For partially known annotations. Usually, fields where type annotations
# haven't been added are left unannotated, but in some situations this
# isn't possible or a type is already partially known. In cases like these,
# use Incomplete instead of Any as a marker. For example, use
# "Incomplete | None" instead of "Any | None".
Incomplete: TypeAlias = Any # stable
# To describe a function parameter that is unused and will work with anything.
Unused: TypeAlias = object # stable
# Marker for return types that include None, but where forcing the user to
# check for None can be detrimental. Sometimes called "the Any trick". See
# CONTRIBUTING.md for more information.
MaybeNone: TypeAlias = Any # stable
# Used to mark arguments that default to a sentinel value. This prevents
# stubtest from complaining about the default value not matching.
#
# def foo(x: int | None = sentinel) -> None: ...
#
# In cases where the sentinel object is exported and can be used by user code,
# a construct like this is better:
#
# _SentinelType = NewType("_SentinelType", object)
# sentinel: _SentinelType
# def foo(x: int | None | _SentinelType = ...) -> None: ...
sentinel: Any
# stable
class IdentityFunction(Protocol):
def __call__(self, x: _T, /) -> _T: ...
# stable
class SupportsNext(Protocol[_T_co]):
def __next__(self) -> _T_co: ...
# stable
class SupportsAnext(Protocol[_T_co]):
def __anext__(self) -> Awaitable[_T_co]: ...
# Comparison protocols
class SupportsDunderLT(Protocol[_T_contra]):
def __lt__(self, other: _T_contra, /) -> bool: ...
class SupportsDunderGT(Protocol[_T_contra]):
def __gt__(self, other: _T_contra, /) -> bool: ...
class SupportsDunderLE(Protocol[_T_contra]):
def __le__(self, other: _T_contra, /) -> bool: ...
class SupportsDunderGE(Protocol[_T_contra]):
def __ge__(self, other: _T_contra, /) -> bool: ...
class SupportsAllComparisons(
SupportsDunderLT[Any], SupportsDunderGT[Any], SupportsDunderLE[Any], SupportsDunderGE[Any], Protocol
): ...
SupportsRichComparison: TypeAlias = SupportsDunderLT[Any] | SupportsDunderGT[Any]
SupportsRichComparisonT = TypeVar("SupportsRichComparisonT", bound=SupportsRichComparison) # noqa: Y001
# Dunder protocols
class SupportsAdd(Protocol[_T_contra, _T_co]):
def __add__(self, x: _T_contra, /) -> _T_co: ...
class SupportsRAdd(Protocol[_T_contra, _T_co]):
def __radd__(self, x: _T_contra, /) -> _T_co: ...
class SupportsSub(Protocol[_T_contra, _T_co]):
def __sub__(self, x: _T_contra, /) -> _T_co: ...
class SupportsRSub(Protocol[_T_contra, _T_co]):
def __rsub__(self, x: _T_contra, /) -> _T_co: ...
class SupportsDivMod(Protocol[_T_contra, _T_co]):
def __divmod__(self, other: _T_contra, /) -> _T_co: ...
class SupportsRDivMod(Protocol[_T_contra, _T_co]):
def __rdivmod__(self, other: _T_contra, /) -> _T_co: ...
# This protocol is generic over the iterator type, while Iterable is
# generic over the type that is iterated over.
class SupportsIter(Protocol[_T_co]):
def __iter__(self) -> _T_co: ...
# This protocol is generic over the iterator type, while AsyncIterable is
# generic over the type that is iterated over.
class SupportsAiter(Protocol[_T_co]):
def __aiter__(self) -> _T_co: ...
class SupportsLenAndGetItem(Protocol[_T_co]):
def __len__(self) -> int: ...
def __getitem__(self, k: int, /) -> _T_co: ...
class SupportsTrunc(Protocol):
def __trunc__(self) -> int: ...
# Mapping-like protocols
# stable
class SupportsItems(Protocol[_KT_co, _VT_co]):
def items(self) -> AbstractSet[tuple[_KT_co, _VT_co]]: ...
# stable
class SupportsKeysAndGetItem(Protocol[_KT, _VT_co]):
def keys(self) -> Iterable[_KT]: ...
def __getitem__(self, key: _KT, /) -> _VT_co: ...
# This protocol is currently under discussion. Use SupportsContainsAndGetItem
# instead, if you require the __contains__ method.
# See https://github.com/python/typeshed/issues/11822.
class SupportsGetItem(Protocol[_KT_contra, _VT_co]):
def __contains__(self, x: Any, /) -> bool: ...
def __getitem__(self, key: _KT_contra, /) -> _VT_co: ...
# stable
class SupportsContainsAndGetItem(Protocol[_KT_contra, _VT_co]):
def __contains__(self, x: Any, /) -> bool: ...
def __getitem__(self, key: _KT_contra, /) -> _VT_co: ...
# stable
class SupportsItemAccess(Protocol[_KT_contra, _VT]):
def __contains__(self, x: Any, /) -> bool: ...
def __getitem__(self, key: _KT_contra, /) -> _VT: ...
def __setitem__(self, key: _KT_contra, value: _VT, /) -> None: ...
def __delitem__(self, key: _KT_contra, /) -> None: ...
StrPath: TypeAlias = str | PathLike[str] # stable
BytesPath: TypeAlias = bytes | PathLike[bytes] # stable
GenericPath: TypeAlias = AnyStr | PathLike[AnyStr]
StrOrBytesPath: TypeAlias = str | bytes | PathLike[str] | PathLike[bytes] # stable
OpenTextModeUpdating: TypeAlias = Literal[
"r+",
"+r",
"rt+",
"r+t",
"+rt",
"tr+",
"t+r",
"+tr",
"w+",
"+w",
"wt+",
"w+t",
"+wt",
"tw+",
"t+w",
"+tw",
"a+",
"+a",
"at+",
"a+t",
"+at",
"ta+",
"t+a",
"+ta",
"x+",
"+x",
"xt+",
"x+t",
"+xt",
"tx+",
"t+x",
"+tx",
]
OpenTextModeWriting: TypeAlias = Literal["w", "wt", "tw", "a", "at", "ta", "x", "xt", "tx"]
OpenTextModeReading: TypeAlias = Literal["r", "rt", "tr", "U", "rU", "Ur", "rtU", "rUt", "Urt", "trU", "tUr", "Utr"]
OpenTextMode: TypeAlias = OpenTextModeUpdating | OpenTextModeWriting | OpenTextModeReading
OpenBinaryModeUpdating: TypeAlias = Literal[
"rb+",
"r+b",
"+rb",
"br+",
"b+r",
"+br",
"wb+",
"w+b",
"+wb",
"bw+",
"b+w",
"+bw",
"ab+",
"a+b",
"+ab",
"ba+",
"b+a",
"+ba",
"xb+",
"x+b",
"+xb",
"bx+",
"b+x",
"+bx",
]
OpenBinaryModeWriting: TypeAlias = Literal["wb", "bw", "ab", "ba", "xb", "bx"]
OpenBinaryModeReading: TypeAlias = Literal["rb", "br", "rbU", "rUb", "Urb", "brU", "bUr", "Ubr"]
OpenBinaryMode: TypeAlias = OpenBinaryModeUpdating | OpenBinaryModeReading | OpenBinaryModeWriting
# stable
class HasFileno(Protocol):
def fileno(self) -> int: ...
FileDescriptor: TypeAlias = int # stable
FileDescriptorLike: TypeAlias = int | HasFileno # stable
FileDescriptorOrPath: TypeAlias = int | StrOrBytesPath
# stable
class SupportsRead(Protocol[_T_co]):
def read(self, length: int = ..., /) -> _T_co: ...
# stable
class SupportsReadline(Protocol[_T_co]):
def readline(self, length: int = ..., /) -> _T_co: ...
# stable
class SupportsNoArgReadline(Protocol[_T_co]):
def readline(self) -> _T_co: ...
# stable
class SupportsWrite(Protocol[_T_contra]):
def write(self, s: _T_contra, /) -> object: ...
# stable
class SupportsFlush(Protocol):
def flush(self) -> object: ...
# Unfortunately PEP 688 does not allow us to distinguish read-only
# from writable buffers. We use these aliases for readability for now.
# Perhaps a future extension of the buffer protocol will allow us to
# distinguish these cases in the type system.
ReadOnlyBuffer: TypeAlias = Buffer # stable
# Anything that implements the read-write buffer interface.
WriteableBuffer: TypeAlias = Buffer
# Same as WriteableBuffer, but also includes read-only buffer types (like bytes).
ReadableBuffer: TypeAlias = Buffer # stable
class SliceableBuffer(Buffer, Protocol):
def __getitem__(self, slice: slice, /) -> Sequence[int]: ...
class IndexableBuffer(Buffer, Protocol):
def __getitem__(self, i: int, /) -> int: ...
class SupportsGetItemBuffer(SliceableBuffer, IndexableBuffer, Protocol):
def __contains__(self, x: Any, /) -> bool: ...
@overload
def __getitem__(self, slice: slice, /) -> Sequence[int]: ...
@overload
def __getitem__(self, i: int, /) -> int: ...
class SizedBuffer(Sized, Buffer, Protocol): ...
# for compatibility with third-party stubs that may use this
_BufferWithLen: TypeAlias = SizedBuffer # not stable # noqa: Y047
ExcInfo: TypeAlias = tuple[type[BaseException], BaseException, TracebackType]
OptExcInfo: TypeAlias = ExcInfo | tuple[None, None, None]
# stable
if sys.version_info >= (3, 10):
from types import NoneType as NoneType
else:
# Used by type checkers for checks involving None (does not exist at runtime)
@final
class NoneType:
def __bool__(self) -> Literal[False]: ...
# This is an internal CPython type that is like, but subtly different from, a NamedTuple
# Subclasses of this type are found in multiple modules.
# In typeshed, `structseq` is only ever used as a mixin in combination with a fixed-length `Tuple`
# See discussion at #6546 & #6560
# `structseq` classes are unsubclassable, so are all decorated with `@final`.
class structseq(Generic[_T_co]):
n_fields: Final[int]
n_unnamed_fields: Final[int]
n_sequence_fields: Final[int]
# The first parameter will generally only take an iterable of a specific length.
# E.g. `os.uname_result` takes any iterable of length exactly 5.
#
# The second parameter will accept a dict of any kind without raising an exception,
# but only has any meaning if you supply it a dict where the keys are strings.
# https://github.com/python/typeshed/pull/6560#discussion_r767149830
def __new__(cls: type[Self], sequence: Iterable[_T_co], dict: dict[str, Any] = ...) -> Self: ...
if sys.version_info >= (3, 13):
def __replace__(self: Self, **kwargs: Any) -> Self: ...
# Superset of typing.AnyStr that also includes LiteralString
AnyOrLiteralStr = TypeVar("AnyOrLiteralStr", str, bytes, LiteralString) # noqa: Y001
# Represents when str or LiteralString is acceptable. Useful for string-processing
# APIs where the literalness of the return value depends on the literalness of the inputs.
StrOrLiteralStr = TypeVar("StrOrLiteralStr", LiteralString, str) # noqa: Y001
# Objects suitable to be passed to sys.setprofile, threading.setprofile, and similar
ProfileFunction: TypeAlias = Callable[[FrameType, str, Any], object]
# Objects suitable to be passed to sys.settrace, threading.settrace, and similar
TraceFunction: TypeAlias = Callable[[FrameType, str, Any], TraceFunction | None]
# experimental
# Might not work as expected for pyright, see
# https://github.com/python/typeshed/pull/9362
# https://github.com/microsoft/pyright/issues/4339
class DataclassInstance(Protocol):
__dataclass_fields__: ClassVar[dict[str, Field[Any]]]
# Anything that can be passed to the int/float constructors
ConvertibleToInt: TypeAlias = str | ReadableBuffer | SupportsInt | SupportsIndex | SupportsTrunc
ConvertibleToFloat: TypeAlias = str | ReadableBuffer | SupportsFloat | SupportsIndex
# A few classes were updated from Foo(str, Enum) to Foo(StrEnum). This is a convenience so these
# can be accurate on all Python versions without getting too wordy.
if sys.version_info >= (3, 11):
from enum import StrEnum as StrEnum
else:
from enum import Enum
class StrEnum(str, Enum): ...


@@ -0,0 +1,37 @@
# PEP 249 Database API 2.0 Types
# https://www.python.org/dev/peps/pep-0249/
from collections.abc import Mapping, Sequence
from typing import Any, Protocol
from typing_extensions import TypeAlias
DBAPITypeCode: TypeAlias = Any | None
# Strictly speaking, this should be a Sequence, but the type system does
# not support fixed-length sequences.
DBAPIColumnDescription: TypeAlias = tuple[str, DBAPITypeCode, int | None, int | None, int | None, int | None, bool | None]
class DBAPIConnection(Protocol):
def close(self) -> object: ...
def commit(self) -> object: ...
# optional:
# def rollback(self) -> Any: ...
def cursor(self) -> DBAPICursor: ...
class DBAPICursor(Protocol):
@property
def description(self) -> Sequence[DBAPIColumnDescription] | None: ...
@property
def rowcount(self) -> int: ...
# optional:
# def callproc(self, procname: str, parameters: Sequence[Any] = ..., /) -> Sequence[Any]: ...
def close(self) -> object: ...
def execute(self, operation: str, parameters: Sequence[Any] | Mapping[str, Any] = ..., /) -> object: ...
def executemany(self, operation: str, seq_of_parameters: Sequence[Sequence[Any]], /) -> object: ...
def fetchone(self) -> Sequence[Any] | None: ...
def fetchmany(self, size: int = ..., /) -> Sequence[Sequence[Any]]: ...
def fetchall(self) -> Sequence[Sequence[Any]]: ...
# optional:
# def nextset(self) -> None | Literal[True]: ...
arraysize: int
def setinputsizes(self, sizes: Sequence[DBAPITypeCode | int | None], /) -> object: ...
def setoutputsize(self, size: int, column: int = ..., /) -> object: ...
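A hedged usage sketch of the protocols above, assuming this file is `_typeshed/dbapi.pyi` as in typeshed; `first_row` is a hypothetical helper, and `sqlite3` is used only because it implements PEP 249:

```python
import sqlite3
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed.dbapi import DBAPIConnection  # type-check-time only

def first_row(conn: "DBAPIConnection") -> object:
    # Structural typing: any PEP 249-compliant connection satisfies the protocol.
    cur = conn.cursor()
    cur.execute("SELECT 1")
    return cur.fetchone()

print(first_row(sqlite3.connect(":memory:")))
```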


@@ -0,0 +1,18 @@
# Implicit protocols used in importlib.
# We intentionally omit deprecated and optional methods.
from collections.abc import Sequence
from importlib.machinery import ModuleSpec
from types import ModuleType
from typing import Protocol
__all__ = ["LoaderProtocol", "MetaPathFinderProtocol", "PathEntryFinderProtocol"]
class LoaderProtocol(Protocol):
def load_module(self, fullname: str, /) -> ModuleType: ...
class MetaPathFinderProtocol(Protocol):
def find_spec(self, fullname: str, path: Sequence[str] | None, target: ModuleType | None = ..., /) -> ModuleSpec | None: ...
class PathEntryFinderProtocol(Protocol):
def find_spec(self, fullname: str, target: ModuleType | None = ..., /) -> ModuleSpec | None: ...


@@ -0,0 +1,44 @@
# Types to support PEP 3333 (WSGI)
#
# Obsolete since Python 3.11: Use wsgiref.types instead.
#
# See the README.md file in this directory for more information.
import sys
from _typeshed import OptExcInfo
from collections.abc import Callable, Iterable, Iterator
from typing import Any, Protocol
from typing_extensions import TypeAlias
class _Readable(Protocol):
def read(self, size: int = ..., /) -> bytes: ...
# Optional: def close(self) -> object: ...
if sys.version_info >= (3, 11):
from wsgiref.types import *
else:
# stable
class StartResponse(Protocol):
def __call__(
self, status: str, headers: list[tuple[str, str]], exc_info: OptExcInfo | None = ..., /
) -> Callable[[bytes], object]: ...
WSGIEnvironment: TypeAlias = dict[str, Any] # stable
WSGIApplication: TypeAlias = Callable[[WSGIEnvironment, StartResponse], Iterable[bytes]] # stable
# WSGI input streams per PEP 3333, stable
class InputStream(Protocol):
def read(self, size: int = ..., /) -> bytes: ...
def readline(self, size: int = ..., /) -> bytes: ...
def readlines(self, hint: int = ..., /) -> list[bytes]: ...
def __iter__(self) -> Iterator[bytes]: ...
# WSGI error streams per PEP 3333, stable
class ErrorStream(Protocol):
def flush(self) -> object: ...
def write(self, s: str, /) -> object: ...
def writelines(self, seq: list[str], /) -> object: ...
# Optional file wrapper in wsgi.file_wrapper
class FileWrapper(Protocol):
def __call__(self, file: _Readable, block_size: int = ..., /) -> Iterable[bytes]: ...
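As a rough sketch of what these types describe, here is a minimal PEP 3333 application; the module path `_typeshed.wsgi` is assumed from typeshed's layout, and the `app` function is hypothetical:

```python
from collections.abc import Iterable
from typing import TYPE_CHECKING
from wsgiref.simple_server import make_server

if TYPE_CHECKING:
    from _typeshed.wsgi import StartResponse, WSGIEnvironment

def app(environ: "WSGIEnvironment", start_response: "StartResponse") -> Iterable[bytes]:
    # A WSGI app: report status and headers, then return an iterable of bytes.
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"hello\n"]

if __name__ == "__main__":
    with make_server("127.0.0.1", 8000, app) as server:
        server.handle_request()  # serve a single request, then exit
```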


@@ -0,0 +1,9 @@
# See the README.md file in this directory for more information.
from typing import Any, Protocol
# As defined at https://docs.python.org/3/library/xml.dom.html#domimplementation-objects
class DOMImplementation(Protocol):
def hasFeature(self, feature: str, version: str | None, /) -> bool: ...
def createDocument(self, namespaceUri: str, qualifiedName: str, doctype: Any | None, /) -> Any: ...
def createDocumentType(self, qualifiedName: str, publicId: str, systemId: str, /) -> Any: ...


@@ -0,0 +1,55 @@
import sys
from typing import Any, overload
_defaultaction: str
_onceregistry: dict[Any, Any]
filters: list[tuple[str, str | None, type[Warning], str | None, int]]
if sys.version_info >= (3, 12):
@overload
def warn(
message: str,
category: type[Warning] | None = None,
stacklevel: int = 1,
source: Any | None = None,
*,
skip_file_prefixes: tuple[str, ...] = (),
) -> None: ...
@overload
def warn(
message: Warning,
category: Any = None,
stacklevel: int = 1,
source: Any | None = None,
*,
skip_file_prefixes: tuple[str, ...] = (),
) -> None: ...
else:
@overload
def warn(message: str, category: type[Warning] | None = None, stacklevel: int = 1, source: Any | None = None) -> None: ...
@overload
def warn(message: Warning, category: Any = None, stacklevel: int = 1, source: Any | None = None) -> None: ...
@overload
def warn_explicit(
message: str,
category: type[Warning],
filename: str,
lineno: int,
module: str | None = ...,
registry: dict[str | tuple[str, type[Warning], int], int] | None = ...,
module_globals: dict[str, Any] | None = ...,
source: Any | None = ...,
) -> None: ...
@overload
def warn_explicit(
message: Warning,
category: Any,
filename: str,
lineno: int,
module: str | None = ...,
registry: dict[str | tuple[str, type[Warning], int], int] | None = ...,
module_globals: dict[str, Any] | None = ...,
source: Any | None = ...,
) -> None: ...


@@ -0,0 +1,41 @@
import sys
from collections.abc import Callable
from typing import Any, Generic, TypeVar, final, overload
from typing_extensions import Self
if sys.version_info >= (3, 9):
from types import GenericAlias
_C = TypeVar("_C", bound=Callable[..., Any])
_T = TypeVar("_T")
@final
class CallableProxyType(Generic[_C]): # "weakcallableproxy"
def __eq__(self, value: object, /) -> bool: ...
def __getattr__(self, attr: str) -> Any: ...
__call__: _C
@final
class ProxyType(Generic[_T]): # "weakproxy"
def __eq__(self, value: object, /) -> bool: ...
def __getattr__(self, attr: str) -> Any: ...
class ReferenceType(Generic[_T]):
__callback__: Callable[[ReferenceType[_T]], Any]
def __new__(cls, o: _T, callback: Callable[[ReferenceType[_T]], Any] | None = ..., /) -> Self: ...
def __call__(self) -> _T | None: ...
def __eq__(self, value: object, /) -> bool: ...
def __hash__(self) -> int: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
ref = ReferenceType
def getweakrefcount(object: Any, /) -> int: ...
def getweakrefs(object: Any, /) -> list[Any]: ...
# Return CallableProxyType if object is callable, ProxyType otherwise
@overload
def proxy(object: _C, callback: Callable[[_C], Any] | None = None, /) -> CallableProxyType[_C]: ...
@overload
def proxy(object: _T, callback: Callable[[_T], Any] | None = None, /) -> Any: ...


@@ -0,0 +1,51 @@
import sys
from collections.abc import Iterable, Iterator, MutableSet
from typing import Any, TypeVar, overload
from typing_extensions import Self
if sys.version_info >= (3, 9):
from types import GenericAlias
__all__ = ["WeakSet"]
_S = TypeVar("_S")
_T = TypeVar("_T")
class WeakSet(MutableSet[_T]):
@overload
def __init__(self, data: None = None) -> None: ...
@overload
def __init__(self, data: Iterable[_T]) -> None: ...
def add(self, item: _T) -> None: ...
def discard(self, item: _T) -> None: ...
def copy(self) -> Self: ...
def remove(self, item: _T) -> None: ...
def update(self, other: Iterable[_T]) -> None: ...
def __contains__(self, item: object) -> bool: ...
def __len__(self) -> int: ...
def __iter__(self) -> Iterator[_T]: ...
def __ior__(self, other: Iterable[_T]) -> Self: ... # type: ignore[override,misc]
def difference(self, other: Iterable[_T]) -> Self: ...
def __sub__(self, other: Iterable[Any]) -> Self: ...
def difference_update(self, other: Iterable[Any]) -> None: ...
def __isub__(self, other: Iterable[Any]) -> Self: ...
def intersection(self, other: Iterable[_T]) -> Self: ...
def __and__(self, other: Iterable[Any]) -> Self: ...
def intersection_update(self, other: Iterable[Any]) -> None: ...
def __iand__(self, other: Iterable[Any]) -> Self: ...
def issubset(self, other: Iterable[_T]) -> bool: ...
def __le__(self, other: Iterable[_T]) -> bool: ...
def __lt__(self, other: Iterable[_T]) -> bool: ...
def issuperset(self, other: Iterable[_T]) -> bool: ...
def __ge__(self, other: Iterable[_T]) -> bool: ...
def __gt__(self, other: Iterable[_T]) -> bool: ...
def __eq__(self, other: object) -> bool: ...
def symmetric_difference(self, other: Iterable[_S]) -> WeakSet[_S | _T]: ...
def __xor__(self, other: Iterable[_S]) -> WeakSet[_S | _T]: ...
def symmetric_difference_update(self, other: Iterable[_T]) -> None: ...
def __ixor__(self, other: Iterable[_T]) -> Self: ... # type: ignore[override,misc]
def union(self, other: Iterable[_S]) -> WeakSet[_S | _T]: ...
def __or__(self, other: Iterable[_S]) -> WeakSet[_S | _T]: ...
def isdisjoint(self, other: Iterable[_T]) -> bool: ...
if sys.version_info >= (3, 9):
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...


@@ -0,0 +1,255 @@
import sys
from _typeshed import ReadableBuffer
from collections.abc import Sequence
from typing import Any, Literal, NoReturn, final, overload
if sys.platform == "win32":
ABOVE_NORMAL_PRIORITY_CLASS: Literal[0x8000]
BELOW_NORMAL_PRIORITY_CLASS: Literal[0x4000]
CREATE_BREAKAWAY_FROM_JOB: Literal[0x1000000]
CREATE_DEFAULT_ERROR_MODE: Literal[0x4000000]
CREATE_NO_WINDOW: Literal[0x8000000]
CREATE_NEW_CONSOLE: Literal[0x10]
CREATE_NEW_PROCESS_GROUP: Literal[0x200]
DETACHED_PROCESS: Literal[8]
DUPLICATE_CLOSE_SOURCE: Literal[1]
DUPLICATE_SAME_ACCESS: Literal[2]
ERROR_ALREADY_EXISTS: Literal[183]
ERROR_BROKEN_PIPE: Literal[109]
ERROR_IO_PENDING: Literal[997]
ERROR_MORE_DATA: Literal[234]
ERROR_NETNAME_DELETED: Literal[64]
ERROR_NO_DATA: Literal[232]
ERROR_NO_SYSTEM_RESOURCES: Literal[1450]
ERROR_OPERATION_ABORTED: Literal[995]
ERROR_PIPE_BUSY: Literal[231]
ERROR_PIPE_CONNECTED: Literal[535]
ERROR_SEM_TIMEOUT: Literal[121]
FILE_FLAG_FIRST_PIPE_INSTANCE: Literal[0x80000]
FILE_FLAG_OVERLAPPED: Literal[0x40000000]
FILE_GENERIC_READ: Literal[1179785]
FILE_GENERIC_WRITE: Literal[1179926]
FILE_MAP_ALL_ACCESS: Literal[983071]
FILE_MAP_COPY: Literal[1]
FILE_MAP_EXECUTE: Literal[32]
FILE_MAP_READ: Literal[4]
FILE_MAP_WRITE: Literal[2]
FILE_TYPE_CHAR: Literal[2]
FILE_TYPE_DISK: Literal[1]
FILE_TYPE_PIPE: Literal[3]
FILE_TYPE_REMOTE: Literal[32768]
FILE_TYPE_UNKNOWN: Literal[0]
GENERIC_READ: Literal[0x80000000]
GENERIC_WRITE: Literal[0x40000000]
HIGH_PRIORITY_CLASS: Literal[0x80]
INFINITE: Literal[0xFFFFFFFF]
# Ignore the Flake8 error -- flake8-pyi assumes
# most numbers this long will be implementation details,
# but this one is meaningful: it is 2**64 - 1, i.e. all bits set
INVALID_HANDLE_VALUE: Literal[0xFFFFFFFFFFFFFFFF] # noqa: Y054
IDLE_PRIORITY_CLASS: Literal[0x40]
NORMAL_PRIORITY_CLASS: Literal[0x20]
REALTIME_PRIORITY_CLASS: Literal[0x100]
NMPWAIT_WAIT_FOREVER: Literal[0xFFFFFFFF]
MEM_COMMIT: Literal[0x1000]
MEM_FREE: Literal[0x10000]
MEM_IMAGE: Literal[0x1000000]
MEM_MAPPED: Literal[0x40000]
MEM_PRIVATE: Literal[0x20000]
MEM_RESERVE: Literal[0x2000]
NULL: Literal[0]
OPEN_EXISTING: Literal[3]
PIPE_ACCESS_DUPLEX: Literal[3]
PIPE_ACCESS_INBOUND: Literal[1]
PIPE_READMODE_MESSAGE: Literal[2]
PIPE_TYPE_MESSAGE: Literal[4]
PIPE_UNLIMITED_INSTANCES: Literal[255]
PIPE_WAIT: Literal[0]
PAGE_EXECUTE: Literal[0x10]
PAGE_EXECUTE_READ: Literal[0x20]
PAGE_EXECUTE_READWRITE: Literal[0x40]
PAGE_EXECUTE_WRITECOPY: Literal[0x80]
PAGE_GUARD: Literal[0x100]
PAGE_NOACCESS: Literal[0x1]
PAGE_NOCACHE: Literal[0x200]
PAGE_READONLY: Literal[0x2]
PAGE_READWRITE: Literal[0x4]
PAGE_WRITECOMBINE: Literal[0x400]
PAGE_WRITECOPY: Literal[0x8]
PROCESS_ALL_ACCESS: Literal[0x1FFFFF]
PROCESS_DUP_HANDLE: Literal[0x40]
SEC_COMMIT: Literal[0x8000000]
SEC_IMAGE: Literal[0x1000000]
SEC_LARGE_PAGES: Literal[0x80000000]
SEC_NOCACHE: Literal[0x10000000]
SEC_RESERVE: Literal[0x4000000]
SEC_WRITECOMBINE: Literal[0x40000000]
STARTF_USESHOWWINDOW: Literal[0x1]
STARTF_USESTDHANDLES: Literal[0x100]
STD_ERROR_HANDLE: Literal[0xFFFFFFF4]
STD_OUTPUT_HANDLE: Literal[0xFFFFFFF5]
STD_INPUT_HANDLE: Literal[0xFFFFFFF6]
STILL_ACTIVE: Literal[259]
SW_HIDE: Literal[0]
SYNCHRONIZE: Literal[0x100000]
WAIT_ABANDONED_0: Literal[128]
WAIT_OBJECT_0: Literal[0]
WAIT_TIMEOUT: Literal[258]
if sys.version_info >= (3, 10):
LOCALE_NAME_INVARIANT: str
LOCALE_NAME_MAX_LENGTH: int
LOCALE_NAME_SYSTEM_DEFAULT: str
LOCALE_NAME_USER_DEFAULT: str | None
LCMAP_FULLWIDTH: int
LCMAP_HALFWIDTH: int
LCMAP_HIRAGANA: int
LCMAP_KATAKANA: int
LCMAP_LINGUISTIC_CASING: int
LCMAP_LOWERCASE: int
LCMAP_SIMPLIFIED_CHINESE: int
LCMAP_TITLECASE: int
LCMAP_TRADITIONAL_CHINESE: int
LCMAP_UPPERCASE: int
if sys.version_info >= (3, 12):
COPYFILE2_CALLBACK_CHUNK_STARTED: Literal[1]
COPYFILE2_CALLBACK_CHUNK_FINISHED: Literal[2]
COPYFILE2_CALLBACK_STREAM_STARTED: Literal[3]
COPYFILE2_CALLBACK_STREAM_FINISHED: Literal[4]
COPYFILE2_CALLBACK_POLL_CONTINUE: Literal[5]
COPYFILE2_CALLBACK_ERROR: Literal[6]
COPYFILE2_PROGRESS_CONTINUE: Literal[0]
COPYFILE2_PROGRESS_CANCEL: Literal[1]
COPYFILE2_PROGRESS_STOP: Literal[2]
COPYFILE2_PROGRESS_QUIET: Literal[3]
COPYFILE2_PROGRESS_PAUSE: Literal[4]
COPY_FILE_FAIL_IF_EXISTS: Literal[0x1]
COPY_FILE_RESTARTABLE: Literal[0x2]
COPY_FILE_OPEN_SOURCE_FOR_WRITE: Literal[0x4]
COPY_FILE_ALLOW_DECRYPTED_DESTINATION: Literal[0x8]
COPY_FILE_COPY_SYMLINK: Literal[0x800]
COPY_FILE_NO_BUFFERING: Literal[0x1000]
COPY_FILE_REQUEST_SECURITY_PRIVILEGES: Literal[0x2000]
COPY_FILE_RESUME_FROM_PAUSE: Literal[0x4000]
COPY_FILE_NO_OFFLOAD: Literal[0x40000]
COPY_FILE_REQUEST_COMPRESSED_TRAFFIC: Literal[0x10000000]
ERROR_ACCESS_DENIED: Literal[5]
ERROR_PRIVILEGE_NOT_HELD: Literal[1314]
def CloseHandle(handle: int, /) -> None: ...
@overload
def ConnectNamedPipe(handle: int, overlapped: Literal[True]) -> Overlapped: ...
@overload
def ConnectNamedPipe(handle: int, overlapped: Literal[False] = False) -> None: ...
@overload
def ConnectNamedPipe(handle: int, overlapped: bool) -> Overlapped | None: ...
def CreateFile(
file_name: str,
desired_access: int,
share_mode: int,
security_attributes: int,
creation_disposition: int,
flags_and_attributes: int,
template_file: int,
/,
) -> int: ...
def CreateJunction(src_path: str, dst_path: str, /) -> None: ...
def CreateNamedPipe(
name: str,
open_mode: int,
pipe_mode: int,
max_instances: int,
out_buffer_size: int,
in_buffer_size: int,
default_timeout: int,
security_attributes: int,
/,
) -> int: ...
def CreatePipe(pipe_attrs: Any, size: int, /) -> tuple[int, int]: ...
def CreateProcess(
application_name: str | None,
command_line: str | None,
proc_attrs: Any,
thread_attrs: Any,
inherit_handles: bool,
creation_flags: int,
env_mapping: dict[str, str],
current_directory: str | None,
startup_info: Any,
/,
) -> tuple[int, int, int, int]: ...
def DuplicateHandle(
source_process_handle: int,
source_handle: int,
target_process_handle: int,
desired_access: int,
inherit_handle: bool,
options: int = 0,
/,
) -> int: ...
def ExitProcess(ExitCode: int, /) -> NoReturn: ...
def GetACP() -> int: ...
def GetFileType(handle: int) -> int: ...
def GetCurrentProcess() -> int: ...
def GetExitCodeProcess(process: int, /) -> int: ...
def GetLastError() -> int: ...
def GetModuleFileName(module_handle: int, /) -> str: ...
def GetStdHandle(std_handle: int, /) -> int: ...
def GetVersion() -> int: ...
def OpenProcess(desired_access: int, inherit_handle: bool, process_id: int, /) -> int: ...
def PeekNamedPipe(handle: int, size: int = 0, /) -> tuple[int, int] | tuple[bytes, int, int]: ...
if sys.version_info >= (3, 10):
def LCMapStringEx(locale: str, flags: int, src: str) -> str: ...
def UnmapViewOfFile(address: int, /) -> None: ...
@overload
def ReadFile(handle: int, size: int, overlapped: Literal[True]) -> tuple[Overlapped, int]: ...
@overload
def ReadFile(handle: int, size: int, overlapped: Literal[False] = False) -> tuple[bytes, int]: ...
@overload
def ReadFile(handle: int, size: int, overlapped: int | bool) -> tuple[Any, int]: ...
def SetNamedPipeHandleState(
named_pipe: int, mode: int | None, max_collection_count: int | None, collect_data_timeout: int | None, /
) -> None: ...
def TerminateProcess(handle: int, exit_code: int, /) -> None: ...
def WaitForMultipleObjects(handle_seq: Sequence[int], wait_flag: bool, milliseconds: int = 0xFFFFFFFF, /) -> int: ...
def WaitForSingleObject(handle: int, milliseconds: int, /) -> int: ...
def WaitNamedPipe(name: str, timeout: int, /) -> None: ...
@overload
def WriteFile(handle: int, buffer: ReadableBuffer, overlapped: Literal[True]) -> tuple[Overlapped, int]: ...
@overload
def WriteFile(handle: int, buffer: ReadableBuffer, overlapped: Literal[False] = False) -> tuple[int, int]: ...
@overload
def WriteFile(handle: int, buffer: ReadableBuffer, overlapped: int | bool) -> tuple[Any, int]: ...
@final
class Overlapped:
event: int
def GetOverlappedResult(self, wait: bool, /) -> tuple[int, int]: ...
def cancel(self) -> None: ...
def getbuffer(self) -> bytes | None: ...
if sys.version_info >= (3, 12):
def CopyFile2(existing_file_name: str, new_file_name: str, flags: int, progress_routine: int | None = None) -> int: ...
def NeedCurrentDirectoryForExePath(exe_name: str, /) -> bool: ...
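
A minimal, hedged sketch of the anonymous-pipe functions stubbed above; it only runs on Windows, and the payload is purely illustrative:
import sys

if sys.platform == "win32":
    import _winapi

    # CreatePipe returns a (read_handle, write_handle) pair of raw handles.
    read_handle, write_handle = _winapi.CreatePipe(None, 0)
    try:
        written, _err = _winapi.WriteFile(write_handle, b"hello")  # (bytes written, error code)
        data, _err = _winapi.ReadFile(read_handle, written)        # (data, error code)
        assert data == b"hello"
    finally:
        _winapi.CloseHandle(read_handle)
        _winapi.CloseHandle(write_handle)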


@@ -0,0 +1,51 @@
import _typeshed
import sys
from _typeshed import SupportsWrite
from collections.abc import Callable
from typing import Any, Literal, TypeVar
from typing_extensions import Concatenate, ParamSpec, deprecated
_T = TypeVar("_T")
_R_co = TypeVar("_R_co", covariant=True)
_FuncT = TypeVar("_FuncT", bound=Callable[..., Any])
_P = ParamSpec("_P")
# These definitions have special processing in mypy
class ABCMeta(type):
__abstractmethods__: frozenset[str]
if sys.version_info >= (3, 11):
def __new__(
mcls: type[_typeshed.Self], name: str, bases: tuple[type, ...], namespace: dict[str, Any], /, **kwargs: Any
) -> _typeshed.Self: ...
else:
def __new__(
mcls: type[_typeshed.Self], name: str, bases: tuple[type, ...], namespace: dict[str, Any], **kwargs: Any
) -> _typeshed.Self: ...
def __instancecheck__(cls: ABCMeta, instance: Any) -> bool: ...
def __subclasscheck__(cls: ABCMeta, subclass: type) -> bool: ...
def _dump_registry(cls: ABCMeta, file: SupportsWrite[str] | None = None) -> None: ...
def register(cls: ABCMeta, subclass: type[_T]) -> type[_T]: ...
def abstractmethod(funcobj: _FuncT) -> _FuncT: ...
@deprecated("Deprecated, use 'classmethod' with 'abstractmethod' instead")
class abstractclassmethod(classmethod[_T, _P, _R_co]):
__isabstractmethod__: Literal[True]
def __init__(self, callable: Callable[Concatenate[type[_T], _P], _R_co]) -> None: ...
@deprecated("Deprecated, use 'staticmethod' with 'abstractmethod' instead")
class abstractstaticmethod(staticmethod[_P, _R_co]):
__isabstractmethod__: Literal[True]
def __init__(self, callable: Callable[_P, _R_co]) -> None: ...
@deprecated("Deprecated, use 'property' with 'abstractmethod' instead")
class abstractproperty(property):
__isabstractmethod__: Literal[True]
class ABC(metaclass=ABCMeta):
__slots__ = ()
def get_cache_token() -> object: ...
if sys.version_info >= (3, 10):
def update_abstractmethods(cls: type[_T]) -> type[_T]: ...
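
A quick illustration of the ABC/abstractmethod machinery stubbed above (class names are hypothetical):
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side * self.side

# Shape() raises TypeError because area() is abstract; Square(2.0).area() == 4.0.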


@@ -0,0 +1,91 @@
import sys
from types import TracebackType
from typing import IO, Any, Literal, NamedTuple, overload
from typing_extensions import Self, TypeAlias
if sys.version_info >= (3, 9):
__all__ = ["Error", "open"]
else:
__all__ = ["Error", "open", "openfp"]
class Error(Exception): ...
class _aifc_params(NamedTuple):
nchannels: int
sampwidth: int
framerate: int
nframes: int
comptype: bytes
compname: bytes
_File: TypeAlias = str | IO[bytes]
_Marker: TypeAlias = tuple[int, int, bytes]
class Aifc_read:
def __init__(self, f: _File) -> None: ...
def __enter__(self) -> Self: ...
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None
) -> None: ...
def initfp(self, file: IO[bytes]) -> None: ...
def getfp(self) -> IO[bytes]: ...
def rewind(self) -> None: ...
def close(self) -> None: ...
def tell(self) -> int: ...
def getnchannels(self) -> int: ...
def getnframes(self) -> int: ...
def getsampwidth(self) -> int: ...
def getframerate(self) -> int: ...
def getcomptype(self) -> bytes: ...
def getcompname(self) -> bytes: ...
def getparams(self) -> _aifc_params: ...
def getmarkers(self) -> list[_Marker] | None: ...
def getmark(self, id: int) -> _Marker: ...
def setpos(self, pos: int) -> None: ...
def readframes(self, nframes: int) -> bytes: ...
class Aifc_write:
def __init__(self, f: _File) -> None: ...
def __del__(self) -> None: ...
def __enter__(self) -> Self: ...
def __exit__(
self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None
) -> None: ...
def initfp(self, file: IO[bytes]) -> None: ...
def aiff(self) -> None: ...
def aifc(self) -> None: ...
def setnchannels(self, nchannels: int) -> None: ...
def getnchannels(self) -> int: ...
def setsampwidth(self, sampwidth: int) -> None: ...
def getsampwidth(self) -> int: ...
def setframerate(self, framerate: int) -> None: ...
def getframerate(self) -> int: ...
def setnframes(self, nframes: int) -> None: ...
def getnframes(self) -> int: ...
def setcomptype(self, comptype: bytes, compname: bytes) -> None: ...
def getcomptype(self) -> bytes: ...
def getcompname(self) -> bytes: ...
def setparams(self, params: tuple[int, int, int, int, bytes, bytes]) -> None: ...
def getparams(self) -> _aifc_params: ...
def setmark(self, id: int, pos: int, name: bytes) -> None: ...
def getmark(self, id: int) -> _Marker: ...
def getmarkers(self) -> list[_Marker] | None: ...
def tell(self) -> int: ...
def writeframesraw(self, data: Any) -> None: ... # Actual type for data is Buffer Protocol
def writeframes(self, data: Any) -> None: ...
def close(self) -> None: ...
@overload
def open(f: _File, mode: Literal["r", "rb"]) -> Aifc_read: ...
@overload
def open(f: _File, mode: Literal["w", "wb"]) -> Aifc_write: ...
@overload
def open(f: _File, mode: str | None = None) -> Any: ...
if sys.version_info < (3, 9):
@overload
def openfp(f: _File, mode: Literal["r", "rb"]) -> Aifc_read: ...
@overload
def openfp(f: _File, mode: Literal["w", "wb"]) -> Aifc_write: ...
@overload
def openfp(f: _File, mode: str | None = None) -> Any: ...
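
A small usage sketch for the aifc API above (the file path is hypothetical; the module is deprecated and was removed in Python 3.13):
import aifc

# aifc.open(..., "rb") returns an Aifc_read, usable as a context manager.
with aifc.open("example.aiff", "rb") as f:
    params = f.getparams()                  # _aifc_params named tuple
    frames = f.readframes(params.nframes)   # raw sample data as bytes
    print(params.nchannels, params.framerate, len(frames))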


@@ -0,0 +1,3 @@
from _typeshed import ReadableBuffer
def geohash(latitude: float, longitude: float, datedow: ReadableBuffer) -> None: ...


@@ -0,0 +1,753 @@
import sys
from _typeshed import sentinel
from collections.abc import Callable, Generator, Iterable, Sequence
from re import Pattern
from typing import IO, Any, Generic, Literal, NewType, NoReturn, Protocol, TypeVar, overload
from typing_extensions import Self, TypeAlias, deprecated
__all__ = [
"ArgumentParser",
"ArgumentError",
"ArgumentTypeError",
"FileType",
"HelpFormatter",
"ArgumentDefaultsHelpFormatter",
"RawDescriptionHelpFormatter",
"RawTextHelpFormatter",
"MetavarTypeHelpFormatter",
"Namespace",
"Action",
"ONE_OR_MORE",
"OPTIONAL",
"PARSER",
"REMAINDER",
"SUPPRESS",
"ZERO_OR_MORE",
]
if sys.version_info >= (3, 9):
__all__ += ["BooleanOptionalAction"]
_T = TypeVar("_T")
_ActionT = TypeVar("_ActionT", bound=Action)
_ArgumentParserT = TypeVar("_ArgumentParserT", bound=ArgumentParser)
_N = TypeVar("_N")
# more precisely, Literal["store", "store_const", "store_true",
# "store_false", "append", "append_const", "count", "help", "version",
# "extend"], but using this would make it hard to annotate callers
# that don't use a literal argument
_ActionStr: TypeAlias = str
# more precisely, Literal["?", "*", "+", "...", "A...",
# "==SUPPRESS=="], but using this would make it hard to annotate
# callers that don't use a literal argument
_NArgsStr: TypeAlias = str
ONE_OR_MORE: Literal["+"]
OPTIONAL: Literal["?"]
PARSER: Literal["A..."]
REMAINDER: Literal["..."]
_SUPPRESS_T = NewType("_SUPPRESS_T", str)
SUPPRESS: _SUPPRESS_T | str # not using Literal because argparse sometimes compares SUPPRESS with is
# the | str is there so that foo = argparse.SUPPRESS; foo = "test" checks out in mypy
ZERO_OR_MORE: Literal["*"]
_UNRECOGNIZED_ARGS_ATTR: str # undocumented
class ArgumentError(Exception):
argument_name: str | None
message: str
def __init__(self, argument: Action | None, message: str) -> None: ...
# undocumented
class _AttributeHolder:
def _get_kwargs(self) -> list[tuple[str, Any]]: ...
def _get_args(self) -> list[Any]: ...
# undocumented
class _ActionsContainer:
description: str | None
prefix_chars: str
argument_default: Any
conflict_handler: str
_registries: dict[str, dict[Any, Any]]
_actions: list[Action]
_option_string_actions: dict[str, Action]
_action_groups: list[_ArgumentGroup]
_mutually_exclusive_groups: list[_MutuallyExclusiveGroup]
_defaults: dict[str, Any]
_negative_number_matcher: Pattern[str]
_has_negative_number_optionals: list[bool]
def __init__(self, description: str | None, prefix_chars: str, argument_default: Any, conflict_handler: str) -> None: ...
def register(self, registry_name: str, value: Any, object: Any) -> None: ...
def _registry_get(self, registry_name: str, value: Any, default: Any = None) -> Any: ...
def set_defaults(self, **kwargs: Any) -> None: ...
def get_default(self, dest: str) -> Any: ...
def add_argument(
self,
*name_or_flags: str,
action: _ActionStr | type[Action] = ...,
nargs: int | _NArgsStr | _SUPPRESS_T | None = None,
const: Any = ...,
default: Any = ...,
type: Callable[[str], _T] | FileType = ...,
choices: Iterable[_T] | None = ...,
required: bool = ...,
help: str | None = ...,
metavar: str | tuple[str, ...] | None = ...,
dest: str | None = ...,
version: str = ...,
**kwargs: Any,
) -> Action: ...
def add_argument_group(
self,
title: str | None = None,
description: str | None = None,
*,
prefix_chars: str = ...,
argument_default: Any = ...,
conflict_handler: str = ...,
) -> _ArgumentGroup: ...
def add_mutually_exclusive_group(self, *, required: bool = False) -> _MutuallyExclusiveGroup: ...
def _add_action(self, action: _ActionT) -> _ActionT: ...
def _remove_action(self, action: Action) -> None: ...
def _add_container_actions(self, container: _ActionsContainer) -> None: ...
def _get_positional_kwargs(self, dest: str, **kwargs: Any) -> dict[str, Any]: ...
def _get_optional_kwargs(self, *args: Any, **kwargs: Any) -> dict[str, Any]: ...
def _pop_action_class(self, kwargs: Any, default: type[Action] | None = None) -> type[Action]: ...
def _get_handler(self) -> Callable[[Action, Iterable[tuple[str, Action]]], Any]: ...
def _check_conflict(self, action: Action) -> None: ...
def _handle_conflict_error(self, action: Action, conflicting_actions: Iterable[tuple[str, Action]]) -> NoReturn: ...
def _handle_conflict_resolve(self, action: Action, conflicting_actions: Iterable[tuple[str, Action]]) -> None: ...
class _FormatterClass(Protocol):
def __call__(self, *, prog: str) -> HelpFormatter: ...
class ArgumentParser(_AttributeHolder, _ActionsContainer):
prog: str
usage: str | None
epilog: str | None
formatter_class: _FormatterClass
fromfile_prefix_chars: str | None
add_help: bool
allow_abbrev: bool
# undocumented
_positionals: _ArgumentGroup
_optionals: _ArgumentGroup
_subparsers: _ArgumentGroup | None
# Note: the constructor arguments are also used in _SubParsersAction.add_parser.
if sys.version_info >= (3, 9):
def __init__(
self,
prog: str | None = None,
usage: str | None = None,
description: str | None = None,
epilog: str | None = None,
parents: Sequence[ArgumentParser] = [],
formatter_class: _FormatterClass = ...,
prefix_chars: str = "-",
fromfile_prefix_chars: str | None = None,
argument_default: Any = None,
conflict_handler: str = "error",
add_help: bool = True,
allow_abbrev: bool = True,
exit_on_error: bool = True,
) -> None: ...
else:
def __init__(
self,
prog: str | None = None,
usage: str | None = None,
description: str | None = None,
epilog: str | None = None,
parents: Sequence[ArgumentParser] = [],
formatter_class: _FormatterClass = ...,
prefix_chars: str = "-",
fromfile_prefix_chars: str | None = None,
argument_default: Any = None,
conflict_handler: str = "error",
add_help: bool = True,
allow_abbrev: bool = True,
) -> None: ...
@overload
def parse_args(self, args: Sequence[str] | None = None, namespace: None = None) -> Namespace: ...
@overload
def parse_args(self, args: Sequence[str] | None, namespace: _N) -> _N: ...
@overload
def parse_args(self, *, namespace: _N) -> _N: ...
@overload
def add_subparsers(
self: _ArgumentParserT,
*,
title: str = ...,
description: str | None = ...,
prog: str = ...,
action: type[Action] = ...,
option_string: str = ...,
dest: str | None = ...,
required: bool = ...,
help: str | None = ...,
metavar: str | None = ...,
) -> _SubParsersAction[_ArgumentParserT]: ...
@overload
def add_subparsers(
self,
*,
title: str = ...,
description: str | None = ...,
prog: str = ...,
parser_class: type[_ArgumentParserT],
action: type[Action] = ...,
option_string: str = ...,
dest: str | None = ...,
required: bool = ...,
help: str | None = ...,
metavar: str | None = ...,
) -> _SubParsersAction[_ArgumentParserT]: ...
def print_usage(self, file: IO[str] | None = None) -> None: ...
def print_help(self, file: IO[str] | None = None) -> None: ...
def format_usage(self) -> str: ...
def format_help(self) -> str: ...
@overload
def parse_known_args(self, args: Sequence[str] | None = None, namespace: None = None) -> tuple[Namespace, list[str]]: ...
@overload
def parse_known_args(self, args: Sequence[str] | None, namespace: _N) -> tuple[_N, list[str]]: ...
@overload
def parse_known_args(self, *, namespace: _N) -> tuple[_N, list[str]]: ...
def convert_arg_line_to_args(self, arg_line: str) -> list[str]: ...
def exit(self, status: int = 0, message: str | None = None) -> NoReturn: ...
def error(self, message: str) -> NoReturn: ...
@overload
def parse_intermixed_args(self, args: Sequence[str] | None = None, namespace: None = None) -> Namespace: ...
@overload
def parse_intermixed_args(self, args: Sequence[str] | None, namespace: _N) -> _N: ...
@overload
def parse_intermixed_args(self, *, namespace: _N) -> _N: ...
@overload
def parse_known_intermixed_args(
self, args: Sequence[str] | None = None, namespace: None = None
) -> tuple[Namespace, list[str]]: ...
@overload
def parse_known_intermixed_args(self, args: Sequence[str] | None, namespace: _N) -> tuple[_N, list[str]]: ...
@overload
def parse_known_intermixed_args(self, *, namespace: _N) -> tuple[_N, list[str]]: ...
# undocumented
def _get_optional_actions(self) -> list[Action]: ...
def _get_positional_actions(self) -> list[Action]: ...
def _parse_known_args(self, arg_strings: list[str], namespace: Namespace) -> tuple[Namespace, list[str]]: ...
def _read_args_from_files(self, arg_strings: list[str]) -> list[str]: ...
def _match_argument(self, action: Action, arg_strings_pattern: str) -> int: ...
def _match_arguments_partial(self, actions: Sequence[Action], arg_strings_pattern: str) -> list[int]: ...
def _parse_optional(self, arg_string: str) -> tuple[Action | None, str, str | None] | None: ...
def _get_option_tuples(self, option_string: str) -> list[tuple[Action, str, str | None]]: ...
def _get_nargs_pattern(self, action: Action) -> str: ...
def _get_values(self, action: Action, arg_strings: list[str]) -> Any: ...
def _get_value(self, action: Action, arg_string: str) -> Any: ...
def _check_value(self, action: Action, value: Any) -> None: ...
def _get_formatter(self) -> HelpFormatter: ...
def _print_message(self, message: str, file: IO[str] | None = None) -> None: ...
class HelpFormatter:
# undocumented
_prog: str
_indent_increment: int
_max_help_position: int
_width: int
_current_indent: int
_level: int
_action_max_length: int
_root_section: _Section
_current_section: _Section
_whitespace_matcher: Pattern[str]
_long_break_matcher: Pattern[str]
class _Section:
formatter: HelpFormatter
heading: str | None
parent: Self | None
items: list[tuple[Callable[..., str], Iterable[Any]]]
def __init__(self, formatter: HelpFormatter, parent: Self | None, heading: str | None = None) -> None: ...
def format_help(self) -> str: ...
def __init__(self, prog: str, indent_increment: int = 2, max_help_position: int = 24, width: int | None = None) -> None: ...
def _indent(self) -> None: ...
def _dedent(self) -> None: ...
def _add_item(self, func: Callable[..., str], args: Iterable[Any]) -> None: ...
def start_section(self, heading: str | None) -> None: ...
def end_section(self) -> None: ...
def add_text(self, text: str | None) -> None: ...
def add_usage(
self, usage: str | None, actions: Iterable[Action], groups: Iterable[_MutuallyExclusiveGroup], prefix: str | None = None
) -> None: ...
def add_argument(self, action: Action) -> None: ...
def add_arguments(self, actions: Iterable[Action]) -> None: ...
def format_help(self) -> str: ...
def _join_parts(self, part_strings: Iterable[str]) -> str: ...
def _format_usage(
self, usage: str | None, actions: Iterable[Action], groups: Iterable[_MutuallyExclusiveGroup], prefix: str | None
) -> str: ...
def _format_actions_usage(self, actions: Iterable[Action], groups: Iterable[_MutuallyExclusiveGroup]) -> str: ...
def _format_text(self, text: str) -> str: ...
def _format_action(self, action: Action) -> str: ...
def _format_action_invocation(self, action: Action) -> str: ...
def _metavar_formatter(self, action: Action, default_metavar: str) -> Callable[[int], tuple[str, ...]]: ...
def _format_args(self, action: Action, default_metavar: str) -> str: ...
def _expand_help(self, action: Action) -> str: ...
def _iter_indented_subactions(self, action: Action) -> Generator[Action, None, None]: ...
def _split_lines(self, text: str, width: int) -> list[str]: ...
def _fill_text(self, text: str, width: int, indent: str) -> str: ...
def _get_help_string(self, action: Action) -> str | None: ...
def _get_default_metavar_for_optional(self, action: Action) -> str: ...
def _get_default_metavar_for_positional(self, action: Action) -> str: ...
class RawDescriptionHelpFormatter(HelpFormatter): ...
class RawTextHelpFormatter(RawDescriptionHelpFormatter): ...
class ArgumentDefaultsHelpFormatter(HelpFormatter): ...
class MetavarTypeHelpFormatter(HelpFormatter): ...
class Action(_AttributeHolder):
option_strings: Sequence[str]
dest: str
nargs: int | str | None
const: Any
default: Any
type: Callable[[str], Any] | FileType | None
choices: Iterable[Any] | None
required: bool
help: str | None
metavar: str | tuple[str, ...] | None
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
nargs: int | str | None = None,
const: _T | None = None,
default: _T | str | None = None,
type: Callable[[str], _T] | FileType | None = None,
choices: Iterable[_T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
dest: str,
nargs: int | str | None = None,
const: _T | None = None,
default: _T | str | None = None,
type: Callable[[str], _T] | FileType | None = None,
choices: Iterable[_T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
def __call__(
self, parser: ArgumentParser, namespace: Namespace, values: str | Sequence[Any] | None, option_string: str | None = None
) -> None: ...
if sys.version_info >= (3, 9):
def format_usage(self) -> str: ...
if sys.version_info >= (3, 12):
class BooleanOptionalAction(Action):
if sys.version_info >= (3, 13):
@overload
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool | None = None,
*,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
@overload
@deprecated("The `type`, `choices`, and `metavar` parameters are ignored and will be removed in Python 3.14.")
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: _T | bool | None = None,
type: Callable[[str], _T] | FileType | None = sentinel,
choices: Iterable[_T] | None = sentinel,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = sentinel,
deprecated: bool = False,
) -> None: ...
else:
@overload
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool | None = None,
*,
required: bool = False,
help: str | None = None,
) -> None: ...
@overload
@deprecated("The `type`, `choices`, and `metavar` parameters are ignored and will be removed in Python 3.14.")
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: _T | bool | None = None,
type: Callable[[str], _T] | FileType | None = sentinel,
choices: Iterable[_T] | None = sentinel,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = sentinel,
) -> None: ...
elif sys.version_info >= (3, 9):
class BooleanOptionalAction(Action):
@overload
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool | None = None,
*,
required: bool = False,
help: str | None = None,
) -> None: ...
@overload
@deprecated("The `type`, `choices`, and `metavar` parameters are ignored and will be removed in Python 3.14.")
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: _T | bool | None = None,
type: Callable[[str], _T] | FileType | None = None,
choices: Iterable[_T] | None = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
class Namespace(_AttributeHolder):
def __init__(self, **kwargs: Any) -> None: ...
def __getattr__(self, name: str) -> Any: ...
def __setattr__(self, name: str, value: Any, /) -> None: ...
def __contains__(self, key: str) -> bool: ...
def __eq__(self, other: object) -> bool: ...
class FileType:
# undocumented
_mode: str
_bufsize: int
_encoding: str | None
_errors: str | None
def __init__(self, mode: str = "r", bufsize: int = -1, encoding: str | None = None, errors: str | None = None) -> None: ...
def __call__(self, string: str) -> IO[Any]: ...
# undocumented
class _ArgumentGroup(_ActionsContainer):
title: str | None
_group_actions: list[Action]
def __init__(
self,
container: _ActionsContainer,
title: str | None = None,
description: str | None = None,
*,
prefix_chars: str = ...,
argument_default: Any = ...,
conflict_handler: str = ...,
) -> None: ...
# undocumented
class _MutuallyExclusiveGroup(_ArgumentGroup):
required: bool
_container: _ActionsContainer
def __init__(self, container: _ActionsContainer, required: bool = False) -> None: ...
# undocumented
class _StoreAction(Action): ...
# undocumented
class _StoreConstAction(Action):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any | None = None,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
deprecated: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any | None = None,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
# undocumented
class _StoreTrueAction(_StoreConstAction):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool = False,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self, option_strings: Sequence[str], dest: str, default: bool = False, required: bool = False, help: str | None = None
) -> None: ...
# undocumented
class _StoreFalseAction(_StoreConstAction):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: bool = True,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self, option_strings: Sequence[str], dest: str, default: bool = True, required: bool = False, help: str | None = None
) -> None: ...
# undocumented
class _AppendAction(Action): ...
# undocumented
class _ExtendAction(_AppendAction): ...
# undocumented
class _AppendConstAction(Action):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any | None = None,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
deprecated: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any | None = None,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
dest: str,
const: Any,
default: Any = None,
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
# undocumented
class _CountAction(Action):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str,
default: Any = None,
required: bool = False,
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self, option_strings: Sequence[str], dest: str, default: Any = None, required: bool = False, help: str | None = None
) -> None: ...
# undocumented
class _HelpAction(Action):
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
deprecated: bool = False,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
) -> None: ...
# undocumented
class _VersionAction(Action):
version: str | None
if sys.version_info >= (3, 13):
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
deprecated: bool = False,
) -> None: ...
elif sys.version_info >= (3, 11):
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str | None = None,
) -> None: ...
else:
def __init__(
self,
option_strings: Sequence[str],
version: str | None = None,
dest: str = "==SUPPRESS==",
default: str = "==SUPPRESS==",
help: str = "show program's version number and exit",
) -> None: ...
# undocumented
class _SubParsersAction(Action, Generic[_ArgumentParserT]):
_ChoicesPseudoAction: type[Any] # nested class
_prog_prefix: str
_parser_class: type[_ArgumentParserT]
_name_parser_map: dict[str, _ArgumentParserT]
choices: dict[str, _ArgumentParserT]
_choices_actions: list[Action]
def __init__(
self,
option_strings: Sequence[str],
prog: str,
parser_class: type[_ArgumentParserT],
dest: str = "==SUPPRESS==",
required: bool = False,
help: str | None = None,
metavar: str | tuple[str, ...] | None = None,
) -> None: ...
# Note: `add_parser` accepts all kwargs of `ArgumentParser.__init__`. It also
# accepts its own `help` and `aliases` kwargs.
if sys.version_info >= (3, 13):
def add_parser(
self,
name: str,
*,
deprecated: bool = False,
help: str | None = ...,
aliases: Sequence[str] = ...,
# Kwargs from ArgumentParser constructor
prog: str | None = ...,
usage: str | None = ...,
description: str | None = ...,
epilog: str | None = ...,
parents: Sequence[_ArgumentParserT] = ...,
formatter_class: _FormatterClass = ...,
prefix_chars: str = ...,
fromfile_prefix_chars: str | None = ...,
argument_default: Any = ...,
conflict_handler: str = ...,
add_help: bool = ...,
allow_abbrev: bool = ...,
exit_on_error: bool = ...,
) -> _ArgumentParserT: ...
elif sys.version_info >= (3, 9):
def add_parser(
self,
name: str,
*,
help: str | None = ...,
aliases: Sequence[str] = ...,
# Kwargs from ArgumentParser constructor
prog: str | None = ...,
usage: str | None = ...,
description: str | None = ...,
epilog: str | None = ...,
parents: Sequence[_ArgumentParserT] = ...,
formatter_class: _FormatterClass = ...,
prefix_chars: str = ...,
fromfile_prefix_chars: str | None = ...,
argument_default: Any = ...,
conflict_handler: str = ...,
add_help: bool = ...,
allow_abbrev: bool = ...,
exit_on_error: bool = ...,
) -> _ArgumentParserT: ...
else:
def add_parser(
self,
name: str,
*,
help: str | None = ...,
aliases: Sequence[str] = ...,
# Kwargs from ArgumentParser constructor
prog: str | None = ...,
usage: str | None = ...,
description: str | None = ...,
epilog: str | None = ...,
parents: Sequence[_ArgumentParserT] = ...,
formatter_class: _FormatterClass = ...,
prefix_chars: str = ...,
fromfile_prefix_chars: str | None = ...,
argument_default: Any = ...,
conflict_handler: str = ...,
add_help: bool = ...,
allow_abbrev: bool = ...,
) -> _ArgumentParserT: ...
def _get_subactions(self) -> list[Action]: ...
# undocumented
class ArgumentTypeError(Exception): ...
# undocumented
def _get_action_name(argument: Action | None) -> str | None: ...
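
To ground the argparse surface above, a minimal sketch (the program name and flags are made up for illustration; BooleanOptionalAction requires Python 3.9+):
import argparse

parser = argparse.ArgumentParser(prog="demo", description="example parser")
parser.add_argument("path")                                 # positional
parser.add_argument("-n", "--count", type=int, default=1)   # typed option
parser.add_argument("--verbose", action="store_true")
# BooleanOptionalAction adds a paired --cache/--no-cache switch.
parser.add_argument("--cache", action=argparse.BooleanOptionalAction, default=True)

args = parser.parse_args(["data.txt", "-n", "3", "--no-cache"])
print(args.path, args.count, args.verbose, args.cache)      # data.txt 3 False False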


@@ -0,0 +1,92 @@
import sys
from _typeshed import ReadableBuffer, SupportsRead, SupportsWrite
from collections.abc import Iterable
# pytype crashes if array inherits from collections.abc.MutableSequence instead of typing.MutableSequence
from typing import Any, Literal, MutableSequence, SupportsIndex, TypeVar, overload # noqa: Y022
from typing_extensions import Self, TypeAlias
if sys.version_info >= (3, 12):
from types import GenericAlias
_IntTypeCode: TypeAlias = Literal["b", "B", "h", "H", "i", "I", "l", "L", "q", "Q"]
_FloatTypeCode: TypeAlias = Literal["f", "d"]
_UnicodeTypeCode: TypeAlias = Literal["u"]
_TypeCode: TypeAlias = _IntTypeCode | _FloatTypeCode | _UnicodeTypeCode
_T = TypeVar("_T", int, float, str)
typecodes: str
class array(MutableSequence[_T]):
@property
def typecode(self) -> _TypeCode: ...
@property
def itemsize(self) -> int: ...
@overload
def __init__(self: array[int], typecode: _IntTypeCode, initializer: bytes | bytearray | Iterable[int] = ..., /) -> None: ...
@overload
def __init__(
self: array[float], typecode: _FloatTypeCode, initializer: bytes | bytearray | Iterable[float] = ..., /
) -> None: ...
@overload
def __init__(
self: array[str], typecode: _UnicodeTypeCode, initializer: bytes | bytearray | Iterable[str] = ..., /
) -> None: ...
@overload
def __init__(self, typecode: str, initializer: Iterable[_T], /) -> None: ...
@overload
def __init__(self, typecode: str, initializer: bytes | bytearray = ..., /) -> None: ...
def append(self, v: _T, /) -> None: ...
def buffer_info(self) -> tuple[int, int]: ...
def byteswap(self) -> None: ...
def count(self, v: _T, /) -> int: ...
def extend(self, bb: Iterable[_T], /) -> None: ...
def frombytes(self, buffer: ReadableBuffer, /) -> None: ...
def fromfile(self, f: SupportsRead[bytes], n: int, /) -> None: ...
def fromlist(self, list: list[_T], /) -> None: ...
def fromunicode(self, ustr: str, /) -> None: ...
if sys.version_info >= (3, 10):
def index(self, v: _T, start: int = 0, stop: int = sys.maxsize, /) -> int: ...
else:
def index(self, v: _T, /) -> int: ... # type: ignore[override]
def insert(self, i: int, v: _T, /) -> None: ...
def pop(self, i: int = -1, /) -> _T: ...
def remove(self, v: _T, /) -> None: ...
def tobytes(self) -> bytes: ...
def tofile(self, f: SupportsWrite[bytes], /) -> None: ...
def tolist(self) -> list[_T]: ...
def tounicode(self) -> str: ...
if sys.version_info < (3, 9):
def fromstring(self, buffer: str | ReadableBuffer, /) -> None: ...
def tostring(self) -> bytes: ...
def __len__(self) -> int: ...
@overload
def __getitem__(self, key: SupportsIndex, /) -> _T: ...
@overload
def __getitem__(self, key: slice, /) -> array[_T]: ...
@overload # type: ignore[override]
def __setitem__(self, key: SupportsIndex, value: _T, /) -> None: ...
@overload
def __setitem__(self, key: slice, value: array[_T], /) -> None: ...
def __delitem__(self, key: SupportsIndex | slice, /) -> None: ...
def __add__(self, value: array[_T], /) -> array[_T]: ...
def __eq__(self, value: object, /) -> bool: ...
def __ge__(self, value: array[_T], /) -> bool: ...
def __gt__(self, value: array[_T], /) -> bool: ...
def __iadd__(self, value: array[_T], /) -> Self: ... # type: ignore[override]
def __imul__(self, value: int, /) -> Self: ...
def __le__(self, value: array[_T], /) -> bool: ...
def __lt__(self, value: array[_T], /) -> bool: ...
def __mul__(self, value: int, /) -> array[_T]: ...
def __rmul__(self, value: int, /) -> array[_T]: ...
def __copy__(self) -> array[_T]: ...
def __deepcopy__(self, unused: Any, /) -> array[_T]: ...
def __buffer__(self, flags: int, /) -> memoryview: ...
def __release_buffer__(self, buffer: memoryview, /) -> None: ...
if sys.version_info >= (3, 12):
def __class_getitem__(cls, item: Any, /) -> GenericAlias: ...
ArrayType = array
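
A short sketch of the array API above, showing how typecodes map onto element types:
from array import array

a = array("i", [1, 2, 3])       # "i" -> signed int elements
a.append(4)
a.extend([5, 6])
print(a.tolist())               # [1, 2, 3, 4, 5, 6]
print(a.typecode, a.itemsize)   # "i" and a platform-dependent size (commonly 4)

b = array("d", (x / 2 for x in range(4)))   # "d" -> double-precision floats
print(b[1], b[-1])                          # 0.5 1.5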

Some files were not shown because too many files have changed in this diff.